| Column | Type | Min (length/value) | Max (length/value) |
|---|---|---|---|
| `repo_name` | string (length) | 8 | 38 |
| `pr_number` | int64 | 3 | 47.1k |
| `pr_title` | string (length) | 8 | 175 |
| `pr_description` | string (length) | 2 | 19.8k |
| `author` | null | n/a | n/a |
| `date_created` | string (length) | 25 | 25 |
| `date_merged` | string (length) | 25 | 25 |
| `filepath` | string (length) | 6 | 136 |
| `before_content` | string (length) | 54 | 884k |
| `after_content` | string (length) | 56 | 884k |
| `pr_author` | string (length) | 3 | 21 |
| `previous_commit` | string (length) | 40 | 40 |
| `pr_commit` | string (length) | 40 | 40 |
| `comment` | string (length) | 2 | 25.4k |
| `comment_author` | string (length) | 3 | 29 |
| `__index_level_0__` | int64 | 0 | 5.1k |
moby/moby
42,763
Dockerfile: update syntax, switch to bullseye, add missing libseccomp-dev, remove build pack
So, this started with the intention to "just" update `buster` to `bullseye`, but along the way I found various issues that needed fixing or could be improved. ### Dockerfile: update to docker/dockerfile:1.3, and remove temporary fix. I saw we were using an older syntax, and the issue I reported (https://github.com/moby/buildkit/issues/2114) was fixed in the dockerfile:1.3 front-end, so upgrading allowed me to remove the temporary fix. ### Dockerfile: remove aufs-tools, as it's not available on bullseye Well, the title says it all. No more aufs? ### Dockerfile: update to debian bullseye Well, that's what I came here for 😂 ### Dockerfile: add back libseccomp-dev to cross-compile runc Commit https://github.com/moby/moby/commit/7168d98c434af0a35e8c9a05dfb87bb40511a38a removed these, but I _think_ we overlooked that the same stage is used to build runc as well, so we likely need these. (but happy to remove if we really don't need them!) ### Dockerfile: frozen images: update to bullseye, remove buildpack-deps Update the frozen images to also be based on Debian bullseye, using the "slim" variant (which looks to have all we're currently using), and remove the buildpack-deps frozen image. The buildpack-deps image is quite large, and it looks like we only use it to compile some C binaries, which should work fine on a regular debian image; docker build -t debian:bullseye-slim-gcc -<<EOF FROM debian:bullseye-slim RUN apt-get update && apt-get install -y gcc libc6-dev --no-install-recommends EOF docker image ls REPOSITORY TAG IMAGE ID CREATED SIZE debian bullseye-slim-gcc 1851750242af About a minute ago 255MB buildpack-deps bullseye fe8fece98de2 2 days ago 834MB **- How to verify it** **- Description for the changelog** <!-- Write a short (one line) summary that describes the changes in this pull request for inclusion in the changelog: --> **- A picture of a cute animal (not mandatory but encouraged)**
null
2021-08-19 22:01:49+00:00
2021-08-22 13:37:00+00:00
Dockerfile.e2e
ARG GO_VERSION=1.16.7 FROM golang:${GO_VERSION}-alpine AS base ENV GO111MODULE=off RUN apk --no-cache add \ bash \ btrfs-progs-dev \ build-base \ curl \ lvm2-dev \ jq RUN mkdir -p /build/ RUN mkdir -p /go/src/github.com/docker/docker/ WORKDIR /go/src/github.com/docker/docker/ FROM base AS frozen-images # Get useful and necessary Hub images so we can "docker load" locally instead of pulling COPY contrib/download-frozen-image-v2.sh / RUN /download-frozen-image-v2.sh /build \ buildpack-deps:buster@sha256:d0abb4b1e5c664828b93e8b6ac84d10bce45ee469999bef88304be04a2709491 \ busybox:latest@sha256:95cf004f559831017cdf4628aaf1bb30133677be8702a8c5f2994629f637a209 \ busybox:glibc@sha256:1f81263701cddf6402afe9f33fca0266d9fff379e59b1748f33d3072da71ee85 \ debian:bullseye@sha256:7190e972ab16aefea4d758ebe42a293f4e5c5be63595f4d03a5b9bf6839a4344 \ hello-world:latest@sha256:d58e752213a51785838f9eed2b7a498ffa1cb3aa7f946dda11af39286c3db9a9 # See also frozenImages in "testutil/environment/protect.go" (which needs to be updated when adding images to this list) FROM base AS dockercli ENV INSTALL_BINARY_NAME=dockercli COPY hack/dockerfile/install/install.sh ./install.sh COPY hack/dockerfile/install/$INSTALL_BINARY_NAME.installer ./ RUN PREFIX=/build ./install.sh $INSTALL_BINARY_NAME # Build DockerSuite.TestBuild* dependency FROM base AS contrib COPY contrib/syscall-test /build/syscall-test COPY contrib/httpserver/Dockerfile /build/httpserver/Dockerfile COPY contrib/httpserver contrib/httpserver RUN CGO_ENABLED=0 go build -buildmode=pie -o /build/httpserver/httpserver github.com/docker/docker/contrib/httpserver # Build the integration tests and copy the resulting binaries to /build/tests FROM base AS builder # Set tag and add sources COPY . . # Copy test sources tests that use assert can print errors RUN mkdir -p /build${PWD} && find integration integration-cli -name \*_test.go -exec cp --parents '{}' /build${PWD} \; # Build and install test binaries ARG DOCKER_GITCOMMIT=undefined RUN hack/make.sh build-integration-test-binary RUN mkdir -p /build/tests && find . -name test.main -exec cp --parents '{}' /build/tests \; ## Generate testing image FROM alpine:3.10 as runner ENV DOCKER_REMOTE_DAEMON=1 ENV DOCKER_INTEGRATION_DAEMON_DEST=/ ENTRYPOINT ["/scripts/run.sh"] # Add an unprivileged user to be used for tests which need it RUN addgroup docker && adduser -D -G docker unprivilegeduser -s /bin/ash # GNU tar is used for generating the emptyfs image RUN apk --no-cache add \ bash \ ca-certificates \ g++ \ git \ iptables \ pigz \ tar \ xz COPY hack/test/e2e-run.sh /scripts/run.sh COPY hack/make/.ensure-emptyfs /scripts/ensure-emptyfs.sh COPY integration/testdata /tests/integration/testdata COPY integration/build/testdata /tests/integration/build/testdata COPY integration-cli/fixtures /tests/integration-cli/fixtures COPY --from=frozen-images /build/ /docker-frozen-images COPY --from=dockercli /build/ /usr/bin/ COPY --from=contrib /build/ /tests/contrib/ COPY --from=builder /build/ /
ARG GO_VERSION=1.16.7 FROM golang:${GO_VERSION}-alpine AS base ENV GO111MODULE=off RUN apk --no-cache add \ bash \ btrfs-progs-dev \ build-base \ curl \ lvm2-dev \ jq RUN mkdir -p /build/ RUN mkdir -p /go/src/github.com/docker/docker/ WORKDIR /go/src/github.com/docker/docker/ FROM base AS frozen-images # Get useful and necessary Hub images so we can "docker load" locally instead of pulling COPY contrib/download-frozen-image-v2.sh / RUN /download-frozen-image-v2.sh /build \ busybox:latest@sha256:95cf004f559831017cdf4628aaf1bb30133677be8702a8c5f2994629f637a209 \ busybox:latest@sha256:95cf004f559831017cdf4628aaf1bb30133677be8702a8c5f2994629f637a209 \ debian:bullseye-slim@sha256:dacf278785a4daa9de07596ec739dbc07131e189942772210709c5c0777e8437 \ hello-world:latest@sha256:d58e752213a51785838f9eed2b7a498ffa1cb3aa7f946dda11af39286c3db9a9 \ arm32v7/hello-world:latest@sha256:50b8560ad574c779908da71f7ce370c0a2471c098d44d1c8f6b513c5a55eeeb1 # See also frozenImages in "testutil/environment/protect.go" (which needs to be updated when adding images to this list) FROM base AS dockercli ENV INSTALL_BINARY_NAME=dockercli COPY hack/dockerfile/install/install.sh ./install.sh COPY hack/dockerfile/install/$INSTALL_BINARY_NAME.installer ./ RUN PREFIX=/build ./install.sh $INSTALL_BINARY_NAME # Build DockerSuite.TestBuild* dependency FROM base AS contrib COPY contrib/syscall-test /build/syscall-test COPY contrib/httpserver/Dockerfile /build/httpserver/Dockerfile COPY contrib/httpserver contrib/httpserver RUN CGO_ENABLED=0 go build -buildmode=pie -o /build/httpserver/httpserver github.com/docker/docker/contrib/httpserver # Build the integration tests and copy the resulting binaries to /build/tests FROM base AS builder # Set tag and add sources COPY . . # Copy test sources tests that use assert can print errors RUN mkdir -p /build${PWD} && find integration integration-cli -name \*_test.go -exec cp --parents '{}' /build${PWD} \; # Build and install test binaries ARG DOCKER_GITCOMMIT=undefined RUN hack/make.sh build-integration-test-binary RUN mkdir -p /build/tests && find . -name test.main -exec cp --parents '{}' /build/tests \; ## Generate testing image FROM alpine:3.10 as runner ENV DOCKER_REMOTE_DAEMON=1 ENV DOCKER_INTEGRATION_DAEMON_DEST=/ ENTRYPOINT ["/scripts/run.sh"] # Add an unprivileged user to be used for tests which need it RUN addgroup docker && adduser -D -G docker unprivilegeduser -s /bin/ash # GNU tar is used for generating the emptyfs image RUN apk --no-cache add \ bash \ ca-certificates \ g++ \ git \ inetutils-ping \ iptables \ libcap2-bin \ pigz \ tar \ xz COPY hack/test/e2e-run.sh /scripts/run.sh COPY hack/make/.ensure-emptyfs /scripts/ensure-emptyfs.sh COPY integration/testdata /tests/integration/testdata COPY integration/build/testdata /tests/integration/build/testdata COPY integration-cli/fixtures /tests/integration-cli/fixtures COPY --from=frozen-images /build/ /docker-frozen-images COPY --from=dockercli /build/ /usr/bin/ COPY --from=contrib /build/ /tests/contrib/ COPY --from=builder /build/ /
thaJeztah
9bc0c4903f7f02ce287b9918f64795368e507f9d
2f74fa543b2f9ed6a3fd7c96afe3faa57b45f7e7
```suggestion inetutils-ping \ ```
tianon
4,498
moby/moby
42,763
Dockerfile: update syntax, switch to bullseye, add missing libseccomp-dev, remove build pack
So, this started with the intention to "just" update `buster` to `bullseye`, but along the way I found various issues that needed fixing or could be improved. ### Dockerfile: update to docker/dockerfile:1.3, and remove temporary fix. I saw we were using an older syntax, and the issue I reported (https://github.com/moby/buildkit/issues/2114) was fixed in the dockerfile:1.3 front-end, so upgrading allowed me to remove the temporary fix. ### Dockerfile: remove aufs-tools, as it's not available on bullseye Well, the title says it all. No more aufs? ### Dockerfile: update to debian bullseye Well, that's what I came here for 😂 ### Dockerfile: add back libseccomp-dev to cross-compile runc Commit https://github.com/moby/moby/commit/7168d98c434af0a35e8c9a05dfb87bb40511a38a removed these, but I _think_ we overlooked that the same stage is used to build runc as well, so we likely need these. (but happy to remove if we really don't need them!) ### Dockerfile: frozen images: update to bullseye, remove buildpack-deps Update the frozen images to also be based on Debian bullseye, using the "slim" variant (which looks to have all we're currently using), and remove the buildpack-deps frozen image. The buildpack-deps image is quite large, and it looks like we only use it to compile some C binaries, which should work fine on a regular debian image; docker build -t debian:bullseye-slim-gcc -<<EOF FROM debian:bullseye-slim RUN apt-get update && apt-get install -y gcc libc6-dev --no-install-recommends EOF docker image ls REPOSITORY TAG IMAGE ID CREATED SIZE debian bullseye-slim-gcc 1851750242af About a minute ago 255MB buildpack-deps bullseye fe8fece98de2 2 days ago 834MB **- How to verify it** **- Description for the changelog** <!-- Write a short (one line) summary that describes the changes in this pull request for inclusion in the changelog: --> **- A picture of a cute animal (not mandatory but encouraged)**
null
2021-08-19 22:01:49+00:00
2021-08-22 13:37:00+00:00
integration/build/build_userns_linux_test.go
package build // import "github.com/docker/docker/integration/build" import ( "bufio" "bytes" "context" "io" "io/ioutil" "os" "strings" "testing" "github.com/docker/docker/api/types" "github.com/docker/docker/integration/internal/container" "github.com/docker/docker/pkg/jsonmessage" "github.com/docker/docker/pkg/stdcopy" "github.com/docker/docker/testutil/daemon" "github.com/docker/docker/testutil/fakecontext" "github.com/docker/docker/testutil/fixtures/load" "gotest.tools/v3/assert" "gotest.tools/v3/skip" ) // Implements a test for https://github.com/moby/moby/issues/41723 // Images built in a user-namespaced daemon should have capabilities serialised in // VFS_CAP_REVISION_2 (no user-namespace root uid) format rather than V3 (that includes // the root uid). func TestBuildUserNamespaceValidateCapabilitiesAreV2(t *testing.T) { skip.If(t, testEnv.DaemonInfo.OSType != "linux") skip.If(t, testEnv.IsRemoteDaemon()) skip.If(t, !testEnv.IsUserNamespaceInKernel()) skip.If(t, testEnv.IsRootless()) const imageTag = "capabilities:1.0" tmp, err := ioutil.TempDir("", "integration-") assert.NilError(t, err) defer os.RemoveAll(tmp) dUserRemap := daemon.New(t) dUserRemap.Start(t, "--userns-remap", "default") ctx := context.Background() clientUserRemap := dUserRemap.NewClientT(t) err = load.FrozenImagesLinux(clientUserRemap, "debian:bullseye") assert.NilError(t, err) dUserRemapRunning := true defer func() { if dUserRemapRunning { dUserRemap.Stop(t) } }() dockerfile := ` FROM debian:bullseye RUN setcap CAP_NET_BIND_SERVICE=+eip /bin/sleep ` source := fakecontext.New(t, "", fakecontext.WithDockerfile(dockerfile)) defer source.Close() resp, err := clientUserRemap.ImageBuild(ctx, source.AsTarReader(t), types.ImageBuildOptions{ Tags: []string{imageTag}, }) assert.NilError(t, err) defer resp.Body.Close() buf := bytes.NewBuffer(nil) err = jsonmessage.DisplayJSONMessagesStream(resp.Body, buf, 0, false, nil) assert.NilError(t, err) reader, err := clientUserRemap.ImageSave(ctx, []string{imageTag}) assert.NilError(t, err, "failed to download capabilities image") defer reader.Close() tar, err := os.Create(tmp + "/image.tar") assert.NilError(t, err, "failed to create image tar file") defer tar.Close() _, err = io.Copy(tar, reader) assert.NilError(t, err, "failed to write image tar file") dUserRemap.Stop(t) dUserRemap.Cleanup(t) dUserRemapRunning = false dNoUserRemap := daemon.New(t) dNoUserRemap.Start(t) defer dNoUserRemap.Stop(t) clientNoUserRemap := dNoUserRemap.NewClientT(t) tarFile, err := os.Open(tmp + "/image.tar") assert.NilError(t, err, "failed to open image tar file") tarReader := bufio.NewReader(tarFile) loadResp, err := clientNoUserRemap.ImageLoad(ctx, tarReader, false) assert.NilError(t, err, "failed to load image tar file") defer loadResp.Body.Close() buf = bytes.NewBuffer(nil) err = jsonmessage.DisplayJSONMessagesStream(loadResp.Body, buf, 0, false, nil) assert.NilError(t, err) cid := container.Run(ctx, t, clientNoUserRemap, container.WithImage(imageTag), container.WithCmd("/sbin/getcap", "-n", "/bin/sleep"), ) logReader, err := clientNoUserRemap.ContainerLogs(ctx, cid, types.ContainerLogsOptions{ ShowStdout: true, }) assert.NilError(t, err) actualStdout := new(bytes.Buffer) actualStderr := ioutil.Discard _, err = stdcopy.StdCopy(actualStdout, actualStderr, logReader) assert.NilError(t, err) if strings.TrimSpace(actualStdout.String()) != "/bin/sleep cap_net_bind_service=eip" { t.Fatalf("run produced invalid output: %q, expected %q", actualStdout.String(), "/bin/sleep cap_net_bind_service=eip") } }
package build // import "github.com/docker/docker/integration/build" import ( "bufio" "bytes" "context" "io" "io/ioutil" "os" "strings" "testing" "github.com/docker/docker/api/types" "github.com/docker/docker/integration/internal/container" "github.com/docker/docker/pkg/jsonmessage" "github.com/docker/docker/pkg/stdcopy" "github.com/docker/docker/testutil/daemon" "github.com/docker/docker/testutil/fakecontext" "github.com/docker/docker/testutil/fixtures/load" "gotest.tools/v3/assert" "gotest.tools/v3/skip" ) // Implements a test for https://github.com/moby/moby/issues/41723 // Images built in a user-namespaced daemon should have capabilities serialised in // VFS_CAP_REVISION_2 (no user-namespace root uid) format rather than V3 (that includes // the root uid). func TestBuildUserNamespaceValidateCapabilitiesAreV2(t *testing.T) { skip.If(t, testEnv.DaemonInfo.OSType != "linux") skip.If(t, testEnv.IsRemoteDaemon()) skip.If(t, !testEnv.IsUserNamespaceInKernel()) skip.If(t, testEnv.IsRootless()) const imageTag = "capabilities:1.0" tmp, err := ioutil.TempDir("", "integration-") assert.NilError(t, err) defer os.RemoveAll(tmp) dUserRemap := daemon.New(t) dUserRemap.Start(t, "--userns-remap", "default") ctx := context.Background() clientUserRemap := dUserRemap.NewClientT(t) err = load.FrozenImagesLinux(clientUserRemap, "debian:bullseye-slim") assert.NilError(t, err) dUserRemapRunning := true defer func() { if dUserRemapRunning { dUserRemap.Stop(t) } }() dockerfile := ` FROM debian:bullseye-slim RUN apt-get update && apt-get install -y libcap2-bin --no-install-recommends RUN setcap CAP_NET_BIND_SERVICE=+eip /bin/sleep ` source := fakecontext.New(t, "", fakecontext.WithDockerfile(dockerfile)) defer source.Close() resp, err := clientUserRemap.ImageBuild(ctx, source.AsTarReader(t), types.ImageBuildOptions{ Tags: []string{imageTag}, }) assert.NilError(t, err) defer resp.Body.Close() buf := bytes.NewBuffer(nil) err = jsonmessage.DisplayJSONMessagesStream(resp.Body, buf, 0, false, nil) assert.NilError(t, err) reader, err := clientUserRemap.ImageSave(ctx, []string{imageTag}) assert.NilError(t, err, "failed to download capabilities image") defer reader.Close() tar, err := os.Create(tmp + "/image.tar") assert.NilError(t, err, "failed to create image tar file") defer tar.Close() _, err = io.Copy(tar, reader) assert.NilError(t, err, "failed to write image tar file") dUserRemap.Stop(t) dUserRemap.Cleanup(t) dUserRemapRunning = false dNoUserRemap := daemon.New(t) dNoUserRemap.Start(t) defer dNoUserRemap.Stop(t) clientNoUserRemap := dNoUserRemap.NewClientT(t) tarFile, err := os.Open(tmp + "/image.tar") assert.NilError(t, err, "failed to open image tar file") tarReader := bufio.NewReader(tarFile) loadResp, err := clientNoUserRemap.ImageLoad(ctx, tarReader, false) assert.NilError(t, err, "failed to load image tar file") defer loadResp.Body.Close() buf = bytes.NewBuffer(nil) err = jsonmessage.DisplayJSONMessagesStream(loadResp.Body, buf, 0, false, nil) assert.NilError(t, err) cid := container.Run(ctx, t, clientNoUserRemap, container.WithImage(imageTag), container.WithCmd("/sbin/getcap", "-n", "/bin/sleep"), ) logReader, err := clientNoUserRemap.ContainerLogs(ctx, cid, types.ContainerLogsOptions{ ShowStdout: true, }) assert.NilError(t, err) actualStdout := new(bytes.Buffer) actualStderr := ioutil.Discard _, err = stdcopy.StdCopy(actualStdout, actualStderr, logReader) assert.NilError(t, err) if strings.TrimSpace(actualStdout.String()) != "/bin/sleep cap_net_bind_service=eip" { t.Fatalf("run produced invalid output: %q, 
expected %q", actualStdout.String(), "/bin/sleep cap_net_bind_service=eip") } }
thaJeztah
9bc0c4903f7f02ce287b9918f64795368e507f9d
2f74fa543b2f9ed6a3fd7c96afe3faa57b45f7e7
Looks like `setcap` was removed from bullseye at some point; ```bash docker run --rm debian:bullseye@sha256:7190e972ab16aefea4d758ebe42a293f4e5c5be63595f4d03a5b9bf6839a4344 sh -c 'setcap CAP_NET_BIND_SERVICE=+eip /bin/sleep' docker run --rm debian:bullseye sh -c 'setcap CAP_NET_BIND_SERVICE=+eip /bin/sleep' sh: 1: setcap: not found ```
thaJeztah
4,499
moby/moby
42,755
libnetwork: make resolvconf more self-contained
### libnetwork: move resolvconf consts into the resolvconf package This allows using the package without having to import the "types" package, and without having to consume github.com/ishidawataru/sctp. ### libnetwork: remove resolvconf/dns package The IsLocalhost utility was not used, and Go's "net" package provides an `IsLoopback()` check, which can be used instead of `IsIPv4Localhost`. ### move pkg/ioutils.HashData() to libnetwork/resolvconf It's the only location where it's used, so we might as well move it there. This also removes the "crypto/sha256" and "encoding/hex" dependencies from pkg/ioutils. **- Description for the changelog** <!-- Write a short (one line) summary that describes the changes in this pull request for inclusion in the changelog: --> **- A picture of a cute animal (not mandatory but encouraged)**
null
2021-08-18 10:52:13+00:00
2021-08-20 08:05:29+00:00
libnetwork/sandbox_dns_unix.go
// +build !windows package libnetwork import ( "fmt" "io/ioutil" "os" "path" "path/filepath" "strconv" "strings" "github.com/docker/docker/libnetwork/etchosts" "github.com/docker/docker/libnetwork/resolvconf" "github.com/docker/docker/libnetwork/resolvconf/dns" "github.com/docker/docker/libnetwork/types" "github.com/sirupsen/logrus" ) const ( defaultPrefix = "/var/lib/docker/network/files" dirPerm = 0755 filePerm = 0644 ) func (sb *sandbox) startResolver(restore bool) { sb.resolverOnce.Do(func() { var err error sb.resolver = NewResolver(resolverIPSandbox, true, sb.Key(), sb) defer func() { if err != nil { sb.resolver = nil } }() // In the case of live restore container is already running with // right resolv.conf contents created before. Just update the // external DNS servers from the restored sandbox for embedded // server to use. if !restore { err = sb.rebuildDNS() if err != nil { logrus.Errorf("Updating resolv.conf failed for container %s, %q", sb.ContainerID(), err) return } } sb.resolver.SetExtServers(sb.extDNS) if err = sb.osSbox.InvokeFunc(sb.resolver.SetupFunc(0)); err != nil { logrus.Errorf("Resolver Setup function failed for container %s, %q", sb.ContainerID(), err) return } if err = sb.resolver.Start(); err != nil { logrus.Errorf("Resolver Start failed for container %s, %q", sb.ContainerID(), err) } }) } func (sb *sandbox) setupResolutionFiles() error { if err := sb.buildHostsFile(); err != nil { return err } if err := sb.updateParentHosts(); err != nil { return err } return sb.setupDNS() } func (sb *sandbox) buildHostsFile() error { if sb.config.hostsPath == "" { sb.config.hostsPath = defaultPrefix + "/" + sb.id + "/hosts" } dir, _ := filepath.Split(sb.config.hostsPath) if err := createBasePath(dir); err != nil { return err } // This is for the host mode networking if sb.config.useDefaultSandBox && len(sb.config.extraHosts) == 0 { // We are working under the assumption that the origin file option had been properly expressed by the upper layer // if not here we are going to error out if err := copyFile(sb.config.originHostsPath, sb.config.hostsPath); err != nil && !os.IsNotExist(err) { return types.InternalErrorf("could not copy source hosts file %s to %s: %v", sb.config.originHostsPath, sb.config.hostsPath, err) } return nil } extraContent := make([]etchosts.Record, 0, len(sb.config.extraHosts)) for _, extraHost := range sb.config.extraHosts { extraContent = append(extraContent, etchosts.Record{Hosts: extraHost.name, IP: extraHost.IP}) } return etchosts.Build(sb.config.hostsPath, "", sb.config.hostName, sb.config.domainName, extraContent) } func (sb *sandbox) updateHostsFile(ifaceIPs []string) error { if len(ifaceIPs) == 0 { return nil } if sb.config.originHostsPath != "" { return nil } // User might have provided a FQDN in hostname or split it across hostname // and domainname. We want the FQDN and the bare hostname. 
fqdn := sb.config.hostName mhost := sb.config.hostName if sb.config.domainName != "" { fqdn = fmt.Sprintf("%s.%s", fqdn, sb.config.domainName) } parts := strings.SplitN(fqdn, ".", 2) if len(parts) == 2 { mhost = fmt.Sprintf("%s %s", fqdn, parts[0]) } var extraContent []etchosts.Record for _, ip := range ifaceIPs { extraContent = append(extraContent, etchosts.Record{Hosts: mhost, IP: ip}) } sb.addHostsEntries(extraContent) return nil } func (sb *sandbox) addHostsEntries(recs []etchosts.Record) { if err := etchosts.Add(sb.config.hostsPath, recs); err != nil { logrus.Warnf("Failed adding service host entries to the running container: %v", err) } } func (sb *sandbox) deleteHostsEntries(recs []etchosts.Record) { if err := etchosts.Delete(sb.config.hostsPath, recs); err != nil { logrus.Warnf("Failed deleting service host entries to the running container: %v", err) } } func (sb *sandbox) updateParentHosts() error { var pSb Sandbox for _, update := range sb.config.parentUpdates { sb.controller.WalkSandboxes(SandboxContainerWalker(&pSb, update.cid)) if pSb == nil { continue } if err := etchosts.Update(pSb.(*sandbox).config.hostsPath, update.ip, update.name); err != nil { return err } } return nil } func (sb *sandbox) restorePath() { if sb.config.resolvConfPath == "" { sb.config.resolvConfPath = defaultPrefix + "/" + sb.id + "/resolv.conf" } sb.config.resolvConfHashFile = sb.config.resolvConfPath + ".hash" if sb.config.hostsPath == "" { sb.config.hostsPath = defaultPrefix + "/" + sb.id + "/hosts" } } func (sb *sandbox) setExternalResolvers(content []byte, addrType int, checkLoopback bool) { servers := resolvconf.GetNameservers(content, addrType) for _, ip := range servers { hostLoopback := false if checkLoopback { hostLoopback = dns.IsIPv4Localhost(ip) } sb.extDNS = append(sb.extDNS, extDNSEntry{ IPStr: ip, HostLoopback: hostLoopback, }) } } func (sb *sandbox) setupDNS() error { var newRC *resolvconf.File if sb.config.resolvConfPath == "" { sb.config.resolvConfPath = defaultPrefix + "/" + sb.id + "/resolv.conf" } sb.config.resolvConfHashFile = sb.config.resolvConfPath + ".hash" dir, _ := filepath.Split(sb.config.resolvConfPath) if err := createBasePath(dir); err != nil { return err } // When the user specify a conainter in the host namespace and do no have any dns option specified // we just copy the host resolv.conf from the host itself if sb.config.useDefaultSandBox && len(sb.config.dnsList) == 0 && len(sb.config.dnsSearchList) == 0 && len(sb.config.dnsOptionsList) == 0 { // We are working under the assumption that the origin file option had been properly expressed by the upper layer // if not here we are going to error out if err := copyFile(sb.config.originResolvConfPath, sb.config.resolvConfPath); err != nil { if !os.IsNotExist(err) { return fmt.Errorf("could not copy source resolv.conf file %s to %s: %v", sb.config.originResolvConfPath, sb.config.resolvConfPath, err) } logrus.Infof("%s does not exist, we create an empty resolv.conf for container", sb.config.originResolvConfPath) if err := createFile(sb.config.resolvConfPath); err != nil { return err } } return nil } originResolvConfPath := sb.config.originResolvConfPath if originResolvConfPath == "" { // fallback if not specified originResolvConfPath = resolvconf.Path() } currRC, err := resolvconf.GetSpecific(originResolvConfPath) if err != nil { if !os.IsNotExist(err) { return err } // it's ok to continue if /etc/resolv.conf doesn't exist, default resolvers (Google's Public DNS) // will be used currRC = &resolvconf.File{} 
logrus.Infof("/etc/resolv.conf does not exist") } if len(sb.config.dnsList) > 0 || len(sb.config.dnsSearchList) > 0 || len(sb.config.dnsOptionsList) > 0 { var ( err error dnsList = resolvconf.GetNameservers(currRC.Content, types.IP) dnsSearchList = resolvconf.GetSearchDomains(currRC.Content) dnsOptionsList = resolvconf.GetOptions(currRC.Content) ) if len(sb.config.dnsList) > 0 { dnsList = sb.config.dnsList } if len(sb.config.dnsSearchList) > 0 { dnsSearchList = sb.config.dnsSearchList } if len(sb.config.dnsOptionsList) > 0 { dnsOptionsList = sb.config.dnsOptionsList } newRC, err = resolvconf.Build(sb.config.resolvConfPath, dnsList, dnsSearchList, dnsOptionsList) if err != nil { return err } // After building the resolv.conf from the user config save the // external resolvers in the sandbox. Note that --dns 127.0.0.x // config refers to the loopback in the container namespace sb.setExternalResolvers(newRC.Content, types.IPv4, false) } else { // If the host resolv.conf file has 127.0.0.x container should // use the host resolver for queries. This is supported by the // docker embedded DNS server. Hence save the external resolvers // before filtering it out. sb.setExternalResolvers(currRC.Content, types.IPv4, true) // Replace any localhost/127.* (at this point we have no info about ipv6, pass it as true) if newRC, err = resolvconf.FilterResolvDNS(currRC.Content, true); err != nil { return err } // No contention on container resolv.conf file at sandbox creation if err := ioutil.WriteFile(sb.config.resolvConfPath, newRC.Content, filePerm); err != nil { return types.InternalErrorf("failed to write unhaltered resolv.conf file content when setting up dns for sandbox %s: %v", sb.ID(), err) } } // Write hash if err := ioutil.WriteFile(sb.config.resolvConfHashFile, []byte(newRC.Hash), filePerm); err != nil { return types.InternalErrorf("failed to write resolv.conf hash file when setting up dns for sandbox %s: %v", sb.ID(), err) } return nil } func (sb *sandbox) updateDNS(ipv6Enabled bool) error { var ( currHash string hashFile = sb.config.resolvConfHashFile ) // This is for the host mode networking if sb.config.useDefaultSandBox { return nil } if len(sb.config.dnsList) > 0 || len(sb.config.dnsSearchList) > 0 || len(sb.config.dnsOptionsList) > 0 { return nil } currRC, err := resolvconf.GetSpecific(sb.config.resolvConfPath) if err != nil { if !os.IsNotExist(err) { return err } } else { h, err := ioutil.ReadFile(hashFile) if err != nil { if !os.IsNotExist(err) { return err } } else { currHash = string(h) } } if currHash != "" && currHash != currRC.Hash { // Seems the user has changed the container resolv.conf since the last time // we checked so return without doing anything. //logrus.Infof("Skipping update of resolv.conf file with ipv6Enabled: %t because file was touched by user", ipv6Enabled) return nil } // replace any localhost/127.* and remove IPv6 nameservers if IPv6 disabled. 
newRC, err := resolvconf.FilterResolvDNS(currRC.Content, ipv6Enabled) if err != nil { return err } err = ioutil.WriteFile(sb.config.resolvConfPath, newRC.Content, 0644) //nolint:gosec // gosec complains about perms here, which must be 0644 in this case if err != nil { return err } // write the new hash in a temp file and rename it to make the update atomic dir := path.Dir(sb.config.resolvConfPath) tmpHashFile, err := ioutil.TempFile(dir, "hash") if err != nil { return err } if err = tmpHashFile.Chmod(filePerm); err != nil { tmpHashFile.Close() return err } _, err = tmpHashFile.Write([]byte(newRC.Hash)) if err1 := tmpHashFile.Close(); err == nil { err = err1 } if err != nil { return err } return os.Rename(tmpHashFile.Name(), hashFile) } // Embedded DNS server has to be enabled for this sandbox. Rebuild the container's // resolv.conf by doing the following // - Add only the embedded server's IP to container's resolv.conf // - If the embedded server needs any resolv.conf options add it to the current list func (sb *sandbox) rebuildDNS() error { currRC, err := resolvconf.GetSpecific(sb.config.resolvConfPath) if err != nil { return err } if len(sb.extDNS) == 0 { sb.setExternalResolvers(currRC.Content, types.IPv4, false) } var ( dnsList = []string{sb.resolver.NameServer()} dnsOptionsList = resolvconf.GetOptions(currRC.Content) dnsSearchList = resolvconf.GetSearchDomains(currRC.Content) ) // external v6 DNS servers has to be listed in resolv.conf dnsList = append(dnsList, resolvconf.GetNameservers(currRC.Content, types.IPv6)...) // If the user config and embedded DNS server both have ndots option set, // remember the user's config so that unqualified names not in the docker // domain can be dropped. resOptions := sb.resolver.ResolverOptions() dnsOpt: for _, resOpt := range resOptions { if strings.Contains(resOpt, "ndots") { for _, option := range dnsOptionsList { if strings.Contains(option, "ndots") { parts := strings.Split(option, ":") if len(parts) != 2 { return fmt.Errorf("invalid ndots option %v", option) } if num, err := strconv.Atoi(parts[1]); err != nil { return fmt.Errorf("invalid number for ndots option: %v", parts[1]) } else if num >= 0 { // if the user sets ndots, use the user setting sb.ndotsSet = true break dnsOpt } else { return fmt.Errorf("invalid number for ndots option: %v", num) } } } } } if !sb.ndotsSet { // if the user did not set the ndots, set it to 0 to prioritize the service name resolution // Ref: https://linux.die.net/man/5/resolv.conf dnsOptionsList = append(dnsOptionsList, resOptions...) } _, err = resolvconf.Build(sb.config.resolvConfPath, dnsList, dnsSearchList, dnsOptionsList) return err } func createBasePath(dir string) error { return os.MkdirAll(dir, dirPerm) } func createFile(path string) error { var f *os.File dir, _ := filepath.Split(path) err := createBasePath(dir) if err != nil { return err } f, err = os.Create(path) if err == nil { f.Close() } return err } func copyFile(src, dst string) error { sBytes, err := ioutil.ReadFile(src) if err != nil { return err } return ioutil.WriteFile(dst, sBytes, filePerm) }
// +build !windows package libnetwork import ( "fmt" "io/ioutil" "net" "os" "path" "path/filepath" "strconv" "strings" "github.com/docker/docker/libnetwork/etchosts" "github.com/docker/docker/libnetwork/resolvconf" "github.com/docker/docker/libnetwork/types" "github.com/sirupsen/logrus" ) const ( defaultPrefix = "/var/lib/docker/network/files" dirPerm = 0755 filePerm = 0644 ) func (sb *sandbox) startResolver(restore bool) { sb.resolverOnce.Do(func() { var err error sb.resolver = NewResolver(resolverIPSandbox, true, sb.Key(), sb) defer func() { if err != nil { sb.resolver = nil } }() // In the case of live restore container is already running with // right resolv.conf contents created before. Just update the // external DNS servers from the restored sandbox for embedded // server to use. if !restore { err = sb.rebuildDNS() if err != nil { logrus.Errorf("Updating resolv.conf failed for container %s, %q", sb.ContainerID(), err) return } } sb.resolver.SetExtServers(sb.extDNS) if err = sb.osSbox.InvokeFunc(sb.resolver.SetupFunc(0)); err != nil { logrus.Errorf("Resolver Setup function failed for container %s, %q", sb.ContainerID(), err) return } if err = sb.resolver.Start(); err != nil { logrus.Errorf("Resolver Start failed for container %s, %q", sb.ContainerID(), err) } }) } func (sb *sandbox) setupResolutionFiles() error { if err := sb.buildHostsFile(); err != nil { return err } if err := sb.updateParentHosts(); err != nil { return err } return sb.setupDNS() } func (sb *sandbox) buildHostsFile() error { if sb.config.hostsPath == "" { sb.config.hostsPath = defaultPrefix + "/" + sb.id + "/hosts" } dir, _ := filepath.Split(sb.config.hostsPath) if err := createBasePath(dir); err != nil { return err } // This is for the host mode networking if sb.config.useDefaultSandBox && len(sb.config.extraHosts) == 0 { // We are working under the assumption that the origin file option had been properly expressed by the upper layer // if not here we are going to error out if err := copyFile(sb.config.originHostsPath, sb.config.hostsPath); err != nil && !os.IsNotExist(err) { return types.InternalErrorf("could not copy source hosts file %s to %s: %v", sb.config.originHostsPath, sb.config.hostsPath, err) } return nil } extraContent := make([]etchosts.Record, 0, len(sb.config.extraHosts)) for _, extraHost := range sb.config.extraHosts { extraContent = append(extraContent, etchosts.Record{Hosts: extraHost.name, IP: extraHost.IP}) } return etchosts.Build(sb.config.hostsPath, "", sb.config.hostName, sb.config.domainName, extraContent) } func (sb *sandbox) updateHostsFile(ifaceIPs []string) error { if len(ifaceIPs) == 0 { return nil } if sb.config.originHostsPath != "" { return nil } // User might have provided a FQDN in hostname or split it across hostname // and domainname. We want the FQDN and the bare hostname. 
fqdn := sb.config.hostName mhost := sb.config.hostName if sb.config.domainName != "" { fqdn = fmt.Sprintf("%s.%s", fqdn, sb.config.domainName) } parts := strings.SplitN(fqdn, ".", 2) if len(parts) == 2 { mhost = fmt.Sprintf("%s %s", fqdn, parts[0]) } var extraContent []etchosts.Record for _, ip := range ifaceIPs { extraContent = append(extraContent, etchosts.Record{Hosts: mhost, IP: ip}) } sb.addHostsEntries(extraContent) return nil } func (sb *sandbox) addHostsEntries(recs []etchosts.Record) { if err := etchosts.Add(sb.config.hostsPath, recs); err != nil { logrus.Warnf("Failed adding service host entries to the running container: %v", err) } } func (sb *sandbox) deleteHostsEntries(recs []etchosts.Record) { if err := etchosts.Delete(sb.config.hostsPath, recs); err != nil { logrus.Warnf("Failed deleting service host entries to the running container: %v", err) } } func (sb *sandbox) updateParentHosts() error { var pSb Sandbox for _, update := range sb.config.parentUpdates { sb.controller.WalkSandboxes(SandboxContainerWalker(&pSb, update.cid)) if pSb == nil { continue } if err := etchosts.Update(pSb.(*sandbox).config.hostsPath, update.ip, update.name); err != nil { return err } } return nil } func (sb *sandbox) restorePath() { if sb.config.resolvConfPath == "" { sb.config.resolvConfPath = defaultPrefix + "/" + sb.id + "/resolv.conf" } sb.config.resolvConfHashFile = sb.config.resolvConfPath + ".hash" if sb.config.hostsPath == "" { sb.config.hostsPath = defaultPrefix + "/" + sb.id + "/hosts" } } func (sb *sandbox) setExternalResolvers(content []byte, addrType int, checkLoopback bool) { servers := resolvconf.GetNameservers(content, addrType) for _, ip := range servers { hostLoopback := false if checkLoopback && isIPv4Loopback(ip) { hostLoopback = true } sb.extDNS = append(sb.extDNS, extDNSEntry{ IPStr: ip, HostLoopback: hostLoopback, }) } } // isIPv4Loopback checks if the given IP address is an IPv4 loopback address. 
// It's based on the logic in Go's net.IP.IsLoopback(), but only the IPv4 part: // https://github.com/golang/go/blob/go1.16.6/src/net/ip.go#L120-L126 func isIPv4Loopback(ipAddress string) bool { if ip := net.ParseIP(ipAddress); ip != nil { if ip4 := ip.To4(); ip4 != nil { return ip4[0] == 127 } } return false } func (sb *sandbox) setupDNS() error { var newRC *resolvconf.File if sb.config.resolvConfPath == "" { sb.config.resolvConfPath = defaultPrefix + "/" + sb.id + "/resolv.conf" } sb.config.resolvConfHashFile = sb.config.resolvConfPath + ".hash" dir, _ := filepath.Split(sb.config.resolvConfPath) if err := createBasePath(dir); err != nil { return err } // When the user specify a conainter in the host namespace and do no have any dns option specified // we just copy the host resolv.conf from the host itself if sb.config.useDefaultSandBox && len(sb.config.dnsList) == 0 && len(sb.config.dnsSearchList) == 0 && len(sb.config.dnsOptionsList) == 0 { // We are working under the assumption that the origin file option had been properly expressed by the upper layer // if not here we are going to error out if err := copyFile(sb.config.originResolvConfPath, sb.config.resolvConfPath); err != nil { if !os.IsNotExist(err) { return fmt.Errorf("could not copy source resolv.conf file %s to %s: %v", sb.config.originResolvConfPath, sb.config.resolvConfPath, err) } logrus.Infof("%s does not exist, we create an empty resolv.conf for container", sb.config.originResolvConfPath) if err := createFile(sb.config.resolvConfPath); err != nil { return err } } return nil } originResolvConfPath := sb.config.originResolvConfPath if originResolvConfPath == "" { // fallback if not specified originResolvConfPath = resolvconf.Path() } currRC, err := resolvconf.GetSpecific(originResolvConfPath) if err != nil { if !os.IsNotExist(err) { return err } // it's ok to continue if /etc/resolv.conf doesn't exist, default resolvers (Google's Public DNS) // will be used currRC = &resolvconf.File{} logrus.Infof("/etc/resolv.conf does not exist") } if len(sb.config.dnsList) > 0 || len(sb.config.dnsSearchList) > 0 || len(sb.config.dnsOptionsList) > 0 { var ( err error dnsList = resolvconf.GetNameservers(currRC.Content, resolvconf.IP) dnsSearchList = resolvconf.GetSearchDomains(currRC.Content) dnsOptionsList = resolvconf.GetOptions(currRC.Content) ) if len(sb.config.dnsList) > 0 { dnsList = sb.config.dnsList } if len(sb.config.dnsSearchList) > 0 { dnsSearchList = sb.config.dnsSearchList } if len(sb.config.dnsOptionsList) > 0 { dnsOptionsList = sb.config.dnsOptionsList } newRC, err = resolvconf.Build(sb.config.resolvConfPath, dnsList, dnsSearchList, dnsOptionsList) if err != nil { return err } // After building the resolv.conf from the user config save the // external resolvers in the sandbox. Note that --dns 127.0.0.x // config refers to the loopback in the container namespace sb.setExternalResolvers(newRC.Content, resolvconf.IPv4, false) } else { // If the host resolv.conf file has 127.0.0.x container should // use the host resolver for queries. This is supported by the // docker embedded DNS server. Hence save the external resolvers // before filtering it out. 
sb.setExternalResolvers(currRC.Content, resolvconf.IPv4, true) // Replace any localhost/127.* (at this point we have no info about ipv6, pass it as true) if newRC, err = resolvconf.FilterResolvDNS(currRC.Content, true); err != nil { return err } // No contention on container resolv.conf file at sandbox creation if err := ioutil.WriteFile(sb.config.resolvConfPath, newRC.Content, filePerm); err != nil { return types.InternalErrorf("failed to write unhaltered resolv.conf file content when setting up dns for sandbox %s: %v", sb.ID(), err) } } // Write hash if err := ioutil.WriteFile(sb.config.resolvConfHashFile, []byte(newRC.Hash), filePerm); err != nil { return types.InternalErrorf("failed to write resolv.conf hash file when setting up dns for sandbox %s: %v", sb.ID(), err) } return nil } func (sb *sandbox) updateDNS(ipv6Enabled bool) error { var ( currHash string hashFile = sb.config.resolvConfHashFile ) // This is for the host mode networking if sb.config.useDefaultSandBox { return nil } if len(sb.config.dnsList) > 0 || len(sb.config.dnsSearchList) > 0 || len(sb.config.dnsOptionsList) > 0 { return nil } currRC, err := resolvconf.GetSpecific(sb.config.resolvConfPath) if err != nil { if !os.IsNotExist(err) { return err } } else { h, err := ioutil.ReadFile(hashFile) if err != nil { if !os.IsNotExist(err) { return err } } else { currHash = string(h) } } if currHash != "" && currHash != currRC.Hash { // Seems the user has changed the container resolv.conf since the last time // we checked so return without doing anything. //logrus.Infof("Skipping update of resolv.conf file with ipv6Enabled: %t because file was touched by user", ipv6Enabled) return nil } // replace any localhost/127.* and remove IPv6 nameservers if IPv6 disabled. newRC, err := resolvconf.FilterResolvDNS(currRC.Content, ipv6Enabled) if err != nil { return err } err = ioutil.WriteFile(sb.config.resolvConfPath, newRC.Content, 0644) //nolint:gosec // gosec complains about perms here, which must be 0644 in this case if err != nil { return err } // write the new hash in a temp file and rename it to make the update atomic dir := path.Dir(sb.config.resolvConfPath) tmpHashFile, err := ioutil.TempFile(dir, "hash") if err != nil { return err } if err = tmpHashFile.Chmod(filePerm); err != nil { tmpHashFile.Close() return err } _, err = tmpHashFile.Write([]byte(newRC.Hash)) if err1 := tmpHashFile.Close(); err == nil { err = err1 } if err != nil { return err } return os.Rename(tmpHashFile.Name(), hashFile) } // Embedded DNS server has to be enabled for this sandbox. Rebuild the container's // resolv.conf by doing the following // - Add only the embedded server's IP to container's resolv.conf // - If the embedded server needs any resolv.conf options add it to the current list func (sb *sandbox) rebuildDNS() error { currRC, err := resolvconf.GetSpecific(sb.config.resolvConfPath) if err != nil { return err } if len(sb.extDNS) == 0 { sb.setExternalResolvers(currRC.Content, resolvconf.IPv4, false) } var ( dnsList = []string{sb.resolver.NameServer()} dnsOptionsList = resolvconf.GetOptions(currRC.Content) dnsSearchList = resolvconf.GetSearchDomains(currRC.Content) ) // external v6 DNS servers has to be listed in resolv.conf dnsList = append(dnsList, resolvconf.GetNameservers(currRC.Content, resolvconf.IPv6)...) // If the user config and embedded DNS server both have ndots option set, // remember the user's config so that unqualified names not in the docker // domain can be dropped. 
resOptions := sb.resolver.ResolverOptions() dnsOpt: for _, resOpt := range resOptions { if strings.Contains(resOpt, "ndots") { for _, option := range dnsOptionsList { if strings.Contains(option, "ndots") { parts := strings.Split(option, ":") if len(parts) != 2 { return fmt.Errorf("invalid ndots option %v", option) } if num, err := strconv.Atoi(parts[1]); err != nil { return fmt.Errorf("invalid number for ndots option: %v", parts[1]) } else if num >= 0 { // if the user sets ndots, use the user setting sb.ndotsSet = true break dnsOpt } else { return fmt.Errorf("invalid number for ndots option: %v", num) } } } } } if !sb.ndotsSet { // if the user did not set the ndots, set it to 0 to prioritize the service name resolution // Ref: https://linux.die.net/man/5/resolv.conf dnsOptionsList = append(dnsOptionsList, resOptions...) } _, err = resolvconf.Build(sb.config.resolvConfPath, dnsList, dnsSearchList, dnsOptionsList) return err } func createBasePath(dir string) error { return os.MkdirAll(dir, dirPerm) } func createFile(path string) error { var f *os.File dir, _ := filepath.Split(path) err := createBasePath(dir) if err != nil { return err } f, err = os.Create(path) if err == nil { f.Close() } return err } func copyFile(src, dst string) error { sBytes, err := ioutil.ReadFile(src) if err != nil { return err } return ioutil.WriteFile(dst, sBytes, filePerm) }
thaJeztah
b6919cb55320fb726d0012e06db50776aef52813
5ea3e12b63511846ac38787af8bdff89c66cc061
Isn't this different since it could be an ipv6 loopback?
cpuguy83
4,500
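The PR description in this row mentions moving `pkg/ioutils.HashData()` into `libnetwork/resolvconf`. As a rough sketch of what such a helper looks like, assuming it returns the hex-encoded SHA-256 of a reader's contents with a `sha256:` prefix (the exact name, location, and signature of the moved helper are not shown in this row, so treat this as illustrative only):

```go
package resolvconf

import (
	"crypto/sha256"
	"encoding/hex"
	"io"
)

// hashData hashes the reader's contents with SHA-256 and returns the digest as
// a hex string prefixed with "sha256:". Sketch only; the real helper may differ.
func hashData(src io.Reader) (string, error) {
	h := sha256.New()
	if _, err := io.Copy(h, src); err != nil {
		return "", err
	}
	return "sha256:" + hex.EncodeToString(h.Sum(nil)), nil
}
```

Moving it removes the `crypto/sha256` and `encoding/hex` imports from `pkg/ioutils`, as the description notes.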
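The review comment above points at the difference between a generic loopback check and an IPv4-only one: `net.IP.IsLoopback()` also reports `true` for the IPv6 loopback `::1`, whereas the `isIPv4Loopback` helper added in this row's after_content only matches `127.0.0.0/8`. A small self-contained illustration (the helper body is copied from the diff; the `main` driver is just for demonstration):

```go
package main

import (
	"fmt"
	"net"
)

// isIPv4Loopback reports whether the address is an IPv4 loopback (127.0.0.0/8).
// It deliberately returns false for the IPv6 loopback ::1.
func isIPv4Loopback(ipAddress string) bool {
	if ip := net.ParseIP(ipAddress); ip != nil {
		if ip4 := ip.To4(); ip4 != nil {
			return ip4[0] == 127
		}
	}
	return false
}

func main() {
	for _, addr := range []string{"127.0.0.1", "127.1.2.3", "::1", "10.0.0.1"} {
		fmt.Printf("%-12s IsLoopback=%-5v isIPv4Loopback=%v\n",
			addr, net.ParseIP(addr).IsLoopback(), isIPv4Loopback(addr))
	}
}
```

Running it shows that only `::1` is treated differently by the two checks, which is exactly the case the reviewer is asking about.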
moby/moby
42,755
libnetwork: make resolvconf more self-contained
### libnetwork: move resolvconf consts into the resolvconf package This allows using the package without having to import the "types" package, and without having to consume github.com/ishidawataru/sctp. ### libnetwork: remove resolvconf/dns package The IsLocalhost utility was not used, and Go's "net" package provides an `IsLoopback()` check, which can be used instead of `IsIPv4Localhost`. ### move pkg/ioutils.HashData() to libnetwork/resolvconf It's the only location where it's used, so we might as well move it there. This also removes the "crypto/sha256" and "encoding/hex" dependencies from pkg/ioutils. **- Description for the changelog** <!-- Write a short (one line) summary that describes the changes in this pull request for inclusion in the changelog: --> **- A picture of a cute animal (not mandatory but encouraged)**
null
2021-08-18 10:52:13+00:00
2021-08-20 08:05:29+00:00
libnetwork/sandbox_dns_unix.go
// +build !windows package libnetwork import ( "fmt" "io/ioutil" "os" "path" "path/filepath" "strconv" "strings" "github.com/docker/docker/libnetwork/etchosts" "github.com/docker/docker/libnetwork/resolvconf" "github.com/docker/docker/libnetwork/resolvconf/dns" "github.com/docker/docker/libnetwork/types" "github.com/sirupsen/logrus" ) const ( defaultPrefix = "/var/lib/docker/network/files" dirPerm = 0755 filePerm = 0644 ) func (sb *sandbox) startResolver(restore bool) { sb.resolverOnce.Do(func() { var err error sb.resolver = NewResolver(resolverIPSandbox, true, sb.Key(), sb) defer func() { if err != nil { sb.resolver = nil } }() // In the case of live restore container is already running with // right resolv.conf contents created before. Just update the // external DNS servers from the restored sandbox for embedded // server to use. if !restore { err = sb.rebuildDNS() if err != nil { logrus.Errorf("Updating resolv.conf failed for container %s, %q", sb.ContainerID(), err) return } } sb.resolver.SetExtServers(sb.extDNS) if err = sb.osSbox.InvokeFunc(sb.resolver.SetupFunc(0)); err != nil { logrus.Errorf("Resolver Setup function failed for container %s, %q", sb.ContainerID(), err) return } if err = sb.resolver.Start(); err != nil { logrus.Errorf("Resolver Start failed for container %s, %q", sb.ContainerID(), err) } }) } func (sb *sandbox) setupResolutionFiles() error { if err := sb.buildHostsFile(); err != nil { return err } if err := sb.updateParentHosts(); err != nil { return err } return sb.setupDNS() } func (sb *sandbox) buildHostsFile() error { if sb.config.hostsPath == "" { sb.config.hostsPath = defaultPrefix + "/" + sb.id + "/hosts" } dir, _ := filepath.Split(sb.config.hostsPath) if err := createBasePath(dir); err != nil { return err } // This is for the host mode networking if sb.config.useDefaultSandBox && len(sb.config.extraHosts) == 0 { // We are working under the assumption that the origin file option had been properly expressed by the upper layer // if not here we are going to error out if err := copyFile(sb.config.originHostsPath, sb.config.hostsPath); err != nil && !os.IsNotExist(err) { return types.InternalErrorf("could not copy source hosts file %s to %s: %v", sb.config.originHostsPath, sb.config.hostsPath, err) } return nil } extraContent := make([]etchosts.Record, 0, len(sb.config.extraHosts)) for _, extraHost := range sb.config.extraHosts { extraContent = append(extraContent, etchosts.Record{Hosts: extraHost.name, IP: extraHost.IP}) } return etchosts.Build(sb.config.hostsPath, "", sb.config.hostName, sb.config.domainName, extraContent) } func (sb *sandbox) updateHostsFile(ifaceIPs []string) error { if len(ifaceIPs) == 0 { return nil } if sb.config.originHostsPath != "" { return nil } // User might have provided a FQDN in hostname or split it across hostname // and domainname. We want the FQDN and the bare hostname. 
fqdn := sb.config.hostName mhost := sb.config.hostName if sb.config.domainName != "" { fqdn = fmt.Sprintf("%s.%s", fqdn, sb.config.domainName) } parts := strings.SplitN(fqdn, ".", 2) if len(parts) == 2 { mhost = fmt.Sprintf("%s %s", fqdn, parts[0]) } var extraContent []etchosts.Record for _, ip := range ifaceIPs { extraContent = append(extraContent, etchosts.Record{Hosts: mhost, IP: ip}) } sb.addHostsEntries(extraContent) return nil } func (sb *sandbox) addHostsEntries(recs []etchosts.Record) { if err := etchosts.Add(sb.config.hostsPath, recs); err != nil { logrus.Warnf("Failed adding service host entries to the running container: %v", err) } } func (sb *sandbox) deleteHostsEntries(recs []etchosts.Record) { if err := etchosts.Delete(sb.config.hostsPath, recs); err != nil { logrus.Warnf("Failed deleting service host entries to the running container: %v", err) } } func (sb *sandbox) updateParentHosts() error { var pSb Sandbox for _, update := range sb.config.parentUpdates { sb.controller.WalkSandboxes(SandboxContainerWalker(&pSb, update.cid)) if pSb == nil { continue } if err := etchosts.Update(pSb.(*sandbox).config.hostsPath, update.ip, update.name); err != nil { return err } } return nil } func (sb *sandbox) restorePath() { if sb.config.resolvConfPath == "" { sb.config.resolvConfPath = defaultPrefix + "/" + sb.id + "/resolv.conf" } sb.config.resolvConfHashFile = sb.config.resolvConfPath + ".hash" if sb.config.hostsPath == "" { sb.config.hostsPath = defaultPrefix + "/" + sb.id + "/hosts" } } func (sb *sandbox) setExternalResolvers(content []byte, addrType int, checkLoopback bool) { servers := resolvconf.GetNameservers(content, addrType) for _, ip := range servers { hostLoopback := false if checkLoopback { hostLoopback = dns.IsIPv4Localhost(ip) } sb.extDNS = append(sb.extDNS, extDNSEntry{ IPStr: ip, HostLoopback: hostLoopback, }) } } func (sb *sandbox) setupDNS() error { var newRC *resolvconf.File if sb.config.resolvConfPath == "" { sb.config.resolvConfPath = defaultPrefix + "/" + sb.id + "/resolv.conf" } sb.config.resolvConfHashFile = sb.config.resolvConfPath + ".hash" dir, _ := filepath.Split(sb.config.resolvConfPath) if err := createBasePath(dir); err != nil { return err } // When the user specify a conainter in the host namespace and do no have any dns option specified // we just copy the host resolv.conf from the host itself if sb.config.useDefaultSandBox && len(sb.config.dnsList) == 0 && len(sb.config.dnsSearchList) == 0 && len(sb.config.dnsOptionsList) == 0 { // We are working under the assumption that the origin file option had been properly expressed by the upper layer // if not here we are going to error out if err := copyFile(sb.config.originResolvConfPath, sb.config.resolvConfPath); err != nil { if !os.IsNotExist(err) { return fmt.Errorf("could not copy source resolv.conf file %s to %s: %v", sb.config.originResolvConfPath, sb.config.resolvConfPath, err) } logrus.Infof("%s does not exist, we create an empty resolv.conf for container", sb.config.originResolvConfPath) if err := createFile(sb.config.resolvConfPath); err != nil { return err } } return nil } originResolvConfPath := sb.config.originResolvConfPath if originResolvConfPath == "" { // fallback if not specified originResolvConfPath = resolvconf.Path() } currRC, err := resolvconf.GetSpecific(originResolvConfPath) if err != nil { if !os.IsNotExist(err) { return err } // it's ok to continue if /etc/resolv.conf doesn't exist, default resolvers (Google's Public DNS) // will be used currRC = &resolvconf.File{} 
logrus.Infof("/etc/resolv.conf does not exist") } if len(sb.config.dnsList) > 0 || len(sb.config.dnsSearchList) > 0 || len(sb.config.dnsOptionsList) > 0 { var ( err error dnsList = resolvconf.GetNameservers(currRC.Content, types.IP) dnsSearchList = resolvconf.GetSearchDomains(currRC.Content) dnsOptionsList = resolvconf.GetOptions(currRC.Content) ) if len(sb.config.dnsList) > 0 { dnsList = sb.config.dnsList } if len(sb.config.dnsSearchList) > 0 { dnsSearchList = sb.config.dnsSearchList } if len(sb.config.dnsOptionsList) > 0 { dnsOptionsList = sb.config.dnsOptionsList } newRC, err = resolvconf.Build(sb.config.resolvConfPath, dnsList, dnsSearchList, dnsOptionsList) if err != nil { return err } // After building the resolv.conf from the user config save the // external resolvers in the sandbox. Note that --dns 127.0.0.x // config refers to the loopback in the container namespace sb.setExternalResolvers(newRC.Content, types.IPv4, false) } else { // If the host resolv.conf file has 127.0.0.x container should // use the host resolver for queries. This is supported by the // docker embedded DNS server. Hence save the external resolvers // before filtering it out. sb.setExternalResolvers(currRC.Content, types.IPv4, true) // Replace any localhost/127.* (at this point we have no info about ipv6, pass it as true) if newRC, err = resolvconf.FilterResolvDNS(currRC.Content, true); err != nil { return err } // No contention on container resolv.conf file at sandbox creation if err := ioutil.WriteFile(sb.config.resolvConfPath, newRC.Content, filePerm); err != nil { return types.InternalErrorf("failed to write unhaltered resolv.conf file content when setting up dns for sandbox %s: %v", sb.ID(), err) } } // Write hash if err := ioutil.WriteFile(sb.config.resolvConfHashFile, []byte(newRC.Hash), filePerm); err != nil { return types.InternalErrorf("failed to write resolv.conf hash file when setting up dns for sandbox %s: %v", sb.ID(), err) } return nil } func (sb *sandbox) updateDNS(ipv6Enabled bool) error { var ( currHash string hashFile = sb.config.resolvConfHashFile ) // This is for the host mode networking if sb.config.useDefaultSandBox { return nil } if len(sb.config.dnsList) > 0 || len(sb.config.dnsSearchList) > 0 || len(sb.config.dnsOptionsList) > 0 { return nil } currRC, err := resolvconf.GetSpecific(sb.config.resolvConfPath) if err != nil { if !os.IsNotExist(err) { return err } } else { h, err := ioutil.ReadFile(hashFile) if err != nil { if !os.IsNotExist(err) { return err } } else { currHash = string(h) } } if currHash != "" && currHash != currRC.Hash { // Seems the user has changed the container resolv.conf since the last time // we checked so return without doing anything. //logrus.Infof("Skipping update of resolv.conf file with ipv6Enabled: %t because file was touched by user", ipv6Enabled) return nil } // replace any localhost/127.* and remove IPv6 nameservers if IPv6 disabled. 
newRC, err := resolvconf.FilterResolvDNS(currRC.Content, ipv6Enabled) if err != nil { return err } err = ioutil.WriteFile(sb.config.resolvConfPath, newRC.Content, 0644) //nolint:gosec // gosec complains about perms here, which must be 0644 in this case if err != nil { return err } // write the new hash in a temp file and rename it to make the update atomic dir := path.Dir(sb.config.resolvConfPath) tmpHashFile, err := ioutil.TempFile(dir, "hash") if err != nil { return err } if err = tmpHashFile.Chmod(filePerm); err != nil { tmpHashFile.Close() return err } _, err = tmpHashFile.Write([]byte(newRC.Hash)) if err1 := tmpHashFile.Close(); err == nil { err = err1 } if err != nil { return err } return os.Rename(tmpHashFile.Name(), hashFile) } // Embedded DNS server has to be enabled for this sandbox. Rebuild the container's // resolv.conf by doing the following // - Add only the embedded server's IP to container's resolv.conf // - If the embedded server needs any resolv.conf options add it to the current list func (sb *sandbox) rebuildDNS() error { currRC, err := resolvconf.GetSpecific(sb.config.resolvConfPath) if err != nil { return err } if len(sb.extDNS) == 0 { sb.setExternalResolvers(currRC.Content, types.IPv4, false) } var ( dnsList = []string{sb.resolver.NameServer()} dnsOptionsList = resolvconf.GetOptions(currRC.Content) dnsSearchList = resolvconf.GetSearchDomains(currRC.Content) ) // external v6 DNS servers has to be listed in resolv.conf dnsList = append(dnsList, resolvconf.GetNameservers(currRC.Content, types.IPv6)...) // If the user config and embedded DNS server both have ndots option set, // remember the user's config so that unqualified names not in the docker // domain can be dropped. resOptions := sb.resolver.ResolverOptions() dnsOpt: for _, resOpt := range resOptions { if strings.Contains(resOpt, "ndots") { for _, option := range dnsOptionsList { if strings.Contains(option, "ndots") { parts := strings.Split(option, ":") if len(parts) != 2 { return fmt.Errorf("invalid ndots option %v", option) } if num, err := strconv.Atoi(parts[1]); err != nil { return fmt.Errorf("invalid number for ndots option: %v", parts[1]) } else if num >= 0 { // if the user sets ndots, use the user setting sb.ndotsSet = true break dnsOpt } else { return fmt.Errorf("invalid number for ndots option: %v", num) } } } } } if !sb.ndotsSet { // if the user did not set the ndots, set it to 0 to prioritize the service name resolution // Ref: https://linux.die.net/man/5/resolv.conf dnsOptionsList = append(dnsOptionsList, resOptions...) } _, err = resolvconf.Build(sb.config.resolvConfPath, dnsList, dnsSearchList, dnsOptionsList) return err } func createBasePath(dir string) error { return os.MkdirAll(dir, dirPerm) } func createFile(path string) error { var f *os.File dir, _ := filepath.Split(path) err := createBasePath(dir) if err != nil { return err } f, err = os.Create(path) if err == nil { f.Close() } return err } func copyFile(src, dst string) error { sBytes, err := ioutil.ReadFile(src) if err != nil { return err } return ioutil.WriteFile(dst, sBytes, filePerm) }
// +build !windows package libnetwork import ( "fmt" "io/ioutil" "net" "os" "path" "path/filepath" "strconv" "strings" "github.com/docker/docker/libnetwork/etchosts" "github.com/docker/docker/libnetwork/resolvconf" "github.com/docker/docker/libnetwork/types" "github.com/sirupsen/logrus" ) const ( defaultPrefix = "/var/lib/docker/network/files" dirPerm = 0755 filePerm = 0644 ) func (sb *sandbox) startResolver(restore bool) { sb.resolverOnce.Do(func() { var err error sb.resolver = NewResolver(resolverIPSandbox, true, sb.Key(), sb) defer func() { if err != nil { sb.resolver = nil } }() // In the case of live restore container is already running with // right resolv.conf contents created before. Just update the // external DNS servers from the restored sandbox for embedded // server to use. if !restore { err = sb.rebuildDNS() if err != nil { logrus.Errorf("Updating resolv.conf failed for container %s, %q", sb.ContainerID(), err) return } } sb.resolver.SetExtServers(sb.extDNS) if err = sb.osSbox.InvokeFunc(sb.resolver.SetupFunc(0)); err != nil { logrus.Errorf("Resolver Setup function failed for container %s, %q", sb.ContainerID(), err) return } if err = sb.resolver.Start(); err != nil { logrus.Errorf("Resolver Start failed for container %s, %q", sb.ContainerID(), err) } }) } func (sb *sandbox) setupResolutionFiles() error { if err := sb.buildHostsFile(); err != nil { return err } if err := sb.updateParentHosts(); err != nil { return err } return sb.setupDNS() } func (sb *sandbox) buildHostsFile() error { if sb.config.hostsPath == "" { sb.config.hostsPath = defaultPrefix + "/" + sb.id + "/hosts" } dir, _ := filepath.Split(sb.config.hostsPath) if err := createBasePath(dir); err != nil { return err } // This is for the host mode networking if sb.config.useDefaultSandBox && len(sb.config.extraHosts) == 0 { // We are working under the assumption that the origin file option had been properly expressed by the upper layer // if not here we are going to error out if err := copyFile(sb.config.originHostsPath, sb.config.hostsPath); err != nil && !os.IsNotExist(err) { return types.InternalErrorf("could not copy source hosts file %s to %s: %v", sb.config.originHostsPath, sb.config.hostsPath, err) } return nil } extraContent := make([]etchosts.Record, 0, len(sb.config.extraHosts)) for _, extraHost := range sb.config.extraHosts { extraContent = append(extraContent, etchosts.Record{Hosts: extraHost.name, IP: extraHost.IP}) } return etchosts.Build(sb.config.hostsPath, "", sb.config.hostName, sb.config.domainName, extraContent) } func (sb *sandbox) updateHostsFile(ifaceIPs []string) error { if len(ifaceIPs) == 0 { return nil } if sb.config.originHostsPath != "" { return nil } // User might have provided a FQDN in hostname or split it across hostname // and domainname. We want the FQDN and the bare hostname. 
fqdn := sb.config.hostName mhost := sb.config.hostName if sb.config.domainName != "" { fqdn = fmt.Sprintf("%s.%s", fqdn, sb.config.domainName) } parts := strings.SplitN(fqdn, ".", 2) if len(parts) == 2 { mhost = fmt.Sprintf("%s %s", fqdn, parts[0]) } var extraContent []etchosts.Record for _, ip := range ifaceIPs { extraContent = append(extraContent, etchosts.Record{Hosts: mhost, IP: ip}) } sb.addHostsEntries(extraContent) return nil } func (sb *sandbox) addHostsEntries(recs []etchosts.Record) { if err := etchosts.Add(sb.config.hostsPath, recs); err != nil { logrus.Warnf("Failed adding service host entries to the running container: %v", err) } } func (sb *sandbox) deleteHostsEntries(recs []etchosts.Record) { if err := etchosts.Delete(sb.config.hostsPath, recs); err != nil { logrus.Warnf("Failed deleting service host entries to the running container: %v", err) } } func (sb *sandbox) updateParentHosts() error { var pSb Sandbox for _, update := range sb.config.parentUpdates { sb.controller.WalkSandboxes(SandboxContainerWalker(&pSb, update.cid)) if pSb == nil { continue } if err := etchosts.Update(pSb.(*sandbox).config.hostsPath, update.ip, update.name); err != nil { return err } } return nil } func (sb *sandbox) restorePath() { if sb.config.resolvConfPath == "" { sb.config.resolvConfPath = defaultPrefix + "/" + sb.id + "/resolv.conf" } sb.config.resolvConfHashFile = sb.config.resolvConfPath + ".hash" if sb.config.hostsPath == "" { sb.config.hostsPath = defaultPrefix + "/" + sb.id + "/hosts" } } func (sb *sandbox) setExternalResolvers(content []byte, addrType int, checkLoopback bool) { servers := resolvconf.GetNameservers(content, addrType) for _, ip := range servers { hostLoopback := false if checkLoopback && isIPv4Loopback(ip) { hostLoopback = true } sb.extDNS = append(sb.extDNS, extDNSEntry{ IPStr: ip, HostLoopback: hostLoopback, }) } } // isIPv4Loopback checks if the given IP address is an IPv4 loopback address. 
// It's based on the logic in Go's net.IP.IsLoopback(), but only the IPv4 part: // https://github.com/golang/go/blob/go1.16.6/src/net/ip.go#L120-L126 func isIPv4Loopback(ipAddress string) bool { if ip := net.ParseIP(ipAddress); ip != nil { if ip4 := ip.To4(); ip4 != nil { return ip4[0] == 127 } } return false } func (sb *sandbox) setupDNS() error { var newRC *resolvconf.File if sb.config.resolvConfPath == "" { sb.config.resolvConfPath = defaultPrefix + "/" + sb.id + "/resolv.conf" } sb.config.resolvConfHashFile = sb.config.resolvConfPath + ".hash" dir, _ := filepath.Split(sb.config.resolvConfPath) if err := createBasePath(dir); err != nil { return err } // When the user specify a conainter in the host namespace and do no have any dns option specified // we just copy the host resolv.conf from the host itself if sb.config.useDefaultSandBox && len(sb.config.dnsList) == 0 && len(sb.config.dnsSearchList) == 0 && len(sb.config.dnsOptionsList) == 0 { // We are working under the assumption that the origin file option had been properly expressed by the upper layer // if not here we are going to error out if err := copyFile(sb.config.originResolvConfPath, sb.config.resolvConfPath); err != nil { if !os.IsNotExist(err) { return fmt.Errorf("could not copy source resolv.conf file %s to %s: %v", sb.config.originResolvConfPath, sb.config.resolvConfPath, err) } logrus.Infof("%s does not exist, we create an empty resolv.conf for container", sb.config.originResolvConfPath) if err := createFile(sb.config.resolvConfPath); err != nil { return err } } return nil } originResolvConfPath := sb.config.originResolvConfPath if originResolvConfPath == "" { // fallback if not specified originResolvConfPath = resolvconf.Path() } currRC, err := resolvconf.GetSpecific(originResolvConfPath) if err != nil { if !os.IsNotExist(err) { return err } // it's ok to continue if /etc/resolv.conf doesn't exist, default resolvers (Google's Public DNS) // will be used currRC = &resolvconf.File{} logrus.Infof("/etc/resolv.conf does not exist") } if len(sb.config.dnsList) > 0 || len(sb.config.dnsSearchList) > 0 || len(sb.config.dnsOptionsList) > 0 { var ( err error dnsList = resolvconf.GetNameservers(currRC.Content, resolvconf.IP) dnsSearchList = resolvconf.GetSearchDomains(currRC.Content) dnsOptionsList = resolvconf.GetOptions(currRC.Content) ) if len(sb.config.dnsList) > 0 { dnsList = sb.config.dnsList } if len(sb.config.dnsSearchList) > 0 { dnsSearchList = sb.config.dnsSearchList } if len(sb.config.dnsOptionsList) > 0 { dnsOptionsList = sb.config.dnsOptionsList } newRC, err = resolvconf.Build(sb.config.resolvConfPath, dnsList, dnsSearchList, dnsOptionsList) if err != nil { return err } // After building the resolv.conf from the user config save the // external resolvers in the sandbox. Note that --dns 127.0.0.x // config refers to the loopback in the container namespace sb.setExternalResolvers(newRC.Content, resolvconf.IPv4, false) } else { // If the host resolv.conf file has 127.0.0.x container should // use the host resolver for queries. This is supported by the // docker embedded DNS server. Hence save the external resolvers // before filtering it out. 
sb.setExternalResolvers(currRC.Content, resolvconf.IPv4, true) // Replace any localhost/127.* (at this point we have no info about ipv6, pass it as true) if newRC, err = resolvconf.FilterResolvDNS(currRC.Content, true); err != nil { return err } // No contention on container resolv.conf file at sandbox creation if err := ioutil.WriteFile(sb.config.resolvConfPath, newRC.Content, filePerm); err != nil { return types.InternalErrorf("failed to write unhaltered resolv.conf file content when setting up dns for sandbox %s: %v", sb.ID(), err) } } // Write hash if err := ioutil.WriteFile(sb.config.resolvConfHashFile, []byte(newRC.Hash), filePerm); err != nil { return types.InternalErrorf("failed to write resolv.conf hash file when setting up dns for sandbox %s: %v", sb.ID(), err) } return nil } func (sb *sandbox) updateDNS(ipv6Enabled bool) error { var ( currHash string hashFile = sb.config.resolvConfHashFile ) // This is for the host mode networking if sb.config.useDefaultSandBox { return nil } if len(sb.config.dnsList) > 0 || len(sb.config.dnsSearchList) > 0 || len(sb.config.dnsOptionsList) > 0 { return nil } currRC, err := resolvconf.GetSpecific(sb.config.resolvConfPath) if err != nil { if !os.IsNotExist(err) { return err } } else { h, err := ioutil.ReadFile(hashFile) if err != nil { if !os.IsNotExist(err) { return err } } else { currHash = string(h) } } if currHash != "" && currHash != currRC.Hash { // Seems the user has changed the container resolv.conf since the last time // we checked so return without doing anything. //logrus.Infof("Skipping update of resolv.conf file with ipv6Enabled: %t because file was touched by user", ipv6Enabled) return nil } // replace any localhost/127.* and remove IPv6 nameservers if IPv6 disabled. newRC, err := resolvconf.FilterResolvDNS(currRC.Content, ipv6Enabled) if err != nil { return err } err = ioutil.WriteFile(sb.config.resolvConfPath, newRC.Content, 0644) //nolint:gosec // gosec complains about perms here, which must be 0644 in this case if err != nil { return err } // write the new hash in a temp file and rename it to make the update atomic dir := path.Dir(sb.config.resolvConfPath) tmpHashFile, err := ioutil.TempFile(dir, "hash") if err != nil { return err } if err = tmpHashFile.Chmod(filePerm); err != nil { tmpHashFile.Close() return err } _, err = tmpHashFile.Write([]byte(newRC.Hash)) if err1 := tmpHashFile.Close(); err == nil { err = err1 } if err != nil { return err } return os.Rename(tmpHashFile.Name(), hashFile) } // Embedded DNS server has to be enabled for this sandbox. Rebuild the container's // resolv.conf by doing the following // - Add only the embedded server's IP to container's resolv.conf // - If the embedded server needs any resolv.conf options add it to the current list func (sb *sandbox) rebuildDNS() error { currRC, err := resolvconf.GetSpecific(sb.config.resolvConfPath) if err != nil { return err } if len(sb.extDNS) == 0 { sb.setExternalResolvers(currRC.Content, resolvconf.IPv4, false) } var ( dnsList = []string{sb.resolver.NameServer()} dnsOptionsList = resolvconf.GetOptions(currRC.Content) dnsSearchList = resolvconf.GetSearchDomains(currRC.Content) ) // external v6 DNS servers has to be listed in resolv.conf dnsList = append(dnsList, resolvconf.GetNameservers(currRC.Content, resolvconf.IPv6)...) // If the user config and embedded DNS server both have ndots option set, // remember the user's config so that unqualified names not in the docker // domain can be dropped. 
resOptions := sb.resolver.ResolverOptions() dnsOpt: for _, resOpt := range resOptions { if strings.Contains(resOpt, "ndots") { for _, option := range dnsOptionsList { if strings.Contains(option, "ndots") { parts := strings.Split(option, ":") if len(parts) != 2 { return fmt.Errorf("invalid ndots option %v", option) } if num, err := strconv.Atoi(parts[1]); err != nil { return fmt.Errorf("invalid number for ndots option: %v", parts[1]) } else if num >= 0 { // if the user sets ndots, use the user setting sb.ndotsSet = true break dnsOpt } else { return fmt.Errorf("invalid number for ndots option: %v", num) } } } } } if !sb.ndotsSet { // if the user did not set the ndots, set it to 0 to prioritize the service name resolution // Ref: https://linux.die.net/man/5/resolv.conf dnsOptionsList = append(dnsOptionsList, resOptions...) } _, err = resolvconf.Build(sb.config.resolvConfPath, dnsList, dnsSearchList, dnsOptionsList) return err } func createBasePath(dir string) error { return os.MkdirAll(dir, dirPerm) } func createFile(path string) error { var f *os.File dir, _ := filepath.Split(path) err := createBasePath(dir) if err != nil { return err } f, err = os.Create(path) if err == nil { f.Close() } return err } func copyFile(src, dst string) error { sBytes, err := ioutil.ReadFile(src) if err != nil { return err } return ioutil.WriteFile(dst, sBytes, filePerm) }
thaJeztah
b6919cb55320fb726d0012e06db50776aef52813
5ea3e12b63511846ac38787af8bdff89c66cc061
Ah, yes. Brainfart; I was thinking this was to _skip_ loopback IP addresses (as they won't work as DNS for the container), but this is the bit where we forward requests to them. Yup, that likely shouldn't use IPv6. Or maybe it should, but we can do that separately.
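For context, a minimal sketch (not code from this PR) of what a loopback check that also covers IPv6 could look like if that follow-up were done; it relies only on Go's standard library, since `net.IP.IsLoopback()` already covers both 127.0.0.0/8 and `::1`:

```go
package main

import (
	"fmt"
	"net"
)

// isLoopback is a hypothetical variant of the PR's isIPv4Loopback helper that
// also treats the IPv6 loopback address (::1) as a host-local resolver.
func isLoopback(ipAddress string) bool {
	ip := net.ParseIP(ipAddress)
	return ip != nil && ip.IsLoopback()
}

func main() {
	for _, addr := range []string{"127.0.0.1", "127.0.0.53", "::1", "10.0.0.1"} {
		fmt.Printf("%-11s loopback=%v\n", addr, isLoopback(addr))
	}
}
```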
thaJeztah
4,501
moby/moby
42,755
libnetwork: make resolvconf more self-contained
### libnetwork: move resolvconf consts into the resolvconf package This allows using the package without having to import the "types" package, and without having to consume github.com/ishidawataru/sctp. ### libnetwork: remove resolvconf/dns package The IsLocalhost utility was not used, and Go's "net" package provides an `IsLoopback()` check, which can be used instead of `IsIPv4Localhost`. ### move pkg/ioutils.HashData() to libnetwork/resolvconf It's the only place it's used, so we might as well move it there. This also removes the "crypto/sha256" and "encoding/hex" dependencies from pkg/ioutils. **- Description for the changelog** <!-- Write a short (one line) summary that describes the changes in this pull request for inclusion in the changelog: --> **- A picture of a cute animal (not mandatory but encouraged)**
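As a rough illustration of the last bullet, a package-local HashData-style helper could look something like the sketch below; the function name, the `sha256:` prefix, and the exact signature are assumptions, not a copy of the moby code:

```go
package resolvconf

import (
	"crypto/sha256"
	"encoding/hex"
	"io"
)

// hashData hashes the reader's contents with SHA-256 and returns the digest
// as a "sha256:<hex>" string, which is enough for resolv.conf change tracking.
func hashData(src io.Reader) (string, error) {
	h := sha256.New()
	if _, err := io.Copy(h, src); err != nil {
		return "", err
	}
	return "sha256:" + hex.EncodeToString(h.Sum(nil)), nil
}
```

The point of the sketch is only that the hashing can live next to its single caller, which is what lets pkg/ioutils drop the crypto/sha256 and encoding/hex imports.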
null
2021-08-18 10:52:13+00:00
2021-08-20 08:05:29+00:00
libnetwork/sandbox_dns_unix.go
// +build !windows package libnetwork import ( "fmt" "io/ioutil" "os" "path" "path/filepath" "strconv" "strings" "github.com/docker/docker/libnetwork/etchosts" "github.com/docker/docker/libnetwork/resolvconf" "github.com/docker/docker/libnetwork/resolvconf/dns" "github.com/docker/docker/libnetwork/types" "github.com/sirupsen/logrus" ) const ( defaultPrefix = "/var/lib/docker/network/files" dirPerm = 0755 filePerm = 0644 ) func (sb *sandbox) startResolver(restore bool) { sb.resolverOnce.Do(func() { var err error sb.resolver = NewResolver(resolverIPSandbox, true, sb.Key(), sb) defer func() { if err != nil { sb.resolver = nil } }() // In the case of live restore container is already running with // right resolv.conf contents created before. Just update the // external DNS servers from the restored sandbox for embedded // server to use. if !restore { err = sb.rebuildDNS() if err != nil { logrus.Errorf("Updating resolv.conf failed for container %s, %q", sb.ContainerID(), err) return } } sb.resolver.SetExtServers(sb.extDNS) if err = sb.osSbox.InvokeFunc(sb.resolver.SetupFunc(0)); err != nil { logrus.Errorf("Resolver Setup function failed for container %s, %q", sb.ContainerID(), err) return } if err = sb.resolver.Start(); err != nil { logrus.Errorf("Resolver Start failed for container %s, %q", sb.ContainerID(), err) } }) } func (sb *sandbox) setupResolutionFiles() error { if err := sb.buildHostsFile(); err != nil { return err } if err := sb.updateParentHosts(); err != nil { return err } return sb.setupDNS() } func (sb *sandbox) buildHostsFile() error { if sb.config.hostsPath == "" { sb.config.hostsPath = defaultPrefix + "/" + sb.id + "/hosts" } dir, _ := filepath.Split(sb.config.hostsPath) if err := createBasePath(dir); err != nil { return err } // This is for the host mode networking if sb.config.useDefaultSandBox && len(sb.config.extraHosts) == 0 { // We are working under the assumption that the origin file option had been properly expressed by the upper layer // if not here we are going to error out if err := copyFile(sb.config.originHostsPath, sb.config.hostsPath); err != nil && !os.IsNotExist(err) { return types.InternalErrorf("could not copy source hosts file %s to %s: %v", sb.config.originHostsPath, sb.config.hostsPath, err) } return nil } extraContent := make([]etchosts.Record, 0, len(sb.config.extraHosts)) for _, extraHost := range sb.config.extraHosts { extraContent = append(extraContent, etchosts.Record{Hosts: extraHost.name, IP: extraHost.IP}) } return etchosts.Build(sb.config.hostsPath, "", sb.config.hostName, sb.config.domainName, extraContent) } func (sb *sandbox) updateHostsFile(ifaceIPs []string) error { if len(ifaceIPs) == 0 { return nil } if sb.config.originHostsPath != "" { return nil } // User might have provided a FQDN in hostname or split it across hostname // and domainname. We want the FQDN and the bare hostname. 
fqdn := sb.config.hostName mhost := sb.config.hostName if sb.config.domainName != "" { fqdn = fmt.Sprintf("%s.%s", fqdn, sb.config.domainName) } parts := strings.SplitN(fqdn, ".", 2) if len(parts) == 2 { mhost = fmt.Sprintf("%s %s", fqdn, parts[0]) } var extraContent []etchosts.Record for _, ip := range ifaceIPs { extraContent = append(extraContent, etchosts.Record{Hosts: mhost, IP: ip}) } sb.addHostsEntries(extraContent) return nil } func (sb *sandbox) addHostsEntries(recs []etchosts.Record) { if err := etchosts.Add(sb.config.hostsPath, recs); err != nil { logrus.Warnf("Failed adding service host entries to the running container: %v", err) } } func (sb *sandbox) deleteHostsEntries(recs []etchosts.Record) { if err := etchosts.Delete(sb.config.hostsPath, recs); err != nil { logrus.Warnf("Failed deleting service host entries to the running container: %v", err) } } func (sb *sandbox) updateParentHosts() error { var pSb Sandbox for _, update := range sb.config.parentUpdates { sb.controller.WalkSandboxes(SandboxContainerWalker(&pSb, update.cid)) if pSb == nil { continue } if err := etchosts.Update(pSb.(*sandbox).config.hostsPath, update.ip, update.name); err != nil { return err } } return nil } func (sb *sandbox) restorePath() { if sb.config.resolvConfPath == "" { sb.config.resolvConfPath = defaultPrefix + "/" + sb.id + "/resolv.conf" } sb.config.resolvConfHashFile = sb.config.resolvConfPath + ".hash" if sb.config.hostsPath == "" { sb.config.hostsPath = defaultPrefix + "/" + sb.id + "/hosts" } } func (sb *sandbox) setExternalResolvers(content []byte, addrType int, checkLoopback bool) { servers := resolvconf.GetNameservers(content, addrType) for _, ip := range servers { hostLoopback := false if checkLoopback { hostLoopback = dns.IsIPv4Localhost(ip) } sb.extDNS = append(sb.extDNS, extDNSEntry{ IPStr: ip, HostLoopback: hostLoopback, }) } } func (sb *sandbox) setupDNS() error { var newRC *resolvconf.File if sb.config.resolvConfPath == "" { sb.config.resolvConfPath = defaultPrefix + "/" + sb.id + "/resolv.conf" } sb.config.resolvConfHashFile = sb.config.resolvConfPath + ".hash" dir, _ := filepath.Split(sb.config.resolvConfPath) if err := createBasePath(dir); err != nil { return err } // When the user specify a conainter in the host namespace and do no have any dns option specified // we just copy the host resolv.conf from the host itself if sb.config.useDefaultSandBox && len(sb.config.dnsList) == 0 && len(sb.config.dnsSearchList) == 0 && len(sb.config.dnsOptionsList) == 0 { // We are working under the assumption that the origin file option had been properly expressed by the upper layer // if not here we are going to error out if err := copyFile(sb.config.originResolvConfPath, sb.config.resolvConfPath); err != nil { if !os.IsNotExist(err) { return fmt.Errorf("could not copy source resolv.conf file %s to %s: %v", sb.config.originResolvConfPath, sb.config.resolvConfPath, err) } logrus.Infof("%s does not exist, we create an empty resolv.conf for container", sb.config.originResolvConfPath) if err := createFile(sb.config.resolvConfPath); err != nil { return err } } return nil } originResolvConfPath := sb.config.originResolvConfPath if originResolvConfPath == "" { // fallback if not specified originResolvConfPath = resolvconf.Path() } currRC, err := resolvconf.GetSpecific(originResolvConfPath) if err != nil { if !os.IsNotExist(err) { return err } // it's ok to continue if /etc/resolv.conf doesn't exist, default resolvers (Google's Public DNS) // will be used currRC = &resolvconf.File{} 
logrus.Infof("/etc/resolv.conf does not exist") } if len(sb.config.dnsList) > 0 || len(sb.config.dnsSearchList) > 0 || len(sb.config.dnsOptionsList) > 0 { var ( err error dnsList = resolvconf.GetNameservers(currRC.Content, types.IP) dnsSearchList = resolvconf.GetSearchDomains(currRC.Content) dnsOptionsList = resolvconf.GetOptions(currRC.Content) ) if len(sb.config.dnsList) > 0 { dnsList = sb.config.dnsList } if len(sb.config.dnsSearchList) > 0 { dnsSearchList = sb.config.dnsSearchList } if len(sb.config.dnsOptionsList) > 0 { dnsOptionsList = sb.config.dnsOptionsList } newRC, err = resolvconf.Build(sb.config.resolvConfPath, dnsList, dnsSearchList, dnsOptionsList) if err != nil { return err } // After building the resolv.conf from the user config save the // external resolvers in the sandbox. Note that --dns 127.0.0.x // config refers to the loopback in the container namespace sb.setExternalResolvers(newRC.Content, types.IPv4, false) } else { // If the host resolv.conf file has 127.0.0.x container should // use the host resolver for queries. This is supported by the // docker embedded DNS server. Hence save the external resolvers // before filtering it out. sb.setExternalResolvers(currRC.Content, types.IPv4, true) // Replace any localhost/127.* (at this point we have no info about ipv6, pass it as true) if newRC, err = resolvconf.FilterResolvDNS(currRC.Content, true); err != nil { return err } // No contention on container resolv.conf file at sandbox creation if err := ioutil.WriteFile(sb.config.resolvConfPath, newRC.Content, filePerm); err != nil { return types.InternalErrorf("failed to write unhaltered resolv.conf file content when setting up dns for sandbox %s: %v", sb.ID(), err) } } // Write hash if err := ioutil.WriteFile(sb.config.resolvConfHashFile, []byte(newRC.Hash), filePerm); err != nil { return types.InternalErrorf("failed to write resolv.conf hash file when setting up dns for sandbox %s: %v", sb.ID(), err) } return nil } func (sb *sandbox) updateDNS(ipv6Enabled bool) error { var ( currHash string hashFile = sb.config.resolvConfHashFile ) // This is for the host mode networking if sb.config.useDefaultSandBox { return nil } if len(sb.config.dnsList) > 0 || len(sb.config.dnsSearchList) > 0 || len(sb.config.dnsOptionsList) > 0 { return nil } currRC, err := resolvconf.GetSpecific(sb.config.resolvConfPath) if err != nil { if !os.IsNotExist(err) { return err } } else { h, err := ioutil.ReadFile(hashFile) if err != nil { if !os.IsNotExist(err) { return err } } else { currHash = string(h) } } if currHash != "" && currHash != currRC.Hash { // Seems the user has changed the container resolv.conf since the last time // we checked so return without doing anything. //logrus.Infof("Skipping update of resolv.conf file with ipv6Enabled: %t because file was touched by user", ipv6Enabled) return nil } // replace any localhost/127.* and remove IPv6 nameservers if IPv6 disabled. 
newRC, err := resolvconf.FilterResolvDNS(currRC.Content, ipv6Enabled) if err != nil { return err } err = ioutil.WriteFile(sb.config.resolvConfPath, newRC.Content, 0644) //nolint:gosec // gosec complains about perms here, which must be 0644 in this case if err != nil { return err } // write the new hash in a temp file and rename it to make the update atomic dir := path.Dir(sb.config.resolvConfPath) tmpHashFile, err := ioutil.TempFile(dir, "hash") if err != nil { return err } if err = tmpHashFile.Chmod(filePerm); err != nil { tmpHashFile.Close() return err } _, err = tmpHashFile.Write([]byte(newRC.Hash)) if err1 := tmpHashFile.Close(); err == nil { err = err1 } if err != nil { return err } return os.Rename(tmpHashFile.Name(), hashFile) } // Embedded DNS server has to be enabled for this sandbox. Rebuild the container's // resolv.conf by doing the following // - Add only the embedded server's IP to container's resolv.conf // - If the embedded server needs any resolv.conf options add it to the current list func (sb *sandbox) rebuildDNS() error { currRC, err := resolvconf.GetSpecific(sb.config.resolvConfPath) if err != nil { return err } if len(sb.extDNS) == 0 { sb.setExternalResolvers(currRC.Content, types.IPv4, false) } var ( dnsList = []string{sb.resolver.NameServer()} dnsOptionsList = resolvconf.GetOptions(currRC.Content) dnsSearchList = resolvconf.GetSearchDomains(currRC.Content) ) // external v6 DNS servers has to be listed in resolv.conf dnsList = append(dnsList, resolvconf.GetNameservers(currRC.Content, types.IPv6)...) // If the user config and embedded DNS server both have ndots option set, // remember the user's config so that unqualified names not in the docker // domain can be dropped. resOptions := sb.resolver.ResolverOptions() dnsOpt: for _, resOpt := range resOptions { if strings.Contains(resOpt, "ndots") { for _, option := range dnsOptionsList { if strings.Contains(option, "ndots") { parts := strings.Split(option, ":") if len(parts) != 2 { return fmt.Errorf("invalid ndots option %v", option) } if num, err := strconv.Atoi(parts[1]); err != nil { return fmt.Errorf("invalid number for ndots option: %v", parts[1]) } else if num >= 0 { // if the user sets ndots, use the user setting sb.ndotsSet = true break dnsOpt } else { return fmt.Errorf("invalid number for ndots option: %v", num) } } } } } if !sb.ndotsSet { // if the user did not set the ndots, set it to 0 to prioritize the service name resolution // Ref: https://linux.die.net/man/5/resolv.conf dnsOptionsList = append(dnsOptionsList, resOptions...) } _, err = resolvconf.Build(sb.config.resolvConfPath, dnsList, dnsSearchList, dnsOptionsList) return err } func createBasePath(dir string) error { return os.MkdirAll(dir, dirPerm) } func createFile(path string) error { var f *os.File dir, _ := filepath.Split(path) err := createBasePath(dir) if err != nil { return err } f, err = os.Create(path) if err == nil { f.Close() } return err } func copyFile(src, dst string) error { sBytes, err := ioutil.ReadFile(src) if err != nil { return err } return ioutil.WriteFile(dst, sBytes, filePerm) }
// +build !windows package libnetwork import ( "fmt" "io/ioutil" "net" "os" "path" "path/filepath" "strconv" "strings" "github.com/docker/docker/libnetwork/etchosts" "github.com/docker/docker/libnetwork/resolvconf" "github.com/docker/docker/libnetwork/types" "github.com/sirupsen/logrus" ) const ( defaultPrefix = "/var/lib/docker/network/files" dirPerm = 0755 filePerm = 0644 ) func (sb *sandbox) startResolver(restore bool) { sb.resolverOnce.Do(func() { var err error sb.resolver = NewResolver(resolverIPSandbox, true, sb.Key(), sb) defer func() { if err != nil { sb.resolver = nil } }() // In the case of live restore container is already running with // right resolv.conf contents created before. Just update the // external DNS servers from the restored sandbox for embedded // server to use. if !restore { err = sb.rebuildDNS() if err != nil { logrus.Errorf("Updating resolv.conf failed for container %s, %q", sb.ContainerID(), err) return } } sb.resolver.SetExtServers(sb.extDNS) if err = sb.osSbox.InvokeFunc(sb.resolver.SetupFunc(0)); err != nil { logrus.Errorf("Resolver Setup function failed for container %s, %q", sb.ContainerID(), err) return } if err = sb.resolver.Start(); err != nil { logrus.Errorf("Resolver Start failed for container %s, %q", sb.ContainerID(), err) } }) } func (sb *sandbox) setupResolutionFiles() error { if err := sb.buildHostsFile(); err != nil { return err } if err := sb.updateParentHosts(); err != nil { return err } return sb.setupDNS() } func (sb *sandbox) buildHostsFile() error { if sb.config.hostsPath == "" { sb.config.hostsPath = defaultPrefix + "/" + sb.id + "/hosts" } dir, _ := filepath.Split(sb.config.hostsPath) if err := createBasePath(dir); err != nil { return err } // This is for the host mode networking if sb.config.useDefaultSandBox && len(sb.config.extraHosts) == 0 { // We are working under the assumption that the origin file option had been properly expressed by the upper layer // if not here we are going to error out if err := copyFile(sb.config.originHostsPath, sb.config.hostsPath); err != nil && !os.IsNotExist(err) { return types.InternalErrorf("could not copy source hosts file %s to %s: %v", sb.config.originHostsPath, sb.config.hostsPath, err) } return nil } extraContent := make([]etchosts.Record, 0, len(sb.config.extraHosts)) for _, extraHost := range sb.config.extraHosts { extraContent = append(extraContent, etchosts.Record{Hosts: extraHost.name, IP: extraHost.IP}) } return etchosts.Build(sb.config.hostsPath, "", sb.config.hostName, sb.config.domainName, extraContent) } func (sb *sandbox) updateHostsFile(ifaceIPs []string) error { if len(ifaceIPs) == 0 { return nil } if sb.config.originHostsPath != "" { return nil } // User might have provided a FQDN in hostname or split it across hostname // and domainname. We want the FQDN and the bare hostname. 
fqdn := sb.config.hostName mhost := sb.config.hostName if sb.config.domainName != "" { fqdn = fmt.Sprintf("%s.%s", fqdn, sb.config.domainName) } parts := strings.SplitN(fqdn, ".", 2) if len(parts) == 2 { mhost = fmt.Sprintf("%s %s", fqdn, parts[0]) } var extraContent []etchosts.Record for _, ip := range ifaceIPs { extraContent = append(extraContent, etchosts.Record{Hosts: mhost, IP: ip}) } sb.addHostsEntries(extraContent) return nil } func (sb *sandbox) addHostsEntries(recs []etchosts.Record) { if err := etchosts.Add(sb.config.hostsPath, recs); err != nil { logrus.Warnf("Failed adding service host entries to the running container: %v", err) } } func (sb *sandbox) deleteHostsEntries(recs []etchosts.Record) { if err := etchosts.Delete(sb.config.hostsPath, recs); err != nil { logrus.Warnf("Failed deleting service host entries to the running container: %v", err) } } func (sb *sandbox) updateParentHosts() error { var pSb Sandbox for _, update := range sb.config.parentUpdates { sb.controller.WalkSandboxes(SandboxContainerWalker(&pSb, update.cid)) if pSb == nil { continue } if err := etchosts.Update(pSb.(*sandbox).config.hostsPath, update.ip, update.name); err != nil { return err } } return nil } func (sb *sandbox) restorePath() { if sb.config.resolvConfPath == "" { sb.config.resolvConfPath = defaultPrefix + "/" + sb.id + "/resolv.conf" } sb.config.resolvConfHashFile = sb.config.resolvConfPath + ".hash" if sb.config.hostsPath == "" { sb.config.hostsPath = defaultPrefix + "/" + sb.id + "/hosts" } } func (sb *sandbox) setExternalResolvers(content []byte, addrType int, checkLoopback bool) { servers := resolvconf.GetNameservers(content, addrType) for _, ip := range servers { hostLoopback := false if checkLoopback && isIPv4Loopback(ip) { hostLoopback = true } sb.extDNS = append(sb.extDNS, extDNSEntry{ IPStr: ip, HostLoopback: hostLoopback, }) } } // isIPv4Loopback checks if the given IP address is an IPv4 loopback address. 
// It's based on the logic in Go's net.IP.IsLoopback(), but only the IPv4 part: // https://github.com/golang/go/blob/go1.16.6/src/net/ip.go#L120-L126 func isIPv4Loopback(ipAddress string) bool { if ip := net.ParseIP(ipAddress); ip != nil { if ip4 := ip.To4(); ip4 != nil { return ip4[0] == 127 } } return false } func (sb *sandbox) setupDNS() error { var newRC *resolvconf.File if sb.config.resolvConfPath == "" { sb.config.resolvConfPath = defaultPrefix + "/" + sb.id + "/resolv.conf" } sb.config.resolvConfHashFile = sb.config.resolvConfPath + ".hash" dir, _ := filepath.Split(sb.config.resolvConfPath) if err := createBasePath(dir); err != nil { return err } // When the user specify a conainter in the host namespace and do no have any dns option specified // we just copy the host resolv.conf from the host itself if sb.config.useDefaultSandBox && len(sb.config.dnsList) == 0 && len(sb.config.dnsSearchList) == 0 && len(sb.config.dnsOptionsList) == 0 { // We are working under the assumption that the origin file option had been properly expressed by the upper layer // if not here we are going to error out if err := copyFile(sb.config.originResolvConfPath, sb.config.resolvConfPath); err != nil { if !os.IsNotExist(err) { return fmt.Errorf("could not copy source resolv.conf file %s to %s: %v", sb.config.originResolvConfPath, sb.config.resolvConfPath, err) } logrus.Infof("%s does not exist, we create an empty resolv.conf for container", sb.config.originResolvConfPath) if err := createFile(sb.config.resolvConfPath); err != nil { return err } } return nil } originResolvConfPath := sb.config.originResolvConfPath if originResolvConfPath == "" { // fallback if not specified originResolvConfPath = resolvconf.Path() } currRC, err := resolvconf.GetSpecific(originResolvConfPath) if err != nil { if !os.IsNotExist(err) { return err } // it's ok to continue if /etc/resolv.conf doesn't exist, default resolvers (Google's Public DNS) // will be used currRC = &resolvconf.File{} logrus.Infof("/etc/resolv.conf does not exist") } if len(sb.config.dnsList) > 0 || len(sb.config.dnsSearchList) > 0 || len(sb.config.dnsOptionsList) > 0 { var ( err error dnsList = resolvconf.GetNameservers(currRC.Content, resolvconf.IP) dnsSearchList = resolvconf.GetSearchDomains(currRC.Content) dnsOptionsList = resolvconf.GetOptions(currRC.Content) ) if len(sb.config.dnsList) > 0 { dnsList = sb.config.dnsList } if len(sb.config.dnsSearchList) > 0 { dnsSearchList = sb.config.dnsSearchList } if len(sb.config.dnsOptionsList) > 0 { dnsOptionsList = sb.config.dnsOptionsList } newRC, err = resolvconf.Build(sb.config.resolvConfPath, dnsList, dnsSearchList, dnsOptionsList) if err != nil { return err } // After building the resolv.conf from the user config save the // external resolvers in the sandbox. Note that --dns 127.0.0.x // config refers to the loopback in the container namespace sb.setExternalResolvers(newRC.Content, resolvconf.IPv4, false) } else { // If the host resolv.conf file has 127.0.0.x container should // use the host resolver for queries. This is supported by the // docker embedded DNS server. Hence save the external resolvers // before filtering it out. 
sb.setExternalResolvers(currRC.Content, resolvconf.IPv4, true) // Replace any localhost/127.* (at this point we have no info about ipv6, pass it as true) if newRC, err = resolvconf.FilterResolvDNS(currRC.Content, true); err != nil { return err } // No contention on container resolv.conf file at sandbox creation if err := ioutil.WriteFile(sb.config.resolvConfPath, newRC.Content, filePerm); err != nil { return types.InternalErrorf("failed to write unhaltered resolv.conf file content when setting up dns for sandbox %s: %v", sb.ID(), err) } } // Write hash if err := ioutil.WriteFile(sb.config.resolvConfHashFile, []byte(newRC.Hash), filePerm); err != nil { return types.InternalErrorf("failed to write resolv.conf hash file when setting up dns for sandbox %s: %v", sb.ID(), err) } return nil } func (sb *sandbox) updateDNS(ipv6Enabled bool) error { var ( currHash string hashFile = sb.config.resolvConfHashFile ) // This is for the host mode networking if sb.config.useDefaultSandBox { return nil } if len(sb.config.dnsList) > 0 || len(sb.config.dnsSearchList) > 0 || len(sb.config.dnsOptionsList) > 0 { return nil } currRC, err := resolvconf.GetSpecific(sb.config.resolvConfPath) if err != nil { if !os.IsNotExist(err) { return err } } else { h, err := ioutil.ReadFile(hashFile) if err != nil { if !os.IsNotExist(err) { return err } } else { currHash = string(h) } } if currHash != "" && currHash != currRC.Hash { // Seems the user has changed the container resolv.conf since the last time // we checked so return without doing anything. //logrus.Infof("Skipping update of resolv.conf file with ipv6Enabled: %t because file was touched by user", ipv6Enabled) return nil } // replace any localhost/127.* and remove IPv6 nameservers if IPv6 disabled. newRC, err := resolvconf.FilterResolvDNS(currRC.Content, ipv6Enabled) if err != nil { return err } err = ioutil.WriteFile(sb.config.resolvConfPath, newRC.Content, 0644) //nolint:gosec // gosec complains about perms here, which must be 0644 in this case if err != nil { return err } // write the new hash in a temp file and rename it to make the update atomic dir := path.Dir(sb.config.resolvConfPath) tmpHashFile, err := ioutil.TempFile(dir, "hash") if err != nil { return err } if err = tmpHashFile.Chmod(filePerm); err != nil { tmpHashFile.Close() return err } _, err = tmpHashFile.Write([]byte(newRC.Hash)) if err1 := tmpHashFile.Close(); err == nil { err = err1 } if err != nil { return err } return os.Rename(tmpHashFile.Name(), hashFile) } // Embedded DNS server has to be enabled for this sandbox. Rebuild the container's // resolv.conf by doing the following // - Add only the embedded server's IP to container's resolv.conf // - If the embedded server needs any resolv.conf options add it to the current list func (sb *sandbox) rebuildDNS() error { currRC, err := resolvconf.GetSpecific(sb.config.resolvConfPath) if err != nil { return err } if len(sb.extDNS) == 0 { sb.setExternalResolvers(currRC.Content, resolvconf.IPv4, false) } var ( dnsList = []string{sb.resolver.NameServer()} dnsOptionsList = resolvconf.GetOptions(currRC.Content) dnsSearchList = resolvconf.GetSearchDomains(currRC.Content) ) // external v6 DNS servers has to be listed in resolv.conf dnsList = append(dnsList, resolvconf.GetNameservers(currRC.Content, resolvconf.IPv6)...) // If the user config and embedded DNS server both have ndots option set, // remember the user's config so that unqualified names not in the docker // domain can be dropped. 
resOptions := sb.resolver.ResolverOptions() dnsOpt: for _, resOpt := range resOptions { if strings.Contains(resOpt, "ndots") { for _, option := range dnsOptionsList { if strings.Contains(option, "ndots") { parts := strings.Split(option, ":") if len(parts) != 2 { return fmt.Errorf("invalid ndots option %v", option) } if num, err := strconv.Atoi(parts[1]); err != nil { return fmt.Errorf("invalid number for ndots option: %v", parts[1]) } else if num >= 0 { // if the user sets ndots, use the user setting sb.ndotsSet = true break dnsOpt } else { return fmt.Errorf("invalid number for ndots option: %v", num) } } } } } if !sb.ndotsSet { // if the user did not set the ndots, set it to 0 to prioritize the service name resolution // Ref: https://linux.die.net/man/5/resolv.conf dnsOptionsList = append(dnsOptionsList, resOptions...) } _, err = resolvconf.Build(sb.config.resolvConfPath, dnsList, dnsSearchList, dnsOptionsList) return err } func createBasePath(dir string) error { return os.MkdirAll(dir, dirPerm) } func createFile(path string) error { var f *os.File dir, _ := filepath.Split(path) err := createBasePath(dir) if err != nil { return err } f, err = os.Create(path) if err == nil { f.Close() } return err } func copyFile(src, dst string) error { sBytes, err := ioutil.ReadFile(src) if err != nil { return err } return ioutil.WriteFile(dst, sBytes, filePerm) }
thaJeztah
b6919cb55320fb726d0012e06db50776aef52813
5ea3e12b63511846ac38787af8bdff89c66cc061
This is a lot less eyebrow-raising than the regular expressions above for parsing IP addresses in a language with a pretty decent standard library for parsing IP addresses. 😅 👍
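To make the comparison concrete, here is a small, self-contained example (the test addresses are arbitrary) exercising the `net`-based helper quoted in the diff above; `::1` and garbage input both come back false because `To4()` or `ParseIP()` returns nil:

```go
package main

import (
	"fmt"
	"net"
)

// isIPv4Loopback mirrors the helper added in this PR: parse the address with
// the standard library and check only the IPv4 loopback range (127.0.0.0/8).
func isIPv4Loopback(ipAddress string) bool {
	if ip := net.ParseIP(ipAddress); ip != nil {
		if ip4 := ip.To4(); ip4 != nil {
			return ip4[0] == 127
		}
	}
	return false
}

func main() {
	for _, addr := range []string{"127.0.0.1", "127.1.2.3", "::1", "8.8.8.8", "not-an-ip"} {
		fmt.Printf("%-10s -> %v\n", addr, isIPv4Loopback(addr))
	}
}
```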
tianon
4,502
moby/moby
42,755
libnetwork: make resolvconf more self-contained
### libnetwork: move resolvconf consts into the resolvconf package This allows using the package without having to import the "types" package, and without having to consume github.com/ishidawataru/sctp. ### libnetwork: remove resolvconf/dns package The IsLocalhost utility was not used, and Go's "net" package provides an `IsLoopback()` check, which can be used instead of `IsIPv4Localhost`. ### move pkg/ioutils.HashData() to libnetwork/resolvconf It's the only place it's used, so we might as well move it there. This also removes the "crypto/sha256" and "encoding/hex" dependencies from pkg/ioutils. **- Description for the changelog** <!-- Write a short (one line) summary that describes the changes in this pull request for inclusion in the changelog: --> **- A picture of a cute animal (not mandatory but encouraged)**
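To picture the first bullet, the relocated address-family selectors used as `resolvconf.IP` / `resolvconf.IPv4` / `resolvconf.IPv6` in the diff could be declared roughly as below; the concrete values and doc comments are assumptions, the point is only that the package no longer pulls in libnetwork/types (and, transitively, the sctp module) for them:

```go
package resolvconf

// Address-family selectors for GetNameservers. A sketch only: the real
// declarations in the moby tree may differ in naming, ordering, or doc text.
const (
	// IP selects nameserver entries of either address family.
	IP = iota
	// IPv4 selects only IPv4 nameserver entries.
	IPv4
	// IPv6 selects only IPv6 nameserver entries.
	IPv6
)
```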
null
2021-08-18 10:52:13+00:00
2021-08-20 08:05:29+00:00
libnetwork/sandbox_dns_unix.go
// +build !windows package libnetwork import ( "fmt" "io/ioutil" "os" "path" "path/filepath" "strconv" "strings" "github.com/docker/docker/libnetwork/etchosts" "github.com/docker/docker/libnetwork/resolvconf" "github.com/docker/docker/libnetwork/resolvconf/dns" "github.com/docker/docker/libnetwork/types" "github.com/sirupsen/logrus" ) const ( defaultPrefix = "/var/lib/docker/network/files" dirPerm = 0755 filePerm = 0644 ) func (sb *sandbox) startResolver(restore bool) { sb.resolverOnce.Do(func() { var err error sb.resolver = NewResolver(resolverIPSandbox, true, sb.Key(), sb) defer func() { if err != nil { sb.resolver = nil } }() // In the case of live restore container is already running with // right resolv.conf contents created before. Just update the // external DNS servers from the restored sandbox for embedded // server to use. if !restore { err = sb.rebuildDNS() if err != nil { logrus.Errorf("Updating resolv.conf failed for container %s, %q", sb.ContainerID(), err) return } } sb.resolver.SetExtServers(sb.extDNS) if err = sb.osSbox.InvokeFunc(sb.resolver.SetupFunc(0)); err != nil { logrus.Errorf("Resolver Setup function failed for container %s, %q", sb.ContainerID(), err) return } if err = sb.resolver.Start(); err != nil { logrus.Errorf("Resolver Start failed for container %s, %q", sb.ContainerID(), err) } }) } func (sb *sandbox) setupResolutionFiles() error { if err := sb.buildHostsFile(); err != nil { return err } if err := sb.updateParentHosts(); err != nil { return err } return sb.setupDNS() } func (sb *sandbox) buildHostsFile() error { if sb.config.hostsPath == "" { sb.config.hostsPath = defaultPrefix + "/" + sb.id + "/hosts" } dir, _ := filepath.Split(sb.config.hostsPath) if err := createBasePath(dir); err != nil { return err } // This is for the host mode networking if sb.config.useDefaultSandBox && len(sb.config.extraHosts) == 0 { // We are working under the assumption that the origin file option had been properly expressed by the upper layer // if not here we are going to error out if err := copyFile(sb.config.originHostsPath, sb.config.hostsPath); err != nil && !os.IsNotExist(err) { return types.InternalErrorf("could not copy source hosts file %s to %s: %v", sb.config.originHostsPath, sb.config.hostsPath, err) } return nil } extraContent := make([]etchosts.Record, 0, len(sb.config.extraHosts)) for _, extraHost := range sb.config.extraHosts { extraContent = append(extraContent, etchosts.Record{Hosts: extraHost.name, IP: extraHost.IP}) } return etchosts.Build(sb.config.hostsPath, "", sb.config.hostName, sb.config.domainName, extraContent) } func (sb *sandbox) updateHostsFile(ifaceIPs []string) error { if len(ifaceIPs) == 0 { return nil } if sb.config.originHostsPath != "" { return nil } // User might have provided a FQDN in hostname or split it across hostname // and domainname. We want the FQDN and the bare hostname. 
fqdn := sb.config.hostName mhost := sb.config.hostName if sb.config.domainName != "" { fqdn = fmt.Sprintf("%s.%s", fqdn, sb.config.domainName) } parts := strings.SplitN(fqdn, ".", 2) if len(parts) == 2 { mhost = fmt.Sprintf("%s %s", fqdn, parts[0]) } var extraContent []etchosts.Record for _, ip := range ifaceIPs { extraContent = append(extraContent, etchosts.Record{Hosts: mhost, IP: ip}) } sb.addHostsEntries(extraContent) return nil } func (sb *sandbox) addHostsEntries(recs []etchosts.Record) { if err := etchosts.Add(sb.config.hostsPath, recs); err != nil { logrus.Warnf("Failed adding service host entries to the running container: %v", err) } } func (sb *sandbox) deleteHostsEntries(recs []etchosts.Record) { if err := etchosts.Delete(sb.config.hostsPath, recs); err != nil { logrus.Warnf("Failed deleting service host entries to the running container: %v", err) } } func (sb *sandbox) updateParentHosts() error { var pSb Sandbox for _, update := range sb.config.parentUpdates { sb.controller.WalkSandboxes(SandboxContainerWalker(&pSb, update.cid)) if pSb == nil { continue } if err := etchosts.Update(pSb.(*sandbox).config.hostsPath, update.ip, update.name); err != nil { return err } } return nil } func (sb *sandbox) restorePath() { if sb.config.resolvConfPath == "" { sb.config.resolvConfPath = defaultPrefix + "/" + sb.id + "/resolv.conf" } sb.config.resolvConfHashFile = sb.config.resolvConfPath + ".hash" if sb.config.hostsPath == "" { sb.config.hostsPath = defaultPrefix + "/" + sb.id + "/hosts" } } func (sb *sandbox) setExternalResolvers(content []byte, addrType int, checkLoopback bool) { servers := resolvconf.GetNameservers(content, addrType) for _, ip := range servers { hostLoopback := false if checkLoopback { hostLoopback = dns.IsIPv4Localhost(ip) } sb.extDNS = append(sb.extDNS, extDNSEntry{ IPStr: ip, HostLoopback: hostLoopback, }) } } func (sb *sandbox) setupDNS() error { var newRC *resolvconf.File if sb.config.resolvConfPath == "" { sb.config.resolvConfPath = defaultPrefix + "/" + sb.id + "/resolv.conf" } sb.config.resolvConfHashFile = sb.config.resolvConfPath + ".hash" dir, _ := filepath.Split(sb.config.resolvConfPath) if err := createBasePath(dir); err != nil { return err } // When the user specify a conainter in the host namespace and do no have any dns option specified // we just copy the host resolv.conf from the host itself if sb.config.useDefaultSandBox && len(sb.config.dnsList) == 0 && len(sb.config.dnsSearchList) == 0 && len(sb.config.dnsOptionsList) == 0 { // We are working under the assumption that the origin file option had been properly expressed by the upper layer // if not here we are going to error out if err := copyFile(sb.config.originResolvConfPath, sb.config.resolvConfPath); err != nil { if !os.IsNotExist(err) { return fmt.Errorf("could not copy source resolv.conf file %s to %s: %v", sb.config.originResolvConfPath, sb.config.resolvConfPath, err) } logrus.Infof("%s does not exist, we create an empty resolv.conf for container", sb.config.originResolvConfPath) if err := createFile(sb.config.resolvConfPath); err != nil { return err } } return nil } originResolvConfPath := sb.config.originResolvConfPath if originResolvConfPath == "" { // fallback if not specified originResolvConfPath = resolvconf.Path() } currRC, err := resolvconf.GetSpecific(originResolvConfPath) if err != nil { if !os.IsNotExist(err) { return err } // it's ok to continue if /etc/resolv.conf doesn't exist, default resolvers (Google's Public DNS) // will be used currRC = &resolvconf.File{} 
logrus.Infof("/etc/resolv.conf does not exist") } if len(sb.config.dnsList) > 0 || len(sb.config.dnsSearchList) > 0 || len(sb.config.dnsOptionsList) > 0 { var ( err error dnsList = resolvconf.GetNameservers(currRC.Content, types.IP) dnsSearchList = resolvconf.GetSearchDomains(currRC.Content) dnsOptionsList = resolvconf.GetOptions(currRC.Content) ) if len(sb.config.dnsList) > 0 { dnsList = sb.config.dnsList } if len(sb.config.dnsSearchList) > 0 { dnsSearchList = sb.config.dnsSearchList } if len(sb.config.dnsOptionsList) > 0 { dnsOptionsList = sb.config.dnsOptionsList } newRC, err = resolvconf.Build(sb.config.resolvConfPath, dnsList, dnsSearchList, dnsOptionsList) if err != nil { return err } // After building the resolv.conf from the user config save the // external resolvers in the sandbox. Note that --dns 127.0.0.x // config refers to the loopback in the container namespace sb.setExternalResolvers(newRC.Content, types.IPv4, false) } else { // If the host resolv.conf file has 127.0.0.x container should // use the host resolver for queries. This is supported by the // docker embedded DNS server. Hence save the external resolvers // before filtering it out. sb.setExternalResolvers(currRC.Content, types.IPv4, true) // Replace any localhost/127.* (at this point we have no info about ipv6, pass it as true) if newRC, err = resolvconf.FilterResolvDNS(currRC.Content, true); err != nil { return err } // No contention on container resolv.conf file at sandbox creation if err := ioutil.WriteFile(sb.config.resolvConfPath, newRC.Content, filePerm); err != nil { return types.InternalErrorf("failed to write unhaltered resolv.conf file content when setting up dns for sandbox %s: %v", sb.ID(), err) } } // Write hash if err := ioutil.WriteFile(sb.config.resolvConfHashFile, []byte(newRC.Hash), filePerm); err != nil { return types.InternalErrorf("failed to write resolv.conf hash file when setting up dns for sandbox %s: %v", sb.ID(), err) } return nil } func (sb *sandbox) updateDNS(ipv6Enabled bool) error { var ( currHash string hashFile = sb.config.resolvConfHashFile ) // This is for the host mode networking if sb.config.useDefaultSandBox { return nil } if len(sb.config.dnsList) > 0 || len(sb.config.dnsSearchList) > 0 || len(sb.config.dnsOptionsList) > 0 { return nil } currRC, err := resolvconf.GetSpecific(sb.config.resolvConfPath) if err != nil { if !os.IsNotExist(err) { return err } } else { h, err := ioutil.ReadFile(hashFile) if err != nil { if !os.IsNotExist(err) { return err } } else { currHash = string(h) } } if currHash != "" && currHash != currRC.Hash { // Seems the user has changed the container resolv.conf since the last time // we checked so return without doing anything. //logrus.Infof("Skipping update of resolv.conf file with ipv6Enabled: %t because file was touched by user", ipv6Enabled) return nil } // replace any localhost/127.* and remove IPv6 nameservers if IPv6 disabled. 
newRC, err := resolvconf.FilterResolvDNS(currRC.Content, ipv6Enabled) if err != nil { return err } err = ioutil.WriteFile(sb.config.resolvConfPath, newRC.Content, 0644) //nolint:gosec // gosec complains about perms here, which must be 0644 in this case if err != nil { return err } // write the new hash in a temp file and rename it to make the update atomic dir := path.Dir(sb.config.resolvConfPath) tmpHashFile, err := ioutil.TempFile(dir, "hash") if err != nil { return err } if err = tmpHashFile.Chmod(filePerm); err != nil { tmpHashFile.Close() return err } _, err = tmpHashFile.Write([]byte(newRC.Hash)) if err1 := tmpHashFile.Close(); err == nil { err = err1 } if err != nil { return err } return os.Rename(tmpHashFile.Name(), hashFile) } // Embedded DNS server has to be enabled for this sandbox. Rebuild the container's // resolv.conf by doing the following // - Add only the embedded server's IP to container's resolv.conf // - If the embedded server needs any resolv.conf options add it to the current list func (sb *sandbox) rebuildDNS() error { currRC, err := resolvconf.GetSpecific(sb.config.resolvConfPath) if err != nil { return err } if len(sb.extDNS) == 0 { sb.setExternalResolvers(currRC.Content, types.IPv4, false) } var ( dnsList = []string{sb.resolver.NameServer()} dnsOptionsList = resolvconf.GetOptions(currRC.Content) dnsSearchList = resolvconf.GetSearchDomains(currRC.Content) ) // external v6 DNS servers has to be listed in resolv.conf dnsList = append(dnsList, resolvconf.GetNameservers(currRC.Content, types.IPv6)...) // If the user config and embedded DNS server both have ndots option set, // remember the user's config so that unqualified names not in the docker // domain can be dropped. resOptions := sb.resolver.ResolverOptions() dnsOpt: for _, resOpt := range resOptions { if strings.Contains(resOpt, "ndots") { for _, option := range dnsOptionsList { if strings.Contains(option, "ndots") { parts := strings.Split(option, ":") if len(parts) != 2 { return fmt.Errorf("invalid ndots option %v", option) } if num, err := strconv.Atoi(parts[1]); err != nil { return fmt.Errorf("invalid number for ndots option: %v", parts[1]) } else if num >= 0 { // if the user sets ndots, use the user setting sb.ndotsSet = true break dnsOpt } else { return fmt.Errorf("invalid number for ndots option: %v", num) } } } } } if !sb.ndotsSet { // if the user did not set the ndots, set it to 0 to prioritize the service name resolution // Ref: https://linux.die.net/man/5/resolv.conf dnsOptionsList = append(dnsOptionsList, resOptions...) } _, err = resolvconf.Build(sb.config.resolvConfPath, dnsList, dnsSearchList, dnsOptionsList) return err } func createBasePath(dir string) error { return os.MkdirAll(dir, dirPerm) } func createFile(path string) error { var f *os.File dir, _ := filepath.Split(path) err := createBasePath(dir) if err != nil { return err } f, err = os.Create(path) if err == nil { f.Close() } return err } func copyFile(src, dst string) error { sBytes, err := ioutil.ReadFile(src) if err != nil { return err } return ioutil.WriteFile(dst, sBytes, filePerm) }
// +build !windows package libnetwork import ( "fmt" "io/ioutil" "net" "os" "path" "path/filepath" "strconv" "strings" "github.com/docker/docker/libnetwork/etchosts" "github.com/docker/docker/libnetwork/resolvconf" "github.com/docker/docker/libnetwork/types" "github.com/sirupsen/logrus" ) const ( defaultPrefix = "/var/lib/docker/network/files" dirPerm = 0755 filePerm = 0644 ) func (sb *sandbox) startResolver(restore bool) { sb.resolverOnce.Do(func() { var err error sb.resolver = NewResolver(resolverIPSandbox, true, sb.Key(), sb) defer func() { if err != nil { sb.resolver = nil } }() // In the case of live restore container is already running with // right resolv.conf contents created before. Just update the // external DNS servers from the restored sandbox for embedded // server to use. if !restore { err = sb.rebuildDNS() if err != nil { logrus.Errorf("Updating resolv.conf failed for container %s, %q", sb.ContainerID(), err) return } } sb.resolver.SetExtServers(sb.extDNS) if err = sb.osSbox.InvokeFunc(sb.resolver.SetupFunc(0)); err != nil { logrus.Errorf("Resolver Setup function failed for container %s, %q", sb.ContainerID(), err) return } if err = sb.resolver.Start(); err != nil { logrus.Errorf("Resolver Start failed for container %s, %q", sb.ContainerID(), err) } }) } func (sb *sandbox) setupResolutionFiles() error { if err := sb.buildHostsFile(); err != nil { return err } if err := sb.updateParentHosts(); err != nil { return err } return sb.setupDNS() } func (sb *sandbox) buildHostsFile() error { if sb.config.hostsPath == "" { sb.config.hostsPath = defaultPrefix + "/" + sb.id + "/hosts" } dir, _ := filepath.Split(sb.config.hostsPath) if err := createBasePath(dir); err != nil { return err } // This is for the host mode networking if sb.config.useDefaultSandBox && len(sb.config.extraHosts) == 0 { // We are working under the assumption that the origin file option had been properly expressed by the upper layer // if not here we are going to error out if err := copyFile(sb.config.originHostsPath, sb.config.hostsPath); err != nil && !os.IsNotExist(err) { return types.InternalErrorf("could not copy source hosts file %s to %s: %v", sb.config.originHostsPath, sb.config.hostsPath, err) } return nil } extraContent := make([]etchosts.Record, 0, len(sb.config.extraHosts)) for _, extraHost := range sb.config.extraHosts { extraContent = append(extraContent, etchosts.Record{Hosts: extraHost.name, IP: extraHost.IP}) } return etchosts.Build(sb.config.hostsPath, "", sb.config.hostName, sb.config.domainName, extraContent) } func (sb *sandbox) updateHostsFile(ifaceIPs []string) error { if len(ifaceIPs) == 0 { return nil } if sb.config.originHostsPath != "" { return nil } // User might have provided a FQDN in hostname or split it across hostname // and domainname. We want the FQDN and the bare hostname. 
fqdn := sb.config.hostName mhost := sb.config.hostName if sb.config.domainName != "" { fqdn = fmt.Sprintf("%s.%s", fqdn, sb.config.domainName) } parts := strings.SplitN(fqdn, ".", 2) if len(parts) == 2 { mhost = fmt.Sprintf("%s %s", fqdn, parts[0]) } var extraContent []etchosts.Record for _, ip := range ifaceIPs { extraContent = append(extraContent, etchosts.Record{Hosts: mhost, IP: ip}) } sb.addHostsEntries(extraContent) return nil } func (sb *sandbox) addHostsEntries(recs []etchosts.Record) { if err := etchosts.Add(sb.config.hostsPath, recs); err != nil { logrus.Warnf("Failed adding service host entries to the running container: %v", err) } } func (sb *sandbox) deleteHostsEntries(recs []etchosts.Record) { if err := etchosts.Delete(sb.config.hostsPath, recs); err != nil { logrus.Warnf("Failed deleting service host entries to the running container: %v", err) } } func (sb *sandbox) updateParentHosts() error { var pSb Sandbox for _, update := range sb.config.parentUpdates { sb.controller.WalkSandboxes(SandboxContainerWalker(&pSb, update.cid)) if pSb == nil { continue } if err := etchosts.Update(pSb.(*sandbox).config.hostsPath, update.ip, update.name); err != nil { return err } } return nil } func (sb *sandbox) restorePath() { if sb.config.resolvConfPath == "" { sb.config.resolvConfPath = defaultPrefix + "/" + sb.id + "/resolv.conf" } sb.config.resolvConfHashFile = sb.config.resolvConfPath + ".hash" if sb.config.hostsPath == "" { sb.config.hostsPath = defaultPrefix + "/" + sb.id + "/hosts" } } func (sb *sandbox) setExternalResolvers(content []byte, addrType int, checkLoopback bool) { servers := resolvconf.GetNameservers(content, addrType) for _, ip := range servers { hostLoopback := false if checkLoopback && isIPv4Loopback(ip) { hostLoopback = true } sb.extDNS = append(sb.extDNS, extDNSEntry{ IPStr: ip, HostLoopback: hostLoopback, }) } } // isIPv4Loopback checks if the given IP address is an IPv4 loopback address. 
// It's based on the logic in Go's net.IP.IsLoopback(), but only the IPv4 part: // https://github.com/golang/go/blob/go1.16.6/src/net/ip.go#L120-L126 func isIPv4Loopback(ipAddress string) bool { if ip := net.ParseIP(ipAddress); ip != nil { if ip4 := ip.To4(); ip4 != nil { return ip4[0] == 127 } } return false } func (sb *sandbox) setupDNS() error { var newRC *resolvconf.File if sb.config.resolvConfPath == "" { sb.config.resolvConfPath = defaultPrefix + "/" + sb.id + "/resolv.conf" } sb.config.resolvConfHashFile = sb.config.resolvConfPath + ".hash" dir, _ := filepath.Split(sb.config.resolvConfPath) if err := createBasePath(dir); err != nil { return err } // When the user specify a conainter in the host namespace and do no have any dns option specified // we just copy the host resolv.conf from the host itself if sb.config.useDefaultSandBox && len(sb.config.dnsList) == 0 && len(sb.config.dnsSearchList) == 0 && len(sb.config.dnsOptionsList) == 0 { // We are working under the assumption that the origin file option had been properly expressed by the upper layer // if not here we are going to error out if err := copyFile(sb.config.originResolvConfPath, sb.config.resolvConfPath); err != nil { if !os.IsNotExist(err) { return fmt.Errorf("could not copy source resolv.conf file %s to %s: %v", sb.config.originResolvConfPath, sb.config.resolvConfPath, err) } logrus.Infof("%s does not exist, we create an empty resolv.conf for container", sb.config.originResolvConfPath) if err := createFile(sb.config.resolvConfPath); err != nil { return err } } return nil } originResolvConfPath := sb.config.originResolvConfPath if originResolvConfPath == "" { // fallback if not specified originResolvConfPath = resolvconf.Path() } currRC, err := resolvconf.GetSpecific(originResolvConfPath) if err != nil { if !os.IsNotExist(err) { return err } // it's ok to continue if /etc/resolv.conf doesn't exist, default resolvers (Google's Public DNS) // will be used currRC = &resolvconf.File{} logrus.Infof("/etc/resolv.conf does not exist") } if len(sb.config.dnsList) > 0 || len(sb.config.dnsSearchList) > 0 || len(sb.config.dnsOptionsList) > 0 { var ( err error dnsList = resolvconf.GetNameservers(currRC.Content, resolvconf.IP) dnsSearchList = resolvconf.GetSearchDomains(currRC.Content) dnsOptionsList = resolvconf.GetOptions(currRC.Content) ) if len(sb.config.dnsList) > 0 { dnsList = sb.config.dnsList } if len(sb.config.dnsSearchList) > 0 { dnsSearchList = sb.config.dnsSearchList } if len(sb.config.dnsOptionsList) > 0 { dnsOptionsList = sb.config.dnsOptionsList } newRC, err = resolvconf.Build(sb.config.resolvConfPath, dnsList, dnsSearchList, dnsOptionsList) if err != nil { return err } // After building the resolv.conf from the user config save the // external resolvers in the sandbox. Note that --dns 127.0.0.x // config refers to the loopback in the container namespace sb.setExternalResolvers(newRC.Content, resolvconf.IPv4, false) } else { // If the host resolv.conf file has 127.0.0.x container should // use the host resolver for queries. This is supported by the // docker embedded DNS server. Hence save the external resolvers // before filtering it out. 
sb.setExternalResolvers(currRC.Content, resolvconf.IPv4, true) // Replace any localhost/127.* (at this point we have no info about ipv6, pass it as true) if newRC, err = resolvconf.FilterResolvDNS(currRC.Content, true); err != nil { return err } // No contention on container resolv.conf file at sandbox creation if err := ioutil.WriteFile(sb.config.resolvConfPath, newRC.Content, filePerm); err != nil { return types.InternalErrorf("failed to write unhaltered resolv.conf file content when setting up dns for sandbox %s: %v", sb.ID(), err) } } // Write hash if err := ioutil.WriteFile(sb.config.resolvConfHashFile, []byte(newRC.Hash), filePerm); err != nil { return types.InternalErrorf("failed to write resolv.conf hash file when setting up dns for sandbox %s: %v", sb.ID(), err) } return nil } func (sb *sandbox) updateDNS(ipv6Enabled bool) error { var ( currHash string hashFile = sb.config.resolvConfHashFile ) // This is for the host mode networking if sb.config.useDefaultSandBox { return nil } if len(sb.config.dnsList) > 0 || len(sb.config.dnsSearchList) > 0 || len(sb.config.dnsOptionsList) > 0 { return nil } currRC, err := resolvconf.GetSpecific(sb.config.resolvConfPath) if err != nil { if !os.IsNotExist(err) { return err } } else { h, err := ioutil.ReadFile(hashFile) if err != nil { if !os.IsNotExist(err) { return err } } else { currHash = string(h) } } if currHash != "" && currHash != currRC.Hash { // Seems the user has changed the container resolv.conf since the last time // we checked so return without doing anything. //logrus.Infof("Skipping update of resolv.conf file with ipv6Enabled: %t because file was touched by user", ipv6Enabled) return nil } // replace any localhost/127.* and remove IPv6 nameservers if IPv6 disabled. newRC, err := resolvconf.FilterResolvDNS(currRC.Content, ipv6Enabled) if err != nil { return err } err = ioutil.WriteFile(sb.config.resolvConfPath, newRC.Content, 0644) //nolint:gosec // gosec complains about perms here, which must be 0644 in this case if err != nil { return err } // write the new hash in a temp file and rename it to make the update atomic dir := path.Dir(sb.config.resolvConfPath) tmpHashFile, err := ioutil.TempFile(dir, "hash") if err != nil { return err } if err = tmpHashFile.Chmod(filePerm); err != nil { tmpHashFile.Close() return err } _, err = tmpHashFile.Write([]byte(newRC.Hash)) if err1 := tmpHashFile.Close(); err == nil { err = err1 } if err != nil { return err } return os.Rename(tmpHashFile.Name(), hashFile) } // Embedded DNS server has to be enabled for this sandbox. Rebuild the container's // resolv.conf by doing the following // - Add only the embedded server's IP to container's resolv.conf // - If the embedded server needs any resolv.conf options add it to the current list func (sb *sandbox) rebuildDNS() error { currRC, err := resolvconf.GetSpecific(sb.config.resolvConfPath) if err != nil { return err } if len(sb.extDNS) == 0 { sb.setExternalResolvers(currRC.Content, resolvconf.IPv4, false) } var ( dnsList = []string{sb.resolver.NameServer()} dnsOptionsList = resolvconf.GetOptions(currRC.Content) dnsSearchList = resolvconf.GetSearchDomains(currRC.Content) ) // external v6 DNS servers has to be listed in resolv.conf dnsList = append(dnsList, resolvconf.GetNameservers(currRC.Content, resolvconf.IPv6)...) // If the user config and embedded DNS server both have ndots option set, // remember the user's config so that unqualified names not in the docker // domain can be dropped. 
resOptions := sb.resolver.ResolverOptions() dnsOpt: for _, resOpt := range resOptions { if strings.Contains(resOpt, "ndots") { for _, option := range dnsOptionsList { if strings.Contains(option, "ndots") { parts := strings.Split(option, ":") if len(parts) != 2 { return fmt.Errorf("invalid ndots option %v", option) } if num, err := strconv.Atoi(parts[1]); err != nil { return fmt.Errorf("invalid number for ndots option: %v", parts[1]) } else if num >= 0 { // if the user sets ndots, use the user setting sb.ndotsSet = true break dnsOpt } else { return fmt.Errorf("invalid number for ndots option: %v", num) } } } } } if !sb.ndotsSet { // if the user did not set the ndots, set it to 0 to prioritize the service name resolution // Ref: https://linux.die.net/man/5/resolv.conf dnsOptionsList = append(dnsOptionsList, resOptions...) } _, err = resolvconf.Build(sb.config.resolvConfPath, dnsList, dnsSearchList, dnsOptionsList) return err } func createBasePath(dir string) error { return os.MkdirAll(dir, dirPerm) } func createFile(path string) error { var f *os.File dir, _ := filepath.Split(path) err := createBasePath(dir) if err != nil { return err } f, err = os.Create(path) if err == nil { f.Close() } return err } func copyFile(src, dst string) error { sBytes, err := ioutil.ReadFile(src) if err != nil { return err } return ioutil.WriteFile(dst, sBytes, filePerm) }
thaJeztah
b6919cb55320fb726d0012e06db50776aef52813
5ea3e12b63511846ac38787af8bdff89c66cc061
Ha! True. Yes, I didn't compare performance between this and the original approach, but I assume the stdlib code is usually optimised.
thaJeztah
4,503
moby/moby
42,755
libnetwork: make resolvconf more self-contained
### libnetwork: move resolvconf consts into the resolvconf package

This allows using the package without having to import the "types" package, and without having to consume github.com/ishidawataru/sctp.

### libnetwork: remove resolvconf/dns package

The IsLocalhost utility was not used, and Go's "net" package provides an `IsLoopback()` check, which can be used instead of `IsIPv4Localhost`.

### move pkg/ioutils.HashData() to libnetwork/resolvconf

It's the only location where it's used, so we might as well move it there. This also removes the "crypto/sha256" and "encoding/hex" dependencies from pkg/ioutils.

**- Description for the changelog**

<!-- Write a short (one line) summary that describes the changes in this pull request for inclusion in the changelog: -->

**- A picture of a cute animal (not mandatory but encouraged)**
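As an aside, a minimal, self-contained sketch (not part of this PR's diff) of how the standard library's `net.IP.IsLoopback()` can replace a hand-rolled IPv4-localhost check; the `isLocalhost` helper name is only illustrative:

```go
package main

import (
	"fmt"
	"net"
)

// isLocalhost reports whether addr parses as a loopback IP.
// net.IP.IsLoopback covers both 127.0.0.0/8 and ::1.
func isLocalhost(addr string) bool {
	ip := net.ParseIP(addr)
	return ip != nil && ip.IsLoopback()
}

func main() {
	fmt.Println(isLocalhost("127.0.0.1")) // true
	fmt.Println(isLocalhost("::1"))       // true
	fmt.Println(isLocalhost("10.0.0.1"))  // false
}
```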
null
2021-08-18 10:52:13+00:00
2021-08-20 08:05:29+00:00
pkg/ioutils/readers.go
package ioutils // import "github.com/docker/docker/pkg/ioutils" import ( "context" "crypto/sha256" "encoding/hex" "io" ) // ReadCloserWrapper wraps an io.Reader, and implements an io.ReadCloser // It calls the given callback function when closed. It should be constructed // with NewReadCloserWrapper type ReadCloserWrapper struct { io.Reader closer func() error } // Close calls back the passed closer function func (r *ReadCloserWrapper) Close() error { return r.closer() } // NewReadCloserWrapper returns a new io.ReadCloser. func NewReadCloserWrapper(r io.Reader, closer func() error) io.ReadCloser { return &ReadCloserWrapper{ Reader: r, closer: closer, } } type readerErrWrapper struct { reader io.Reader closer func() } func (r *readerErrWrapper) Read(p []byte) (int, error) { n, err := r.reader.Read(p) if err != nil { r.closer() } return n, err } // NewReaderErrWrapper returns a new io.Reader. func NewReaderErrWrapper(r io.Reader, closer func()) io.Reader { return &readerErrWrapper{ reader: r, closer: closer, } } // HashData returns the sha256 sum of src. func HashData(src io.Reader) (string, error) { h := sha256.New() if _, err := io.Copy(h, src); err != nil { return "", err } return "sha256:" + hex.EncodeToString(h.Sum(nil)), nil } // OnEOFReader wraps an io.ReadCloser and a function // the function will run at the end of file or close the file. type OnEOFReader struct { Rc io.ReadCloser Fn func() } func (r *OnEOFReader) Read(p []byte) (n int, err error) { n, err = r.Rc.Read(p) if err == io.EOF { r.runFunc() } return } // Close closes the file and run the function. func (r *OnEOFReader) Close() error { err := r.Rc.Close() r.runFunc() return err } func (r *OnEOFReader) runFunc() { if fn := r.Fn; fn != nil { fn() r.Fn = nil } } // cancelReadCloser wraps an io.ReadCloser with a context for cancelling read // operations. type cancelReadCloser struct { cancel func() pR *io.PipeReader // Stream to read from pW *io.PipeWriter } // NewCancelReadCloser creates a wrapper that closes the ReadCloser when the // context is cancelled. The returned io.ReadCloser must be closed when it is // no longer needed. func NewCancelReadCloser(ctx context.Context, in io.ReadCloser) io.ReadCloser { pR, pW := io.Pipe() // Create a context used to signal when the pipe is closed doneCtx, cancel := context.WithCancel(context.Background()) p := &cancelReadCloser{ cancel: cancel, pR: pR, pW: pW, } go func() { _, err := io.Copy(pW, in) select { case <-ctx.Done(): // If the context was closed, p.closeWithError // was already called. Calling it again would // change the error that Read returns. default: p.closeWithError(err) } in.Close() }() go func() { for { select { case <-ctx.Done(): p.closeWithError(ctx.Err()) case <-doneCtx.Done(): return } } }() return p } // Read wraps the Read method of the pipe that provides data from the wrapped // ReadCloser. func (p *cancelReadCloser) Read(buf []byte) (n int, err error) { return p.pR.Read(buf) } // closeWithError closes the wrapper and its underlying reader. It will // cause future calls to Read to return err. func (p *cancelReadCloser) closeWithError(err error) { p.pW.CloseWithError(err) p.cancel() } // Close closes the wrapper its underlying reader. It will cause // future calls to Read to return io.EOF. func (p *cancelReadCloser) Close() error { p.closeWithError(io.EOF) return nil }
package ioutils // import "github.com/docker/docker/pkg/ioutils" import ( "context" "io" // make sure crypto.SHA256, crypto.sha512 and crypto.SHA384 are registered // TODO remove once https://github.com/opencontainers/go-digest/pull/64 is merged. _ "crypto/sha256" _ "crypto/sha512" ) // ReadCloserWrapper wraps an io.Reader, and implements an io.ReadCloser // It calls the given callback function when closed. It should be constructed // with NewReadCloserWrapper type ReadCloserWrapper struct { io.Reader closer func() error } // Close calls back the passed closer function func (r *ReadCloserWrapper) Close() error { return r.closer() } // NewReadCloserWrapper returns a new io.ReadCloser. func NewReadCloserWrapper(r io.Reader, closer func() error) io.ReadCloser { return &ReadCloserWrapper{ Reader: r, closer: closer, } } type readerErrWrapper struct { reader io.Reader closer func() } func (r *readerErrWrapper) Read(p []byte) (int, error) { n, err := r.reader.Read(p) if err != nil { r.closer() } return n, err } // NewReaderErrWrapper returns a new io.Reader. func NewReaderErrWrapper(r io.Reader, closer func()) io.Reader { return &readerErrWrapper{ reader: r, closer: closer, } } // OnEOFReader wraps an io.ReadCloser and a function // the function will run at the end of file or close the file. type OnEOFReader struct { Rc io.ReadCloser Fn func() } func (r *OnEOFReader) Read(p []byte) (n int, err error) { n, err = r.Rc.Read(p) if err == io.EOF { r.runFunc() } return } // Close closes the file and run the function. func (r *OnEOFReader) Close() error { err := r.Rc.Close() r.runFunc() return err } func (r *OnEOFReader) runFunc() { if fn := r.Fn; fn != nil { fn() r.Fn = nil } } // cancelReadCloser wraps an io.ReadCloser with a context for cancelling read // operations. type cancelReadCloser struct { cancel func() pR *io.PipeReader // Stream to read from pW *io.PipeWriter } // NewCancelReadCloser creates a wrapper that closes the ReadCloser when the // context is cancelled. The returned io.ReadCloser must be closed when it is // no longer needed. func NewCancelReadCloser(ctx context.Context, in io.ReadCloser) io.ReadCloser { pR, pW := io.Pipe() // Create a context used to signal when the pipe is closed doneCtx, cancel := context.WithCancel(context.Background()) p := &cancelReadCloser{ cancel: cancel, pR: pR, pW: pW, } go func() { _, err := io.Copy(pW, in) select { case <-ctx.Done(): // If the context was closed, p.closeWithError // was already called. Calling it again would // change the error that Read returns. default: p.closeWithError(err) } in.Close() }() go func() { for { select { case <-ctx.Done(): p.closeWithError(ctx.Err()) case <-doneCtx.Done(): return } } }() return p } // Read wraps the Read method of the pipe that provides data from the wrapped // ReadCloser. func (p *cancelReadCloser) Read(buf []byte) (n int, err error) { return p.pR.Read(buf) } // closeWithError closes the wrapper and its underlying reader. It will // cause future calls to Read to return err. func (p *cancelReadCloser) closeWithError(err error) { p.pW.CloseWithError(err) p.cancel() } // Close closes the wrapper its underlying reader. It will cause // future calls to Read to return io.EOF. func (p *cancelReadCloser) Close() error { p.closeWithError(io.EOF) return nil }
thaJeztah
b6919cb55320fb726d0012e06db50776aef52813
5ea3e12b63511846ac38787af8bdff89c66cc061
Included a temporary fix here (until https://github.com/opencontainers/go-digest/pull/64 is merged)
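For context, a small stand-alone sketch of the pattern behind that temporary fix: blank-importing a crypto package runs its init() and registers the hash implementation, so libraries that look hashes up at runtime (such as go-digest) can find it. This is a generic illustration, not the moby code itself:

```go
package main

import (
	"crypto"
	"fmt"

	// The blank imports exist only for their side effect of registering
	// the SHA-256 and SHA-384/512 implementations with the crypto package.
	_ "crypto/sha256"
	_ "crypto/sha512"
)

func main() {
	fmt.Println(crypto.SHA256.Available()) // true, because of the blank import
	fmt.Println(crypto.SHA512.Available()) // true
}
```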
thaJeztah
4,504
moby/moby
42,725
runconfig: decodeContainerConfig() return early if there's no HostConfig
Each of the validation functions depended on HostConfig not being `nil`. Use an early return instead of continuing and checking whether it's `nil` in each of the validate functions. **- Description for the changelog** <!-- Write a short (one line) summary that describes the changes in this pull request for inclusion in the changelog: --> **- A picture of a cute animal (not mandatory but encouraged)**
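A minimal sketch of the early-return shape described above; the type and validator names are placeholders, not the actual runconfig code:

```go
package main

import "fmt"

// HostConfig stands in for the real container.HostConfig.
type HostConfig struct{ Privileged bool }

// validatePrivileged stands in for the daemon-side validate* helpers;
// with the early return in place it can safely assume hc != nil.
func validatePrivileged(hc *HostConfig) error {
	if hc.Privileged {
		return fmt.Errorf("privileged mode not allowed here")
	}
	return nil
}

func decode(hc *HostConfig) error {
	if hc == nil {
		// No HostConfig supplied (e.g. "docker commit"): nothing to validate.
		return nil
	}
	return validatePrivileged(hc)
}

func main() {
	fmt.Println(decode(nil))                            // <nil>
	fmt.Println(decode(&HostConfig{Privileged: true}))  // error
}
```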
null
2021-08-09 09:18:31+00:00
2021-10-27 11:15:15+00:00
runconfig/config.go
package runconfig // import "github.com/docker/docker/runconfig" import ( "encoding/json" "io" "github.com/docker/docker/api/types/container" networktypes "github.com/docker/docker/api/types/network" "github.com/docker/docker/pkg/sysinfo" ) // ContainerDecoder implements httputils.ContainerDecoder // calling DecodeContainerConfig. type ContainerDecoder struct { GetSysInfo func() *sysinfo.SysInfo } // DecodeConfig makes ContainerDecoder to implement httputils.ContainerDecoder func (r ContainerDecoder) DecodeConfig(src io.Reader) (*container.Config, *container.HostConfig, *networktypes.NetworkingConfig, error) { var si *sysinfo.SysInfo if r.GetSysInfo != nil { si = r.GetSysInfo() } else { si = sysinfo.New() } return decodeContainerConfig(src, si) } // DecodeHostConfig makes ContainerDecoder to implement httputils.ContainerDecoder func (r ContainerDecoder) DecodeHostConfig(src io.Reader) (*container.HostConfig, error) { return decodeHostConfig(src) } // decodeContainerConfig decodes a json encoded config into a ContainerConfigWrapper // struct and returns both a Config and a HostConfig struct // Be aware this function is not checking whether the resulted structs are nil, // it's your business to do so func decodeContainerConfig(src io.Reader, si *sysinfo.SysInfo) (*container.Config, *container.HostConfig, *networktypes.NetworkingConfig, error) { var w ContainerConfigWrapper decoder := json.NewDecoder(src) if err := decoder.Decode(&w); err != nil { return nil, nil, nil, err } hc := w.getHostConfig() // Perform platform-specific processing of Volumes and Binds. if w.Config != nil && hc != nil { // Initialize the volumes map if currently nil if w.Config.Volumes == nil { w.Config.Volumes = make(map[string]struct{}) } } // Certain parameters need daemon-side validation that cannot be done // on the client, as only the daemon knows what is valid for the platform. if err := validateNetMode(w.Config, hc); err != nil { return nil, nil, nil, err } // Validate isolation if err := validateIsolation(hc); err != nil { return nil, nil, nil, err } // Validate QoS if err := validateQoS(hc); err != nil { return nil, nil, nil, err } // Validate Resources if err := validateResources(hc, si); err != nil { return nil, nil, nil, err } // Validate Privileged if err := validatePrivileged(hc); err != nil { return nil, nil, nil, err } // Validate ReadonlyRootfs if err := validateReadonlyRootfs(hc); err != nil { return nil, nil, nil, err } return w.Config, hc, w.NetworkingConfig, nil }
package runconfig // import "github.com/docker/docker/runconfig" import ( "encoding/json" "io" "github.com/docker/docker/api/types/container" networktypes "github.com/docker/docker/api/types/network" "github.com/docker/docker/pkg/sysinfo" ) // ContainerDecoder implements httputils.ContainerDecoder // calling DecodeContainerConfig. type ContainerDecoder struct { GetSysInfo func() *sysinfo.SysInfo } // DecodeConfig makes ContainerDecoder to implement httputils.ContainerDecoder func (r ContainerDecoder) DecodeConfig(src io.Reader) (*container.Config, *container.HostConfig, *networktypes.NetworkingConfig, error) { var si *sysinfo.SysInfo if r.GetSysInfo != nil { si = r.GetSysInfo() } else { si = sysinfo.New() } return decodeContainerConfig(src, si) } // DecodeHostConfig makes ContainerDecoder to implement httputils.ContainerDecoder func (r ContainerDecoder) DecodeHostConfig(src io.Reader) (*container.HostConfig, error) { return decodeHostConfig(src) } // decodeContainerConfig decodes a json encoded config into a ContainerConfigWrapper // struct and returns both a Config and a HostConfig struct, and performs some // validation. Certain parameters need daemon-side validation that cannot be done // on the client, as only the daemon knows what is valid for the platform. // Be aware this function is not checking whether the resulted structs are nil, // it's your business to do so func decodeContainerConfig(src io.Reader, si *sysinfo.SysInfo) (*container.Config, *container.HostConfig, *networktypes.NetworkingConfig, error) { var w ContainerConfigWrapper if err := json.NewDecoder(src).Decode(&w); err != nil { return nil, nil, nil, err } hc := w.getHostConfig() if hc == nil { // We may not be passed a host config, such as in the case of docker commit return w.Config, hc, w.NetworkingConfig, nil } if err := validateNetMode(w.Config, hc); err != nil { return nil, nil, nil, err } if err := validateIsolation(hc); err != nil { return nil, nil, nil, err } if err := validateQoS(hc); err != nil { return nil, nil, nil, err } if err := validateResources(hc, si); err != nil { return nil, nil, nil, err } if err := validatePrivileged(hc); err != nil { return nil, nil, nil, err } if err := validateReadonlyRootfs(hc); err != nil { return nil, nil, nil, err } if w.Config != nil && w.Config.Volumes == nil { w.Config.Volumes = make(map[string]struct{}) } return w.Config, hc, w.NetworkingConfig, nil }
thaJeztah
aef8e4817251278524ed34a41cb456f5c7cf519b
02016803f02e8a0145e7706cfbd21c8b59e3c28d
This one receives `w.Config`, but doesn't appear to ever reference `w.Config.Volumes`, and I don't see a reason why it would really ever need to (just to hopefully help make it clear here on the PR why it should be safe to move `w.Config.Volumes = make(map[string]struct{})` even though this receives `w.Config` :smile:).
tianon
4,505
moby/moby
42,717
pkg/signal: remove DefaultStopSignal const, and un-export container.DefaultStopTimeout
This const was previously living in pkg/signal, but with that package being moved to its own module, it didn't make much sense to put docker's defaults in a generic module.

The const from the "signal" package is currently used *both* by the CLI and the daemon as a default value when creating containers. This raised some questions:

- a. should the default be non-exported, and private to the container package? After all, it's a _default_ (so should be used if _NOT_ set).
- b. should the client actually be setting a default, or instead just omit the value unless specified by the user? Having the client set a default also means that the daemon cannot change the default value, because the client (or older clients) will override it.
- c. consider defaults from the client and defaults of the daemon to be separate things, and create a default const in the CLI.

This patch implements option "a" (option "b" will be done separately, as it involves the CLI code). This still leaves "c" open as an option, if the CLI wants to set its own default.

Unfortunately, this change means we'll have to drop the alias for the deprecated pkg/signal.DefaultStopSignal const, but a comment was left instead, which can assist consumers of the const in finding why it's no longer there (a search showed the Docker CLI as the only consumer though).
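A minimal sketch, with assumed names and values, of what option "a" looks like in practice: the default stays un-exported and is only applied daemon-side when the value was not set; this is an illustration, not the container package itself:

```go
package main

import "fmt"

// Assumed values for illustration; the real defaults live (un-exported)
// in the container package.
const (
	defaultStopSignal  = "SIGTERM"
	defaultStopTimeout = 10 // seconds
)

// config stands in for the container configuration the client sends.
type config struct {
	StopSignal  string // empty means "not set by the client"
	StopTimeout *int   // nil means "not set by the client"
}

func stopSignal(c config) string {
	if c.StopSignal == "" {
		return defaultStopSignal
	}
	return c.StopSignal
}

func stopTimeout(c config) int {
	if c.StopTimeout == nil {
		return defaultStopTimeout
	}
	return *c.StopTimeout
}

func main() {
	fmt.Println(stopSignal(config{}), stopTimeout(config{})) // SIGTERM 10
}
```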
null
2021-08-06 17:04:08+00:00
2021-08-24 16:33:22+00:00
container/container_windows.go
package container // import "github.com/docker/docker/container" import ( "fmt" "os" "path/filepath" "github.com/docker/docker/api/types" containertypes "github.com/docker/docker/api/types/container" swarmtypes "github.com/docker/docker/api/types/swarm" "github.com/docker/docker/pkg/system" ) const ( containerConfigMountPath = `C:\` containerSecretMountPath = `C:\ProgramData\Docker\secrets` containerInternalSecretMountPath = `C:\ProgramData\Docker\internal\secrets` containerInternalConfigsDirPath = `C:\ProgramData\Docker\internal\configs` // DefaultStopTimeout is the timeout (in seconds) for the shutdown call on a container DefaultStopTimeout = 30 ) // UnmountIpcMount unmounts Ipc related mounts. // This is a NOOP on windows. func (container *Container) UnmountIpcMount() error { return nil } // IpcMounts returns the list of Ipc related mounts. func (container *Container) IpcMounts() []Mount { return nil } // CreateSecretSymlinks creates symlinks to files in the secret mount. func (container *Container) CreateSecretSymlinks() error { for _, r := range container.SecretReferences { if r.File == nil { continue } resolvedPath, _, err := container.ResolvePath(getSecretTargetPath(r)) if err != nil { return err } if err := system.MkdirAll(filepath.Dir(resolvedPath), 0); err != nil { return err } if err := os.Symlink(filepath.Join(containerInternalSecretMountPath, r.SecretID), resolvedPath); err != nil { return err } } return nil } // SecretMounts returns the mount for the secret path. // All secrets are stored in a single mount on Windows. Target symlinks are // created for each secret, pointing to the files in this mount. func (container *Container) SecretMounts() ([]Mount, error) { var mounts []Mount if len(container.SecretReferences) > 0 { src, err := container.SecretMountPath() if err != nil { return nil, err } mounts = append(mounts, Mount{ Source: src, Destination: containerInternalSecretMountPath, Writable: false, }) } return mounts, nil } // UnmountSecrets unmounts the fs for secrets func (container *Container) UnmountSecrets() error { p, err := container.SecretMountPath() if err != nil { return err } return os.RemoveAll(p) } // CreateConfigSymlinks creates symlinks to files in the config mount. func (container *Container) CreateConfigSymlinks() error { for _, configRef := range container.ConfigReferences { if configRef.File == nil { continue } resolvedPath, _, err := container.ResolvePath(getConfigTargetPath(configRef)) if err != nil { return err } if err := system.MkdirAll(filepath.Dir(resolvedPath), 0); err != nil { return err } if err := os.Symlink(filepath.Join(containerInternalConfigsDirPath, configRef.ConfigID), resolvedPath); err != nil { return err } } return nil } // ConfigMounts returns the mount for configs. // TODO: Right now Windows doesn't really have a "secure" storage for secrets, // however some configs may contain secrets. Once secure storage is worked out, // configs and secret handling should be merged. func (container *Container) ConfigMounts() []Mount { var mounts []Mount if len(container.ConfigReferences) > 0 { mounts = append(mounts, Mount{ Source: container.ConfigsDirPath(), Destination: containerInternalConfigsDirPath, Writable: false, }) } return mounts } // DetachAndUnmount unmounts all volumes. // On Windows it only delegates to `UnmountVolumes` since there is nothing to // force unmount. 
func (container *Container) DetachAndUnmount(volumeEventLog func(name, action string, attributes map[string]string)) error { return container.UnmountVolumes(volumeEventLog) } // TmpfsMounts returns the list of tmpfs mounts func (container *Container) TmpfsMounts() ([]Mount, error) { var mounts []Mount return mounts, nil } // UpdateContainer updates configuration of a container. Callers must hold a Lock on the Container. func (container *Container) UpdateContainer(hostConfig *containertypes.HostConfig) error { resources := hostConfig.Resources if resources.CPUShares != 0 || resources.Memory != 0 || resources.NanoCPUs != 0 || resources.CgroupParent != "" || resources.BlkioWeight != 0 || len(resources.BlkioWeightDevice) != 0 || len(resources.BlkioDeviceReadBps) != 0 || len(resources.BlkioDeviceWriteBps) != 0 || len(resources.BlkioDeviceReadIOps) != 0 || len(resources.BlkioDeviceWriteIOps) != 0 || resources.CPUPeriod != 0 || resources.CPUQuota != 0 || resources.CPURealtimePeriod != 0 || resources.CPURealtimeRuntime != 0 || resources.CpusetCpus != "" || resources.CpusetMems != "" || len(resources.Devices) != 0 || len(resources.DeviceCgroupRules) != 0 || resources.KernelMemory != 0 || resources.MemoryReservation != 0 || resources.MemorySwap != 0 || resources.MemorySwappiness != nil || resources.OomKillDisable != nil || (resources.PidsLimit != nil && *resources.PidsLimit != 0) || len(resources.Ulimits) != 0 || resources.CPUCount != 0 || resources.CPUPercent != 0 || resources.IOMaximumIOps != 0 || resources.IOMaximumBandwidth != 0 { return fmt.Errorf("resource updating isn't supported on Windows") } // update HostConfig of container if hostConfig.RestartPolicy.Name != "" { if container.HostConfig.AutoRemove && !hostConfig.RestartPolicy.IsNone() { return fmt.Errorf("Restart policy cannot be updated because AutoRemove is enabled for the container") } container.HostConfig.RestartPolicy = hostConfig.RestartPolicy } return nil } // BuildHostnameFile writes the container's hostname file. func (container *Container) BuildHostnameFile() error { return nil } // GetMountPoints gives a platform specific transformation to types.MountPoint. Callers must hold a Container lock. func (container *Container) GetMountPoints() []types.MountPoint { mountPoints := make([]types.MountPoint, 0, len(container.MountPoints)) for _, m := range container.MountPoints { mountPoints = append(mountPoints, types.MountPoint{ Type: m.Type, Name: m.Name, Source: m.Path(), Destination: m.Destination, Driver: m.Driver, RW: m.RW, }) } return mountPoints } func (container *Container) ConfigsDirPath() string { return filepath.Join(container.Root, "configs") } // ConfigFilePath returns the path to the on-disk location of a config. func (container *Container) ConfigFilePath(configRef swarmtypes.ConfigReference) (string, error) { return filepath.Join(container.ConfigsDirPath(), configRef.ConfigID), nil }
package container // import "github.com/docker/docker/container" import ( "fmt" "os" "path/filepath" "github.com/docker/docker/api/types" containertypes "github.com/docker/docker/api/types/container" swarmtypes "github.com/docker/docker/api/types/swarm" "github.com/docker/docker/pkg/system" ) const ( containerConfigMountPath = `C:\` containerSecretMountPath = `C:\ProgramData\Docker\secrets` containerInternalSecretMountPath = `C:\ProgramData\Docker\internal\secrets` containerInternalConfigsDirPath = `C:\ProgramData\Docker\internal\configs` // defaultStopSignal is the default syscall signal used to stop a container. defaultStopSignal = "SIGTERM" // defaultStopTimeout is the timeout (in seconds) for the shutdown call on a container defaultStopTimeout = 30 ) // UnmountIpcMount unmounts Ipc related mounts. // This is a NOOP on windows. func (container *Container) UnmountIpcMount() error { return nil } // IpcMounts returns the list of Ipc related mounts. func (container *Container) IpcMounts() []Mount { return nil } // CreateSecretSymlinks creates symlinks to files in the secret mount. func (container *Container) CreateSecretSymlinks() error { for _, r := range container.SecretReferences { if r.File == nil { continue } resolvedPath, _, err := container.ResolvePath(getSecretTargetPath(r)) if err != nil { return err } if err := system.MkdirAll(filepath.Dir(resolvedPath), 0); err != nil { return err } if err := os.Symlink(filepath.Join(containerInternalSecretMountPath, r.SecretID), resolvedPath); err != nil { return err } } return nil } // SecretMounts returns the mount for the secret path. // All secrets are stored in a single mount on Windows. Target symlinks are // created for each secret, pointing to the files in this mount. func (container *Container) SecretMounts() ([]Mount, error) { var mounts []Mount if len(container.SecretReferences) > 0 { src, err := container.SecretMountPath() if err != nil { return nil, err } mounts = append(mounts, Mount{ Source: src, Destination: containerInternalSecretMountPath, Writable: false, }) } return mounts, nil } // UnmountSecrets unmounts the fs for secrets func (container *Container) UnmountSecrets() error { p, err := container.SecretMountPath() if err != nil { return err } return os.RemoveAll(p) } // CreateConfigSymlinks creates symlinks to files in the config mount. func (container *Container) CreateConfigSymlinks() error { for _, configRef := range container.ConfigReferences { if configRef.File == nil { continue } resolvedPath, _, err := container.ResolvePath(getConfigTargetPath(configRef)) if err != nil { return err } if err := system.MkdirAll(filepath.Dir(resolvedPath), 0); err != nil { return err } if err := os.Symlink(filepath.Join(containerInternalConfigsDirPath, configRef.ConfigID), resolvedPath); err != nil { return err } } return nil } // ConfigMounts returns the mount for configs. // TODO: Right now Windows doesn't really have a "secure" storage for secrets, // however some configs may contain secrets. Once secure storage is worked out, // configs and secret handling should be merged. func (container *Container) ConfigMounts() []Mount { var mounts []Mount if len(container.ConfigReferences) > 0 { mounts = append(mounts, Mount{ Source: container.ConfigsDirPath(), Destination: containerInternalConfigsDirPath, Writable: false, }) } return mounts } // DetachAndUnmount unmounts all volumes. // On Windows it only delegates to `UnmountVolumes` since there is nothing to // force unmount. 
func (container *Container) DetachAndUnmount(volumeEventLog func(name, action string, attributes map[string]string)) error { return container.UnmountVolumes(volumeEventLog) } // TmpfsMounts returns the list of tmpfs mounts func (container *Container) TmpfsMounts() ([]Mount, error) { var mounts []Mount return mounts, nil } // UpdateContainer updates configuration of a container. Callers must hold a Lock on the Container. func (container *Container) UpdateContainer(hostConfig *containertypes.HostConfig) error { resources := hostConfig.Resources if resources.CPUShares != 0 || resources.Memory != 0 || resources.NanoCPUs != 0 || resources.CgroupParent != "" || resources.BlkioWeight != 0 || len(resources.BlkioWeightDevice) != 0 || len(resources.BlkioDeviceReadBps) != 0 || len(resources.BlkioDeviceWriteBps) != 0 || len(resources.BlkioDeviceReadIOps) != 0 || len(resources.BlkioDeviceWriteIOps) != 0 || resources.CPUPeriod != 0 || resources.CPUQuota != 0 || resources.CPURealtimePeriod != 0 || resources.CPURealtimeRuntime != 0 || resources.CpusetCpus != "" || resources.CpusetMems != "" || len(resources.Devices) != 0 || len(resources.DeviceCgroupRules) != 0 || resources.KernelMemory != 0 || resources.MemoryReservation != 0 || resources.MemorySwap != 0 || resources.MemorySwappiness != nil || resources.OomKillDisable != nil || (resources.PidsLimit != nil && *resources.PidsLimit != 0) || len(resources.Ulimits) != 0 || resources.CPUCount != 0 || resources.CPUPercent != 0 || resources.IOMaximumIOps != 0 || resources.IOMaximumBandwidth != 0 { return fmt.Errorf("resource updating isn't supported on Windows") } // update HostConfig of container if hostConfig.RestartPolicy.Name != "" { if container.HostConfig.AutoRemove && !hostConfig.RestartPolicy.IsNone() { return fmt.Errorf("Restart policy cannot be updated because AutoRemove is enabled for the container") } container.HostConfig.RestartPolicy = hostConfig.RestartPolicy } return nil } // BuildHostnameFile writes the container's hostname file. func (container *Container) BuildHostnameFile() error { return nil } // GetMountPoints gives a platform specific transformation to types.MountPoint. Callers must hold a Container lock. func (container *Container) GetMountPoints() []types.MountPoint { mountPoints := make([]types.MountPoint, 0, len(container.MountPoints)) for _, m := range container.MountPoints { mountPoints = append(mountPoints, types.MountPoint{ Type: m.Type, Name: m.Name, Source: m.Path(), Destination: m.Destination, Driver: m.Driver, RW: m.RW, }) } return mountPoints } func (container *Container) ConfigsDirPath() string { return filepath.Join(container.Root, "configs") } // ConfigFilePath returns the path to the on-disk location of a config. func (container *Container) ConfigFilePath(configRef swarmtypes.ConfigReference) (string, error) { return filepath.Join(container.ConfigsDirPath(), configRef.ConfigID), nil }
thaJeztah
768a1de1d0f12f17f0f34271dadaac8ff4877c82
a44a8e54ce5213f55d5647916fae9599f1627bf1
Yes, it's the same as the `_unix` const now, but I decided to define it in both as it's unlikely to change, and this keeps it together with the corresponding `defaultStopTimeout` (which _does_ differ between Unix and Windows)
thaJeztah
4,506
moby/moby
42,716
API: discard `/system/df` `type` parameter pre-1.42
Fixup #42559 **- What I did** Ensure the `type` parameter to `/system/df` is discarded for API versions lower than 1.42 **- How I did it** Check for the version in the endpoint
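A minimal sketch of the version gate; `versions.LessThan` is the real helper from `github.com/docker/docker/api/types/versions`, while the surrounding handler shape is simplified for illustration:

```go
package main

import (
	"fmt"

	"github.com/docker/docker/api/types/versions"
)

// typeFilter returns the requested object types, discarding the "type"
// parameter entirely for clients older than API 1.42; a nil result is
// treated as "compute everything", matching the old behaviour.
func typeFilter(apiVersion string, form map[string][]string) []string {
	typeStrs, ok := form["type"]
	if versions.LessThan(apiVersion, "1.42") || !ok {
		return nil
	}
	return typeStrs
}

func main() {
	form := map[string][]string{"type": {"image"}}
	fmt.Println(typeFilter("1.41", form)) // []
	fmt.Println(typeFilter("1.42", form)) // [image]
}
```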
null
2021-08-06 15:39:32+00:00
2021-08-07 13:46:04+00:00
api/server/router/system/system_routes.go
package system // import "github.com/docker/docker/api/server/router/system" import ( "context" "encoding/json" "fmt" "net/http" "time" "github.com/docker/docker/api/server/httputils" "github.com/docker/docker/api/server/router/build" "github.com/docker/docker/api/types" "github.com/docker/docker/api/types/events" "github.com/docker/docker/api/types/filters" "github.com/docker/docker/api/types/registry" timetypes "github.com/docker/docker/api/types/time" "github.com/docker/docker/api/types/versions" "github.com/docker/docker/pkg/ioutils" "github.com/pkg/errors" "github.com/sirupsen/logrus" "golang.org/x/sync/errgroup" ) func optionsHandler(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { w.WriteHeader(http.StatusOK) return nil } func (s *systemRouter) pingHandler(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { w.Header().Add("Cache-Control", "no-cache, no-store, must-revalidate") w.Header().Add("Pragma", "no-cache") builderVersion := build.BuilderVersion(*s.features) if bv := builderVersion; bv != "" { w.Header().Set("Builder-Version", string(bv)) } if r.Method == http.MethodHead { w.Header().Set("Content-Type", "text/plain; charset=utf-8") w.Header().Set("Content-Length", "0") return nil } _, err := w.Write([]byte{'O', 'K'}) return err } func (s *systemRouter) getInfo(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { info := s.backend.SystemInfo() if s.cluster != nil { info.Swarm = s.cluster.Info() info.Warnings = append(info.Warnings, info.Swarm.Warnings...) } if versions.LessThan(httputils.VersionFromContext(ctx), "1.25") { // TODO: handle this conversion in engine-api type oldInfo struct { *types.Info ExecutionDriver string } old := &oldInfo{ Info: info, ExecutionDriver: "<not supported>", } nameOnlySecurityOptions := []string{} kvSecOpts, err := types.DecodeSecurityOptions(old.SecurityOptions) if err != nil { return err } for _, s := range kvSecOpts { nameOnlySecurityOptions = append(nameOnlySecurityOptions, s.Name) } old.SecurityOptions = nameOnlySecurityOptions return httputils.WriteJSON(w, http.StatusOK, old) } if versions.LessThan(httputils.VersionFromContext(ctx), "1.39") { if info.KernelVersion == "" { info.KernelVersion = "<unknown>" } if info.OperatingSystem == "" { info.OperatingSystem = "<unknown>" } } return httputils.WriteJSON(w, http.StatusOK, info) } func (s *systemRouter) getVersion(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { info := s.backend.SystemVersion() return httputils.WriteJSON(w, http.StatusOK, info) } func (s *systemRouter) getDiskUsage(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { if err := httputils.ParseForm(r); err != nil { return err } var getContainers, getImages, getVolumes, getBuildCache bool if typeStrs, ok := r.Form["type"]; !ok { getContainers, getImages, getVolumes, getBuildCache = true, true, true, true } else { for _, typ := range typeStrs { switch types.DiskUsageObject(typ) { case types.ContainerObject: getContainers = true case types.ImageObject: getImages = true case types.VolumeObject: getVolumes = true case types.BuildCacheObject: getBuildCache = true default: return invalidRequestError{Err: fmt.Errorf("unknown object type: %s", typ)} } } } eg, ctx := errgroup.WithContext(ctx) var systemDiskUsage *types.DiskUsage if getContainers || getImages || getVolumes { eg.Go(func() error { var err error systemDiskUsage, err = 
s.backend.SystemDiskUsage(ctx, DiskUsageOptions{ Containers: getContainers, Images: getImages, Volumes: getVolumes, }) return err }) } var buildCache []*types.BuildCache if getBuildCache { eg.Go(func() error { var err error buildCache, err = s.builder.DiskUsage(ctx) if err != nil { return errors.Wrap(err, "error getting build cache usage") } if buildCache == nil { // Ensure empty `BuildCache` field is represented as empty JSON array(`[]`) // instead of `null` to be consistent with `Images`, `Containers` etc. buildCache = []*types.BuildCache{} } return nil }) } if err := eg.Wait(); err != nil { return err } var builderSize int64 if versions.LessThan(httputils.VersionFromContext(ctx), "1.42") { for _, b := range buildCache { builderSize += b.Size } } du := types.DiskUsage{ BuildCache: buildCache, BuilderSize: builderSize, } if systemDiskUsage != nil { du.LayersSize = systemDiskUsage.LayersSize du.Images = systemDiskUsage.Images du.Containers = systemDiskUsage.Containers du.Volumes = systemDiskUsage.Volumes } return httputils.WriteJSON(w, http.StatusOK, du) } type invalidRequestError struct { Err error } func (e invalidRequestError) Error() string { return e.Err.Error() } func (e invalidRequestError) InvalidParameter() {} func (s *systemRouter) getEvents(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { if err := httputils.ParseForm(r); err != nil { return err } since, err := eventTime(r.Form.Get("since")) if err != nil { return err } until, err := eventTime(r.Form.Get("until")) if err != nil { return err } var ( timeout <-chan time.Time onlyPastEvents bool ) if !until.IsZero() { if until.Before(since) { return invalidRequestError{fmt.Errorf("`since` time (%s) cannot be after `until` time (%s)", r.Form.Get("since"), r.Form.Get("until"))} } now := time.Now() onlyPastEvents = until.Before(now) if !onlyPastEvents { dur := until.Sub(now) timer := time.NewTimer(dur) defer timer.Stop() timeout = timer.C } } ef, err := filters.FromJSON(r.Form.Get("filters")) if err != nil { return err } w.Header().Set("Content-Type", "application/json") output := ioutils.NewWriteFlusher(w) defer output.Close() output.Flush() enc := json.NewEncoder(output) buffered, l := s.backend.SubscribeToEvents(since, until, ef) defer s.backend.UnsubscribeFromEvents(l) for _, ev := range buffered { if err := enc.Encode(ev); err != nil { return err } } if onlyPastEvents { return nil } for { select { case ev := <-l: jev, ok := ev.(events.Message) if !ok { logrus.Warnf("unexpected event message: %q", ev) continue } if err := enc.Encode(jev); err != nil { return err } case <-timeout: return nil case <-ctx.Done(): logrus.Debug("Client context cancelled, stop sending events") return nil } } } func (s *systemRouter) postAuth(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { var config *types.AuthConfig err := json.NewDecoder(r.Body).Decode(&config) r.Body.Close() if err != nil { return err } status, token, err := s.backend.AuthenticateToRegistry(ctx, config) if err != nil { return err } return httputils.WriteJSON(w, http.StatusOK, &registry.AuthenticateOKBody{ Status: status, IdentityToken: token, }) } func eventTime(formTime string) (time.Time, error) { t, tNano, err := timetypes.ParseTimestamps(formTime, -1) if err != nil { return time.Time{}, err } if t == -1 { return time.Time{}, nil } return time.Unix(t, tNano), nil }
package system // import "github.com/docker/docker/api/server/router/system" import ( "context" "encoding/json" "fmt" "net/http" "time" "github.com/docker/docker/api/server/httputils" "github.com/docker/docker/api/server/router/build" "github.com/docker/docker/api/types" "github.com/docker/docker/api/types/events" "github.com/docker/docker/api/types/filters" "github.com/docker/docker/api/types/registry" timetypes "github.com/docker/docker/api/types/time" "github.com/docker/docker/api/types/versions" "github.com/docker/docker/pkg/ioutils" "github.com/pkg/errors" "github.com/sirupsen/logrus" "golang.org/x/sync/errgroup" ) func optionsHandler(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { w.WriteHeader(http.StatusOK) return nil } func (s *systemRouter) pingHandler(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { w.Header().Add("Cache-Control", "no-cache, no-store, must-revalidate") w.Header().Add("Pragma", "no-cache") builderVersion := build.BuilderVersion(*s.features) if bv := builderVersion; bv != "" { w.Header().Set("Builder-Version", string(bv)) } if r.Method == http.MethodHead { w.Header().Set("Content-Type", "text/plain; charset=utf-8") w.Header().Set("Content-Length", "0") return nil } _, err := w.Write([]byte{'O', 'K'}) return err } func (s *systemRouter) getInfo(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { info := s.backend.SystemInfo() if s.cluster != nil { info.Swarm = s.cluster.Info() info.Warnings = append(info.Warnings, info.Swarm.Warnings...) } if versions.LessThan(httputils.VersionFromContext(ctx), "1.25") { // TODO: handle this conversion in engine-api type oldInfo struct { *types.Info ExecutionDriver string } old := &oldInfo{ Info: info, ExecutionDriver: "<not supported>", } nameOnlySecurityOptions := []string{} kvSecOpts, err := types.DecodeSecurityOptions(old.SecurityOptions) if err != nil { return err } for _, s := range kvSecOpts { nameOnlySecurityOptions = append(nameOnlySecurityOptions, s.Name) } old.SecurityOptions = nameOnlySecurityOptions return httputils.WriteJSON(w, http.StatusOK, old) } if versions.LessThan(httputils.VersionFromContext(ctx), "1.39") { if info.KernelVersion == "" { info.KernelVersion = "<unknown>" } if info.OperatingSystem == "" { info.OperatingSystem = "<unknown>" } } return httputils.WriteJSON(w, http.StatusOK, info) } func (s *systemRouter) getVersion(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { info := s.backend.SystemVersion() return httputils.WriteJSON(w, http.StatusOK, info) } func (s *systemRouter) getDiskUsage(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { if err := httputils.ParseForm(r); err != nil { return err } version := httputils.VersionFromContext(ctx) var getContainers, getImages, getVolumes, getBuildCache bool typeStrs, ok := r.Form["type"] if versions.LessThan(version, "1.42") || !ok { getContainers, getImages, getVolumes, getBuildCache = true, true, true, true } else { for _, typ := range typeStrs { switch types.DiskUsageObject(typ) { case types.ContainerObject: getContainers = true case types.ImageObject: getImages = true case types.VolumeObject: getVolumes = true case types.BuildCacheObject: getBuildCache = true default: return invalidRequestError{Err: fmt.Errorf("unknown object type: %s", typ)} } } } eg, ctx := errgroup.WithContext(ctx) var systemDiskUsage *types.DiskUsage if getContainers || 
getImages || getVolumes { eg.Go(func() error { var err error systemDiskUsage, err = s.backend.SystemDiskUsage(ctx, DiskUsageOptions{ Containers: getContainers, Images: getImages, Volumes: getVolumes, }) return err }) } var buildCache []*types.BuildCache if getBuildCache { eg.Go(func() error { var err error buildCache, err = s.builder.DiskUsage(ctx) if err != nil { return errors.Wrap(err, "error getting build cache usage") } if buildCache == nil { // Ensure empty `BuildCache` field is represented as empty JSON array(`[]`) // instead of `null` to be consistent with `Images`, `Containers` etc. buildCache = []*types.BuildCache{} } return nil }) } if err := eg.Wait(); err != nil { return err } var builderSize int64 if versions.LessThan(version, "1.42") { for _, b := range buildCache { builderSize += b.Size } } du := types.DiskUsage{ BuildCache: buildCache, BuilderSize: builderSize, } if systemDiskUsage != nil { du.LayersSize = systemDiskUsage.LayersSize du.Images = systemDiskUsage.Images du.Containers = systemDiskUsage.Containers du.Volumes = systemDiskUsage.Volumes } return httputils.WriteJSON(w, http.StatusOK, du) } type invalidRequestError struct { Err error } func (e invalidRequestError) Error() string { return e.Err.Error() } func (e invalidRequestError) InvalidParameter() {} func (s *systemRouter) getEvents(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { if err := httputils.ParseForm(r); err != nil { return err } since, err := eventTime(r.Form.Get("since")) if err != nil { return err } until, err := eventTime(r.Form.Get("until")) if err != nil { return err } var ( timeout <-chan time.Time onlyPastEvents bool ) if !until.IsZero() { if until.Before(since) { return invalidRequestError{fmt.Errorf("`since` time (%s) cannot be after `until` time (%s)", r.Form.Get("since"), r.Form.Get("until"))} } now := time.Now() onlyPastEvents = until.Before(now) if !onlyPastEvents { dur := until.Sub(now) timer := time.NewTimer(dur) defer timer.Stop() timeout = timer.C } } ef, err := filters.FromJSON(r.Form.Get("filters")) if err != nil { return err } w.Header().Set("Content-Type", "application/json") output := ioutils.NewWriteFlusher(w) defer output.Close() output.Flush() enc := json.NewEncoder(output) buffered, l := s.backend.SubscribeToEvents(since, until, ef) defer s.backend.UnsubscribeFromEvents(l) for _, ev := range buffered { if err := enc.Encode(ev); err != nil { return err } } if onlyPastEvents { return nil } for { select { case ev := <-l: jev, ok := ev.(events.Message) if !ok { logrus.Warnf("unexpected event message: %q", ev) continue } if err := enc.Encode(jev); err != nil { return err } case <-timeout: return nil case <-ctx.Done(): logrus.Debug("Client context cancelled, stop sending events") return nil } } } func (s *systemRouter) postAuth(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { var config *types.AuthConfig err := json.NewDecoder(r.Body).Decode(&config) r.Body.Close() if err != nil { return err } status, token, err := s.backend.AuthenticateToRegistry(ctx, config) if err != nil { return err } return httputils.WriteJSON(w, http.StatusOK, &registry.AuthenticateOKBody{ Status: status, IdentityToken: token, }) } func eventTime(formTime string) (time.Time, error) { t, tNano, err := timetypes.ParseTimestamps(formTime, -1) if err != nil { return time.Time{}, err } if t == -1 { return time.Time{}, nil } return time.Unix(t, tNano), nil }
rvolosatovs
5e498e20f730d228d9e300c1faaf3898ccb85ce9
91dc595e96483184e090d653870eb95d95f96904
This is hard to read (for me). Can you split this `if` across multiple lines? I think that will help. So...

```go
types, ok := r.Form["type"]
if versions.LessThan(version, "1.42") || !ok {
	//
} else {
	//
}
```
cpuguy83
4,507
moby/moby
42,715
Share disk usage computation results between concurrent invocations
**- What I did**

Share disk usage computation results between concurrent invocations instead of throwing an error.

**- How I did it**

- Use `x/sync/singleflight.Group`, which ensures a computation is performed by at most one goroutine at a time, and the results are propagated to all goroutines that call the method while it is in flight.
- Extract the disk usage computation functionality for containers and images for consistency with other object types and better separation of concerns. It also fits nicely with the current design.

**- How to verify it**

E.g.

```
docker system df&
docker system df&
docker system df
```

Or:

```
curl -s --unix-socket /var/run/docker.sock 'http://localhost/system/df?types=image'&
curl -s --unix-socket /var/run/docker.sock 'http://localhost/system/df?types=container'&
curl -s --unix-socket /var/run/docker.sock 'http://localhost/system/df?types=volume'&
curl -s --unix-socket /var/run/docker.sock 'http://localhost/system/df?types=build-cache'&
curl -s --unix-socket /var/run/docker.sock 'http://localhost/system/df?types=image&type=container'
```

Such invocations no longer error, but just return the result once it has been computed by one of the goroutines.

**- Description for the changelog**

```markdown
The `GET /system/df` endpoint can now be used concurrently. If a request is made to the endpoint while a calculation is still running, the request will receive the result of the already running calculation, once completed. Previously, an error (`a disk usage operation is already running`) would be returned in this situation.
```
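For readers unfamiliar with the package, a minimal, stand-alone sketch of how `x/sync/singleflight` shares one in-flight computation between callers; `computeDiskUsage` is a stand-in for the expensive daemon-side work, not the actual moby code:

```go
package main

import (
	"fmt"
	"sync"
	"time"

	"golang.org/x/sync/singleflight"
)

var usageGroup singleflight.Group

// computeDiskUsage pretends to be the expensive disk-usage scan.
func computeDiskUsage() (int64, error) {
	time.Sleep(100 * time.Millisecond)
	return 42, nil
}

// diskUsage lets at most one computation run at a time; concurrent callers
// block and receive the same result instead of an error.
func diskUsage() (int64, error) {
	v, err, _ := usageGroup.Do("disk-usage", func() (interface{}, error) {
		return computeDiskUsage()
	})
	if err != nil {
		return 0, err
	}
	return v.(int64), nil
}

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 3; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			n, _ := diskUsage()
			fmt.Println(n) // all callers print 42; concurrent calls share one computation
		}()
	}
	wg.Wait()
}
```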
null
2021-08-06 15:04:00+00:00
2021-08-10 11:51:04+00:00
daemon/daemon.go
// Package daemon exposes the functions that occur on the host server // that the Docker daemon is running. // // In implementing the various functions of the daemon, there is often // a method-specific struct for configuring the runtime behavior. package daemon // import "github.com/docker/docker/daemon" import ( "context" "fmt" "io/ioutil" "net" "net/url" "os" "path" "path/filepath" "runtime" "strings" "sync" "time" "github.com/docker/docker/pkg/fileutils" "go.etcd.io/bbolt" "google.golang.org/grpc" "google.golang.org/grpc/backoff" "github.com/containerd/containerd" "github.com/containerd/containerd/defaults" "github.com/containerd/containerd/pkg/dialer" "github.com/containerd/containerd/pkg/userns" "github.com/containerd/containerd/remotes/docker" "github.com/docker/docker/api/types" containertypes "github.com/docker/docker/api/types/container" "github.com/docker/docker/api/types/swarm" "github.com/docker/docker/builder" "github.com/docker/docker/container" "github.com/docker/docker/daemon/config" "github.com/docker/docker/daemon/discovery" "github.com/docker/docker/daemon/events" "github.com/docker/docker/daemon/exec" "github.com/docker/docker/daemon/images" "github.com/docker/docker/daemon/logger" "github.com/docker/docker/daemon/network" "github.com/docker/docker/errdefs" "github.com/moby/buildkit/util/resolver" "github.com/sirupsen/logrus" // register graph drivers _ "github.com/docker/docker/daemon/graphdriver/register" "github.com/docker/docker/daemon/stats" dmetadata "github.com/docker/docker/distribution/metadata" "github.com/docker/docker/dockerversion" "github.com/docker/docker/image" "github.com/docker/docker/layer" "github.com/docker/docker/libcontainerd" libcontainerdtypes "github.com/docker/docker/libcontainerd/types" "github.com/docker/docker/libnetwork" "github.com/docker/docker/libnetwork/cluster" nwconfig "github.com/docker/docker/libnetwork/config" "github.com/docker/docker/pkg/idtools" "github.com/docker/docker/pkg/plugingetter" "github.com/docker/docker/pkg/system" "github.com/docker/docker/pkg/truncindex" "github.com/docker/docker/plugin" pluginexec "github.com/docker/docker/plugin/executor/containerd" refstore "github.com/docker/docker/reference" "github.com/docker/docker/registry" "github.com/docker/docker/runconfig" volumesservice "github.com/docker/docker/volume/service" "github.com/moby/locker" "github.com/pkg/errors" "golang.org/x/sync/semaphore" ) // ContainersNamespace is the name of the namespace used for users containers const ( ContainersNamespace = "moby" ) var ( errSystemNotSupported = errors.New("the Docker daemon is not supported on this platform") ) // Daemon holds information about the Docker daemon. 
type Daemon struct { ID string repository string containers container.Store containersReplica container.ViewDB execCommands *exec.Store imageService *images.ImageService idIndex *truncindex.TruncIndex configStore *config.Config statsCollector *stats.Collector defaultLogConfig containertypes.LogConfig RegistryService registry.Service EventsService *events.Events netController libnetwork.NetworkController volumes *volumesservice.VolumesService discoveryWatcher discovery.Reloader root string seccompEnabled bool apparmorEnabled bool shutdown bool idMapping *idtools.IdentityMapping graphDriver string // TODO: move graphDriver field to an InfoService PluginStore *plugin.Store // TODO: remove pluginManager *plugin.Manager linkIndex *linkIndex containerdCli *containerd.Client containerd libcontainerdtypes.Client defaultIsolation containertypes.Isolation // Default isolation mode on Windows clusterProvider cluster.Provider cluster Cluster genericResources []swarm.GenericResource metricsPluginListener net.Listener machineMemory uint64 seccompProfile []byte seccompProfilePath string diskUsageRunning int32 pruneRunning int32 hosts map[string]bool // hosts stores the addresses the daemon is listening on startupDone chan struct{} attachmentStore network.AttachmentStore attachableNetworkLock *locker.Locker // This is used for Windows which doesn't currently support running on containerd // It stores metadata for the content store (used for manifest caching) // This needs to be closed on daemon exit mdDB *bbolt.DB } // StoreHosts stores the addresses the daemon is listening on func (daemon *Daemon) StoreHosts(hosts []string) { if daemon.hosts == nil { daemon.hosts = make(map[string]bool) } for _, h := range hosts { daemon.hosts[h] = true } } // HasExperimental returns whether the experimental features of the daemon are enabled or not func (daemon *Daemon) HasExperimental() bool { return daemon.configStore != nil && daemon.configStore.Experimental } // Features returns the features map from configStore func (daemon *Daemon) Features() *map[string]bool { return &daemon.configStore.Features } // RegistryHosts returns registry configuration in containerd resolvers format func (daemon *Daemon) RegistryHosts() docker.RegistryHosts { var ( registryKey = "docker.io" mirrors = make([]string, len(daemon.configStore.Mirrors)) m = map[string]resolver.RegistryConfig{} ) // must trim "https://" or "http://" prefix for i, v := range daemon.configStore.Mirrors { if uri, err := url.Parse(v); err == nil { v = uri.Host } mirrors[i] = v } // set mirrors for default registry m[registryKey] = resolver.RegistryConfig{Mirrors: mirrors} for _, v := range daemon.configStore.InsecureRegistries { u, err := url.Parse(v) c := resolver.RegistryConfig{} if err == nil { v = u.Host t := true if u.Scheme == "http" { c.PlainHTTP = &t } else { c.Insecure = &t } } m[v] = c } for k, v := range m { if d, err := registry.HostCertsDir(k); err == nil { v.TLSConfigDir = []string{d} m[k] = v } } certsDir := registry.CertsDir() if fis, err := ioutil.ReadDir(certsDir); err == nil { for _, fi := range fis { if _, ok := m[fi.Name()]; !ok { m[fi.Name()] = resolver.RegistryConfig{ TLSConfigDir: []string{filepath.Join(certsDir, fi.Name())}, } } } } return resolver.NewRegistryConfig(m) } func (daemon *Daemon) restore() error { var mapLock sync.Mutex containers := make(map[string]*container.Container) logrus.Info("Loading containers: start.") dir, err := ioutil.ReadDir(daemon.repository) if err != nil { return err } // parallelLimit is the maximum number of 
parallel startup jobs that we // allow (this is the limited used for all startup semaphores). The multipler // (128) was chosen after some fairly significant benchmarking -- don't change // it unless you've tested it significantly (this value is adjusted if // RLIMIT_NOFILE is small to avoid EMFILE). parallelLimit := adjustParallelLimit(len(dir), 128*runtime.NumCPU()) // Re-used for all parallel startup jobs. var group sync.WaitGroup sem := semaphore.NewWeighted(int64(parallelLimit)) for _, v := range dir { group.Add(1) go func(id string) { defer group.Done() _ = sem.Acquire(context.Background(), 1) defer sem.Release(1) log := logrus.WithField("container", id) c, err := daemon.load(id) if err != nil { log.WithError(err).Error("failed to load container") return } if !system.IsOSSupported(c.OS) { log.Errorf("failed to load container: %s (%q)", system.ErrNotSupportedOperatingSystem, c.OS) return } // Ignore the container if it does not support the current driver being used by the graph if (c.Driver == "" && daemon.graphDriver == "aufs") || c.Driver == daemon.graphDriver { rwlayer, err := daemon.imageService.GetLayerByID(c.ID) if err != nil { log.WithError(err).Error("failed to load container mount") return } c.RWLayer = rwlayer log.WithFields(logrus.Fields{ "running": c.IsRunning(), "paused": c.IsPaused(), }).Debug("loaded container") mapLock.Lock() containers[c.ID] = c mapLock.Unlock() } else { log.Debugf("cannot load container because it was created with another storage driver") } }(v.Name()) } group.Wait() removeContainers := make(map[string]*container.Container) restartContainers := make(map[*container.Container]chan struct{}) activeSandboxes := make(map[string]interface{}) for _, c := range containers { group.Add(1) go func(c *container.Container) { defer group.Done() _ = sem.Acquire(context.Background(), 1) defer sem.Release(1) log := logrus.WithField("container", c.ID) if err := daemon.registerName(c); err != nil { log.WithError(err).Errorf("failed to register container name: %s", c.Name) mapLock.Lock() delete(containers, c.ID) mapLock.Unlock() return } if err := daemon.Register(c); err != nil { log.WithError(err).Error("failed to register container") mapLock.Lock() delete(containers, c.ID) mapLock.Unlock() return } }(c) } group.Wait() for _, c := range containers { group.Add(1) go func(c *container.Container) { defer group.Done() _ = sem.Acquire(context.Background(), 1) defer sem.Release(1) log := logrus.WithField("container", c.ID) daemon.backportMountSpec(c) if err := daemon.checkpointAndSave(c); err != nil { log.WithError(err).Error("error saving backported mountspec to disk") } daemon.setStateCounter(c) logger := func(c *container.Container) *logrus.Entry { return log.WithFields(logrus.Fields{ "running": c.IsRunning(), "paused": c.IsPaused(), "restarting": c.IsRestarting(), }) } logger(c).Debug("restoring container") var ( err error alive bool ec uint32 exitedAt time.Time process libcontainerdtypes.Process ) alive, _, process, err = daemon.containerd.Restore(context.Background(), c.ID, c.InitializeStdio) if err != nil && !errdefs.IsNotFound(err) { logger(c).WithError(err).Error("failed to restore container with containerd") return } logger(c).Debugf("alive: %v", alive) if !alive { // If process is not nil, cleanup dead container from containerd. // If process is nil then the above `containerd.Restore` returned an errdefs.NotFoundError, // and docker's view of the container state will be updated accorrdingly via SetStopped further down. 
if process != nil { logger(c).Debug("cleaning up dead container process") ec, exitedAt, err = process.Delete(context.Background()) if err != nil && !errdefs.IsNotFound(err) { logger(c).WithError(err).Error("failed to delete container from containerd") return } } } else if !daemon.configStore.LiveRestoreEnabled { logger(c).Debug("shutting down container considered alive by containerd") if err := daemon.shutdownContainer(c); err != nil && !errdefs.IsNotFound(err) { log.WithError(err).Error("error shutting down container") return } c.ResetRestartManager(false) } if c.IsRunning() || c.IsPaused() { logger(c).Debug("syncing container on disk state with real state") c.RestartManager().Cancel() // manually start containers because some need to wait for swarm networking if c.IsPaused() && alive { s, err := daemon.containerd.Status(context.Background(), c.ID) if err != nil { logger(c).WithError(err).Error("failed to get container status") } else { logger(c).WithField("state", s).Info("restored container paused") switch s { case containerd.Paused, containerd.Pausing: // nothing to do case containerd.Stopped: alive = false case containerd.Unknown: log.Error("unknown status for paused container during restore") default: // running c.Lock() c.Paused = false daemon.setStateCounter(c) if err := c.CheckpointTo(daemon.containersReplica); err != nil { log.WithError(err).Error("failed to update paused container state") } c.Unlock() } } } if !alive { logger(c).Debug("setting stopped state") c.Lock() c.SetStopped(&container.ExitStatus{ExitCode: int(ec), ExitedAt: exitedAt}) daemon.Cleanup(c) if err := c.CheckpointTo(daemon.containersReplica); err != nil { log.WithError(err).Error("failed to update stopped container state") } c.Unlock() logger(c).Debug("set stopped state") } // we call Mount and then Unmount to get BaseFs of the container if err := daemon.Mount(c); err != nil { // The mount is unlikely to fail. However, in case mount fails // the container should be allowed to restore here. Some functionalities // (like docker exec -u user) might be missing but container is able to be // stopped/restarted/removed. // See #29365 for related information. // The error is only logged here. logger(c).WithError(err).Warn("failed to mount container to get BaseFs path") } else { if err := daemon.Unmount(c); err != nil { logger(c).WithError(err).Warn("failed to umount container to get BaseFs path") } } c.ResetRestartManager(false) if !c.HostConfig.NetworkMode.IsContainer() && c.IsRunning() { options, err := daemon.buildSandboxOptions(c) if err != nil { logger(c).WithError(err).Warn("failed to build sandbox option to restore container") } mapLock.Lock() activeSandboxes[c.NetworkSettings.SandboxID] = options mapLock.Unlock() } } // get list of containers we need to restart // Do not autostart containers which // has endpoints in a swarm scope // network yet since the cluster is // not initialized yet. We will start // it after the cluster is // initialized. if daemon.configStore.AutoRestart && c.ShouldRestart() && !c.NetworkSettings.HasSwarmEndpoint && c.HasBeenStartedBefore { mapLock.Lock() restartContainers[c] = make(chan struct{}) mapLock.Unlock() } else if c.HostConfig != nil && c.HostConfig.AutoRemove { mapLock.Lock() removeContainers[c.ID] = c mapLock.Unlock() } c.Lock() if c.RemovalInProgress { // We probably crashed in the middle of a removal, reset // the flag. 
// // We DO NOT remove the container here as we do not // know if the user had requested for either the // associated volumes, network links or both to also // be removed. So we put the container in the "dead" // state and leave further processing up to them. c.RemovalInProgress = false c.Dead = true if err := c.CheckpointTo(daemon.containersReplica); err != nil { log.WithError(err).Error("failed to update RemovalInProgress container state") } else { log.Debugf("reset RemovalInProgress state for container") } } c.Unlock() logger(c).Debug("done restoring container") }(c) } group.Wait() daemon.netController, err = daemon.initNetworkController(daemon.configStore, activeSandboxes) if err != nil { return fmt.Errorf("Error initializing network controller: %v", err) } // Now that all the containers are registered, register the links for _, c := range containers { group.Add(1) go func(c *container.Container) { _ = sem.Acquire(context.Background(), 1) if err := daemon.registerLinks(c, c.HostConfig); err != nil { logrus.WithField("container", c.ID).WithError(err).Error("failed to register link for container") } sem.Release(1) group.Done() }(c) } group.Wait() for c, notifier := range restartContainers { group.Add(1) go func(c *container.Container, chNotify chan struct{}) { _ = sem.Acquire(context.Background(), 1) log := logrus.WithField("container", c.ID) log.Debug("starting container") // ignore errors here as this is a best effort to wait for children to be // running before we try to start the container children := daemon.children(c) timeout := time.NewTimer(5 * time.Second) defer timeout.Stop() for _, child := range children { if notifier, exists := restartContainers[child]; exists { select { case <-notifier: case <-timeout.C: } } } // Make sure networks are available before starting daemon.waitForNetworks(c) if err := daemon.containerStart(c, "", "", true); err != nil { log.WithError(err).Error("failed to start container") } close(chNotify) sem.Release(1) group.Done() }(c, notifier) } group.Wait() for id := range removeContainers { group.Add(1) go func(cid string) { _ = sem.Acquire(context.Background(), 1) if err := daemon.ContainerRm(cid, &types.ContainerRmConfig{ForceRemove: true, RemoveVolume: true}); err != nil { logrus.WithField("container", cid).WithError(err).Error("failed to remove container") } sem.Release(1) group.Done() }(id) } group.Wait() // any containers that were started above would already have had this done, // however we need to now prepare the mountpoints for the rest of the containers as well. // This shouldn't cause any issue running on the containers that already had this run. // This must be run after any containers with a restart policy so that containerized plugins // can have a chance to be running before we try to initialize them. for _, c := range containers { // if the container has restart policy, do not // prepare the mountpoints since it has been done on restarting. // This is to speed up the daemon start when a restart container // has a volume and the volume driver is not available. if _, ok := restartContainers[c]; ok { continue } else if _, ok := removeContainers[c.ID]; ok { // container is automatically removed, skip it. 
continue } group.Add(1) go func(c *container.Container) { _ = sem.Acquire(context.Background(), 1) if err := daemon.prepareMountPoints(c); err != nil { logrus.WithField("container", c.ID).WithError(err).Error("failed to prepare mountpoints for container") } sem.Release(1) group.Done() }(c) } group.Wait() logrus.Info("Loading containers: done.") return nil } // RestartSwarmContainers restarts any autostart container which has a // swarm endpoint. func (daemon *Daemon) RestartSwarmContainers() { ctx := context.Background() // parallelLimit is the maximum number of parallel startup jobs that we // allow (this is the limited used for all startup semaphores). The multipler // (128) was chosen after some fairly significant benchmarking -- don't change // it unless you've tested it significantly (this value is adjusted if // RLIMIT_NOFILE is small to avoid EMFILE). parallelLimit := adjustParallelLimit(len(daemon.List()), 128*runtime.NumCPU()) var group sync.WaitGroup sem := semaphore.NewWeighted(int64(parallelLimit)) for _, c := range daemon.List() { if !c.IsRunning() && !c.IsPaused() { // Autostart all the containers which has a // swarm endpoint now that the cluster is // initialized. if daemon.configStore.AutoRestart && c.ShouldRestart() && c.NetworkSettings.HasSwarmEndpoint && c.HasBeenStartedBefore { group.Add(1) go func(c *container.Container) { if err := sem.Acquire(ctx, 1); err != nil { // ctx is done. group.Done() return } if err := daemon.containerStart(c, "", "", true); err != nil { logrus.WithField("container", c.ID).WithError(err).Error("failed to start swarm container") } sem.Release(1) group.Done() }(c) } } } group.Wait() } // waitForNetworks is used during daemon initialization when starting up containers // It ensures that all of a container's networks are available before the daemon tries to start the container. // In practice it just makes sure the discovery service is available for containers which use a network that require discovery. func (daemon *Daemon) waitForNetworks(c *container.Container) { if daemon.discoveryWatcher == nil { return } // Make sure if the container has a network that requires discovery that the discovery service is available before starting for netName := range c.NetworkSettings.Networks { // If we get `ErrNoSuchNetwork` here, we can assume that it is due to discovery not being ready // Most likely this is because the K/V store used for discovery is in a container and needs to be started if _, err := daemon.netController.NetworkByName(netName); err != nil { if _, ok := err.(libnetwork.ErrNoSuchNetwork); !ok { continue } // use a longish timeout here due to some slowdowns in libnetwork if the k/v store is on anything other than --net=host // FIXME: why is this slow??? dur := 60 * time.Second timer := time.NewTimer(dur) logrus.WithField("container", c.ID).Debugf("Container %s waiting for network to be ready", c.Name) select { case <-daemon.discoveryWatcher.ReadyCh(): case <-timer.C: } timer.Stop() return } } } func (daemon *Daemon) children(c *container.Container) map[string]*container.Container { return daemon.linkIndex.children(c) } // parents returns the names of the parent containers of the container // with the given name. 
func (daemon *Daemon) parents(c *container.Container) map[string]*container.Container { return daemon.linkIndex.parents(c) } func (daemon *Daemon) registerLink(parent, child *container.Container, alias string) error { fullName := path.Join(parent.Name, alias) if err := daemon.containersReplica.ReserveName(fullName, child.ID); err != nil { if err == container.ErrNameReserved { logrus.Warnf("error registering link for %s, to %s, as alias %s, ignoring: %v", parent.ID, child.ID, alias, err) return nil } return err } daemon.linkIndex.link(parent, child, fullName) return nil } // DaemonJoinsCluster informs the daemon has joined the cluster and provides // the handler to query the cluster component func (daemon *Daemon) DaemonJoinsCluster(clusterProvider cluster.Provider) { daemon.setClusterProvider(clusterProvider) } // DaemonLeavesCluster informs the daemon has left the cluster func (daemon *Daemon) DaemonLeavesCluster() { // Daemon is in charge of removing the attachable networks with // connected containers when the node leaves the swarm daemon.clearAttachableNetworks() // We no longer need the cluster provider, stop it now so that // the network agent will stop listening to cluster events. daemon.setClusterProvider(nil) // Wait for the networking cluster agent to stop daemon.netController.AgentStopWait() // Daemon is in charge of removing the ingress network when the // node leaves the swarm. Wait for job to be done or timeout. // This is called also on graceful daemon shutdown. We need to // wait, because the ingress release has to happen before the // network controller is stopped. if done, err := daemon.ReleaseIngress(); err == nil { timeout := time.NewTimer(5 * time.Second) defer timeout.Stop() select { case <-done: case <-timeout.C: logrus.Warn("timeout while waiting for ingress network removal") } } else { logrus.Warnf("failed to initiate ingress network removal: %v", err) } daemon.attachmentStore.ClearAttachments() } // setClusterProvider sets a component for querying the current cluster state. func (daemon *Daemon) setClusterProvider(clusterProvider cluster.Provider) { daemon.clusterProvider = clusterProvider daemon.netController.SetClusterProvider(clusterProvider) daemon.attachableNetworkLock = locker.New() } // IsSwarmCompatible verifies if the current daemon // configuration is compatible with the swarm mode func (daemon *Daemon) IsSwarmCompatible() error { if daemon.configStore == nil { return nil } return daemon.configStore.IsSwarmCompatible() } // NewDaemon sets up everything for the daemon to be able to service // requests from the webserver. func NewDaemon(ctx context.Context, config *config.Config, pluginStore *plugin.Store) (daemon *Daemon, err error) { setDefaultMtu(config) registryService, err := registry.NewService(config.ServiceOptions) if err != nil { return nil, err } // Ensure that we have a correct root key limit for launching containers. if err := modifyRootKeyLimit(); err != nil { logrus.Warnf("unable to modify root key limit, number of containers could be limited by this quota: %v", err) } // Ensure we have compatible and valid configuration options if err := verifyDaemonSettings(config); err != nil { return nil, err } // Do we have a disabled network? 
config.DisableBridge = isBridgeNetworkDisabled(config) // Setup the resolv.conf setupResolvConf(config) // Verify the platform is supported as a daemon if !platformSupported { return nil, errSystemNotSupported } // Validate platform-specific requirements if err := checkSystem(); err != nil { return nil, err } idMapping, err := setupRemappedRoot(config) if err != nil { return nil, err } rootIDs := idMapping.RootPair() if err := setupDaemonProcess(config); err != nil { return nil, err } // set up the tmpDir to use a canonical path tmp, err := prepareTempDir(config.Root) if err != nil { return nil, fmt.Errorf("Unable to get the TempDir under %s: %s", config.Root, err) } realTmp, err := fileutils.ReadSymlinkedDirectory(tmp) if err != nil { return nil, fmt.Errorf("Unable to get the full path to the TempDir (%s): %s", tmp, err) } if isWindows { if _, err := os.Stat(realTmp); err != nil && os.IsNotExist(err) { if err := system.MkdirAll(realTmp, 0700); err != nil { return nil, fmt.Errorf("Unable to create the TempDir (%s): %s", realTmp, err) } } os.Setenv("TEMP", realTmp) os.Setenv("TMP", realTmp) } else { os.Setenv("TMPDIR", realTmp) } d := &Daemon{ configStore: config, PluginStore: pluginStore, startupDone: make(chan struct{}), } // Ensure the daemon is properly shutdown if there is a failure during // initialization defer func() { if err != nil { if err := d.Shutdown(); err != nil { logrus.Error(err) } } }() if err := d.setGenericResources(config); err != nil { return nil, err } // set up SIGUSR1 handler on Unix-like systems, or a Win32 global event // on Windows to dump Go routine stacks stackDumpDir := config.Root if execRoot := config.GetExecRoot(); execRoot != "" { stackDumpDir = execRoot } d.setupDumpStackTrap(stackDumpDir) if err := d.setupSeccompProfile(); err != nil { return nil, err } // Set the default isolation mode (only applicable on Windows) if err := d.setDefaultIsolation(); err != nil { return nil, fmt.Errorf("error setting default isolation mode: %v", err) } if err := configureMaxThreads(config); err != nil { logrus.Warnf("Failed to configure golang's threads limit: %v", err) } // ensureDefaultAppArmorProfile does nothing if apparmor is disabled if err := ensureDefaultAppArmorProfile(); err != nil { logrus.Errorf(err.Error()) } daemonRepo := filepath.Join(config.Root, "containers") if err := idtools.MkdirAllAndChown(daemonRepo, 0701, idtools.CurrentIdentity()); err != nil { return nil, err } // Create the directory where we'll store the runtime scripts (i.e. in // order to support runtimeArgs) daemonRuntimes := filepath.Join(config.Root, "runtimes") if err := system.MkdirAll(daemonRuntimes, 0700); err != nil { return nil, err } if err := d.loadRuntimes(); err != nil { return nil, err } if isWindows { if err := system.MkdirAll(filepath.Join(config.Root, "credentialspecs"), 0); err != nil { return nil, err } } if isWindows { // On Windows we don't support the environment variable, or a user supplied graphdriver d.graphDriver = "windowsfilter" } else { // Unix platforms however run a single graphdriver for all containers, and it can // be set through an environment variable, a daemon start parameter, or chosen through // initialization of the layerstore through driver priority order for example. if drv := os.Getenv("DOCKER_DRIVER"); drv != "" { d.graphDriver = drv logrus.Infof("Setting the storage driver from the $DOCKER_DRIVER environment variable (%s)", drv) } else { d.graphDriver = config.GraphDriver // May still be empty. Layerstore init determines instead. 
} } d.RegistryService = registryService logger.RegisterPluginGetter(d.PluginStore) metricsSockPath, err := d.listenMetricsSock() if err != nil { return nil, err } registerMetricsPluginCallback(d.PluginStore, metricsSockPath) backoffConfig := backoff.DefaultConfig backoffConfig.MaxDelay = 3 * time.Second connParams := grpc.ConnectParams{ Backoff: backoffConfig, } gopts := []grpc.DialOption{ // WithBlock makes sure that the following containerd request // is reliable. // // NOTE: In one edge case with high load pressure, kernel kills // dockerd, containerd and containerd-shims caused by OOM. // When both dockerd and containerd restart, but containerd // will take time to recover all the existing containers. Before // containerd serving, dockerd will failed with gRPC error. // That bad thing is that restore action will still ignore the // any non-NotFound errors and returns running state for // already stopped container. It is unexpected behavior. And // we need to restart dockerd to make sure that anything is OK. // // It is painful. Add WithBlock can prevent the edge case. And // n common case, the containerd will be serving in shortly. // It is not harm to add WithBlock for containerd connection. grpc.WithBlock(), grpc.WithInsecure(), grpc.WithConnectParams(connParams), grpc.WithContextDialer(dialer.ContextDialer), // TODO(stevvooe): We may need to allow configuration of this on the client. grpc.WithDefaultCallOptions(grpc.MaxCallRecvMsgSize(defaults.DefaultMaxRecvMsgSize)), grpc.WithDefaultCallOptions(grpc.MaxCallSendMsgSize(defaults.DefaultMaxSendMsgSize)), } if config.ContainerdAddr != "" { d.containerdCli, err = containerd.New(config.ContainerdAddr, containerd.WithDefaultNamespace(config.ContainerdNamespace), containerd.WithDialOpts(gopts), containerd.WithTimeout(60*time.Second)) if err != nil { return nil, errors.Wrapf(err, "failed to dial %q", config.ContainerdAddr) } } createPluginExec := func(m *plugin.Manager) (plugin.Executor, error) { var pluginCli *containerd.Client // Windows is not currently using containerd, keep the // client as nil if config.ContainerdAddr != "" { pluginCli, err = containerd.New(config.ContainerdAddr, containerd.WithDefaultNamespace(config.ContainerdPluginNamespace), containerd.WithDialOpts(gopts), containerd.WithTimeout(60*time.Second)) if err != nil { return nil, errors.Wrapf(err, "failed to dial %q", config.ContainerdAddr) } } var rt types.Runtime if runtime.GOOS != "windows" { rtPtr, err := d.getRuntime(config.GetDefaultRuntimeName()) if err != nil { return nil, err } rt = *rtPtr } return pluginexec.New(ctx, getPluginExecRoot(config.Root), pluginCli, config.ContainerdPluginNamespace, m, rt) } // Plugin system initialization should happen before restore. Do not change order. 
d.pluginManager, err = plugin.NewManager(plugin.ManagerConfig{ Root: filepath.Join(config.Root, "plugins"), ExecRoot: getPluginExecRoot(config.Root), Store: d.PluginStore, CreateExecutor: createPluginExec, RegistryService: registryService, LiveRestoreEnabled: config.LiveRestoreEnabled, LogPluginEvent: d.LogPluginEvent, // todo: make private AuthzMiddleware: config.AuthzMiddleware, }) if err != nil { return nil, errors.Wrap(err, "couldn't create plugin manager") } if err := d.setupDefaultLogConfig(); err != nil { return nil, err } layerStore, err := layer.NewStoreFromOptions(layer.StoreOptions{ Root: config.Root, MetadataStorePathTemplate: filepath.Join(config.Root, "image", "%s", "layerdb"), GraphDriver: d.graphDriver, GraphDriverOptions: config.GraphOptions, IDMapping: idMapping, PluginGetter: d.PluginStore, ExperimentalEnabled: config.Experimental, OS: runtime.GOOS, }) if err != nil { return nil, err } // As layerstore initialization may set the driver d.graphDriver = layerStore.DriverName() // Configure and validate the kernels security support. Note this is a Linux/FreeBSD // operation only, so it is safe to pass *just* the runtime OS graphdriver. if err := configureKernelSecuritySupport(config, d.graphDriver); err != nil { return nil, err } imageRoot := filepath.Join(config.Root, "image", d.graphDriver) ifs, err := image.NewFSStoreBackend(filepath.Join(imageRoot, "imagedb")) if err != nil { return nil, err } imageStore, err := image.NewImageStore(ifs, layerStore) if err != nil { return nil, err } d.volumes, err = volumesservice.NewVolumeService(config.Root, d.PluginStore, rootIDs, d) if err != nil { return nil, err } trustKey, err := loadOrCreateTrustKey(config.TrustKeyPath) if err != nil { return nil, err } trustDir := filepath.Join(config.Root, "trust") if err := system.MkdirAll(trustDir, 0700); err != nil { return nil, err } // We have a single tag/reference store for the daemon globally. However, it's // stored under the graphdriver. On host platforms which only support a single // container OS, but multiple selectable graphdrivers, this means depending on which // graphdriver is chosen, the global reference store is under there. For // platforms which support multiple container operating systems, this is slightly // more problematic as where does the global ref store get located? Fortunately, // for Windows, which is currently the only daemon supporting multiple container // operating systems, the list of graphdrivers available isn't user configurable. // For backwards compatibility, we just put it under the windowsfilter // directory regardless. refStoreLocation := filepath.Join(imageRoot, `repositories.json`) rs, err := refstore.NewReferenceStore(refStoreLocation) if err != nil { return nil, fmt.Errorf("Couldn't create reference store repository: %s", err) } distributionMetadataStore, err := dmetadata.NewFSMetadataStore(filepath.Join(imageRoot, "distribution")) if err != nil { return nil, err } // Discovery is only enabled when the daemon is launched with an address to advertise. When // initialized, the daemon is registered and we can store the discovery backend as it's read-only if err := d.initDiscovery(config); err != nil { return nil, err } sysInfo := d.RawSysInfo() for _, w := range sysInfo.Warnings { logrus.Warn(w) } // Check if Devices cgroup is mounted, it is hard requirement for container security, // on Linux. 
if runtime.GOOS == "linux" && !sysInfo.CgroupDevicesEnabled && !userns.RunningInUserNS() { return nil, errors.New("Devices cgroup isn't mounted") } d.ID = trustKey.PublicKey().KeyID() d.repository = daemonRepo d.containers = container.NewMemoryStore() if d.containersReplica, err = container.NewViewDB(); err != nil { return nil, err } d.execCommands = exec.NewStore() d.idIndex = truncindex.NewTruncIndex([]string{}) d.statsCollector = d.newStatsCollector(1 * time.Second) d.EventsService = events.New() d.root = config.Root d.idMapping = idMapping d.seccompEnabled = sysInfo.Seccomp d.apparmorEnabled = sysInfo.AppArmor d.linkIndex = newLinkIndex() imgSvcConfig := images.ImageServiceConfig{ ContainerStore: d.containers, DistributionMetadataStore: distributionMetadataStore, EventsService: d.EventsService, ImageStore: imageStore, LayerStore: layerStore, MaxConcurrentDownloads: *config.MaxConcurrentDownloads, MaxConcurrentUploads: *config.MaxConcurrentUploads, MaxDownloadAttempts: *config.MaxDownloadAttempts, ReferenceStore: rs, RegistryService: registryService, TrustKey: trustKey, ContentNamespace: config.ContainerdNamespace, } // containerd is not currently supported with Windows. // So sometimes d.containerdCli will be nil // In that case we'll create a local content store... but otherwise we'll use containerd if d.containerdCli != nil { imgSvcConfig.Leases = d.containerdCli.LeasesService() imgSvcConfig.ContentStore = d.containerdCli.ContentStore() } else { cs, lm, err := d.configureLocalContentStore() if err != nil { return nil, err } imgSvcConfig.ContentStore = cs imgSvcConfig.Leases = lm } // TODO: imageStore, distributionMetadataStore, and ReferenceStore are only // used above to run migration. They could be initialized in ImageService // if migration is called from daemon/images. layerStore might move as well. d.imageService = images.NewImageService(imgSvcConfig) go d.execCommandGC() d.containerd, err = libcontainerd.NewClient(ctx, d.containerdCli, filepath.Join(config.ExecRoot, "containerd"), config.ContainerdNamespace, d) if err != nil { return nil, err } if err := d.restore(); err != nil { return nil, err } close(d.startupDone) info := d.SystemInfo() engineInfo.WithValues( dockerversion.Version, dockerversion.GitCommit, info.Architecture, info.Driver, info.KernelVersion, info.OperatingSystem, info.OSType, info.OSVersion, info.ID, ).Set(1) engineCpus.Set(float64(info.NCPU)) engineMemory.Set(float64(info.MemTotal)) logrus.WithFields(logrus.Fields{ "version": dockerversion.Version, "commit": dockerversion.GitCommit, "graphdriver": d.graphDriver, }).Info("Docker daemon") return d, nil } // DistributionServices returns services controlling daemon storage func (daemon *Daemon) DistributionServices() images.DistributionServices { return daemon.imageService.DistributionServices() } func (daemon *Daemon) waitForStartupDone() { <-daemon.startupDone } func (daemon *Daemon) shutdownContainer(c *container.Container) error { stopTimeout := c.StopTimeout() // If container failed to exit in stopTimeout seconds of SIGTERM, then using the force if err := daemon.containerStop(c, stopTimeout); err != nil { return fmt.Errorf("Failed to stop container %s with error: %v", c.ID, err) } // Wait without timeout for the container to exit. // Ignore the result. <-c.Wait(context.Background(), container.WaitConditionNotRunning) return nil } // ShutdownTimeout returns the timeout (in seconds) before containers are forcibly // killed during shutdown. 
The default timeout can be configured both on the daemon // and per container, and the longest timeout will be used. A grace-period of // 5 seconds is added to the configured timeout. // // A negative (-1) timeout means "indefinitely", which means that containers // are not forcibly killed, and the daemon shuts down after all containers exit. func (daemon *Daemon) ShutdownTimeout() int { shutdownTimeout := daemon.configStore.ShutdownTimeout if shutdownTimeout < 0 { return -1 } if daemon.containers == nil { return shutdownTimeout } graceTimeout := 5 for _, c := range daemon.containers.List() { stopTimeout := c.StopTimeout() if stopTimeout < 0 { return -1 } if stopTimeout+graceTimeout > shutdownTimeout { shutdownTimeout = stopTimeout + graceTimeout } } return shutdownTimeout } // Shutdown stops the daemon. func (daemon *Daemon) Shutdown() error { daemon.shutdown = true // Keep mounts and networking running on daemon shutdown if // we are to keep containers running and restore them. if daemon.configStore.LiveRestoreEnabled && daemon.containers != nil { // check if there are any running containers, if none we should do some cleanup if ls, err := daemon.Containers(&types.ContainerListOptions{}); len(ls) != 0 || err != nil { // metrics plugins still need some cleanup daemon.cleanupMetricsPlugins() return nil } } if daemon.containers != nil { logrus.Debugf("daemon configured with a %d seconds minimum shutdown timeout", daemon.configStore.ShutdownTimeout) logrus.Debugf("start clean shutdown of all containers with a %d seconds timeout...", daemon.ShutdownTimeout()) daemon.containers.ApplyAll(func(c *container.Container) { if !c.IsRunning() { return } log := logrus.WithField("container", c.ID) log.Debug("shutting down container") if err := daemon.shutdownContainer(c); err != nil { log.WithError(err).Error("failed to shut down container") return } if mountid, err := daemon.imageService.GetLayerMountID(c.ID); err == nil { daemon.cleanupMountsByID(mountid) } log.Debugf("shut down container") }) } if daemon.volumes != nil { if err := daemon.volumes.Shutdown(); err != nil { logrus.Errorf("Error shutting down volume store: %v", err) } } if daemon.imageService != nil { daemon.imageService.Cleanup() } // If we are part of a cluster, clean up cluster's stuff if daemon.clusterProvider != nil { logrus.Debugf("start clean shutdown of cluster resources...") daemon.DaemonLeavesCluster() } daemon.cleanupMetricsPlugins() // Shutdown plugins after containers and layerstore. Don't change the order. daemon.pluginShutdown() // trigger libnetwork Stop only if it's initialized if daemon.netController != nil { daemon.netController.Stop() } if daemon.containerdCli != nil { daemon.containerdCli.Close() } if daemon.mdDB != nil { daemon.mdDB.Close() } return daemon.cleanupMounts() } // Mount sets container.BaseFS // (is it not set coming in? why is it unset?) func (daemon *Daemon) Mount(container *container.Container) error { if container.RWLayer == nil { return errors.New("RWLayer of container " + container.ID + " is unexpectedly nil") } dir, err := container.RWLayer.Mount(container.GetMountLabel()) if err != nil { return err } logrus.WithField("container", container.ID).Debugf("container mounted via layerStore: %v", dir) if container.BaseFS != nil && container.BaseFS.Path() != dir.Path() { // The mount path reported by the graph driver should always be trusted on Windows, since the // volume path for a given mounted layer may change over time. This should only be an error // on non-Windows operating systems. 
if runtime.GOOS != "windows" { daemon.Unmount(container) return fmt.Errorf("Error: driver %s is returning inconsistent paths for container %s ('%s' then '%s')", daemon.imageService.GraphDriverName(), container.ID, container.BaseFS, dir) } } container.BaseFS = dir // TODO: combine these fields return nil } // Unmount unsets the container base filesystem func (daemon *Daemon) Unmount(container *container.Container) error { if container.RWLayer == nil { return errors.New("RWLayer of container " + container.ID + " is unexpectedly nil") } if err := container.RWLayer.Unmount(); err != nil { logrus.WithField("container", container.ID).WithError(err).Error("error unmounting container") return err } return nil } // Subnets return the IPv4 and IPv6 subnets of networks that are manager by Docker. func (daemon *Daemon) Subnets() ([]net.IPNet, []net.IPNet) { var v4Subnets []net.IPNet var v6Subnets []net.IPNet managedNetworks := daemon.netController.Networks() for _, managedNetwork := range managedNetworks { v4infos, v6infos := managedNetwork.Info().IpamInfo() for _, info := range v4infos { if info.IPAMData.Pool != nil { v4Subnets = append(v4Subnets, *info.IPAMData.Pool) } } for _, info := range v6infos { if info.IPAMData.Pool != nil { v6Subnets = append(v6Subnets, *info.IPAMData.Pool) } } } return v4Subnets, v6Subnets } // prepareTempDir prepares and returns the default directory to use // for temporary files. // If it doesn't exist, it is created. If it exists, its content is removed. func prepareTempDir(rootDir string) (string, error) { var tmpDir string if tmpDir = os.Getenv("DOCKER_TMPDIR"); tmpDir == "" { tmpDir = filepath.Join(rootDir, "tmp") newName := tmpDir + "-old" if err := os.Rename(tmpDir, newName); err == nil { go func() { if err := os.RemoveAll(newName); err != nil { logrus.Warnf("failed to delete old tmp directory: %s", newName) } }() } else if !os.IsNotExist(err) { logrus.Warnf("failed to rename %s for background deletion: %s. Deleting synchronously", tmpDir, err) if err := os.RemoveAll(tmpDir); err != nil { logrus.Warnf("failed to delete old tmp directory: %s", tmpDir) } } } return tmpDir, idtools.MkdirAllAndChown(tmpDir, 0700, idtools.CurrentIdentity()) } func (daemon *Daemon) setGenericResources(conf *config.Config) error { genericResources, err := config.ParseGenericResources(conf.NodeGenericResources) if err != nil { return err } daemon.genericResources = genericResources return nil } func setDefaultMtu(conf *config.Config) { // do nothing if the config does not have the default 0 value. if conf.Mtu != 0 { return } conf.Mtu = config.DefaultNetworkMtu } // IsShuttingDown tells whether the daemon is shutting down or not func (daemon *Daemon) IsShuttingDown() bool { return daemon.shutdown } // initDiscovery initializes the discovery watcher for this daemon. 
func (daemon *Daemon) initDiscovery(conf *config.Config) error { advertise, err := config.ParseClusterAdvertiseSettings(conf.ClusterStore, conf.ClusterAdvertise) if err != nil { if err == discovery.ErrDiscoveryDisabled { return nil } return err } conf.ClusterAdvertise = advertise discoveryWatcher, err := discovery.Init(conf.ClusterStore, conf.ClusterAdvertise, conf.ClusterOpts) if err != nil { return fmt.Errorf("discovery initialization failed (%v)", err) } daemon.discoveryWatcher = discoveryWatcher return nil } func isBridgeNetworkDisabled(conf *config.Config) bool { return conf.BridgeConfig.Iface == config.DisableNetworkBridge } func (daemon *Daemon) networkOptions(dconfig *config.Config, pg plugingetter.PluginGetter, activeSandboxes map[string]interface{}) ([]nwconfig.Option, error) { options := []nwconfig.Option{} if dconfig == nil { return options, nil } options = append(options, nwconfig.OptionExperimental(dconfig.Experimental)) options = append(options, nwconfig.OptionDataDir(dconfig.Root)) options = append(options, nwconfig.OptionExecRoot(dconfig.GetExecRoot())) dd := runconfig.DefaultDaemonNetworkMode() dn := runconfig.DefaultDaemonNetworkMode().NetworkName() options = append(options, nwconfig.OptionDefaultDriver(string(dd))) options = append(options, nwconfig.OptionDefaultNetwork(dn)) if strings.TrimSpace(dconfig.ClusterStore) != "" { kv := strings.Split(dconfig.ClusterStore, "://") if len(kv) != 2 { return nil, errors.New("kv store daemon config must be of the form KV-PROVIDER://KV-URL") } options = append(options, nwconfig.OptionKVProvider(kv[0])) options = append(options, nwconfig.OptionKVProviderURL(kv[1])) } if len(dconfig.ClusterOpts) > 0 { options = append(options, nwconfig.OptionKVOpts(dconfig.ClusterOpts)) } if daemon.discoveryWatcher != nil { options = append(options, nwconfig.OptionDiscoveryWatcher(daemon.discoveryWatcher)) } if dconfig.ClusterAdvertise != "" { options = append(options, nwconfig.OptionDiscoveryAddress(dconfig.ClusterAdvertise)) } options = append(options, nwconfig.OptionLabels(dconfig.Labels)) options = append(options, driverOptions(dconfig)...) if len(dconfig.NetworkConfig.DefaultAddressPools.Value()) > 0 { options = append(options, nwconfig.OptionDefaultAddressPoolConfig(dconfig.NetworkConfig.DefaultAddressPools.Value())) } if daemon.configStore != nil && daemon.configStore.LiveRestoreEnabled && len(activeSandboxes) != 0 { options = append(options, nwconfig.OptionActiveSandboxes(activeSandboxes)) } if pg != nil { options = append(options, nwconfig.OptionPluginGetter(pg)) } options = append(options, nwconfig.OptionNetworkControlPlaneMTU(dconfig.NetworkControlPlaneMTU)) return options, nil } // GetCluster returns the cluster func (daemon *Daemon) GetCluster() Cluster { return daemon.cluster } // SetCluster sets the cluster func (daemon *Daemon) SetCluster(cluster Cluster) { daemon.cluster = cluster } func (daemon *Daemon) pluginShutdown() { manager := daemon.pluginManager // Check for a valid manager object. In error conditions, daemon init can fail // and shutdown called, before plugin manager is initialized. 
if manager != nil { manager.Shutdown() } } // PluginManager returns current pluginManager associated with the daemon func (daemon *Daemon) PluginManager() *plugin.Manager { // set up before daemon to avoid this method return daemon.pluginManager } // PluginGetter returns current pluginStore associated with the daemon func (daemon *Daemon) PluginGetter() *plugin.Store { return daemon.PluginStore } // CreateDaemonRoot creates the root for the daemon func CreateDaemonRoot(config *config.Config) error { // get the canonical path to the Docker root directory var realRoot string if _, err := os.Stat(config.Root); err != nil && os.IsNotExist(err) { realRoot = config.Root } else { realRoot, err = fileutils.ReadSymlinkedDirectory(config.Root) if err != nil { return fmt.Errorf("Unable to get the full path to root (%s): %s", config.Root, err) } } idMapping, err := setupRemappedRoot(config) if err != nil { return err } return setupDaemonRoot(config, realRoot, idMapping.RootPair()) } // checkpointAndSave grabs a container lock to safely call container.CheckpointTo func (daemon *Daemon) checkpointAndSave(container *container.Container) error { container.Lock() defer container.Unlock() if err := container.CheckpointTo(daemon.containersReplica); err != nil { return fmt.Errorf("Error saving container state: %v", err) } return nil } // because the CLI sends a -1 when it wants to unset the swappiness value // we need to clear it on the server side func fixMemorySwappiness(resources *containertypes.Resources) { if resources.MemorySwappiness != nil && *resources.MemorySwappiness == -1 { resources.MemorySwappiness = nil } } // GetAttachmentStore returns current attachment store associated with the daemon func (daemon *Daemon) GetAttachmentStore() *network.AttachmentStore { return &daemon.attachmentStore } // IdentityMapping returns uid/gid mapping or a SID (in the case of Windows) for the builder func (daemon *Daemon) IdentityMapping() *idtools.IdentityMapping { return daemon.idMapping } // ImageService returns the Daemon's ImageService func (daemon *Daemon) ImageService() *images.ImageService { return daemon.imageService } // BuilderBackend returns the backend used by builder func (daemon *Daemon) BuilderBackend() builder.Backend { return struct { *Daemon *images.ImageService }{daemon, daemon.imageService} }
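
The restore() code above bounds its parallel container-startup jobs with a sync.WaitGroup plus a weighted semaphore (golang.org/x/sync/semaphore). Below is a minimal, self-contained sketch of that pattern, not the daemon's actual code: the fixed limit of 4 and the print statement are illustrative stand-ins for adjustParallelLimit and daemon.load.

package main

import (
	"context"
	"fmt"
	"sync"

	"golang.org/x/sync/semaphore"
)

func main() {
	const parallelLimit = 4 // stand-in for adjustParallelLimit(len(dir), 128*runtime.NumCPU())

	var group sync.WaitGroup
	sem := semaphore.NewWeighted(int64(parallelLimit))

	for i := 0; i < 16; i++ {
		group.Add(1)
		go func(id int) {
			defer group.Done()
			// Block until one of the parallelLimit slots is free.
			_ = sem.Acquire(context.Background(), 1)
			defer sem.Release(1)

			fmt.Println("loading container", id) // stand-in for daemon.load(id)
		}(i)
	}
	group.Wait()
}
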
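
The after_content that follows differs from the struct shown above mainly in that the diskUsageRunning int32 flag is replaced by a usage singleflight.Group field, with golang.org/x/sync/singleflight added to the imports. The sketch below is a minimal, hedged illustration of what singleflight.Group provides in general -- concurrent callers using the same key share a single in-flight computation -- and is not the daemon's implementation; the "disk-usage" key and the collect function are made up for the example.

package main

import (
	"fmt"
	"sync"
	"time"

	"golang.org/x/sync/singleflight"
)

func main() {
	var group singleflight.Group
	var wg sync.WaitGroup

	// Stand-in for an expensive scan whose result every caller can share.
	collect := func() (interface{}, error) {
		time.Sleep(100 * time.Millisecond)
		return "usage-report", nil
	}

	for i := 0; i < 3; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			// Callers that arrive while a "disk-usage" call is in flight
			// wait for and share its result instead of starting a new one.
			v, _, shared := group.Do("disk-usage", collect)
			fmt.Printf("caller %d: %v (shared=%v)\n", i, v, shared)
		}(i)
	}
	wg.Wait()
}

Compared with a simple in-progress flag, this lets callers that arrive mid-scan share the result of the ongoing computation rather than being turned away or kicking off a second scan.
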
// Package daemon exposes the functions that occur on the host server // that the Docker daemon is running. // // In implementing the various functions of the daemon, there is often // a method-specific struct for configuring the runtime behavior. package daemon // import "github.com/docker/docker/daemon" import ( "context" "fmt" "io/ioutil" "net" "net/url" "os" "path" "path/filepath" "runtime" "strings" "sync" "time" "github.com/containerd/containerd" "github.com/containerd/containerd/defaults" "github.com/containerd/containerd/pkg/dialer" "github.com/containerd/containerd/pkg/userns" "github.com/containerd/containerd/remotes/docker" "github.com/docker/docker/api/types" containertypes "github.com/docker/docker/api/types/container" "github.com/docker/docker/api/types/swarm" "github.com/docker/docker/builder" "github.com/docker/docker/container" "github.com/docker/docker/daemon/config" "github.com/docker/docker/daemon/discovery" "github.com/docker/docker/daemon/events" "github.com/docker/docker/daemon/exec" _ "github.com/docker/docker/daemon/graphdriver/register" // register graph drivers "github.com/docker/docker/daemon/images" "github.com/docker/docker/daemon/logger" "github.com/docker/docker/daemon/network" "github.com/docker/docker/daemon/stats" dmetadata "github.com/docker/docker/distribution/metadata" "github.com/docker/docker/dockerversion" "github.com/docker/docker/errdefs" "github.com/docker/docker/image" "github.com/docker/docker/layer" "github.com/docker/docker/libcontainerd" libcontainerdtypes "github.com/docker/docker/libcontainerd/types" "github.com/docker/docker/libnetwork" "github.com/docker/docker/libnetwork/cluster" nwconfig "github.com/docker/docker/libnetwork/config" "github.com/docker/docker/pkg/fileutils" "github.com/docker/docker/pkg/idtools" "github.com/docker/docker/pkg/plugingetter" "github.com/docker/docker/pkg/system" "github.com/docker/docker/pkg/truncindex" "github.com/docker/docker/plugin" pluginexec "github.com/docker/docker/plugin/executor/containerd" refstore "github.com/docker/docker/reference" "github.com/docker/docker/registry" "github.com/docker/docker/runconfig" volumesservice "github.com/docker/docker/volume/service" "github.com/moby/buildkit/util/resolver" "github.com/moby/locker" "github.com/pkg/errors" "github.com/sirupsen/logrus" "go.etcd.io/bbolt" "golang.org/x/sync/semaphore" "golang.org/x/sync/singleflight" "google.golang.org/grpc" "google.golang.org/grpc/backoff" ) // ContainersNamespace is the name of the namespace used for users containers const ( ContainersNamespace = "moby" ) var ( errSystemNotSupported = errors.New("the Docker daemon is not supported on this platform") ) // Daemon holds information about the Docker daemon. 
type Daemon struct { ID string repository string containers container.Store containersReplica container.ViewDB execCommands *exec.Store imageService *images.ImageService idIndex *truncindex.TruncIndex configStore *config.Config statsCollector *stats.Collector defaultLogConfig containertypes.LogConfig RegistryService registry.Service EventsService *events.Events netController libnetwork.NetworkController volumes *volumesservice.VolumesService discoveryWatcher discovery.Reloader root string seccompEnabled bool apparmorEnabled bool shutdown bool idMapping *idtools.IdentityMapping graphDriver string // TODO: move graphDriver field to an InfoService PluginStore *plugin.Store // TODO: remove pluginManager *plugin.Manager linkIndex *linkIndex containerdCli *containerd.Client containerd libcontainerdtypes.Client defaultIsolation containertypes.Isolation // Default isolation mode on Windows clusterProvider cluster.Provider cluster Cluster genericResources []swarm.GenericResource metricsPluginListener net.Listener machineMemory uint64 seccompProfile []byte seccompProfilePath string usage singleflight.Group pruneRunning int32 hosts map[string]bool // hosts stores the addresses the daemon is listening on startupDone chan struct{} attachmentStore network.AttachmentStore attachableNetworkLock *locker.Locker // This is used for Windows which doesn't currently support running on containerd // It stores metadata for the content store (used for manifest caching) // This needs to be closed on daemon exit mdDB *bbolt.DB } // StoreHosts stores the addresses the daemon is listening on func (daemon *Daemon) StoreHosts(hosts []string) { if daemon.hosts == nil { daemon.hosts = make(map[string]bool) } for _, h := range hosts { daemon.hosts[h] = true } } // HasExperimental returns whether the experimental features of the daemon are enabled or not func (daemon *Daemon) HasExperimental() bool { return daemon.configStore != nil && daemon.configStore.Experimental } // Features returns the features map from configStore func (daemon *Daemon) Features() *map[string]bool { return &daemon.configStore.Features } // RegistryHosts returns registry configuration in containerd resolvers format func (daemon *Daemon) RegistryHosts() docker.RegistryHosts { var ( registryKey = "docker.io" mirrors = make([]string, len(daemon.configStore.Mirrors)) m = map[string]resolver.RegistryConfig{} ) // must trim "https://" or "http://" prefix for i, v := range daemon.configStore.Mirrors { if uri, err := url.Parse(v); err == nil { v = uri.Host } mirrors[i] = v } // set mirrors for default registry m[registryKey] = resolver.RegistryConfig{Mirrors: mirrors} for _, v := range daemon.configStore.InsecureRegistries { u, err := url.Parse(v) c := resolver.RegistryConfig{} if err == nil { v = u.Host t := true if u.Scheme == "http" { c.PlainHTTP = &t } else { c.Insecure = &t } } m[v] = c } for k, v := range m { if d, err := registry.HostCertsDir(k); err == nil { v.TLSConfigDir = []string{d} m[k] = v } } certsDir := registry.CertsDir() if fis, err := ioutil.ReadDir(certsDir); err == nil { for _, fi := range fis { if _, ok := m[fi.Name()]; !ok { m[fi.Name()] = resolver.RegistryConfig{ TLSConfigDir: []string{filepath.Join(certsDir, fi.Name())}, } } } } return resolver.NewRegistryConfig(m) } func (daemon *Daemon) restore() error { var mapLock sync.Mutex containers := make(map[string]*container.Container) logrus.Info("Loading containers: start.") dir, err := ioutil.ReadDir(daemon.repository) if err != nil { return err } // parallelLimit is the maximum number 
of parallel startup jobs that we // allow (this is the limited used for all startup semaphores). The multipler // (128) was chosen after some fairly significant benchmarking -- don't change // it unless you've tested it significantly (this value is adjusted if // RLIMIT_NOFILE is small to avoid EMFILE). parallelLimit := adjustParallelLimit(len(dir), 128*runtime.NumCPU()) // Re-used for all parallel startup jobs. var group sync.WaitGroup sem := semaphore.NewWeighted(int64(parallelLimit)) for _, v := range dir { group.Add(1) go func(id string) { defer group.Done() _ = sem.Acquire(context.Background(), 1) defer sem.Release(1) log := logrus.WithField("container", id) c, err := daemon.load(id) if err != nil { log.WithError(err).Error("failed to load container") return } if !system.IsOSSupported(c.OS) { log.Errorf("failed to load container: %s (%q)", system.ErrNotSupportedOperatingSystem, c.OS) return } // Ignore the container if it does not support the current driver being used by the graph if (c.Driver == "" && daemon.graphDriver == "aufs") || c.Driver == daemon.graphDriver { rwlayer, err := daemon.imageService.GetLayerByID(c.ID) if err != nil { log.WithError(err).Error("failed to load container mount") return } c.RWLayer = rwlayer log.WithFields(logrus.Fields{ "running": c.IsRunning(), "paused": c.IsPaused(), }).Debug("loaded container") mapLock.Lock() containers[c.ID] = c mapLock.Unlock() } else { log.Debugf("cannot load container because it was created with another storage driver") } }(v.Name()) } group.Wait() removeContainers := make(map[string]*container.Container) restartContainers := make(map[*container.Container]chan struct{}) activeSandboxes := make(map[string]interface{}) for _, c := range containers { group.Add(1) go func(c *container.Container) { defer group.Done() _ = sem.Acquire(context.Background(), 1) defer sem.Release(1) log := logrus.WithField("container", c.ID) if err := daemon.registerName(c); err != nil { log.WithError(err).Errorf("failed to register container name: %s", c.Name) mapLock.Lock() delete(containers, c.ID) mapLock.Unlock() return } if err := daemon.Register(c); err != nil { log.WithError(err).Error("failed to register container") mapLock.Lock() delete(containers, c.ID) mapLock.Unlock() return } }(c) } group.Wait() for _, c := range containers { group.Add(1) go func(c *container.Container) { defer group.Done() _ = sem.Acquire(context.Background(), 1) defer sem.Release(1) log := logrus.WithField("container", c.ID) daemon.backportMountSpec(c) if err := daemon.checkpointAndSave(c); err != nil { log.WithError(err).Error("error saving backported mountspec to disk") } daemon.setStateCounter(c) logger := func(c *container.Container) *logrus.Entry { return log.WithFields(logrus.Fields{ "running": c.IsRunning(), "paused": c.IsPaused(), "restarting": c.IsRestarting(), }) } logger(c).Debug("restoring container") var ( err error alive bool ec uint32 exitedAt time.Time process libcontainerdtypes.Process ) alive, _, process, err = daemon.containerd.Restore(context.Background(), c.ID, c.InitializeStdio) if err != nil && !errdefs.IsNotFound(err) { logger(c).WithError(err).Error("failed to restore container with containerd") return } logger(c).Debugf("alive: %v", alive) if !alive { // If process is not nil, cleanup dead container from containerd. // If process is nil then the above `containerd.Restore` returned an errdefs.NotFoundError, // and docker's view of the container state will be updated accorrdingly via SetStopped further down. 
if process != nil { logger(c).Debug("cleaning up dead container process") ec, exitedAt, err = process.Delete(context.Background()) if err != nil && !errdefs.IsNotFound(err) { logger(c).WithError(err).Error("failed to delete container from containerd") return } } } else if !daemon.configStore.LiveRestoreEnabled { logger(c).Debug("shutting down container considered alive by containerd") if err := daemon.shutdownContainer(c); err != nil && !errdefs.IsNotFound(err) { log.WithError(err).Error("error shutting down container") return } c.ResetRestartManager(false) } if c.IsRunning() || c.IsPaused() { logger(c).Debug("syncing container on disk state with real state") c.RestartManager().Cancel() // manually start containers because some need to wait for swarm networking if c.IsPaused() && alive { s, err := daemon.containerd.Status(context.Background(), c.ID) if err != nil { logger(c).WithError(err).Error("failed to get container status") } else { logger(c).WithField("state", s).Info("restored container paused") switch s { case containerd.Paused, containerd.Pausing: // nothing to do case containerd.Stopped: alive = false case containerd.Unknown: log.Error("unknown status for paused container during restore") default: // running c.Lock() c.Paused = false daemon.setStateCounter(c) if err := c.CheckpointTo(daemon.containersReplica); err != nil { log.WithError(err).Error("failed to update paused container state") } c.Unlock() } } } if !alive { logger(c).Debug("setting stopped state") c.Lock() c.SetStopped(&container.ExitStatus{ExitCode: int(ec), ExitedAt: exitedAt}) daemon.Cleanup(c) if err := c.CheckpointTo(daemon.containersReplica); err != nil { log.WithError(err).Error("failed to update stopped container state") } c.Unlock() logger(c).Debug("set stopped state") } // we call Mount and then Unmount to get BaseFs of the container if err := daemon.Mount(c); err != nil { // The mount is unlikely to fail. However, in case mount fails // the container should be allowed to restore here. Some functionalities // (like docker exec -u user) might be missing but container is able to be // stopped/restarted/removed. // See #29365 for related information. // The error is only logged here. logger(c).WithError(err).Warn("failed to mount container to get BaseFs path") } else { if err := daemon.Unmount(c); err != nil { logger(c).WithError(err).Warn("failed to umount container to get BaseFs path") } } c.ResetRestartManager(false) if !c.HostConfig.NetworkMode.IsContainer() && c.IsRunning() { options, err := daemon.buildSandboxOptions(c) if err != nil { logger(c).WithError(err).Warn("failed to build sandbox option to restore container") } mapLock.Lock() activeSandboxes[c.NetworkSettings.SandboxID] = options mapLock.Unlock() } } // get list of containers we need to restart // Do not autostart containers which // has endpoints in a swarm scope // network yet since the cluster is // not initialized yet. We will start // it after the cluster is // initialized. if daemon.configStore.AutoRestart && c.ShouldRestart() && !c.NetworkSettings.HasSwarmEndpoint && c.HasBeenStartedBefore { mapLock.Lock() restartContainers[c] = make(chan struct{}) mapLock.Unlock() } else if c.HostConfig != nil && c.HostConfig.AutoRemove { mapLock.Lock() removeContainers[c.ID] = c mapLock.Unlock() } c.Lock() if c.RemovalInProgress { // We probably crashed in the middle of a removal, reset // the flag. 
// // We DO NOT remove the container here as we do not // know if the user had requested for either the // associated volumes, network links or both to also // be removed. So we put the container in the "dead" // state and leave further processing up to them. c.RemovalInProgress = false c.Dead = true if err := c.CheckpointTo(daemon.containersReplica); err != nil { log.WithError(err).Error("failed to update RemovalInProgress container state") } else { log.Debugf("reset RemovalInProgress state for container") } } c.Unlock() logger(c).Debug("done restoring container") }(c) } group.Wait() daemon.netController, err = daemon.initNetworkController(daemon.configStore, activeSandboxes) if err != nil { return fmt.Errorf("Error initializing network controller: %v", err) } // Now that all the containers are registered, register the links for _, c := range containers { group.Add(1) go func(c *container.Container) { _ = sem.Acquire(context.Background(), 1) if err := daemon.registerLinks(c, c.HostConfig); err != nil { logrus.WithField("container", c.ID).WithError(err).Error("failed to register link for container") } sem.Release(1) group.Done() }(c) } group.Wait() for c, notifier := range restartContainers { group.Add(1) go func(c *container.Container, chNotify chan struct{}) { _ = sem.Acquire(context.Background(), 1) log := logrus.WithField("container", c.ID) log.Debug("starting container") // ignore errors here as this is a best effort to wait for children to be // running before we try to start the container children := daemon.children(c) timeout := time.NewTimer(5 * time.Second) defer timeout.Stop() for _, child := range children { if notifier, exists := restartContainers[child]; exists { select { case <-notifier: case <-timeout.C: } } } // Make sure networks are available before starting daemon.waitForNetworks(c) if err := daemon.containerStart(c, "", "", true); err != nil { log.WithError(err).Error("failed to start container") } close(chNotify) sem.Release(1) group.Done() }(c, notifier) } group.Wait() for id := range removeContainers { group.Add(1) go func(cid string) { _ = sem.Acquire(context.Background(), 1) if err := daemon.ContainerRm(cid, &types.ContainerRmConfig{ForceRemove: true, RemoveVolume: true}); err != nil { logrus.WithField("container", cid).WithError(err).Error("failed to remove container") } sem.Release(1) group.Done() }(id) } group.Wait() // any containers that were started above would already have had this done, // however we need to now prepare the mountpoints for the rest of the containers as well. // This shouldn't cause any issue running on the containers that already had this run. // This must be run after any containers with a restart policy so that containerized plugins // can have a chance to be running before we try to initialize them. for _, c := range containers { // if the container has restart policy, do not // prepare the mountpoints since it has been done on restarting. // This is to speed up the daemon start when a restart container // has a volume and the volume driver is not available. if _, ok := restartContainers[c]; ok { continue } else if _, ok := removeContainers[c.ID]; ok { // container is automatically removed, skip it. 
continue } group.Add(1) go func(c *container.Container) { _ = sem.Acquire(context.Background(), 1) if err := daemon.prepareMountPoints(c); err != nil { logrus.WithField("container", c.ID).WithError(err).Error("failed to prepare mountpoints for container") } sem.Release(1) group.Done() }(c) } group.Wait() logrus.Info("Loading containers: done.") return nil } // RestartSwarmContainers restarts any autostart container which has a // swarm endpoint. func (daemon *Daemon) RestartSwarmContainers() { ctx := context.Background() // parallelLimit is the maximum number of parallel startup jobs that we // allow (this is the limited used for all startup semaphores). The multipler // (128) was chosen after some fairly significant benchmarking -- don't change // it unless you've tested it significantly (this value is adjusted if // RLIMIT_NOFILE is small to avoid EMFILE). parallelLimit := adjustParallelLimit(len(daemon.List()), 128*runtime.NumCPU()) var group sync.WaitGroup sem := semaphore.NewWeighted(int64(parallelLimit)) for _, c := range daemon.List() { if !c.IsRunning() && !c.IsPaused() { // Autostart all the containers which has a // swarm endpoint now that the cluster is // initialized. if daemon.configStore.AutoRestart && c.ShouldRestart() && c.NetworkSettings.HasSwarmEndpoint && c.HasBeenStartedBefore { group.Add(1) go func(c *container.Container) { if err := sem.Acquire(ctx, 1); err != nil { // ctx is done. group.Done() return } if err := daemon.containerStart(c, "", "", true); err != nil { logrus.WithField("container", c.ID).WithError(err).Error("failed to start swarm container") } sem.Release(1) group.Done() }(c) } } } group.Wait() } // waitForNetworks is used during daemon initialization when starting up containers // It ensures that all of a container's networks are available before the daemon tries to start the container. // In practice it just makes sure the discovery service is available for containers which use a network that require discovery. func (daemon *Daemon) waitForNetworks(c *container.Container) { if daemon.discoveryWatcher == nil { return } // Make sure if the container has a network that requires discovery that the discovery service is available before starting for netName := range c.NetworkSettings.Networks { // If we get `ErrNoSuchNetwork` here, we can assume that it is due to discovery not being ready // Most likely this is because the K/V store used for discovery is in a container and needs to be started if _, err := daemon.netController.NetworkByName(netName); err != nil { if _, ok := err.(libnetwork.ErrNoSuchNetwork); !ok { continue } // use a longish timeout here due to some slowdowns in libnetwork if the k/v store is on anything other than --net=host // FIXME: why is this slow??? dur := 60 * time.Second timer := time.NewTimer(dur) logrus.WithField("container", c.ID).Debugf("Container %s waiting for network to be ready", c.Name) select { case <-daemon.discoveryWatcher.ReadyCh(): case <-timer.C: } timer.Stop() return } } } func (daemon *Daemon) children(c *container.Container) map[string]*container.Container { return daemon.linkIndex.children(c) } // parents returns the names of the parent containers of the container // with the given name. 
func (daemon *Daemon) parents(c *container.Container) map[string]*container.Container { return daemon.linkIndex.parents(c) } func (daemon *Daemon) registerLink(parent, child *container.Container, alias string) error { fullName := path.Join(parent.Name, alias) if err := daemon.containersReplica.ReserveName(fullName, child.ID); err != nil { if err == container.ErrNameReserved { logrus.Warnf("error registering link for %s, to %s, as alias %s, ignoring: %v", parent.ID, child.ID, alias, err) return nil } return err } daemon.linkIndex.link(parent, child, fullName) return nil } // DaemonJoinsCluster informs the daemon has joined the cluster and provides // the handler to query the cluster component func (daemon *Daemon) DaemonJoinsCluster(clusterProvider cluster.Provider) { daemon.setClusterProvider(clusterProvider) } // DaemonLeavesCluster informs the daemon has left the cluster func (daemon *Daemon) DaemonLeavesCluster() { // Daemon is in charge of removing the attachable networks with // connected containers when the node leaves the swarm daemon.clearAttachableNetworks() // We no longer need the cluster provider, stop it now so that // the network agent will stop listening to cluster events. daemon.setClusterProvider(nil) // Wait for the networking cluster agent to stop daemon.netController.AgentStopWait() // Daemon is in charge of removing the ingress network when the // node leaves the swarm. Wait for job to be done or timeout. // This is called also on graceful daemon shutdown. We need to // wait, because the ingress release has to happen before the // network controller is stopped. if done, err := daemon.ReleaseIngress(); err == nil { timeout := time.NewTimer(5 * time.Second) defer timeout.Stop() select { case <-done: case <-timeout.C: logrus.Warn("timeout while waiting for ingress network removal") } } else { logrus.Warnf("failed to initiate ingress network removal: %v", err) } daemon.attachmentStore.ClearAttachments() } // setClusterProvider sets a component for querying the current cluster state. func (daemon *Daemon) setClusterProvider(clusterProvider cluster.Provider) { daemon.clusterProvider = clusterProvider daemon.netController.SetClusterProvider(clusterProvider) daemon.attachableNetworkLock = locker.New() } // IsSwarmCompatible verifies if the current daemon // configuration is compatible with the swarm mode func (daemon *Daemon) IsSwarmCompatible() error { if daemon.configStore == nil { return nil } return daemon.configStore.IsSwarmCompatible() } // NewDaemon sets up everything for the daemon to be able to service // requests from the webserver. func NewDaemon(ctx context.Context, config *config.Config, pluginStore *plugin.Store) (daemon *Daemon, err error) { setDefaultMtu(config) registryService, err := registry.NewService(config.ServiceOptions) if err != nil { return nil, err } // Ensure that we have a correct root key limit for launching containers. if err := modifyRootKeyLimit(); err != nil { logrus.Warnf("unable to modify root key limit, number of containers could be limited by this quota: %v", err) } // Ensure we have compatible and valid configuration options if err := verifyDaemonSettings(config); err != nil { return nil, err } // Do we have a disabled network? 
config.DisableBridge = isBridgeNetworkDisabled(config) // Setup the resolv.conf setupResolvConf(config) // Verify the platform is supported as a daemon if !platformSupported { return nil, errSystemNotSupported } // Validate platform-specific requirements if err := checkSystem(); err != nil { return nil, err } idMapping, err := setupRemappedRoot(config) if err != nil { return nil, err } rootIDs := idMapping.RootPair() if err := setupDaemonProcess(config); err != nil { return nil, err } // set up the tmpDir to use a canonical path tmp, err := prepareTempDir(config.Root) if err != nil { return nil, fmt.Errorf("Unable to get the TempDir under %s: %s", config.Root, err) } realTmp, err := fileutils.ReadSymlinkedDirectory(tmp) if err != nil { return nil, fmt.Errorf("Unable to get the full path to the TempDir (%s): %s", tmp, err) } if isWindows { if _, err := os.Stat(realTmp); err != nil && os.IsNotExist(err) { if err := system.MkdirAll(realTmp, 0700); err != nil { return nil, fmt.Errorf("Unable to create the TempDir (%s): %s", realTmp, err) } } os.Setenv("TEMP", realTmp) os.Setenv("TMP", realTmp) } else { os.Setenv("TMPDIR", realTmp) } d := &Daemon{ configStore: config, PluginStore: pluginStore, startupDone: make(chan struct{}), } // Ensure the daemon is properly shutdown if there is a failure during // initialization defer func() { if err != nil { if err := d.Shutdown(); err != nil { logrus.Error(err) } } }() if err := d.setGenericResources(config); err != nil { return nil, err } // set up SIGUSR1 handler on Unix-like systems, or a Win32 global event // on Windows to dump Go routine stacks stackDumpDir := config.Root if execRoot := config.GetExecRoot(); execRoot != "" { stackDumpDir = execRoot } d.setupDumpStackTrap(stackDumpDir) if err := d.setupSeccompProfile(); err != nil { return nil, err } // Set the default isolation mode (only applicable on Windows) if err := d.setDefaultIsolation(); err != nil { return nil, fmt.Errorf("error setting default isolation mode: %v", err) } if err := configureMaxThreads(config); err != nil { logrus.Warnf("Failed to configure golang's threads limit: %v", err) } // ensureDefaultAppArmorProfile does nothing if apparmor is disabled if err := ensureDefaultAppArmorProfile(); err != nil { logrus.Errorf(err.Error()) } daemonRepo := filepath.Join(config.Root, "containers") if err := idtools.MkdirAllAndChown(daemonRepo, 0701, idtools.CurrentIdentity()); err != nil { return nil, err } // Create the directory where we'll store the runtime scripts (i.e. in // order to support runtimeArgs) daemonRuntimes := filepath.Join(config.Root, "runtimes") if err := system.MkdirAll(daemonRuntimes, 0700); err != nil { return nil, err } if err := d.loadRuntimes(); err != nil { return nil, err } if isWindows { if err := system.MkdirAll(filepath.Join(config.Root, "credentialspecs"), 0); err != nil { return nil, err } } if isWindows { // On Windows we don't support the environment variable, or a user supplied graphdriver d.graphDriver = "windowsfilter" } else { // Unix platforms however run a single graphdriver for all containers, and it can // be set through an environment variable, a daemon start parameter, or chosen through // initialization of the layerstore through driver priority order for example. if drv := os.Getenv("DOCKER_DRIVER"); drv != "" { d.graphDriver = drv logrus.Infof("Setting the storage driver from the $DOCKER_DRIVER environment variable (%s)", drv) } else { d.graphDriver = config.GraphDriver // May still be empty. Layerstore init determines instead. 
} } d.RegistryService = registryService logger.RegisterPluginGetter(d.PluginStore) metricsSockPath, err := d.listenMetricsSock() if err != nil { return nil, err } registerMetricsPluginCallback(d.PluginStore, metricsSockPath) backoffConfig := backoff.DefaultConfig backoffConfig.MaxDelay = 3 * time.Second connParams := grpc.ConnectParams{ Backoff: backoffConfig, } gopts := []grpc.DialOption{ // WithBlock makes sure that the following containerd request // is reliable. // // NOTE: In one edge case with high load pressure, kernel kills // dockerd, containerd and containerd-shims caused by OOM. // When both dockerd and containerd restart, but containerd // will take time to recover all the existing containers. Before // containerd serving, dockerd will failed with gRPC error. // That bad thing is that restore action will still ignore the // any non-NotFound errors and returns running state for // already stopped container. It is unexpected behavior. And // we need to restart dockerd to make sure that anything is OK. // // It is painful. Add WithBlock can prevent the edge case. And // n common case, the containerd will be serving in shortly. // It is not harm to add WithBlock for containerd connection. grpc.WithBlock(), grpc.WithInsecure(), grpc.WithConnectParams(connParams), grpc.WithContextDialer(dialer.ContextDialer), // TODO(stevvooe): We may need to allow configuration of this on the client. grpc.WithDefaultCallOptions(grpc.MaxCallRecvMsgSize(defaults.DefaultMaxRecvMsgSize)), grpc.WithDefaultCallOptions(grpc.MaxCallSendMsgSize(defaults.DefaultMaxSendMsgSize)), } if config.ContainerdAddr != "" { d.containerdCli, err = containerd.New(config.ContainerdAddr, containerd.WithDefaultNamespace(config.ContainerdNamespace), containerd.WithDialOpts(gopts), containerd.WithTimeout(60*time.Second)) if err != nil { return nil, errors.Wrapf(err, "failed to dial %q", config.ContainerdAddr) } } createPluginExec := func(m *plugin.Manager) (plugin.Executor, error) { var pluginCli *containerd.Client // Windows is not currently using containerd, keep the // client as nil if config.ContainerdAddr != "" { pluginCli, err = containerd.New(config.ContainerdAddr, containerd.WithDefaultNamespace(config.ContainerdPluginNamespace), containerd.WithDialOpts(gopts), containerd.WithTimeout(60*time.Second)) if err != nil { return nil, errors.Wrapf(err, "failed to dial %q", config.ContainerdAddr) } } var rt types.Runtime if runtime.GOOS != "windows" { rtPtr, err := d.getRuntime(config.GetDefaultRuntimeName()) if err != nil { return nil, err } rt = *rtPtr } return pluginexec.New(ctx, getPluginExecRoot(config.Root), pluginCli, config.ContainerdPluginNamespace, m, rt) } // Plugin system initialization should happen before restore. Do not change order. 
d.pluginManager, err = plugin.NewManager(plugin.ManagerConfig{ Root: filepath.Join(config.Root, "plugins"), ExecRoot: getPluginExecRoot(config.Root), Store: d.PluginStore, CreateExecutor: createPluginExec, RegistryService: registryService, LiveRestoreEnabled: config.LiveRestoreEnabled, LogPluginEvent: d.LogPluginEvent, // todo: make private AuthzMiddleware: config.AuthzMiddleware, }) if err != nil { return nil, errors.Wrap(err, "couldn't create plugin manager") } if err := d.setupDefaultLogConfig(); err != nil { return nil, err } layerStore, err := layer.NewStoreFromOptions(layer.StoreOptions{ Root: config.Root, MetadataStorePathTemplate: filepath.Join(config.Root, "image", "%s", "layerdb"), GraphDriver: d.graphDriver, GraphDriverOptions: config.GraphOptions, IDMapping: idMapping, PluginGetter: d.PluginStore, ExperimentalEnabled: config.Experimental, OS: runtime.GOOS, }) if err != nil { return nil, err } // As layerstore initialization may set the driver d.graphDriver = layerStore.DriverName() // Configure and validate the kernels security support. Note this is a Linux/FreeBSD // operation only, so it is safe to pass *just* the runtime OS graphdriver. if err := configureKernelSecuritySupport(config, d.graphDriver); err != nil { return nil, err } imageRoot := filepath.Join(config.Root, "image", d.graphDriver) ifs, err := image.NewFSStoreBackend(filepath.Join(imageRoot, "imagedb")) if err != nil { return nil, err } imageStore, err := image.NewImageStore(ifs, layerStore) if err != nil { return nil, err } d.volumes, err = volumesservice.NewVolumeService(config.Root, d.PluginStore, rootIDs, d) if err != nil { return nil, err } trustKey, err := loadOrCreateTrustKey(config.TrustKeyPath) if err != nil { return nil, err } trustDir := filepath.Join(config.Root, "trust") if err := system.MkdirAll(trustDir, 0700); err != nil { return nil, err } // We have a single tag/reference store for the daemon globally. However, it's // stored under the graphdriver. On host platforms which only support a single // container OS, but multiple selectable graphdrivers, this means depending on which // graphdriver is chosen, the global reference store is under there. For // platforms which support multiple container operating systems, this is slightly // more problematic as where does the global ref store get located? Fortunately, // for Windows, which is currently the only daemon supporting multiple container // operating systems, the list of graphdrivers available isn't user configurable. // For backwards compatibility, we just put it under the windowsfilter // directory regardless. refStoreLocation := filepath.Join(imageRoot, `repositories.json`) rs, err := refstore.NewReferenceStore(refStoreLocation) if err != nil { return nil, fmt.Errorf("Couldn't create reference store repository: %s", err) } distributionMetadataStore, err := dmetadata.NewFSMetadataStore(filepath.Join(imageRoot, "distribution")) if err != nil { return nil, err } // Discovery is only enabled when the daemon is launched with an address to advertise. When // initialized, the daemon is registered and we can store the discovery backend as it's read-only if err := d.initDiscovery(config); err != nil { return nil, err } sysInfo := d.RawSysInfo() for _, w := range sysInfo.Warnings { logrus.Warn(w) } // Check if Devices cgroup is mounted, it is hard requirement for container security, // on Linux. 
if runtime.GOOS == "linux" && !sysInfo.CgroupDevicesEnabled && !userns.RunningInUserNS() { return nil, errors.New("Devices cgroup isn't mounted") } d.ID = trustKey.PublicKey().KeyID() d.repository = daemonRepo d.containers = container.NewMemoryStore() if d.containersReplica, err = container.NewViewDB(); err != nil { return nil, err } d.execCommands = exec.NewStore() d.idIndex = truncindex.NewTruncIndex([]string{}) d.statsCollector = d.newStatsCollector(1 * time.Second) d.EventsService = events.New() d.root = config.Root d.idMapping = idMapping d.seccompEnabled = sysInfo.Seccomp d.apparmorEnabled = sysInfo.AppArmor d.linkIndex = newLinkIndex() imgSvcConfig := images.ImageServiceConfig{ ContainerStore: d.containers, DistributionMetadataStore: distributionMetadataStore, EventsService: d.EventsService, ImageStore: imageStore, LayerStore: layerStore, MaxConcurrentDownloads: *config.MaxConcurrentDownloads, MaxConcurrentUploads: *config.MaxConcurrentUploads, MaxDownloadAttempts: *config.MaxDownloadAttempts, ReferenceStore: rs, RegistryService: registryService, TrustKey: trustKey, ContentNamespace: config.ContainerdNamespace, } // containerd is not currently supported with Windows. // So sometimes d.containerdCli will be nil // In that case we'll create a local content store... but otherwise we'll use containerd if d.containerdCli != nil { imgSvcConfig.Leases = d.containerdCli.LeasesService() imgSvcConfig.ContentStore = d.containerdCli.ContentStore() } else { cs, lm, err := d.configureLocalContentStore() if err != nil { return nil, err } imgSvcConfig.ContentStore = cs imgSvcConfig.Leases = lm } // TODO: imageStore, distributionMetadataStore, and ReferenceStore are only // used above to run migration. They could be initialized in ImageService // if migration is called from daemon/images. layerStore might move as well. d.imageService = images.NewImageService(imgSvcConfig) go d.execCommandGC() d.containerd, err = libcontainerd.NewClient(ctx, d.containerdCli, filepath.Join(config.ExecRoot, "containerd"), config.ContainerdNamespace, d) if err != nil { return nil, err } if err := d.restore(); err != nil { return nil, err } close(d.startupDone) info := d.SystemInfo() engineInfo.WithValues( dockerversion.Version, dockerversion.GitCommit, info.Architecture, info.Driver, info.KernelVersion, info.OperatingSystem, info.OSType, info.OSVersion, info.ID, ).Set(1) engineCpus.Set(float64(info.NCPU)) engineMemory.Set(float64(info.MemTotal)) logrus.WithFields(logrus.Fields{ "version": dockerversion.Version, "commit": dockerversion.GitCommit, "graphdriver": d.graphDriver, }).Info("Docker daemon") return d, nil } // DistributionServices returns services controlling daemon storage func (daemon *Daemon) DistributionServices() images.DistributionServices { return daemon.imageService.DistributionServices() } func (daemon *Daemon) waitForStartupDone() { <-daemon.startupDone } func (daemon *Daemon) shutdownContainer(c *container.Container) error { stopTimeout := c.StopTimeout() // If container failed to exit in stopTimeout seconds of SIGTERM, then using the force if err := daemon.containerStop(c, stopTimeout); err != nil { return fmt.Errorf("Failed to stop container %s with error: %v", c.ID, err) } // Wait without timeout for the container to exit. // Ignore the result. <-c.Wait(context.Background(), container.WaitConditionNotRunning) return nil } // ShutdownTimeout returns the timeout (in seconds) before containers are forcibly // killed during shutdown. 
The default timeout can be configured both on the daemon // and per container, and the longest timeout will be used. A grace-period of // 5 seconds is added to the configured timeout. // // A negative (-1) timeout means "indefinitely", which means that containers // are not forcibly killed, and the daemon shuts down after all containers exit. func (daemon *Daemon) ShutdownTimeout() int { shutdownTimeout := daemon.configStore.ShutdownTimeout if shutdownTimeout < 0 { return -1 } if daemon.containers == nil { return shutdownTimeout } graceTimeout := 5 for _, c := range daemon.containers.List() { stopTimeout := c.StopTimeout() if stopTimeout < 0 { return -1 } if stopTimeout+graceTimeout > shutdownTimeout { shutdownTimeout = stopTimeout + graceTimeout } } return shutdownTimeout } // Shutdown stops the daemon. func (daemon *Daemon) Shutdown() error { daemon.shutdown = true // Keep mounts and networking running on daemon shutdown if // we are to keep containers running and restore them. if daemon.configStore.LiveRestoreEnabled && daemon.containers != nil { // check if there are any running containers, if none we should do some cleanup if ls, err := daemon.Containers(&types.ContainerListOptions{}); len(ls) != 0 || err != nil { // metrics plugins still need some cleanup daemon.cleanupMetricsPlugins() return nil } } if daemon.containers != nil { logrus.Debugf("daemon configured with a %d seconds minimum shutdown timeout", daemon.configStore.ShutdownTimeout) logrus.Debugf("start clean shutdown of all containers with a %d seconds timeout...", daemon.ShutdownTimeout()) daemon.containers.ApplyAll(func(c *container.Container) { if !c.IsRunning() { return } log := logrus.WithField("container", c.ID) log.Debug("shutting down container") if err := daemon.shutdownContainer(c); err != nil { log.WithError(err).Error("failed to shut down container") return } if mountid, err := daemon.imageService.GetLayerMountID(c.ID); err == nil { daemon.cleanupMountsByID(mountid) } log.Debugf("shut down container") }) } if daemon.volumes != nil { if err := daemon.volumes.Shutdown(); err != nil { logrus.Errorf("Error shutting down volume store: %v", err) } } if daemon.imageService != nil { daemon.imageService.Cleanup() } // If we are part of a cluster, clean up cluster's stuff if daemon.clusterProvider != nil { logrus.Debugf("start clean shutdown of cluster resources...") daemon.DaemonLeavesCluster() } daemon.cleanupMetricsPlugins() // Shutdown plugins after containers and layerstore. Don't change the order. daemon.pluginShutdown() // trigger libnetwork Stop only if it's initialized if daemon.netController != nil { daemon.netController.Stop() } if daemon.containerdCli != nil { daemon.containerdCli.Close() } if daemon.mdDB != nil { daemon.mdDB.Close() } return daemon.cleanupMounts() } // Mount sets container.BaseFS // (is it not set coming in? why is it unset?) func (daemon *Daemon) Mount(container *container.Container) error { if container.RWLayer == nil { return errors.New("RWLayer of container " + container.ID + " is unexpectedly nil") } dir, err := container.RWLayer.Mount(container.GetMountLabel()) if err != nil { return err } logrus.WithField("container", container.ID).Debugf("container mounted via layerStore: %v", dir) if container.BaseFS != nil && container.BaseFS.Path() != dir.Path() { // The mount path reported by the graph driver should always be trusted on Windows, since the // volume path for a given mounted layer may change over time. This should only be an error // on non-Windows operating systems. 
if runtime.GOOS != "windows" { daemon.Unmount(container) return fmt.Errorf("Error: driver %s is returning inconsistent paths for container %s ('%s' then '%s')", daemon.imageService.GraphDriverName(), container.ID, container.BaseFS, dir) } } container.BaseFS = dir // TODO: combine these fields return nil } // Unmount unsets the container base filesystem func (daemon *Daemon) Unmount(container *container.Container) error { if container.RWLayer == nil { return errors.New("RWLayer of container " + container.ID + " is unexpectedly nil") } if err := container.RWLayer.Unmount(); err != nil { logrus.WithField("container", container.ID).WithError(err).Error("error unmounting container") return err } return nil } // Subnets return the IPv4 and IPv6 subnets of networks that are manager by Docker. func (daemon *Daemon) Subnets() ([]net.IPNet, []net.IPNet) { var v4Subnets []net.IPNet var v6Subnets []net.IPNet managedNetworks := daemon.netController.Networks() for _, managedNetwork := range managedNetworks { v4infos, v6infos := managedNetwork.Info().IpamInfo() for _, info := range v4infos { if info.IPAMData.Pool != nil { v4Subnets = append(v4Subnets, *info.IPAMData.Pool) } } for _, info := range v6infos { if info.IPAMData.Pool != nil { v6Subnets = append(v6Subnets, *info.IPAMData.Pool) } } } return v4Subnets, v6Subnets } // prepareTempDir prepares and returns the default directory to use // for temporary files. // If it doesn't exist, it is created. If it exists, its content is removed. func prepareTempDir(rootDir string) (string, error) { var tmpDir string if tmpDir = os.Getenv("DOCKER_TMPDIR"); tmpDir == "" { tmpDir = filepath.Join(rootDir, "tmp") newName := tmpDir + "-old" if err := os.Rename(tmpDir, newName); err == nil { go func() { if err := os.RemoveAll(newName); err != nil { logrus.Warnf("failed to delete old tmp directory: %s", newName) } }() } else if !os.IsNotExist(err) { logrus.Warnf("failed to rename %s for background deletion: %s. Deleting synchronously", tmpDir, err) if err := os.RemoveAll(tmpDir); err != nil { logrus.Warnf("failed to delete old tmp directory: %s", tmpDir) } } } return tmpDir, idtools.MkdirAllAndChown(tmpDir, 0700, idtools.CurrentIdentity()) } func (daemon *Daemon) setGenericResources(conf *config.Config) error { genericResources, err := config.ParseGenericResources(conf.NodeGenericResources) if err != nil { return err } daemon.genericResources = genericResources return nil } func setDefaultMtu(conf *config.Config) { // do nothing if the config does not have the default 0 value. if conf.Mtu != 0 { return } conf.Mtu = config.DefaultNetworkMtu } // IsShuttingDown tells whether the daemon is shutting down or not func (daemon *Daemon) IsShuttingDown() bool { return daemon.shutdown } // initDiscovery initializes the discovery watcher for this daemon. 
func (daemon *Daemon) initDiscovery(conf *config.Config) error { advertise, err := config.ParseClusterAdvertiseSettings(conf.ClusterStore, conf.ClusterAdvertise) if err != nil { if err == discovery.ErrDiscoveryDisabled { return nil } return err } conf.ClusterAdvertise = advertise discoveryWatcher, err := discovery.Init(conf.ClusterStore, conf.ClusterAdvertise, conf.ClusterOpts) if err != nil { return fmt.Errorf("discovery initialization failed (%v)", err) } daemon.discoveryWatcher = discoveryWatcher return nil } func isBridgeNetworkDisabled(conf *config.Config) bool { return conf.BridgeConfig.Iface == config.DisableNetworkBridge } func (daemon *Daemon) networkOptions(dconfig *config.Config, pg plugingetter.PluginGetter, activeSandboxes map[string]interface{}) ([]nwconfig.Option, error) { options := []nwconfig.Option{} if dconfig == nil { return options, nil } options = append(options, nwconfig.OptionExperimental(dconfig.Experimental)) options = append(options, nwconfig.OptionDataDir(dconfig.Root)) options = append(options, nwconfig.OptionExecRoot(dconfig.GetExecRoot())) dd := runconfig.DefaultDaemonNetworkMode() dn := runconfig.DefaultDaemonNetworkMode().NetworkName() options = append(options, nwconfig.OptionDefaultDriver(string(dd))) options = append(options, nwconfig.OptionDefaultNetwork(dn)) if strings.TrimSpace(dconfig.ClusterStore) != "" { kv := strings.Split(dconfig.ClusterStore, "://") if len(kv) != 2 { return nil, errors.New("kv store daemon config must be of the form KV-PROVIDER://KV-URL") } options = append(options, nwconfig.OptionKVProvider(kv[0])) options = append(options, nwconfig.OptionKVProviderURL(kv[1])) } if len(dconfig.ClusterOpts) > 0 { options = append(options, nwconfig.OptionKVOpts(dconfig.ClusterOpts)) } if daemon.discoveryWatcher != nil { options = append(options, nwconfig.OptionDiscoveryWatcher(daemon.discoveryWatcher)) } if dconfig.ClusterAdvertise != "" { options = append(options, nwconfig.OptionDiscoveryAddress(dconfig.ClusterAdvertise)) } options = append(options, nwconfig.OptionLabels(dconfig.Labels)) options = append(options, driverOptions(dconfig)...) if len(dconfig.NetworkConfig.DefaultAddressPools.Value()) > 0 { options = append(options, nwconfig.OptionDefaultAddressPoolConfig(dconfig.NetworkConfig.DefaultAddressPools.Value())) } if daemon.configStore != nil && daemon.configStore.LiveRestoreEnabled && len(activeSandboxes) != 0 { options = append(options, nwconfig.OptionActiveSandboxes(activeSandboxes)) } if pg != nil { options = append(options, nwconfig.OptionPluginGetter(pg)) } options = append(options, nwconfig.OptionNetworkControlPlaneMTU(dconfig.NetworkControlPlaneMTU)) return options, nil } // GetCluster returns the cluster func (daemon *Daemon) GetCluster() Cluster { return daemon.cluster } // SetCluster sets the cluster func (daemon *Daemon) SetCluster(cluster Cluster) { daemon.cluster = cluster } func (daemon *Daemon) pluginShutdown() { manager := daemon.pluginManager // Check for a valid manager object. In error conditions, daemon init can fail // and shutdown called, before plugin manager is initialized. 
if manager != nil { manager.Shutdown() } } // PluginManager returns current pluginManager associated with the daemon func (daemon *Daemon) PluginManager() *plugin.Manager { // set up before daemon to avoid this method return daemon.pluginManager } // PluginGetter returns current pluginStore associated with the daemon func (daemon *Daemon) PluginGetter() *plugin.Store { return daemon.PluginStore } // CreateDaemonRoot creates the root for the daemon func CreateDaemonRoot(config *config.Config) error { // get the canonical path to the Docker root directory var realRoot string if _, err := os.Stat(config.Root); err != nil && os.IsNotExist(err) { realRoot = config.Root } else { realRoot, err = fileutils.ReadSymlinkedDirectory(config.Root) if err != nil { return fmt.Errorf("Unable to get the full path to root (%s): %s", config.Root, err) } } idMapping, err := setupRemappedRoot(config) if err != nil { return err } return setupDaemonRoot(config, realRoot, idMapping.RootPair()) } // checkpointAndSave grabs a container lock to safely call container.CheckpointTo func (daemon *Daemon) checkpointAndSave(container *container.Container) error { container.Lock() defer container.Unlock() if err := container.CheckpointTo(daemon.containersReplica); err != nil { return fmt.Errorf("Error saving container state: %v", err) } return nil } // because the CLI sends a -1 when it wants to unset the swappiness value // we need to clear it on the server side func fixMemorySwappiness(resources *containertypes.Resources) { if resources.MemorySwappiness != nil && *resources.MemorySwappiness == -1 { resources.MemorySwappiness = nil } } // GetAttachmentStore returns current attachment store associated with the daemon func (daemon *Daemon) GetAttachmentStore() *network.AttachmentStore { return &daemon.attachmentStore } // IdentityMapping returns uid/gid mapping or a SID (in the case of Windows) for the builder func (daemon *Daemon) IdentityMapping() *idtools.IdentityMapping { return daemon.idMapping } // ImageService returns the Daemon's ImageService func (daemon *Daemon) ImageService() *images.ImageService { return daemon.imageService } // BuilderBackend returns the backend used by builder func (daemon *Daemon) BuilderBackend() builder.Backend { return struct { *Daemon *images.ImageService }{daemon, daemon.imageService} }
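The `BuilderBackend` method at the end of the file above composes the daemon and its image service by embedding both in an anonymous struct, so the combined value satisfies the `builder.Backend` interface through method promotion. A minimal, self-contained sketch of that embedding pattern follows; the `Runner`/`Storer`/`Backend` interfaces and the concrete types are illustrative names, not moby's actual ones:

```go
// Sketch of satisfying a composite interface by embedding two implementations
// in an anonymous struct. All names here are illustrative.
package main

import "fmt"

type Runner interface{ Run() string }
type Storer interface{ Store() string }

// Backend needs both capabilities.
type Backend interface {
	Runner
	Storer
}

type daemon struct{}

func (*daemon) Run() string { return "run" }

type imageService struct{}

func (*imageService) Store() string { return "store" }

// backend combines the two concrete types the same way BuilderBackend does:
// the anonymous struct promotes the methods of both embedded values, so the
// result satisfies Backend without declaring a named wrapper type.
func backend(d *daemon, i *imageService) Backend {
	return struct {
		*daemon
		*imageService
	}{d, i}
}

func main() {
	b := backend(&daemon{}, &imageService{})
	fmt.Println(b.Run(), b.Store())
}
```

The anonymous struct keeps the composition local to the one place that needs it, which is why the real method can return it inline instead of introducing a dedicated type.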
rvolosatovs
919f2ef7641d2139211aafe12abbbf1f81689c01
b88acf7a7a571b0fe054b8315c79caa16b0c92df
Not for this PR, but should we use the same approach for pruning, and use a `singleFlightGroup` for that as well?
thaJeztah
4,508
moby/moby
42,715
Share disk usage computation results between concurrent invocations
**- What I did**

Share disk usage computation results between concurrent invocations instead of returning an error.

**- How I did it**

- Use `x/sync/singleflight.Group`, which ensures the computation is performed by at most one goroutine at a time and propagates the result to all goroutines that call the method concurrently.
- Extract the disk usage computation functionality for containers and images for consistency with other object types and better separation of concerns. It also fits nicely with the current design.

**- How to verify it**

E.g.

```
docker system df&
docker system df&
docker system df
```

Or:

```
curl -s --unix-socket /var/run/docker.sock 'http://localhost/system/df?types=image'&
curl -s --unix-socket /var/run/docker.sock 'http://localhost/system/df?types=container'&
curl -s --unix-socket /var/run/docker.sock 'http://localhost/system/df?types=volume'&
curl -s --unix-socket /var/run/docker.sock 'http://localhost/system/df?types=build-cache'&
curl -s --unix-socket /var/run/docker.sock 'http://localhost/system/df?types=image&type=container'
```

Such invocations no longer error; they simply return the result once it has been computed by one of the goroutines.

**- Description for the changelog**

```markdown
The `GET /system/df` endpoint can now be used concurrently. If a request is made to the endpoint while a calculation is still running, the request will receive the result of the already running calculation, once completed. Previously, an error (`a disk usage operation is already running`) would be returned in this situation.
```
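The description relies on `singleflight.Group` from `golang.org/x/sync`. Below is a minimal sketch of how concurrent callers share one expensive computation with it; the `usage` type, the `"disk-usage"` key, and the `DiskUsage` method are illustrative stand-ins, not the daemon's actual fields or methods:

```go
// Minimal sketch of sharing one expensive computation between concurrent
// callers with x/sync/singleflight. Names are illustrative only.
package main

import (
	"fmt"
	"sync"
	"time"

	"golang.org/x/sync/singleflight"
)

type usage struct {
	// usageG deduplicates concurrent requests: only one computation runs
	// per key at a time, and every waiting caller receives the same
	// result and error.
	usageG singleflight.Group
}

// DiskUsage returns a simulated disk-usage value. Concurrent callers share
// the result of a single underlying computation instead of failing.
func (u *usage) DiskUsage() (int64, error) {
	v, err, _ := u.usageG.Do("disk-usage", func() (interface{}, error) {
		time.Sleep(200 * time.Millisecond) // stand-in for the expensive walk
		return int64(42), nil
	})
	if err != nil {
		return 0, err
	}
	return v.(int64), nil
}

func main() {
	var (
		u  usage
		wg sync.WaitGroup
	)
	for i := 0; i < 3; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			n, err := u.DiskUsage()
			fmt.Println(n, err) // all three goroutines print the same shared result
		}()
	}
	wg.Wait()
}
```

All three goroutines receive the value and error from the single invocation of the callback, which is the behaviour the changelog entry describes for concurrent `GET /system/df` requests.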
null
2021-08-06 15:04:00+00:00
2021-08-10 11:51:04+00:00
daemon/daemon.go
// Package daemon exposes the functions that occur on the host server // that the Docker daemon is running. // // In implementing the various functions of the daemon, there is often // a method-specific struct for configuring the runtime behavior. package daemon // import "github.com/docker/docker/daemon" import ( "context" "fmt" "io/ioutil" "net" "net/url" "os" "path" "path/filepath" "runtime" "strings" "sync" "time" "github.com/docker/docker/pkg/fileutils" "go.etcd.io/bbolt" "google.golang.org/grpc" "google.golang.org/grpc/backoff" "github.com/containerd/containerd" "github.com/containerd/containerd/defaults" "github.com/containerd/containerd/pkg/dialer" "github.com/containerd/containerd/pkg/userns" "github.com/containerd/containerd/remotes/docker" "github.com/docker/docker/api/types" containertypes "github.com/docker/docker/api/types/container" "github.com/docker/docker/api/types/swarm" "github.com/docker/docker/builder" "github.com/docker/docker/container" "github.com/docker/docker/daemon/config" "github.com/docker/docker/daemon/discovery" "github.com/docker/docker/daemon/events" "github.com/docker/docker/daemon/exec" "github.com/docker/docker/daemon/images" "github.com/docker/docker/daemon/logger" "github.com/docker/docker/daemon/network" "github.com/docker/docker/errdefs" "github.com/moby/buildkit/util/resolver" "github.com/sirupsen/logrus" // register graph drivers _ "github.com/docker/docker/daemon/graphdriver/register" "github.com/docker/docker/daemon/stats" dmetadata "github.com/docker/docker/distribution/metadata" "github.com/docker/docker/dockerversion" "github.com/docker/docker/image" "github.com/docker/docker/layer" "github.com/docker/docker/libcontainerd" libcontainerdtypes "github.com/docker/docker/libcontainerd/types" "github.com/docker/docker/libnetwork" "github.com/docker/docker/libnetwork/cluster" nwconfig "github.com/docker/docker/libnetwork/config" "github.com/docker/docker/pkg/idtools" "github.com/docker/docker/pkg/plugingetter" "github.com/docker/docker/pkg/system" "github.com/docker/docker/pkg/truncindex" "github.com/docker/docker/plugin" pluginexec "github.com/docker/docker/plugin/executor/containerd" refstore "github.com/docker/docker/reference" "github.com/docker/docker/registry" "github.com/docker/docker/runconfig" volumesservice "github.com/docker/docker/volume/service" "github.com/moby/locker" "github.com/pkg/errors" "golang.org/x/sync/semaphore" ) // ContainersNamespace is the name of the namespace used for users containers const ( ContainersNamespace = "moby" ) var ( errSystemNotSupported = errors.New("the Docker daemon is not supported on this platform") ) // Daemon holds information about the Docker daemon. 
type Daemon struct { ID string repository string containers container.Store containersReplica container.ViewDB execCommands *exec.Store imageService *images.ImageService idIndex *truncindex.TruncIndex configStore *config.Config statsCollector *stats.Collector defaultLogConfig containertypes.LogConfig RegistryService registry.Service EventsService *events.Events netController libnetwork.NetworkController volumes *volumesservice.VolumesService discoveryWatcher discovery.Reloader root string seccompEnabled bool apparmorEnabled bool shutdown bool idMapping *idtools.IdentityMapping graphDriver string // TODO: move graphDriver field to an InfoService PluginStore *plugin.Store // TODO: remove pluginManager *plugin.Manager linkIndex *linkIndex containerdCli *containerd.Client containerd libcontainerdtypes.Client defaultIsolation containertypes.Isolation // Default isolation mode on Windows clusterProvider cluster.Provider cluster Cluster genericResources []swarm.GenericResource metricsPluginListener net.Listener machineMemory uint64 seccompProfile []byte seccompProfilePath string diskUsageRunning int32 pruneRunning int32 hosts map[string]bool // hosts stores the addresses the daemon is listening on startupDone chan struct{} attachmentStore network.AttachmentStore attachableNetworkLock *locker.Locker // This is used for Windows which doesn't currently support running on containerd // It stores metadata for the content store (used for manifest caching) // This needs to be closed on daemon exit mdDB *bbolt.DB } // StoreHosts stores the addresses the daemon is listening on func (daemon *Daemon) StoreHosts(hosts []string) { if daemon.hosts == nil { daemon.hosts = make(map[string]bool) } for _, h := range hosts { daemon.hosts[h] = true } } // HasExperimental returns whether the experimental features of the daemon are enabled or not func (daemon *Daemon) HasExperimental() bool { return daemon.configStore != nil && daemon.configStore.Experimental } // Features returns the features map from configStore func (daemon *Daemon) Features() *map[string]bool { return &daemon.configStore.Features } // RegistryHosts returns registry configuration in containerd resolvers format func (daemon *Daemon) RegistryHosts() docker.RegistryHosts { var ( registryKey = "docker.io" mirrors = make([]string, len(daemon.configStore.Mirrors)) m = map[string]resolver.RegistryConfig{} ) // must trim "https://" or "http://" prefix for i, v := range daemon.configStore.Mirrors { if uri, err := url.Parse(v); err == nil { v = uri.Host } mirrors[i] = v } // set mirrors for default registry m[registryKey] = resolver.RegistryConfig{Mirrors: mirrors} for _, v := range daemon.configStore.InsecureRegistries { u, err := url.Parse(v) c := resolver.RegistryConfig{} if err == nil { v = u.Host t := true if u.Scheme == "http" { c.PlainHTTP = &t } else { c.Insecure = &t } } m[v] = c } for k, v := range m { if d, err := registry.HostCertsDir(k); err == nil { v.TLSConfigDir = []string{d} m[k] = v } } certsDir := registry.CertsDir() if fis, err := ioutil.ReadDir(certsDir); err == nil { for _, fi := range fis { if _, ok := m[fi.Name()]; !ok { m[fi.Name()] = resolver.RegistryConfig{ TLSConfigDir: []string{filepath.Join(certsDir, fi.Name())}, } } } } return resolver.NewRegistryConfig(m) } func (daemon *Daemon) restore() error { var mapLock sync.Mutex containers := make(map[string]*container.Container) logrus.Info("Loading containers: start.") dir, err := ioutil.ReadDir(daemon.repository) if err != nil { return err } // parallelLimit is the maximum number of 
parallel startup jobs that we // allow (this is the limited used for all startup semaphores). The multipler // (128) was chosen after some fairly significant benchmarking -- don't change // it unless you've tested it significantly (this value is adjusted if // RLIMIT_NOFILE is small to avoid EMFILE). parallelLimit := adjustParallelLimit(len(dir), 128*runtime.NumCPU()) // Re-used for all parallel startup jobs. var group sync.WaitGroup sem := semaphore.NewWeighted(int64(parallelLimit)) for _, v := range dir { group.Add(1) go func(id string) { defer group.Done() _ = sem.Acquire(context.Background(), 1) defer sem.Release(1) log := logrus.WithField("container", id) c, err := daemon.load(id) if err != nil { log.WithError(err).Error("failed to load container") return } if !system.IsOSSupported(c.OS) { log.Errorf("failed to load container: %s (%q)", system.ErrNotSupportedOperatingSystem, c.OS) return } // Ignore the container if it does not support the current driver being used by the graph if (c.Driver == "" && daemon.graphDriver == "aufs") || c.Driver == daemon.graphDriver { rwlayer, err := daemon.imageService.GetLayerByID(c.ID) if err != nil { log.WithError(err).Error("failed to load container mount") return } c.RWLayer = rwlayer log.WithFields(logrus.Fields{ "running": c.IsRunning(), "paused": c.IsPaused(), }).Debug("loaded container") mapLock.Lock() containers[c.ID] = c mapLock.Unlock() } else { log.Debugf("cannot load container because it was created with another storage driver") } }(v.Name()) } group.Wait() removeContainers := make(map[string]*container.Container) restartContainers := make(map[*container.Container]chan struct{}) activeSandboxes := make(map[string]interface{}) for _, c := range containers { group.Add(1) go func(c *container.Container) { defer group.Done() _ = sem.Acquire(context.Background(), 1) defer sem.Release(1) log := logrus.WithField("container", c.ID) if err := daemon.registerName(c); err != nil { log.WithError(err).Errorf("failed to register container name: %s", c.Name) mapLock.Lock() delete(containers, c.ID) mapLock.Unlock() return } if err := daemon.Register(c); err != nil { log.WithError(err).Error("failed to register container") mapLock.Lock() delete(containers, c.ID) mapLock.Unlock() return } }(c) } group.Wait() for _, c := range containers { group.Add(1) go func(c *container.Container) { defer group.Done() _ = sem.Acquire(context.Background(), 1) defer sem.Release(1) log := logrus.WithField("container", c.ID) daemon.backportMountSpec(c) if err := daemon.checkpointAndSave(c); err != nil { log.WithError(err).Error("error saving backported mountspec to disk") } daemon.setStateCounter(c) logger := func(c *container.Container) *logrus.Entry { return log.WithFields(logrus.Fields{ "running": c.IsRunning(), "paused": c.IsPaused(), "restarting": c.IsRestarting(), }) } logger(c).Debug("restoring container") var ( err error alive bool ec uint32 exitedAt time.Time process libcontainerdtypes.Process ) alive, _, process, err = daemon.containerd.Restore(context.Background(), c.ID, c.InitializeStdio) if err != nil && !errdefs.IsNotFound(err) { logger(c).WithError(err).Error("failed to restore container with containerd") return } logger(c).Debugf("alive: %v", alive) if !alive { // If process is not nil, cleanup dead container from containerd. // If process is nil then the above `containerd.Restore` returned an errdefs.NotFoundError, // and docker's view of the container state will be updated accorrdingly via SetStopped further down. 
if process != nil { logger(c).Debug("cleaning up dead container process") ec, exitedAt, err = process.Delete(context.Background()) if err != nil && !errdefs.IsNotFound(err) { logger(c).WithError(err).Error("failed to delete container from containerd") return } } } else if !daemon.configStore.LiveRestoreEnabled { logger(c).Debug("shutting down container considered alive by containerd") if err := daemon.shutdownContainer(c); err != nil && !errdefs.IsNotFound(err) { log.WithError(err).Error("error shutting down container") return } c.ResetRestartManager(false) } if c.IsRunning() || c.IsPaused() { logger(c).Debug("syncing container on disk state with real state") c.RestartManager().Cancel() // manually start containers because some need to wait for swarm networking if c.IsPaused() && alive { s, err := daemon.containerd.Status(context.Background(), c.ID) if err != nil { logger(c).WithError(err).Error("failed to get container status") } else { logger(c).WithField("state", s).Info("restored container paused") switch s { case containerd.Paused, containerd.Pausing: // nothing to do case containerd.Stopped: alive = false case containerd.Unknown: log.Error("unknown status for paused container during restore") default: // running c.Lock() c.Paused = false daemon.setStateCounter(c) if err := c.CheckpointTo(daemon.containersReplica); err != nil { log.WithError(err).Error("failed to update paused container state") } c.Unlock() } } } if !alive { logger(c).Debug("setting stopped state") c.Lock() c.SetStopped(&container.ExitStatus{ExitCode: int(ec), ExitedAt: exitedAt}) daemon.Cleanup(c) if err := c.CheckpointTo(daemon.containersReplica); err != nil { log.WithError(err).Error("failed to update stopped container state") } c.Unlock() logger(c).Debug("set stopped state") } // we call Mount and then Unmount to get BaseFs of the container if err := daemon.Mount(c); err != nil { // The mount is unlikely to fail. However, in case mount fails // the container should be allowed to restore here. Some functionalities // (like docker exec -u user) might be missing but container is able to be // stopped/restarted/removed. // See #29365 for related information. // The error is only logged here. logger(c).WithError(err).Warn("failed to mount container to get BaseFs path") } else { if err := daemon.Unmount(c); err != nil { logger(c).WithError(err).Warn("failed to umount container to get BaseFs path") } } c.ResetRestartManager(false) if !c.HostConfig.NetworkMode.IsContainer() && c.IsRunning() { options, err := daemon.buildSandboxOptions(c) if err != nil { logger(c).WithError(err).Warn("failed to build sandbox option to restore container") } mapLock.Lock() activeSandboxes[c.NetworkSettings.SandboxID] = options mapLock.Unlock() } } // get list of containers we need to restart // Do not autostart containers which // has endpoints in a swarm scope // network yet since the cluster is // not initialized yet. We will start // it after the cluster is // initialized. if daemon.configStore.AutoRestart && c.ShouldRestart() && !c.NetworkSettings.HasSwarmEndpoint && c.HasBeenStartedBefore { mapLock.Lock() restartContainers[c] = make(chan struct{}) mapLock.Unlock() } else if c.HostConfig != nil && c.HostConfig.AutoRemove { mapLock.Lock() removeContainers[c.ID] = c mapLock.Unlock() } c.Lock() if c.RemovalInProgress { // We probably crashed in the middle of a removal, reset // the flag. 
// // We DO NOT remove the container here as we do not // know if the user had requested for either the // associated volumes, network links or both to also // be removed. So we put the container in the "dead" // state and leave further processing up to them. c.RemovalInProgress = false c.Dead = true if err := c.CheckpointTo(daemon.containersReplica); err != nil { log.WithError(err).Error("failed to update RemovalInProgress container state") } else { log.Debugf("reset RemovalInProgress state for container") } } c.Unlock() logger(c).Debug("done restoring container") }(c) } group.Wait() daemon.netController, err = daemon.initNetworkController(daemon.configStore, activeSandboxes) if err != nil { return fmt.Errorf("Error initializing network controller: %v", err) } // Now that all the containers are registered, register the links for _, c := range containers { group.Add(1) go func(c *container.Container) { _ = sem.Acquire(context.Background(), 1) if err := daemon.registerLinks(c, c.HostConfig); err != nil { logrus.WithField("container", c.ID).WithError(err).Error("failed to register link for container") } sem.Release(1) group.Done() }(c) } group.Wait() for c, notifier := range restartContainers { group.Add(1) go func(c *container.Container, chNotify chan struct{}) { _ = sem.Acquire(context.Background(), 1) log := logrus.WithField("container", c.ID) log.Debug("starting container") // ignore errors here as this is a best effort to wait for children to be // running before we try to start the container children := daemon.children(c) timeout := time.NewTimer(5 * time.Second) defer timeout.Stop() for _, child := range children { if notifier, exists := restartContainers[child]; exists { select { case <-notifier: case <-timeout.C: } } } // Make sure networks are available before starting daemon.waitForNetworks(c) if err := daemon.containerStart(c, "", "", true); err != nil { log.WithError(err).Error("failed to start container") } close(chNotify) sem.Release(1) group.Done() }(c, notifier) } group.Wait() for id := range removeContainers { group.Add(1) go func(cid string) { _ = sem.Acquire(context.Background(), 1) if err := daemon.ContainerRm(cid, &types.ContainerRmConfig{ForceRemove: true, RemoveVolume: true}); err != nil { logrus.WithField("container", cid).WithError(err).Error("failed to remove container") } sem.Release(1) group.Done() }(id) } group.Wait() // any containers that were started above would already have had this done, // however we need to now prepare the mountpoints for the rest of the containers as well. // This shouldn't cause any issue running on the containers that already had this run. // This must be run after any containers with a restart policy so that containerized plugins // can have a chance to be running before we try to initialize them. for _, c := range containers { // if the container has restart policy, do not // prepare the mountpoints since it has been done on restarting. // This is to speed up the daemon start when a restart container // has a volume and the volume driver is not available. if _, ok := restartContainers[c]; ok { continue } else if _, ok := removeContainers[c.ID]; ok { // container is automatically removed, skip it. 
continue } group.Add(1) go func(c *container.Container) { _ = sem.Acquire(context.Background(), 1) if err := daemon.prepareMountPoints(c); err != nil { logrus.WithField("container", c.ID).WithError(err).Error("failed to prepare mountpoints for container") } sem.Release(1) group.Done() }(c) } group.Wait() logrus.Info("Loading containers: done.") return nil } // RestartSwarmContainers restarts any autostart container which has a // swarm endpoint. func (daemon *Daemon) RestartSwarmContainers() { ctx := context.Background() // parallelLimit is the maximum number of parallel startup jobs that we // allow (this is the limited used for all startup semaphores). The multipler // (128) was chosen after some fairly significant benchmarking -- don't change // it unless you've tested it significantly (this value is adjusted if // RLIMIT_NOFILE is small to avoid EMFILE). parallelLimit := adjustParallelLimit(len(daemon.List()), 128*runtime.NumCPU()) var group sync.WaitGroup sem := semaphore.NewWeighted(int64(parallelLimit)) for _, c := range daemon.List() { if !c.IsRunning() && !c.IsPaused() { // Autostart all the containers which has a // swarm endpoint now that the cluster is // initialized. if daemon.configStore.AutoRestart && c.ShouldRestart() && c.NetworkSettings.HasSwarmEndpoint && c.HasBeenStartedBefore { group.Add(1) go func(c *container.Container) { if err := sem.Acquire(ctx, 1); err != nil { // ctx is done. group.Done() return } if err := daemon.containerStart(c, "", "", true); err != nil { logrus.WithField("container", c.ID).WithError(err).Error("failed to start swarm container") } sem.Release(1) group.Done() }(c) } } } group.Wait() } // waitForNetworks is used during daemon initialization when starting up containers // It ensures that all of a container's networks are available before the daemon tries to start the container. // In practice it just makes sure the discovery service is available for containers which use a network that require discovery. func (daemon *Daemon) waitForNetworks(c *container.Container) { if daemon.discoveryWatcher == nil { return } // Make sure if the container has a network that requires discovery that the discovery service is available before starting for netName := range c.NetworkSettings.Networks { // If we get `ErrNoSuchNetwork` here, we can assume that it is due to discovery not being ready // Most likely this is because the K/V store used for discovery is in a container and needs to be started if _, err := daemon.netController.NetworkByName(netName); err != nil { if _, ok := err.(libnetwork.ErrNoSuchNetwork); !ok { continue } // use a longish timeout here due to some slowdowns in libnetwork if the k/v store is on anything other than --net=host // FIXME: why is this slow??? dur := 60 * time.Second timer := time.NewTimer(dur) logrus.WithField("container", c.ID).Debugf("Container %s waiting for network to be ready", c.Name) select { case <-daemon.discoveryWatcher.ReadyCh(): case <-timer.C: } timer.Stop() return } } } func (daemon *Daemon) children(c *container.Container) map[string]*container.Container { return daemon.linkIndex.children(c) } // parents returns the names of the parent containers of the container // with the given name. 
func (daemon *Daemon) parents(c *container.Container) map[string]*container.Container { return daemon.linkIndex.parents(c) } func (daemon *Daemon) registerLink(parent, child *container.Container, alias string) error { fullName := path.Join(parent.Name, alias) if err := daemon.containersReplica.ReserveName(fullName, child.ID); err != nil { if err == container.ErrNameReserved { logrus.Warnf("error registering link for %s, to %s, as alias %s, ignoring: %v", parent.ID, child.ID, alias, err) return nil } return err } daemon.linkIndex.link(parent, child, fullName) return nil } // DaemonJoinsCluster informs the daemon has joined the cluster and provides // the handler to query the cluster component func (daemon *Daemon) DaemonJoinsCluster(clusterProvider cluster.Provider) { daemon.setClusterProvider(clusterProvider) } // DaemonLeavesCluster informs the daemon has left the cluster func (daemon *Daemon) DaemonLeavesCluster() { // Daemon is in charge of removing the attachable networks with // connected containers when the node leaves the swarm daemon.clearAttachableNetworks() // We no longer need the cluster provider, stop it now so that // the network agent will stop listening to cluster events. daemon.setClusterProvider(nil) // Wait for the networking cluster agent to stop daemon.netController.AgentStopWait() // Daemon is in charge of removing the ingress network when the // node leaves the swarm. Wait for job to be done or timeout. // This is called also on graceful daemon shutdown. We need to // wait, because the ingress release has to happen before the // network controller is stopped. if done, err := daemon.ReleaseIngress(); err == nil { timeout := time.NewTimer(5 * time.Second) defer timeout.Stop() select { case <-done: case <-timeout.C: logrus.Warn("timeout while waiting for ingress network removal") } } else { logrus.Warnf("failed to initiate ingress network removal: %v", err) } daemon.attachmentStore.ClearAttachments() } // setClusterProvider sets a component for querying the current cluster state. func (daemon *Daemon) setClusterProvider(clusterProvider cluster.Provider) { daemon.clusterProvider = clusterProvider daemon.netController.SetClusterProvider(clusterProvider) daemon.attachableNetworkLock = locker.New() } // IsSwarmCompatible verifies if the current daemon // configuration is compatible with the swarm mode func (daemon *Daemon) IsSwarmCompatible() error { if daemon.configStore == nil { return nil } return daemon.configStore.IsSwarmCompatible() } // NewDaemon sets up everything for the daemon to be able to service // requests from the webserver. func NewDaemon(ctx context.Context, config *config.Config, pluginStore *plugin.Store) (daemon *Daemon, err error) { setDefaultMtu(config) registryService, err := registry.NewService(config.ServiceOptions) if err != nil { return nil, err } // Ensure that we have a correct root key limit for launching containers. if err := modifyRootKeyLimit(); err != nil { logrus.Warnf("unable to modify root key limit, number of containers could be limited by this quota: %v", err) } // Ensure we have compatible and valid configuration options if err := verifyDaemonSettings(config); err != nil { return nil, err } // Do we have a disabled network? 
config.DisableBridge = isBridgeNetworkDisabled(config) // Setup the resolv.conf setupResolvConf(config) // Verify the platform is supported as a daemon if !platformSupported { return nil, errSystemNotSupported } // Validate platform-specific requirements if err := checkSystem(); err != nil { return nil, err } idMapping, err := setupRemappedRoot(config) if err != nil { return nil, err } rootIDs := idMapping.RootPair() if err := setupDaemonProcess(config); err != nil { return nil, err } // set up the tmpDir to use a canonical path tmp, err := prepareTempDir(config.Root) if err != nil { return nil, fmt.Errorf("Unable to get the TempDir under %s: %s", config.Root, err) } realTmp, err := fileutils.ReadSymlinkedDirectory(tmp) if err != nil { return nil, fmt.Errorf("Unable to get the full path to the TempDir (%s): %s", tmp, err) } if isWindows { if _, err := os.Stat(realTmp); err != nil && os.IsNotExist(err) { if err := system.MkdirAll(realTmp, 0700); err != nil { return nil, fmt.Errorf("Unable to create the TempDir (%s): %s", realTmp, err) } } os.Setenv("TEMP", realTmp) os.Setenv("TMP", realTmp) } else { os.Setenv("TMPDIR", realTmp) } d := &Daemon{ configStore: config, PluginStore: pluginStore, startupDone: make(chan struct{}), } // Ensure the daemon is properly shutdown if there is a failure during // initialization defer func() { if err != nil { if err := d.Shutdown(); err != nil { logrus.Error(err) } } }() if err := d.setGenericResources(config); err != nil { return nil, err } // set up SIGUSR1 handler on Unix-like systems, or a Win32 global event // on Windows to dump Go routine stacks stackDumpDir := config.Root if execRoot := config.GetExecRoot(); execRoot != "" { stackDumpDir = execRoot } d.setupDumpStackTrap(stackDumpDir) if err := d.setupSeccompProfile(); err != nil { return nil, err } // Set the default isolation mode (only applicable on Windows) if err := d.setDefaultIsolation(); err != nil { return nil, fmt.Errorf("error setting default isolation mode: %v", err) } if err := configureMaxThreads(config); err != nil { logrus.Warnf("Failed to configure golang's threads limit: %v", err) } // ensureDefaultAppArmorProfile does nothing if apparmor is disabled if err := ensureDefaultAppArmorProfile(); err != nil { logrus.Errorf(err.Error()) } daemonRepo := filepath.Join(config.Root, "containers") if err := idtools.MkdirAllAndChown(daemonRepo, 0701, idtools.CurrentIdentity()); err != nil { return nil, err } // Create the directory where we'll store the runtime scripts (i.e. in // order to support runtimeArgs) daemonRuntimes := filepath.Join(config.Root, "runtimes") if err := system.MkdirAll(daemonRuntimes, 0700); err != nil { return nil, err } if err := d.loadRuntimes(); err != nil { return nil, err } if isWindows { if err := system.MkdirAll(filepath.Join(config.Root, "credentialspecs"), 0); err != nil { return nil, err } } if isWindows { // On Windows we don't support the environment variable, or a user supplied graphdriver d.graphDriver = "windowsfilter" } else { // Unix platforms however run a single graphdriver for all containers, and it can // be set through an environment variable, a daemon start parameter, or chosen through // initialization of the layerstore through driver priority order for example. if drv := os.Getenv("DOCKER_DRIVER"); drv != "" { d.graphDriver = drv logrus.Infof("Setting the storage driver from the $DOCKER_DRIVER environment variable (%s)", drv) } else { d.graphDriver = config.GraphDriver // May still be empty. Layerstore init determines instead. 
} } d.RegistryService = registryService logger.RegisterPluginGetter(d.PluginStore) metricsSockPath, err := d.listenMetricsSock() if err != nil { return nil, err } registerMetricsPluginCallback(d.PluginStore, metricsSockPath) backoffConfig := backoff.DefaultConfig backoffConfig.MaxDelay = 3 * time.Second connParams := grpc.ConnectParams{ Backoff: backoffConfig, } gopts := []grpc.DialOption{ // WithBlock makes sure that the following containerd request // is reliable. // // NOTE: In one edge case with high load pressure, kernel kills // dockerd, containerd and containerd-shims caused by OOM. // When both dockerd and containerd restart, but containerd // will take time to recover all the existing containers. Before // containerd serving, dockerd will failed with gRPC error. // That bad thing is that restore action will still ignore the // any non-NotFound errors and returns running state for // already stopped container. It is unexpected behavior. And // we need to restart dockerd to make sure that anything is OK. // // It is painful. Add WithBlock can prevent the edge case. And // n common case, the containerd will be serving in shortly. // It is not harm to add WithBlock for containerd connection. grpc.WithBlock(), grpc.WithInsecure(), grpc.WithConnectParams(connParams), grpc.WithContextDialer(dialer.ContextDialer), // TODO(stevvooe): We may need to allow configuration of this on the client. grpc.WithDefaultCallOptions(grpc.MaxCallRecvMsgSize(defaults.DefaultMaxRecvMsgSize)), grpc.WithDefaultCallOptions(grpc.MaxCallSendMsgSize(defaults.DefaultMaxSendMsgSize)), } if config.ContainerdAddr != "" { d.containerdCli, err = containerd.New(config.ContainerdAddr, containerd.WithDefaultNamespace(config.ContainerdNamespace), containerd.WithDialOpts(gopts), containerd.WithTimeout(60*time.Second)) if err != nil { return nil, errors.Wrapf(err, "failed to dial %q", config.ContainerdAddr) } } createPluginExec := func(m *plugin.Manager) (plugin.Executor, error) { var pluginCli *containerd.Client // Windows is not currently using containerd, keep the // client as nil if config.ContainerdAddr != "" { pluginCli, err = containerd.New(config.ContainerdAddr, containerd.WithDefaultNamespace(config.ContainerdPluginNamespace), containerd.WithDialOpts(gopts), containerd.WithTimeout(60*time.Second)) if err != nil { return nil, errors.Wrapf(err, "failed to dial %q", config.ContainerdAddr) } } var rt types.Runtime if runtime.GOOS != "windows" { rtPtr, err := d.getRuntime(config.GetDefaultRuntimeName()) if err != nil { return nil, err } rt = *rtPtr } return pluginexec.New(ctx, getPluginExecRoot(config.Root), pluginCli, config.ContainerdPluginNamespace, m, rt) } // Plugin system initialization should happen before restore. Do not change order. 
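// (Most likely because restored containers can depend on plugin-backed volume,
// network, or logging drivers, which must be resolvable before restore runs.)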
d.pluginManager, err = plugin.NewManager(plugin.ManagerConfig{ Root: filepath.Join(config.Root, "plugins"), ExecRoot: getPluginExecRoot(config.Root), Store: d.PluginStore, CreateExecutor: createPluginExec, RegistryService: registryService, LiveRestoreEnabled: config.LiveRestoreEnabled, LogPluginEvent: d.LogPluginEvent, // todo: make private AuthzMiddleware: config.AuthzMiddleware, }) if err != nil { return nil, errors.Wrap(err, "couldn't create plugin manager") } if err := d.setupDefaultLogConfig(); err != nil { return nil, err } layerStore, err := layer.NewStoreFromOptions(layer.StoreOptions{ Root: config.Root, MetadataStorePathTemplate: filepath.Join(config.Root, "image", "%s", "layerdb"), GraphDriver: d.graphDriver, GraphDriverOptions: config.GraphOptions, IDMapping: idMapping, PluginGetter: d.PluginStore, ExperimentalEnabled: config.Experimental, OS: runtime.GOOS, }) if err != nil { return nil, err } // As layerstore initialization may set the driver d.graphDriver = layerStore.DriverName() // Configure and validate the kernels security support. Note this is a Linux/FreeBSD // operation only, so it is safe to pass *just* the runtime OS graphdriver. if err := configureKernelSecuritySupport(config, d.graphDriver); err != nil { return nil, err } imageRoot := filepath.Join(config.Root, "image", d.graphDriver) ifs, err := image.NewFSStoreBackend(filepath.Join(imageRoot, "imagedb")) if err != nil { return nil, err } imageStore, err := image.NewImageStore(ifs, layerStore) if err != nil { return nil, err } d.volumes, err = volumesservice.NewVolumeService(config.Root, d.PluginStore, rootIDs, d) if err != nil { return nil, err } trustKey, err := loadOrCreateTrustKey(config.TrustKeyPath) if err != nil { return nil, err } trustDir := filepath.Join(config.Root, "trust") if err := system.MkdirAll(trustDir, 0700); err != nil { return nil, err } // We have a single tag/reference store for the daemon globally. However, it's // stored under the graphdriver. On host platforms which only support a single // container OS, but multiple selectable graphdrivers, this means depending on which // graphdriver is chosen, the global reference store is under there. For // platforms which support multiple container operating systems, this is slightly // more problematic as where does the global ref store get located? Fortunately, // for Windows, which is currently the only daemon supporting multiple container // operating systems, the list of graphdrivers available isn't user configurable. // For backwards compatibility, we just put it under the windowsfilter // directory regardless. refStoreLocation := filepath.Join(imageRoot, `repositories.json`) rs, err := refstore.NewReferenceStore(refStoreLocation) if err != nil { return nil, fmt.Errorf("Couldn't create reference store repository: %s", err) } distributionMetadataStore, err := dmetadata.NewFSMetadataStore(filepath.Join(imageRoot, "distribution")) if err != nil { return nil, err } // Discovery is only enabled when the daemon is launched with an address to advertise. When // initialized, the daemon is registered and we can store the discovery backend as it's read-only if err := d.initDiscovery(config); err != nil { return nil, err } sysInfo := d.RawSysInfo() for _, w := range sysInfo.Warnings { logrus.Warn(w) } // Check if Devices cgroup is mounted, it is hard requirement for container security, // on Linux. 
if runtime.GOOS == "linux" && !sysInfo.CgroupDevicesEnabled && !userns.RunningInUserNS() { return nil, errors.New("Devices cgroup isn't mounted") } d.ID = trustKey.PublicKey().KeyID() d.repository = daemonRepo d.containers = container.NewMemoryStore() if d.containersReplica, err = container.NewViewDB(); err != nil { return nil, err } d.execCommands = exec.NewStore() d.idIndex = truncindex.NewTruncIndex([]string{}) d.statsCollector = d.newStatsCollector(1 * time.Second) d.EventsService = events.New() d.root = config.Root d.idMapping = idMapping d.seccompEnabled = sysInfo.Seccomp d.apparmorEnabled = sysInfo.AppArmor d.linkIndex = newLinkIndex() imgSvcConfig := images.ImageServiceConfig{ ContainerStore: d.containers, DistributionMetadataStore: distributionMetadataStore, EventsService: d.EventsService, ImageStore: imageStore, LayerStore: layerStore, MaxConcurrentDownloads: *config.MaxConcurrentDownloads, MaxConcurrentUploads: *config.MaxConcurrentUploads, MaxDownloadAttempts: *config.MaxDownloadAttempts, ReferenceStore: rs, RegistryService: registryService, TrustKey: trustKey, ContentNamespace: config.ContainerdNamespace, } // containerd is not currently supported with Windows. // So sometimes d.containerdCli will be nil // In that case we'll create a local content store... but otherwise we'll use containerd if d.containerdCli != nil { imgSvcConfig.Leases = d.containerdCli.LeasesService() imgSvcConfig.ContentStore = d.containerdCli.ContentStore() } else { cs, lm, err := d.configureLocalContentStore() if err != nil { return nil, err } imgSvcConfig.ContentStore = cs imgSvcConfig.Leases = lm } // TODO: imageStore, distributionMetadataStore, and ReferenceStore are only // used above to run migration. They could be initialized in ImageService // if migration is called from daemon/images. layerStore might move as well. d.imageService = images.NewImageService(imgSvcConfig) go d.execCommandGC() d.containerd, err = libcontainerd.NewClient(ctx, d.containerdCli, filepath.Join(config.ExecRoot, "containerd"), config.ContainerdNamespace, d) if err != nil { return nil, err } if err := d.restore(); err != nil { return nil, err } close(d.startupDone) info := d.SystemInfo() engineInfo.WithValues( dockerversion.Version, dockerversion.GitCommit, info.Architecture, info.Driver, info.KernelVersion, info.OperatingSystem, info.OSType, info.OSVersion, info.ID, ).Set(1) engineCpus.Set(float64(info.NCPU)) engineMemory.Set(float64(info.MemTotal)) logrus.WithFields(logrus.Fields{ "version": dockerversion.Version, "commit": dockerversion.GitCommit, "graphdriver": d.graphDriver, }).Info("Docker daemon") return d, nil } // DistributionServices returns services controlling daemon storage func (daemon *Daemon) DistributionServices() images.DistributionServices { return daemon.imageService.DistributionServices() } func (daemon *Daemon) waitForStartupDone() { <-daemon.startupDone } func (daemon *Daemon) shutdownContainer(c *container.Container) error { stopTimeout := c.StopTimeout() // If container failed to exit in stopTimeout seconds of SIGTERM, then using the force if err := daemon.containerStop(c, stopTimeout); err != nil { return fmt.Errorf("Failed to stop container %s with error: %v", c.ID, err) } // Wait without timeout for the container to exit. // Ignore the result. <-c.Wait(context.Background(), container.WaitConditionNotRunning) return nil } // ShutdownTimeout returns the timeout (in seconds) before containers are forcibly // killed during shutdown. 
The default timeout can be configured both on the daemon // and per container, and the longest timeout will be used. A grace-period of // 5 seconds is added to the configured timeout. // // A negative (-1) timeout means "indefinitely", which means that containers // are not forcibly killed, and the daemon shuts down after all containers exit. func (daemon *Daemon) ShutdownTimeout() int { shutdownTimeout := daemon.configStore.ShutdownTimeout if shutdownTimeout < 0 { return -1 } if daemon.containers == nil { return shutdownTimeout } graceTimeout := 5 for _, c := range daemon.containers.List() { stopTimeout := c.StopTimeout() if stopTimeout < 0 { return -1 } if stopTimeout+graceTimeout > shutdownTimeout { shutdownTimeout = stopTimeout + graceTimeout } } return shutdownTimeout } // Shutdown stops the daemon. func (daemon *Daemon) Shutdown() error { daemon.shutdown = true // Keep mounts and networking running on daemon shutdown if // we are to keep containers running and restore them. if daemon.configStore.LiveRestoreEnabled && daemon.containers != nil { // check if there are any running containers, if none we should do some cleanup if ls, err := daemon.Containers(&types.ContainerListOptions{}); len(ls) != 0 || err != nil { // metrics plugins still need some cleanup daemon.cleanupMetricsPlugins() return nil } } if daemon.containers != nil { logrus.Debugf("daemon configured with a %d seconds minimum shutdown timeout", daemon.configStore.ShutdownTimeout) logrus.Debugf("start clean shutdown of all containers with a %d seconds timeout...", daemon.ShutdownTimeout()) daemon.containers.ApplyAll(func(c *container.Container) { if !c.IsRunning() { return } log := logrus.WithField("container", c.ID) log.Debug("shutting down container") if err := daemon.shutdownContainer(c); err != nil { log.WithError(err).Error("failed to shut down container") return } if mountid, err := daemon.imageService.GetLayerMountID(c.ID); err == nil { daemon.cleanupMountsByID(mountid) } log.Debugf("shut down container") }) } if daemon.volumes != nil { if err := daemon.volumes.Shutdown(); err != nil { logrus.Errorf("Error shutting down volume store: %v", err) } } if daemon.imageService != nil { daemon.imageService.Cleanup() } // If we are part of a cluster, clean up cluster's stuff if daemon.clusterProvider != nil { logrus.Debugf("start clean shutdown of cluster resources...") daemon.DaemonLeavesCluster() } daemon.cleanupMetricsPlugins() // Shutdown plugins after containers and layerstore. Don't change the order. daemon.pluginShutdown() // trigger libnetwork Stop only if it's initialized if daemon.netController != nil { daemon.netController.Stop() } if daemon.containerdCli != nil { daemon.containerdCli.Close() } if daemon.mdDB != nil { daemon.mdDB.Close() } return daemon.cleanupMounts() } // Mount sets container.BaseFS // (is it not set coming in? why is it unset?) func (daemon *Daemon) Mount(container *container.Container) error { if container.RWLayer == nil { return errors.New("RWLayer of container " + container.ID + " is unexpectedly nil") } dir, err := container.RWLayer.Mount(container.GetMountLabel()) if err != nil { return err } logrus.WithField("container", container.ID).Debugf("container mounted via layerStore: %v", dir) if container.BaseFS != nil && container.BaseFS.Path() != dir.Path() { // The mount path reported by the graph driver should always be trusted on Windows, since the // volume path for a given mounted layer may change over time. This should only be an error // on non-Windows operating systems. 
if runtime.GOOS != "windows" { daemon.Unmount(container) return fmt.Errorf("Error: driver %s is returning inconsistent paths for container %s ('%s' then '%s')", daemon.imageService.GraphDriverName(), container.ID, container.BaseFS, dir) } } container.BaseFS = dir // TODO: combine these fields return nil } // Unmount unsets the container base filesystem func (daemon *Daemon) Unmount(container *container.Container) error { if container.RWLayer == nil { return errors.New("RWLayer of container " + container.ID + " is unexpectedly nil") } if err := container.RWLayer.Unmount(); err != nil { logrus.WithField("container", container.ID).WithError(err).Error("error unmounting container") return err } return nil } // Subnets return the IPv4 and IPv6 subnets of networks that are manager by Docker. func (daemon *Daemon) Subnets() ([]net.IPNet, []net.IPNet) { var v4Subnets []net.IPNet var v6Subnets []net.IPNet managedNetworks := daemon.netController.Networks() for _, managedNetwork := range managedNetworks { v4infos, v6infos := managedNetwork.Info().IpamInfo() for _, info := range v4infos { if info.IPAMData.Pool != nil { v4Subnets = append(v4Subnets, *info.IPAMData.Pool) } } for _, info := range v6infos { if info.IPAMData.Pool != nil { v6Subnets = append(v6Subnets, *info.IPAMData.Pool) } } } return v4Subnets, v6Subnets } // prepareTempDir prepares and returns the default directory to use // for temporary files. // If it doesn't exist, it is created. If it exists, its content is removed. func prepareTempDir(rootDir string) (string, error) { var tmpDir string if tmpDir = os.Getenv("DOCKER_TMPDIR"); tmpDir == "" { tmpDir = filepath.Join(rootDir, "tmp") newName := tmpDir + "-old" if err := os.Rename(tmpDir, newName); err == nil { go func() { if err := os.RemoveAll(newName); err != nil { logrus.Warnf("failed to delete old tmp directory: %s", newName) } }() } else if !os.IsNotExist(err) { logrus.Warnf("failed to rename %s for background deletion: %s. Deleting synchronously", tmpDir, err) if err := os.RemoveAll(tmpDir); err != nil { logrus.Warnf("failed to delete old tmp directory: %s", tmpDir) } } } return tmpDir, idtools.MkdirAllAndChown(tmpDir, 0700, idtools.CurrentIdentity()) } func (daemon *Daemon) setGenericResources(conf *config.Config) error { genericResources, err := config.ParseGenericResources(conf.NodeGenericResources) if err != nil { return err } daemon.genericResources = genericResources return nil } func setDefaultMtu(conf *config.Config) { // do nothing if the config does not have the default 0 value. if conf.Mtu != 0 { return } conf.Mtu = config.DefaultNetworkMtu } // IsShuttingDown tells whether the daemon is shutting down or not func (daemon *Daemon) IsShuttingDown() bool { return daemon.shutdown } // initDiscovery initializes the discovery watcher for this daemon. 
func (daemon *Daemon) initDiscovery(conf *config.Config) error { advertise, err := config.ParseClusterAdvertiseSettings(conf.ClusterStore, conf.ClusterAdvertise) if err != nil { if err == discovery.ErrDiscoveryDisabled { return nil } return err } conf.ClusterAdvertise = advertise discoveryWatcher, err := discovery.Init(conf.ClusterStore, conf.ClusterAdvertise, conf.ClusterOpts) if err != nil { return fmt.Errorf("discovery initialization failed (%v)", err) } daemon.discoveryWatcher = discoveryWatcher return nil } func isBridgeNetworkDisabled(conf *config.Config) bool { return conf.BridgeConfig.Iface == config.DisableNetworkBridge } func (daemon *Daemon) networkOptions(dconfig *config.Config, pg plugingetter.PluginGetter, activeSandboxes map[string]interface{}) ([]nwconfig.Option, error) { options := []nwconfig.Option{} if dconfig == nil { return options, nil } options = append(options, nwconfig.OptionExperimental(dconfig.Experimental)) options = append(options, nwconfig.OptionDataDir(dconfig.Root)) options = append(options, nwconfig.OptionExecRoot(dconfig.GetExecRoot())) dd := runconfig.DefaultDaemonNetworkMode() dn := runconfig.DefaultDaemonNetworkMode().NetworkName() options = append(options, nwconfig.OptionDefaultDriver(string(dd))) options = append(options, nwconfig.OptionDefaultNetwork(dn)) if strings.TrimSpace(dconfig.ClusterStore) != "" { kv := strings.Split(dconfig.ClusterStore, "://") if len(kv) != 2 { return nil, errors.New("kv store daemon config must be of the form KV-PROVIDER://KV-URL") } options = append(options, nwconfig.OptionKVProvider(kv[0])) options = append(options, nwconfig.OptionKVProviderURL(kv[1])) } if len(dconfig.ClusterOpts) > 0 { options = append(options, nwconfig.OptionKVOpts(dconfig.ClusterOpts)) } if daemon.discoveryWatcher != nil { options = append(options, nwconfig.OptionDiscoveryWatcher(daemon.discoveryWatcher)) } if dconfig.ClusterAdvertise != "" { options = append(options, nwconfig.OptionDiscoveryAddress(dconfig.ClusterAdvertise)) } options = append(options, nwconfig.OptionLabels(dconfig.Labels)) options = append(options, driverOptions(dconfig)...) if len(dconfig.NetworkConfig.DefaultAddressPools.Value()) > 0 { options = append(options, nwconfig.OptionDefaultAddressPoolConfig(dconfig.NetworkConfig.DefaultAddressPools.Value())) } if daemon.configStore != nil && daemon.configStore.LiveRestoreEnabled && len(activeSandboxes) != 0 { options = append(options, nwconfig.OptionActiveSandboxes(activeSandboxes)) } if pg != nil { options = append(options, nwconfig.OptionPluginGetter(pg)) } options = append(options, nwconfig.OptionNetworkControlPlaneMTU(dconfig.NetworkControlPlaneMTU)) return options, nil } // GetCluster returns the cluster func (daemon *Daemon) GetCluster() Cluster { return daemon.cluster } // SetCluster sets the cluster func (daemon *Daemon) SetCluster(cluster Cluster) { daemon.cluster = cluster } func (daemon *Daemon) pluginShutdown() { manager := daemon.pluginManager // Check for a valid manager object. In error conditions, daemon init can fail // and shutdown called, before plugin manager is initialized. 
if manager != nil { manager.Shutdown() } } // PluginManager returns current pluginManager associated with the daemon func (daemon *Daemon) PluginManager() *plugin.Manager { // set up before daemon to avoid this method return daemon.pluginManager } // PluginGetter returns current pluginStore associated with the daemon func (daemon *Daemon) PluginGetter() *plugin.Store { return daemon.PluginStore } // CreateDaemonRoot creates the root for the daemon func CreateDaemonRoot(config *config.Config) error { // get the canonical path to the Docker root directory var realRoot string if _, err := os.Stat(config.Root); err != nil && os.IsNotExist(err) { realRoot = config.Root } else { realRoot, err = fileutils.ReadSymlinkedDirectory(config.Root) if err != nil { return fmt.Errorf("Unable to get the full path to root (%s): %s", config.Root, err) } } idMapping, err := setupRemappedRoot(config) if err != nil { return err } return setupDaemonRoot(config, realRoot, idMapping.RootPair()) } // checkpointAndSave grabs a container lock to safely call container.CheckpointTo func (daemon *Daemon) checkpointAndSave(container *container.Container) error { container.Lock() defer container.Unlock() if err := container.CheckpointTo(daemon.containersReplica); err != nil { return fmt.Errorf("Error saving container state: %v", err) } return nil } // because the CLI sends a -1 when it wants to unset the swappiness value // we need to clear it on the server side func fixMemorySwappiness(resources *containertypes.Resources) { if resources.MemorySwappiness != nil && *resources.MemorySwappiness == -1 { resources.MemorySwappiness = nil } } // GetAttachmentStore returns current attachment store associated with the daemon func (daemon *Daemon) GetAttachmentStore() *network.AttachmentStore { return &daemon.attachmentStore } // IdentityMapping returns uid/gid mapping or a SID (in the case of Windows) for the builder func (daemon *Daemon) IdentityMapping() *idtools.IdentityMapping { return daemon.idMapping } // ImageService returns the Daemon's ImageService func (daemon *Daemon) ImageService() *images.ImageService { return daemon.imageService } // BuilderBackend returns the backend used by builder func (daemon *Daemon) BuilderBackend() builder.Backend { return struct { *Daemon *images.ImageService }{daemon, daemon.imageService} }
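The fixMemorySwappiness helper above is small enough to exercise on its own. The following is a minimal, standalone sketch (not part of the daemon sources; the main wrapper is illustrative only) showing how a -1 MemorySwappiness value sent by the CLI is cleared server-side so that no explicit swappiness setting is forwarded; it assumes only the public github.com/docker/docker/api/types/container package.

package main

import (
	"fmt"

	containertypes "github.com/docker/docker/api/types/container"
)

// fixMemorySwappiness mirrors the daemon helper above: the CLI sends -1 to
// mean "unset", and the daemon clears the pointer instead of forwarding -1.
func fixMemorySwappiness(resources *containertypes.Resources) {
	if resources.MemorySwappiness != nil && *resources.MemorySwappiness == -1 {
		resources.MemorySwappiness = nil
	}
}

func main() {
	swappiness := int64(-1)
	res := containertypes.Resources{MemorySwappiness: &swappiness}
	fixMemorySwappiness(&res)
	fmt.Println(res.MemorySwappiness == nil) // true: -1 is treated as "leave unset"
}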
// Package daemon exposes the functions that occur on the host server // that the Docker daemon is running. // // In implementing the various functions of the daemon, there is often // a method-specific struct for configuring the runtime behavior. package daemon // import "github.com/docker/docker/daemon" import ( "context" "fmt" "io/ioutil" "net" "net/url" "os" "path" "path/filepath" "runtime" "strings" "sync" "time" "github.com/containerd/containerd" "github.com/containerd/containerd/defaults" "github.com/containerd/containerd/pkg/dialer" "github.com/containerd/containerd/pkg/userns" "github.com/containerd/containerd/remotes/docker" "github.com/docker/docker/api/types" containertypes "github.com/docker/docker/api/types/container" "github.com/docker/docker/api/types/swarm" "github.com/docker/docker/builder" "github.com/docker/docker/container" "github.com/docker/docker/daemon/config" "github.com/docker/docker/daemon/discovery" "github.com/docker/docker/daemon/events" "github.com/docker/docker/daemon/exec" _ "github.com/docker/docker/daemon/graphdriver/register" // register graph drivers "github.com/docker/docker/daemon/images" "github.com/docker/docker/daemon/logger" "github.com/docker/docker/daemon/network" "github.com/docker/docker/daemon/stats" dmetadata "github.com/docker/docker/distribution/metadata" "github.com/docker/docker/dockerversion" "github.com/docker/docker/errdefs" "github.com/docker/docker/image" "github.com/docker/docker/layer" "github.com/docker/docker/libcontainerd" libcontainerdtypes "github.com/docker/docker/libcontainerd/types" "github.com/docker/docker/libnetwork" "github.com/docker/docker/libnetwork/cluster" nwconfig "github.com/docker/docker/libnetwork/config" "github.com/docker/docker/pkg/fileutils" "github.com/docker/docker/pkg/idtools" "github.com/docker/docker/pkg/plugingetter" "github.com/docker/docker/pkg/system" "github.com/docker/docker/pkg/truncindex" "github.com/docker/docker/plugin" pluginexec "github.com/docker/docker/plugin/executor/containerd" refstore "github.com/docker/docker/reference" "github.com/docker/docker/registry" "github.com/docker/docker/runconfig" volumesservice "github.com/docker/docker/volume/service" "github.com/moby/buildkit/util/resolver" "github.com/moby/locker" "github.com/pkg/errors" "github.com/sirupsen/logrus" "go.etcd.io/bbolt" "golang.org/x/sync/semaphore" "golang.org/x/sync/singleflight" "google.golang.org/grpc" "google.golang.org/grpc/backoff" ) // ContainersNamespace is the name of the namespace used for users containers const ( ContainersNamespace = "moby" ) var ( errSystemNotSupported = errors.New("the Docker daemon is not supported on this platform") ) // Daemon holds information about the Docker daemon. 
type Daemon struct { ID string repository string containers container.Store containersReplica container.ViewDB execCommands *exec.Store imageService *images.ImageService idIndex *truncindex.TruncIndex configStore *config.Config statsCollector *stats.Collector defaultLogConfig containertypes.LogConfig RegistryService registry.Service EventsService *events.Events netController libnetwork.NetworkController volumes *volumesservice.VolumesService discoveryWatcher discovery.Reloader root string seccompEnabled bool apparmorEnabled bool shutdown bool idMapping *idtools.IdentityMapping graphDriver string // TODO: move graphDriver field to an InfoService PluginStore *plugin.Store // TODO: remove pluginManager *plugin.Manager linkIndex *linkIndex containerdCli *containerd.Client containerd libcontainerdtypes.Client defaultIsolation containertypes.Isolation // Default isolation mode on Windows clusterProvider cluster.Provider cluster Cluster genericResources []swarm.GenericResource metricsPluginListener net.Listener machineMemory uint64 seccompProfile []byte seccompProfilePath string usage singleflight.Group pruneRunning int32 hosts map[string]bool // hosts stores the addresses the daemon is listening on startupDone chan struct{} attachmentStore network.AttachmentStore attachableNetworkLock *locker.Locker // This is used for Windows which doesn't currently support running on containerd // It stores metadata for the content store (used for manifest caching) // This needs to be closed on daemon exit mdDB *bbolt.DB } // StoreHosts stores the addresses the daemon is listening on func (daemon *Daemon) StoreHosts(hosts []string) { if daemon.hosts == nil { daemon.hosts = make(map[string]bool) } for _, h := range hosts { daemon.hosts[h] = true } } // HasExperimental returns whether the experimental features of the daemon are enabled or not func (daemon *Daemon) HasExperimental() bool { return daemon.configStore != nil && daemon.configStore.Experimental } // Features returns the features map from configStore func (daemon *Daemon) Features() *map[string]bool { return &daemon.configStore.Features } // RegistryHosts returns registry configuration in containerd resolvers format func (daemon *Daemon) RegistryHosts() docker.RegistryHosts { var ( registryKey = "docker.io" mirrors = make([]string, len(daemon.configStore.Mirrors)) m = map[string]resolver.RegistryConfig{} ) // must trim "https://" or "http://" prefix for i, v := range daemon.configStore.Mirrors { if uri, err := url.Parse(v); err == nil { v = uri.Host } mirrors[i] = v } // set mirrors for default registry m[registryKey] = resolver.RegistryConfig{Mirrors: mirrors} for _, v := range daemon.configStore.InsecureRegistries { u, err := url.Parse(v) c := resolver.RegistryConfig{} if err == nil { v = u.Host t := true if u.Scheme == "http" { c.PlainHTTP = &t } else { c.Insecure = &t } } m[v] = c } for k, v := range m { if d, err := registry.HostCertsDir(k); err == nil { v.TLSConfigDir = []string{d} m[k] = v } } certsDir := registry.CertsDir() if fis, err := ioutil.ReadDir(certsDir); err == nil { for _, fi := range fis { if _, ok := m[fi.Name()]; !ok { m[fi.Name()] = resolver.RegistryConfig{ TLSConfigDir: []string{filepath.Join(certsDir, fi.Name())}, } } } } return resolver.NewRegistryConfig(m) } func (daemon *Daemon) restore() error { var mapLock sync.Mutex containers := make(map[string]*container.Container) logrus.Info("Loading containers: start.") dir, err := ioutil.ReadDir(daemon.repository) if err != nil { return err } // parallelLimit is the maximum number 
of parallel startup jobs that we // allow (this is the limited used for all startup semaphores). The multipler // (128) was chosen after some fairly significant benchmarking -- don't change // it unless you've tested it significantly (this value is adjusted if // RLIMIT_NOFILE is small to avoid EMFILE). parallelLimit := adjustParallelLimit(len(dir), 128*runtime.NumCPU()) // Re-used for all parallel startup jobs. var group sync.WaitGroup sem := semaphore.NewWeighted(int64(parallelLimit)) for _, v := range dir { group.Add(1) go func(id string) { defer group.Done() _ = sem.Acquire(context.Background(), 1) defer sem.Release(1) log := logrus.WithField("container", id) c, err := daemon.load(id) if err != nil { log.WithError(err).Error("failed to load container") return } if !system.IsOSSupported(c.OS) { log.Errorf("failed to load container: %s (%q)", system.ErrNotSupportedOperatingSystem, c.OS) return } // Ignore the container if it does not support the current driver being used by the graph if (c.Driver == "" && daemon.graphDriver == "aufs") || c.Driver == daemon.graphDriver { rwlayer, err := daemon.imageService.GetLayerByID(c.ID) if err != nil { log.WithError(err).Error("failed to load container mount") return } c.RWLayer = rwlayer log.WithFields(logrus.Fields{ "running": c.IsRunning(), "paused": c.IsPaused(), }).Debug("loaded container") mapLock.Lock() containers[c.ID] = c mapLock.Unlock() } else { log.Debugf("cannot load container because it was created with another storage driver") } }(v.Name()) } group.Wait() removeContainers := make(map[string]*container.Container) restartContainers := make(map[*container.Container]chan struct{}) activeSandboxes := make(map[string]interface{}) for _, c := range containers { group.Add(1) go func(c *container.Container) { defer group.Done() _ = sem.Acquire(context.Background(), 1) defer sem.Release(1) log := logrus.WithField("container", c.ID) if err := daemon.registerName(c); err != nil { log.WithError(err).Errorf("failed to register container name: %s", c.Name) mapLock.Lock() delete(containers, c.ID) mapLock.Unlock() return } if err := daemon.Register(c); err != nil { log.WithError(err).Error("failed to register container") mapLock.Lock() delete(containers, c.ID) mapLock.Unlock() return } }(c) } group.Wait() for _, c := range containers { group.Add(1) go func(c *container.Container) { defer group.Done() _ = sem.Acquire(context.Background(), 1) defer sem.Release(1) log := logrus.WithField("container", c.ID) daemon.backportMountSpec(c) if err := daemon.checkpointAndSave(c); err != nil { log.WithError(err).Error("error saving backported mountspec to disk") } daemon.setStateCounter(c) logger := func(c *container.Container) *logrus.Entry { return log.WithFields(logrus.Fields{ "running": c.IsRunning(), "paused": c.IsPaused(), "restarting": c.IsRestarting(), }) } logger(c).Debug("restoring container") var ( err error alive bool ec uint32 exitedAt time.Time process libcontainerdtypes.Process ) alive, _, process, err = daemon.containerd.Restore(context.Background(), c.ID, c.InitializeStdio) if err != nil && !errdefs.IsNotFound(err) { logger(c).WithError(err).Error("failed to restore container with containerd") return } logger(c).Debugf("alive: %v", alive) if !alive { // If process is not nil, cleanup dead container from containerd. // If process is nil then the above `containerd.Restore` returned an errdefs.NotFoundError, // and docker's view of the container state will be updated accorrdingly via SetStopped further down. 
if process != nil { logger(c).Debug("cleaning up dead container process") ec, exitedAt, err = process.Delete(context.Background()) if err != nil && !errdefs.IsNotFound(err) { logger(c).WithError(err).Error("failed to delete container from containerd") return } } } else if !daemon.configStore.LiveRestoreEnabled { logger(c).Debug("shutting down container considered alive by containerd") if err := daemon.shutdownContainer(c); err != nil && !errdefs.IsNotFound(err) { log.WithError(err).Error("error shutting down container") return } c.ResetRestartManager(false) } if c.IsRunning() || c.IsPaused() { logger(c).Debug("syncing container on disk state with real state") c.RestartManager().Cancel() // manually start containers because some need to wait for swarm networking if c.IsPaused() && alive { s, err := daemon.containerd.Status(context.Background(), c.ID) if err != nil { logger(c).WithError(err).Error("failed to get container status") } else { logger(c).WithField("state", s).Info("restored container paused") switch s { case containerd.Paused, containerd.Pausing: // nothing to do case containerd.Stopped: alive = false case containerd.Unknown: log.Error("unknown status for paused container during restore") default: // running c.Lock() c.Paused = false daemon.setStateCounter(c) if err := c.CheckpointTo(daemon.containersReplica); err != nil { log.WithError(err).Error("failed to update paused container state") } c.Unlock() } } } if !alive { logger(c).Debug("setting stopped state") c.Lock() c.SetStopped(&container.ExitStatus{ExitCode: int(ec), ExitedAt: exitedAt}) daemon.Cleanup(c) if err := c.CheckpointTo(daemon.containersReplica); err != nil { log.WithError(err).Error("failed to update stopped container state") } c.Unlock() logger(c).Debug("set stopped state") } // we call Mount and then Unmount to get BaseFs of the container if err := daemon.Mount(c); err != nil { // The mount is unlikely to fail. However, in case mount fails // the container should be allowed to restore here. Some functionalities // (like docker exec -u user) might be missing but container is able to be // stopped/restarted/removed. // See #29365 for related information. // The error is only logged here. logger(c).WithError(err).Warn("failed to mount container to get BaseFs path") } else { if err := daemon.Unmount(c); err != nil { logger(c).WithError(err).Warn("failed to umount container to get BaseFs path") } } c.ResetRestartManager(false) if !c.HostConfig.NetworkMode.IsContainer() && c.IsRunning() { options, err := daemon.buildSandboxOptions(c) if err != nil { logger(c).WithError(err).Warn("failed to build sandbox option to restore container") } mapLock.Lock() activeSandboxes[c.NetworkSettings.SandboxID] = options mapLock.Unlock() } } // get list of containers we need to restart // Do not autostart containers which // has endpoints in a swarm scope // network yet since the cluster is // not initialized yet. We will start // it after the cluster is // initialized. if daemon.configStore.AutoRestart && c.ShouldRestart() && !c.NetworkSettings.HasSwarmEndpoint && c.HasBeenStartedBefore { mapLock.Lock() restartContainers[c] = make(chan struct{}) mapLock.Unlock() } else if c.HostConfig != nil && c.HostConfig.AutoRemove { mapLock.Lock() removeContainers[c.ID] = c mapLock.Unlock() } c.Lock() if c.RemovalInProgress { // We probably crashed in the middle of a removal, reset // the flag. 
// // We DO NOT remove the container here as we do not // know if the user had requested for either the // associated volumes, network links or both to also // be removed. So we put the container in the "dead" // state and leave further processing up to them. c.RemovalInProgress = false c.Dead = true if err := c.CheckpointTo(daemon.containersReplica); err != nil { log.WithError(err).Error("failed to update RemovalInProgress container state") } else { log.Debugf("reset RemovalInProgress state for container") } } c.Unlock() logger(c).Debug("done restoring container") }(c) } group.Wait() daemon.netController, err = daemon.initNetworkController(daemon.configStore, activeSandboxes) if err != nil { return fmt.Errorf("Error initializing network controller: %v", err) } // Now that all the containers are registered, register the links for _, c := range containers { group.Add(1) go func(c *container.Container) { _ = sem.Acquire(context.Background(), 1) if err := daemon.registerLinks(c, c.HostConfig); err != nil { logrus.WithField("container", c.ID).WithError(err).Error("failed to register link for container") } sem.Release(1) group.Done() }(c) } group.Wait() for c, notifier := range restartContainers { group.Add(1) go func(c *container.Container, chNotify chan struct{}) { _ = sem.Acquire(context.Background(), 1) log := logrus.WithField("container", c.ID) log.Debug("starting container") // ignore errors here as this is a best effort to wait for children to be // running before we try to start the container children := daemon.children(c) timeout := time.NewTimer(5 * time.Second) defer timeout.Stop() for _, child := range children { if notifier, exists := restartContainers[child]; exists { select { case <-notifier: case <-timeout.C: } } } // Make sure networks are available before starting daemon.waitForNetworks(c) if err := daemon.containerStart(c, "", "", true); err != nil { log.WithError(err).Error("failed to start container") } close(chNotify) sem.Release(1) group.Done() }(c, notifier) } group.Wait() for id := range removeContainers { group.Add(1) go func(cid string) { _ = sem.Acquire(context.Background(), 1) if err := daemon.ContainerRm(cid, &types.ContainerRmConfig{ForceRemove: true, RemoveVolume: true}); err != nil { logrus.WithField("container", cid).WithError(err).Error("failed to remove container") } sem.Release(1) group.Done() }(id) } group.Wait() // any containers that were started above would already have had this done, // however we need to now prepare the mountpoints for the rest of the containers as well. // This shouldn't cause any issue running on the containers that already had this run. // This must be run after any containers with a restart policy so that containerized plugins // can have a chance to be running before we try to initialize them. for _, c := range containers { // if the container has restart policy, do not // prepare the mountpoints since it has been done on restarting. // This is to speed up the daemon start when a restart container // has a volume and the volume driver is not available. if _, ok := restartContainers[c]; ok { continue } else if _, ok := removeContainers[c.ID]; ok { // container is automatically removed, skip it. 
continue } group.Add(1) go func(c *container.Container) { _ = sem.Acquire(context.Background(), 1) if err := daemon.prepareMountPoints(c); err != nil { logrus.WithField("container", c.ID).WithError(err).Error("failed to prepare mountpoints for container") } sem.Release(1) group.Done() }(c) } group.Wait() logrus.Info("Loading containers: done.") return nil } // RestartSwarmContainers restarts any autostart container which has a // swarm endpoint. func (daemon *Daemon) RestartSwarmContainers() { ctx := context.Background() // parallelLimit is the maximum number of parallel startup jobs that we // allow (this is the limited used for all startup semaphores). The multipler // (128) was chosen after some fairly significant benchmarking -- don't change // it unless you've tested it significantly (this value is adjusted if // RLIMIT_NOFILE is small to avoid EMFILE). parallelLimit := adjustParallelLimit(len(daemon.List()), 128*runtime.NumCPU()) var group sync.WaitGroup sem := semaphore.NewWeighted(int64(parallelLimit)) for _, c := range daemon.List() { if !c.IsRunning() && !c.IsPaused() { // Autostart all the containers which has a // swarm endpoint now that the cluster is // initialized. if daemon.configStore.AutoRestart && c.ShouldRestart() && c.NetworkSettings.HasSwarmEndpoint && c.HasBeenStartedBefore { group.Add(1) go func(c *container.Container) { if err := sem.Acquire(ctx, 1); err != nil { // ctx is done. group.Done() return } if err := daemon.containerStart(c, "", "", true); err != nil { logrus.WithField("container", c.ID).WithError(err).Error("failed to start swarm container") } sem.Release(1) group.Done() }(c) } } } group.Wait() } // waitForNetworks is used during daemon initialization when starting up containers // It ensures that all of a container's networks are available before the daemon tries to start the container. // In practice it just makes sure the discovery service is available for containers which use a network that require discovery. func (daemon *Daemon) waitForNetworks(c *container.Container) { if daemon.discoveryWatcher == nil { return } // Make sure if the container has a network that requires discovery that the discovery service is available before starting for netName := range c.NetworkSettings.Networks { // If we get `ErrNoSuchNetwork` here, we can assume that it is due to discovery not being ready // Most likely this is because the K/V store used for discovery is in a container and needs to be started if _, err := daemon.netController.NetworkByName(netName); err != nil { if _, ok := err.(libnetwork.ErrNoSuchNetwork); !ok { continue } // use a longish timeout here due to some slowdowns in libnetwork if the k/v store is on anything other than --net=host // FIXME: why is this slow??? dur := 60 * time.Second timer := time.NewTimer(dur) logrus.WithField("container", c.ID).Debugf("Container %s waiting for network to be ready", c.Name) select { case <-daemon.discoveryWatcher.ReadyCh(): case <-timer.C: } timer.Stop() return } } } func (daemon *Daemon) children(c *container.Container) map[string]*container.Container { return daemon.linkIndex.children(c) } // parents returns the names of the parent containers of the container // with the given name. 
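// (Despite the wording above, parents returns the parent containers themselves,
// as tracked in the daemon's linkIndex, not just their names.)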
func (daemon *Daemon) parents(c *container.Container) map[string]*container.Container { return daemon.linkIndex.parents(c) } func (daemon *Daemon) registerLink(parent, child *container.Container, alias string) error { fullName := path.Join(parent.Name, alias) if err := daemon.containersReplica.ReserveName(fullName, child.ID); err != nil { if err == container.ErrNameReserved { logrus.Warnf("error registering link for %s, to %s, as alias %s, ignoring: %v", parent.ID, child.ID, alias, err) return nil } return err } daemon.linkIndex.link(parent, child, fullName) return nil } // DaemonJoinsCluster informs the daemon has joined the cluster and provides // the handler to query the cluster component func (daemon *Daemon) DaemonJoinsCluster(clusterProvider cluster.Provider) { daemon.setClusterProvider(clusterProvider) } // DaemonLeavesCluster informs the daemon has left the cluster func (daemon *Daemon) DaemonLeavesCluster() { // Daemon is in charge of removing the attachable networks with // connected containers when the node leaves the swarm daemon.clearAttachableNetworks() // We no longer need the cluster provider, stop it now so that // the network agent will stop listening to cluster events. daemon.setClusterProvider(nil) // Wait for the networking cluster agent to stop daemon.netController.AgentStopWait() // Daemon is in charge of removing the ingress network when the // node leaves the swarm. Wait for job to be done or timeout. // This is called also on graceful daemon shutdown. We need to // wait, because the ingress release has to happen before the // network controller is stopped. if done, err := daemon.ReleaseIngress(); err == nil { timeout := time.NewTimer(5 * time.Second) defer timeout.Stop() select { case <-done: case <-timeout.C: logrus.Warn("timeout while waiting for ingress network removal") } } else { logrus.Warnf("failed to initiate ingress network removal: %v", err) } daemon.attachmentStore.ClearAttachments() } // setClusterProvider sets a component for querying the current cluster state. func (daemon *Daemon) setClusterProvider(clusterProvider cluster.Provider) { daemon.clusterProvider = clusterProvider daemon.netController.SetClusterProvider(clusterProvider) daemon.attachableNetworkLock = locker.New() } // IsSwarmCompatible verifies if the current daemon // configuration is compatible with the swarm mode func (daemon *Daemon) IsSwarmCompatible() error { if daemon.configStore == nil { return nil } return daemon.configStore.IsSwarmCompatible() } // NewDaemon sets up everything for the daemon to be able to service // requests from the webserver. func NewDaemon(ctx context.Context, config *config.Config, pluginStore *plugin.Store) (daemon *Daemon, err error) { setDefaultMtu(config) registryService, err := registry.NewService(config.ServiceOptions) if err != nil { return nil, err } // Ensure that we have a correct root key limit for launching containers. if err := modifyRootKeyLimit(); err != nil { logrus.Warnf("unable to modify root key limit, number of containers could be limited by this quota: %v", err) } // Ensure we have compatible and valid configuration options if err := verifyDaemonSettings(config); err != nil { return nil, err } // Do we have a disabled network? 
config.DisableBridge = isBridgeNetworkDisabled(config) // Setup the resolv.conf setupResolvConf(config) // Verify the platform is supported as a daemon if !platformSupported { return nil, errSystemNotSupported } // Validate platform-specific requirements if err := checkSystem(); err != nil { return nil, err } idMapping, err := setupRemappedRoot(config) if err != nil { return nil, err } rootIDs := idMapping.RootPair() if err := setupDaemonProcess(config); err != nil { return nil, err } // set up the tmpDir to use a canonical path tmp, err := prepareTempDir(config.Root) if err != nil { return nil, fmt.Errorf("Unable to get the TempDir under %s: %s", config.Root, err) } realTmp, err := fileutils.ReadSymlinkedDirectory(tmp) if err != nil { return nil, fmt.Errorf("Unable to get the full path to the TempDir (%s): %s", tmp, err) } if isWindows { if _, err := os.Stat(realTmp); err != nil && os.IsNotExist(err) { if err := system.MkdirAll(realTmp, 0700); err != nil { return nil, fmt.Errorf("Unable to create the TempDir (%s): %s", realTmp, err) } } os.Setenv("TEMP", realTmp) os.Setenv("TMP", realTmp) } else { os.Setenv("TMPDIR", realTmp) } d := &Daemon{ configStore: config, PluginStore: pluginStore, startupDone: make(chan struct{}), } // Ensure the daemon is properly shutdown if there is a failure during // initialization defer func() { if err != nil { if err := d.Shutdown(); err != nil { logrus.Error(err) } } }() if err := d.setGenericResources(config); err != nil { return nil, err } // set up SIGUSR1 handler on Unix-like systems, or a Win32 global event // on Windows to dump Go routine stacks stackDumpDir := config.Root if execRoot := config.GetExecRoot(); execRoot != "" { stackDumpDir = execRoot } d.setupDumpStackTrap(stackDumpDir) if err := d.setupSeccompProfile(); err != nil { return nil, err } // Set the default isolation mode (only applicable on Windows) if err := d.setDefaultIsolation(); err != nil { return nil, fmt.Errorf("error setting default isolation mode: %v", err) } if err := configureMaxThreads(config); err != nil { logrus.Warnf("Failed to configure golang's threads limit: %v", err) } // ensureDefaultAppArmorProfile does nothing if apparmor is disabled if err := ensureDefaultAppArmorProfile(); err != nil { logrus.Errorf(err.Error()) } daemonRepo := filepath.Join(config.Root, "containers") if err := idtools.MkdirAllAndChown(daemonRepo, 0701, idtools.CurrentIdentity()); err != nil { return nil, err } // Create the directory where we'll store the runtime scripts (i.e. in // order to support runtimeArgs) daemonRuntimes := filepath.Join(config.Root, "runtimes") if err := system.MkdirAll(daemonRuntimes, 0700); err != nil { return nil, err } if err := d.loadRuntimes(); err != nil { return nil, err } if isWindows { if err := system.MkdirAll(filepath.Join(config.Root, "credentialspecs"), 0); err != nil { return nil, err } } if isWindows { // On Windows we don't support the environment variable, or a user supplied graphdriver d.graphDriver = "windowsfilter" } else { // Unix platforms however run a single graphdriver for all containers, and it can // be set through an environment variable, a daemon start parameter, or chosen through // initialization of the layerstore through driver priority order for example. if drv := os.Getenv("DOCKER_DRIVER"); drv != "" { d.graphDriver = drv logrus.Infof("Setting the storage driver from the $DOCKER_DRIVER environment variable (%s)", drv) } else { d.graphDriver = config.GraphDriver // May still be empty. Layerstore init determines instead. 
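// (The effective driver name is read back from the layer store via
// layerStore.DriverName() once the layer store has been initialized below.)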
} } d.RegistryService = registryService logger.RegisterPluginGetter(d.PluginStore) metricsSockPath, err := d.listenMetricsSock() if err != nil { return nil, err } registerMetricsPluginCallback(d.PluginStore, metricsSockPath) backoffConfig := backoff.DefaultConfig backoffConfig.MaxDelay = 3 * time.Second connParams := grpc.ConnectParams{ Backoff: backoffConfig, } gopts := []grpc.DialOption{ // WithBlock makes sure that the following containerd request // is reliable. // // NOTE: In one edge case with high load pressure, kernel kills // dockerd, containerd and containerd-shims caused by OOM. // When both dockerd and containerd restart, but containerd // will take time to recover all the existing containers. Before // containerd serving, dockerd will failed with gRPC error. // That bad thing is that restore action will still ignore the // any non-NotFound errors and returns running state for // already stopped container. It is unexpected behavior. And // we need to restart dockerd to make sure that anything is OK. // // It is painful. Add WithBlock can prevent the edge case. And // n common case, the containerd will be serving in shortly. // It is not harm to add WithBlock for containerd connection. grpc.WithBlock(), grpc.WithInsecure(), grpc.WithConnectParams(connParams), grpc.WithContextDialer(dialer.ContextDialer), // TODO(stevvooe): We may need to allow configuration of this on the client. grpc.WithDefaultCallOptions(grpc.MaxCallRecvMsgSize(defaults.DefaultMaxRecvMsgSize)), grpc.WithDefaultCallOptions(grpc.MaxCallSendMsgSize(defaults.DefaultMaxSendMsgSize)), } if config.ContainerdAddr != "" { d.containerdCli, err = containerd.New(config.ContainerdAddr, containerd.WithDefaultNamespace(config.ContainerdNamespace), containerd.WithDialOpts(gopts), containerd.WithTimeout(60*time.Second)) if err != nil { return nil, errors.Wrapf(err, "failed to dial %q", config.ContainerdAddr) } } createPluginExec := func(m *plugin.Manager) (plugin.Executor, error) { var pluginCli *containerd.Client // Windows is not currently using containerd, keep the // client as nil if config.ContainerdAddr != "" { pluginCli, err = containerd.New(config.ContainerdAddr, containerd.WithDefaultNamespace(config.ContainerdPluginNamespace), containerd.WithDialOpts(gopts), containerd.WithTimeout(60*time.Second)) if err != nil { return nil, errors.Wrapf(err, "failed to dial %q", config.ContainerdAddr) } } var rt types.Runtime if runtime.GOOS != "windows" { rtPtr, err := d.getRuntime(config.GetDefaultRuntimeName()) if err != nil { return nil, err } rt = *rtPtr } return pluginexec.New(ctx, getPluginExecRoot(config.Root), pluginCli, config.ContainerdPluginNamespace, m, rt) } // Plugin system initialization should happen before restore. Do not change order. 
d.pluginManager, err = plugin.NewManager(plugin.ManagerConfig{ Root: filepath.Join(config.Root, "plugins"), ExecRoot: getPluginExecRoot(config.Root), Store: d.PluginStore, CreateExecutor: createPluginExec, RegistryService: registryService, LiveRestoreEnabled: config.LiveRestoreEnabled, LogPluginEvent: d.LogPluginEvent, // todo: make private AuthzMiddleware: config.AuthzMiddleware, }) if err != nil { return nil, errors.Wrap(err, "couldn't create plugin manager") } if err := d.setupDefaultLogConfig(); err != nil { return nil, err } layerStore, err := layer.NewStoreFromOptions(layer.StoreOptions{ Root: config.Root, MetadataStorePathTemplate: filepath.Join(config.Root, "image", "%s", "layerdb"), GraphDriver: d.graphDriver, GraphDriverOptions: config.GraphOptions, IDMapping: idMapping, PluginGetter: d.PluginStore, ExperimentalEnabled: config.Experimental, OS: runtime.GOOS, }) if err != nil { return nil, err } // As layerstore initialization may set the driver d.graphDriver = layerStore.DriverName() // Configure and validate the kernels security support. Note this is a Linux/FreeBSD // operation only, so it is safe to pass *just* the runtime OS graphdriver. if err := configureKernelSecuritySupport(config, d.graphDriver); err != nil { return nil, err } imageRoot := filepath.Join(config.Root, "image", d.graphDriver) ifs, err := image.NewFSStoreBackend(filepath.Join(imageRoot, "imagedb")) if err != nil { return nil, err } imageStore, err := image.NewImageStore(ifs, layerStore) if err != nil { return nil, err } d.volumes, err = volumesservice.NewVolumeService(config.Root, d.PluginStore, rootIDs, d) if err != nil { return nil, err } trustKey, err := loadOrCreateTrustKey(config.TrustKeyPath) if err != nil { return nil, err } trustDir := filepath.Join(config.Root, "trust") if err := system.MkdirAll(trustDir, 0700); err != nil { return nil, err } // We have a single tag/reference store for the daemon globally. However, it's // stored under the graphdriver. On host platforms which only support a single // container OS, but multiple selectable graphdrivers, this means depending on which // graphdriver is chosen, the global reference store is under there. For // platforms which support multiple container operating systems, this is slightly // more problematic as where does the global ref store get located? Fortunately, // for Windows, which is currently the only daemon supporting multiple container // operating systems, the list of graphdrivers available isn't user configurable. // For backwards compatibility, we just put it under the windowsfilter // directory regardless. refStoreLocation := filepath.Join(imageRoot, `repositories.json`) rs, err := refstore.NewReferenceStore(refStoreLocation) if err != nil { return nil, fmt.Errorf("Couldn't create reference store repository: %s", err) } distributionMetadataStore, err := dmetadata.NewFSMetadataStore(filepath.Join(imageRoot, "distribution")) if err != nil { return nil, err } // Discovery is only enabled when the daemon is launched with an address to advertise. When // initialized, the daemon is registered and we can store the discovery backend as it's read-only if err := d.initDiscovery(config); err != nil { return nil, err } sysInfo := d.RawSysInfo() for _, w := range sysInfo.Warnings { logrus.Warn(w) } // Check if Devices cgroup is mounted, it is hard requirement for container security, // on Linux. 
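// (The check is skipped when the daemon itself runs inside a user namespace,
// e.g. in rootless mode, where the devices cgroup is typically unavailable.)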
if runtime.GOOS == "linux" && !sysInfo.CgroupDevicesEnabled && !userns.RunningInUserNS() { return nil, errors.New("Devices cgroup isn't mounted") } d.ID = trustKey.PublicKey().KeyID() d.repository = daemonRepo d.containers = container.NewMemoryStore() if d.containersReplica, err = container.NewViewDB(); err != nil { return nil, err } d.execCommands = exec.NewStore() d.idIndex = truncindex.NewTruncIndex([]string{}) d.statsCollector = d.newStatsCollector(1 * time.Second) d.EventsService = events.New() d.root = config.Root d.idMapping = idMapping d.seccompEnabled = sysInfo.Seccomp d.apparmorEnabled = sysInfo.AppArmor d.linkIndex = newLinkIndex() imgSvcConfig := images.ImageServiceConfig{ ContainerStore: d.containers, DistributionMetadataStore: distributionMetadataStore, EventsService: d.EventsService, ImageStore: imageStore, LayerStore: layerStore, MaxConcurrentDownloads: *config.MaxConcurrentDownloads, MaxConcurrentUploads: *config.MaxConcurrentUploads, MaxDownloadAttempts: *config.MaxDownloadAttempts, ReferenceStore: rs, RegistryService: registryService, TrustKey: trustKey, ContentNamespace: config.ContainerdNamespace, } // containerd is not currently supported with Windows. // So sometimes d.containerdCli will be nil // In that case we'll create a local content store... but otherwise we'll use containerd if d.containerdCli != nil { imgSvcConfig.Leases = d.containerdCli.LeasesService() imgSvcConfig.ContentStore = d.containerdCli.ContentStore() } else { cs, lm, err := d.configureLocalContentStore() if err != nil { return nil, err } imgSvcConfig.ContentStore = cs imgSvcConfig.Leases = lm } // TODO: imageStore, distributionMetadataStore, and ReferenceStore are only // used above to run migration. They could be initialized in ImageService // if migration is called from daemon/images. layerStore might move as well. d.imageService = images.NewImageService(imgSvcConfig) go d.execCommandGC() d.containerd, err = libcontainerd.NewClient(ctx, d.containerdCli, filepath.Join(config.ExecRoot, "containerd"), config.ContainerdNamespace, d) if err != nil { return nil, err } if err := d.restore(); err != nil { return nil, err } close(d.startupDone) info := d.SystemInfo() engineInfo.WithValues( dockerversion.Version, dockerversion.GitCommit, info.Architecture, info.Driver, info.KernelVersion, info.OperatingSystem, info.OSType, info.OSVersion, info.ID, ).Set(1) engineCpus.Set(float64(info.NCPU)) engineMemory.Set(float64(info.MemTotal)) logrus.WithFields(logrus.Fields{ "version": dockerversion.Version, "commit": dockerversion.GitCommit, "graphdriver": d.graphDriver, }).Info("Docker daemon") return d, nil } // DistributionServices returns services controlling daemon storage func (daemon *Daemon) DistributionServices() images.DistributionServices { return daemon.imageService.DistributionServices() } func (daemon *Daemon) waitForStartupDone() { <-daemon.startupDone } func (daemon *Daemon) shutdownContainer(c *container.Container) error { stopTimeout := c.StopTimeout() // If container failed to exit in stopTimeout seconds of SIGTERM, then using the force if err := daemon.containerStop(c, stopTimeout); err != nil { return fmt.Errorf("Failed to stop container %s with error: %v", c.ID, err) } // Wait without timeout for the container to exit. // Ignore the result. <-c.Wait(context.Background(), container.WaitConditionNotRunning) return nil } // ShutdownTimeout returns the timeout (in seconds) before containers are forcibly // killed during shutdown. 
The default timeout can be configured both on the daemon // and per container, and the longest timeout will be used. A grace-period of // 5 seconds is added to the configured timeout. // // A negative (-1) timeout means "indefinitely", which means that containers // are not forcibly killed, and the daemon shuts down after all containers exit. func (daemon *Daemon) ShutdownTimeout() int { shutdownTimeout := daemon.configStore.ShutdownTimeout if shutdownTimeout < 0 { return -1 } if daemon.containers == nil { return shutdownTimeout } graceTimeout := 5 for _, c := range daemon.containers.List() { stopTimeout := c.StopTimeout() if stopTimeout < 0 { return -1 } if stopTimeout+graceTimeout > shutdownTimeout { shutdownTimeout = stopTimeout + graceTimeout } } return shutdownTimeout } // Shutdown stops the daemon. func (daemon *Daemon) Shutdown() error { daemon.shutdown = true // Keep mounts and networking running on daemon shutdown if // we are to keep containers running and restore them. if daemon.configStore.LiveRestoreEnabled && daemon.containers != nil { // check if there are any running containers, if none we should do some cleanup if ls, err := daemon.Containers(&types.ContainerListOptions{}); len(ls) != 0 || err != nil { // metrics plugins still need some cleanup daemon.cleanupMetricsPlugins() return nil } } if daemon.containers != nil { logrus.Debugf("daemon configured with a %d seconds minimum shutdown timeout", daemon.configStore.ShutdownTimeout) logrus.Debugf("start clean shutdown of all containers with a %d seconds timeout...", daemon.ShutdownTimeout()) daemon.containers.ApplyAll(func(c *container.Container) { if !c.IsRunning() { return } log := logrus.WithField("container", c.ID) log.Debug("shutting down container") if err := daemon.shutdownContainer(c); err != nil { log.WithError(err).Error("failed to shut down container") return } if mountid, err := daemon.imageService.GetLayerMountID(c.ID); err == nil { daemon.cleanupMountsByID(mountid) } log.Debugf("shut down container") }) } if daemon.volumes != nil { if err := daemon.volumes.Shutdown(); err != nil { logrus.Errorf("Error shutting down volume store: %v", err) } } if daemon.imageService != nil { daemon.imageService.Cleanup() } // If we are part of a cluster, clean up cluster's stuff if daemon.clusterProvider != nil { logrus.Debugf("start clean shutdown of cluster resources...") daemon.DaemonLeavesCluster() } daemon.cleanupMetricsPlugins() // Shutdown plugins after containers and layerstore. Don't change the order. daemon.pluginShutdown() // trigger libnetwork Stop only if it's initialized if daemon.netController != nil { daemon.netController.Stop() } if daemon.containerdCli != nil { daemon.containerdCli.Close() } if daemon.mdDB != nil { daemon.mdDB.Close() } return daemon.cleanupMounts() } // Mount sets container.BaseFS // (is it not set coming in? why is it unset?) func (daemon *Daemon) Mount(container *container.Container) error { if container.RWLayer == nil { return errors.New("RWLayer of container " + container.ID + " is unexpectedly nil") } dir, err := container.RWLayer.Mount(container.GetMountLabel()) if err != nil { return err } logrus.WithField("container", container.ID).Debugf("container mounted via layerStore: %v", dir) if container.BaseFS != nil && container.BaseFS.Path() != dir.Path() { // The mount path reported by the graph driver should always be trusted on Windows, since the // volume path for a given mounted layer may change over time. This should only be an error // on non-Windows operating systems. 
if runtime.GOOS != "windows" { daemon.Unmount(container) return fmt.Errorf("Error: driver %s is returning inconsistent paths for container %s ('%s' then '%s')", daemon.imageService.GraphDriverName(), container.ID, container.BaseFS, dir) } } container.BaseFS = dir // TODO: combine these fields return nil } // Unmount unsets the container base filesystem func (daemon *Daemon) Unmount(container *container.Container) error { if container.RWLayer == nil { return errors.New("RWLayer of container " + container.ID + " is unexpectedly nil") } if err := container.RWLayer.Unmount(); err != nil { logrus.WithField("container", container.ID).WithError(err).Error("error unmounting container") return err } return nil } // Subnets return the IPv4 and IPv6 subnets of networks that are manager by Docker. func (daemon *Daemon) Subnets() ([]net.IPNet, []net.IPNet) { var v4Subnets []net.IPNet var v6Subnets []net.IPNet managedNetworks := daemon.netController.Networks() for _, managedNetwork := range managedNetworks { v4infos, v6infos := managedNetwork.Info().IpamInfo() for _, info := range v4infos { if info.IPAMData.Pool != nil { v4Subnets = append(v4Subnets, *info.IPAMData.Pool) } } for _, info := range v6infos { if info.IPAMData.Pool != nil { v6Subnets = append(v6Subnets, *info.IPAMData.Pool) } } } return v4Subnets, v6Subnets } // prepareTempDir prepares and returns the default directory to use // for temporary files. // If it doesn't exist, it is created. If it exists, its content is removed. func prepareTempDir(rootDir string) (string, error) { var tmpDir string if tmpDir = os.Getenv("DOCKER_TMPDIR"); tmpDir == "" { tmpDir = filepath.Join(rootDir, "tmp") newName := tmpDir + "-old" if err := os.Rename(tmpDir, newName); err == nil { go func() { if err := os.RemoveAll(newName); err != nil { logrus.Warnf("failed to delete old tmp directory: %s", newName) } }() } else if !os.IsNotExist(err) { logrus.Warnf("failed to rename %s for background deletion: %s. Deleting synchronously", tmpDir, err) if err := os.RemoveAll(tmpDir); err != nil { logrus.Warnf("failed to delete old tmp directory: %s", tmpDir) } } } return tmpDir, idtools.MkdirAllAndChown(tmpDir, 0700, idtools.CurrentIdentity()) } func (daemon *Daemon) setGenericResources(conf *config.Config) error { genericResources, err := config.ParseGenericResources(conf.NodeGenericResources) if err != nil { return err } daemon.genericResources = genericResources return nil } func setDefaultMtu(conf *config.Config) { // do nothing if the config does not have the default 0 value. if conf.Mtu != 0 { return } conf.Mtu = config.DefaultNetworkMtu } // IsShuttingDown tells whether the daemon is shutting down or not func (daemon *Daemon) IsShuttingDown() bool { return daemon.shutdown } // initDiscovery initializes the discovery watcher for this daemon. 
func (daemon *Daemon) initDiscovery(conf *config.Config) error { advertise, err := config.ParseClusterAdvertiseSettings(conf.ClusterStore, conf.ClusterAdvertise) if err != nil { if err == discovery.ErrDiscoveryDisabled { return nil } return err } conf.ClusterAdvertise = advertise discoveryWatcher, err := discovery.Init(conf.ClusterStore, conf.ClusterAdvertise, conf.ClusterOpts) if err != nil { return fmt.Errorf("discovery initialization failed (%v)", err) } daemon.discoveryWatcher = discoveryWatcher return nil } func isBridgeNetworkDisabled(conf *config.Config) bool { return conf.BridgeConfig.Iface == config.DisableNetworkBridge } func (daemon *Daemon) networkOptions(dconfig *config.Config, pg plugingetter.PluginGetter, activeSandboxes map[string]interface{}) ([]nwconfig.Option, error) { options := []nwconfig.Option{} if dconfig == nil { return options, nil } options = append(options, nwconfig.OptionExperimental(dconfig.Experimental)) options = append(options, nwconfig.OptionDataDir(dconfig.Root)) options = append(options, nwconfig.OptionExecRoot(dconfig.GetExecRoot())) dd := runconfig.DefaultDaemonNetworkMode() dn := runconfig.DefaultDaemonNetworkMode().NetworkName() options = append(options, nwconfig.OptionDefaultDriver(string(dd))) options = append(options, nwconfig.OptionDefaultNetwork(dn)) if strings.TrimSpace(dconfig.ClusterStore) != "" { kv := strings.Split(dconfig.ClusterStore, "://") if len(kv) != 2 { return nil, errors.New("kv store daemon config must be of the form KV-PROVIDER://KV-URL") } options = append(options, nwconfig.OptionKVProvider(kv[0])) options = append(options, nwconfig.OptionKVProviderURL(kv[1])) } if len(dconfig.ClusterOpts) > 0 { options = append(options, nwconfig.OptionKVOpts(dconfig.ClusterOpts)) } if daemon.discoveryWatcher != nil { options = append(options, nwconfig.OptionDiscoveryWatcher(daemon.discoveryWatcher)) } if dconfig.ClusterAdvertise != "" { options = append(options, nwconfig.OptionDiscoveryAddress(dconfig.ClusterAdvertise)) } options = append(options, nwconfig.OptionLabels(dconfig.Labels)) options = append(options, driverOptions(dconfig)...) if len(dconfig.NetworkConfig.DefaultAddressPools.Value()) > 0 { options = append(options, nwconfig.OptionDefaultAddressPoolConfig(dconfig.NetworkConfig.DefaultAddressPools.Value())) } if daemon.configStore != nil && daemon.configStore.LiveRestoreEnabled && len(activeSandboxes) != 0 { options = append(options, nwconfig.OptionActiveSandboxes(activeSandboxes)) } if pg != nil { options = append(options, nwconfig.OptionPluginGetter(pg)) } options = append(options, nwconfig.OptionNetworkControlPlaneMTU(dconfig.NetworkControlPlaneMTU)) return options, nil } // GetCluster returns the cluster func (daemon *Daemon) GetCluster() Cluster { return daemon.cluster } // SetCluster sets the cluster func (daemon *Daemon) SetCluster(cluster Cluster) { daemon.cluster = cluster } func (daemon *Daemon) pluginShutdown() { manager := daemon.pluginManager // Check for a valid manager object. In error conditions, daemon init can fail // and shutdown called, before plugin manager is initialized. 
if manager != nil { manager.Shutdown() } } // PluginManager returns current pluginManager associated with the daemon func (daemon *Daemon) PluginManager() *plugin.Manager { // set up before daemon to avoid this method return daemon.pluginManager } // PluginGetter returns current pluginStore associated with the daemon func (daemon *Daemon) PluginGetter() *plugin.Store { return daemon.PluginStore } // CreateDaemonRoot creates the root for the daemon func CreateDaemonRoot(config *config.Config) error { // get the canonical path to the Docker root directory var realRoot string if _, err := os.Stat(config.Root); err != nil && os.IsNotExist(err) { realRoot = config.Root } else { realRoot, err = fileutils.ReadSymlinkedDirectory(config.Root) if err != nil { return fmt.Errorf("Unable to get the full path to root (%s): %s", config.Root, err) } } idMapping, err := setupRemappedRoot(config) if err != nil { return err } return setupDaemonRoot(config, realRoot, idMapping.RootPair()) } // checkpointAndSave grabs a container lock to safely call container.CheckpointTo func (daemon *Daemon) checkpointAndSave(container *container.Container) error { container.Lock() defer container.Unlock() if err := container.CheckpointTo(daemon.containersReplica); err != nil { return fmt.Errorf("Error saving container state: %v", err) } return nil } // because the CLI sends a -1 when it wants to unset the swappiness value // we need to clear it on the server side func fixMemorySwappiness(resources *containertypes.Resources) { if resources.MemorySwappiness != nil && *resources.MemorySwappiness == -1 { resources.MemorySwappiness = nil } } // GetAttachmentStore returns current attachment store associated with the daemon func (daemon *Daemon) GetAttachmentStore() *network.AttachmentStore { return &daemon.attachmentStore } // IdentityMapping returns uid/gid mapping or a SID (in the case of Windows) for the builder func (daemon *Daemon) IdentityMapping() *idtools.IdentityMapping { return daemon.idMapping } // ImageService returns the Daemon's ImageService func (daemon *Daemon) ImageService() *images.ImageService { return daemon.imageService } // BuilderBackend returns the backend used by builder func (daemon *Daemon) BuilderBackend() builder.Backend { return struct { *Daemon *images.ImageService }{daemon, daemon.imageService} }
rvolosatovs
919f2ef7641d2139211aafe12abbbf1f81689c01
b88acf7a7a571b0fe054b8315c79caa16b0c92df
Yes!
rvolosatovs
4,509
moby/moby
42,715
Share disk usage computation results between concurrent invocations
**- What I did** Share disk usage computation results between concurrent invocations instead of returning an error. **- How I did it** - Use `x/sync/singleflight.Group`, which ensures the computation is performed by at most one goroutine at a time, and the result is propagated to all goroutines that call the method concurrently. - Extract the disk usage computation functionality for containers and images for consistency with other object types and better separation of concerns. It also fits nicely with the current design. **- How to verify it** E.g. ``` docker system df& docker system df& docker system df ``` Or: ``` curl -s --unix-socket /var/run/docker.sock 'http://localhost/system/df?types=image'& curl -s --unix-socket /var/run/docker.sock 'http://localhost/system/df?types=container'& curl -s --unix-socket /var/run/docker.sock 'http://localhost/system/df?types=volume'& curl -s --unix-socket /var/run/docker.sock 'http://localhost/system/df?types=build-cache'& curl -s --unix-socket /var/run/docker.sock 'http://localhost/system/df?types=image&type=container' ``` Such invocations no longer error, but simply return the result once it has been computed by one of the goroutines. **- Description for the changelog** ```markdown The `GET /system/df` endpoint can now be used concurrently. If a request is made to the endpoint while a calculation is still running, the request will receive the result of the already running calculation, once completed. Previously, an error (`a disk usage operation is already running`) would be returned in this situation. ```
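The description leans on `x/sync/singleflight.Group` deduplicating overlapping calls. A minimal, self-contained sketch of that behaviour (not taken from the PR; the key name, sleep, and return value are illustrative only):

```go
package main

import (
	"fmt"
	"sync"
	"time"

	"golang.org/x/sync/singleflight"
)

func main() {
	var g singleflight.Group
	var wg sync.WaitGroup

	// Five callers ask for the same key while the first call is still in
	// flight; the expensive function runs once and its result is shared.
	for i := 0; i < 5; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			v, _, shared := g.Do("disk-usage", func() (interface{}, error) {
				time.Sleep(100 * time.Millisecond) // stand-in for the expensive walk
				return 42, nil
			})
			fmt.Println(v, "shared:", shared)
		}()
	}
	wg.Wait()
}
```

All five goroutines print the same value, and `shared` reports that the result was handed to multiple callers rather than recomputed per caller.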
null
2021-08-06 15:04:00+00:00
2021-08-10 11:51:04+00:00
daemon/disk_usage.go
package daemon // import "github.com/docker/docker/daemon" import ( "context" "fmt" "sync/atomic" "github.com/docker/docker/api/server/router/system" "github.com/docker/docker/api/types" "github.com/docker/docker/api/types/filters" "golang.org/x/sync/errgroup" ) // SystemDiskUsage returns information about the daemon data disk usage func (daemon *Daemon) SystemDiskUsage(ctx context.Context, opts system.DiskUsageOptions) (*types.DiskUsage, error) { if !atomic.CompareAndSwapInt32(&daemon.diskUsageRunning, 0, 1) { return nil, fmt.Errorf("a disk usage operation is already running") } defer atomic.StoreInt32(&daemon.diskUsageRunning, 0) eg, ctx := errgroup.WithContext(ctx) var containers []*types.Container if opts.Containers { eg.Go(func() error { var err error // Retrieve container list containers, err = daemon.Containers(&types.ContainerListOptions{ Size: true, All: true, }) if err != nil { return fmt.Errorf("failed to retrieve container list: %v", err) } return nil }) } var ( images []*types.ImageSummary layersSize int64 ) if opts.Images { eg.Go(func() error { var err error // Get all top images with extra attributes images, err = daemon.imageService.Images(ctx, types.ImageListOptions{ Filters: filters.NewArgs(), SharedSize: true, ContainerCount: true, }) if err != nil { return fmt.Errorf("failed to retrieve image list: %v", err) } return nil }) eg.Go(func() error { var err error layersSize, err = daemon.imageService.LayerDiskUsage(ctx) return err }) } var volumes []*types.Volume if opts.Volumes { eg.Go(func() error { var err error volumes, err = daemon.volumes.LocalVolumesSize(ctx) return err }) } if err := eg.Wait(); err != nil { return nil, err } return &types.DiskUsage{ LayersSize: layersSize, Containers: containers, Volumes: volumes, Images: images, }, nil }
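The pre-change implementation above guards against overlapping invocations with an atomic compare-and-swap and fails the second caller outright. A small, self-contained illustration of that guard pattern (identifiers and timings are made up for the demo; only the error message follows the original):

```go
package main

import (
	"errors"
	"fmt"
	"sync"
	"sync/atomic"
	"time"
)

var running int32

// guardedOp mirrors the old CAS guard: an overlapping second call fails
// instead of waiting for, or sharing, the first call's result.
func guardedOp() error {
	if !atomic.CompareAndSwapInt32(&running, 0, 1) {
		return errors.New("a disk usage operation is already running")
	}
	defer atomic.StoreInt32(&running, 0)
	time.Sleep(100 * time.Millisecond) // stand-in for the computation
	return nil
}

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 2; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			fmt.Println(guardedOp()) // one prints <nil>, the overlapping one prints the error
		}()
	}
	wg.Wait()
}
```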
package daemon // import "github.com/docker/docker/daemon" import ( "context" "fmt" "github.com/docker/docker/api/server/router/system" "github.com/docker/docker/api/types" "golang.org/x/sync/errgroup" ) // ContainerDiskUsage returns information about container data disk usage. func (daemon *Daemon) ContainerDiskUsage(ctx context.Context) ([]*types.Container, error) { ch := daemon.usage.DoChan("ContainerDiskUsage", func() (interface{}, error) { // Retrieve container list containers, err := daemon.Containers(&types.ContainerListOptions{ Size: true, All: true, }) if err != nil { return nil, fmt.Errorf("failed to retrieve container list: %v", err) } return containers, nil }) select { case <-ctx.Done(): return nil, ctx.Err() case res := <-ch: if res.Err != nil { return nil, res.Err } return res.Val.([]*types.Container), nil } } // SystemDiskUsage returns information about the daemon data disk usage. // Callers must not mutate contents of the returned fields. func (daemon *Daemon) SystemDiskUsage(ctx context.Context, opts system.DiskUsageOptions) (*types.DiskUsage, error) { eg, ctx := errgroup.WithContext(ctx) var containers []*types.Container if opts.Containers { eg.Go(func() error { var err error containers, err = daemon.ContainerDiskUsage(ctx) return err }) } var ( images []*types.ImageSummary layersSize int64 ) if opts.Images { eg.Go(func() error { var err error images, err = daemon.imageService.ImageDiskUsage(ctx) return err }) eg.Go(func() error { var err error layersSize, err = daemon.imageService.LayerDiskUsage(ctx) return err }) } var volumes []*types.Volume if opts.Volumes { eg.Go(func() error { var err error volumes, err = daemon.volumes.LocalVolumesSize(ctx) return err }) } if err := eg.Wait(); err != nil { return nil, err } return &types.DiskUsage{ LayersSize: layersSize, Containers: containers, Volumes: volumes, Images: images, }, nil }
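One subtlety of the `DoChan` + `select` pattern in the new code above: cancelling a caller's context only makes that caller stop waiting; singleflight does not cancel the in-flight function, so other callers can still receive its result. A minimal sketch of that behaviour, with an illustrative key and payload (not the PR's code):

```go
package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/sync/singleflight"
)

var usage singleflight.Group

// cachedCompute mirrors the DoChan + select shape used by ContainerDiskUsage:
// the caller can give up via ctx, but the shared computation keeps running.
func cachedCompute(ctx context.Context) (int64, error) {
	ch := usage.DoChan("compute", func() (interface{}, error) {
		time.Sleep(time.Second) // stand-in for walking containers/layers
		return int64(123), nil
	})
	select {
	case <-ctx.Done():
		return 0, ctx.Err()
	case res := <-ch:
		if res.Err != nil {
			return 0, res.Err
		}
		return res.Val.(int64), nil
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 100*time.Millisecond)
	defer cancel()
	if _, err := cachedCompute(ctx); err != nil {
		fmt.Println("caller gave up early:", err)
	}
}
```

This result sharing is also why the new `SystemDiskUsage` documents that callers must not mutate the contents of the returned fields: concurrent callers may receive the very same slices.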
rvolosatovs
919f2ef7641d2139211aafe12abbbf1f81689c01
b88acf7a7a571b0fe054b8315c79caa16b0c92df
Tempted to say we should just inline `containerDiskUsage()` here. It's not used in any other location, and I'm not sure that abstracting away what it does helps much.
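For illustration only, the inlining suggested here might look roughly like the sketch below; identifiers follow the PR's code, the surrounding `SystemDiskUsage` body is elided, and this is not the final implementation:

```go
// Sketch: the opts.Containers branch of SystemDiskUsage with the
// singleflight call inlined instead of delegated to a separate method.
if opts.Containers {
	eg.Go(func() error {
		ch := daemon.usage.DoChan("ContainerDiskUsage", func() (interface{}, error) {
			ctrs, err := daemon.Containers(&types.ContainerListOptions{Size: true, All: true})
			if err != nil {
				return nil, fmt.Errorf("failed to retrieve container list: %v", err)
			}
			return ctrs, nil
		})
		select {
		case <-ctx.Done():
			return ctx.Err()
		case res := <-ch:
			if res.Err != nil {
				return res.Err
			}
			containers = res.Val.([]*types.Container)
			return nil
		}
	})
}
```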
thaJeztah
4,510
moby/moby
42,715
Share disk usage computation results between concurrent invocations
**- What I did** Share disk usage computation results between concurrent invocations instead of returning an error. **- How I did it** - Use `x/sync/singleflight.Group`, which ensures the computation is performed by at most one goroutine at a time, and the result is propagated to all goroutines that call the method concurrently. - Extract the disk usage computation functionality for containers and images for consistency with other object types and better separation of concerns. It also fits nicely with the current design. **- How to verify it** E.g. ``` docker system df& docker system df& docker system df ``` Or: ``` curl -s --unix-socket /var/run/docker.sock 'http://localhost/system/df?types=image'& curl -s --unix-socket /var/run/docker.sock 'http://localhost/system/df?types=container'& curl -s --unix-socket /var/run/docker.sock 'http://localhost/system/df?types=volume'& curl -s --unix-socket /var/run/docker.sock 'http://localhost/system/df?types=build-cache'& curl -s --unix-socket /var/run/docker.sock 'http://localhost/system/df?types=image&type=container' ``` Such invocations no longer error, but simply return the result once it has been computed by one of the goroutines. **- Description for the changelog** ```markdown The `GET /system/df` endpoint can now be used concurrently. If a request is made to the endpoint while a calculation is still running, the request will receive the result of the already running calculation, once completed. Previously, an error (`a disk usage operation is already running`) would be returned in this situation. ```
null
2021-08-06 15:04:00+00:00
2021-08-10 11:51:04+00:00
daemon/disk_usage.go
package daemon // import "github.com/docker/docker/daemon" import ( "context" "fmt" "sync/atomic" "github.com/docker/docker/api/server/router/system" "github.com/docker/docker/api/types" "github.com/docker/docker/api/types/filters" "golang.org/x/sync/errgroup" ) // SystemDiskUsage returns information about the daemon data disk usage func (daemon *Daemon) SystemDiskUsage(ctx context.Context, opts system.DiskUsageOptions) (*types.DiskUsage, error) { if !atomic.CompareAndSwapInt32(&daemon.diskUsageRunning, 0, 1) { return nil, fmt.Errorf("a disk usage operation is already running") } defer atomic.StoreInt32(&daemon.diskUsageRunning, 0) eg, ctx := errgroup.WithContext(ctx) var containers []*types.Container if opts.Containers { eg.Go(func() error { var err error // Retrieve container list containers, err = daemon.Containers(&types.ContainerListOptions{ Size: true, All: true, }) if err != nil { return fmt.Errorf("failed to retrieve container list: %v", err) } return nil }) } var ( images []*types.ImageSummary layersSize int64 ) if opts.Images { eg.Go(func() error { var err error // Get all top images with extra attributes images, err = daemon.imageService.Images(ctx, types.ImageListOptions{ Filters: filters.NewArgs(), SharedSize: true, ContainerCount: true, }) if err != nil { return fmt.Errorf("failed to retrieve image list: %v", err) } return nil }) eg.Go(func() error { var err error layersSize, err = daemon.imageService.LayerDiskUsage(ctx) return err }) } var volumes []*types.Volume if opts.Volumes { eg.Go(func() error { var err error volumes, err = daemon.volumes.LocalVolumesSize(ctx) return err }) } if err := eg.Wait(); err != nil { return nil, err } return &types.DiskUsage{ LayersSize: layersSize, Containers: containers, Volumes: volumes, Images: images, }, nil }
package daemon // import "github.com/docker/docker/daemon" import ( "context" "fmt" "github.com/docker/docker/api/server/router/system" "github.com/docker/docker/api/types" "golang.org/x/sync/errgroup" ) // ContainerDiskUsage returns information about container data disk usage. func (daemon *Daemon) ContainerDiskUsage(ctx context.Context) ([]*types.Container, error) { ch := daemon.usage.DoChan("ContainerDiskUsage", func() (interface{}, error) { // Retrieve container list containers, err := daemon.Containers(&types.ContainerListOptions{ Size: true, All: true, }) if err != nil { return nil, fmt.Errorf("failed to retrieve container list: %v", err) } return containers, nil }) select { case <-ctx.Done(): return nil, ctx.Err() case res := <-ch: if res.Err != nil { return nil, res.Err } return res.Val.([]*types.Container), nil } } // SystemDiskUsage returns information about the daemon data disk usage. // Callers must not mutate contents of the returned fields. func (daemon *Daemon) SystemDiskUsage(ctx context.Context, opts system.DiskUsageOptions) (*types.DiskUsage, error) { eg, ctx := errgroup.WithContext(ctx) var containers []*types.Container if opts.Containers { eg.Go(func() error { var err error containers, err = daemon.ContainerDiskUsage(ctx) return err }) } var ( images []*types.ImageSummary layersSize int64 ) if opts.Images { eg.Go(func() error { var err error images, err = daemon.imageService.ImageDiskUsage(ctx) return err }) eg.Go(func() error { var err error layersSize, err = daemon.imageService.LayerDiskUsage(ctx) return err }) } var volumes []*types.Volume if opts.Volumes { eg.Go(func() error { var err error volumes, err = daemon.volumes.LocalVolumesSize(ctx) return err }) } if err := eg.Wait(); err != nil { return nil, err } return &types.DiskUsage{ LayersSize: layersSize, Containers: containers, Volumes: volumes, Images: images, }, nil }
rvolosatovs
919f2ef7641d2139211aafe12abbbf1f81689c01
b88acf7a7a571b0fe054b8315c79caa16b0c92df
That is an artifact from a previous iteration; inlining indeed makes sense with the current approach.
rvolosatovs
4,511
moby/moby
42,715
Share disk usage computation results between concurrent invocations
**- What I did** Share disk usage computation results between concurrent invocations instead of returning an error. **- How I did it** - Use `x/sync/singleflight.Group`, which ensures the computation is performed by at most one goroutine at a time, and the result is propagated to all goroutines that call the method concurrently. - Extract the disk usage computation functionality for containers and images for consistency with other object types and better separation of concerns. It also fits nicely with the current design. **- How to verify it** E.g. ``` docker system df& docker system df& docker system df ``` Or: ``` curl -s --unix-socket /var/run/docker.sock 'http://localhost/system/df?types=image'& curl -s --unix-socket /var/run/docker.sock 'http://localhost/system/df?types=container'& curl -s --unix-socket /var/run/docker.sock 'http://localhost/system/df?types=volume'& curl -s --unix-socket /var/run/docker.sock 'http://localhost/system/df?types=build-cache'& curl -s --unix-socket /var/run/docker.sock 'http://localhost/system/df?types=image&type=container' ``` Such invocations no longer error, but simply return the result once it has been computed by one of the goroutines. **- Description for the changelog** ```markdown The `GET /system/df` endpoint can now be used concurrently. If a request is made to the endpoint while a calculation is still running, the request will receive the result of the already running calculation, once completed. Previously, an error (`a disk usage operation is already running`) would be returned in this situation. ```
null
2021-08-06 15:04:00+00:00
2021-08-10 11:51:04+00:00
daemon/images/service.go
package images // import "github.com/docker/docker/daemon/images" import ( "context" "os" "github.com/containerd/containerd/content" "github.com/containerd/containerd/leases" "github.com/docker/docker/container" daemonevents "github.com/docker/docker/daemon/events" "github.com/docker/docker/distribution" "github.com/docker/docker/distribution/metadata" "github.com/docker/docker/distribution/xfer" "github.com/docker/docker/image" "github.com/docker/docker/layer" dockerreference "github.com/docker/docker/reference" "github.com/docker/docker/registry" "github.com/docker/libtrust" digest "github.com/opencontainers/go-digest" "github.com/pkg/errors" "github.com/sirupsen/logrus" ) type containerStore interface { // used by image delete First(container.StoreFilter) *container.Container // used by image prune, and image list List() []*container.Container // TODO: remove, only used for CommitBuildStep Get(string) *container.Container } // ImageServiceConfig is the configuration used to create a new ImageService type ImageServiceConfig struct { ContainerStore containerStore DistributionMetadataStore metadata.Store EventsService *daemonevents.Events ImageStore image.Store LayerStore layer.Store MaxConcurrentDownloads int MaxConcurrentUploads int MaxDownloadAttempts int ReferenceStore dockerreference.Store RegistryService registry.Service TrustKey libtrust.PrivateKey ContentStore content.Store Leases leases.Manager ContentNamespace string } // NewImageService returns a new ImageService from a configuration func NewImageService(config ImageServiceConfig) *ImageService { logrus.Debugf("Max Concurrent Downloads: %d", config.MaxConcurrentDownloads) logrus.Debugf("Max Concurrent Uploads: %d", config.MaxConcurrentUploads) logrus.Debugf("Max Download Attempts: %d", config.MaxDownloadAttempts) return &ImageService{ containers: config.ContainerStore, distributionMetadataStore: config.DistributionMetadataStore, downloadManager: xfer.NewLayerDownloadManager(config.LayerStore, config.MaxConcurrentDownloads, xfer.WithMaxDownloadAttempts(config.MaxDownloadAttempts)), eventsService: config.EventsService, imageStore: &imageStoreWithLease{Store: config.ImageStore, leases: config.Leases, ns: config.ContentNamespace}, layerStore: config.LayerStore, referenceStore: config.ReferenceStore, registryService: config.RegistryService, trustKey: config.TrustKey, uploadManager: xfer.NewLayerUploadManager(config.MaxConcurrentUploads), leases: config.Leases, content: config.ContentStore, contentNamespace: config.ContentNamespace, } } // ImageService provides a backend for image management type ImageService struct { containers containerStore distributionMetadataStore metadata.Store downloadManager *xfer.LayerDownloadManager eventsService *daemonevents.Events imageStore image.Store layerStore layer.Store pruneRunning int32 referenceStore dockerreference.Store registryService registry.Service trustKey libtrust.PrivateKey uploadManager *xfer.LayerUploadManager leases leases.Manager content content.Store contentNamespace string } // DistributionServices provides daemon image storage services type DistributionServices struct { DownloadManager distribution.RootFSDownloadManager V2MetadataService metadata.V2MetadataService LayerStore layer.Store ImageStore image.Store ReferenceStore dockerreference.Store } // DistributionServices return services controlling daemon image storage func (i *ImageService) DistributionServices() DistributionServices { return DistributionServices{ DownloadManager: i.downloadManager, V2MetadataService: 
metadata.NewV2MetadataService(i.distributionMetadataStore), LayerStore: i.layerStore, ImageStore: i.imageStore, ReferenceStore: i.referenceStore, } } // CountImages returns the number of images stored by ImageService // called from info.go func (i *ImageService) CountImages() int { return i.imageStore.Len() } // Children returns the children image.IDs for a parent image. // called from list.go to filter containers // TODO: refactor to expose an ancestry for image.ID? func (i *ImageService) Children(id image.ID) []image.ID { return i.imageStore.Children(id) } // CreateLayer creates a filesystem layer for a container. // called from create.go // TODO: accept an opt struct instead of container? func (i *ImageService) CreateLayer(container *container.Container, initFunc layer.MountInit) (layer.RWLayer, error) { var layerID layer.ChainID if container.ImageID != "" { img, err := i.imageStore.Get(container.ImageID) if err != nil { return nil, err } layerID = img.RootFS.ChainID() } rwLayerOpts := &layer.CreateRWLayerOpts{ MountLabel: container.MountLabel, InitFunc: initFunc, StorageOpt: container.HostConfig.StorageOpt, } // Indexing by OS is safe here as validation of OS has already been performed in create() (the only // caller), and guaranteed non-nil return i.layerStore.CreateRWLayer(container.ID, layerID, rwLayerOpts) } // GetLayerByID returns a layer by ID // called from daemon.go Daemon.restore(), and Daemon.containerExport() func (i *ImageService) GetLayerByID(cid string) (layer.RWLayer, error) { return i.layerStore.GetRWLayer(cid) } // LayerStoreStatus returns the status for each layer store // called from info.go func (i *ImageService) LayerStoreStatus() [][2]string { return i.layerStore.DriverStatus() } // GetLayerMountID returns the mount ID for a layer // called from daemon.go Daemon.Shutdown(), and Daemon.Cleanup() (cleanup is actually continerCleanup) // TODO: needs to be refactored to Unmount (see callers), or removed and replaced with GetLayerByID func (i *ImageService) GetLayerMountID(cid string) (string, error) { return i.layerStore.GetMountID(cid) } // Cleanup resources before the process is shutdown. 
// called from daemon.go Daemon.Shutdown() func (i *ImageService) Cleanup() { if err := i.layerStore.Cleanup(); err != nil { logrus.Errorf("Error during layer Store.Cleanup(): %v", err) } } // GraphDriverName returns the name of the graph drvier // moved from Daemon.GraphDriverName, used by: // - newContainer // - to report an error in Daemon.Mount(container) func (i *ImageService) GraphDriverName() string { return i.layerStore.DriverName() } // ReleaseLayer releases a layer allowing it to be removed // called from delete.go Daemon.cleanupContainer(), and Daemon.containerExport() func (i *ImageService) ReleaseLayer(rwlayer layer.RWLayer, containerOS string) error { metadata, err := i.layerStore.ReleaseRWLayer(rwlayer) layer.LogReleaseMetadata(metadata) if err != nil && !errors.Is(err, layer.ErrMountDoesNotExist) && !errors.Is(err, os.ErrNotExist) { return errors.Wrapf(err, "driver %q failed to remove root filesystem", i.layerStore.DriverName()) } return nil } // LayerDiskUsage returns the number of bytes used by layer stores // called from disk_usage.go func (i *ImageService) LayerDiskUsage(ctx context.Context) (int64, error) { var allLayersSize int64 layerRefs := i.getLayerRefs() allLayers := i.layerStore.Map() for _, l := range allLayers { select { case <-ctx.Done(): return allLayersSize, ctx.Err() default: size, err := l.DiffSize() if err == nil { if _, ok := layerRefs[l.ChainID()]; ok { allLayersSize += size } } else { logrus.Warnf("failed to get diff size for layer %v", l.ChainID()) } } } return allLayersSize, nil } func (i *ImageService) getLayerRefs() map[layer.ChainID]int { tmpImages := i.imageStore.Map() layerRefs := map[layer.ChainID]int{} for id, img := range tmpImages { dgst := digest.Digest(id) if len(i.referenceStore.References(dgst)) == 0 && len(i.imageStore.Children(id)) != 0 { continue } rootFS := *img.RootFS rootFS.DiffIDs = nil for _, id := range img.RootFS.DiffIDs { rootFS.Append(id) chid := rootFS.ChainID() layerRefs[chid]++ } } return layerRefs } // UpdateConfig values // // called from reload.go func (i *ImageService) UpdateConfig(maxDownloads, maxUploads *int) { if i.downloadManager != nil && maxDownloads != nil { i.downloadManager.SetConcurrency(*maxDownloads) } if i.uploadManager != nil && maxUploads != nil { i.uploadManager.SetConcurrency(*maxUploads) } }
package images // import "github.com/docker/docker/daemon/images" import ( "context" "fmt" "os" "github.com/containerd/containerd/content" "github.com/containerd/containerd/leases" "github.com/docker/docker/api/types" "github.com/docker/docker/api/types/filters" "github.com/docker/docker/container" daemonevents "github.com/docker/docker/daemon/events" "github.com/docker/docker/distribution" "github.com/docker/docker/distribution/metadata" "github.com/docker/docker/distribution/xfer" "github.com/docker/docker/image" "github.com/docker/docker/layer" dockerreference "github.com/docker/docker/reference" "github.com/docker/docker/registry" "github.com/docker/libtrust" digest "github.com/opencontainers/go-digest" "github.com/pkg/errors" "github.com/sirupsen/logrus" "golang.org/x/sync/singleflight" ) type containerStore interface { // used by image delete First(container.StoreFilter) *container.Container // used by image prune, and image list List() []*container.Container // TODO: remove, only used for CommitBuildStep Get(string) *container.Container } // ImageServiceConfig is the configuration used to create a new ImageService type ImageServiceConfig struct { ContainerStore containerStore DistributionMetadataStore metadata.Store EventsService *daemonevents.Events ImageStore image.Store LayerStore layer.Store MaxConcurrentDownloads int MaxConcurrentUploads int MaxDownloadAttempts int ReferenceStore dockerreference.Store RegistryService registry.Service TrustKey libtrust.PrivateKey ContentStore content.Store Leases leases.Manager ContentNamespace string } // NewImageService returns a new ImageService from a configuration func NewImageService(config ImageServiceConfig) *ImageService { logrus.Debugf("Max Concurrent Downloads: %d", config.MaxConcurrentDownloads) logrus.Debugf("Max Concurrent Uploads: %d", config.MaxConcurrentUploads) logrus.Debugf("Max Download Attempts: %d", config.MaxDownloadAttempts) return &ImageService{ containers: config.ContainerStore, distributionMetadataStore: config.DistributionMetadataStore, downloadManager: xfer.NewLayerDownloadManager(config.LayerStore, config.MaxConcurrentDownloads, xfer.WithMaxDownloadAttempts(config.MaxDownloadAttempts)), eventsService: config.EventsService, imageStore: &imageStoreWithLease{Store: config.ImageStore, leases: config.Leases, ns: config.ContentNamespace}, layerStore: config.LayerStore, referenceStore: config.ReferenceStore, registryService: config.RegistryService, trustKey: config.TrustKey, uploadManager: xfer.NewLayerUploadManager(config.MaxConcurrentUploads), leases: config.Leases, content: config.ContentStore, contentNamespace: config.ContentNamespace, } } // ImageService provides a backend for image management type ImageService struct { containers containerStore distributionMetadataStore metadata.Store downloadManager *xfer.LayerDownloadManager eventsService *daemonevents.Events imageStore image.Store layerStore layer.Store pruneRunning int32 referenceStore dockerreference.Store registryService registry.Service trustKey libtrust.PrivateKey uploadManager *xfer.LayerUploadManager leases leases.Manager content content.Store contentNamespace string usage singleflight.Group } // DistributionServices provides daemon image storage services type DistributionServices struct { DownloadManager distribution.RootFSDownloadManager V2MetadataService metadata.V2MetadataService LayerStore layer.Store ImageStore image.Store ReferenceStore dockerreference.Store } // DistributionServices return services controlling daemon image storage func (i 
*ImageService) DistributionServices() DistributionServices { return DistributionServices{ DownloadManager: i.downloadManager, V2MetadataService: metadata.NewV2MetadataService(i.distributionMetadataStore), LayerStore: i.layerStore, ImageStore: i.imageStore, ReferenceStore: i.referenceStore, } } // CountImages returns the number of images stored by ImageService // called from info.go func (i *ImageService) CountImages() int { return i.imageStore.Len() } // Children returns the children image.IDs for a parent image. // called from list.go to filter containers // TODO: refactor to expose an ancestry for image.ID? func (i *ImageService) Children(id image.ID) []image.ID { return i.imageStore.Children(id) } // CreateLayer creates a filesystem layer for a container. // called from create.go // TODO: accept an opt struct instead of container? func (i *ImageService) CreateLayer(container *container.Container, initFunc layer.MountInit) (layer.RWLayer, error) { var layerID layer.ChainID if container.ImageID != "" { img, err := i.imageStore.Get(container.ImageID) if err != nil { return nil, err } layerID = img.RootFS.ChainID() } rwLayerOpts := &layer.CreateRWLayerOpts{ MountLabel: container.MountLabel, InitFunc: initFunc, StorageOpt: container.HostConfig.StorageOpt, } // Indexing by OS is safe here as validation of OS has already been performed in create() (the only // caller), and guaranteed non-nil return i.layerStore.CreateRWLayer(container.ID, layerID, rwLayerOpts) } // GetLayerByID returns a layer by ID // called from daemon.go Daemon.restore(), and Daemon.containerExport() func (i *ImageService) GetLayerByID(cid string) (layer.RWLayer, error) { return i.layerStore.GetRWLayer(cid) } // LayerStoreStatus returns the status for each layer store // called from info.go func (i *ImageService) LayerStoreStatus() [][2]string { return i.layerStore.DriverStatus() } // GetLayerMountID returns the mount ID for a layer // called from daemon.go Daemon.Shutdown(), and Daemon.Cleanup() (cleanup is actually continerCleanup) // TODO: needs to be refactored to Unmount (see callers), or removed and replaced with GetLayerByID func (i *ImageService) GetLayerMountID(cid string) (string, error) { return i.layerStore.GetMountID(cid) } // Cleanup resources before the process is shutdown. 
// called from daemon.go Daemon.Shutdown() func (i *ImageService) Cleanup() { if err := i.layerStore.Cleanup(); err != nil { logrus.Errorf("Error during layer Store.Cleanup(): %v", err) } } // GraphDriverName returns the name of the graph drvier // moved from Daemon.GraphDriverName, used by: // - newContainer // - to report an error in Daemon.Mount(container) func (i *ImageService) GraphDriverName() string { return i.layerStore.DriverName() } // ReleaseLayer releases a layer allowing it to be removed // called from delete.go Daemon.cleanupContainer(), and Daemon.containerExport() func (i *ImageService) ReleaseLayer(rwlayer layer.RWLayer, containerOS string) error { metadata, err := i.layerStore.ReleaseRWLayer(rwlayer) layer.LogReleaseMetadata(metadata) if err != nil && !errors.Is(err, layer.ErrMountDoesNotExist) && !errors.Is(err, os.ErrNotExist) { return errors.Wrapf(err, "driver %q failed to remove root filesystem", i.layerStore.DriverName()) } return nil } // LayerDiskUsage returns the number of bytes used by layer stores // called from disk_usage.go func (i *ImageService) LayerDiskUsage(ctx context.Context) (int64, error) { ch := i.usage.DoChan("LayerDiskUsage", func() (interface{}, error) { var allLayersSize int64 layerRefs := i.getLayerRefs() allLayers := i.layerStore.Map() for _, l := range allLayers { select { case <-ctx.Done(): return allLayersSize, ctx.Err() default: size, err := l.DiffSize() if err == nil { if _, ok := layerRefs[l.ChainID()]; ok { allLayersSize += size } } else { logrus.Warnf("failed to get diff size for layer %v", l.ChainID()) } } } return allLayersSize, nil }) select { case <-ctx.Done(): return 0, ctx.Err() case res := <-ch: if res.Err != nil { return 0, res.Err } return res.Val.(int64), nil } } func (i *ImageService) getLayerRefs() map[layer.ChainID]int { tmpImages := i.imageStore.Map() layerRefs := map[layer.ChainID]int{} for id, img := range tmpImages { dgst := digest.Digest(id) if len(i.referenceStore.References(dgst)) == 0 && len(i.imageStore.Children(id)) != 0 { continue } rootFS := *img.RootFS rootFS.DiffIDs = nil for _, id := range img.RootFS.DiffIDs { rootFS.Append(id) chid := rootFS.ChainID() layerRefs[chid]++ } } return layerRefs } // ImageDiskUsage returns information about image data disk usage. func (i *ImageService) ImageDiskUsage(ctx context.Context) ([]*types.ImageSummary, error) { ch := i.usage.DoChan("ImageDiskUsage", func() (interface{}, error) { // Get all top images with extra attributes images, err := i.Images(ctx, types.ImageListOptions{ Filters: filters.NewArgs(), SharedSize: true, ContainerCount: true, }) if err != nil { return nil, fmt.Errorf("failed to retrieve image list: %v", err) } return images, nil }) select { case <-ctx.Done(): return nil, ctx.Err() case res := <-ch: if res.Err != nil { return nil, res.Err } return res.Val.([]*types.ImageSummary), nil } } // UpdateConfig values // // called from reload.go func (i *ImageService) UpdateConfig(maxDownloads, maxUploads *int) { if i.downloadManager != nil && maxDownloads != nil { i.downloadManager.SetConcurrency(*maxDownloads) } if i.uploadManager != nil && maxUploads != nil { i.uploadManager.SetConcurrency(*maxUploads) } }
rvolosatovs
919f2ef7641d2139211aafe12abbbf1f81689c01
b88acf7a7a571b0fe054b8315c79caa16b0c92df
Same here (consider inlining)
thaJeztah
4,512
moby/moby
42,715
Share disk usage computation results between concurrent invocations
**- What I did** Share disk usage computation results between concurrent invocations instead of returning an error. **- How I did it** - Use `x/sync/singleflight.Group`, which ensures the computation is performed by at most one goroutine at a time, and the result is propagated to all goroutines that call the method concurrently. - Extract the disk usage computation functionality for containers and images for consistency with other object types and better separation of concerns. It also fits nicely with the current design. **- How to verify it** E.g. ``` docker system df& docker system df& docker system df ``` Or: ``` curl -s --unix-socket /var/run/docker.sock 'http://localhost/system/df?types=image'& curl -s --unix-socket /var/run/docker.sock 'http://localhost/system/df?types=container'& curl -s --unix-socket /var/run/docker.sock 'http://localhost/system/df?types=volume'& curl -s --unix-socket /var/run/docker.sock 'http://localhost/system/df?types=build-cache'& curl -s --unix-socket /var/run/docker.sock 'http://localhost/system/df?types=image&type=container' ``` Such invocations no longer error, but simply return the result once it has been computed by one of the goroutines. **- Description for the changelog** ```markdown The `GET /system/df` endpoint can now be used concurrently. If a request is made to the endpoint while a calculation is still running, the request will receive the result of the already running calculation, once completed. Previously, an error (`a disk usage operation is already running`) would be returned in this situation. ```
null
2021-08-06 15:04:00+00:00
2021-08-10 11:51:04+00:00
daemon/images/service.go
package images // import "github.com/docker/docker/daemon/images" import ( "context" "os" "github.com/containerd/containerd/content" "github.com/containerd/containerd/leases" "github.com/docker/docker/container" daemonevents "github.com/docker/docker/daemon/events" "github.com/docker/docker/distribution" "github.com/docker/docker/distribution/metadata" "github.com/docker/docker/distribution/xfer" "github.com/docker/docker/image" "github.com/docker/docker/layer" dockerreference "github.com/docker/docker/reference" "github.com/docker/docker/registry" "github.com/docker/libtrust" digest "github.com/opencontainers/go-digest" "github.com/pkg/errors" "github.com/sirupsen/logrus" ) type containerStore interface { // used by image delete First(container.StoreFilter) *container.Container // used by image prune, and image list List() []*container.Container // TODO: remove, only used for CommitBuildStep Get(string) *container.Container } // ImageServiceConfig is the configuration used to create a new ImageService type ImageServiceConfig struct { ContainerStore containerStore DistributionMetadataStore metadata.Store EventsService *daemonevents.Events ImageStore image.Store LayerStore layer.Store MaxConcurrentDownloads int MaxConcurrentUploads int MaxDownloadAttempts int ReferenceStore dockerreference.Store RegistryService registry.Service TrustKey libtrust.PrivateKey ContentStore content.Store Leases leases.Manager ContentNamespace string } // NewImageService returns a new ImageService from a configuration func NewImageService(config ImageServiceConfig) *ImageService { logrus.Debugf("Max Concurrent Downloads: %d", config.MaxConcurrentDownloads) logrus.Debugf("Max Concurrent Uploads: %d", config.MaxConcurrentUploads) logrus.Debugf("Max Download Attempts: %d", config.MaxDownloadAttempts) return &ImageService{ containers: config.ContainerStore, distributionMetadataStore: config.DistributionMetadataStore, downloadManager: xfer.NewLayerDownloadManager(config.LayerStore, config.MaxConcurrentDownloads, xfer.WithMaxDownloadAttempts(config.MaxDownloadAttempts)), eventsService: config.EventsService, imageStore: &imageStoreWithLease{Store: config.ImageStore, leases: config.Leases, ns: config.ContentNamespace}, layerStore: config.LayerStore, referenceStore: config.ReferenceStore, registryService: config.RegistryService, trustKey: config.TrustKey, uploadManager: xfer.NewLayerUploadManager(config.MaxConcurrentUploads), leases: config.Leases, content: config.ContentStore, contentNamespace: config.ContentNamespace, } } // ImageService provides a backend for image management type ImageService struct { containers containerStore distributionMetadataStore metadata.Store downloadManager *xfer.LayerDownloadManager eventsService *daemonevents.Events imageStore image.Store layerStore layer.Store pruneRunning int32 referenceStore dockerreference.Store registryService registry.Service trustKey libtrust.PrivateKey uploadManager *xfer.LayerUploadManager leases leases.Manager content content.Store contentNamespace string } // DistributionServices provides daemon image storage services type DistributionServices struct { DownloadManager distribution.RootFSDownloadManager V2MetadataService metadata.V2MetadataService LayerStore layer.Store ImageStore image.Store ReferenceStore dockerreference.Store } // DistributionServices return services controlling daemon image storage func (i *ImageService) DistributionServices() DistributionServices { return DistributionServices{ DownloadManager: i.downloadManager, V2MetadataService: 
metadata.NewV2MetadataService(i.distributionMetadataStore), LayerStore: i.layerStore, ImageStore: i.imageStore, ReferenceStore: i.referenceStore, } } // CountImages returns the number of images stored by ImageService // called from info.go func (i *ImageService) CountImages() int { return i.imageStore.Len() } // Children returns the children image.IDs for a parent image. // called from list.go to filter containers // TODO: refactor to expose an ancestry for image.ID? func (i *ImageService) Children(id image.ID) []image.ID { return i.imageStore.Children(id) } // CreateLayer creates a filesystem layer for a container. // called from create.go // TODO: accept an opt struct instead of container? func (i *ImageService) CreateLayer(container *container.Container, initFunc layer.MountInit) (layer.RWLayer, error) { var layerID layer.ChainID if container.ImageID != "" { img, err := i.imageStore.Get(container.ImageID) if err != nil { return nil, err } layerID = img.RootFS.ChainID() } rwLayerOpts := &layer.CreateRWLayerOpts{ MountLabel: container.MountLabel, InitFunc: initFunc, StorageOpt: container.HostConfig.StorageOpt, } // Indexing by OS is safe here as validation of OS has already been performed in create() (the only // caller), and guaranteed non-nil return i.layerStore.CreateRWLayer(container.ID, layerID, rwLayerOpts) } // GetLayerByID returns a layer by ID // called from daemon.go Daemon.restore(), and Daemon.containerExport() func (i *ImageService) GetLayerByID(cid string) (layer.RWLayer, error) { return i.layerStore.GetRWLayer(cid) } // LayerStoreStatus returns the status for each layer store // called from info.go func (i *ImageService) LayerStoreStatus() [][2]string { return i.layerStore.DriverStatus() } // GetLayerMountID returns the mount ID for a layer // called from daemon.go Daemon.Shutdown(), and Daemon.Cleanup() (cleanup is actually continerCleanup) // TODO: needs to be refactored to Unmount (see callers), or removed and replaced with GetLayerByID func (i *ImageService) GetLayerMountID(cid string) (string, error) { return i.layerStore.GetMountID(cid) } // Cleanup resources before the process is shutdown. 
// called from daemon.go Daemon.Shutdown() func (i *ImageService) Cleanup() { if err := i.layerStore.Cleanup(); err != nil { logrus.Errorf("Error during layer Store.Cleanup(): %v", err) } } // GraphDriverName returns the name of the graph drvier // moved from Daemon.GraphDriverName, used by: // - newContainer // - to report an error in Daemon.Mount(container) func (i *ImageService) GraphDriverName() string { return i.layerStore.DriverName() } // ReleaseLayer releases a layer allowing it to be removed // called from delete.go Daemon.cleanupContainer(), and Daemon.containerExport() func (i *ImageService) ReleaseLayer(rwlayer layer.RWLayer, containerOS string) error { metadata, err := i.layerStore.ReleaseRWLayer(rwlayer) layer.LogReleaseMetadata(metadata) if err != nil && !errors.Is(err, layer.ErrMountDoesNotExist) && !errors.Is(err, os.ErrNotExist) { return errors.Wrapf(err, "driver %q failed to remove root filesystem", i.layerStore.DriverName()) } return nil } // LayerDiskUsage returns the number of bytes used by layer stores // called from disk_usage.go func (i *ImageService) LayerDiskUsage(ctx context.Context) (int64, error) { var allLayersSize int64 layerRefs := i.getLayerRefs() allLayers := i.layerStore.Map() for _, l := range allLayers { select { case <-ctx.Done(): return allLayersSize, ctx.Err() default: size, err := l.DiffSize() if err == nil { if _, ok := layerRefs[l.ChainID()]; ok { allLayersSize += size } } else { logrus.Warnf("failed to get diff size for layer %v", l.ChainID()) } } } return allLayersSize, nil } func (i *ImageService) getLayerRefs() map[layer.ChainID]int { tmpImages := i.imageStore.Map() layerRefs := map[layer.ChainID]int{} for id, img := range tmpImages { dgst := digest.Digest(id) if len(i.referenceStore.References(dgst)) == 0 && len(i.imageStore.Children(id)) != 0 { continue } rootFS := *img.RootFS rootFS.DiffIDs = nil for _, id := range img.RootFS.DiffIDs { rootFS.Append(id) chid := rootFS.ChainID() layerRefs[chid]++ } } return layerRefs } // UpdateConfig values // // called from reload.go func (i *ImageService) UpdateConfig(maxDownloads, maxUploads *int) { if i.downloadManager != nil && maxDownloads != nil { i.downloadManager.SetConcurrency(*maxDownloads) } if i.uploadManager != nil && maxUploads != nil { i.uploadManager.SetConcurrency(*maxUploads) } }
package images // import "github.com/docker/docker/daemon/images" import ( "context" "fmt" "os" "github.com/containerd/containerd/content" "github.com/containerd/containerd/leases" "github.com/docker/docker/api/types" "github.com/docker/docker/api/types/filters" "github.com/docker/docker/container" daemonevents "github.com/docker/docker/daemon/events" "github.com/docker/docker/distribution" "github.com/docker/docker/distribution/metadata" "github.com/docker/docker/distribution/xfer" "github.com/docker/docker/image" "github.com/docker/docker/layer" dockerreference "github.com/docker/docker/reference" "github.com/docker/docker/registry" "github.com/docker/libtrust" digest "github.com/opencontainers/go-digest" "github.com/pkg/errors" "github.com/sirupsen/logrus" "golang.org/x/sync/singleflight" ) type containerStore interface { // used by image delete First(container.StoreFilter) *container.Container // used by image prune, and image list List() []*container.Container // TODO: remove, only used for CommitBuildStep Get(string) *container.Container } // ImageServiceConfig is the configuration used to create a new ImageService type ImageServiceConfig struct { ContainerStore containerStore DistributionMetadataStore metadata.Store EventsService *daemonevents.Events ImageStore image.Store LayerStore layer.Store MaxConcurrentDownloads int MaxConcurrentUploads int MaxDownloadAttempts int ReferenceStore dockerreference.Store RegistryService registry.Service TrustKey libtrust.PrivateKey ContentStore content.Store Leases leases.Manager ContentNamespace string } // NewImageService returns a new ImageService from a configuration func NewImageService(config ImageServiceConfig) *ImageService { logrus.Debugf("Max Concurrent Downloads: %d", config.MaxConcurrentDownloads) logrus.Debugf("Max Concurrent Uploads: %d", config.MaxConcurrentUploads) logrus.Debugf("Max Download Attempts: %d", config.MaxDownloadAttempts) return &ImageService{ containers: config.ContainerStore, distributionMetadataStore: config.DistributionMetadataStore, downloadManager: xfer.NewLayerDownloadManager(config.LayerStore, config.MaxConcurrentDownloads, xfer.WithMaxDownloadAttempts(config.MaxDownloadAttempts)), eventsService: config.EventsService, imageStore: &imageStoreWithLease{Store: config.ImageStore, leases: config.Leases, ns: config.ContentNamespace}, layerStore: config.LayerStore, referenceStore: config.ReferenceStore, registryService: config.RegistryService, trustKey: config.TrustKey, uploadManager: xfer.NewLayerUploadManager(config.MaxConcurrentUploads), leases: config.Leases, content: config.ContentStore, contentNamespace: config.ContentNamespace, } } // ImageService provides a backend for image management type ImageService struct { containers containerStore distributionMetadataStore metadata.Store downloadManager *xfer.LayerDownloadManager eventsService *daemonevents.Events imageStore image.Store layerStore layer.Store pruneRunning int32 referenceStore dockerreference.Store registryService registry.Service trustKey libtrust.PrivateKey uploadManager *xfer.LayerUploadManager leases leases.Manager content content.Store contentNamespace string usage singleflight.Group } // DistributionServices provides daemon image storage services type DistributionServices struct { DownloadManager distribution.RootFSDownloadManager V2MetadataService metadata.V2MetadataService LayerStore layer.Store ImageStore image.Store ReferenceStore dockerreference.Store } // DistributionServices return services controlling daemon image storage func (i 
*ImageService) DistributionServices() DistributionServices { return DistributionServices{ DownloadManager: i.downloadManager, V2MetadataService: metadata.NewV2MetadataService(i.distributionMetadataStore), LayerStore: i.layerStore, ImageStore: i.imageStore, ReferenceStore: i.referenceStore, } } // CountImages returns the number of images stored by ImageService // called from info.go func (i *ImageService) CountImages() int { return i.imageStore.Len() } // Children returns the children image.IDs for a parent image. // called from list.go to filter containers // TODO: refactor to expose an ancestry for image.ID? func (i *ImageService) Children(id image.ID) []image.ID { return i.imageStore.Children(id) } // CreateLayer creates a filesystem layer for a container. // called from create.go // TODO: accept an opt struct instead of container? func (i *ImageService) CreateLayer(container *container.Container, initFunc layer.MountInit) (layer.RWLayer, error) { var layerID layer.ChainID if container.ImageID != "" { img, err := i.imageStore.Get(container.ImageID) if err != nil { return nil, err } layerID = img.RootFS.ChainID() } rwLayerOpts := &layer.CreateRWLayerOpts{ MountLabel: container.MountLabel, InitFunc: initFunc, StorageOpt: container.HostConfig.StorageOpt, } // Indexing by OS is safe here as validation of OS has already been performed in create() (the only // caller), and guaranteed non-nil return i.layerStore.CreateRWLayer(container.ID, layerID, rwLayerOpts) } // GetLayerByID returns a layer by ID // called from daemon.go Daemon.restore(), and Daemon.containerExport() func (i *ImageService) GetLayerByID(cid string) (layer.RWLayer, error) { return i.layerStore.GetRWLayer(cid) } // LayerStoreStatus returns the status for each layer store // called from info.go func (i *ImageService) LayerStoreStatus() [][2]string { return i.layerStore.DriverStatus() } // GetLayerMountID returns the mount ID for a layer // called from daemon.go Daemon.Shutdown(), and Daemon.Cleanup() (cleanup is actually continerCleanup) // TODO: needs to be refactored to Unmount (see callers), or removed and replaced with GetLayerByID func (i *ImageService) GetLayerMountID(cid string) (string, error) { return i.layerStore.GetMountID(cid) } // Cleanup resources before the process is shutdown. 
// called from daemon.go Daemon.Shutdown() func (i *ImageService) Cleanup() { if err := i.layerStore.Cleanup(); err != nil { logrus.Errorf("Error during layer Store.Cleanup(): %v", err) } } // GraphDriverName returns the name of the graph drvier // moved from Daemon.GraphDriverName, used by: // - newContainer // - to report an error in Daemon.Mount(container) func (i *ImageService) GraphDriverName() string { return i.layerStore.DriverName() } // ReleaseLayer releases a layer allowing it to be removed // called from delete.go Daemon.cleanupContainer(), and Daemon.containerExport() func (i *ImageService) ReleaseLayer(rwlayer layer.RWLayer, containerOS string) error { metadata, err := i.layerStore.ReleaseRWLayer(rwlayer) layer.LogReleaseMetadata(metadata) if err != nil && !errors.Is(err, layer.ErrMountDoesNotExist) && !errors.Is(err, os.ErrNotExist) { return errors.Wrapf(err, "driver %q failed to remove root filesystem", i.layerStore.DriverName()) } return nil } // LayerDiskUsage returns the number of bytes used by layer stores // called from disk_usage.go func (i *ImageService) LayerDiskUsage(ctx context.Context) (int64, error) { ch := i.usage.DoChan("LayerDiskUsage", func() (interface{}, error) { var allLayersSize int64 layerRefs := i.getLayerRefs() allLayers := i.layerStore.Map() for _, l := range allLayers { select { case <-ctx.Done(): return allLayersSize, ctx.Err() default: size, err := l.DiffSize() if err == nil { if _, ok := layerRefs[l.ChainID()]; ok { allLayersSize += size } } else { logrus.Warnf("failed to get diff size for layer %v", l.ChainID()) } } } return allLayersSize, nil }) select { case <-ctx.Done(): return 0, ctx.Err() case res := <-ch: if res.Err != nil { return 0, res.Err } return res.Val.(int64), nil } } func (i *ImageService) getLayerRefs() map[layer.ChainID]int { tmpImages := i.imageStore.Map() layerRefs := map[layer.ChainID]int{} for id, img := range tmpImages { dgst := digest.Digest(id) if len(i.referenceStore.References(dgst)) == 0 && len(i.imageStore.Children(id)) != 0 { continue } rootFS := *img.RootFS rootFS.DiffIDs = nil for _, id := range img.RootFS.DiffIDs { rootFS.Append(id) chid := rootFS.ChainID() layerRefs[chid]++ } } return layerRefs } // ImageDiskUsage returns information about image data disk usage. func (i *ImageService) ImageDiskUsage(ctx context.Context) ([]*types.ImageSummary, error) { ch := i.usage.DoChan("ImageDiskUsage", func() (interface{}, error) { // Get all top images with extra attributes images, err := i.Images(ctx, types.ImageListOptions{ Filters: filters.NewArgs(), SharedSize: true, ContainerCount: true, }) if err != nil { return nil, fmt.Errorf("failed to retrieve image list: %v", err) } return images, nil }) select { case <-ctx.Done(): return nil, ctx.Err() case res := <-ch: if res.Err != nil { return nil, res.Err } return res.Val.([]*types.ImageSummary), nil } } // UpdateConfig values // // called from reload.go func (i *ImageService) UpdateConfig(maxDownloads, maxUploads *int) { if i.downloadManager != nil && maxDownloads != nil { i.downloadManager.SetConcurrency(*maxDownloads) } if i.uploadManager != nil && maxUploads != nil { i.uploadManager.SetConcurrency(*maxUploads) } }
rvolosatovs
919f2ef7641d2139211aafe12abbbf1f81689c01
b88acf7a7a571b0fe054b8315c79caa16b0c92df
Same here (consider inlining)
thaJeztah
4,513
moby/moby
42,715
Share disk usage computation results between concurrent invocations
**- What I did** Share disk usage computation results between concurrent invocations instead of returning an error. **- How I did it** - Use `x/sync/singleflight.Group`, which ensures that at most one goroutine performs the computation at a time and that the result is propagated to all goroutines concurrently calling the method. - Extract the disk usage computation functionality for containers and images for consistency with other object types and better separation of concerns. It also fits nicely with the current design. **- How to verify it** E.g. ``` docker system df& docker system df& docker system df ``` Or: ``` curl -s --unix-socket /var/run/docker.sock 'http://localhost/system/df?types=image'& curl -s --unix-socket /var/run/docker.sock 'http://localhost/system/df?types=container'& curl -s --unix-socket /var/run/docker.sock 'http://localhost/system/df?types=volume'& curl -s --unix-socket /var/run/docker.sock 'http://localhost/system/df?types=build-cache'& curl -s --unix-socket /var/run/docker.sock 'http://localhost/system/df?types=image&type=container' ``` Such invocations no longer error, but simply return the result once it has been computed by one of the goroutines. **- Description for the changelog** ```markdown The `GET /system/df` endpoint can now be used concurrently. If a request is made to the endpoint while a calculation is still running, the request will receive the result of the already running calculation, once completed. Previously, an error (`a disk usage operation is already running`) would be returned in this situation. ```
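The `singleflight.Group` pattern described above is the core of the change: `DoChan` hands every concurrent caller of the same key a channel that delivers the one shared result, while a `select` on the caller's context lets an individual caller give up without cancelling the computation the others are waiting on. Below is a minimal, self-contained sketch of that pattern; the `diskUsageService` type, the `"disk-usage"` key, and the fake workload are illustrative only, not part of the PR.

```go
package main

import (
	"context"
	"fmt"
	"sync"
	"time"

	"golang.org/x/sync/singleflight"
)

// diskUsageService stands in for the daemon-side services touched by the PR;
// only the singleflight wiring mirrors the described change.
type diskUsageService struct {
	usage singleflight.Group
}

// DiskUsage collapses concurrent callers onto one computation: the first
// caller runs the closure, later callers wait on the same channel, and all
// of them receive the identical result.
func (s *diskUsageService) DiskUsage(ctx context.Context) (int64, error) {
	ch := s.usage.DoChan("disk-usage", func() (interface{}, error) {
		time.Sleep(200 * time.Millisecond) // stand-in for walking layers/containers/volumes
		return int64(42), nil
	})
	select {
	case <-ctx.Done():
		// This caller gives up; the shared computation keeps running for the others.
		return 0, ctx.Err()
	case res := <-ch:
		if res.Err != nil {
			return 0, res.Err
		}
		return res.Val.(int64), nil
	}
}

func main() {
	s := &diskUsageService{}
	var wg sync.WaitGroup
	for i := 0; i < 3; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			fmt.Println(s.DiskUsage(context.Background())) // all three callers print the same value
		}()
	}
	wg.Wait()
}
```

Note that the sketch deliberately avoids capturing the caller's `ctx` inside the closure: because later callers piggy-back on the first caller's execution, cancelling one request should not cancel the work the others are still waiting on.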
null
2021-08-06 15:04:00+00:00
2021-08-10 11:51:04+00:00
volume/service/service.go
package service // import "github.com/docker/docker/volume/service" import ( "context" "strconv" "sync/atomic" "github.com/docker/docker/api/types" "github.com/docker/docker/api/types/filters" "github.com/docker/docker/errdefs" "github.com/docker/docker/pkg/directory" "github.com/docker/docker/pkg/idtools" "github.com/docker/docker/pkg/plugingetter" "github.com/docker/docker/pkg/stringid" "github.com/docker/docker/volume" "github.com/docker/docker/volume/drivers" "github.com/docker/docker/volume/service/opts" "github.com/pkg/errors" "github.com/sirupsen/logrus" ) type ds interface { GetDriverList() []string } // VolumeEventLogger interface provides methods to log volume-related events type VolumeEventLogger interface { // LogVolumeEvent generates an event related to a volume. LogVolumeEvent(volumeID, action string, attributes map[string]string) } // VolumesService manages access to volumes // This is used as the main access point for volumes to higher level services and the API. type VolumesService struct { vs *VolumeStore ds ds pruneRunning int32 eventLogger VolumeEventLogger } // NewVolumeService creates a new volume service func NewVolumeService(root string, pg plugingetter.PluginGetter, rootIDs idtools.Identity, logger VolumeEventLogger) (*VolumesService, error) { ds := drivers.NewStore(pg) if err := setupDefaultDriver(ds, root, rootIDs); err != nil { return nil, err } vs, err := NewStore(root, ds, WithEventLogger(logger)) if err != nil { return nil, err } return &VolumesService{vs: vs, ds: ds, eventLogger: logger}, nil } // GetDriverList gets the list of registered volume drivers func (s *VolumesService) GetDriverList() []string { return s.ds.GetDriverList() } // Create creates a volume // If the caller is creating this volume to be consumed immediately, it is // expected that the caller specifies a reference ID. // This reference ID will protect this volume from removal. // // A good example for a reference ID is a container's ID. // When whatever is going to reference this volume is removed the caller should defeference the volume by calling `Release`. func (s *VolumesService) Create(ctx context.Context, name, driverName string, opts ...opts.CreateOption) (*types.Volume, error) { if name == "" { name = stringid.GenerateRandomID() } v, err := s.vs.Create(ctx, name, driverName, opts...) if err != nil { return nil, err } apiV := volumeToAPIType(v) return &apiV, nil } // Get returns details about a volume func (s *VolumesService) Get(ctx context.Context, name string, getOpts ...opts.GetOption) (*types.Volume, error) { v, err := s.vs.Get(ctx, name, getOpts...) if err != nil { return nil, err } vol := volumeToAPIType(v) var cfg opts.GetConfig for _, o := range getOpts { o(&cfg) } if cfg.ResolveStatus { vol.Status = v.Status() } return &vol, nil } // Mount mounts the volume // Callers should specify a uniqe reference for each Mount/Unmount pair. // // Example: // ```go // mountID := "randomString" // s.Mount(ctx, vol, mountID) // s.Unmount(ctx, vol, mountID) // ``` func (s *VolumesService) Mount(ctx context.Context, vol *types.Volume, ref string) (string, error) { v, err := s.vs.Get(ctx, vol.Name, opts.WithGetDriver(vol.Driver)) if err != nil { if IsNotExist(err) { err = errdefs.NotFound(err) } return "", err } return v.Mount(ref) } // Unmount unmounts the volume. // Note that depending on the implementation, the volume may still be mounted due to other resources using it. 
// // The reference specified here should be the same reference specified during `Mount` and should be // unique for each mount/unmount pair. // See `Mount` documentation for an example. func (s *VolumesService) Unmount(ctx context.Context, vol *types.Volume, ref string) error { v, err := s.vs.Get(ctx, vol.Name, opts.WithGetDriver(vol.Driver)) if err != nil { if IsNotExist(err) { err = errdefs.NotFound(err) } return err } return v.Unmount(ref) } // Release releases a volume reference func (s *VolumesService) Release(ctx context.Context, name string, ref string) error { return s.vs.Release(ctx, name, ref) } // Remove removes a volume // An error is returned if the volume is still referenced. func (s *VolumesService) Remove(ctx context.Context, name string, rmOpts ...opts.RemoveOption) error { var cfg opts.RemoveConfig for _, o := range rmOpts { o(&cfg) } v, err := s.vs.Get(ctx, name) if err != nil { if IsNotExist(err) && cfg.PurgeOnError { return nil } return err } err = s.vs.Remove(ctx, v, rmOpts...) if IsNotExist(err) { err = nil } else if IsInUse(err) { err = errdefs.Conflict(err) } else if IsNotExist(err) && cfg.PurgeOnError { err = nil } return err } var acceptedPruneFilters = map[string]bool{ "label": true, "label!": true, } var acceptedListFilters = map[string]bool{ "dangling": true, "name": true, "driver": true, "label": true, } // LocalVolumesSize gets all local volumes and fetches their size on disk // Note that this intentionally skips volumes which have mount options. Typically // volumes with mount options are not really local even if they are using the // local driver. func (s *VolumesService) LocalVolumesSize(ctx context.Context) ([]*types.Volume, error) { ls, _, err := s.vs.Find(ctx, And(ByDriver(volume.DefaultDriverName), CustomFilter(func(v volume.Volume) bool { dv, ok := v.(volume.DetailedVolume) return ok && len(dv.Options()) == 0 }))) if err != nil { return nil, err } return s.volumesToAPI(ctx, ls, calcSize(true)), nil } // Prune removes (local) volumes which match the past in filter arguments. // Note that this intentionally skips volumes with mount options as there would // be no space reclaimed in this case. 
func (s *VolumesService) Prune(ctx context.Context, filter filters.Args) (*types.VolumesPruneReport, error) { if !atomic.CompareAndSwapInt32(&s.pruneRunning, 0, 1) { return nil, errdefs.Conflict(errors.New("a prune operation is already running")) } defer atomic.StoreInt32(&s.pruneRunning, 0) by, err := filtersToBy(filter, acceptedPruneFilters) if err != nil { return nil, err } ls, _, err := s.vs.Find(ctx, And(ByDriver(volume.DefaultDriverName), ByReferenced(false), by, CustomFilter(func(v volume.Volume) bool { dv, ok := v.(volume.DetailedVolume) return ok && len(dv.Options()) == 0 }))) if err != nil { return nil, err } rep := &types.VolumesPruneReport{VolumesDeleted: make([]string, 0, len(ls))} for _, v := range ls { select { case <-ctx.Done(): err := ctx.Err() if err == context.Canceled { err = nil } return rep, err default: } vSize, err := directory.Size(ctx, v.Path()) if err != nil { logrus.WithField("volume", v.Name()).WithError(err).Warn("could not determine size of volume") } if err := s.vs.Remove(ctx, v); err != nil { logrus.WithError(err).WithField("volume", v.Name()).Warnf("Could not determine size of volume") continue } rep.SpaceReclaimed += uint64(vSize) rep.VolumesDeleted = append(rep.VolumesDeleted, v.Name()) } s.eventLogger.LogVolumeEvent("", "prune", map[string]string{ "reclaimed": strconv.FormatInt(int64(rep.SpaceReclaimed), 10), }) return rep, nil } // List gets the list of volumes which match the past in filters // If filters is nil or empty all volumes are returned. func (s *VolumesService) List(ctx context.Context, filter filters.Args) (volumesOut []*types.Volume, warnings []string, err error) { by, err := filtersToBy(filter, acceptedListFilters) if err != nil { return nil, nil, err } volumes, warnings, err := s.vs.Find(ctx, by) if err != nil { return nil, nil, err } return s.volumesToAPI(ctx, volumes, useCachedPath(true)), warnings, nil } // Shutdown shuts down the image service and dependencies func (s *VolumesService) Shutdown() error { return s.vs.Shutdown() }
package service // import "github.com/docker/docker/volume/service" import ( "context" "strconv" "sync/atomic" "github.com/docker/docker/api/types" "github.com/docker/docker/api/types/filters" "github.com/docker/docker/errdefs" "github.com/docker/docker/pkg/directory" "github.com/docker/docker/pkg/idtools" "github.com/docker/docker/pkg/plugingetter" "github.com/docker/docker/pkg/stringid" "github.com/docker/docker/volume" "github.com/docker/docker/volume/drivers" "github.com/docker/docker/volume/service/opts" "github.com/pkg/errors" "github.com/sirupsen/logrus" "golang.org/x/sync/singleflight" ) type ds interface { GetDriverList() []string } // VolumeEventLogger interface provides methods to log volume-related events type VolumeEventLogger interface { // LogVolumeEvent generates an event related to a volume. LogVolumeEvent(volumeID, action string, attributes map[string]string) } // VolumesService manages access to volumes // This is used as the main access point for volumes to higher level services and the API. type VolumesService struct { vs *VolumeStore ds ds pruneRunning int32 eventLogger VolumeEventLogger usage singleflight.Group } // NewVolumeService creates a new volume service func NewVolumeService(root string, pg plugingetter.PluginGetter, rootIDs idtools.Identity, logger VolumeEventLogger) (*VolumesService, error) { ds := drivers.NewStore(pg) if err := setupDefaultDriver(ds, root, rootIDs); err != nil { return nil, err } vs, err := NewStore(root, ds, WithEventLogger(logger)) if err != nil { return nil, err } return &VolumesService{vs: vs, ds: ds, eventLogger: logger}, nil } // GetDriverList gets the list of registered volume drivers func (s *VolumesService) GetDriverList() []string { return s.ds.GetDriverList() } // Create creates a volume // If the caller is creating this volume to be consumed immediately, it is // expected that the caller specifies a reference ID. // This reference ID will protect this volume from removal. // // A good example for a reference ID is a container's ID. // When whatever is going to reference this volume is removed the caller should defeference the volume by calling `Release`. func (s *VolumesService) Create(ctx context.Context, name, driverName string, opts ...opts.CreateOption) (*types.Volume, error) { if name == "" { name = stringid.GenerateRandomID() } v, err := s.vs.Create(ctx, name, driverName, opts...) if err != nil { return nil, err } apiV := volumeToAPIType(v) return &apiV, nil } // Get returns details about a volume func (s *VolumesService) Get(ctx context.Context, name string, getOpts ...opts.GetOption) (*types.Volume, error) { v, err := s.vs.Get(ctx, name, getOpts...) if err != nil { return nil, err } vol := volumeToAPIType(v) var cfg opts.GetConfig for _, o := range getOpts { o(&cfg) } if cfg.ResolveStatus { vol.Status = v.Status() } return &vol, nil } // Mount mounts the volume // Callers should specify a uniqe reference for each Mount/Unmount pair. // // Example: // ```go // mountID := "randomString" // s.Mount(ctx, vol, mountID) // s.Unmount(ctx, vol, mountID) // ``` func (s *VolumesService) Mount(ctx context.Context, vol *types.Volume, ref string) (string, error) { v, err := s.vs.Get(ctx, vol.Name, opts.WithGetDriver(vol.Driver)) if err != nil { if IsNotExist(err) { err = errdefs.NotFound(err) } return "", err } return v.Mount(ref) } // Unmount unmounts the volume. // Note that depending on the implementation, the volume may still be mounted due to other resources using it. 
// // The reference specified here should be the same reference specified during `Mount` and should be // unique for each mount/unmount pair. // See `Mount` documentation for an example. func (s *VolumesService) Unmount(ctx context.Context, vol *types.Volume, ref string) error { v, err := s.vs.Get(ctx, vol.Name, opts.WithGetDriver(vol.Driver)) if err != nil { if IsNotExist(err) { err = errdefs.NotFound(err) } return err } return v.Unmount(ref) } // Release releases a volume reference func (s *VolumesService) Release(ctx context.Context, name string, ref string) error { return s.vs.Release(ctx, name, ref) } // Remove removes a volume // An error is returned if the volume is still referenced. func (s *VolumesService) Remove(ctx context.Context, name string, rmOpts ...opts.RemoveOption) error { var cfg opts.RemoveConfig for _, o := range rmOpts { o(&cfg) } v, err := s.vs.Get(ctx, name) if err != nil { if IsNotExist(err) && cfg.PurgeOnError { return nil } return err } err = s.vs.Remove(ctx, v, rmOpts...) if IsNotExist(err) { err = nil } else if IsInUse(err) { err = errdefs.Conflict(err) } else if IsNotExist(err) && cfg.PurgeOnError { err = nil } return err } var acceptedPruneFilters = map[string]bool{ "label": true, "label!": true, } var acceptedListFilters = map[string]bool{ "dangling": true, "name": true, "driver": true, "label": true, } // LocalVolumesSize gets all local volumes and fetches their size on disk // Note that this intentionally skips volumes which have mount options. Typically // volumes with mount options are not really local even if they are using the // local driver. func (s *VolumesService) LocalVolumesSize(ctx context.Context) ([]*types.Volume, error) { ch := s.usage.DoChan("LocalVolumesSize", func() (interface{}, error) { ls, _, err := s.vs.Find(ctx, And(ByDriver(volume.DefaultDriverName), CustomFilter(func(v volume.Volume) bool { dv, ok := v.(volume.DetailedVolume) return ok && len(dv.Options()) == 0 }))) if err != nil { return nil, err } return s.volumesToAPI(ctx, ls, calcSize(true)), nil }) select { case <-ctx.Done(): return nil, ctx.Err() case res := <-ch: if res.Err != nil { return nil, res.Err } return res.Val.([]*types.Volume), nil } } // Prune removes (local) volumes which match the past in filter arguments. // Note that this intentionally skips volumes with mount options as there would // be no space reclaimed in this case. 
func (s *VolumesService) Prune(ctx context.Context, filter filters.Args) (*types.VolumesPruneReport, error) { if !atomic.CompareAndSwapInt32(&s.pruneRunning, 0, 1) { return nil, errdefs.Conflict(errors.New("a prune operation is already running")) } defer atomic.StoreInt32(&s.pruneRunning, 0) by, err := filtersToBy(filter, acceptedPruneFilters) if err != nil { return nil, err } ls, _, err := s.vs.Find(ctx, And(ByDriver(volume.DefaultDriverName), ByReferenced(false), by, CustomFilter(func(v volume.Volume) bool { dv, ok := v.(volume.DetailedVolume) return ok && len(dv.Options()) == 0 }))) if err != nil { return nil, err } rep := &types.VolumesPruneReport{VolumesDeleted: make([]string, 0, len(ls))} for _, v := range ls { select { case <-ctx.Done(): err := ctx.Err() if err == context.Canceled { err = nil } return rep, err default: } vSize, err := directory.Size(ctx, v.Path()) if err != nil { logrus.WithField("volume", v.Name()).WithError(err).Warn("could not determine size of volume") } if err := s.vs.Remove(ctx, v); err != nil { logrus.WithError(err).WithField("volume", v.Name()).Warnf("Could not determine size of volume") continue } rep.SpaceReclaimed += uint64(vSize) rep.VolumesDeleted = append(rep.VolumesDeleted, v.Name()) } s.eventLogger.LogVolumeEvent("", "prune", map[string]string{ "reclaimed": strconv.FormatInt(int64(rep.SpaceReclaimed), 10), }) return rep, nil } // List gets the list of volumes which match the past in filters // If filters is nil or empty all volumes are returned. func (s *VolumesService) List(ctx context.Context, filter filters.Args) (volumesOut []*types.Volume, warnings []string, err error) { by, err := filtersToBy(filter, acceptedListFilters) if err != nil { return nil, nil, err } volumes, warnings, err := s.vs.Find(ctx, by) if err != nil { return nil, nil, err } return s.volumesToAPI(ctx, volumes, useCachedPath(true)), warnings, nil } // Shutdown shuts down the image service and dependencies func (s *VolumesService) Shutdown() error { return s.vs.Shutdown() }
rvolosatovs
919f2ef7641d2139211aafe12abbbf1f81689c01
b88acf7a7a571b0fe054b8315c79caa16b0c92df
perhaps just `singleFlight` (slightly shorter, and reduces reformatting/reflowing other lines). Or (if we implement the same for _pruning_) perhaps make the singleFlight name more descriptive of what it's for, e.g. `usage` (which would then be called as `usage.DoChan()`); then we can use `prune` for the other one (`prune.DoChan()`). Same for the other `singleFlightGroup`s
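A minimal sketch of what the suggested naming would look like on the struct. The `prune` field is hypothetical (the merged code above only adds a `usage` group), and the struct is abridged to the relevant fields.

```go
package service

import "golang.org/x/sync/singleflight"

// VolumesService (abridged): field names are chosen so that call sites read
// as s.usage.DoChan(...) and s.prune.DoChan(...), per the suggestion above.
type VolumesService struct {
	usage singleflight.Group // de-duplicates concurrent disk-usage calculations
	prune singleflight.Group // hypothetical: would de-duplicate concurrent prune runs
}
```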
thaJeztah
4,514
moby/moby
42,715
Share disk usage computation results between concurrent invocations
**- What I did** Share disk usage computation results between concurrent invocations instead of returning an error. **- How I did it** - Use `x/sync/singleflight.Group`, which ensures that at most one goroutine performs the computation at a time and that the result is propagated to all goroutines concurrently calling the method. - Extract the disk usage computation functionality for containers and images for consistency with other object types and better separation of concerns. It also fits nicely with the current design. **- How to verify it** E.g. ``` docker system df& docker system df& docker system df ``` Or: ``` curl -s --unix-socket /var/run/docker.sock 'http://localhost/system/df?types=image'& curl -s --unix-socket /var/run/docker.sock 'http://localhost/system/df?types=container'& curl -s --unix-socket /var/run/docker.sock 'http://localhost/system/df?types=volume'& curl -s --unix-socket /var/run/docker.sock 'http://localhost/system/df?types=build-cache'& curl -s --unix-socket /var/run/docker.sock 'http://localhost/system/df?types=image&type=container' ``` Such invocations no longer error, but simply return the result once it has been computed by one of the goroutines. **- Description for the changelog** ```markdown The `GET /system/df` endpoint can now be used concurrently. If a request is made to the endpoint while a calculation is still running, the request will receive the result of the already running calculation, once completed. Previously, an error (`a disk usage operation is already running`) would be returned in this situation. ```
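For contrast with the singleflight approach, here is a minimal sketch of the guard pattern the changelog entry refers to, and which `Prune` in the file below still uses: overlapping callers are rejected with a conflict-style error instead of being joined onto the running computation. The `guarded` type and the fake workload are illustrative only.

```go
package main

import (
	"errors"
	"fmt"
	"sync"
	"sync/atomic"
	"time"
)

// guarded mimics the pre-PR behaviour: only one computation may run at a
// time, and any overlapping caller gets an error instead of the result.
type guarded struct {
	running int32
}

func (g *guarded) DiskUsage() (int64, error) {
	if !atomic.CompareAndSwapInt32(&g.running, 0, 1) {
		return 0, errors.New("a disk usage operation is already running")
	}
	defer atomic.StoreInt32(&g.running, 0)

	time.Sleep(200 * time.Millisecond) // stand-in for the expensive walk
	return 42, nil
}

func main() {
	g := &guarded{}
	var wg sync.WaitGroup
	for i := 0; i < 3; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			fmt.Println(g.DiskUsage()) // all but one concurrent caller see the "already running" error
		}()
	}
	wg.Wait()
}
```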
null
2021-08-06 15:04:00+00:00
2021-08-10 11:51:04+00:00
volume/service/service.go
package service // import "github.com/docker/docker/volume/service" import ( "context" "strconv" "sync/atomic" "github.com/docker/docker/api/types" "github.com/docker/docker/api/types/filters" "github.com/docker/docker/errdefs" "github.com/docker/docker/pkg/directory" "github.com/docker/docker/pkg/idtools" "github.com/docker/docker/pkg/plugingetter" "github.com/docker/docker/pkg/stringid" "github.com/docker/docker/volume" "github.com/docker/docker/volume/drivers" "github.com/docker/docker/volume/service/opts" "github.com/pkg/errors" "github.com/sirupsen/logrus" ) type ds interface { GetDriverList() []string } // VolumeEventLogger interface provides methods to log volume-related events type VolumeEventLogger interface { // LogVolumeEvent generates an event related to a volume. LogVolumeEvent(volumeID, action string, attributes map[string]string) } // VolumesService manages access to volumes // This is used as the main access point for volumes to higher level services and the API. type VolumesService struct { vs *VolumeStore ds ds pruneRunning int32 eventLogger VolumeEventLogger } // NewVolumeService creates a new volume service func NewVolumeService(root string, pg plugingetter.PluginGetter, rootIDs idtools.Identity, logger VolumeEventLogger) (*VolumesService, error) { ds := drivers.NewStore(pg) if err := setupDefaultDriver(ds, root, rootIDs); err != nil { return nil, err } vs, err := NewStore(root, ds, WithEventLogger(logger)) if err != nil { return nil, err } return &VolumesService{vs: vs, ds: ds, eventLogger: logger}, nil } // GetDriverList gets the list of registered volume drivers func (s *VolumesService) GetDriverList() []string { return s.ds.GetDriverList() } // Create creates a volume // If the caller is creating this volume to be consumed immediately, it is // expected that the caller specifies a reference ID. // This reference ID will protect this volume from removal. // // A good example for a reference ID is a container's ID. // When whatever is going to reference this volume is removed the caller should defeference the volume by calling `Release`. func (s *VolumesService) Create(ctx context.Context, name, driverName string, opts ...opts.CreateOption) (*types.Volume, error) { if name == "" { name = stringid.GenerateRandomID() } v, err := s.vs.Create(ctx, name, driverName, opts...) if err != nil { return nil, err } apiV := volumeToAPIType(v) return &apiV, nil } // Get returns details about a volume func (s *VolumesService) Get(ctx context.Context, name string, getOpts ...opts.GetOption) (*types.Volume, error) { v, err := s.vs.Get(ctx, name, getOpts...) if err != nil { return nil, err } vol := volumeToAPIType(v) var cfg opts.GetConfig for _, o := range getOpts { o(&cfg) } if cfg.ResolveStatus { vol.Status = v.Status() } return &vol, nil } // Mount mounts the volume // Callers should specify a uniqe reference for each Mount/Unmount pair. // // Example: // ```go // mountID := "randomString" // s.Mount(ctx, vol, mountID) // s.Unmount(ctx, vol, mountID) // ``` func (s *VolumesService) Mount(ctx context.Context, vol *types.Volume, ref string) (string, error) { v, err := s.vs.Get(ctx, vol.Name, opts.WithGetDriver(vol.Driver)) if err != nil { if IsNotExist(err) { err = errdefs.NotFound(err) } return "", err } return v.Mount(ref) } // Unmount unmounts the volume. // Note that depending on the implementation, the volume may still be mounted due to other resources using it. 
// // The reference specified here should be the same reference specified during `Mount` and should be // unique for each mount/unmount pair. // See `Mount` documentation for an example. func (s *VolumesService) Unmount(ctx context.Context, vol *types.Volume, ref string) error { v, err := s.vs.Get(ctx, vol.Name, opts.WithGetDriver(vol.Driver)) if err != nil { if IsNotExist(err) { err = errdefs.NotFound(err) } return err } return v.Unmount(ref) } // Release releases a volume reference func (s *VolumesService) Release(ctx context.Context, name string, ref string) error { return s.vs.Release(ctx, name, ref) } // Remove removes a volume // An error is returned if the volume is still referenced. func (s *VolumesService) Remove(ctx context.Context, name string, rmOpts ...opts.RemoveOption) error { var cfg opts.RemoveConfig for _, o := range rmOpts { o(&cfg) } v, err := s.vs.Get(ctx, name) if err != nil { if IsNotExist(err) && cfg.PurgeOnError { return nil } return err } err = s.vs.Remove(ctx, v, rmOpts...) if IsNotExist(err) { err = nil } else if IsInUse(err) { err = errdefs.Conflict(err) } else if IsNotExist(err) && cfg.PurgeOnError { err = nil } return err } var acceptedPruneFilters = map[string]bool{ "label": true, "label!": true, } var acceptedListFilters = map[string]bool{ "dangling": true, "name": true, "driver": true, "label": true, } // LocalVolumesSize gets all local volumes and fetches their size on disk // Note that this intentionally skips volumes which have mount options. Typically // volumes with mount options are not really local even if they are using the // local driver. func (s *VolumesService) LocalVolumesSize(ctx context.Context) ([]*types.Volume, error) { ls, _, err := s.vs.Find(ctx, And(ByDriver(volume.DefaultDriverName), CustomFilter(func(v volume.Volume) bool { dv, ok := v.(volume.DetailedVolume) return ok && len(dv.Options()) == 0 }))) if err != nil { return nil, err } return s.volumesToAPI(ctx, ls, calcSize(true)), nil } // Prune removes (local) volumes which match the past in filter arguments. // Note that this intentionally skips volumes with mount options as there would // be no space reclaimed in this case. 
func (s *VolumesService) Prune(ctx context.Context, filter filters.Args) (*types.VolumesPruneReport, error) { if !atomic.CompareAndSwapInt32(&s.pruneRunning, 0, 1) { return nil, errdefs.Conflict(errors.New("a prune operation is already running")) } defer atomic.StoreInt32(&s.pruneRunning, 0) by, err := filtersToBy(filter, acceptedPruneFilters) if err != nil { return nil, err } ls, _, err := s.vs.Find(ctx, And(ByDriver(volume.DefaultDriverName), ByReferenced(false), by, CustomFilter(func(v volume.Volume) bool { dv, ok := v.(volume.DetailedVolume) return ok && len(dv.Options()) == 0 }))) if err != nil { return nil, err } rep := &types.VolumesPruneReport{VolumesDeleted: make([]string, 0, len(ls))} for _, v := range ls { select { case <-ctx.Done(): err := ctx.Err() if err == context.Canceled { err = nil } return rep, err default: } vSize, err := directory.Size(ctx, v.Path()) if err != nil { logrus.WithField("volume", v.Name()).WithError(err).Warn("could not determine size of volume") } if err := s.vs.Remove(ctx, v); err != nil { logrus.WithError(err).WithField("volume", v.Name()).Warnf("Could not determine size of volume") continue } rep.SpaceReclaimed += uint64(vSize) rep.VolumesDeleted = append(rep.VolumesDeleted, v.Name()) } s.eventLogger.LogVolumeEvent("", "prune", map[string]string{ "reclaimed": strconv.FormatInt(int64(rep.SpaceReclaimed), 10), }) return rep, nil } // List gets the list of volumes which match the past in filters // If filters is nil or empty all volumes are returned. func (s *VolumesService) List(ctx context.Context, filter filters.Args) (volumesOut []*types.Volume, warnings []string, err error) { by, err := filtersToBy(filter, acceptedListFilters) if err != nil { return nil, nil, err } volumes, warnings, err := s.vs.Find(ctx, by) if err != nil { return nil, nil, err } return s.volumesToAPI(ctx, volumes, useCachedPath(true)), warnings, nil } // Shutdown shuts down the image service and dependencies func (s *VolumesService) Shutdown() error { return s.vs.Shutdown() }
package service // import "github.com/docker/docker/volume/service" import ( "context" "strconv" "sync/atomic" "github.com/docker/docker/api/types" "github.com/docker/docker/api/types/filters" "github.com/docker/docker/errdefs" "github.com/docker/docker/pkg/directory" "github.com/docker/docker/pkg/idtools" "github.com/docker/docker/pkg/plugingetter" "github.com/docker/docker/pkg/stringid" "github.com/docker/docker/volume" "github.com/docker/docker/volume/drivers" "github.com/docker/docker/volume/service/opts" "github.com/pkg/errors" "github.com/sirupsen/logrus" "golang.org/x/sync/singleflight" ) type ds interface { GetDriverList() []string } // VolumeEventLogger interface provides methods to log volume-related events type VolumeEventLogger interface { // LogVolumeEvent generates an event related to a volume. LogVolumeEvent(volumeID, action string, attributes map[string]string) } // VolumesService manages access to volumes // This is used as the main access point for volumes to higher level services and the API. type VolumesService struct { vs *VolumeStore ds ds pruneRunning int32 eventLogger VolumeEventLogger usage singleflight.Group } // NewVolumeService creates a new volume service func NewVolumeService(root string, pg plugingetter.PluginGetter, rootIDs idtools.Identity, logger VolumeEventLogger) (*VolumesService, error) { ds := drivers.NewStore(pg) if err := setupDefaultDriver(ds, root, rootIDs); err != nil { return nil, err } vs, err := NewStore(root, ds, WithEventLogger(logger)) if err != nil { return nil, err } return &VolumesService{vs: vs, ds: ds, eventLogger: logger}, nil } // GetDriverList gets the list of registered volume drivers func (s *VolumesService) GetDriverList() []string { return s.ds.GetDriverList() } // Create creates a volume // If the caller is creating this volume to be consumed immediately, it is // expected that the caller specifies a reference ID. // This reference ID will protect this volume from removal. // // A good example for a reference ID is a container's ID. // When whatever is going to reference this volume is removed the caller should defeference the volume by calling `Release`. func (s *VolumesService) Create(ctx context.Context, name, driverName string, opts ...opts.CreateOption) (*types.Volume, error) { if name == "" { name = stringid.GenerateRandomID() } v, err := s.vs.Create(ctx, name, driverName, opts...) if err != nil { return nil, err } apiV := volumeToAPIType(v) return &apiV, nil } // Get returns details about a volume func (s *VolumesService) Get(ctx context.Context, name string, getOpts ...opts.GetOption) (*types.Volume, error) { v, err := s.vs.Get(ctx, name, getOpts...) if err != nil { return nil, err } vol := volumeToAPIType(v) var cfg opts.GetConfig for _, o := range getOpts { o(&cfg) } if cfg.ResolveStatus { vol.Status = v.Status() } return &vol, nil } // Mount mounts the volume // Callers should specify a uniqe reference for each Mount/Unmount pair. // // Example: // ```go // mountID := "randomString" // s.Mount(ctx, vol, mountID) // s.Unmount(ctx, vol, mountID) // ``` func (s *VolumesService) Mount(ctx context.Context, vol *types.Volume, ref string) (string, error) { v, err := s.vs.Get(ctx, vol.Name, opts.WithGetDriver(vol.Driver)) if err != nil { if IsNotExist(err) { err = errdefs.NotFound(err) } return "", err } return v.Mount(ref) } // Unmount unmounts the volume. // Note that depending on the implementation, the volume may still be mounted due to other resources using it. 
// // The reference specified here should be the same reference specified during `Mount` and should be // unique for each mount/unmount pair. // See `Mount` documentation for an example. func (s *VolumesService) Unmount(ctx context.Context, vol *types.Volume, ref string) error { v, err := s.vs.Get(ctx, vol.Name, opts.WithGetDriver(vol.Driver)) if err != nil { if IsNotExist(err) { err = errdefs.NotFound(err) } return err } return v.Unmount(ref) } // Release releases a volume reference func (s *VolumesService) Release(ctx context.Context, name string, ref string) error { return s.vs.Release(ctx, name, ref) } // Remove removes a volume // An error is returned if the volume is still referenced. func (s *VolumesService) Remove(ctx context.Context, name string, rmOpts ...opts.RemoveOption) error { var cfg opts.RemoveConfig for _, o := range rmOpts { o(&cfg) } v, err := s.vs.Get(ctx, name) if err != nil { if IsNotExist(err) && cfg.PurgeOnError { return nil } return err } err = s.vs.Remove(ctx, v, rmOpts...) if IsNotExist(err) { err = nil } else if IsInUse(err) { err = errdefs.Conflict(err) } else if IsNotExist(err) && cfg.PurgeOnError { err = nil } return err } var acceptedPruneFilters = map[string]bool{ "label": true, "label!": true, } var acceptedListFilters = map[string]bool{ "dangling": true, "name": true, "driver": true, "label": true, } // LocalVolumesSize gets all local volumes and fetches their size on disk // Note that this intentionally skips volumes which have mount options. Typically // volumes with mount options are not really local even if they are using the // local driver. func (s *VolumesService) LocalVolumesSize(ctx context.Context) ([]*types.Volume, error) { ch := s.usage.DoChan("LocalVolumesSize", func() (interface{}, error) { ls, _, err := s.vs.Find(ctx, And(ByDriver(volume.DefaultDriverName), CustomFilter(func(v volume.Volume) bool { dv, ok := v.(volume.DetailedVolume) return ok && len(dv.Options()) == 0 }))) if err != nil { return nil, err } return s.volumesToAPI(ctx, ls, calcSize(true)), nil }) select { case <-ctx.Done(): return nil, ctx.Err() case res := <-ch: if res.Err != nil { return nil, res.Err } return res.Val.([]*types.Volume), nil } } // Prune removes (local) volumes which match the past in filter arguments. // Note that this intentionally skips volumes with mount options as there would // be no space reclaimed in this case. 
func (s *VolumesService) Prune(ctx context.Context, filter filters.Args) (*types.VolumesPruneReport, error) { if !atomic.CompareAndSwapInt32(&s.pruneRunning, 0, 1) { return nil, errdefs.Conflict(errors.New("a prune operation is already running")) } defer atomic.StoreInt32(&s.pruneRunning, 0) by, err := filtersToBy(filter, acceptedPruneFilters) if err != nil { return nil, err } ls, _, err := s.vs.Find(ctx, And(ByDriver(volume.DefaultDriverName), ByReferenced(false), by, CustomFilter(func(v volume.Volume) bool { dv, ok := v.(volume.DetailedVolume) return ok && len(dv.Options()) == 0 }))) if err != nil { return nil, err } rep := &types.VolumesPruneReport{VolumesDeleted: make([]string, 0, len(ls))} for _, v := range ls { select { case <-ctx.Done(): err := ctx.Err() if err == context.Canceled { err = nil } return rep, err default: } vSize, err := directory.Size(ctx, v.Path()) if err != nil { logrus.WithField("volume", v.Name()).WithError(err).Warn("could not determine size of volume") } if err := s.vs.Remove(ctx, v); err != nil { logrus.WithError(err).WithField("volume", v.Name()).Warnf("Could not determine size of volume") continue } rep.SpaceReclaimed += uint64(vSize) rep.VolumesDeleted = append(rep.VolumesDeleted, v.Name()) } s.eventLogger.LogVolumeEvent("", "prune", map[string]string{ "reclaimed": strconv.FormatInt(int64(rep.SpaceReclaimed), 10), }) return rep, nil } // List gets the list of volumes which match the past in filters // If filters is nil or empty all volumes are returned. func (s *VolumesService) List(ctx context.Context, filter filters.Args) (volumesOut []*types.Volume, warnings []string, err error) { by, err := filtersToBy(filter, acceptedListFilters) if err != nil { return nil, nil, err } volumes, warnings, err := s.vs.Find(ctx, by) if err != nil { return nil, nil, err } return s.volumesToAPI(ctx, volumes, useCachedPath(true)), warnings, nil } // Shutdown shuts down the image service and dependencies func (s *VolumesService) Shutdown() error { return s.vs.Shutdown() }
rvolosatovs
919f2ef7641d2139211aafe12abbbf1f81689c01
b88acf7a7a571b0fe054b8315c79caa16b0c92df
Same here (consider inlining)
thaJeztah
4,515
moby/moby
42,683
Remove LCOW (step 6)
Splitting off more bits from https://github.com/moby/moby/pull/42170
null
2021-07-27 11:33:51+00:00
2021-07-29 18:34:29+00:00
distribution/pull_v2.go
package distribution // import "github.com/docker/docker/distribution" import ( "context" "encoding/json" "fmt" "io" "io/ioutil" "os" "runtime" "strings" "github.com/containerd/containerd/log" "github.com/containerd/containerd/platforms" "github.com/docker/distribution" "github.com/docker/distribution/manifest/manifestlist" "github.com/docker/distribution/manifest/ocischema" "github.com/docker/distribution/manifest/schema1" "github.com/docker/distribution/manifest/schema2" "github.com/docker/distribution/reference" "github.com/docker/distribution/registry/client/transport" "github.com/docker/docker/distribution/metadata" "github.com/docker/docker/distribution/xfer" "github.com/docker/docker/image" v1 "github.com/docker/docker/image/v1" "github.com/docker/docker/layer" "github.com/docker/docker/pkg/ioutils" "github.com/docker/docker/pkg/progress" "github.com/docker/docker/pkg/stringid" "github.com/docker/docker/pkg/system" refstore "github.com/docker/docker/reference" "github.com/docker/docker/registry" digest "github.com/opencontainers/go-digest" specs "github.com/opencontainers/image-spec/specs-go/v1" "github.com/pkg/errors" "github.com/sirupsen/logrus" ) var ( errRootFSMismatch = errors.New("layers from manifest don't match image configuration") errRootFSInvalid = errors.New("invalid rootfs in image configuration") ) // ImageConfigPullError is an error pulling the image config blob // (only applies to schema2). type ImageConfigPullError struct { Err error } // Error returns the error string for ImageConfigPullError. func (e ImageConfigPullError) Error() string { return "error pulling image configuration: " + e.Err.Error() } type v2Puller struct { V2MetadataService metadata.V2MetadataService endpoint registry.APIEndpoint config *ImagePullConfig repoInfo *registry.RepositoryInfo repo distribution.Repository manifestStore *manifestStore } func (p *v2Puller) Pull(ctx context.Context, ref reference.Named, platform *specs.Platform) (err error) { // TODO(tiborvass): was ReceiveTimeout p.repo, err = NewV2Repository(ctx, p.repoInfo, p.endpoint, p.config.MetaHeaders, p.config.AuthConfig, "pull") if err != nil { logrus.Warnf("Error getting v2 registry: %v", err) return err } p.manifestStore.remote, err = p.repo.Manifests(ctx) if err != nil { return err } if err = p.pullV2Repository(ctx, ref, platform); err != nil { if _, ok := err.(fallbackError); ok { return err } if continueOnError(err, p.endpoint.Mirror) { return fallbackError{ err: err, transportOK: true, } } } return err } func (p *v2Puller) pullV2Repository(ctx context.Context, ref reference.Named, platform *specs.Platform) (err error) { var layersDownloaded bool if !reference.IsNameOnly(ref) { layersDownloaded, err = p.pullV2Tag(ctx, ref, platform) if err != nil { return err } } else { tags, err := p.repo.Tags(ctx).All(ctx) if err != nil { return err } for _, tag := range tags { tagRef, err := reference.WithTag(ref, tag) if err != nil { return err } pulledNew, err := p.pullV2Tag(ctx, tagRef, platform) if err != nil { // Since this is the pull-all-tags case, don't // allow an error pulling a particular tag to // make the whole pull fall back to v1. if fallbackErr, ok := err.(fallbackError); ok { return fallbackErr.err } return err } // pulledNew is true if either new layers were downloaded OR if existing images were newly tagged // TODO(tiborvass): should we change the name of `layersDownload`? What about message in WriteStatus? 
layersDownloaded = layersDownloaded || pulledNew } } writeStatus(reference.FamiliarString(ref), p.config.ProgressOutput, layersDownloaded) return nil } type v2LayerDescriptor struct { digest digest.Digest diffID layer.DiffID repoInfo *registry.RepositoryInfo repo distribution.Repository V2MetadataService metadata.V2MetadataService tmpFile *os.File verifier digest.Verifier src distribution.Descriptor } func (ld *v2LayerDescriptor) Key() string { return "v2:" + ld.digest.String() } func (ld *v2LayerDescriptor) ID() string { return stringid.TruncateID(ld.digest.String()) } func (ld *v2LayerDescriptor) DiffID() (layer.DiffID, error) { if ld.diffID != "" { return ld.diffID, nil } return ld.V2MetadataService.GetDiffID(ld.digest) } func (ld *v2LayerDescriptor) Download(ctx context.Context, progressOutput progress.Output) (io.ReadCloser, int64, error) { logrus.Debugf("pulling blob %q", ld.digest) var ( err error offset int64 ) if ld.tmpFile == nil { ld.tmpFile, err = createDownloadFile() if err != nil { return nil, 0, xfer.DoNotRetry{Err: err} } } else { offset, err = ld.tmpFile.Seek(0, io.SeekEnd) if err != nil { logrus.Debugf("error seeking to end of download file: %v", err) offset = 0 ld.tmpFile.Close() if err := os.Remove(ld.tmpFile.Name()); err != nil { logrus.Errorf("Failed to remove temp file: %s", ld.tmpFile.Name()) } ld.tmpFile, err = createDownloadFile() if err != nil { return nil, 0, xfer.DoNotRetry{Err: err} } } else if offset != 0 { logrus.Debugf("attempting to resume download of %q from %d bytes", ld.digest, offset) } } tmpFile := ld.tmpFile layerDownload, err := ld.open(ctx) if err != nil { logrus.Errorf("Error initiating layer download: %v", err) return nil, 0, retryOnError(err) } if offset != 0 { _, err := layerDownload.Seek(offset, io.SeekStart) if err != nil { if err := ld.truncateDownloadFile(); err != nil { return nil, 0, xfer.DoNotRetry{Err: err} } return nil, 0, err } } size, err := layerDownload.Seek(0, io.SeekEnd) if err != nil { // Seek failed, perhaps because there was no Content-Length // header. This shouldn't fail the download, because we can // still continue without a progress bar. size = 0 } else { if size != 0 && offset > size { logrus.Debug("Partial download is larger than full blob. Starting over") offset = 0 if err := ld.truncateDownloadFile(); err != nil { return nil, 0, xfer.DoNotRetry{Err: err} } } // Restore the seek offset either at the beginning of the // stream, or just after the last byte we have from previous // attempts. _, err = layerDownload.Seek(offset, io.SeekStart) if err != nil { return nil, 0, err } } reader := progress.NewProgressReader(ioutils.NewCancelReadCloser(ctx, layerDownload), progressOutput, size-offset, ld.ID(), "Downloading") defer reader.Close() if ld.verifier == nil { ld.verifier = ld.digest.Verifier() } _, err = io.Copy(tmpFile, io.TeeReader(reader, ld.verifier)) if err != nil { if err == transport.ErrWrongCodeForByteRange { if err := ld.truncateDownloadFile(); err != nil { return nil, 0, xfer.DoNotRetry{Err: err} } return nil, 0, err } return nil, 0, retryOnError(err) } progress.Update(progressOutput, ld.ID(), "Verifying Checksum") if !ld.verifier.Verified() { err = fmt.Errorf("filesystem layer verification failed for digest %s", ld.digest) logrus.Error(err) // Allow a retry if this digest verification error happened // after a resumed download. 
if offset != 0 { if err := ld.truncateDownloadFile(); err != nil { return nil, 0, xfer.DoNotRetry{Err: err} } return nil, 0, err } return nil, 0, xfer.DoNotRetry{Err: err} } progress.Update(progressOutput, ld.ID(), "Download complete") logrus.Debugf("Downloaded %s to tempfile %s", ld.ID(), tmpFile.Name()) _, err = tmpFile.Seek(0, io.SeekStart) if err != nil { tmpFile.Close() if err := os.Remove(tmpFile.Name()); err != nil { logrus.Errorf("Failed to remove temp file: %s", tmpFile.Name()) } ld.tmpFile = nil ld.verifier = nil return nil, 0, xfer.DoNotRetry{Err: err} } // hand off the temporary file to the download manager, so it will only // be closed once ld.tmpFile = nil return ioutils.NewReadCloserWrapper(tmpFile, func() error { tmpFile.Close() err := os.RemoveAll(tmpFile.Name()) if err != nil { logrus.Errorf("Failed to remove temp file: %s", tmpFile.Name()) } return err }), size, nil } func (ld *v2LayerDescriptor) Close() { if ld.tmpFile != nil { ld.tmpFile.Close() if err := os.RemoveAll(ld.tmpFile.Name()); err != nil { logrus.Errorf("Failed to remove temp file: %s", ld.tmpFile.Name()) } } } func (ld *v2LayerDescriptor) truncateDownloadFile() error { // Need a new hash context since we will be redoing the download ld.verifier = nil if _, err := ld.tmpFile.Seek(0, io.SeekStart); err != nil { logrus.Errorf("error seeking to beginning of download file: %v", err) return err } if err := ld.tmpFile.Truncate(0); err != nil { logrus.Errorf("error truncating download file: %v", err) return err } return nil } func (ld *v2LayerDescriptor) Registered(diffID layer.DiffID) { // Cache mapping from this layer's DiffID to the blobsum ld.V2MetadataService.Add(diffID, metadata.V2Metadata{Digest: ld.digest, SourceRepository: ld.repoInfo.Name.Name()}) } func (p *v2Puller) pullV2Tag(ctx context.Context, ref reference.Named, platform *specs.Platform) (tagUpdated bool, err error) { var ( tagOrDigest string // Used for logging/progress only dgst digest.Digest mt string size int64 tagged reference.NamedTagged isTagged bool ) if digested, isDigested := ref.(reference.Canonical); isDigested { dgst = digested.Digest() tagOrDigest = digested.String() } else if tagged, isTagged = ref.(reference.NamedTagged); isTagged { tagService := p.repo.Tags(ctx) desc, err := tagService.Get(ctx, tagged.Tag()) if err != nil { return false, err } dgst = desc.Digest tagOrDigest = tagged.Tag() mt = desc.MediaType size = desc.Size } else { return false, fmt.Errorf("internal error: reference has neither a tag nor a digest: %s", reference.FamiliarString(ref)) } ctx = log.WithLogger(ctx, logrus.WithFields( logrus.Fields{ "digest": dgst, "remote": ref, })) desc := specs.Descriptor{ MediaType: mt, Digest: dgst, Size: size, } manifest, err := p.manifestStore.Get(ctx, desc) if err != nil { if isTagged && isNotFound(errors.Cause(err)) { logrus.WithField("ref", ref).WithError(err).Debug("Falling back to pull manifest by tag") msg := `%s Failed to pull manifest by the resolved digest. This registry does not appear to conform to the distribution registry specification; falling back to pull by tag. This fallback is DEPRECATED, and will be removed in a future release. Please contact admins of %s. %s ` warnEmoji := "\U000026A0\U0000FE0F" progress.Messagef(p.config.ProgressOutput, "WARNING", msg, warnEmoji, p.endpoint.URL, warnEmoji) // Fetch by tag worked, but fetch by digest didn't. // This is a broken registry implementation. // We'll fallback to the old behavior and get the manifest by tag. 
var ms distribution.ManifestService ms, err = p.repo.Manifests(ctx) if err != nil { return false, err } manifest, err = ms.Get(ctx, "", distribution.WithTag(tagged.Tag())) err = errors.Wrap(err, "error after falling back to get manifest by tag") } if err != nil { return false, err } } if manifest == nil { return false, fmt.Errorf("image manifest does not exist for tag or digest %q", tagOrDigest) } if m, ok := manifest.(*schema2.DeserializedManifest); ok { var allowedMediatype bool for _, t := range p.config.Schema2Types { if m.Manifest.Config.MediaType == t { allowedMediatype = true break } } if !allowedMediatype { configClass := mediaTypeClasses[m.Manifest.Config.MediaType] if configClass == "" { configClass = "unknown" } return false, invalidManifestClassError{m.Manifest.Config.MediaType, configClass} } } logrus.Debugf("Pulling ref from V2 registry: %s", reference.FamiliarString(ref)) progress.Message(p.config.ProgressOutput, tagOrDigest, "Pulling from "+reference.FamiliarName(p.repo.Named())) var ( id digest.Digest manifestDigest digest.Digest ) switch v := manifest.(type) { case *schema1.SignedManifest: if p.config.RequireSchema2 { return false, fmt.Errorf("invalid manifest: not schema2") } // give registries time to upgrade to schema2 and only warn if we know a registry has been upgraded long time ago // TODO: condition to be removed if reference.Domain(ref) == "docker.io" { msg := fmt.Sprintf("Image %s uses outdated schema1 manifest format. Please upgrade to a schema2 image for better future compatibility. More information at https://docs.docker.com/registry/spec/deprecated-schema-v1/", ref) logrus.Warn(msg) progress.Message(p.config.ProgressOutput, "", msg) } id, manifestDigest, err = p.pullSchema1(ctx, ref, v, platform) if err != nil { return false, err } case *schema2.DeserializedManifest: id, manifestDigest, err = p.pullSchema2(ctx, ref, v, platform) if err != nil { return false, err } case *ocischema.DeserializedManifest: id, manifestDigest, err = p.pullOCI(ctx, ref, v, platform) if err != nil { return false, err } case *manifestlist.DeserializedManifestList: id, manifestDigest, err = p.pullManifestList(ctx, ref, v, platform) if err != nil { return false, err } default: return false, invalidManifestFormatError{} } progress.Message(p.config.ProgressOutput, "", "Digest: "+manifestDigest.String()) if p.config.ReferenceStore != nil { oldTagID, err := p.config.ReferenceStore.Get(ref) if err == nil { if oldTagID == id { return false, addDigestReference(p.config.ReferenceStore, ref, manifestDigest, id) } } else if err != refstore.ErrDoesNotExist { return false, err } if canonical, ok := ref.(reference.Canonical); ok { if err = p.config.ReferenceStore.AddDigest(canonical, id, true); err != nil { return false, err } } else { if err = addDigestReference(p.config.ReferenceStore, ref, manifestDigest, id); err != nil { return false, err } if err = p.config.ReferenceStore.AddTag(ref, id, true); err != nil { return false, err } } } return true, nil } func (p *v2Puller) pullSchema1(ctx context.Context, ref reference.Reference, unverifiedManifest *schema1.SignedManifest, platform *specs.Platform) (id digest.Digest, manifestDigest digest.Digest, err error) { var verifiedManifest *schema1.Manifest verifiedManifest, err = verifySchema1Manifest(unverifiedManifest, ref) if err != nil { return "", "", err } rootFS := image.NewRootFS() // remove duplicate layers and check parent chain validity err = fixManifestLayers(verifiedManifest) if err != nil { return "", "", err } var descriptors 
[]xfer.DownloadDescriptor // Image history converted to the new format var history []image.History // Note that the order of this loop is in the direction of bottom-most // to top-most, so that the downloads slice gets ordered correctly. for i := len(verifiedManifest.FSLayers) - 1; i >= 0; i-- { blobSum := verifiedManifest.FSLayers[i].BlobSum if err = blobSum.Validate(); err != nil { return "", "", errors.Wrapf(err, "could not validate layer digest %q", blobSum) } var throwAway struct { ThrowAway bool `json:"throwaway,omitempty"` } if err := json.Unmarshal([]byte(verifiedManifest.History[i].V1Compatibility), &throwAway); err != nil { return "", "", err } h, err := v1.HistoryFromConfig([]byte(verifiedManifest.History[i].V1Compatibility), throwAway.ThrowAway) if err != nil { return "", "", err } history = append(history, h) if throwAway.ThrowAway { continue } layerDescriptor := &v2LayerDescriptor{ digest: blobSum, repoInfo: p.repoInfo, repo: p.repo, V2MetadataService: p.V2MetadataService, } descriptors = append(descriptors, layerDescriptor) } // The v1 manifest itself doesn't directly contain an OS. However, // the history does, but unfortunately that's a string, so search through // all the history until hopefully we find one which indicates the OS. // supertest2014/nyan is an example of a registry image with schemav1. configOS := runtime.GOOS if system.LCOWSupported() { type config struct { Os string `json:"os,omitempty"` } for _, v := range verifiedManifest.History { var c config if err := json.Unmarshal([]byte(v.V1Compatibility), &c); err == nil { if c.Os != "" { configOS = c.Os break } } } } // In the situation that the API call didn't specify an OS explicitly, but // we support the operating system, switch to that operating system. // eg FROM supertest2014/nyan with no platform specifier, and docker build // with no --platform= flag under LCOW. requestedOS := "" if platform != nil { requestedOS = platform.OS } else if system.IsOSSupported(configOS) { requestedOS = configOS } // Early bath if the requested OS doesn't match that of the configuration. // This avoids doing the download, only to potentially fail later. if !strings.EqualFold(configOS, requestedOS) { return "", "", fmt.Errorf("cannot download image with operating system %q when requesting %q", configOS, requestedOS) } resultRootFS, release, err := p.config.DownloadManager.Download(ctx, *rootFS, configOS, descriptors, p.config.ProgressOutput) if err != nil { return "", "", err } defer release() config, err := v1.MakeConfigFromV1Config([]byte(verifiedManifest.History[0].V1Compatibility), &resultRootFS, history) if err != nil { return "", "", err } imageID, err := p.config.ImageStore.Put(ctx, config) if err != nil { return "", "", err } manifestDigest = digest.FromBytes(unverifiedManifest.Canonical) return imageID, manifestDigest, nil } func (p *v2Puller) pullSchema2Layers(ctx context.Context, target distribution.Descriptor, layers []distribution.Descriptor, platform *specs.Platform) (id digest.Digest, err error) { if _, err := p.config.ImageStore.Get(ctx, target.Digest); err == nil { // If the image already exists locally, no need to pull // anything. return target.Digest, nil } var descriptors []xfer.DownloadDescriptor // Note that the order of this loop is in the direction of bottom-most // to top-most, so that the downloads slice gets ordered correctly. 
for _, d := range layers { if err := d.Digest.Validate(); err != nil { return "", errors.Wrapf(err, "could not validate layer digest %q", d.Digest) } layerDescriptor := &v2LayerDescriptor{ digest: d.Digest, repo: p.repo, repoInfo: p.repoInfo, V2MetadataService: p.V2MetadataService, src: d, } descriptors = append(descriptors, layerDescriptor) } configChan := make(chan []byte, 1) configErrChan := make(chan error, 1) layerErrChan := make(chan error, 1) downloadsDone := make(chan struct{}) var cancel func() ctx, cancel = context.WithCancel(ctx) defer cancel() // Pull the image config go func() { configJSON, err := p.pullSchema2Config(ctx, target.Digest) if err != nil { configErrChan <- ImageConfigPullError{Err: err} cancel() return } configChan <- configJSON }() var ( configJSON []byte // raw serialized image config downloadedRootFS *image.RootFS // rootFS from registered layers configRootFS *image.RootFS // rootFS from configuration release func() // release resources from rootFS download configPlatform *specs.Platform // for LCOW when registering downloaded layers ) layerStoreOS := runtime.GOOS if platform != nil { layerStoreOS = platform.OS } // https://github.com/docker/docker/issues/24766 - Err on the side of caution, // explicitly blocking images intended for linux from the Windows daemon. On // Windows, we do this before the attempt to download, effectively serialising // the download slightly slowing it down. We have to do it this way, as // chances are the download of layers itself would fail due to file names // which aren't suitable for NTFS. At some point in the future, if a similar // check to block Windows images being pulled on Linux is implemented, it // may be necessary to perform the same type of serialisation. if runtime.GOOS == "windows" { configJSON, configRootFS, configPlatform, err = receiveConfig(p.config.ImageStore, configChan, configErrChan) if err != nil { return "", err } if configRootFS == nil { return "", errRootFSInvalid } if err := checkImageCompatibility(configPlatform.OS, configPlatform.OSVersion); err != nil { return "", err } if len(descriptors) != len(configRootFS.DiffIDs) { return "", errRootFSMismatch } if platform == nil { // Early bath if the requested OS doesn't match that of the configuration. // This avoids doing the download, only to potentially fail later. 
if !system.IsOSSupported(configPlatform.OS) { return "", fmt.Errorf("cannot download image with operating system %q when requesting %q", configPlatform.OS, layerStoreOS) } layerStoreOS = configPlatform.OS } // Populate diff ids in descriptors to avoid downloading foreign layers // which have been side loaded for i := range descriptors { descriptors[i].(*v2LayerDescriptor).diffID = configRootFS.DiffIDs[i] } } if p.config.DownloadManager != nil { go func() { var ( err error rootFS image.RootFS ) downloadRootFS := *image.NewRootFS() rootFS, release, err = p.config.DownloadManager.Download(ctx, downloadRootFS, layerStoreOS, descriptors, p.config.ProgressOutput) if err != nil { // Intentionally do not cancel the config download here // as the error from config download (if there is one) // is more interesting than the layer download error layerErrChan <- err return } downloadedRootFS = &rootFS close(downloadsDone) }() } else { // We have nothing to download close(downloadsDone) } if configJSON == nil { configJSON, configRootFS, _, err = receiveConfig(p.config.ImageStore, configChan, configErrChan) if err == nil && configRootFS == nil { err = errRootFSInvalid } if err != nil { cancel() select { case <-downloadsDone: case <-layerErrChan: } return "", err } } select { case <-downloadsDone: case err = <-layerErrChan: return "", err } if release != nil { defer release() } if downloadedRootFS != nil { // The DiffIDs returned in rootFS MUST match those in the config. // Otherwise the image config could be referencing layers that aren't // included in the manifest. if len(downloadedRootFS.DiffIDs) != len(configRootFS.DiffIDs) { return "", errRootFSMismatch } for i := range downloadedRootFS.DiffIDs { if downloadedRootFS.DiffIDs[i] != configRootFS.DiffIDs[i] { return "", errRootFSMismatch } } } imageID, err := p.config.ImageStore.Put(ctx, configJSON) if err != nil { return "", err } return imageID, nil } func (p *v2Puller) pullSchema2(ctx context.Context, ref reference.Named, mfst *schema2.DeserializedManifest, platform *specs.Platform) (id digest.Digest, manifestDigest digest.Digest, err error) { manifestDigest, err = schema2ManifestDigest(ref, mfst) if err != nil { return "", "", err } id, err = p.pullSchema2Layers(ctx, mfst.Target(), mfst.Layers, platform) return id, manifestDigest, err } func (p *v2Puller) pullOCI(ctx context.Context, ref reference.Named, mfst *ocischema.DeserializedManifest, platform *specs.Platform) (id digest.Digest, manifestDigest digest.Digest, err error) { manifestDigest, err = schema2ManifestDigest(ref, mfst) if err != nil { return "", "", err } id, err = p.pullSchema2Layers(ctx, mfst.Target(), mfst.Layers, platform) return id, manifestDigest, err } func receiveConfig(s ImageConfigStore, configChan <-chan []byte, errChan <-chan error) ([]byte, *image.RootFS, *specs.Platform, error) { select { case configJSON := <-configChan: rootfs, err := s.RootFSFromConfig(configJSON) if err != nil { return nil, nil, nil, err } platform, err := s.PlatformFromConfig(configJSON) if err != nil { return nil, nil, nil, err } return configJSON, rootfs, platform, nil case err := <-errChan: return nil, nil, nil, err // Don't need a case for ctx.Done in the select because cancellation // will trigger an error in p.pullSchema2ImageConfig. } } // pullManifestList handles "manifest lists" which point to various // platform-specific manifests. 
func (p *v2Puller) pullManifestList(ctx context.Context, ref reference.Named, mfstList *manifestlist.DeserializedManifestList, pp *specs.Platform) (id digest.Digest, manifestListDigest digest.Digest, err error) { manifestListDigest, err = schema2ManifestDigest(ref, mfstList) if err != nil { return "", "", err } var platform specs.Platform if pp != nil { platform = *pp } logrus.Debugf("%s resolved to a manifestList object with %d entries; looking for a %s/%s match", ref, len(mfstList.Manifests), platforms.Format(platform), runtime.GOARCH) manifestMatches := filterManifests(mfstList.Manifests, platform) if len(manifestMatches) == 0 { errMsg := fmt.Sprintf("no matching manifest for %s in the manifest list entries", formatPlatform(platform)) logrus.Debugf(errMsg) return "", "", errors.New(errMsg) } if len(manifestMatches) > 1 { logrus.Debugf("found multiple matches in manifest list, choosing best match %s", manifestMatches[0].Digest.String()) } match := manifestMatches[0] if err := checkImageCompatibility(match.Platform.OS, match.Platform.OSVersion); err != nil { return "", "", err } desc := specs.Descriptor{ Digest: match.Digest, Size: match.Size, MediaType: match.MediaType, } manifest, err := p.manifestStore.Get(ctx, desc) if err != nil { return "", "", err } manifestRef, err := reference.WithDigest(reference.TrimNamed(ref), match.Digest) if err != nil { return "", "", err } switch v := manifest.(type) { case *schema1.SignedManifest: msg := fmt.Sprintf("[DEPRECATION NOTICE] v2 schema1 manifests in manifest lists are not supported and will break in a future release. Suggest author of %s to upgrade to v2 schema2. More information at https://docs.docker.com/registry/spec/deprecated-schema-v1/", ref) logrus.Warn(msg) progress.Message(p.config.ProgressOutput, "", msg) platform := toOCIPlatform(manifestMatches[0].Platform) id, _, err = p.pullSchema1(ctx, manifestRef, v, &platform) if err != nil { return "", "", err } case *schema2.DeserializedManifest: platform := toOCIPlatform(manifestMatches[0].Platform) id, _, err = p.pullSchema2(ctx, manifestRef, v, &platform) if err != nil { return "", "", err } case *ocischema.DeserializedManifest: platform := toOCIPlatform(manifestMatches[0].Platform) id, _, err = p.pullOCI(ctx, manifestRef, v, &platform) if err != nil { return "", "", err } default: return "", "", errors.New("unsupported manifest format") } return id, manifestListDigest, err } func (p *v2Puller) pullSchema2Config(ctx context.Context, dgst digest.Digest) (configJSON []byte, err error) { blobs := p.repo.Blobs(ctx) configJSON, err = blobs.Get(ctx, dgst) if err != nil { return nil, err } // Verify image config digest verifier := dgst.Verifier() if _, err := verifier.Write(configJSON); err != nil { return nil, err } if !verifier.Verified() { err := fmt.Errorf("image config verification failed for digest %s", dgst) logrus.Error(err) return nil, err } return configJSON, nil } // schema2ManifestDigest computes the manifest digest, and, if pulling by // digest, ensures that it matches the requested digest. func schema2ManifestDigest(ref reference.Named, mfst distribution.Manifest) (digest.Digest, error) { _, canonical, err := mfst.Payload() if err != nil { return "", err } // If pull by digest, then verify the manifest digest. 
if digested, isDigested := ref.(reference.Canonical); isDigested { verifier := digested.Digest().Verifier() if _, err := verifier.Write(canonical); err != nil { return "", err } if !verifier.Verified() { err := fmt.Errorf("manifest verification failed for digest %s", digested.Digest()) logrus.Error(err) return "", err } return digested.Digest(), nil } return digest.FromBytes(canonical), nil } func verifySchema1Manifest(signedManifest *schema1.SignedManifest, ref reference.Reference) (m *schema1.Manifest, err error) { // If pull by digest, then verify the manifest digest. NOTE: It is // important to do this first, before any other content validation. If the // digest cannot be verified, don't even bother with those other things. if digested, isCanonical := ref.(reference.Canonical); isCanonical { verifier := digested.Digest().Verifier() if _, err := verifier.Write(signedManifest.Canonical); err != nil { return nil, err } if !verifier.Verified() { err := fmt.Errorf("image verification failed for digest %s", digested.Digest()) logrus.Error(err) return nil, err } } m = &signedManifest.Manifest if m.SchemaVersion != 1 { return nil, fmt.Errorf("unsupported schema version %d for %q", m.SchemaVersion, reference.FamiliarString(ref)) } if len(m.FSLayers) != len(m.History) { return nil, fmt.Errorf("length of history not equal to number of layers for %q", reference.FamiliarString(ref)) } if len(m.FSLayers) == 0 { return nil, fmt.Errorf("no FSLayers in manifest for %q", reference.FamiliarString(ref)) } return m, nil } // fixManifestLayers removes repeated layers from the manifest and checks the // correctness of the parent chain. func fixManifestLayers(m *schema1.Manifest) error { imgs := make([]*image.V1Image, len(m.FSLayers)) for i := range m.FSLayers { img := &image.V1Image{} if err := json.Unmarshal([]byte(m.History[i].V1Compatibility), img); err != nil { return err } imgs[i] = img if err := v1.ValidateID(img.ID); err != nil { return err } } if imgs[len(imgs)-1].Parent != "" && runtime.GOOS != "windows" { // Windows base layer can point to a base layer parent that is not in manifest. return errors.New("invalid parent ID in the base layer of the image") } // check general duplicates to error instead of a deadlock idmap := make(map[string]struct{}) var lastID string for _, img := range imgs { // skip IDs that appear after each other, we handle those later if _, exists := idmap[img.ID]; img.ID != lastID && exists { return fmt.Errorf("ID %+v appears multiple times in manifest", img.ID) } lastID = img.ID idmap[lastID] = struct{}{} } // backwards loop so that we keep the remaining indexes after removing items for i := len(imgs) - 2; i >= 0; i-- { if imgs[i].ID == imgs[i+1].ID { // repeated ID. remove and continue m.FSLayers = append(m.FSLayers[:i], m.FSLayers[i+1:]...) m.History = append(m.History[:i], m.History[i+1:]...) } else if imgs[i].Parent != imgs[i+1].ID { return fmt.Errorf("invalid parent ID. Expected %v, got %v", imgs[i+1].ID, imgs[i].Parent) } } return nil } func createDownloadFile() (*os.File, error) { return ioutil.TempFile("", "GetImageBlob") } func toOCIPlatform(p manifestlist.PlatformSpec) specs.Platform { return specs.Platform{ OS: p.OS, Architecture: p.Architecture, Variant: p.Variant, OSFeatures: p.OSFeatures, OSVersion: p.OSVersion, } }
package distribution // import "github.com/docker/docker/distribution" import ( "context" "encoding/json" "fmt" "io" "io/ioutil" "os" "runtime" "github.com/containerd/containerd/log" "github.com/containerd/containerd/platforms" "github.com/docker/distribution" "github.com/docker/distribution/manifest/manifestlist" "github.com/docker/distribution/manifest/ocischema" "github.com/docker/distribution/manifest/schema1" "github.com/docker/distribution/manifest/schema2" "github.com/docker/distribution/reference" "github.com/docker/distribution/registry/client/transport" "github.com/docker/docker/distribution/metadata" "github.com/docker/docker/distribution/xfer" "github.com/docker/docker/image" v1 "github.com/docker/docker/image/v1" "github.com/docker/docker/layer" "github.com/docker/docker/pkg/ioutils" "github.com/docker/docker/pkg/progress" "github.com/docker/docker/pkg/stringid" "github.com/docker/docker/pkg/system" refstore "github.com/docker/docker/reference" "github.com/docker/docker/registry" digest "github.com/opencontainers/go-digest" specs "github.com/opencontainers/image-spec/specs-go/v1" "github.com/pkg/errors" "github.com/sirupsen/logrus" ) var ( errRootFSMismatch = errors.New("layers from manifest don't match image configuration") errRootFSInvalid = errors.New("invalid rootfs in image configuration") ) // ImageConfigPullError is an error pulling the image config blob // (only applies to schema2). type ImageConfigPullError struct { Err error } // Error returns the error string for ImageConfigPullError. func (e ImageConfigPullError) Error() string { return "error pulling image configuration: " + e.Err.Error() } type v2Puller struct { V2MetadataService metadata.V2MetadataService endpoint registry.APIEndpoint config *ImagePullConfig repoInfo *registry.RepositoryInfo repo distribution.Repository manifestStore *manifestStore } func (p *v2Puller) Pull(ctx context.Context, ref reference.Named, platform *specs.Platform) (err error) { // TODO(tiborvass): was ReceiveTimeout p.repo, err = NewV2Repository(ctx, p.repoInfo, p.endpoint, p.config.MetaHeaders, p.config.AuthConfig, "pull") if err != nil { logrus.Warnf("Error getting v2 registry: %v", err) return err } p.manifestStore.remote, err = p.repo.Manifests(ctx) if err != nil { return err } if err = p.pullV2Repository(ctx, ref, platform); err != nil { if _, ok := err.(fallbackError); ok { return err } if continueOnError(err, p.endpoint.Mirror) { return fallbackError{ err: err, transportOK: true, } } } return err } func (p *v2Puller) pullV2Repository(ctx context.Context, ref reference.Named, platform *specs.Platform) (err error) { var layersDownloaded bool if !reference.IsNameOnly(ref) { layersDownloaded, err = p.pullV2Tag(ctx, ref, platform) if err != nil { return err } } else { tags, err := p.repo.Tags(ctx).All(ctx) if err != nil { return err } for _, tag := range tags { tagRef, err := reference.WithTag(ref, tag) if err != nil { return err } pulledNew, err := p.pullV2Tag(ctx, tagRef, platform) if err != nil { // Since this is the pull-all-tags case, don't // allow an error pulling a particular tag to // make the whole pull fall back to v1. if fallbackErr, ok := err.(fallbackError); ok { return fallbackErr.err } return err } // pulledNew is true if either new layers were downloaded OR if existing images were newly tagged // TODO(tiborvass): should we change the name of `layersDownload`? What about message in WriteStatus? 
layersDownloaded = layersDownloaded || pulledNew } } writeStatus(reference.FamiliarString(ref), p.config.ProgressOutput, layersDownloaded) return nil } type v2LayerDescriptor struct { digest digest.Digest diffID layer.DiffID repoInfo *registry.RepositoryInfo repo distribution.Repository V2MetadataService metadata.V2MetadataService tmpFile *os.File verifier digest.Verifier src distribution.Descriptor } func (ld *v2LayerDescriptor) Key() string { return "v2:" + ld.digest.String() } func (ld *v2LayerDescriptor) ID() string { return stringid.TruncateID(ld.digest.String()) } func (ld *v2LayerDescriptor) DiffID() (layer.DiffID, error) { if ld.diffID != "" { return ld.diffID, nil } return ld.V2MetadataService.GetDiffID(ld.digest) } func (ld *v2LayerDescriptor) Download(ctx context.Context, progressOutput progress.Output) (io.ReadCloser, int64, error) { logrus.Debugf("pulling blob %q", ld.digest) var ( err error offset int64 ) if ld.tmpFile == nil { ld.tmpFile, err = createDownloadFile() if err != nil { return nil, 0, xfer.DoNotRetry{Err: err} } } else { offset, err = ld.tmpFile.Seek(0, io.SeekEnd) if err != nil { logrus.Debugf("error seeking to end of download file: %v", err) offset = 0 ld.tmpFile.Close() if err := os.Remove(ld.tmpFile.Name()); err != nil { logrus.Errorf("Failed to remove temp file: %s", ld.tmpFile.Name()) } ld.tmpFile, err = createDownloadFile() if err != nil { return nil, 0, xfer.DoNotRetry{Err: err} } } else if offset != 0 { logrus.Debugf("attempting to resume download of %q from %d bytes", ld.digest, offset) } } tmpFile := ld.tmpFile layerDownload, err := ld.open(ctx) if err != nil { logrus.Errorf("Error initiating layer download: %v", err) return nil, 0, retryOnError(err) } if offset != 0 { _, err := layerDownload.Seek(offset, io.SeekStart) if err != nil { if err := ld.truncateDownloadFile(); err != nil { return nil, 0, xfer.DoNotRetry{Err: err} } return nil, 0, err } } size, err := layerDownload.Seek(0, io.SeekEnd) if err != nil { // Seek failed, perhaps because there was no Content-Length // header. This shouldn't fail the download, because we can // still continue without a progress bar. size = 0 } else { if size != 0 && offset > size { logrus.Debug("Partial download is larger than full blob. Starting over") offset = 0 if err := ld.truncateDownloadFile(); err != nil { return nil, 0, xfer.DoNotRetry{Err: err} } } // Restore the seek offset either at the beginning of the // stream, or just after the last byte we have from previous // attempts. _, err = layerDownload.Seek(offset, io.SeekStart) if err != nil { return nil, 0, err } } reader := progress.NewProgressReader(ioutils.NewCancelReadCloser(ctx, layerDownload), progressOutput, size-offset, ld.ID(), "Downloading") defer reader.Close() if ld.verifier == nil { ld.verifier = ld.digest.Verifier() } _, err = io.Copy(tmpFile, io.TeeReader(reader, ld.verifier)) if err != nil { if err == transport.ErrWrongCodeForByteRange { if err := ld.truncateDownloadFile(); err != nil { return nil, 0, xfer.DoNotRetry{Err: err} } return nil, 0, err } return nil, 0, retryOnError(err) } progress.Update(progressOutput, ld.ID(), "Verifying Checksum") if !ld.verifier.Verified() { err = fmt.Errorf("filesystem layer verification failed for digest %s", ld.digest) logrus.Error(err) // Allow a retry if this digest verification error happened // after a resumed download. 
if offset != 0 { if err := ld.truncateDownloadFile(); err != nil { return nil, 0, xfer.DoNotRetry{Err: err} } return nil, 0, err } return nil, 0, xfer.DoNotRetry{Err: err} } progress.Update(progressOutput, ld.ID(), "Download complete") logrus.Debugf("Downloaded %s to tempfile %s", ld.ID(), tmpFile.Name()) _, err = tmpFile.Seek(0, io.SeekStart) if err != nil { tmpFile.Close() if err := os.Remove(tmpFile.Name()); err != nil { logrus.Errorf("Failed to remove temp file: %s", tmpFile.Name()) } ld.tmpFile = nil ld.verifier = nil return nil, 0, xfer.DoNotRetry{Err: err} } // hand off the temporary file to the download manager, so it will only // be closed once ld.tmpFile = nil return ioutils.NewReadCloserWrapper(tmpFile, func() error { tmpFile.Close() err := os.RemoveAll(tmpFile.Name()) if err != nil { logrus.Errorf("Failed to remove temp file: %s", tmpFile.Name()) } return err }), size, nil } func (ld *v2LayerDescriptor) Close() { if ld.tmpFile != nil { ld.tmpFile.Close() if err := os.RemoveAll(ld.tmpFile.Name()); err != nil { logrus.Errorf("Failed to remove temp file: %s", ld.tmpFile.Name()) } } } func (ld *v2LayerDescriptor) truncateDownloadFile() error { // Need a new hash context since we will be redoing the download ld.verifier = nil if _, err := ld.tmpFile.Seek(0, io.SeekStart); err != nil { logrus.Errorf("error seeking to beginning of download file: %v", err) return err } if err := ld.tmpFile.Truncate(0); err != nil { logrus.Errorf("error truncating download file: %v", err) return err } return nil } func (ld *v2LayerDescriptor) Registered(diffID layer.DiffID) { // Cache mapping from this layer's DiffID to the blobsum ld.V2MetadataService.Add(diffID, metadata.V2Metadata{Digest: ld.digest, SourceRepository: ld.repoInfo.Name.Name()}) } func (p *v2Puller) pullV2Tag(ctx context.Context, ref reference.Named, platform *specs.Platform) (tagUpdated bool, err error) { var ( tagOrDigest string // Used for logging/progress only dgst digest.Digest mt string size int64 tagged reference.NamedTagged isTagged bool ) if digested, isDigested := ref.(reference.Canonical); isDigested { dgst = digested.Digest() tagOrDigest = digested.String() } else if tagged, isTagged = ref.(reference.NamedTagged); isTagged { tagService := p.repo.Tags(ctx) desc, err := tagService.Get(ctx, tagged.Tag()) if err != nil { return false, err } dgst = desc.Digest tagOrDigest = tagged.Tag() mt = desc.MediaType size = desc.Size } else { return false, fmt.Errorf("internal error: reference has neither a tag nor a digest: %s", reference.FamiliarString(ref)) } ctx = log.WithLogger(ctx, logrus.WithFields( logrus.Fields{ "digest": dgst, "remote": ref, })) desc := specs.Descriptor{ MediaType: mt, Digest: dgst, Size: size, } manifest, err := p.manifestStore.Get(ctx, desc) if err != nil { if isTagged && isNotFound(errors.Cause(err)) { logrus.WithField("ref", ref).WithError(err).Debug("Falling back to pull manifest by tag") msg := `%s Failed to pull manifest by the resolved digest. This registry does not appear to conform to the distribution registry specification; falling back to pull by tag. This fallback is DEPRECATED, and will be removed in a future release. Please contact admins of %s. %s ` warnEmoji := "\U000026A0\U0000FE0F" progress.Messagef(p.config.ProgressOutput, "WARNING", msg, warnEmoji, p.endpoint.URL, warnEmoji) // Fetch by tag worked, but fetch by digest didn't. // This is a broken registry implementation. // We'll fallback to the old behavior and get the manifest by tag. 
var ms distribution.ManifestService ms, err = p.repo.Manifests(ctx) if err != nil { return false, err } manifest, err = ms.Get(ctx, "", distribution.WithTag(tagged.Tag())) err = errors.Wrap(err, "error after falling back to get manifest by tag") } if err != nil { return false, err } } if manifest == nil { return false, fmt.Errorf("image manifest does not exist for tag or digest %q", tagOrDigest) } if m, ok := manifest.(*schema2.DeserializedManifest); ok { var allowedMediatype bool for _, t := range p.config.Schema2Types { if m.Manifest.Config.MediaType == t { allowedMediatype = true break } } if !allowedMediatype { configClass := mediaTypeClasses[m.Manifest.Config.MediaType] if configClass == "" { configClass = "unknown" } return false, invalidManifestClassError{m.Manifest.Config.MediaType, configClass} } } logrus.Debugf("Pulling ref from V2 registry: %s", reference.FamiliarString(ref)) progress.Message(p.config.ProgressOutput, tagOrDigest, "Pulling from "+reference.FamiliarName(p.repo.Named())) var ( id digest.Digest manifestDigest digest.Digest ) switch v := manifest.(type) { case *schema1.SignedManifest: if p.config.RequireSchema2 { return false, fmt.Errorf("invalid manifest: not schema2") } // give registries time to upgrade to schema2 and only warn if we know a registry has been upgraded long time ago // TODO: condition to be removed if reference.Domain(ref) == "docker.io" { msg := fmt.Sprintf("Image %s uses outdated schema1 manifest format. Please upgrade to a schema2 image for better future compatibility. More information at https://docs.docker.com/registry/spec/deprecated-schema-v1/", ref) logrus.Warn(msg) progress.Message(p.config.ProgressOutput, "", msg) } id, manifestDigest, err = p.pullSchema1(ctx, ref, v, platform) if err != nil { return false, err } case *schema2.DeserializedManifest: id, manifestDigest, err = p.pullSchema2(ctx, ref, v, platform) if err != nil { return false, err } case *ocischema.DeserializedManifest: id, manifestDigest, err = p.pullOCI(ctx, ref, v, platform) if err != nil { return false, err } case *manifestlist.DeserializedManifestList: id, manifestDigest, err = p.pullManifestList(ctx, ref, v, platform) if err != nil { return false, err } default: return false, invalidManifestFormatError{} } progress.Message(p.config.ProgressOutput, "", "Digest: "+manifestDigest.String()) if p.config.ReferenceStore != nil { oldTagID, err := p.config.ReferenceStore.Get(ref) if err == nil { if oldTagID == id { return false, addDigestReference(p.config.ReferenceStore, ref, manifestDigest, id) } } else if err != refstore.ErrDoesNotExist { return false, err } if canonical, ok := ref.(reference.Canonical); ok { if err = p.config.ReferenceStore.AddDigest(canonical, id, true); err != nil { return false, err } } else { if err = addDigestReference(p.config.ReferenceStore, ref, manifestDigest, id); err != nil { return false, err } if err = p.config.ReferenceStore.AddTag(ref, id, true); err != nil { return false, err } } } return true, nil } func (p *v2Puller) pullSchema1(ctx context.Context, ref reference.Reference, unverifiedManifest *schema1.SignedManifest, platform *specs.Platform) (id digest.Digest, manifestDigest digest.Digest, err error) { if platform != nil { // Early bath if the requested OS doesn't match that of the configuration. // This avoids doing the download, only to potentially fail later. 
if !system.IsOSSupported(platform.OS) { return "", "", fmt.Errorf("cannot download image with operating system %q when requesting %q", runtime.GOOS, platform.OS) } } var verifiedManifest *schema1.Manifest verifiedManifest, err = verifySchema1Manifest(unverifiedManifest, ref) if err != nil { return "", "", err } rootFS := image.NewRootFS() // remove duplicate layers and check parent chain validity err = fixManifestLayers(verifiedManifest) if err != nil { return "", "", err } var descriptors []xfer.DownloadDescriptor // Image history converted to the new format var history []image.History // Note that the order of this loop is in the direction of bottom-most // to top-most, so that the downloads slice gets ordered correctly. for i := len(verifiedManifest.FSLayers) - 1; i >= 0; i-- { blobSum := verifiedManifest.FSLayers[i].BlobSum if err = blobSum.Validate(); err != nil { return "", "", errors.Wrapf(err, "could not validate layer digest %q", blobSum) } var throwAway struct { ThrowAway bool `json:"throwaway,omitempty"` } if err := json.Unmarshal([]byte(verifiedManifest.History[i].V1Compatibility), &throwAway); err != nil { return "", "", err } h, err := v1.HistoryFromConfig([]byte(verifiedManifest.History[i].V1Compatibility), throwAway.ThrowAway) if err != nil { return "", "", err } history = append(history, h) if throwAway.ThrowAway { continue } layerDescriptor := &v2LayerDescriptor{ digest: blobSum, repoInfo: p.repoInfo, repo: p.repo, V2MetadataService: p.V2MetadataService, } descriptors = append(descriptors, layerDescriptor) } resultRootFS, release, err := p.config.DownloadManager.Download(ctx, *rootFS, runtime.GOOS, descriptors, p.config.ProgressOutput) if err != nil { return "", "", err } defer release() config, err := v1.MakeConfigFromV1Config([]byte(verifiedManifest.History[0].V1Compatibility), &resultRootFS, history) if err != nil { return "", "", err } imageID, err := p.config.ImageStore.Put(ctx, config) if err != nil { return "", "", err } manifestDigest = digest.FromBytes(unverifiedManifest.Canonical) return imageID, manifestDigest, nil } func (p *v2Puller) pullSchema2Layers(ctx context.Context, target distribution.Descriptor, layers []distribution.Descriptor, platform *specs.Platform) (id digest.Digest, err error) { if _, err := p.config.ImageStore.Get(ctx, target.Digest); err == nil { // If the image already exists locally, no need to pull // anything. return target.Digest, nil } var descriptors []xfer.DownloadDescriptor // Note that the order of this loop is in the direction of bottom-most // to top-most, so that the downloads slice gets ordered correctly. 
for _, d := range layers { if err := d.Digest.Validate(); err != nil { return "", errors.Wrapf(err, "could not validate layer digest %q", d.Digest) } layerDescriptor := &v2LayerDescriptor{ digest: d.Digest, repo: p.repo, repoInfo: p.repoInfo, V2MetadataService: p.V2MetadataService, src: d, } descriptors = append(descriptors, layerDescriptor) } configChan := make(chan []byte, 1) configErrChan := make(chan error, 1) layerErrChan := make(chan error, 1) downloadsDone := make(chan struct{}) var cancel func() ctx, cancel = context.WithCancel(ctx) defer cancel() // Pull the image config go func() { configJSON, err := p.pullSchema2Config(ctx, target.Digest) if err != nil { configErrChan <- ImageConfigPullError{Err: err} cancel() return } configChan <- configJSON }() var ( configJSON []byte // raw serialized image config downloadedRootFS *image.RootFS // rootFS from registered layers configRootFS *image.RootFS // rootFS from configuration release func() // release resources from rootFS download configPlatform *specs.Platform // for LCOW when registering downloaded layers ) layerStoreOS := runtime.GOOS if platform != nil { layerStoreOS = platform.OS } // https://github.com/docker/docker/issues/24766 - Err on the side of caution, // explicitly blocking images intended for linux from the Windows daemon. On // Windows, we do this before the attempt to download, effectively serialising // the download slightly slowing it down. We have to do it this way, as // chances are the download of layers itself would fail due to file names // which aren't suitable for NTFS. At some point in the future, if a similar // check to block Windows images being pulled on Linux is implemented, it // may be necessary to perform the same type of serialisation. if runtime.GOOS == "windows" { configJSON, configRootFS, configPlatform, err = receiveConfig(p.config.ImageStore, configChan, configErrChan) if err != nil { return "", err } if configRootFS == nil { return "", errRootFSInvalid } if err := checkImageCompatibility(configPlatform.OS, configPlatform.OSVersion); err != nil { return "", err } if len(descriptors) != len(configRootFS.DiffIDs) { return "", errRootFSMismatch } if platform == nil { // Early bath if the requested OS doesn't match that of the configuration. // This avoids doing the download, only to potentially fail later. 
if !system.IsOSSupported(configPlatform.OS) { return "", fmt.Errorf("cannot download image with operating system %q when requesting %q", configPlatform.OS, layerStoreOS) } layerStoreOS = configPlatform.OS } // Populate diff ids in descriptors to avoid downloading foreign layers // which have been side loaded for i := range descriptors { descriptors[i].(*v2LayerDescriptor).diffID = configRootFS.DiffIDs[i] } } if p.config.DownloadManager != nil { go func() { var ( err error rootFS image.RootFS ) downloadRootFS := *image.NewRootFS() rootFS, release, err = p.config.DownloadManager.Download(ctx, downloadRootFS, layerStoreOS, descriptors, p.config.ProgressOutput) if err != nil { // Intentionally do not cancel the config download here // as the error from config download (if there is one) // is more interesting than the layer download error layerErrChan <- err return } downloadedRootFS = &rootFS close(downloadsDone) }() } else { // We have nothing to download close(downloadsDone) } if configJSON == nil { configJSON, configRootFS, _, err = receiveConfig(p.config.ImageStore, configChan, configErrChan) if err == nil && configRootFS == nil { err = errRootFSInvalid } if err != nil { cancel() select { case <-downloadsDone: case <-layerErrChan: } return "", err } } select { case <-downloadsDone: case err = <-layerErrChan: return "", err } if release != nil { defer release() } if downloadedRootFS != nil { // The DiffIDs returned in rootFS MUST match those in the config. // Otherwise the image config could be referencing layers that aren't // included in the manifest. if len(downloadedRootFS.DiffIDs) != len(configRootFS.DiffIDs) { return "", errRootFSMismatch } for i := range downloadedRootFS.DiffIDs { if downloadedRootFS.DiffIDs[i] != configRootFS.DiffIDs[i] { return "", errRootFSMismatch } } } imageID, err := p.config.ImageStore.Put(ctx, configJSON) if err != nil { return "", err } return imageID, nil } func (p *v2Puller) pullSchema2(ctx context.Context, ref reference.Named, mfst *schema2.DeserializedManifest, platform *specs.Platform) (id digest.Digest, manifestDigest digest.Digest, err error) { manifestDigest, err = schema2ManifestDigest(ref, mfst) if err != nil { return "", "", err } id, err = p.pullSchema2Layers(ctx, mfst.Target(), mfst.Layers, platform) return id, manifestDigest, err } func (p *v2Puller) pullOCI(ctx context.Context, ref reference.Named, mfst *ocischema.DeserializedManifest, platform *specs.Platform) (id digest.Digest, manifestDigest digest.Digest, err error) { manifestDigest, err = schema2ManifestDigest(ref, mfst) if err != nil { return "", "", err } id, err = p.pullSchema2Layers(ctx, mfst.Target(), mfst.Layers, platform) return id, manifestDigest, err } func receiveConfig(s ImageConfigStore, configChan <-chan []byte, errChan <-chan error) ([]byte, *image.RootFS, *specs.Platform, error) { select { case configJSON := <-configChan: rootfs, err := s.RootFSFromConfig(configJSON) if err != nil { return nil, nil, nil, err } platform, err := s.PlatformFromConfig(configJSON) if err != nil { return nil, nil, nil, err } return configJSON, rootfs, platform, nil case err := <-errChan: return nil, nil, nil, err // Don't need a case for ctx.Done in the select because cancellation // will trigger an error in p.pullSchema2ImageConfig. } } // pullManifestList handles "manifest lists" which point to various // platform-specific manifests. 
func (p *v2Puller) pullManifestList(ctx context.Context, ref reference.Named, mfstList *manifestlist.DeserializedManifestList, pp *specs.Platform) (id digest.Digest, manifestListDigest digest.Digest, err error) { manifestListDigest, err = schema2ManifestDigest(ref, mfstList) if err != nil { return "", "", err } var platform specs.Platform if pp != nil { platform = *pp } logrus.Debugf("%s resolved to a manifestList object with %d entries; looking for a %s/%s match", ref, len(mfstList.Manifests), platforms.Format(platform), runtime.GOARCH) manifestMatches := filterManifests(mfstList.Manifests, platform) if len(manifestMatches) == 0 { errMsg := fmt.Sprintf("no matching manifest for %s in the manifest list entries", formatPlatform(platform)) logrus.Debugf(errMsg) return "", "", errors.New(errMsg) } if len(manifestMatches) > 1 { logrus.Debugf("found multiple matches in manifest list, choosing best match %s", manifestMatches[0].Digest.String()) } match := manifestMatches[0] if err := checkImageCompatibility(match.Platform.OS, match.Platform.OSVersion); err != nil { return "", "", err } desc := specs.Descriptor{ Digest: match.Digest, Size: match.Size, MediaType: match.MediaType, } manifest, err := p.manifestStore.Get(ctx, desc) if err != nil { return "", "", err } manifestRef, err := reference.WithDigest(reference.TrimNamed(ref), match.Digest) if err != nil { return "", "", err } switch v := manifest.(type) { case *schema1.SignedManifest: msg := fmt.Sprintf("[DEPRECATION NOTICE] v2 schema1 manifests in manifest lists are not supported and will break in a future release. Suggest author of %s to upgrade to v2 schema2. More information at https://docs.docker.com/registry/spec/deprecated-schema-v1/", ref) logrus.Warn(msg) progress.Message(p.config.ProgressOutput, "", msg) platform := toOCIPlatform(manifestMatches[0].Platform) id, _, err = p.pullSchema1(ctx, manifestRef, v, &platform) if err != nil { return "", "", err } case *schema2.DeserializedManifest: platform := toOCIPlatform(manifestMatches[0].Platform) id, _, err = p.pullSchema2(ctx, manifestRef, v, &platform) if err != nil { return "", "", err } case *ocischema.DeserializedManifest: platform := toOCIPlatform(manifestMatches[0].Platform) id, _, err = p.pullOCI(ctx, manifestRef, v, &platform) if err != nil { return "", "", err } default: return "", "", errors.New("unsupported manifest format") } return id, manifestListDigest, err } func (p *v2Puller) pullSchema2Config(ctx context.Context, dgst digest.Digest) (configJSON []byte, err error) { blobs := p.repo.Blobs(ctx) configJSON, err = blobs.Get(ctx, dgst) if err != nil { return nil, err } // Verify image config digest verifier := dgst.Verifier() if _, err := verifier.Write(configJSON); err != nil { return nil, err } if !verifier.Verified() { err := fmt.Errorf("image config verification failed for digest %s", dgst) logrus.Error(err) return nil, err } return configJSON, nil } // schema2ManifestDigest computes the manifest digest, and, if pulling by // digest, ensures that it matches the requested digest. func schema2ManifestDigest(ref reference.Named, mfst distribution.Manifest) (digest.Digest, error) { _, canonical, err := mfst.Payload() if err != nil { return "", err } // If pull by digest, then verify the manifest digest. 
if digested, isDigested := ref.(reference.Canonical); isDigested { verifier := digested.Digest().Verifier() if _, err := verifier.Write(canonical); err != nil { return "", err } if !verifier.Verified() { err := fmt.Errorf("manifest verification failed for digest %s", digested.Digest()) logrus.Error(err) return "", err } return digested.Digest(), nil } return digest.FromBytes(canonical), nil } func verifySchema1Manifest(signedManifest *schema1.SignedManifest, ref reference.Reference) (m *schema1.Manifest, err error) { // If pull by digest, then verify the manifest digest. NOTE: It is // important to do this first, before any other content validation. If the // digest cannot be verified, don't even bother with those other things. if digested, isCanonical := ref.(reference.Canonical); isCanonical { verifier := digested.Digest().Verifier() if _, err := verifier.Write(signedManifest.Canonical); err != nil { return nil, err } if !verifier.Verified() { err := fmt.Errorf("image verification failed for digest %s", digested.Digest()) logrus.Error(err) return nil, err } } m = &signedManifest.Manifest if m.SchemaVersion != 1 { return nil, fmt.Errorf("unsupported schema version %d for %q", m.SchemaVersion, reference.FamiliarString(ref)) } if len(m.FSLayers) != len(m.History) { return nil, fmt.Errorf("length of history not equal to number of layers for %q", reference.FamiliarString(ref)) } if len(m.FSLayers) == 0 { return nil, fmt.Errorf("no FSLayers in manifest for %q", reference.FamiliarString(ref)) } return m, nil } // fixManifestLayers removes repeated layers from the manifest and checks the // correctness of the parent chain. func fixManifestLayers(m *schema1.Manifest) error { imgs := make([]*image.V1Image, len(m.FSLayers)) for i := range m.FSLayers { img := &image.V1Image{} if err := json.Unmarshal([]byte(m.History[i].V1Compatibility), img); err != nil { return err } imgs[i] = img if err := v1.ValidateID(img.ID); err != nil { return err } } if imgs[len(imgs)-1].Parent != "" && runtime.GOOS != "windows" { // Windows base layer can point to a base layer parent that is not in manifest. return errors.New("invalid parent ID in the base layer of the image") } // check general duplicates to error instead of a deadlock idmap := make(map[string]struct{}) var lastID string for _, img := range imgs { // skip IDs that appear after each other, we handle those later if _, exists := idmap[img.ID]; img.ID != lastID && exists { return fmt.Errorf("ID %+v appears multiple times in manifest", img.ID) } lastID = img.ID idmap[lastID] = struct{}{} } // backwards loop so that we keep the remaining indexes after removing items for i := len(imgs) - 2; i >= 0; i-- { if imgs[i].ID == imgs[i+1].ID { // repeated ID. remove and continue m.FSLayers = append(m.FSLayers[:i], m.FSLayers[i+1:]...) m.History = append(m.History[:i], m.History[i+1:]...) } else if imgs[i].Parent != imgs[i+1].ID { return fmt.Errorf("invalid parent ID. Expected %v, got %v", imgs[i+1].ID, imgs[i].Parent) } } return nil } func createDownloadFile() (*os.File, error) { return ioutil.TempFile("", "GetImageBlob") } func toOCIPlatform(p manifestlist.PlatformSpec) specs.Platform { return specs.Platform{ OS: p.OS, Architecture: p.Architecture, Variant: p.Variant, OSFeatures: p.OSFeatures, OSVersion: p.OSVersion, } }
thaJeztah
3ad9549e70bdf45b40c6332b221cd5c7fd635524
51b06c6795160d8a1ba05d05d6491df7588b2957
`Early bath`?
cpuguy83
4,516
moby/moby
42,683
Remove LCOW (step 6)
Splitting off more bits from https://github.com/moby/moby/pull/42170
null
2021-07-27 11:33:51+00:00
2021-07-29 18:34:29+00:00
distribution/pull_v2.go
package distribution // import "github.com/docker/docker/distribution" import ( "context" "encoding/json" "fmt" "io" "io/ioutil" "os" "runtime" "strings" "github.com/containerd/containerd/log" "github.com/containerd/containerd/platforms" "github.com/docker/distribution" "github.com/docker/distribution/manifest/manifestlist" "github.com/docker/distribution/manifest/ocischema" "github.com/docker/distribution/manifest/schema1" "github.com/docker/distribution/manifest/schema2" "github.com/docker/distribution/reference" "github.com/docker/distribution/registry/client/transport" "github.com/docker/docker/distribution/metadata" "github.com/docker/docker/distribution/xfer" "github.com/docker/docker/image" v1 "github.com/docker/docker/image/v1" "github.com/docker/docker/layer" "github.com/docker/docker/pkg/ioutils" "github.com/docker/docker/pkg/progress" "github.com/docker/docker/pkg/stringid" "github.com/docker/docker/pkg/system" refstore "github.com/docker/docker/reference" "github.com/docker/docker/registry" digest "github.com/opencontainers/go-digest" specs "github.com/opencontainers/image-spec/specs-go/v1" "github.com/pkg/errors" "github.com/sirupsen/logrus" ) var ( errRootFSMismatch = errors.New("layers from manifest don't match image configuration") errRootFSInvalid = errors.New("invalid rootfs in image configuration") ) // ImageConfigPullError is an error pulling the image config blob // (only applies to schema2). type ImageConfigPullError struct { Err error } // Error returns the error string for ImageConfigPullError. func (e ImageConfigPullError) Error() string { return "error pulling image configuration: " + e.Err.Error() } type v2Puller struct { V2MetadataService metadata.V2MetadataService endpoint registry.APIEndpoint config *ImagePullConfig repoInfo *registry.RepositoryInfo repo distribution.Repository manifestStore *manifestStore } func (p *v2Puller) Pull(ctx context.Context, ref reference.Named, platform *specs.Platform) (err error) { // TODO(tiborvass): was ReceiveTimeout p.repo, err = NewV2Repository(ctx, p.repoInfo, p.endpoint, p.config.MetaHeaders, p.config.AuthConfig, "pull") if err != nil { logrus.Warnf("Error getting v2 registry: %v", err) return err } p.manifestStore.remote, err = p.repo.Manifests(ctx) if err != nil { return err } if err = p.pullV2Repository(ctx, ref, platform); err != nil { if _, ok := err.(fallbackError); ok { return err } if continueOnError(err, p.endpoint.Mirror) { return fallbackError{ err: err, transportOK: true, } } } return err } func (p *v2Puller) pullV2Repository(ctx context.Context, ref reference.Named, platform *specs.Platform) (err error) { var layersDownloaded bool if !reference.IsNameOnly(ref) { layersDownloaded, err = p.pullV2Tag(ctx, ref, platform) if err != nil { return err } } else { tags, err := p.repo.Tags(ctx).All(ctx) if err != nil { return err } for _, tag := range tags { tagRef, err := reference.WithTag(ref, tag) if err != nil { return err } pulledNew, err := p.pullV2Tag(ctx, tagRef, platform) if err != nil { // Since this is the pull-all-tags case, don't // allow an error pulling a particular tag to // make the whole pull fall back to v1. if fallbackErr, ok := err.(fallbackError); ok { return fallbackErr.err } return err } // pulledNew is true if either new layers were downloaded OR if existing images were newly tagged // TODO(tiborvass): should we change the name of `layersDownload`? What about message in WriteStatus? 
layersDownloaded = layersDownloaded || pulledNew } } writeStatus(reference.FamiliarString(ref), p.config.ProgressOutput, layersDownloaded) return nil } type v2LayerDescriptor struct { digest digest.Digest diffID layer.DiffID repoInfo *registry.RepositoryInfo repo distribution.Repository V2MetadataService metadata.V2MetadataService tmpFile *os.File verifier digest.Verifier src distribution.Descriptor } func (ld *v2LayerDescriptor) Key() string { return "v2:" + ld.digest.String() } func (ld *v2LayerDescriptor) ID() string { return stringid.TruncateID(ld.digest.String()) } func (ld *v2LayerDescriptor) DiffID() (layer.DiffID, error) { if ld.diffID != "" { return ld.diffID, nil } return ld.V2MetadataService.GetDiffID(ld.digest) } func (ld *v2LayerDescriptor) Download(ctx context.Context, progressOutput progress.Output) (io.ReadCloser, int64, error) { logrus.Debugf("pulling blob %q", ld.digest) var ( err error offset int64 ) if ld.tmpFile == nil { ld.tmpFile, err = createDownloadFile() if err != nil { return nil, 0, xfer.DoNotRetry{Err: err} } } else { offset, err = ld.tmpFile.Seek(0, io.SeekEnd) if err != nil { logrus.Debugf("error seeking to end of download file: %v", err) offset = 0 ld.tmpFile.Close() if err := os.Remove(ld.tmpFile.Name()); err != nil { logrus.Errorf("Failed to remove temp file: %s", ld.tmpFile.Name()) } ld.tmpFile, err = createDownloadFile() if err != nil { return nil, 0, xfer.DoNotRetry{Err: err} } } else if offset != 0 { logrus.Debugf("attempting to resume download of %q from %d bytes", ld.digest, offset) } } tmpFile := ld.tmpFile layerDownload, err := ld.open(ctx) if err != nil { logrus.Errorf("Error initiating layer download: %v", err) return nil, 0, retryOnError(err) } if offset != 0 { _, err := layerDownload.Seek(offset, io.SeekStart) if err != nil { if err := ld.truncateDownloadFile(); err != nil { return nil, 0, xfer.DoNotRetry{Err: err} } return nil, 0, err } } size, err := layerDownload.Seek(0, io.SeekEnd) if err != nil { // Seek failed, perhaps because there was no Content-Length // header. This shouldn't fail the download, because we can // still continue without a progress bar. size = 0 } else { if size != 0 && offset > size { logrus.Debug("Partial download is larger than full blob. Starting over") offset = 0 if err := ld.truncateDownloadFile(); err != nil { return nil, 0, xfer.DoNotRetry{Err: err} } } // Restore the seek offset either at the beginning of the // stream, or just after the last byte we have from previous // attempts. _, err = layerDownload.Seek(offset, io.SeekStart) if err != nil { return nil, 0, err } } reader := progress.NewProgressReader(ioutils.NewCancelReadCloser(ctx, layerDownload), progressOutput, size-offset, ld.ID(), "Downloading") defer reader.Close() if ld.verifier == nil { ld.verifier = ld.digest.Verifier() } _, err = io.Copy(tmpFile, io.TeeReader(reader, ld.verifier)) if err != nil { if err == transport.ErrWrongCodeForByteRange { if err := ld.truncateDownloadFile(); err != nil { return nil, 0, xfer.DoNotRetry{Err: err} } return nil, 0, err } return nil, 0, retryOnError(err) } progress.Update(progressOutput, ld.ID(), "Verifying Checksum") if !ld.verifier.Verified() { err = fmt.Errorf("filesystem layer verification failed for digest %s", ld.digest) logrus.Error(err) // Allow a retry if this digest verification error happened // after a resumed download. 
if offset != 0 { if err := ld.truncateDownloadFile(); err != nil { return nil, 0, xfer.DoNotRetry{Err: err} } return nil, 0, err } return nil, 0, xfer.DoNotRetry{Err: err} } progress.Update(progressOutput, ld.ID(), "Download complete") logrus.Debugf("Downloaded %s to tempfile %s", ld.ID(), tmpFile.Name()) _, err = tmpFile.Seek(0, io.SeekStart) if err != nil { tmpFile.Close() if err := os.Remove(tmpFile.Name()); err != nil { logrus.Errorf("Failed to remove temp file: %s", tmpFile.Name()) } ld.tmpFile = nil ld.verifier = nil return nil, 0, xfer.DoNotRetry{Err: err} } // hand off the temporary file to the download manager, so it will only // be closed once ld.tmpFile = nil return ioutils.NewReadCloserWrapper(tmpFile, func() error { tmpFile.Close() err := os.RemoveAll(tmpFile.Name()) if err != nil { logrus.Errorf("Failed to remove temp file: %s", tmpFile.Name()) } return err }), size, nil } func (ld *v2LayerDescriptor) Close() { if ld.tmpFile != nil { ld.tmpFile.Close() if err := os.RemoveAll(ld.tmpFile.Name()); err != nil { logrus.Errorf("Failed to remove temp file: %s", ld.tmpFile.Name()) } } } func (ld *v2LayerDescriptor) truncateDownloadFile() error { // Need a new hash context since we will be redoing the download ld.verifier = nil if _, err := ld.tmpFile.Seek(0, io.SeekStart); err != nil { logrus.Errorf("error seeking to beginning of download file: %v", err) return err } if err := ld.tmpFile.Truncate(0); err != nil { logrus.Errorf("error truncating download file: %v", err) return err } return nil } func (ld *v2LayerDescriptor) Registered(diffID layer.DiffID) { // Cache mapping from this layer's DiffID to the blobsum ld.V2MetadataService.Add(diffID, metadata.V2Metadata{Digest: ld.digest, SourceRepository: ld.repoInfo.Name.Name()}) } func (p *v2Puller) pullV2Tag(ctx context.Context, ref reference.Named, platform *specs.Platform) (tagUpdated bool, err error) { var ( tagOrDigest string // Used for logging/progress only dgst digest.Digest mt string size int64 tagged reference.NamedTagged isTagged bool ) if digested, isDigested := ref.(reference.Canonical); isDigested { dgst = digested.Digest() tagOrDigest = digested.String() } else if tagged, isTagged = ref.(reference.NamedTagged); isTagged { tagService := p.repo.Tags(ctx) desc, err := tagService.Get(ctx, tagged.Tag()) if err != nil { return false, err } dgst = desc.Digest tagOrDigest = tagged.Tag() mt = desc.MediaType size = desc.Size } else { return false, fmt.Errorf("internal error: reference has neither a tag nor a digest: %s", reference.FamiliarString(ref)) } ctx = log.WithLogger(ctx, logrus.WithFields( logrus.Fields{ "digest": dgst, "remote": ref, })) desc := specs.Descriptor{ MediaType: mt, Digest: dgst, Size: size, } manifest, err := p.manifestStore.Get(ctx, desc) if err != nil { if isTagged && isNotFound(errors.Cause(err)) { logrus.WithField("ref", ref).WithError(err).Debug("Falling back to pull manifest by tag") msg := `%s Failed to pull manifest by the resolved digest. This registry does not appear to conform to the distribution registry specification; falling back to pull by tag. This fallback is DEPRECATED, and will be removed in a future release. Please contact admins of %s. %s ` warnEmoji := "\U000026A0\U0000FE0F" progress.Messagef(p.config.ProgressOutput, "WARNING", msg, warnEmoji, p.endpoint.URL, warnEmoji) // Fetch by tag worked, but fetch by digest didn't. // This is a broken registry implementation. // We'll fallback to the old behavior and get the manifest by tag. 
var ms distribution.ManifestService ms, err = p.repo.Manifests(ctx) if err != nil { return false, err } manifest, err = ms.Get(ctx, "", distribution.WithTag(tagged.Tag())) err = errors.Wrap(err, "error after falling back to get manifest by tag") } if err != nil { return false, err } } if manifest == nil { return false, fmt.Errorf("image manifest does not exist for tag or digest %q", tagOrDigest) } if m, ok := manifest.(*schema2.DeserializedManifest); ok { var allowedMediatype bool for _, t := range p.config.Schema2Types { if m.Manifest.Config.MediaType == t { allowedMediatype = true break } } if !allowedMediatype { configClass := mediaTypeClasses[m.Manifest.Config.MediaType] if configClass == "" { configClass = "unknown" } return false, invalidManifestClassError{m.Manifest.Config.MediaType, configClass} } } logrus.Debugf("Pulling ref from V2 registry: %s", reference.FamiliarString(ref)) progress.Message(p.config.ProgressOutput, tagOrDigest, "Pulling from "+reference.FamiliarName(p.repo.Named())) var ( id digest.Digest manifestDigest digest.Digest ) switch v := manifest.(type) { case *schema1.SignedManifest: if p.config.RequireSchema2 { return false, fmt.Errorf("invalid manifest: not schema2") } // give registries time to upgrade to schema2 and only warn if we know a registry has been upgraded long time ago // TODO: condition to be removed if reference.Domain(ref) == "docker.io" { msg := fmt.Sprintf("Image %s uses outdated schema1 manifest format. Please upgrade to a schema2 image for better future compatibility. More information at https://docs.docker.com/registry/spec/deprecated-schema-v1/", ref) logrus.Warn(msg) progress.Message(p.config.ProgressOutput, "", msg) } id, manifestDigest, err = p.pullSchema1(ctx, ref, v, platform) if err != nil { return false, err } case *schema2.DeserializedManifest: id, manifestDigest, err = p.pullSchema2(ctx, ref, v, platform) if err != nil { return false, err } case *ocischema.DeserializedManifest: id, manifestDigest, err = p.pullOCI(ctx, ref, v, platform) if err != nil { return false, err } case *manifestlist.DeserializedManifestList: id, manifestDigest, err = p.pullManifestList(ctx, ref, v, platform) if err != nil { return false, err } default: return false, invalidManifestFormatError{} } progress.Message(p.config.ProgressOutput, "", "Digest: "+manifestDigest.String()) if p.config.ReferenceStore != nil { oldTagID, err := p.config.ReferenceStore.Get(ref) if err == nil { if oldTagID == id { return false, addDigestReference(p.config.ReferenceStore, ref, manifestDigest, id) } } else if err != refstore.ErrDoesNotExist { return false, err } if canonical, ok := ref.(reference.Canonical); ok { if err = p.config.ReferenceStore.AddDigest(canonical, id, true); err != nil { return false, err } } else { if err = addDigestReference(p.config.ReferenceStore, ref, manifestDigest, id); err != nil { return false, err } if err = p.config.ReferenceStore.AddTag(ref, id, true); err != nil { return false, err } } } return true, nil } func (p *v2Puller) pullSchema1(ctx context.Context, ref reference.Reference, unverifiedManifest *schema1.SignedManifest, platform *specs.Platform) (id digest.Digest, manifestDigest digest.Digest, err error) { var verifiedManifest *schema1.Manifest verifiedManifest, err = verifySchema1Manifest(unverifiedManifest, ref) if err != nil { return "", "", err } rootFS := image.NewRootFS() // remove duplicate layers and check parent chain validity err = fixManifestLayers(verifiedManifest) if err != nil { return "", "", err } var descriptors 
[]xfer.DownloadDescriptor // Image history converted to the new format var history []image.History // Note that the order of this loop is in the direction of bottom-most // to top-most, so that the downloads slice gets ordered correctly. for i := len(verifiedManifest.FSLayers) - 1; i >= 0; i-- { blobSum := verifiedManifest.FSLayers[i].BlobSum if err = blobSum.Validate(); err != nil { return "", "", errors.Wrapf(err, "could not validate layer digest %q", blobSum) } var throwAway struct { ThrowAway bool `json:"throwaway,omitempty"` } if err := json.Unmarshal([]byte(verifiedManifest.History[i].V1Compatibility), &throwAway); err != nil { return "", "", err } h, err := v1.HistoryFromConfig([]byte(verifiedManifest.History[i].V1Compatibility), throwAway.ThrowAway) if err != nil { return "", "", err } history = append(history, h) if throwAway.ThrowAway { continue } layerDescriptor := &v2LayerDescriptor{ digest: blobSum, repoInfo: p.repoInfo, repo: p.repo, V2MetadataService: p.V2MetadataService, } descriptors = append(descriptors, layerDescriptor) } // The v1 manifest itself doesn't directly contain an OS. However, // the history does, but unfortunately that's a string, so search through // all the history until hopefully we find one which indicates the OS. // supertest2014/nyan is an example of a registry image with schemav1. configOS := runtime.GOOS if system.LCOWSupported() { type config struct { Os string `json:"os,omitempty"` } for _, v := range verifiedManifest.History { var c config if err := json.Unmarshal([]byte(v.V1Compatibility), &c); err == nil { if c.Os != "" { configOS = c.Os break } } } } // In the situation that the API call didn't specify an OS explicitly, but // we support the operating system, switch to that operating system. // eg FROM supertest2014/nyan with no platform specifier, and docker build // with no --platform= flag under LCOW. requestedOS := "" if platform != nil { requestedOS = platform.OS } else if system.IsOSSupported(configOS) { requestedOS = configOS } // Early bath if the requested OS doesn't match that of the configuration. // This avoids doing the download, only to potentially fail later. if !strings.EqualFold(configOS, requestedOS) { return "", "", fmt.Errorf("cannot download image with operating system %q when requesting %q", configOS, requestedOS) } resultRootFS, release, err := p.config.DownloadManager.Download(ctx, *rootFS, configOS, descriptors, p.config.ProgressOutput) if err != nil { return "", "", err } defer release() config, err := v1.MakeConfigFromV1Config([]byte(verifiedManifest.History[0].V1Compatibility), &resultRootFS, history) if err != nil { return "", "", err } imageID, err := p.config.ImageStore.Put(ctx, config) if err != nil { return "", "", err } manifestDigest = digest.FromBytes(unverifiedManifest.Canonical) return imageID, manifestDigest, nil } func (p *v2Puller) pullSchema2Layers(ctx context.Context, target distribution.Descriptor, layers []distribution.Descriptor, platform *specs.Platform) (id digest.Digest, err error) { if _, err := p.config.ImageStore.Get(ctx, target.Digest); err == nil { // If the image already exists locally, no need to pull // anything. return target.Digest, nil } var descriptors []xfer.DownloadDescriptor // Note that the order of this loop is in the direction of bottom-most // to top-most, so that the downloads slice gets ordered correctly. 
for _, d := range layers { if err := d.Digest.Validate(); err != nil { return "", errors.Wrapf(err, "could not validate layer digest %q", d.Digest) } layerDescriptor := &v2LayerDescriptor{ digest: d.Digest, repo: p.repo, repoInfo: p.repoInfo, V2MetadataService: p.V2MetadataService, src: d, } descriptors = append(descriptors, layerDescriptor) } configChan := make(chan []byte, 1) configErrChan := make(chan error, 1) layerErrChan := make(chan error, 1) downloadsDone := make(chan struct{}) var cancel func() ctx, cancel = context.WithCancel(ctx) defer cancel() // Pull the image config go func() { configJSON, err := p.pullSchema2Config(ctx, target.Digest) if err != nil { configErrChan <- ImageConfigPullError{Err: err} cancel() return } configChan <- configJSON }() var ( configJSON []byte // raw serialized image config downloadedRootFS *image.RootFS // rootFS from registered layers configRootFS *image.RootFS // rootFS from configuration release func() // release resources from rootFS download configPlatform *specs.Platform // for LCOW when registering downloaded layers ) layerStoreOS := runtime.GOOS if platform != nil { layerStoreOS = platform.OS } // https://github.com/docker/docker/issues/24766 - Err on the side of caution, // explicitly blocking images intended for linux from the Windows daemon. On // Windows, we do this before the attempt to download, effectively serialising // the download slightly slowing it down. We have to do it this way, as // chances are the download of layers itself would fail due to file names // which aren't suitable for NTFS. At some point in the future, if a similar // check to block Windows images being pulled on Linux is implemented, it // may be necessary to perform the same type of serialisation. if runtime.GOOS == "windows" { configJSON, configRootFS, configPlatform, err = receiveConfig(p.config.ImageStore, configChan, configErrChan) if err != nil { return "", err } if configRootFS == nil { return "", errRootFSInvalid } if err := checkImageCompatibility(configPlatform.OS, configPlatform.OSVersion); err != nil { return "", err } if len(descriptors) != len(configRootFS.DiffIDs) { return "", errRootFSMismatch } if platform == nil { // Early bath if the requested OS doesn't match that of the configuration. // This avoids doing the download, only to potentially fail later. 
if !system.IsOSSupported(configPlatform.OS) { return "", fmt.Errorf("cannot download image with operating system %q when requesting %q", configPlatform.OS, layerStoreOS) } layerStoreOS = configPlatform.OS } // Populate diff ids in descriptors to avoid downloading foreign layers // which have been side loaded for i := range descriptors { descriptors[i].(*v2LayerDescriptor).diffID = configRootFS.DiffIDs[i] } } if p.config.DownloadManager != nil { go func() { var ( err error rootFS image.RootFS ) downloadRootFS := *image.NewRootFS() rootFS, release, err = p.config.DownloadManager.Download(ctx, downloadRootFS, layerStoreOS, descriptors, p.config.ProgressOutput) if err != nil { // Intentionally do not cancel the config download here // as the error from config download (if there is one) // is more interesting than the layer download error layerErrChan <- err return } downloadedRootFS = &rootFS close(downloadsDone) }() } else { // We have nothing to download close(downloadsDone) } if configJSON == nil { configJSON, configRootFS, _, err = receiveConfig(p.config.ImageStore, configChan, configErrChan) if err == nil && configRootFS == nil { err = errRootFSInvalid } if err != nil { cancel() select { case <-downloadsDone: case <-layerErrChan: } return "", err } } select { case <-downloadsDone: case err = <-layerErrChan: return "", err } if release != nil { defer release() } if downloadedRootFS != nil { // The DiffIDs returned in rootFS MUST match those in the config. // Otherwise the image config could be referencing layers that aren't // included in the manifest. if len(downloadedRootFS.DiffIDs) != len(configRootFS.DiffIDs) { return "", errRootFSMismatch } for i := range downloadedRootFS.DiffIDs { if downloadedRootFS.DiffIDs[i] != configRootFS.DiffIDs[i] { return "", errRootFSMismatch } } } imageID, err := p.config.ImageStore.Put(ctx, configJSON) if err != nil { return "", err } return imageID, nil } func (p *v2Puller) pullSchema2(ctx context.Context, ref reference.Named, mfst *schema2.DeserializedManifest, platform *specs.Platform) (id digest.Digest, manifestDigest digest.Digest, err error) { manifestDigest, err = schema2ManifestDigest(ref, mfst) if err != nil { return "", "", err } id, err = p.pullSchema2Layers(ctx, mfst.Target(), mfst.Layers, platform) return id, manifestDigest, err } func (p *v2Puller) pullOCI(ctx context.Context, ref reference.Named, mfst *ocischema.DeserializedManifest, platform *specs.Platform) (id digest.Digest, manifestDigest digest.Digest, err error) { manifestDigest, err = schema2ManifestDigest(ref, mfst) if err != nil { return "", "", err } id, err = p.pullSchema2Layers(ctx, mfst.Target(), mfst.Layers, platform) return id, manifestDigest, err } func receiveConfig(s ImageConfigStore, configChan <-chan []byte, errChan <-chan error) ([]byte, *image.RootFS, *specs.Platform, error) { select { case configJSON := <-configChan: rootfs, err := s.RootFSFromConfig(configJSON) if err != nil { return nil, nil, nil, err } platform, err := s.PlatformFromConfig(configJSON) if err != nil { return nil, nil, nil, err } return configJSON, rootfs, platform, nil case err := <-errChan: return nil, nil, nil, err // Don't need a case for ctx.Done in the select because cancellation // will trigger an error in p.pullSchema2ImageConfig. } } // pullManifestList handles "manifest lists" which point to various // platform-specific manifests. 
func (p *v2Puller) pullManifestList(ctx context.Context, ref reference.Named, mfstList *manifestlist.DeserializedManifestList, pp *specs.Platform) (id digest.Digest, manifestListDigest digest.Digest, err error) { manifestListDigest, err = schema2ManifestDigest(ref, mfstList) if err != nil { return "", "", err } var platform specs.Platform if pp != nil { platform = *pp } logrus.Debugf("%s resolved to a manifestList object with %d entries; looking for a %s/%s match", ref, len(mfstList.Manifests), platforms.Format(platform), runtime.GOARCH) manifestMatches := filterManifests(mfstList.Manifests, platform) if len(manifestMatches) == 0 { errMsg := fmt.Sprintf("no matching manifest for %s in the manifest list entries", formatPlatform(platform)) logrus.Debugf(errMsg) return "", "", errors.New(errMsg) } if len(manifestMatches) > 1 { logrus.Debugf("found multiple matches in manifest list, choosing best match %s", manifestMatches[0].Digest.String()) } match := manifestMatches[0] if err := checkImageCompatibility(match.Platform.OS, match.Platform.OSVersion); err != nil { return "", "", err } desc := specs.Descriptor{ Digest: match.Digest, Size: match.Size, MediaType: match.MediaType, } manifest, err := p.manifestStore.Get(ctx, desc) if err != nil { return "", "", err } manifestRef, err := reference.WithDigest(reference.TrimNamed(ref), match.Digest) if err != nil { return "", "", err } switch v := manifest.(type) { case *schema1.SignedManifest: msg := fmt.Sprintf("[DEPRECATION NOTICE] v2 schema1 manifests in manifest lists are not supported and will break in a future release. Suggest author of %s to upgrade to v2 schema2. More information at https://docs.docker.com/registry/spec/deprecated-schema-v1/", ref) logrus.Warn(msg) progress.Message(p.config.ProgressOutput, "", msg) platform := toOCIPlatform(manifestMatches[0].Platform) id, _, err = p.pullSchema1(ctx, manifestRef, v, &platform) if err != nil { return "", "", err } case *schema2.DeserializedManifest: platform := toOCIPlatform(manifestMatches[0].Platform) id, _, err = p.pullSchema2(ctx, manifestRef, v, &platform) if err != nil { return "", "", err } case *ocischema.DeserializedManifest: platform := toOCIPlatform(manifestMatches[0].Platform) id, _, err = p.pullOCI(ctx, manifestRef, v, &platform) if err != nil { return "", "", err } default: return "", "", errors.New("unsupported manifest format") } return id, manifestListDigest, err } func (p *v2Puller) pullSchema2Config(ctx context.Context, dgst digest.Digest) (configJSON []byte, err error) { blobs := p.repo.Blobs(ctx) configJSON, err = blobs.Get(ctx, dgst) if err != nil { return nil, err } // Verify image config digest verifier := dgst.Verifier() if _, err := verifier.Write(configJSON); err != nil { return nil, err } if !verifier.Verified() { err := fmt.Errorf("image config verification failed for digest %s", dgst) logrus.Error(err) return nil, err } return configJSON, nil } // schema2ManifestDigest computes the manifest digest, and, if pulling by // digest, ensures that it matches the requested digest. func schema2ManifestDigest(ref reference.Named, mfst distribution.Manifest) (digest.Digest, error) { _, canonical, err := mfst.Payload() if err != nil { return "", err } // If pull by digest, then verify the manifest digest. 
if digested, isDigested := ref.(reference.Canonical); isDigested { verifier := digested.Digest().Verifier() if _, err := verifier.Write(canonical); err != nil { return "", err } if !verifier.Verified() { err := fmt.Errorf("manifest verification failed for digest %s", digested.Digest()) logrus.Error(err) return "", err } return digested.Digest(), nil } return digest.FromBytes(canonical), nil } func verifySchema1Manifest(signedManifest *schema1.SignedManifest, ref reference.Reference) (m *schema1.Manifest, err error) { // If pull by digest, then verify the manifest digest. NOTE: It is // important to do this first, before any other content validation. If the // digest cannot be verified, don't even bother with those other things. if digested, isCanonical := ref.(reference.Canonical); isCanonical { verifier := digested.Digest().Verifier() if _, err := verifier.Write(signedManifest.Canonical); err != nil { return nil, err } if !verifier.Verified() { err := fmt.Errorf("image verification failed for digest %s", digested.Digest()) logrus.Error(err) return nil, err } } m = &signedManifest.Manifest if m.SchemaVersion != 1 { return nil, fmt.Errorf("unsupported schema version %d for %q", m.SchemaVersion, reference.FamiliarString(ref)) } if len(m.FSLayers) != len(m.History) { return nil, fmt.Errorf("length of history not equal to number of layers for %q", reference.FamiliarString(ref)) } if len(m.FSLayers) == 0 { return nil, fmt.Errorf("no FSLayers in manifest for %q", reference.FamiliarString(ref)) } return m, nil } // fixManifestLayers removes repeated layers from the manifest and checks the // correctness of the parent chain. func fixManifestLayers(m *schema1.Manifest) error { imgs := make([]*image.V1Image, len(m.FSLayers)) for i := range m.FSLayers { img := &image.V1Image{} if err := json.Unmarshal([]byte(m.History[i].V1Compatibility), img); err != nil { return err } imgs[i] = img if err := v1.ValidateID(img.ID); err != nil { return err } } if imgs[len(imgs)-1].Parent != "" && runtime.GOOS != "windows" { // Windows base layer can point to a base layer parent that is not in manifest. return errors.New("invalid parent ID in the base layer of the image") } // check general duplicates to error instead of a deadlock idmap := make(map[string]struct{}) var lastID string for _, img := range imgs { // skip IDs that appear after each other, we handle those later if _, exists := idmap[img.ID]; img.ID != lastID && exists { return fmt.Errorf("ID %+v appears multiple times in manifest", img.ID) } lastID = img.ID idmap[lastID] = struct{}{} } // backwards loop so that we keep the remaining indexes after removing items for i := len(imgs) - 2; i >= 0; i-- { if imgs[i].ID == imgs[i+1].ID { // repeated ID. remove and continue m.FSLayers = append(m.FSLayers[:i], m.FSLayers[i+1:]...) m.History = append(m.History[:i], m.History[i+1:]...) } else if imgs[i].Parent != imgs[i+1].ID { return fmt.Errorf("invalid parent ID. Expected %v, got %v", imgs[i+1].ID, imgs[i].Parent) } } return nil } func createDownloadFile() (*os.File, error) { return ioutil.TempFile("", "GetImageBlob") } func toOCIPlatform(p manifestlist.PlatformSpec) specs.Platform { return specs.Platform{ OS: p.OS, Architecture: p.Architecture, Variant: p.Variant, OSFeatures: p.OSFeatures, OSVersion: p.OSVersion, } }
package distribution // import "github.com/docker/docker/distribution" import ( "context" "encoding/json" "fmt" "io" "io/ioutil" "os" "runtime" "github.com/containerd/containerd/log" "github.com/containerd/containerd/platforms" "github.com/docker/distribution" "github.com/docker/distribution/manifest/manifestlist" "github.com/docker/distribution/manifest/ocischema" "github.com/docker/distribution/manifest/schema1" "github.com/docker/distribution/manifest/schema2" "github.com/docker/distribution/reference" "github.com/docker/distribution/registry/client/transport" "github.com/docker/docker/distribution/metadata" "github.com/docker/docker/distribution/xfer" "github.com/docker/docker/image" v1 "github.com/docker/docker/image/v1" "github.com/docker/docker/layer" "github.com/docker/docker/pkg/ioutils" "github.com/docker/docker/pkg/progress" "github.com/docker/docker/pkg/stringid" "github.com/docker/docker/pkg/system" refstore "github.com/docker/docker/reference" "github.com/docker/docker/registry" digest "github.com/opencontainers/go-digest" specs "github.com/opencontainers/image-spec/specs-go/v1" "github.com/pkg/errors" "github.com/sirupsen/logrus" ) var ( errRootFSMismatch = errors.New("layers from manifest don't match image configuration") errRootFSInvalid = errors.New("invalid rootfs in image configuration") ) // ImageConfigPullError is an error pulling the image config blob // (only applies to schema2). type ImageConfigPullError struct { Err error } // Error returns the error string for ImageConfigPullError. func (e ImageConfigPullError) Error() string { return "error pulling image configuration: " + e.Err.Error() } type v2Puller struct { V2MetadataService metadata.V2MetadataService endpoint registry.APIEndpoint config *ImagePullConfig repoInfo *registry.RepositoryInfo repo distribution.Repository manifestStore *manifestStore } func (p *v2Puller) Pull(ctx context.Context, ref reference.Named, platform *specs.Platform) (err error) { // TODO(tiborvass): was ReceiveTimeout p.repo, err = NewV2Repository(ctx, p.repoInfo, p.endpoint, p.config.MetaHeaders, p.config.AuthConfig, "pull") if err != nil { logrus.Warnf("Error getting v2 registry: %v", err) return err } p.manifestStore.remote, err = p.repo.Manifests(ctx) if err != nil { return err } if err = p.pullV2Repository(ctx, ref, platform); err != nil { if _, ok := err.(fallbackError); ok { return err } if continueOnError(err, p.endpoint.Mirror) { return fallbackError{ err: err, transportOK: true, } } } return err } func (p *v2Puller) pullV2Repository(ctx context.Context, ref reference.Named, platform *specs.Platform) (err error) { var layersDownloaded bool if !reference.IsNameOnly(ref) { layersDownloaded, err = p.pullV2Tag(ctx, ref, platform) if err != nil { return err } } else { tags, err := p.repo.Tags(ctx).All(ctx) if err != nil { return err } for _, tag := range tags { tagRef, err := reference.WithTag(ref, tag) if err != nil { return err } pulledNew, err := p.pullV2Tag(ctx, tagRef, platform) if err != nil { // Since this is the pull-all-tags case, don't // allow an error pulling a particular tag to // make the whole pull fall back to v1. if fallbackErr, ok := err.(fallbackError); ok { return fallbackErr.err } return err } // pulledNew is true if either new layers were downloaded OR if existing images were newly tagged // TODO(tiborvass): should we change the name of `layersDownload`? What about message in WriteStatus? 
layersDownloaded = layersDownloaded || pulledNew } } writeStatus(reference.FamiliarString(ref), p.config.ProgressOutput, layersDownloaded) return nil } type v2LayerDescriptor struct { digest digest.Digest diffID layer.DiffID repoInfo *registry.RepositoryInfo repo distribution.Repository V2MetadataService metadata.V2MetadataService tmpFile *os.File verifier digest.Verifier src distribution.Descriptor } func (ld *v2LayerDescriptor) Key() string { return "v2:" + ld.digest.String() } func (ld *v2LayerDescriptor) ID() string { return stringid.TruncateID(ld.digest.String()) } func (ld *v2LayerDescriptor) DiffID() (layer.DiffID, error) { if ld.diffID != "" { return ld.diffID, nil } return ld.V2MetadataService.GetDiffID(ld.digest) } func (ld *v2LayerDescriptor) Download(ctx context.Context, progressOutput progress.Output) (io.ReadCloser, int64, error) { logrus.Debugf("pulling blob %q", ld.digest) var ( err error offset int64 ) if ld.tmpFile == nil { ld.tmpFile, err = createDownloadFile() if err != nil { return nil, 0, xfer.DoNotRetry{Err: err} } } else { offset, err = ld.tmpFile.Seek(0, io.SeekEnd) if err != nil { logrus.Debugf("error seeking to end of download file: %v", err) offset = 0 ld.tmpFile.Close() if err := os.Remove(ld.tmpFile.Name()); err != nil { logrus.Errorf("Failed to remove temp file: %s", ld.tmpFile.Name()) } ld.tmpFile, err = createDownloadFile() if err != nil { return nil, 0, xfer.DoNotRetry{Err: err} } } else if offset != 0 { logrus.Debugf("attempting to resume download of %q from %d bytes", ld.digest, offset) } } tmpFile := ld.tmpFile layerDownload, err := ld.open(ctx) if err != nil { logrus.Errorf("Error initiating layer download: %v", err) return nil, 0, retryOnError(err) } if offset != 0 { _, err := layerDownload.Seek(offset, io.SeekStart) if err != nil { if err := ld.truncateDownloadFile(); err != nil { return nil, 0, xfer.DoNotRetry{Err: err} } return nil, 0, err } } size, err := layerDownload.Seek(0, io.SeekEnd) if err != nil { // Seek failed, perhaps because there was no Content-Length // header. This shouldn't fail the download, because we can // still continue without a progress bar. size = 0 } else { if size != 0 && offset > size { logrus.Debug("Partial download is larger than full blob. Starting over") offset = 0 if err := ld.truncateDownloadFile(); err != nil { return nil, 0, xfer.DoNotRetry{Err: err} } } // Restore the seek offset either at the beginning of the // stream, or just after the last byte we have from previous // attempts. _, err = layerDownload.Seek(offset, io.SeekStart) if err != nil { return nil, 0, err } } reader := progress.NewProgressReader(ioutils.NewCancelReadCloser(ctx, layerDownload), progressOutput, size-offset, ld.ID(), "Downloading") defer reader.Close() if ld.verifier == nil { ld.verifier = ld.digest.Verifier() } _, err = io.Copy(tmpFile, io.TeeReader(reader, ld.verifier)) if err != nil { if err == transport.ErrWrongCodeForByteRange { if err := ld.truncateDownloadFile(); err != nil { return nil, 0, xfer.DoNotRetry{Err: err} } return nil, 0, err } return nil, 0, retryOnError(err) } progress.Update(progressOutput, ld.ID(), "Verifying Checksum") if !ld.verifier.Verified() { err = fmt.Errorf("filesystem layer verification failed for digest %s", ld.digest) logrus.Error(err) // Allow a retry if this digest verification error happened // after a resumed download. 
if offset != 0 { if err := ld.truncateDownloadFile(); err != nil { return nil, 0, xfer.DoNotRetry{Err: err} } return nil, 0, err } return nil, 0, xfer.DoNotRetry{Err: err} } progress.Update(progressOutput, ld.ID(), "Download complete") logrus.Debugf("Downloaded %s to tempfile %s", ld.ID(), tmpFile.Name()) _, err = tmpFile.Seek(0, io.SeekStart) if err != nil { tmpFile.Close() if err := os.Remove(tmpFile.Name()); err != nil { logrus.Errorf("Failed to remove temp file: %s", tmpFile.Name()) } ld.tmpFile = nil ld.verifier = nil return nil, 0, xfer.DoNotRetry{Err: err} } // hand off the temporary file to the download manager, so it will only // be closed once ld.tmpFile = nil return ioutils.NewReadCloserWrapper(tmpFile, func() error { tmpFile.Close() err := os.RemoveAll(tmpFile.Name()) if err != nil { logrus.Errorf("Failed to remove temp file: %s", tmpFile.Name()) } return err }), size, nil } func (ld *v2LayerDescriptor) Close() { if ld.tmpFile != nil { ld.tmpFile.Close() if err := os.RemoveAll(ld.tmpFile.Name()); err != nil { logrus.Errorf("Failed to remove temp file: %s", ld.tmpFile.Name()) } } } func (ld *v2LayerDescriptor) truncateDownloadFile() error { // Need a new hash context since we will be redoing the download ld.verifier = nil if _, err := ld.tmpFile.Seek(0, io.SeekStart); err != nil { logrus.Errorf("error seeking to beginning of download file: %v", err) return err } if err := ld.tmpFile.Truncate(0); err != nil { logrus.Errorf("error truncating download file: %v", err) return err } return nil } func (ld *v2LayerDescriptor) Registered(diffID layer.DiffID) { // Cache mapping from this layer's DiffID to the blobsum ld.V2MetadataService.Add(diffID, metadata.V2Metadata{Digest: ld.digest, SourceRepository: ld.repoInfo.Name.Name()}) } func (p *v2Puller) pullV2Tag(ctx context.Context, ref reference.Named, platform *specs.Platform) (tagUpdated bool, err error) { var ( tagOrDigest string // Used for logging/progress only dgst digest.Digest mt string size int64 tagged reference.NamedTagged isTagged bool ) if digested, isDigested := ref.(reference.Canonical); isDigested { dgst = digested.Digest() tagOrDigest = digested.String() } else if tagged, isTagged = ref.(reference.NamedTagged); isTagged { tagService := p.repo.Tags(ctx) desc, err := tagService.Get(ctx, tagged.Tag()) if err != nil { return false, err } dgst = desc.Digest tagOrDigest = tagged.Tag() mt = desc.MediaType size = desc.Size } else { return false, fmt.Errorf("internal error: reference has neither a tag nor a digest: %s", reference.FamiliarString(ref)) } ctx = log.WithLogger(ctx, logrus.WithFields( logrus.Fields{ "digest": dgst, "remote": ref, })) desc := specs.Descriptor{ MediaType: mt, Digest: dgst, Size: size, } manifest, err := p.manifestStore.Get(ctx, desc) if err != nil { if isTagged && isNotFound(errors.Cause(err)) { logrus.WithField("ref", ref).WithError(err).Debug("Falling back to pull manifest by tag") msg := `%s Failed to pull manifest by the resolved digest. This registry does not appear to conform to the distribution registry specification; falling back to pull by tag. This fallback is DEPRECATED, and will be removed in a future release. Please contact admins of %s. %s ` warnEmoji := "\U000026A0\U0000FE0F" progress.Messagef(p.config.ProgressOutput, "WARNING", msg, warnEmoji, p.endpoint.URL, warnEmoji) // Fetch by tag worked, but fetch by digest didn't. // This is a broken registry implementation. // We'll fallback to the old behavior and get the manifest by tag. 
var ms distribution.ManifestService ms, err = p.repo.Manifests(ctx) if err != nil { return false, err } manifest, err = ms.Get(ctx, "", distribution.WithTag(tagged.Tag())) err = errors.Wrap(err, "error after falling back to get manifest by tag") } if err != nil { return false, err } } if manifest == nil { return false, fmt.Errorf("image manifest does not exist for tag or digest %q", tagOrDigest) } if m, ok := manifest.(*schema2.DeserializedManifest); ok { var allowedMediatype bool for _, t := range p.config.Schema2Types { if m.Manifest.Config.MediaType == t { allowedMediatype = true break } } if !allowedMediatype { configClass := mediaTypeClasses[m.Manifest.Config.MediaType] if configClass == "" { configClass = "unknown" } return false, invalidManifestClassError{m.Manifest.Config.MediaType, configClass} } } logrus.Debugf("Pulling ref from V2 registry: %s", reference.FamiliarString(ref)) progress.Message(p.config.ProgressOutput, tagOrDigest, "Pulling from "+reference.FamiliarName(p.repo.Named())) var ( id digest.Digest manifestDigest digest.Digest ) switch v := manifest.(type) { case *schema1.SignedManifest: if p.config.RequireSchema2 { return false, fmt.Errorf("invalid manifest: not schema2") } // give registries time to upgrade to schema2 and only warn if we know a registry has been upgraded long time ago // TODO: condition to be removed if reference.Domain(ref) == "docker.io" { msg := fmt.Sprintf("Image %s uses outdated schema1 manifest format. Please upgrade to a schema2 image for better future compatibility. More information at https://docs.docker.com/registry/spec/deprecated-schema-v1/", ref) logrus.Warn(msg) progress.Message(p.config.ProgressOutput, "", msg) } id, manifestDigest, err = p.pullSchema1(ctx, ref, v, platform) if err != nil { return false, err } case *schema2.DeserializedManifest: id, manifestDigest, err = p.pullSchema2(ctx, ref, v, platform) if err != nil { return false, err } case *ocischema.DeserializedManifest: id, manifestDigest, err = p.pullOCI(ctx, ref, v, platform) if err != nil { return false, err } case *manifestlist.DeserializedManifestList: id, manifestDigest, err = p.pullManifestList(ctx, ref, v, platform) if err != nil { return false, err } default: return false, invalidManifestFormatError{} } progress.Message(p.config.ProgressOutput, "", "Digest: "+manifestDigest.String()) if p.config.ReferenceStore != nil { oldTagID, err := p.config.ReferenceStore.Get(ref) if err == nil { if oldTagID == id { return false, addDigestReference(p.config.ReferenceStore, ref, manifestDigest, id) } } else if err != refstore.ErrDoesNotExist { return false, err } if canonical, ok := ref.(reference.Canonical); ok { if err = p.config.ReferenceStore.AddDigest(canonical, id, true); err != nil { return false, err } } else { if err = addDigestReference(p.config.ReferenceStore, ref, manifestDigest, id); err != nil { return false, err } if err = p.config.ReferenceStore.AddTag(ref, id, true); err != nil { return false, err } } } return true, nil } func (p *v2Puller) pullSchema1(ctx context.Context, ref reference.Reference, unverifiedManifest *schema1.SignedManifest, platform *specs.Platform) (id digest.Digest, manifestDigest digest.Digest, err error) { if platform != nil { // Early bath if the requested OS doesn't match that of the configuration. // This avoids doing the download, only to potentially fail later. 
if !system.IsOSSupported(platform.OS) { return "", "", fmt.Errorf("cannot download image with operating system %q when requesting %q", runtime.GOOS, platform.OS) } } var verifiedManifest *schema1.Manifest verifiedManifest, err = verifySchema1Manifest(unverifiedManifest, ref) if err != nil { return "", "", err } rootFS := image.NewRootFS() // remove duplicate layers and check parent chain validity err = fixManifestLayers(verifiedManifest) if err != nil { return "", "", err } var descriptors []xfer.DownloadDescriptor // Image history converted to the new format var history []image.History // Note that the order of this loop is in the direction of bottom-most // to top-most, so that the downloads slice gets ordered correctly. for i := len(verifiedManifest.FSLayers) - 1; i >= 0; i-- { blobSum := verifiedManifest.FSLayers[i].BlobSum if err = blobSum.Validate(); err != nil { return "", "", errors.Wrapf(err, "could not validate layer digest %q", blobSum) } var throwAway struct { ThrowAway bool `json:"throwaway,omitempty"` } if err := json.Unmarshal([]byte(verifiedManifest.History[i].V1Compatibility), &throwAway); err != nil { return "", "", err } h, err := v1.HistoryFromConfig([]byte(verifiedManifest.History[i].V1Compatibility), throwAway.ThrowAway) if err != nil { return "", "", err } history = append(history, h) if throwAway.ThrowAway { continue } layerDescriptor := &v2LayerDescriptor{ digest: blobSum, repoInfo: p.repoInfo, repo: p.repo, V2MetadataService: p.V2MetadataService, } descriptors = append(descriptors, layerDescriptor) } resultRootFS, release, err := p.config.DownloadManager.Download(ctx, *rootFS, runtime.GOOS, descriptors, p.config.ProgressOutput) if err != nil { return "", "", err } defer release() config, err := v1.MakeConfigFromV1Config([]byte(verifiedManifest.History[0].V1Compatibility), &resultRootFS, history) if err != nil { return "", "", err } imageID, err := p.config.ImageStore.Put(ctx, config) if err != nil { return "", "", err } manifestDigest = digest.FromBytes(unverifiedManifest.Canonical) return imageID, manifestDigest, nil } func (p *v2Puller) pullSchema2Layers(ctx context.Context, target distribution.Descriptor, layers []distribution.Descriptor, platform *specs.Platform) (id digest.Digest, err error) { if _, err := p.config.ImageStore.Get(ctx, target.Digest); err == nil { // If the image already exists locally, no need to pull // anything. return target.Digest, nil } var descriptors []xfer.DownloadDescriptor // Note that the order of this loop is in the direction of bottom-most // to top-most, so that the downloads slice gets ordered correctly. 
for _, d := range layers { if err := d.Digest.Validate(); err != nil { return "", errors.Wrapf(err, "could not validate layer digest %q", d.Digest) } layerDescriptor := &v2LayerDescriptor{ digest: d.Digest, repo: p.repo, repoInfo: p.repoInfo, V2MetadataService: p.V2MetadataService, src: d, } descriptors = append(descriptors, layerDescriptor) } configChan := make(chan []byte, 1) configErrChan := make(chan error, 1) layerErrChan := make(chan error, 1) downloadsDone := make(chan struct{}) var cancel func() ctx, cancel = context.WithCancel(ctx) defer cancel() // Pull the image config go func() { configJSON, err := p.pullSchema2Config(ctx, target.Digest) if err != nil { configErrChan <- ImageConfigPullError{Err: err} cancel() return } configChan <- configJSON }() var ( configJSON []byte // raw serialized image config downloadedRootFS *image.RootFS // rootFS from registered layers configRootFS *image.RootFS // rootFS from configuration release func() // release resources from rootFS download configPlatform *specs.Platform // for LCOW when registering downloaded layers ) layerStoreOS := runtime.GOOS if platform != nil { layerStoreOS = platform.OS } // https://github.com/docker/docker/issues/24766 - Err on the side of caution, // explicitly blocking images intended for linux from the Windows daemon. On // Windows, we do this before the attempt to download, effectively serialising // the download slightly slowing it down. We have to do it this way, as // chances are the download of layers itself would fail due to file names // which aren't suitable for NTFS. At some point in the future, if a similar // check to block Windows images being pulled on Linux is implemented, it // may be necessary to perform the same type of serialisation. if runtime.GOOS == "windows" { configJSON, configRootFS, configPlatform, err = receiveConfig(p.config.ImageStore, configChan, configErrChan) if err != nil { return "", err } if configRootFS == nil { return "", errRootFSInvalid } if err := checkImageCompatibility(configPlatform.OS, configPlatform.OSVersion); err != nil { return "", err } if len(descriptors) != len(configRootFS.DiffIDs) { return "", errRootFSMismatch } if platform == nil { // Early bath if the requested OS doesn't match that of the configuration. // This avoids doing the download, only to potentially fail later. 
if !system.IsOSSupported(configPlatform.OS) { return "", fmt.Errorf("cannot download image with operating system %q when requesting %q", configPlatform.OS, layerStoreOS) } layerStoreOS = configPlatform.OS } // Populate diff ids in descriptors to avoid downloading foreign layers // which have been side loaded for i := range descriptors { descriptors[i].(*v2LayerDescriptor).diffID = configRootFS.DiffIDs[i] } } if p.config.DownloadManager != nil { go func() { var ( err error rootFS image.RootFS ) downloadRootFS := *image.NewRootFS() rootFS, release, err = p.config.DownloadManager.Download(ctx, downloadRootFS, layerStoreOS, descriptors, p.config.ProgressOutput) if err != nil { // Intentionally do not cancel the config download here // as the error from config download (if there is one) // is more interesting than the layer download error layerErrChan <- err return } downloadedRootFS = &rootFS close(downloadsDone) }() } else { // We have nothing to download close(downloadsDone) } if configJSON == nil { configJSON, configRootFS, _, err = receiveConfig(p.config.ImageStore, configChan, configErrChan) if err == nil && configRootFS == nil { err = errRootFSInvalid } if err != nil { cancel() select { case <-downloadsDone: case <-layerErrChan: } return "", err } } select { case <-downloadsDone: case err = <-layerErrChan: return "", err } if release != nil { defer release() } if downloadedRootFS != nil { // The DiffIDs returned in rootFS MUST match those in the config. // Otherwise the image config could be referencing layers that aren't // included in the manifest. if len(downloadedRootFS.DiffIDs) != len(configRootFS.DiffIDs) { return "", errRootFSMismatch } for i := range downloadedRootFS.DiffIDs { if downloadedRootFS.DiffIDs[i] != configRootFS.DiffIDs[i] { return "", errRootFSMismatch } } } imageID, err := p.config.ImageStore.Put(ctx, configJSON) if err != nil { return "", err } return imageID, nil } func (p *v2Puller) pullSchema2(ctx context.Context, ref reference.Named, mfst *schema2.DeserializedManifest, platform *specs.Platform) (id digest.Digest, manifestDigest digest.Digest, err error) { manifestDigest, err = schema2ManifestDigest(ref, mfst) if err != nil { return "", "", err } id, err = p.pullSchema2Layers(ctx, mfst.Target(), mfst.Layers, platform) return id, manifestDigest, err } func (p *v2Puller) pullOCI(ctx context.Context, ref reference.Named, mfst *ocischema.DeserializedManifest, platform *specs.Platform) (id digest.Digest, manifestDigest digest.Digest, err error) { manifestDigest, err = schema2ManifestDigest(ref, mfst) if err != nil { return "", "", err } id, err = p.pullSchema2Layers(ctx, mfst.Target(), mfst.Layers, platform) return id, manifestDigest, err } func receiveConfig(s ImageConfigStore, configChan <-chan []byte, errChan <-chan error) ([]byte, *image.RootFS, *specs.Platform, error) { select { case configJSON := <-configChan: rootfs, err := s.RootFSFromConfig(configJSON) if err != nil { return nil, nil, nil, err } platform, err := s.PlatformFromConfig(configJSON) if err != nil { return nil, nil, nil, err } return configJSON, rootfs, platform, nil case err := <-errChan: return nil, nil, nil, err // Don't need a case for ctx.Done in the select because cancellation // will trigger an error in p.pullSchema2ImageConfig. } } // pullManifestList handles "manifest lists" which point to various // platform-specific manifests. 
func (p *v2Puller) pullManifestList(ctx context.Context, ref reference.Named, mfstList *manifestlist.DeserializedManifestList, pp *specs.Platform) (id digest.Digest, manifestListDigest digest.Digest, err error) { manifestListDigest, err = schema2ManifestDigest(ref, mfstList) if err != nil { return "", "", err } var platform specs.Platform if pp != nil { platform = *pp } logrus.Debugf("%s resolved to a manifestList object with %d entries; looking for a %s/%s match", ref, len(mfstList.Manifests), platforms.Format(platform), runtime.GOARCH) manifestMatches := filterManifests(mfstList.Manifests, platform) if len(manifestMatches) == 0 { errMsg := fmt.Sprintf("no matching manifest for %s in the manifest list entries", formatPlatform(platform)) logrus.Debugf(errMsg) return "", "", errors.New(errMsg) } if len(manifestMatches) > 1 { logrus.Debugf("found multiple matches in manifest list, choosing best match %s", manifestMatches[0].Digest.String()) } match := manifestMatches[0] if err := checkImageCompatibility(match.Platform.OS, match.Platform.OSVersion); err != nil { return "", "", err } desc := specs.Descriptor{ Digest: match.Digest, Size: match.Size, MediaType: match.MediaType, } manifest, err := p.manifestStore.Get(ctx, desc) if err != nil { return "", "", err } manifestRef, err := reference.WithDigest(reference.TrimNamed(ref), match.Digest) if err != nil { return "", "", err } switch v := manifest.(type) { case *schema1.SignedManifest: msg := fmt.Sprintf("[DEPRECATION NOTICE] v2 schema1 manifests in manifest lists are not supported and will break in a future release. Suggest author of %s to upgrade to v2 schema2. More information at https://docs.docker.com/registry/spec/deprecated-schema-v1/", ref) logrus.Warn(msg) progress.Message(p.config.ProgressOutput, "", msg) platform := toOCIPlatform(manifestMatches[0].Platform) id, _, err = p.pullSchema1(ctx, manifestRef, v, &platform) if err != nil { return "", "", err } case *schema2.DeserializedManifest: platform := toOCIPlatform(manifestMatches[0].Platform) id, _, err = p.pullSchema2(ctx, manifestRef, v, &platform) if err != nil { return "", "", err } case *ocischema.DeserializedManifest: platform := toOCIPlatform(manifestMatches[0].Platform) id, _, err = p.pullOCI(ctx, manifestRef, v, &platform) if err != nil { return "", "", err } default: return "", "", errors.New("unsupported manifest format") } return id, manifestListDigest, err } func (p *v2Puller) pullSchema2Config(ctx context.Context, dgst digest.Digest) (configJSON []byte, err error) { blobs := p.repo.Blobs(ctx) configJSON, err = blobs.Get(ctx, dgst) if err != nil { return nil, err } // Verify image config digest verifier := dgst.Verifier() if _, err := verifier.Write(configJSON); err != nil { return nil, err } if !verifier.Verified() { err := fmt.Errorf("image config verification failed for digest %s", dgst) logrus.Error(err) return nil, err } return configJSON, nil } // schema2ManifestDigest computes the manifest digest, and, if pulling by // digest, ensures that it matches the requested digest. func schema2ManifestDigest(ref reference.Named, mfst distribution.Manifest) (digest.Digest, error) { _, canonical, err := mfst.Payload() if err != nil { return "", err } // If pull by digest, then verify the manifest digest. 
if digested, isDigested := ref.(reference.Canonical); isDigested { verifier := digested.Digest().Verifier() if _, err := verifier.Write(canonical); err != nil { return "", err } if !verifier.Verified() { err := fmt.Errorf("manifest verification failed for digest %s", digested.Digest()) logrus.Error(err) return "", err } return digested.Digest(), nil } return digest.FromBytes(canonical), nil } func verifySchema1Manifest(signedManifest *schema1.SignedManifest, ref reference.Reference) (m *schema1.Manifest, err error) { // If pull by digest, then verify the manifest digest. NOTE: It is // important to do this first, before any other content validation. If the // digest cannot be verified, don't even bother with those other things. if digested, isCanonical := ref.(reference.Canonical); isCanonical { verifier := digested.Digest().Verifier() if _, err := verifier.Write(signedManifest.Canonical); err != nil { return nil, err } if !verifier.Verified() { err := fmt.Errorf("image verification failed for digest %s", digested.Digest()) logrus.Error(err) return nil, err } } m = &signedManifest.Manifest if m.SchemaVersion != 1 { return nil, fmt.Errorf("unsupported schema version %d for %q", m.SchemaVersion, reference.FamiliarString(ref)) } if len(m.FSLayers) != len(m.History) { return nil, fmt.Errorf("length of history not equal to number of layers for %q", reference.FamiliarString(ref)) } if len(m.FSLayers) == 0 { return nil, fmt.Errorf("no FSLayers in manifest for %q", reference.FamiliarString(ref)) } return m, nil } // fixManifestLayers removes repeated layers from the manifest and checks the // correctness of the parent chain. func fixManifestLayers(m *schema1.Manifest) error { imgs := make([]*image.V1Image, len(m.FSLayers)) for i := range m.FSLayers { img := &image.V1Image{} if err := json.Unmarshal([]byte(m.History[i].V1Compatibility), img); err != nil { return err } imgs[i] = img if err := v1.ValidateID(img.ID); err != nil { return err } } if imgs[len(imgs)-1].Parent != "" && runtime.GOOS != "windows" { // Windows base layer can point to a base layer parent that is not in manifest. return errors.New("invalid parent ID in the base layer of the image") } // check general duplicates to error instead of a deadlock idmap := make(map[string]struct{}) var lastID string for _, img := range imgs { // skip IDs that appear after each other, we handle those later if _, exists := idmap[img.ID]; img.ID != lastID && exists { return fmt.Errorf("ID %+v appears multiple times in manifest", img.ID) } lastID = img.ID idmap[lastID] = struct{}{} } // backwards loop so that we keep the remaining indexes after removing items for i := len(imgs) - 2; i >= 0; i-- { if imgs[i].ID == imgs[i+1].ID { // repeated ID. remove and continue m.FSLayers = append(m.FSLayers[:i], m.FSLayers[i+1:]...) m.History = append(m.History[:i], m.History[i+1:]...) } else if imgs[i].Parent != imgs[i+1].ID { return fmt.Errorf("invalid parent ID. Expected %v, got %v", imgs[i+1].ID, imgs[i].Parent) } } return nil } func createDownloadFile() (*os.File, error) { return ioutil.TempFile("", "GetImageBlob") } func toOCIPlatform(p manifestlist.PlatformSpec) specs.Platform { return specs.Platform{ OS: p.OS, Architecture: p.Architecture, Variant: p.Variant, OSFeatures: p.OSFeatures, OSVersion: p.OSVersion, } }
thaJeztah
3ad9549e70bdf45b40c6332b221cd5c7fd635524
51b06c6795160d8a1ba05d05d6491df7588b2957
Comment was already there. I can only assume John wrote it, as it looks to be a British English (🇬🇧) expression: https://dictionary.cambridge.org/dictionary/english/an-early-bath
thaJeztah
4,517
moby/moby
42,683
Remove LCOW (step 6)
Splitting off more bits from https://github.com/moby/moby/pull/42170
null
2021-07-27 11:33:51+00:00
2021-07-29 18:34:29+00:00
hack/ci/windows.ps1
# WARNING: When editing this file, consider submitting a PR to # https://github.com/kevpar/docker-w2wCIScripts/blob/master/runCI/executeCI.ps1, and make sure that # https://github.com/kevpar/docker-w2wCIScripts/blob/master/runCI/Invoke-DockerCI.ps1 isn't broken. # Validate using a test context in Jenkins, then copy/paste into Jenkins production. # # Jenkins CI scripts for Windows to Windows CI (Powershell Version) # By John Howard (@jhowardmsft) January 2016 - bash version; July 2016 Ported to PowerShell $ErrorActionPreference = 'Stop' $StartTime=Get-Date Write-Host -ForegroundColor Red "DEBUG: print all environment variables to check how Jenkins runs this script" $allArgs = [Environment]::GetCommandLineArgs() Write-Host -ForegroundColor Red $allArgs Write-Host -ForegroundColor Red "----------------------------------------------------------------------------" # ------------------------------------------------------------------------------------------- # When executed, we rely on four variables being set in the environment: # # [The reason for being environment variables rather than parameters is historical. No reason # why it couldn't be updated.] # # SOURCES_DRIVE is the drive on which the sources being tested are cloned from. # This should be a straight drive letter, no platform semantics. # For example 'c' # # SOURCES_SUBDIR is the top level directory under SOURCES_DRIVE where the # sources are cloned to. There are no platform semantics in this # as it does not include slashes. # For example 'gopath' # # Based on the above examples, it would be expected that Jenkins # would clone the sources being tested to # SOURCES_DRIVE\SOURCES_SUBDIR\src\github.com\docker\docker, or # c:\gopath\src\github.com\docker\docker # # TESTRUN_DRIVE is the drive where we build the binary on and redirect everything # to for the daemon under test. On an Azure D2 type host which has # an SSD temporary storage D: drive, this is ideal for performance. # For example 'd' # # TESTRUN_SUBDIR is the top level directory under TESTRUN_DRIVE where we redirect # everything to for the daemon under test. For example 'CI'. # Hence, the daemon under test is run under # TESTRUN_DRIVE\TESTRUN_SUBDIR\CI-<CommitID> or # d:\CI\CI-<CommitID> # # Optional environment variables help in CI: # # BUILD_NUMBER + BRANCH_NAME are optional variables to be added to the directory below TESTRUN_SUBDIR # to have individual folder per CI build. If some files couldn't be # cleaned up and we want to re-run the build in CI. # Hence, the daemon under test is run under # TESTRUN_DRIVE\TESTRUN_SUBDIR\PR-<PR-Number>\<BuildNumber> or # d:\CI\PR-<PR-Number>\<BuildNumber> # # In addition, the following variables can control the run configuration: # # DOCKER_DUT_DEBUG if defined starts the daemon under test in debug mode. 
# # DOCKER_STORAGE_OPTS comma-separated list of optional storage driver options for the daemon under test # examples: # DOCKER_STORAGE_OPTS="size=40G" # DOCKER_STORAGE_OPTS="lcow.globalmode=false,lcow.kernel=kernel.efi" # # SKIP_VALIDATION_TESTS if defined skips the validation tests # # SKIP_UNIT_TESTS if defined skips the unit tests # # SKIP_INTEGRATION_TESTS if defined skips the integration tests # # SKIP_COPY_GO if defined skips copy the go installer from the image # # DOCKER_DUT_HYPERV if default daemon under test default isolation is hyperv # # INTEGRATION_TEST_NAME to only run partial tests eg "TestInfo*" will only run # any tests starting "TestInfo" # # SKIP_BINARY_BUILD if defined skips building the binary # # SKIP_ZAP_DUT if defined doesn't zap the daemon under test directory # # SKIP_IMAGE_BUILD if defined doesn't build the 'docker' image # # INTEGRATION_IN_CONTAINER if defined, runs the integration tests from inside a container. # As of July 2016, there are known issues with this. # # SKIP_ALL_CLEANUP if defined, skips any cleanup at the start or end of the run # # WINDOWS_BASE_IMAGE if defined, uses that as the base image. Note that the # docker integration tests are also coded to use the same # environment variable, and if no set, defaults to microsoft/windowsservercore # # WINDOWS_BASE_IMAGE_TAG if defined, uses that as the tag name for the base image. # if no set, defaults to latest # # ------------------------------------------------------------------------------------------- # # Jenkins Integration. Add a Windows Powershell build step as follows: # # Write-Host -ForegroundColor green "INFO: Jenkins build step starting" # $CISCRIPT_DEFAULT_LOCATION = "https://raw.githubusercontent.com/moby/moby/master/hack/ci/windows.ps1" # $CISCRIPT_LOCAL_LOCATION = "$env:TEMP\executeCI.ps1" # Write-Host -ForegroundColor green "INFO: Removing cached execution script" # Remove-Item $CISCRIPT_LOCAL_LOCATION -Force -ErrorAction SilentlyContinue 2>&1 | Out-Null # $wc = New-Object net.webclient # try { # Write-Host -ForegroundColor green "INFO: Downloading latest execution script..." # $wc.Downloadfile($CISCRIPT_DEFAULT_LOCATION, $CISCRIPT_LOCAL_LOCATION) # } # catch [System.Net.WebException] # { # Throw ("Failed to download: $_") # } # & $CISCRIPT_LOCAL_LOCATION # ------------------------------------------------------------------------------------------- $SCRIPT_VER="05-Feb-2019 09:03 PDT" $FinallyColour="Cyan" #$env:DOCKER_DUT_DEBUG="yes" # Comment out to not be in debug mode #$env:SKIP_UNIT_TESTS="yes" #$env:SKIP_VALIDATION_TESTS="yes" #$env:SKIP_ZAP_DUT="" #$env:SKIP_BINARY_BUILD="yes" #$env:INTEGRATION_TEST_NAME="" #$env:SKIP_IMAGE_BUILD="yes" #$env:SKIP_ALL_CLEANUP="yes" #$env:INTEGRATION_IN_CONTAINER="yes" #$env:WINDOWS_BASE_IMAGE="" #$env:SKIP_COPY_GO="yes" #$env:INTEGRATION_TESTFLAGS="-test.v" Function Nuke-Everything { $ErrorActionPreference = 'SilentlyContinue' try { if ($null -eq $env:SKIP_ALL_CLEANUP) { Write-Host -ForegroundColor green "INFO: Nuke-Everything..." 
$containerCount = ($(docker ps -aq | Measure-Object -line).Lines) if (-not $LastExitCode -eq 0) { Throw "ERROR: Failed to get container count from control daemon while nuking" } Write-Host -ForegroundColor green "INFO: Container count on control daemon to delete is $containerCount" if ($(docker ps -aq | Measure-Object -line).Lines -gt 0) { docker rm -f $(docker ps -aq) } $allImages = $(docker images --format "{{.Repository}}#{{.ID}}") $toRemove = ($allImages | Select-String -NotMatch "servercore","nanoserver","docker","busybox") $imageCount = ($toRemove | Measure-Object -line).Lines if ($imageCount -gt 0) { Write-Host -Foregroundcolor green "INFO: Non-base image count on control daemon to delete is $imageCount" docker rmi -f ($toRemove | Foreach-Object { $_.ToString().Split("#")[1] }) } } else { Write-Host -ForegroundColor Magenta "WARN: Skipping cleanup of images and containers" } # Kill any spurious daemons. The '-' is IMPORTANT otherwise will kill the control daemon! $pids=$(get-process | where-object {$_.ProcessName -like 'dockerd-*'}).id foreach ($p in $pids) { Write-Host "INFO: Killing daemon with PID $p" Stop-Process -Id $p -Force -ErrorAction SilentlyContinue } if ($null -ne $pidFile) { Write-Host "INFO: Tidying pidfile $pidfile" if (Test-Path $pidFile) { $p=Get-Content $pidFile -raw if ($null -ne $p){ Write-Host -ForegroundColor green "INFO: Stopping possible daemon pid $p" taskkill -f -t -pid $p } Remove-Item "$env:TEMP\docker.pid" -force -ErrorAction SilentlyContinue } } Stop-Process -name "cc1" -Force -ErrorAction SilentlyContinue 2>&1 | Out-Null Stop-Process -name "link" -Force -ErrorAction SilentlyContinue 2>&1 | Out-Null Stop-Process -name "compile" -Force -ErrorAction SilentlyContinue 2>&1 | Out-Null Stop-Process -name "ld" -Force -ErrorAction SilentlyContinue 2>&1 | Out-Null Stop-Process -name "go" -Force -ErrorAction SilentlyContinue 2>&1 | Out-Null Stop-Process -name "git" -Force -ErrorAction SilentlyContinue 2>&1 | Out-Null Stop-Process -name "git-remote-https" -Force -ErrorAction SilentlyContinue 2>&1 | Out-Null Stop-Process -name "integration-cli.test" -Force -ErrorAction SilentlyContinue 2>&1 | Out-Null Stop-Process -name "tail" -Force -ErrorAction SilentlyContinue 2>&1 | Out-Null # Detach any VHDs gwmi msvm_mountedstorageimage -namespace root/virtualization/v2 -ErrorAction SilentlyContinue | foreach-object {$_.DetachVirtualHardDisk() } # Stop any compute processes Get-ComputeProcess | Stop-ComputeProcess -Force # Delete the directory using our dangerous utility unless told not to if (Test-Path "$env:TESTRUN_DRIVE`:\$env:TESTRUN_SUBDIR") { if (($null -ne $env:SKIP_ZAP_DUT) -or ($null -eq $env:SKIP_ALL_CLEANUP)) { Write-Host -ForegroundColor Green "INFO: Nuking $env:TESTRUN_DRIVE`:\$env:TESTRUN_SUBDIR" docker-ci-zap "-folder=$env:TESTRUN_DRIVE`:\$env:TESTRUN_SUBDIR" } else { Write-Host -ForegroundColor Magenta "WARN: Skip nuking $env:TESTRUN_DRIVE`:\$env:TESTRUN_SUBDIR" } } # TODO: This should be able to be removed in August 2017 update. Only needed for RS1 Production Server workaround - Psched $reg = "HKLM:\System\CurrentControlSet\Services\Psched\Parameters\NdisAdapters" $count=(Get-ChildItem $reg | Measure-Object).Count if ($count -gt 0) { Write-Warning "There are $count NdisAdapters leaked under Psched\Parameters" Write-Warning "Cleaning Psched..." Get-ChildItem $reg | Remove-Item -Recurse -Force -ErrorAction SilentlyContinue | Out-Null } # TODO: This should be able to be removed in August 2017 update. 
Only needed for RS1 $reg = "HKLM:\System\CurrentControlSet\Services\WFPLWFS\Parameters\NdisAdapters" $count=(Get-ChildItem $reg | Measure-Object).Count if ($count -gt 0) { Write-Warning "There are $count NdisAdapters leaked under WFPLWFS\Parameters" Write-Warning "Cleaning WFPLWFS..." Get-ChildItem $reg | Remove-Item -Recurse -Force -ErrorAction SilentlyContinue | Out-Null } } catch { # Don't throw any errors onwards Throw $_ } } Try { Write-Host -ForegroundColor Cyan "`nINFO: executeCI.ps1 starting at $(date)`n" Write-Host -ForegroundColor Green "INFO: Script version $SCRIPT_VER" Set-PSDebug -Trace 0 # 1 to turn on $origPath="$env:PATH" # so we can restore it at the end $origDOCKER_HOST="$DOCKER_HOST" # So we can restore it at the end $origGOROOT="$env:GOROOT" # So we can restore it at the end $origGOPATH="$env:GOPATH" # So we can restore it at the end # Turn off progress bars $origProgressPreference=$global:ProgressPreference $global:ProgressPreference='SilentlyContinue' # Git version Write-Host -ForegroundColor Green "INFO: Running $(git version)" # OS Version $bl=(Get-ItemProperty -Path "Registry::HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion" -Name BuildLabEx).BuildLabEx $a=$bl.ToString().Split(".") $Branch=$a[3] $WindowsBuild=$a[0]+"."+$a[1]+"."+$a[4] Write-Host -ForegroundColor green "INFO: Branch:$Branch Build:$WindowsBuild" # List the environment variables Write-Host -ForegroundColor green "INFO: Environment variables:" Get-ChildItem Env: | Out-String # PR if (-not ($null -eq $env:PR)) { Write-Output "INFO: PR#$env:PR (https://github.com/docker/docker/pull/$env:PR)" } # Make sure docker is installed if ($null -eq (Get-Command "docker" -ErrorAction SilentlyContinue)) { Throw "ERROR: docker is not installed or not found on path" } # Make sure docker-ci-zap is installed if ($null -eq (Get-Command "docker-ci-zap" -ErrorAction SilentlyContinue)) { Throw "ERROR: docker-ci-zap is not installed or not found on path" } # Make sure Windows Defender is disabled $defender = $false Try { $status = Get-MpComputerStatus if ($status) { if ($status.RealTimeProtectionEnabled) { $defender = $true } } } Catch {} if ($defender) { Write-Host -ForegroundColor Magenta "WARN: Windows Defender real time protection is enabled, which may cause some integration tests to fail" } # Make sure SOURCES_DRIVE is set if ($null -eq $env:SOURCES_DRIVE) { Throw "ERROR: Environment variable SOURCES_DRIVE is not set" } # Make sure TESTRUN_DRIVE is set if ($null -eq $env:TESTRUN_DRIVE) { Throw "ERROR: Environment variable TESTRUN_DRIVE is not set" } # Make sure SOURCES_SUBDIR is set if ($null -eq $env:SOURCES_SUBDIR) { Throw "ERROR: Environment variable SOURCES_SUBDIR is not set" } # Make sure TESTRUN_SUBDIR is set if ($null -eq $env:TESTRUN_SUBDIR) { Throw "ERROR: Environment variable TESTRUN_SUBDIR is not set" } # SOURCES_DRIVE\SOURCES_SUBDIR must be a directory and exist if (-not (Test-Path -PathType Container "$env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR")) { Throw "ERROR: $env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR must be an existing directory" } # Create the TESTRUN_DRIVE\TESTRUN_SUBDIR if it does not already exist New-Item -ItemType Directory -Force -Path "$env:TESTRUN_DRIVE`:\$env:TESTRUN_SUBDIR" -ErrorAction SilentlyContinue | Out-Null Write-Host -ForegroundColor Green "INFO: Sources under $env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR\..." Write-Host -ForegroundColor Green "INFO: Test run under $env:TESTRUN_DRIVE`:\$env:TESTRUN_SUBDIR\..." 
# Check the intended source location is a directory if (-not (Test-Path -PathType Container "$env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR\src\github.com\docker\docker" -ErrorAction SilentlyContinue)) { Throw "ERROR: $env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR\src\github.com\docker\docker is not a directory!" } # Make sure we start at the root of the sources Set-Location "$env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR\src\github.com\docker\docker" Write-Host -ForegroundColor Green "INFO: Running in $(Get-Location)" # Make sure we are in repo if (-not (Test-Path -PathType Leaf -Path ".\Dockerfile.windows")) { Throw "$(Get-Location) does not contain Dockerfile.windows!" } Write-Host -ForegroundColor Green "INFO: docker/docker repository was found" # Make sure microsoft/windowsservercore:latest image is installed in the control daemon. On public CI machines, windowsservercore.tar and nanoserver.tar # are pre-baked and tagged appropriately in the c:\baseimages directory, and can be directly loaded. # Note - this script will only work on 10B (Oct 2016) or later machines! Not 9D or previous due to image tagging assumptions. # # On machines not on Microsoft corpnet, or those which have not been pre-baked, we have to docker pull the image in which case it will # will come in directly as microsoft/windowsservercore:latest. The ultimate goal of all this code is to ensure that whatever, # we have microsoft/windowsservercore:latest # # Note we cannot use (as at Oct 2016) nanoserver as the control daemons base image, even if nanoserver is used in the tests themselves. $ErrorActionPreference = "SilentlyContinue" $ControlDaemonBaseImage="windowsservercore" $readBaseFrom="c" if ($((docker images --format "{{.Repository}}:{{.Tag}}" | Select-String $("microsoft/"+$ControlDaemonBaseImage+":latest") | Measure-Object -Line).Lines) -eq 0) { # Try the internal azure CI image version or Microsoft internal corpnet where the base image is already pre-prepared on the disk, # either through Invoke-DockerCI or, in the case of Azure CI servers, baked into the VHD at the same location. if (Test-Path $("$env:SOURCES_DRIVE`:\baseimages\"+$ControlDaemonBaseImage+".tar")) { # An optimization for CI servers to copy it to the D: drive which is an SSD. if ($env:SOURCES_DRIVE -ne $env:TESTRUN_DRIVE) { $readBaseFrom=$env:TESTRUN_DRIVE if (!(Test-Path "$env:TESTRUN_DRIVE`:\baseimages")) { New-Item "$env:TESTRUN_DRIVE`:\baseimages" -type directory | Out-Null } if (!(Test-Path "$env:TESTRUN_DRIVE`:\baseimages\windowsservercore.tar")) { if (Test-Path "$env:SOURCES_DRIVE`:\baseimages\windowsservercore.tar") { Write-Host -ForegroundColor Green "INFO: Optimisation - copying $env:SOURCES_DRIVE`:\baseimages\windowsservercore.tar to $env:TESTRUN_DRIVE`:\baseimages" Copy-Item "$env:SOURCES_DRIVE`:\baseimages\windowsservercore.tar" "$env:TESTRUN_DRIVE`:\baseimages" } } if (!(Test-Path "$env:TESTRUN_DRIVE`:\baseimages\nanoserver.tar")) { if (Test-Path "$env:SOURCES_DRIVE`:\baseimages\nanoserver.tar") { Write-Host -ForegroundColor Green "INFO: Optimisation - copying $env:SOURCES_DRIVE`:\baseimages\nanoserver.tar to $env:TESTRUN_DRIVE`:\baseimages" Copy-Item "$env:SOURCES_DRIVE`:\baseimages\nanoserver.tar" "$env:TESTRUN_DRIVE`:\baseimages" } } $readBaseFrom=$env:TESTRUN_DRIVE } Write-Host -ForegroundColor Green "INFO: Loading"$ControlDaemonBaseImage".tar from disk. This may take some time..." 
$ErrorActionPreference = "SilentlyContinue" docker load -i $("$readBaseFrom`:\baseimages\"+$ControlDaemonBaseImage+".tar") $ErrorActionPreference = "Stop" if (-not $LastExitCode -eq 0) { Throw $("ERROR: Failed to load $readBaseFrom`:\baseimages\"+$ControlDaemonBaseImage+".tar") } Write-Host -ForegroundColor Green "INFO: docker load of"$ControlDaemonBaseImage" completed successfully" } else { # We need to docker pull it instead. It will come in directly as microsoft/imagename:latest Write-Host -ForegroundColor Green $("INFO: Pulling $($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG from docker hub. This may take some time...") $ErrorActionPreference = "SilentlyContinue" docker pull "$($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG" $ErrorActionPreference = "Stop" if (-not $LastExitCode -eq 0) { Throw $("ERROR: Failed to docker pull $($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG.") } Write-Host -ForegroundColor Green $("INFO: docker pull of $($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG completed successfully") Write-Host -ForegroundColor Green $("INFO: Tagging $($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG as microsoft/$ControlDaemonBaseImage") docker tag "$($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG" microsoft/$ControlDaemonBaseImage } } else { Write-Host -ForegroundColor Green "INFO: Image"$("microsoft/"+$ControlDaemonBaseImage+":latest")"is already loaded in the control daemon" } # Inspect the pulled image to get the version directly $ErrorActionPreference = "SilentlyContinue" $imgVersion = $(docker inspect $("microsoft/"+$ControlDaemonBaseImage) --format "{{.OsVersion}}") $ErrorActionPreference = "Stop" Write-Host -ForegroundColor Green $("INFO: Version of microsoft/"+$ControlDaemonBaseImage+":latest is '"+$imgVersion+"'") # Provide the docker version for debugging purposes. Write-Host -ForegroundColor Green "INFO: Docker version of control daemon" Write-Host $ErrorActionPreference = "SilentlyContinue" docker version $ErrorActionPreference = "Stop" if (-not($LastExitCode -eq 0)) { Write-Host Write-Host -ForegroundColor Green "---------------------------------------------------------------------------" Write-Host -ForegroundColor Green " Failed to get a response from the control daemon. It may be down." Write-Host -ForegroundColor Green " Try re-running this CI job, or ask on #docker-maintainers on docker slack" Write-Host -ForegroundColor Green " to see if the daemon is running. Also check the service configuration." Write-Host -ForegroundColor Green " DOCKER_HOST is set to $DOCKER_HOST." Write-Host -ForegroundColor Green "---------------------------------------------------------------------------" Write-Host Throw "ERROR: The control daemon does not appear to be running." } Write-Host # Same as above, but docker info Write-Host -ForegroundColor Green "INFO: Docker info of control daemon" Write-Host $ErrorActionPreference = "SilentlyContinue" docker info $ErrorActionPreference = "Stop" if (-not($LastExitCode -eq 0)) { Throw "ERROR: The control daemon does not appear to be running." } Write-Host # Get the commit has and verify we have something $ErrorActionPreference = "SilentlyContinue" $COMMITHASH=$(git rev-parse --short HEAD) $ErrorActionPreference = "Stop" if (-not($LastExitCode -eq 0)) { Throw "ERROR: Failed to get commit hash. Are you sure this is a docker repository?" 
} Write-Host -ForegroundColor Green "INFO: Commit hash is $COMMITHASH" # Nuke everything and go back to our sources after Nuke-Everything cd "$env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR\src\github.com\docker\docker" # Redirect to a temporary location. $TEMPORIG=$env:TEMP if ($null -eq $env:BUILD_NUMBER) { $env:TEMP="$env:TESTRUN_DRIVE`:\$env:TESTRUN_SUBDIR\CI-$COMMITHASH" } else { # individual temporary location per CI build that better matches the BUILD_URL $env:TEMP="$env:TESTRUN_DRIVE`:\$env:TESTRUN_SUBDIR\$env:BRANCH_NAME\$env:BUILD_NUMBER" } $env:LOCALAPPDATA="$env:TEMP\localappdata" $errorActionPreference='Stop' New-Item -ItemType Directory "$env:TEMP" -ErrorAction SilentlyContinue | Out-Null New-Item -ItemType Directory "$env:TEMP\userprofile" -ErrorAction SilentlyContinue | Out-Null New-Item -ItemType Directory "$env:TEMP\testresults" -ErrorAction SilentlyContinue | Out-Null New-Item -ItemType Directory "$env:TEMP\testresults\unittests" -ErrorAction SilentlyContinue | Out-Null New-Item -ItemType Directory "$env:TEMP\localappdata" -ErrorAction SilentlyContinue | Out-Null New-Item -ItemType Directory "$env:TEMP\binary" -ErrorAction SilentlyContinue | Out-Null New-Item -ItemType Directory "$env:TEMP\installer" -ErrorAction SilentlyContinue | Out-Null if ($null -eq $env:SKIP_COPY_GO) { # Wipe the previous version of GO - we're going to get it out of the image if (Test-Path "$env:TEMP\go") { Remove-Item "$env:TEMP\go" -Recurse -Force -ErrorAction SilentlyContinue | Out-Null } New-Item -ItemType Directory "$env:TEMP\go" -ErrorAction SilentlyContinue | Out-Null } Write-Host -ForegroundColor Green "INFO: Location for testing is $env:TEMP" # CI Integrity check - ensure Dockerfile.windows and Dockerfile go versions match $goVersionDockerfileWindows=(Select-String -Path ".\Dockerfile.windows" -Pattern "^ARG[\s]+GO_VERSION=(.*)$").Matches.groups[1].Value $goVersionDockerfile=(Select-String -Path ".\Dockerfile" -Pattern "^ARG[\s]+GO_VERSION=(.*)$").Matches.groups[1].Value if ($null -eq $goVersionDockerfile) { Throw "ERROR: Failed to extract golang version from Dockerfile" } Write-Host -ForegroundColor Green "INFO: Validating GOLang consistency in Dockerfile.windows..." if (-not ($goVersionDockerfile -eq $goVersionDockerfileWindows)) { Throw "ERROR: Mismatched GO versions between Dockerfile and Dockerfile.windows. Update your PR to ensure that both files are updated and in sync. $goVersionDockerfile $goVersionDockerfileWindows" } # Build the image if ($null -eq $env:SKIP_IMAGE_BUILD) { Write-Host -ForegroundColor Cyan "`n`nINFO: Building the image from Dockerfile.windows at $(Get-Date)..." Write-Host $ErrorActionPreference = "SilentlyContinue" $Duration=$(Measure-Command { docker build --build-arg=GO_VERSION -t docker -f Dockerfile.windows . | Out-Host }) $ErrorActionPreference = "Stop" if (-not($LastExitCode -eq 0)) { Throw "ERROR: Failed to build image from Dockerfile.windows" } Write-Host -ForegroundColor Green "INFO: Image build ended at $(Get-Date). Duration`:$Duration" } else { Write-Host -ForegroundColor Magenta "WARN: Skipping building the docker image" } # Following at the moment must be docker\docker as it's dictated by dockerfile.Windows $contPath="$COMMITHASH`:c`:\gopath\src\github.com\docker\docker\bundles" # After https://github.com/docker/docker/pull/30290, .git was added to .dockerignore. 
Therefore # we have to calculate unsupported outside of the container, and pass the commit ID in through # an environment variable for the binary build $CommitUnsupported="" if ($(git status --porcelain --untracked-files=no).Length -ne 0) { $CommitUnsupported="-unsupported" } # Build the binary in a container unless asked to skip it. if ($null -eq $env:SKIP_BINARY_BUILD) { Write-Host -ForegroundColor Cyan "`n`nINFO: Building the test binaries at $(Get-Date)..." $ErrorActionPreference = "SilentlyContinue" docker rm -f $COMMITHASH 2>&1 | Out-Null if ($CommitUnsupported -ne "") { Write-Host "" Write-Warning "This version is unsupported because there are uncommitted file(s)." Write-Warning "Either commit these changes, or add them to .gitignore." git status --porcelain --untracked-files=no | Write-Warning Write-Host "" } $Duration=$(Measure-Command {docker run --name $COMMITHASH -e DOCKER_GITCOMMIT=$COMMITHASH$CommitUnsupported docker hack\make.ps1 -Daemon -Client | Out-Host }) $ErrorActionPreference = "Stop" if (-not($LastExitCode -eq 0)) { Throw "ERROR: Failed to build binary" } Write-Host -ForegroundColor Green "INFO: Binaries build ended at $(Get-Date). Duration`:$Duration" # Copy the binaries and the generated version_autogen.go out of the container $ErrorActionPreference = "SilentlyContinue" docker cp "$contPath\docker.exe" $env:TEMP\binary\ if (-not($LastExitCode -eq 0)) { Throw "ERROR: Failed to docker cp the client binary (docker.exe) to $env:TEMP\binary" } docker cp "$contPath\dockerd.exe" $env:TEMP\binary\ if (-not($LastExitCode -eq 0)) { Throw "ERROR: Failed to docker cp the daemon binary (dockerd.exe) to $env:TEMP\binary" } docker cp "$COMMITHASH`:c`:\gopath\bin\gotestsum.exe" $env:TEMP\binary\ if (-not (Test-Path "$env:TEMP\binary\gotestsum.exe")) { Throw "ERROR: gotestsum.exe not found...." ` } $ErrorActionPreference = "Stop" # Copy the built dockerd.exe to dockerd-$COMMITHASH.exe so that easily spotted in task manager. Write-Host -ForegroundColor Green "INFO: Copying the built daemon binary to $env:TEMP\binary\dockerd-$COMMITHASH.exe..." Copy-Item $env:TEMP\binary\dockerd.exe $env:TEMP\binary\dockerd-$COMMITHASH.exe -Force -ErrorAction SilentlyContinue # Copy the built docker.exe to docker-$COMMITHASH.exe Write-Host -ForegroundColor Green "INFO: Copying the built client binary to $env:TEMP\binary\docker-$COMMITHASH.exe..." Copy-Item $env:TEMP\binary\docker.exe $env:TEMP\binary\docker-$COMMITHASH.exe -Force -ErrorAction SilentlyContinue } else { Write-Host -ForegroundColor Magenta "WARN: Skipping building the binaries" } Write-Host -ForegroundColor Green "INFO: Copying dockerversion from the container..." $ErrorActionPreference = "SilentlyContinue" docker cp "$contPath\..\dockerversion\version_autogen.go" "$env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR\src\github.com\docker\docker\dockerversion" if (-not($LastExitCode -eq 0)) { Throw "ERROR: Failed to docker cp the generated version_autogen.go to $env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR\src\github.com\docker\docker\dockerversion" } $ErrorActionPreference = "Stop" # Grab the golang installer out of the built image. That way, we know we are consistent once extracted and paths set, # so there's no need to re-deploy on account of an upgrade to the version of GO being used in docker. if ($null -eq $env:SKIP_COPY_GO) { Write-Host -ForegroundColor Green "INFO: Copying the golang package from the container to $env:TEMP\installer\go.zip..." 
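# NOTE: the build container named $COMMITHASH has already exited by this point (the
# "docker run" above was waited on), but docker cp still works against stopped containers,
# which is what allows the binaries above and go.zip below to be extracted from it.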
docker cp "$COMMITHASH`:c`:\go.zip" $env:TEMP\installer\ if (-not($LastExitCode -eq 0)) { Throw "ERROR: Failed to docker cp the golang installer 'go.zip' from container:c:\go.zip to $env:TEMP\installer" } $ErrorActionPreference = "Stop" # Extract the golang installer Write-Host -ForegroundColor Green "INFO: Extracting go.zip to $env:TEMP\go" $Duration=$(Measure-Command { Expand-Archive $env:TEMP\installer\go.zip $env:TEMP -Force | Out-Null}) Write-Host -ForegroundColor Green "INFO: Extraction ended at $(Get-Date). Duration`:$Duration" } else { Write-Host -ForegroundColor Magenta "WARN: Skipping copying and extracting golang from the image" } # Set the GOPATH Write-Host -ForegroundColor Green "INFO: Updating the golang and path environment variables" $env:GOPATH="$env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR" Write-Host -ForegroundColor Green "INFO: GOPATH=$env:GOPATH" # Set the path to have the version of go from the image at the front $env:PATH="$env:TEMP\go\bin;$env:PATH" # Set the GOROOT to be our copy of go from the image $env:GOROOT="$env:TEMP\go" Write-Host -ForegroundColor Green "INFO: $(go version)" # Work out the -H parameter for the daemon under test (DASHH_DUT) and client under test (DASHH_CUT) #$DASHH_DUT="npipe:////./pipe/$COMMITHASH" # Can't do remote named pipe #$ip = (resolve-dnsname $env:COMPUTERNAME -type A -NoHostsFile -LlmnrNetbiosOnly).IPAddress # Useful to tie down $DASHH_CUT="tcp://127.0.0.1`:2357" # Not a typo for 2375! $DASHH_DUT="tcp://0.0.0.0:2357" # Not a typo for 2375! # Arguments for the daemon under test $dutArgs=@() $dutArgs += "-H $DASHH_DUT" $dutArgs += "--data-root $env:TEMP\daemon" $dutArgs += "--pidfile $env:TEMP\docker.pid" # Save the PID file so we can nuke it if set $pidFile="$env:TEMP\docker.pid" # Arguments: Are we starting the daemon under test in debug mode? if (-not ("$env:DOCKER_DUT_DEBUG" -eq "")) { Write-Host -ForegroundColor Green "INFO: Running the daemon under test in debug mode" $dutArgs += "-D" } # Arguments: Are we starting the daemon under test with Hyper-V containers as the default isolation? if (-not ("$env:DOCKER_DUT_HYPERV" -eq "")) { Write-Host -ForegroundColor Green "INFO: Running the daemon under test with Hyper-V containers as the default" $dutArgs += "--exec-opt isolation=hyperv" } # Arguments: Allow setting optional storage-driver options # example usage: DOCKER_STORAGE_OPTS="lcow.globalmode=false,lcow.kernel=kernel.efi" if (-not ("$env:DOCKER_STORAGE_OPTS" -eq "")) { Write-Host -ForegroundColor Green "INFO: Running the daemon under test with storage-driver options ${env:DOCKER_STORAGE_OPTS}" $env:DOCKER_STORAGE_OPTS.Split(",") | ForEach { $dutArgs += "--storage-opt $_" } } # Start the daemon under test, ensuring everything is redirected to folders under $TEMP. # Important - we launch the -$COMMITHASH version so that we can kill it without # killing the control daemon. Write-Host -ForegroundColor Green "INFO: Starting a daemon under test..." Write-Host -ForegroundColor Green "INFO: Args: $dutArgs" New-Item -ItemType Directory $env:TEMP\daemon -ErrorAction SilentlyContinue | Out-Null # Cannot fathom why, but always writes to stderr.... Start-Process "$env:TEMP\binary\dockerd-$COMMITHASH" ` -ArgumentList $dutArgs ` -RedirectStandardOutput "$env:TEMP\dut.out" ` -RedirectStandardError "$env:TEMP\dut.err" Write-Host -ForegroundColor Green "INFO: Process started successfully." 
$daemonStarted=1 # Start tailing the daemon under test if the command is installed if ($null -ne (Get-Command "tail" -ErrorAction SilentlyContinue)) { Write-Host -ForegroundColor green "INFO: Start tailing logs of the daemon under tests" $tail = Start-Process "tail" -ArgumentList "-f $env:TEMP\dut.out" -PassThru -ErrorAction SilentlyContinue } # Verify we can get the daemon under test to respond $tries=20 Write-Host -ForegroundColor Green "INFO: Waiting for the daemon under test to start..." while ($true) { $ErrorActionPreference = "SilentlyContinue" & "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" version 2>&1 | Out-Null $ErrorActionPreference = "Stop" if ($LastExitCode -eq 0) { break } $tries-- if ($tries -le 0) { Throw "ERROR: Failed to get a response from the daemon under test" } Write-Host -NoNewline "." sleep 1 } Write-Host -ForegroundColor Green "INFO: Daemon under test started and replied!" # Provide the docker version of the daemon under test for debugging purposes. Write-Host -ForegroundColor Green "INFO: Docker version of the daemon under test" Write-Host $ErrorActionPreference = "SilentlyContinue" & "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" version $ErrorActionPreference = "Stop" if ($LastExitCode -ne 0) { Throw "ERROR: The daemon under test does not appear to be running." } Write-Host # Same as above but docker info Write-Host -ForegroundColor Green "INFO: Docker info of the daemon under test" Write-Host $ErrorActionPreference = "SilentlyContinue" & "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" info $ErrorActionPreference = "Stop" if ($LastExitCode -ne 0) { Throw "ERROR: The daemon under test does not appear to be running." } Write-Host # Same as above but docker images Write-Host -ForegroundColor Green "INFO: Docker images of the daemon under test" Write-Host $ErrorActionPreference = "SilentlyContinue" & "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" images $ErrorActionPreference = "Stop" if ($LastExitCode -ne 0) { Throw "ERROR: The daemon under test does not appear to be running." } Write-Host # Default to windowsservercore for the base image used for the tests. The "docker" image # and the control daemon use microsoft/windowsservercore regardless. This is *JUST* for the tests. if ($null -eq $env:WINDOWS_BASE_IMAGE) { $env:WINDOWS_BASE_IMAGE="microsoft/windowsservercore" } if ($null -eq $env:WINDOWS_BASE_IMAGE_TAG) { $env:WINDOWS_BASE_IMAGE_TAG="latest" } # Lowercase and make sure it has a microsoft/ prefix $env:WINDOWS_BASE_IMAGE = $env:WINDOWS_BASE_IMAGE.ToLower() if (! $($env:WINDOWS_BASE_IMAGE -Split "/")[0] -match "microsoft") { Throw "ERROR: WINDOWS_BASE_IMAGE should start microsoft/ or mcr.microsoft.com/" } Write-Host -ForegroundColor Green "INFO: Base image for tests is $env:WINDOWS_BASE_IMAGE" $ErrorActionPreference = "SilentlyContinue" if ($((& "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" images --format "{{.Repository}}:{{.Tag}}" | Select-String "$($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG" | Measure-Object -Line).Lines) -eq 0) { # Try the internal azure CI image version or Microsoft internal corpnet where the base image is already pre-prepared on the disk, # either through Invoke-DockerCI or, in the case of Azure CI servers, baked into the VHD at the same location. if (Test-Path $("c:\baseimages\"+$($env:WINDOWS_BASE_IMAGE -Split "/")[1]+".tar")) { Write-Host -ForegroundColor Green "INFO: Loading"$($env:WINDOWS_BASE_IMAGE -Split "/")[1]".tar from disk into the daemon under test. 
This may take some time..." $ErrorActionPreference = "SilentlyContinue" & "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" load -i $("$readBaseFrom`:\baseimages\"+$($env:WINDOWS_BASE_IMAGE -Split "/")[1]+".tar") $ErrorActionPreference = "Stop" if (-not $LastExitCode -eq 0) { Throw $("ERROR: Failed to load $readBaseFrom`:\baseimages\"+$($env:WINDOWS_BASE_IMAGE -Split "/")[1]+".tar into daemon under test") } Write-Host -ForegroundColor Green "INFO: docker load of"$($env:WINDOWS_BASE_IMAGE -Split "/")[1]" into daemon under test completed successfully" } else { # We need to docker pull it instead. It will come in directly as microsoft/imagename:tagname Write-Host -ForegroundColor Green $("INFO: Pulling "+$env:WINDOWS_BASE_IMAGE+":"+$env:WINDOWS_BASE_IMAGE_TAG+" from docker hub into daemon under test. This may take some time...") $ErrorActionPreference = "SilentlyContinue" & "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" pull "$($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG" $ErrorActionPreference = "Stop" if (-not $LastExitCode -eq 0) { Throw $("ERROR: Failed to docker pull $($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG into daemon under test.") } Write-Host -ForegroundColor Green $("INFO: docker pull of $($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG into daemon under test completed successfully") Write-Host -ForegroundColor Green $("INFO: Tagging $($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG as microsoft/$ControlDaemonBaseImage in daemon under test") & "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" tag "$($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG" microsoft/$ControlDaemonBaseImage } } else { Write-Host -ForegroundColor Green "INFO: Image $($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG is already loaded in the daemon under test" } # Inspect the pulled or loaded image to get the version directly $ErrorActionPreference = "SilentlyContinue" $dutimgVersion = $(&"$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" inspect "$($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG" --format "{{.OsVersion}}") $ErrorActionPreference = "Stop" Write-Host -ForegroundColor Green $("INFO: Version of $($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG is '"+$dutimgVersion+"'") # Run the validation tests unless SKIP_VALIDATION_TESTS is defined. if ($null -eq $env:SKIP_VALIDATION_TESTS) { Write-Host -ForegroundColor Cyan "INFO: Running validation tests at $(Get-Date)..." $ErrorActionPreference = "SilentlyContinue" $Duration=$(Measure-Command { hack\make.ps1 -DCO -GoFormat -PkgImports | Out-Host }) $ErrorActionPreference = "Stop" if (-not($LastExitCode -eq 0)) { Throw "ERROR: Validation tests failed" } Write-Host -ForegroundColor Green "INFO: Validation tests ended at $(Get-Date). Duration`:$Duration" } else { Write-Host -ForegroundColor Magenta "WARN: Skipping validation tests" } # Run the unit tests inside a container unless SKIP_UNIT_TESTS is defined if ($null -eq $env:SKIP_UNIT_TESTS) { $ContainerNameForUnitTests = $COMMITHASH + "_UnitTests" Write-Host -ForegroundColor Cyan "INFO: Running unit tests at $(Get-Date)..." $ErrorActionPreference = "SilentlyContinue" $Duration=$(Measure-Command {docker run --name $ContainerNameForUnitTests -e DOCKER_GITCOMMIT=$COMMITHASH$CommitUnsupported docker hack\make.ps1 -TestUnit | Out-Host }) $TestRunExitCode = $LastExitCode $ErrorActionPreference = "Stop" # Saving where jenkins will take a look at..... 
New-Item -Force -ItemType Directory bundles | Out-Null $unitTestsContPath="$ContainerNameForUnitTests`:c`:\gopath\src\github.com\docker\docker\bundles" $JunitExpectedContFilePath = "$unitTestsContPath\junit-report-unit-tests.xml" docker cp $JunitExpectedContFilePath "bundles" if (-not($LastExitCode -eq 0)) { Throw "ERROR: Failed to docker cp the unit tests report ($JunitExpectedContFilePath) to bundles" } if (Test-Path "bundles\junit-report-unit-tests.xml") { Write-Host -ForegroundColor Magenta "INFO: Unit tests results(bundles\junit-report-unit-tests.xml) exist. pwd=$pwd" } else { Write-Host -ForegroundColor Magenta "ERROR: Unit tests results(bundles\junit-report-unit-tests.xml) do not exist. pwd=$pwd" } if (-not($TestRunExitCode -eq 0)) { Throw "ERROR: Unit tests failed" } Write-Host -ForegroundColor Green "INFO: Unit tests ended at $(Get-Date). Duration`:$Duration" } else { Write-Host -ForegroundColor Magenta "WARN: Skipping unit tests" } # Add the Windows busybox image. Needed for WCOW integration tests if ($null -eq $env:SKIP_INTEGRATION_TESTS) { Write-Host -ForegroundColor Green "INFO: Building busybox" $ErrorActionPreference = "SilentlyContinue" $(& "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" build -t busybox --build-arg WINDOWS_BASE_IMAGE --build-arg WINDOWS_BASE_IMAGE_TAG "$env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR\src\github.com\docker\docker\contrib\busybox\" | Out-Host) $ErrorActionPreference = "Stop" if (-not($LastExitCode -eq 0)) { Throw "ERROR: Failed to build busybox image" } Write-Host -ForegroundColor Green "INFO: Docker images of the daemon under test" Write-Host $ErrorActionPreference = "SilentlyContinue" & "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" images $ErrorActionPreference = "Stop" if ($LastExitCode -ne 0) { Throw "ERROR: The daemon under test does not appear to be running." } Write-Host } # Run the WCOW integration tests unless SKIP_INTEGRATION_TESTS is defined if ($null -eq $env:SKIP_INTEGRATION_TESTS) { Write-Host -ForegroundColor Cyan "INFO: Running integration tests at $(Get-Date)..." $ErrorActionPreference = "SilentlyContinue" # Location of the daemon under test. $env:OrigDOCKER_HOST="$env:DOCKER_HOST" #https://blogs.technet.microsoft.com/heyscriptingguy/2011/09/20/solve-problems-with-external-command-lines-in-powershell/ is useful to see tokenising $jsonFilePath = "..\\bundles\\go-test-report-intcli-tests.json" $xmlFilePath = "..\\bundles\\junit-report-intcli-tests.xml" $c = "gotestsum --format=standard-verbose --jsonfile=$jsonFilePath --junitfile=$xmlFilePath -- " if ($null -ne $env:INTEGRATION_TEST_NAME) { # Makes is quicker for debugging to be able to run only a subset of the integration tests $c += "`"-test.run`" " $c += "`"$env:INTEGRATION_TEST_NAME`" " Write-Host -ForegroundColor Magenta "WARN: Only running integration tests matching $env:INTEGRATION_TEST_NAME" } $c += "`"-tags`" " + "`"autogen`" " $c += "`"-test.timeout`" " + "`"200m`" " if ($null -ne $env:INTEGRATION_IN_CONTAINER) { Write-Host -ForegroundColor Green "INFO: Integration tests being run inside a container" # Note we talk back through the containers gateway address # And the ridiculous lengths we have to go to get the default gateway address... (GetNetIPConfiguration doesn't work in nanoserver) # I just could not get the escaping to work in a single command, so output $c to a file and run that in the container instead... # Not the prettiest, but it works. 
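# NOTE: the DOCKER_HOST constructed inside the container below points back at the host's
# daemon under test by taking the default gateway from the last line of "ipconfig" output
# (the fixed Substring offset is tied to that output format) and appending the :2357 DUT
# port, which is why the comments above describe this as not the prettiest approach.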
$c | Out-File -Force "$env:TEMP\binary\runIntegrationCLI.ps1" $Duration= $(Measure-Command { & docker run ` --rm ` -e c=$c ` --workdir "c`:\gopath\src\github.com\docker\docker\integration-cli" ` -v "$env:TEMP\binary`:c:\target" ` docker ` "`$env`:PATH`='c`:\target;'+`$env:PATH`; `$env:DOCKER_HOST`='tcp`://'+(ipconfig | select -last 1).Substring(39)+'`:2357'; c:\target\runIntegrationCLI.ps1" | Out-Host } ) } else { $env:DOCKER_HOST=$DASHH_CUT $env:PATH="$env:TEMP\binary;$env:PATH;" # Force to use the test binaries, not the host ones. $env:GO111MODULE="off" Write-Host -ForegroundColor Green "INFO: DOCKER_HOST at $DASHH_CUT" $ErrorActionPreference = "SilentlyContinue" Write-Host -ForegroundColor Cyan "INFO: Integration API tests being run from the host:" $start=(Get-Date); Invoke-Expression ".\hack\make.ps1 -TestIntegration"; $Duration=New-Timespan -Start $start -End (Get-Date) $IntTestsRunResult = $LastExitCode $ErrorActionPreference = "Stop" if (-not($IntTestsRunResult -eq 0)) { Throw "ERROR: Integration API tests failed at $(Get-Date). Duration`:$Duration" } $ErrorActionPreference = "SilentlyContinue" Write-Host -ForegroundColor Green "INFO: Integration CLI tests being run from the host:" Write-Host -ForegroundColor Green "INFO: $c" Set-Location "$env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR\src\github.com\docker\docker\integration-cli" # Explicit to not use measure-command otherwise don't get output as it goes $start=(Get-Date); Invoke-Expression $c; $Duration=New-Timespan -Start $start -End (Get-Date) } $ErrorActionPreference = "Stop" if (-not($LastExitCode -eq 0)) { Throw "ERROR: Integration CLI tests failed at $(Get-Date). Duration`:$Duration" } Write-Host -ForegroundColor Green "INFO: Integration tests ended at $(Get-Date). Duration`:$Duration" } else { Write-Host -ForegroundColor Magenta "WARN: Skipping integration tests" } # Docker info now to get counts (after or if jjh/containercounts is merged) if ($daemonStarted -eq 1) { Write-Host -ForegroundColor Green "INFO: Docker info of the daemon under test at end of run" Write-Host $ErrorActionPreference = "SilentlyContinue" & "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" info $ErrorActionPreference = "Stop" if ($LastExitCode -ne 0) { Throw "ERROR: The daemon under test does not appear to be running." } Write-Host } # Stop the daemon under test if (Test-Path "$env:TEMP\docker.pid") { $p=Get-Content "$env:TEMP\docker.pid" -raw if (($null -ne $p) -and ($daemonStarted -eq 1)) { Write-Host -ForegroundColor green "INFO: Stopping daemon under test" taskkill -f -t -pid $p #sleep 5 } Remove-Item "$env:TEMP\docker.pid" -force -ErrorAction SilentlyContinue } # Stop the tail process (if started) if ($null -ne $tail) { Write-Host -ForegroundColor green "INFO: Stop tailing logs of the daemon under tests" Stop-Process -InputObject $tail -Force } Write-Host -ForegroundColor Green "INFO: executeCI.ps1 Completed successfully at $(Get-Date)." } Catch [Exception] { $FinallyColour="Red" Write-Host -ForegroundColor Red ("`r`n`r`nERROR: Failed '$_' at $(Get-Date)") Write-Host -ForegroundColor Red ($_.InvocationInfo.PositionMessage) Write-Host "`n`n" # Exit to ensure Jenkins captures it. Don't do this in the ISE or interactive Powershell - they will catch the Throw onwards. 
if ( ([bool]([Environment]::GetCommandLineArgs() -Like '*-NonInteractive*')) -and ` ([bool]([Environment]::GetCommandLineArgs() -NotLike "*Powershell_ISE.exe*"))) { exit 1 } Throw $_ } Finally { $ErrorActionPreference="SilentlyContinue" $global:ProgressPreference=$origProgressPreference Write-Host -ForegroundColor Green "INFO: Tidying up at end of run" # Restore the path if ($null -ne $origPath) { $env:PATH=$origPath } # Restore the DOCKER_HOST if ($null -ne $origDOCKER_HOST) { $env:DOCKER_HOST=$origDOCKER_HOST } # Restore the GOROOT and GOPATH variables if ($null -ne $origGOROOT) { $env:GOROOT=$origGOROOT } if ($null -ne $origGOPATH) { $env:GOPATH=$origGOPATH } # Dump the daemon log. This will include any possible panic stack in the .err. if (($daemonStarted -eq 1) -and ($(Get-Item "$env:TEMP\dut.err").Length -gt 0)) { Write-Host -ForegroundColor Cyan "----------- DAEMON LOG ------------" Get-Content "$env:TEMP\dut.err" -ErrorAction SilentlyContinue | Write-Host -ForegroundColor Cyan Write-Host -ForegroundColor Cyan "----------- END DAEMON LOG --------" } # Save the daemon under test log if ($daemonStarted -eq 1) { Set-Location "$env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR\src\github.com\docker\docker" Write-Host -ForegroundColor Green "INFO: Saving daemon under test log ($env:TEMP\dut.out) to bundles\CIDUT.out" Copy-Item "$env:TEMP\dut.out" "bundles\CIDUT.out" -Force -ErrorAction SilentlyContinue Write-Host -ForegroundColor Green "INFO: Saving daemon under test log ($env:TEMP\dut.err) to bundles\CIDUT.err" Copy-Item "$env:TEMP\dut.err" "bundles\CIDUT.err" -Force -ErrorAction SilentlyContinue } Set-Location "$env:SOURCES_DRIVE\$env:SOURCES_SUBDIR" -ErrorAction SilentlyContinue Nuke-Everything # Restore the TEMP path if ($null -ne $TEMPORIG) { $env:TEMP="$TEMPORIG" } $Dur=New-TimeSpan -Start $StartTime -End $(Get-Date) Write-Host -ForegroundColor $FinallyColour "`nINFO: executeCI.ps1 exiting at $(date). Duration $dur`n" }
# WARNING: When editing this file, consider submitting a PR to # https://github.com/kevpar/docker-w2wCIScripts/blob/master/runCI/executeCI.ps1, and make sure that # https://github.com/kevpar/docker-w2wCIScripts/blob/master/runCI/Invoke-DockerCI.ps1 isn't broken. # Validate using a test context in Jenkins, then copy/paste into Jenkins production. # # Jenkins CI scripts for Windows to Windows CI (Powershell Version) # By John Howard (@jhowardmsft) January 2016 - bash version; July 2016 Ported to PowerShell $ErrorActionPreference = 'Stop' $StartTime=Get-Date Write-Host -ForegroundColor Red "DEBUG: print all environment variables to check how Jenkins runs this script" $allArgs = [Environment]::GetCommandLineArgs() Write-Host -ForegroundColor Red $allArgs Write-Host -ForegroundColor Red "----------------------------------------------------------------------------" # ------------------------------------------------------------------------------------------- # When executed, we rely on four variables being set in the environment: # # [The reason for being environment variables rather than parameters is historical. No reason # why it couldn't be updated.] # # SOURCES_DRIVE is the drive on which the sources being tested are cloned from. # This should be a straight drive letter, no platform semantics. # For example 'c' # # SOURCES_SUBDIR is the top level directory under SOURCES_DRIVE where the # sources are cloned to. There are no platform semantics in this # as it does not include slashes. # For example 'gopath' # # Based on the above examples, it would be expected that Jenkins # would clone the sources being tested to # SOURCES_DRIVE\SOURCES_SUBDIR\src\github.com\docker\docker, or # c:\gopath\src\github.com\docker\docker # # TESTRUN_DRIVE is the drive where we build the binary on and redirect everything # to for the daemon under test. On an Azure D2 type host which has # an SSD temporary storage D: drive, this is ideal for performance. # For example 'd' # # TESTRUN_SUBDIR is the top level directory under TESTRUN_DRIVE where we redirect # everything to for the daemon under test. For example 'CI'. # Hence, the daemon under test is run under # TESTRUN_DRIVE\TESTRUN_SUBDIR\CI-<CommitID> or # d:\CI\CI-<CommitID> # # Optional environment variables help in CI: # # BUILD_NUMBER + BRANCH_NAME are optional variables to be added to the directory below TESTRUN_SUBDIR # to have individual folder per CI build. If some files couldn't be # cleaned up and we want to re-run the build in CI. # Hence, the daemon under test is run under # TESTRUN_DRIVE\TESTRUN_SUBDIR\PR-<PR-Number>\<BuildNumber> or # d:\CI\PR-<PR-Number>\<BuildNumber> # # In addition, the following variables can control the run configuration: # # DOCKER_DUT_DEBUG if defined starts the daemon under test in debug mode. 
# # DOCKER_STORAGE_OPTS comma-separated list of optional storage driver options for the daemon under test # examples: # DOCKER_STORAGE_OPTS="size=40G" # # SKIP_VALIDATION_TESTS if defined skips the validation tests # # SKIP_UNIT_TESTS if defined skips the unit tests # # SKIP_INTEGRATION_TESTS if defined skips the integration tests # # SKIP_COPY_GO if defined skips copy the go installer from the image # # DOCKER_DUT_HYPERV if default daemon under test default isolation is hyperv # # INTEGRATION_TEST_NAME to only run partial tests eg "TestInfo*" will only run # any tests starting "TestInfo" # # SKIP_BINARY_BUILD if defined skips building the binary # # SKIP_ZAP_DUT if defined doesn't zap the daemon under test directory # # SKIP_IMAGE_BUILD if defined doesn't build the 'docker' image # # INTEGRATION_IN_CONTAINER if defined, runs the integration tests from inside a container. # As of July 2016, there are known issues with this. # # SKIP_ALL_CLEANUP if defined, skips any cleanup at the start or end of the run # # WINDOWS_BASE_IMAGE if defined, uses that as the base image. Note that the # docker integration tests are also coded to use the same # environment variable, and if no set, defaults to microsoft/windowsservercore # # WINDOWS_BASE_IMAGE_TAG if defined, uses that as the tag name for the base image. # if no set, defaults to latest # # ------------------------------------------------------------------------------------------- # # Jenkins Integration. Add a Windows Powershell build step as follows: # # Write-Host -ForegroundColor green "INFO: Jenkins build step starting" # $CISCRIPT_DEFAULT_LOCATION = "https://raw.githubusercontent.com/moby/moby/master/hack/ci/windows.ps1" # $CISCRIPT_LOCAL_LOCATION = "$env:TEMP\executeCI.ps1" # Write-Host -ForegroundColor green "INFO: Removing cached execution script" # Remove-Item $CISCRIPT_LOCAL_LOCATION -Force -ErrorAction SilentlyContinue 2>&1 | Out-Null # $wc = New-Object net.webclient # try { # Write-Host -ForegroundColor green "INFO: Downloading latest execution script..." # $wc.Downloadfile($CISCRIPT_DEFAULT_LOCATION, $CISCRIPT_LOCAL_LOCATION) # } # catch [System.Net.WebException] # { # Throw ("Failed to download: $_") # } # & $CISCRIPT_LOCAL_LOCATION # ------------------------------------------------------------------------------------------- $SCRIPT_VER="05-Feb-2019 09:03 PDT" $FinallyColour="Cyan" #$env:DOCKER_DUT_DEBUG="yes" # Comment out to not be in debug mode #$env:SKIP_UNIT_TESTS="yes" #$env:SKIP_VALIDATION_TESTS="yes" #$env:SKIP_ZAP_DUT="" #$env:SKIP_BINARY_BUILD="yes" #$env:INTEGRATION_TEST_NAME="" #$env:SKIP_IMAGE_BUILD="yes" #$env:SKIP_ALL_CLEANUP="yes" #$env:INTEGRATION_IN_CONTAINER="yes" #$env:WINDOWS_BASE_IMAGE="" #$env:SKIP_COPY_GO="yes" #$env:INTEGRATION_TESTFLAGS="-test.v" Function Nuke-Everything { $ErrorActionPreference = 'SilentlyContinue' try { if ($null -eq $env:SKIP_ALL_CLEANUP) { Write-Host -ForegroundColor green "INFO: Nuke-Everything..." 
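# What the cleanup below does, in order: removes every container on the control daemon,
# removes all images except the servercore/nanoserver/docker/busybox ones, kills any leaked
# dockerd-* daemons and stray build/test processes, detaches leftover VHDs and compute
# processes, zaps the test-run directory with docker-ci-zap, and clears NdisAdapters
# registry entries leaked on older RS1 builds (parts of this are gated on the
# SKIP_ALL_CLEANUP / SKIP_ZAP_DUT flags).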
$containerCount = ($(docker ps -aq | Measure-Object -line).Lines) if (-not $LastExitCode -eq 0) { Throw "ERROR: Failed to get container count from control daemon while nuking" } Write-Host -ForegroundColor green "INFO: Container count on control daemon to delete is $containerCount" if ($(docker ps -aq | Measure-Object -line).Lines -gt 0) { docker rm -f $(docker ps -aq) } $allImages = $(docker images --format "{{.Repository}}#{{.ID}}") $toRemove = ($allImages | Select-String -NotMatch "servercore","nanoserver","docker","busybox") $imageCount = ($toRemove | Measure-Object -line).Lines if ($imageCount -gt 0) { Write-Host -Foregroundcolor green "INFO: Non-base image count on control daemon to delete is $imageCount" docker rmi -f ($toRemove | Foreach-Object { $_.ToString().Split("#")[1] }) } } else { Write-Host -ForegroundColor Magenta "WARN: Skipping cleanup of images and containers" } # Kill any spurious daemons. The '-' is IMPORTANT otherwise will kill the control daemon! $pids=$(get-process | where-object {$_.ProcessName -like 'dockerd-*'}).id foreach ($p in $pids) { Write-Host "INFO: Killing daemon with PID $p" Stop-Process -Id $p -Force -ErrorAction SilentlyContinue } if ($null -ne $pidFile) { Write-Host "INFO: Tidying pidfile $pidfile" if (Test-Path $pidFile) { $p=Get-Content $pidFile -raw if ($null -ne $p){ Write-Host -ForegroundColor green "INFO: Stopping possible daemon pid $p" taskkill -f -t -pid $p } Remove-Item "$env:TEMP\docker.pid" -force -ErrorAction SilentlyContinue } } Stop-Process -name "cc1" -Force -ErrorAction SilentlyContinue 2>&1 | Out-Null Stop-Process -name "link" -Force -ErrorAction SilentlyContinue 2>&1 | Out-Null Stop-Process -name "compile" -Force -ErrorAction SilentlyContinue 2>&1 | Out-Null Stop-Process -name "ld" -Force -ErrorAction SilentlyContinue 2>&1 | Out-Null Stop-Process -name "go" -Force -ErrorAction SilentlyContinue 2>&1 | Out-Null Stop-Process -name "git" -Force -ErrorAction SilentlyContinue 2>&1 | Out-Null Stop-Process -name "git-remote-https" -Force -ErrorAction SilentlyContinue 2>&1 | Out-Null Stop-Process -name "integration-cli.test" -Force -ErrorAction SilentlyContinue 2>&1 | Out-Null Stop-Process -name "tail" -Force -ErrorAction SilentlyContinue 2>&1 | Out-Null # Detach any VHDs gwmi msvm_mountedstorageimage -namespace root/virtualization/v2 -ErrorAction SilentlyContinue | foreach-Object {$_.DetachVirtualHardDisk() } # Stop any compute processes Get-ComputeProcess | Stop-ComputeProcess -Force # Delete the directory using our dangerous utility unless told not to if (Test-Path "$env:TESTRUN_DRIVE`:\$env:TESTRUN_SUBDIR") { if (($null -ne $env:SKIP_ZAP_DUT) -or ($null -eq $env:SKIP_ALL_CLEANUP)) { Write-Host -ForegroundColor Green "INFO: Nuking $env:TESTRUN_DRIVE`:\$env:TESTRUN_SUBDIR" docker-ci-zap "-folder=$env:TESTRUN_DRIVE`:\$env:TESTRUN_SUBDIR" } else { Write-Host -ForegroundColor Magenta "WARN: Skip nuking $env:TESTRUN_DRIVE`:\$env:TESTRUN_SUBDIR" } } # TODO: This should be able to be removed in August 2017 update. Only needed for RS1 Production Server workaround - Psched $reg = "HKLM:\System\CurrentControlSet\Services\Psched\Parameters\NdisAdapters" $count=(Get-ChildItem $reg | Measure-Object).Count if ($count -gt 0) { Write-Warning "There are $count NdisAdapters leaked under Psched\Parameters" Write-Warning "Cleaning Psched..." Get-ChildItem $reg | Remove-Item -Recurse -Force -ErrorAction SilentlyContinue | Out-Null } # TODO: This should be able to be removed in August 2017 update. 
Only needed for RS1 $reg = "HKLM:\System\CurrentControlSet\Services\WFPLWFS\Parameters\NdisAdapters" $count=(Get-ChildItem $reg | Measure-Object).Count if ($count -gt 0) { Write-Warning "There are $count NdisAdapters leaked under WFPLWFS\Parameters" Write-Warning "Cleaning WFPLWFS..." Get-ChildItem $reg | Remove-Item -Recurse -Force -ErrorAction SilentlyContinue | Out-Null } } catch { # Don't throw any errors onwards Throw $_ } } Try { Write-Host -ForegroundColor Cyan "`nINFO: executeCI.ps1 starting at $(date)`n" Write-Host -ForegroundColor Green "INFO: Script version $SCRIPT_VER" Set-PSDebug -Trace 0 # 1 to turn on $origPath="$env:PATH" # so we can restore it at the end $origDOCKER_HOST="$DOCKER_HOST" # So we can restore it at the end $origGOROOT="$env:GOROOT" # So we can restore it at the end $origGOPATH="$env:GOPATH" # So we can restore it at the end # Turn off progress bars $origProgressPreference=$global:ProgressPreference $global:ProgressPreference='SilentlyContinue' # Git version Write-Host -ForegroundColor Green "INFO: Running $(git version)" # OS Version $bl=(Get-ItemProperty -Path "Registry::HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion" -Name BuildLabEx).BuildLabEx $a=$bl.ToString().Split(".") $Branch=$a[3] $WindowsBuild=$a[0]+"."+$a[1]+"."+$a[4] Write-Host -ForegroundColor green "INFO: Branch:$Branch Build:$WindowsBuild" # List the environment variables Write-Host -ForegroundColor green "INFO: Environment variables:" Get-ChildItem Env: | Out-String # PR if (-not ($null -eq $env:PR)) { Write-Output "INFO: PR#$env:PR (https://github.com/docker/docker/pull/$env:PR)" } # Make sure docker is installed if ($null -eq (Get-Command "docker" -ErrorAction SilentlyContinue)) { Throw "ERROR: docker is not installed or not found on path" } # Make sure docker-ci-zap is installed if ($null -eq (Get-Command "docker-ci-zap" -ErrorAction SilentlyContinue)) { Throw "ERROR: docker-ci-zap is not installed or not found on path" } # Make sure Windows Defender is disabled $defender = $false Try { $status = Get-MpComputerStatus if ($status) { if ($status.RealTimeProtectionEnabled) { $defender = $true } } } Catch {} if ($defender) { Write-Host -ForegroundColor Magenta "WARN: Windows Defender real time protection is enabled, which may cause some integration tests to fail" } # Make sure SOURCES_DRIVE is set if ($null -eq $env:SOURCES_DRIVE) { Throw "ERROR: Environment variable SOURCES_DRIVE is not set" } # Make sure TESTRUN_DRIVE is set if ($null -eq $env:TESTRUN_DRIVE) { Throw "ERROR: Environment variable TESTRUN_DRIVE is not set" } # Make sure SOURCES_SUBDIR is set if ($null -eq $env:SOURCES_SUBDIR) { Throw "ERROR: Environment variable SOURCES_SUBDIR is not set" } # Make sure TESTRUN_SUBDIR is set if ($null -eq $env:TESTRUN_SUBDIR) { Throw "ERROR: Environment variable TESTRUN_SUBDIR is not set" } # SOURCES_DRIVE\SOURCES_SUBDIR must be a directory and exist if (-not (Test-Path -PathType Container "$env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR")) { Throw "ERROR: $env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR must be an existing directory" } # Create the TESTRUN_DRIVE\TESTRUN_SUBDIR if it does not already exist New-Item -ItemType Directory -Force -Path "$env:TESTRUN_DRIVE`:\$env:TESTRUN_SUBDIR" -ErrorAction SilentlyContinue | Out-Null Write-Host -ForegroundColor Green "INFO: Sources under $env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR\..." Write-Host -ForegroundColor Green "INFO: Test run under $env:TESTRUN_DRIVE`:\$env:TESTRUN_SUBDIR\..." 
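# Example (illustrative only, using the defaults documented at the top of this file):
#   $env:SOURCES_DRIVE="c"; $env:SOURCES_SUBDIR="gopath"   # sources at c:\gopath\src\github.com\docker\docker
#   $env:TESTRUN_DRIVE="d"; $env:TESTRUN_SUBDIR="CI"       # test run under d:\CI\CI-<CommitID>
# would satisfy the four mandatory-variable checks above when running outside Jenkins.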
# Check the intended source location is a directory if (-not (Test-Path -PathType Container "$env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR\src\github.com\docker\docker" -ErrorAction SilentlyContinue)) { Throw "ERROR: $env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR\src\github.com\docker\docker is not a directory!" } # Make sure we start at the root of the sources Set-Location "$env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR\src\github.com\docker\docker" Write-Host -ForegroundColor Green "INFO: Running in $(Get-Location)" # Make sure we are in repo if (-not (Test-Path -PathType Leaf -Path ".\Dockerfile.windows")) { Throw "$(Get-Location) does not contain Dockerfile.windows!" } Write-Host -ForegroundColor Green "INFO: docker/docker repository was found" # Make sure microsoft/windowsservercore:latest image is installed in the control daemon. On public CI machines, windowsservercore.tar and nanoserver.tar # are pre-baked and tagged appropriately in the c:\baseimages directory, and can be directly loaded. # Note - this script will only work on 10B (Oct 2016) or later machines! Not 9D or previous due to image tagging assumptions. # # On machines not on Microsoft corpnet, or those which have not been pre-baked, we have to docker pull the image in which case it will # will come in directly as microsoft/windowsservercore:latest. The ultimate goal of all this code is to ensure that whatever, # we have microsoft/windowsservercore:latest # # Note we cannot use (as at Oct 2016) nanoserver as the control daemons base image, even if nanoserver is used in the tests themselves. $ErrorActionPreference = "SilentlyContinue" $ControlDaemonBaseImage="windowsservercore" $readBaseFrom="c" if ($((docker images --format "{{.Repository}}:{{.Tag}}" | Select-String $("microsoft/"+$ControlDaemonBaseImage+":latest") | Measure-Object -Line).Lines) -eq 0) { # Try the internal azure CI image version or Microsoft internal corpnet where the base image is already pre-prepared on the disk, # either through Invoke-DockerCI or, in the case of Azure CI servers, baked into the VHD at the same location. if (Test-Path $("$env:SOURCES_DRIVE`:\baseimages\"+$ControlDaemonBaseImage+".tar")) { # An optimization for CI servers to copy it to the D: drive which is an SSD. if ($env:SOURCES_DRIVE -ne $env:TESTRUN_DRIVE) { $readBaseFrom=$env:TESTRUN_DRIVE if (!(Test-Path "$env:TESTRUN_DRIVE`:\baseimages")) { New-Item "$env:TESTRUN_DRIVE`:\baseimages" -type directory | Out-Null } if (!(Test-Path "$env:TESTRUN_DRIVE`:\baseimages\windowsservercore.tar")) { if (Test-Path "$env:SOURCES_DRIVE`:\baseimages\windowsservercore.tar") { Write-Host -ForegroundColor Green "INFO: Optimisation - copying $env:SOURCES_DRIVE`:\baseimages\windowsservercore.tar to $env:TESTRUN_DRIVE`:\baseimages" Copy-Item "$env:SOURCES_DRIVE`:\baseimages\windowsservercore.tar" "$env:TESTRUN_DRIVE`:\baseimages" } } if (!(Test-Path "$env:TESTRUN_DRIVE`:\baseimages\nanoserver.tar")) { if (Test-Path "$env:SOURCES_DRIVE`:\baseimages\nanoserver.tar") { Write-Host -ForegroundColor Green "INFO: Optimisation - copying $env:SOURCES_DRIVE`:\baseimages\nanoserver.tar to $env:TESTRUN_DRIVE`:\baseimages" Copy-Item "$env:SOURCES_DRIVE`:\baseimages\nanoserver.tar" "$env:TESTRUN_DRIVE`:\baseimages" } } $readBaseFrom=$env:TESTRUN_DRIVE } Write-Host -ForegroundColor Green "INFO: Loading"$ControlDaemonBaseImage".tar from disk. This may take some time..." 
$ErrorActionPreference = "SilentlyContinue" docker load -i $("$readBaseFrom`:\baseimages\"+$ControlDaemonBaseImage+".tar") $ErrorActionPreference = "Stop" if (-not $LastExitCode -eq 0) { Throw $("ERROR: Failed to load $readBaseFrom`:\baseimages\"+$ControlDaemonBaseImage+".tar") } Write-Host -ForegroundColor Green "INFO: docker load of"$ControlDaemonBaseImage" completed successfully" } else { # We need to docker pull it instead. It will come in directly as microsoft/imagename:latest Write-Host -ForegroundColor Green $("INFO: Pulling $($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG from docker hub. This may take some time...") $ErrorActionPreference = "SilentlyContinue" docker pull "$($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG" $ErrorActionPreference = "Stop" if (-not $LastExitCode -eq 0) { Throw $("ERROR: Failed to docker pull $($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG.") } Write-Host -ForegroundColor Green $("INFO: docker pull of $($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG completed successfully") Write-Host -ForegroundColor Green $("INFO: Tagging $($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG as microsoft/$ControlDaemonBaseImage") docker tag "$($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG" microsoft/$ControlDaemonBaseImage } } else { Write-Host -ForegroundColor Green "INFO: Image"$("microsoft/"+$ControlDaemonBaseImage+":latest")"is already loaded in the control daemon" } # Inspect the pulled image to get the version directly $ErrorActionPreference = "SilentlyContinue" $imgVersion = $(docker inspect $("microsoft/"+$ControlDaemonBaseImage) --format "{{.OsVersion}}") $ErrorActionPreference = "Stop" Write-Host -ForegroundColor Green $("INFO: Version of microsoft/"+$ControlDaemonBaseImage+":latest is '"+$imgVersion+"'") # Provide the docker version for debugging purposes. Write-Host -ForegroundColor Green "INFO: Docker version of control daemon" Write-Host $ErrorActionPreference = "SilentlyContinue" docker version $ErrorActionPreference = "Stop" if (-not($LastExitCode -eq 0)) { Write-Host Write-Host -ForegroundColor Green "---------------------------------------------------------------------------" Write-Host -ForegroundColor Green " Failed to get a response from the control daemon. It may be down." Write-Host -ForegroundColor Green " Try re-running this CI job, or ask on #docker-maintainers on docker slack" Write-Host -ForegroundColor Green " to see if the daemon is running. Also check the service configuration." Write-Host -ForegroundColor Green " DOCKER_HOST is set to $DOCKER_HOST." Write-Host -ForegroundColor Green "---------------------------------------------------------------------------" Write-Host Throw "ERROR: The control daemon does not appear to be running." } Write-Host # Same as above, but docker info Write-Host -ForegroundColor Green "INFO: Docker info of control daemon" Write-Host $ErrorActionPreference = "SilentlyContinue" docker info $ErrorActionPreference = "Stop" if (-not($LastExitCode -eq 0)) { Throw "ERROR: The control daemon does not appear to be running." } Write-Host # Get the commit has and verify we have something $ErrorActionPreference = "SilentlyContinue" $COMMITHASH=$(git rev-parse --short HEAD) $ErrorActionPreference = "Stop" if (-not($LastExitCode -eq 0)) { Throw "ERROR: Failed to get commit hash. Are you sure this is a docker repository?" 
} Write-Host -ForegroundColor Green "INFO: Commit hash is $COMMITHASH" # Nuke everything and go back to our sources after Nuke-Everything cd "$env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR\src\github.com\docker\docker" # Redirect to a temporary location. $TEMPORIG=$env:TEMP if ($null -eq $env:BUILD_NUMBER) { $env:TEMP="$env:TESTRUN_DRIVE`:\$env:TESTRUN_SUBDIR\CI-$COMMITHASH" } else { # individual temporary location per CI build that better matches the BUILD_URL $env:TEMP="$env:TESTRUN_DRIVE`:\$env:TESTRUN_SUBDIR\$env:BRANCH_NAME\$env:BUILD_NUMBER" } $env:LOCALAPPDATA="$env:TEMP\localappdata" $errorActionPreference='Stop' New-Item -ItemType Directory "$env:TEMP" -ErrorAction SilentlyContinue | Out-Null New-Item -ItemType Directory "$env:TEMP\userprofile" -ErrorAction SilentlyContinue | Out-Null New-Item -ItemType Directory "$env:TEMP\testresults" -ErrorAction SilentlyContinue | Out-Null New-Item -ItemType Directory "$env:TEMP\testresults\unittests" -ErrorAction SilentlyContinue | Out-Null New-Item -ItemType Directory "$env:TEMP\localappdata" -ErrorAction SilentlyContinue | Out-Null New-Item -ItemType Directory "$env:TEMP\binary" -ErrorAction SilentlyContinue | Out-Null New-Item -ItemType Directory "$env:TEMP\installer" -ErrorAction SilentlyContinue | Out-Null if ($null -eq $env:SKIP_COPY_GO) { # Wipe the previous version of GO - we're going to get it out of the image if (Test-Path "$env:TEMP\go") { Remove-Item "$env:TEMP\go" -Recurse -Force -ErrorAction SilentlyContinue | Out-Null } New-Item -ItemType Directory "$env:TEMP\go" -ErrorAction SilentlyContinue | Out-Null } Write-Host -ForegroundColor Green "INFO: Location for testing is $env:TEMP" # CI Integrity check - ensure Dockerfile.windows and Dockerfile go versions match $goVersionDockerfileWindows=(Select-String -Path ".\Dockerfile.windows" -Pattern "^ARG[\s]+GO_VERSION=(.*)$").Matches.groups[1].Value $goVersionDockerfile=(Select-String -Path ".\Dockerfile" -Pattern "^ARG[\s]+GO_VERSION=(.*)$").Matches.groups[1].Value if ($null -eq $goVersionDockerfile) { Throw "ERROR: Failed to extract golang version from Dockerfile" } Write-Host -ForegroundColor Green "INFO: Validating GOLang consistency in Dockerfile.windows..." if (-not ($goVersionDockerfile -eq $goVersionDockerfileWindows)) { Throw "ERROR: Mismatched GO versions between Dockerfile and Dockerfile.windows. Update your PR to ensure that both files are updated and in sync. $goVersionDockerfile $goVersionDockerfileWindows" } # Build the image if ($null -eq $env:SKIP_IMAGE_BUILD) { Write-Host -ForegroundColor Cyan "`n`nINFO: Building the image from Dockerfile.windows at $(Get-Date)..." Write-Host $ErrorActionPreference = "SilentlyContinue" $Duration=$(Measure-Command { docker build --build-arg=GO_VERSION -t docker -f Dockerfile.windows . | Out-Host }) $ErrorActionPreference = "Stop" if (-not($LastExitCode -eq 0)) { Throw "ERROR: Failed to build image from Dockerfile.windows" } Write-Host -ForegroundColor Green "INFO: Image build ended at $(Get-Date). Duration`:$Duration" } else { Write-Host -ForegroundColor Magenta "WARN: Skipping building the docker image" } # Following at the moment must be docker\docker as it's dictated by dockerfile.Windows $contPath="$COMMITHASH`:c`:\gopath\src\github.com\docker\docker\bundles" # After https://github.com/docker/docker/pull/30290, .git was added to .dockerignore. 
Therefore # we have to calculate unsupported outside of the container, and pass the commit ID in through # an environment variable for the binary build $CommitUnsupported="" if ($(git status --porcelain --untracked-files=no).Length -ne 0) { $CommitUnsupported="-unsupported" } # Build the binary in a container unless asked to skip it. if ($null -eq $env:SKIP_BINARY_BUILD) { Write-Host -ForegroundColor Cyan "`n`nINFO: Building the test binaries at $(Get-Date)..." $ErrorActionPreference = "SilentlyContinue" docker rm -f $COMMITHASH 2>&1 | Out-Null if ($CommitUnsupported -ne "") { Write-Host "" Write-Warning "This version is unsupported because there are uncommitted file(s)." Write-Warning "Either commit these changes, or add them to .gitignore." git status --porcelain --untracked-files=no | Write-Warning Write-Host "" } $Duration=$(Measure-Command {docker run --name $COMMITHASH -e DOCKER_GITCOMMIT=$COMMITHASH$CommitUnsupported docker hack\make.ps1 -Daemon -Client | Out-Host }) $ErrorActionPreference = "Stop" if (-not($LastExitCode -eq 0)) { Throw "ERROR: Failed to build binary" } Write-Host -ForegroundColor Green "INFO: Binaries build ended at $(Get-Date). Duration`:$Duration" # Copy the binaries and the generated version_autogen.go out of the container $ErrorActionPreference = "SilentlyContinue" docker cp "$contPath\docker.exe" $env:TEMP\binary\ if (-not($LastExitCode -eq 0)) { Throw "ERROR: Failed to docker cp the client binary (docker.exe) to $env:TEMP\binary" } docker cp "$contPath\dockerd.exe" $env:TEMP\binary\ if (-not($LastExitCode -eq 0)) { Throw "ERROR: Failed to docker cp the daemon binary (dockerd.exe) to $env:TEMP\binary" } docker cp "$COMMITHASH`:c`:\gopath\bin\gotestsum.exe" $env:TEMP\binary\ if (-not (Test-Path "$env:TEMP\binary\gotestsum.exe")) { Throw "ERROR: gotestsum.exe not found...." ` } $ErrorActionPreference = "Stop" # Copy the built dockerd.exe to dockerd-$COMMITHASH.exe so that easily spotted in task manager. Write-Host -ForegroundColor Green "INFO: Copying the built daemon binary to $env:TEMP\binary\dockerd-$COMMITHASH.exe..." Copy-Item $env:TEMP\binary\dockerd.exe $env:TEMP\binary\dockerd-$COMMITHASH.exe -Force -ErrorAction SilentlyContinue # Copy the built docker.exe to docker-$COMMITHASH.exe Write-Host -ForegroundColor Green "INFO: Copying the built client binary to $env:TEMP\binary\docker-$COMMITHASH.exe..." Copy-Item $env:TEMP\binary\docker.exe $env:TEMP\binary\docker-$COMMITHASH.exe -Force -ErrorAction SilentlyContinue } else { Write-Host -ForegroundColor Magenta "WARN: Skipping building the binaries" } Write-Host -ForegroundColor Green "INFO: Copying dockerversion from the container..." $ErrorActionPreference = "SilentlyContinue" docker cp "$contPath\..\dockerversion\version_autogen.go" "$env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR\src\github.com\docker\docker\dockerversion" if (-not($LastExitCode -eq 0)) { Throw "ERROR: Failed to docker cp the generated version_autogen.go to $env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR\src\github.com\docker\docker\dockerversion" } $ErrorActionPreference = "Stop" # Grab the golang installer out of the built image. That way, we know we are consistent once extracted and paths set, # so there's no need to re-deploy on account of an upgrade to the version of GO being used in docker. if ($null -eq $env:SKIP_COPY_GO) { Write-Host -ForegroundColor Green "INFO: Copying the golang package from the container to $env:TEMP\installer\go.zip..." 
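# A possible manual sanity check (illustrative only) once the archive has been extracted
# further down is to compare the toolchain version against the Dockerfile, e.g.:
#   & "$env:TEMP\go\bin\go.exe" version
#   (Select-String -Path ".\Dockerfile.windows" -Pattern "^ARG[\s]+GO_VERSION=(.*)$").Matches.Groups[1].Value
# The Select-String pattern is the same one used for the CI integrity check earlier in this script.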
docker cp "$COMMITHASH`:c`:\go.zip" $env:TEMP\installer\ if (-not($LastExitCode -eq 0)) { Throw "ERROR: Failed to docker cp the golang installer 'go.zip' from container:c:\go.zip to $env:TEMP\installer" } $ErrorActionPreference = "Stop" # Extract the golang installer Write-Host -ForegroundColor Green "INFO: Extracting go.zip to $env:TEMP\go" $Duration=$(Measure-Command { Expand-Archive $env:TEMP\installer\go.zip $env:TEMP -Force | Out-Null}) Write-Host -ForegroundColor Green "INFO: Extraction ended at $(Get-Date). Duration`:$Duration" } else { Write-Host -ForegroundColor Magenta "WARN: Skipping copying and extracting golang from the image" } # Set the GOPATH Write-Host -ForegroundColor Green "INFO: Updating the golang and path environment variables" $env:GOPATH="$env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR" Write-Host -ForegroundColor Green "INFO: GOPATH=$env:GOPATH" # Set the path to have the version of go from the image at the front $env:PATH="$env:TEMP\go\bin;$env:PATH" # Set the GOROOT to be our copy of go from the image $env:GOROOT="$env:TEMP\go" Write-Host -ForegroundColor Green "INFO: $(go version)" # Work out the -H parameter for the daemon under test (DASHH_DUT) and client under test (DASHH_CUT) #$DASHH_DUT="npipe:////./pipe/$COMMITHASH" # Can't do remote named pipe #$ip = (resolve-dnsname $env:COMPUTERNAME -type A -NoHostsFile -LlmnrNetbiosOnly).IPAddress # Useful to tie down $DASHH_CUT="tcp://127.0.0.1`:2357" # Not a typo for 2375! $DASHH_DUT="tcp://0.0.0.0:2357" # Not a typo for 2375! # Arguments for the daemon under test $dutArgs=@() $dutArgs += "-H $DASHH_DUT" $dutArgs += "--data-root $env:TEMP\daemon" $dutArgs += "--pidfile $env:TEMP\docker.pid" # Save the PID file so we can nuke it if set $pidFile="$env:TEMP\docker.pid" # Arguments: Are we starting the daemon under test in debug mode? if (-not ("$env:DOCKER_DUT_DEBUG" -eq "")) { Write-Host -ForegroundColor Green "INFO: Running the daemon under test in debug mode" $dutArgs += "-D" } # Arguments: Are we starting the daemon under test with Hyper-V containers as the default isolation? if (-not ("$env:DOCKER_DUT_HYPERV" -eq "")) { Write-Host -ForegroundColor Green "INFO: Running the daemon under test with Hyper-V containers as the default" $dutArgs += "--exec-opt isolation=hyperv" } # Arguments: Allow setting optional storage-driver options # example usage: DDOCKER_STORAGE_OPTS="size=40G" if (-not ("$env:DOCKER_STORAGE_OPTS" -eq "")) { Write-Host -ForegroundColor Green "INFO: Running the daemon under test with storage-driver options ${env:DOCKER_STORAGE_OPTS}" $env:DOCKER_STORAGE_OPTS.Split(",") | ForEach-Object { $dutArgs += "--storage-opt $_" } } # Start the daemon under test, ensuring everything is redirected to folders under $TEMP. # Important - we launch the -$COMMITHASH version so that we can kill it without # killing the control daemon. Write-Host -ForegroundColor Green "INFO: Starting a daemon under test..." Write-Host -ForegroundColor Green "INFO: Args: $dutArgs" New-Item -ItemType Directory $env:TEMP\daemon -ErrorAction SilentlyContinue | Out-Null # Cannot fathom why, but always writes to stderr.... Start-Process "$env:TEMP\binary\dockerd-$COMMITHASH" ` -ArgumentList $dutArgs ` -RedirectStandardOutput "$env:TEMP\dut.out" ` -RedirectStandardError "$env:TEMP\dut.err" Write-Host -ForegroundColor Green "INFO: Process started successfully." 
$daemonStarted=1 # Start tailing the daemon under test if the command is installed if ($null -ne (Get-Command "tail" -ErrorAction SilentlyContinue)) { Write-Host -ForegroundColor green "INFO: Start tailing logs of the daemon under tests" $tail = Start-Process "tail" -ArgumentList "-f $env:TEMP\dut.out" -PassThru -ErrorAction SilentlyContinue } # Verify we can get the daemon under test to respond $tries=20 Write-Host -ForegroundColor Green "INFO: Waiting for the daemon under test to start..." while ($true) { $ErrorActionPreference = "SilentlyContinue" & "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" version 2>&1 | Out-Null $ErrorActionPreference = "Stop" if ($LastExitCode -eq 0) { break } $tries-- if ($tries -le 0) { Throw "ERROR: Failed to get a response from the daemon under test" } Write-Host -NoNewline "." sleep 1 } Write-Host -ForegroundColor Green "INFO: Daemon under test started and replied!" # Provide the docker version of the daemon under test for debugging purposes. Write-Host -ForegroundColor Green "INFO: Docker version of the daemon under test" Write-Host $ErrorActionPreference = "SilentlyContinue" & "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" version $ErrorActionPreference = "Stop" if ($LastExitCode -ne 0) { Throw "ERROR: The daemon under test does not appear to be running." } Write-Host # Same as above but docker info Write-Host -ForegroundColor Green "INFO: Docker info of the daemon under test" Write-Host $ErrorActionPreference = "SilentlyContinue" & "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" info $ErrorActionPreference = "Stop" if ($LastExitCode -ne 0) { Throw "ERROR: The daemon under test does not appear to be running." } Write-Host # Same as above but docker images Write-Host -ForegroundColor Green "INFO: Docker images of the daemon under test" Write-Host $ErrorActionPreference = "SilentlyContinue" & "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" images $ErrorActionPreference = "Stop" if ($LastExitCode -ne 0) { Throw "ERROR: The daemon under test does not appear to be running." } Write-Host # Default to windowsservercore for the base image used for the tests. The "docker" image # and the control daemon use microsoft/windowsservercore regardless. This is *JUST* for the tests. if ($null -eq $env:WINDOWS_BASE_IMAGE) { $env:WINDOWS_BASE_IMAGE="microsoft/windowsservercore" } if ($null -eq $env:WINDOWS_BASE_IMAGE_TAG) { $env:WINDOWS_BASE_IMAGE_TAG="latest" } # Lowercase and make sure it has a microsoft/ prefix $env:WINDOWS_BASE_IMAGE = $env:WINDOWS_BASE_IMAGE.ToLower() if (! $($env:WINDOWS_BASE_IMAGE -Split "/")[0] -match "microsoft") { Throw "ERROR: WINDOWS_BASE_IMAGE should start microsoft/ or mcr.microsoft.com/" } Write-Host -ForegroundColor Green "INFO: Base image for tests is $env:WINDOWS_BASE_IMAGE" $ErrorActionPreference = "SilentlyContinue" if ($((& "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" images --format "{{.Repository}}:{{.Tag}}" | Select-String "$($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG" | Measure-Object -Line).Lines) -eq 0) { # Try the internal azure CI image version or Microsoft internal corpnet where the base image is already pre-prepared on the disk, # either through Invoke-DockerCI or, in the case of Azure CI servers, baked into the VHD at the same location. if (Test-Path $("c:\baseimages\"+$($env:WINDOWS_BASE_IMAGE -Split "/")[1]+".tar")) { Write-Host -ForegroundColor Green "INFO: Loading"$($env:WINDOWS_BASE_IMAGE -Split "/")[1]".tar from disk into the daemon under test. 
This may take some time..." $ErrorActionPreference = "SilentlyContinue" & "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" load -i $("$readBaseFrom`:\baseimages\"+$($env:WINDOWS_BASE_IMAGE -Split "/")[1]+".tar") $ErrorActionPreference = "Stop" if (-not $LastExitCode -eq 0) { Throw $("ERROR: Failed to load $readBaseFrom`:\baseimages\"+$($env:WINDOWS_BASE_IMAGE -Split "/")[1]+".tar into daemon under test") } Write-Host -ForegroundColor Green "INFO: docker load of"$($env:WINDOWS_BASE_IMAGE -Split "/")[1]" into daemon under test completed successfully" } else { # We need to docker pull it instead. It will come in directly as microsoft/imagename:tagname Write-Host -ForegroundColor Green $("INFO: Pulling "+$env:WINDOWS_BASE_IMAGE+":"+$env:WINDOWS_BASE_IMAGE_TAG+" from docker hub into daemon under test. This may take some time...") $ErrorActionPreference = "SilentlyContinue" & "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" pull "$($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG" $ErrorActionPreference = "Stop" if (-not $LastExitCode -eq 0) { Throw $("ERROR: Failed to docker pull $($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG into daemon under test.") } Write-Host -ForegroundColor Green $("INFO: docker pull of $($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG into daemon under test completed successfully") Write-Host -ForegroundColor Green $("INFO: Tagging $($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG as microsoft/$ControlDaemonBaseImage in daemon under test") & "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" tag "$($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG" microsoft/$ControlDaemonBaseImage } } else { Write-Host -ForegroundColor Green "INFO: Image $($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG is already loaded in the daemon under test" } # Inspect the pulled or loaded image to get the version directly $ErrorActionPreference = "SilentlyContinue" $dutimgVersion = $(&"$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" inspect "$($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG" --format "{{.OsVersion}}") $ErrorActionPreference = "Stop" Write-Host -ForegroundColor Green $("INFO: Version of $($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG is '"+$dutimgVersion+"'") # Run the validation tests unless SKIP_VALIDATION_TESTS is defined. if ($null -eq $env:SKIP_VALIDATION_TESTS) { Write-Host -ForegroundColor Cyan "INFO: Running validation tests at $(Get-Date)..." $ErrorActionPreference = "SilentlyContinue" $Duration=$(Measure-Command { hack\make.ps1 -DCO -GoFormat -PkgImports | Out-Host }) $ErrorActionPreference = "Stop" if (-not($LastExitCode -eq 0)) { Throw "ERROR: Validation tests failed" } Write-Host -ForegroundColor Green "INFO: Validation tests ended at $(Get-Date). Duration`:$Duration" } else { Write-Host -ForegroundColor Magenta "WARN: Skipping validation tests" } # Run the unit tests inside a container unless SKIP_UNIT_TESTS is defined if ($null -eq $env:SKIP_UNIT_TESTS) { $ContainerNameForUnitTests = $COMMITHASH + "_UnitTests" Write-Host -ForegroundColor Cyan "INFO: Running unit tests at $(Get-Date)..." $ErrorActionPreference = "SilentlyContinue" $Duration=$(Measure-Command {docker run --name $ContainerNameForUnitTests -e DOCKER_GITCOMMIT=$COMMITHASH$CommitUnsupported docker hack\make.ps1 -TestUnit | Out-Host }) $TestRunExitCode = $LastExitCode $ErrorActionPreference = "Stop" # Saving where jenkins will take a look at..... 
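# The junit-report-unit-tests.xml produced inside the unit-test container is copied out to the local
# .\bundles directory below so that Jenkins can archive it as a test-result artifact.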
New-Item -Force -ItemType Directory bundles | Out-Null $unitTestsContPath="$ContainerNameForUnitTests`:c`:\gopath\src\github.com\docker\docker\bundles" $JunitExpectedContFilePath = "$unitTestsContPath\junit-report-unit-tests.xml" docker cp $JunitExpectedContFilePath "bundles" if (-not($LastExitCode -eq 0)) { Throw "ERROR: Failed to docker cp the unit tests report ($JunitExpectedContFilePath) to bundles" } if (Test-Path "bundles\junit-report-unit-tests.xml") { Write-Host -ForegroundColor Magenta "INFO: Unit tests results(bundles\junit-report-unit-tests.xml) exist. pwd=$pwd" } else { Write-Host -ForegroundColor Magenta "ERROR: Unit tests results(bundles\junit-report-unit-tests.xml) do not exist. pwd=$pwd" } if (-not($TestRunExitCode -eq 0)) { Throw "ERROR: Unit tests failed" } Write-Host -ForegroundColor Green "INFO: Unit tests ended at $(Get-Date). Duration`:$Duration" } else { Write-Host -ForegroundColor Magenta "WARN: Skipping unit tests" } # Add the Windows busybox image. Needed for WCOW integration tests if ($null -eq $env:SKIP_INTEGRATION_TESTS) { Write-Host -ForegroundColor Green "INFO: Building busybox" $ErrorActionPreference = "SilentlyContinue" $(& "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" build -t busybox --build-arg WINDOWS_BASE_IMAGE --build-arg WINDOWS_BASE_IMAGE_TAG "$env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR\src\github.com\docker\docker\contrib\busybox\" | Out-Host) $ErrorActionPreference = "Stop" if (-not($LastExitCode -eq 0)) { Throw "ERROR: Failed to build busybox image" } Write-Host -ForegroundColor Green "INFO: Docker images of the daemon under test" Write-Host $ErrorActionPreference = "SilentlyContinue" & "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" images $ErrorActionPreference = "Stop" if ($LastExitCode -ne 0) { Throw "ERROR: The daemon under test does not appear to be running." } Write-Host } # Run the WCOW integration tests unless SKIP_INTEGRATION_TESTS is defined if ($null -eq $env:SKIP_INTEGRATION_TESTS) { Write-Host -ForegroundColor Cyan "INFO: Running integration tests at $(Get-Date)..." $ErrorActionPreference = "SilentlyContinue" # Location of the daemon under test. $env:OrigDOCKER_HOST="$env:DOCKER_HOST" #https://blogs.technet.microsoft.com/heyscriptingguy/2011/09/20/solve-problems-with-external-command-lines-in-powershell/ is useful to see tokenising $jsonFilePath = "..\\bundles\\go-test-report-intcli-tests.json" $xmlFilePath = "..\\bundles\\junit-report-intcli-tests.xml" $c = "gotestsum --format=standard-verbose --jsonfile=$jsonFilePath --junitfile=$xmlFilePath -- " if ($null -ne $env:INTEGRATION_TEST_NAME) { # Makes is quicker for debugging to be able to run only a subset of the integration tests $c += "`"-test.run`" " $c += "`"$env:INTEGRATION_TEST_NAME`" " Write-Host -ForegroundColor Magenta "WARN: Only running integration tests matching $env:INTEGRATION_TEST_NAME" } $c += "`"-tags`" " + "`"autogen`" " $c += "`"-test.timeout`" " + "`"200m`" " if ($null -ne $env:INTEGRATION_IN_CONTAINER) { Write-Host -ForegroundColor Green "INFO: Integration tests being run inside a container" # Note we talk back through the containers gateway address # And the ridiculous lengths we have to go to get the default gateway address... (GetNetIPConfiguration doesn't work in nanoserver) # I just could not get the escaping to work in a single command, so output $c to a file and run that in the container instead... # Not the prettiest, but it works. 
$c | Out-File -Force "$env:TEMP\binary\runIntegrationCLI.ps1" $Duration= $(Measure-Command { & docker run ` --rm ` -e c=$c ` --workdir "c`:\gopath\src\github.com\docker\docker\integration-cli" ` -v "$env:TEMP\binary`:c:\target" ` docker ` "`$env`:PATH`='c`:\target;'+`$env:PATH`; `$env:DOCKER_HOST`='tcp`://'+(ipconfig | select -last 1).Substring(39)+'`:2357'; c:\target\runIntegrationCLI.ps1" | Out-Host } ) } else { $env:DOCKER_HOST=$DASHH_CUT $env:PATH="$env:TEMP\binary;$env:PATH;" # Force to use the test binaries, not the host ones. $env:GO111MODULE="off" Write-Host -ForegroundColor Green "INFO: DOCKER_HOST at $DASHH_CUT" $ErrorActionPreference = "SilentlyContinue" Write-Host -ForegroundColor Cyan "INFO: Integration API tests being run from the host:" $start=(Get-Date); Invoke-Expression ".\hack\make.ps1 -TestIntegration"; $Duration=New-Timespan -Start $start -End (Get-Date) $IntTestsRunResult = $LastExitCode $ErrorActionPreference = "Stop" if (-not($IntTestsRunResult -eq 0)) { Throw "ERROR: Integration API tests failed at $(Get-Date). Duration`:$Duration" } $ErrorActionPreference = "SilentlyContinue" Write-Host -ForegroundColor Green "INFO: Integration CLI tests being run from the host:" Write-Host -ForegroundColor Green "INFO: $c" Set-Location "$env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR\src\github.com\docker\docker\integration-cli" # Explicit to not use measure-command otherwise don't get output as it goes $start=(Get-Date); Invoke-Expression $c; $Duration=New-Timespan -Start $start -End (Get-Date) } $ErrorActionPreference = "Stop" if (-not($LastExitCode -eq 0)) { Throw "ERROR: Integration CLI tests failed at $(Get-Date). Duration`:$Duration" } Write-Host -ForegroundColor Green "INFO: Integration tests ended at $(Get-Date). Duration`:$Duration" } else { Write-Host -ForegroundColor Magenta "WARN: Skipping integration tests" } # Docker info now to get counts (after or if jjh/containercounts is merged) if ($daemonStarted -eq 1) { Write-Host -ForegroundColor Green "INFO: Docker info of the daemon under test at end of run" Write-Host $ErrorActionPreference = "SilentlyContinue" & "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" info $ErrorActionPreference = "Stop" if ($LastExitCode -ne 0) { Throw "ERROR: The daemon under test does not appear to be running." } Write-Host } # Stop the daemon under test if (Test-Path "$env:TEMP\docker.pid") { $p=Get-Content "$env:TEMP\docker.pid" -raw if (($null -ne $p) -and ($daemonStarted -eq 1)) { Write-Host -ForegroundColor green "INFO: Stopping daemon under test" taskkill -f -t -pid $p #sleep 5 } Remove-Item "$env:TEMP\docker.pid" -force -ErrorAction SilentlyContinue } # Stop the tail process (if started) if ($null -ne $tail) { Write-Host -ForegroundColor green "INFO: Stop tailing logs of the daemon under tests" Stop-Process -InputObject $tail -Force } Write-Host -ForegroundColor Green "INFO: executeCI.ps1 Completed successfully at $(Get-Date)." } Catch [Exception] { $FinallyColour="Red" Write-Host -ForegroundColor Red ("`r`n`r`nERROR: Failed '$_' at $(Get-Date)") Write-Host -ForegroundColor Red ($_.InvocationInfo.PositionMessage) Write-Host "`n`n" # Exit to ensure Jenkins captures it. Don't do this in the ISE or interactive Powershell - they will catch the Throw onwards. 
if ( ([bool]([Environment]::GetCommandLineArgs() -Like '*-NonInteractive*')) -and ` ([bool]([Environment]::GetCommandLineArgs() -NotLike "*Powershell_ISE.exe*"))) { exit 1 } Throw $_ } Finally { $ErrorActionPreference="SilentlyContinue" $global:ProgressPreference=$origProgressPreference Write-Host -ForegroundColor Green "INFO: Tidying up at end of run" # Restore the path if ($null -ne $origPath) { $env:PATH=$origPath } # Restore the DOCKER_HOST if ($null -ne $origDOCKER_HOST) { $env:DOCKER_HOST=$origDOCKER_HOST } # Restore the GOROOT and GOPATH variables if ($null -ne $origGOROOT) { $env:GOROOT=$origGOROOT } if ($null -ne $origGOPATH) { $env:GOPATH=$origGOPATH } # Dump the daemon log. This will include any possible panic stack in the .err. if (($daemonStarted -eq 1) -and ($(Get-Item "$env:TEMP\dut.err").Length -gt 0)) { Write-Host -ForegroundColor Cyan "----------- DAEMON LOG ------------" Get-Content "$env:TEMP\dut.err" -ErrorAction SilentlyContinue | Write-Host -ForegroundColor Cyan Write-Host -ForegroundColor Cyan "----------- END DAEMON LOG --------" } # Save the daemon under test log if ($daemonStarted -eq 1) { Set-Location "$env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR\src\github.com\docker\docker" Write-Host -ForegroundColor Green "INFO: Saving daemon under test log ($env:TEMP\dut.out) to bundles\CIDUT.out" Copy-Item "$env:TEMP\dut.out" "bundles\CIDUT.out" -Force -ErrorAction SilentlyContinue Write-Host -ForegroundColor Green "INFO: Saving daemon under test log ($env:TEMP\dut.err) to bundles\CIDUT.err" Copy-Item "$env:TEMP\dut.err" "bundles\CIDUT.err" -Force -ErrorAction SilentlyContinue } Set-Location "$env:SOURCES_DRIVE\$env:SOURCES_SUBDIR" -ErrorAction SilentlyContinue Nuke-Everything # Restore the TEMP path if ($null -ne $TEMPORIG) { $env:TEMP="$TEMPORIG" } $Dur=New-TimeSpan -Start $StartTime -End $(Get-Date) Write-Host -ForegroundColor $FinallyColour "`nINFO: executeCI.ps1 exiting at $(date). Duration $dur`n" }
pr_author: thaJeztah
previous_commit: 3ad9549e70bdf45b40c6332b221cd5c7fd635524
pr_commit: 51b06c6795160d8a1ba05d05d6491df7588b2957
comment: Why this change?
comment_author: cpuguy83
__index_level_0__: 4,518

repo_name: moby/moby
pr_number: 42,683
pr_title: Remove LCOW (step 6)
pr_description: Splitting off more bits from https://github.com/moby/moby/pull/42170
author: null
date_created: 2021-07-27 11:33:51+00:00
date_merged: 2021-07-29 18:34:29+00:00
filepath: hack/ci/windows.ps1
before_content (hack/ci/windows.ps1 before the change):
# WARNING: When editing this file, consider submitting a PR to # https://github.com/kevpar/docker-w2wCIScripts/blob/master/runCI/executeCI.ps1, and make sure that # https://github.com/kevpar/docker-w2wCIScripts/blob/master/runCI/Invoke-DockerCI.ps1 isn't broken. # Validate using a test context in Jenkins, then copy/paste into Jenkins production. # # Jenkins CI scripts for Windows to Windows CI (Powershell Version) # By John Howard (@jhowardmsft) January 2016 - bash version; July 2016 Ported to PowerShell $ErrorActionPreference = 'Stop' $StartTime=Get-Date Write-Host -ForegroundColor Red "DEBUG: print all environment variables to check how Jenkins runs this script" $allArgs = [Environment]::GetCommandLineArgs() Write-Host -ForegroundColor Red $allArgs Write-Host -ForegroundColor Red "----------------------------------------------------------------------------" # ------------------------------------------------------------------------------------------- # When executed, we rely on four variables being set in the environment: # # [The reason for being environment variables rather than parameters is historical. No reason # why it couldn't be updated.] # # SOURCES_DRIVE is the drive on which the sources being tested are cloned from. # This should be a straight drive letter, no platform semantics. # For example 'c' # # SOURCES_SUBDIR is the top level directory under SOURCES_DRIVE where the # sources are cloned to. There are no platform semantics in this # as it does not include slashes. # For example 'gopath' # # Based on the above examples, it would be expected that Jenkins # would clone the sources being tested to # SOURCES_DRIVE\SOURCES_SUBDIR\src\github.com\docker\docker, or # c:\gopath\src\github.com\docker\docker # # TESTRUN_DRIVE is the drive where we build the binary on and redirect everything # to for the daemon under test. On an Azure D2 type host which has # an SSD temporary storage D: drive, this is ideal for performance. # For example 'd' # # TESTRUN_SUBDIR is the top level directory under TESTRUN_DRIVE where we redirect # everything to for the daemon under test. For example 'CI'. # Hence, the daemon under test is run under # TESTRUN_DRIVE\TESTRUN_SUBDIR\CI-<CommitID> or # d:\CI\CI-<CommitID> # # Optional environment variables help in CI: # # BUILD_NUMBER + BRANCH_NAME are optional variables to be added to the directory below TESTRUN_SUBDIR # to have individual folder per CI build. If some files couldn't be # cleaned up and we want to re-run the build in CI. # Hence, the daemon under test is run under # TESTRUN_DRIVE\TESTRUN_SUBDIR\PR-<PR-Number>\<BuildNumber> or # d:\CI\PR-<PR-Number>\<BuildNumber> # # In addition, the following variables can control the run configuration: # # DOCKER_DUT_DEBUG if defined starts the daemon under test in debug mode. 
# # DOCKER_STORAGE_OPTS comma-separated list of optional storage driver options for the daemon under test # examples: # DOCKER_STORAGE_OPTS="size=40G" # DOCKER_STORAGE_OPTS="lcow.globalmode=false,lcow.kernel=kernel.efi" # # SKIP_VALIDATION_TESTS if defined skips the validation tests # # SKIP_UNIT_TESTS if defined skips the unit tests # # SKIP_INTEGRATION_TESTS if defined skips the integration tests # # SKIP_COPY_GO if defined skips copy the go installer from the image # # DOCKER_DUT_HYPERV if default daemon under test default isolation is hyperv # # INTEGRATION_TEST_NAME to only run partial tests eg "TestInfo*" will only run # any tests starting "TestInfo" # # SKIP_BINARY_BUILD if defined skips building the binary # # SKIP_ZAP_DUT if defined doesn't zap the daemon under test directory # # SKIP_IMAGE_BUILD if defined doesn't build the 'docker' image # # INTEGRATION_IN_CONTAINER if defined, runs the integration tests from inside a container. # As of July 2016, there are known issues with this. # # SKIP_ALL_CLEANUP if defined, skips any cleanup at the start or end of the run # # WINDOWS_BASE_IMAGE if defined, uses that as the base image. Note that the # docker integration tests are also coded to use the same # environment variable, and if no set, defaults to microsoft/windowsservercore # # WINDOWS_BASE_IMAGE_TAG if defined, uses that as the tag name for the base image. # if no set, defaults to latest # # ------------------------------------------------------------------------------------------- # # Jenkins Integration. Add a Windows Powershell build step as follows: # # Write-Host -ForegroundColor green "INFO: Jenkins build step starting" # $CISCRIPT_DEFAULT_LOCATION = "https://raw.githubusercontent.com/moby/moby/master/hack/ci/windows.ps1" # $CISCRIPT_LOCAL_LOCATION = "$env:TEMP\executeCI.ps1" # Write-Host -ForegroundColor green "INFO: Removing cached execution script" # Remove-Item $CISCRIPT_LOCAL_LOCATION -Force -ErrorAction SilentlyContinue 2>&1 | Out-Null # $wc = New-Object net.webclient # try { # Write-Host -ForegroundColor green "INFO: Downloading latest execution script..." # $wc.Downloadfile($CISCRIPT_DEFAULT_LOCATION, $CISCRIPT_LOCAL_LOCATION) # } # catch [System.Net.WebException] # { # Throw ("Failed to download: $_") # } # & $CISCRIPT_LOCAL_LOCATION # ------------------------------------------------------------------------------------------- $SCRIPT_VER="05-Feb-2019 09:03 PDT" $FinallyColour="Cyan" #$env:DOCKER_DUT_DEBUG="yes" # Comment out to not be in debug mode #$env:SKIP_UNIT_TESTS="yes" #$env:SKIP_VALIDATION_TESTS="yes" #$env:SKIP_ZAP_DUT="" #$env:SKIP_BINARY_BUILD="yes" #$env:INTEGRATION_TEST_NAME="" #$env:SKIP_IMAGE_BUILD="yes" #$env:SKIP_ALL_CLEANUP="yes" #$env:INTEGRATION_IN_CONTAINER="yes" #$env:WINDOWS_BASE_IMAGE="" #$env:SKIP_COPY_GO="yes" #$env:INTEGRATION_TESTFLAGS="-test.v" Function Nuke-Everything { $ErrorActionPreference = 'SilentlyContinue' try { if ($null -eq $env:SKIP_ALL_CLEANUP) { Write-Host -ForegroundColor green "INFO: Nuke-Everything..." 
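# The cleanup below force-removes all containers on the control daemon, deletes every image other than
# the servercore/nanoserver/docker/busybox base set, kills stray dockerd-* daemons and leftover
# build/test processes, detaches mounted VHDs, stops compute processes and, where configured, zaps the
# test-run directory with docker-ci-zap.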
$containerCount = ($(docker ps -aq | Measure-Object -line).Lines) if (-not $LastExitCode -eq 0) { Throw "ERROR: Failed to get container count from control daemon while nuking" } Write-Host -ForegroundColor green "INFO: Container count on control daemon to delete is $containerCount" if ($(docker ps -aq | Measure-Object -line).Lines -gt 0) { docker rm -f $(docker ps -aq) } $allImages = $(docker images --format "{{.Repository}}#{{.ID}}") $toRemove = ($allImages | Select-String -NotMatch "servercore","nanoserver","docker","busybox") $imageCount = ($toRemove | Measure-Object -line).Lines if ($imageCount -gt 0) { Write-Host -Foregroundcolor green "INFO: Non-base image count on control daemon to delete is $imageCount" docker rmi -f ($toRemove | Foreach-Object { $_.ToString().Split("#")[1] }) } } else { Write-Host -ForegroundColor Magenta "WARN: Skipping cleanup of images and containers" } # Kill any spurious daemons. The '-' is IMPORTANT otherwise will kill the control daemon! $pids=$(get-process | where-object {$_.ProcessName -like 'dockerd-*'}).id foreach ($p in $pids) { Write-Host "INFO: Killing daemon with PID $p" Stop-Process -Id $p -Force -ErrorAction SilentlyContinue } if ($null -ne $pidFile) { Write-Host "INFO: Tidying pidfile $pidfile" if (Test-Path $pidFile) { $p=Get-Content $pidFile -raw if ($null -ne $p){ Write-Host -ForegroundColor green "INFO: Stopping possible daemon pid $p" taskkill -f -t -pid $p } Remove-Item "$env:TEMP\docker.pid" -force -ErrorAction SilentlyContinue } } Stop-Process -name "cc1" -Force -ErrorAction SilentlyContinue 2>&1 | Out-Null Stop-Process -name "link" -Force -ErrorAction SilentlyContinue 2>&1 | Out-Null Stop-Process -name "compile" -Force -ErrorAction SilentlyContinue 2>&1 | Out-Null Stop-Process -name "ld" -Force -ErrorAction SilentlyContinue 2>&1 | Out-Null Stop-Process -name "go" -Force -ErrorAction SilentlyContinue 2>&1 | Out-Null Stop-Process -name "git" -Force -ErrorAction SilentlyContinue 2>&1 | Out-Null Stop-Process -name "git-remote-https" -Force -ErrorAction SilentlyContinue 2>&1 | Out-Null Stop-Process -name "integration-cli.test" -Force -ErrorAction SilentlyContinue 2>&1 | Out-Null Stop-Process -name "tail" -Force -ErrorAction SilentlyContinue 2>&1 | Out-Null # Detach any VHDs gwmi msvm_mountedstorageimage -namespace root/virtualization/v2 -ErrorAction SilentlyContinue | foreach-object {$_.DetachVirtualHardDisk() } # Stop any compute processes Get-ComputeProcess | Stop-ComputeProcess -Force # Delete the directory using our dangerous utility unless told not to if (Test-Path "$env:TESTRUN_DRIVE`:\$env:TESTRUN_SUBDIR") { if (($null -ne $env:SKIP_ZAP_DUT) -or ($null -eq $env:SKIP_ALL_CLEANUP)) { Write-Host -ForegroundColor Green "INFO: Nuking $env:TESTRUN_DRIVE`:\$env:TESTRUN_SUBDIR" docker-ci-zap "-folder=$env:TESTRUN_DRIVE`:\$env:TESTRUN_SUBDIR" } else { Write-Host -ForegroundColor Magenta "WARN: Skip nuking $env:TESTRUN_DRIVE`:\$env:TESTRUN_SUBDIR" } } # TODO: This should be able to be removed in August 2017 update. Only needed for RS1 Production Server workaround - Psched $reg = "HKLM:\System\CurrentControlSet\Services\Psched\Parameters\NdisAdapters" $count=(Get-ChildItem $reg | Measure-Object).Count if ($count -gt 0) { Write-Warning "There are $count NdisAdapters leaked under Psched\Parameters" Write-Warning "Cleaning Psched..." Get-ChildItem $reg | Remove-Item -Recurse -Force -ErrorAction SilentlyContinue | Out-Null } # TODO: This should be able to be removed in August 2017 update. 
Only needed for RS1 $reg = "HKLM:\System\CurrentControlSet\Services\WFPLWFS\Parameters\NdisAdapters" $count=(Get-ChildItem $reg | Measure-Object).Count if ($count -gt 0) { Write-Warning "There are $count NdisAdapters leaked under WFPLWFS\Parameters" Write-Warning "Cleaning WFPLWFS..." Get-ChildItem $reg | Remove-Item -Recurse -Force -ErrorAction SilentlyContinue | Out-Null } } catch { # Don't throw any errors onwards Throw $_ } } Try { Write-Host -ForegroundColor Cyan "`nINFO: executeCI.ps1 starting at $(date)`n" Write-Host -ForegroundColor Green "INFO: Script version $SCRIPT_VER" Set-PSDebug -Trace 0 # 1 to turn on $origPath="$env:PATH" # so we can restore it at the end $origDOCKER_HOST="$DOCKER_HOST" # So we can restore it at the end $origGOROOT="$env:GOROOT" # So we can restore it at the end $origGOPATH="$env:GOPATH" # So we can restore it at the end # Turn off progress bars $origProgressPreference=$global:ProgressPreference $global:ProgressPreference='SilentlyContinue' # Git version Write-Host -ForegroundColor Green "INFO: Running $(git version)" # OS Version $bl=(Get-ItemProperty -Path "Registry::HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion" -Name BuildLabEx).BuildLabEx $a=$bl.ToString().Split(".") $Branch=$a[3] $WindowsBuild=$a[0]+"."+$a[1]+"."+$a[4] Write-Host -ForegroundColor green "INFO: Branch:$Branch Build:$WindowsBuild" # List the environment variables Write-Host -ForegroundColor green "INFO: Environment variables:" Get-ChildItem Env: | Out-String # PR if (-not ($null -eq $env:PR)) { Write-Output "INFO: PR#$env:PR (https://github.com/docker/docker/pull/$env:PR)" } # Make sure docker is installed if ($null -eq (Get-Command "docker" -ErrorAction SilentlyContinue)) { Throw "ERROR: docker is not installed or not found on path" } # Make sure docker-ci-zap is installed if ($null -eq (Get-Command "docker-ci-zap" -ErrorAction SilentlyContinue)) { Throw "ERROR: docker-ci-zap is not installed or not found on path" } # Make sure Windows Defender is disabled $defender = $false Try { $status = Get-MpComputerStatus if ($status) { if ($status.RealTimeProtectionEnabled) { $defender = $true } } } Catch {} if ($defender) { Write-Host -ForegroundColor Magenta "WARN: Windows Defender real time protection is enabled, which may cause some integration tests to fail" } # Make sure SOURCES_DRIVE is set if ($null -eq $env:SOURCES_DRIVE) { Throw "ERROR: Environment variable SOURCES_DRIVE is not set" } # Make sure TESTRUN_DRIVE is set if ($null -eq $env:TESTRUN_DRIVE) { Throw "ERROR: Environment variable TESTRUN_DRIVE is not set" } # Make sure SOURCES_SUBDIR is set if ($null -eq $env:SOURCES_SUBDIR) { Throw "ERROR: Environment variable SOURCES_SUBDIR is not set" } # Make sure TESTRUN_SUBDIR is set if ($null -eq $env:TESTRUN_SUBDIR) { Throw "ERROR: Environment variable TESTRUN_SUBDIR is not set" } # SOURCES_DRIVE\SOURCES_SUBDIR must be a directory and exist if (-not (Test-Path -PathType Container "$env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR")) { Throw "ERROR: $env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR must be an existing directory" } # Create the TESTRUN_DRIVE\TESTRUN_SUBDIR if it does not already exist New-Item -ItemType Directory -Force -Path "$env:TESTRUN_DRIVE`:\$env:TESTRUN_SUBDIR" -ErrorAction SilentlyContinue | Out-Null Write-Host -ForegroundColor Green "INFO: Sources under $env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR\..." Write-Host -ForegroundColor Green "INFO: Test run under $env:TESTRUN_DRIVE`:\$env:TESTRUN_SUBDIR\..." 
# Check the intended source location is a directory if (-not (Test-Path -PathType Container "$env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR\src\github.com\docker\docker" -ErrorAction SilentlyContinue)) { Throw "ERROR: $env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR\src\github.com\docker\docker is not a directory!" } # Make sure we start at the root of the sources Set-Location "$env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR\src\github.com\docker\docker" Write-Host -ForegroundColor Green "INFO: Running in $(Get-Location)" # Make sure we are in repo if (-not (Test-Path -PathType Leaf -Path ".\Dockerfile.windows")) { Throw "$(Get-Location) does not contain Dockerfile.windows!" } Write-Host -ForegroundColor Green "INFO: docker/docker repository was found" # Make sure microsoft/windowsservercore:latest image is installed in the control daemon. On public CI machines, windowsservercore.tar and nanoserver.tar # are pre-baked and tagged appropriately in the c:\baseimages directory, and can be directly loaded. # Note - this script will only work on 10B (Oct 2016) or later machines! Not 9D or previous due to image tagging assumptions. # # On machines not on Microsoft corpnet, or those which have not been pre-baked, we have to docker pull the image in which case it will # will come in directly as microsoft/windowsservercore:latest. The ultimate goal of all this code is to ensure that whatever, # we have microsoft/windowsservercore:latest # # Note we cannot use (as at Oct 2016) nanoserver as the control daemons base image, even if nanoserver is used in the tests themselves. $ErrorActionPreference = "SilentlyContinue" $ControlDaemonBaseImage="windowsservercore" $readBaseFrom="c" if ($((docker images --format "{{.Repository}}:{{.Tag}}" | Select-String $("microsoft/"+$ControlDaemonBaseImage+":latest") | Measure-Object -Line).Lines) -eq 0) { # Try the internal azure CI image version or Microsoft internal corpnet where the base image is already pre-prepared on the disk, # either through Invoke-DockerCI or, in the case of Azure CI servers, baked into the VHD at the same location. if (Test-Path $("$env:SOURCES_DRIVE`:\baseimages\"+$ControlDaemonBaseImage+".tar")) { # An optimization for CI servers to copy it to the D: drive which is an SSD. if ($env:SOURCES_DRIVE -ne $env:TESTRUN_DRIVE) { $readBaseFrom=$env:TESTRUN_DRIVE if (!(Test-Path "$env:TESTRUN_DRIVE`:\baseimages")) { New-Item "$env:TESTRUN_DRIVE`:\baseimages" -type directory | Out-Null } if (!(Test-Path "$env:TESTRUN_DRIVE`:\baseimages\windowsservercore.tar")) { if (Test-Path "$env:SOURCES_DRIVE`:\baseimages\windowsservercore.tar") { Write-Host -ForegroundColor Green "INFO: Optimisation - copying $env:SOURCES_DRIVE`:\baseimages\windowsservercore.tar to $env:TESTRUN_DRIVE`:\baseimages" Copy-Item "$env:SOURCES_DRIVE`:\baseimages\windowsservercore.tar" "$env:TESTRUN_DRIVE`:\baseimages" } } if (!(Test-Path "$env:TESTRUN_DRIVE`:\baseimages\nanoserver.tar")) { if (Test-Path "$env:SOURCES_DRIVE`:\baseimages\nanoserver.tar") { Write-Host -ForegroundColor Green "INFO: Optimisation - copying $env:SOURCES_DRIVE`:\baseimages\nanoserver.tar to $env:TESTRUN_DRIVE`:\baseimages" Copy-Item "$env:SOURCES_DRIVE`:\baseimages\nanoserver.tar" "$env:TESTRUN_DRIVE`:\baseimages" } } $readBaseFrom=$env:TESTRUN_DRIVE } Write-Host -ForegroundColor Green "INFO: Loading"$ControlDaemonBaseImage".tar from disk. This may take some time..." 
$ErrorActionPreference = "SilentlyContinue" docker load -i $("$readBaseFrom`:\baseimages\"+$ControlDaemonBaseImage+".tar") $ErrorActionPreference = "Stop" if (-not $LastExitCode -eq 0) { Throw $("ERROR: Failed to load $readBaseFrom`:\baseimages\"+$ControlDaemonBaseImage+".tar") } Write-Host -ForegroundColor Green "INFO: docker load of"$ControlDaemonBaseImage" completed successfully" } else { # We need to docker pull it instead. It will come in directly as microsoft/imagename:latest Write-Host -ForegroundColor Green $("INFO: Pulling $($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG from docker hub. This may take some time...") $ErrorActionPreference = "SilentlyContinue" docker pull "$($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG" $ErrorActionPreference = "Stop" if (-not $LastExitCode -eq 0) { Throw $("ERROR: Failed to docker pull $($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG.") } Write-Host -ForegroundColor Green $("INFO: docker pull of $($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG completed successfully") Write-Host -ForegroundColor Green $("INFO: Tagging $($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG as microsoft/$ControlDaemonBaseImage") docker tag "$($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG" microsoft/$ControlDaemonBaseImage } } else { Write-Host -ForegroundColor Green "INFO: Image"$("microsoft/"+$ControlDaemonBaseImage+":latest")"is already loaded in the control daemon" } # Inspect the pulled image to get the version directly $ErrorActionPreference = "SilentlyContinue" $imgVersion = $(docker inspect $("microsoft/"+$ControlDaemonBaseImage) --format "{{.OsVersion}}") $ErrorActionPreference = "Stop" Write-Host -ForegroundColor Green $("INFO: Version of microsoft/"+$ControlDaemonBaseImage+":latest is '"+$imgVersion+"'") # Provide the docker version for debugging purposes. Write-Host -ForegroundColor Green "INFO: Docker version of control daemon" Write-Host $ErrorActionPreference = "SilentlyContinue" docker version $ErrorActionPreference = "Stop" if (-not($LastExitCode -eq 0)) { Write-Host Write-Host -ForegroundColor Green "---------------------------------------------------------------------------" Write-Host -ForegroundColor Green " Failed to get a response from the control daemon. It may be down." Write-Host -ForegroundColor Green " Try re-running this CI job, or ask on #docker-maintainers on docker slack" Write-Host -ForegroundColor Green " to see if the daemon is running. Also check the service configuration." Write-Host -ForegroundColor Green " DOCKER_HOST is set to $DOCKER_HOST." Write-Host -ForegroundColor Green "---------------------------------------------------------------------------" Write-Host Throw "ERROR: The control daemon does not appear to be running." } Write-Host # Same as above, but docker info Write-Host -ForegroundColor Green "INFO: Docker info of control daemon" Write-Host $ErrorActionPreference = "SilentlyContinue" docker info $ErrorActionPreference = "Stop" if (-not($LastExitCode -eq 0)) { Throw "ERROR: The control daemon does not appear to be running." } Write-Host # Get the commit has and verify we have something $ErrorActionPreference = "SilentlyContinue" $COMMITHASH=$(git rev-parse --short HEAD) $ErrorActionPreference = "Stop" if (-not($LastExitCode -eq 0)) { Throw "ERROR: Failed to get commit hash. Are you sure this is a docker repository?" 
} Write-Host -ForegroundColor Green "INFO: Commit hash is $COMMITHASH" # Nuke everything and go back to our sources after Nuke-Everything cd "$env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR\src\github.com\docker\docker" # Redirect to a temporary location. $TEMPORIG=$env:TEMP if ($null -eq $env:BUILD_NUMBER) { $env:TEMP="$env:TESTRUN_DRIVE`:\$env:TESTRUN_SUBDIR\CI-$COMMITHASH" } else { # individual temporary location per CI build that better matches the BUILD_URL $env:TEMP="$env:TESTRUN_DRIVE`:\$env:TESTRUN_SUBDIR\$env:BRANCH_NAME\$env:BUILD_NUMBER" } $env:LOCALAPPDATA="$env:TEMP\localappdata" $errorActionPreference='Stop' New-Item -ItemType Directory "$env:TEMP" -ErrorAction SilentlyContinue | Out-Null New-Item -ItemType Directory "$env:TEMP\userprofile" -ErrorAction SilentlyContinue | Out-Null New-Item -ItemType Directory "$env:TEMP\testresults" -ErrorAction SilentlyContinue | Out-Null New-Item -ItemType Directory "$env:TEMP\testresults\unittests" -ErrorAction SilentlyContinue | Out-Null New-Item -ItemType Directory "$env:TEMP\localappdata" -ErrorAction SilentlyContinue | Out-Null New-Item -ItemType Directory "$env:TEMP\binary" -ErrorAction SilentlyContinue | Out-Null New-Item -ItemType Directory "$env:TEMP\installer" -ErrorAction SilentlyContinue | Out-Null if ($null -eq $env:SKIP_COPY_GO) { # Wipe the previous version of GO - we're going to get it out of the image if (Test-Path "$env:TEMP\go") { Remove-Item "$env:TEMP\go" -Recurse -Force -ErrorAction SilentlyContinue | Out-Null } New-Item -ItemType Directory "$env:TEMP\go" -ErrorAction SilentlyContinue | Out-Null } Write-Host -ForegroundColor Green "INFO: Location for testing is $env:TEMP" # CI Integrity check - ensure Dockerfile.windows and Dockerfile go versions match $goVersionDockerfileWindows=(Select-String -Path ".\Dockerfile.windows" -Pattern "^ARG[\s]+GO_VERSION=(.*)$").Matches.groups[1].Value $goVersionDockerfile=(Select-String -Path ".\Dockerfile" -Pattern "^ARG[\s]+GO_VERSION=(.*)$").Matches.groups[1].Value if ($null -eq $goVersionDockerfile) { Throw "ERROR: Failed to extract golang version from Dockerfile" } Write-Host -ForegroundColor Green "INFO: Validating GOLang consistency in Dockerfile.windows..." if (-not ($goVersionDockerfile -eq $goVersionDockerfileWindows)) { Throw "ERROR: Mismatched GO versions between Dockerfile and Dockerfile.windows. Update your PR to ensure that both files are updated and in sync. $goVersionDockerfile $goVersionDockerfileWindows" } # Build the image if ($null -eq $env:SKIP_IMAGE_BUILD) { Write-Host -ForegroundColor Cyan "`n`nINFO: Building the image from Dockerfile.windows at $(Get-Date)..." Write-Host $ErrorActionPreference = "SilentlyContinue" $Duration=$(Measure-Command { docker build --build-arg=GO_VERSION -t docker -f Dockerfile.windows . | Out-Host }) $ErrorActionPreference = "Stop" if (-not($LastExitCode -eq 0)) { Throw "ERROR: Failed to build image from Dockerfile.windows" } Write-Host -ForegroundColor Green "INFO: Image build ended at $(Get-Date). Duration`:$Duration" } else { Write-Host -ForegroundColor Magenta "WARN: Skipping building the docker image" } # Following at the moment must be docker\docker as it's dictated by dockerfile.Windows $contPath="$COMMITHASH`:c`:\gopath\src\github.com\docker\docker\bundles" # After https://github.com/docker/docker/pull/30290, .git was added to .dockerignore. 
Therefore # we have to calculate unsupported outside of the container, and pass the commit ID in through # an environment variable for the binary build $CommitUnsupported="" if ($(git status --porcelain --untracked-files=no).Length -ne 0) { $CommitUnsupported="-unsupported" } # Build the binary in a container unless asked to skip it. if ($null -eq $env:SKIP_BINARY_BUILD) { Write-Host -ForegroundColor Cyan "`n`nINFO: Building the test binaries at $(Get-Date)..." $ErrorActionPreference = "SilentlyContinue" docker rm -f $COMMITHASH 2>&1 | Out-Null if ($CommitUnsupported -ne "") { Write-Host "" Write-Warning "This version is unsupported because there are uncommitted file(s)." Write-Warning "Either commit these changes, or add them to .gitignore." git status --porcelain --untracked-files=no | Write-Warning Write-Host "" } $Duration=$(Measure-Command {docker run --name $COMMITHASH -e DOCKER_GITCOMMIT=$COMMITHASH$CommitUnsupported docker hack\make.ps1 -Daemon -Client | Out-Host }) $ErrorActionPreference = "Stop" if (-not($LastExitCode -eq 0)) { Throw "ERROR: Failed to build binary" } Write-Host -ForegroundColor Green "INFO: Binaries build ended at $(Get-Date). Duration`:$Duration" # Copy the binaries and the generated version_autogen.go out of the container $ErrorActionPreference = "SilentlyContinue" docker cp "$contPath\docker.exe" $env:TEMP\binary\ if (-not($LastExitCode -eq 0)) { Throw "ERROR: Failed to docker cp the client binary (docker.exe) to $env:TEMP\binary" } docker cp "$contPath\dockerd.exe" $env:TEMP\binary\ if (-not($LastExitCode -eq 0)) { Throw "ERROR: Failed to docker cp the daemon binary (dockerd.exe) to $env:TEMP\binary" } docker cp "$COMMITHASH`:c`:\gopath\bin\gotestsum.exe" $env:TEMP\binary\ if (-not (Test-Path "$env:TEMP\binary\gotestsum.exe")) { Throw "ERROR: gotestsum.exe not found...." ` } $ErrorActionPreference = "Stop" # Copy the built dockerd.exe to dockerd-$COMMITHASH.exe so that easily spotted in task manager. Write-Host -ForegroundColor Green "INFO: Copying the built daemon binary to $env:TEMP\binary\dockerd-$COMMITHASH.exe..." Copy-Item $env:TEMP\binary\dockerd.exe $env:TEMP\binary\dockerd-$COMMITHASH.exe -Force -ErrorAction SilentlyContinue # Copy the built docker.exe to docker-$COMMITHASH.exe Write-Host -ForegroundColor Green "INFO: Copying the built client binary to $env:TEMP\binary\docker-$COMMITHASH.exe..." Copy-Item $env:TEMP\binary\docker.exe $env:TEMP\binary\docker-$COMMITHASH.exe -Force -ErrorAction SilentlyContinue } else { Write-Host -ForegroundColor Magenta "WARN: Skipping building the binaries" } Write-Host -ForegroundColor Green "INFO: Copying dockerversion from the container..." $ErrorActionPreference = "SilentlyContinue" docker cp "$contPath\..\dockerversion\version_autogen.go" "$env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR\src\github.com\docker\docker\dockerversion" if (-not($LastExitCode -eq 0)) { Throw "ERROR: Failed to docker cp the generated version_autogen.go to $env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR\src\github.com\docker\docker\dockerversion" } $ErrorActionPreference = "Stop" # Grab the golang installer out of the built image. That way, we know we are consistent once extracted and paths set, # so there's no need to re-deploy on account of an upgrade to the version of GO being used in docker. if ($null -eq $env:SKIP_COPY_GO) { Write-Host -ForegroundColor Green "INFO: Copying the golang package from the container to $env:TEMP\installer\go.zip..." 
docker cp "$COMMITHASH`:c`:\go.zip" $env:TEMP\installer\ if (-not($LastExitCode -eq 0)) { Throw "ERROR: Failed to docker cp the golang installer 'go.zip' from container:c:\go.zip to $env:TEMP\installer" } $ErrorActionPreference = "Stop" # Extract the golang installer Write-Host -ForegroundColor Green "INFO: Extracting go.zip to $env:TEMP\go" $Duration=$(Measure-Command { Expand-Archive $env:TEMP\installer\go.zip $env:TEMP -Force | Out-Null}) Write-Host -ForegroundColor Green "INFO: Extraction ended at $(Get-Date). Duration`:$Duration" } else { Write-Host -ForegroundColor Magenta "WARN: Skipping copying and extracting golang from the image" } # Set the GOPATH Write-Host -ForegroundColor Green "INFO: Updating the golang and path environment variables" $env:GOPATH="$env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR" Write-Host -ForegroundColor Green "INFO: GOPATH=$env:GOPATH" # Set the path to have the version of go from the image at the front $env:PATH="$env:TEMP\go\bin;$env:PATH" # Set the GOROOT to be our copy of go from the image $env:GOROOT="$env:TEMP\go" Write-Host -ForegroundColor Green "INFO: $(go version)" # Work out the -H parameter for the daemon under test (DASHH_DUT) and client under test (DASHH_CUT) #$DASHH_DUT="npipe:////./pipe/$COMMITHASH" # Can't do remote named pipe #$ip = (resolve-dnsname $env:COMPUTERNAME -type A -NoHostsFile -LlmnrNetbiosOnly).IPAddress # Useful to tie down $DASHH_CUT="tcp://127.0.0.1`:2357" # Not a typo for 2375! $DASHH_DUT="tcp://0.0.0.0:2357" # Not a typo for 2375! # Arguments for the daemon under test $dutArgs=@() $dutArgs += "-H $DASHH_DUT" $dutArgs += "--data-root $env:TEMP\daemon" $dutArgs += "--pidfile $env:TEMP\docker.pid" # Save the PID file so we can nuke it if set $pidFile="$env:TEMP\docker.pid" # Arguments: Are we starting the daemon under test in debug mode? if (-not ("$env:DOCKER_DUT_DEBUG" -eq "")) { Write-Host -ForegroundColor Green "INFO: Running the daemon under test in debug mode" $dutArgs += "-D" } # Arguments: Are we starting the daemon under test with Hyper-V containers as the default isolation? if (-not ("$env:DOCKER_DUT_HYPERV" -eq "")) { Write-Host -ForegroundColor Green "INFO: Running the daemon under test with Hyper-V containers as the default" $dutArgs += "--exec-opt isolation=hyperv" } # Arguments: Allow setting optional storage-driver options # example usage: DOCKER_STORAGE_OPTS="lcow.globalmode=false,lcow.kernel=kernel.efi" if (-not ("$env:DOCKER_STORAGE_OPTS" -eq "")) { Write-Host -ForegroundColor Green "INFO: Running the daemon under test with storage-driver options ${env:DOCKER_STORAGE_OPTS}" $env:DOCKER_STORAGE_OPTS.Split(",") | ForEach { $dutArgs += "--storage-opt $_" } } # Start the daemon under test, ensuring everything is redirected to folders under $TEMP. # Important - we launch the -$COMMITHASH version so that we can kill it without # killing the control daemon. Write-Host -ForegroundColor Green "INFO: Starting a daemon under test..." Write-Host -ForegroundColor Green "INFO: Args: $dutArgs" New-Item -ItemType Directory $env:TEMP\daemon -ErrorAction SilentlyContinue | Out-Null # Cannot fathom why, but always writes to stderr.... Start-Process "$env:TEMP\binary\dockerd-$COMMITHASH" ` -ArgumentList $dutArgs ` -RedirectStandardOutput "$env:TEMP\dut.out" ` -RedirectStandardError "$env:TEMP\dut.err" Write-Host -ForegroundColor Green "INFO: Process started successfully." 
$daemonStarted=1 # Start tailing the daemon under test if the command is installed if ($null -ne (Get-Command "tail" -ErrorAction SilentlyContinue)) { Write-Host -ForegroundColor green "INFO: Start tailing logs of the daemon under tests" $tail = Start-Process "tail" -ArgumentList "-f $env:TEMP\dut.out" -PassThru -ErrorAction SilentlyContinue } # Verify we can get the daemon under test to respond $tries=20 Write-Host -ForegroundColor Green "INFO: Waiting for the daemon under test to start..." while ($true) { $ErrorActionPreference = "SilentlyContinue" & "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" version 2>&1 | Out-Null $ErrorActionPreference = "Stop" if ($LastExitCode -eq 0) { break } $tries-- if ($tries -le 0) { Throw "ERROR: Failed to get a response from the daemon under test" } Write-Host -NoNewline "." sleep 1 } Write-Host -ForegroundColor Green "INFO: Daemon under test started and replied!" # Provide the docker version of the daemon under test for debugging purposes. Write-Host -ForegroundColor Green "INFO: Docker version of the daemon under test" Write-Host $ErrorActionPreference = "SilentlyContinue" & "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" version $ErrorActionPreference = "Stop" if ($LastExitCode -ne 0) { Throw "ERROR: The daemon under test does not appear to be running." } Write-Host # Same as above but docker info Write-Host -ForegroundColor Green "INFO: Docker info of the daemon under test" Write-Host $ErrorActionPreference = "SilentlyContinue" & "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" info $ErrorActionPreference = "Stop" if ($LastExitCode -ne 0) { Throw "ERROR: The daemon under test does not appear to be running." } Write-Host # Same as above but docker images Write-Host -ForegroundColor Green "INFO: Docker images of the daemon under test" Write-Host $ErrorActionPreference = "SilentlyContinue" & "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" images $ErrorActionPreference = "Stop" if ($LastExitCode -ne 0) { Throw "ERROR: The daemon under test does not appear to be running." } Write-Host # Default to windowsservercore for the base image used for the tests. The "docker" image # and the control daemon use microsoft/windowsservercore regardless. This is *JUST* for the tests. if ($null -eq $env:WINDOWS_BASE_IMAGE) { $env:WINDOWS_BASE_IMAGE="microsoft/windowsservercore" } if ($null -eq $env:WINDOWS_BASE_IMAGE_TAG) { $env:WINDOWS_BASE_IMAGE_TAG="latest" } # Lowercase and make sure it has a microsoft/ prefix $env:WINDOWS_BASE_IMAGE = $env:WINDOWS_BASE_IMAGE.ToLower() if (! $($env:WINDOWS_BASE_IMAGE -Split "/")[0] -match "microsoft") { Throw "ERROR: WINDOWS_BASE_IMAGE should start microsoft/ or mcr.microsoft.com/" } Write-Host -ForegroundColor Green "INFO: Base image for tests is $env:WINDOWS_BASE_IMAGE" $ErrorActionPreference = "SilentlyContinue" if ($((& "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" images --format "{{.Repository}}:{{.Tag}}" | Select-String "$($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG" | Measure-Object -Line).Lines) -eq 0) { # Try the internal azure CI image version or Microsoft internal corpnet where the base image is already pre-prepared on the disk, # either through Invoke-DockerCI or, in the case of Azure CI servers, baked into the VHD at the same location. if (Test-Path $("c:\baseimages\"+$($env:WINDOWS_BASE_IMAGE -Split "/")[1]+".tar")) { Write-Host -ForegroundColor Green "INFO: Loading"$($env:WINDOWS_BASE_IMAGE -Split "/")[1]".tar from disk into the daemon under test. 
This may take some time..." $ErrorActionPreference = "SilentlyContinue" & "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" load -i $("$readBaseFrom`:\baseimages\"+$($env:WINDOWS_BASE_IMAGE -Split "/")[1]+".tar") $ErrorActionPreference = "Stop" if (-not $LastExitCode -eq 0) { Throw $("ERROR: Failed to load $readBaseFrom`:\baseimages\"+$($env:WINDOWS_BASE_IMAGE -Split "/")[1]+".tar into daemon under test") } Write-Host -ForegroundColor Green "INFO: docker load of"$($env:WINDOWS_BASE_IMAGE -Split "/")[1]" into daemon under test completed successfully" } else { # We need to docker pull it instead. It will come in directly as microsoft/imagename:tagname Write-Host -ForegroundColor Green $("INFO: Pulling "+$env:WINDOWS_BASE_IMAGE+":"+$env:WINDOWS_BASE_IMAGE_TAG+" from docker hub into daemon under test. This may take some time...") $ErrorActionPreference = "SilentlyContinue" & "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" pull "$($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG" $ErrorActionPreference = "Stop" if (-not $LastExitCode -eq 0) { Throw $("ERROR: Failed to docker pull $($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG into daemon under test.") } Write-Host -ForegroundColor Green $("INFO: docker pull of $($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG into daemon under test completed successfully") Write-Host -ForegroundColor Green $("INFO: Tagging $($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG as microsoft/$ControlDaemonBaseImage in daemon under test") & "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" tag "$($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG" microsoft/$ControlDaemonBaseImage } } else { Write-Host -ForegroundColor Green "INFO: Image $($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG is already loaded in the daemon under test" } # Inspect the pulled or loaded image to get the version directly $ErrorActionPreference = "SilentlyContinue" $dutimgVersion = $(&"$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" inspect "$($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG" --format "{{.OsVersion}}") $ErrorActionPreference = "Stop" Write-Host -ForegroundColor Green $("INFO: Version of $($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG is '"+$dutimgVersion+"'") # Run the validation tests unless SKIP_VALIDATION_TESTS is defined. if ($null -eq $env:SKIP_VALIDATION_TESTS) { Write-Host -ForegroundColor Cyan "INFO: Running validation tests at $(Get-Date)..." $ErrorActionPreference = "SilentlyContinue" $Duration=$(Measure-Command { hack\make.ps1 -DCO -GoFormat -PkgImports | Out-Host }) $ErrorActionPreference = "Stop" if (-not($LastExitCode -eq 0)) { Throw "ERROR: Validation tests failed" } Write-Host -ForegroundColor Green "INFO: Validation tests ended at $(Get-Date). Duration`:$Duration" } else { Write-Host -ForegroundColor Magenta "WARN: Skipping validation tests" } # Run the unit tests inside a container unless SKIP_UNIT_TESTS is defined if ($null -eq $env:SKIP_UNIT_TESTS) { $ContainerNameForUnitTests = $COMMITHASH + "_UnitTests" Write-Host -ForegroundColor Cyan "INFO: Running unit tests at $(Get-Date)..." $ErrorActionPreference = "SilentlyContinue" $Duration=$(Measure-Command {docker run --name $ContainerNameForUnitTests -e DOCKER_GITCOMMIT=$COMMITHASH$CommitUnsupported docker hack\make.ps1 -TestUnit | Out-Host }) $TestRunExitCode = $LastExitCode $ErrorActionPreference = "Stop" # Saving where jenkins will take a look at..... 
New-Item -Force -ItemType Directory bundles | Out-Null $unitTestsContPath="$ContainerNameForUnitTests`:c`:\gopath\src\github.com\docker\docker\bundles" $JunitExpectedContFilePath = "$unitTestsContPath\junit-report-unit-tests.xml" docker cp $JunitExpectedContFilePath "bundles" if (-not($LastExitCode -eq 0)) { Throw "ERROR: Failed to docker cp the unit tests report ($JunitExpectedContFilePath) to bundles" } if (Test-Path "bundles\junit-report-unit-tests.xml") { Write-Host -ForegroundColor Magenta "INFO: Unit tests results(bundles\junit-report-unit-tests.xml) exist. pwd=$pwd" } else { Write-Host -ForegroundColor Magenta "ERROR: Unit tests results(bundles\junit-report-unit-tests.xml) do not exist. pwd=$pwd" } if (-not($TestRunExitCode -eq 0)) { Throw "ERROR: Unit tests failed" } Write-Host -ForegroundColor Green "INFO: Unit tests ended at $(Get-Date). Duration`:$Duration" } else { Write-Host -ForegroundColor Magenta "WARN: Skipping unit tests" } # Add the Windows busybox image. Needed for WCOW integration tests if ($null -eq $env:SKIP_INTEGRATION_TESTS) { Write-Host -ForegroundColor Green "INFO: Building busybox" $ErrorActionPreference = "SilentlyContinue" $(& "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" build -t busybox --build-arg WINDOWS_BASE_IMAGE --build-arg WINDOWS_BASE_IMAGE_TAG "$env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR\src\github.com\docker\docker\contrib\busybox\" | Out-Host) $ErrorActionPreference = "Stop" if (-not($LastExitCode -eq 0)) { Throw "ERROR: Failed to build busybox image" } Write-Host -ForegroundColor Green "INFO: Docker images of the daemon under test" Write-Host $ErrorActionPreference = "SilentlyContinue" & "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" images $ErrorActionPreference = "Stop" if ($LastExitCode -ne 0) { Throw "ERROR: The daemon under test does not appear to be running." } Write-Host } # Run the WCOW integration tests unless SKIP_INTEGRATION_TESTS is defined if ($null -eq $env:SKIP_INTEGRATION_TESTS) { Write-Host -ForegroundColor Cyan "INFO: Running integration tests at $(Get-Date)..." $ErrorActionPreference = "SilentlyContinue" # Location of the daemon under test. $env:OrigDOCKER_HOST="$env:DOCKER_HOST" #https://blogs.technet.microsoft.com/heyscriptingguy/2011/09/20/solve-problems-with-external-command-lines-in-powershell/ is useful to see tokenising $jsonFilePath = "..\\bundles\\go-test-report-intcli-tests.json" $xmlFilePath = "..\\bundles\\junit-report-intcli-tests.xml" $c = "gotestsum --format=standard-verbose --jsonfile=$jsonFilePath --junitfile=$xmlFilePath -- " if ($null -ne $env:INTEGRATION_TEST_NAME) { # Makes is quicker for debugging to be able to run only a subset of the integration tests $c += "`"-test.run`" " $c += "`"$env:INTEGRATION_TEST_NAME`" " Write-Host -ForegroundColor Magenta "WARN: Only running integration tests matching $env:INTEGRATION_TEST_NAME" } $c += "`"-tags`" " + "`"autogen`" " $c += "`"-test.timeout`" " + "`"200m`" " if ($null -ne $env:INTEGRATION_IN_CONTAINER) { Write-Host -ForegroundColor Green "INFO: Integration tests being run inside a container" # Note we talk back through the containers gateway address # And the ridiculous lengths we have to go to get the default gateway address... (GetNetIPConfiguration doesn't work in nanoserver) # I just could not get the escaping to work in a single command, so output $c to a file and run that in the container instead... # Not the prettiest, but it works. 
$c | Out-File -Force "$env:TEMP\binary\runIntegrationCLI.ps1" $Duration= $(Measure-Command { & docker run ` --rm ` -e c=$c ` --workdir "c`:\gopath\src\github.com\docker\docker\integration-cli" ` -v "$env:TEMP\binary`:c:\target" ` docker ` "`$env`:PATH`='c`:\target;'+`$env:PATH`; `$env:DOCKER_HOST`='tcp`://'+(ipconfig | select -last 1).Substring(39)+'`:2357'; c:\target\runIntegrationCLI.ps1" | Out-Host } ) } else { $env:DOCKER_HOST=$DASHH_CUT $env:PATH="$env:TEMP\binary;$env:PATH;" # Force to use the test binaries, not the host ones. $env:GO111MODULE="off" Write-Host -ForegroundColor Green "INFO: DOCKER_HOST at $DASHH_CUT" $ErrorActionPreference = "SilentlyContinue" Write-Host -ForegroundColor Cyan "INFO: Integration API tests being run from the host:" $start=(Get-Date); Invoke-Expression ".\hack\make.ps1 -TestIntegration"; $Duration=New-Timespan -Start $start -End (Get-Date) $IntTestsRunResult = $LastExitCode $ErrorActionPreference = "Stop" if (-not($IntTestsRunResult -eq 0)) { Throw "ERROR: Integration API tests failed at $(Get-Date). Duration`:$Duration" } $ErrorActionPreference = "SilentlyContinue" Write-Host -ForegroundColor Green "INFO: Integration CLI tests being run from the host:" Write-Host -ForegroundColor Green "INFO: $c" Set-Location "$env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR\src\github.com\docker\docker\integration-cli" # Explicit to not use measure-command otherwise don't get output as it goes $start=(Get-Date); Invoke-Expression $c; $Duration=New-Timespan -Start $start -End (Get-Date) } $ErrorActionPreference = "Stop" if (-not($LastExitCode -eq 0)) { Throw "ERROR: Integration CLI tests failed at $(Get-Date). Duration`:$Duration" } Write-Host -ForegroundColor Green "INFO: Integration tests ended at $(Get-Date). Duration`:$Duration" } else { Write-Host -ForegroundColor Magenta "WARN: Skipping integration tests" } # Docker info now to get counts (after or if jjh/containercounts is merged) if ($daemonStarted -eq 1) { Write-Host -ForegroundColor Green "INFO: Docker info of the daemon under test at end of run" Write-Host $ErrorActionPreference = "SilentlyContinue" & "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" info $ErrorActionPreference = "Stop" if ($LastExitCode -ne 0) { Throw "ERROR: The daemon under test does not appear to be running." } Write-Host } # Stop the daemon under test if (Test-Path "$env:TEMP\docker.pid") { $p=Get-Content "$env:TEMP\docker.pid" -raw if (($null -ne $p) -and ($daemonStarted -eq 1)) { Write-Host -ForegroundColor green "INFO: Stopping daemon under test" taskkill -f -t -pid $p #sleep 5 } Remove-Item "$env:TEMP\docker.pid" -force -ErrorAction SilentlyContinue } # Stop the tail process (if started) if ($null -ne $tail) { Write-Host -ForegroundColor green "INFO: Stop tailing logs of the daemon under tests" Stop-Process -InputObject $tail -Force } Write-Host -ForegroundColor Green "INFO: executeCI.ps1 Completed successfully at $(Get-Date)." } Catch [Exception] { $FinallyColour="Red" Write-Host -ForegroundColor Red ("`r`n`r`nERROR: Failed '$_' at $(Get-Date)") Write-Host -ForegroundColor Red ($_.InvocationInfo.PositionMessage) Write-Host "`n`n" # Exit to ensure Jenkins captures it. Don't do this in the ISE or interactive Powershell - they will catch the Throw onwards. 
if ( ([bool]([Environment]::GetCommandLineArgs() -Like '*-NonInteractive*')) -and ` ([bool]([Environment]::GetCommandLineArgs() -NotLike "*Powershell_ISE.exe*"))) { exit 1 } Throw $_ } Finally { $ErrorActionPreference="SilentlyContinue" $global:ProgressPreference=$origProgressPreference Write-Host -ForegroundColor Green "INFO: Tidying up at end of run" # Restore the path if ($null -ne $origPath) { $env:PATH=$origPath } # Restore the DOCKER_HOST if ($null -ne $origDOCKER_HOST) { $env:DOCKER_HOST=$origDOCKER_HOST } # Restore the GOROOT and GOPATH variables if ($null -ne $origGOROOT) { $env:GOROOT=$origGOROOT } if ($null -ne $origGOPATH) { $env:GOPATH=$origGOPATH } # Dump the daemon log. This will include any possible panic stack in the .err. if (($daemonStarted -eq 1) -and ($(Get-Item "$env:TEMP\dut.err").Length -gt 0)) { Write-Host -ForegroundColor Cyan "----------- DAEMON LOG ------------" Get-Content "$env:TEMP\dut.err" -ErrorAction SilentlyContinue | Write-Host -ForegroundColor Cyan Write-Host -ForegroundColor Cyan "----------- END DAEMON LOG --------" } # Save the daemon under test log if ($daemonStarted -eq 1) { Set-Location "$env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR\src\github.com\docker\docker" Write-Host -ForegroundColor Green "INFO: Saving daemon under test log ($env:TEMP\dut.out) to bundles\CIDUT.out" Copy-Item "$env:TEMP\dut.out" "bundles\CIDUT.out" -Force -ErrorAction SilentlyContinue Write-Host -ForegroundColor Green "INFO: Saving daemon under test log ($env:TEMP\dut.err) to bundles\CIDUT.err" Copy-Item "$env:TEMP\dut.err" "bundles\CIDUT.err" -Force -ErrorAction SilentlyContinue } Set-Location "$env:SOURCES_DRIVE\$env:SOURCES_SUBDIR" -ErrorAction SilentlyContinue Nuke-Everything # Restore the TEMP path if ($null -ne $TEMPORIG) { $env:TEMP="$TEMPORIG" } $Dur=New-TimeSpan -Start $StartTime -End $(Get-Date) Write-Host -ForegroundColor $FinallyColour "`nINFO: executeCI.ps1 exiting at $(date). Duration $dur`n" }
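after_content (hack/ci/windows.ps1 after the change): within the part of the file visible in this excerpt, the LCOW removal shows up in the DOCKER_STORAGE_OPTS documentation near the top of the script, where the LCOW-specific example line is dropped and only the generic example remains. For convenience, the affected documentation lines as they appear before and after the change (copied from the two listings):

# before:
#   DOCKER_STORAGE_OPTS="size=40G"
#   DOCKER_STORAGE_OPTS="lcow.globalmode=false,lcow.kernel=kernel.efi"
# after:
#   DOCKER_STORAGE_OPTS="size=40G"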
# WARNING: When editing this file, consider submitting a PR to # https://github.com/kevpar/docker-w2wCIScripts/blob/master/runCI/executeCI.ps1, and make sure that # https://github.com/kevpar/docker-w2wCIScripts/blob/master/runCI/Invoke-DockerCI.ps1 isn't broken. # Validate using a test context in Jenkins, then copy/paste into Jenkins production. # # Jenkins CI scripts for Windows to Windows CI (Powershell Version) # By John Howard (@jhowardmsft) January 2016 - bash version; July 2016 Ported to PowerShell $ErrorActionPreference = 'Stop' $StartTime=Get-Date Write-Host -ForegroundColor Red "DEBUG: print all environment variables to check how Jenkins runs this script" $allArgs = [Environment]::GetCommandLineArgs() Write-Host -ForegroundColor Red $allArgs Write-Host -ForegroundColor Red "----------------------------------------------------------------------------" # ------------------------------------------------------------------------------------------- # When executed, we rely on four variables being set in the environment: # # [The reason for being environment variables rather than parameters is historical. No reason # why it couldn't be updated.] # # SOURCES_DRIVE is the drive on which the sources being tested are cloned from. # This should be a straight drive letter, no platform semantics. # For example 'c' # # SOURCES_SUBDIR is the top level directory under SOURCES_DRIVE where the # sources are cloned to. There are no platform semantics in this # as it does not include slashes. # For example 'gopath' # # Based on the above examples, it would be expected that Jenkins # would clone the sources being tested to # SOURCES_DRIVE\SOURCES_SUBDIR\src\github.com\docker\docker, or # c:\gopath\src\github.com\docker\docker # # TESTRUN_DRIVE is the drive where we build the binary on and redirect everything # to for the daemon under test. On an Azure D2 type host which has # an SSD temporary storage D: drive, this is ideal for performance. # For example 'd' # # TESTRUN_SUBDIR is the top level directory under TESTRUN_DRIVE where we redirect # everything to for the daemon under test. For example 'CI'. # Hence, the daemon under test is run under # TESTRUN_DRIVE\TESTRUN_SUBDIR\CI-<CommitID> or # d:\CI\CI-<CommitID> # # Optional environment variables help in CI: # # BUILD_NUMBER + BRANCH_NAME are optional variables to be added to the directory below TESTRUN_SUBDIR # to have individual folder per CI build. If some files couldn't be # cleaned up and we want to re-run the build in CI. # Hence, the daemon under test is run under # TESTRUN_DRIVE\TESTRUN_SUBDIR\PR-<PR-Number>\<BuildNumber> or # d:\CI\PR-<PR-Number>\<BuildNumber> # # In addition, the following variables can control the run configuration: # # DOCKER_DUT_DEBUG if defined starts the daemon under test in debug mode. 
# # DOCKER_STORAGE_OPTS comma-separated list of optional storage driver options for the daemon under test # examples: # DOCKER_STORAGE_OPTS="size=40G" # # SKIP_VALIDATION_TESTS if defined skips the validation tests # # SKIP_UNIT_TESTS if defined skips the unit tests # # SKIP_INTEGRATION_TESTS if defined skips the integration tests # # SKIP_COPY_GO if defined skips copy the go installer from the image # # DOCKER_DUT_HYPERV if default daemon under test default isolation is hyperv # # INTEGRATION_TEST_NAME to only run partial tests eg "TestInfo*" will only run # any tests starting "TestInfo" # # SKIP_BINARY_BUILD if defined skips building the binary # # SKIP_ZAP_DUT if defined doesn't zap the daemon under test directory # # SKIP_IMAGE_BUILD if defined doesn't build the 'docker' image # # INTEGRATION_IN_CONTAINER if defined, runs the integration tests from inside a container. # As of July 2016, there are known issues with this. # # SKIP_ALL_CLEANUP if defined, skips any cleanup at the start or end of the run # # WINDOWS_BASE_IMAGE if defined, uses that as the base image. Note that the # docker integration tests are also coded to use the same # environment variable, and if no set, defaults to microsoft/windowsservercore # # WINDOWS_BASE_IMAGE_TAG if defined, uses that as the tag name for the base image. # if no set, defaults to latest # # ------------------------------------------------------------------------------------------- # # Jenkins Integration. Add a Windows Powershell build step as follows: # # Write-Host -ForegroundColor green "INFO: Jenkins build step starting" # $CISCRIPT_DEFAULT_LOCATION = "https://raw.githubusercontent.com/moby/moby/master/hack/ci/windows.ps1" # $CISCRIPT_LOCAL_LOCATION = "$env:TEMP\executeCI.ps1" # Write-Host -ForegroundColor green "INFO: Removing cached execution script" # Remove-Item $CISCRIPT_LOCAL_LOCATION -Force -ErrorAction SilentlyContinue 2>&1 | Out-Null # $wc = New-Object net.webclient # try { # Write-Host -ForegroundColor green "INFO: Downloading latest execution script..." # $wc.Downloadfile($CISCRIPT_DEFAULT_LOCATION, $CISCRIPT_LOCAL_LOCATION) # } # catch [System.Net.WebException] # { # Throw ("Failed to download: $_") # } # & $CISCRIPT_LOCAL_LOCATION # ------------------------------------------------------------------------------------------- $SCRIPT_VER="05-Feb-2019 09:03 PDT" $FinallyColour="Cyan" #$env:DOCKER_DUT_DEBUG="yes" # Comment out to not be in debug mode #$env:SKIP_UNIT_TESTS="yes" #$env:SKIP_VALIDATION_TESTS="yes" #$env:SKIP_ZAP_DUT="" #$env:SKIP_BINARY_BUILD="yes" #$env:INTEGRATION_TEST_NAME="" #$env:SKIP_IMAGE_BUILD="yes" #$env:SKIP_ALL_CLEANUP="yes" #$env:INTEGRATION_IN_CONTAINER="yes" #$env:WINDOWS_BASE_IMAGE="" #$env:SKIP_COPY_GO="yes" #$env:INTEGRATION_TESTFLAGS="-test.v" Function Nuke-Everything { $ErrorActionPreference = 'SilentlyContinue' try { if ($null -eq $env:SKIP_ALL_CLEANUP) { Write-Host -ForegroundColor green "INFO: Nuke-Everything..." 
$containerCount = ($(docker ps -aq | Measure-Object -line).Lines) if (-not $LastExitCode -eq 0) { Throw "ERROR: Failed to get container count from control daemon while nuking" } Write-Host -ForegroundColor green "INFO: Container count on control daemon to delete is $containerCount" if ($(docker ps -aq | Measure-Object -line).Lines -gt 0) { docker rm -f $(docker ps -aq) } $allImages = $(docker images --format "{{.Repository}}#{{.ID}}") $toRemove = ($allImages | Select-String -NotMatch "servercore","nanoserver","docker","busybox") $imageCount = ($toRemove | Measure-Object -line).Lines if ($imageCount -gt 0) { Write-Host -Foregroundcolor green "INFO: Non-base image count on control daemon to delete is $imageCount" docker rmi -f ($toRemove | Foreach-Object { $_.ToString().Split("#")[1] }) } } else { Write-Host -ForegroundColor Magenta "WARN: Skipping cleanup of images and containers" } # Kill any spurious daemons. The '-' is IMPORTANT otherwise will kill the control daemon! $pids=$(get-process | where-object {$_.ProcessName -like 'dockerd-*'}).id foreach ($p in $pids) { Write-Host "INFO: Killing daemon with PID $p" Stop-Process -Id $p -Force -ErrorAction SilentlyContinue } if ($null -ne $pidFile) { Write-Host "INFO: Tidying pidfile $pidfile" if (Test-Path $pidFile) { $p=Get-Content $pidFile -raw if ($null -ne $p){ Write-Host -ForegroundColor green "INFO: Stopping possible daemon pid $p" taskkill -f -t -pid $p } Remove-Item "$env:TEMP\docker.pid" -force -ErrorAction SilentlyContinue } } Stop-Process -name "cc1" -Force -ErrorAction SilentlyContinue 2>&1 | Out-Null Stop-Process -name "link" -Force -ErrorAction SilentlyContinue 2>&1 | Out-Null Stop-Process -name "compile" -Force -ErrorAction SilentlyContinue 2>&1 | Out-Null Stop-Process -name "ld" -Force -ErrorAction SilentlyContinue 2>&1 | Out-Null Stop-Process -name "go" -Force -ErrorAction SilentlyContinue 2>&1 | Out-Null Stop-Process -name "git" -Force -ErrorAction SilentlyContinue 2>&1 | Out-Null Stop-Process -name "git-remote-https" -Force -ErrorAction SilentlyContinue 2>&1 | Out-Null Stop-Process -name "integration-cli.test" -Force -ErrorAction SilentlyContinue 2>&1 | Out-Null Stop-Process -name "tail" -Force -ErrorAction SilentlyContinue 2>&1 | Out-Null # Detach any VHDs gwmi msvm_mountedstorageimage -namespace root/virtualization/v2 -ErrorAction SilentlyContinue | foreach-Object {$_.DetachVirtualHardDisk() } # Stop any compute processes Get-ComputeProcess | Stop-ComputeProcess -Force # Delete the directory using our dangerous utility unless told not to if (Test-Path "$env:TESTRUN_DRIVE`:\$env:TESTRUN_SUBDIR") { if (($null -ne $env:SKIP_ZAP_DUT) -or ($null -eq $env:SKIP_ALL_CLEANUP)) { Write-Host -ForegroundColor Green "INFO: Nuking $env:TESTRUN_DRIVE`:\$env:TESTRUN_SUBDIR" docker-ci-zap "-folder=$env:TESTRUN_DRIVE`:\$env:TESTRUN_SUBDIR" } else { Write-Host -ForegroundColor Magenta "WARN: Skip nuking $env:TESTRUN_DRIVE`:\$env:TESTRUN_SUBDIR" } } # TODO: This should be able to be removed in August 2017 update. Only needed for RS1 Production Server workaround - Psched $reg = "HKLM:\System\CurrentControlSet\Services\Psched\Parameters\NdisAdapters" $count=(Get-ChildItem $reg | Measure-Object).Count if ($count -gt 0) { Write-Warning "There are $count NdisAdapters leaked under Psched\Parameters" Write-Warning "Cleaning Psched..." Get-ChildItem $reg | Remove-Item -Recurse -Force -ErrorAction SilentlyContinue | Out-Null } # TODO: This should be able to be removed in August 2017 update. 
Only needed for RS1 $reg = "HKLM:\System\CurrentControlSet\Services\WFPLWFS\Parameters\NdisAdapters" $count=(Get-ChildItem $reg | Measure-Object).Count if ($count -gt 0) { Write-Warning "There are $count NdisAdapters leaked under WFPLWFS\Parameters" Write-Warning "Cleaning WFPLWFS..." Get-ChildItem $reg | Remove-Item -Recurse -Force -ErrorAction SilentlyContinue | Out-Null } } catch { # Don't throw any errors onwards Throw $_ } } Try { Write-Host -ForegroundColor Cyan "`nINFO: executeCI.ps1 starting at $(date)`n" Write-Host -ForegroundColor Green "INFO: Script version $SCRIPT_VER" Set-PSDebug -Trace 0 # 1 to turn on $origPath="$env:PATH" # so we can restore it at the end $origDOCKER_HOST="$DOCKER_HOST" # So we can restore it at the end $origGOROOT="$env:GOROOT" # So we can restore it at the end $origGOPATH="$env:GOPATH" # So we can restore it at the end # Turn off progress bars $origProgressPreference=$global:ProgressPreference $global:ProgressPreference='SilentlyContinue' # Git version Write-Host -ForegroundColor Green "INFO: Running $(git version)" # OS Version $bl=(Get-ItemProperty -Path "Registry::HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion" -Name BuildLabEx).BuildLabEx $a=$bl.ToString().Split(".") $Branch=$a[3] $WindowsBuild=$a[0]+"."+$a[1]+"."+$a[4] Write-Host -ForegroundColor green "INFO: Branch:$Branch Build:$WindowsBuild" # List the environment variables Write-Host -ForegroundColor green "INFO: Environment variables:" Get-ChildItem Env: | Out-String # PR if (-not ($null -eq $env:PR)) { Write-Output "INFO: PR#$env:PR (https://github.com/docker/docker/pull/$env:PR)" } # Make sure docker is installed if ($null -eq (Get-Command "docker" -ErrorAction SilentlyContinue)) { Throw "ERROR: docker is not installed or not found on path" } # Make sure docker-ci-zap is installed if ($null -eq (Get-Command "docker-ci-zap" -ErrorAction SilentlyContinue)) { Throw "ERROR: docker-ci-zap is not installed or not found on path" } # Make sure Windows Defender is disabled $defender = $false Try { $status = Get-MpComputerStatus if ($status) { if ($status.RealTimeProtectionEnabled) { $defender = $true } } } Catch {} if ($defender) { Write-Host -ForegroundColor Magenta "WARN: Windows Defender real time protection is enabled, which may cause some integration tests to fail" } # Make sure SOURCES_DRIVE is set if ($null -eq $env:SOURCES_DRIVE) { Throw "ERROR: Environment variable SOURCES_DRIVE is not set" } # Make sure TESTRUN_DRIVE is set if ($null -eq $env:TESTRUN_DRIVE) { Throw "ERROR: Environment variable TESTRUN_DRIVE is not set" } # Make sure SOURCES_SUBDIR is set if ($null -eq $env:SOURCES_SUBDIR) { Throw "ERROR: Environment variable SOURCES_SUBDIR is not set" } # Make sure TESTRUN_SUBDIR is set if ($null -eq $env:TESTRUN_SUBDIR) { Throw "ERROR: Environment variable TESTRUN_SUBDIR is not set" } # SOURCES_DRIVE\SOURCES_SUBDIR must be a directory and exist if (-not (Test-Path -PathType Container "$env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR")) { Throw "ERROR: $env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR must be an existing directory" } # Create the TESTRUN_DRIVE\TESTRUN_SUBDIR if it does not already exist New-Item -ItemType Directory -Force -Path "$env:TESTRUN_DRIVE`:\$env:TESTRUN_SUBDIR" -ErrorAction SilentlyContinue | Out-Null Write-Host -ForegroundColor Green "INFO: Sources under $env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR\..." Write-Host -ForegroundColor Green "INFO: Test run under $env:TESTRUN_DRIVE`:\$env:TESTRUN_SUBDIR\..." 
# Check the intended source location is a directory if (-not (Test-Path -PathType Container "$env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR\src\github.com\docker\docker" -ErrorAction SilentlyContinue)) { Throw "ERROR: $env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR\src\github.com\docker\docker is not a directory!" } # Make sure we start at the root of the sources Set-Location "$env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR\src\github.com\docker\docker" Write-Host -ForegroundColor Green "INFO: Running in $(Get-Location)" # Make sure we are in repo if (-not (Test-Path -PathType Leaf -Path ".\Dockerfile.windows")) { Throw "$(Get-Location) does not contain Dockerfile.windows!" } Write-Host -ForegroundColor Green "INFO: docker/docker repository was found" # Make sure microsoft/windowsservercore:latest image is installed in the control daemon. On public CI machines, windowsservercore.tar and nanoserver.tar # are pre-baked and tagged appropriately in the c:\baseimages directory, and can be directly loaded. # Note - this script will only work on 10B (Oct 2016) or later machines! Not 9D or previous due to image tagging assumptions. # # On machines not on Microsoft corpnet, or those which have not been pre-baked, we have to docker pull the image in which case it will # will come in directly as microsoft/windowsservercore:latest. The ultimate goal of all this code is to ensure that whatever, # we have microsoft/windowsservercore:latest # # Note we cannot use (as at Oct 2016) nanoserver as the control daemons base image, even if nanoserver is used in the tests themselves. $ErrorActionPreference = "SilentlyContinue" $ControlDaemonBaseImage="windowsservercore" $readBaseFrom="c" if ($((docker images --format "{{.Repository}}:{{.Tag}}" | Select-String $("microsoft/"+$ControlDaemonBaseImage+":latest") | Measure-Object -Line).Lines) -eq 0) { # Try the internal azure CI image version or Microsoft internal corpnet where the base image is already pre-prepared on the disk, # either through Invoke-DockerCI or, in the case of Azure CI servers, baked into the VHD at the same location. if (Test-Path $("$env:SOURCES_DRIVE`:\baseimages\"+$ControlDaemonBaseImage+".tar")) { # An optimization for CI servers to copy it to the D: drive which is an SSD. if ($env:SOURCES_DRIVE -ne $env:TESTRUN_DRIVE) { $readBaseFrom=$env:TESTRUN_DRIVE if (!(Test-Path "$env:TESTRUN_DRIVE`:\baseimages")) { New-Item "$env:TESTRUN_DRIVE`:\baseimages" -type directory | Out-Null } if (!(Test-Path "$env:TESTRUN_DRIVE`:\baseimages\windowsservercore.tar")) { if (Test-Path "$env:SOURCES_DRIVE`:\baseimages\windowsservercore.tar") { Write-Host -ForegroundColor Green "INFO: Optimisation - copying $env:SOURCES_DRIVE`:\baseimages\windowsservercore.tar to $env:TESTRUN_DRIVE`:\baseimages" Copy-Item "$env:SOURCES_DRIVE`:\baseimages\windowsservercore.tar" "$env:TESTRUN_DRIVE`:\baseimages" } } if (!(Test-Path "$env:TESTRUN_DRIVE`:\baseimages\nanoserver.tar")) { if (Test-Path "$env:SOURCES_DRIVE`:\baseimages\nanoserver.tar") { Write-Host -ForegroundColor Green "INFO: Optimisation - copying $env:SOURCES_DRIVE`:\baseimages\nanoserver.tar to $env:TESTRUN_DRIVE`:\baseimages" Copy-Item "$env:SOURCES_DRIVE`:\baseimages\nanoserver.tar" "$env:TESTRUN_DRIVE`:\baseimages" } } $readBaseFrom=$env:TESTRUN_DRIVE } Write-Host -ForegroundColor Green "INFO: Loading"$ControlDaemonBaseImage".tar from disk. This may take some time..." 
$ErrorActionPreference = "SilentlyContinue" docker load -i $("$readBaseFrom`:\baseimages\"+$ControlDaemonBaseImage+".tar") $ErrorActionPreference = "Stop" if (-not $LastExitCode -eq 0) { Throw $("ERROR: Failed to load $readBaseFrom`:\baseimages\"+$ControlDaemonBaseImage+".tar") } Write-Host -ForegroundColor Green "INFO: docker load of"$ControlDaemonBaseImage" completed successfully" } else { # We need to docker pull it instead. It will come in directly as microsoft/imagename:latest Write-Host -ForegroundColor Green $("INFO: Pulling $($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG from docker hub. This may take some time...") $ErrorActionPreference = "SilentlyContinue" docker pull "$($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG" $ErrorActionPreference = "Stop" if (-not $LastExitCode -eq 0) { Throw $("ERROR: Failed to docker pull $($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG.") } Write-Host -ForegroundColor Green $("INFO: docker pull of $($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG completed successfully") Write-Host -ForegroundColor Green $("INFO: Tagging $($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG as microsoft/$ControlDaemonBaseImage") docker tag "$($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG" microsoft/$ControlDaemonBaseImage } } else { Write-Host -ForegroundColor Green "INFO: Image"$("microsoft/"+$ControlDaemonBaseImage+":latest")"is already loaded in the control daemon" } # Inspect the pulled image to get the version directly $ErrorActionPreference = "SilentlyContinue" $imgVersion = $(docker inspect $("microsoft/"+$ControlDaemonBaseImage) --format "{{.OsVersion}}") $ErrorActionPreference = "Stop" Write-Host -ForegroundColor Green $("INFO: Version of microsoft/"+$ControlDaemonBaseImage+":latest is '"+$imgVersion+"'") # Provide the docker version for debugging purposes. Write-Host -ForegroundColor Green "INFO: Docker version of control daemon" Write-Host $ErrorActionPreference = "SilentlyContinue" docker version $ErrorActionPreference = "Stop" if (-not($LastExitCode -eq 0)) { Write-Host Write-Host -ForegroundColor Green "---------------------------------------------------------------------------" Write-Host -ForegroundColor Green " Failed to get a response from the control daemon. It may be down." Write-Host -ForegroundColor Green " Try re-running this CI job, or ask on #docker-maintainers on docker slack" Write-Host -ForegroundColor Green " to see if the daemon is running. Also check the service configuration." Write-Host -ForegroundColor Green " DOCKER_HOST is set to $DOCKER_HOST." Write-Host -ForegroundColor Green "---------------------------------------------------------------------------" Write-Host Throw "ERROR: The control daemon does not appear to be running." } Write-Host # Same as above, but docker info Write-Host -ForegroundColor Green "INFO: Docker info of control daemon" Write-Host $ErrorActionPreference = "SilentlyContinue" docker info $ErrorActionPreference = "Stop" if (-not($LastExitCode -eq 0)) { Throw "ERROR: The control daemon does not appear to be running." } Write-Host # Get the commit has and verify we have something $ErrorActionPreference = "SilentlyContinue" $COMMITHASH=$(git rev-parse --short HEAD) $ErrorActionPreference = "Stop" if (-not($LastExitCode -eq 0)) { Throw "ERROR: Failed to get commit hash. Are you sure this is a docker repository?" 
} Write-Host -ForegroundColor Green "INFO: Commit hash is $COMMITHASH" # Nuke everything and go back to our sources after Nuke-Everything cd "$env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR\src\github.com\docker\docker" # Redirect to a temporary location. $TEMPORIG=$env:TEMP if ($null -eq $env:BUILD_NUMBER) { $env:TEMP="$env:TESTRUN_DRIVE`:\$env:TESTRUN_SUBDIR\CI-$COMMITHASH" } else { # individual temporary location per CI build that better matches the BUILD_URL $env:TEMP="$env:TESTRUN_DRIVE`:\$env:TESTRUN_SUBDIR\$env:BRANCH_NAME\$env:BUILD_NUMBER" } $env:LOCALAPPDATA="$env:TEMP\localappdata" $errorActionPreference='Stop' New-Item -ItemType Directory "$env:TEMP" -ErrorAction SilentlyContinue | Out-Null New-Item -ItemType Directory "$env:TEMP\userprofile" -ErrorAction SilentlyContinue | Out-Null New-Item -ItemType Directory "$env:TEMP\testresults" -ErrorAction SilentlyContinue | Out-Null New-Item -ItemType Directory "$env:TEMP\testresults\unittests" -ErrorAction SilentlyContinue | Out-Null New-Item -ItemType Directory "$env:TEMP\localappdata" -ErrorAction SilentlyContinue | Out-Null New-Item -ItemType Directory "$env:TEMP\binary" -ErrorAction SilentlyContinue | Out-Null New-Item -ItemType Directory "$env:TEMP\installer" -ErrorAction SilentlyContinue | Out-Null if ($null -eq $env:SKIP_COPY_GO) { # Wipe the previous version of GO - we're going to get it out of the image if (Test-Path "$env:TEMP\go") { Remove-Item "$env:TEMP\go" -Recurse -Force -ErrorAction SilentlyContinue | Out-Null } New-Item -ItemType Directory "$env:TEMP\go" -ErrorAction SilentlyContinue | Out-Null } Write-Host -ForegroundColor Green "INFO: Location for testing is $env:TEMP" # CI Integrity check - ensure Dockerfile.windows and Dockerfile go versions match $goVersionDockerfileWindows=(Select-String -Path ".\Dockerfile.windows" -Pattern "^ARG[\s]+GO_VERSION=(.*)$").Matches.groups[1].Value $goVersionDockerfile=(Select-String -Path ".\Dockerfile" -Pattern "^ARG[\s]+GO_VERSION=(.*)$").Matches.groups[1].Value if ($null -eq $goVersionDockerfile) { Throw "ERROR: Failed to extract golang version from Dockerfile" } Write-Host -ForegroundColor Green "INFO: Validating GOLang consistency in Dockerfile.windows..." if (-not ($goVersionDockerfile -eq $goVersionDockerfileWindows)) { Throw "ERROR: Mismatched GO versions between Dockerfile and Dockerfile.windows. Update your PR to ensure that both files are updated and in sync. $goVersionDockerfile $goVersionDockerfileWindows" } # Build the image if ($null -eq $env:SKIP_IMAGE_BUILD) { Write-Host -ForegroundColor Cyan "`n`nINFO: Building the image from Dockerfile.windows at $(Get-Date)..." Write-Host $ErrorActionPreference = "SilentlyContinue" $Duration=$(Measure-Command { docker build --build-arg=GO_VERSION -t docker -f Dockerfile.windows . | Out-Host }) $ErrorActionPreference = "Stop" if (-not($LastExitCode -eq 0)) { Throw "ERROR: Failed to build image from Dockerfile.windows" } Write-Host -ForegroundColor Green "INFO: Image build ended at $(Get-Date). Duration`:$Duration" } else { Write-Host -ForegroundColor Magenta "WARN: Skipping building the docker image" } # Following at the moment must be docker\docker as it's dictated by dockerfile.Windows $contPath="$COMMITHASH`:c`:\gopath\src\github.com\docker\docker\bundles" # After https://github.com/docker/docker/pull/30290, .git was added to .dockerignore. 
Therefore # we have to calculate unsupported outside of the container, and pass the commit ID in through # an environment variable for the binary build $CommitUnsupported="" if ($(git status --porcelain --untracked-files=no).Length -ne 0) { $CommitUnsupported="-unsupported" } # Build the binary in a container unless asked to skip it. if ($null -eq $env:SKIP_BINARY_BUILD) { Write-Host -ForegroundColor Cyan "`n`nINFO: Building the test binaries at $(Get-Date)..." $ErrorActionPreference = "SilentlyContinue" docker rm -f $COMMITHASH 2>&1 | Out-Null if ($CommitUnsupported -ne "") { Write-Host "" Write-Warning "This version is unsupported because there are uncommitted file(s)." Write-Warning "Either commit these changes, or add them to .gitignore." git status --porcelain --untracked-files=no | Write-Warning Write-Host "" } $Duration=$(Measure-Command {docker run --name $COMMITHASH -e DOCKER_GITCOMMIT=$COMMITHASH$CommitUnsupported docker hack\make.ps1 -Daemon -Client | Out-Host }) $ErrorActionPreference = "Stop" if (-not($LastExitCode -eq 0)) { Throw "ERROR: Failed to build binary" } Write-Host -ForegroundColor Green "INFO: Binaries build ended at $(Get-Date). Duration`:$Duration" # Copy the binaries and the generated version_autogen.go out of the container $ErrorActionPreference = "SilentlyContinue" docker cp "$contPath\docker.exe" $env:TEMP\binary\ if (-not($LastExitCode -eq 0)) { Throw "ERROR: Failed to docker cp the client binary (docker.exe) to $env:TEMP\binary" } docker cp "$contPath\dockerd.exe" $env:TEMP\binary\ if (-not($LastExitCode -eq 0)) { Throw "ERROR: Failed to docker cp the daemon binary (dockerd.exe) to $env:TEMP\binary" } docker cp "$COMMITHASH`:c`:\gopath\bin\gotestsum.exe" $env:TEMP\binary\ if (-not (Test-Path "$env:TEMP\binary\gotestsum.exe")) { Throw "ERROR: gotestsum.exe not found...." ` } $ErrorActionPreference = "Stop" # Copy the built dockerd.exe to dockerd-$COMMITHASH.exe so that easily spotted in task manager. Write-Host -ForegroundColor Green "INFO: Copying the built daemon binary to $env:TEMP\binary\dockerd-$COMMITHASH.exe..." Copy-Item $env:TEMP\binary\dockerd.exe $env:TEMP\binary\dockerd-$COMMITHASH.exe -Force -ErrorAction SilentlyContinue # Copy the built docker.exe to docker-$COMMITHASH.exe Write-Host -ForegroundColor Green "INFO: Copying the built client binary to $env:TEMP\binary\docker-$COMMITHASH.exe..." Copy-Item $env:TEMP\binary\docker.exe $env:TEMP\binary\docker-$COMMITHASH.exe -Force -ErrorAction SilentlyContinue } else { Write-Host -ForegroundColor Magenta "WARN: Skipping building the binaries" } Write-Host -ForegroundColor Green "INFO: Copying dockerversion from the container..." $ErrorActionPreference = "SilentlyContinue" docker cp "$contPath\..\dockerversion\version_autogen.go" "$env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR\src\github.com\docker\docker\dockerversion" if (-not($LastExitCode -eq 0)) { Throw "ERROR: Failed to docker cp the generated version_autogen.go to $env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR\src\github.com\docker\docker\dockerversion" } $ErrorActionPreference = "Stop" # Grab the golang installer out of the built image. That way, we know we are consistent once extracted and paths set, # so there's no need to re-deploy on account of an upgrade to the version of GO being used in docker. if ($null -eq $env:SKIP_COPY_GO) { Write-Host -ForegroundColor Green "INFO: Copying the golang package from the container to $env:TEMP\installer\go.zip..." 
docker cp "$COMMITHASH`:c`:\go.zip" $env:TEMP\installer\ if (-not($LastExitCode -eq 0)) { Throw "ERROR: Failed to docker cp the golang installer 'go.zip' from container:c:\go.zip to $env:TEMP\installer" } $ErrorActionPreference = "Stop" # Extract the golang installer Write-Host -ForegroundColor Green "INFO: Extracting go.zip to $env:TEMP\go" $Duration=$(Measure-Command { Expand-Archive $env:TEMP\installer\go.zip $env:TEMP -Force | Out-Null}) Write-Host -ForegroundColor Green "INFO: Extraction ended at $(Get-Date). Duration`:$Duration" } else { Write-Host -ForegroundColor Magenta "WARN: Skipping copying and extracting golang from the image" } # Set the GOPATH Write-Host -ForegroundColor Green "INFO: Updating the golang and path environment variables" $env:GOPATH="$env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR" Write-Host -ForegroundColor Green "INFO: GOPATH=$env:GOPATH" # Set the path to have the version of go from the image at the front $env:PATH="$env:TEMP\go\bin;$env:PATH" # Set the GOROOT to be our copy of go from the image $env:GOROOT="$env:TEMP\go" Write-Host -ForegroundColor Green "INFO: $(go version)" # Work out the -H parameter for the daemon under test (DASHH_DUT) and client under test (DASHH_CUT) #$DASHH_DUT="npipe:////./pipe/$COMMITHASH" # Can't do remote named pipe #$ip = (resolve-dnsname $env:COMPUTERNAME -type A -NoHostsFile -LlmnrNetbiosOnly).IPAddress # Useful to tie down $DASHH_CUT="tcp://127.0.0.1`:2357" # Not a typo for 2375! $DASHH_DUT="tcp://0.0.0.0:2357" # Not a typo for 2375! # Arguments for the daemon under test $dutArgs=@() $dutArgs += "-H $DASHH_DUT" $dutArgs += "--data-root $env:TEMP\daemon" $dutArgs += "--pidfile $env:TEMP\docker.pid" # Save the PID file so we can nuke it if set $pidFile="$env:TEMP\docker.pid" # Arguments: Are we starting the daemon under test in debug mode? if (-not ("$env:DOCKER_DUT_DEBUG" -eq "")) { Write-Host -ForegroundColor Green "INFO: Running the daemon under test in debug mode" $dutArgs += "-D" } # Arguments: Are we starting the daemon under test with Hyper-V containers as the default isolation? if (-not ("$env:DOCKER_DUT_HYPERV" -eq "")) { Write-Host -ForegroundColor Green "INFO: Running the daemon under test with Hyper-V containers as the default" $dutArgs += "--exec-opt isolation=hyperv" } # Arguments: Allow setting optional storage-driver options # example usage: DDOCKER_STORAGE_OPTS="size=40G" if (-not ("$env:DOCKER_STORAGE_OPTS" -eq "")) { Write-Host -ForegroundColor Green "INFO: Running the daemon under test with storage-driver options ${env:DOCKER_STORAGE_OPTS}" $env:DOCKER_STORAGE_OPTS.Split(",") | ForEach-Object { $dutArgs += "--storage-opt $_" } } # Start the daemon under test, ensuring everything is redirected to folders under $TEMP. # Important - we launch the -$COMMITHASH version so that we can kill it without # killing the control daemon. Write-Host -ForegroundColor Green "INFO: Starting a daemon under test..." Write-Host -ForegroundColor Green "INFO: Args: $dutArgs" New-Item -ItemType Directory $env:TEMP\daemon -ErrorAction SilentlyContinue | Out-Null # Cannot fathom why, but always writes to stderr.... Start-Process "$env:TEMP\binary\dockerd-$COMMITHASH" ` -ArgumentList $dutArgs ` -RedirectStandardOutput "$env:TEMP\dut.out" ` -RedirectStandardError "$env:TEMP\dut.err" Write-Host -ForegroundColor Green "INFO: Process started successfully." 
$daemonStarted=1 # Start tailing the daemon under test if the command is installed if ($null -ne (Get-Command "tail" -ErrorAction SilentlyContinue)) { Write-Host -ForegroundColor green "INFO: Start tailing logs of the daemon under tests" $tail = Start-Process "tail" -ArgumentList "-f $env:TEMP\dut.out" -PassThru -ErrorAction SilentlyContinue } # Verify we can get the daemon under test to respond $tries=20 Write-Host -ForegroundColor Green "INFO: Waiting for the daemon under test to start..." while ($true) { $ErrorActionPreference = "SilentlyContinue" & "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" version 2>&1 | Out-Null $ErrorActionPreference = "Stop" if ($LastExitCode -eq 0) { break } $tries-- if ($tries -le 0) { Throw "ERROR: Failed to get a response from the daemon under test" } Write-Host -NoNewline "." sleep 1 } Write-Host -ForegroundColor Green "INFO: Daemon under test started and replied!" # Provide the docker version of the daemon under test for debugging purposes. Write-Host -ForegroundColor Green "INFO: Docker version of the daemon under test" Write-Host $ErrorActionPreference = "SilentlyContinue" & "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" version $ErrorActionPreference = "Stop" if ($LastExitCode -ne 0) { Throw "ERROR: The daemon under test does not appear to be running." } Write-Host # Same as above but docker info Write-Host -ForegroundColor Green "INFO: Docker info of the daemon under test" Write-Host $ErrorActionPreference = "SilentlyContinue" & "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" info $ErrorActionPreference = "Stop" if ($LastExitCode -ne 0) { Throw "ERROR: The daemon under test does not appear to be running." } Write-Host # Same as above but docker images Write-Host -ForegroundColor Green "INFO: Docker images of the daemon under test" Write-Host $ErrorActionPreference = "SilentlyContinue" & "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" images $ErrorActionPreference = "Stop" if ($LastExitCode -ne 0) { Throw "ERROR: The daemon under test does not appear to be running." } Write-Host # Default to windowsservercore for the base image used for the tests. The "docker" image # and the control daemon use microsoft/windowsservercore regardless. This is *JUST* for the tests. if ($null -eq $env:WINDOWS_BASE_IMAGE) { $env:WINDOWS_BASE_IMAGE="microsoft/windowsservercore" } if ($null -eq $env:WINDOWS_BASE_IMAGE_TAG) { $env:WINDOWS_BASE_IMAGE_TAG="latest" } # Lowercase and make sure it has a microsoft/ prefix $env:WINDOWS_BASE_IMAGE = $env:WINDOWS_BASE_IMAGE.ToLower() if (! $($env:WINDOWS_BASE_IMAGE -Split "/")[0] -match "microsoft") { Throw "ERROR: WINDOWS_BASE_IMAGE should start microsoft/ or mcr.microsoft.com/" } Write-Host -ForegroundColor Green "INFO: Base image for tests is $env:WINDOWS_BASE_IMAGE" $ErrorActionPreference = "SilentlyContinue" if ($((& "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" images --format "{{.Repository}}:{{.Tag}}" | Select-String "$($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG" | Measure-Object -Line).Lines) -eq 0) { # Try the internal azure CI image version or Microsoft internal corpnet where the base image is already pre-prepared on the disk, # either through Invoke-DockerCI or, in the case of Azure CI servers, baked into the VHD at the same location. if (Test-Path $("c:\baseimages\"+$($env:WINDOWS_BASE_IMAGE -Split "/")[1]+".tar")) { Write-Host -ForegroundColor Green "INFO: Loading"$($env:WINDOWS_BASE_IMAGE -Split "/")[1]".tar from disk into the daemon under test. 
This may take some time..." $ErrorActionPreference = "SilentlyContinue" & "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" load -i $("$readBaseFrom`:\baseimages\"+$($env:WINDOWS_BASE_IMAGE -Split "/")[1]+".tar") $ErrorActionPreference = "Stop" if (-not $LastExitCode -eq 0) { Throw $("ERROR: Failed to load $readBaseFrom`:\baseimages\"+$($env:WINDOWS_BASE_IMAGE -Split "/")[1]+".tar into daemon under test") } Write-Host -ForegroundColor Green "INFO: docker load of"$($env:WINDOWS_BASE_IMAGE -Split "/")[1]" into daemon under test completed successfully" } else { # We need to docker pull it instead. It will come in directly as microsoft/imagename:tagname Write-Host -ForegroundColor Green $("INFO: Pulling "+$env:WINDOWS_BASE_IMAGE+":"+$env:WINDOWS_BASE_IMAGE_TAG+" from docker hub into daemon under test. This may take some time...") $ErrorActionPreference = "SilentlyContinue" & "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" pull "$($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG" $ErrorActionPreference = "Stop" if (-not $LastExitCode -eq 0) { Throw $("ERROR: Failed to docker pull $($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG into daemon under test.") } Write-Host -ForegroundColor Green $("INFO: docker pull of $($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG into daemon under test completed successfully") Write-Host -ForegroundColor Green $("INFO: Tagging $($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG as microsoft/$ControlDaemonBaseImage in daemon under test") & "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" tag "$($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG" microsoft/$ControlDaemonBaseImage } } else { Write-Host -ForegroundColor Green "INFO: Image $($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG is already loaded in the daemon under test" } # Inspect the pulled or loaded image to get the version directly $ErrorActionPreference = "SilentlyContinue" $dutimgVersion = $(&"$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" inspect "$($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG" --format "{{.OsVersion}}") $ErrorActionPreference = "Stop" Write-Host -ForegroundColor Green $("INFO: Version of $($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG is '"+$dutimgVersion+"'") # Run the validation tests unless SKIP_VALIDATION_TESTS is defined. if ($null -eq $env:SKIP_VALIDATION_TESTS) { Write-Host -ForegroundColor Cyan "INFO: Running validation tests at $(Get-Date)..." $ErrorActionPreference = "SilentlyContinue" $Duration=$(Measure-Command { hack\make.ps1 -DCO -GoFormat -PkgImports | Out-Host }) $ErrorActionPreference = "Stop" if (-not($LastExitCode -eq 0)) { Throw "ERROR: Validation tests failed" } Write-Host -ForegroundColor Green "INFO: Validation tests ended at $(Get-Date). Duration`:$Duration" } else { Write-Host -ForegroundColor Magenta "WARN: Skipping validation tests" } # Run the unit tests inside a container unless SKIP_UNIT_TESTS is defined if ($null -eq $env:SKIP_UNIT_TESTS) { $ContainerNameForUnitTests = $COMMITHASH + "_UnitTests" Write-Host -ForegroundColor Cyan "INFO: Running unit tests at $(Get-Date)..." $ErrorActionPreference = "SilentlyContinue" $Duration=$(Measure-Command {docker run --name $ContainerNameForUnitTests -e DOCKER_GITCOMMIT=$COMMITHASH$CommitUnsupported docker hack\make.ps1 -TestUnit | Out-Host }) $TestRunExitCode = $LastExitCode $ErrorActionPreference = "Stop" # Saving where jenkins will take a look at..... 
New-Item -Force -ItemType Directory bundles | Out-Null $unitTestsContPath="$ContainerNameForUnitTests`:c`:\gopath\src\github.com\docker\docker\bundles" $JunitExpectedContFilePath = "$unitTestsContPath\junit-report-unit-tests.xml" docker cp $JunitExpectedContFilePath "bundles" if (-not($LastExitCode -eq 0)) { Throw "ERROR: Failed to docker cp the unit tests report ($JunitExpectedContFilePath) to bundles" } if (Test-Path "bundles\junit-report-unit-tests.xml") { Write-Host -ForegroundColor Magenta "INFO: Unit tests results(bundles\junit-report-unit-tests.xml) exist. pwd=$pwd" } else { Write-Host -ForegroundColor Magenta "ERROR: Unit tests results(bundles\junit-report-unit-tests.xml) do not exist. pwd=$pwd" } if (-not($TestRunExitCode -eq 0)) { Throw "ERROR: Unit tests failed" } Write-Host -ForegroundColor Green "INFO: Unit tests ended at $(Get-Date). Duration`:$Duration" } else { Write-Host -ForegroundColor Magenta "WARN: Skipping unit tests" } # Add the Windows busybox image. Needed for WCOW integration tests if ($null -eq $env:SKIP_INTEGRATION_TESTS) { Write-Host -ForegroundColor Green "INFO: Building busybox" $ErrorActionPreference = "SilentlyContinue" $(& "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" build -t busybox --build-arg WINDOWS_BASE_IMAGE --build-arg WINDOWS_BASE_IMAGE_TAG "$env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR\src\github.com\docker\docker\contrib\busybox\" | Out-Host) $ErrorActionPreference = "Stop" if (-not($LastExitCode -eq 0)) { Throw "ERROR: Failed to build busybox image" } Write-Host -ForegroundColor Green "INFO: Docker images of the daemon under test" Write-Host $ErrorActionPreference = "SilentlyContinue" & "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" images $ErrorActionPreference = "Stop" if ($LastExitCode -ne 0) { Throw "ERROR: The daemon under test does not appear to be running." } Write-Host } # Run the WCOW integration tests unless SKIP_INTEGRATION_TESTS is defined if ($null -eq $env:SKIP_INTEGRATION_TESTS) { Write-Host -ForegroundColor Cyan "INFO: Running integration tests at $(Get-Date)..." $ErrorActionPreference = "SilentlyContinue" # Location of the daemon under test. $env:OrigDOCKER_HOST="$env:DOCKER_HOST" #https://blogs.technet.microsoft.com/heyscriptingguy/2011/09/20/solve-problems-with-external-command-lines-in-powershell/ is useful to see tokenising $jsonFilePath = "..\\bundles\\go-test-report-intcli-tests.json" $xmlFilePath = "..\\bundles\\junit-report-intcli-tests.xml" $c = "gotestsum --format=standard-verbose --jsonfile=$jsonFilePath --junitfile=$xmlFilePath -- " if ($null -ne $env:INTEGRATION_TEST_NAME) { # Makes is quicker for debugging to be able to run only a subset of the integration tests $c += "`"-test.run`" " $c += "`"$env:INTEGRATION_TEST_NAME`" " Write-Host -ForegroundColor Magenta "WARN: Only running integration tests matching $env:INTEGRATION_TEST_NAME" } $c += "`"-tags`" " + "`"autogen`" " $c += "`"-test.timeout`" " + "`"200m`" " if ($null -ne $env:INTEGRATION_IN_CONTAINER) { Write-Host -ForegroundColor Green "INFO: Integration tests being run inside a container" # Note we talk back through the containers gateway address # And the ridiculous lengths we have to go to get the default gateway address... (GetNetIPConfiguration doesn't work in nanoserver) # I just could not get the escaping to work in a single command, so output $c to a file and run that in the container instead... # Not the prettiest, but it works. 
$c | Out-File -Force "$env:TEMP\binary\runIntegrationCLI.ps1" $Duration= $(Measure-Command { & docker run ` --rm ` -e c=$c ` --workdir "c`:\gopath\src\github.com\docker\docker\integration-cli" ` -v "$env:TEMP\binary`:c:\target" ` docker ` "`$env`:PATH`='c`:\target;'+`$env:PATH`; `$env:DOCKER_HOST`='tcp`://'+(ipconfig | select -last 1).Substring(39)+'`:2357'; c:\target\runIntegrationCLI.ps1" | Out-Host } ) } else { $env:DOCKER_HOST=$DASHH_CUT $env:PATH="$env:TEMP\binary;$env:PATH;" # Force to use the test binaries, not the host ones. $env:GO111MODULE="off" Write-Host -ForegroundColor Green "INFO: DOCKER_HOST at $DASHH_CUT" $ErrorActionPreference = "SilentlyContinue" Write-Host -ForegroundColor Cyan "INFO: Integration API tests being run from the host:" $start=(Get-Date); Invoke-Expression ".\hack\make.ps1 -TestIntegration"; $Duration=New-Timespan -Start $start -End (Get-Date) $IntTestsRunResult = $LastExitCode $ErrorActionPreference = "Stop" if (-not($IntTestsRunResult -eq 0)) { Throw "ERROR: Integration API tests failed at $(Get-Date). Duration`:$Duration" } $ErrorActionPreference = "SilentlyContinue" Write-Host -ForegroundColor Green "INFO: Integration CLI tests being run from the host:" Write-Host -ForegroundColor Green "INFO: $c" Set-Location "$env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR\src\github.com\docker\docker\integration-cli" # Explicit to not use measure-command otherwise don't get output as it goes $start=(Get-Date); Invoke-Expression $c; $Duration=New-Timespan -Start $start -End (Get-Date) } $ErrorActionPreference = "Stop" if (-not($LastExitCode -eq 0)) { Throw "ERROR: Integration CLI tests failed at $(Get-Date). Duration`:$Duration" } Write-Host -ForegroundColor Green "INFO: Integration tests ended at $(Get-Date). Duration`:$Duration" } else { Write-Host -ForegroundColor Magenta "WARN: Skipping integration tests" } # Docker info now to get counts (after or if jjh/containercounts is merged) if ($daemonStarted -eq 1) { Write-Host -ForegroundColor Green "INFO: Docker info of the daemon under test at end of run" Write-Host $ErrorActionPreference = "SilentlyContinue" & "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" info $ErrorActionPreference = "Stop" if ($LastExitCode -ne 0) { Throw "ERROR: The daemon under test does not appear to be running." } Write-Host } # Stop the daemon under test if (Test-Path "$env:TEMP\docker.pid") { $p=Get-Content "$env:TEMP\docker.pid" -raw if (($null -ne $p) -and ($daemonStarted -eq 1)) { Write-Host -ForegroundColor green "INFO: Stopping daemon under test" taskkill -f -t -pid $p #sleep 5 } Remove-Item "$env:TEMP\docker.pid" -force -ErrorAction SilentlyContinue } # Stop the tail process (if started) if ($null -ne $tail) { Write-Host -ForegroundColor green "INFO: Stop tailing logs of the daemon under tests" Stop-Process -InputObject $tail -Force } Write-Host -ForegroundColor Green "INFO: executeCI.ps1 Completed successfully at $(Get-Date)." } Catch [Exception] { $FinallyColour="Red" Write-Host -ForegroundColor Red ("`r`n`r`nERROR: Failed '$_' at $(Get-Date)") Write-Host -ForegroundColor Red ($_.InvocationInfo.PositionMessage) Write-Host "`n`n" # Exit to ensure Jenkins captures it. Don't do this in the ISE or interactive Powershell - they will catch the Throw onwards. 
if ( ([bool]([Environment]::GetCommandLineArgs() -Like '*-NonInteractive*')) -and ` ([bool]([Environment]::GetCommandLineArgs() -NotLike "*Powershell_ISE.exe*"))) { exit 1 } Throw $_ } Finally { $ErrorActionPreference="SilentlyContinue" $global:ProgressPreference=$origProgressPreference Write-Host -ForegroundColor Green "INFO: Tidying up at end of run" # Restore the path if ($null -ne $origPath) { $env:PATH=$origPath } # Restore the DOCKER_HOST if ($null -ne $origDOCKER_HOST) { $env:DOCKER_HOST=$origDOCKER_HOST } # Restore the GOROOT and GOPATH variables if ($null -ne $origGOROOT) { $env:GOROOT=$origGOROOT } if ($null -ne $origGOPATH) { $env:GOPATH=$origGOPATH } # Dump the daemon log. This will include any possible panic stack in the .err. if (($daemonStarted -eq 1) -and ($(Get-Item "$env:TEMP\dut.err").Length -gt 0)) { Write-Host -ForegroundColor Cyan "----------- DAEMON LOG ------------" Get-Content "$env:TEMP\dut.err" -ErrorAction SilentlyContinue | Write-Host -ForegroundColor Cyan Write-Host -ForegroundColor Cyan "----------- END DAEMON LOG --------" } # Save the daemon under test log if ($daemonStarted -eq 1) { Set-Location "$env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR\src\github.com\docker\docker" Write-Host -ForegroundColor Green "INFO: Saving daemon under test log ($env:TEMP\dut.out) to bundles\CIDUT.out" Copy-Item "$env:TEMP\dut.out" "bundles\CIDUT.out" -Force -ErrorAction SilentlyContinue Write-Host -ForegroundColor Green "INFO: Saving daemon under test log ($env:TEMP\dut.err) to bundles\CIDUT.err" Copy-Item "$env:TEMP\dut.err" "bundles\CIDUT.err" -Force -ErrorAction SilentlyContinue } Set-Location "$env:SOURCES_DRIVE\$env:SOURCES_SUBDIR" -ErrorAction SilentlyContinue Nuke-Everything # Restore the TEMP path if ($null -ne $TEMPORIG) { $env:TEMP="$TEMPORIG" } $Dur=New-TimeSpan -Start $StartTime -End $(Get-Date) Write-Host -ForegroundColor $FinallyColour "`nINFO: executeCI.ps1 exiting at $(date). Duration $dur`n" }
thaJeztah
3ad9549e70bdf45b40c6332b221cd5c7fd635524
51b06c6795160d8a1ba05d05d6491df7588b2957
Why the change?
cpuguy83
4,519
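The Windows CI script recorded in the snapshots above is driven entirely by environment variables rather than parameters, as its header comments explain. A minimal local-invocation sketch, not part of either recorded snapshot: the four required variables use the example values from the script's own header, the checkout path is the documented default location, and it assumes docker and docker-ci-zap are already installed and on PATH (the script throws otherwise).

$env:SOURCES_DRIVE  = "c"         # sources cloned under c:\gopath\src\github.com\docker\docker
$env:SOURCES_SUBDIR = "gopath"
$env:TESTRUN_DRIVE  = "d"         # daemon under test runs under d:\CI\CI-<CommitID>
$env:TESTRUN_SUBDIR = "CI"
$env:INTEGRATION_TEST_NAME = "TestInfo*"   # optional: only run tests whose names start with "TestInfo"
& "c:\gopath\src\github.com\docker\docker\hack\ci\windows.ps1"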
moby/moby
42,683
Remove LCOW (step 6)
Splitting off more bits from https://github.com/moby/moby/pull/42170
null
2021-07-27 11:33:51+00:00
2021-07-29 18:34:29+00:00
hack/ci/windows.ps1
# WARNING: When editing this file, consider submitting a PR to # https://github.com/kevpar/docker-w2wCIScripts/blob/master/runCI/executeCI.ps1, and make sure that # https://github.com/kevpar/docker-w2wCIScripts/blob/master/runCI/Invoke-DockerCI.ps1 isn't broken. # Validate using a test context in Jenkins, then copy/paste into Jenkins production. # # Jenkins CI scripts for Windows to Windows CI (Powershell Version) # By John Howard (@jhowardmsft) January 2016 - bash version; July 2016 Ported to PowerShell $ErrorActionPreference = 'Stop' $StartTime=Get-Date Write-Host -ForegroundColor Red "DEBUG: print all environment variables to check how Jenkins runs this script" $allArgs = [Environment]::GetCommandLineArgs() Write-Host -ForegroundColor Red $allArgs Write-Host -ForegroundColor Red "----------------------------------------------------------------------------" # ------------------------------------------------------------------------------------------- # When executed, we rely on four variables being set in the environment: # # [The reason for being environment variables rather than parameters is historical. No reason # why it couldn't be updated.] # # SOURCES_DRIVE is the drive on which the sources being tested are cloned from. # This should be a straight drive letter, no platform semantics. # For example 'c' # # SOURCES_SUBDIR is the top level directory under SOURCES_DRIVE where the # sources are cloned to. There are no platform semantics in this # as it does not include slashes. # For example 'gopath' # # Based on the above examples, it would be expected that Jenkins # would clone the sources being tested to # SOURCES_DRIVE\SOURCES_SUBDIR\src\github.com\docker\docker, or # c:\gopath\src\github.com\docker\docker # # TESTRUN_DRIVE is the drive where we build the binary on and redirect everything # to for the daemon under test. On an Azure D2 type host which has # an SSD temporary storage D: drive, this is ideal for performance. # For example 'd' # # TESTRUN_SUBDIR is the top level directory under TESTRUN_DRIVE where we redirect # everything to for the daemon under test. For example 'CI'. # Hence, the daemon under test is run under # TESTRUN_DRIVE\TESTRUN_SUBDIR\CI-<CommitID> or # d:\CI\CI-<CommitID> # # Optional environment variables help in CI: # # BUILD_NUMBER + BRANCH_NAME are optional variables to be added to the directory below TESTRUN_SUBDIR # to have individual folder per CI build. If some files couldn't be # cleaned up and we want to re-run the build in CI. # Hence, the daemon under test is run under # TESTRUN_DRIVE\TESTRUN_SUBDIR\PR-<PR-Number>\<BuildNumber> or # d:\CI\PR-<PR-Number>\<BuildNumber> # # In addition, the following variables can control the run configuration: # # DOCKER_DUT_DEBUG if defined starts the daemon under test in debug mode. 
# # DOCKER_STORAGE_OPTS comma-separated list of optional storage driver options for the daemon under test # examples: # DOCKER_STORAGE_OPTS="size=40G" # DOCKER_STORAGE_OPTS="lcow.globalmode=false,lcow.kernel=kernel.efi" # # SKIP_VALIDATION_TESTS if defined skips the validation tests # # SKIP_UNIT_TESTS if defined skips the unit tests # # SKIP_INTEGRATION_TESTS if defined skips the integration tests # # SKIP_COPY_GO if defined skips copy the go installer from the image # # DOCKER_DUT_HYPERV if default daemon under test default isolation is hyperv # # INTEGRATION_TEST_NAME to only run partial tests eg "TestInfo*" will only run # any tests starting "TestInfo" # # SKIP_BINARY_BUILD if defined skips building the binary # # SKIP_ZAP_DUT if defined doesn't zap the daemon under test directory # # SKIP_IMAGE_BUILD if defined doesn't build the 'docker' image # # INTEGRATION_IN_CONTAINER if defined, runs the integration tests from inside a container. # As of July 2016, there are known issues with this. # # SKIP_ALL_CLEANUP if defined, skips any cleanup at the start or end of the run # # WINDOWS_BASE_IMAGE if defined, uses that as the base image. Note that the # docker integration tests are also coded to use the same # environment variable, and if no set, defaults to microsoft/windowsservercore # # WINDOWS_BASE_IMAGE_TAG if defined, uses that as the tag name for the base image. # if no set, defaults to latest # # ------------------------------------------------------------------------------------------- # # Jenkins Integration. Add a Windows Powershell build step as follows: # # Write-Host -ForegroundColor green "INFO: Jenkins build step starting" # $CISCRIPT_DEFAULT_LOCATION = "https://raw.githubusercontent.com/moby/moby/master/hack/ci/windows.ps1" # $CISCRIPT_LOCAL_LOCATION = "$env:TEMP\executeCI.ps1" # Write-Host -ForegroundColor green "INFO: Removing cached execution script" # Remove-Item $CISCRIPT_LOCAL_LOCATION -Force -ErrorAction SilentlyContinue 2>&1 | Out-Null # $wc = New-Object net.webclient # try { # Write-Host -ForegroundColor green "INFO: Downloading latest execution script..." # $wc.Downloadfile($CISCRIPT_DEFAULT_LOCATION, $CISCRIPT_LOCAL_LOCATION) # } # catch [System.Net.WebException] # { # Throw ("Failed to download: $_") # } # & $CISCRIPT_LOCAL_LOCATION # ------------------------------------------------------------------------------------------- $SCRIPT_VER="05-Feb-2019 09:03 PDT" $FinallyColour="Cyan" #$env:DOCKER_DUT_DEBUG="yes" # Comment out to not be in debug mode #$env:SKIP_UNIT_TESTS="yes" #$env:SKIP_VALIDATION_TESTS="yes" #$env:SKIP_ZAP_DUT="" #$env:SKIP_BINARY_BUILD="yes" #$env:INTEGRATION_TEST_NAME="" #$env:SKIP_IMAGE_BUILD="yes" #$env:SKIP_ALL_CLEANUP="yes" #$env:INTEGRATION_IN_CONTAINER="yes" #$env:WINDOWS_BASE_IMAGE="" #$env:SKIP_COPY_GO="yes" #$env:INTEGRATION_TESTFLAGS="-test.v" Function Nuke-Everything { $ErrorActionPreference = 'SilentlyContinue' try { if ($null -eq $env:SKIP_ALL_CLEANUP) { Write-Host -ForegroundColor green "INFO: Nuke-Everything..." 
$containerCount = ($(docker ps -aq | Measure-Object -line).Lines) if (-not $LastExitCode -eq 0) { Throw "ERROR: Failed to get container count from control daemon while nuking" } Write-Host -ForegroundColor green "INFO: Container count on control daemon to delete is $containerCount" if ($(docker ps -aq | Measure-Object -line).Lines -gt 0) { docker rm -f $(docker ps -aq) } $allImages = $(docker images --format "{{.Repository}}#{{.ID}}") $toRemove = ($allImages | Select-String -NotMatch "servercore","nanoserver","docker","busybox") $imageCount = ($toRemove | Measure-Object -line).Lines if ($imageCount -gt 0) { Write-Host -Foregroundcolor green "INFO: Non-base image count on control daemon to delete is $imageCount" docker rmi -f ($toRemove | Foreach-Object { $_.ToString().Split("#")[1] }) } } else { Write-Host -ForegroundColor Magenta "WARN: Skipping cleanup of images and containers" } # Kill any spurious daemons. The '-' is IMPORTANT otherwise will kill the control daemon! $pids=$(get-process | where-object {$_.ProcessName -like 'dockerd-*'}).id foreach ($p in $pids) { Write-Host "INFO: Killing daemon with PID $p" Stop-Process -Id $p -Force -ErrorAction SilentlyContinue } if ($null -ne $pidFile) { Write-Host "INFO: Tidying pidfile $pidfile" if (Test-Path $pidFile) { $p=Get-Content $pidFile -raw if ($null -ne $p){ Write-Host -ForegroundColor green "INFO: Stopping possible daemon pid $p" taskkill -f -t -pid $p } Remove-Item "$env:TEMP\docker.pid" -force -ErrorAction SilentlyContinue } } Stop-Process -name "cc1" -Force -ErrorAction SilentlyContinue 2>&1 | Out-Null Stop-Process -name "link" -Force -ErrorAction SilentlyContinue 2>&1 | Out-Null Stop-Process -name "compile" -Force -ErrorAction SilentlyContinue 2>&1 | Out-Null Stop-Process -name "ld" -Force -ErrorAction SilentlyContinue 2>&1 | Out-Null Stop-Process -name "go" -Force -ErrorAction SilentlyContinue 2>&1 | Out-Null Stop-Process -name "git" -Force -ErrorAction SilentlyContinue 2>&1 | Out-Null Stop-Process -name "git-remote-https" -Force -ErrorAction SilentlyContinue 2>&1 | Out-Null Stop-Process -name "integration-cli.test" -Force -ErrorAction SilentlyContinue 2>&1 | Out-Null Stop-Process -name "tail" -Force -ErrorAction SilentlyContinue 2>&1 | Out-Null # Detach any VHDs gwmi msvm_mountedstorageimage -namespace root/virtualization/v2 -ErrorAction SilentlyContinue | foreach-object {$_.DetachVirtualHardDisk() } # Stop any compute processes Get-ComputeProcess | Stop-ComputeProcess -Force # Delete the directory using our dangerous utility unless told not to if (Test-Path "$env:TESTRUN_DRIVE`:\$env:TESTRUN_SUBDIR") { if (($null -ne $env:SKIP_ZAP_DUT) -or ($null -eq $env:SKIP_ALL_CLEANUP)) { Write-Host -ForegroundColor Green "INFO: Nuking $env:TESTRUN_DRIVE`:\$env:TESTRUN_SUBDIR" docker-ci-zap "-folder=$env:TESTRUN_DRIVE`:\$env:TESTRUN_SUBDIR" } else { Write-Host -ForegroundColor Magenta "WARN: Skip nuking $env:TESTRUN_DRIVE`:\$env:TESTRUN_SUBDIR" } } # TODO: This should be able to be removed in August 2017 update. Only needed for RS1 Production Server workaround - Psched $reg = "HKLM:\System\CurrentControlSet\Services\Psched\Parameters\NdisAdapters" $count=(Get-ChildItem $reg | Measure-Object).Count if ($count -gt 0) { Write-Warning "There are $count NdisAdapters leaked under Psched\Parameters" Write-Warning "Cleaning Psched..." Get-ChildItem $reg | Remove-Item -Recurse -Force -ErrorAction SilentlyContinue | Out-Null } # TODO: This should be able to be removed in August 2017 update. 
Only needed for RS1 $reg = "HKLM:\System\CurrentControlSet\Services\WFPLWFS\Parameters\NdisAdapters" $count=(Get-ChildItem $reg | Measure-Object).Count if ($count -gt 0) { Write-Warning "There are $count NdisAdapters leaked under WFPLWFS\Parameters" Write-Warning "Cleaning WFPLWFS..." Get-ChildItem $reg | Remove-Item -Recurse -Force -ErrorAction SilentlyContinue | Out-Null } } catch { # Don't throw any errors onwards Throw $_ } } Try { Write-Host -ForegroundColor Cyan "`nINFO: executeCI.ps1 starting at $(date)`n" Write-Host -ForegroundColor Green "INFO: Script version $SCRIPT_VER" Set-PSDebug -Trace 0 # 1 to turn on $origPath="$env:PATH" # so we can restore it at the end $origDOCKER_HOST="$DOCKER_HOST" # So we can restore it at the end $origGOROOT="$env:GOROOT" # So we can restore it at the end $origGOPATH="$env:GOPATH" # So we can restore it at the end # Turn off progress bars $origProgressPreference=$global:ProgressPreference $global:ProgressPreference='SilentlyContinue' # Git version Write-Host -ForegroundColor Green "INFO: Running $(git version)" # OS Version $bl=(Get-ItemProperty -Path "Registry::HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion" -Name BuildLabEx).BuildLabEx $a=$bl.ToString().Split(".") $Branch=$a[3] $WindowsBuild=$a[0]+"."+$a[1]+"."+$a[4] Write-Host -ForegroundColor green "INFO: Branch:$Branch Build:$WindowsBuild" # List the environment variables Write-Host -ForegroundColor green "INFO: Environment variables:" Get-ChildItem Env: | Out-String # PR if (-not ($null -eq $env:PR)) { Write-Output "INFO: PR#$env:PR (https://github.com/docker/docker/pull/$env:PR)" } # Make sure docker is installed if ($null -eq (Get-Command "docker" -ErrorAction SilentlyContinue)) { Throw "ERROR: docker is not installed or not found on path" } # Make sure docker-ci-zap is installed if ($null -eq (Get-Command "docker-ci-zap" -ErrorAction SilentlyContinue)) { Throw "ERROR: docker-ci-zap is not installed or not found on path" } # Make sure Windows Defender is disabled $defender = $false Try { $status = Get-MpComputerStatus if ($status) { if ($status.RealTimeProtectionEnabled) { $defender = $true } } } Catch {} if ($defender) { Write-Host -ForegroundColor Magenta "WARN: Windows Defender real time protection is enabled, which may cause some integration tests to fail" } # Make sure SOURCES_DRIVE is set if ($null -eq $env:SOURCES_DRIVE) { Throw "ERROR: Environment variable SOURCES_DRIVE is not set" } # Make sure TESTRUN_DRIVE is set if ($null -eq $env:TESTRUN_DRIVE) { Throw "ERROR: Environment variable TESTRUN_DRIVE is not set" } # Make sure SOURCES_SUBDIR is set if ($null -eq $env:SOURCES_SUBDIR) { Throw "ERROR: Environment variable SOURCES_SUBDIR is not set" } # Make sure TESTRUN_SUBDIR is set if ($null -eq $env:TESTRUN_SUBDIR) { Throw "ERROR: Environment variable TESTRUN_SUBDIR is not set" } # SOURCES_DRIVE\SOURCES_SUBDIR must be a directory and exist if (-not (Test-Path -PathType Container "$env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR")) { Throw "ERROR: $env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR must be an existing directory" } # Create the TESTRUN_DRIVE\TESTRUN_SUBDIR if it does not already exist New-Item -ItemType Directory -Force -Path "$env:TESTRUN_DRIVE`:\$env:TESTRUN_SUBDIR" -ErrorAction SilentlyContinue | Out-Null Write-Host -ForegroundColor Green "INFO: Sources under $env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR\..." Write-Host -ForegroundColor Green "INFO: Test run under $env:TESTRUN_DRIVE`:\$env:TESTRUN_SUBDIR\..." 
# Check the intended source location is a directory if (-not (Test-Path -PathType Container "$env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR\src\github.com\docker\docker" -ErrorAction SilentlyContinue)) { Throw "ERROR: $env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR\src\github.com\docker\docker is not a directory!" } # Make sure we start at the root of the sources Set-Location "$env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR\src\github.com\docker\docker" Write-Host -ForegroundColor Green "INFO: Running in $(Get-Location)" # Make sure we are in repo if (-not (Test-Path -PathType Leaf -Path ".\Dockerfile.windows")) { Throw "$(Get-Location) does not contain Dockerfile.windows!" } Write-Host -ForegroundColor Green "INFO: docker/docker repository was found" # Make sure microsoft/windowsservercore:latest image is installed in the control daemon. On public CI machines, windowsservercore.tar and nanoserver.tar # are pre-baked and tagged appropriately in the c:\baseimages directory, and can be directly loaded. # Note - this script will only work on 10B (Oct 2016) or later machines! Not 9D or previous due to image tagging assumptions. # # On machines not on Microsoft corpnet, or those which have not been pre-baked, we have to docker pull the image in which case it will # will come in directly as microsoft/windowsservercore:latest. The ultimate goal of all this code is to ensure that whatever, # we have microsoft/windowsservercore:latest # # Note we cannot use (as at Oct 2016) nanoserver as the control daemons base image, even if nanoserver is used in the tests themselves. $ErrorActionPreference = "SilentlyContinue" $ControlDaemonBaseImage="windowsservercore" $readBaseFrom="c" if ($((docker images --format "{{.Repository}}:{{.Tag}}" | Select-String $("microsoft/"+$ControlDaemonBaseImage+":latest") | Measure-Object -Line).Lines) -eq 0) { # Try the internal azure CI image version or Microsoft internal corpnet where the base image is already pre-prepared on the disk, # either through Invoke-DockerCI or, in the case of Azure CI servers, baked into the VHD at the same location. if (Test-Path $("$env:SOURCES_DRIVE`:\baseimages\"+$ControlDaemonBaseImage+".tar")) { # An optimization for CI servers to copy it to the D: drive which is an SSD. if ($env:SOURCES_DRIVE -ne $env:TESTRUN_DRIVE) { $readBaseFrom=$env:TESTRUN_DRIVE if (!(Test-Path "$env:TESTRUN_DRIVE`:\baseimages")) { New-Item "$env:TESTRUN_DRIVE`:\baseimages" -type directory | Out-Null } if (!(Test-Path "$env:TESTRUN_DRIVE`:\baseimages\windowsservercore.tar")) { if (Test-Path "$env:SOURCES_DRIVE`:\baseimages\windowsservercore.tar") { Write-Host -ForegroundColor Green "INFO: Optimisation - copying $env:SOURCES_DRIVE`:\baseimages\windowsservercore.tar to $env:TESTRUN_DRIVE`:\baseimages" Copy-Item "$env:SOURCES_DRIVE`:\baseimages\windowsservercore.tar" "$env:TESTRUN_DRIVE`:\baseimages" } } if (!(Test-Path "$env:TESTRUN_DRIVE`:\baseimages\nanoserver.tar")) { if (Test-Path "$env:SOURCES_DRIVE`:\baseimages\nanoserver.tar") { Write-Host -ForegroundColor Green "INFO: Optimisation - copying $env:SOURCES_DRIVE`:\baseimages\nanoserver.tar to $env:TESTRUN_DRIVE`:\baseimages" Copy-Item "$env:SOURCES_DRIVE`:\baseimages\nanoserver.tar" "$env:TESTRUN_DRIVE`:\baseimages" } } $readBaseFrom=$env:TESTRUN_DRIVE } Write-Host -ForegroundColor Green "INFO: Loading"$ControlDaemonBaseImage".tar from disk. This may take some time..." 
$ErrorActionPreference = "SilentlyContinue" docker load -i $("$readBaseFrom`:\baseimages\"+$ControlDaemonBaseImage+".tar") $ErrorActionPreference = "Stop" if (-not $LastExitCode -eq 0) { Throw $("ERROR: Failed to load $readBaseFrom`:\baseimages\"+$ControlDaemonBaseImage+".tar") } Write-Host -ForegroundColor Green "INFO: docker load of"$ControlDaemonBaseImage" completed successfully" } else { # We need to docker pull it instead. It will come in directly as microsoft/imagename:latest Write-Host -ForegroundColor Green $("INFO: Pulling $($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG from docker hub. This may take some time...") $ErrorActionPreference = "SilentlyContinue" docker pull "$($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG" $ErrorActionPreference = "Stop" if (-not $LastExitCode -eq 0) { Throw $("ERROR: Failed to docker pull $($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG.") } Write-Host -ForegroundColor Green $("INFO: docker pull of $($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG completed successfully") Write-Host -ForegroundColor Green $("INFO: Tagging $($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG as microsoft/$ControlDaemonBaseImage") docker tag "$($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG" microsoft/$ControlDaemonBaseImage } } else { Write-Host -ForegroundColor Green "INFO: Image"$("microsoft/"+$ControlDaemonBaseImage+":latest")"is already loaded in the control daemon" } # Inspect the pulled image to get the version directly $ErrorActionPreference = "SilentlyContinue" $imgVersion = $(docker inspect $("microsoft/"+$ControlDaemonBaseImage) --format "{{.OsVersion}}") $ErrorActionPreference = "Stop" Write-Host -ForegroundColor Green $("INFO: Version of microsoft/"+$ControlDaemonBaseImage+":latest is '"+$imgVersion+"'") # Provide the docker version for debugging purposes. Write-Host -ForegroundColor Green "INFO: Docker version of control daemon" Write-Host $ErrorActionPreference = "SilentlyContinue" docker version $ErrorActionPreference = "Stop" if (-not($LastExitCode -eq 0)) { Write-Host Write-Host -ForegroundColor Green "---------------------------------------------------------------------------" Write-Host -ForegroundColor Green " Failed to get a response from the control daemon. It may be down." Write-Host -ForegroundColor Green " Try re-running this CI job, or ask on #docker-maintainers on docker slack" Write-Host -ForegroundColor Green " to see if the daemon is running. Also check the service configuration." Write-Host -ForegroundColor Green " DOCKER_HOST is set to $DOCKER_HOST." Write-Host -ForegroundColor Green "---------------------------------------------------------------------------" Write-Host Throw "ERROR: The control daemon does not appear to be running." } Write-Host # Same as above, but docker info Write-Host -ForegroundColor Green "INFO: Docker info of control daemon" Write-Host $ErrorActionPreference = "SilentlyContinue" docker info $ErrorActionPreference = "Stop" if (-not($LastExitCode -eq 0)) { Throw "ERROR: The control daemon does not appear to be running." } Write-Host # Get the commit has and verify we have something $ErrorActionPreference = "SilentlyContinue" $COMMITHASH=$(git rev-parse --short HEAD) $ErrorActionPreference = "Stop" if (-not($LastExitCode -eq 0)) { Throw "ERROR: Failed to get commit hash. Are you sure this is a docker repository?" 
} Write-Host -ForegroundColor Green "INFO: Commit hash is $COMMITHASH" # Nuke everything and go back to our sources after Nuke-Everything cd "$env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR\src\github.com\docker\docker" # Redirect to a temporary location. $TEMPORIG=$env:TEMP if ($null -eq $env:BUILD_NUMBER) { $env:TEMP="$env:TESTRUN_DRIVE`:\$env:TESTRUN_SUBDIR\CI-$COMMITHASH" } else { # individual temporary location per CI build that better matches the BUILD_URL $env:TEMP="$env:TESTRUN_DRIVE`:\$env:TESTRUN_SUBDIR\$env:BRANCH_NAME\$env:BUILD_NUMBER" } $env:LOCALAPPDATA="$env:TEMP\localappdata" $errorActionPreference='Stop' New-Item -ItemType Directory "$env:TEMP" -ErrorAction SilentlyContinue | Out-Null New-Item -ItemType Directory "$env:TEMP\userprofile" -ErrorAction SilentlyContinue | Out-Null New-Item -ItemType Directory "$env:TEMP\testresults" -ErrorAction SilentlyContinue | Out-Null New-Item -ItemType Directory "$env:TEMP\testresults\unittests" -ErrorAction SilentlyContinue | Out-Null New-Item -ItemType Directory "$env:TEMP\localappdata" -ErrorAction SilentlyContinue | Out-Null New-Item -ItemType Directory "$env:TEMP\binary" -ErrorAction SilentlyContinue | Out-Null New-Item -ItemType Directory "$env:TEMP\installer" -ErrorAction SilentlyContinue | Out-Null if ($null -eq $env:SKIP_COPY_GO) { # Wipe the previous version of GO - we're going to get it out of the image if (Test-Path "$env:TEMP\go") { Remove-Item "$env:TEMP\go" -Recurse -Force -ErrorAction SilentlyContinue | Out-Null } New-Item -ItemType Directory "$env:TEMP\go" -ErrorAction SilentlyContinue | Out-Null } Write-Host -ForegroundColor Green "INFO: Location for testing is $env:TEMP" # CI Integrity check - ensure Dockerfile.windows and Dockerfile go versions match $goVersionDockerfileWindows=(Select-String -Path ".\Dockerfile.windows" -Pattern "^ARG[\s]+GO_VERSION=(.*)$").Matches.groups[1].Value $goVersionDockerfile=(Select-String -Path ".\Dockerfile" -Pattern "^ARG[\s]+GO_VERSION=(.*)$").Matches.groups[1].Value if ($null -eq $goVersionDockerfile) { Throw "ERROR: Failed to extract golang version from Dockerfile" } Write-Host -ForegroundColor Green "INFO: Validating GOLang consistency in Dockerfile.windows..." if (-not ($goVersionDockerfile -eq $goVersionDockerfileWindows)) { Throw "ERROR: Mismatched GO versions between Dockerfile and Dockerfile.windows. Update your PR to ensure that both files are updated and in sync. $goVersionDockerfile $goVersionDockerfileWindows" } # Build the image if ($null -eq $env:SKIP_IMAGE_BUILD) { Write-Host -ForegroundColor Cyan "`n`nINFO: Building the image from Dockerfile.windows at $(Get-Date)..." Write-Host $ErrorActionPreference = "SilentlyContinue" $Duration=$(Measure-Command { docker build --build-arg=GO_VERSION -t docker -f Dockerfile.windows . | Out-Host }) $ErrorActionPreference = "Stop" if (-not($LastExitCode -eq 0)) { Throw "ERROR: Failed to build image from Dockerfile.windows" } Write-Host -ForegroundColor Green "INFO: Image build ended at $(Get-Date). Duration`:$Duration" } else { Write-Host -ForegroundColor Magenta "WARN: Skipping building the docker image" } # Following at the moment must be docker\docker as it's dictated by dockerfile.Windows $contPath="$COMMITHASH`:c`:\gopath\src\github.com\docker\docker\bundles" # After https://github.com/docker/docker/pull/30290, .git was added to .dockerignore. 
Therefore # we have to calculate unsupported outside of the container, and pass the commit ID in through # an environment variable for the binary build $CommitUnsupported="" if ($(git status --porcelain --untracked-files=no).Length -ne 0) { $CommitUnsupported="-unsupported" } # Build the binary in a container unless asked to skip it. if ($null -eq $env:SKIP_BINARY_BUILD) { Write-Host -ForegroundColor Cyan "`n`nINFO: Building the test binaries at $(Get-Date)..." $ErrorActionPreference = "SilentlyContinue" docker rm -f $COMMITHASH 2>&1 | Out-Null if ($CommitUnsupported -ne "") { Write-Host "" Write-Warning "This version is unsupported because there are uncommitted file(s)." Write-Warning "Either commit these changes, or add them to .gitignore." git status --porcelain --untracked-files=no | Write-Warning Write-Host "" } $Duration=$(Measure-Command {docker run --name $COMMITHASH -e DOCKER_GITCOMMIT=$COMMITHASH$CommitUnsupported docker hack\make.ps1 -Daemon -Client | Out-Host }) $ErrorActionPreference = "Stop" if (-not($LastExitCode -eq 0)) { Throw "ERROR: Failed to build binary" } Write-Host -ForegroundColor Green "INFO: Binaries build ended at $(Get-Date). Duration`:$Duration" # Copy the binaries and the generated version_autogen.go out of the container $ErrorActionPreference = "SilentlyContinue" docker cp "$contPath\docker.exe" $env:TEMP\binary\ if (-not($LastExitCode -eq 0)) { Throw "ERROR: Failed to docker cp the client binary (docker.exe) to $env:TEMP\binary" } docker cp "$contPath\dockerd.exe" $env:TEMP\binary\ if (-not($LastExitCode -eq 0)) { Throw "ERROR: Failed to docker cp the daemon binary (dockerd.exe) to $env:TEMP\binary" } docker cp "$COMMITHASH`:c`:\gopath\bin\gotestsum.exe" $env:TEMP\binary\ if (-not (Test-Path "$env:TEMP\binary\gotestsum.exe")) { Throw "ERROR: gotestsum.exe not found...." ` } $ErrorActionPreference = "Stop" # Copy the built dockerd.exe to dockerd-$COMMITHASH.exe so that easily spotted in task manager. Write-Host -ForegroundColor Green "INFO: Copying the built daemon binary to $env:TEMP\binary\dockerd-$COMMITHASH.exe..." Copy-Item $env:TEMP\binary\dockerd.exe $env:TEMP\binary\dockerd-$COMMITHASH.exe -Force -ErrorAction SilentlyContinue # Copy the built docker.exe to docker-$COMMITHASH.exe Write-Host -ForegroundColor Green "INFO: Copying the built client binary to $env:TEMP\binary\docker-$COMMITHASH.exe..." Copy-Item $env:TEMP\binary\docker.exe $env:TEMP\binary\docker-$COMMITHASH.exe -Force -ErrorAction SilentlyContinue } else { Write-Host -ForegroundColor Magenta "WARN: Skipping building the binaries" } Write-Host -ForegroundColor Green "INFO: Copying dockerversion from the container..." $ErrorActionPreference = "SilentlyContinue" docker cp "$contPath\..\dockerversion\version_autogen.go" "$env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR\src\github.com\docker\docker\dockerversion" if (-not($LastExitCode -eq 0)) { Throw "ERROR: Failed to docker cp the generated version_autogen.go to $env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR\src\github.com\docker\docker\dockerversion" } $ErrorActionPreference = "Stop" # Grab the golang installer out of the built image. That way, we know we are consistent once extracted and paths set, # so there's no need to re-deploy on account of an upgrade to the version of GO being used in docker. if ($null -eq $env:SKIP_COPY_GO) { Write-Host -ForegroundColor Green "INFO: Copying the golang package from the container to $env:TEMP\installer\go.zip..." 
docker cp "$COMMITHASH`:c`:\go.zip" $env:TEMP\installer\ if (-not($LastExitCode -eq 0)) { Throw "ERROR: Failed to docker cp the golang installer 'go.zip' from container:c:\go.zip to $env:TEMP\installer" } $ErrorActionPreference = "Stop" # Extract the golang installer Write-Host -ForegroundColor Green "INFO: Extracting go.zip to $env:TEMP\go" $Duration=$(Measure-Command { Expand-Archive $env:TEMP\installer\go.zip $env:TEMP -Force | Out-Null}) Write-Host -ForegroundColor Green "INFO: Extraction ended at $(Get-Date). Duration`:$Duration" } else { Write-Host -ForegroundColor Magenta "WARN: Skipping copying and extracting golang from the image" } # Set the GOPATH Write-Host -ForegroundColor Green "INFO: Updating the golang and path environment variables" $env:GOPATH="$env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR" Write-Host -ForegroundColor Green "INFO: GOPATH=$env:GOPATH" # Set the path to have the version of go from the image at the front $env:PATH="$env:TEMP\go\bin;$env:PATH" # Set the GOROOT to be our copy of go from the image $env:GOROOT="$env:TEMP\go" Write-Host -ForegroundColor Green "INFO: $(go version)" # Work out the -H parameter for the daemon under test (DASHH_DUT) and client under test (DASHH_CUT) #$DASHH_DUT="npipe:////./pipe/$COMMITHASH" # Can't do remote named pipe #$ip = (resolve-dnsname $env:COMPUTERNAME -type A -NoHostsFile -LlmnrNetbiosOnly).IPAddress # Useful to tie down $DASHH_CUT="tcp://127.0.0.1`:2357" # Not a typo for 2375! $DASHH_DUT="tcp://0.0.0.0:2357" # Not a typo for 2375! # Arguments for the daemon under test $dutArgs=@() $dutArgs += "-H $DASHH_DUT" $dutArgs += "--data-root $env:TEMP\daemon" $dutArgs += "--pidfile $env:TEMP\docker.pid" # Save the PID file so we can nuke it if set $pidFile="$env:TEMP\docker.pid" # Arguments: Are we starting the daemon under test in debug mode? if (-not ("$env:DOCKER_DUT_DEBUG" -eq "")) { Write-Host -ForegroundColor Green "INFO: Running the daemon under test in debug mode" $dutArgs += "-D" } # Arguments: Are we starting the daemon under test with Hyper-V containers as the default isolation? if (-not ("$env:DOCKER_DUT_HYPERV" -eq "")) { Write-Host -ForegroundColor Green "INFO: Running the daemon under test with Hyper-V containers as the default" $dutArgs += "--exec-opt isolation=hyperv" } # Arguments: Allow setting optional storage-driver options # example usage: DOCKER_STORAGE_OPTS="lcow.globalmode=false,lcow.kernel=kernel.efi" if (-not ("$env:DOCKER_STORAGE_OPTS" -eq "")) { Write-Host -ForegroundColor Green "INFO: Running the daemon under test with storage-driver options ${env:DOCKER_STORAGE_OPTS}" $env:DOCKER_STORAGE_OPTS.Split(",") | ForEach { $dutArgs += "--storage-opt $_" } } # Start the daemon under test, ensuring everything is redirected to folders under $TEMP. # Important - we launch the -$COMMITHASH version so that we can kill it without # killing the control daemon. Write-Host -ForegroundColor Green "INFO: Starting a daemon under test..." Write-Host -ForegroundColor Green "INFO: Args: $dutArgs" New-Item -ItemType Directory $env:TEMP\daemon -ErrorAction SilentlyContinue | Out-Null # Cannot fathom why, but always writes to stderr.... Start-Process "$env:TEMP\binary\dockerd-$COMMITHASH" ` -ArgumentList $dutArgs ` -RedirectStandardOutput "$env:TEMP\dut.out" ` -RedirectStandardError "$env:TEMP\dut.err" Write-Host -ForegroundColor Green "INFO: Process started successfully." 
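# Putting the arguments above together: with none of the optional variables set, the daemon
# under test is effectively launched as
#   dockerd-$COMMITHASH -H tcp://0.0.0.0:2357 --data-root $env:TEMP\daemon --pidfile $env:TEMP\docker.pid
# with -D, --exec-opt isolation=hyperv and --storage-opt <value> appended when DOCKER_DUT_DEBUG,
# DOCKER_DUT_HYPERV or DOCKER_STORAGE_OPTS are set, and with stdout/stderr redirected to
# $env:TEMP\dut.out and $env:TEMP\dut.err.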
$daemonStarted=1 # Start tailing the daemon under test if the command is installed if ($null -ne (Get-Command "tail" -ErrorAction SilentlyContinue)) { Write-Host -ForegroundColor green "INFO: Start tailing logs of the daemon under tests" $tail = Start-Process "tail" -ArgumentList "-f $env:TEMP\dut.out" -PassThru -ErrorAction SilentlyContinue } # Verify we can get the daemon under test to respond $tries=20 Write-Host -ForegroundColor Green "INFO: Waiting for the daemon under test to start..." while ($true) { $ErrorActionPreference = "SilentlyContinue" & "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" version 2>&1 | Out-Null $ErrorActionPreference = "Stop" if ($LastExitCode -eq 0) { break } $tries-- if ($tries -le 0) { Throw "ERROR: Failed to get a response from the daemon under test" } Write-Host -NoNewline "." sleep 1 } Write-Host -ForegroundColor Green "INFO: Daemon under test started and replied!" # Provide the docker version of the daemon under test for debugging purposes. Write-Host -ForegroundColor Green "INFO: Docker version of the daemon under test" Write-Host $ErrorActionPreference = "SilentlyContinue" & "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" version $ErrorActionPreference = "Stop" if ($LastExitCode -ne 0) { Throw "ERROR: The daemon under test does not appear to be running." } Write-Host # Same as above but docker info Write-Host -ForegroundColor Green "INFO: Docker info of the daemon under test" Write-Host $ErrorActionPreference = "SilentlyContinue" & "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" info $ErrorActionPreference = "Stop" if ($LastExitCode -ne 0) { Throw "ERROR: The daemon under test does not appear to be running." } Write-Host # Same as above but docker images Write-Host -ForegroundColor Green "INFO: Docker images of the daemon under test" Write-Host $ErrorActionPreference = "SilentlyContinue" & "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" images $ErrorActionPreference = "Stop" if ($LastExitCode -ne 0) { Throw "ERROR: The daemon under test does not appear to be running." } Write-Host # Default to windowsservercore for the base image used for the tests. The "docker" image # and the control daemon use microsoft/windowsservercore regardless. This is *JUST* for the tests. if ($null -eq $env:WINDOWS_BASE_IMAGE) { $env:WINDOWS_BASE_IMAGE="microsoft/windowsservercore" } if ($null -eq $env:WINDOWS_BASE_IMAGE_TAG) { $env:WINDOWS_BASE_IMAGE_TAG="latest" } # Lowercase and make sure it has a microsoft/ prefix $env:WINDOWS_BASE_IMAGE = $env:WINDOWS_BASE_IMAGE.ToLower() if (! $($env:WINDOWS_BASE_IMAGE -Split "/")[0] -match "microsoft") { Throw "ERROR: WINDOWS_BASE_IMAGE should start microsoft/ or mcr.microsoft.com/" } Write-Host -ForegroundColor Green "INFO: Base image for tests is $env:WINDOWS_BASE_IMAGE" $ErrorActionPreference = "SilentlyContinue" if ($((& "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" images --format "{{.Repository}}:{{.Tag}}" | Select-String "$($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG" | Measure-Object -Line).Lines) -eq 0) { # Try the internal azure CI image version or Microsoft internal corpnet where the base image is already pre-prepared on the disk, # either through Invoke-DockerCI or, in the case of Azure CI servers, baked into the VHD at the same location. if (Test-Path $("c:\baseimages\"+$($env:WINDOWS_BASE_IMAGE -Split "/")[1]+".tar")) { Write-Host -ForegroundColor Green "INFO: Loading"$($env:WINDOWS_BASE_IMAGE -Split "/")[1]".tar from disk into the daemon under test. 
This may take some time..." $ErrorActionPreference = "SilentlyContinue" & "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" load -i $("$readBaseFrom`:\baseimages\"+$($env:WINDOWS_BASE_IMAGE -Split "/")[1]+".tar") $ErrorActionPreference = "Stop" if (-not $LastExitCode -eq 0) { Throw $("ERROR: Failed to load $readBaseFrom`:\baseimages\"+$($env:WINDOWS_BASE_IMAGE -Split "/")[1]+".tar into daemon under test") } Write-Host -ForegroundColor Green "INFO: docker load of"$($env:WINDOWS_BASE_IMAGE -Split "/")[1]" into daemon under test completed successfully" } else { # We need to docker pull it instead. It will come in directly as microsoft/imagename:tagname Write-Host -ForegroundColor Green $("INFO: Pulling "+$env:WINDOWS_BASE_IMAGE+":"+$env:WINDOWS_BASE_IMAGE_TAG+" from docker hub into daemon under test. This may take some time...") $ErrorActionPreference = "SilentlyContinue" & "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" pull "$($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG" $ErrorActionPreference = "Stop" if (-not $LastExitCode -eq 0) { Throw $("ERROR: Failed to docker pull $($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG into daemon under test.") } Write-Host -ForegroundColor Green $("INFO: docker pull of $($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG into daemon under test completed successfully") Write-Host -ForegroundColor Green $("INFO: Tagging $($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG as microsoft/$ControlDaemonBaseImage in daemon under test") & "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" tag "$($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG" microsoft/$ControlDaemonBaseImage } } else { Write-Host -ForegroundColor Green "INFO: Image $($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG is already loaded in the daemon under test" } # Inspect the pulled or loaded image to get the version directly $ErrorActionPreference = "SilentlyContinue" $dutimgVersion = $(&"$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" inspect "$($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG" --format "{{.OsVersion}}") $ErrorActionPreference = "Stop" Write-Host -ForegroundColor Green $("INFO: Version of $($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG is '"+$dutimgVersion+"'") # Run the validation tests unless SKIP_VALIDATION_TESTS is defined. if ($null -eq $env:SKIP_VALIDATION_TESTS) { Write-Host -ForegroundColor Cyan "INFO: Running validation tests at $(Get-Date)..." $ErrorActionPreference = "SilentlyContinue" $Duration=$(Measure-Command { hack\make.ps1 -DCO -GoFormat -PkgImports | Out-Host }) $ErrorActionPreference = "Stop" if (-not($LastExitCode -eq 0)) { Throw "ERROR: Validation tests failed" } Write-Host -ForegroundColor Green "INFO: Validation tests ended at $(Get-Date). Duration`:$Duration" } else { Write-Host -ForegroundColor Magenta "WARN: Skipping validation tests" } # Run the unit tests inside a container unless SKIP_UNIT_TESTS is defined if ($null -eq $env:SKIP_UNIT_TESTS) { $ContainerNameForUnitTests = $COMMITHASH + "_UnitTests" Write-Host -ForegroundColor Cyan "INFO: Running unit tests at $(Get-Date)..." $ErrorActionPreference = "SilentlyContinue" $Duration=$(Measure-Command {docker run --name $ContainerNameForUnitTests -e DOCKER_GITCOMMIT=$COMMITHASH$CommitUnsupported docker hack\make.ps1 -TestUnit | Out-Host }) $TestRunExitCode = $LastExitCode $ErrorActionPreference = "Stop" # Saving where jenkins will take a look at..... 
New-Item -Force -ItemType Directory bundles | Out-Null $unitTestsContPath="$ContainerNameForUnitTests`:c`:\gopath\src\github.com\docker\docker\bundles" $JunitExpectedContFilePath = "$unitTestsContPath\junit-report-unit-tests.xml" docker cp $JunitExpectedContFilePath "bundles" if (-not($LastExitCode -eq 0)) { Throw "ERROR: Failed to docker cp the unit tests report ($JunitExpectedContFilePath) to bundles" } if (Test-Path "bundles\junit-report-unit-tests.xml") { Write-Host -ForegroundColor Magenta "INFO: Unit tests results(bundles\junit-report-unit-tests.xml) exist. pwd=$pwd" } else { Write-Host -ForegroundColor Magenta "ERROR: Unit tests results(bundles\junit-report-unit-tests.xml) do not exist. pwd=$pwd" } if (-not($TestRunExitCode -eq 0)) { Throw "ERROR: Unit tests failed" } Write-Host -ForegroundColor Green "INFO: Unit tests ended at $(Get-Date). Duration`:$Duration" } else { Write-Host -ForegroundColor Magenta "WARN: Skipping unit tests" } # Add the Windows busybox image. Needed for WCOW integration tests if ($null -eq $env:SKIP_INTEGRATION_TESTS) { Write-Host -ForegroundColor Green "INFO: Building busybox" $ErrorActionPreference = "SilentlyContinue" $(& "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" build -t busybox --build-arg WINDOWS_BASE_IMAGE --build-arg WINDOWS_BASE_IMAGE_TAG "$env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR\src\github.com\docker\docker\contrib\busybox\" | Out-Host) $ErrorActionPreference = "Stop" if (-not($LastExitCode -eq 0)) { Throw "ERROR: Failed to build busybox image" } Write-Host -ForegroundColor Green "INFO: Docker images of the daemon under test" Write-Host $ErrorActionPreference = "SilentlyContinue" & "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" images $ErrorActionPreference = "Stop" if ($LastExitCode -ne 0) { Throw "ERROR: The daemon under test does not appear to be running." } Write-Host } # Run the WCOW integration tests unless SKIP_INTEGRATION_TESTS is defined if ($null -eq $env:SKIP_INTEGRATION_TESTS) { Write-Host -ForegroundColor Cyan "INFO: Running integration tests at $(Get-Date)..." $ErrorActionPreference = "SilentlyContinue" # Location of the daemon under test. $env:OrigDOCKER_HOST="$env:DOCKER_HOST" #https://blogs.technet.microsoft.com/heyscriptingguy/2011/09/20/solve-problems-with-external-command-lines-in-powershell/ is useful to see tokenising $jsonFilePath = "..\\bundles\\go-test-report-intcli-tests.json" $xmlFilePath = "..\\bundles\\junit-report-intcli-tests.xml" $c = "gotestsum --format=standard-verbose --jsonfile=$jsonFilePath --junitfile=$xmlFilePath -- " if ($null -ne $env:INTEGRATION_TEST_NAME) { # Makes is quicker for debugging to be able to run only a subset of the integration tests $c += "`"-test.run`" " $c += "`"$env:INTEGRATION_TEST_NAME`" " Write-Host -ForegroundColor Magenta "WARN: Only running integration tests matching $env:INTEGRATION_TEST_NAME" } $c += "`"-tags`" " + "`"autogen`" " $c += "`"-test.timeout`" " + "`"200m`" " if ($null -ne $env:INTEGRATION_IN_CONTAINER) { Write-Host -ForegroundColor Green "INFO: Integration tests being run inside a container" # Note we talk back through the containers gateway address # And the ridiculous lengths we have to go to get the default gateway address... (GetNetIPConfiguration doesn't work in nanoserver) # I just could not get the escaping to work in a single command, so output $c to a file and run that in the container instead... # Not the prettiest, but it works. 
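# For reference, at this point $c is a gotestsum invocation along the lines of
#   gotestsum --format=standard-verbose --jsonfile=$jsonFilePath --junitfile=$xmlFilePath -- "-tags" "autogen" "-test.timeout" "200m"
# with "-test.run" "<pattern>" inserted after the "--" when INTEGRATION_TEST_NAME is set.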
$c | Out-File -Force "$env:TEMP\binary\runIntegrationCLI.ps1" $Duration= $(Measure-Command { & docker run ` --rm ` -e c=$c ` --workdir "c`:\gopath\src\github.com\docker\docker\integration-cli" ` -v "$env:TEMP\binary`:c:\target" ` docker ` "`$env`:PATH`='c`:\target;'+`$env:PATH`; `$env:DOCKER_HOST`='tcp`://'+(ipconfig | select -last 1).Substring(39)+'`:2357'; c:\target\runIntegrationCLI.ps1" | Out-Host } ) } else { $env:DOCKER_HOST=$DASHH_CUT $env:PATH="$env:TEMP\binary;$env:PATH;" # Force to use the test binaries, not the host ones. $env:GO111MODULE="off" Write-Host -ForegroundColor Green "INFO: DOCKER_HOST at $DASHH_CUT" $ErrorActionPreference = "SilentlyContinue" Write-Host -ForegroundColor Cyan "INFO: Integration API tests being run from the host:" $start=(Get-Date); Invoke-Expression ".\hack\make.ps1 -TestIntegration"; $Duration=New-Timespan -Start $start -End (Get-Date) $IntTestsRunResult = $LastExitCode $ErrorActionPreference = "Stop" if (-not($IntTestsRunResult -eq 0)) { Throw "ERROR: Integration API tests failed at $(Get-Date). Duration`:$Duration" } $ErrorActionPreference = "SilentlyContinue" Write-Host -ForegroundColor Green "INFO: Integration CLI tests being run from the host:" Write-Host -ForegroundColor Green "INFO: $c" Set-Location "$env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR\src\github.com\docker\docker\integration-cli" # Explicit to not use measure-command otherwise don't get output as it goes $start=(Get-Date); Invoke-Expression $c; $Duration=New-Timespan -Start $start -End (Get-Date) } $ErrorActionPreference = "Stop" if (-not($LastExitCode -eq 0)) { Throw "ERROR: Integration CLI tests failed at $(Get-Date). Duration`:$Duration" } Write-Host -ForegroundColor Green "INFO: Integration tests ended at $(Get-Date). Duration`:$Duration" } else { Write-Host -ForegroundColor Magenta "WARN: Skipping integration tests" } # Docker info now to get counts (after or if jjh/containercounts is merged) if ($daemonStarted -eq 1) { Write-Host -ForegroundColor Green "INFO: Docker info of the daemon under test at end of run" Write-Host $ErrorActionPreference = "SilentlyContinue" & "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" info $ErrorActionPreference = "Stop" if ($LastExitCode -ne 0) { Throw "ERROR: The daemon under test does not appear to be running." } Write-Host } # Stop the daemon under test if (Test-Path "$env:TEMP\docker.pid") { $p=Get-Content "$env:TEMP\docker.pid" -raw if (($null -ne $p) -and ($daemonStarted -eq 1)) { Write-Host -ForegroundColor green "INFO: Stopping daemon under test" taskkill -f -t -pid $p #sleep 5 } Remove-Item "$env:TEMP\docker.pid" -force -ErrorAction SilentlyContinue } # Stop the tail process (if started) if ($null -ne $tail) { Write-Host -ForegroundColor green "INFO: Stop tailing logs of the daemon under tests" Stop-Process -InputObject $tail -Force } Write-Host -ForegroundColor Green "INFO: executeCI.ps1 Completed successfully at $(Get-Date)." } Catch [Exception] { $FinallyColour="Red" Write-Host -ForegroundColor Red ("`r`n`r`nERROR: Failed '$_' at $(Get-Date)") Write-Host -ForegroundColor Red ($_.InvocationInfo.PositionMessage) Write-Host "`n`n" # Exit to ensure Jenkins captures it. Don't do this in the ISE or interactive Powershell - they will catch the Throw onwards. 
if ( ([bool]([Environment]::GetCommandLineArgs() -Like '*-NonInteractive*')) -and ` ([bool]([Environment]::GetCommandLineArgs() -NotLike "*Powershell_ISE.exe*"))) { exit 1 } Throw $_ } Finally { $ErrorActionPreference="SilentlyContinue" $global:ProgressPreference=$origProgressPreference Write-Host -ForegroundColor Green "INFO: Tidying up at end of run" # Restore the path if ($null -ne $origPath) { $env:PATH=$origPath } # Restore the DOCKER_HOST if ($null -ne $origDOCKER_HOST) { $env:DOCKER_HOST=$origDOCKER_HOST } # Restore the GOROOT and GOPATH variables if ($null -ne $origGOROOT) { $env:GOROOT=$origGOROOT } if ($null -ne $origGOPATH) { $env:GOPATH=$origGOPATH } # Dump the daemon log. This will include any possible panic stack in the .err. if (($daemonStarted -eq 1) -and ($(Get-Item "$env:TEMP\dut.err").Length -gt 0)) { Write-Host -ForegroundColor Cyan "----------- DAEMON LOG ------------" Get-Content "$env:TEMP\dut.err" -ErrorAction SilentlyContinue | Write-Host -ForegroundColor Cyan Write-Host -ForegroundColor Cyan "----------- END DAEMON LOG --------" } # Save the daemon under test log if ($daemonStarted -eq 1) { Set-Location "$env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR\src\github.com\docker\docker" Write-Host -ForegroundColor Green "INFO: Saving daemon under test log ($env:TEMP\dut.out) to bundles\CIDUT.out" Copy-Item "$env:TEMP\dut.out" "bundles\CIDUT.out" -Force -ErrorAction SilentlyContinue Write-Host -ForegroundColor Green "INFO: Saving daemon under test log ($env:TEMP\dut.err) to bundles\CIDUT.err" Copy-Item "$env:TEMP\dut.err" "bundles\CIDUT.err" -Force -ErrorAction SilentlyContinue } Set-Location "$env:SOURCES_DRIVE\$env:SOURCES_SUBDIR" -ErrorAction SilentlyContinue Nuke-Everything # Restore the TEMP path if ($null -ne $TEMPORIG) { $env:TEMP="$TEMPORIG" } $Dur=New-TimeSpan -Start $StartTime -End $(Get-Date) Write-Host -ForegroundColor $FinallyColour "`nINFO: executeCI.ps1 exiting at $(date). Duration $dur`n" }
# WARNING: When editing this file, consider submitting a PR to # https://github.com/kevpar/docker-w2wCIScripts/blob/master/runCI/executeCI.ps1, and make sure that # https://github.com/kevpar/docker-w2wCIScripts/blob/master/runCI/Invoke-DockerCI.ps1 isn't broken. # Validate using a test context in Jenkins, then copy/paste into Jenkins production. # # Jenkins CI scripts for Windows to Windows CI (Powershell Version) # By John Howard (@jhowardmsft) January 2016 - bash version; July 2016 Ported to PowerShell $ErrorActionPreference = 'Stop' $StartTime=Get-Date Write-Host -ForegroundColor Red "DEBUG: print all environment variables to check how Jenkins runs this script" $allArgs = [Environment]::GetCommandLineArgs() Write-Host -ForegroundColor Red $allArgs Write-Host -ForegroundColor Red "----------------------------------------------------------------------------" # ------------------------------------------------------------------------------------------- # When executed, we rely on four variables being set in the environment: # # [The reason for being environment variables rather than parameters is historical. No reason # why it couldn't be updated.] # # SOURCES_DRIVE is the drive on which the sources being tested are cloned from. # This should be a straight drive letter, no platform semantics. # For example 'c' # # SOURCES_SUBDIR is the top level directory under SOURCES_DRIVE where the # sources are cloned to. There are no platform semantics in this # as it does not include slashes. # For example 'gopath' # # Based on the above examples, it would be expected that Jenkins # would clone the sources being tested to # SOURCES_DRIVE\SOURCES_SUBDIR\src\github.com\docker\docker, or # c:\gopath\src\github.com\docker\docker # # TESTRUN_DRIVE is the drive where we build the binary on and redirect everything # to for the daemon under test. On an Azure D2 type host which has # an SSD temporary storage D: drive, this is ideal for performance. # For example 'd' # # TESTRUN_SUBDIR is the top level directory under TESTRUN_DRIVE where we redirect # everything to for the daemon under test. For example 'CI'. # Hence, the daemon under test is run under # TESTRUN_DRIVE\TESTRUN_SUBDIR\CI-<CommitID> or # d:\CI\CI-<CommitID> # # Optional environment variables help in CI: # # BUILD_NUMBER + BRANCH_NAME are optional variables to be added to the directory below TESTRUN_SUBDIR # to have individual folder per CI build. If some files couldn't be # cleaned up and we want to re-run the build in CI. # Hence, the daemon under test is run under # TESTRUN_DRIVE\TESTRUN_SUBDIR\PR-<PR-Number>\<BuildNumber> or # d:\CI\PR-<PR-Number>\<BuildNumber> # # In addition, the following variables can control the run configuration: # # DOCKER_DUT_DEBUG if defined starts the daemon under test in debug mode. 
# # DOCKER_STORAGE_OPTS comma-separated list of optional storage driver options for the daemon under test # examples: # DOCKER_STORAGE_OPTS="size=40G" # # SKIP_VALIDATION_TESTS if defined skips the validation tests # # SKIP_UNIT_TESTS if defined skips the unit tests # # SKIP_INTEGRATION_TESTS if defined skips the integration tests # # SKIP_COPY_GO if defined skips copying the go installer from the image # # DOCKER_DUT_HYPERV if defined, the daemon under test uses hyperv as the default isolation # # INTEGRATION_TEST_NAME to only run partial tests eg "TestInfo*" will only run # any tests starting "TestInfo" # # SKIP_BINARY_BUILD if defined skips building the binary # # SKIP_ZAP_DUT if defined doesn't zap the daemon under test directory # # SKIP_IMAGE_BUILD if defined doesn't build the 'docker' image # # INTEGRATION_IN_CONTAINER if defined, runs the integration tests from inside a container. # As of July 2016, there are known issues with this. # # SKIP_ALL_CLEANUP if defined, skips any cleanup at the start or end of the run # # WINDOWS_BASE_IMAGE if defined, uses that as the base image. Note that the # docker integration tests are also coded to use the same # environment variable, and if not set, defaults to microsoft/windowsservercore # # WINDOWS_BASE_IMAGE_TAG if defined, uses that as the tag name for the base image. # if not set, defaults to latest # # ------------------------------------------------------------------------------------------- # # Jenkins Integration. Add a Windows Powershell build step as follows: # # Write-Host -ForegroundColor green "INFO: Jenkins build step starting" # $CISCRIPT_DEFAULT_LOCATION = "https://raw.githubusercontent.com/moby/moby/master/hack/ci/windows.ps1" # $CISCRIPT_LOCAL_LOCATION = "$env:TEMP\executeCI.ps1" # Write-Host -ForegroundColor green "INFO: Removing cached execution script" # Remove-Item $CISCRIPT_LOCAL_LOCATION -Force -ErrorAction SilentlyContinue 2>&1 | Out-Null # $wc = New-Object net.webclient # try { # Write-Host -ForegroundColor green "INFO: Downloading latest execution script..." # $wc.Downloadfile($CISCRIPT_DEFAULT_LOCATION, $CISCRIPT_LOCAL_LOCATION) # } # catch [System.Net.WebException] # { # Throw ("Failed to download: $_") # } # & $CISCRIPT_LOCAL_LOCATION # ------------------------------------------------------------------------------------------- $SCRIPT_VER="05-Feb-2019 09:03 PDT" $FinallyColour="Cyan" #$env:DOCKER_DUT_DEBUG="yes" # Comment out to not be in debug mode #$env:SKIP_UNIT_TESTS="yes" #$env:SKIP_VALIDATION_TESTS="yes" #$env:SKIP_ZAP_DUT="" #$env:SKIP_BINARY_BUILD="yes" #$env:INTEGRATION_TEST_NAME="" #$env:SKIP_IMAGE_BUILD="yes" #$env:SKIP_ALL_CLEANUP="yes" #$env:INTEGRATION_IN_CONTAINER="yes" #$env:WINDOWS_BASE_IMAGE="" #$env:SKIP_COPY_GO="yes" #$env:INTEGRATION_TESTFLAGS="-test.v" Function Nuke-Everything { $ErrorActionPreference = 'SilentlyContinue' try { if ($null -eq $env:SKIP_ALL_CLEANUP) { Write-Host -ForegroundColor green "INFO: Nuke-Everything..."
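# The cleanup below force-removes every container on the control daemon and then removes any
# image whose repository does not match one of the protected names (servercore, nanoserver,
# docker, busybox), before killing any leftover dockerd-* processes and detaching stray VHDs.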
$containerCount = ($(docker ps -aq | Measure-Object -line).Lines) if (-not $LastExitCode -eq 0) { Throw "ERROR: Failed to get container count from control daemon while nuking" } Write-Host -ForegroundColor green "INFO: Container count on control daemon to delete is $containerCount" if ($(docker ps -aq | Measure-Object -line).Lines -gt 0) { docker rm -f $(docker ps -aq) } $allImages = $(docker images --format "{{.Repository}}#{{.ID}}") $toRemove = ($allImages | Select-String -NotMatch "servercore","nanoserver","docker","busybox") $imageCount = ($toRemove | Measure-Object -line).Lines if ($imageCount -gt 0) { Write-Host -Foregroundcolor green "INFO: Non-base image count on control daemon to delete is $imageCount" docker rmi -f ($toRemove | Foreach-Object { $_.ToString().Split("#")[1] }) } } else { Write-Host -ForegroundColor Magenta "WARN: Skipping cleanup of images and containers" } # Kill any spurious daemons. The '-' is IMPORTANT otherwise will kill the control daemon! $pids=$(get-process | where-object {$_.ProcessName -like 'dockerd-*'}).id foreach ($p in $pids) { Write-Host "INFO: Killing daemon with PID $p" Stop-Process -Id $p -Force -ErrorAction SilentlyContinue } if ($null -ne $pidFile) { Write-Host "INFO: Tidying pidfile $pidfile" if (Test-Path $pidFile) { $p=Get-Content $pidFile -raw if ($null -ne $p){ Write-Host -ForegroundColor green "INFO: Stopping possible daemon pid $p" taskkill -f -t -pid $p } Remove-Item "$env:TEMP\docker.pid" -force -ErrorAction SilentlyContinue } } Stop-Process -name "cc1" -Force -ErrorAction SilentlyContinue 2>&1 | Out-Null Stop-Process -name "link" -Force -ErrorAction SilentlyContinue 2>&1 | Out-Null Stop-Process -name "compile" -Force -ErrorAction SilentlyContinue 2>&1 | Out-Null Stop-Process -name "ld" -Force -ErrorAction SilentlyContinue 2>&1 | Out-Null Stop-Process -name "go" -Force -ErrorAction SilentlyContinue 2>&1 | Out-Null Stop-Process -name "git" -Force -ErrorAction SilentlyContinue 2>&1 | Out-Null Stop-Process -name "git-remote-https" -Force -ErrorAction SilentlyContinue 2>&1 | Out-Null Stop-Process -name "integration-cli.test" -Force -ErrorAction SilentlyContinue 2>&1 | Out-Null Stop-Process -name "tail" -Force -ErrorAction SilentlyContinue 2>&1 | Out-Null # Detach any VHDs gwmi msvm_mountedstorageimage -namespace root/virtualization/v2 -ErrorAction SilentlyContinue | foreach-Object {$_.DetachVirtualHardDisk() } # Stop any compute processes Get-ComputeProcess | Stop-ComputeProcess -Force # Delete the directory using our dangerous utility unless told not to if (Test-Path "$env:TESTRUN_DRIVE`:\$env:TESTRUN_SUBDIR") { if (($null -ne $env:SKIP_ZAP_DUT) -or ($null -eq $env:SKIP_ALL_CLEANUP)) { Write-Host -ForegroundColor Green "INFO: Nuking $env:TESTRUN_DRIVE`:\$env:TESTRUN_SUBDIR" docker-ci-zap "-folder=$env:TESTRUN_DRIVE`:\$env:TESTRUN_SUBDIR" } else { Write-Host -ForegroundColor Magenta "WARN: Skip nuking $env:TESTRUN_DRIVE`:\$env:TESTRUN_SUBDIR" } } # TODO: This should be able to be removed in August 2017 update. Only needed for RS1 Production Server workaround - Psched $reg = "HKLM:\System\CurrentControlSet\Services\Psched\Parameters\NdisAdapters" $count=(Get-ChildItem $reg | Measure-Object).Count if ($count -gt 0) { Write-Warning "There are $count NdisAdapters leaked under Psched\Parameters" Write-Warning "Cleaning Psched..." Get-ChildItem $reg | Remove-Item -Recurse -Force -ErrorAction SilentlyContinue | Out-Null } # TODO: This should be able to be removed in August 2017 update. 
Only needed for RS1 $reg = "HKLM:\System\CurrentControlSet\Services\WFPLWFS\Parameters\NdisAdapters" $count=(Get-ChildItem $reg | Measure-Object).Count if ($count -gt 0) { Write-Warning "There are $count NdisAdapters leaked under WFPLWFS\Parameters" Write-Warning "Cleaning WFPLWFS..." Get-ChildItem $reg | Remove-Item -Recurse -Force -ErrorAction SilentlyContinue | Out-Null } } catch { # Don't throw any errors onwards Throw $_ } } Try { Write-Host -ForegroundColor Cyan "`nINFO: executeCI.ps1 starting at $(date)`n" Write-Host -ForegroundColor Green "INFO: Script version $SCRIPT_VER" Set-PSDebug -Trace 0 # 1 to turn on $origPath="$env:PATH" # so we can restore it at the end $origDOCKER_HOST="$DOCKER_HOST" # So we can restore it at the end $origGOROOT="$env:GOROOT" # So we can restore it at the end $origGOPATH="$env:GOPATH" # So we can restore it at the end # Turn off progress bars $origProgressPreference=$global:ProgressPreference $global:ProgressPreference='SilentlyContinue' # Git version Write-Host -ForegroundColor Green "INFO: Running $(git version)" # OS Version $bl=(Get-ItemProperty -Path "Registry::HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion" -Name BuildLabEx).BuildLabEx $a=$bl.ToString().Split(".") $Branch=$a[3] $WindowsBuild=$a[0]+"."+$a[1]+"."+$a[4] Write-Host -ForegroundColor green "INFO: Branch:$Branch Build:$WindowsBuild" # List the environment variables Write-Host -ForegroundColor green "INFO: Environment variables:" Get-ChildItem Env: | Out-String # PR if (-not ($null -eq $env:PR)) { Write-Output "INFO: PR#$env:PR (https://github.com/docker/docker/pull/$env:PR)" } # Make sure docker is installed if ($null -eq (Get-Command "docker" -ErrorAction SilentlyContinue)) { Throw "ERROR: docker is not installed or not found on path" } # Make sure docker-ci-zap is installed if ($null -eq (Get-Command "docker-ci-zap" -ErrorAction SilentlyContinue)) { Throw "ERROR: docker-ci-zap is not installed or not found on path" } # Make sure Windows Defender is disabled $defender = $false Try { $status = Get-MpComputerStatus if ($status) { if ($status.RealTimeProtectionEnabled) { $defender = $true } } } Catch {} if ($defender) { Write-Host -ForegroundColor Magenta "WARN: Windows Defender real time protection is enabled, which may cause some integration tests to fail" } # Make sure SOURCES_DRIVE is set if ($null -eq $env:SOURCES_DRIVE) { Throw "ERROR: Environment variable SOURCES_DRIVE is not set" } # Make sure TESTRUN_DRIVE is set if ($null -eq $env:TESTRUN_DRIVE) { Throw "ERROR: Environment variable TESTRUN_DRIVE is not set" } # Make sure SOURCES_SUBDIR is set if ($null -eq $env:SOURCES_SUBDIR) { Throw "ERROR: Environment variable SOURCES_SUBDIR is not set" } # Make sure TESTRUN_SUBDIR is set if ($null -eq $env:TESTRUN_SUBDIR) { Throw "ERROR: Environment variable TESTRUN_SUBDIR is not set" } # SOURCES_DRIVE\SOURCES_SUBDIR must be a directory and exist if (-not (Test-Path -PathType Container "$env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR")) { Throw "ERROR: $env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR must be an existing directory" } # Create the TESTRUN_DRIVE\TESTRUN_SUBDIR if it does not already exist New-Item -ItemType Directory -Force -Path "$env:TESTRUN_DRIVE`:\$env:TESTRUN_SUBDIR" -ErrorAction SilentlyContinue | Out-Null Write-Host -ForegroundColor Green "INFO: Sources under $env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR\..." Write-Host -ForegroundColor Green "INFO: Test run under $env:TESTRUN_DRIVE`:\$env:TESTRUN_SUBDIR\..." 
# Check the intended source location is a directory if (-not (Test-Path -PathType Container "$env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR\src\github.com\docker\docker" -ErrorAction SilentlyContinue)) { Throw "ERROR: $env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR\src\github.com\docker\docker is not a directory!" } # Make sure we start at the root of the sources Set-Location "$env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR\src\github.com\docker\docker" Write-Host -ForegroundColor Green "INFO: Running in $(Get-Location)" # Make sure we are in repo if (-not (Test-Path -PathType Leaf -Path ".\Dockerfile.windows")) { Throw "$(Get-Location) does not contain Dockerfile.windows!" } Write-Host -ForegroundColor Green "INFO: docker/docker repository was found" # Make sure microsoft/windowsservercore:latest image is installed in the control daemon. On public CI machines, windowsservercore.tar and nanoserver.tar # are pre-baked and tagged appropriately in the c:\baseimages directory, and can be directly loaded. # Note - this script will only work on 10B (Oct 2016) or later machines! Not 9D or previous due to image tagging assumptions. # # On machines not on Microsoft corpnet, or those which have not been pre-baked, we have to docker pull the image in which case it will # will come in directly as microsoft/windowsservercore:latest. The ultimate goal of all this code is to ensure that whatever, # we have microsoft/windowsservercore:latest # # Note we cannot use (as at Oct 2016) nanoserver as the control daemons base image, even if nanoserver is used in the tests themselves. $ErrorActionPreference = "SilentlyContinue" $ControlDaemonBaseImage="windowsservercore" $readBaseFrom="c" if ($((docker images --format "{{.Repository}}:{{.Tag}}" | Select-String $("microsoft/"+$ControlDaemonBaseImage+":latest") | Measure-Object -Line).Lines) -eq 0) { # Try the internal azure CI image version or Microsoft internal corpnet where the base image is already pre-prepared on the disk, # either through Invoke-DockerCI or, in the case of Azure CI servers, baked into the VHD at the same location. if (Test-Path $("$env:SOURCES_DRIVE`:\baseimages\"+$ControlDaemonBaseImage+".tar")) { # An optimization for CI servers to copy it to the D: drive which is an SSD. if ($env:SOURCES_DRIVE -ne $env:TESTRUN_DRIVE) { $readBaseFrom=$env:TESTRUN_DRIVE if (!(Test-Path "$env:TESTRUN_DRIVE`:\baseimages")) { New-Item "$env:TESTRUN_DRIVE`:\baseimages" -type directory | Out-Null } if (!(Test-Path "$env:TESTRUN_DRIVE`:\baseimages\windowsservercore.tar")) { if (Test-Path "$env:SOURCES_DRIVE`:\baseimages\windowsservercore.tar") { Write-Host -ForegroundColor Green "INFO: Optimisation - copying $env:SOURCES_DRIVE`:\baseimages\windowsservercore.tar to $env:TESTRUN_DRIVE`:\baseimages" Copy-Item "$env:SOURCES_DRIVE`:\baseimages\windowsservercore.tar" "$env:TESTRUN_DRIVE`:\baseimages" } } if (!(Test-Path "$env:TESTRUN_DRIVE`:\baseimages\nanoserver.tar")) { if (Test-Path "$env:SOURCES_DRIVE`:\baseimages\nanoserver.tar") { Write-Host -ForegroundColor Green "INFO: Optimisation - copying $env:SOURCES_DRIVE`:\baseimages\nanoserver.tar to $env:TESTRUN_DRIVE`:\baseimages" Copy-Item "$env:SOURCES_DRIVE`:\baseimages\nanoserver.tar" "$env:TESTRUN_DRIVE`:\baseimages" } } $readBaseFrom=$env:TESTRUN_DRIVE } Write-Host -ForegroundColor Green "INFO: Loading"$ControlDaemonBaseImage".tar from disk. This may take some time..." 
$ErrorActionPreference = "SilentlyContinue" docker load -i $("$readBaseFrom`:\baseimages\"+$ControlDaemonBaseImage+".tar") $ErrorActionPreference = "Stop" if (-not $LastExitCode -eq 0) { Throw $("ERROR: Failed to load $readBaseFrom`:\baseimages\"+$ControlDaemonBaseImage+".tar") } Write-Host -ForegroundColor Green "INFO: docker load of"$ControlDaemonBaseImage" completed successfully" } else { # We need to docker pull it instead. It will come in directly as microsoft/imagename:latest Write-Host -ForegroundColor Green $("INFO: Pulling $($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG from docker hub. This may take some time...") $ErrorActionPreference = "SilentlyContinue" docker pull "$($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG" $ErrorActionPreference = "Stop" if (-not $LastExitCode -eq 0) { Throw $("ERROR: Failed to docker pull $($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG.") } Write-Host -ForegroundColor Green $("INFO: docker pull of $($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG completed successfully") Write-Host -ForegroundColor Green $("INFO: Tagging $($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG as microsoft/$ControlDaemonBaseImage") docker tag "$($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG" microsoft/$ControlDaemonBaseImage } } else { Write-Host -ForegroundColor Green "INFO: Image"$("microsoft/"+$ControlDaemonBaseImage+":latest")"is already loaded in the control daemon" } # Inspect the pulled image to get the version directly $ErrorActionPreference = "SilentlyContinue" $imgVersion = $(docker inspect $("microsoft/"+$ControlDaemonBaseImage) --format "{{.OsVersion}}") $ErrorActionPreference = "Stop" Write-Host -ForegroundColor Green $("INFO: Version of microsoft/"+$ControlDaemonBaseImage+":latest is '"+$imgVersion+"'") # Provide the docker version for debugging purposes. Write-Host -ForegroundColor Green "INFO: Docker version of control daemon" Write-Host $ErrorActionPreference = "SilentlyContinue" docker version $ErrorActionPreference = "Stop" if (-not($LastExitCode -eq 0)) { Write-Host Write-Host -ForegroundColor Green "---------------------------------------------------------------------------" Write-Host -ForegroundColor Green " Failed to get a response from the control daemon. It may be down." Write-Host -ForegroundColor Green " Try re-running this CI job, or ask on #docker-maintainers on docker slack" Write-Host -ForegroundColor Green " to see if the daemon is running. Also check the service configuration." Write-Host -ForegroundColor Green " DOCKER_HOST is set to $DOCKER_HOST." Write-Host -ForegroundColor Green "---------------------------------------------------------------------------" Write-Host Throw "ERROR: The control daemon does not appear to be running." } Write-Host # Same as above, but docker info Write-Host -ForegroundColor Green "INFO: Docker info of control daemon" Write-Host $ErrorActionPreference = "SilentlyContinue" docker info $ErrorActionPreference = "Stop" if (-not($LastExitCode -eq 0)) { Throw "ERROR: The control daemon does not appear to be running." } Write-Host # Get the commit has and verify we have something $ErrorActionPreference = "SilentlyContinue" $COMMITHASH=$(git rev-parse --short HEAD) $ErrorActionPreference = "Stop" if (-not($LastExitCode -eq 0)) { Throw "ERROR: Failed to get commit hash. Are you sure this is a docker repository?" 
} Write-Host -ForegroundColor Green "INFO: Commit hash is $COMMITHASH" # Nuke everything and go back to our sources after Nuke-Everything cd "$env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR\src\github.com\docker\docker" # Redirect to a temporary location. $TEMPORIG=$env:TEMP if ($null -eq $env:BUILD_NUMBER) { $env:TEMP="$env:TESTRUN_DRIVE`:\$env:TESTRUN_SUBDIR\CI-$COMMITHASH" } else { # individual temporary location per CI build that better matches the BUILD_URL $env:TEMP="$env:TESTRUN_DRIVE`:\$env:TESTRUN_SUBDIR\$env:BRANCH_NAME\$env:BUILD_NUMBER" } $env:LOCALAPPDATA="$env:TEMP\localappdata" $errorActionPreference='Stop' New-Item -ItemType Directory "$env:TEMP" -ErrorAction SilentlyContinue | Out-Null New-Item -ItemType Directory "$env:TEMP\userprofile" -ErrorAction SilentlyContinue | Out-Null New-Item -ItemType Directory "$env:TEMP\testresults" -ErrorAction SilentlyContinue | Out-Null New-Item -ItemType Directory "$env:TEMP\testresults\unittests" -ErrorAction SilentlyContinue | Out-Null New-Item -ItemType Directory "$env:TEMP\localappdata" -ErrorAction SilentlyContinue | Out-Null New-Item -ItemType Directory "$env:TEMP\binary" -ErrorAction SilentlyContinue | Out-Null New-Item -ItemType Directory "$env:TEMP\installer" -ErrorAction SilentlyContinue | Out-Null if ($null -eq $env:SKIP_COPY_GO) { # Wipe the previous version of GO - we're going to get it out of the image if (Test-Path "$env:TEMP\go") { Remove-Item "$env:TEMP\go" -Recurse -Force -ErrorAction SilentlyContinue | Out-Null } New-Item -ItemType Directory "$env:TEMP\go" -ErrorAction SilentlyContinue | Out-Null } Write-Host -ForegroundColor Green "INFO: Location for testing is $env:TEMP" # CI Integrity check - ensure Dockerfile.windows and Dockerfile go versions match $goVersionDockerfileWindows=(Select-String -Path ".\Dockerfile.windows" -Pattern "^ARG[\s]+GO_VERSION=(.*)$").Matches.groups[1].Value $goVersionDockerfile=(Select-String -Path ".\Dockerfile" -Pattern "^ARG[\s]+GO_VERSION=(.*)$").Matches.groups[1].Value if ($null -eq $goVersionDockerfile) { Throw "ERROR: Failed to extract golang version from Dockerfile" } Write-Host -ForegroundColor Green "INFO: Validating GOLang consistency in Dockerfile.windows..." if (-not ($goVersionDockerfile -eq $goVersionDockerfileWindows)) { Throw "ERROR: Mismatched GO versions between Dockerfile and Dockerfile.windows. Update your PR to ensure that both files are updated and in sync. $goVersionDockerfile $goVersionDockerfileWindows" } # Build the image if ($null -eq $env:SKIP_IMAGE_BUILD) { Write-Host -ForegroundColor Cyan "`n`nINFO: Building the image from Dockerfile.windows at $(Get-Date)..." Write-Host $ErrorActionPreference = "SilentlyContinue" $Duration=$(Measure-Command { docker build --build-arg=GO_VERSION -t docker -f Dockerfile.windows . | Out-Host }) $ErrorActionPreference = "Stop" if (-not($LastExitCode -eq 0)) { Throw "ERROR: Failed to build image from Dockerfile.windows" } Write-Host -ForegroundColor Green "INFO: Image build ended at $(Get-Date). Duration`:$Duration" } else { Write-Host -ForegroundColor Magenta "WARN: Skipping building the docker image" } # Following at the moment must be docker\docker as it's dictated by dockerfile.Windows $contPath="$COMMITHASH`:c`:\gopath\src\github.com\docker\docker\bundles" # After https://github.com/docker/docker/pull/30290, .git was added to .dockerignore. 
Therefore # we have to calculate unsupported outside of the container, and pass the commit ID in through # an environment variable for the binary build $CommitUnsupported="" if ($(git status --porcelain --untracked-files=no).Length -ne 0) { $CommitUnsupported="-unsupported" } # Build the binary in a container unless asked to skip it. if ($null -eq $env:SKIP_BINARY_BUILD) { Write-Host -ForegroundColor Cyan "`n`nINFO: Building the test binaries at $(Get-Date)..." $ErrorActionPreference = "SilentlyContinue" docker rm -f $COMMITHASH 2>&1 | Out-Null if ($CommitUnsupported -ne "") { Write-Host "" Write-Warning "This version is unsupported because there are uncommitted file(s)." Write-Warning "Either commit these changes, or add them to .gitignore." git status --porcelain --untracked-files=no | Write-Warning Write-Host "" } $Duration=$(Measure-Command {docker run --name $COMMITHASH -e DOCKER_GITCOMMIT=$COMMITHASH$CommitUnsupported docker hack\make.ps1 -Daemon -Client | Out-Host }) $ErrorActionPreference = "Stop" if (-not($LastExitCode -eq 0)) { Throw "ERROR: Failed to build binary" } Write-Host -ForegroundColor Green "INFO: Binaries build ended at $(Get-Date). Duration`:$Duration" # Copy the binaries and the generated version_autogen.go out of the container $ErrorActionPreference = "SilentlyContinue" docker cp "$contPath\docker.exe" $env:TEMP\binary\ if (-not($LastExitCode -eq 0)) { Throw "ERROR: Failed to docker cp the client binary (docker.exe) to $env:TEMP\binary" } docker cp "$contPath\dockerd.exe" $env:TEMP\binary\ if (-not($LastExitCode -eq 0)) { Throw "ERROR: Failed to docker cp the daemon binary (dockerd.exe) to $env:TEMP\binary" } docker cp "$COMMITHASH`:c`:\gopath\bin\gotestsum.exe" $env:TEMP\binary\ if (-not (Test-Path "$env:TEMP\binary\gotestsum.exe")) { Throw "ERROR: gotestsum.exe not found...." ` } $ErrorActionPreference = "Stop" # Copy the built dockerd.exe to dockerd-$COMMITHASH.exe so that easily spotted in task manager. Write-Host -ForegroundColor Green "INFO: Copying the built daemon binary to $env:TEMP\binary\dockerd-$COMMITHASH.exe..." Copy-Item $env:TEMP\binary\dockerd.exe $env:TEMP\binary\dockerd-$COMMITHASH.exe -Force -ErrorAction SilentlyContinue # Copy the built docker.exe to docker-$COMMITHASH.exe Write-Host -ForegroundColor Green "INFO: Copying the built client binary to $env:TEMP\binary\docker-$COMMITHASH.exe..." Copy-Item $env:TEMP\binary\docker.exe $env:TEMP\binary\docker-$COMMITHASH.exe -Force -ErrorAction SilentlyContinue } else { Write-Host -ForegroundColor Magenta "WARN: Skipping building the binaries" } Write-Host -ForegroundColor Green "INFO: Copying dockerversion from the container..." $ErrorActionPreference = "SilentlyContinue" docker cp "$contPath\..\dockerversion\version_autogen.go" "$env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR\src\github.com\docker\docker\dockerversion" if (-not($LastExitCode -eq 0)) { Throw "ERROR: Failed to docker cp the generated version_autogen.go to $env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR\src\github.com\docker\docker\dockerversion" } $ErrorActionPreference = "Stop" # Grab the golang installer out of the built image. That way, we know we are consistent once extracted and paths set, # so there's no need to re-deploy on account of an upgrade to the version of GO being used in docker. if ($null -eq $env:SKIP_COPY_GO) { Write-Host -ForegroundColor Green "INFO: Copying the golang package from the container to $env:TEMP\installer\go.zip..." 
docker cp "$COMMITHASH`:c`:\go.zip" $env:TEMP\installer\ if (-not($LastExitCode -eq 0)) { Throw "ERROR: Failed to docker cp the golang installer 'go.zip' from container:c:\go.zip to $env:TEMP\installer" } $ErrorActionPreference = "Stop" # Extract the golang installer Write-Host -ForegroundColor Green "INFO: Extracting go.zip to $env:TEMP\go" $Duration=$(Measure-Command { Expand-Archive $env:TEMP\installer\go.zip $env:TEMP -Force | Out-Null}) Write-Host -ForegroundColor Green "INFO: Extraction ended at $(Get-Date). Duration`:$Duration" } else { Write-Host -ForegroundColor Magenta "WARN: Skipping copying and extracting golang from the image" } # Set the GOPATH Write-Host -ForegroundColor Green "INFO: Updating the golang and path environment variables" $env:GOPATH="$env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR" Write-Host -ForegroundColor Green "INFO: GOPATH=$env:GOPATH" # Set the path to have the version of go from the image at the front $env:PATH="$env:TEMP\go\bin;$env:PATH" # Set the GOROOT to be our copy of go from the image $env:GOROOT="$env:TEMP\go" Write-Host -ForegroundColor Green "INFO: $(go version)" # Work out the -H parameter for the daemon under test (DASHH_DUT) and client under test (DASHH_CUT) #$DASHH_DUT="npipe:////./pipe/$COMMITHASH" # Can't do remote named pipe #$ip = (resolve-dnsname $env:COMPUTERNAME -type A -NoHostsFile -LlmnrNetbiosOnly).IPAddress # Useful to tie down $DASHH_CUT="tcp://127.0.0.1`:2357" # Not a typo for 2375! $DASHH_DUT="tcp://0.0.0.0:2357" # Not a typo for 2375! # Arguments for the daemon under test $dutArgs=@() $dutArgs += "-H $DASHH_DUT" $dutArgs += "--data-root $env:TEMP\daemon" $dutArgs += "--pidfile $env:TEMP\docker.pid" # Save the PID file so we can nuke it if set $pidFile="$env:TEMP\docker.pid" # Arguments: Are we starting the daemon under test in debug mode? if (-not ("$env:DOCKER_DUT_DEBUG" -eq "")) { Write-Host -ForegroundColor Green "INFO: Running the daemon under test in debug mode" $dutArgs += "-D" } # Arguments: Are we starting the daemon under test with Hyper-V containers as the default isolation? if (-not ("$env:DOCKER_DUT_HYPERV" -eq "")) { Write-Host -ForegroundColor Green "INFO: Running the daemon under test with Hyper-V containers as the default" $dutArgs += "--exec-opt isolation=hyperv" } # Arguments: Allow setting optional storage-driver options # example usage: DDOCKER_STORAGE_OPTS="size=40G" if (-not ("$env:DOCKER_STORAGE_OPTS" -eq "")) { Write-Host -ForegroundColor Green "INFO: Running the daemon under test with storage-driver options ${env:DOCKER_STORAGE_OPTS}" $env:DOCKER_STORAGE_OPTS.Split(",") | ForEach-Object { $dutArgs += "--storage-opt $_" } } # Start the daemon under test, ensuring everything is redirected to folders under $TEMP. # Important - we launch the -$COMMITHASH version so that we can kill it without # killing the control daemon. Write-Host -ForegroundColor Green "INFO: Starting a daemon under test..." Write-Host -ForegroundColor Green "INFO: Args: $dutArgs" New-Item -ItemType Directory $env:TEMP\daemon -ErrorAction SilentlyContinue | Out-Null # Cannot fathom why, but always writes to stderr.... Start-Process "$env:TEMP\binary\dockerd-$COMMITHASH" ` -ArgumentList $dutArgs ` -RedirectStandardOutput "$env:TEMP\dut.out" ` -RedirectStandardError "$env:TEMP\dut.err" Write-Host -ForegroundColor Green "INFO: Process started successfully." 
$daemonStarted=1 # Start tailing the daemon under test if the command is installed if ($null -ne (Get-Command "tail" -ErrorAction SilentlyContinue)) { Write-Host -ForegroundColor green "INFO: Start tailing logs of the daemon under tests" $tail = Start-Process "tail" -ArgumentList "-f $env:TEMP\dut.out" -PassThru -ErrorAction SilentlyContinue } # Verify we can get the daemon under test to respond $tries=20 Write-Host -ForegroundColor Green "INFO: Waiting for the daemon under test to start..." while ($true) { $ErrorActionPreference = "SilentlyContinue" & "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" version 2>&1 | Out-Null $ErrorActionPreference = "Stop" if ($LastExitCode -eq 0) { break } $tries-- if ($tries -le 0) { Throw "ERROR: Failed to get a response from the daemon under test" } Write-Host -NoNewline "." sleep 1 } Write-Host -ForegroundColor Green "INFO: Daemon under test started and replied!" # Provide the docker version of the daemon under test for debugging purposes. Write-Host -ForegroundColor Green "INFO: Docker version of the daemon under test" Write-Host $ErrorActionPreference = "SilentlyContinue" & "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" version $ErrorActionPreference = "Stop" if ($LastExitCode -ne 0) { Throw "ERROR: The daemon under test does not appear to be running." } Write-Host # Same as above but docker info Write-Host -ForegroundColor Green "INFO: Docker info of the daemon under test" Write-Host $ErrorActionPreference = "SilentlyContinue" & "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" info $ErrorActionPreference = "Stop" if ($LastExitCode -ne 0) { Throw "ERROR: The daemon under test does not appear to be running." } Write-Host # Same as above but docker images Write-Host -ForegroundColor Green "INFO: Docker images of the daemon under test" Write-Host $ErrorActionPreference = "SilentlyContinue" & "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" images $ErrorActionPreference = "Stop" if ($LastExitCode -ne 0) { Throw "ERROR: The daemon under test does not appear to be running." } Write-Host # Default to windowsservercore for the base image used for the tests. The "docker" image # and the control daemon use microsoft/windowsservercore regardless. This is *JUST* for the tests. if ($null -eq $env:WINDOWS_BASE_IMAGE) { $env:WINDOWS_BASE_IMAGE="microsoft/windowsservercore" } if ($null -eq $env:WINDOWS_BASE_IMAGE_TAG) { $env:WINDOWS_BASE_IMAGE_TAG="latest" } # Lowercase and make sure it has a microsoft/ prefix $env:WINDOWS_BASE_IMAGE = $env:WINDOWS_BASE_IMAGE.ToLower() if (! $($env:WINDOWS_BASE_IMAGE -Split "/")[0] -match "microsoft") { Throw "ERROR: WINDOWS_BASE_IMAGE should start microsoft/ or mcr.microsoft.com/" } Write-Host -ForegroundColor Green "INFO: Base image for tests is $env:WINDOWS_BASE_IMAGE" $ErrorActionPreference = "SilentlyContinue" if ($((& "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" images --format "{{.Repository}}:{{.Tag}}" | Select-String "$($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG" | Measure-Object -Line).Lines) -eq 0) { # Try the internal azure CI image version or Microsoft internal corpnet where the base image is already pre-prepared on the disk, # either through Invoke-DockerCI or, in the case of Azure CI servers, baked into the VHD at the same location. if (Test-Path $("c:\baseimages\"+$($env:WINDOWS_BASE_IMAGE -Split "/")[1]+".tar")) { Write-Host -ForegroundColor Green "INFO: Loading"$($env:WINDOWS_BASE_IMAGE -Split "/")[1]".tar from disk into the daemon under test. 
This may take some time..." $ErrorActionPreference = "SilentlyContinue" & "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" load -i $("$readBaseFrom`:\baseimages\"+$($env:WINDOWS_BASE_IMAGE -Split "/")[1]+".tar") $ErrorActionPreference = "Stop" if (-not $LastExitCode -eq 0) { Throw $("ERROR: Failed to load $readBaseFrom`:\baseimages\"+$($env:WINDOWS_BASE_IMAGE -Split "/")[1]+".tar into daemon under test") } Write-Host -ForegroundColor Green "INFO: docker load of"$($env:WINDOWS_BASE_IMAGE -Split "/")[1]" into daemon under test completed successfully" } else { # We need to docker pull it instead. It will come in directly as microsoft/imagename:tagname Write-Host -ForegroundColor Green $("INFO: Pulling "+$env:WINDOWS_BASE_IMAGE+":"+$env:WINDOWS_BASE_IMAGE_TAG+" from docker hub into daemon under test. This may take some time...") $ErrorActionPreference = "SilentlyContinue" & "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" pull "$($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG" $ErrorActionPreference = "Stop" if (-not $LastExitCode -eq 0) { Throw $("ERROR: Failed to docker pull $($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG into daemon under test.") } Write-Host -ForegroundColor Green $("INFO: docker pull of $($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG into daemon under test completed successfully") Write-Host -ForegroundColor Green $("INFO: Tagging $($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG as microsoft/$ControlDaemonBaseImage in daemon under test") & "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" tag "$($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG" microsoft/$ControlDaemonBaseImage } } else { Write-Host -ForegroundColor Green "INFO: Image $($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG is already loaded in the daemon under test" } # Inspect the pulled or loaded image to get the version directly $ErrorActionPreference = "SilentlyContinue" $dutimgVersion = $(&"$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" inspect "$($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG" --format "{{.OsVersion}}") $ErrorActionPreference = "Stop" Write-Host -ForegroundColor Green $("INFO: Version of $($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG is '"+$dutimgVersion+"'") # Run the validation tests unless SKIP_VALIDATION_TESTS is defined. if ($null -eq $env:SKIP_VALIDATION_TESTS) { Write-Host -ForegroundColor Cyan "INFO: Running validation tests at $(Get-Date)..." $ErrorActionPreference = "SilentlyContinue" $Duration=$(Measure-Command { hack\make.ps1 -DCO -GoFormat -PkgImports | Out-Host }) $ErrorActionPreference = "Stop" if (-not($LastExitCode -eq 0)) { Throw "ERROR: Validation tests failed" } Write-Host -ForegroundColor Green "INFO: Validation tests ended at $(Get-Date). Duration`:$Duration" } else { Write-Host -ForegroundColor Magenta "WARN: Skipping validation tests" } # Run the unit tests inside a container unless SKIP_UNIT_TESTS is defined if ($null -eq $env:SKIP_UNIT_TESTS) { $ContainerNameForUnitTests = $COMMITHASH + "_UnitTests" Write-Host -ForegroundColor Cyan "INFO: Running unit tests at $(Get-Date)..." $ErrorActionPreference = "SilentlyContinue" $Duration=$(Measure-Command {docker run --name $ContainerNameForUnitTests -e DOCKER_GITCOMMIT=$COMMITHASH$CommitUnsupported docker hack\make.ps1 -TestUnit | Out-Host }) $TestRunExitCode = $LastExitCode $ErrorActionPreference = "Stop" # Saving where jenkins will take a look at..... 
New-Item -Force -ItemType Directory bundles | Out-Null $unitTestsContPath="$ContainerNameForUnitTests`:c`:\gopath\src\github.com\docker\docker\bundles" $JunitExpectedContFilePath = "$unitTestsContPath\junit-report-unit-tests.xml" docker cp $JunitExpectedContFilePath "bundles" if (-not($LastExitCode -eq 0)) { Throw "ERROR: Failed to docker cp the unit tests report ($JunitExpectedContFilePath) to bundles" } if (Test-Path "bundles\junit-report-unit-tests.xml") { Write-Host -ForegroundColor Magenta "INFO: Unit tests results(bundles\junit-report-unit-tests.xml) exist. pwd=$pwd" } else { Write-Host -ForegroundColor Magenta "ERROR: Unit tests results(bundles\junit-report-unit-tests.xml) do not exist. pwd=$pwd" } if (-not($TestRunExitCode -eq 0)) { Throw "ERROR: Unit tests failed" } Write-Host -ForegroundColor Green "INFO: Unit tests ended at $(Get-Date). Duration`:$Duration" } else { Write-Host -ForegroundColor Magenta "WARN: Skipping unit tests" } # Add the Windows busybox image. Needed for WCOW integration tests if ($null -eq $env:SKIP_INTEGRATION_TESTS) { Write-Host -ForegroundColor Green "INFO: Building busybox" $ErrorActionPreference = "SilentlyContinue" $(& "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" build -t busybox --build-arg WINDOWS_BASE_IMAGE --build-arg WINDOWS_BASE_IMAGE_TAG "$env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR\src\github.com\docker\docker\contrib\busybox\" | Out-Host) $ErrorActionPreference = "Stop" if (-not($LastExitCode -eq 0)) { Throw "ERROR: Failed to build busybox image" } Write-Host -ForegroundColor Green "INFO: Docker images of the daemon under test" Write-Host $ErrorActionPreference = "SilentlyContinue" & "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" images $ErrorActionPreference = "Stop" if ($LastExitCode -ne 0) { Throw "ERROR: The daemon under test does not appear to be running." } Write-Host } # Run the WCOW integration tests unless SKIP_INTEGRATION_TESTS is defined if ($null -eq $env:SKIP_INTEGRATION_TESTS) { Write-Host -ForegroundColor Cyan "INFO: Running integration tests at $(Get-Date)..." $ErrorActionPreference = "SilentlyContinue" # Location of the daemon under test. $env:OrigDOCKER_HOST="$env:DOCKER_HOST" #https://blogs.technet.microsoft.com/heyscriptingguy/2011/09/20/solve-problems-with-external-command-lines-in-powershell/ is useful to see tokenising $jsonFilePath = "..\\bundles\\go-test-report-intcli-tests.json" $xmlFilePath = "..\\bundles\\junit-report-intcli-tests.xml" $c = "gotestsum --format=standard-verbose --jsonfile=$jsonFilePath --junitfile=$xmlFilePath -- " if ($null -ne $env:INTEGRATION_TEST_NAME) { # Makes is quicker for debugging to be able to run only a subset of the integration tests $c += "`"-test.run`" " $c += "`"$env:INTEGRATION_TEST_NAME`" " Write-Host -ForegroundColor Magenta "WARN: Only running integration tests matching $env:INTEGRATION_TEST_NAME" } $c += "`"-tags`" " + "`"autogen`" " $c += "`"-test.timeout`" " + "`"200m`" " if ($null -ne $env:INTEGRATION_IN_CONTAINER) { Write-Host -ForegroundColor Green "INFO: Integration tests being run inside a container" # Note we talk back through the containers gateway address # And the ridiculous lengths we have to go to get the default gateway address... (GetNetIPConfiguration doesn't work in nanoserver) # I just could not get the escaping to work in a single command, so output $c to a file and run that in the container instead... # Not the prettiest, but it works. 
$c | Out-File -Force "$env:TEMP\binary\runIntegrationCLI.ps1" $Duration= $(Measure-Command { & docker run ` --rm ` -e c=$c ` --workdir "c`:\gopath\src\github.com\docker\docker\integration-cli" ` -v "$env:TEMP\binary`:c:\target" ` docker ` "`$env`:PATH`='c`:\target;'+`$env:PATH`; `$env:DOCKER_HOST`='tcp`://'+(ipconfig | select -last 1).Substring(39)+'`:2357'; c:\target\runIntegrationCLI.ps1" | Out-Host } ) } else { $env:DOCKER_HOST=$DASHH_CUT $env:PATH="$env:TEMP\binary;$env:PATH;" # Force to use the test binaries, not the host ones. $env:GO111MODULE="off" Write-Host -ForegroundColor Green "INFO: DOCKER_HOST at $DASHH_CUT" $ErrorActionPreference = "SilentlyContinue" Write-Host -ForegroundColor Cyan "INFO: Integration API tests being run from the host:" $start=(Get-Date); Invoke-Expression ".\hack\make.ps1 -TestIntegration"; $Duration=New-Timespan -Start $start -End (Get-Date) $IntTestsRunResult = $LastExitCode $ErrorActionPreference = "Stop" if (-not($IntTestsRunResult -eq 0)) { Throw "ERROR: Integration API tests failed at $(Get-Date). Duration`:$Duration" } $ErrorActionPreference = "SilentlyContinue" Write-Host -ForegroundColor Green "INFO: Integration CLI tests being run from the host:" Write-Host -ForegroundColor Green "INFO: $c" Set-Location "$env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR\src\github.com\docker\docker\integration-cli" # Explicit to not use measure-command otherwise don't get output as it goes $start=(Get-Date); Invoke-Expression $c; $Duration=New-Timespan -Start $start -End (Get-Date) } $ErrorActionPreference = "Stop" if (-not($LastExitCode -eq 0)) { Throw "ERROR: Integration CLI tests failed at $(Get-Date). Duration`:$Duration" } Write-Host -ForegroundColor Green "INFO: Integration tests ended at $(Get-Date). Duration`:$Duration" } else { Write-Host -ForegroundColor Magenta "WARN: Skipping integration tests" } # Docker info now to get counts (after or if jjh/containercounts is merged) if ($daemonStarted -eq 1) { Write-Host -ForegroundColor Green "INFO: Docker info of the daemon under test at end of run" Write-Host $ErrorActionPreference = "SilentlyContinue" & "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" info $ErrorActionPreference = "Stop" if ($LastExitCode -ne 0) { Throw "ERROR: The daemon under test does not appear to be running." } Write-Host } # Stop the daemon under test if (Test-Path "$env:TEMP\docker.pid") { $p=Get-Content "$env:TEMP\docker.pid" -raw if (($null -ne $p) -and ($daemonStarted -eq 1)) { Write-Host -ForegroundColor green "INFO: Stopping daemon under test" taskkill -f -t -pid $p #sleep 5 } Remove-Item "$env:TEMP\docker.pid" -force -ErrorAction SilentlyContinue } # Stop the tail process (if started) if ($null -ne $tail) { Write-Host -ForegroundColor green "INFO: Stop tailing logs of the daemon under tests" Stop-Process -InputObject $tail -Force } Write-Host -ForegroundColor Green "INFO: executeCI.ps1 Completed successfully at $(Get-Date)." } Catch [Exception] { $FinallyColour="Red" Write-Host -ForegroundColor Red ("`r`n`r`nERROR: Failed '$_' at $(Get-Date)") Write-Host -ForegroundColor Red ($_.InvocationInfo.PositionMessage) Write-Host "`n`n" # Exit to ensure Jenkins captures it. Don't do this in the ISE or interactive Powershell - they will catch the Throw onwards. 
if ( ([bool]([Environment]::GetCommandLineArgs() -Like '*-NonInteractive*')) -and ` ([bool]([Environment]::GetCommandLineArgs() -NotLike "*Powershell_ISE.exe*"))) { exit 1 } Throw $_ } Finally { $ErrorActionPreference="SilentlyContinue" $global:ProgressPreference=$origProgressPreference Write-Host -ForegroundColor Green "INFO: Tidying up at end of run" # Restore the path if ($null -ne $origPath) { $env:PATH=$origPath } # Restore the DOCKER_HOST if ($null -ne $origDOCKER_HOST) { $env:DOCKER_HOST=$origDOCKER_HOST } # Restore the GOROOT and GOPATH variables if ($null -ne $origGOROOT) { $env:GOROOT=$origGOROOT } if ($null -ne $origGOPATH) { $env:GOPATH=$origGOPATH } # Dump the daemon log. This will include any possible panic stack in the .err. if (($daemonStarted -eq 1) -and ($(Get-Item "$env:TEMP\dut.err").Length -gt 0)) { Write-Host -ForegroundColor Cyan "----------- DAEMON LOG ------------" Get-Content "$env:TEMP\dut.err" -ErrorAction SilentlyContinue | Write-Host -ForegroundColor Cyan Write-Host -ForegroundColor Cyan "----------- END DAEMON LOG --------" } # Save the daemon under test log if ($daemonStarted -eq 1) { Set-Location "$env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR\src\github.com\docker\docker" Write-Host -ForegroundColor Green "INFO: Saving daemon under test log ($env:TEMP\dut.out) to bundles\CIDUT.out" Copy-Item "$env:TEMP\dut.out" "bundles\CIDUT.out" -Force -ErrorAction SilentlyContinue Write-Host -ForegroundColor Green "INFO: Saving daemon under test log ($env:TEMP\dut.err) to bundles\CIDUT.err" Copy-Item "$env:TEMP\dut.err" "bundles\CIDUT.err" -Force -ErrorAction SilentlyContinue } Set-Location "$env:SOURCES_DRIVE\$env:SOURCES_SUBDIR" -ErrorAction SilentlyContinue Nuke-Everything # Restore the TEMP path if ($null -ne $TEMPORIG) { $env:TEMP="$TEMPORIG" } $Dur=New-TimeSpan -Start $StartTime -End $(Get-Date) Write-Host -ForegroundColor $FinallyColour "`nINFO: executeCI.ps1 exiting at $(date). Duration $dur`n" }
thaJeztah
3ad9549e70bdf45b40c6332b221cd5c7fd635524
51b06c6795160d8a1ba05d05d6491df7588b2957
The linter complains that Foreach is an alias and that aliases should not be used, as they could lead to unpredictable behavior (a minimal sketch of the alias-vs-cmdlet change follows this record).
thaJeztah
4,520
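For context on the comment above: the change it refers to replaces the Foreach alias with the full ForEach-Object cmdlet name, which the PowerShell linter prefers (most likely PSScriptAnalyzer's alias rule, an assumption, since the rule name is not stated in the comment). A minimal sketch of the pattern, taken from the storage-opt loop that appears in the script contents above and below:

# Flagged form: 'ForEach' is used as an alias for ForEach-Object in a pipeline,
# which linters discourage because aliases can be redefined or misread.
$env:DOCKER_STORAGE_OPTS.Split(",") | ForEach { $dutArgs += "--storage-opt $_" }

# Preferred form: spell out the ForEach-Object cmdlet; this pipeline behaves the same,
# it simply appends one --storage-opt argument per comma-separated option.
$env:DOCKER_STORAGE_OPTS.Split(",") | ForEach-Object { $dutArgs += "--storage-opt $_" }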
moby/moby
42,683
Remove LCOW (step 6)
Splitting off more bits from https://github.com/moby/moby/pull/42170
null
2021-07-27 11:33:51+00:00
2021-07-29 18:34:29+00:00
hack/ci/windows.ps1
# WARNING: When editing this file, consider submitting a PR to # https://github.com/kevpar/docker-w2wCIScripts/blob/master/runCI/executeCI.ps1, and make sure that # https://github.com/kevpar/docker-w2wCIScripts/blob/master/runCI/Invoke-DockerCI.ps1 isn't broken. # Validate using a test context in Jenkins, then copy/paste into Jenkins production. # # Jenkins CI scripts for Windows to Windows CI (Powershell Version) # By John Howard (@jhowardmsft) January 2016 - bash version; July 2016 Ported to PowerShell $ErrorActionPreference = 'Stop' $StartTime=Get-Date Write-Host -ForegroundColor Red "DEBUG: print all environment variables to check how Jenkins runs this script" $allArgs = [Environment]::GetCommandLineArgs() Write-Host -ForegroundColor Red $allArgs Write-Host -ForegroundColor Red "----------------------------------------------------------------------------" # ------------------------------------------------------------------------------------------- # When executed, we rely on four variables being set in the environment: # # [The reason for being environment variables rather than parameters is historical. No reason # why it couldn't be updated.] # # SOURCES_DRIVE is the drive on which the sources being tested are cloned from. # This should be a straight drive letter, no platform semantics. # For example 'c' # # SOURCES_SUBDIR is the top level directory under SOURCES_DRIVE where the # sources are cloned to. There are no platform semantics in this # as it does not include slashes. # For example 'gopath' # # Based on the above examples, it would be expected that Jenkins # would clone the sources being tested to # SOURCES_DRIVE\SOURCES_SUBDIR\src\github.com\docker\docker, or # c:\gopath\src\github.com\docker\docker # # TESTRUN_DRIVE is the drive where we build the binary on and redirect everything # to for the daemon under test. On an Azure D2 type host which has # an SSD temporary storage D: drive, this is ideal for performance. # For example 'd' # # TESTRUN_SUBDIR is the top level directory under TESTRUN_DRIVE where we redirect # everything to for the daemon under test. For example 'CI'. # Hence, the daemon under test is run under # TESTRUN_DRIVE\TESTRUN_SUBDIR\CI-<CommitID> or # d:\CI\CI-<CommitID> # # Optional environment variables help in CI: # # BUILD_NUMBER + BRANCH_NAME are optional variables to be added to the directory below TESTRUN_SUBDIR # to have individual folder per CI build. If some files couldn't be # cleaned up and we want to re-run the build in CI. # Hence, the daemon under test is run under # TESTRUN_DRIVE\TESTRUN_SUBDIR\PR-<PR-Number>\<BuildNumber> or # d:\CI\PR-<PR-Number>\<BuildNumber> # # In addition, the following variables can control the run configuration: # # DOCKER_DUT_DEBUG if defined starts the daemon under test in debug mode. 
# # DOCKER_STORAGE_OPTS comma-separated list of optional storage driver options for the daemon under test # examples: # DOCKER_STORAGE_OPTS="size=40G" # DOCKER_STORAGE_OPTS="lcow.globalmode=false,lcow.kernel=kernel.efi" # # SKIP_VALIDATION_TESTS if defined skips the validation tests # # SKIP_UNIT_TESTS if defined skips the unit tests # # SKIP_INTEGRATION_TESTS if defined skips the integration tests # # SKIP_COPY_GO if defined skips copy the go installer from the image # # DOCKER_DUT_HYPERV if default daemon under test default isolation is hyperv # # INTEGRATION_TEST_NAME to only run partial tests eg "TestInfo*" will only run # any tests starting "TestInfo" # # SKIP_BINARY_BUILD if defined skips building the binary # # SKIP_ZAP_DUT if defined doesn't zap the daemon under test directory # # SKIP_IMAGE_BUILD if defined doesn't build the 'docker' image # # INTEGRATION_IN_CONTAINER if defined, runs the integration tests from inside a container. # As of July 2016, there are known issues with this. # # SKIP_ALL_CLEANUP if defined, skips any cleanup at the start or end of the run # # WINDOWS_BASE_IMAGE if defined, uses that as the base image. Note that the # docker integration tests are also coded to use the same # environment variable, and if no set, defaults to microsoft/windowsservercore # # WINDOWS_BASE_IMAGE_TAG if defined, uses that as the tag name for the base image. # if no set, defaults to latest # # ------------------------------------------------------------------------------------------- # # Jenkins Integration. Add a Windows Powershell build step as follows: # # Write-Host -ForegroundColor green "INFO: Jenkins build step starting" # $CISCRIPT_DEFAULT_LOCATION = "https://raw.githubusercontent.com/moby/moby/master/hack/ci/windows.ps1" # $CISCRIPT_LOCAL_LOCATION = "$env:TEMP\executeCI.ps1" # Write-Host -ForegroundColor green "INFO: Removing cached execution script" # Remove-Item $CISCRIPT_LOCAL_LOCATION -Force -ErrorAction SilentlyContinue 2>&1 | Out-Null # $wc = New-Object net.webclient # try { # Write-Host -ForegroundColor green "INFO: Downloading latest execution script..." # $wc.Downloadfile($CISCRIPT_DEFAULT_LOCATION, $CISCRIPT_LOCAL_LOCATION) # } # catch [System.Net.WebException] # { # Throw ("Failed to download: $_") # } # & $CISCRIPT_LOCAL_LOCATION # ------------------------------------------------------------------------------------------- $SCRIPT_VER="05-Feb-2019 09:03 PDT" $FinallyColour="Cyan" #$env:DOCKER_DUT_DEBUG="yes" # Comment out to not be in debug mode #$env:SKIP_UNIT_TESTS="yes" #$env:SKIP_VALIDATION_TESTS="yes" #$env:SKIP_ZAP_DUT="" #$env:SKIP_BINARY_BUILD="yes" #$env:INTEGRATION_TEST_NAME="" #$env:SKIP_IMAGE_BUILD="yes" #$env:SKIP_ALL_CLEANUP="yes" #$env:INTEGRATION_IN_CONTAINER="yes" #$env:WINDOWS_BASE_IMAGE="" #$env:SKIP_COPY_GO="yes" #$env:INTEGRATION_TESTFLAGS="-test.v" Function Nuke-Everything { $ErrorActionPreference = 'SilentlyContinue' try { if ($null -eq $env:SKIP_ALL_CLEANUP) { Write-Host -ForegroundColor green "INFO: Nuke-Everything..." 
$containerCount = ($(docker ps -aq | Measure-Object -line).Lines) if (-not $LastExitCode -eq 0) { Throw "ERROR: Failed to get container count from control daemon while nuking" } Write-Host -ForegroundColor green "INFO: Container count on control daemon to delete is $containerCount" if ($(docker ps -aq | Measure-Object -line).Lines -gt 0) { docker rm -f $(docker ps -aq) } $allImages = $(docker images --format "{{.Repository}}#{{.ID}}") $toRemove = ($allImages | Select-String -NotMatch "servercore","nanoserver","docker","busybox") $imageCount = ($toRemove | Measure-Object -line).Lines if ($imageCount -gt 0) { Write-Host -Foregroundcolor green "INFO: Non-base image count on control daemon to delete is $imageCount" docker rmi -f ($toRemove | Foreach-Object { $_.ToString().Split("#")[1] }) } } else { Write-Host -ForegroundColor Magenta "WARN: Skipping cleanup of images and containers" } # Kill any spurious daemons. The '-' is IMPORTANT otherwise will kill the control daemon! $pids=$(get-process | where-object {$_.ProcessName -like 'dockerd-*'}).id foreach ($p in $pids) { Write-Host "INFO: Killing daemon with PID $p" Stop-Process -Id $p -Force -ErrorAction SilentlyContinue } if ($null -ne $pidFile) { Write-Host "INFO: Tidying pidfile $pidfile" if (Test-Path $pidFile) { $p=Get-Content $pidFile -raw if ($null -ne $p){ Write-Host -ForegroundColor green "INFO: Stopping possible daemon pid $p" taskkill -f -t -pid $p } Remove-Item "$env:TEMP\docker.pid" -force -ErrorAction SilentlyContinue } } Stop-Process -name "cc1" -Force -ErrorAction SilentlyContinue 2>&1 | Out-Null Stop-Process -name "link" -Force -ErrorAction SilentlyContinue 2>&1 | Out-Null Stop-Process -name "compile" -Force -ErrorAction SilentlyContinue 2>&1 | Out-Null Stop-Process -name "ld" -Force -ErrorAction SilentlyContinue 2>&1 | Out-Null Stop-Process -name "go" -Force -ErrorAction SilentlyContinue 2>&1 | Out-Null Stop-Process -name "git" -Force -ErrorAction SilentlyContinue 2>&1 | Out-Null Stop-Process -name "git-remote-https" -Force -ErrorAction SilentlyContinue 2>&1 | Out-Null Stop-Process -name "integration-cli.test" -Force -ErrorAction SilentlyContinue 2>&1 | Out-Null Stop-Process -name "tail" -Force -ErrorAction SilentlyContinue 2>&1 | Out-Null # Detach any VHDs gwmi msvm_mountedstorageimage -namespace root/virtualization/v2 -ErrorAction SilentlyContinue | foreach-object {$_.DetachVirtualHardDisk() } # Stop any compute processes Get-ComputeProcess | Stop-ComputeProcess -Force # Delete the directory using our dangerous utility unless told not to if (Test-Path "$env:TESTRUN_DRIVE`:\$env:TESTRUN_SUBDIR") { if (($null -ne $env:SKIP_ZAP_DUT) -or ($null -eq $env:SKIP_ALL_CLEANUP)) { Write-Host -ForegroundColor Green "INFO: Nuking $env:TESTRUN_DRIVE`:\$env:TESTRUN_SUBDIR" docker-ci-zap "-folder=$env:TESTRUN_DRIVE`:\$env:TESTRUN_SUBDIR" } else { Write-Host -ForegroundColor Magenta "WARN: Skip nuking $env:TESTRUN_DRIVE`:\$env:TESTRUN_SUBDIR" } } # TODO: This should be able to be removed in August 2017 update. Only needed for RS1 Production Server workaround - Psched $reg = "HKLM:\System\CurrentControlSet\Services\Psched\Parameters\NdisAdapters" $count=(Get-ChildItem $reg | Measure-Object).Count if ($count -gt 0) { Write-Warning "There are $count NdisAdapters leaked under Psched\Parameters" Write-Warning "Cleaning Psched..." Get-ChildItem $reg | Remove-Item -Recurse -Force -ErrorAction SilentlyContinue | Out-Null } # TODO: This should be able to be removed in August 2017 update. 
Only needed for RS1 $reg = "HKLM:\System\CurrentControlSet\Services\WFPLWFS\Parameters\NdisAdapters" $count=(Get-ChildItem $reg | Measure-Object).Count if ($count -gt 0) { Write-Warning "There are $count NdisAdapters leaked under WFPLWFS\Parameters" Write-Warning "Cleaning WFPLWFS..." Get-ChildItem $reg | Remove-Item -Recurse -Force -ErrorAction SilentlyContinue | Out-Null } } catch { # Don't throw any errors onwards Throw $_ } } Try { Write-Host -ForegroundColor Cyan "`nINFO: executeCI.ps1 starting at $(date)`n" Write-Host -ForegroundColor Green "INFO: Script version $SCRIPT_VER" Set-PSDebug -Trace 0 # 1 to turn on $origPath="$env:PATH" # so we can restore it at the end $origDOCKER_HOST="$DOCKER_HOST" # So we can restore it at the end $origGOROOT="$env:GOROOT" # So we can restore it at the end $origGOPATH="$env:GOPATH" # So we can restore it at the end # Turn off progress bars $origProgressPreference=$global:ProgressPreference $global:ProgressPreference='SilentlyContinue' # Git version Write-Host -ForegroundColor Green "INFO: Running $(git version)" # OS Version $bl=(Get-ItemProperty -Path "Registry::HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion" -Name BuildLabEx).BuildLabEx $a=$bl.ToString().Split(".") $Branch=$a[3] $WindowsBuild=$a[0]+"."+$a[1]+"."+$a[4] Write-Host -ForegroundColor green "INFO: Branch:$Branch Build:$WindowsBuild" # List the environment variables Write-Host -ForegroundColor green "INFO: Environment variables:" Get-ChildItem Env: | Out-String # PR if (-not ($null -eq $env:PR)) { Write-Output "INFO: PR#$env:PR (https://github.com/docker/docker/pull/$env:PR)" } # Make sure docker is installed if ($null -eq (Get-Command "docker" -ErrorAction SilentlyContinue)) { Throw "ERROR: docker is not installed or not found on path" } # Make sure docker-ci-zap is installed if ($null -eq (Get-Command "docker-ci-zap" -ErrorAction SilentlyContinue)) { Throw "ERROR: docker-ci-zap is not installed or not found on path" } # Make sure Windows Defender is disabled $defender = $false Try { $status = Get-MpComputerStatus if ($status) { if ($status.RealTimeProtectionEnabled) { $defender = $true } } } Catch {} if ($defender) { Write-Host -ForegroundColor Magenta "WARN: Windows Defender real time protection is enabled, which may cause some integration tests to fail" } # Make sure SOURCES_DRIVE is set if ($null -eq $env:SOURCES_DRIVE) { Throw "ERROR: Environment variable SOURCES_DRIVE is not set" } # Make sure TESTRUN_DRIVE is set if ($null -eq $env:TESTRUN_DRIVE) { Throw "ERROR: Environment variable TESTRUN_DRIVE is not set" } # Make sure SOURCES_SUBDIR is set if ($null -eq $env:SOURCES_SUBDIR) { Throw "ERROR: Environment variable SOURCES_SUBDIR is not set" } # Make sure TESTRUN_SUBDIR is set if ($null -eq $env:TESTRUN_SUBDIR) { Throw "ERROR: Environment variable TESTRUN_SUBDIR is not set" } # SOURCES_DRIVE\SOURCES_SUBDIR must be a directory and exist if (-not (Test-Path -PathType Container "$env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR")) { Throw "ERROR: $env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR must be an existing directory" } # Create the TESTRUN_DRIVE\TESTRUN_SUBDIR if it does not already exist New-Item -ItemType Directory -Force -Path "$env:TESTRUN_DRIVE`:\$env:TESTRUN_SUBDIR" -ErrorAction SilentlyContinue | Out-Null Write-Host -ForegroundColor Green "INFO: Sources under $env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR\..." Write-Host -ForegroundColor Green "INFO: Test run under $env:TESTRUN_DRIVE`:\$env:TESTRUN_SUBDIR\..." 
# Check the intended source location is a directory if (-not (Test-Path -PathType Container "$env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR\src\github.com\docker\docker" -ErrorAction SilentlyContinue)) { Throw "ERROR: $env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR\src\github.com\docker\docker is not a directory!" } # Make sure we start at the root of the sources Set-Location "$env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR\src\github.com\docker\docker" Write-Host -ForegroundColor Green "INFO: Running in $(Get-Location)" # Make sure we are in repo if (-not (Test-Path -PathType Leaf -Path ".\Dockerfile.windows")) { Throw "$(Get-Location) does not contain Dockerfile.windows!" } Write-Host -ForegroundColor Green "INFO: docker/docker repository was found" # Make sure microsoft/windowsservercore:latest image is installed in the control daemon. On public CI machines, windowsservercore.tar and nanoserver.tar # are pre-baked and tagged appropriately in the c:\baseimages directory, and can be directly loaded. # Note - this script will only work on 10B (Oct 2016) or later machines! Not 9D or previous due to image tagging assumptions. # # On machines not on Microsoft corpnet, or those which have not been pre-baked, we have to docker pull the image in which case it will # will come in directly as microsoft/windowsservercore:latest. The ultimate goal of all this code is to ensure that whatever, # we have microsoft/windowsservercore:latest # # Note we cannot use (as at Oct 2016) nanoserver as the control daemons base image, even if nanoserver is used in the tests themselves. $ErrorActionPreference = "SilentlyContinue" $ControlDaemonBaseImage="windowsservercore" $readBaseFrom="c" if ($((docker images --format "{{.Repository}}:{{.Tag}}" | Select-String $("microsoft/"+$ControlDaemonBaseImage+":latest") | Measure-Object -Line).Lines) -eq 0) { # Try the internal azure CI image version or Microsoft internal corpnet where the base image is already pre-prepared on the disk, # either through Invoke-DockerCI or, in the case of Azure CI servers, baked into the VHD at the same location. if (Test-Path $("$env:SOURCES_DRIVE`:\baseimages\"+$ControlDaemonBaseImage+".tar")) { # An optimization for CI servers to copy it to the D: drive which is an SSD. if ($env:SOURCES_DRIVE -ne $env:TESTRUN_DRIVE) { $readBaseFrom=$env:TESTRUN_DRIVE if (!(Test-Path "$env:TESTRUN_DRIVE`:\baseimages")) { New-Item "$env:TESTRUN_DRIVE`:\baseimages" -type directory | Out-Null } if (!(Test-Path "$env:TESTRUN_DRIVE`:\baseimages\windowsservercore.tar")) { if (Test-Path "$env:SOURCES_DRIVE`:\baseimages\windowsservercore.tar") { Write-Host -ForegroundColor Green "INFO: Optimisation - copying $env:SOURCES_DRIVE`:\baseimages\windowsservercore.tar to $env:TESTRUN_DRIVE`:\baseimages" Copy-Item "$env:SOURCES_DRIVE`:\baseimages\windowsservercore.tar" "$env:TESTRUN_DRIVE`:\baseimages" } } if (!(Test-Path "$env:TESTRUN_DRIVE`:\baseimages\nanoserver.tar")) { if (Test-Path "$env:SOURCES_DRIVE`:\baseimages\nanoserver.tar") { Write-Host -ForegroundColor Green "INFO: Optimisation - copying $env:SOURCES_DRIVE`:\baseimages\nanoserver.tar to $env:TESTRUN_DRIVE`:\baseimages" Copy-Item "$env:SOURCES_DRIVE`:\baseimages\nanoserver.tar" "$env:TESTRUN_DRIVE`:\baseimages" } } $readBaseFrom=$env:TESTRUN_DRIVE } Write-Host -ForegroundColor Green "INFO: Loading"$ControlDaemonBaseImage".tar from disk. This may take some time..." 
$ErrorActionPreference = "SilentlyContinue" docker load -i $("$readBaseFrom`:\baseimages\"+$ControlDaemonBaseImage+".tar") $ErrorActionPreference = "Stop" if (-not $LastExitCode -eq 0) { Throw $("ERROR: Failed to load $readBaseFrom`:\baseimages\"+$ControlDaemonBaseImage+".tar") } Write-Host -ForegroundColor Green "INFO: docker load of"$ControlDaemonBaseImage" completed successfully" } else { # We need to docker pull it instead. It will come in directly as microsoft/imagename:latest Write-Host -ForegroundColor Green $("INFO: Pulling $($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG from docker hub. This may take some time...") $ErrorActionPreference = "SilentlyContinue" docker pull "$($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG" $ErrorActionPreference = "Stop" if (-not $LastExitCode -eq 0) { Throw $("ERROR: Failed to docker pull $($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG.") } Write-Host -ForegroundColor Green $("INFO: docker pull of $($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG completed successfully") Write-Host -ForegroundColor Green $("INFO: Tagging $($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG as microsoft/$ControlDaemonBaseImage") docker tag "$($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG" microsoft/$ControlDaemonBaseImage } } else { Write-Host -ForegroundColor Green "INFO: Image"$("microsoft/"+$ControlDaemonBaseImage+":latest")"is already loaded in the control daemon" } # Inspect the pulled image to get the version directly $ErrorActionPreference = "SilentlyContinue" $imgVersion = $(docker inspect $("microsoft/"+$ControlDaemonBaseImage) --format "{{.OsVersion}}") $ErrorActionPreference = "Stop" Write-Host -ForegroundColor Green $("INFO: Version of microsoft/"+$ControlDaemonBaseImage+":latest is '"+$imgVersion+"'") # Provide the docker version for debugging purposes. Write-Host -ForegroundColor Green "INFO: Docker version of control daemon" Write-Host $ErrorActionPreference = "SilentlyContinue" docker version $ErrorActionPreference = "Stop" if (-not($LastExitCode -eq 0)) { Write-Host Write-Host -ForegroundColor Green "---------------------------------------------------------------------------" Write-Host -ForegroundColor Green " Failed to get a response from the control daemon. It may be down." Write-Host -ForegroundColor Green " Try re-running this CI job, or ask on #docker-maintainers on docker slack" Write-Host -ForegroundColor Green " to see if the daemon is running. Also check the service configuration." Write-Host -ForegroundColor Green " DOCKER_HOST is set to $DOCKER_HOST." Write-Host -ForegroundColor Green "---------------------------------------------------------------------------" Write-Host Throw "ERROR: The control daemon does not appear to be running." } Write-Host # Same as above, but docker info Write-Host -ForegroundColor Green "INFO: Docker info of control daemon" Write-Host $ErrorActionPreference = "SilentlyContinue" docker info $ErrorActionPreference = "Stop" if (-not($LastExitCode -eq 0)) { Throw "ERROR: The control daemon does not appear to be running." } Write-Host # Get the commit has and verify we have something $ErrorActionPreference = "SilentlyContinue" $COMMITHASH=$(git rev-parse --short HEAD) $ErrorActionPreference = "Stop" if (-not($LastExitCode -eq 0)) { Throw "ERROR: Failed to get commit hash. Are you sure this is a docker repository?" 
} Write-Host -ForegroundColor Green "INFO: Commit hash is $COMMITHASH" # Nuke everything and go back to our sources after Nuke-Everything cd "$env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR\src\github.com\docker\docker" # Redirect to a temporary location. $TEMPORIG=$env:TEMP if ($null -eq $env:BUILD_NUMBER) { $env:TEMP="$env:TESTRUN_DRIVE`:\$env:TESTRUN_SUBDIR\CI-$COMMITHASH" } else { # individual temporary location per CI build that better matches the BUILD_URL $env:TEMP="$env:TESTRUN_DRIVE`:\$env:TESTRUN_SUBDIR\$env:BRANCH_NAME\$env:BUILD_NUMBER" } $env:LOCALAPPDATA="$env:TEMP\localappdata" $errorActionPreference='Stop' New-Item -ItemType Directory "$env:TEMP" -ErrorAction SilentlyContinue | Out-Null New-Item -ItemType Directory "$env:TEMP\userprofile" -ErrorAction SilentlyContinue | Out-Null New-Item -ItemType Directory "$env:TEMP\testresults" -ErrorAction SilentlyContinue | Out-Null New-Item -ItemType Directory "$env:TEMP\testresults\unittests" -ErrorAction SilentlyContinue | Out-Null New-Item -ItemType Directory "$env:TEMP\localappdata" -ErrorAction SilentlyContinue | Out-Null New-Item -ItemType Directory "$env:TEMP\binary" -ErrorAction SilentlyContinue | Out-Null New-Item -ItemType Directory "$env:TEMP\installer" -ErrorAction SilentlyContinue | Out-Null if ($null -eq $env:SKIP_COPY_GO) { # Wipe the previous version of GO - we're going to get it out of the image if (Test-Path "$env:TEMP\go") { Remove-Item "$env:TEMP\go" -Recurse -Force -ErrorAction SilentlyContinue | Out-Null } New-Item -ItemType Directory "$env:TEMP\go" -ErrorAction SilentlyContinue | Out-Null } Write-Host -ForegroundColor Green "INFO: Location for testing is $env:TEMP" # CI Integrity check - ensure Dockerfile.windows and Dockerfile go versions match $goVersionDockerfileWindows=(Select-String -Path ".\Dockerfile.windows" -Pattern "^ARG[\s]+GO_VERSION=(.*)$").Matches.groups[1].Value $goVersionDockerfile=(Select-String -Path ".\Dockerfile" -Pattern "^ARG[\s]+GO_VERSION=(.*)$").Matches.groups[1].Value if ($null -eq $goVersionDockerfile) { Throw "ERROR: Failed to extract golang version from Dockerfile" } Write-Host -ForegroundColor Green "INFO: Validating GOLang consistency in Dockerfile.windows..." if (-not ($goVersionDockerfile -eq $goVersionDockerfileWindows)) { Throw "ERROR: Mismatched GO versions between Dockerfile and Dockerfile.windows. Update your PR to ensure that both files are updated and in sync. $goVersionDockerfile $goVersionDockerfileWindows" } # Build the image if ($null -eq $env:SKIP_IMAGE_BUILD) { Write-Host -ForegroundColor Cyan "`n`nINFO: Building the image from Dockerfile.windows at $(Get-Date)..." Write-Host $ErrorActionPreference = "SilentlyContinue" $Duration=$(Measure-Command { docker build --build-arg=GO_VERSION -t docker -f Dockerfile.windows . | Out-Host }) $ErrorActionPreference = "Stop" if (-not($LastExitCode -eq 0)) { Throw "ERROR: Failed to build image from Dockerfile.windows" } Write-Host -ForegroundColor Green "INFO: Image build ended at $(Get-Date). Duration`:$Duration" } else { Write-Host -ForegroundColor Magenta "WARN: Skipping building the docker image" } # Following at the moment must be docker\docker as it's dictated by dockerfile.Windows $contPath="$COMMITHASH`:c`:\gopath\src\github.com\docker\docker\bundles" # After https://github.com/docker/docker/pull/30290, .git was added to .dockerignore. 
Therefore # we have to calculate unsupported outside of the container, and pass the commit ID in through # an environment variable for the binary build $CommitUnsupported="" if ($(git status --porcelain --untracked-files=no).Length -ne 0) { $CommitUnsupported="-unsupported" } # Build the binary in a container unless asked to skip it. if ($null -eq $env:SKIP_BINARY_BUILD) { Write-Host -ForegroundColor Cyan "`n`nINFO: Building the test binaries at $(Get-Date)..." $ErrorActionPreference = "SilentlyContinue" docker rm -f $COMMITHASH 2>&1 | Out-Null if ($CommitUnsupported -ne "") { Write-Host "" Write-Warning "This version is unsupported because there are uncommitted file(s)." Write-Warning "Either commit these changes, or add them to .gitignore." git status --porcelain --untracked-files=no | Write-Warning Write-Host "" } $Duration=$(Measure-Command {docker run --name $COMMITHASH -e DOCKER_GITCOMMIT=$COMMITHASH$CommitUnsupported docker hack\make.ps1 -Daemon -Client | Out-Host }) $ErrorActionPreference = "Stop" if (-not($LastExitCode -eq 0)) { Throw "ERROR: Failed to build binary" } Write-Host -ForegroundColor Green "INFO: Binaries build ended at $(Get-Date). Duration`:$Duration" # Copy the binaries and the generated version_autogen.go out of the container $ErrorActionPreference = "SilentlyContinue" docker cp "$contPath\docker.exe" $env:TEMP\binary\ if (-not($LastExitCode -eq 0)) { Throw "ERROR: Failed to docker cp the client binary (docker.exe) to $env:TEMP\binary" } docker cp "$contPath\dockerd.exe" $env:TEMP\binary\ if (-not($LastExitCode -eq 0)) { Throw "ERROR: Failed to docker cp the daemon binary (dockerd.exe) to $env:TEMP\binary" } docker cp "$COMMITHASH`:c`:\gopath\bin\gotestsum.exe" $env:TEMP\binary\ if (-not (Test-Path "$env:TEMP\binary\gotestsum.exe")) { Throw "ERROR: gotestsum.exe not found...." ` } $ErrorActionPreference = "Stop" # Copy the built dockerd.exe to dockerd-$COMMITHASH.exe so that easily spotted in task manager. Write-Host -ForegroundColor Green "INFO: Copying the built daemon binary to $env:TEMP\binary\dockerd-$COMMITHASH.exe..." Copy-Item $env:TEMP\binary\dockerd.exe $env:TEMP\binary\dockerd-$COMMITHASH.exe -Force -ErrorAction SilentlyContinue # Copy the built docker.exe to docker-$COMMITHASH.exe Write-Host -ForegroundColor Green "INFO: Copying the built client binary to $env:TEMP\binary\docker-$COMMITHASH.exe..." Copy-Item $env:TEMP\binary\docker.exe $env:TEMP\binary\docker-$COMMITHASH.exe -Force -ErrorAction SilentlyContinue } else { Write-Host -ForegroundColor Magenta "WARN: Skipping building the binaries" } Write-Host -ForegroundColor Green "INFO: Copying dockerversion from the container..." $ErrorActionPreference = "SilentlyContinue" docker cp "$contPath\..\dockerversion\version_autogen.go" "$env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR\src\github.com\docker\docker\dockerversion" if (-not($LastExitCode -eq 0)) { Throw "ERROR: Failed to docker cp the generated version_autogen.go to $env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR\src\github.com\docker\docker\dockerversion" } $ErrorActionPreference = "Stop" # Grab the golang installer out of the built image. That way, we know we are consistent once extracted and paths set, # so there's no need to re-deploy on account of an upgrade to the version of GO being used in docker. if ($null -eq $env:SKIP_COPY_GO) { Write-Host -ForegroundColor Green "INFO: Copying the golang package from the container to $env:TEMP\installer\go.zip..." 
docker cp "$COMMITHASH`:c`:\go.zip" $env:TEMP\installer\ if (-not($LastExitCode -eq 0)) { Throw "ERROR: Failed to docker cp the golang installer 'go.zip' from container:c:\go.zip to $env:TEMP\installer" } $ErrorActionPreference = "Stop" # Extract the golang installer Write-Host -ForegroundColor Green "INFO: Extracting go.zip to $env:TEMP\go" $Duration=$(Measure-Command { Expand-Archive $env:TEMP\installer\go.zip $env:TEMP -Force | Out-Null}) Write-Host -ForegroundColor Green "INFO: Extraction ended at $(Get-Date). Duration`:$Duration" } else { Write-Host -ForegroundColor Magenta "WARN: Skipping copying and extracting golang from the image" } # Set the GOPATH Write-Host -ForegroundColor Green "INFO: Updating the golang and path environment variables" $env:GOPATH="$env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR" Write-Host -ForegroundColor Green "INFO: GOPATH=$env:GOPATH" # Set the path to have the version of go from the image at the front $env:PATH="$env:TEMP\go\bin;$env:PATH" # Set the GOROOT to be our copy of go from the image $env:GOROOT="$env:TEMP\go" Write-Host -ForegroundColor Green "INFO: $(go version)" # Work out the -H parameter for the daemon under test (DASHH_DUT) and client under test (DASHH_CUT) #$DASHH_DUT="npipe:////./pipe/$COMMITHASH" # Can't do remote named pipe #$ip = (resolve-dnsname $env:COMPUTERNAME -type A -NoHostsFile -LlmnrNetbiosOnly).IPAddress # Useful to tie down $DASHH_CUT="tcp://127.0.0.1`:2357" # Not a typo for 2375! $DASHH_DUT="tcp://0.0.0.0:2357" # Not a typo for 2375! # Arguments for the daemon under test $dutArgs=@() $dutArgs += "-H $DASHH_DUT" $dutArgs += "--data-root $env:TEMP\daemon" $dutArgs += "--pidfile $env:TEMP\docker.pid" # Save the PID file so we can nuke it if set $pidFile="$env:TEMP\docker.pid" # Arguments: Are we starting the daemon under test in debug mode? if (-not ("$env:DOCKER_DUT_DEBUG" -eq "")) { Write-Host -ForegroundColor Green "INFO: Running the daemon under test in debug mode" $dutArgs += "-D" } # Arguments: Are we starting the daemon under test with Hyper-V containers as the default isolation? if (-not ("$env:DOCKER_DUT_HYPERV" -eq "")) { Write-Host -ForegroundColor Green "INFO: Running the daemon under test with Hyper-V containers as the default" $dutArgs += "--exec-opt isolation=hyperv" } # Arguments: Allow setting optional storage-driver options # example usage: DOCKER_STORAGE_OPTS="lcow.globalmode=false,lcow.kernel=kernel.efi" if (-not ("$env:DOCKER_STORAGE_OPTS" -eq "")) { Write-Host -ForegroundColor Green "INFO: Running the daemon under test with storage-driver options ${env:DOCKER_STORAGE_OPTS}" $env:DOCKER_STORAGE_OPTS.Split(",") | ForEach { $dutArgs += "--storage-opt $_" } } # Start the daemon under test, ensuring everything is redirected to folders under $TEMP. # Important - we launch the -$COMMITHASH version so that we can kill it without # killing the control daemon. Write-Host -ForegroundColor Green "INFO: Starting a daemon under test..." Write-Host -ForegroundColor Green "INFO: Args: $dutArgs" New-Item -ItemType Directory $env:TEMP\daemon -ErrorAction SilentlyContinue | Out-Null # Cannot fathom why, but always writes to stderr.... Start-Process "$env:TEMP\binary\dockerd-$COMMITHASH" ` -ArgumentList $dutArgs ` -RedirectStandardOutput "$env:TEMP\dut.out" ` -RedirectStandardError "$env:TEMP\dut.err" Write-Host -ForegroundColor Green "INFO: Process started successfully." 
$daemonStarted=1 # Start tailing the daemon under test if the command is installed if ($null -ne (Get-Command "tail" -ErrorAction SilentlyContinue)) { Write-Host -ForegroundColor green "INFO: Start tailing logs of the daemon under tests" $tail = Start-Process "tail" -ArgumentList "-f $env:TEMP\dut.out" -PassThru -ErrorAction SilentlyContinue } # Verify we can get the daemon under test to respond $tries=20 Write-Host -ForegroundColor Green "INFO: Waiting for the daemon under test to start..." while ($true) { $ErrorActionPreference = "SilentlyContinue" & "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" version 2>&1 | Out-Null $ErrorActionPreference = "Stop" if ($LastExitCode -eq 0) { break } $tries-- if ($tries -le 0) { Throw "ERROR: Failed to get a response from the daemon under test" } Write-Host -NoNewline "." sleep 1 } Write-Host -ForegroundColor Green "INFO: Daemon under test started and replied!" # Provide the docker version of the daemon under test for debugging purposes. Write-Host -ForegroundColor Green "INFO: Docker version of the daemon under test" Write-Host $ErrorActionPreference = "SilentlyContinue" & "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" version $ErrorActionPreference = "Stop" if ($LastExitCode -ne 0) { Throw "ERROR: The daemon under test does not appear to be running." } Write-Host # Same as above but docker info Write-Host -ForegroundColor Green "INFO: Docker info of the daemon under test" Write-Host $ErrorActionPreference = "SilentlyContinue" & "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" info $ErrorActionPreference = "Stop" if ($LastExitCode -ne 0) { Throw "ERROR: The daemon under test does not appear to be running." } Write-Host # Same as above but docker images Write-Host -ForegroundColor Green "INFO: Docker images of the daemon under test" Write-Host $ErrorActionPreference = "SilentlyContinue" & "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" images $ErrorActionPreference = "Stop" if ($LastExitCode -ne 0) { Throw "ERROR: The daemon under test does not appear to be running." } Write-Host # Default to windowsservercore for the base image used for the tests. The "docker" image # and the control daemon use microsoft/windowsservercore regardless. This is *JUST* for the tests. if ($null -eq $env:WINDOWS_BASE_IMAGE) { $env:WINDOWS_BASE_IMAGE="microsoft/windowsservercore" } if ($null -eq $env:WINDOWS_BASE_IMAGE_TAG) { $env:WINDOWS_BASE_IMAGE_TAG="latest" } # Lowercase and make sure it has a microsoft/ prefix $env:WINDOWS_BASE_IMAGE = $env:WINDOWS_BASE_IMAGE.ToLower() if (! $($env:WINDOWS_BASE_IMAGE -Split "/")[0] -match "microsoft") { Throw "ERROR: WINDOWS_BASE_IMAGE should start microsoft/ or mcr.microsoft.com/" } Write-Host -ForegroundColor Green "INFO: Base image for tests is $env:WINDOWS_BASE_IMAGE" $ErrorActionPreference = "SilentlyContinue" if ($((& "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" images --format "{{.Repository}}:{{.Tag}}" | Select-String "$($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG" | Measure-Object -Line).Lines) -eq 0) { # Try the internal azure CI image version or Microsoft internal corpnet where the base image is already pre-prepared on the disk, # either through Invoke-DockerCI or, in the case of Azure CI servers, baked into the VHD at the same location. if (Test-Path $("c:\baseimages\"+$($env:WINDOWS_BASE_IMAGE -Split "/")[1]+".tar")) { Write-Host -ForegroundColor Green "INFO: Loading"$($env:WINDOWS_BASE_IMAGE -Split "/")[1]".tar from disk into the daemon under test. 
This may take some time..." $ErrorActionPreference = "SilentlyContinue" & "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" load -i $("$readBaseFrom`:\baseimages\"+$($env:WINDOWS_BASE_IMAGE -Split "/")[1]+".tar") $ErrorActionPreference = "Stop" if (-not $LastExitCode -eq 0) { Throw $("ERROR: Failed to load $readBaseFrom`:\baseimages\"+$($env:WINDOWS_BASE_IMAGE -Split "/")[1]+".tar into daemon under test") } Write-Host -ForegroundColor Green "INFO: docker load of"$($env:WINDOWS_BASE_IMAGE -Split "/")[1]" into daemon under test completed successfully" } else { # We need to docker pull it instead. It will come in directly as microsoft/imagename:tagname Write-Host -ForegroundColor Green $("INFO: Pulling "+$env:WINDOWS_BASE_IMAGE+":"+$env:WINDOWS_BASE_IMAGE_TAG+" from docker hub into daemon under test. This may take some time...") $ErrorActionPreference = "SilentlyContinue" & "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" pull "$($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG" $ErrorActionPreference = "Stop" if (-not $LastExitCode -eq 0) { Throw $("ERROR: Failed to docker pull $($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG into daemon under test.") } Write-Host -ForegroundColor Green $("INFO: docker pull of $($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG into daemon under test completed successfully") Write-Host -ForegroundColor Green $("INFO: Tagging $($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG as microsoft/$ControlDaemonBaseImage in daemon under test") & "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" tag "$($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG" microsoft/$ControlDaemonBaseImage } } else { Write-Host -ForegroundColor Green "INFO: Image $($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG is already loaded in the daemon under test" } # Inspect the pulled or loaded image to get the version directly $ErrorActionPreference = "SilentlyContinue" $dutimgVersion = $(&"$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" inspect "$($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG" --format "{{.OsVersion}}") $ErrorActionPreference = "Stop" Write-Host -ForegroundColor Green $("INFO: Version of $($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG is '"+$dutimgVersion+"'") # Run the validation tests unless SKIP_VALIDATION_TESTS is defined. if ($null -eq $env:SKIP_VALIDATION_TESTS) { Write-Host -ForegroundColor Cyan "INFO: Running validation tests at $(Get-Date)..." $ErrorActionPreference = "SilentlyContinue" $Duration=$(Measure-Command { hack\make.ps1 -DCO -GoFormat -PkgImports | Out-Host }) $ErrorActionPreference = "Stop" if (-not($LastExitCode -eq 0)) { Throw "ERROR: Validation tests failed" } Write-Host -ForegroundColor Green "INFO: Validation tests ended at $(Get-Date). Duration`:$Duration" } else { Write-Host -ForegroundColor Magenta "WARN: Skipping validation tests" } # Run the unit tests inside a container unless SKIP_UNIT_TESTS is defined if ($null -eq $env:SKIP_UNIT_TESTS) { $ContainerNameForUnitTests = $COMMITHASH + "_UnitTests" Write-Host -ForegroundColor Cyan "INFO: Running unit tests at $(Get-Date)..." $ErrorActionPreference = "SilentlyContinue" $Duration=$(Measure-Command {docker run --name $ContainerNameForUnitTests -e DOCKER_GITCOMMIT=$COMMITHASH$CommitUnsupported docker hack\make.ps1 -TestUnit | Out-Host }) $TestRunExitCode = $LastExitCode $ErrorActionPreference = "Stop" # Saving where jenkins will take a look at..... 
New-Item -Force -ItemType Directory bundles | Out-Null $unitTestsContPath="$ContainerNameForUnitTests`:c`:\gopath\src\github.com\docker\docker\bundles" $JunitExpectedContFilePath = "$unitTestsContPath\junit-report-unit-tests.xml" docker cp $JunitExpectedContFilePath "bundles" if (-not($LastExitCode -eq 0)) { Throw "ERROR: Failed to docker cp the unit tests report ($JunitExpectedContFilePath) to bundles" } if (Test-Path "bundles\junit-report-unit-tests.xml") { Write-Host -ForegroundColor Magenta "INFO: Unit tests results(bundles\junit-report-unit-tests.xml) exist. pwd=$pwd" } else { Write-Host -ForegroundColor Magenta "ERROR: Unit tests results(bundles\junit-report-unit-tests.xml) do not exist. pwd=$pwd" } if (-not($TestRunExitCode -eq 0)) { Throw "ERROR: Unit tests failed" } Write-Host -ForegroundColor Green "INFO: Unit tests ended at $(Get-Date). Duration`:$Duration" } else { Write-Host -ForegroundColor Magenta "WARN: Skipping unit tests" } # Add the Windows busybox image. Needed for WCOW integration tests if ($null -eq $env:SKIP_INTEGRATION_TESTS) { Write-Host -ForegroundColor Green "INFO: Building busybox" $ErrorActionPreference = "SilentlyContinue" $(& "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" build -t busybox --build-arg WINDOWS_BASE_IMAGE --build-arg WINDOWS_BASE_IMAGE_TAG "$env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR\src\github.com\docker\docker\contrib\busybox\" | Out-Host) $ErrorActionPreference = "Stop" if (-not($LastExitCode -eq 0)) { Throw "ERROR: Failed to build busybox image" } Write-Host -ForegroundColor Green "INFO: Docker images of the daemon under test" Write-Host $ErrorActionPreference = "SilentlyContinue" & "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" images $ErrorActionPreference = "Stop" if ($LastExitCode -ne 0) { Throw "ERROR: The daemon under test does not appear to be running." } Write-Host } # Run the WCOW integration tests unless SKIP_INTEGRATION_TESTS is defined if ($null -eq $env:SKIP_INTEGRATION_TESTS) { Write-Host -ForegroundColor Cyan "INFO: Running integration tests at $(Get-Date)..." $ErrorActionPreference = "SilentlyContinue" # Location of the daemon under test. $env:OrigDOCKER_HOST="$env:DOCKER_HOST" #https://blogs.technet.microsoft.com/heyscriptingguy/2011/09/20/solve-problems-with-external-command-lines-in-powershell/ is useful to see tokenising $jsonFilePath = "..\\bundles\\go-test-report-intcli-tests.json" $xmlFilePath = "..\\bundles\\junit-report-intcli-tests.xml" $c = "gotestsum --format=standard-verbose --jsonfile=$jsonFilePath --junitfile=$xmlFilePath -- " if ($null -ne $env:INTEGRATION_TEST_NAME) { # Makes is quicker for debugging to be able to run only a subset of the integration tests $c += "`"-test.run`" " $c += "`"$env:INTEGRATION_TEST_NAME`" " Write-Host -ForegroundColor Magenta "WARN: Only running integration tests matching $env:INTEGRATION_TEST_NAME" } $c += "`"-tags`" " + "`"autogen`" " $c += "`"-test.timeout`" " + "`"200m`" " if ($null -ne $env:INTEGRATION_IN_CONTAINER) { Write-Host -ForegroundColor Green "INFO: Integration tests being run inside a container" # Note we talk back through the containers gateway address # And the ridiculous lengths we have to go to get the default gateway address... (GetNetIPConfiguration doesn't work in nanoserver) # I just could not get the escaping to work in a single command, so output $c to a file and run that in the container instead... # Not the prettiest, but it works. 
$c | Out-File -Force "$env:TEMP\binary\runIntegrationCLI.ps1" $Duration= $(Measure-Command { & docker run ` --rm ` -e c=$c ` --workdir "c`:\gopath\src\github.com\docker\docker\integration-cli" ` -v "$env:TEMP\binary`:c:\target" ` docker ` "`$env`:PATH`='c`:\target;'+`$env:PATH`; `$env:DOCKER_HOST`='tcp`://'+(ipconfig | select -last 1).Substring(39)+'`:2357'; c:\target\runIntegrationCLI.ps1" | Out-Host } ) } else { $env:DOCKER_HOST=$DASHH_CUT $env:PATH="$env:TEMP\binary;$env:PATH;" # Force to use the test binaries, not the host ones. $env:GO111MODULE="off" Write-Host -ForegroundColor Green "INFO: DOCKER_HOST at $DASHH_CUT" $ErrorActionPreference = "SilentlyContinue" Write-Host -ForegroundColor Cyan "INFO: Integration API tests being run from the host:" $start=(Get-Date); Invoke-Expression ".\hack\make.ps1 -TestIntegration"; $Duration=New-Timespan -Start $start -End (Get-Date) $IntTestsRunResult = $LastExitCode $ErrorActionPreference = "Stop" if (-not($IntTestsRunResult -eq 0)) { Throw "ERROR: Integration API tests failed at $(Get-Date). Duration`:$Duration" } $ErrorActionPreference = "SilentlyContinue" Write-Host -ForegroundColor Green "INFO: Integration CLI tests being run from the host:" Write-Host -ForegroundColor Green "INFO: $c" Set-Location "$env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR\src\github.com\docker\docker\integration-cli" # Explicit to not use measure-command otherwise don't get output as it goes $start=(Get-Date); Invoke-Expression $c; $Duration=New-Timespan -Start $start -End (Get-Date) } $ErrorActionPreference = "Stop" if (-not($LastExitCode -eq 0)) { Throw "ERROR: Integration CLI tests failed at $(Get-Date). Duration`:$Duration" } Write-Host -ForegroundColor Green "INFO: Integration tests ended at $(Get-Date). Duration`:$Duration" } else { Write-Host -ForegroundColor Magenta "WARN: Skipping integration tests" } # Docker info now to get counts (after or if jjh/containercounts is merged) if ($daemonStarted -eq 1) { Write-Host -ForegroundColor Green "INFO: Docker info of the daemon under test at end of run" Write-Host $ErrorActionPreference = "SilentlyContinue" & "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" info $ErrorActionPreference = "Stop" if ($LastExitCode -ne 0) { Throw "ERROR: The daemon under test does not appear to be running." } Write-Host } # Stop the daemon under test if (Test-Path "$env:TEMP\docker.pid") { $p=Get-Content "$env:TEMP\docker.pid" -raw if (($null -ne $p) -and ($daemonStarted -eq 1)) { Write-Host -ForegroundColor green "INFO: Stopping daemon under test" taskkill -f -t -pid $p #sleep 5 } Remove-Item "$env:TEMP\docker.pid" -force -ErrorAction SilentlyContinue } # Stop the tail process (if started) if ($null -ne $tail) { Write-Host -ForegroundColor green "INFO: Stop tailing logs of the daemon under tests" Stop-Process -InputObject $tail -Force } Write-Host -ForegroundColor Green "INFO: executeCI.ps1 Completed successfully at $(Get-Date)." } Catch [Exception] { $FinallyColour="Red" Write-Host -ForegroundColor Red ("`r`n`r`nERROR: Failed '$_' at $(Get-Date)") Write-Host -ForegroundColor Red ($_.InvocationInfo.PositionMessage) Write-Host "`n`n" # Exit to ensure Jenkins captures it. Don't do this in the ISE or interactive Powershell - they will catch the Throw onwards. 
if ( ([bool]([Environment]::GetCommandLineArgs() -Like '*-NonInteractive*')) -and ` ([bool]([Environment]::GetCommandLineArgs() -NotLike "*Powershell_ISE.exe*"))) { exit 1 } Throw $_ } Finally { $ErrorActionPreference="SilentlyContinue" $global:ProgressPreference=$origProgressPreference Write-Host -ForegroundColor Green "INFO: Tidying up at end of run" # Restore the path if ($null -ne $origPath) { $env:PATH=$origPath } # Restore the DOCKER_HOST if ($null -ne $origDOCKER_HOST) { $env:DOCKER_HOST=$origDOCKER_HOST } # Restore the GOROOT and GOPATH variables if ($null -ne $origGOROOT) { $env:GOROOT=$origGOROOT } if ($null -ne $origGOPATH) { $env:GOPATH=$origGOPATH } # Dump the daemon log. This will include any possible panic stack in the .err. if (($daemonStarted -eq 1) -and ($(Get-Item "$env:TEMP\dut.err").Length -gt 0)) { Write-Host -ForegroundColor Cyan "----------- DAEMON LOG ------------" Get-Content "$env:TEMP\dut.err" -ErrorAction SilentlyContinue | Write-Host -ForegroundColor Cyan Write-Host -ForegroundColor Cyan "----------- END DAEMON LOG --------" } # Save the daemon under test log if ($daemonStarted -eq 1) { Set-Location "$env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR\src\github.com\docker\docker" Write-Host -ForegroundColor Green "INFO: Saving daemon under test log ($env:TEMP\dut.out) to bundles\CIDUT.out" Copy-Item "$env:TEMP\dut.out" "bundles\CIDUT.out" -Force -ErrorAction SilentlyContinue Write-Host -ForegroundColor Green "INFO: Saving daemon under test log ($env:TEMP\dut.err) to bundles\CIDUT.err" Copy-Item "$env:TEMP\dut.err" "bundles\CIDUT.err" -Force -ErrorAction SilentlyContinue } Set-Location "$env:SOURCES_DRIVE\$env:SOURCES_SUBDIR" -ErrorAction SilentlyContinue Nuke-Everything # Restore the TEMP path if ($null -ne $TEMPORIG) { $env:TEMP="$TEMPORIG" } $Dur=New-TimeSpan -Start $StartTime -End $(Get-Date) Write-Host -ForegroundColor $FinallyColour "`nINFO: executeCI.ps1 exiting at $(date). Duration $dur`n" }
# WARNING: When editing this file, consider submitting a PR to # https://github.com/kevpar/docker-w2wCIScripts/blob/master/runCI/executeCI.ps1, and make sure that # https://github.com/kevpar/docker-w2wCIScripts/blob/master/runCI/Invoke-DockerCI.ps1 isn't broken. # Validate using a test context in Jenkins, then copy/paste into Jenkins production. # # Jenkins CI scripts for Windows to Windows CI (Powershell Version) # By John Howard (@jhowardmsft) January 2016 - bash version; July 2016 Ported to PowerShell $ErrorActionPreference = 'Stop' $StartTime=Get-Date Write-Host -ForegroundColor Red "DEBUG: print all environment variables to check how Jenkins runs this script" $allArgs = [Environment]::GetCommandLineArgs() Write-Host -ForegroundColor Red $allArgs Write-Host -ForegroundColor Red "----------------------------------------------------------------------------" # ------------------------------------------------------------------------------------------- # When executed, we rely on four variables being set in the environment: # # [The reason for being environment variables rather than parameters is historical. No reason # why it couldn't be updated.] # # SOURCES_DRIVE is the drive on which the sources being tested are cloned from. # This should be a straight drive letter, no platform semantics. # For example 'c' # # SOURCES_SUBDIR is the top level directory under SOURCES_DRIVE where the # sources are cloned to. There are no platform semantics in this # as it does not include slashes. # For example 'gopath' # # Based on the above examples, it would be expected that Jenkins # would clone the sources being tested to # SOURCES_DRIVE\SOURCES_SUBDIR\src\github.com\docker\docker, or # c:\gopath\src\github.com\docker\docker # # TESTRUN_DRIVE is the drive where we build the binary on and redirect everything # to for the daemon under test. On an Azure D2 type host which has # an SSD temporary storage D: drive, this is ideal for performance. # For example 'd' # # TESTRUN_SUBDIR is the top level directory under TESTRUN_DRIVE where we redirect # everything to for the daemon under test. For example 'CI'. # Hence, the daemon under test is run under # TESTRUN_DRIVE\TESTRUN_SUBDIR\CI-<CommitID> or # d:\CI\CI-<CommitID> # # Optional environment variables help in CI: # # BUILD_NUMBER + BRANCH_NAME are optional variables to be added to the directory below TESTRUN_SUBDIR # to have individual folder per CI build. If some files couldn't be # cleaned up and we want to re-run the build in CI. # Hence, the daemon under test is run under # TESTRUN_DRIVE\TESTRUN_SUBDIR\PR-<PR-Number>\<BuildNumber> or # d:\CI\PR-<PR-Number>\<BuildNumber> # # In addition, the following variables can control the run configuration: # # DOCKER_DUT_DEBUG if defined starts the daemon under test in debug mode. 
# # DOCKER_STORAGE_OPTS comma-separated list of optional storage driver options for the daemon under test # examples: # DOCKER_STORAGE_OPTS="size=40G" # # SKIP_VALIDATION_TESTS if defined skips the validation tests # # SKIP_UNIT_TESTS if defined skips the unit tests # # SKIP_INTEGRATION_TESTS if defined skips the integration tests # # SKIP_COPY_GO if defined skips copy the go installer from the image # # DOCKER_DUT_HYPERV if default daemon under test default isolation is hyperv # # INTEGRATION_TEST_NAME to only run partial tests eg "TestInfo*" will only run # any tests starting "TestInfo" # # SKIP_BINARY_BUILD if defined skips building the binary # # SKIP_ZAP_DUT if defined doesn't zap the daemon under test directory # # SKIP_IMAGE_BUILD if defined doesn't build the 'docker' image # # INTEGRATION_IN_CONTAINER if defined, runs the integration tests from inside a container. # As of July 2016, there are known issues with this. # # SKIP_ALL_CLEANUP if defined, skips any cleanup at the start or end of the run # # WINDOWS_BASE_IMAGE if defined, uses that as the base image. Note that the # docker integration tests are also coded to use the same # environment variable, and if no set, defaults to microsoft/windowsservercore # # WINDOWS_BASE_IMAGE_TAG if defined, uses that as the tag name for the base image. # if no set, defaults to latest # # ------------------------------------------------------------------------------------------- # # Jenkins Integration. Add a Windows Powershell build step as follows: # # Write-Host -ForegroundColor green "INFO: Jenkins build step starting" # $CISCRIPT_DEFAULT_LOCATION = "https://raw.githubusercontent.com/moby/moby/master/hack/ci/windows.ps1" # $CISCRIPT_LOCAL_LOCATION = "$env:TEMP\executeCI.ps1" # Write-Host -ForegroundColor green "INFO: Removing cached execution script" # Remove-Item $CISCRIPT_LOCAL_LOCATION -Force -ErrorAction SilentlyContinue 2>&1 | Out-Null # $wc = New-Object net.webclient # try { # Write-Host -ForegroundColor green "INFO: Downloading latest execution script..." # $wc.Downloadfile($CISCRIPT_DEFAULT_LOCATION, $CISCRIPT_LOCAL_LOCATION) # } # catch [System.Net.WebException] # { # Throw ("Failed to download: $_") # } # & $CISCRIPT_LOCAL_LOCATION # ------------------------------------------------------------------------------------------- $SCRIPT_VER="05-Feb-2019 09:03 PDT" $FinallyColour="Cyan" #$env:DOCKER_DUT_DEBUG="yes" # Comment out to not be in debug mode #$env:SKIP_UNIT_TESTS="yes" #$env:SKIP_VALIDATION_TESTS="yes" #$env:SKIP_ZAP_DUT="" #$env:SKIP_BINARY_BUILD="yes" #$env:INTEGRATION_TEST_NAME="" #$env:SKIP_IMAGE_BUILD="yes" #$env:SKIP_ALL_CLEANUP="yes" #$env:INTEGRATION_IN_CONTAINER="yes" #$env:WINDOWS_BASE_IMAGE="" #$env:SKIP_COPY_GO="yes" #$env:INTEGRATION_TESTFLAGS="-test.v" Function Nuke-Everything { $ErrorActionPreference = 'SilentlyContinue' try { if ($null -eq $env:SKIP_ALL_CLEANUP) { Write-Host -ForegroundColor green "INFO: Nuke-Everything..." 
$containerCount = ($(docker ps -aq | Measure-Object -line).Lines) if (-not $LastExitCode -eq 0) { Throw "ERROR: Failed to get container count from control daemon while nuking" } Write-Host -ForegroundColor green "INFO: Container count on control daemon to delete is $containerCount" if ($(docker ps -aq | Measure-Object -line).Lines -gt 0) { docker rm -f $(docker ps -aq) } $allImages = $(docker images --format "{{.Repository}}#{{.ID}}") $toRemove = ($allImages | Select-String -NotMatch "servercore","nanoserver","docker","busybox") $imageCount = ($toRemove | Measure-Object -line).Lines if ($imageCount -gt 0) { Write-Host -Foregroundcolor green "INFO: Non-base image count on control daemon to delete is $imageCount" docker rmi -f ($toRemove | Foreach-Object { $_.ToString().Split("#")[1] }) } } else { Write-Host -ForegroundColor Magenta "WARN: Skipping cleanup of images and containers" } # Kill any spurious daemons. The '-' is IMPORTANT otherwise will kill the control daemon! $pids=$(get-process | where-object {$_.ProcessName -like 'dockerd-*'}).id foreach ($p in $pids) { Write-Host "INFO: Killing daemon with PID $p" Stop-Process -Id $p -Force -ErrorAction SilentlyContinue } if ($null -ne $pidFile) { Write-Host "INFO: Tidying pidfile $pidfile" if (Test-Path $pidFile) { $p=Get-Content $pidFile -raw if ($null -ne $p){ Write-Host -ForegroundColor green "INFO: Stopping possible daemon pid $p" taskkill -f -t -pid $p } Remove-Item "$env:TEMP\docker.pid" -force -ErrorAction SilentlyContinue } } Stop-Process -name "cc1" -Force -ErrorAction SilentlyContinue 2>&1 | Out-Null Stop-Process -name "link" -Force -ErrorAction SilentlyContinue 2>&1 | Out-Null Stop-Process -name "compile" -Force -ErrorAction SilentlyContinue 2>&1 | Out-Null Stop-Process -name "ld" -Force -ErrorAction SilentlyContinue 2>&1 | Out-Null Stop-Process -name "go" -Force -ErrorAction SilentlyContinue 2>&1 | Out-Null Stop-Process -name "git" -Force -ErrorAction SilentlyContinue 2>&1 | Out-Null Stop-Process -name "git-remote-https" -Force -ErrorAction SilentlyContinue 2>&1 | Out-Null Stop-Process -name "integration-cli.test" -Force -ErrorAction SilentlyContinue 2>&1 | Out-Null Stop-Process -name "tail" -Force -ErrorAction SilentlyContinue 2>&1 | Out-Null # Detach any VHDs gwmi msvm_mountedstorageimage -namespace root/virtualization/v2 -ErrorAction SilentlyContinue | foreach-Object {$_.DetachVirtualHardDisk() } # Stop any compute processes Get-ComputeProcess | Stop-ComputeProcess -Force # Delete the directory using our dangerous utility unless told not to if (Test-Path "$env:TESTRUN_DRIVE`:\$env:TESTRUN_SUBDIR") { if (($null -ne $env:SKIP_ZAP_DUT) -or ($null -eq $env:SKIP_ALL_CLEANUP)) { Write-Host -ForegroundColor Green "INFO: Nuking $env:TESTRUN_DRIVE`:\$env:TESTRUN_SUBDIR" docker-ci-zap "-folder=$env:TESTRUN_DRIVE`:\$env:TESTRUN_SUBDIR" } else { Write-Host -ForegroundColor Magenta "WARN: Skip nuking $env:TESTRUN_DRIVE`:\$env:TESTRUN_SUBDIR" } } # TODO: This should be able to be removed in August 2017 update. Only needed for RS1 Production Server workaround - Psched $reg = "HKLM:\System\CurrentControlSet\Services\Psched\Parameters\NdisAdapters" $count=(Get-ChildItem $reg | Measure-Object).Count if ($count -gt 0) { Write-Warning "There are $count NdisAdapters leaked under Psched\Parameters" Write-Warning "Cleaning Psched..." Get-ChildItem $reg | Remove-Item -Recurse -Force -ErrorAction SilentlyContinue | Out-Null } # TODO: This should be able to be removed in August 2017 update. 
Only needed for RS1 $reg = "HKLM:\System\CurrentControlSet\Services\WFPLWFS\Parameters\NdisAdapters" $count=(Get-ChildItem $reg | Measure-Object).Count if ($count -gt 0) { Write-Warning "There are $count NdisAdapters leaked under WFPLWFS\Parameters" Write-Warning "Cleaning WFPLWFS..." Get-ChildItem $reg | Remove-Item -Recurse -Force -ErrorAction SilentlyContinue | Out-Null } } catch { # Don't throw any errors onwards Throw $_ } } Try { Write-Host -ForegroundColor Cyan "`nINFO: executeCI.ps1 starting at $(date)`n" Write-Host -ForegroundColor Green "INFO: Script version $SCRIPT_VER" Set-PSDebug -Trace 0 # 1 to turn on $origPath="$env:PATH" # so we can restore it at the end $origDOCKER_HOST="$DOCKER_HOST" # So we can restore it at the end $origGOROOT="$env:GOROOT" # So we can restore it at the end $origGOPATH="$env:GOPATH" # So we can restore it at the end # Turn off progress bars $origProgressPreference=$global:ProgressPreference $global:ProgressPreference='SilentlyContinue' # Git version Write-Host -ForegroundColor Green "INFO: Running $(git version)" # OS Version $bl=(Get-ItemProperty -Path "Registry::HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion" -Name BuildLabEx).BuildLabEx $a=$bl.ToString().Split(".") $Branch=$a[3] $WindowsBuild=$a[0]+"."+$a[1]+"."+$a[4] Write-Host -ForegroundColor green "INFO: Branch:$Branch Build:$WindowsBuild" # List the environment variables Write-Host -ForegroundColor green "INFO: Environment variables:" Get-ChildItem Env: | Out-String # PR if (-not ($null -eq $env:PR)) { Write-Output "INFO: PR#$env:PR (https://github.com/docker/docker/pull/$env:PR)" } # Make sure docker is installed if ($null -eq (Get-Command "docker" -ErrorAction SilentlyContinue)) { Throw "ERROR: docker is not installed or not found on path" } # Make sure docker-ci-zap is installed if ($null -eq (Get-Command "docker-ci-zap" -ErrorAction SilentlyContinue)) { Throw "ERROR: docker-ci-zap is not installed or not found on path" } # Make sure Windows Defender is disabled $defender = $false Try { $status = Get-MpComputerStatus if ($status) { if ($status.RealTimeProtectionEnabled) { $defender = $true } } } Catch {} if ($defender) { Write-Host -ForegroundColor Magenta "WARN: Windows Defender real time protection is enabled, which may cause some integration tests to fail" } # Make sure SOURCES_DRIVE is set if ($null -eq $env:SOURCES_DRIVE) { Throw "ERROR: Environment variable SOURCES_DRIVE is not set" } # Make sure TESTRUN_DRIVE is set if ($null -eq $env:TESTRUN_DRIVE) { Throw "ERROR: Environment variable TESTRUN_DRIVE is not set" } # Make sure SOURCES_SUBDIR is set if ($null -eq $env:SOURCES_SUBDIR) { Throw "ERROR: Environment variable SOURCES_SUBDIR is not set" } # Make sure TESTRUN_SUBDIR is set if ($null -eq $env:TESTRUN_SUBDIR) { Throw "ERROR: Environment variable TESTRUN_SUBDIR is not set" } # SOURCES_DRIVE\SOURCES_SUBDIR must be a directory and exist if (-not (Test-Path -PathType Container "$env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR")) { Throw "ERROR: $env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR must be an existing directory" } # Create the TESTRUN_DRIVE\TESTRUN_SUBDIR if it does not already exist New-Item -ItemType Directory -Force -Path "$env:TESTRUN_DRIVE`:\$env:TESTRUN_SUBDIR" -ErrorAction SilentlyContinue | Out-Null Write-Host -ForegroundColor Green "INFO: Sources under $env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR\..." Write-Host -ForegroundColor Green "INFO: Test run under $env:TESTRUN_DRIVE`:\$env:TESTRUN_SUBDIR\..." 
# Check the intended source location is a directory if (-not (Test-Path -PathType Container "$env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR\src\github.com\docker\docker" -ErrorAction SilentlyContinue)) { Throw "ERROR: $env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR\src\github.com\docker\docker is not a directory!" } # Make sure we start at the root of the sources Set-Location "$env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR\src\github.com\docker\docker" Write-Host -ForegroundColor Green "INFO: Running in $(Get-Location)" # Make sure we are in repo if (-not (Test-Path -PathType Leaf -Path ".\Dockerfile.windows")) { Throw "$(Get-Location) does not contain Dockerfile.windows!" } Write-Host -ForegroundColor Green "INFO: docker/docker repository was found" # Make sure microsoft/windowsservercore:latest image is installed in the control daemon. On public CI machines, windowsservercore.tar and nanoserver.tar # are pre-baked and tagged appropriately in the c:\baseimages directory, and can be directly loaded. # Note - this script will only work on 10B (Oct 2016) or later machines! Not 9D or previous due to image tagging assumptions. # # On machines not on Microsoft corpnet, or those which have not been pre-baked, we have to docker pull the image in which case it will # will come in directly as microsoft/windowsservercore:latest. The ultimate goal of all this code is to ensure that whatever, # we have microsoft/windowsservercore:latest # # Note we cannot use (as at Oct 2016) nanoserver as the control daemons base image, even if nanoserver is used in the tests themselves. $ErrorActionPreference = "SilentlyContinue" $ControlDaemonBaseImage="windowsservercore" $readBaseFrom="c" if ($((docker images --format "{{.Repository}}:{{.Tag}}" | Select-String $("microsoft/"+$ControlDaemonBaseImage+":latest") | Measure-Object -Line).Lines) -eq 0) { # Try the internal azure CI image version or Microsoft internal corpnet where the base image is already pre-prepared on the disk, # either through Invoke-DockerCI or, in the case of Azure CI servers, baked into the VHD at the same location. if (Test-Path $("$env:SOURCES_DRIVE`:\baseimages\"+$ControlDaemonBaseImage+".tar")) { # An optimization for CI servers to copy it to the D: drive which is an SSD. if ($env:SOURCES_DRIVE -ne $env:TESTRUN_DRIVE) { $readBaseFrom=$env:TESTRUN_DRIVE if (!(Test-Path "$env:TESTRUN_DRIVE`:\baseimages")) { New-Item "$env:TESTRUN_DRIVE`:\baseimages" -type directory | Out-Null } if (!(Test-Path "$env:TESTRUN_DRIVE`:\baseimages\windowsservercore.tar")) { if (Test-Path "$env:SOURCES_DRIVE`:\baseimages\windowsservercore.tar") { Write-Host -ForegroundColor Green "INFO: Optimisation - copying $env:SOURCES_DRIVE`:\baseimages\windowsservercore.tar to $env:TESTRUN_DRIVE`:\baseimages" Copy-Item "$env:SOURCES_DRIVE`:\baseimages\windowsservercore.tar" "$env:TESTRUN_DRIVE`:\baseimages" } } if (!(Test-Path "$env:TESTRUN_DRIVE`:\baseimages\nanoserver.tar")) { if (Test-Path "$env:SOURCES_DRIVE`:\baseimages\nanoserver.tar") { Write-Host -ForegroundColor Green "INFO: Optimisation - copying $env:SOURCES_DRIVE`:\baseimages\nanoserver.tar to $env:TESTRUN_DRIVE`:\baseimages" Copy-Item "$env:SOURCES_DRIVE`:\baseimages\nanoserver.tar" "$env:TESTRUN_DRIVE`:\baseimages" } } $readBaseFrom=$env:TESTRUN_DRIVE } Write-Host -ForegroundColor Green "INFO: Loading"$ControlDaemonBaseImage".tar from disk. This may take some time..." 
$ErrorActionPreference = "SilentlyContinue" docker load -i $("$readBaseFrom`:\baseimages\"+$ControlDaemonBaseImage+".tar") $ErrorActionPreference = "Stop" if (-not $LastExitCode -eq 0) { Throw $("ERROR: Failed to load $readBaseFrom`:\baseimages\"+$ControlDaemonBaseImage+".tar") } Write-Host -ForegroundColor Green "INFO: docker load of"$ControlDaemonBaseImage" completed successfully" } else { # We need to docker pull it instead. It will come in directly as microsoft/imagename:latest Write-Host -ForegroundColor Green $("INFO: Pulling $($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG from docker hub. This may take some time...") $ErrorActionPreference = "SilentlyContinue" docker pull "$($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG" $ErrorActionPreference = "Stop" if (-not $LastExitCode -eq 0) { Throw $("ERROR: Failed to docker pull $($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG.") } Write-Host -ForegroundColor Green $("INFO: docker pull of $($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG completed successfully") Write-Host -ForegroundColor Green $("INFO: Tagging $($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG as microsoft/$ControlDaemonBaseImage") docker tag "$($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG" microsoft/$ControlDaemonBaseImage } } else { Write-Host -ForegroundColor Green "INFO: Image"$("microsoft/"+$ControlDaemonBaseImage+":latest")"is already loaded in the control daemon" } # Inspect the pulled image to get the version directly $ErrorActionPreference = "SilentlyContinue" $imgVersion = $(docker inspect $("microsoft/"+$ControlDaemonBaseImage) --format "{{.OsVersion}}") $ErrorActionPreference = "Stop" Write-Host -ForegroundColor Green $("INFO: Version of microsoft/"+$ControlDaemonBaseImage+":latest is '"+$imgVersion+"'") # Provide the docker version for debugging purposes. Write-Host -ForegroundColor Green "INFO: Docker version of control daemon" Write-Host $ErrorActionPreference = "SilentlyContinue" docker version $ErrorActionPreference = "Stop" if (-not($LastExitCode -eq 0)) { Write-Host Write-Host -ForegroundColor Green "---------------------------------------------------------------------------" Write-Host -ForegroundColor Green " Failed to get a response from the control daemon. It may be down." Write-Host -ForegroundColor Green " Try re-running this CI job, or ask on #docker-maintainers on docker slack" Write-Host -ForegroundColor Green " to see if the daemon is running. Also check the service configuration." Write-Host -ForegroundColor Green " DOCKER_HOST is set to $DOCKER_HOST." Write-Host -ForegroundColor Green "---------------------------------------------------------------------------" Write-Host Throw "ERROR: The control daemon does not appear to be running." } Write-Host # Same as above, but docker info Write-Host -ForegroundColor Green "INFO: Docker info of control daemon" Write-Host $ErrorActionPreference = "SilentlyContinue" docker info $ErrorActionPreference = "Stop" if (-not($LastExitCode -eq 0)) { Throw "ERROR: The control daemon does not appear to be running." } Write-Host # Get the commit has and verify we have something $ErrorActionPreference = "SilentlyContinue" $COMMITHASH=$(git rev-parse --short HEAD) $ErrorActionPreference = "Stop" if (-not($LastExitCode -eq 0)) { Throw "ERROR: Failed to get commit hash. Are you sure this is a docker repository?" 
} Write-Host -ForegroundColor Green "INFO: Commit hash is $COMMITHASH" # Nuke everything and go back to our sources after Nuke-Everything cd "$env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR\src\github.com\docker\docker" # Redirect to a temporary location. $TEMPORIG=$env:TEMP if ($null -eq $env:BUILD_NUMBER) { $env:TEMP="$env:TESTRUN_DRIVE`:\$env:TESTRUN_SUBDIR\CI-$COMMITHASH" } else { # individual temporary location per CI build that better matches the BUILD_URL $env:TEMP="$env:TESTRUN_DRIVE`:\$env:TESTRUN_SUBDIR\$env:BRANCH_NAME\$env:BUILD_NUMBER" } $env:LOCALAPPDATA="$env:TEMP\localappdata" $errorActionPreference='Stop' New-Item -ItemType Directory "$env:TEMP" -ErrorAction SilentlyContinue | Out-Null New-Item -ItemType Directory "$env:TEMP\userprofile" -ErrorAction SilentlyContinue | Out-Null New-Item -ItemType Directory "$env:TEMP\testresults" -ErrorAction SilentlyContinue | Out-Null New-Item -ItemType Directory "$env:TEMP\testresults\unittests" -ErrorAction SilentlyContinue | Out-Null New-Item -ItemType Directory "$env:TEMP\localappdata" -ErrorAction SilentlyContinue | Out-Null New-Item -ItemType Directory "$env:TEMP\binary" -ErrorAction SilentlyContinue | Out-Null New-Item -ItemType Directory "$env:TEMP\installer" -ErrorAction SilentlyContinue | Out-Null if ($null -eq $env:SKIP_COPY_GO) { # Wipe the previous version of GO - we're going to get it out of the image if (Test-Path "$env:TEMP\go") { Remove-Item "$env:TEMP\go" -Recurse -Force -ErrorAction SilentlyContinue | Out-Null } New-Item -ItemType Directory "$env:TEMP\go" -ErrorAction SilentlyContinue | Out-Null } Write-Host -ForegroundColor Green "INFO: Location for testing is $env:TEMP" # CI Integrity check - ensure Dockerfile.windows and Dockerfile go versions match $goVersionDockerfileWindows=(Select-String -Path ".\Dockerfile.windows" -Pattern "^ARG[\s]+GO_VERSION=(.*)$").Matches.groups[1].Value $goVersionDockerfile=(Select-String -Path ".\Dockerfile" -Pattern "^ARG[\s]+GO_VERSION=(.*)$").Matches.groups[1].Value if ($null -eq $goVersionDockerfile) { Throw "ERROR: Failed to extract golang version from Dockerfile" } Write-Host -ForegroundColor Green "INFO: Validating GOLang consistency in Dockerfile.windows..." if (-not ($goVersionDockerfile -eq $goVersionDockerfileWindows)) { Throw "ERROR: Mismatched GO versions between Dockerfile and Dockerfile.windows. Update your PR to ensure that both files are updated and in sync. $goVersionDockerfile $goVersionDockerfileWindows" } # Build the image if ($null -eq $env:SKIP_IMAGE_BUILD) { Write-Host -ForegroundColor Cyan "`n`nINFO: Building the image from Dockerfile.windows at $(Get-Date)..." Write-Host $ErrorActionPreference = "SilentlyContinue" $Duration=$(Measure-Command { docker build --build-arg=GO_VERSION -t docker -f Dockerfile.windows . | Out-Host }) $ErrorActionPreference = "Stop" if (-not($LastExitCode -eq 0)) { Throw "ERROR: Failed to build image from Dockerfile.windows" } Write-Host -ForegroundColor Green "INFO: Image build ended at $(Get-Date). Duration`:$Duration" } else { Write-Host -ForegroundColor Magenta "WARN: Skipping building the docker image" } # Following at the moment must be docker\docker as it's dictated by dockerfile.Windows $contPath="$COMMITHASH`:c`:\gopath\src\github.com\docker\docker\bundles" # After https://github.com/docker/docker/pull/30290, .git was added to .dockerignore. 
Therefore # we have to calculate unsupported outside of the container, and pass the commit ID in through # an environment variable for the binary build $CommitUnsupported="" if ($(git status --porcelain --untracked-files=no).Length -ne 0) { $CommitUnsupported="-unsupported" } # Build the binary in a container unless asked to skip it. if ($null -eq $env:SKIP_BINARY_BUILD) { Write-Host -ForegroundColor Cyan "`n`nINFO: Building the test binaries at $(Get-Date)..." $ErrorActionPreference = "SilentlyContinue" docker rm -f $COMMITHASH 2>&1 | Out-Null if ($CommitUnsupported -ne "") { Write-Host "" Write-Warning "This version is unsupported because there are uncommitted file(s)." Write-Warning "Either commit these changes, or add them to .gitignore." git status --porcelain --untracked-files=no | Write-Warning Write-Host "" } $Duration=$(Measure-Command {docker run --name $COMMITHASH -e DOCKER_GITCOMMIT=$COMMITHASH$CommitUnsupported docker hack\make.ps1 -Daemon -Client | Out-Host }) $ErrorActionPreference = "Stop" if (-not($LastExitCode -eq 0)) { Throw "ERROR: Failed to build binary" } Write-Host -ForegroundColor Green "INFO: Binaries build ended at $(Get-Date). Duration`:$Duration" # Copy the binaries and the generated version_autogen.go out of the container $ErrorActionPreference = "SilentlyContinue" docker cp "$contPath\docker.exe" $env:TEMP\binary\ if (-not($LastExitCode -eq 0)) { Throw "ERROR: Failed to docker cp the client binary (docker.exe) to $env:TEMP\binary" } docker cp "$contPath\dockerd.exe" $env:TEMP\binary\ if (-not($LastExitCode -eq 0)) { Throw "ERROR: Failed to docker cp the daemon binary (dockerd.exe) to $env:TEMP\binary" } docker cp "$COMMITHASH`:c`:\gopath\bin\gotestsum.exe" $env:TEMP\binary\ if (-not (Test-Path "$env:TEMP\binary\gotestsum.exe")) { Throw "ERROR: gotestsum.exe not found...." ` } $ErrorActionPreference = "Stop" # Copy the built dockerd.exe to dockerd-$COMMITHASH.exe so that easily spotted in task manager. Write-Host -ForegroundColor Green "INFO: Copying the built daemon binary to $env:TEMP\binary\dockerd-$COMMITHASH.exe..." Copy-Item $env:TEMP\binary\dockerd.exe $env:TEMP\binary\dockerd-$COMMITHASH.exe -Force -ErrorAction SilentlyContinue # Copy the built docker.exe to docker-$COMMITHASH.exe Write-Host -ForegroundColor Green "INFO: Copying the built client binary to $env:TEMP\binary\docker-$COMMITHASH.exe..." Copy-Item $env:TEMP\binary\docker.exe $env:TEMP\binary\docker-$COMMITHASH.exe -Force -ErrorAction SilentlyContinue } else { Write-Host -ForegroundColor Magenta "WARN: Skipping building the binaries" } Write-Host -ForegroundColor Green "INFO: Copying dockerversion from the container..." $ErrorActionPreference = "SilentlyContinue" docker cp "$contPath\..\dockerversion\version_autogen.go" "$env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR\src\github.com\docker\docker\dockerversion" if (-not($LastExitCode -eq 0)) { Throw "ERROR: Failed to docker cp the generated version_autogen.go to $env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR\src\github.com\docker\docker\dockerversion" } $ErrorActionPreference = "Stop" # Grab the golang installer out of the built image. That way, we know we are consistent once extracted and paths set, # so there's no need to re-deploy on account of an upgrade to the version of GO being used in docker. if ($null -eq $env:SKIP_COPY_GO) { Write-Host -ForegroundColor Green "INFO: Copying the golang package from the container to $env:TEMP\installer\go.zip..." 
docker cp "$COMMITHASH`:c`:\go.zip" $env:TEMP\installer\ if (-not($LastExitCode -eq 0)) { Throw "ERROR: Failed to docker cp the golang installer 'go.zip' from container:c:\go.zip to $env:TEMP\installer" } $ErrorActionPreference = "Stop" # Extract the golang installer Write-Host -ForegroundColor Green "INFO: Extracting go.zip to $env:TEMP\go" $Duration=$(Measure-Command { Expand-Archive $env:TEMP\installer\go.zip $env:TEMP -Force | Out-Null}) Write-Host -ForegroundColor Green "INFO: Extraction ended at $(Get-Date). Duration`:$Duration" } else { Write-Host -ForegroundColor Magenta "WARN: Skipping copying and extracting golang from the image" } # Set the GOPATH Write-Host -ForegroundColor Green "INFO: Updating the golang and path environment variables" $env:GOPATH="$env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR" Write-Host -ForegroundColor Green "INFO: GOPATH=$env:GOPATH" # Set the path to have the version of go from the image at the front $env:PATH="$env:TEMP\go\bin;$env:PATH" # Set the GOROOT to be our copy of go from the image $env:GOROOT="$env:TEMP\go" Write-Host -ForegroundColor Green "INFO: $(go version)" # Work out the -H parameter for the daemon under test (DASHH_DUT) and client under test (DASHH_CUT) #$DASHH_DUT="npipe:////./pipe/$COMMITHASH" # Can't do remote named pipe #$ip = (resolve-dnsname $env:COMPUTERNAME -type A -NoHostsFile -LlmnrNetbiosOnly).IPAddress # Useful to tie down $DASHH_CUT="tcp://127.0.0.1`:2357" # Not a typo for 2375! $DASHH_DUT="tcp://0.0.0.0:2357" # Not a typo for 2375! # Arguments for the daemon under test $dutArgs=@() $dutArgs += "-H $DASHH_DUT" $dutArgs += "--data-root $env:TEMP\daemon" $dutArgs += "--pidfile $env:TEMP\docker.pid" # Save the PID file so we can nuke it if set $pidFile="$env:TEMP\docker.pid" # Arguments: Are we starting the daemon under test in debug mode? if (-not ("$env:DOCKER_DUT_DEBUG" -eq "")) { Write-Host -ForegroundColor Green "INFO: Running the daemon under test in debug mode" $dutArgs += "-D" } # Arguments: Are we starting the daemon under test with Hyper-V containers as the default isolation? if (-not ("$env:DOCKER_DUT_HYPERV" -eq "")) { Write-Host -ForegroundColor Green "INFO: Running the daemon under test with Hyper-V containers as the default" $dutArgs += "--exec-opt isolation=hyperv" } # Arguments: Allow setting optional storage-driver options # example usage: DDOCKER_STORAGE_OPTS="size=40G" if (-not ("$env:DOCKER_STORAGE_OPTS" -eq "")) { Write-Host -ForegroundColor Green "INFO: Running the daemon under test with storage-driver options ${env:DOCKER_STORAGE_OPTS}" $env:DOCKER_STORAGE_OPTS.Split(",") | ForEach-Object { $dutArgs += "--storage-opt $_" } } # Start the daemon under test, ensuring everything is redirected to folders under $TEMP. # Important - we launch the -$COMMITHASH version so that we can kill it without # killing the control daemon. Write-Host -ForegroundColor Green "INFO: Starting a daemon under test..." Write-Host -ForegroundColor Green "INFO: Args: $dutArgs" New-Item -ItemType Directory $env:TEMP\daemon -ErrorAction SilentlyContinue | Out-Null # Cannot fathom why, but always writes to stderr.... Start-Process "$env:TEMP\binary\dockerd-$COMMITHASH" ` -ArgumentList $dutArgs ` -RedirectStandardOutput "$env:TEMP\dut.out" ` -RedirectStandardError "$env:TEMP\dut.err" Write-Host -ForegroundColor Green "INFO: Process started successfully." 
$daemonStarted=1 # Start tailing the daemon under test if the command is installed if ($null -ne (Get-Command "tail" -ErrorAction SilentlyContinue)) { Write-Host -ForegroundColor green "INFO: Start tailing logs of the daemon under tests" $tail = Start-Process "tail" -ArgumentList "-f $env:TEMP\dut.out" -PassThru -ErrorAction SilentlyContinue } # Verify we can get the daemon under test to respond $tries=20 Write-Host -ForegroundColor Green "INFO: Waiting for the daemon under test to start..." while ($true) { $ErrorActionPreference = "SilentlyContinue" & "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" version 2>&1 | Out-Null $ErrorActionPreference = "Stop" if ($LastExitCode -eq 0) { break } $tries-- if ($tries -le 0) { Throw "ERROR: Failed to get a response from the daemon under test" } Write-Host -NoNewline "." sleep 1 } Write-Host -ForegroundColor Green "INFO: Daemon under test started and replied!" # Provide the docker version of the daemon under test for debugging purposes. Write-Host -ForegroundColor Green "INFO: Docker version of the daemon under test" Write-Host $ErrorActionPreference = "SilentlyContinue" & "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" version $ErrorActionPreference = "Stop" if ($LastExitCode -ne 0) { Throw "ERROR: The daemon under test does not appear to be running." } Write-Host # Same as above but docker info Write-Host -ForegroundColor Green "INFO: Docker info of the daemon under test" Write-Host $ErrorActionPreference = "SilentlyContinue" & "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" info $ErrorActionPreference = "Stop" if ($LastExitCode -ne 0) { Throw "ERROR: The daemon under test does not appear to be running." } Write-Host # Same as above but docker images Write-Host -ForegroundColor Green "INFO: Docker images of the daemon under test" Write-Host $ErrorActionPreference = "SilentlyContinue" & "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" images $ErrorActionPreference = "Stop" if ($LastExitCode -ne 0) { Throw "ERROR: The daemon under test does not appear to be running." } Write-Host # Default to windowsservercore for the base image used for the tests. The "docker" image # and the control daemon use microsoft/windowsservercore regardless. This is *JUST* for the tests. if ($null -eq $env:WINDOWS_BASE_IMAGE) { $env:WINDOWS_BASE_IMAGE="microsoft/windowsservercore" } if ($null -eq $env:WINDOWS_BASE_IMAGE_TAG) { $env:WINDOWS_BASE_IMAGE_TAG="latest" } # Lowercase and make sure it has a microsoft/ prefix $env:WINDOWS_BASE_IMAGE = $env:WINDOWS_BASE_IMAGE.ToLower() if (! $($env:WINDOWS_BASE_IMAGE -Split "/")[0] -match "microsoft") { Throw "ERROR: WINDOWS_BASE_IMAGE should start microsoft/ or mcr.microsoft.com/" } Write-Host -ForegroundColor Green "INFO: Base image for tests is $env:WINDOWS_BASE_IMAGE" $ErrorActionPreference = "SilentlyContinue" if ($((& "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" images --format "{{.Repository}}:{{.Tag}}" | Select-String "$($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG" | Measure-Object -Line).Lines) -eq 0) { # Try the internal azure CI image version or Microsoft internal corpnet where the base image is already pre-prepared on the disk, # either through Invoke-DockerCI or, in the case of Azure CI servers, baked into the VHD at the same location. if (Test-Path $("c:\baseimages\"+$($env:WINDOWS_BASE_IMAGE -Split "/")[1]+".tar")) { Write-Host -ForegroundColor Green "INFO: Loading"$($env:WINDOWS_BASE_IMAGE -Split "/")[1]".tar from disk into the daemon under test. 
This may take some time..." $ErrorActionPreference = "SilentlyContinue" & "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" load -i $("$readBaseFrom`:\baseimages\"+$($env:WINDOWS_BASE_IMAGE -Split "/")[1]+".tar") $ErrorActionPreference = "Stop" if (-not $LastExitCode -eq 0) { Throw $("ERROR: Failed to load $readBaseFrom`:\baseimages\"+$($env:WINDOWS_BASE_IMAGE -Split "/")[1]+".tar into daemon under test") } Write-Host -ForegroundColor Green "INFO: docker load of"$($env:WINDOWS_BASE_IMAGE -Split "/")[1]" into daemon under test completed successfully" } else { # We need to docker pull it instead. It will come in directly as microsoft/imagename:tagname Write-Host -ForegroundColor Green $("INFO: Pulling "+$env:WINDOWS_BASE_IMAGE+":"+$env:WINDOWS_BASE_IMAGE_TAG+" from docker hub into daemon under test. This may take some time...") $ErrorActionPreference = "SilentlyContinue" & "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" pull "$($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG" $ErrorActionPreference = "Stop" if (-not $LastExitCode -eq 0) { Throw $("ERROR: Failed to docker pull $($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG into daemon under test.") } Write-Host -ForegroundColor Green $("INFO: docker pull of $($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG into daemon under test completed successfully") Write-Host -ForegroundColor Green $("INFO: Tagging $($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG as microsoft/$ControlDaemonBaseImage in daemon under test") & "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" tag "$($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG" microsoft/$ControlDaemonBaseImage } } else { Write-Host -ForegroundColor Green "INFO: Image $($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG is already loaded in the daemon under test" } # Inspect the pulled or loaded image to get the version directly $ErrorActionPreference = "SilentlyContinue" $dutimgVersion = $(&"$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" inspect "$($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG" --format "{{.OsVersion}}") $ErrorActionPreference = "Stop" Write-Host -ForegroundColor Green $("INFO: Version of $($env:WINDOWS_BASE_IMAGE):$env:WINDOWS_BASE_IMAGE_TAG is '"+$dutimgVersion+"'") # Run the validation tests unless SKIP_VALIDATION_TESTS is defined. if ($null -eq $env:SKIP_VALIDATION_TESTS) { Write-Host -ForegroundColor Cyan "INFO: Running validation tests at $(Get-Date)..." $ErrorActionPreference = "SilentlyContinue" $Duration=$(Measure-Command { hack\make.ps1 -DCO -GoFormat -PkgImports | Out-Host }) $ErrorActionPreference = "Stop" if (-not($LastExitCode -eq 0)) { Throw "ERROR: Validation tests failed" } Write-Host -ForegroundColor Green "INFO: Validation tests ended at $(Get-Date). Duration`:$Duration" } else { Write-Host -ForegroundColor Magenta "WARN: Skipping validation tests" } # Run the unit tests inside a container unless SKIP_UNIT_TESTS is defined if ($null -eq $env:SKIP_UNIT_TESTS) { $ContainerNameForUnitTests = $COMMITHASH + "_UnitTests" Write-Host -ForegroundColor Cyan "INFO: Running unit tests at $(Get-Date)..." $ErrorActionPreference = "SilentlyContinue" $Duration=$(Measure-Command {docker run --name $ContainerNameForUnitTests -e DOCKER_GITCOMMIT=$COMMITHASH$CommitUnsupported docker hack\make.ps1 -TestUnit | Out-Host }) $TestRunExitCode = $LastExitCode $ErrorActionPreference = "Stop" # Saving where jenkins will take a look at..... 
New-Item -Force -ItemType Directory bundles | Out-Null $unitTestsContPath="$ContainerNameForUnitTests`:c`:\gopath\src\github.com\docker\docker\bundles" $JunitExpectedContFilePath = "$unitTestsContPath\junit-report-unit-tests.xml" docker cp $JunitExpectedContFilePath "bundles" if (-not($LastExitCode -eq 0)) { Throw "ERROR: Failed to docker cp the unit tests report ($JunitExpectedContFilePath) to bundles" } if (Test-Path "bundles\junit-report-unit-tests.xml") { Write-Host -ForegroundColor Magenta "INFO: Unit tests results(bundles\junit-report-unit-tests.xml) exist. pwd=$pwd" } else { Write-Host -ForegroundColor Magenta "ERROR: Unit tests results(bundles\junit-report-unit-tests.xml) do not exist. pwd=$pwd" } if (-not($TestRunExitCode -eq 0)) { Throw "ERROR: Unit tests failed" } Write-Host -ForegroundColor Green "INFO: Unit tests ended at $(Get-Date). Duration`:$Duration" } else { Write-Host -ForegroundColor Magenta "WARN: Skipping unit tests" } # Add the Windows busybox image. Needed for WCOW integration tests if ($null -eq $env:SKIP_INTEGRATION_TESTS) { Write-Host -ForegroundColor Green "INFO: Building busybox" $ErrorActionPreference = "SilentlyContinue" $(& "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" build -t busybox --build-arg WINDOWS_BASE_IMAGE --build-arg WINDOWS_BASE_IMAGE_TAG "$env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR\src\github.com\docker\docker\contrib\busybox\" | Out-Host) $ErrorActionPreference = "Stop" if (-not($LastExitCode -eq 0)) { Throw "ERROR: Failed to build busybox image" } Write-Host -ForegroundColor Green "INFO: Docker images of the daemon under test" Write-Host $ErrorActionPreference = "SilentlyContinue" & "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" images $ErrorActionPreference = "Stop" if ($LastExitCode -ne 0) { Throw "ERROR: The daemon under test does not appear to be running." } Write-Host } # Run the WCOW integration tests unless SKIP_INTEGRATION_TESTS is defined if ($null -eq $env:SKIP_INTEGRATION_TESTS) { Write-Host -ForegroundColor Cyan "INFO: Running integration tests at $(Get-Date)..." $ErrorActionPreference = "SilentlyContinue" # Location of the daemon under test. $env:OrigDOCKER_HOST="$env:DOCKER_HOST" #https://blogs.technet.microsoft.com/heyscriptingguy/2011/09/20/solve-problems-with-external-command-lines-in-powershell/ is useful to see tokenising $jsonFilePath = "..\\bundles\\go-test-report-intcli-tests.json" $xmlFilePath = "..\\bundles\\junit-report-intcli-tests.xml" $c = "gotestsum --format=standard-verbose --jsonfile=$jsonFilePath --junitfile=$xmlFilePath -- " if ($null -ne $env:INTEGRATION_TEST_NAME) { # Makes is quicker for debugging to be able to run only a subset of the integration tests $c += "`"-test.run`" " $c += "`"$env:INTEGRATION_TEST_NAME`" " Write-Host -ForegroundColor Magenta "WARN: Only running integration tests matching $env:INTEGRATION_TEST_NAME" } $c += "`"-tags`" " + "`"autogen`" " $c += "`"-test.timeout`" " + "`"200m`" " if ($null -ne $env:INTEGRATION_IN_CONTAINER) { Write-Host -ForegroundColor Green "INFO: Integration tests being run inside a container" # Note we talk back through the containers gateway address # And the ridiculous lengths we have to go to get the default gateway address... (GetNetIPConfiguration doesn't work in nanoserver) # I just could not get the escaping to work in a single command, so output $c to a file and run that in the container instead... # Not the prettiest, but it works. 
$c | Out-File -Force "$env:TEMP\binary\runIntegrationCLI.ps1" $Duration= $(Measure-Command { & docker run ` --rm ` -e c=$c ` --workdir "c`:\gopath\src\github.com\docker\docker\integration-cli" ` -v "$env:TEMP\binary`:c:\target" ` docker ` "`$env`:PATH`='c`:\target;'+`$env:PATH`; `$env:DOCKER_HOST`='tcp`://'+(ipconfig | select -last 1).Substring(39)+'`:2357'; c:\target\runIntegrationCLI.ps1" | Out-Host } ) } else { $env:DOCKER_HOST=$DASHH_CUT $env:PATH="$env:TEMP\binary;$env:PATH;" # Force to use the test binaries, not the host ones. $env:GO111MODULE="off" Write-Host -ForegroundColor Green "INFO: DOCKER_HOST at $DASHH_CUT" $ErrorActionPreference = "SilentlyContinue" Write-Host -ForegroundColor Cyan "INFO: Integration API tests being run from the host:" $start=(Get-Date); Invoke-Expression ".\hack\make.ps1 -TestIntegration"; $Duration=New-Timespan -Start $start -End (Get-Date) $IntTestsRunResult = $LastExitCode $ErrorActionPreference = "Stop" if (-not($IntTestsRunResult -eq 0)) { Throw "ERROR: Integration API tests failed at $(Get-Date). Duration`:$Duration" } $ErrorActionPreference = "SilentlyContinue" Write-Host -ForegroundColor Green "INFO: Integration CLI tests being run from the host:" Write-Host -ForegroundColor Green "INFO: $c" Set-Location "$env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR\src\github.com\docker\docker\integration-cli" # Explicit to not use measure-command otherwise don't get output as it goes $start=(Get-Date); Invoke-Expression $c; $Duration=New-Timespan -Start $start -End (Get-Date) } $ErrorActionPreference = "Stop" if (-not($LastExitCode -eq 0)) { Throw "ERROR: Integration CLI tests failed at $(Get-Date). Duration`:$Duration" } Write-Host -ForegroundColor Green "INFO: Integration tests ended at $(Get-Date). Duration`:$Duration" } else { Write-Host -ForegroundColor Magenta "WARN: Skipping integration tests" } # Docker info now to get counts (after or if jjh/containercounts is merged) if ($daemonStarted -eq 1) { Write-Host -ForegroundColor Green "INFO: Docker info of the daemon under test at end of run" Write-Host $ErrorActionPreference = "SilentlyContinue" & "$env:TEMP\binary\docker-$COMMITHASH" "-H=$($DASHH_CUT)" info $ErrorActionPreference = "Stop" if ($LastExitCode -ne 0) { Throw "ERROR: The daemon under test does not appear to be running." } Write-Host } # Stop the daemon under test if (Test-Path "$env:TEMP\docker.pid") { $p=Get-Content "$env:TEMP\docker.pid" -raw if (($null -ne $p) -and ($daemonStarted -eq 1)) { Write-Host -ForegroundColor green "INFO: Stopping daemon under test" taskkill -f -t -pid $p #sleep 5 } Remove-Item "$env:TEMP\docker.pid" -force -ErrorAction SilentlyContinue } # Stop the tail process (if started) if ($null -ne $tail) { Write-Host -ForegroundColor green "INFO: Stop tailing logs of the daemon under tests" Stop-Process -InputObject $tail -Force } Write-Host -ForegroundColor Green "INFO: executeCI.ps1 Completed successfully at $(Get-Date)." } Catch [Exception] { $FinallyColour="Red" Write-Host -ForegroundColor Red ("`r`n`r`nERROR: Failed '$_' at $(Get-Date)") Write-Host -ForegroundColor Red ($_.InvocationInfo.PositionMessage) Write-Host "`n`n" # Exit to ensure Jenkins captures it. Don't do this in the ISE or interactive Powershell - they will catch the Throw onwards. 
if ( ([bool]([Environment]::GetCommandLineArgs() -Like '*-NonInteractive*')) -and ` ([bool]([Environment]::GetCommandLineArgs() -NotLike "*Powershell_ISE.exe*"))) { exit 1 } Throw $_ } Finally { $ErrorActionPreference="SilentlyContinue" $global:ProgressPreference=$origProgressPreference Write-Host -ForegroundColor Green "INFO: Tidying up at end of run" # Restore the path if ($null -ne $origPath) { $env:PATH=$origPath } # Restore the DOCKER_HOST if ($null -ne $origDOCKER_HOST) { $env:DOCKER_HOST=$origDOCKER_HOST } # Restore the GOROOT and GOPATH variables if ($null -ne $origGOROOT) { $env:GOROOT=$origGOROOT } if ($null -ne $origGOPATH) { $env:GOPATH=$origGOPATH } # Dump the daemon log. This will include any possible panic stack in the .err. if (($daemonStarted -eq 1) -and ($(Get-Item "$env:TEMP\dut.err").Length -gt 0)) { Write-Host -ForegroundColor Cyan "----------- DAEMON LOG ------------" Get-Content "$env:TEMP\dut.err" -ErrorAction SilentlyContinue | Write-Host -ForegroundColor Cyan Write-Host -ForegroundColor Cyan "----------- END DAEMON LOG --------" } # Save the daemon under test log if ($daemonStarted -eq 1) { Set-Location "$env:SOURCES_DRIVE`:\$env:SOURCES_SUBDIR\src\github.com\docker\docker" Write-Host -ForegroundColor Green "INFO: Saving daemon under test log ($env:TEMP\dut.out) to bundles\CIDUT.out" Copy-Item "$env:TEMP\dut.out" "bundles\CIDUT.out" -Force -ErrorAction SilentlyContinue Write-Host -ForegroundColor Green "INFO: Saving daemon under test log ($env:TEMP\dut.err) to bundles\CIDUT.err" Copy-Item "$env:TEMP\dut.err" "bundles\CIDUT.err" -Force -ErrorAction SilentlyContinue } Set-Location "$env:SOURCES_DRIVE\$env:SOURCES_SUBDIR" -ErrorAction SilentlyContinue Nuke-Everything # Restore the TEMP path if ($null -ne $TEMPORIG) { $env:TEMP="$TEMPORIG" } $Dur=New-TimeSpan -Start $StartTime -End $(Get-Date) Write-Host -ForegroundColor $FinallyColour "`nINFO: executeCI.ps1 exiting at $(date). Duration $dur`n" }
thaJeztah
3ad9549e70bdf45b40c6332b221cd5c7fd635524
51b06c6795160d8a1ba05d05d6491df7588b2957
Hmm, I think that was when I fixed the alias, but I see `Foreach` does not have a capital here either; I must've overlooked that when extracting this from the other PR
thaJeztah
4,521
moby/moby
42,683
Remove LCOW (step 6)
Splitting off more bits from https://github.com/moby/moby/pull/42170
null
2021-07-27 11:33:51+00:00
2021-07-29 18:34:29+00:00
oci/defaults.go
package oci // import "github.com/docker/docker/oci" import ( "os" "runtime" "github.com/docker/docker/oci/caps" specs "github.com/opencontainers/runtime-spec/specs-go" ) func iPtr(i int64) *int64 { return &i } func u32Ptr(i int64) *uint32 { u := uint32(i); return &u } func fmPtr(i int64) *os.FileMode { fm := os.FileMode(i); return &fm } // DefaultSpec returns the default spec used by docker for the current Platform func DefaultSpec() specs.Spec { return DefaultOSSpec(runtime.GOOS) } // DefaultOSSpec returns the spec for a given OS func DefaultOSSpec(osName string) specs.Spec { if osName == "windows" { return DefaultWindowsSpec() } return DefaultLinuxSpec() } // DefaultWindowsSpec create a default spec for running Windows containers func DefaultWindowsSpec() specs.Spec { return specs.Spec{ Version: specs.Version, Windows: &specs.Windows{}, Process: &specs.Process{}, Root: &specs.Root{}, } } // DefaultLinuxSpec create a default spec for running Linux containers func DefaultLinuxSpec() specs.Spec { s := specs.Spec{ Version: specs.Version, Process: &specs.Process{ Capabilities: &specs.LinuxCapabilities{ Bounding: caps.DefaultCapabilities(), Permitted: caps.DefaultCapabilities(), Inheritable: caps.DefaultCapabilities(), Effective: caps.DefaultCapabilities(), }, }, Root: &specs.Root{}, } s.Mounts = []specs.Mount{ { Destination: "/proc", Type: "proc", Source: "proc", Options: []string{"nosuid", "noexec", "nodev"}, }, { Destination: "/dev", Type: "tmpfs", Source: "tmpfs", Options: []string{"nosuid", "strictatime", "mode=755", "size=65536k"}, }, { Destination: "/dev/pts", Type: "devpts", Source: "devpts", Options: []string{"nosuid", "noexec", "newinstance", "ptmxmode=0666", "mode=0620", "gid=5"}, }, { Destination: "/sys", Type: "sysfs", Source: "sysfs", Options: []string{"nosuid", "noexec", "nodev", "ro"}, }, { Destination: "/sys/fs/cgroup", Type: "cgroup", Source: "cgroup", Options: []string{"ro", "nosuid", "noexec", "nodev"}, }, { Destination: "/dev/mqueue", Type: "mqueue", Source: "mqueue", Options: []string{"nosuid", "noexec", "nodev"}, }, { Destination: "/dev/shm", Type: "tmpfs", Source: "shm", Options: []string{"nosuid", "noexec", "nodev", "mode=1777"}, }, } s.Linux = &specs.Linux{ MaskedPaths: []string{ "/proc/asound", "/proc/acpi", "/proc/kcore", "/proc/keys", "/proc/latency_stats", "/proc/timer_list", "/proc/timer_stats", "/proc/sched_debug", "/proc/scsi", "/sys/firmware", }, ReadonlyPaths: []string{ "/proc/bus", "/proc/fs", "/proc/irq", "/proc/sys", "/proc/sysrq-trigger", }, Namespaces: []specs.LinuxNamespace{ {Type: "mount"}, {Type: "network"}, {Type: "uts"}, {Type: "pid"}, {Type: "ipc"}, }, // Devices implicitly contains the following devices: // null, zero, full, random, urandom, tty, console, and ptmx. // ptmx is a bind mount or symlink of the container's ptmx. 
// See also: https://github.com/opencontainers/runtime-spec/blob/master/config-linux.md#default-devices Devices: []specs.LinuxDevice{}, Resources: &specs.LinuxResources{ Devices: []specs.LinuxDeviceCgroup{ { Allow: false, Access: "rwm", }, { Allow: true, Type: "c", Major: iPtr(1), Minor: iPtr(5), Access: "rwm", }, { Allow: true, Type: "c", Major: iPtr(1), Minor: iPtr(3), Access: "rwm", }, { Allow: true, Type: "c", Major: iPtr(1), Minor: iPtr(9), Access: "rwm", }, { Allow: true, Type: "c", Major: iPtr(1), Minor: iPtr(8), Access: "rwm", }, { Allow: true, Type: "c", Major: iPtr(5), Minor: iPtr(0), Access: "rwm", }, { Allow: true, Type: "c", Major: iPtr(5), Minor: iPtr(1), Access: "rwm", }, { Allow: false, Type: "c", Major: iPtr(10), Minor: iPtr(229), Access: "rwm", }, }, }, } // For LCOW support, populate a blank Windows spec if runtime.GOOS == "windows" { s.Windows = &specs.Windows{} } return s }
package oci // import "github.com/docker/docker/oci" import ( "os" "runtime" "github.com/docker/docker/oci/caps" specs "github.com/opencontainers/runtime-spec/specs-go" ) func iPtr(i int64) *int64 { return &i } func u32Ptr(i int64) *uint32 { u := uint32(i); return &u } func fmPtr(i int64) *os.FileMode { fm := os.FileMode(i); return &fm } // DefaultSpec returns the default spec used by docker for the current Platform func DefaultSpec() specs.Spec { if runtime.GOOS == "windows" { return DefaultWindowsSpec() } return DefaultLinuxSpec() } // DefaultWindowsSpec create a default spec for running Windows containers func DefaultWindowsSpec() specs.Spec { return specs.Spec{ Version: specs.Version, Windows: &specs.Windows{}, Process: &specs.Process{}, Root: &specs.Root{}, } } // DefaultLinuxSpec create a default spec for running Linux containers func DefaultLinuxSpec() specs.Spec { return specs.Spec{ Version: specs.Version, Process: &specs.Process{ Capabilities: &specs.LinuxCapabilities{ Bounding: caps.DefaultCapabilities(), Permitted: caps.DefaultCapabilities(), Inheritable: caps.DefaultCapabilities(), Effective: caps.DefaultCapabilities(), }, }, Root: &specs.Root{}, Mounts: []specs.Mount{ { Destination: "/proc", Type: "proc", Source: "proc", Options: []string{"nosuid", "noexec", "nodev"}, }, { Destination: "/dev", Type: "tmpfs", Source: "tmpfs", Options: []string{"nosuid", "strictatime", "mode=755", "size=65536k"}, }, { Destination: "/dev/pts", Type: "devpts", Source: "devpts", Options: []string{"nosuid", "noexec", "newinstance", "ptmxmode=0666", "mode=0620", "gid=5"}, }, { Destination: "/sys", Type: "sysfs", Source: "sysfs", Options: []string{"nosuid", "noexec", "nodev", "ro"}, }, { Destination: "/sys/fs/cgroup", Type: "cgroup", Source: "cgroup", Options: []string{"ro", "nosuid", "noexec", "nodev"}, }, { Destination: "/dev/mqueue", Type: "mqueue", Source: "mqueue", Options: []string{"nosuid", "noexec", "nodev"}, }, { Destination: "/dev/shm", Type: "tmpfs", Source: "shm", Options: []string{"nosuid", "noexec", "nodev", "mode=1777"}, }, }, Linux: &specs.Linux{ MaskedPaths: []string{ "/proc/asound", "/proc/acpi", "/proc/kcore", "/proc/keys", "/proc/latency_stats", "/proc/timer_list", "/proc/timer_stats", "/proc/sched_debug", "/proc/scsi", "/sys/firmware", }, ReadonlyPaths: []string{ "/proc/bus", "/proc/fs", "/proc/irq", "/proc/sys", "/proc/sysrq-trigger", }, Namespaces: []specs.LinuxNamespace{ {Type: "mount"}, {Type: "network"}, {Type: "uts"}, {Type: "pid"}, {Type: "ipc"}, }, // Devices implicitly contains the following devices: // null, zero, full, random, urandom, tty, console, and ptmx. // ptmx is a bind mount or symlink of the container's ptmx. // See also: https://github.com/opencontainers/runtime-spec/blob/master/config-linux.md#default-devices Devices: []specs.LinuxDevice{}, Resources: &specs.LinuxResources{ Devices: []specs.LinuxDeviceCgroup{ { Allow: false, Access: "rwm", }, { Allow: true, Type: "c", Major: iPtr(1), Minor: iPtr(5), Access: "rwm", }, { Allow: true, Type: "c", Major: iPtr(1), Minor: iPtr(3), Access: "rwm", }, { Allow: true, Type: "c", Major: iPtr(1), Minor: iPtr(9), Access: "rwm", }, { Allow: true, Type: "c", Major: iPtr(1), Minor: iPtr(8), Access: "rwm", }, { Allow: true, Type: "c", Major: iPtr(5), Minor: iPtr(0), Access: "rwm", }, { Allow: true, Type: "c", Major: iPtr(5), Minor: iPtr(1), Access: "rwm", }, { Allow: false, Type: "c", Major: iPtr(10), Minor: iPtr(229), Access: "rwm", }, }, }, }, } }
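The change recorded above folds `DefaultOSSpec` into `DefaultSpec`, builds the Linux spec as a single literal, and drops the LCOW-era step that attached a blank `Windows` section to the Linux spec when the daemon ran on Windows. A minimal sketch of how a caller sees the result — the `main` wrapper is illustrative only and assumes the moby/moby module is available as a dependency:

```go
package main

import (
	"fmt"

	"github.com/docker/docker/oci"
)

func main() {
	// Build the default OCI spec for the current platform.
	spec := oci.DefaultSpec()

	fmt.Println("OCI spec version:", spec.Version)
	// On a Linux host this prints "false" both before and after the change;
	// the removed block only populated spec.Windows for LCOW, i.e. a Linux
	// spec built by a daemon running on Windows.
	fmt.Println("Windows section present:", spec.Windows != nil)
}
```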
thaJeztah
3ad9549e70bdf45b40c6332b221cd5c7fd635524
51b06c6795160d8a1ba05d05d6491df7588b2957
Changes in this file are easiest reviewed with "ignore whitespace"; https://github.com/moby/moby/pull/42683/files?w=1
thaJeztah
4,522
moby/moby
42,676
fileutils: Fix incorrect handling of "**/foo" pattern
`(*PatternMatcher).Matches` includes a special case for when the pattern matches a parent dir, even though it doesn't match the current path. However, it assumes that the parent dir which would match the pattern must have the same number of separators as the pattern itself. This doesn't hold true with a pattern like `**/foo`. A file `foo/bar` would have `len(parentPathDirs) == 1`, which is less than the number of pattern components, `len(pattern.dirs) == 2`... therefore this check would be skipped. Given that `**/foo` matches `foo`, I think it's a bug that the "parent subdir matches" check is being skipped in this case. It seems safer to loop over the parent subdirs and check each against the pattern. It's possible there is a safe optimization to check only a certain subset, but the existing logic seems unsafe. This was found while using the `IncludePatterns` feature of BuildKit's "copy" op. - Fixes: #41433 - related: https://github.com/moby/moby/issues/40319 cc @coryb @tonistiigi
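To make the failure mode concrete, here is a minimal, hypothetical sketch (not part of the PR itself) using the `pkg/fileutils` API; `MatchesOrParentMatches` is the helper this PR introduces, and the printed results reflect the behaviour described above.

```go
package main

import (
	"fmt"

	"github.com/docker/docker/pkg/fileutils"
)

func main() {
	// "**/foo" splits into two components, so pattern.dirs has length 2.
	pm, err := fileutils.NewPatternMatcher([]string{"**/foo"})
	if err != nil {
		panic(err)
	}

	// Matches misses this case: the parent path "foo" has only one
	// component, so the "len(pattern.dirs) <= len(parentPathDirs)" guard
	// skips the parent-dir check and the result is false, even though
	// "**/foo" matches the parent directory "foo".
	old, _ := pm.Matches("foo/bar")
	fmt.Println(old) // false

	// MatchesOrParentMatches (added by this PR) checks every parent
	// prefix against the pattern, so "foo" is tested and matches.
	fixed, _ := pm.MatchesOrParentMatches("foo/bar")
	fmt.Println(fixed) // true
}
```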
null
2021-07-26 18:36:03+00:00
2021-08-17 02:58:55+00:00
pkg/fileutils/fileutils.go
package fileutils // import "github.com/docker/docker/pkg/fileutils" import ( "errors" "fmt" "io" "os" "path/filepath" "regexp" "strings" "text/scanner" ) // PatternMatcher allows checking paths against a list of patterns type PatternMatcher struct { patterns []*Pattern exclusions bool } // NewPatternMatcher creates a new matcher object for specific patterns that can // be used later to match against patterns against paths func NewPatternMatcher(patterns []string) (*PatternMatcher, error) { pm := &PatternMatcher{ patterns: make([]*Pattern, 0, len(patterns)), } for _, p := range patterns { // Eliminate leading and trailing whitespace. p = strings.TrimSpace(p) if p == "" { continue } p = filepath.Clean(p) newp := &Pattern{} if p[0] == '!' { if len(p) == 1 { return nil, errors.New("illegal exclusion pattern: \"!\"") } newp.exclusion = true p = p[1:] pm.exclusions = true } // Do some syntax checking on the pattern. // filepath's Match() has some really weird rules that are inconsistent // so instead of trying to dup their logic, just call Match() for its // error state and if there is an error in the pattern return it. // If this becomes an issue we can remove this since its really only // needed in the error (syntax) case - which isn't really critical. if _, err := filepath.Match(p, "."); err != nil { return nil, err } newp.cleanedPattern = p newp.dirs = strings.Split(p, string(os.PathSeparator)) pm.patterns = append(pm.patterns, newp) } return pm, nil } // Matches matches path against all the patterns. Matches is not safe to be // called concurrently func (pm *PatternMatcher) Matches(file string) (bool, error) { matched := false file = filepath.FromSlash(file) parentPath := filepath.Dir(file) parentPathDirs := strings.Split(parentPath, string(os.PathSeparator)) for _, pattern := range pm.patterns { negative := false if pattern.exclusion { negative = true } match, err := pattern.match(file) if err != nil { return false, err } if !match && parentPath != "." { // Check to see if the pattern matches one of our parent dirs. if len(pattern.dirs) <= len(parentPathDirs) { match, _ = pattern.match(strings.Join(parentPathDirs[:len(pattern.dirs)], string(os.PathSeparator))) } } if match { matched = !negative } } return matched, nil } // Exclusions returns true if any of the patterns define exclusions func (pm *PatternMatcher) Exclusions() bool { return pm.exclusions } // Patterns returns array of active patterns func (pm *PatternMatcher) Patterns() []*Pattern { return pm.patterns } // Pattern defines a single regexp used to filter file paths. type Pattern struct { cleanedPattern string dirs []string regexp *regexp.Regexp exclusion bool } func (p *Pattern) String() string { return p.cleanedPattern } // Exclusion returns true if this pattern defines exclusion func (p *Pattern) Exclusion() bool { return p.exclusion } func (p *Pattern) match(path string) (bool, error) { if p.regexp == nil { if err := p.compile(); err != nil { return false, filepath.ErrBadPattern } } b := p.regexp.MatchString(path) return b, nil } func (p *Pattern) compile() error { regStr := "^" pattern := p.cleanedPattern // Go through the pattern and convert it to a regexp. // We use a scanner so we can support utf-8 chars. 
var scan scanner.Scanner scan.Init(strings.NewReader(pattern)) sl := string(os.PathSeparator) escSL := sl if sl == `\` { escSL += `\` } for scan.Peek() != scanner.EOF { ch := scan.Next() if ch == '*' { if scan.Peek() == '*' { // is some flavor of "**" scan.Next() // Treat **/ as ** so eat the "/" if string(scan.Peek()) == sl { scan.Next() } if scan.Peek() == scanner.EOF { // is "**EOF" - to align with .gitignore just accept all regStr += ".*" } else { // is "**" // Note that this allows for any # of /'s (even 0) because // the .* will eat everything, even /'s regStr += "(.*" + escSL + ")?" } } else { // is "*" so map it to anything but "/" regStr += "[^" + escSL + "]*" } } else if ch == '?' { // "?" is any char except "/" regStr += "[^" + escSL + "]" } else if ch == '.' || ch == '$' { // Escape some regexp special chars that have no meaning // in golang's filepath.Match regStr += `\` + string(ch) } else if ch == '\\' { // escape next char. Note that a trailing \ in the pattern // will be left alone (but need to escape it) if sl == `\` { // On windows map "\" to "\\", meaning an escaped backslash, // and then just continue because filepath.Match on // Windows doesn't allow escaping at all regStr += escSL continue } if scan.Peek() != scanner.EOF { regStr += `\` + string(scan.Next()) } else { regStr += `\` } } else { regStr += string(ch) } } regStr += "$" re, err := regexp.Compile(regStr) if err != nil { return err } p.regexp = re return nil } // Matches returns true if file matches any of the patterns // and isn't excluded by any of the subsequent patterns. func Matches(file string, patterns []string) (bool, error) { pm, err := NewPatternMatcher(patterns) if err != nil { return false, err } file = filepath.Clean(file) if file == "." { // Don't let them exclude everything, kind of silly. return false, nil } return pm.Matches(file) } // CopyFile copies from src to dst until either EOF is reached // on src or an error occurs. It verifies src exists and removes // the dst if it exists. func CopyFile(src, dst string) (int64, error) { cleanSrc := filepath.Clean(src) cleanDst := filepath.Clean(dst) if cleanSrc == cleanDst { return 0, nil } sf, err := os.Open(cleanSrc) if err != nil { return 0, err } defer sf.Close() if err := os.Remove(cleanDst); err != nil && !os.IsNotExist(err) { return 0, err } df, err := os.Create(cleanDst) if err != nil { return 0, err } defer df.Close() return io.Copy(df, sf) } // ReadSymlinkedDirectory returns the target directory of a symlink. // The target of the symbolic link may not be a file. func ReadSymlinkedDirectory(path string) (string, error) { var realPath string var err error if realPath, err = filepath.Abs(path); err != nil { return "", fmt.Errorf("unable to get absolute path for %s: %s", path, err) } if realPath, err = filepath.EvalSymlinks(realPath); err != nil { return "", fmt.Errorf("failed to canonicalise path for %s: %s", path, err) } realPathInfo, err := os.Stat(realPath) if err != nil { return "", fmt.Errorf("failed to stat target '%s' of '%s': %s", realPath, path, err) } if !realPathInfo.Mode().IsDir() { return "", fmt.Errorf("canonical path points to a file '%s'", realPath) } return realPath, nil } // CreateIfNotExists creates a file or a directory only if it does not already exist. 
func CreateIfNotExists(path string, isDir bool) error { if _, err := os.Stat(path); err != nil { if os.IsNotExist(err) { if isDir { return os.MkdirAll(path, 0755) } if err := os.MkdirAll(filepath.Dir(path), 0755); err != nil { return err } f, err := os.OpenFile(path, os.O_CREATE, 0755) if err != nil { return err } f.Close() } } return nil }
package fileutils // import "github.com/docker/docker/pkg/fileutils" import ( "errors" "fmt" "io" "os" "path/filepath" "regexp" "strings" "text/scanner" ) // PatternMatcher allows checking paths against a list of patterns type PatternMatcher struct { patterns []*Pattern exclusions bool } // NewPatternMatcher creates a new matcher object for specific patterns that can // be used later to match against patterns against paths func NewPatternMatcher(patterns []string) (*PatternMatcher, error) { pm := &PatternMatcher{ patterns: make([]*Pattern, 0, len(patterns)), } for _, p := range patterns { // Eliminate leading and trailing whitespace. p = strings.TrimSpace(p) if p == "" { continue } p = filepath.Clean(p) newp := &Pattern{} if p[0] == '!' { if len(p) == 1 { return nil, errors.New("illegal exclusion pattern: \"!\"") } newp.exclusion = true p = p[1:] pm.exclusions = true } // Do some syntax checking on the pattern. // filepath's Match() has some really weird rules that are inconsistent // so instead of trying to dup their logic, just call Match() for its // error state and if there is an error in the pattern return it. // If this becomes an issue we can remove this since its really only // needed in the error (syntax) case - which isn't really critical. if _, err := filepath.Match(p, "."); err != nil { return nil, err } newp.cleanedPattern = p newp.dirs = strings.Split(p, string(os.PathSeparator)) pm.patterns = append(pm.patterns, newp) } return pm, nil } // Matches returns true if "file" matches any of the patterns // and isn't excluded by any of the subsequent patterns. // // The "file" argument should be a slash-delimited path. // // Matches is not safe to call concurrently. // // This implementation is buggy (it only checks a single parent dir against the // pattern) and will be removed soon. Use either MatchesOrParentMatches or // MatchesUsingParentResult instead. func (pm *PatternMatcher) Matches(file string) (bool, error) { matched := false file = filepath.FromSlash(file) parentPath := filepath.Dir(file) parentPathDirs := strings.Split(parentPath, string(os.PathSeparator)) for _, pattern := range pm.patterns { // Skip evaluation if this is an inclusion and the filename // already matched the pattern, or it's an exclusion and it has // not matched the pattern yet. if pattern.exclusion != matched { continue } match, err := pattern.match(file) if err != nil { return false, err } if !match && parentPath != "." { // Check to see if the pattern matches one of our parent dirs. if len(pattern.dirs) <= len(parentPathDirs) { match, _ = pattern.match(strings.Join(parentPathDirs[:len(pattern.dirs)], string(os.PathSeparator))) } } if match { matched = !pattern.exclusion } } return matched, nil } // MatchesOrParentMatches returns true if "file" matches any of the patterns // and isn't excluded by any of the subsequent patterns. // // The "file" argument should be a slash-delimited path. // // Matches is not safe to call concurrently. func (pm *PatternMatcher) MatchesOrParentMatches(file string) (bool, error) { matched := false file = filepath.FromSlash(file) parentPath := filepath.Dir(file) parentPathDirs := strings.Split(parentPath, string(os.PathSeparator)) for _, pattern := range pm.patterns { // Skip evaluation if this is an inclusion and the filename // already matched the pattern, or it's an exclusion and it has // not matched the pattern yet. if pattern.exclusion != matched { continue } match, err := pattern.match(file) if err != nil { return false, err } if !match && parentPath != "." 
{ // Check to see if the pattern matches one of our parent dirs. for i := range parentPathDirs { match, _ = pattern.match(strings.Join(parentPathDirs[:i+1], string(os.PathSeparator))) if match { break } } } if match { matched = !pattern.exclusion } } return matched, nil } // MatchesUsingParentResult returns true if "file" matches any of the patterns // and isn't excluded by any of the subsequent patterns. The functionality is // the same as Matches, but as an optimization, the caller keeps track of // whether the parent directory matched. // // The "file" argument should be a slash-delimited path. // // MatchesUsingParentResult is not safe to call concurrently. func (pm *PatternMatcher) MatchesUsingParentResult(file string, parentMatched bool) (bool, error) { matched := parentMatched file = filepath.FromSlash(file) for _, pattern := range pm.patterns { // Skip evaluation if this is an inclusion and the filename // already matched the pattern, or it's an exclusion and it has // not matched the pattern yet. if pattern.exclusion != matched { continue } match, err := pattern.match(file) if err != nil { return false, err } if match { matched = !pattern.exclusion } } return matched, nil } // Exclusions returns true if any of the patterns define exclusions func (pm *PatternMatcher) Exclusions() bool { return pm.exclusions } // Patterns returns array of active patterns func (pm *PatternMatcher) Patterns() []*Pattern { return pm.patterns } // Pattern defines a single regexp used to filter file paths. type Pattern struct { cleanedPattern string dirs []string regexp *regexp.Regexp exclusion bool } func (p *Pattern) String() string { return p.cleanedPattern } // Exclusion returns true if this pattern defines exclusion func (p *Pattern) Exclusion() bool { return p.exclusion } func (p *Pattern) match(path string) (bool, error) { if p.regexp == nil { if err := p.compile(); err != nil { return false, filepath.ErrBadPattern } } b := p.regexp.MatchString(path) return b, nil } func (p *Pattern) compile() error { regStr := "^" pattern := p.cleanedPattern // Go through the pattern and convert it to a regexp. // We use a scanner so we can support utf-8 chars. var scan scanner.Scanner scan.Init(strings.NewReader(pattern)) sl := string(os.PathSeparator) escSL := sl if sl == `\` { escSL += `\` } for scan.Peek() != scanner.EOF { ch := scan.Next() if ch == '*' { if scan.Peek() == '*' { // is some flavor of "**" scan.Next() // Treat **/ as ** so eat the "/" if string(scan.Peek()) == sl { scan.Next() } if scan.Peek() == scanner.EOF { // is "**EOF" - to align with .gitignore just accept all regStr += ".*" } else { // is "**" // Note that this allows for any # of /'s (even 0) because // the .* will eat everything, even /'s regStr += "(.*" + escSL + ")?" } } else { // is "*" so map it to anything but "/" regStr += "[^" + escSL + "]*" } } else if ch == '?' { // "?" is any char except "/" regStr += "[^" + escSL + "]" } else if ch == '.' || ch == '$' { // Escape some regexp special chars that have no meaning // in golang's filepath.Match regStr += `\` + string(ch) } else if ch == '\\' { // escape next char. 
Note that a trailing \ in the pattern // will be left alone (but need to escape it) if sl == `\` { // On windows map "\" to "\\", meaning an escaped backslash, // and then just continue because filepath.Match on // Windows doesn't allow escaping at all regStr += escSL continue } if scan.Peek() != scanner.EOF { regStr += `\` + string(scan.Next()) } else { regStr += `\` } } else { regStr += string(ch) } } regStr += "$" re, err := regexp.Compile(regStr) if err != nil { return err } p.regexp = re return nil } // Matches returns true if file matches any of the patterns // and isn't excluded by any of the subsequent patterns. // // This implementation is buggy (it only checks a single parent dir against the // pattern) and will be removed soon. Use MatchesOrParentMatches instead. func Matches(file string, patterns []string) (bool, error) { pm, err := NewPatternMatcher(patterns) if err != nil { return false, err } file = filepath.Clean(file) if file == "." { // Don't let them exclude everything, kind of silly. return false, nil } return pm.Matches(file) } // MatchesOrParentMatches returns true if file matches any of the patterns // and isn't excluded by any of the subsequent patterns. func MatchesOrParentMatches(file string, patterns []string) (bool, error) { pm, err := NewPatternMatcher(patterns) if err != nil { return false, err } file = filepath.Clean(file) if file == "." { // Don't let them exclude everything, kind of silly. return false, nil } return pm.MatchesOrParentMatches(file) } // CopyFile copies from src to dst until either EOF is reached // on src or an error occurs. It verifies src exists and removes // the dst if it exists. func CopyFile(src, dst string) (int64, error) { cleanSrc := filepath.Clean(src) cleanDst := filepath.Clean(dst) if cleanSrc == cleanDst { return 0, nil } sf, err := os.Open(cleanSrc) if err != nil { return 0, err } defer sf.Close() if err := os.Remove(cleanDst); err != nil && !os.IsNotExist(err) { return 0, err } df, err := os.Create(cleanDst) if err != nil { return 0, err } defer df.Close() return io.Copy(df, sf) } // ReadSymlinkedDirectory returns the target directory of a symlink. // The target of the symbolic link may not be a file. func ReadSymlinkedDirectory(path string) (string, error) { var realPath string var err error if realPath, err = filepath.Abs(path); err != nil { return "", fmt.Errorf("unable to get absolute path for %s: %s", path, err) } if realPath, err = filepath.EvalSymlinks(realPath); err != nil { return "", fmt.Errorf("failed to canonicalise path for %s: %s", path, err) } realPathInfo, err := os.Stat(realPath) if err != nil { return "", fmt.Errorf("failed to stat target '%s' of '%s': %s", realPath, path, err) } if !realPathInfo.Mode().IsDir() { return "", fmt.Errorf("canonical path points to a file '%s'", realPath) } return realPath, nil } // CreateIfNotExists creates a file or a directory only if it does not already exist. func CreateIfNotExists(path string, isDir bool) error { if _, err := os.Stat(path); err != nil { if os.IsNotExist(err) { if isDir { return os.MkdirAll(path, 0755) } if err := os.MkdirAll(filepath.Dir(path), 0755); err != nil { return err } f, err := os.OpenFile(path, os.O_CREATE, 0755) if err != nil { return err } f.Close() } } return nil }
aaronlehmann
385ddf6120629b72ed4d8acce4a7a5f74f81b88e
ba2adeebdb8d4ec442f1d3107405ae55d0c1a0c2
Could we rename this and remove the `range` loop, so that all callers need to update and choose whether they don't need to match parents or whether they track the parents on the caller side?
tonistiigi
4,523
moby/moby
42,676
fileutils: Fix incorrect handling of "**/foo" pattern
`(*PatternMatcher).Matches` includes a special case for when the pattern matches a parent dir, even though it doesn't match the current path. However, it assumes that the parent dir which would match the pattern must have the same number of separators as the pattern itself. This doesn't hold true with a pattern like `**/foo`. A file `foo/bar` would have `len(parentPathDirs) == 1`, which is less than the number of pattern components, `len(pattern.dirs) == 2`... therefore this check would be skipped. Given that `**/foo` matches `foo`, I think it's a bug that the "parent subdir matches" check is being skipped in this case. It seems safer to loop over the parent subdirs and check each against the pattern. It's possible there is a safe optimization to check only a certain subset, but the existing logic seems unsafe. This was found while using the `IncludePatterns` feature of BuildKit's "copy" op. - Fixes: #41433 - related: https://github.com/moby/moby/issues/40319 cc @coryb @tonistiigi
null
2021-07-26 18:36:03+00:00
2021-08-17 02:58:55+00:00
pkg/fileutils/fileutils.go
package fileutils // import "github.com/docker/docker/pkg/fileutils" import ( "errors" "fmt" "io" "os" "path/filepath" "regexp" "strings" "text/scanner" ) // PatternMatcher allows checking paths against a list of patterns type PatternMatcher struct { patterns []*Pattern exclusions bool } // NewPatternMatcher creates a new matcher object for specific patterns that can // be used later to match against patterns against paths func NewPatternMatcher(patterns []string) (*PatternMatcher, error) { pm := &PatternMatcher{ patterns: make([]*Pattern, 0, len(patterns)), } for _, p := range patterns { // Eliminate leading and trailing whitespace. p = strings.TrimSpace(p) if p == "" { continue } p = filepath.Clean(p) newp := &Pattern{} if p[0] == '!' { if len(p) == 1 { return nil, errors.New("illegal exclusion pattern: \"!\"") } newp.exclusion = true p = p[1:] pm.exclusions = true } // Do some syntax checking on the pattern. // filepath's Match() has some really weird rules that are inconsistent // so instead of trying to dup their logic, just call Match() for its // error state and if there is an error in the pattern return it. // If this becomes an issue we can remove this since its really only // needed in the error (syntax) case - which isn't really critical. if _, err := filepath.Match(p, "."); err != nil { return nil, err } newp.cleanedPattern = p newp.dirs = strings.Split(p, string(os.PathSeparator)) pm.patterns = append(pm.patterns, newp) } return pm, nil } // Matches matches path against all the patterns. Matches is not safe to be // called concurrently func (pm *PatternMatcher) Matches(file string) (bool, error) { matched := false file = filepath.FromSlash(file) parentPath := filepath.Dir(file) parentPathDirs := strings.Split(parentPath, string(os.PathSeparator)) for _, pattern := range pm.patterns { negative := false if pattern.exclusion { negative = true } match, err := pattern.match(file) if err != nil { return false, err } if !match && parentPath != "." { // Check to see if the pattern matches one of our parent dirs. if len(pattern.dirs) <= len(parentPathDirs) { match, _ = pattern.match(strings.Join(parentPathDirs[:len(pattern.dirs)], string(os.PathSeparator))) } } if match { matched = !negative } } return matched, nil } // Exclusions returns true if any of the patterns define exclusions func (pm *PatternMatcher) Exclusions() bool { return pm.exclusions } // Patterns returns array of active patterns func (pm *PatternMatcher) Patterns() []*Pattern { return pm.patterns } // Pattern defines a single regexp used to filter file paths. type Pattern struct { cleanedPattern string dirs []string regexp *regexp.Regexp exclusion bool } func (p *Pattern) String() string { return p.cleanedPattern } // Exclusion returns true if this pattern defines exclusion func (p *Pattern) Exclusion() bool { return p.exclusion } func (p *Pattern) match(path string) (bool, error) { if p.regexp == nil { if err := p.compile(); err != nil { return false, filepath.ErrBadPattern } } b := p.regexp.MatchString(path) return b, nil } func (p *Pattern) compile() error { regStr := "^" pattern := p.cleanedPattern // Go through the pattern and convert it to a regexp. // We use a scanner so we can support utf-8 chars. 
var scan scanner.Scanner scan.Init(strings.NewReader(pattern)) sl := string(os.PathSeparator) escSL := sl if sl == `\` { escSL += `\` } for scan.Peek() != scanner.EOF { ch := scan.Next() if ch == '*' { if scan.Peek() == '*' { // is some flavor of "**" scan.Next() // Treat **/ as ** so eat the "/" if string(scan.Peek()) == sl { scan.Next() } if scan.Peek() == scanner.EOF { // is "**EOF" - to align with .gitignore just accept all regStr += ".*" } else { // is "**" // Note that this allows for any # of /'s (even 0) because // the .* will eat everything, even /'s regStr += "(.*" + escSL + ")?" } } else { // is "*" so map it to anything but "/" regStr += "[^" + escSL + "]*" } } else if ch == '?' { // "?" is any char except "/" regStr += "[^" + escSL + "]" } else if ch == '.' || ch == '$' { // Escape some regexp special chars that have no meaning // in golang's filepath.Match regStr += `\` + string(ch) } else if ch == '\\' { // escape next char. Note that a trailing \ in the pattern // will be left alone (but need to escape it) if sl == `\` { // On windows map "\" to "\\", meaning an escaped backslash, // and then just continue because filepath.Match on // Windows doesn't allow escaping at all regStr += escSL continue } if scan.Peek() != scanner.EOF { regStr += `\` + string(scan.Next()) } else { regStr += `\` } } else { regStr += string(ch) } } regStr += "$" re, err := regexp.Compile(regStr) if err != nil { return err } p.regexp = re return nil } // Matches returns true if file matches any of the patterns // and isn't excluded by any of the subsequent patterns. func Matches(file string, patterns []string) (bool, error) { pm, err := NewPatternMatcher(patterns) if err != nil { return false, err } file = filepath.Clean(file) if file == "." { // Don't let them exclude everything, kind of silly. return false, nil } return pm.Matches(file) } // CopyFile copies from src to dst until either EOF is reached // on src or an error occurs. It verifies src exists and removes // the dst if it exists. func CopyFile(src, dst string) (int64, error) { cleanSrc := filepath.Clean(src) cleanDst := filepath.Clean(dst) if cleanSrc == cleanDst { return 0, nil } sf, err := os.Open(cleanSrc) if err != nil { return 0, err } defer sf.Close() if err := os.Remove(cleanDst); err != nil && !os.IsNotExist(err) { return 0, err } df, err := os.Create(cleanDst) if err != nil { return 0, err } defer df.Close() return io.Copy(df, sf) } // ReadSymlinkedDirectory returns the target directory of a symlink. // The target of the symbolic link may not be a file. func ReadSymlinkedDirectory(path string) (string, error) { var realPath string var err error if realPath, err = filepath.Abs(path); err != nil { return "", fmt.Errorf("unable to get absolute path for %s: %s", path, err) } if realPath, err = filepath.EvalSymlinks(realPath); err != nil { return "", fmt.Errorf("failed to canonicalise path for %s: %s", path, err) } realPathInfo, err := os.Stat(realPath) if err != nil { return "", fmt.Errorf("failed to stat target '%s' of '%s': %s", realPath, path, err) } if !realPathInfo.Mode().IsDir() { return "", fmt.Errorf("canonical path points to a file '%s'", realPath) } return realPath, nil } // CreateIfNotExists creates a file or a directory only if it does not already exist. 
func CreateIfNotExists(path string, isDir bool) error { if _, err := os.Stat(path); err != nil { if os.IsNotExist(err) { if isDir { return os.MkdirAll(path, 0755) } if err := os.MkdirAll(filepath.Dir(path), 0755); err != nil { return err } f, err := os.OpenFile(path, os.O_CREATE, 0755) if err != nil { return err } f.Close() } } return nil }
package fileutils // import "github.com/docker/docker/pkg/fileutils" import ( "errors" "fmt" "io" "os" "path/filepath" "regexp" "strings" "text/scanner" ) // PatternMatcher allows checking paths against a list of patterns type PatternMatcher struct { patterns []*Pattern exclusions bool } // NewPatternMatcher creates a new matcher object for specific patterns that can // be used later to match against patterns against paths func NewPatternMatcher(patterns []string) (*PatternMatcher, error) { pm := &PatternMatcher{ patterns: make([]*Pattern, 0, len(patterns)), } for _, p := range patterns { // Eliminate leading and trailing whitespace. p = strings.TrimSpace(p) if p == "" { continue } p = filepath.Clean(p) newp := &Pattern{} if p[0] == '!' { if len(p) == 1 { return nil, errors.New("illegal exclusion pattern: \"!\"") } newp.exclusion = true p = p[1:] pm.exclusions = true } // Do some syntax checking on the pattern. // filepath's Match() has some really weird rules that are inconsistent // so instead of trying to dup their logic, just call Match() for its // error state and if there is an error in the pattern return it. // If this becomes an issue we can remove this since its really only // needed in the error (syntax) case - which isn't really critical. if _, err := filepath.Match(p, "."); err != nil { return nil, err } newp.cleanedPattern = p newp.dirs = strings.Split(p, string(os.PathSeparator)) pm.patterns = append(pm.patterns, newp) } return pm, nil } // Matches returns true if "file" matches any of the patterns // and isn't excluded by any of the subsequent patterns. // // The "file" argument should be a slash-delimited path. // // Matches is not safe to call concurrently. // // This implementation is buggy (it only checks a single parent dir against the // pattern) and will be removed soon. Use either MatchesOrParentMatches or // MatchesUsingParentResult instead. func (pm *PatternMatcher) Matches(file string) (bool, error) { matched := false file = filepath.FromSlash(file) parentPath := filepath.Dir(file) parentPathDirs := strings.Split(parentPath, string(os.PathSeparator)) for _, pattern := range pm.patterns { // Skip evaluation if this is an inclusion and the filename // already matched the pattern, or it's an exclusion and it has // not matched the pattern yet. if pattern.exclusion != matched { continue } match, err := pattern.match(file) if err != nil { return false, err } if !match && parentPath != "." { // Check to see if the pattern matches one of our parent dirs. if len(pattern.dirs) <= len(parentPathDirs) { match, _ = pattern.match(strings.Join(parentPathDirs[:len(pattern.dirs)], string(os.PathSeparator))) } } if match { matched = !pattern.exclusion } } return matched, nil } // MatchesOrParentMatches returns true if "file" matches any of the patterns // and isn't excluded by any of the subsequent patterns. // // The "file" argument should be a slash-delimited path. // // Matches is not safe to call concurrently. func (pm *PatternMatcher) MatchesOrParentMatches(file string) (bool, error) { matched := false file = filepath.FromSlash(file) parentPath := filepath.Dir(file) parentPathDirs := strings.Split(parentPath, string(os.PathSeparator)) for _, pattern := range pm.patterns { // Skip evaluation if this is an inclusion and the filename // already matched the pattern, or it's an exclusion and it has // not matched the pattern yet. if pattern.exclusion != matched { continue } match, err := pattern.match(file) if err != nil { return false, err } if !match && parentPath != "." 
{ // Check to see if the pattern matches one of our parent dirs. for i := range parentPathDirs { match, _ = pattern.match(strings.Join(parentPathDirs[:i+1], string(os.PathSeparator))) if match { break } } } if match { matched = !pattern.exclusion } } return matched, nil } // MatchesUsingParentResult returns true if "file" matches any of the patterns // and isn't excluded by any of the subsequent patterns. The functionality is // the same as Matches, but as an optimization, the caller keeps track of // whether the parent directory matched. // // The "file" argument should be a slash-delimited path. // // MatchesUsingParentResult is not safe to call concurrently. func (pm *PatternMatcher) MatchesUsingParentResult(file string, parentMatched bool) (bool, error) { matched := parentMatched file = filepath.FromSlash(file) for _, pattern := range pm.patterns { // Skip evaluation if this is an inclusion and the filename // already matched the pattern, or it's an exclusion and it has // not matched the pattern yet. if pattern.exclusion != matched { continue } match, err := pattern.match(file) if err != nil { return false, err } if match { matched = !pattern.exclusion } } return matched, nil } // Exclusions returns true if any of the patterns define exclusions func (pm *PatternMatcher) Exclusions() bool { return pm.exclusions } // Patterns returns array of active patterns func (pm *PatternMatcher) Patterns() []*Pattern { return pm.patterns } // Pattern defines a single regexp used to filter file paths. type Pattern struct { cleanedPattern string dirs []string regexp *regexp.Regexp exclusion bool } func (p *Pattern) String() string { return p.cleanedPattern } // Exclusion returns true if this pattern defines exclusion func (p *Pattern) Exclusion() bool { return p.exclusion } func (p *Pattern) match(path string) (bool, error) { if p.regexp == nil { if err := p.compile(); err != nil { return false, filepath.ErrBadPattern } } b := p.regexp.MatchString(path) return b, nil } func (p *Pattern) compile() error { regStr := "^" pattern := p.cleanedPattern // Go through the pattern and convert it to a regexp. // We use a scanner so we can support utf-8 chars. var scan scanner.Scanner scan.Init(strings.NewReader(pattern)) sl := string(os.PathSeparator) escSL := sl if sl == `\` { escSL += `\` } for scan.Peek() != scanner.EOF { ch := scan.Next() if ch == '*' { if scan.Peek() == '*' { // is some flavor of "**" scan.Next() // Treat **/ as ** so eat the "/" if string(scan.Peek()) == sl { scan.Next() } if scan.Peek() == scanner.EOF { // is "**EOF" - to align with .gitignore just accept all regStr += ".*" } else { // is "**" // Note that this allows for any # of /'s (even 0) because // the .* will eat everything, even /'s regStr += "(.*" + escSL + ")?" } } else { // is "*" so map it to anything but "/" regStr += "[^" + escSL + "]*" } } else if ch == '?' { // "?" is any char except "/" regStr += "[^" + escSL + "]" } else if ch == '.' || ch == '$' { // Escape some regexp special chars that have no meaning // in golang's filepath.Match regStr += `\` + string(ch) } else if ch == '\\' { // escape next char. 
Note that a trailing \ in the pattern // will be left alone (but need to escape it) if sl == `\` { // On windows map "\" to "\\", meaning an escaped backslash, // and then just continue because filepath.Match on // Windows doesn't allow escaping at all regStr += escSL continue } if scan.Peek() != scanner.EOF { regStr += `\` + string(scan.Next()) } else { regStr += `\` } } else { regStr += string(ch) } } regStr += "$" re, err := regexp.Compile(regStr) if err != nil { return err } p.regexp = re return nil } // Matches returns true if file matches any of the patterns // and isn't excluded by any of the subsequent patterns. // // This implementation is buggy (it only checks a single parent dir against the // pattern) and will be removed soon. Use MatchesOrParentMatches instead. func Matches(file string, patterns []string) (bool, error) { pm, err := NewPatternMatcher(patterns) if err != nil { return false, err } file = filepath.Clean(file) if file == "." { // Don't let them exclude everything, kind of silly. return false, nil } return pm.Matches(file) } // MatchesOrParentMatches returns true if file matches any of the patterns // and isn't excluded by any of the subsequent patterns. func MatchesOrParentMatches(file string, patterns []string) (bool, error) { pm, err := NewPatternMatcher(patterns) if err != nil { return false, err } file = filepath.Clean(file) if file == "." { // Don't let them exclude everything, kind of silly. return false, nil } return pm.MatchesOrParentMatches(file) } // CopyFile copies from src to dst until either EOF is reached // on src or an error occurs. It verifies src exists and removes // the dst if it exists. func CopyFile(src, dst string) (int64, error) { cleanSrc := filepath.Clean(src) cleanDst := filepath.Clean(dst) if cleanSrc == cleanDst { return 0, nil } sf, err := os.Open(cleanSrc) if err != nil { return 0, err } defer sf.Close() if err := os.Remove(cleanDst); err != nil && !os.IsNotExist(err) { return 0, err } df, err := os.Create(cleanDst) if err != nil { return 0, err } defer df.Close() return io.Copy(df, sf) } // ReadSymlinkedDirectory returns the target directory of a symlink. // The target of the symbolic link may not be a file. func ReadSymlinkedDirectory(path string) (string, error) { var realPath string var err error if realPath, err = filepath.Abs(path); err != nil { return "", fmt.Errorf("unable to get absolute path for %s: %s", path, err) } if realPath, err = filepath.EvalSymlinks(realPath); err != nil { return "", fmt.Errorf("failed to canonicalise path for %s: %s", path, err) } realPathInfo, err := os.Stat(realPath) if err != nil { return "", fmt.Errorf("failed to stat target '%s' of '%s': %s", realPath, path, err) } if !realPathInfo.Mode().IsDir() { return "", fmt.Errorf("canonical path points to a file '%s'", realPath) } return realPath, nil } // CreateIfNotExists creates a file or a directory only if it does not already exist. func CreateIfNotExists(path string, isDir bool) error { if _, err := os.Stat(path); err != nil { if os.IsNotExist(err) { if isDir { return os.MkdirAll(path, 0755) } if err := os.MkdirAll(filepath.Dir(path), 0755); err != nil { return err } f, err := os.OpenFile(path, os.O_CREATE, 0755) if err != nil { return err } f.Close() } } return nil }
aaronlehmann
385ddf6120629b72ed4d8acce4a7a5f74f81b88e
ba2adeebdb8d4ec442f1d3107405ae55d0c1a0c2
What's the use case for not matching parent dirs? I think that would give incorrect behavior with a number of patterns. Are there cases where we know in advance we wouldn't encounter such patterns? Removing `Matches` would complicate code that isn't doing a walk. For example, `removeDockerfile` currently uses `Matches`, and it would need similar logic to the loop in `Matches`. Even `TarWithOptions`, which I modified to use `MatchesUsingParentResult`, uses `Matches` as a convenience on the first file/dir it encounters matching the current `include` prefix.
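For context on the walk-based usage mentioned above, here is a hypothetical sketch (not the actual `TarWithOptions` code) of the caller-side bookkeeping that `MatchesUsingParentResult` expects, where each directory passes its own match result down to its children; the patterns used in `main` are illustrative only.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"

	"github.com/docker/docker/pkg/fileutils"
)

// walk visits dir recursively, carrying the parent's match result so that
// MatchesUsingParentResult does not have to re-check every ancestor path.
func walk(pm *fileutils.PatternMatcher, dir string, parentMatched bool) error {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return err
	}
	for _, e := range entries {
		path := filepath.Join(dir, e.Name())
		matched, err := pm.MatchesUsingParentResult(filepath.ToSlash(path), parentMatched)
		if err != nil {
			return err
		}
		if matched {
			fmt.Println("matched:", path)
		}
		if e.IsDir() {
			// The child directory's own result becomes the
			// parentMatched value for its entries.
			if err := walk(pm, path, matched); err != nil {
				return err
			}
		}
	}
	return nil
}

func main() {
	pm, err := fileutils.NewPatternMatcher([]string{"**/foo", "!**/foo/bar"})
	if err != nil {
		panic(err)
	}
	if err := walk(pm, ".", false); err != nil {
		panic(err)
	}
}
```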
aaronlehmann
4,524
moby/moby
42,676
fileutils: Fix incorrect handling of "**/foo" pattern
`(*PatternMatcher).Matches` includes a special case for when the pattern matches a parent dir, even though it doesn't match the current path. However, it assumes that the parent dir which would match the pattern must have the same number of separators as the pattern itself. This doesn't hold true with a pattern like `**/foo`. A file `foo/bar` would have `len(parentPathDirs) == 1`, which is less than the number of pattern components, `len(pattern.dirs) == 2`... therefore this check would be skipped. Given that `**/foo` matches `foo`, I think it's a bug that the "parent subdir matches" check is being skipped in this case. It seems safer to loop over the parent subdirs and check each against the pattern. It's possible there is a safe optimization to check only a certain subset, but the existing logic seems unsafe. This was found while using the `IncludePatterns` feature of BuildKit's "copy" op. - Fixes: #41433 - related: https://github.com/moby/moby/issues/40319 cc @coryb @tonistiigi
null
2021-07-26 18:36:03+00:00
2021-08-17 02:58:55+00:00
pkg/fileutils/fileutils.go
package fileutils // import "github.com/docker/docker/pkg/fileutils" import ( "errors" "fmt" "io" "os" "path/filepath" "regexp" "strings" "text/scanner" ) // PatternMatcher allows checking paths against a list of patterns type PatternMatcher struct { patterns []*Pattern exclusions bool } // NewPatternMatcher creates a new matcher object for specific patterns that can // be used later to match against patterns against paths func NewPatternMatcher(patterns []string) (*PatternMatcher, error) { pm := &PatternMatcher{ patterns: make([]*Pattern, 0, len(patterns)), } for _, p := range patterns { // Eliminate leading and trailing whitespace. p = strings.TrimSpace(p) if p == "" { continue } p = filepath.Clean(p) newp := &Pattern{} if p[0] == '!' { if len(p) == 1 { return nil, errors.New("illegal exclusion pattern: \"!\"") } newp.exclusion = true p = p[1:] pm.exclusions = true } // Do some syntax checking on the pattern. // filepath's Match() has some really weird rules that are inconsistent // so instead of trying to dup their logic, just call Match() for its // error state and if there is an error in the pattern return it. // If this becomes an issue we can remove this since its really only // needed in the error (syntax) case - which isn't really critical. if _, err := filepath.Match(p, "."); err != nil { return nil, err } newp.cleanedPattern = p newp.dirs = strings.Split(p, string(os.PathSeparator)) pm.patterns = append(pm.patterns, newp) } return pm, nil } // Matches matches path against all the patterns. Matches is not safe to be // called concurrently func (pm *PatternMatcher) Matches(file string) (bool, error) { matched := false file = filepath.FromSlash(file) parentPath := filepath.Dir(file) parentPathDirs := strings.Split(parentPath, string(os.PathSeparator)) for _, pattern := range pm.patterns { negative := false if pattern.exclusion { negative = true } match, err := pattern.match(file) if err != nil { return false, err } if !match && parentPath != "." { // Check to see if the pattern matches one of our parent dirs. if len(pattern.dirs) <= len(parentPathDirs) { match, _ = pattern.match(strings.Join(parentPathDirs[:len(pattern.dirs)], string(os.PathSeparator))) } } if match { matched = !negative } } return matched, nil } // Exclusions returns true if any of the patterns define exclusions func (pm *PatternMatcher) Exclusions() bool { return pm.exclusions } // Patterns returns array of active patterns func (pm *PatternMatcher) Patterns() []*Pattern { return pm.patterns } // Pattern defines a single regexp used to filter file paths. type Pattern struct { cleanedPattern string dirs []string regexp *regexp.Regexp exclusion bool } func (p *Pattern) String() string { return p.cleanedPattern } // Exclusion returns true if this pattern defines exclusion func (p *Pattern) Exclusion() bool { return p.exclusion } func (p *Pattern) match(path string) (bool, error) { if p.regexp == nil { if err := p.compile(); err != nil { return false, filepath.ErrBadPattern } } b := p.regexp.MatchString(path) return b, nil } func (p *Pattern) compile() error { regStr := "^" pattern := p.cleanedPattern // Go through the pattern and convert it to a regexp. // We use a scanner so we can support utf-8 chars. 
var scan scanner.Scanner scan.Init(strings.NewReader(pattern)) sl := string(os.PathSeparator) escSL := sl if sl == `\` { escSL += `\` } for scan.Peek() != scanner.EOF { ch := scan.Next() if ch == '*' { if scan.Peek() == '*' { // is some flavor of "**" scan.Next() // Treat **/ as ** so eat the "/" if string(scan.Peek()) == sl { scan.Next() } if scan.Peek() == scanner.EOF { // is "**EOF" - to align with .gitignore just accept all regStr += ".*" } else { // is "**" // Note that this allows for any # of /'s (even 0) because // the .* will eat everything, even /'s regStr += "(.*" + escSL + ")?" } } else { // is "*" so map it to anything but "/" regStr += "[^" + escSL + "]*" } } else if ch == '?' { // "?" is any char except "/" regStr += "[^" + escSL + "]" } else if ch == '.' || ch == '$' { // Escape some regexp special chars that have no meaning // in golang's filepath.Match regStr += `\` + string(ch) } else if ch == '\\' { // escape next char. Note that a trailing \ in the pattern // will be left alone (but need to escape it) if sl == `\` { // On windows map "\" to "\\", meaning an escaped backslash, // and then just continue because filepath.Match on // Windows doesn't allow escaping at all regStr += escSL continue } if scan.Peek() != scanner.EOF { regStr += `\` + string(scan.Next()) } else { regStr += `\` } } else { regStr += string(ch) } } regStr += "$" re, err := regexp.Compile(regStr) if err != nil { return err } p.regexp = re return nil } // Matches returns true if file matches any of the patterns // and isn't excluded by any of the subsequent patterns. func Matches(file string, patterns []string) (bool, error) { pm, err := NewPatternMatcher(patterns) if err != nil { return false, err } file = filepath.Clean(file) if file == "." { // Don't let them exclude everything, kind of silly. return false, nil } return pm.Matches(file) } // CopyFile copies from src to dst until either EOF is reached // on src or an error occurs. It verifies src exists and removes // the dst if it exists. func CopyFile(src, dst string) (int64, error) { cleanSrc := filepath.Clean(src) cleanDst := filepath.Clean(dst) if cleanSrc == cleanDst { return 0, nil } sf, err := os.Open(cleanSrc) if err != nil { return 0, err } defer sf.Close() if err := os.Remove(cleanDst); err != nil && !os.IsNotExist(err) { return 0, err } df, err := os.Create(cleanDst) if err != nil { return 0, err } defer df.Close() return io.Copy(df, sf) } // ReadSymlinkedDirectory returns the target directory of a symlink. // The target of the symbolic link may not be a file. func ReadSymlinkedDirectory(path string) (string, error) { var realPath string var err error if realPath, err = filepath.Abs(path); err != nil { return "", fmt.Errorf("unable to get absolute path for %s: %s", path, err) } if realPath, err = filepath.EvalSymlinks(realPath); err != nil { return "", fmt.Errorf("failed to canonicalise path for %s: %s", path, err) } realPathInfo, err := os.Stat(realPath) if err != nil { return "", fmt.Errorf("failed to stat target '%s' of '%s': %s", realPath, path, err) } if !realPathInfo.Mode().IsDir() { return "", fmt.Errorf("canonical path points to a file '%s'", realPath) } return realPath, nil } // CreateIfNotExists creates a file or a directory only if it does not already exist. 
func CreateIfNotExists(path string, isDir bool) error { if _, err := os.Stat(path); err != nil { if os.IsNotExist(err) { if isDir { return os.MkdirAll(path, 0755) } if err := os.MkdirAll(filepath.Dir(path), 0755); err != nil { return err } f, err := os.OpenFile(path, os.O_CREATE, 0755) if err != nil { return err } f.Close() } } return nil }
package fileutils // import "github.com/docker/docker/pkg/fileutils" import ( "errors" "fmt" "io" "os" "path/filepath" "regexp" "strings" "text/scanner" ) // PatternMatcher allows checking paths against a list of patterns type PatternMatcher struct { patterns []*Pattern exclusions bool } // NewPatternMatcher creates a new matcher object for specific patterns that can // be used later to match against patterns against paths func NewPatternMatcher(patterns []string) (*PatternMatcher, error) { pm := &PatternMatcher{ patterns: make([]*Pattern, 0, len(patterns)), } for _, p := range patterns { // Eliminate leading and trailing whitespace. p = strings.TrimSpace(p) if p == "" { continue } p = filepath.Clean(p) newp := &Pattern{} if p[0] == '!' { if len(p) == 1 { return nil, errors.New("illegal exclusion pattern: \"!\"") } newp.exclusion = true p = p[1:] pm.exclusions = true } // Do some syntax checking on the pattern. // filepath's Match() has some really weird rules that are inconsistent // so instead of trying to dup their logic, just call Match() for its // error state and if there is an error in the pattern return it. // If this becomes an issue we can remove this since its really only // needed in the error (syntax) case - which isn't really critical. if _, err := filepath.Match(p, "."); err != nil { return nil, err } newp.cleanedPattern = p newp.dirs = strings.Split(p, string(os.PathSeparator)) pm.patterns = append(pm.patterns, newp) } return pm, nil } // Matches returns true if "file" matches any of the patterns // and isn't excluded by any of the subsequent patterns. // // The "file" argument should be a slash-delimited path. // // Matches is not safe to call concurrently. // // This implementation is buggy (it only checks a single parent dir against the // pattern) and will be removed soon. Use either MatchesOrParentMatches or // MatchesUsingParentResult instead. func (pm *PatternMatcher) Matches(file string) (bool, error) { matched := false file = filepath.FromSlash(file) parentPath := filepath.Dir(file) parentPathDirs := strings.Split(parentPath, string(os.PathSeparator)) for _, pattern := range pm.patterns { // Skip evaluation if this is an inclusion and the filename // already matched the pattern, or it's an exclusion and it has // not matched the pattern yet. if pattern.exclusion != matched { continue } match, err := pattern.match(file) if err != nil { return false, err } if !match && parentPath != "." { // Check to see if the pattern matches one of our parent dirs. if len(pattern.dirs) <= len(parentPathDirs) { match, _ = pattern.match(strings.Join(parentPathDirs[:len(pattern.dirs)], string(os.PathSeparator))) } } if match { matched = !pattern.exclusion } } return matched, nil } // MatchesOrParentMatches returns true if "file" matches any of the patterns // and isn't excluded by any of the subsequent patterns. // // The "file" argument should be a slash-delimited path. // // Matches is not safe to call concurrently. func (pm *PatternMatcher) MatchesOrParentMatches(file string) (bool, error) { matched := false file = filepath.FromSlash(file) parentPath := filepath.Dir(file) parentPathDirs := strings.Split(parentPath, string(os.PathSeparator)) for _, pattern := range pm.patterns { // Skip evaluation if this is an inclusion and the filename // already matched the pattern, or it's an exclusion and it has // not matched the pattern yet. if pattern.exclusion != matched { continue } match, err := pattern.match(file) if err != nil { return false, err } if !match && parentPath != "." 
{ // Check to see if the pattern matches one of our parent dirs. for i := range parentPathDirs { match, _ = pattern.match(strings.Join(parentPathDirs[:i+1], string(os.PathSeparator))) if match { break } } } if match { matched = !pattern.exclusion } } return matched, nil } // MatchesUsingParentResult returns true if "file" matches any of the patterns // and isn't excluded by any of the subsequent patterns. The functionality is // the same as Matches, but as an optimization, the caller keeps track of // whether the parent directory matched. // // The "file" argument should be a slash-delimited path. // // MatchesUsingParentResult is not safe to call concurrently. func (pm *PatternMatcher) MatchesUsingParentResult(file string, parentMatched bool) (bool, error) { matched := parentMatched file = filepath.FromSlash(file) for _, pattern := range pm.patterns { // Skip evaluation if this is an inclusion and the filename // already matched the pattern, or it's an exclusion and it has // not matched the pattern yet. if pattern.exclusion != matched { continue } match, err := pattern.match(file) if err != nil { return false, err } if match { matched = !pattern.exclusion } } return matched, nil } // Exclusions returns true if any of the patterns define exclusions func (pm *PatternMatcher) Exclusions() bool { return pm.exclusions } // Patterns returns array of active patterns func (pm *PatternMatcher) Patterns() []*Pattern { return pm.patterns } // Pattern defines a single regexp used to filter file paths. type Pattern struct { cleanedPattern string dirs []string regexp *regexp.Regexp exclusion bool } func (p *Pattern) String() string { return p.cleanedPattern } // Exclusion returns true if this pattern defines exclusion func (p *Pattern) Exclusion() bool { return p.exclusion } func (p *Pattern) match(path string) (bool, error) { if p.regexp == nil { if err := p.compile(); err != nil { return false, filepath.ErrBadPattern } } b := p.regexp.MatchString(path) return b, nil } func (p *Pattern) compile() error { regStr := "^" pattern := p.cleanedPattern // Go through the pattern and convert it to a regexp. // We use a scanner so we can support utf-8 chars. var scan scanner.Scanner scan.Init(strings.NewReader(pattern)) sl := string(os.PathSeparator) escSL := sl if sl == `\` { escSL += `\` } for scan.Peek() != scanner.EOF { ch := scan.Next() if ch == '*' { if scan.Peek() == '*' { // is some flavor of "**" scan.Next() // Treat **/ as ** so eat the "/" if string(scan.Peek()) == sl { scan.Next() } if scan.Peek() == scanner.EOF { // is "**EOF" - to align with .gitignore just accept all regStr += ".*" } else { // is "**" // Note that this allows for any # of /'s (even 0) because // the .* will eat everything, even /'s regStr += "(.*" + escSL + ")?" } } else { // is "*" so map it to anything but "/" regStr += "[^" + escSL + "]*" } } else if ch == '?' { // "?" is any char except "/" regStr += "[^" + escSL + "]" } else if ch == '.' || ch == '$' { // Escape some regexp special chars that have no meaning // in golang's filepath.Match regStr += `\` + string(ch) } else if ch == '\\' { // escape next char. 
Note that a trailing \ in the pattern // will be left alone (but need to escape it) if sl == `\` { // On windows map "\" to "\\", meaning an escaped backslash, // and then just continue because filepath.Match on // Windows doesn't allow escaping at all regStr += escSL continue } if scan.Peek() != scanner.EOF { regStr += `\` + string(scan.Next()) } else { regStr += `\` } } else { regStr += string(ch) } } regStr += "$" re, err := regexp.Compile(regStr) if err != nil { return err } p.regexp = re return nil } // Matches returns true if file matches any of the patterns // and isn't excluded by any of the subsequent patterns. // // This implementation is buggy (it only checks a single parent dir against the // pattern) and will be removed soon. Use MatchesOrParentMatches instead. func Matches(file string, patterns []string) (bool, error) { pm, err := NewPatternMatcher(patterns) if err != nil { return false, err } file = filepath.Clean(file) if file == "." { // Don't let them exclude everything, kind of silly. return false, nil } return pm.Matches(file) } // MatchesOrParentMatches returns true if file matches any of the patterns // and isn't excluded by any of the subsequent patterns. func MatchesOrParentMatches(file string, patterns []string) (bool, error) { pm, err := NewPatternMatcher(patterns) if err != nil { return false, err } file = filepath.Clean(file) if file == "." { // Don't let them exclude everything, kind of silly. return false, nil } return pm.MatchesOrParentMatches(file) } // CopyFile copies from src to dst until either EOF is reached // on src or an error occurs. It verifies src exists and removes // the dst if it exists. func CopyFile(src, dst string) (int64, error) { cleanSrc := filepath.Clean(src) cleanDst := filepath.Clean(dst) if cleanSrc == cleanDst { return 0, nil } sf, err := os.Open(cleanSrc) if err != nil { return 0, err } defer sf.Close() if err := os.Remove(cleanDst); err != nil && !os.IsNotExist(err) { return 0, err } df, err := os.Create(cleanDst) if err != nil { return 0, err } defer df.Close() return io.Copy(df, sf) } // ReadSymlinkedDirectory returns the target directory of a symlink. // The target of the symbolic link may not be a file. func ReadSymlinkedDirectory(path string) (string, error) { var realPath string var err error if realPath, err = filepath.Abs(path); err != nil { return "", fmt.Errorf("unable to get absolute path for %s: %s", path, err) } if realPath, err = filepath.EvalSymlinks(realPath); err != nil { return "", fmt.Errorf("failed to canonicalise path for %s: %s", path, err) } realPathInfo, err := os.Stat(realPath) if err != nil { return "", fmt.Errorf("failed to stat target '%s' of '%s': %s", realPath, path, err) } if !realPathInfo.Mode().IsDir() { return "", fmt.Errorf("canonical path points to a file '%s'", realPath) } return realPath, nil } // CreateIfNotExists creates a file or a directory only if it does not already exist. func CreateIfNotExists(path string, isDir bool) error { if _, err := os.Stat(path); err != nil { if os.IsNotExist(err) { if isDir { return os.MkdirAll(path, 0755) } if err := os.MkdirAll(filepath.Dir(path), 0755); err != nil { return err } f, err := os.OpenFile(path, os.O_CREATE, 0755) if err != nil { return err } f.Close() } } return nil }
aaronlehmann
385ddf6120629b72ed4d8acce4a7a5f74f81b88e
ba2adeebdb8d4ec442f1d3107405ae55d0c1a0c2
I think it is unexpected that this function now has `m*n` complexity, and this would make sure it doesn't go unnoticed. If you think there are cases that need parent matches but would still prefer to use `Matches` (e.g. they only check a single file anyway), maybe leave it as is but rename it to `MatchesOrAnyParentMatches`.
tonistiigi
4,525
moby/moby
42,676
fileutils: Fix incorrect handling of "**/foo" pattern
`(*PatternMatcher).Matches` includes a special case for when the pattern matches a parent dir, even though it doesn't match the current path. However, it assumes that the parent dir which would match the pattern must have the same number of separators as the pattern itself. This doesn't hold true with a pattern like `**/foo`. A file `foo/bar` would have `len(parentPathDirs) == 1`, which is less than the number of pattern components, `len(pattern.dirs) == 2`... therefore this check would be skipped. Given that `**/foo` matches `foo`, I think it's a bug that the "parent subdir matches" check is being skipped in this case. It seems safer to loop over the parent subdirs and check each against the pattern. It's possible there is a safe optimization to check only a certain subset, but the existing logic seems unsafe. This was found while using the `IncludePatterns` feature of BuildKit's "copy" op. - Fixes: #41433 - related: https://github.com/moby/moby/issues/40319 cc @coryb @tonistiigi
null
2021-07-26 18:36:03+00:00
2021-08-17 02:58:55+00:00
pkg/fileutils/fileutils.go
package fileutils // import "github.com/docker/docker/pkg/fileutils" import ( "errors" "fmt" "io" "os" "path/filepath" "regexp" "strings" "text/scanner" ) // PatternMatcher allows checking paths against a list of patterns type PatternMatcher struct { patterns []*Pattern exclusions bool } // NewPatternMatcher creates a new matcher object for specific patterns that can // be used later to match against patterns against paths func NewPatternMatcher(patterns []string) (*PatternMatcher, error) { pm := &PatternMatcher{ patterns: make([]*Pattern, 0, len(patterns)), } for _, p := range patterns { // Eliminate leading and trailing whitespace. p = strings.TrimSpace(p) if p == "" { continue } p = filepath.Clean(p) newp := &Pattern{} if p[0] == '!' { if len(p) == 1 { return nil, errors.New("illegal exclusion pattern: \"!\"") } newp.exclusion = true p = p[1:] pm.exclusions = true } // Do some syntax checking on the pattern. // filepath's Match() has some really weird rules that are inconsistent // so instead of trying to dup their logic, just call Match() for its // error state and if there is an error in the pattern return it. // If this becomes an issue we can remove this since its really only // needed in the error (syntax) case - which isn't really critical. if _, err := filepath.Match(p, "."); err != nil { return nil, err } newp.cleanedPattern = p newp.dirs = strings.Split(p, string(os.PathSeparator)) pm.patterns = append(pm.patterns, newp) } return pm, nil } // Matches matches path against all the patterns. Matches is not safe to be // called concurrently func (pm *PatternMatcher) Matches(file string) (bool, error) { matched := false file = filepath.FromSlash(file) parentPath := filepath.Dir(file) parentPathDirs := strings.Split(parentPath, string(os.PathSeparator)) for _, pattern := range pm.patterns { negative := false if pattern.exclusion { negative = true } match, err := pattern.match(file) if err != nil { return false, err } if !match && parentPath != "." { // Check to see if the pattern matches one of our parent dirs. if len(pattern.dirs) <= len(parentPathDirs) { match, _ = pattern.match(strings.Join(parentPathDirs[:len(pattern.dirs)], string(os.PathSeparator))) } } if match { matched = !negative } } return matched, nil } // Exclusions returns true if any of the patterns define exclusions func (pm *PatternMatcher) Exclusions() bool { return pm.exclusions } // Patterns returns array of active patterns func (pm *PatternMatcher) Patterns() []*Pattern { return pm.patterns } // Pattern defines a single regexp used to filter file paths. type Pattern struct { cleanedPattern string dirs []string regexp *regexp.Regexp exclusion bool } func (p *Pattern) String() string { return p.cleanedPattern } // Exclusion returns true if this pattern defines exclusion func (p *Pattern) Exclusion() bool { return p.exclusion } func (p *Pattern) match(path string) (bool, error) { if p.regexp == nil { if err := p.compile(); err != nil { return false, filepath.ErrBadPattern } } b := p.regexp.MatchString(path) return b, nil } func (p *Pattern) compile() error { regStr := "^" pattern := p.cleanedPattern // Go through the pattern and convert it to a regexp. // We use a scanner so we can support utf-8 chars. 
var scan scanner.Scanner scan.Init(strings.NewReader(pattern)) sl := string(os.PathSeparator) escSL := sl if sl == `\` { escSL += `\` } for scan.Peek() != scanner.EOF { ch := scan.Next() if ch == '*' { if scan.Peek() == '*' { // is some flavor of "**" scan.Next() // Treat **/ as ** so eat the "/" if string(scan.Peek()) == sl { scan.Next() } if scan.Peek() == scanner.EOF { // is "**EOF" - to align with .gitignore just accept all regStr += ".*" } else { // is "**" // Note that this allows for any # of /'s (even 0) because // the .* will eat everything, even /'s regStr += "(.*" + escSL + ")?" } } else { // is "*" so map it to anything but "/" regStr += "[^" + escSL + "]*" } } else if ch == '?' { // "?" is any char except "/" regStr += "[^" + escSL + "]" } else if ch == '.' || ch == '$' { // Escape some regexp special chars that have no meaning // in golang's filepath.Match regStr += `\` + string(ch) } else if ch == '\\' { // escape next char. Note that a trailing \ in the pattern // will be left alone (but need to escape it) if sl == `\` { // On windows map "\" to "\\", meaning an escaped backslash, // and then just continue because filepath.Match on // Windows doesn't allow escaping at all regStr += escSL continue } if scan.Peek() != scanner.EOF { regStr += `\` + string(scan.Next()) } else { regStr += `\` } } else { regStr += string(ch) } } regStr += "$" re, err := regexp.Compile(regStr) if err != nil { return err } p.regexp = re return nil } // Matches returns true if file matches any of the patterns // and isn't excluded by any of the subsequent patterns. func Matches(file string, patterns []string) (bool, error) { pm, err := NewPatternMatcher(patterns) if err != nil { return false, err } file = filepath.Clean(file) if file == "." { // Don't let them exclude everything, kind of silly. return false, nil } return pm.Matches(file) } // CopyFile copies from src to dst until either EOF is reached // on src or an error occurs. It verifies src exists and removes // the dst if it exists. func CopyFile(src, dst string) (int64, error) { cleanSrc := filepath.Clean(src) cleanDst := filepath.Clean(dst) if cleanSrc == cleanDst { return 0, nil } sf, err := os.Open(cleanSrc) if err != nil { return 0, err } defer sf.Close() if err := os.Remove(cleanDst); err != nil && !os.IsNotExist(err) { return 0, err } df, err := os.Create(cleanDst) if err != nil { return 0, err } defer df.Close() return io.Copy(df, sf) } // ReadSymlinkedDirectory returns the target directory of a symlink. // The target of the symbolic link may not be a file. func ReadSymlinkedDirectory(path string) (string, error) { var realPath string var err error if realPath, err = filepath.Abs(path); err != nil { return "", fmt.Errorf("unable to get absolute path for %s: %s", path, err) } if realPath, err = filepath.EvalSymlinks(realPath); err != nil { return "", fmt.Errorf("failed to canonicalise path for %s: %s", path, err) } realPathInfo, err := os.Stat(realPath) if err != nil { return "", fmt.Errorf("failed to stat target '%s' of '%s': %s", realPath, path, err) } if !realPathInfo.Mode().IsDir() { return "", fmt.Errorf("canonical path points to a file '%s'", realPath) } return realPath, nil } // CreateIfNotExists creates a file or a directory only if it does not already exist. 
func CreateIfNotExists(path string, isDir bool) error { if _, err := os.Stat(path); err != nil { if os.IsNotExist(err) { if isDir { return os.MkdirAll(path, 0755) } if err := os.MkdirAll(filepath.Dir(path), 0755); err != nil { return err } f, err := os.OpenFile(path, os.O_CREATE, 0755) if err != nil { return err } f.Close() } } return nil }
package fileutils // import "github.com/docker/docker/pkg/fileutils" import ( "errors" "fmt" "io" "os" "path/filepath" "regexp" "strings" "text/scanner" ) // PatternMatcher allows checking paths against a list of patterns type PatternMatcher struct { patterns []*Pattern exclusions bool } // NewPatternMatcher creates a new matcher object for specific patterns that can // be used later to match against patterns against paths func NewPatternMatcher(patterns []string) (*PatternMatcher, error) { pm := &PatternMatcher{ patterns: make([]*Pattern, 0, len(patterns)), } for _, p := range patterns { // Eliminate leading and trailing whitespace. p = strings.TrimSpace(p) if p == "" { continue } p = filepath.Clean(p) newp := &Pattern{} if p[0] == '!' { if len(p) == 1 { return nil, errors.New("illegal exclusion pattern: \"!\"") } newp.exclusion = true p = p[1:] pm.exclusions = true } // Do some syntax checking on the pattern. // filepath's Match() has some really weird rules that are inconsistent // so instead of trying to dup their logic, just call Match() for its // error state and if there is an error in the pattern return it. // If this becomes an issue we can remove this since its really only // needed in the error (syntax) case - which isn't really critical. if _, err := filepath.Match(p, "."); err != nil { return nil, err } newp.cleanedPattern = p newp.dirs = strings.Split(p, string(os.PathSeparator)) pm.patterns = append(pm.patterns, newp) } return pm, nil } // Matches returns true if "file" matches any of the patterns // and isn't excluded by any of the subsequent patterns. // // The "file" argument should be a slash-delimited path. // // Matches is not safe to call concurrently. // // This implementation is buggy (it only checks a single parent dir against the // pattern) and will be removed soon. Use either MatchesOrParentMatches or // MatchesUsingParentResult instead. func (pm *PatternMatcher) Matches(file string) (bool, error) { matched := false file = filepath.FromSlash(file) parentPath := filepath.Dir(file) parentPathDirs := strings.Split(parentPath, string(os.PathSeparator)) for _, pattern := range pm.patterns { // Skip evaluation if this is an inclusion and the filename // already matched the pattern, or it's an exclusion and it has // not matched the pattern yet. if pattern.exclusion != matched { continue } match, err := pattern.match(file) if err != nil { return false, err } if !match && parentPath != "." { // Check to see if the pattern matches one of our parent dirs. if len(pattern.dirs) <= len(parentPathDirs) { match, _ = pattern.match(strings.Join(parentPathDirs[:len(pattern.dirs)], string(os.PathSeparator))) } } if match { matched = !pattern.exclusion } } return matched, nil } // MatchesOrParentMatches returns true if "file" matches any of the patterns // and isn't excluded by any of the subsequent patterns. // // The "file" argument should be a slash-delimited path. // // Matches is not safe to call concurrently. func (pm *PatternMatcher) MatchesOrParentMatches(file string) (bool, error) { matched := false file = filepath.FromSlash(file) parentPath := filepath.Dir(file) parentPathDirs := strings.Split(parentPath, string(os.PathSeparator)) for _, pattern := range pm.patterns { // Skip evaluation if this is an inclusion and the filename // already matched the pattern, or it's an exclusion and it has // not matched the pattern yet. if pattern.exclusion != matched { continue } match, err := pattern.match(file) if err != nil { return false, err } if !match && parentPath != "." 
{ // Check to see if the pattern matches one of our parent dirs. for i := range parentPathDirs { match, _ = pattern.match(strings.Join(parentPathDirs[:i+1], string(os.PathSeparator))) if match { break } } } if match { matched = !pattern.exclusion } } return matched, nil } // MatchesUsingParentResult returns true if "file" matches any of the patterns // and isn't excluded by any of the subsequent patterns. The functionality is // the same as Matches, but as an optimization, the caller keeps track of // whether the parent directory matched. // // The "file" argument should be a slash-delimited path. // // MatchesUsingParentResult is not safe to call concurrently. func (pm *PatternMatcher) MatchesUsingParentResult(file string, parentMatched bool) (bool, error) { matched := parentMatched file = filepath.FromSlash(file) for _, pattern := range pm.patterns { // Skip evaluation if this is an inclusion and the filename // already matched the pattern, or it's an exclusion and it has // not matched the pattern yet. if pattern.exclusion != matched { continue } match, err := pattern.match(file) if err != nil { return false, err } if match { matched = !pattern.exclusion } } return matched, nil } // Exclusions returns true if any of the patterns define exclusions func (pm *PatternMatcher) Exclusions() bool { return pm.exclusions } // Patterns returns array of active patterns func (pm *PatternMatcher) Patterns() []*Pattern { return pm.patterns } // Pattern defines a single regexp used to filter file paths. type Pattern struct { cleanedPattern string dirs []string regexp *regexp.Regexp exclusion bool } func (p *Pattern) String() string { return p.cleanedPattern } // Exclusion returns true if this pattern defines exclusion func (p *Pattern) Exclusion() bool { return p.exclusion } func (p *Pattern) match(path string) (bool, error) { if p.regexp == nil { if err := p.compile(); err != nil { return false, filepath.ErrBadPattern } } b := p.regexp.MatchString(path) return b, nil } func (p *Pattern) compile() error { regStr := "^" pattern := p.cleanedPattern // Go through the pattern and convert it to a regexp. // We use a scanner so we can support utf-8 chars. var scan scanner.Scanner scan.Init(strings.NewReader(pattern)) sl := string(os.PathSeparator) escSL := sl if sl == `\` { escSL += `\` } for scan.Peek() != scanner.EOF { ch := scan.Next() if ch == '*' { if scan.Peek() == '*' { // is some flavor of "**" scan.Next() // Treat **/ as ** so eat the "/" if string(scan.Peek()) == sl { scan.Next() } if scan.Peek() == scanner.EOF { // is "**EOF" - to align with .gitignore just accept all regStr += ".*" } else { // is "**" // Note that this allows for any # of /'s (even 0) because // the .* will eat everything, even /'s regStr += "(.*" + escSL + ")?" } } else { // is "*" so map it to anything but "/" regStr += "[^" + escSL + "]*" } } else if ch == '?' { // "?" is any char except "/" regStr += "[^" + escSL + "]" } else if ch == '.' || ch == '$' { // Escape some regexp special chars that have no meaning // in golang's filepath.Match regStr += `\` + string(ch) } else if ch == '\\' { // escape next char. 
Note that a trailing \ in the pattern // will be left alone (but need to escape it) if sl == `\` { // On windows map "\" to "\\", meaning an escaped backslash, // and then just continue because filepath.Match on // Windows doesn't allow escaping at all regStr += escSL continue } if scan.Peek() != scanner.EOF { regStr += `\` + string(scan.Next()) } else { regStr += `\` } } else { regStr += string(ch) } } regStr += "$" re, err := regexp.Compile(regStr) if err != nil { return err } p.regexp = re return nil } // Matches returns true if file matches any of the patterns // and isn't excluded by any of the subsequent patterns. // // This implementation is buggy (it only checks a single parent dir against the // pattern) and will be removed soon. Use MatchesOrParentMatches instead. func Matches(file string, patterns []string) (bool, error) { pm, err := NewPatternMatcher(patterns) if err != nil { return false, err } file = filepath.Clean(file) if file == "." { // Don't let them exclude everything, kind of silly. return false, nil } return pm.Matches(file) } // MatchesOrParentMatches returns true if file matches any of the patterns // and isn't excluded by any of the subsequent patterns. func MatchesOrParentMatches(file string, patterns []string) (bool, error) { pm, err := NewPatternMatcher(patterns) if err != nil { return false, err } file = filepath.Clean(file) if file == "." { // Don't let them exclude everything, kind of silly. return false, nil } return pm.MatchesOrParentMatches(file) } // CopyFile copies from src to dst until either EOF is reached // on src or an error occurs. It verifies src exists and removes // the dst if it exists. func CopyFile(src, dst string) (int64, error) { cleanSrc := filepath.Clean(src) cleanDst := filepath.Clean(dst) if cleanSrc == cleanDst { return 0, nil } sf, err := os.Open(cleanSrc) if err != nil { return 0, err } defer sf.Close() if err := os.Remove(cleanDst); err != nil && !os.IsNotExist(err) { return 0, err } df, err := os.Create(cleanDst) if err != nil { return 0, err } defer df.Close() return io.Copy(df, sf) } // ReadSymlinkedDirectory returns the target directory of a symlink. // The target of the symbolic link may not be a file. func ReadSymlinkedDirectory(path string) (string, error) { var realPath string var err error if realPath, err = filepath.Abs(path); err != nil { return "", fmt.Errorf("unable to get absolute path for %s: %s", path, err) } if realPath, err = filepath.EvalSymlinks(realPath); err != nil { return "", fmt.Errorf("failed to canonicalise path for %s: %s", path, err) } realPathInfo, err := os.Stat(realPath) if err != nil { return "", fmt.Errorf("failed to stat target '%s' of '%s': %s", realPath, path, err) } if !realPathInfo.Mode().IsDir() { return "", fmt.Errorf("canonical path points to a file '%s'", realPath) } return realPath, nil } // CreateIfNotExists creates a file or a directory only if it does not already exist. func CreateIfNotExists(path string, isDir bool) error { if _, err := os.Stat(path); err != nil { if os.IsNotExist(err) { if isDir { return os.MkdirAll(path, 0755) } if err := os.MkdirAll(filepath.Dir(path), 0755); err != nil { return err } f, err := os.OpenFile(path, os.O_CREATE, 0755) if err != nil { return err } f.Close() } } return nil }
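The new `MatchesUsingParentResult` helper shifts the parent bookkeeping to the caller. The sketch below shows one way a caller might use it during a directory walk; `walkWithParentResult` and its map of parent results are illustrative assumptions, not code from this repository:

```go
package main

import (
	"fmt"
	"io/fs"
	"path"
	"path/filepath"

	"github.com/docker/docker/pkg/fileutils"
)

// walkWithParentResult is a hypothetical helper (not part of the package):
// it records each directory's match result and feeds it back in for that
// directory's children, which is the optimization MatchesUsingParentResult
// is meant to enable.
func walkWithParentResult(root string, pm *fileutils.PatternMatcher) error {
	parentMatched := map[string]bool{".": false}
	return filepath.WalkDir(root, func(p string, d fs.DirEntry, err error) error {
		if err != nil {
			return err
		}
		rel, err := filepath.Rel(root, p)
		if err != nil {
			return err
		}
		rel = filepath.ToSlash(rel) // the matcher expects slash-delimited paths
		if rel == "." {
			return nil
		}
		matched, err := pm.MatchesUsingParentResult(rel, parentMatched[path.Dir(rel)])
		if err != nil {
			return err
		}
		if d.IsDir() {
			parentMatched[rel] = matched // children will reuse this result
		}
		fmt.Println(rel, matched)
		return nil
	})
}

func main() {
	pm, err := fileutils.NewPatternMatcher([]string{"**/foo", "!**/foo/bar"})
	if err != nil {
		panic(err)
	}
	if err := walkWithParentResult(".", pm); err != nil {
		panic(err)
	}
}
```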
aaronlehmann
385ddf6120629b72ed4d8acce4a7a5f74f81b88e
ba2adeebdb8d4ec442f1d3107405ae55d0c1a0c2
Renamed it. I kept the original (buggy) implementation as well because some vendored code calls it. We can remove it as a second step.
aaronlehmann
4,526
moby/moby
42,674
Dockerfile: simplify utility-install script, and update gotestsum to v1.7.0
### Dockerfile: use version for some utilities instead of commit-sha The golangci-lint, gotestsum, shfmt, and vndr utilities should generally be ok to be pinned by version instead of a specific sha. Also rename the corresponding env-vars / build-args accordingly: - GOLANGCI_LINT_COMMIT -> GOLANGCI_LINT_VERSION - GOTESTSUM_COMMIT -> GOTESTSUM_VERSION - SHFMT_COMMIT -> SHFMT_VERSION - VNDR_COMMIT -> VNDR_VERSION - CONTAINERD_COMMIT -> CONTAINERD_VERSION - RUNC_COMMIT -> RUNC_VERSION - ROOTLESS_COMMIT -> ROOTLESS_VERSION ### Dockerfile: use "go install" to install utilities ### Dockerfile: remove GOPROXY override (was for go < 1.14) ### Dockerfile: update gotestsum to v1.7.0 **- Description for the changelog** <!-- Write a short (one line) summary that describes the changes in this pull request for inclusion in the changelog: --> **- A picture of a cute animal (not mandatory but encouraged)**
null
2021-07-26 13:02:55+00:00
2021-09-02 21:24:14+00:00
hack/dockerfile/install/rootlesskit.installer
#!/bin/sh # v0.14.4 : "${ROOTLESSKIT_COMMIT:=87d443683ac1e8aba4110b8081f15aaae432aaa2}" install_rootlesskit() { case "$1" in "dynamic") install_rootlesskit_dynamic return ;; "") export CGO_ENABLED=0 _install_rootlesskit ;; *) echo 'Usage: $0 [dynamic]' ;; esac } install_rootlesskit_dynamic() { export ROOTLESSKIT_LDFLAGS="-linkmode=external" install_rootlesskit export BUILD_MODE=${GO_BUILDMODE} _install_rootlesskit } _install_rootlesskit() ( echo "Install rootlesskit version $ROOTLESSKIT_COMMIT" git clone https://github.com/rootless-containers/rootlesskit.git "$GOPATH/src/github.com/rootless-containers/rootlesskit" cd "$GOPATH/src/github.com/rootless-containers/rootlesskit" || exit 1 git checkout -q "$ROOTLESSKIT_COMMIT" export GO111MODULE=on # TODO remove GOPROXY override once we updated to Go 1.14+ # Using goproxy instead of "direct" to work around an issue in go mod # on Go 1.13 not working with older git versions (default version on # CentOS 7 is git 1.8), see https://github.com/golang/go/issues/38373 export GOPROXY="https://proxy.golang.org" for f in rootlesskit rootlesskit-docker-proxy; do go build $BUILD_MODE -ldflags="$ROOTLESSKIT_LDFLAGS" -o "${PREFIX}/$f" github.com/rootless-containers/rootlesskit/cmd/$f done )
#!/bin/sh : "${ROOTLESSKIT_VERSION:=v0.14.4}" install_rootlesskit() { case "$1" in "dynamic") install_rootlesskit_dynamic return ;; "") export CGO_ENABLED=0 _install_rootlesskit ;; *) echo 'Usage: $0 [dynamic]' ;; esac } install_rootlesskit_dynamic() { export ROOTLESSKIT_LDFLAGS="-linkmode=external" install_rootlesskit export BUILD_MODE=${GO_BUILDMODE} _install_rootlesskit } _install_rootlesskit() ( echo "Install rootlesskit version ${ROOTLESSKIT_VERSION}" for f in rootlesskit rootlesskit-docker-proxy; do GOBIN="${PREFIX}" GO111MODULE=on go install ${BUILD_MODE} -ldflags="$ROOTLESSKIT_LDFLAGS" "github.com/rootless-containers/rootlesskit/cmd/${f}@${ROOTLESSKIT_VERSION}" done )
thaJeztah
772e25fa9f00577ba9f6641530e5aad5ec5ff84c
5176095455642c30642efacf6f35afc7c6dede92
Looks like this issue was not fixed in Go 1.14. Go modules still don't work (without using the proxy) on CentOS 7 / older git versions; https://github.com/docker/docker-ce-packaging/pull/553#issuecomment-913410815
thaJeztah
4,527
moby/moby
42,674
Dockerfile: simplify utility-install script, and update gotestsum to v1.7.0
### Dockerfile: use version for some utilities instead of commit-sha The golangci-lint, gotestsum, shfmt, and vndr utilities should generally be ok to be pinned by version instead of a specific sha. Also rename the corresponding env-vars / build-args accordingly: - GOLANGCI_LINT_COMMIT -> GOLANGCI_LINT_VERSION - GOTESTSUM_COMMIT -> GOTESTSUM_VERSION - SHFMT_COMMIT -> SHFMT_VERSION - VNDR_COMMIT -> VNDR_VERSION - CONTAINERD_COMMIT -> CONTAINERD_VERSION - RUNC_COMMIT -> RUNC_VERSION - ROOTLESS_COMMIT -> ROOTLESS_VERSION ### Dockerfile: use "go install" to install utilities ### Dockerfile: remove GOPROXY override (was for go < 1.14) ### Dockerfile: update gotestsum to v1.7.0 **- Description for the changelog** <!-- Write a short (one line) summary that describes the changes in this pull request for inclusion in the changelog: --> **- A picture of a cute animal (not mandatory but encouraged)**
null
2021-07-26 13:02:55+00:00
2021-09-02 21:24:14+00:00
hack/dockerfile/install/runc.installer
#!/bin/sh # When updating RUNC_COMMIT, also update runc in vendor.conf accordingly # The version of runc should match the version that is used by the containerd # version that is used. If you need to update runc, open a pull request in # the containerd project first, and update both after that is merged. : ${RUNC_COMMIT:=52b36a2dd837e8462de8e01458bf02cf9eea47dd} # v1.0.2 install_runc() { # If using RHEL7 kernels (3.10.0 el7), disable kmem accounting/limiting if uname -r | grep -q '^3\.10\.0.*\.el7\.'; then : ${RUNC_NOKMEM='nokmem'} fi # Do not build with ambient capabilities support RUNC_BUILDTAGS="${RUNC_BUILDTAGS:-"seccomp $RUNC_NOKMEM"}" echo "Install runc version $RUNC_COMMIT (build tags: $RUNC_BUILDTAGS)" git clone https://github.com/opencontainers/runc.git "$GOPATH/src/github.com/opencontainers/runc" cd "$GOPATH/src/github.com/opencontainers/runc" git checkout -q "$RUNC_COMMIT" if [ -z "$1" ]; then target=static else target="$1" fi make BUILDTAGS="$RUNC_BUILDTAGS" "$target" mkdir -p "${PREFIX}" cp runc "${PREFIX}/runc" }
#!/bin/sh set -e # RUNC_VERSION specifies the version of runc to install from the # https://github.com/opencontainers/runc repository. # # The version of runc should match the version that is used by the containerd # version that is used. If you need to update runc, open a pull request in # the containerd project first, and update both after that is merged. # # When updating RUNC_VERSION, consider updating runc in vendor.conf accordingly : "${RUNC_VERSION:=v1.0.2}" install_runc() { RUNC_BUILDTAGS="${RUNC_BUILDTAGS:-"seccomp"}" echo "Install runc version $RUNC_VERSION (build tags: $RUNC_BUILDTAGS)" git clone https://github.com/opencontainers/runc.git "$GOPATH/src/github.com/opencontainers/runc" cd "$GOPATH/src/github.com/opencontainers/runc" git checkout -q "$RUNC_VERSION" if [ -z "$1" ]; then target=static else target="$1" fi make BUILDTAGS="$RUNC_BUILDTAGS" "$target" mkdir -p "${PREFIX}" cp runc "${PREFIX}/runc" }
thaJeztah
772e25fa9f00577ba9f6641530e5aad5ec5ff84c
5176095455642c30642efacf6f35afc7c6dede92
I think we can also consider removing `RUNC_BUILDTAGS` now (assuming that `make static` does the right thing w.r.t. `seccomp`) 🤔
thaJeztah
4,528
moby/moby
42,674
Dockerfile: simplify utility-install script, and update gotestsum to v1.7.0
### Dockerfile: use version for some utilities instead of commit-sha The golangci-lint, gotestsum, shfmt, and vndr utilities should generally be ok to be pinned by version instead of a specific sha. Also rename the corresponding env-vars / build-args accordingly: - GOLANGCI_LINT_COMMIT -> GOLANGCI_LINT_VERSION - GOTESTSUM_COMMIT -> GOTESTSUM_VERSION - SHFMT_COMMIT -> SHFMT_VERSION - VNDR_COMMIT -> VNDR_VERSION - CONTAINERD_COMMIT -> CONTAINERD_VERSION - RUNC_COMMIT -> RUNC_VERSION - ROOTLESS_COMMIT -> ROOTLESS_VERSION ### Dockerfile: use "go install" to install utilities ### Dockerfile: remove GOPROXY override (was for go < 1.14) ### Dockerfile: update gotestsum to v1.7.0 **- Description for the changelog** <!-- Write a short (one line) summary that describes the changes in this pull request for inclusion in the changelog: --> **- A picture of a cute animal (not mandatory but encouraged)**
null
2021-07-26 13:02:55+00:00
2021-09-02 21:24:14+00:00
hack/dockerfile/install/runc.installer
#!/bin/sh # When updating RUNC_COMMIT, also update runc in vendor.conf accordingly # The version of runc should match the version that is used by the containerd # version that is used. If you need to update runc, open a pull request in # the containerd project first, and update both after that is merged. : ${RUNC_COMMIT:=52b36a2dd837e8462de8e01458bf02cf9eea47dd} # v1.0.2 install_runc() { # If using RHEL7 kernels (3.10.0 el7), disable kmem accounting/limiting if uname -r | grep -q '^3\.10\.0.*\.el7\.'; then : ${RUNC_NOKMEM='nokmem'} fi # Do not build with ambient capabilities support RUNC_BUILDTAGS="${RUNC_BUILDTAGS:-"seccomp $RUNC_NOKMEM"}" echo "Install runc version $RUNC_COMMIT (build tags: $RUNC_BUILDTAGS)" git clone https://github.com/opencontainers/runc.git "$GOPATH/src/github.com/opencontainers/runc" cd "$GOPATH/src/github.com/opencontainers/runc" git checkout -q "$RUNC_COMMIT" if [ -z "$1" ]; then target=static else target="$1" fi make BUILDTAGS="$RUNC_BUILDTAGS" "$target" mkdir -p "${PREFIX}" cp runc "${PREFIX}/runc" }
#!/bin/sh set -e # RUNC_VERSION specifies the version of runc to install from the # https://github.com/opencontainers/runc repository. # # The version of runc should match the version that is used by the containerd # version that is used. If you need to update runc, open a pull request in # the containerd project first, and update both after that is merged. # # When updating RUNC_VERSION, consider updating runc in vendor.conf accordingly : "${RUNC_VERSION:=v1.0.2}" install_runc() { RUNC_BUILDTAGS="${RUNC_BUILDTAGS:-"seccomp"}" echo "Install runc version $RUNC_VERSION (build tags: $RUNC_BUILDTAGS)" git clone https://github.com/opencontainers/runc.git "$GOPATH/src/github.com/opencontainers/runc" cd "$GOPATH/src/github.com/opencontainers/runc" git checkout -q "$RUNC_VERSION" if [ -z "$1" ]; then target=static else target="$1" fi make BUILDTAGS="$RUNC_BUILDTAGS" "$target" mkdir -p "${PREFIX}" cp runc "${PREFIX}/runc" }
thaJeztah
772e25fa9f00577ba9f6641530e5aad5ec5ff84c
5176095455642c30642efacf6f35afc7c6dede92
changing this to use `set -e` instead
thaJeztah
4,529
moby/moby
42,674
Dockerfile: simplify utility-install script, and update gotestsum to v1.7.0
### Dockerfile: use version for some utilities instead of commit-sha The golangci-lint, gotestsum, shfmt, and vndr utilities should generally be ok to be pinned by version instead of a specific sha. Also rename the corresponding env-vars / build-args accordingly: - GOLANGCI_LINT_COMMIT -> GOLANGCI_LINT_VERSION - GOTESTSUM_COMMIT -> GOTESTSUM_VERSION - SHFMT_COMMIT -> SHFMT_VERSION - VNDR_COMMIT -> VNDR_VERSION - CONTAINERD_COMMIT -> CONTAINERD_VERSION - RUNC_COMMIT -> RUNC_VERSION - ROOTLESS_COMMIT -> ROOTLESS_VERSION ### Dockerfile: use "go install" to install utilities ### Dockerfile: remove GOPROXY override (was for go < 1.14) ### Dockerfile: update gotestsum to v1.7.0 **- Description for the changelog** <!-- Write a short (one line) summary that describes the changes in this pull request for inclusion in the changelog: --> **- A picture of a cute animal (not mandatory but encouraged)**
null
2021-07-26 13:02:55+00:00
2021-09-02 21:24:14+00:00
hack/dockerfile/install/tomll.installer
#!/bin/sh : "${GOTOML_VERSION:=v1.8.1}" install_tomll() { echo "Install go-toml version ${GOTOML_VERSION}" # TODO remove GO111MODULE=on and change to 'go install -mod=mod ...' once we're at go 1.16+ GO111MODULE=on GOBIN="${PREFIX}" go get -v "github.com/pelletier/go-toml/cmd/tomll@${GOTOML_VERSION}" }
#!/bin/sh : "${GOTOML_VERSION:=v1.8.1}" install_tomll() { echo "Install go-toml version ${GOTOML_VERSION}" GOBIN="${PREFIX}" GO111MODULE=on go install "github.com/pelletier/go-toml/cmd/tomll@${GOTOML_VERSION}" }
thaJeztah
772e25fa9f00577ba9f6641530e5aad5ec5ff84c
5176095455642c30642efacf6f35afc7c6dede92
Wondering if we should just remove these scripts now, and inline this in the Dockerfile. WDYT @cpuguy83 @tianon ?
thaJeztah
4,530
moby/moby
42,674
Dockerfile: simplify utility-install script, and update gotestsum to v1.7.0
### Dockerfile: use version for some utilities instead of commit-sha The golangci-lint, gotestsum, shfmt, and vndr utilities should generally be ok to be pinned by version instead of a specific sha. Also rename the corresponding env-vars / build-args accordingly: - GOLANGCI_LINT_COMMIT -> GOLANGCI_LINT_VERSION - GOTESTSUM_COMMIT -> GOTESTSUM_VERSION - SHFMT_COMMIT -> SHFMT_VERSION - VNDR_COMMIT -> VNDR_VERSION - CONTAINERD_COMMIT -> CONTAINERD_VERSION - RUNC_COMMIT -> RUNC_VERSION - ROOTLESS_COMMIT -> ROOTLESS_VERSION ### Dockerfile: use "go install" to install utilities ### Dockerfile: remove GOPROXY override (was for go < 1.14) ### Dockerfile: update gotestsum to v1.7.0 **- Description for the changelog** <!-- Write a short (one line) summary that describes the changes in this pull request for inclusion in the changelog: --> **- A picture of a cute animal (not mandatory but encouraged)**
null
2021-07-26 13:02:55+00:00
2021-09-02 21:24:14+00:00
hack/dockerfile/install/tomll.installer
#!/bin/sh : "${GOTOML_VERSION:=v1.8.1}" install_tomll() { echo "Install go-toml version ${GOTOML_VERSION}" # TODO remove GO111MODULE=on and change to 'go install -mod=mod ...' once we're at go 1.16+ GO111MODULE=on GOBIN="${PREFIX}" go get -v "github.com/pelletier/go-toml/cmd/tomll@${GOTOML_VERSION}" }
#!/bin/sh : "${GOTOML_VERSION:=v1.8.1}" install_tomll() { echo "Install go-toml version ${GOTOML_VERSION}" GOBIN="${PREFIX}" GO111MODULE=on go install "github.com/pelletier/go-toml/cmd/tomll@${GOTOML_VERSION}" }
thaJeztah
772e25fa9f00577ba9f6641530e5aad5ec5ff84c
5176095455642c30642efacf6f35afc7c6dede92
^^ I started working on this, but it was slightly more involved, so I'll push those changes in a follow-up.
thaJeztah
4,531
moby/moby
42,674
Dockerfile: simplify utility-install script, and update gotestsum to v1.7.0
### Dockerfile: use version for some utilities instead of commit-sha The golangci-lint, gotestsum, shfmt, and vndr utilities should generally be ok to be pinned by version instead of a specific sha. Also rename the corresponding env-vars / build-args accordingly: - GOLANGCI_LINT_COMMIT -> GOLANGCI_LINT_VERSION - GOTESTSUM_COMMIT -> GOTESTSUM_VERSION - SHFMT_COMMIT -> SHFMT_VERSION - VNDR_COMMIT -> VNDR_VERSION - CONTAINERD_COMMIT -> CONTAINERD_VERSION - RUNC_COMMIT -> RUNC_VERSION - ROOTLESS_COMMIT -> ROOTLESS_VERSION ### Dockerfile: use "go install" to install utilities ### Dockerfile: remove GOPROXY override (was for go < 1.14) ### Dockerfile: update gotestsum to v1.7.0 **- Description for the changelog** <!-- Write a short (one line) summary that describes the changes in this pull request for inclusion in the changelog: --> **- A picture of a cute animal (not mandatory but encouraged)**
null
2021-07-26 13:02:55+00:00
2021-09-02 21:24:14+00:00
hack/dockerfile/install/tomll.installer
#!/bin/sh : "${GOTOML_VERSION:=v1.8.1}" install_tomll() { echo "Install go-toml version ${GOTOML_VERSION}" # TODO remove GO111MODULE=on and change to 'go install -mod=mod ...' once we're at go 1.16+ GO111MODULE=on GOBIN="${PREFIX}" go get -v "github.com/pelletier/go-toml/cmd/tomll@${GOTOML_VERSION}" }
#!/bin/sh : "${GOTOML_VERSION:=v1.8.1}" install_tomll() { echo "Install go-toml version ${GOTOML_VERSION}" GOBIN="${PREFIX}" GO111MODULE=on go install "github.com/pelletier/go-toml/cmd/tomll@${GOTOML_VERSION}" }
thaJeztah
772e25fa9f00577ba9f6641530e5aad5ec5ff84c
5176095455642c30642efacf6f35afc7c6dede92
You mean inlining just the extra tools (like `tomll`), or all of them (`containerd`, etc.) too? I'm +1 on the former but hesitant on the latter (since I think those scripts get used elsewhere too, right?)
tianon
4,532
moby/moby
42,674
Dockerfile: simplify utility-install script, and update gotestsum to v1.7.0
### Dockerfile: use version for some utilities instead of commit-sha The golangci-lint, gotestsum, shfmt, and vndr utilities should generally be ok to be pinned by version instead of a specific sha. Also rename the corresponding env-vars / build-args accordingly: - GOLANGCI_LINT_COMMIT -> GOLANGCI_LINT_VERSION - GOTESTSUM_COMMIT -> GOTESTSUM_VERSION - SHFMT_COMMIT -> SHFMT_VERSION - VNDR_COMMIT -> VNDR_VERSION - CONTAINERD_COMMIT -> CONTAINERD_VERSION - RUNC_COMMIT -> RUNC_VERSION - ROOTLESS_COMMIT -> ROOTLESS_VERSION ### Dockerfile: use "go install" to install utilities ### Dockerfile: remove GOPROXY override (was for go < 1.14) ### Dockerfile: update gotestsum to v1.7.0 **- Description for the changelog** <!-- Write a short (one line) summary that describes the changes in this pull request for inclusion in the changelog: --> **- A picture of a cute animal (not mandatory but encouraged)**
null
2021-07-26 13:02:55+00:00
2021-09-02 21:24:14+00:00
hack/dockerfile/install/tomll.installer
#!/bin/sh : "${GOTOML_VERSION:=v1.8.1}" install_tomll() { echo "Install go-toml version ${GOTOML_VERSION}" # TODO remove GO111MODULE=on and change to 'go install -mod=mod ...' once we're at go 1.16+ GO111MODULE=on GOBIN="${PREFIX}" go get -v "github.com/pelletier/go-toml/cmd/tomll@${GOTOML_VERSION}" }
#!/bin/sh : "${GOTOML_VERSION:=v1.8.1}" install_tomll() { echo "Install go-toml version ${GOTOML_VERSION}" GOBIN="${PREFIX}" GO111MODULE=on go install "github.com/pelletier/go-toml/cmd/tomll@${GOTOML_VERSION}" }
thaJeztah
772e25fa9f00577ba9f6641530e5aad5ec5ff84c
5176095455642c30642efacf6f35afc7c6dede92
Yes, correct, the containerd and runc scripts are still used in some other places, so we likely still need those. For the "CI utilities", I think we should be able to put them inside the dockerfile; they're just one-liners now, so having two scripts to run them is just unneeded overhead.
thaJeztah
4,533
moby/moby
42,661
storage-driver: promote overlay2, make Btrfs and ZFS opt-in
The daemon uses a priority list to automatically select the best-matching storage driver for the backing filesystem that is used. Historically, overlay2 was not supported on Btrfs and ZFS, and the daemon would automatically pick the `btrfs` or `zfs` storage driver if that was the Backing File System. Commits 649e4c88899878c9cdf9036f6bc7d62e2b39c04b and e226aea280efdc18f887e853b5eefd763b602937 (https://github.com/moby/moby/pull/40210) improved our detection to check if overlay2 was supported on the backing filesystem, allowing overlay2 to be used on top of Btrfs or ZFS, but did not change the priority list. While both Btrfs and ZFS have advantages for certain use-cases, and provide advanced features that are not available to overlay2, they also are known to require more "handholding", and are generally considered to be mostly useful for "advanced" users. This patch changes the storage-driver priority list, to prefer overlay2 (if supported by the backing filesystem), and effectively makes btrfs and zfs opt-in storage drivers. This change does not affect existing installations; the daemon will detect the storage driver that was previously in use (based on the presence of storage directories in `/var/lib/docker`). **- Description for the changelog** <!-- Write a short (one line) summary that describes the changes in this pull request for inclusion in the changelog: --> **- A picture of a cute animal (not mandatory but encouraged)**
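As a rough illustration of how the reordered priority list plays out, here is a hedged Go sketch; `pickDriver` and its `usable` map are hypothetical stand-ins for the daemon's real capability probing, and only the priority string itself is taken from this change:

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// priority mirrors the new ordering introduced by this change; the real list
// lives in daemon/graphdriver/driver_linux.go.
const priority = "overlay2,fuse-overlayfs,btrfs,zfs,aufs,overlay,devicemapper,vfs"

// pickDriver is a hypothetical stand-in for the daemon's selection logic:
// it returns the first driver in the priority list that the (assumed)
// capability probe reports as usable on the backing filesystem.
func pickDriver(usable map[string]bool) (string, error) {
	for _, name := range strings.Split(priority, ",") {
		if usable[name] {
			return name, nil
		}
	}
	return "", errors.New("no supported storage driver found")
}

func main() {
	// On a Btrfs-backed /var/lib/docker where overlay2 passes its feature
	// checks, the new ordering now selects overlay2 rather than btrfs.
	driver, err := pickDriver(map[string]bool{"overlay2": true, "btrfs": true, "vfs": true})
	if err != nil {
		panic(err)
	}
	fmt.Println(driver) // overlay2
}
```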
null
2021-07-21 12:48:22+00:00
2021-07-23 09:37:01+00:00
daemon/graphdriver/driver_linux.go
package graphdriver // import "github.com/docker/docker/daemon/graphdriver" import ( "github.com/moby/sys/mountinfo" "golang.org/x/sys/unix" ) const ( // FsMagicAufs filesystem id for Aufs FsMagicAufs = FsMagic(0x61756673) // FsMagicBtrfs filesystem id for Btrfs FsMagicBtrfs = FsMagic(0x9123683E) // FsMagicCramfs filesystem id for Cramfs FsMagicCramfs = FsMagic(0x28cd3d45) // FsMagicEcryptfs filesystem id for eCryptfs FsMagicEcryptfs = FsMagic(0xf15f) // FsMagicExtfs filesystem id for Extfs FsMagicExtfs = FsMagic(0x0000EF53) // FsMagicF2fs filesystem id for F2fs FsMagicF2fs = FsMagic(0xF2F52010) // FsMagicGPFS filesystem id for GPFS FsMagicGPFS = FsMagic(0x47504653) // FsMagicJffs2Fs filesystem if for Jffs2Fs FsMagicJffs2Fs = FsMagic(0x000072b6) // FsMagicJfs filesystem id for Jfs FsMagicJfs = FsMagic(0x3153464a) // FsMagicNfsFs filesystem id for NfsFs FsMagicNfsFs = FsMagic(0x00006969) // FsMagicRAMFs filesystem id for RamFs FsMagicRAMFs = FsMagic(0x858458f6) // FsMagicReiserFs filesystem id for ReiserFs FsMagicReiserFs = FsMagic(0x52654973) // FsMagicSmbFs filesystem id for SmbFs FsMagicSmbFs = FsMagic(0x0000517B) // FsMagicSquashFs filesystem id for SquashFs FsMagicSquashFs = FsMagic(0x73717368) // FsMagicTmpFs filesystem id for TmpFs FsMagicTmpFs = FsMagic(0x01021994) // FsMagicVxFS filesystem id for VxFs FsMagicVxFS = FsMagic(0xa501fcf5) // FsMagicXfs filesystem id for Xfs FsMagicXfs = FsMagic(0x58465342) // FsMagicZfs filesystem id for Zfs FsMagicZfs = FsMagic(0x2fc12fc1) // FsMagicOverlay filesystem id for overlay FsMagicOverlay = FsMagic(0x794C7630) // FsMagicFUSE filesystem id for FUSE FsMagicFUSE = FsMagic(0x65735546) ) var ( // List of drivers that should be used in an order priority = "btrfs,zfs,overlay2,fuse-overlayfs,aufs,overlay,devicemapper,vfs" // FsNames maps filesystem id to name of the filesystem. FsNames = map[FsMagic]string{ FsMagicAufs: "aufs", FsMagicBtrfs: "btrfs", FsMagicCramfs: "cramfs", FsMagicEcryptfs: "ecryptfs", FsMagicExtfs: "extfs", FsMagicF2fs: "f2fs", FsMagicFUSE: "fuse", FsMagicGPFS: "gpfs", FsMagicJffs2Fs: "jffs2", FsMagicJfs: "jfs", FsMagicNfsFs: "nfs", FsMagicOverlay: "overlayfs", FsMagicRAMFs: "ramfs", FsMagicReiserFs: "reiserfs", FsMagicSmbFs: "smb", FsMagicSquashFs: "squashfs", FsMagicTmpFs: "tmpfs", FsMagicUnsupported: "unsupported", FsMagicVxFS: "vxfs", FsMagicXfs: "xfs", FsMagicZfs: "zfs", } ) // GetFSMagic returns the filesystem id given the path. func GetFSMagic(rootpath string) (FsMagic, error) { var buf unix.Statfs_t if err := unix.Statfs(rootpath, &buf); err != nil { return 0, err } return FsMagic(buf.Type), nil } // NewFsChecker returns a checker configured for the provided FsMagic func NewFsChecker(t FsMagic) Checker { return &fsChecker{ t: t, } } type fsChecker struct { t FsMagic } func (c *fsChecker) IsMounted(path string) bool { m, _ := Mounted(c.t, path) return m } // NewDefaultChecker returns a check that parses /proc/mountinfo to check // if the specified path is mounted. func NewDefaultChecker() Checker { return &defaultChecker{} } type defaultChecker struct { } func (c *defaultChecker) IsMounted(path string) bool { m, _ := mountinfo.Mounted(path) return m } // Mounted checks if the given path is mounted as the fs type func Mounted(fsType FsMagic, mountPath string) (bool, error) { var buf unix.Statfs_t if err := unix.Statfs(mountPath, &buf); err != nil { if err == unix.ENOENT { // not exist, thus not mounted err = nil } return false, err } return FsMagic(buf.Type) == fsType, nil }
package graphdriver // import "github.com/docker/docker/daemon/graphdriver" import ( "github.com/moby/sys/mountinfo" "golang.org/x/sys/unix" ) const ( // FsMagicAufs filesystem id for Aufs FsMagicAufs = FsMagic(0x61756673) // FsMagicBtrfs filesystem id for Btrfs FsMagicBtrfs = FsMagic(0x9123683E) // FsMagicCramfs filesystem id for Cramfs FsMagicCramfs = FsMagic(0x28cd3d45) // FsMagicEcryptfs filesystem id for eCryptfs FsMagicEcryptfs = FsMagic(0xf15f) // FsMagicExtfs filesystem id for Extfs FsMagicExtfs = FsMagic(0x0000EF53) // FsMagicF2fs filesystem id for F2fs FsMagicF2fs = FsMagic(0xF2F52010) // FsMagicGPFS filesystem id for GPFS FsMagicGPFS = FsMagic(0x47504653) // FsMagicJffs2Fs filesystem if for Jffs2Fs FsMagicJffs2Fs = FsMagic(0x000072b6) // FsMagicJfs filesystem id for Jfs FsMagicJfs = FsMagic(0x3153464a) // FsMagicNfsFs filesystem id for NfsFs FsMagicNfsFs = FsMagic(0x00006969) // FsMagicRAMFs filesystem id for RamFs FsMagicRAMFs = FsMagic(0x858458f6) // FsMagicReiserFs filesystem id for ReiserFs FsMagicReiserFs = FsMagic(0x52654973) // FsMagicSmbFs filesystem id for SmbFs FsMagicSmbFs = FsMagic(0x0000517B) // FsMagicSquashFs filesystem id for SquashFs FsMagicSquashFs = FsMagic(0x73717368) // FsMagicTmpFs filesystem id for TmpFs FsMagicTmpFs = FsMagic(0x01021994) // FsMagicVxFS filesystem id for VxFs FsMagicVxFS = FsMagic(0xa501fcf5) // FsMagicXfs filesystem id for Xfs FsMagicXfs = FsMagic(0x58465342) // FsMagicZfs filesystem id for Zfs FsMagicZfs = FsMagic(0x2fc12fc1) // FsMagicOverlay filesystem id for overlay FsMagicOverlay = FsMagic(0x794C7630) // FsMagicFUSE filesystem id for FUSE FsMagicFUSE = FsMagic(0x65735546) ) var ( // List of drivers that should be used in an order priority = "overlay2,fuse-overlayfs,btrfs,zfs,aufs,overlay,devicemapper,vfs" // FsNames maps filesystem id to name of the filesystem. FsNames = map[FsMagic]string{ FsMagicAufs: "aufs", FsMagicBtrfs: "btrfs", FsMagicCramfs: "cramfs", FsMagicEcryptfs: "ecryptfs", FsMagicExtfs: "extfs", FsMagicF2fs: "f2fs", FsMagicFUSE: "fuse", FsMagicGPFS: "gpfs", FsMagicJffs2Fs: "jffs2", FsMagicJfs: "jfs", FsMagicNfsFs: "nfs", FsMagicOverlay: "overlayfs", FsMagicRAMFs: "ramfs", FsMagicReiserFs: "reiserfs", FsMagicSmbFs: "smb", FsMagicSquashFs: "squashfs", FsMagicTmpFs: "tmpfs", FsMagicUnsupported: "unsupported", FsMagicVxFS: "vxfs", FsMagicXfs: "xfs", FsMagicZfs: "zfs", } ) // GetFSMagic returns the filesystem id given the path. func GetFSMagic(rootpath string) (FsMagic, error) { var buf unix.Statfs_t if err := unix.Statfs(rootpath, &buf); err != nil { return 0, err } return FsMagic(buf.Type), nil } // NewFsChecker returns a checker configured for the provided FsMagic func NewFsChecker(t FsMagic) Checker { return &fsChecker{ t: t, } } type fsChecker struct { t FsMagic } func (c *fsChecker) IsMounted(path string) bool { m, _ := Mounted(c.t, path) return m } // NewDefaultChecker returns a check that parses /proc/mountinfo to check // if the specified path is mounted. func NewDefaultChecker() Checker { return &defaultChecker{} } type defaultChecker struct { } func (c *defaultChecker) IsMounted(path string) bool { m, _ := mountinfo.Mounted(path) return m } // Mounted checks if the given path is mounted as the fs type func Mounted(fsType FsMagic, mountPath string) (bool, error) { var buf unix.Statfs_t if err := unix.Statfs(mountPath, &buf); err != nil { if err == unix.ENOENT { // not exist, thus not mounted err = nil } return false, err } return FsMagic(buf.Type) == fsType, nil }
thaJeztah
471fd27709777d2cce3251129887e14e8bb2e0c7
6317d7467a858de28531516fac75d1b230d024dd
@AkihiroSuda ptal; I wasn't sure if btrfs/zfs should be tried _before_ or _after_ `fuse-overlayfs` (I _think_ this is the most logical order, but let me know if not)
thaJeztah
4,534
moby/moby
42,656
Update containerd v1.5.4
Update to containerd v1.5.4 to address [CVE-2021-32760][1]. [1]: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-32760 **- Description for the changelog** <!-- Write a short (one line) summary that describes the changes in this pull request for inclusion in the changelog: --> **- A picture of a cute animal (not mandatory but encouraged)**
null
2021-07-19 19:18:39+00:00
2021-07-20 11:04:56+00:00
vendor.conf
github.com/Azure/go-ansiterm d185dfc1b5a126116ea5a19e148e29d16b4574c9 github.com/Microsoft/hcsshim 3ad51c76263bad09548a40e1996960814a12a870 # v0.8.20 github.com/Microsoft/go-winio 5c2e05d71961716a6c392a06ada435aaf5d5302c # v0.4.19 github.com/docker/libtrust 9cbd2a1374f46905c68a4eb3694a130610adc62a github.com/golang/gddo 72a348e765d293ed6d1ded7b699591f14d6cd921 github.com/google/uuid 0cd6bf5da1e1c83f8b45653022c74f71af0538a4 # v1.1.1 github.com/gorilla/mux 98cb6bf42e086f6af920b965c38cacc07402d51b # v1.8.0 github.com/moby/locker 281af2d563954745bea9d1487c965f24d30742fe # v1.0.1 github.com/moby/term 3f7ff695adc6a35abc925370dd0a4dafb48ec64d # Note that this dependency uses submodules, providing the github.com/moby/sys/mount, # github.com/moby/sys/mountinfo, and github.com/moby/sys/symlink modules. Our vendoring # tool (vndr) currently does not support submodules / vendoring sub-paths, so we vendor # the top-level moby/sys repository (which contains both) and pick the most recent tag, # which could be either `mountinfo/vX.Y.Z`, `mount/vX.Y.Z`, or `symlink/vX.Y.Z`. github.com/moby/sys b0f1fd7235275d01bd35cc4421e884e522395f45 # mountinfo/v0.4.1 github.com/creack/pty 2a38352e8b4d7ab6c336eef107e42a55e72e7fbc # v1.1.11 github.com/sirupsen/logrus bdc0db8ead3853c56b7cd1ac2ba4e11b47d7da6b # v1.8.1 github.com/tchap/go-patricia a7f0089c6f496e8e70402f61733606daa326cac5 # v2.3.0 golang.org/x/net e18ecbb051101a46fc263334b127c89bc7bff7ea golang.org/x/sys d19ff857e887eacb631721f188c7d365c2331456 github.com/docker/go-units 519db1ee28dcc9fd2474ae59fca29a810482bfb1 # v0.4.0 github.com/docker/go-connections 7395e3f8aa162843a74ed6d48e79627d9792ac55 # v0.4.0 golang.org/x/text 23ae387dee1f90d29a23c0e87ee0b46038fbed0e # v0.3.3 gotest.tools/v3 568bc57cc5c19a2ef85e5749870b49a4cc2ab54d # v3.0.3 github.com/google/go-cmp 3af367b6b30c263d47e8895973edcca9a49cf029 # v0.2.0 github.com/syndtr/gocapability 42c35b4376354fd554efc7ad35e0b7f94e3a0ffb github.com/RackSec/srslog a4725f04ec91af1a91b380da679d6e0c2f061e59 github.com/imdario/mergo 1afb36080aec31e0d1528973ebe6721b191b0369 # v0.3.8 golang.org/x/sync 036812b2e83c0ddf193dd5a34e034151da389d09 # buildkit github.com/moby/buildkit 9f254e18360a24c2ae47b26f772c3c89533bcbb7 # master / v0.9.0-dev github.com/tonistiigi/fsutil d72af97c0eaf93c1d20360e3cb9c63c223675b83 github.com/tonistiigi/units 6950e57a87eaf136bbe44ef2ec8e75b9e3569de2 github.com/grpc-ecosystem/grpc-opentracing 8e809c8a86450a29b90dcc9efbf062d0fe6d9746 github.com/opentracing/opentracing-go d34af3eaa63c4d08ab54863a4bdd0daa45212e12 # v1.2.0 github.com/google/shlex e7afc7fbc51079733e9468cdfd1efcd7d196cd1d github.com/opentracing-contrib/go-stdlib 8a6ff1ad1691a29e4f7b5d46604f97634997c8c4 # v1.0.0 github.com/mitchellh/hashstructure a38c50148365edc8df43c1580c48fb2b3a1e9cd7 # v1.0.0 github.com/gofrs/flock 6caa7350c26b838538005fae7dbee4e69d9398db # v0.7.3 github.com/grpc-ecosystem/go-grpc-middleware 3c51f7f332123e8be5a157c0802a228ac85bf9db # v1.2.0 # libnetwork github.com/docker/go-events e31b211e4f1cd09aa76fe4ac244571fab96ae47f github.com/armon/go-radix e39d623f12e8e41c7b5529e9a9dd67a1e2261f80 github.com/armon/go-metrics eb0af217e5e9747e41dd5303755356b62d28e3ec github.com/hashicorp/go-msgpack 71c2886f5a673a35f909803f38ece5810165097b github.com/hashicorp/memberlist 619135cdd9e5dda8c12f8ceef39bdade4f5899b6 # v0.2.4 github.com/sean-/seed e2103e2c35297fb7e17febb81e49b312087a2372 github.com/hashicorp/errwrap 8a6fb523712970c966eefc6b39ed2c5e74880354 # v1.0.0 github.com/hashicorp/go-sockaddr c7188e74f6acae5a989bdc959aa779f8b9f42faf # 
v1.0.2 github.com/hashicorp/go-multierror 886a7fbe3eb1c874d46f623bfa70af45f425b3d1 # v1.0.0 github.com/hashicorp/serf 598c54895cc5a7b1a24a398d635e8c0ea0959870 github.com/docker/libkv 458977154600b9f23984d9f4b82e79570b5ae12b github.com/vishvananda/netns db3c7e526aae966c4ccfa6c8189b693d6ac5d202 github.com/vishvananda/netlink f049be6f391489d3f374498fe0c8df8449258372 # v1.1.0 github.com/moby/ipvs 4566ccea0e08d68e9614c3e7a64a23b850c4bb35 # v1.0.1 github.com/google/btree 479b5e81b0a93ec038d201b0b33d17db599531d3 # v1.0.1 github.com/samuel/go-zookeeper d0e0d8e11f318e000a8cc434616d69e329edc374 github.com/deckarep/golang-set ef32fa3046d9f249d399f98ebaf9be944430fd1d github.com/coreos/etcd 2c834459e1aab78a5d5219c7dfe42335fc4b617a # v3.3.25 github.com/coreos/go-semver 8ab6407b697782a06568d4b7f1db25550ec2e4c6 # v0.2.0 github.com/hashicorp/consul 9a9cc9341bb487651a0399e3fc5e1e8a42e62dd9 # v0.5.2 github.com/miekg/dns 6c0c4e6581f8e173cc562c8b3363ab984e4ae071 # v1.1.27 github.com/ishidawataru/sctp f2269e66cdee387bd321445d5d300893449805be go.etcd.io/bbolt 232d8fc87f50244f9c808f4745759e08a304c029 # v1.3.5 github.com/json-iterator/go a1ca0830781e007c66b225121d2cdb3a649421f6 # v1.1.10 github.com/modern-go/concurrent bacd9c7ef1dd9b15be4a9909b8ac7a4e313eec94 # 1.0.3 github.com/modern-go/reflect2 94122c33edd36123c84d5368cfb2b69df93a0ec8 # v1.0.1 # get graph and distribution packages github.com/docker/distribution 0d3efadf0154c2b8a4e7b6621fff9809655cc580 github.com/vbatts/tar-split 620714a4c508c880ac1bdda9c8370a2b19af1a55 # v0.11.1 github.com/opencontainers/go-digest ea51bea511f75cfa3ef6098cc253c5c3609b037a # v1.0.0 # get go-zfs packages github.com/mistifyio/go-zfs f784269be439d704d3dfa1906f45dd848fed2beb google.golang.org/grpc f495f5b15ae7ccda3b38c53a1bfcde4c1a58a2bc # v1.27.1 # The version of runc should match the version that is used by the containerd # version that is used. If you need to update runc, open a pull request in # the containerd project first, and update both after that is merged. # This commit does not need to match RUNC_COMMIT as it is used for helper # packages but should be newer or equal. 
github.com/opencontainers/runc 4144b63817ebcc5b358fc2c8ef95f7cddd709aa7 # v1.0.1 github.com/opencontainers/runtime-spec 1c3f411f041711bbeecf35ff7e93461ea6789220 # v1.0.3-0.20210326190908-1c3f411f0417 github.com/opencontainers/image-spec d60099175f88c47cd379c4738d158884749ed235 # v1.0.1 github.com/cyphar/filepath-securejoin a261ee33d7a517f054effbf451841abaafe3e0fd # v0.2.2 # go-systemd v17 is required by github.com/coreos/pkg/capnslog/journald_formatter.go github.com/coreos/go-systemd 39ca1b05acc7ad1220e09f133283b8859a8b71ab # v17 # systemd integration (journald, daemon/listeners, containerd/cgroups) github.com/coreos/go-systemd/v22 777e73a89cef78631ccaa97f53a9bae67e166186 # v22.3.2 github.com/godbus/dbus/v5 c88335c0b1d28a30e7fc76d526a06154b85e5d97 # v5.0.4 # gelf logging driver deps github.com/Graylog2/go-gelf 1550ee647df0510058c9d67a45c56f18911d80b8 # v2 branch # fluent-logger-golang deps github.com/fluent/fluent-logger-golang b9b7fb02ccfee8ba4e69aa87386820c2bf24fd11 # v1.6.1 github.com/philhofer/fwd bb6d471dc95d4fe11e432687f8b70ff496cf3136 # v1.0.0 github.com/tinylib/msgp af6442a0fcf6e2a1b824f70dd0c734f01e817751 # v1.1.0 # fsnotify github.com/fsnotify/fsnotify 45d7d09e39ef4ac08d493309fa031790c15bfe8a # v1.4.9 # awslogs deps github.com/aws/aws-sdk-go 2590bc875c54c9fda225d8e4e56a9d28d90c6a47 # v1.28.11 github.com/jmespath/go-jmespath 2d053f87d1d7f9f48196ae04cf3daea4273d207d # v0.3.0 # logentries github.com/bsphere/le_go 7a984a84b5492ae539b79b62fb4a10afc63c7bcf # gcplogs deps golang.org/x/oauth2 bf48bf16ab8d622ce64ec6ce98d2c98f916b6303 google.golang.org/api dec2ee309f5b09fc59bc40676447c15736284d78 # v0.8.0 github.com/golang/groupcache 869f871628b6baa9cfbc11732cdf6546b17c1298 go.opencensus.io d835ff86be02193d324330acdb7d65546b05f814 # v0.22.3 cloud.google.com/go ceeb313ad77b789a7fa5287b36a1d127b69b7093 # v0.44.3 github.com/googleapis/gax-go bd5b16380fd03dc758d11cef74ba2e3bc8b0e8c2 # v2.0.5 google.golang.org/genproto 3f1135a288c9a07e340ae8ba4cc6c7065a3160e8 # containerd github.com/containerd/containerd 0e8719f54c6dc6571fc1170da75a85e86c17636b # v1.5.3 github.com/containerd/fifo 650e8a8a179d040123db61f016cb133143e7a581 # v1.0.0 github.com/containerd/continuity bce1c3f9669b6f3e7f6656ee715b0b4d75fa64a6 # v0.1.0 github.com/containerd/cgroups b9de8a2212026c07cec67baf3323f1fc0121e048 # v1.0.1 github.com/containerd/console 2f1e3d2b6afd18e8b2077816c711205a0b4d8769 # v1.0.2 github.com/containerd/go-runc 16b287bc67d069a60fa48db15f330b790b74365b # v1.0.0 github.com/containerd/typeurl 5e43fb8b75ed2f2305fc04e6918c8d10636771bc # v1.0.2 github.com/containerd/ttrpc bfba540dc45464586c106b1f31c8547933c1eb41 # v1.0.2 github.com/gogo/googleapis 01e0f9cca9b92166042241267ee2a5cdf5cff46c # v1.3.2 github.com/cilium/ebpf ca492085341e0e917f48ec30704d5054c5d42ca8 # v0.6.2 github.com/klauspost/compress a3b7545c88eea469c2246bee0e6c130525d56190 # v1.11.13 github.com/pelletier/go-toml 65ca8064882c8c308e5c804c5d5443d409e0738c # v1.8.1 # cluster github.com/docker/swarmkit 2dcf70aafdc9ea55af3aaaeca440638cde0ecda6 # master github.com/gogo/protobuf b03c65ea87cdc3521ede29f62fe3ce239267c1bc # v1.3.2 github.com/golang/protobuf 84668698ea25b64748563aa20726db66a6b8d299 # v1.3.5 github.com/cloudflare/cfssl 5d63dbd981b5c408effbb58c442d54761ff94fbd # 1.3.2 github.com/fernet/fernet-go 9eac43b88a5efb8651d24de9b68e87567e029736 github.com/google/certificate-transparency-go 37a384cd035e722ea46e55029093e26687138edf # v1.0.20 golang.org/x/crypto 0c34fe9e7dc2486962ef9867e3edb3503537209f golang.org/x/time 
3af7569d3a1e776fc2a3c1cec133b43105ea9c2e github.com/hashicorp/go-memdb cb9a474f84cc5e41b273b20c6927680b2a8776ad github.com/hashicorp/go-immutable-radix 826af9ccf0feeee615d546d69b11f8e98da8c8f1 git://github.com/tonistiigi/go-immutable-radix.git github.com/hashicorp/golang-lru 7f827b33c0f158ec5dfbba01bb0b14a4541fd81d # v0.5.3 github.com/coreos/pkg 97fdf19511ea361ae1c100dd393cc47f8dcfa1e1 # v4 code.cloudfoundry.org/clock 02e53af36e6c978af692887ed449b74026d76fec # v1.0.0 # prometheus github.com/prometheus/client_golang 6edbbd9e560190e318cdc5b4d3e630b442858380 # v1.6.0 github.com/beorn7/perks 37c8de3658fcb183f997c4e13e8337516ab753e6 # v1.0.1 github.com/prometheus/client_model 7bc5445566f0fe75b15de23e6b93886e982d7bf9 # v0.2.0 github.com/prometheus/common d978bcb1309602d68bb4ba69cf3f8ed900e07308 # v0.9.1 github.com/prometheus/procfs 46159f73e74d1cb8dc223deef9b2d049286f46b1 # v0.0.11 github.com/matttproud/golang_protobuf_extensions c12348ce28de40eed0136aa2b644d0ee0650e56c # v1.0.1 github.com/pkg/errors 614d223910a179a466c1767a985424175c39b465 # v0.9.1 github.com/grpc-ecosystem/go-grpc-prometheus c225b8c3b01faf2899099b768856a9e916e5087b # v1.2.0 github.com/cespare/xxhash/v2 d7df74196a9e781ede915320c11c378c1b2f3a1f # v2.1.1 # cli github.com/spf13/cobra 8380ddd3132bdf8fd77731725b550c181dda0aa8 # v1.1.3 github.com/spf13/pflag 2e9d26c8c37aae03e3f9d4e90b7116f5accb7cab # v1.0.5 github.com/inconshreveable/mousetrap 76626ae9c91c4f2a10f34cad8ce83ea42c93bb75 # v1.0.0 github.com/morikuni/aec 39771216ff4c63d11f5e604076f9c45e8be1067b # v1.0.0 # metrics github.com/docker/go-metrics b619b3592b65de4f087d9f16863a7e6ff905973c # v0.0.1 github.com/opencontainers/selinux 76bc82e11d854d3e40c08889d13c98abcea72ea2 # v1.8.2 github.com/bits-and-blooms/bitset 59de210119f50cedaa42d175dc88b6335fcf63f6 # v1.2.0 # archive/tar # rm -rf vendor/archive # mkdir -p ./vendor/archive # git clone -b go$GOLANG_VERSION --depth=1 git://github.com/golang/go.git ./go # git --git-dir ./go/.git --work-tree ./go am ../patches/0001-archive-tar-do-not-populate-user-group-names.patch # cp -a go/src/archive/tar ./vendor/archive/tar # rm -rf ./go # vndr -whitelist=^archive/tar # DO NOT EDIT BELOW THIS LINE -------- reserved for downstream projects --------
github.com/Azure/go-ansiterm d185dfc1b5a126116ea5a19e148e29d16b4574c9 github.com/Microsoft/hcsshim 3ad51c76263bad09548a40e1996960814a12a870 # v0.8.20 github.com/Microsoft/go-winio 5c2e05d71961716a6c392a06ada435aaf5d5302c # v0.4.19 github.com/docker/libtrust 9cbd2a1374f46905c68a4eb3694a130610adc62a github.com/golang/gddo 72a348e765d293ed6d1ded7b699591f14d6cd921 github.com/google/uuid 0cd6bf5da1e1c83f8b45653022c74f71af0538a4 # v1.1.1 github.com/gorilla/mux 98cb6bf42e086f6af920b965c38cacc07402d51b # v1.8.0 github.com/moby/locker 281af2d563954745bea9d1487c965f24d30742fe # v1.0.1 github.com/moby/term 3f7ff695adc6a35abc925370dd0a4dafb48ec64d # Note that this dependency uses submodules, providing the github.com/moby/sys/mount, # github.com/moby/sys/mountinfo, and github.com/moby/sys/symlink modules. Our vendoring # tool (vndr) currently does not support submodules / vendoring sub-paths, so we vendor # the top-level moby/sys repository (which contains both) and pick the most recent tag, # which could be either `mountinfo/vX.Y.Z`, `mount/vX.Y.Z`, or `symlink/vX.Y.Z`. github.com/moby/sys b0f1fd7235275d01bd35cc4421e884e522395f45 # mountinfo/v0.4.1 github.com/creack/pty 2a38352e8b4d7ab6c336eef107e42a55e72e7fbc # v1.1.11 github.com/sirupsen/logrus bdc0db8ead3853c56b7cd1ac2ba4e11b47d7da6b # v1.8.1 github.com/tchap/go-patricia a7f0089c6f496e8e70402f61733606daa326cac5 # v2.3.0 golang.org/x/net e18ecbb051101a46fc263334b127c89bc7bff7ea golang.org/x/sys d19ff857e887eacb631721f188c7d365c2331456 github.com/docker/go-units 519db1ee28dcc9fd2474ae59fca29a810482bfb1 # v0.4.0 github.com/docker/go-connections 7395e3f8aa162843a74ed6d48e79627d9792ac55 # v0.4.0 golang.org/x/text 23ae387dee1f90d29a23c0e87ee0b46038fbed0e # v0.3.3 gotest.tools/v3 568bc57cc5c19a2ef85e5749870b49a4cc2ab54d # v3.0.3 github.com/google/go-cmp 3af367b6b30c263d47e8895973edcca9a49cf029 # v0.2.0 github.com/syndtr/gocapability 42c35b4376354fd554efc7ad35e0b7f94e3a0ffb github.com/RackSec/srslog a4725f04ec91af1a91b380da679d6e0c2f061e59 github.com/imdario/mergo 1afb36080aec31e0d1528973ebe6721b191b0369 # v0.3.8 golang.org/x/sync 036812b2e83c0ddf193dd5a34e034151da389d09 # buildkit github.com/moby/buildkit 9f254e18360a24c2ae47b26f772c3c89533bcbb7 # master / v0.9.0-dev github.com/tonistiigi/fsutil d72af97c0eaf93c1d20360e3cb9c63c223675b83 github.com/tonistiigi/units 6950e57a87eaf136bbe44ef2ec8e75b9e3569de2 github.com/grpc-ecosystem/grpc-opentracing 8e809c8a86450a29b90dcc9efbf062d0fe6d9746 github.com/opentracing/opentracing-go d34af3eaa63c4d08ab54863a4bdd0daa45212e12 # v1.2.0 github.com/google/shlex e7afc7fbc51079733e9468cdfd1efcd7d196cd1d github.com/opentracing-contrib/go-stdlib 8a6ff1ad1691a29e4f7b5d46604f97634997c8c4 # v1.0.0 github.com/mitchellh/hashstructure a38c50148365edc8df43c1580c48fb2b3a1e9cd7 # v1.0.0 github.com/gofrs/flock 6caa7350c26b838538005fae7dbee4e69d9398db # v0.7.3 github.com/grpc-ecosystem/go-grpc-middleware 3c51f7f332123e8be5a157c0802a228ac85bf9db # v1.2.0 # libnetwork github.com/docker/go-events e31b211e4f1cd09aa76fe4ac244571fab96ae47f github.com/armon/go-radix e39d623f12e8e41c7b5529e9a9dd67a1e2261f80 github.com/armon/go-metrics eb0af217e5e9747e41dd5303755356b62d28e3ec github.com/hashicorp/go-msgpack 71c2886f5a673a35f909803f38ece5810165097b github.com/hashicorp/memberlist 619135cdd9e5dda8c12f8ceef39bdade4f5899b6 # v0.2.4 github.com/sean-/seed e2103e2c35297fb7e17febb81e49b312087a2372 github.com/hashicorp/errwrap 8a6fb523712970c966eefc6b39ed2c5e74880354 # v1.0.0 github.com/hashicorp/go-sockaddr c7188e74f6acae5a989bdc959aa779f8b9f42faf # 
v1.0.2 github.com/hashicorp/go-multierror 886a7fbe3eb1c874d46f623bfa70af45f425b3d1 # v1.0.0 github.com/hashicorp/serf 598c54895cc5a7b1a24a398d635e8c0ea0959870 github.com/docker/libkv 458977154600b9f23984d9f4b82e79570b5ae12b github.com/vishvananda/netns db3c7e526aae966c4ccfa6c8189b693d6ac5d202 github.com/vishvananda/netlink f049be6f391489d3f374498fe0c8df8449258372 # v1.1.0 github.com/moby/ipvs 4566ccea0e08d68e9614c3e7a64a23b850c4bb35 # v1.0.1 github.com/google/btree 479b5e81b0a93ec038d201b0b33d17db599531d3 # v1.0.1 github.com/samuel/go-zookeeper d0e0d8e11f318e000a8cc434616d69e329edc374 github.com/deckarep/golang-set ef32fa3046d9f249d399f98ebaf9be944430fd1d github.com/coreos/etcd 2c834459e1aab78a5d5219c7dfe42335fc4b617a # v3.3.25 github.com/coreos/go-semver 8ab6407b697782a06568d4b7f1db25550ec2e4c6 # v0.2.0 github.com/hashicorp/consul 9a9cc9341bb487651a0399e3fc5e1e8a42e62dd9 # v0.5.2 github.com/miekg/dns 6c0c4e6581f8e173cc562c8b3363ab984e4ae071 # v1.1.27 github.com/ishidawataru/sctp f2269e66cdee387bd321445d5d300893449805be go.etcd.io/bbolt 232d8fc87f50244f9c808f4745759e08a304c029 # v1.3.5 github.com/json-iterator/go a1ca0830781e007c66b225121d2cdb3a649421f6 # v1.1.10 github.com/modern-go/concurrent bacd9c7ef1dd9b15be4a9909b8ac7a4e313eec94 # 1.0.3 github.com/modern-go/reflect2 94122c33edd36123c84d5368cfb2b69df93a0ec8 # v1.0.1 # get graph and distribution packages github.com/docker/distribution 0d3efadf0154c2b8a4e7b6621fff9809655cc580 github.com/vbatts/tar-split 620714a4c508c880ac1bdda9c8370a2b19af1a55 # v0.11.1 github.com/opencontainers/go-digest ea51bea511f75cfa3ef6098cc253c5c3609b037a # v1.0.0 # get go-zfs packages github.com/mistifyio/go-zfs f784269be439d704d3dfa1906f45dd848fed2beb google.golang.org/grpc f495f5b15ae7ccda3b38c53a1bfcde4c1a58a2bc # v1.27.1 # The version of runc should match the version that is used by the containerd # version that is used. If you need to update runc, open a pull request in # the containerd project first, and update both after that is merged. # This commit does not need to match RUNC_COMMIT as it is used for helper # packages but should be newer or equal. 
github.com/opencontainers/runc 4144b63817ebcc5b358fc2c8ef95f7cddd709aa7 # v1.0.1 github.com/opencontainers/runtime-spec 1c3f411f041711bbeecf35ff7e93461ea6789220 # v1.0.3-0.20210326190908-1c3f411f0417 github.com/opencontainers/image-spec d60099175f88c47cd379c4738d158884749ed235 # v1.0.1 github.com/cyphar/filepath-securejoin a261ee33d7a517f054effbf451841abaafe3e0fd # v0.2.2 # go-systemd v17 is required by github.com/coreos/pkg/capnslog/journald_formatter.go github.com/coreos/go-systemd 39ca1b05acc7ad1220e09f133283b8859a8b71ab # v17 # systemd integration (journald, daemon/listeners, containerd/cgroups) github.com/coreos/go-systemd/v22 777e73a89cef78631ccaa97f53a9bae67e166186 # v22.3.2 github.com/godbus/dbus/v5 c88335c0b1d28a30e7fc76d526a06154b85e5d97 # v5.0.4 # gelf logging driver deps github.com/Graylog2/go-gelf 1550ee647df0510058c9d67a45c56f18911d80b8 # v2 branch # fluent-logger-golang deps github.com/fluent/fluent-logger-golang b9b7fb02ccfee8ba4e69aa87386820c2bf24fd11 # v1.6.1 github.com/philhofer/fwd bb6d471dc95d4fe11e432687f8b70ff496cf3136 # v1.0.0 github.com/tinylib/msgp af6442a0fcf6e2a1b824f70dd0c734f01e817751 # v1.1.0 # fsnotify github.com/fsnotify/fsnotify 45d7d09e39ef4ac08d493309fa031790c15bfe8a # v1.4.9 # awslogs deps github.com/aws/aws-sdk-go 2590bc875c54c9fda225d8e4e56a9d28d90c6a47 # v1.28.11 github.com/jmespath/go-jmespath 2d053f87d1d7f9f48196ae04cf3daea4273d207d # v0.3.0 # logentries github.com/bsphere/le_go 7a984a84b5492ae539b79b62fb4a10afc63c7bcf # gcplogs deps golang.org/x/oauth2 bf48bf16ab8d622ce64ec6ce98d2c98f916b6303 google.golang.org/api dec2ee309f5b09fc59bc40676447c15736284d78 # v0.8.0 github.com/golang/groupcache 869f871628b6baa9cfbc11732cdf6546b17c1298 go.opencensus.io d835ff86be02193d324330acdb7d65546b05f814 # v0.22.3 cloud.google.com/go ceeb313ad77b789a7fa5287b36a1d127b69b7093 # v0.44.3 github.com/googleapis/gax-go bd5b16380fd03dc758d11cef74ba2e3bc8b0e8c2 # v2.0.5 google.golang.org/genproto 3f1135a288c9a07e340ae8ba4cc6c7065a3160e8 # containerd github.com/containerd/containerd 69107e47a62e1d690afa2b9b1d43f8ece3ff4483 # v1.5.4 github.com/containerd/fifo 650e8a8a179d040123db61f016cb133143e7a581 # v1.0.0 github.com/containerd/continuity bce1c3f9669b6f3e7f6656ee715b0b4d75fa64a6 # v0.1.0 github.com/containerd/cgroups b9de8a2212026c07cec67baf3323f1fc0121e048 # v1.0.1 github.com/containerd/console 2f1e3d2b6afd18e8b2077816c711205a0b4d8769 # v1.0.2 github.com/containerd/go-runc 16b287bc67d069a60fa48db15f330b790b74365b # v1.0.0 github.com/containerd/typeurl 5e43fb8b75ed2f2305fc04e6918c8d10636771bc # v1.0.2 github.com/containerd/ttrpc bfba540dc45464586c106b1f31c8547933c1eb41 # v1.0.2 github.com/gogo/googleapis 01e0f9cca9b92166042241267ee2a5cdf5cff46c # v1.3.2 github.com/cilium/ebpf ca492085341e0e917f48ec30704d5054c5d42ca8 # v0.6.2 github.com/klauspost/compress a3b7545c88eea469c2246bee0e6c130525d56190 # v1.11.13 github.com/pelletier/go-toml 65ca8064882c8c308e5c804c5d5443d409e0738c # v1.8.1 # cluster github.com/docker/swarmkit 2dcf70aafdc9ea55af3aaaeca440638cde0ecda6 # master github.com/gogo/protobuf b03c65ea87cdc3521ede29f62fe3ce239267c1bc # v1.3.2 github.com/golang/protobuf 84668698ea25b64748563aa20726db66a6b8d299 # v1.3.5 github.com/cloudflare/cfssl 5d63dbd981b5c408effbb58c442d54761ff94fbd # 1.3.2 github.com/fernet/fernet-go 9eac43b88a5efb8651d24de9b68e87567e029736 github.com/google/certificate-transparency-go 37a384cd035e722ea46e55029093e26687138edf # v1.0.20 golang.org/x/crypto 0c34fe9e7dc2486962ef9867e3edb3503537209f golang.org/x/time 
3af7569d3a1e776fc2a3c1cec133b43105ea9c2e github.com/hashicorp/go-memdb cb9a474f84cc5e41b273b20c6927680b2a8776ad github.com/hashicorp/go-immutable-radix 826af9ccf0feeee615d546d69b11f8e98da8c8f1 git://github.com/tonistiigi/go-immutable-radix.git github.com/hashicorp/golang-lru 7f827b33c0f158ec5dfbba01bb0b14a4541fd81d # v0.5.3 github.com/coreos/pkg 97fdf19511ea361ae1c100dd393cc47f8dcfa1e1 # v4 code.cloudfoundry.org/clock 02e53af36e6c978af692887ed449b74026d76fec # v1.0.0 # prometheus github.com/prometheus/client_golang 6edbbd9e560190e318cdc5b4d3e630b442858380 # v1.6.0 github.com/beorn7/perks 37c8de3658fcb183f997c4e13e8337516ab753e6 # v1.0.1 github.com/prometheus/client_model 7bc5445566f0fe75b15de23e6b93886e982d7bf9 # v0.2.0 github.com/prometheus/common d978bcb1309602d68bb4ba69cf3f8ed900e07308 # v0.9.1 github.com/prometheus/procfs 46159f73e74d1cb8dc223deef9b2d049286f46b1 # v0.0.11 github.com/matttproud/golang_protobuf_extensions c12348ce28de40eed0136aa2b644d0ee0650e56c # v1.0.1 github.com/pkg/errors 614d223910a179a466c1767a985424175c39b465 # v0.9.1 github.com/grpc-ecosystem/go-grpc-prometheus c225b8c3b01faf2899099b768856a9e916e5087b # v1.2.0 github.com/cespare/xxhash/v2 d7df74196a9e781ede915320c11c378c1b2f3a1f # v2.1.1 # cli github.com/spf13/cobra 8380ddd3132bdf8fd77731725b550c181dda0aa8 # v1.1.3 github.com/spf13/pflag 2e9d26c8c37aae03e3f9d4e90b7116f5accb7cab # v1.0.5 github.com/inconshreveable/mousetrap 76626ae9c91c4f2a10f34cad8ce83ea42c93bb75 # v1.0.0 github.com/morikuni/aec 39771216ff4c63d11f5e604076f9c45e8be1067b # v1.0.0 # metrics github.com/docker/go-metrics b619b3592b65de4f087d9f16863a7e6ff905973c # v0.0.1 github.com/opencontainers/selinux 76bc82e11d854d3e40c08889d13c98abcea72ea2 # v1.8.2 github.com/bits-and-blooms/bitset 59de210119f50cedaa42d175dc88b6335fcf63f6 # v1.2.0 # archive/tar # rm -rf vendor/archive # mkdir -p ./vendor/archive # git clone -b go$GOLANG_VERSION --depth=1 git://github.com/golang/go.git ./go # git --git-dir ./go/.git --work-tree ./go am ../patches/0001-archive-tar-do-not-populate-user-group-names.patch # cp -a go/src/archive/tar ./vendor/archive/tar # rm -rf ./go # vndr -whitelist=^archive/tar # DO NOT EDIT BELOW THIS LINE -------- reserved for downstream projects --------
thaJeztah
9a6ff685a80639995311c0572d662360b55796d9
471fd27709777d2cce3251129887e14e8bb2e0c7
Comment is incorrect.
cpuguy83
4,535
moby/moby
42,656
Update containerd v1.5.4
Update to containerd v1.5.4 to address [CVE-2021-32760][1]. [1]: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-32760 **- Description for the changelog** <!-- Write a short (one line) summary that describes the changes in this pull request for inclusion in the changelog: --> **- A picture of a cute animal (not mandatory but encouraged)**
null
2021-07-19 19:18:39+00:00
2021-07-20 11:04:56+00:00
vendor.conf
github.com/Azure/go-ansiterm d185dfc1b5a126116ea5a19e148e29d16b4574c9 github.com/Microsoft/hcsshim 3ad51c76263bad09548a40e1996960814a12a870 # v0.8.20 github.com/Microsoft/go-winio 5c2e05d71961716a6c392a06ada435aaf5d5302c # v0.4.19 github.com/docker/libtrust 9cbd2a1374f46905c68a4eb3694a130610adc62a github.com/golang/gddo 72a348e765d293ed6d1ded7b699591f14d6cd921 github.com/google/uuid 0cd6bf5da1e1c83f8b45653022c74f71af0538a4 # v1.1.1 github.com/gorilla/mux 98cb6bf42e086f6af920b965c38cacc07402d51b # v1.8.0 github.com/moby/locker 281af2d563954745bea9d1487c965f24d30742fe # v1.0.1 github.com/moby/term 3f7ff695adc6a35abc925370dd0a4dafb48ec64d # Note that this dependency uses submodules, providing the github.com/moby/sys/mount, # github.com/moby/sys/mountinfo, and github.com/moby/sys/symlink modules. Our vendoring # tool (vndr) currently does not support submodules / vendoring sub-paths, so we vendor # the top-level moby/sys repository (which contains both) and pick the most recent tag, # which could be either `mountinfo/vX.Y.Z`, `mount/vX.Y.Z`, or `symlink/vX.Y.Z`. github.com/moby/sys b0f1fd7235275d01bd35cc4421e884e522395f45 # mountinfo/v0.4.1 github.com/creack/pty 2a38352e8b4d7ab6c336eef107e42a55e72e7fbc # v1.1.11 github.com/sirupsen/logrus bdc0db8ead3853c56b7cd1ac2ba4e11b47d7da6b # v1.8.1 github.com/tchap/go-patricia a7f0089c6f496e8e70402f61733606daa326cac5 # v2.3.0 golang.org/x/net e18ecbb051101a46fc263334b127c89bc7bff7ea golang.org/x/sys d19ff857e887eacb631721f188c7d365c2331456 github.com/docker/go-units 519db1ee28dcc9fd2474ae59fca29a810482bfb1 # v0.4.0 github.com/docker/go-connections 7395e3f8aa162843a74ed6d48e79627d9792ac55 # v0.4.0 golang.org/x/text 23ae387dee1f90d29a23c0e87ee0b46038fbed0e # v0.3.3 gotest.tools/v3 568bc57cc5c19a2ef85e5749870b49a4cc2ab54d # v3.0.3 github.com/google/go-cmp 3af367b6b30c263d47e8895973edcca9a49cf029 # v0.2.0 github.com/syndtr/gocapability 42c35b4376354fd554efc7ad35e0b7f94e3a0ffb github.com/RackSec/srslog a4725f04ec91af1a91b380da679d6e0c2f061e59 github.com/imdario/mergo 1afb36080aec31e0d1528973ebe6721b191b0369 # v0.3.8 golang.org/x/sync 036812b2e83c0ddf193dd5a34e034151da389d09 # buildkit github.com/moby/buildkit 9f254e18360a24c2ae47b26f772c3c89533bcbb7 # master / v0.9.0-dev github.com/tonistiigi/fsutil d72af97c0eaf93c1d20360e3cb9c63c223675b83 github.com/tonistiigi/units 6950e57a87eaf136bbe44ef2ec8e75b9e3569de2 github.com/grpc-ecosystem/grpc-opentracing 8e809c8a86450a29b90dcc9efbf062d0fe6d9746 github.com/opentracing/opentracing-go d34af3eaa63c4d08ab54863a4bdd0daa45212e12 # v1.2.0 github.com/google/shlex e7afc7fbc51079733e9468cdfd1efcd7d196cd1d github.com/opentracing-contrib/go-stdlib 8a6ff1ad1691a29e4f7b5d46604f97634997c8c4 # v1.0.0 github.com/mitchellh/hashstructure a38c50148365edc8df43c1580c48fb2b3a1e9cd7 # v1.0.0 github.com/gofrs/flock 6caa7350c26b838538005fae7dbee4e69d9398db # v0.7.3 github.com/grpc-ecosystem/go-grpc-middleware 3c51f7f332123e8be5a157c0802a228ac85bf9db # v1.2.0 # libnetwork github.com/docker/go-events e31b211e4f1cd09aa76fe4ac244571fab96ae47f github.com/armon/go-radix e39d623f12e8e41c7b5529e9a9dd67a1e2261f80 github.com/armon/go-metrics eb0af217e5e9747e41dd5303755356b62d28e3ec github.com/hashicorp/go-msgpack 71c2886f5a673a35f909803f38ece5810165097b github.com/hashicorp/memberlist 619135cdd9e5dda8c12f8ceef39bdade4f5899b6 # v0.2.4 github.com/sean-/seed e2103e2c35297fb7e17febb81e49b312087a2372 github.com/hashicorp/errwrap 8a6fb523712970c966eefc6b39ed2c5e74880354 # v1.0.0 github.com/hashicorp/go-sockaddr c7188e74f6acae5a989bdc959aa779f8b9f42faf # 
v1.0.2 github.com/hashicorp/go-multierror 886a7fbe3eb1c874d46f623bfa70af45f425b3d1 # v1.0.0 github.com/hashicorp/serf 598c54895cc5a7b1a24a398d635e8c0ea0959870 github.com/docker/libkv 458977154600b9f23984d9f4b82e79570b5ae12b github.com/vishvananda/netns db3c7e526aae966c4ccfa6c8189b693d6ac5d202 github.com/vishvananda/netlink f049be6f391489d3f374498fe0c8df8449258372 # v1.1.0 github.com/moby/ipvs 4566ccea0e08d68e9614c3e7a64a23b850c4bb35 # v1.0.1 github.com/google/btree 479b5e81b0a93ec038d201b0b33d17db599531d3 # v1.0.1 github.com/samuel/go-zookeeper d0e0d8e11f318e000a8cc434616d69e329edc374 github.com/deckarep/golang-set ef32fa3046d9f249d399f98ebaf9be944430fd1d github.com/coreos/etcd 2c834459e1aab78a5d5219c7dfe42335fc4b617a # v3.3.25 github.com/coreos/go-semver 8ab6407b697782a06568d4b7f1db25550ec2e4c6 # v0.2.0 github.com/hashicorp/consul 9a9cc9341bb487651a0399e3fc5e1e8a42e62dd9 # v0.5.2 github.com/miekg/dns 6c0c4e6581f8e173cc562c8b3363ab984e4ae071 # v1.1.27 github.com/ishidawataru/sctp f2269e66cdee387bd321445d5d300893449805be go.etcd.io/bbolt 232d8fc87f50244f9c808f4745759e08a304c029 # v1.3.5 github.com/json-iterator/go a1ca0830781e007c66b225121d2cdb3a649421f6 # v1.1.10 github.com/modern-go/concurrent bacd9c7ef1dd9b15be4a9909b8ac7a4e313eec94 # 1.0.3 github.com/modern-go/reflect2 94122c33edd36123c84d5368cfb2b69df93a0ec8 # v1.0.1 # get graph and distribution packages github.com/docker/distribution 0d3efadf0154c2b8a4e7b6621fff9809655cc580 github.com/vbatts/tar-split 620714a4c508c880ac1bdda9c8370a2b19af1a55 # v0.11.1 github.com/opencontainers/go-digest ea51bea511f75cfa3ef6098cc253c5c3609b037a # v1.0.0 # get go-zfs packages github.com/mistifyio/go-zfs f784269be439d704d3dfa1906f45dd848fed2beb google.golang.org/grpc f495f5b15ae7ccda3b38c53a1bfcde4c1a58a2bc # v1.27.1 # The version of runc should match the version that is used by the containerd # version that is used. If you need to update runc, open a pull request in # the containerd project first, and update both after that is merged. # This commit does not need to match RUNC_COMMIT as it is used for helper # packages but should be newer or equal. 
github.com/opencontainers/runc 4144b63817ebcc5b358fc2c8ef95f7cddd709aa7 # v1.0.1 github.com/opencontainers/runtime-spec 1c3f411f041711bbeecf35ff7e93461ea6789220 # v1.0.3-0.20210326190908-1c3f411f0417 github.com/opencontainers/image-spec d60099175f88c47cd379c4738d158884749ed235 # v1.0.1 github.com/cyphar/filepath-securejoin a261ee33d7a517f054effbf451841abaafe3e0fd # v0.2.2 # go-systemd v17 is required by github.com/coreos/pkg/capnslog/journald_formatter.go github.com/coreos/go-systemd 39ca1b05acc7ad1220e09f133283b8859a8b71ab # v17 # systemd integration (journald, daemon/listeners, containerd/cgroups) github.com/coreos/go-systemd/v22 777e73a89cef78631ccaa97f53a9bae67e166186 # v22.3.2 github.com/godbus/dbus/v5 c88335c0b1d28a30e7fc76d526a06154b85e5d97 # v5.0.4 # gelf logging driver deps github.com/Graylog2/go-gelf 1550ee647df0510058c9d67a45c56f18911d80b8 # v2 branch # fluent-logger-golang deps github.com/fluent/fluent-logger-golang b9b7fb02ccfee8ba4e69aa87386820c2bf24fd11 # v1.6.1 github.com/philhofer/fwd bb6d471dc95d4fe11e432687f8b70ff496cf3136 # v1.0.0 github.com/tinylib/msgp af6442a0fcf6e2a1b824f70dd0c734f01e817751 # v1.1.0 # fsnotify github.com/fsnotify/fsnotify 45d7d09e39ef4ac08d493309fa031790c15bfe8a # v1.4.9 # awslogs deps github.com/aws/aws-sdk-go 2590bc875c54c9fda225d8e4e56a9d28d90c6a47 # v1.28.11 github.com/jmespath/go-jmespath 2d053f87d1d7f9f48196ae04cf3daea4273d207d # v0.3.0 # logentries github.com/bsphere/le_go 7a984a84b5492ae539b79b62fb4a10afc63c7bcf # gcplogs deps golang.org/x/oauth2 bf48bf16ab8d622ce64ec6ce98d2c98f916b6303 google.golang.org/api dec2ee309f5b09fc59bc40676447c15736284d78 # v0.8.0 github.com/golang/groupcache 869f871628b6baa9cfbc11732cdf6546b17c1298 go.opencensus.io d835ff86be02193d324330acdb7d65546b05f814 # v0.22.3 cloud.google.com/go ceeb313ad77b789a7fa5287b36a1d127b69b7093 # v0.44.3 github.com/googleapis/gax-go bd5b16380fd03dc758d11cef74ba2e3bc8b0e8c2 # v2.0.5 google.golang.org/genproto 3f1135a288c9a07e340ae8ba4cc6c7065a3160e8 # containerd github.com/containerd/containerd 0e8719f54c6dc6571fc1170da75a85e86c17636b # v1.5.3 github.com/containerd/fifo 650e8a8a179d040123db61f016cb133143e7a581 # v1.0.0 github.com/containerd/continuity bce1c3f9669b6f3e7f6656ee715b0b4d75fa64a6 # v0.1.0 github.com/containerd/cgroups b9de8a2212026c07cec67baf3323f1fc0121e048 # v1.0.1 github.com/containerd/console 2f1e3d2b6afd18e8b2077816c711205a0b4d8769 # v1.0.2 github.com/containerd/go-runc 16b287bc67d069a60fa48db15f330b790b74365b # v1.0.0 github.com/containerd/typeurl 5e43fb8b75ed2f2305fc04e6918c8d10636771bc # v1.0.2 github.com/containerd/ttrpc bfba540dc45464586c106b1f31c8547933c1eb41 # v1.0.2 github.com/gogo/googleapis 01e0f9cca9b92166042241267ee2a5cdf5cff46c # v1.3.2 github.com/cilium/ebpf ca492085341e0e917f48ec30704d5054c5d42ca8 # v0.6.2 github.com/klauspost/compress a3b7545c88eea469c2246bee0e6c130525d56190 # v1.11.13 github.com/pelletier/go-toml 65ca8064882c8c308e5c804c5d5443d409e0738c # v1.8.1 # cluster github.com/docker/swarmkit 2dcf70aafdc9ea55af3aaaeca440638cde0ecda6 # master github.com/gogo/protobuf b03c65ea87cdc3521ede29f62fe3ce239267c1bc # v1.3.2 github.com/golang/protobuf 84668698ea25b64748563aa20726db66a6b8d299 # v1.3.5 github.com/cloudflare/cfssl 5d63dbd981b5c408effbb58c442d54761ff94fbd # 1.3.2 github.com/fernet/fernet-go 9eac43b88a5efb8651d24de9b68e87567e029736 github.com/google/certificate-transparency-go 37a384cd035e722ea46e55029093e26687138edf # v1.0.20 golang.org/x/crypto 0c34fe9e7dc2486962ef9867e3edb3503537209f golang.org/x/time 
3af7569d3a1e776fc2a3c1cec133b43105ea9c2e github.com/hashicorp/go-memdb cb9a474f84cc5e41b273b20c6927680b2a8776ad github.com/hashicorp/go-immutable-radix 826af9ccf0feeee615d546d69b11f8e98da8c8f1 git://github.com/tonistiigi/go-immutable-radix.git github.com/hashicorp/golang-lru 7f827b33c0f158ec5dfbba01bb0b14a4541fd81d # v0.5.3 github.com/coreos/pkg 97fdf19511ea361ae1c100dd393cc47f8dcfa1e1 # v4 code.cloudfoundry.org/clock 02e53af36e6c978af692887ed449b74026d76fec # v1.0.0 # prometheus github.com/prometheus/client_golang 6edbbd9e560190e318cdc5b4d3e630b442858380 # v1.6.0 github.com/beorn7/perks 37c8de3658fcb183f997c4e13e8337516ab753e6 # v1.0.1 github.com/prometheus/client_model 7bc5445566f0fe75b15de23e6b93886e982d7bf9 # v0.2.0 github.com/prometheus/common d978bcb1309602d68bb4ba69cf3f8ed900e07308 # v0.9.1 github.com/prometheus/procfs 46159f73e74d1cb8dc223deef9b2d049286f46b1 # v0.0.11 github.com/matttproud/golang_protobuf_extensions c12348ce28de40eed0136aa2b644d0ee0650e56c # v1.0.1 github.com/pkg/errors 614d223910a179a466c1767a985424175c39b465 # v0.9.1 github.com/grpc-ecosystem/go-grpc-prometheus c225b8c3b01faf2899099b768856a9e916e5087b # v1.2.0 github.com/cespare/xxhash/v2 d7df74196a9e781ede915320c11c378c1b2f3a1f # v2.1.1 # cli github.com/spf13/cobra 8380ddd3132bdf8fd77731725b550c181dda0aa8 # v1.1.3 github.com/spf13/pflag 2e9d26c8c37aae03e3f9d4e90b7116f5accb7cab # v1.0.5 github.com/inconshreveable/mousetrap 76626ae9c91c4f2a10f34cad8ce83ea42c93bb75 # v1.0.0 github.com/morikuni/aec 39771216ff4c63d11f5e604076f9c45e8be1067b # v1.0.0 # metrics github.com/docker/go-metrics b619b3592b65de4f087d9f16863a7e6ff905973c # v0.0.1 github.com/opencontainers/selinux 76bc82e11d854d3e40c08889d13c98abcea72ea2 # v1.8.2 github.com/bits-and-blooms/bitset 59de210119f50cedaa42d175dc88b6335fcf63f6 # v1.2.0 # archive/tar # rm -rf vendor/archive # mkdir -p ./vendor/archive # git clone -b go$GOLANG_VERSION --depth=1 git://github.com/golang/go.git ./go # git --git-dir ./go/.git --work-tree ./go am ../patches/0001-archive-tar-do-not-populate-user-group-names.patch # cp -a go/src/archive/tar ./vendor/archive/tar # rm -rf ./go # vndr -whitelist=^archive/tar # DO NOT EDIT BELOW THIS LINE -------- reserved for downstream projects --------
github.com/Azure/go-ansiterm d185dfc1b5a126116ea5a19e148e29d16b4574c9 github.com/Microsoft/hcsshim 3ad51c76263bad09548a40e1996960814a12a870 # v0.8.20 github.com/Microsoft/go-winio 5c2e05d71961716a6c392a06ada435aaf5d5302c # v0.4.19 github.com/docker/libtrust 9cbd2a1374f46905c68a4eb3694a130610adc62a github.com/golang/gddo 72a348e765d293ed6d1ded7b699591f14d6cd921 github.com/google/uuid 0cd6bf5da1e1c83f8b45653022c74f71af0538a4 # v1.1.1 github.com/gorilla/mux 98cb6bf42e086f6af920b965c38cacc07402d51b # v1.8.0 github.com/moby/locker 281af2d563954745bea9d1487c965f24d30742fe # v1.0.1 github.com/moby/term 3f7ff695adc6a35abc925370dd0a4dafb48ec64d # Note that this dependency uses submodules, providing the github.com/moby/sys/mount, # github.com/moby/sys/mountinfo, and github.com/moby/sys/symlink modules. Our vendoring # tool (vndr) currently does not support submodules / vendoring sub-paths, so we vendor # the top-level moby/sys repository (which contains both) and pick the most recent tag, # which could be either `mountinfo/vX.Y.Z`, `mount/vX.Y.Z`, or `symlink/vX.Y.Z`. github.com/moby/sys b0f1fd7235275d01bd35cc4421e884e522395f45 # mountinfo/v0.4.1 github.com/creack/pty 2a38352e8b4d7ab6c336eef107e42a55e72e7fbc # v1.1.11 github.com/sirupsen/logrus bdc0db8ead3853c56b7cd1ac2ba4e11b47d7da6b # v1.8.1 github.com/tchap/go-patricia a7f0089c6f496e8e70402f61733606daa326cac5 # v2.3.0 golang.org/x/net e18ecbb051101a46fc263334b127c89bc7bff7ea golang.org/x/sys d19ff857e887eacb631721f188c7d365c2331456 github.com/docker/go-units 519db1ee28dcc9fd2474ae59fca29a810482bfb1 # v0.4.0 github.com/docker/go-connections 7395e3f8aa162843a74ed6d48e79627d9792ac55 # v0.4.0 golang.org/x/text 23ae387dee1f90d29a23c0e87ee0b46038fbed0e # v0.3.3 gotest.tools/v3 568bc57cc5c19a2ef85e5749870b49a4cc2ab54d # v3.0.3 github.com/google/go-cmp 3af367b6b30c263d47e8895973edcca9a49cf029 # v0.2.0 github.com/syndtr/gocapability 42c35b4376354fd554efc7ad35e0b7f94e3a0ffb github.com/RackSec/srslog a4725f04ec91af1a91b380da679d6e0c2f061e59 github.com/imdario/mergo 1afb36080aec31e0d1528973ebe6721b191b0369 # v0.3.8 golang.org/x/sync 036812b2e83c0ddf193dd5a34e034151da389d09 # buildkit github.com/moby/buildkit 9f254e18360a24c2ae47b26f772c3c89533bcbb7 # master / v0.9.0-dev github.com/tonistiigi/fsutil d72af97c0eaf93c1d20360e3cb9c63c223675b83 github.com/tonistiigi/units 6950e57a87eaf136bbe44ef2ec8e75b9e3569de2 github.com/grpc-ecosystem/grpc-opentracing 8e809c8a86450a29b90dcc9efbf062d0fe6d9746 github.com/opentracing/opentracing-go d34af3eaa63c4d08ab54863a4bdd0daa45212e12 # v1.2.0 github.com/google/shlex e7afc7fbc51079733e9468cdfd1efcd7d196cd1d github.com/opentracing-contrib/go-stdlib 8a6ff1ad1691a29e4f7b5d46604f97634997c8c4 # v1.0.0 github.com/mitchellh/hashstructure a38c50148365edc8df43c1580c48fb2b3a1e9cd7 # v1.0.0 github.com/gofrs/flock 6caa7350c26b838538005fae7dbee4e69d9398db # v0.7.3 github.com/grpc-ecosystem/go-grpc-middleware 3c51f7f332123e8be5a157c0802a228ac85bf9db # v1.2.0 # libnetwork github.com/docker/go-events e31b211e4f1cd09aa76fe4ac244571fab96ae47f github.com/armon/go-radix e39d623f12e8e41c7b5529e9a9dd67a1e2261f80 github.com/armon/go-metrics eb0af217e5e9747e41dd5303755356b62d28e3ec github.com/hashicorp/go-msgpack 71c2886f5a673a35f909803f38ece5810165097b github.com/hashicorp/memberlist 619135cdd9e5dda8c12f8ceef39bdade4f5899b6 # v0.2.4 github.com/sean-/seed e2103e2c35297fb7e17febb81e49b312087a2372 github.com/hashicorp/errwrap 8a6fb523712970c966eefc6b39ed2c5e74880354 # v1.0.0 github.com/hashicorp/go-sockaddr c7188e74f6acae5a989bdc959aa779f8b9f42faf # 
v1.0.2 github.com/hashicorp/go-multierror 886a7fbe3eb1c874d46f623bfa70af45f425b3d1 # v1.0.0 github.com/hashicorp/serf 598c54895cc5a7b1a24a398d635e8c0ea0959870 github.com/docker/libkv 458977154600b9f23984d9f4b82e79570b5ae12b github.com/vishvananda/netns db3c7e526aae966c4ccfa6c8189b693d6ac5d202 github.com/vishvananda/netlink f049be6f391489d3f374498fe0c8df8449258372 # v1.1.0 github.com/moby/ipvs 4566ccea0e08d68e9614c3e7a64a23b850c4bb35 # v1.0.1 github.com/google/btree 479b5e81b0a93ec038d201b0b33d17db599531d3 # v1.0.1 github.com/samuel/go-zookeeper d0e0d8e11f318e000a8cc434616d69e329edc374 github.com/deckarep/golang-set ef32fa3046d9f249d399f98ebaf9be944430fd1d github.com/coreos/etcd 2c834459e1aab78a5d5219c7dfe42335fc4b617a # v3.3.25 github.com/coreos/go-semver 8ab6407b697782a06568d4b7f1db25550ec2e4c6 # v0.2.0 github.com/hashicorp/consul 9a9cc9341bb487651a0399e3fc5e1e8a42e62dd9 # v0.5.2 github.com/miekg/dns 6c0c4e6581f8e173cc562c8b3363ab984e4ae071 # v1.1.27 github.com/ishidawataru/sctp f2269e66cdee387bd321445d5d300893449805be go.etcd.io/bbolt 232d8fc87f50244f9c808f4745759e08a304c029 # v1.3.5 github.com/json-iterator/go a1ca0830781e007c66b225121d2cdb3a649421f6 # v1.1.10 github.com/modern-go/concurrent bacd9c7ef1dd9b15be4a9909b8ac7a4e313eec94 # 1.0.3 github.com/modern-go/reflect2 94122c33edd36123c84d5368cfb2b69df93a0ec8 # v1.0.1 # get graph and distribution packages github.com/docker/distribution 0d3efadf0154c2b8a4e7b6621fff9809655cc580 github.com/vbatts/tar-split 620714a4c508c880ac1bdda9c8370a2b19af1a55 # v0.11.1 github.com/opencontainers/go-digest ea51bea511f75cfa3ef6098cc253c5c3609b037a # v1.0.0 # get go-zfs packages github.com/mistifyio/go-zfs f784269be439d704d3dfa1906f45dd848fed2beb google.golang.org/grpc f495f5b15ae7ccda3b38c53a1bfcde4c1a58a2bc # v1.27.1 # The version of runc should match the version that is used by the containerd # version that is used. If you need to update runc, open a pull request in # the containerd project first, and update both after that is merged. # This commit does not need to match RUNC_COMMIT as it is used for helper # packages but should be newer or equal. 
github.com/opencontainers/runc 4144b63817ebcc5b358fc2c8ef95f7cddd709aa7 # v1.0.1 github.com/opencontainers/runtime-spec 1c3f411f041711bbeecf35ff7e93461ea6789220 # v1.0.3-0.20210326190908-1c3f411f0417 github.com/opencontainers/image-spec d60099175f88c47cd379c4738d158884749ed235 # v1.0.1 github.com/cyphar/filepath-securejoin a261ee33d7a517f054effbf451841abaafe3e0fd # v0.2.2 # go-systemd v17 is required by github.com/coreos/pkg/capnslog/journald_formatter.go github.com/coreos/go-systemd 39ca1b05acc7ad1220e09f133283b8859a8b71ab # v17 # systemd integration (journald, daemon/listeners, containerd/cgroups) github.com/coreos/go-systemd/v22 777e73a89cef78631ccaa97f53a9bae67e166186 # v22.3.2 github.com/godbus/dbus/v5 c88335c0b1d28a30e7fc76d526a06154b85e5d97 # v5.0.4 # gelf logging driver deps github.com/Graylog2/go-gelf 1550ee647df0510058c9d67a45c56f18911d80b8 # v2 branch # fluent-logger-golang deps github.com/fluent/fluent-logger-golang b9b7fb02ccfee8ba4e69aa87386820c2bf24fd11 # v1.6.1 github.com/philhofer/fwd bb6d471dc95d4fe11e432687f8b70ff496cf3136 # v1.0.0 github.com/tinylib/msgp af6442a0fcf6e2a1b824f70dd0c734f01e817751 # v1.1.0 # fsnotify github.com/fsnotify/fsnotify 45d7d09e39ef4ac08d493309fa031790c15bfe8a # v1.4.9 # awslogs deps github.com/aws/aws-sdk-go 2590bc875c54c9fda225d8e4e56a9d28d90c6a47 # v1.28.11 github.com/jmespath/go-jmespath 2d053f87d1d7f9f48196ae04cf3daea4273d207d # v0.3.0 # logentries github.com/bsphere/le_go 7a984a84b5492ae539b79b62fb4a10afc63c7bcf # gcplogs deps golang.org/x/oauth2 bf48bf16ab8d622ce64ec6ce98d2c98f916b6303 google.golang.org/api dec2ee309f5b09fc59bc40676447c15736284d78 # v0.8.0 github.com/golang/groupcache 869f871628b6baa9cfbc11732cdf6546b17c1298 go.opencensus.io d835ff86be02193d324330acdb7d65546b05f814 # v0.22.3 cloud.google.com/go ceeb313ad77b789a7fa5287b36a1d127b69b7093 # v0.44.3 github.com/googleapis/gax-go bd5b16380fd03dc758d11cef74ba2e3bc8b0e8c2 # v2.0.5 google.golang.org/genproto 3f1135a288c9a07e340ae8ba4cc6c7065a3160e8 # containerd github.com/containerd/containerd 69107e47a62e1d690afa2b9b1d43f8ece3ff4483 # v1.5.4 github.com/containerd/fifo 650e8a8a179d040123db61f016cb133143e7a581 # v1.0.0 github.com/containerd/continuity bce1c3f9669b6f3e7f6656ee715b0b4d75fa64a6 # v0.1.0 github.com/containerd/cgroups b9de8a2212026c07cec67baf3323f1fc0121e048 # v1.0.1 github.com/containerd/console 2f1e3d2b6afd18e8b2077816c711205a0b4d8769 # v1.0.2 github.com/containerd/go-runc 16b287bc67d069a60fa48db15f330b790b74365b # v1.0.0 github.com/containerd/typeurl 5e43fb8b75ed2f2305fc04e6918c8d10636771bc # v1.0.2 github.com/containerd/ttrpc bfba540dc45464586c106b1f31c8547933c1eb41 # v1.0.2 github.com/gogo/googleapis 01e0f9cca9b92166042241267ee2a5cdf5cff46c # v1.3.2 github.com/cilium/ebpf ca492085341e0e917f48ec30704d5054c5d42ca8 # v0.6.2 github.com/klauspost/compress a3b7545c88eea469c2246bee0e6c130525d56190 # v1.11.13 github.com/pelletier/go-toml 65ca8064882c8c308e5c804c5d5443d409e0738c # v1.8.1 # cluster github.com/docker/swarmkit 2dcf70aafdc9ea55af3aaaeca440638cde0ecda6 # master github.com/gogo/protobuf b03c65ea87cdc3521ede29f62fe3ce239267c1bc # v1.3.2 github.com/golang/protobuf 84668698ea25b64748563aa20726db66a6b8d299 # v1.3.5 github.com/cloudflare/cfssl 5d63dbd981b5c408effbb58c442d54761ff94fbd # 1.3.2 github.com/fernet/fernet-go 9eac43b88a5efb8651d24de9b68e87567e029736 github.com/google/certificate-transparency-go 37a384cd035e722ea46e55029093e26687138edf # v1.0.20 golang.org/x/crypto 0c34fe9e7dc2486962ef9867e3edb3503537209f golang.org/x/time 
3af7569d3a1e776fc2a3c1cec133b43105ea9c2e github.com/hashicorp/go-memdb cb9a474f84cc5e41b273b20c6927680b2a8776ad github.com/hashicorp/go-immutable-radix 826af9ccf0feeee615d546d69b11f8e98da8c8f1 git://github.com/tonistiigi/go-immutable-radix.git github.com/hashicorp/golang-lru 7f827b33c0f158ec5dfbba01bb0b14a4541fd81d # v0.5.3 github.com/coreos/pkg 97fdf19511ea361ae1c100dd393cc47f8dcfa1e1 # v4 code.cloudfoundry.org/clock 02e53af36e6c978af692887ed449b74026d76fec # v1.0.0 # prometheus github.com/prometheus/client_golang 6edbbd9e560190e318cdc5b4d3e630b442858380 # v1.6.0 github.com/beorn7/perks 37c8de3658fcb183f997c4e13e8337516ab753e6 # v1.0.1 github.com/prometheus/client_model 7bc5445566f0fe75b15de23e6b93886e982d7bf9 # v0.2.0 github.com/prometheus/common d978bcb1309602d68bb4ba69cf3f8ed900e07308 # v0.9.1 github.com/prometheus/procfs 46159f73e74d1cb8dc223deef9b2d049286f46b1 # v0.0.11 github.com/matttproud/golang_protobuf_extensions c12348ce28de40eed0136aa2b644d0ee0650e56c # v1.0.1 github.com/pkg/errors 614d223910a179a466c1767a985424175c39b465 # v0.9.1 github.com/grpc-ecosystem/go-grpc-prometheus c225b8c3b01faf2899099b768856a9e916e5087b # v1.2.0 github.com/cespare/xxhash/v2 d7df74196a9e781ede915320c11c378c1b2f3a1f # v2.1.1 # cli github.com/spf13/cobra 8380ddd3132bdf8fd77731725b550c181dda0aa8 # v1.1.3 github.com/spf13/pflag 2e9d26c8c37aae03e3f9d4e90b7116f5accb7cab # v1.0.5 github.com/inconshreveable/mousetrap 76626ae9c91c4f2a10f34cad8ce83ea42c93bb75 # v1.0.0 github.com/morikuni/aec 39771216ff4c63d11f5e604076f9c45e8be1067b # v1.0.0 # metrics github.com/docker/go-metrics b619b3592b65de4f087d9f16863a7e6ff905973c # v0.0.1 github.com/opencontainers/selinux 76bc82e11d854d3e40c08889d13c98abcea72ea2 # v1.8.2 github.com/bits-and-blooms/bitset 59de210119f50cedaa42d175dc88b6335fcf63f6 # v1.2.0 # archive/tar # rm -rf vendor/archive # mkdir -p ./vendor/archive # git clone -b go$GOLANG_VERSION --depth=1 git://github.com/golang/go.git ./go # git --git-dir ./go/.git --work-tree ./go am ../patches/0001-archive-tar-do-not-populate-user-group-names.patch # cp -a go/src/archive/tar ./vendor/archive/tar # rm -rf ./go # vndr -whitelist=^archive/tar # DO NOT EDIT BELOW THIS LINE -------- reserved for downstream projects --------
thaJeztah
9a6ff685a80639995311c0572d662360b55796d9
471fd27709777d2cce3251129887e14e8bb2e0c7
Whoops. thx; fixed!
thaJeztah
4,536
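The headline change in the vendor.conf record above (PR 42,656, "Update containerd v1.5.4") is the containerd pin. The two lines below are extracted verbatim from the flattened before_content and after_content fields of that record, shown here only so the actual diff is readable; nothing else is assumed:

    github.com/containerd/containerd 0e8719f54c6dc6571fc1170da75a85e86c17636b # v1.5.3   (before_content)
    github.com/containerd/containerd 69107e47a62e1d690afa2b9b1d43f8ece3ff4483 # v1.5.4   (after_content)

The runc pin reads 4144b63817ebcc5b358fc2c8ef95f7cddd709aa7 # v1.0.1 in both blobs, consistent with the in-file note that the runc version should match the one used by the containerd release.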
moby/moby
42,649
seccomp: Use explicit DefaultErrnoRet
Since commit "seccomp: Sync fields with runtime-spec fields" (5d244675bdb23e8fce427036c03517243f344cd4) we support to specify the DefaultErrnoRet to be used. Before that commit it was not specified and EPERM was used by default. This commit keeps the same behaviour but just makes it explicit that the default is EPERM. Signed-off-by: Rodrigo Campos <[email protected]> This a follow-up of https://github.com/moby/moby/pull/42604#issuecomment-876632771, as suggested by @thaJeztah. This just adds an explicit default to EPERM. Right now the default is EPERM but is implicit. <!-- Please make sure you've read and understood our contributing guidelines; https://github.com/moby/moby/blob/master/CONTRIBUTING.md ** Make sure all your commits include a signature generated with `git commit -s` ** For additional information on our contributing process, read our contributing guide https://docs.docker.com/opensource/code/ If this is a bug fix, make sure your description includes "fixes #xxxx", or "closes #xxxx" Please provide the following information: --> **- What I did** Added the field `DefaultErrnoRet` to seccomp profiles **- How I did it** Added the field to default profile in `profiles/seccomp/default_linux.go`, then adjusted the tests to expect this new field **- How to verify it** You can run unit tests with: `TESTDIRS='github.com/docker/docker/profiles/seccomp' make test-unit`. You can also run `hack/validate/default-seccomp` to verify nothing was missing in the changes. **- Description for the changelog** <!-- Write a short (one line) summary that describes the changes in this pull request for inclusion in the changelog: --> Add an explicit DefaultErrnoRet field in the default seccomp profile. No behavior change. **- A picture of a cute animal (not mandatory but encouraged)** ![](https://www.kinderundjugendmedien.de/images/Ratatouille_pixar.jpg)
null
2021-07-16 16:36:59+00:00
2021-08-03 13:13:42+00:00
profiles/seccomp/default_linux.go
// +build seccomp package seccomp // import "github.com/docker/docker/profiles/seccomp" import ( "github.com/opencontainers/runtime-spec/specs-go" "golang.org/x/sys/unix" ) func arches() []Architecture { return []Architecture{ { Arch: specs.ArchX86_64, SubArches: []specs.Arch{specs.ArchX86, specs.ArchX32}, }, { Arch: specs.ArchAARCH64, SubArches: []specs.Arch{specs.ArchARM}, }, { Arch: specs.ArchMIPS64, SubArches: []specs.Arch{specs.ArchMIPS, specs.ArchMIPS64N32}, }, { Arch: specs.ArchMIPS64N32, SubArches: []specs.Arch{specs.ArchMIPS, specs.ArchMIPS64}, }, { Arch: specs.ArchMIPSEL64, SubArches: []specs.Arch{specs.ArchMIPSEL, specs.ArchMIPSEL64N32}, }, { Arch: specs.ArchMIPSEL64N32, SubArches: []specs.Arch{specs.ArchMIPSEL, specs.ArchMIPSEL64}, }, { Arch: specs.ArchS390X, SubArches: []specs.Arch{specs.ArchS390}, }, } } // DefaultProfile defines the allowed syscalls for the default seccomp profile. func DefaultProfile() *Seccomp { nosys := uint(unix.ENOSYS) syscalls := []*Syscall{ { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "accept", "accept4", "access", "adjtimex", "alarm", "bind", "brk", "capget", "capset", "chdir", "chmod", "chown", "chown32", "clock_adjtime", "clock_adjtime64", "clock_getres", "clock_getres_time64", "clock_gettime", "clock_gettime64", "clock_nanosleep", "clock_nanosleep_time64", "close", "close_range", "connect", "copy_file_range", "creat", "dup", "dup2", "dup3", "epoll_create", "epoll_create1", "epoll_ctl", "epoll_ctl_old", "epoll_pwait", "epoll_pwait2", "epoll_wait", "epoll_wait_old", "eventfd", "eventfd2", "execve", "execveat", "exit", "exit_group", "faccessat", "faccessat2", "fadvise64", "fadvise64_64", "fallocate", "fanotify_mark", "fchdir", "fchmod", "fchmodat", "fchown", "fchown32", "fchownat", "fcntl", "fcntl64", "fdatasync", "fgetxattr", "flistxattr", "flock", "fork", "fremovexattr", "fsetxattr", "fstat", "fstat64", "fstatat64", "fstatfs", "fstatfs64", "fsync", "ftruncate", "ftruncate64", "futex", "futex_time64", "futimesat", "getcpu", "getcwd", "getdents", "getdents64", "getegid", "getegid32", "geteuid", "geteuid32", "getgid", "getgid32", "getgroups", "getgroups32", "getitimer", "getpeername", "getpgid", "getpgrp", "getpid", "getppid", "getpriority", "getrandom", "getresgid", "getresgid32", "getresuid", "getresuid32", "getrlimit", "get_robust_list", "getrusage", "getsid", "getsockname", "getsockopt", "get_thread_area", "gettid", "gettimeofday", "getuid", "getuid32", "getxattr", "inotify_add_watch", "inotify_init", "inotify_init1", "inotify_rm_watch", "io_cancel", "ioctl", "io_destroy", "io_getevents", "io_pgetevents", "io_pgetevents_time64", "ioprio_get", "ioprio_set", "io_setup", "io_submit", "io_uring_enter", "io_uring_register", "io_uring_setup", "ipc", "kill", "lchown", "lchown32", "lgetxattr", "link", "linkat", "listen", "listxattr", "llistxattr", "_llseek", "lremovexattr", "lseek", "lsetxattr", "lstat", "lstat64", "madvise", "membarrier", "memfd_create", "mincore", "mkdir", "mkdirat", "mknod", "mknodat", "mlock", "mlock2", "mlockall", "mmap", "mmap2", "mprotect", "mq_getsetattr", "mq_notify", "mq_open", "mq_timedreceive", "mq_timedreceive_time64", "mq_timedsend", "mq_timedsend_time64", "mq_unlink", "mremap", "msgctl", "msgget", "msgrcv", "msgsnd", "msync", "munlock", "munlockall", "munmap", "nanosleep", "newfstatat", "_newselect", "open", "openat", "openat2", "pause", "pidfd_open", "pidfd_send_signal", "pipe", "pipe2", "poll", "ppoll", "ppoll_time64", "prctl", "pread64", "preadv", "preadv2", "prlimit64", "pselect6", "pselect6_time64", "pwrite64", 
"pwritev", "pwritev2", "read", "readahead", "readlink", "readlinkat", "readv", "recv", "recvfrom", "recvmmsg", "recvmmsg_time64", "recvmsg", "remap_file_pages", "removexattr", "rename", "renameat", "renameat2", "restart_syscall", "rmdir", "rseq", "rt_sigaction", "rt_sigpending", "rt_sigprocmask", "rt_sigqueueinfo", "rt_sigreturn", "rt_sigsuspend", "rt_sigtimedwait", "rt_sigtimedwait_time64", "rt_tgsigqueueinfo", "sched_getaffinity", "sched_getattr", "sched_getparam", "sched_get_priority_max", "sched_get_priority_min", "sched_getscheduler", "sched_rr_get_interval", "sched_rr_get_interval_time64", "sched_setaffinity", "sched_setattr", "sched_setparam", "sched_setscheduler", "sched_yield", "seccomp", "select", "semctl", "semget", "semop", "semtimedop", "semtimedop_time64", "send", "sendfile", "sendfile64", "sendmmsg", "sendmsg", "sendto", "setfsgid", "setfsgid32", "setfsuid", "setfsuid32", "setgid", "setgid32", "setgroups", "setgroups32", "setitimer", "setpgid", "setpriority", "setregid", "setregid32", "setresgid", "setresgid32", "setresuid", "setresuid32", "setreuid", "setreuid32", "setrlimit", "set_robust_list", "setsid", "setsockopt", "set_thread_area", "set_tid_address", "setuid", "setuid32", "setxattr", "shmat", "shmctl", "shmdt", "shmget", "shutdown", "sigaltstack", "signalfd", "signalfd4", "sigprocmask", "sigreturn", "socket", "socketcall", "socketpair", "splice", "stat", "stat64", "statfs", "statfs64", "statx", "symlink", "symlinkat", "sync", "sync_file_range", "syncfs", "sysinfo", "tee", "tgkill", "time", "timer_create", "timer_delete", "timer_getoverrun", "timer_gettime", "timer_gettime64", "timer_settime", "timer_settime64", "timerfd_create", "timerfd_gettime", "timerfd_gettime64", "timerfd_settime", "timerfd_settime64", "times", "tkill", "truncate", "truncate64", "ugetrlimit", "umask", "uname", "unlink", "unlinkat", "utime", "utimensat", "utimensat_time64", "utimes", "vfork", "vmsplice", "wait4", "waitid", "waitpid", "write", "writev", }, Action: specs.ActAllow, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "process_vm_readv", "process_vm_writev", "ptrace", }, Action: specs.ActAllow, }, Includes: &Filter{ MinKernel: &KernelVersion{4, 8}, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{"personality"}, Action: specs.ActAllow, Args: []specs.LinuxSeccompArg{ { Index: 0, Value: 0x0, Op: specs.OpEqualTo, }, }, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{"personality"}, Action: specs.ActAllow, Args: []specs.LinuxSeccompArg{ { Index: 0, Value: 0x0008, Op: specs.OpEqualTo, }, }, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{"personality"}, Action: specs.ActAllow, Args: []specs.LinuxSeccompArg{ { Index: 0, Value: 0x20000, Op: specs.OpEqualTo, }, }, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{"personality"}, Action: specs.ActAllow, Args: []specs.LinuxSeccompArg{ { Index: 0, Value: 0x20008, Op: specs.OpEqualTo, }, }, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{"personality"}, Action: specs.ActAllow, Args: []specs.LinuxSeccompArg{ { Index: 0, Value: 0xffffffff, Op: specs.OpEqualTo, }, }, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "sync_file_range2", }, Action: specs.ActAllow, }, Includes: &Filter{ Arches: []string{"ppc64le"}, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "arm_fadvise64_64", "arm_sync_file_range", "sync_file_range2", "breakpoint", "cacheflush", "set_tls", }, Action: specs.ActAllow, }, Includes: &Filter{ Arches: []string{"arm", "arm64"}, }, }, { LinuxSyscall: 
specs.LinuxSyscall{ Names: []string{ "arch_prctl", }, Action: specs.ActAllow, }, Includes: &Filter{ Arches: []string{"amd64", "x32"}, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "modify_ldt", }, Action: specs.ActAllow, }, Includes: &Filter{ Arches: []string{"amd64", "x32", "x86"}, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "s390_pci_mmio_read", "s390_pci_mmio_write", "s390_runtime_instr", }, Action: specs.ActAllow, }, Includes: &Filter{ Arches: []string{"s390", "s390x"}, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "open_by_handle_at", }, Action: specs.ActAllow, }, Includes: &Filter{ Caps: []string{"CAP_DAC_READ_SEARCH"}, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "bpf", "clone", "clone3", "fanotify_init", "fsconfig", "fsmount", "fsopen", "fspick", "lookup_dcookie", "mount", "move_mount", "name_to_handle_at", "open_tree", "perf_event_open", "quotactl", "setdomainname", "sethostname", "setns", "syslog", "umount", "umount2", "unshare", }, Action: specs.ActAllow, }, Includes: &Filter{ Caps: []string{"CAP_SYS_ADMIN"}, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "clone", }, Action: specs.ActAllow, Args: []specs.LinuxSeccompArg{ { Index: 0, Value: unix.CLONE_NEWNS | unix.CLONE_NEWUTS | unix.CLONE_NEWIPC | unix.CLONE_NEWUSER | unix.CLONE_NEWPID | unix.CLONE_NEWNET | unix.CLONE_NEWCGROUP, ValueTwo: 0, Op: specs.OpMaskedEqual, }, }, }, Excludes: &Filter{ Caps: []string{"CAP_SYS_ADMIN"}, Arches: []string{"s390", "s390x"}, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "clone", }, Action: specs.ActAllow, Args: []specs.LinuxSeccompArg{ { Index: 1, Value: unix.CLONE_NEWNS | unix.CLONE_NEWUTS | unix.CLONE_NEWIPC | unix.CLONE_NEWUSER | unix.CLONE_NEWPID | unix.CLONE_NEWNET | unix.CLONE_NEWCGROUP, ValueTwo: 0, Op: specs.OpMaskedEqual, }, }, }, Comment: "s390 parameter ordering for clone is different", Includes: &Filter{ Arches: []string{"s390", "s390x"}, }, Excludes: &Filter{ Caps: []string{"CAP_SYS_ADMIN"}, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "clone3", }, Action: specs.ActErrno, ErrnoRet: &nosys, }, Excludes: &Filter{ Caps: []string{"CAP_SYS_ADMIN"}, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "reboot", }, Action: specs.ActAllow, }, Includes: &Filter{ Caps: []string{"CAP_SYS_BOOT"}, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "chroot", }, Action: specs.ActAllow, }, Includes: &Filter{ Caps: []string{"CAP_SYS_CHROOT"}, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "delete_module", "init_module", "finit_module", }, Action: specs.ActAllow, }, Includes: &Filter{ Caps: []string{"CAP_SYS_MODULE"}, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "acct", }, Action: specs.ActAllow, }, Includes: &Filter{ Caps: []string{"CAP_SYS_PACCT"}, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "kcmp", "pidfd_getfd", "process_madvise", "process_vm_readv", "process_vm_writev", "ptrace", }, Action: specs.ActAllow, }, Includes: &Filter{ Caps: []string{"CAP_SYS_PTRACE"}, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "iopl", "ioperm", }, Action: specs.ActAllow, }, Includes: &Filter{ Caps: []string{"CAP_SYS_RAWIO"}, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "settimeofday", "stime", "clock_settime", }, Action: specs.ActAllow, }, Includes: &Filter{ Caps: []string{"CAP_SYS_TIME"}, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "vhangup", }, Action: specs.ActAllow, }, Includes: &Filter{ Caps: 
[]string{"CAP_SYS_TTY_CONFIG"}, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "get_mempolicy", "mbind", "set_mempolicy", }, Action: specs.ActAllow, }, Includes: &Filter{ Caps: []string{"CAP_SYS_NICE"}, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "syslog", }, Action: specs.ActAllow, }, Includes: &Filter{ Caps: []string{"CAP_SYSLOG"}, }, }, } return &Seccomp{ LinuxSeccomp: specs.LinuxSeccomp{ DefaultAction: specs.ActErrno, }, ArchMap: arches(), Syscalls: syscalls, } }
// +build seccomp package seccomp // import "github.com/docker/docker/profiles/seccomp" import ( "github.com/opencontainers/runtime-spec/specs-go" "golang.org/x/sys/unix" ) func arches() []Architecture { return []Architecture{ { Arch: specs.ArchX86_64, SubArches: []specs.Arch{specs.ArchX86, specs.ArchX32}, }, { Arch: specs.ArchAARCH64, SubArches: []specs.Arch{specs.ArchARM}, }, { Arch: specs.ArchMIPS64, SubArches: []specs.Arch{specs.ArchMIPS, specs.ArchMIPS64N32}, }, { Arch: specs.ArchMIPS64N32, SubArches: []specs.Arch{specs.ArchMIPS, specs.ArchMIPS64}, }, { Arch: specs.ArchMIPSEL64, SubArches: []specs.Arch{specs.ArchMIPSEL, specs.ArchMIPSEL64N32}, }, { Arch: specs.ArchMIPSEL64N32, SubArches: []specs.Arch{specs.ArchMIPSEL, specs.ArchMIPSEL64}, }, { Arch: specs.ArchS390X, SubArches: []specs.Arch{specs.ArchS390}, }, } } // DefaultProfile defines the allowed syscalls for the default seccomp profile. func DefaultProfile() *Seccomp { nosys := uint(unix.ENOSYS) syscalls := []*Syscall{ { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "accept", "accept4", "access", "adjtimex", "alarm", "bind", "brk", "capget", "capset", "chdir", "chmod", "chown", "chown32", "clock_adjtime", "clock_adjtime64", "clock_getres", "clock_getres_time64", "clock_gettime", "clock_gettime64", "clock_nanosleep", "clock_nanosleep_time64", "close", "close_range", "connect", "copy_file_range", "creat", "dup", "dup2", "dup3", "epoll_create", "epoll_create1", "epoll_ctl", "epoll_ctl_old", "epoll_pwait", "epoll_pwait2", "epoll_wait", "epoll_wait_old", "eventfd", "eventfd2", "execve", "execveat", "exit", "exit_group", "faccessat", "faccessat2", "fadvise64", "fadvise64_64", "fallocate", "fanotify_mark", "fchdir", "fchmod", "fchmodat", "fchown", "fchown32", "fchownat", "fcntl", "fcntl64", "fdatasync", "fgetxattr", "flistxattr", "flock", "fork", "fremovexattr", "fsetxattr", "fstat", "fstat64", "fstatat64", "fstatfs", "fstatfs64", "fsync", "ftruncate", "ftruncate64", "futex", "futex_time64", "futimesat", "getcpu", "getcwd", "getdents", "getdents64", "getegid", "getegid32", "geteuid", "geteuid32", "getgid", "getgid32", "getgroups", "getgroups32", "getitimer", "getpeername", "getpgid", "getpgrp", "getpid", "getppid", "getpriority", "getrandom", "getresgid", "getresgid32", "getresuid", "getresuid32", "getrlimit", "get_robust_list", "getrusage", "getsid", "getsockname", "getsockopt", "get_thread_area", "gettid", "gettimeofday", "getuid", "getuid32", "getxattr", "inotify_add_watch", "inotify_init", "inotify_init1", "inotify_rm_watch", "io_cancel", "ioctl", "io_destroy", "io_getevents", "io_pgetevents", "io_pgetevents_time64", "ioprio_get", "ioprio_set", "io_setup", "io_submit", "io_uring_enter", "io_uring_register", "io_uring_setup", "ipc", "kill", "lchown", "lchown32", "lgetxattr", "link", "linkat", "listen", "listxattr", "llistxattr", "_llseek", "lremovexattr", "lseek", "lsetxattr", "lstat", "lstat64", "madvise", "membarrier", "memfd_create", "mincore", "mkdir", "mkdirat", "mknod", "mknodat", "mlock", "mlock2", "mlockall", "mmap", "mmap2", "mprotect", "mq_getsetattr", "mq_notify", "mq_open", "mq_timedreceive", "mq_timedreceive_time64", "mq_timedsend", "mq_timedsend_time64", "mq_unlink", "mremap", "msgctl", "msgget", "msgrcv", "msgsnd", "msync", "munlock", "munlockall", "munmap", "nanosleep", "newfstatat", "_newselect", "open", "openat", "openat2", "pause", "pidfd_open", "pidfd_send_signal", "pipe", "pipe2", "poll", "ppoll", "ppoll_time64", "prctl", "pread64", "preadv", "preadv2", "prlimit64", "pselect6", "pselect6_time64", "pwrite64", 
"pwritev", "pwritev2", "read", "readahead", "readlink", "readlinkat", "readv", "recv", "recvfrom", "recvmmsg", "recvmmsg_time64", "recvmsg", "remap_file_pages", "removexattr", "rename", "renameat", "renameat2", "restart_syscall", "rmdir", "rseq", "rt_sigaction", "rt_sigpending", "rt_sigprocmask", "rt_sigqueueinfo", "rt_sigreturn", "rt_sigsuspend", "rt_sigtimedwait", "rt_sigtimedwait_time64", "rt_tgsigqueueinfo", "sched_getaffinity", "sched_getattr", "sched_getparam", "sched_get_priority_max", "sched_get_priority_min", "sched_getscheduler", "sched_rr_get_interval", "sched_rr_get_interval_time64", "sched_setaffinity", "sched_setattr", "sched_setparam", "sched_setscheduler", "sched_yield", "seccomp", "select", "semctl", "semget", "semop", "semtimedop", "semtimedop_time64", "send", "sendfile", "sendfile64", "sendmmsg", "sendmsg", "sendto", "setfsgid", "setfsgid32", "setfsuid", "setfsuid32", "setgid", "setgid32", "setgroups", "setgroups32", "setitimer", "setpgid", "setpriority", "setregid", "setregid32", "setresgid", "setresgid32", "setresuid", "setresuid32", "setreuid", "setreuid32", "setrlimit", "set_robust_list", "setsid", "setsockopt", "set_thread_area", "set_tid_address", "setuid", "setuid32", "setxattr", "shmat", "shmctl", "shmdt", "shmget", "shutdown", "sigaltstack", "signalfd", "signalfd4", "sigprocmask", "sigreturn", "socket", "socketcall", "socketpair", "splice", "stat", "stat64", "statfs", "statfs64", "statx", "symlink", "symlinkat", "sync", "sync_file_range", "syncfs", "sysinfo", "tee", "tgkill", "time", "timer_create", "timer_delete", "timer_getoverrun", "timer_gettime", "timer_gettime64", "timer_settime", "timer_settime64", "timerfd_create", "timerfd_gettime", "timerfd_gettime64", "timerfd_settime", "timerfd_settime64", "times", "tkill", "truncate", "truncate64", "ugetrlimit", "umask", "uname", "unlink", "unlinkat", "utime", "utimensat", "utimensat_time64", "utimes", "vfork", "vmsplice", "wait4", "waitid", "waitpid", "write", "writev", }, Action: specs.ActAllow, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "process_vm_readv", "process_vm_writev", "ptrace", }, Action: specs.ActAllow, }, Includes: &Filter{ MinKernel: &KernelVersion{4, 8}, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{"personality"}, Action: specs.ActAllow, Args: []specs.LinuxSeccompArg{ { Index: 0, Value: 0x0, Op: specs.OpEqualTo, }, }, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{"personality"}, Action: specs.ActAllow, Args: []specs.LinuxSeccompArg{ { Index: 0, Value: 0x0008, Op: specs.OpEqualTo, }, }, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{"personality"}, Action: specs.ActAllow, Args: []specs.LinuxSeccompArg{ { Index: 0, Value: 0x20000, Op: specs.OpEqualTo, }, }, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{"personality"}, Action: specs.ActAllow, Args: []specs.LinuxSeccompArg{ { Index: 0, Value: 0x20008, Op: specs.OpEqualTo, }, }, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{"personality"}, Action: specs.ActAllow, Args: []specs.LinuxSeccompArg{ { Index: 0, Value: 0xffffffff, Op: specs.OpEqualTo, }, }, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "sync_file_range2", }, Action: specs.ActAllow, }, Includes: &Filter{ Arches: []string{"ppc64le"}, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "arm_fadvise64_64", "arm_sync_file_range", "sync_file_range2", "breakpoint", "cacheflush", "set_tls", }, Action: specs.ActAllow, }, Includes: &Filter{ Arches: []string{"arm", "arm64"}, }, }, { LinuxSyscall: 
specs.LinuxSyscall{ Names: []string{ "arch_prctl", }, Action: specs.ActAllow, }, Includes: &Filter{ Arches: []string{"amd64", "x32"}, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "modify_ldt", }, Action: specs.ActAllow, }, Includes: &Filter{ Arches: []string{"amd64", "x32", "x86"}, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "s390_pci_mmio_read", "s390_pci_mmio_write", "s390_runtime_instr", }, Action: specs.ActAllow, }, Includes: &Filter{ Arches: []string{"s390", "s390x"}, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "open_by_handle_at", }, Action: specs.ActAllow, }, Includes: &Filter{ Caps: []string{"CAP_DAC_READ_SEARCH"}, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "bpf", "clone", "clone3", "fanotify_init", "fsconfig", "fsmount", "fsopen", "fspick", "lookup_dcookie", "mount", "move_mount", "name_to_handle_at", "open_tree", "perf_event_open", "quotactl", "setdomainname", "sethostname", "setns", "syslog", "umount", "umount2", "unshare", }, Action: specs.ActAllow, }, Includes: &Filter{ Caps: []string{"CAP_SYS_ADMIN"}, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "clone", }, Action: specs.ActAllow, Args: []specs.LinuxSeccompArg{ { Index: 0, Value: unix.CLONE_NEWNS | unix.CLONE_NEWUTS | unix.CLONE_NEWIPC | unix.CLONE_NEWUSER | unix.CLONE_NEWPID | unix.CLONE_NEWNET | unix.CLONE_NEWCGROUP, ValueTwo: 0, Op: specs.OpMaskedEqual, }, }, }, Excludes: &Filter{ Caps: []string{"CAP_SYS_ADMIN"}, Arches: []string{"s390", "s390x"}, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "clone", }, Action: specs.ActAllow, Args: []specs.LinuxSeccompArg{ { Index: 1, Value: unix.CLONE_NEWNS | unix.CLONE_NEWUTS | unix.CLONE_NEWIPC | unix.CLONE_NEWUSER | unix.CLONE_NEWPID | unix.CLONE_NEWNET | unix.CLONE_NEWCGROUP, ValueTwo: 0, Op: specs.OpMaskedEqual, }, }, }, Comment: "s390 parameter ordering for clone is different", Includes: &Filter{ Arches: []string{"s390", "s390x"}, }, Excludes: &Filter{ Caps: []string{"CAP_SYS_ADMIN"}, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "clone3", }, Action: specs.ActErrno, ErrnoRet: &nosys, }, Excludes: &Filter{ Caps: []string{"CAP_SYS_ADMIN"}, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "reboot", }, Action: specs.ActAllow, }, Includes: &Filter{ Caps: []string{"CAP_SYS_BOOT"}, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "chroot", }, Action: specs.ActAllow, }, Includes: &Filter{ Caps: []string{"CAP_SYS_CHROOT"}, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "delete_module", "init_module", "finit_module", }, Action: specs.ActAllow, }, Includes: &Filter{ Caps: []string{"CAP_SYS_MODULE"}, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "acct", }, Action: specs.ActAllow, }, Includes: &Filter{ Caps: []string{"CAP_SYS_PACCT"}, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "kcmp", "pidfd_getfd", "process_madvise", "process_vm_readv", "process_vm_writev", "ptrace", }, Action: specs.ActAllow, }, Includes: &Filter{ Caps: []string{"CAP_SYS_PTRACE"}, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "iopl", "ioperm", }, Action: specs.ActAllow, }, Includes: &Filter{ Caps: []string{"CAP_SYS_RAWIO"}, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "settimeofday", "stime", "clock_settime", }, Action: specs.ActAllow, }, Includes: &Filter{ Caps: []string{"CAP_SYS_TIME"}, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "vhangup", }, Action: specs.ActAllow, }, Includes: &Filter{ Caps: 
[]string{"CAP_SYS_TTY_CONFIG"}, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "get_mempolicy", "mbind", "set_mempolicy", }, Action: specs.ActAllow, }, Includes: &Filter{ Caps: []string{"CAP_SYS_NICE"}, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "syslog", }, Action: specs.ActAllow, }, Includes: &Filter{ Caps: []string{"CAP_SYSLOG"}, }, }, } errnoRet := uint(unix.EPERM) return &Seccomp{ LinuxSeccomp: specs.LinuxSeccomp{ DefaultAction: specs.ActErrno, DefaultErrnoRet: &errnoRet, }, ArchMap: arches(), Syscalls: syscalls, } }
rata
7672963eec42e65e045f6eb745f5fe0682df5434
2480bebf59c25991ef88ba5229c7a2e65237510c
Would still be nice to use the constant, then there is no need for a comment.
cpuguy83
4,537
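In the seccomp record above (PR 42,649), the visible difference between before_content and after_content is the tail of DefaultProfile() in profiles/seccomp/default_linux.go. Reflowed (and lightly annotated) from the flattened after_content field, the returned profile now names the default errno explicitly rather than relying on the implicit EPERM described in the PR text:

	// explicit default: previously this field was omitted and EPERM applied implicitly
	errnoRet := uint(unix.EPERM)

	return &Seccomp{
		LinuxSeccomp: specs.LinuxSeccomp{
			DefaultAction:   specs.ActErrno,
			DefaultErrnoRet: &errnoRet,
		},
		ArchMap:  arches(),
		Syscalls: syscalls,
	}

Before this change the struct literal set only DefaultAction: specs.ActErrno, so EPERM was used by default without being spelled out in the profile.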
moby/moby
42,649
seccomp: Use explicit DefaultErrnoRet
Since commit "seccomp: Sync fields with runtime-spec fields" (5d244675bdb23e8fce427036c03517243f344cd4) we support to specify the DefaultErrnoRet to be used. Before that commit it was not specified and EPERM was used by default. This commit keeps the same behaviour but just makes it explicit that the default is EPERM. Signed-off-by: Rodrigo Campos <[email protected]> This a follow-up of https://github.com/moby/moby/pull/42604#issuecomment-876632771, as suggested by @thaJeztah. This just adds an explicit default to EPERM. Right now the default is EPERM but is implicit. <!-- Please make sure you've read and understood our contributing guidelines; https://github.com/moby/moby/blob/master/CONTRIBUTING.md ** Make sure all your commits include a signature generated with `git commit -s` ** For additional information on our contributing process, read our contributing guide https://docs.docker.com/opensource/code/ If this is a bug fix, make sure your description includes "fixes #xxxx", or "closes #xxxx" Please provide the following information: --> **- What I did** Added the field `DefaultErrnoRet` to seccomp profiles **- How I did it** Added the field to default profile in `profiles/seccomp/default_linux.go`, then adjusted the tests to expect this new field **- How to verify it** You can run unit tests with: `TESTDIRS='github.com/docker/docker/profiles/seccomp' make test-unit`. You can also run `hack/validate/default-seccomp` to verify nothing was missing in the changes. **- Description for the changelog** <!-- Write a short (one line) summary that describes the changes in this pull request for inclusion in the changelog: --> Add an explicit DefaultErrnoRet field in the default seccomp profile. No behavior change. **- A picture of a cute animal (not mandatory but encouraged)** ![](https://www.kinderundjugendmedien.de/images/Ratatouille_pixar.jpg)
null
2021-07-16 16:36:59+00:00
2021-08-03 13:13:42+00:00
profiles/seccomp/default_linux.go
// +build seccomp package seccomp // import "github.com/docker/docker/profiles/seccomp" import ( "github.com/opencontainers/runtime-spec/specs-go" "golang.org/x/sys/unix" ) func arches() []Architecture { return []Architecture{ { Arch: specs.ArchX86_64, SubArches: []specs.Arch{specs.ArchX86, specs.ArchX32}, }, { Arch: specs.ArchAARCH64, SubArches: []specs.Arch{specs.ArchARM}, }, { Arch: specs.ArchMIPS64, SubArches: []specs.Arch{specs.ArchMIPS, specs.ArchMIPS64N32}, }, { Arch: specs.ArchMIPS64N32, SubArches: []specs.Arch{specs.ArchMIPS, specs.ArchMIPS64}, }, { Arch: specs.ArchMIPSEL64, SubArches: []specs.Arch{specs.ArchMIPSEL, specs.ArchMIPSEL64N32}, }, { Arch: specs.ArchMIPSEL64N32, SubArches: []specs.Arch{specs.ArchMIPSEL, specs.ArchMIPSEL64}, }, { Arch: specs.ArchS390X, SubArches: []specs.Arch{specs.ArchS390}, }, } } // DefaultProfile defines the allowed syscalls for the default seccomp profile. func DefaultProfile() *Seccomp { nosys := uint(unix.ENOSYS) syscalls := []*Syscall{ { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "accept", "accept4", "access", "adjtimex", "alarm", "bind", "brk", "capget", "capset", "chdir", "chmod", "chown", "chown32", "clock_adjtime", "clock_adjtime64", "clock_getres", "clock_getres_time64", "clock_gettime", "clock_gettime64", "clock_nanosleep", "clock_nanosleep_time64", "close", "close_range", "connect", "copy_file_range", "creat", "dup", "dup2", "dup3", "epoll_create", "epoll_create1", "epoll_ctl", "epoll_ctl_old", "epoll_pwait", "epoll_pwait2", "epoll_wait", "epoll_wait_old", "eventfd", "eventfd2", "execve", "execveat", "exit", "exit_group", "faccessat", "faccessat2", "fadvise64", "fadvise64_64", "fallocate", "fanotify_mark", "fchdir", "fchmod", "fchmodat", "fchown", "fchown32", "fchownat", "fcntl", "fcntl64", "fdatasync", "fgetxattr", "flistxattr", "flock", "fork", "fremovexattr", "fsetxattr", "fstat", "fstat64", "fstatat64", "fstatfs", "fstatfs64", "fsync", "ftruncate", "ftruncate64", "futex", "futex_time64", "futimesat", "getcpu", "getcwd", "getdents", "getdents64", "getegid", "getegid32", "geteuid", "geteuid32", "getgid", "getgid32", "getgroups", "getgroups32", "getitimer", "getpeername", "getpgid", "getpgrp", "getpid", "getppid", "getpriority", "getrandom", "getresgid", "getresgid32", "getresuid", "getresuid32", "getrlimit", "get_robust_list", "getrusage", "getsid", "getsockname", "getsockopt", "get_thread_area", "gettid", "gettimeofday", "getuid", "getuid32", "getxattr", "inotify_add_watch", "inotify_init", "inotify_init1", "inotify_rm_watch", "io_cancel", "ioctl", "io_destroy", "io_getevents", "io_pgetevents", "io_pgetevents_time64", "ioprio_get", "ioprio_set", "io_setup", "io_submit", "io_uring_enter", "io_uring_register", "io_uring_setup", "ipc", "kill", "lchown", "lchown32", "lgetxattr", "link", "linkat", "listen", "listxattr", "llistxattr", "_llseek", "lremovexattr", "lseek", "lsetxattr", "lstat", "lstat64", "madvise", "membarrier", "memfd_create", "mincore", "mkdir", "mkdirat", "mknod", "mknodat", "mlock", "mlock2", "mlockall", "mmap", "mmap2", "mprotect", "mq_getsetattr", "mq_notify", "mq_open", "mq_timedreceive", "mq_timedreceive_time64", "mq_timedsend", "mq_timedsend_time64", "mq_unlink", "mremap", "msgctl", "msgget", "msgrcv", "msgsnd", "msync", "munlock", "munlockall", "munmap", "nanosleep", "newfstatat", "_newselect", "open", "openat", "openat2", "pause", "pidfd_open", "pidfd_send_signal", "pipe", "pipe2", "poll", "ppoll", "ppoll_time64", "prctl", "pread64", "preadv", "preadv2", "prlimit64", "pselect6", "pselect6_time64", "pwrite64", 
"pwritev", "pwritev2", "read", "readahead", "readlink", "readlinkat", "readv", "recv", "recvfrom", "recvmmsg", "recvmmsg_time64", "recvmsg", "remap_file_pages", "removexattr", "rename", "renameat", "renameat2", "restart_syscall", "rmdir", "rseq", "rt_sigaction", "rt_sigpending", "rt_sigprocmask", "rt_sigqueueinfo", "rt_sigreturn", "rt_sigsuspend", "rt_sigtimedwait", "rt_sigtimedwait_time64", "rt_tgsigqueueinfo", "sched_getaffinity", "sched_getattr", "sched_getparam", "sched_get_priority_max", "sched_get_priority_min", "sched_getscheduler", "sched_rr_get_interval", "sched_rr_get_interval_time64", "sched_setaffinity", "sched_setattr", "sched_setparam", "sched_setscheduler", "sched_yield", "seccomp", "select", "semctl", "semget", "semop", "semtimedop", "semtimedop_time64", "send", "sendfile", "sendfile64", "sendmmsg", "sendmsg", "sendto", "setfsgid", "setfsgid32", "setfsuid", "setfsuid32", "setgid", "setgid32", "setgroups", "setgroups32", "setitimer", "setpgid", "setpriority", "setregid", "setregid32", "setresgid", "setresgid32", "setresuid", "setresuid32", "setreuid", "setreuid32", "setrlimit", "set_robust_list", "setsid", "setsockopt", "set_thread_area", "set_tid_address", "setuid", "setuid32", "setxattr", "shmat", "shmctl", "shmdt", "shmget", "shutdown", "sigaltstack", "signalfd", "signalfd4", "sigprocmask", "sigreturn", "socket", "socketcall", "socketpair", "splice", "stat", "stat64", "statfs", "statfs64", "statx", "symlink", "symlinkat", "sync", "sync_file_range", "syncfs", "sysinfo", "tee", "tgkill", "time", "timer_create", "timer_delete", "timer_getoverrun", "timer_gettime", "timer_gettime64", "timer_settime", "timer_settime64", "timerfd_create", "timerfd_gettime", "timerfd_gettime64", "timerfd_settime", "timerfd_settime64", "times", "tkill", "truncate", "truncate64", "ugetrlimit", "umask", "uname", "unlink", "unlinkat", "utime", "utimensat", "utimensat_time64", "utimes", "vfork", "vmsplice", "wait4", "waitid", "waitpid", "write", "writev", }, Action: specs.ActAllow, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "process_vm_readv", "process_vm_writev", "ptrace", }, Action: specs.ActAllow, }, Includes: &Filter{ MinKernel: &KernelVersion{4, 8}, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{"personality"}, Action: specs.ActAllow, Args: []specs.LinuxSeccompArg{ { Index: 0, Value: 0x0, Op: specs.OpEqualTo, }, }, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{"personality"}, Action: specs.ActAllow, Args: []specs.LinuxSeccompArg{ { Index: 0, Value: 0x0008, Op: specs.OpEqualTo, }, }, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{"personality"}, Action: specs.ActAllow, Args: []specs.LinuxSeccompArg{ { Index: 0, Value: 0x20000, Op: specs.OpEqualTo, }, }, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{"personality"}, Action: specs.ActAllow, Args: []specs.LinuxSeccompArg{ { Index: 0, Value: 0x20008, Op: specs.OpEqualTo, }, }, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{"personality"}, Action: specs.ActAllow, Args: []specs.LinuxSeccompArg{ { Index: 0, Value: 0xffffffff, Op: specs.OpEqualTo, }, }, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "sync_file_range2", }, Action: specs.ActAllow, }, Includes: &Filter{ Arches: []string{"ppc64le"}, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "arm_fadvise64_64", "arm_sync_file_range", "sync_file_range2", "breakpoint", "cacheflush", "set_tls", }, Action: specs.ActAllow, }, Includes: &Filter{ Arches: []string{"arm", "arm64"}, }, }, { LinuxSyscall: 
specs.LinuxSyscall{ Names: []string{ "arch_prctl", }, Action: specs.ActAllow, }, Includes: &Filter{ Arches: []string{"amd64", "x32"}, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "modify_ldt", }, Action: specs.ActAllow, }, Includes: &Filter{ Arches: []string{"amd64", "x32", "x86"}, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "s390_pci_mmio_read", "s390_pci_mmio_write", "s390_runtime_instr", }, Action: specs.ActAllow, }, Includes: &Filter{ Arches: []string{"s390", "s390x"}, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "open_by_handle_at", }, Action: specs.ActAllow, }, Includes: &Filter{ Caps: []string{"CAP_DAC_READ_SEARCH"}, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "bpf", "clone", "clone3", "fanotify_init", "fsconfig", "fsmount", "fsopen", "fspick", "lookup_dcookie", "mount", "move_mount", "name_to_handle_at", "open_tree", "perf_event_open", "quotactl", "setdomainname", "sethostname", "setns", "syslog", "umount", "umount2", "unshare", }, Action: specs.ActAllow, }, Includes: &Filter{ Caps: []string{"CAP_SYS_ADMIN"}, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "clone", }, Action: specs.ActAllow, Args: []specs.LinuxSeccompArg{ { Index: 0, Value: unix.CLONE_NEWNS | unix.CLONE_NEWUTS | unix.CLONE_NEWIPC | unix.CLONE_NEWUSER | unix.CLONE_NEWPID | unix.CLONE_NEWNET | unix.CLONE_NEWCGROUP, ValueTwo: 0, Op: specs.OpMaskedEqual, }, }, }, Excludes: &Filter{ Caps: []string{"CAP_SYS_ADMIN"}, Arches: []string{"s390", "s390x"}, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "clone", }, Action: specs.ActAllow, Args: []specs.LinuxSeccompArg{ { Index: 1, Value: unix.CLONE_NEWNS | unix.CLONE_NEWUTS | unix.CLONE_NEWIPC | unix.CLONE_NEWUSER | unix.CLONE_NEWPID | unix.CLONE_NEWNET | unix.CLONE_NEWCGROUP, ValueTwo: 0, Op: specs.OpMaskedEqual, }, }, }, Comment: "s390 parameter ordering for clone is different", Includes: &Filter{ Arches: []string{"s390", "s390x"}, }, Excludes: &Filter{ Caps: []string{"CAP_SYS_ADMIN"}, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "clone3", }, Action: specs.ActErrno, ErrnoRet: &nosys, }, Excludes: &Filter{ Caps: []string{"CAP_SYS_ADMIN"}, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "reboot", }, Action: specs.ActAllow, }, Includes: &Filter{ Caps: []string{"CAP_SYS_BOOT"}, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "chroot", }, Action: specs.ActAllow, }, Includes: &Filter{ Caps: []string{"CAP_SYS_CHROOT"}, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "delete_module", "init_module", "finit_module", }, Action: specs.ActAllow, }, Includes: &Filter{ Caps: []string{"CAP_SYS_MODULE"}, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "acct", }, Action: specs.ActAllow, }, Includes: &Filter{ Caps: []string{"CAP_SYS_PACCT"}, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "kcmp", "pidfd_getfd", "process_madvise", "process_vm_readv", "process_vm_writev", "ptrace", }, Action: specs.ActAllow, }, Includes: &Filter{ Caps: []string{"CAP_SYS_PTRACE"}, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "iopl", "ioperm", }, Action: specs.ActAllow, }, Includes: &Filter{ Caps: []string{"CAP_SYS_RAWIO"}, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "settimeofday", "stime", "clock_settime", }, Action: specs.ActAllow, }, Includes: &Filter{ Caps: []string{"CAP_SYS_TIME"}, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "vhangup", }, Action: specs.ActAllow, }, Includes: &Filter{ Caps: 
[]string{"CAP_SYS_TTY_CONFIG"}, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "get_mempolicy", "mbind", "set_mempolicy", }, Action: specs.ActAllow, }, Includes: &Filter{ Caps: []string{"CAP_SYS_NICE"}, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "syslog", }, Action: specs.ActAllow, }, Includes: &Filter{ Caps: []string{"CAP_SYSLOG"}, }, }, } return &Seccomp{ LinuxSeccomp: specs.LinuxSeccomp{ DefaultAction: specs.ActErrno, }, ArchMap: arches(), Syscalls: syscalls, } }
// +build seccomp package seccomp // import "github.com/docker/docker/profiles/seccomp" import ( "github.com/opencontainers/runtime-spec/specs-go" "golang.org/x/sys/unix" ) func arches() []Architecture { return []Architecture{ { Arch: specs.ArchX86_64, SubArches: []specs.Arch{specs.ArchX86, specs.ArchX32}, }, { Arch: specs.ArchAARCH64, SubArches: []specs.Arch{specs.ArchARM}, }, { Arch: specs.ArchMIPS64, SubArches: []specs.Arch{specs.ArchMIPS, specs.ArchMIPS64N32}, }, { Arch: specs.ArchMIPS64N32, SubArches: []specs.Arch{specs.ArchMIPS, specs.ArchMIPS64}, }, { Arch: specs.ArchMIPSEL64, SubArches: []specs.Arch{specs.ArchMIPSEL, specs.ArchMIPSEL64N32}, }, { Arch: specs.ArchMIPSEL64N32, SubArches: []specs.Arch{specs.ArchMIPSEL, specs.ArchMIPSEL64}, }, { Arch: specs.ArchS390X, SubArches: []specs.Arch{specs.ArchS390}, }, } } // DefaultProfile defines the allowed syscalls for the default seccomp profile. func DefaultProfile() *Seccomp { nosys := uint(unix.ENOSYS) syscalls := []*Syscall{ { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "accept", "accept4", "access", "adjtimex", "alarm", "bind", "brk", "capget", "capset", "chdir", "chmod", "chown", "chown32", "clock_adjtime", "clock_adjtime64", "clock_getres", "clock_getres_time64", "clock_gettime", "clock_gettime64", "clock_nanosleep", "clock_nanosleep_time64", "close", "close_range", "connect", "copy_file_range", "creat", "dup", "dup2", "dup3", "epoll_create", "epoll_create1", "epoll_ctl", "epoll_ctl_old", "epoll_pwait", "epoll_pwait2", "epoll_wait", "epoll_wait_old", "eventfd", "eventfd2", "execve", "execveat", "exit", "exit_group", "faccessat", "faccessat2", "fadvise64", "fadvise64_64", "fallocate", "fanotify_mark", "fchdir", "fchmod", "fchmodat", "fchown", "fchown32", "fchownat", "fcntl", "fcntl64", "fdatasync", "fgetxattr", "flistxattr", "flock", "fork", "fremovexattr", "fsetxattr", "fstat", "fstat64", "fstatat64", "fstatfs", "fstatfs64", "fsync", "ftruncate", "ftruncate64", "futex", "futex_time64", "futimesat", "getcpu", "getcwd", "getdents", "getdents64", "getegid", "getegid32", "geteuid", "geteuid32", "getgid", "getgid32", "getgroups", "getgroups32", "getitimer", "getpeername", "getpgid", "getpgrp", "getpid", "getppid", "getpriority", "getrandom", "getresgid", "getresgid32", "getresuid", "getresuid32", "getrlimit", "get_robust_list", "getrusage", "getsid", "getsockname", "getsockopt", "get_thread_area", "gettid", "gettimeofday", "getuid", "getuid32", "getxattr", "inotify_add_watch", "inotify_init", "inotify_init1", "inotify_rm_watch", "io_cancel", "ioctl", "io_destroy", "io_getevents", "io_pgetevents", "io_pgetevents_time64", "ioprio_get", "ioprio_set", "io_setup", "io_submit", "io_uring_enter", "io_uring_register", "io_uring_setup", "ipc", "kill", "lchown", "lchown32", "lgetxattr", "link", "linkat", "listen", "listxattr", "llistxattr", "_llseek", "lremovexattr", "lseek", "lsetxattr", "lstat", "lstat64", "madvise", "membarrier", "memfd_create", "mincore", "mkdir", "mkdirat", "mknod", "mknodat", "mlock", "mlock2", "mlockall", "mmap", "mmap2", "mprotect", "mq_getsetattr", "mq_notify", "mq_open", "mq_timedreceive", "mq_timedreceive_time64", "mq_timedsend", "mq_timedsend_time64", "mq_unlink", "mremap", "msgctl", "msgget", "msgrcv", "msgsnd", "msync", "munlock", "munlockall", "munmap", "nanosleep", "newfstatat", "_newselect", "open", "openat", "openat2", "pause", "pidfd_open", "pidfd_send_signal", "pipe", "pipe2", "poll", "ppoll", "ppoll_time64", "prctl", "pread64", "preadv", "preadv2", "prlimit64", "pselect6", "pselect6_time64", "pwrite64", 
"pwritev", "pwritev2", "read", "readahead", "readlink", "readlinkat", "readv", "recv", "recvfrom", "recvmmsg", "recvmmsg_time64", "recvmsg", "remap_file_pages", "removexattr", "rename", "renameat", "renameat2", "restart_syscall", "rmdir", "rseq", "rt_sigaction", "rt_sigpending", "rt_sigprocmask", "rt_sigqueueinfo", "rt_sigreturn", "rt_sigsuspend", "rt_sigtimedwait", "rt_sigtimedwait_time64", "rt_tgsigqueueinfo", "sched_getaffinity", "sched_getattr", "sched_getparam", "sched_get_priority_max", "sched_get_priority_min", "sched_getscheduler", "sched_rr_get_interval", "sched_rr_get_interval_time64", "sched_setaffinity", "sched_setattr", "sched_setparam", "sched_setscheduler", "sched_yield", "seccomp", "select", "semctl", "semget", "semop", "semtimedop", "semtimedop_time64", "send", "sendfile", "sendfile64", "sendmmsg", "sendmsg", "sendto", "setfsgid", "setfsgid32", "setfsuid", "setfsuid32", "setgid", "setgid32", "setgroups", "setgroups32", "setitimer", "setpgid", "setpriority", "setregid", "setregid32", "setresgid", "setresgid32", "setresuid", "setresuid32", "setreuid", "setreuid32", "setrlimit", "set_robust_list", "setsid", "setsockopt", "set_thread_area", "set_tid_address", "setuid", "setuid32", "setxattr", "shmat", "shmctl", "shmdt", "shmget", "shutdown", "sigaltstack", "signalfd", "signalfd4", "sigprocmask", "sigreturn", "socket", "socketcall", "socketpair", "splice", "stat", "stat64", "statfs", "statfs64", "statx", "symlink", "symlinkat", "sync", "sync_file_range", "syncfs", "sysinfo", "tee", "tgkill", "time", "timer_create", "timer_delete", "timer_getoverrun", "timer_gettime", "timer_gettime64", "timer_settime", "timer_settime64", "timerfd_create", "timerfd_gettime", "timerfd_gettime64", "timerfd_settime", "timerfd_settime64", "times", "tkill", "truncate", "truncate64", "ugetrlimit", "umask", "uname", "unlink", "unlinkat", "utime", "utimensat", "utimensat_time64", "utimes", "vfork", "vmsplice", "wait4", "waitid", "waitpid", "write", "writev", }, Action: specs.ActAllow, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "process_vm_readv", "process_vm_writev", "ptrace", }, Action: specs.ActAllow, }, Includes: &Filter{ MinKernel: &KernelVersion{4, 8}, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{"personality"}, Action: specs.ActAllow, Args: []specs.LinuxSeccompArg{ { Index: 0, Value: 0x0, Op: specs.OpEqualTo, }, }, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{"personality"}, Action: specs.ActAllow, Args: []specs.LinuxSeccompArg{ { Index: 0, Value: 0x0008, Op: specs.OpEqualTo, }, }, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{"personality"}, Action: specs.ActAllow, Args: []specs.LinuxSeccompArg{ { Index: 0, Value: 0x20000, Op: specs.OpEqualTo, }, }, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{"personality"}, Action: specs.ActAllow, Args: []specs.LinuxSeccompArg{ { Index: 0, Value: 0x20008, Op: specs.OpEqualTo, }, }, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{"personality"}, Action: specs.ActAllow, Args: []specs.LinuxSeccompArg{ { Index: 0, Value: 0xffffffff, Op: specs.OpEqualTo, }, }, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "sync_file_range2", }, Action: specs.ActAllow, }, Includes: &Filter{ Arches: []string{"ppc64le"}, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "arm_fadvise64_64", "arm_sync_file_range", "sync_file_range2", "breakpoint", "cacheflush", "set_tls", }, Action: specs.ActAllow, }, Includes: &Filter{ Arches: []string{"arm", "arm64"}, }, }, { LinuxSyscall: 
specs.LinuxSyscall{ Names: []string{ "arch_prctl", }, Action: specs.ActAllow, }, Includes: &Filter{ Arches: []string{"amd64", "x32"}, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "modify_ldt", }, Action: specs.ActAllow, }, Includes: &Filter{ Arches: []string{"amd64", "x32", "x86"}, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "s390_pci_mmio_read", "s390_pci_mmio_write", "s390_runtime_instr", }, Action: specs.ActAllow, }, Includes: &Filter{ Arches: []string{"s390", "s390x"}, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "open_by_handle_at", }, Action: specs.ActAllow, }, Includes: &Filter{ Caps: []string{"CAP_DAC_READ_SEARCH"}, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "bpf", "clone", "clone3", "fanotify_init", "fsconfig", "fsmount", "fsopen", "fspick", "lookup_dcookie", "mount", "move_mount", "name_to_handle_at", "open_tree", "perf_event_open", "quotactl", "setdomainname", "sethostname", "setns", "syslog", "umount", "umount2", "unshare", }, Action: specs.ActAllow, }, Includes: &Filter{ Caps: []string{"CAP_SYS_ADMIN"}, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "clone", }, Action: specs.ActAllow, Args: []specs.LinuxSeccompArg{ { Index: 0, Value: unix.CLONE_NEWNS | unix.CLONE_NEWUTS | unix.CLONE_NEWIPC | unix.CLONE_NEWUSER | unix.CLONE_NEWPID | unix.CLONE_NEWNET | unix.CLONE_NEWCGROUP, ValueTwo: 0, Op: specs.OpMaskedEqual, }, }, }, Excludes: &Filter{ Caps: []string{"CAP_SYS_ADMIN"}, Arches: []string{"s390", "s390x"}, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "clone", }, Action: specs.ActAllow, Args: []specs.LinuxSeccompArg{ { Index: 1, Value: unix.CLONE_NEWNS | unix.CLONE_NEWUTS | unix.CLONE_NEWIPC | unix.CLONE_NEWUSER | unix.CLONE_NEWPID | unix.CLONE_NEWNET | unix.CLONE_NEWCGROUP, ValueTwo: 0, Op: specs.OpMaskedEqual, }, }, }, Comment: "s390 parameter ordering for clone is different", Includes: &Filter{ Arches: []string{"s390", "s390x"}, }, Excludes: &Filter{ Caps: []string{"CAP_SYS_ADMIN"}, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "clone3", }, Action: specs.ActErrno, ErrnoRet: &nosys, }, Excludes: &Filter{ Caps: []string{"CAP_SYS_ADMIN"}, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "reboot", }, Action: specs.ActAllow, }, Includes: &Filter{ Caps: []string{"CAP_SYS_BOOT"}, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "chroot", }, Action: specs.ActAllow, }, Includes: &Filter{ Caps: []string{"CAP_SYS_CHROOT"}, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "delete_module", "init_module", "finit_module", }, Action: specs.ActAllow, }, Includes: &Filter{ Caps: []string{"CAP_SYS_MODULE"}, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "acct", }, Action: specs.ActAllow, }, Includes: &Filter{ Caps: []string{"CAP_SYS_PACCT"}, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "kcmp", "pidfd_getfd", "process_madvise", "process_vm_readv", "process_vm_writev", "ptrace", }, Action: specs.ActAllow, }, Includes: &Filter{ Caps: []string{"CAP_SYS_PTRACE"}, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "iopl", "ioperm", }, Action: specs.ActAllow, }, Includes: &Filter{ Caps: []string{"CAP_SYS_RAWIO"}, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "settimeofday", "stime", "clock_settime", }, Action: specs.ActAllow, }, Includes: &Filter{ Caps: []string{"CAP_SYS_TIME"}, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "vhangup", }, Action: specs.ActAllow, }, Includes: &Filter{ Caps: 
[]string{"CAP_SYS_TTY_CONFIG"}, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "get_mempolicy", "mbind", "set_mempolicy", }, Action: specs.ActAllow, }, Includes: &Filter{ Caps: []string{"CAP_SYS_NICE"}, }, }, { LinuxSyscall: specs.LinuxSyscall{ Names: []string{ "syslog", }, Action: specs.ActAllow, }, Includes: &Filter{ Caps: []string{"CAP_SYSLOG"}, }, }, } errnoRet := uint(unix.EPERM) return &Seccomp{ LinuxSeccomp: specs.LinuxSeccomp{ DefaultAction: specs.ActErrno, DefaultErrnoRet: &errnoRet, }, ArchMap: arches(), Syscalls: syscalls, } }
rata
7672963eec42e65e045f6eb745f5fe0682df5434
2480bebf59c25991ef88ba5229c7a2e65237510c
Using the constant now, thanks!
rata
4,538
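The review exchange in the record above ("Using the constant now, thanks!") concerns deriving the explicit default errno from the `unix.EPERM` constant rather than a bare literal, which is what the updated `DefaultProfile` in the after-content does. Below is a minimal, hedged sketch of that pattern; it assumes only the runtime-spec and `golang.org/x/sys/unix` packages on a Linux build and leaves out the repo's `Seccomp` wrapper type.

```go
package main

import (
	"fmt"

	"github.com/opencontainers/runtime-spec/specs-go"
	"golang.org/x/sys/unix"
)

func main() {
	// Explicit default errno taken from the unix constant rather than a magic number;
	// EPERM is what the profile implicitly returned before this change.
	errnoRet := uint(unix.EPERM)

	cfg := specs.LinuxSeccomp{
		DefaultAction:   specs.ActErrno,
		DefaultErrnoRet: &errnoRet,
	}

	fmt.Printf("default action %s returns errno %d\n", cfg.DefaultAction, *cfg.DefaultErrnoRet)
}
```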
moby/moby
42,649
seccomp: Use explicit DefaultErrnoRet
Since commit "seccomp: Sync fields with runtime-spec fields" (5d244675bdb23e8fce427036c03517243f344cd4) we support to specify the DefaultErrnoRet to be used. Before that commit it was not specified and EPERM was used by default. This commit keeps the same behaviour but just makes it explicit that the default is EPERM. Signed-off-by: Rodrigo Campos <[email protected]> This a follow-up of https://github.com/moby/moby/pull/42604#issuecomment-876632771, as suggested by @thaJeztah. This just adds an explicit default to EPERM. Right now the default is EPERM but is implicit. <!-- Please make sure you've read and understood our contributing guidelines; https://github.com/moby/moby/blob/master/CONTRIBUTING.md ** Make sure all your commits include a signature generated with `git commit -s` ** For additional information on our contributing process, read our contributing guide https://docs.docker.com/opensource/code/ If this is a bug fix, make sure your description includes "fixes #xxxx", or "closes #xxxx" Please provide the following information: --> **- What I did** Added the field `DefaultErrnoRet` to seccomp profiles **- How I did it** Added the field to default profile in `profiles/seccomp/default_linux.go`, then adjusted the tests to expect this new field **- How to verify it** You can run unit tests with: `TESTDIRS='github.com/docker/docker/profiles/seccomp' make test-unit`. You can also run `hack/validate/default-seccomp` to verify nothing was missing in the changes. **- Description for the changelog** <!-- Write a short (one line) summary that describes the changes in this pull request for inclusion in the changelog: --> Add an explicit DefaultErrnoRet field in the default seccomp profile. No behavior change. **- A picture of a cute animal (not mandatory but encouraged)** ![](https://www.kinderundjugendmedien.de/images/Ratatouille_pixar.jpg)
null
2021-07-16 16:36:59+00:00
2021-08-03 13:13:42+00:00
profiles/seccomp/seccomp_test.go
// +build linux package seccomp // import "github.com/docker/docker/profiles/seccomp" import ( "encoding/json" "io/ioutil" "strings" "testing" "github.com/opencontainers/runtime-spec/specs-go" "gotest.tools/v3/assert" ) func TestLoadProfile(t *testing.T) { f, err := ioutil.ReadFile("fixtures/example.json") if err != nil { t.Fatal(err) } rs := createSpec() p, err := LoadProfile(string(f), &rs) if err != nil { t.Fatal(err) } var expectedErrno uint = 12345 expected := specs.LinuxSeccomp{ DefaultAction: specs.ActErrno, Syscalls: []specs.LinuxSyscall{ { Names: []string{"clone"}, Action: specs.ActAllow, Args: []specs.LinuxSeccompArg{{ Index: 0, Value: 2114060288, ValueTwo: 0, Op: specs.OpMaskedEqual, }}, }, { Names: []string{"open"}, Action: specs.ActAllow, Args: []specs.LinuxSeccompArg{}, }, { Names: []string{"close"}, Action: specs.ActAllow, Args: []specs.LinuxSeccompArg{}, }, { Names: []string{"syslog"}, Action: specs.ActErrno, ErrnoRet: &expectedErrno, Args: []specs.LinuxSeccompArg{}, }, }, } assert.DeepEqual(t, expected, *p) } func TestLoadProfileWithDefaultErrnoRet(t *testing.T) { var profile = []byte(`{ "defaultAction": "SCMP_ACT_ERRNO", "defaultErrnoRet": 6 }`) rs := createSpec() p, err := LoadProfile(string(profile), &rs) if err != nil { t.Fatal(err) } expectedErrnoRet := uint(6) expected := specs.LinuxSeccomp{ DefaultAction: specs.ActErrno, DefaultErrnoRet: &expectedErrnoRet, } assert.DeepEqual(t, expected, *p) } func TestLoadProfileWithListenerPath(t *testing.T) { var profile = []byte(`{ "defaultAction": "SCMP_ACT_ERRNO", "listenerPath": "/var/run/seccompaget.sock", "listenerMetadata": "opaque-metadata" }`) rs := createSpec() p, err := LoadProfile(string(profile), &rs) if err != nil { t.Fatal(err) } expected := specs.LinuxSeccomp{ DefaultAction: specs.ActErrno, ListenerPath: "/var/run/seccompaget.sock", ListenerMetadata: "opaque-metadata", } assert.DeepEqual(t, expected, *p) } func TestLoadProfileWithFlag(t *testing.T) { profile := `{"defaultAction": "SCMP_ACT_ERRNO", "flags": ["SECCOMP_FILTER_FLAG_SPEC_ALLOW", "SECCOMP_FILTER_FLAG_LOG"]}` expected := specs.LinuxSeccomp{ DefaultAction: specs.ActErrno, Flags: []specs.LinuxSeccompFlag{"SECCOMP_FILTER_FLAG_SPEC_ALLOW", "SECCOMP_FILTER_FLAG_LOG"}, } rs := createSpec() p, err := LoadProfile(profile, &rs) assert.NilError(t, err) assert.DeepEqual(t, expected, *p) } // TestLoadProfileValidation tests that invalid profiles produce the correct error. 
func TestLoadProfileValidation(t *testing.T) { tests := []struct { doc string profile string expected string }{ { doc: "conflicting architectures and archMap", profile: `{"defaultAction": "SCMP_ACT_ERRNO", "architectures": ["A", "B", "C"], "archMap": [{"architecture": "A", "subArchitectures": ["B", "C"]}]}`, expected: `use either 'architectures' or 'archMap'`, }, { doc: "conflicting syscall.name and syscall.names", profile: `{"defaultAction": "SCMP_ACT_ERRNO", "syscalls": [{"name": "accept", "names": ["accept"], "action": "SCMP_ACT_ALLOW"}]}`, expected: `use either 'name' or 'names'`, }, } for _, tc := range tests { tc := tc rs := createSpec() t.Run(tc.doc, func(t *testing.T) { _, err := LoadProfile(tc.profile, &rs) assert.ErrorContains(t, err, tc.expected) }) } } // TestLoadLegacyProfile tests loading a seccomp profile in the old format // (before https://github.com/docker/docker/pull/24510) func TestLoadLegacyProfile(t *testing.T) { f, err := ioutil.ReadFile("fixtures/default-old-format.json") if err != nil { t.Fatal(err) } rs := createSpec() p, err := LoadProfile(string(f), &rs) assert.NilError(t, err) assert.Equal(t, p.DefaultAction, specs.ActErrno) assert.DeepEqual(t, p.Architectures, []specs.Arch{"SCMP_ARCH_X86_64", "SCMP_ARCH_X86", "SCMP_ARCH_X32"}) assert.Equal(t, len(p.Syscalls), 311) expected := specs.LinuxSyscall{ Names: []string{"accept"}, Action: specs.ActAllow, Args: []specs.LinuxSeccompArg{}, } assert.DeepEqual(t, p.Syscalls[0], expected) } func TestLoadDefaultProfile(t *testing.T) { f, err := ioutil.ReadFile("default.json") if err != nil { t.Fatal(err) } rs := createSpec() if _, err := LoadProfile(string(f), &rs); err != nil { t.Fatal(err) } } func TestUnmarshalDefaultProfile(t *testing.T) { expected := DefaultProfile() if expected == nil { t.Skip("seccomp not supported") } f, err := ioutil.ReadFile("default.json") if err != nil { t.Fatal(err) } var profile Seccomp err = json.Unmarshal(f, &profile) if err != nil { t.Fatal(err) } assert.DeepEqual(t, expected.Architectures, profile.Architectures) assert.DeepEqual(t, expected.ArchMap, profile.ArchMap) assert.DeepEqual(t, expected.DefaultAction, profile.DefaultAction) assert.DeepEqual(t, expected.Syscalls, profile.Syscalls) } func TestMarshalUnmarshalFilter(t *testing.T) { t.Parallel() tests := []struct { in string out string error bool }{ {in: `{"arches":["s390x"],"minKernel":3}`, error: true}, {in: `{"arches":["s390x"],"minKernel":3.12}`, error: true}, {in: `{"arches":["s390x"],"minKernel":true}`, error: true}, {in: `{"arches":["s390x"],"minKernel":"0.0"}`, error: true}, {in: `{"arches":["s390x"],"minKernel":"3"}`, error: true}, {in: `{"arches":["s390x"],"minKernel":".3"}`, error: true}, {in: `{"arches":["s390x"],"minKernel":"3."}`, error: true}, {in: `{"arches":["s390x"],"minKernel":"true"}`, error: true}, {in: `{"arches":["s390x"],"minKernel":"3.12.1\""}`, error: true}, {in: `{"arches":["s390x"],"minKernel":"4.15abc"}`, error: true}, {in: `{"arches":["s390x"],"minKernel":null}`, out: `{"arches":["s390x"]}`}, {in: `{"arches":["s390x"],"minKernel":""}`, out: `{"arches":["s390x"],"minKernel":""}`}, // FIXME: try to fix omitempty for this {in: `{"arches":["s390x"],"minKernel":"0.5"}`, out: `{"arches":["s390x"],"minKernel":"0.5"}`}, {in: `{"arches":["s390x"],"minKernel":"0.50"}`, out: `{"arches":["s390x"],"minKernel":"0.50"}`}, {in: `{"arches":["s390x"],"minKernel":"5.0"}`, out: `{"arches":["s390x"],"minKernel":"5.0"}`}, {in: `{"arches":["s390x"],"minKernel":"50.0"}`, out: `{"arches":["s390x"],"minKernel":"50.0"}`}, {in: 
`{"arches":["s390x"],"minKernel":"4.15"}`, out: `{"arches":["s390x"],"minKernel":"4.15"}`}, } for _, tc := range tests { tc := tc t.Run(tc.in, func(t *testing.T) { var filter Filter err := json.Unmarshal([]byte(tc.in), &filter) if tc.error { if err == nil { t.Fatal("expected an error") } else if !strings.Contains(err.Error(), "invalid kernel version") { t.Fatal("unexpected error:", err) } return } if err != nil { t.Fatal(err) } out, err := json.Marshal(filter) if err != nil { t.Fatal(err) } if string(out) != tc.out { t.Fatalf("expected %s, got %s", tc.out, string(out)) } }) } } func TestLoadConditional(t *testing.T) { f, err := ioutil.ReadFile("fixtures/conditional_include.json") if err != nil { t.Fatal(err) } tests := []struct { doc string cap string expected []string }{ {doc: "no caps", expected: []string{"chmod", "ptrace"}}, {doc: "with syslog", cap: "CAP_SYSLOG", expected: []string{"chmod", "syslog", "ptrace"}}, {doc: "no ptrace", cap: "CAP_SYS_ADMIN", expected: []string{"chmod"}}, } for _, tc := range tests { tc := tc t.Run(tc.doc, func(t *testing.T) { rs := createSpec(tc.cap) p, err := LoadProfile(string(f), &rs) if err != nil { t.Fatal(err) } if len(p.Syscalls) != len(tc.expected) { t.Fatalf("expected %d syscalls in profile, have %d", len(tc.expected), len(p.Syscalls)) } for i, v := range p.Syscalls { if v.Names[0] != tc.expected[i] { t.Fatalf("expected %s syscall, have %s", tc.expected[i], v.Names[0]) } } }) } } // createSpec() creates a minimum spec for testing func createSpec(caps ...string) specs.Spec { rs := specs.Spec{ Process: &specs.Process{ Capabilities: &specs.LinuxCapabilities{}, }, } if caps != nil { rs.Process.Capabilities.Bounding = append(rs.Process.Capabilities.Bounding, caps...) } return rs }
// +build linux package seccomp // import "github.com/docker/docker/profiles/seccomp" import ( "encoding/json" "io/ioutil" "strings" "testing" "github.com/opencontainers/runtime-spec/specs-go" "gotest.tools/v3/assert" ) func TestLoadProfile(t *testing.T) { f, err := ioutil.ReadFile("fixtures/example.json") if err != nil { t.Fatal(err) } rs := createSpec() p, err := LoadProfile(string(f), &rs) if err != nil { t.Fatal(err) } var expectedErrno uint = 12345 var expectedDefaultErrno uint = 1 expected := specs.LinuxSeccomp{ DefaultAction: specs.ActErrno, DefaultErrnoRet: &expectedDefaultErrno, Syscalls: []specs.LinuxSyscall{ { Names: []string{"clone"}, Action: specs.ActAllow, Args: []specs.LinuxSeccompArg{{ Index: 0, Value: 2114060288, ValueTwo: 0, Op: specs.OpMaskedEqual, }}, }, { Names: []string{"open"}, Action: specs.ActAllow, Args: []specs.LinuxSeccompArg{}, }, { Names: []string{"close"}, Action: specs.ActAllow, Args: []specs.LinuxSeccompArg{}, }, { Names: []string{"syslog"}, Action: specs.ActErrno, ErrnoRet: &expectedErrno, Args: []specs.LinuxSeccompArg{}, }, }, } assert.DeepEqual(t, expected, *p) } func TestLoadProfileWithDefaultErrnoRet(t *testing.T) { var profile = []byte(`{ "defaultAction": "SCMP_ACT_ERRNO", "defaultErrnoRet": 6 }`) rs := createSpec() p, err := LoadProfile(string(profile), &rs) if err != nil { t.Fatal(err) } expectedErrnoRet := uint(6) expected := specs.LinuxSeccomp{ DefaultAction: specs.ActErrno, DefaultErrnoRet: &expectedErrnoRet, } assert.DeepEqual(t, expected, *p) } func TestLoadProfileWithListenerPath(t *testing.T) { var profile = []byte(`{ "defaultAction": "SCMP_ACT_ERRNO", "listenerPath": "/var/run/seccompaget.sock", "listenerMetadata": "opaque-metadata" }`) rs := createSpec() p, err := LoadProfile(string(profile), &rs) if err != nil { t.Fatal(err) } expected := specs.LinuxSeccomp{ DefaultAction: specs.ActErrno, ListenerPath: "/var/run/seccompaget.sock", ListenerMetadata: "opaque-metadata", } assert.DeepEqual(t, expected, *p) } func TestLoadProfileWithFlag(t *testing.T) { profile := `{"defaultAction": "SCMP_ACT_ERRNO", "flags": ["SECCOMP_FILTER_FLAG_SPEC_ALLOW", "SECCOMP_FILTER_FLAG_LOG"]}` expected := specs.LinuxSeccomp{ DefaultAction: specs.ActErrno, Flags: []specs.LinuxSeccompFlag{"SECCOMP_FILTER_FLAG_SPEC_ALLOW", "SECCOMP_FILTER_FLAG_LOG"}, } rs := createSpec() p, err := LoadProfile(profile, &rs) assert.NilError(t, err) assert.DeepEqual(t, expected, *p) } // TestLoadProfileValidation tests that invalid profiles produce the correct error. 
func TestLoadProfileValidation(t *testing.T) { tests := []struct { doc string profile string expected string }{ { doc: "conflicting architectures and archMap", profile: `{"defaultAction": "SCMP_ACT_ERRNO", "architectures": ["A", "B", "C"], "archMap": [{"architecture": "A", "subArchitectures": ["B", "C"]}]}`, expected: `use either 'architectures' or 'archMap'`, }, { doc: "conflicting syscall.name and syscall.names", profile: `{"defaultAction": "SCMP_ACT_ERRNO", "syscalls": [{"name": "accept", "names": ["accept"], "action": "SCMP_ACT_ALLOW"}]}`, expected: `use either 'name' or 'names'`, }, } for _, tc := range tests { tc := tc rs := createSpec() t.Run(tc.doc, func(t *testing.T) { _, err := LoadProfile(tc.profile, &rs) assert.ErrorContains(t, err, tc.expected) }) } } // TestLoadLegacyProfile tests loading a seccomp profile in the old format // (before https://github.com/docker/docker/pull/24510) func TestLoadLegacyProfile(t *testing.T) { f, err := ioutil.ReadFile("fixtures/default-old-format.json") if err != nil { t.Fatal(err) } rs := createSpec() p, err := LoadProfile(string(f), &rs) assert.NilError(t, err) assert.Equal(t, p.DefaultAction, specs.ActErrno) assert.DeepEqual(t, p.Architectures, []specs.Arch{"SCMP_ARCH_X86_64", "SCMP_ARCH_X86", "SCMP_ARCH_X32"}) assert.Equal(t, len(p.Syscalls), 311) expected := specs.LinuxSyscall{ Names: []string{"accept"}, Action: specs.ActAllow, Args: []specs.LinuxSeccompArg{}, } assert.DeepEqual(t, p.Syscalls[0], expected) } func TestLoadDefaultProfile(t *testing.T) { f, err := ioutil.ReadFile("default.json") if err != nil { t.Fatal(err) } rs := createSpec() if _, err := LoadProfile(string(f), &rs); err != nil { t.Fatal(err) } } func TestUnmarshalDefaultProfile(t *testing.T) { expected := DefaultProfile() if expected == nil { t.Skip("seccomp not supported") } f, err := ioutil.ReadFile("default.json") if err != nil { t.Fatal(err) } var profile Seccomp err = json.Unmarshal(f, &profile) if err != nil { t.Fatal(err) } assert.DeepEqual(t, expected.Architectures, profile.Architectures) assert.DeepEqual(t, expected.ArchMap, profile.ArchMap) assert.DeepEqual(t, expected.DefaultAction, profile.DefaultAction) assert.DeepEqual(t, expected.Syscalls, profile.Syscalls) } func TestMarshalUnmarshalFilter(t *testing.T) { t.Parallel() tests := []struct { in string out string error bool }{ {in: `{"arches":["s390x"],"minKernel":3}`, error: true}, {in: `{"arches":["s390x"],"minKernel":3.12}`, error: true}, {in: `{"arches":["s390x"],"minKernel":true}`, error: true}, {in: `{"arches":["s390x"],"minKernel":"0.0"}`, error: true}, {in: `{"arches":["s390x"],"minKernel":"3"}`, error: true}, {in: `{"arches":["s390x"],"minKernel":".3"}`, error: true}, {in: `{"arches":["s390x"],"minKernel":"3."}`, error: true}, {in: `{"arches":["s390x"],"minKernel":"true"}`, error: true}, {in: `{"arches":["s390x"],"minKernel":"3.12.1\""}`, error: true}, {in: `{"arches":["s390x"],"minKernel":"4.15abc"}`, error: true}, {in: `{"arches":["s390x"],"minKernel":null}`, out: `{"arches":["s390x"]}`}, {in: `{"arches":["s390x"],"minKernel":""}`, out: `{"arches":["s390x"],"minKernel":""}`}, // FIXME: try to fix omitempty for this {in: `{"arches":["s390x"],"minKernel":"0.5"}`, out: `{"arches":["s390x"],"minKernel":"0.5"}`}, {in: `{"arches":["s390x"],"minKernel":"0.50"}`, out: `{"arches":["s390x"],"minKernel":"0.50"}`}, {in: `{"arches":["s390x"],"minKernel":"5.0"}`, out: `{"arches":["s390x"],"minKernel":"5.0"}`}, {in: `{"arches":["s390x"],"minKernel":"50.0"}`, out: `{"arches":["s390x"],"minKernel":"50.0"}`}, {in: 
`{"arches":["s390x"],"minKernel":"4.15"}`, out: `{"arches":["s390x"],"minKernel":"4.15"}`}, } for _, tc := range tests { tc := tc t.Run(tc.in, func(t *testing.T) { var filter Filter err := json.Unmarshal([]byte(tc.in), &filter) if tc.error { if err == nil { t.Fatal("expected an error") } else if !strings.Contains(err.Error(), "invalid kernel version") { t.Fatal("unexpected error:", err) } return } if err != nil { t.Fatal(err) } out, err := json.Marshal(filter) if err != nil { t.Fatal(err) } if string(out) != tc.out { t.Fatalf("expected %s, got %s", tc.out, string(out)) } }) } } func TestLoadConditional(t *testing.T) { f, err := ioutil.ReadFile("fixtures/conditional_include.json") if err != nil { t.Fatal(err) } tests := []struct { doc string cap string expected []string }{ {doc: "no caps", expected: []string{"chmod", "ptrace"}}, {doc: "with syslog", cap: "CAP_SYSLOG", expected: []string{"chmod", "syslog", "ptrace"}}, {doc: "no ptrace", cap: "CAP_SYS_ADMIN", expected: []string{"chmod"}}, } for _, tc := range tests { tc := tc t.Run(tc.doc, func(t *testing.T) { rs := createSpec(tc.cap) p, err := LoadProfile(string(f), &rs) if err != nil { t.Fatal(err) } if len(p.Syscalls) != len(tc.expected) { t.Fatalf("expected %d syscalls in profile, have %d", len(tc.expected), len(p.Syscalls)) } for i, v := range p.Syscalls { if v.Names[0] != tc.expected[i] { t.Fatalf("expected %s syscall, have %s", tc.expected[i], v.Names[0]) } } }) } } // createSpec() creates a minimum spec for testing func createSpec(caps ...string) specs.Spec { rs := specs.Spec{ Process: &specs.Process{ Capabilities: &specs.LinuxCapabilities{}, }, } if caps != nil { rs.Process.Capabilities.Bounding = append(rs.Process.Capabilities.Bounding, caps...) } return rs }
rata
7672963eec42e65e045f6eb745f5fe0682df5434
2480bebf59c25991ef88ba5229c7a2e65237510c
Wondering why you removed the test here; is it because the one in daemon/ already covers it?
thaJeztah
4,539
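The question in the record above is why the separate DefaultErrnoRet test was dropped; the after-content shows that `TestLoadProfile` now expects a `DefaultErrnoRet` of 1 loaded from `fixtures/example.json` (the fixture change itself is not shown here, so that value is inferred from `expectedDefaultErrno` in the diff). A hedged, standalone sketch of the same check, using an inline profile instead of the fixture and assuming a Linux build where the exported `LoadProfile` helper from `github.com/docker/docker/profiles/seccomp` is available:

```go
package main

import (
	"fmt"

	"github.com/docker/docker/profiles/seccomp"
	"github.com/opencontainers/runtime-spec/specs-go"
)

func main() {
	// Inline stand-in for fixtures/example.json; the value 1 mirrors the
	// expectedDefaultErrno the updated TestLoadProfile asserts against.
	profile := `{"defaultAction": "SCMP_ACT_ERRNO", "defaultErrnoRet": 1}`

	// Minimal spec, mirroring the createSpec() helper used by the tests.
	rs := specs.Spec{Process: &specs.Process{Capabilities: &specs.LinuxCapabilities{}}}

	p, err := seccomp.LoadProfile(profile, &rs)
	if err != nil {
		panic(err)
	}
	fmt.Println(*p.DefaultErrnoRet) // 1
}
```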
moby/moby
42,649
seccomp: Use explicit DefaultErrnoRet
Since commit "seccomp: Sync fields with runtime-spec fields" (5d244675bdb23e8fce427036c03517243f344cd4) we support to specify the DefaultErrnoRet to be used. Before that commit it was not specified and EPERM was used by default. This commit keeps the same behaviour but just makes it explicit that the default is EPERM. Signed-off-by: Rodrigo Campos <[email protected]> This a follow-up of https://github.com/moby/moby/pull/42604#issuecomment-876632771, as suggested by @thaJeztah. This just adds an explicit default to EPERM. Right now the default is EPERM but is implicit. <!-- Please make sure you've read and understood our contributing guidelines; https://github.com/moby/moby/blob/master/CONTRIBUTING.md ** Make sure all your commits include a signature generated with `git commit -s` ** For additional information on our contributing process, read our contributing guide https://docs.docker.com/opensource/code/ If this is a bug fix, make sure your description includes "fixes #xxxx", or "closes #xxxx" Please provide the following information: --> **- What I did** Added the field `DefaultErrnoRet` to seccomp profiles **- How I did it** Added the field to default profile in `profiles/seccomp/default_linux.go`, then adjusted the tests to expect this new field **- How to verify it** You can run unit tests with: `TESTDIRS='github.com/docker/docker/profiles/seccomp' make test-unit`. You can also run `hack/validate/default-seccomp` to verify nothing was missing in the changes. **- Description for the changelog** <!-- Write a short (one line) summary that describes the changes in this pull request for inclusion in the changelog: --> Add an explicit DefaultErrnoRet field in the default seccomp profile. No behavior change. **- A picture of a cute animal (not mandatory but encouraged)** ![](https://www.kinderundjugendmedien.de/images/Ratatouille_pixar.jpg)
null
2021-07-16 16:36:59+00:00
2021-08-03 13:13:42+00:00
profiles/seccomp/seccomp_test.go
// +build linux package seccomp // import "github.com/docker/docker/profiles/seccomp" import ( "encoding/json" "io/ioutil" "strings" "testing" "github.com/opencontainers/runtime-spec/specs-go" "gotest.tools/v3/assert" ) func TestLoadProfile(t *testing.T) { f, err := ioutil.ReadFile("fixtures/example.json") if err != nil { t.Fatal(err) } rs := createSpec() p, err := LoadProfile(string(f), &rs) if err != nil { t.Fatal(err) } var expectedErrno uint = 12345 expected := specs.LinuxSeccomp{ DefaultAction: specs.ActErrno, Syscalls: []specs.LinuxSyscall{ { Names: []string{"clone"}, Action: specs.ActAllow, Args: []specs.LinuxSeccompArg{{ Index: 0, Value: 2114060288, ValueTwo: 0, Op: specs.OpMaskedEqual, }}, }, { Names: []string{"open"}, Action: specs.ActAllow, Args: []specs.LinuxSeccompArg{}, }, { Names: []string{"close"}, Action: specs.ActAllow, Args: []specs.LinuxSeccompArg{}, }, { Names: []string{"syslog"}, Action: specs.ActErrno, ErrnoRet: &expectedErrno, Args: []specs.LinuxSeccompArg{}, }, }, } assert.DeepEqual(t, expected, *p) } func TestLoadProfileWithDefaultErrnoRet(t *testing.T) { var profile = []byte(`{ "defaultAction": "SCMP_ACT_ERRNO", "defaultErrnoRet": 6 }`) rs := createSpec() p, err := LoadProfile(string(profile), &rs) if err != nil { t.Fatal(err) } expectedErrnoRet := uint(6) expected := specs.LinuxSeccomp{ DefaultAction: specs.ActErrno, DefaultErrnoRet: &expectedErrnoRet, } assert.DeepEqual(t, expected, *p) } func TestLoadProfileWithListenerPath(t *testing.T) { var profile = []byte(`{ "defaultAction": "SCMP_ACT_ERRNO", "listenerPath": "/var/run/seccompaget.sock", "listenerMetadata": "opaque-metadata" }`) rs := createSpec() p, err := LoadProfile(string(profile), &rs) if err != nil { t.Fatal(err) } expected := specs.LinuxSeccomp{ DefaultAction: specs.ActErrno, ListenerPath: "/var/run/seccompaget.sock", ListenerMetadata: "opaque-metadata", } assert.DeepEqual(t, expected, *p) } func TestLoadProfileWithFlag(t *testing.T) { profile := `{"defaultAction": "SCMP_ACT_ERRNO", "flags": ["SECCOMP_FILTER_FLAG_SPEC_ALLOW", "SECCOMP_FILTER_FLAG_LOG"]}` expected := specs.LinuxSeccomp{ DefaultAction: specs.ActErrno, Flags: []specs.LinuxSeccompFlag{"SECCOMP_FILTER_FLAG_SPEC_ALLOW", "SECCOMP_FILTER_FLAG_LOG"}, } rs := createSpec() p, err := LoadProfile(profile, &rs) assert.NilError(t, err) assert.DeepEqual(t, expected, *p) } // TestLoadProfileValidation tests that invalid profiles produce the correct error. 
func TestLoadProfileValidation(t *testing.T) { tests := []struct { doc string profile string expected string }{ { doc: "conflicting architectures and archMap", profile: `{"defaultAction": "SCMP_ACT_ERRNO", "architectures": ["A", "B", "C"], "archMap": [{"architecture": "A", "subArchitectures": ["B", "C"]}]}`, expected: `use either 'architectures' or 'archMap'`, }, { doc: "conflicting syscall.name and syscall.names", profile: `{"defaultAction": "SCMP_ACT_ERRNO", "syscalls": [{"name": "accept", "names": ["accept"], "action": "SCMP_ACT_ALLOW"}]}`, expected: `use either 'name' or 'names'`, }, } for _, tc := range tests { tc := tc rs := createSpec() t.Run(tc.doc, func(t *testing.T) { _, err := LoadProfile(tc.profile, &rs) assert.ErrorContains(t, err, tc.expected) }) } } // TestLoadLegacyProfile tests loading a seccomp profile in the old format // (before https://github.com/docker/docker/pull/24510) func TestLoadLegacyProfile(t *testing.T) { f, err := ioutil.ReadFile("fixtures/default-old-format.json") if err != nil { t.Fatal(err) } rs := createSpec() p, err := LoadProfile(string(f), &rs) assert.NilError(t, err) assert.Equal(t, p.DefaultAction, specs.ActErrno) assert.DeepEqual(t, p.Architectures, []specs.Arch{"SCMP_ARCH_X86_64", "SCMP_ARCH_X86", "SCMP_ARCH_X32"}) assert.Equal(t, len(p.Syscalls), 311) expected := specs.LinuxSyscall{ Names: []string{"accept"}, Action: specs.ActAllow, Args: []specs.LinuxSeccompArg{}, } assert.DeepEqual(t, p.Syscalls[0], expected) } func TestLoadDefaultProfile(t *testing.T) { f, err := ioutil.ReadFile("default.json") if err != nil { t.Fatal(err) } rs := createSpec() if _, err := LoadProfile(string(f), &rs); err != nil { t.Fatal(err) } } func TestUnmarshalDefaultProfile(t *testing.T) { expected := DefaultProfile() if expected == nil { t.Skip("seccomp not supported") } f, err := ioutil.ReadFile("default.json") if err != nil { t.Fatal(err) } var profile Seccomp err = json.Unmarshal(f, &profile) if err != nil { t.Fatal(err) } assert.DeepEqual(t, expected.Architectures, profile.Architectures) assert.DeepEqual(t, expected.ArchMap, profile.ArchMap) assert.DeepEqual(t, expected.DefaultAction, profile.DefaultAction) assert.DeepEqual(t, expected.Syscalls, profile.Syscalls) } func TestMarshalUnmarshalFilter(t *testing.T) { t.Parallel() tests := []struct { in string out string error bool }{ {in: `{"arches":["s390x"],"minKernel":3}`, error: true}, {in: `{"arches":["s390x"],"minKernel":3.12}`, error: true}, {in: `{"arches":["s390x"],"minKernel":true}`, error: true}, {in: `{"arches":["s390x"],"minKernel":"0.0"}`, error: true}, {in: `{"arches":["s390x"],"minKernel":"3"}`, error: true}, {in: `{"arches":["s390x"],"minKernel":".3"}`, error: true}, {in: `{"arches":["s390x"],"minKernel":"3."}`, error: true}, {in: `{"arches":["s390x"],"minKernel":"true"}`, error: true}, {in: `{"arches":["s390x"],"minKernel":"3.12.1\""}`, error: true}, {in: `{"arches":["s390x"],"minKernel":"4.15abc"}`, error: true}, {in: `{"arches":["s390x"],"minKernel":null}`, out: `{"arches":["s390x"]}`}, {in: `{"arches":["s390x"],"minKernel":""}`, out: `{"arches":["s390x"],"minKernel":""}`}, // FIXME: try to fix omitempty for this {in: `{"arches":["s390x"],"minKernel":"0.5"}`, out: `{"arches":["s390x"],"minKernel":"0.5"}`}, {in: `{"arches":["s390x"],"minKernel":"0.50"}`, out: `{"arches":["s390x"],"minKernel":"0.50"}`}, {in: `{"arches":["s390x"],"minKernel":"5.0"}`, out: `{"arches":["s390x"],"minKernel":"5.0"}`}, {in: `{"arches":["s390x"],"minKernel":"50.0"}`, out: `{"arches":["s390x"],"minKernel":"50.0"}`}, {in: 
`{"arches":["s390x"],"minKernel":"4.15"}`, out: `{"arches":["s390x"],"minKernel":"4.15"}`}, } for _, tc := range tests { tc := tc t.Run(tc.in, func(t *testing.T) { var filter Filter err := json.Unmarshal([]byte(tc.in), &filter) if tc.error { if err == nil { t.Fatal("expected an error") } else if !strings.Contains(err.Error(), "invalid kernel version") { t.Fatal("unexpected error:", err) } return } if err != nil { t.Fatal(err) } out, err := json.Marshal(filter) if err != nil { t.Fatal(err) } if string(out) != tc.out { t.Fatalf("expected %s, got %s", tc.out, string(out)) } }) } } func TestLoadConditional(t *testing.T) { f, err := ioutil.ReadFile("fixtures/conditional_include.json") if err != nil { t.Fatal(err) } tests := []struct { doc string cap string expected []string }{ {doc: "no caps", expected: []string{"chmod", "ptrace"}}, {doc: "with syslog", cap: "CAP_SYSLOG", expected: []string{"chmod", "syslog", "ptrace"}}, {doc: "no ptrace", cap: "CAP_SYS_ADMIN", expected: []string{"chmod"}}, } for _, tc := range tests { tc := tc t.Run(tc.doc, func(t *testing.T) { rs := createSpec(tc.cap) p, err := LoadProfile(string(f), &rs) if err != nil { t.Fatal(err) } if len(p.Syscalls) != len(tc.expected) { t.Fatalf("expected %d syscalls in profile, have %d", len(tc.expected), len(p.Syscalls)) } for i, v := range p.Syscalls { if v.Names[0] != tc.expected[i] { t.Fatalf("expected %s syscall, have %s", tc.expected[i], v.Names[0]) } } }) } } // createSpec() creates a minimum spec for testing func createSpec(caps ...string) specs.Spec { rs := specs.Spec{ Process: &specs.Process{ Capabilities: &specs.LinuxCapabilities{}, }, } if caps != nil { rs.Process.Capabilities.Bounding = append(rs.Process.Capabilities.Bounding, caps...) } return rs }
// +build linux package seccomp // import "github.com/docker/docker/profiles/seccomp" import ( "encoding/json" "io/ioutil" "strings" "testing" "github.com/opencontainers/runtime-spec/specs-go" "gotest.tools/v3/assert" ) func TestLoadProfile(t *testing.T) { f, err := ioutil.ReadFile("fixtures/example.json") if err != nil { t.Fatal(err) } rs := createSpec() p, err := LoadProfile(string(f), &rs) if err != nil { t.Fatal(err) } var expectedErrno uint = 12345 var expectedDefaultErrno uint = 1 expected := specs.LinuxSeccomp{ DefaultAction: specs.ActErrno, DefaultErrnoRet: &expectedDefaultErrno, Syscalls: []specs.LinuxSyscall{ { Names: []string{"clone"}, Action: specs.ActAllow, Args: []specs.LinuxSeccompArg{{ Index: 0, Value: 2114060288, ValueTwo: 0, Op: specs.OpMaskedEqual, }}, }, { Names: []string{"open"}, Action: specs.ActAllow, Args: []specs.LinuxSeccompArg{}, }, { Names: []string{"close"}, Action: specs.ActAllow, Args: []specs.LinuxSeccompArg{}, }, { Names: []string{"syslog"}, Action: specs.ActErrno, ErrnoRet: &expectedErrno, Args: []specs.LinuxSeccompArg{}, }, }, } assert.DeepEqual(t, expected, *p) } func TestLoadProfileWithDefaultErrnoRet(t *testing.T) { var profile = []byte(`{ "defaultAction": "SCMP_ACT_ERRNO", "defaultErrnoRet": 6 }`) rs := createSpec() p, err := LoadProfile(string(profile), &rs) if err != nil { t.Fatal(err) } expectedErrnoRet := uint(6) expected := specs.LinuxSeccomp{ DefaultAction: specs.ActErrno, DefaultErrnoRet: &expectedErrnoRet, } assert.DeepEqual(t, expected, *p) } func TestLoadProfileWithListenerPath(t *testing.T) { var profile = []byte(`{ "defaultAction": "SCMP_ACT_ERRNO", "listenerPath": "/var/run/seccompaget.sock", "listenerMetadata": "opaque-metadata" }`) rs := createSpec() p, err := LoadProfile(string(profile), &rs) if err != nil { t.Fatal(err) } expected := specs.LinuxSeccomp{ DefaultAction: specs.ActErrno, ListenerPath: "/var/run/seccompaget.sock", ListenerMetadata: "opaque-metadata", } assert.DeepEqual(t, expected, *p) } func TestLoadProfileWithFlag(t *testing.T) { profile := `{"defaultAction": "SCMP_ACT_ERRNO", "flags": ["SECCOMP_FILTER_FLAG_SPEC_ALLOW", "SECCOMP_FILTER_FLAG_LOG"]}` expected := specs.LinuxSeccomp{ DefaultAction: specs.ActErrno, Flags: []specs.LinuxSeccompFlag{"SECCOMP_FILTER_FLAG_SPEC_ALLOW", "SECCOMP_FILTER_FLAG_LOG"}, } rs := createSpec() p, err := LoadProfile(profile, &rs) assert.NilError(t, err) assert.DeepEqual(t, expected, *p) } // TestLoadProfileValidation tests that invalid profiles produce the correct error. 
func TestLoadProfileValidation(t *testing.T) { tests := []struct { doc string profile string expected string }{ { doc: "conflicting architectures and archMap", profile: `{"defaultAction": "SCMP_ACT_ERRNO", "architectures": ["A", "B", "C"], "archMap": [{"architecture": "A", "subArchitectures": ["B", "C"]}]}`, expected: `use either 'architectures' or 'archMap'`, }, { doc: "conflicting syscall.name and syscall.names", profile: `{"defaultAction": "SCMP_ACT_ERRNO", "syscalls": [{"name": "accept", "names": ["accept"], "action": "SCMP_ACT_ALLOW"}]}`, expected: `use either 'name' or 'names'`, }, } for _, tc := range tests { tc := tc rs := createSpec() t.Run(tc.doc, func(t *testing.T) { _, err := LoadProfile(tc.profile, &rs) assert.ErrorContains(t, err, tc.expected) }) } } // TestLoadLegacyProfile tests loading a seccomp profile in the old format // (before https://github.com/docker/docker/pull/24510) func TestLoadLegacyProfile(t *testing.T) { f, err := ioutil.ReadFile("fixtures/default-old-format.json") if err != nil { t.Fatal(err) } rs := createSpec() p, err := LoadProfile(string(f), &rs) assert.NilError(t, err) assert.Equal(t, p.DefaultAction, specs.ActErrno) assert.DeepEqual(t, p.Architectures, []specs.Arch{"SCMP_ARCH_X86_64", "SCMP_ARCH_X86", "SCMP_ARCH_X32"}) assert.Equal(t, len(p.Syscalls), 311) expected := specs.LinuxSyscall{ Names: []string{"accept"}, Action: specs.ActAllow, Args: []specs.LinuxSeccompArg{}, } assert.DeepEqual(t, p.Syscalls[0], expected) } func TestLoadDefaultProfile(t *testing.T) { f, err := ioutil.ReadFile("default.json") if err != nil { t.Fatal(err) } rs := createSpec() if _, err := LoadProfile(string(f), &rs); err != nil { t.Fatal(err) } } func TestUnmarshalDefaultProfile(t *testing.T) { expected := DefaultProfile() if expected == nil { t.Skip("seccomp not supported") } f, err := ioutil.ReadFile("default.json") if err != nil { t.Fatal(err) } var profile Seccomp err = json.Unmarshal(f, &profile) if err != nil { t.Fatal(err) } assert.DeepEqual(t, expected.Architectures, profile.Architectures) assert.DeepEqual(t, expected.ArchMap, profile.ArchMap) assert.DeepEqual(t, expected.DefaultAction, profile.DefaultAction) assert.DeepEqual(t, expected.Syscalls, profile.Syscalls) } func TestMarshalUnmarshalFilter(t *testing.T) { t.Parallel() tests := []struct { in string out string error bool }{ {in: `{"arches":["s390x"],"minKernel":3}`, error: true}, {in: `{"arches":["s390x"],"minKernel":3.12}`, error: true}, {in: `{"arches":["s390x"],"minKernel":true}`, error: true}, {in: `{"arches":["s390x"],"minKernel":"0.0"}`, error: true}, {in: `{"arches":["s390x"],"minKernel":"3"}`, error: true}, {in: `{"arches":["s390x"],"minKernel":".3"}`, error: true}, {in: `{"arches":["s390x"],"minKernel":"3."}`, error: true}, {in: `{"arches":["s390x"],"minKernel":"true"}`, error: true}, {in: `{"arches":["s390x"],"minKernel":"3.12.1\""}`, error: true}, {in: `{"arches":["s390x"],"minKernel":"4.15abc"}`, error: true}, {in: `{"arches":["s390x"],"minKernel":null}`, out: `{"arches":["s390x"]}`}, {in: `{"arches":["s390x"],"minKernel":""}`, out: `{"arches":["s390x"],"minKernel":""}`}, // FIXME: try to fix omitempty for this {in: `{"arches":["s390x"],"minKernel":"0.5"}`, out: `{"arches":["s390x"],"minKernel":"0.5"}`}, {in: `{"arches":["s390x"],"minKernel":"0.50"}`, out: `{"arches":["s390x"],"minKernel":"0.50"}`}, {in: `{"arches":["s390x"],"minKernel":"5.0"}`, out: `{"arches":["s390x"],"minKernel":"5.0"}`}, {in: `{"arches":["s390x"],"minKernel":"50.0"}`, out: `{"arches":["s390x"],"minKernel":"50.0"}`}, {in: 
`{"arches":["s390x"],"minKernel":"4.15"}`, out: `{"arches":["s390x"],"minKernel":"4.15"}`}, } for _, tc := range tests { tc := tc t.Run(tc.in, func(t *testing.T) { var filter Filter err := json.Unmarshal([]byte(tc.in), &filter) if tc.error { if err == nil { t.Fatal("expected an error") } else if !strings.Contains(err.Error(), "invalid kernel version") { t.Fatal("unexpected error:", err) } return } if err != nil { t.Fatal(err) } out, err := json.Marshal(filter) if err != nil { t.Fatal(err) } if string(out) != tc.out { t.Fatalf("expected %s, got %s", tc.out, string(out)) } }) } } func TestLoadConditional(t *testing.T) { f, err := ioutil.ReadFile("fixtures/conditional_include.json") if err != nil { t.Fatal(err) } tests := []struct { doc string cap string expected []string }{ {doc: "no caps", expected: []string{"chmod", "ptrace"}}, {doc: "with syslog", cap: "CAP_SYSLOG", expected: []string{"chmod", "syslog", "ptrace"}}, {doc: "no ptrace", cap: "CAP_SYS_ADMIN", expected: []string{"chmod"}}, } for _, tc := range tests { tc := tc t.Run(tc.doc, func(t *testing.T) { rs := createSpec(tc.cap) p, err := LoadProfile(string(f), &rs) if err != nil { t.Fatal(err) } if len(p.Syscalls) != len(tc.expected) { t.Fatalf("expected %d syscalls in profile, have %d", len(tc.expected), len(p.Syscalls)) } for i, v := range p.Syscalls { if v.Names[0] != tc.expected[i] { t.Fatalf("expected %s syscall, have %s", tc.expected[i], v.Names[0]) } } }) } } // createSpec() creates a minimum spec for testing func createSpec(caps ...string) specs.Spec { rs := specs.Spec{ Process: &specs.Process{ Capabilities: &specs.LinuxCapabilities{}, }, } if caps != nil { rs.Process.Capabilities.Bounding = append(rs.Process.Capabilities.Bounding, caps...) } return rs }
rata
7672963eec42e65e045f6eb745f5fe0682df5434
2480bebf59c25991ef88ba5229c7a2e65237510c
No, because now it is tested in the `TestLoadProfile` function here: https://github.com/moby/moby/pull/42649/files#diff-ee871717077bc31d70d119a348d475d268e45ff60583cc67a294103dd79b11f2R29 I can keep the test if you prefer :)
rata
4,540
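The reply in the record above notes that the explicit default errno is now exercised through `TestLoadProfile`. As a complementary, hedged sanity check outside the test suite, the default profile itself can be inspected directly; this sketch assumes a Linux build of `github.com/docker/docker/profiles/seccomp` with seccomp support compiled in (the package's `DefaultProfile` returns nil otherwise, as the tests' skip path shows).

```go
package main

import (
	"fmt"

	"github.com/docker/docker/profiles/seccomp"
	"golang.org/x/sys/unix"
)

func main() {
	p := seccomp.DefaultProfile() // nil when built without seccomp support
	if p == nil || p.DefaultErrnoRet == nil {
		fmt.Println("no explicit default errno set")
		return
	}
	// The explicit default should match the previously implicit EPERM.
	fmt.Println(*p.DefaultErrnoRet == uint(unix.EPERM)) // true
}
```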
moby/moby
42,649
seccomp: Use explicit DefaultErrnoRet
Since commit "seccomp: Sync fields with runtime-spec fields" (5d244675bdb23e8fce427036c03517243f344cd4) we support to specify the DefaultErrnoRet to be used. Before that commit it was not specified and EPERM was used by default. This commit keeps the same behaviour but just makes it explicit that the default is EPERM. Signed-off-by: Rodrigo Campos <[email protected]> This a follow-up of https://github.com/moby/moby/pull/42604#issuecomment-876632771, as suggested by @thaJeztah. This just adds an explicit default to EPERM. Right now the default is EPERM but is implicit. <!-- Please make sure you've read and understood our contributing guidelines; https://github.com/moby/moby/blob/master/CONTRIBUTING.md ** Make sure all your commits include a signature generated with `git commit -s` ** For additional information on our contributing process, read our contributing guide https://docs.docker.com/opensource/code/ If this is a bug fix, make sure your description includes "fixes #xxxx", or "closes #xxxx" Please provide the following information: --> **- What I did** Added the field `DefaultErrnoRet` to seccomp profiles **- How I did it** Added the field to default profile in `profiles/seccomp/default_linux.go`, then adjusted the tests to expect this new field **- How to verify it** You can run unit tests with: `TESTDIRS='github.com/docker/docker/profiles/seccomp' make test-unit`. You can also run `hack/validate/default-seccomp` to verify nothing was missing in the changes. **- Description for the changelog** <!-- Write a short (one line) summary that describes the changes in this pull request for inclusion in the changelog: --> Add an explicit DefaultErrnoRet field in the default seccomp profile. No behavior change. **- A picture of a cute animal (not mandatory but encouraged)** ![](https://www.kinderundjugendmedien.de/images/Ratatouille_pixar.jpg)
null
2021-07-16 16:36:59+00:00
2021-08-03 13:13:42+00:00
profiles/seccomp/seccomp_test.go
// +build linux package seccomp // import "github.com/docker/docker/profiles/seccomp" import ( "encoding/json" "io/ioutil" "strings" "testing" "github.com/opencontainers/runtime-spec/specs-go" "gotest.tools/v3/assert" ) func TestLoadProfile(t *testing.T) { f, err := ioutil.ReadFile("fixtures/example.json") if err != nil { t.Fatal(err) } rs := createSpec() p, err := LoadProfile(string(f), &rs) if err != nil { t.Fatal(err) } var expectedErrno uint = 12345 expected := specs.LinuxSeccomp{ DefaultAction: specs.ActErrno, Syscalls: []specs.LinuxSyscall{ { Names: []string{"clone"}, Action: specs.ActAllow, Args: []specs.LinuxSeccompArg{{ Index: 0, Value: 2114060288, ValueTwo: 0, Op: specs.OpMaskedEqual, }}, }, { Names: []string{"open"}, Action: specs.ActAllow, Args: []specs.LinuxSeccompArg{}, }, { Names: []string{"close"}, Action: specs.ActAllow, Args: []specs.LinuxSeccompArg{}, }, { Names: []string{"syslog"}, Action: specs.ActErrno, ErrnoRet: &expectedErrno, Args: []specs.LinuxSeccompArg{}, }, }, } assert.DeepEqual(t, expected, *p) } func TestLoadProfileWithDefaultErrnoRet(t *testing.T) { var profile = []byte(`{ "defaultAction": "SCMP_ACT_ERRNO", "defaultErrnoRet": 6 }`) rs := createSpec() p, err := LoadProfile(string(profile), &rs) if err != nil { t.Fatal(err) } expectedErrnoRet := uint(6) expected := specs.LinuxSeccomp{ DefaultAction: specs.ActErrno, DefaultErrnoRet: &expectedErrnoRet, } assert.DeepEqual(t, expected, *p) } func TestLoadProfileWithListenerPath(t *testing.T) { var profile = []byte(`{ "defaultAction": "SCMP_ACT_ERRNO", "listenerPath": "/var/run/seccompaget.sock", "listenerMetadata": "opaque-metadata" }`) rs := createSpec() p, err := LoadProfile(string(profile), &rs) if err != nil { t.Fatal(err) } expected := specs.LinuxSeccomp{ DefaultAction: specs.ActErrno, ListenerPath: "/var/run/seccompaget.sock", ListenerMetadata: "opaque-metadata", } assert.DeepEqual(t, expected, *p) } func TestLoadProfileWithFlag(t *testing.T) { profile := `{"defaultAction": "SCMP_ACT_ERRNO", "flags": ["SECCOMP_FILTER_FLAG_SPEC_ALLOW", "SECCOMP_FILTER_FLAG_LOG"]}` expected := specs.LinuxSeccomp{ DefaultAction: specs.ActErrno, Flags: []specs.LinuxSeccompFlag{"SECCOMP_FILTER_FLAG_SPEC_ALLOW", "SECCOMP_FILTER_FLAG_LOG"}, } rs := createSpec() p, err := LoadProfile(profile, &rs) assert.NilError(t, err) assert.DeepEqual(t, expected, *p) } // TestLoadProfileValidation tests that invalid profiles produce the correct error. 
func TestLoadProfileValidation(t *testing.T) { tests := []struct { doc string profile string expected string }{ { doc: "conflicting architectures and archMap", profile: `{"defaultAction": "SCMP_ACT_ERRNO", "architectures": ["A", "B", "C"], "archMap": [{"architecture": "A", "subArchitectures": ["B", "C"]}]}`, expected: `use either 'architectures' or 'archMap'`, }, { doc: "conflicting syscall.name and syscall.names", profile: `{"defaultAction": "SCMP_ACT_ERRNO", "syscalls": [{"name": "accept", "names": ["accept"], "action": "SCMP_ACT_ALLOW"}]}`, expected: `use either 'name' or 'names'`, }, } for _, tc := range tests { tc := tc rs := createSpec() t.Run(tc.doc, func(t *testing.T) { _, err := LoadProfile(tc.profile, &rs) assert.ErrorContains(t, err, tc.expected) }) } } // TestLoadLegacyProfile tests loading a seccomp profile in the old format // (before https://github.com/docker/docker/pull/24510) func TestLoadLegacyProfile(t *testing.T) { f, err := ioutil.ReadFile("fixtures/default-old-format.json") if err != nil { t.Fatal(err) } rs := createSpec() p, err := LoadProfile(string(f), &rs) assert.NilError(t, err) assert.Equal(t, p.DefaultAction, specs.ActErrno) assert.DeepEqual(t, p.Architectures, []specs.Arch{"SCMP_ARCH_X86_64", "SCMP_ARCH_X86", "SCMP_ARCH_X32"}) assert.Equal(t, len(p.Syscalls), 311) expected := specs.LinuxSyscall{ Names: []string{"accept"}, Action: specs.ActAllow, Args: []specs.LinuxSeccompArg{}, } assert.DeepEqual(t, p.Syscalls[0], expected) } func TestLoadDefaultProfile(t *testing.T) { f, err := ioutil.ReadFile("default.json") if err != nil { t.Fatal(err) } rs := createSpec() if _, err := LoadProfile(string(f), &rs); err != nil { t.Fatal(err) } } func TestUnmarshalDefaultProfile(t *testing.T) { expected := DefaultProfile() if expected == nil { t.Skip("seccomp not supported") } f, err := ioutil.ReadFile("default.json") if err != nil { t.Fatal(err) } var profile Seccomp err = json.Unmarshal(f, &profile) if err != nil { t.Fatal(err) } assert.DeepEqual(t, expected.Architectures, profile.Architectures) assert.DeepEqual(t, expected.ArchMap, profile.ArchMap) assert.DeepEqual(t, expected.DefaultAction, profile.DefaultAction) assert.DeepEqual(t, expected.Syscalls, profile.Syscalls) } func TestMarshalUnmarshalFilter(t *testing.T) { t.Parallel() tests := []struct { in string out string error bool }{ {in: `{"arches":["s390x"],"minKernel":3}`, error: true}, {in: `{"arches":["s390x"],"minKernel":3.12}`, error: true}, {in: `{"arches":["s390x"],"minKernel":true}`, error: true}, {in: `{"arches":["s390x"],"minKernel":"0.0"}`, error: true}, {in: `{"arches":["s390x"],"minKernel":"3"}`, error: true}, {in: `{"arches":["s390x"],"minKernel":".3"}`, error: true}, {in: `{"arches":["s390x"],"minKernel":"3."}`, error: true}, {in: `{"arches":["s390x"],"minKernel":"true"}`, error: true}, {in: `{"arches":["s390x"],"minKernel":"3.12.1\""}`, error: true}, {in: `{"arches":["s390x"],"minKernel":"4.15abc"}`, error: true}, {in: `{"arches":["s390x"],"minKernel":null}`, out: `{"arches":["s390x"]}`}, {in: `{"arches":["s390x"],"minKernel":""}`, out: `{"arches":["s390x"],"minKernel":""}`}, // FIXME: try to fix omitempty for this {in: `{"arches":["s390x"],"minKernel":"0.5"}`, out: `{"arches":["s390x"],"minKernel":"0.5"}`}, {in: `{"arches":["s390x"],"minKernel":"0.50"}`, out: `{"arches":["s390x"],"minKernel":"0.50"}`}, {in: `{"arches":["s390x"],"minKernel":"5.0"}`, out: `{"arches":["s390x"],"minKernel":"5.0"}`}, {in: `{"arches":["s390x"],"minKernel":"50.0"}`, out: `{"arches":["s390x"],"minKernel":"50.0"}`}, {in: 
`{"arches":["s390x"],"minKernel":"4.15"}`, out: `{"arches":["s390x"],"minKernel":"4.15"}`}, } for _, tc := range tests { tc := tc t.Run(tc.in, func(t *testing.T) { var filter Filter err := json.Unmarshal([]byte(tc.in), &filter) if tc.error { if err == nil { t.Fatal("expected an error") } else if !strings.Contains(err.Error(), "invalid kernel version") { t.Fatal("unexpected error:", err) } return } if err != nil { t.Fatal(err) } out, err := json.Marshal(filter) if err != nil { t.Fatal(err) } if string(out) != tc.out { t.Fatalf("expected %s, got %s", tc.out, string(out)) } }) } } func TestLoadConditional(t *testing.T) { f, err := ioutil.ReadFile("fixtures/conditional_include.json") if err != nil { t.Fatal(err) } tests := []struct { doc string cap string expected []string }{ {doc: "no caps", expected: []string{"chmod", "ptrace"}}, {doc: "with syslog", cap: "CAP_SYSLOG", expected: []string{"chmod", "syslog", "ptrace"}}, {doc: "no ptrace", cap: "CAP_SYS_ADMIN", expected: []string{"chmod"}}, } for _, tc := range tests { tc := tc t.Run(tc.doc, func(t *testing.T) { rs := createSpec(tc.cap) p, err := LoadProfile(string(f), &rs) if err != nil { t.Fatal(err) } if len(p.Syscalls) != len(tc.expected) { t.Fatalf("expected %d syscalls in profile, have %d", len(tc.expected), len(p.Syscalls)) } for i, v := range p.Syscalls { if v.Names[0] != tc.expected[i] { t.Fatalf("expected %s syscall, have %s", tc.expected[i], v.Names[0]) } } }) } } // createSpec() creates a minimum spec for testing func createSpec(caps ...string) specs.Spec { rs := specs.Spec{ Process: &specs.Process{ Capabilities: &specs.LinuxCapabilities{}, }, } if caps != nil { rs.Process.Capabilities.Bounding = append(rs.Process.Capabilities.Bounding, caps...) } return rs }
// +build linux package seccomp // import "github.com/docker/docker/profiles/seccomp" import ( "encoding/json" "io/ioutil" "strings" "testing" "github.com/opencontainers/runtime-spec/specs-go" "gotest.tools/v3/assert" ) func TestLoadProfile(t *testing.T) { f, err := ioutil.ReadFile("fixtures/example.json") if err != nil { t.Fatal(err) } rs := createSpec() p, err := LoadProfile(string(f), &rs) if err != nil { t.Fatal(err) } var expectedErrno uint = 12345 var expectedDefaultErrno uint = 1 expected := specs.LinuxSeccomp{ DefaultAction: specs.ActErrno, DefaultErrnoRet: &expectedDefaultErrno, Syscalls: []specs.LinuxSyscall{ { Names: []string{"clone"}, Action: specs.ActAllow, Args: []specs.LinuxSeccompArg{{ Index: 0, Value: 2114060288, ValueTwo: 0, Op: specs.OpMaskedEqual, }}, }, { Names: []string{"open"}, Action: specs.ActAllow, Args: []specs.LinuxSeccompArg{}, }, { Names: []string{"close"}, Action: specs.ActAllow, Args: []specs.LinuxSeccompArg{}, }, { Names: []string{"syslog"}, Action: specs.ActErrno, ErrnoRet: &expectedErrno, Args: []specs.LinuxSeccompArg{}, }, }, } assert.DeepEqual(t, expected, *p) } func TestLoadProfileWithDefaultErrnoRet(t *testing.T) { var profile = []byte(`{ "defaultAction": "SCMP_ACT_ERRNO", "defaultErrnoRet": 6 }`) rs := createSpec() p, err := LoadProfile(string(profile), &rs) if err != nil { t.Fatal(err) } expectedErrnoRet := uint(6) expected := specs.LinuxSeccomp{ DefaultAction: specs.ActErrno, DefaultErrnoRet: &expectedErrnoRet, } assert.DeepEqual(t, expected, *p) } func TestLoadProfileWithListenerPath(t *testing.T) { var profile = []byte(`{ "defaultAction": "SCMP_ACT_ERRNO", "listenerPath": "/var/run/seccompaget.sock", "listenerMetadata": "opaque-metadata" }`) rs := createSpec() p, err := LoadProfile(string(profile), &rs) if err != nil { t.Fatal(err) } expected := specs.LinuxSeccomp{ DefaultAction: specs.ActErrno, ListenerPath: "/var/run/seccompaget.sock", ListenerMetadata: "opaque-metadata", } assert.DeepEqual(t, expected, *p) } func TestLoadProfileWithFlag(t *testing.T) { profile := `{"defaultAction": "SCMP_ACT_ERRNO", "flags": ["SECCOMP_FILTER_FLAG_SPEC_ALLOW", "SECCOMP_FILTER_FLAG_LOG"]}` expected := specs.LinuxSeccomp{ DefaultAction: specs.ActErrno, Flags: []specs.LinuxSeccompFlag{"SECCOMP_FILTER_FLAG_SPEC_ALLOW", "SECCOMP_FILTER_FLAG_LOG"}, } rs := createSpec() p, err := LoadProfile(profile, &rs) assert.NilError(t, err) assert.DeepEqual(t, expected, *p) } // TestLoadProfileValidation tests that invalid profiles produce the correct error. 
func TestLoadProfileValidation(t *testing.T) { tests := []struct { doc string profile string expected string }{ { doc: "conflicting architectures and archMap", profile: `{"defaultAction": "SCMP_ACT_ERRNO", "architectures": ["A", "B", "C"], "archMap": [{"architecture": "A", "subArchitectures": ["B", "C"]}]}`, expected: `use either 'architectures' or 'archMap'`, }, { doc: "conflicting syscall.name and syscall.names", profile: `{"defaultAction": "SCMP_ACT_ERRNO", "syscalls": [{"name": "accept", "names": ["accept"], "action": "SCMP_ACT_ALLOW"}]}`, expected: `use either 'name' or 'names'`, }, } for _, tc := range tests { tc := tc rs := createSpec() t.Run(tc.doc, func(t *testing.T) { _, err := LoadProfile(tc.profile, &rs) assert.ErrorContains(t, err, tc.expected) }) } } // TestLoadLegacyProfile tests loading a seccomp profile in the old format // (before https://github.com/docker/docker/pull/24510) func TestLoadLegacyProfile(t *testing.T) { f, err := ioutil.ReadFile("fixtures/default-old-format.json") if err != nil { t.Fatal(err) } rs := createSpec() p, err := LoadProfile(string(f), &rs) assert.NilError(t, err) assert.Equal(t, p.DefaultAction, specs.ActErrno) assert.DeepEqual(t, p.Architectures, []specs.Arch{"SCMP_ARCH_X86_64", "SCMP_ARCH_X86", "SCMP_ARCH_X32"}) assert.Equal(t, len(p.Syscalls), 311) expected := specs.LinuxSyscall{ Names: []string{"accept"}, Action: specs.ActAllow, Args: []specs.LinuxSeccompArg{}, } assert.DeepEqual(t, p.Syscalls[0], expected) } func TestLoadDefaultProfile(t *testing.T) { f, err := ioutil.ReadFile("default.json") if err != nil { t.Fatal(err) } rs := createSpec() if _, err := LoadProfile(string(f), &rs); err != nil { t.Fatal(err) } } func TestUnmarshalDefaultProfile(t *testing.T) { expected := DefaultProfile() if expected == nil { t.Skip("seccomp not supported") } f, err := ioutil.ReadFile("default.json") if err != nil { t.Fatal(err) } var profile Seccomp err = json.Unmarshal(f, &profile) if err != nil { t.Fatal(err) } assert.DeepEqual(t, expected.Architectures, profile.Architectures) assert.DeepEqual(t, expected.ArchMap, profile.ArchMap) assert.DeepEqual(t, expected.DefaultAction, profile.DefaultAction) assert.DeepEqual(t, expected.Syscalls, profile.Syscalls) } func TestMarshalUnmarshalFilter(t *testing.T) { t.Parallel() tests := []struct { in string out string error bool }{ {in: `{"arches":["s390x"],"minKernel":3}`, error: true}, {in: `{"arches":["s390x"],"minKernel":3.12}`, error: true}, {in: `{"arches":["s390x"],"minKernel":true}`, error: true}, {in: `{"arches":["s390x"],"minKernel":"0.0"}`, error: true}, {in: `{"arches":["s390x"],"minKernel":"3"}`, error: true}, {in: `{"arches":["s390x"],"minKernel":".3"}`, error: true}, {in: `{"arches":["s390x"],"minKernel":"3."}`, error: true}, {in: `{"arches":["s390x"],"minKernel":"true"}`, error: true}, {in: `{"arches":["s390x"],"minKernel":"3.12.1\""}`, error: true}, {in: `{"arches":["s390x"],"minKernel":"4.15abc"}`, error: true}, {in: `{"arches":["s390x"],"minKernel":null}`, out: `{"arches":["s390x"]}`}, {in: `{"arches":["s390x"],"minKernel":""}`, out: `{"arches":["s390x"],"minKernel":""}`}, // FIXME: try to fix omitempty for this {in: `{"arches":["s390x"],"minKernel":"0.5"}`, out: `{"arches":["s390x"],"minKernel":"0.5"}`}, {in: `{"arches":["s390x"],"minKernel":"0.50"}`, out: `{"arches":["s390x"],"minKernel":"0.50"}`}, {in: `{"arches":["s390x"],"minKernel":"5.0"}`, out: `{"arches":["s390x"],"minKernel":"5.0"}`}, {in: `{"arches":["s390x"],"minKernel":"50.0"}`, out: `{"arches":["s390x"],"minKernel":"50.0"}`}, {in: 
`{"arches":["s390x"],"minKernel":"4.15"}`, out: `{"arches":["s390x"],"minKernel":"4.15"}`}, } for _, tc := range tests { tc := tc t.Run(tc.in, func(t *testing.T) { var filter Filter err := json.Unmarshal([]byte(tc.in), &filter) if tc.error { if err == nil { t.Fatal("expected an error") } else if !strings.Contains(err.Error(), "invalid kernel version") { t.Fatal("unexpected error:", err) } return } if err != nil { t.Fatal(err) } out, err := json.Marshal(filter) if err != nil { t.Fatal(err) } if string(out) != tc.out { t.Fatalf("expected %s, got %s", tc.out, string(out)) } }) } } func TestLoadConditional(t *testing.T) { f, err := ioutil.ReadFile("fixtures/conditional_include.json") if err != nil { t.Fatal(err) } tests := []struct { doc string cap string expected []string }{ {doc: "no caps", expected: []string{"chmod", "ptrace"}}, {doc: "with syslog", cap: "CAP_SYSLOG", expected: []string{"chmod", "syslog", "ptrace"}}, {doc: "no ptrace", cap: "CAP_SYS_ADMIN", expected: []string{"chmod"}}, } for _, tc := range tests { tc := tc t.Run(tc.doc, func(t *testing.T) { rs := createSpec(tc.cap) p, err := LoadProfile(string(f), &rs) if err != nil { t.Fatal(err) } if len(p.Syscalls) != len(tc.expected) { t.Fatalf("expected %d syscalls in profile, have %d", len(tc.expected), len(p.Syscalls)) } for i, v := range p.Syscalls { if v.Names[0] != tc.expected[i] { t.Fatalf("expected %s syscall, have %s", tc.expected[i], v.Names[0]) } } }) } } // createSpec() creates a minimum spec for testing func createSpec(caps ...string) specs.Spec { rs := specs.Spec{ Process: &specs.Process{ Capabilities: &specs.LinuxCapabilities{}, }, } if caps != nil { rs.Process.Capabilities.Bounding = append(rs.Process.Capabilities.Bounding, caps...) } return rs }
rata
7672963eec42e65e045f6eb745f5fe0682df5434
2480bebf59c25991ef88ba5229c7a2e65237510c
The reason I was looking at it (though I haven't checked all the other tests) was that I felt it was good to keep a test that loads the profile from JSON (`TestLoadProfile` constructs the config in Go).
thaJeztah
4,541
moby/moby
42,649
seccomp: Use explicit DefaultErrnoRet
Since commit "seccomp: Sync fields with runtime-spec fields" (5d244675bdb23e8fce427036c03517243f344cd4) we support to specify the DefaultErrnoRet to be used. Before that commit it was not specified and EPERM was used by default. This commit keeps the same behaviour but just makes it explicit that the default is EPERM. Signed-off-by: Rodrigo Campos <[email protected]> This a follow-up of https://github.com/moby/moby/pull/42604#issuecomment-876632771, as suggested by @thaJeztah. This just adds an explicit default to EPERM. Right now the default is EPERM but is implicit. <!-- Please make sure you've read and understood our contributing guidelines; https://github.com/moby/moby/blob/master/CONTRIBUTING.md ** Make sure all your commits include a signature generated with `git commit -s` ** For additional information on our contributing process, read our contributing guide https://docs.docker.com/opensource/code/ If this is a bug fix, make sure your description includes "fixes #xxxx", or "closes #xxxx" Please provide the following information: --> **- What I did** Added the field `DefaultErrnoRet` to seccomp profiles **- How I did it** Added the field to default profile in `profiles/seccomp/default_linux.go`, then adjusted the tests to expect this new field **- How to verify it** You can run unit tests with: `TESTDIRS='github.com/docker/docker/profiles/seccomp' make test-unit`. You can also run `hack/validate/default-seccomp` to verify nothing was missing in the changes. **- Description for the changelog** <!-- Write a short (one line) summary that describes the changes in this pull request for inclusion in the changelog: --> Add an explicit DefaultErrnoRet field in the default seccomp profile. No behavior change. **- A picture of a cute animal (not mandatory but encouraged)** ![](https://www.kinderundjugendmedien.de/images/Ratatouille_pixar.jpg)
null
2021-07-16 16:36:59+00:00
2021-08-03 13:13:42+00:00
profiles/seccomp/seccomp_test.go
// +build linux package seccomp // import "github.com/docker/docker/profiles/seccomp" import ( "encoding/json" "io/ioutil" "strings" "testing" "github.com/opencontainers/runtime-spec/specs-go" "gotest.tools/v3/assert" ) func TestLoadProfile(t *testing.T) { f, err := ioutil.ReadFile("fixtures/example.json") if err != nil { t.Fatal(err) } rs := createSpec() p, err := LoadProfile(string(f), &rs) if err != nil { t.Fatal(err) } var expectedErrno uint = 12345 expected := specs.LinuxSeccomp{ DefaultAction: specs.ActErrno, Syscalls: []specs.LinuxSyscall{ { Names: []string{"clone"}, Action: specs.ActAllow, Args: []specs.LinuxSeccompArg{{ Index: 0, Value: 2114060288, ValueTwo: 0, Op: specs.OpMaskedEqual, }}, }, { Names: []string{"open"}, Action: specs.ActAllow, Args: []specs.LinuxSeccompArg{}, }, { Names: []string{"close"}, Action: specs.ActAllow, Args: []specs.LinuxSeccompArg{}, }, { Names: []string{"syslog"}, Action: specs.ActErrno, ErrnoRet: &expectedErrno, Args: []specs.LinuxSeccompArg{}, }, }, } assert.DeepEqual(t, expected, *p) } func TestLoadProfileWithDefaultErrnoRet(t *testing.T) { var profile = []byte(`{ "defaultAction": "SCMP_ACT_ERRNO", "defaultErrnoRet": 6 }`) rs := createSpec() p, err := LoadProfile(string(profile), &rs) if err != nil { t.Fatal(err) } expectedErrnoRet := uint(6) expected := specs.LinuxSeccomp{ DefaultAction: specs.ActErrno, DefaultErrnoRet: &expectedErrnoRet, } assert.DeepEqual(t, expected, *p) } func TestLoadProfileWithListenerPath(t *testing.T) { var profile = []byte(`{ "defaultAction": "SCMP_ACT_ERRNO", "listenerPath": "/var/run/seccompaget.sock", "listenerMetadata": "opaque-metadata" }`) rs := createSpec() p, err := LoadProfile(string(profile), &rs) if err != nil { t.Fatal(err) } expected := specs.LinuxSeccomp{ DefaultAction: specs.ActErrno, ListenerPath: "/var/run/seccompaget.sock", ListenerMetadata: "opaque-metadata", } assert.DeepEqual(t, expected, *p) } func TestLoadProfileWithFlag(t *testing.T) { profile := `{"defaultAction": "SCMP_ACT_ERRNO", "flags": ["SECCOMP_FILTER_FLAG_SPEC_ALLOW", "SECCOMP_FILTER_FLAG_LOG"]}` expected := specs.LinuxSeccomp{ DefaultAction: specs.ActErrno, Flags: []specs.LinuxSeccompFlag{"SECCOMP_FILTER_FLAG_SPEC_ALLOW", "SECCOMP_FILTER_FLAG_LOG"}, } rs := createSpec() p, err := LoadProfile(profile, &rs) assert.NilError(t, err) assert.DeepEqual(t, expected, *p) } // TestLoadProfileValidation tests that invalid profiles produce the correct error. 
func TestLoadProfileValidation(t *testing.T) { tests := []struct { doc string profile string expected string }{ { doc: "conflicting architectures and archMap", profile: `{"defaultAction": "SCMP_ACT_ERRNO", "architectures": ["A", "B", "C"], "archMap": [{"architecture": "A", "subArchitectures": ["B", "C"]}]}`, expected: `use either 'architectures' or 'archMap'`, }, { doc: "conflicting syscall.name and syscall.names", profile: `{"defaultAction": "SCMP_ACT_ERRNO", "syscalls": [{"name": "accept", "names": ["accept"], "action": "SCMP_ACT_ALLOW"}]}`, expected: `use either 'name' or 'names'`, }, } for _, tc := range tests { tc := tc rs := createSpec() t.Run(tc.doc, func(t *testing.T) { _, err := LoadProfile(tc.profile, &rs) assert.ErrorContains(t, err, tc.expected) }) } } // TestLoadLegacyProfile tests loading a seccomp profile in the old format // (before https://github.com/docker/docker/pull/24510) func TestLoadLegacyProfile(t *testing.T) { f, err := ioutil.ReadFile("fixtures/default-old-format.json") if err != nil { t.Fatal(err) } rs := createSpec() p, err := LoadProfile(string(f), &rs) assert.NilError(t, err) assert.Equal(t, p.DefaultAction, specs.ActErrno) assert.DeepEqual(t, p.Architectures, []specs.Arch{"SCMP_ARCH_X86_64", "SCMP_ARCH_X86", "SCMP_ARCH_X32"}) assert.Equal(t, len(p.Syscalls), 311) expected := specs.LinuxSyscall{ Names: []string{"accept"}, Action: specs.ActAllow, Args: []specs.LinuxSeccompArg{}, } assert.DeepEqual(t, p.Syscalls[0], expected) } func TestLoadDefaultProfile(t *testing.T) { f, err := ioutil.ReadFile("default.json") if err != nil { t.Fatal(err) } rs := createSpec() if _, err := LoadProfile(string(f), &rs); err != nil { t.Fatal(err) } } func TestUnmarshalDefaultProfile(t *testing.T) { expected := DefaultProfile() if expected == nil { t.Skip("seccomp not supported") } f, err := ioutil.ReadFile("default.json") if err != nil { t.Fatal(err) } var profile Seccomp err = json.Unmarshal(f, &profile) if err != nil { t.Fatal(err) } assert.DeepEqual(t, expected.Architectures, profile.Architectures) assert.DeepEqual(t, expected.ArchMap, profile.ArchMap) assert.DeepEqual(t, expected.DefaultAction, profile.DefaultAction) assert.DeepEqual(t, expected.Syscalls, profile.Syscalls) } func TestMarshalUnmarshalFilter(t *testing.T) { t.Parallel() tests := []struct { in string out string error bool }{ {in: `{"arches":["s390x"],"minKernel":3}`, error: true}, {in: `{"arches":["s390x"],"minKernel":3.12}`, error: true}, {in: `{"arches":["s390x"],"minKernel":true}`, error: true}, {in: `{"arches":["s390x"],"minKernel":"0.0"}`, error: true}, {in: `{"arches":["s390x"],"minKernel":"3"}`, error: true}, {in: `{"arches":["s390x"],"minKernel":".3"}`, error: true}, {in: `{"arches":["s390x"],"minKernel":"3."}`, error: true}, {in: `{"arches":["s390x"],"minKernel":"true"}`, error: true}, {in: `{"arches":["s390x"],"minKernel":"3.12.1\""}`, error: true}, {in: `{"arches":["s390x"],"minKernel":"4.15abc"}`, error: true}, {in: `{"arches":["s390x"],"minKernel":null}`, out: `{"arches":["s390x"]}`}, {in: `{"arches":["s390x"],"minKernel":""}`, out: `{"arches":["s390x"],"minKernel":""}`}, // FIXME: try to fix omitempty for this {in: `{"arches":["s390x"],"minKernel":"0.5"}`, out: `{"arches":["s390x"],"minKernel":"0.5"}`}, {in: `{"arches":["s390x"],"minKernel":"0.50"}`, out: `{"arches":["s390x"],"minKernel":"0.50"}`}, {in: `{"arches":["s390x"],"minKernel":"5.0"}`, out: `{"arches":["s390x"],"minKernel":"5.0"}`}, {in: `{"arches":["s390x"],"minKernel":"50.0"}`, out: `{"arches":["s390x"],"minKernel":"50.0"}`}, {in: 
`{"arches":["s390x"],"minKernel":"4.15"}`, out: `{"arches":["s390x"],"minKernel":"4.15"}`}, } for _, tc := range tests { tc := tc t.Run(tc.in, func(t *testing.T) { var filter Filter err := json.Unmarshal([]byte(tc.in), &filter) if tc.error { if err == nil { t.Fatal("expected an error") } else if !strings.Contains(err.Error(), "invalid kernel version") { t.Fatal("unexpected error:", err) } return } if err != nil { t.Fatal(err) } out, err := json.Marshal(filter) if err != nil { t.Fatal(err) } if string(out) != tc.out { t.Fatalf("expected %s, got %s", tc.out, string(out)) } }) } } func TestLoadConditional(t *testing.T) { f, err := ioutil.ReadFile("fixtures/conditional_include.json") if err != nil { t.Fatal(err) } tests := []struct { doc string cap string expected []string }{ {doc: "no caps", expected: []string{"chmod", "ptrace"}}, {doc: "with syslog", cap: "CAP_SYSLOG", expected: []string{"chmod", "syslog", "ptrace"}}, {doc: "no ptrace", cap: "CAP_SYS_ADMIN", expected: []string{"chmod"}}, } for _, tc := range tests { tc := tc t.Run(tc.doc, func(t *testing.T) { rs := createSpec(tc.cap) p, err := LoadProfile(string(f), &rs) if err != nil { t.Fatal(err) } if len(p.Syscalls) != len(tc.expected) { t.Fatalf("expected %d syscalls in profile, have %d", len(tc.expected), len(p.Syscalls)) } for i, v := range p.Syscalls { if v.Names[0] != tc.expected[i] { t.Fatalf("expected %s syscall, have %s", tc.expected[i], v.Names[0]) } } }) } } // createSpec() creates a minimum spec for testing func createSpec(caps ...string) specs.Spec { rs := specs.Spec{ Process: &specs.Process{ Capabilities: &specs.LinuxCapabilities{}, }, } if caps != nil { rs.Process.Capabilities.Bounding = append(rs.Process.Capabilities.Bounding, caps...) } return rs }
// +build linux package seccomp // import "github.com/docker/docker/profiles/seccomp" import ( "encoding/json" "io/ioutil" "strings" "testing" "github.com/opencontainers/runtime-spec/specs-go" "gotest.tools/v3/assert" ) func TestLoadProfile(t *testing.T) { f, err := ioutil.ReadFile("fixtures/example.json") if err != nil { t.Fatal(err) } rs := createSpec() p, err := LoadProfile(string(f), &rs) if err != nil { t.Fatal(err) } var expectedErrno uint = 12345 var expectedDefaultErrno uint = 1 expected := specs.LinuxSeccomp{ DefaultAction: specs.ActErrno, DefaultErrnoRet: &expectedDefaultErrno, Syscalls: []specs.LinuxSyscall{ { Names: []string{"clone"}, Action: specs.ActAllow, Args: []specs.LinuxSeccompArg{{ Index: 0, Value: 2114060288, ValueTwo: 0, Op: specs.OpMaskedEqual, }}, }, { Names: []string{"open"}, Action: specs.ActAllow, Args: []specs.LinuxSeccompArg{}, }, { Names: []string{"close"}, Action: specs.ActAllow, Args: []specs.LinuxSeccompArg{}, }, { Names: []string{"syslog"}, Action: specs.ActErrno, ErrnoRet: &expectedErrno, Args: []specs.LinuxSeccompArg{}, }, }, } assert.DeepEqual(t, expected, *p) } func TestLoadProfileWithDefaultErrnoRet(t *testing.T) { var profile = []byte(`{ "defaultAction": "SCMP_ACT_ERRNO", "defaultErrnoRet": 6 }`) rs := createSpec() p, err := LoadProfile(string(profile), &rs) if err != nil { t.Fatal(err) } expectedErrnoRet := uint(6) expected := specs.LinuxSeccomp{ DefaultAction: specs.ActErrno, DefaultErrnoRet: &expectedErrnoRet, } assert.DeepEqual(t, expected, *p) } func TestLoadProfileWithListenerPath(t *testing.T) { var profile = []byte(`{ "defaultAction": "SCMP_ACT_ERRNO", "listenerPath": "/var/run/seccompaget.sock", "listenerMetadata": "opaque-metadata" }`) rs := createSpec() p, err := LoadProfile(string(profile), &rs) if err != nil { t.Fatal(err) } expected := specs.LinuxSeccomp{ DefaultAction: specs.ActErrno, ListenerPath: "/var/run/seccompaget.sock", ListenerMetadata: "opaque-metadata", } assert.DeepEqual(t, expected, *p) } func TestLoadProfileWithFlag(t *testing.T) { profile := `{"defaultAction": "SCMP_ACT_ERRNO", "flags": ["SECCOMP_FILTER_FLAG_SPEC_ALLOW", "SECCOMP_FILTER_FLAG_LOG"]}` expected := specs.LinuxSeccomp{ DefaultAction: specs.ActErrno, Flags: []specs.LinuxSeccompFlag{"SECCOMP_FILTER_FLAG_SPEC_ALLOW", "SECCOMP_FILTER_FLAG_LOG"}, } rs := createSpec() p, err := LoadProfile(profile, &rs) assert.NilError(t, err) assert.DeepEqual(t, expected, *p) } // TestLoadProfileValidation tests that invalid profiles produce the correct error. 
func TestLoadProfileValidation(t *testing.T) { tests := []struct { doc string profile string expected string }{ { doc: "conflicting architectures and archMap", profile: `{"defaultAction": "SCMP_ACT_ERRNO", "architectures": ["A", "B", "C"], "archMap": [{"architecture": "A", "subArchitectures": ["B", "C"]}]}`, expected: `use either 'architectures' or 'archMap'`, }, { doc: "conflicting syscall.name and syscall.names", profile: `{"defaultAction": "SCMP_ACT_ERRNO", "syscalls": [{"name": "accept", "names": ["accept"], "action": "SCMP_ACT_ALLOW"}]}`, expected: `use either 'name' or 'names'`, }, } for _, tc := range tests { tc := tc rs := createSpec() t.Run(tc.doc, func(t *testing.T) { _, err := LoadProfile(tc.profile, &rs) assert.ErrorContains(t, err, tc.expected) }) } } // TestLoadLegacyProfile tests loading a seccomp profile in the old format // (before https://github.com/docker/docker/pull/24510) func TestLoadLegacyProfile(t *testing.T) { f, err := ioutil.ReadFile("fixtures/default-old-format.json") if err != nil { t.Fatal(err) } rs := createSpec() p, err := LoadProfile(string(f), &rs) assert.NilError(t, err) assert.Equal(t, p.DefaultAction, specs.ActErrno) assert.DeepEqual(t, p.Architectures, []specs.Arch{"SCMP_ARCH_X86_64", "SCMP_ARCH_X86", "SCMP_ARCH_X32"}) assert.Equal(t, len(p.Syscalls), 311) expected := specs.LinuxSyscall{ Names: []string{"accept"}, Action: specs.ActAllow, Args: []specs.LinuxSeccompArg{}, } assert.DeepEqual(t, p.Syscalls[0], expected) } func TestLoadDefaultProfile(t *testing.T) { f, err := ioutil.ReadFile("default.json") if err != nil { t.Fatal(err) } rs := createSpec() if _, err := LoadProfile(string(f), &rs); err != nil { t.Fatal(err) } } func TestUnmarshalDefaultProfile(t *testing.T) { expected := DefaultProfile() if expected == nil { t.Skip("seccomp not supported") } f, err := ioutil.ReadFile("default.json") if err != nil { t.Fatal(err) } var profile Seccomp err = json.Unmarshal(f, &profile) if err != nil { t.Fatal(err) } assert.DeepEqual(t, expected.Architectures, profile.Architectures) assert.DeepEqual(t, expected.ArchMap, profile.ArchMap) assert.DeepEqual(t, expected.DefaultAction, profile.DefaultAction) assert.DeepEqual(t, expected.Syscalls, profile.Syscalls) } func TestMarshalUnmarshalFilter(t *testing.T) { t.Parallel() tests := []struct { in string out string error bool }{ {in: `{"arches":["s390x"],"minKernel":3}`, error: true}, {in: `{"arches":["s390x"],"minKernel":3.12}`, error: true}, {in: `{"arches":["s390x"],"minKernel":true}`, error: true}, {in: `{"arches":["s390x"],"minKernel":"0.0"}`, error: true}, {in: `{"arches":["s390x"],"minKernel":"3"}`, error: true}, {in: `{"arches":["s390x"],"minKernel":".3"}`, error: true}, {in: `{"arches":["s390x"],"minKernel":"3."}`, error: true}, {in: `{"arches":["s390x"],"minKernel":"true"}`, error: true}, {in: `{"arches":["s390x"],"minKernel":"3.12.1\""}`, error: true}, {in: `{"arches":["s390x"],"minKernel":"4.15abc"}`, error: true}, {in: `{"arches":["s390x"],"minKernel":null}`, out: `{"arches":["s390x"]}`}, {in: `{"arches":["s390x"],"minKernel":""}`, out: `{"arches":["s390x"],"minKernel":""}`}, // FIXME: try to fix omitempty for this {in: `{"arches":["s390x"],"minKernel":"0.5"}`, out: `{"arches":["s390x"],"minKernel":"0.5"}`}, {in: `{"arches":["s390x"],"minKernel":"0.50"}`, out: `{"arches":["s390x"],"minKernel":"0.50"}`}, {in: `{"arches":["s390x"],"minKernel":"5.0"}`, out: `{"arches":["s390x"],"minKernel":"5.0"}`}, {in: `{"arches":["s390x"],"minKernel":"50.0"}`, out: `{"arches":["s390x"],"minKernel":"50.0"}`}, {in: 
`{"arches":["s390x"],"minKernel":"4.15"}`, out: `{"arches":["s390x"],"minKernel":"4.15"}`}, } for _, tc := range tests { tc := tc t.Run(tc.in, func(t *testing.T) { var filter Filter err := json.Unmarshal([]byte(tc.in), &filter) if tc.error { if err == nil { t.Fatal("expected an error") } else if !strings.Contains(err.Error(), "invalid kernel version") { t.Fatal("unexpected error:", err) } return } if err != nil { t.Fatal(err) } out, err := json.Marshal(filter) if err != nil { t.Fatal(err) } if string(out) != tc.out { t.Fatalf("expected %s, got %s", tc.out, string(out)) } }) } } func TestLoadConditional(t *testing.T) { f, err := ioutil.ReadFile("fixtures/conditional_include.json") if err != nil { t.Fatal(err) } tests := []struct { doc string cap string expected []string }{ {doc: "no caps", expected: []string{"chmod", "ptrace"}}, {doc: "with syslog", cap: "CAP_SYSLOG", expected: []string{"chmod", "syslog", "ptrace"}}, {doc: "no ptrace", cap: "CAP_SYS_ADMIN", expected: []string{"chmod"}}, } for _, tc := range tests { tc := tc t.Run(tc.doc, func(t *testing.T) { rs := createSpec(tc.cap) p, err := LoadProfile(string(f), &rs) if err != nil { t.Fatal(err) } if len(p.Syscalls) != len(tc.expected) { t.Fatalf("expected %d syscalls in profile, have %d", len(tc.expected), len(p.Syscalls)) } for i, v := range p.Syscalls { if v.Names[0] != tc.expected[i] { t.Fatalf("expected %s syscall, have %s", tc.expected[i], v.Names[0]) } } }) } } // createSpec() creates a minimum spec for testing func createSpec(caps ...string) specs.Spec { rs := specs.Spec{ Process: &specs.Process{ Capabilities: &specs.LinuxCapabilities{}, }, } if caps != nil { rs.Process.Capabilities.Bounding = append(rs.Process.Capabilities.Bounding, caps...) } return rs }
rata
7672963eec42e65e045f6eb745f5fe0682df5434
2480bebf59c25991ef88ba5229c7a2e65237510c
Ohh, I see. So I kept this test and also kept `TestLoadProfile`. Let me know if you'd prefer it some other way :) (will be visible in the next push)
rata
4,542
moby/moby
42,634
Fix up vndr tooling
- Fix the error message in hack/validate/vendor to specify that hack/vendor.sh should be run instead of vndr. - Fix hack/vendor.sh to also match on Windows paths for the whitelist. This allows the script to be run on Windows via Git Bash. Signed-off-by: Kevin Parsons <[email protected]>
null
2021-07-13 16:47:15+00:00
2021-07-14 22:43:21+00:00
hack/vendor.sh
#!/usr/bin/env bash # This file is just wrapper around vndr (github.com/LK4D4/vndr) tool. # For updating dependencies you should change `vendor.conf` file in root of the # project. Please refer to https://github.com/LK4D4/vndr/blob/master/README.md for # vndr usage. set -e if ! hash vndr; then echo "Please install vndr with \"go get github.com/LK4D4/vndr\" and put it in your \$GOPATH" exit 1 fi if [ $# -eq 0 ] || [ "$1" = "archive/tar" ]; then echo "update vendored copy of archive/tar" : "${GO_VERSION:=$(awk -F '[ =]' '$1 == "ARG" && $2 == "GO_VERSION" { print $3; exit }' ./Dockerfile)}" rm -rf vendor/archive mkdir -p ./vendor/archive/tar echo "downloading: https://golang.org/dl/go${GO_VERSION}.src.tar.gz" curl -fsSL "https://golang.org/dl/go${GO_VERSION}.src.tar.gz" \ | tar --extract --gzip --directory=vendor/archive/tar --strip-components=4 go/src/archive/tar patch --strip=4 --directory=vendor/archive/tar --input="$PWD/patches/0001-archive-tar-do-not-populate-user-group-names.patch" fi if [ $# -eq 0 ] || [ "$1" != "archive/tar" ]; then vndr -whitelist=^archive/tar "$@" fi
#!/usr/bin/env bash # This file is just wrapper around vndr (github.com/LK4D4/vndr) tool. # For updating dependencies you should change `vendor.conf` file in root of the # project. Please refer to https://github.com/LK4D4/vndr/blob/master/README.md for # vndr usage. set -e if ! hash vndr; then echo "Please install vndr with \"go get github.com/LK4D4/vndr\" and put it in your \$GOPATH" exit 1 fi if [ $# -eq 0 ] || [ "$1" = "archive/tar" ]; then echo "update vendored copy of archive/tar" : "${GO_VERSION:=$(awk -F '[ =]' '$1 == "ARG" && $2 == "GO_VERSION" { print $3; exit }' ./Dockerfile)}" rm -rf vendor/archive mkdir -p ./vendor/archive/tar echo "downloading: https://golang.org/dl/go${GO_VERSION}.src.tar.gz" curl -fsSL "https://golang.org/dl/go${GO_VERSION}.src.tar.gz" \ | tar --extract --gzip --directory=vendor/archive/tar --strip-components=4 go/src/archive/tar patch --strip=4 --directory=vendor/archive/tar --input="$PWD/patches/0001-archive-tar-do-not-populate-user-group-names.patch" fi if [ $# -eq 0 ] || [ "$1" != "archive/tar" ]; then vndr -whitelist='^archive[/\\]tar' "$@" fi
kevpar
22d6671db2f0f73eca064fe8a2b5d9fc199f88a4
40502f49f66742a604bb7c24581e0e320db08622
Interesting; it's not matching on package name but on directory? (I assume this was failing on Windows?)
thaJeztah
4,543
moby/moby
42,634
Fix up vndr tooling
- Fix the error message in hack/validate/vendor to specify that hack/vendor.sh should be run instead of vndr. - Fix hack/vendor.sh to also match on Windows paths for the whitelist. This allows the script to be run on Windows via Git Bash. Signed-off-by: Kevin Parsons <[email protected]>
null
2021-07-13 16:47:15+00:00
2021-07-14 22:43:21+00:00
hack/vendor.sh
#!/usr/bin/env bash # This file is just wrapper around vndr (github.com/LK4D4/vndr) tool. # For updating dependencies you should change `vendor.conf` file in root of the # project. Please refer to https://github.com/LK4D4/vndr/blob/master/README.md for # vndr usage. set -e if ! hash vndr; then echo "Please install vndr with \"go get github.com/LK4D4/vndr\" and put it in your \$GOPATH" exit 1 fi if [ $# -eq 0 ] || [ "$1" = "archive/tar" ]; then echo "update vendored copy of archive/tar" : "${GO_VERSION:=$(awk -F '[ =]' '$1 == "ARG" && $2 == "GO_VERSION" { print $3; exit }' ./Dockerfile)}" rm -rf vendor/archive mkdir -p ./vendor/archive/tar echo "downloading: https://golang.org/dl/go${GO_VERSION}.src.tar.gz" curl -fsSL "https://golang.org/dl/go${GO_VERSION}.src.tar.gz" \ | tar --extract --gzip --directory=vendor/archive/tar --strip-components=4 go/src/archive/tar patch --strip=4 --directory=vendor/archive/tar --input="$PWD/patches/0001-archive-tar-do-not-populate-user-group-names.patch" fi if [ $# -eq 0 ] || [ "$1" != "archive/tar" ]; then vndr -whitelist=^archive/tar "$@" fi
#!/usr/bin/env bash # This file is just wrapper around vndr (github.com/LK4D4/vndr) tool. # For updating dependencies you should change `vendor.conf` file in root of the # project. Please refer to https://github.com/LK4D4/vndr/blob/master/README.md for # vndr usage. set -e if ! hash vndr; then echo "Please install vndr with \"go get github.com/LK4D4/vndr\" and put it in your \$GOPATH" exit 1 fi if [ $# -eq 0 ] || [ "$1" = "archive/tar" ]; then echo "update vendored copy of archive/tar" : "${GO_VERSION:=$(awk -F '[ =]' '$1 == "ARG" && $2 == "GO_VERSION" { print $3; exit }' ./Dockerfile)}" rm -rf vendor/archive mkdir -p ./vendor/archive/tar echo "downloading: https://golang.org/dl/go${GO_VERSION}.src.tar.gz" curl -fsSL "https://golang.org/dl/go${GO_VERSION}.src.tar.gz" \ | tar --extract --gzip --directory=vendor/archive/tar --strip-components=4 go/src/archive/tar patch --strip=4 --directory=vendor/archive/tar --input="$PWD/patches/0001-archive-tar-do-not-populate-user-group-names.patch" fi if [ $# -eq 0 ] || [ "$1" != "archive/tar" ]; then vndr -whitelist='^archive[/\\]tar' "$@" fi
kevpar
22d6671db2f0f73eca064fe8a2b5d9fc199f88a4
40502f49f66742a604bb7c24581e0e320db08622
Yeah, it looks like it's using the whitelist in two places: - `filepath.Walk` fed into checking the whitelist: https://github.com/LK4D4/vndr/blob/master/clean.go#L84-L98 - `ioutil.ReadDir` fed into checking the whitelist: https://github.com/LK4D4/vndr/blob/master/clean.go#L142-L154 So sadly not a package path :(
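A small self-contained illustration (not vndr's own code) of why the whitelist had to change: since the pattern is matched against filesystem paths rather than Go package paths, the Windows separator must be allowed explicitly, which is what `^archive[/\\]tar` does:

```go
// Demonstrates that the old pattern misses the backslash form that
// filepath.Walk / ioutil.ReadDir produce on Windows, while the new one matches both.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	oldRe := regexp.MustCompile(`^archive/tar`)     // old whitelist
	newRe := regexp.MustCompile(`^archive[/\\]tar`) // new whitelist

	for _, p := range []string{"archive/tar", `archive\tar`} {
		fmt.Println(p, "old:", oldRe.MatchString(p), "new:", newRe.MatchString(p))
	}
	// old matches only the forward-slash form; new matches both.
}
```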
kevpar
4,544
moby/moby
42,625
Fix flaky libnetwork/networkdb tests
**- What I did** Make `TestNetworkDBIslands` run faster and respect `-timeout` `go test` flag value (Partially?) address #42459 Fix `TestNetworkDBNodeJoinLeaveIteration` and `TestNetworkDBCRUDTableEntries` as well, since they now fail constantly and break CI. Potentially, that can be extracted into another PR, which this one would be blocked on, but I don't think it's worth it, since the changes are minimal, let me know if you think otherwise. **- How I did it** - Make rejoin intervals configurable, specify a lower interval during test and account for `test.Deadline` value - Update `github.com/hashicorp/memberlist`. Note that currently used version dates back to 2017. There have been a few improvements and bugfixes made since, in particular, the logic handling left/failed nodes was improved. **- How to verify it** I believe this commit is of most interest https://github.com/hashicorp/memberlist/commit/237d410aa2bf83254678ef78dd638480780e54a2, but I have not made a thorough analysis, whether this is indeed the "fix" we need. Note, I have encountered the following failure only once in ~100 runs: ``` === Failed === FAIL: libnetwork/networkdb TestNetworkDBIslands (22.03s) time="2021-07-12T15:12:17Z" level=info msg="New memberlist node - Node:node1 will use memberlist nodeID:f135e2881fc7 with config:&{NodeID:f135e2881fc7 Hostname:node1 BindAddr:0.0.0.0 AdvertiseAddr: BindPort:10001 Keys:[] PacketBufferSize:1400 reapEntryInterval:1800000000000 reapNetworkInterval:1825000000000 rejoinClusterDuration:1000000000 rejoinClusterInterval:6000000000 StatsPrintPeriod:5m0s HealthPrintPeriod:1m0s}" time="2021-07-12T15:12:17Z" level=info msg="Node f135e2881fc7/172.17.0.2, joined gossip cluster" time="2021-07-12T15:12:17Z" level=info msg="Node f135e2881fc7/172.17.0.2, added to nodes list" time="2021-07-12T15:12:17Z" level=info msg="New memberlist node - Node:node2 will use memberlist nodeID:12d1d234a539 with config:&{NodeID:12d1d234a539 Hostname:node2 BindAddr:0.0.0.0 AdvertiseAddr: BindPort:10002 Keys:[] PacketBufferSize:1400 reapEntryInterval:1800000000000 reapNetworkInterval:1825000000000 rejoinClusterDuration:1000000000 rejoinClusterInterval:6000000000 StatsPrintPeriod:5m0s HealthPrintPeriod:1m0s}" time="2021-07-12T15:12:17Z" level=info msg="Node 12d1d234a539/172.17.0.2, joined gossip cluster" time="2021-07-12T15:12:17Z" level=info msg="Node 12d1d234a539/172.17.0.2, added to nodes list" time="2021-07-12T15:12:17Z" level=info msg="The new bootstrap node list is:[localhost:10001]" time="2021-07-12T15:12:17Z" level=debug msg="memberlist: Stream connection from=[::1]:36282" time="2021-07-12T15:12:17Z" level=debug msg="memberlist: Initiating push/pull sync with: [::1]:10001" time="2021-07-12T15:12:17Z" level=info msg="Node 12d1d234a539/172.17.0.2, joined gossip cluster" time="2021-07-12T15:12:17Z" level=info msg="Node 12d1d234a539/172.17.0.2, added to nodes list" time="2021-07-12T15:12:17Z" level=info msg="Node f135e2881fc7/172.17.0.2, joined gossip cluster" time="2021-07-12T15:12:17Z" level=info msg="Node f135e2881fc7/172.17.0.2, added to nodes list" time="2021-07-12T15:12:17Z" level=debug msg="memberlist: Initiating push/pull sync with: 127.0.0.1:10001" time="2021-07-12T15:12:17Z" level=debug msg="memberlist: Stream connection from=127.0.0.1:46710" time="2021-07-12T15:12:18Z" level=info msg="New memberlist node - Node:node3 will use memberlist nodeID:e9889fb524ad with config:&{NodeID:e9889fb524ad Hostname:node3 BindAddr:0.0.0.0 AdvertiseAddr: BindPort:10003 Keys:[] PacketBufferSize:1400 
reapEntryInterval:1800000000000 reapNetworkInterval:1825000000000 rejoinClusterDuration:1000000000 rejoinClusterInterval:6000000000 StatsPrintPeriod:5m0s HealthPrintPeriod:1m0s}" time="2021-07-12T15:12:18Z" level=info msg="Node e9889fb524ad/172.17.0.2, joined gossip cluster" time="2021-07-12T15:12:18Z" level=info msg="Node e9889fb524ad/172.17.0.2, added to nodes list" time="2021-07-12T15:12:18Z" level=info msg="The new bootstrap node list is:[localhost:10002]" time="2021-07-12T15:12:18Z" level=debug msg="memberlist: Initiating push/pull sync with: [::1]:10002" time="2021-07-12T15:12:18Z" level=debug msg="memberlist: Stream connection from=[::1]:58118" time="2021-07-12T15:12:18Z" level=info msg="Node e9889fb524ad/172.17.0.2, joined gossip cluster" time="2021-07-12T15:12:18Z" level=info msg="Node e9889fb524ad/172.17.0.2, added to nodes list" time="2021-07-12T15:12:18Z" level=info msg="Node f135e2881fc7/172.17.0.2, joined gossip cluster" time="2021-07-12T15:12:18Z" level=info msg="Node f135e2881fc7/172.17.0.2, added to nodes list" time="2021-07-12T15:12:18Z" level=info msg="Node 12d1d234a539/172.17.0.2, joined gossip cluster" time="2021-07-12T15:12:18Z" level=info msg="Node 12d1d234a539/172.17.0.2, added to nodes list" time="2021-07-12T15:12:18Z" level=debug msg="memberlist: Initiating push/pull sync with: 127.0.0.1:10002" time="2021-07-12T15:12:18Z" level=debug msg="memberlist: Stream connection from=127.0.0.1:43476" time="2021-07-12T15:12:18Z" level=info msg="Node e9889fb524ad/172.17.0.2, joined gossip cluster" time="2021-07-12T15:12:18Z" level=info msg="Node e9889fb524ad/172.17.0.2, added to nodes list" time="2021-07-12T15:12:18Z" level=info msg="New memberlist node - Node:node4 will use memberlist nodeID:dd853f955091 with config:&{NodeID:dd853f955091 Hostname:node4 BindAddr:0.0.0.0 AdvertiseAddr: BindPort:10004 Keys:[] PacketBufferSize:1400 reapEntryInterval:1800000000000 reapNetworkInterval:1825000000000 rejoinClusterDuration:1000000000 rejoinClusterInterval:6000000000 StatsPrintPeriod:5m0s HealthPrintPeriod:1m0s}" time="2021-07-12T15:12:18Z" level=info msg="Node dd853f955091/172.17.0.2, joined gossip cluster" time="2021-07-12T15:12:18Z" level=info msg="Node dd853f955091/172.17.0.2, added to nodes list" time="2021-07-12T15:12:18Z" level=info msg="The new bootstrap node list is:[localhost:10003]" time="2021-07-12T15:12:18Z" level=debug msg="memberlist: Initiating push/pull sync with: [::1]:10003" time="2021-07-12T15:12:18Z" level=debug msg="memberlist: Stream connection from=[::1]:39720" time="2021-07-12T15:12:18Z" level=info msg="Node dd853f955091/172.17.0.2, joined gossip cluster" time="2021-07-12T15:12:18Z" level=info msg="Node dd853f955091/172.17.0.2, added to nodes list" time="2021-07-12T15:12:18Z" level=info msg="Node 12d1d234a539/172.17.0.2, joined gossip cluster" time="2021-07-12T15:12:18Z" level=info msg="Node 12d1d234a539/172.17.0.2, added to nodes list" time="2021-07-12T15:12:18Z" level=info msg="Node e9889fb524ad/172.17.0.2, joined gossip cluster" time="2021-07-12T15:12:18Z" level=info msg="Node e9889fb524ad/172.17.0.2, added to nodes list" time="2021-07-12T15:12:18Z" level=info msg="Node f135e2881fc7/172.17.0.2, joined gossip cluster" time="2021-07-12T15:12:18Z" level=info msg="Node f135e2881fc7/172.17.0.2, added to nodes list" time="2021-07-12T15:12:18Z" level=debug msg="memberlist: Initiating push/pull sync with: 127.0.0.1:10003" time="2021-07-12T15:12:18Z" level=debug msg="memberlist: Stream connection from=127.0.0.1:34386" time="2021-07-12T15:12:18Z" level=info msg="Node 
dd853f955091/172.17.0.2, joined gossip cluster" time="2021-07-12T15:12:18Z" level=info msg="Node dd853f955091/172.17.0.2, joined gossip cluster" time="2021-07-12T15:12:18Z" level=info msg="Node dd853f955091/172.17.0.2, added to nodes list" time="2021-07-12T15:12:18Z" level=info msg="Node dd853f955091/172.17.0.2, added to nodes list" time="2021-07-12T15:12:18Z" level=info msg="New memberlist node - Node:node5 will use memberlist nodeID:644827c34b56 with config:&{NodeID:644827c34b56 Hostname:node5 BindAddr:0.0.0.0 AdvertiseAddr: BindPort:10005 Keys:[] PacketBufferSize:1400 reapEntryInterval:1800000000000 reapNetworkInterval:1825000000000 rejoinClusterDuration:1000000000 rejoinClusterInterval:6000000000 StatsPrintPeriod:5m0s HealthPrintPeriod:1m0s}" time="2021-07-12T15:12:18Z" level=info msg="Node 644827c34b56/172.17.0.2, joined gossip cluster" time="2021-07-12T15:12:18Z" level=info msg="Node 644827c34b56/172.17.0.2, added to nodes list" time="2021-07-12T15:12:18Z" level=info msg="The new bootstrap node list is:[localhost:10004]" time="2021-07-12T15:12:18Z" level=debug msg="memberlist: Initiating push/pull sync with: [::1]:10004" time="2021-07-12T15:12:18Z" level=debug msg="memberlist: Stream connection from=[::1]:35756" time="2021-07-12T15:12:18Z" level=info msg="Node 644827c34b56/172.17.0.2, joined gossip cluster" time="2021-07-12T15:12:18Z" level=info msg="Node 644827c34b56/172.17.0.2, added to nodes list" time="2021-07-12T15:12:18Z" level=info msg="Node e9889fb524ad/172.17.0.2, joined gossip cluster" time="2021-07-12T15:12:18Z" level=info msg="Node e9889fb524ad/172.17.0.2, added to nodes list" time="2021-07-12T15:12:18Z" level=info msg="Node f135e2881fc7/172.17.0.2, joined gossip cluster" time="2021-07-12T15:12:18Z" level=info msg="Node f135e2881fc7/172.17.0.2, added to nodes list" time="2021-07-12T15:12:18Z" level=info msg="Node 12d1d234a539/172.17.0.2, joined gossip cluster" time="2021-07-12T15:12:18Z" level=info msg="Node 12d1d234a539/172.17.0.2, added to nodes list" time="2021-07-12T15:12:18Z" level=info msg="Node dd853f955091/172.17.0.2, joined gossip cluster" time="2021-07-12T15:12:18Z" level=info msg="Node dd853f955091/172.17.0.2, added to nodes list" time="2021-07-12T15:12:18Z" level=debug msg="memberlist: Initiating push/pull sync with: 127.0.0.1:10004" time="2021-07-12T15:12:18Z" level=debug msg="memberlist: Stream connection from=127.0.0.1:43650" time="2021-07-12T15:12:19Z" level=info msg="Node 644827c34b56/172.17.0.2, joined gossip cluster" time="2021-07-12T15:12:19Z" level=info msg="Node 644827c34b56/172.17.0.2, joined gossip cluster" time="2021-07-12T15:12:19Z" level=info msg="Node 644827c34b56/172.17.0.2, added to nodes list" time="2021-07-12T15:12:19Z" level=info msg="Node 644827c34b56/172.17.0.2, added to nodes list" time="2021-07-12T15:12:23Z" level=debug msg="rejoinClusterBootStrap did not find any valid IP" time="2021-07-12T15:12:24Z" level=debug msg="rejoinClusterBootStrap did not find any valid IP" time="2021-07-12T15:12:24Z" level=debug msg="rejoinClusterBootStrap did not find any valid IP" time="2021-07-12T15:12:24Z" level=debug msg="rejoinClusterBootStrap did not find any valid IP" time="2021-07-12T15:12:29Z" level=debug msg="rejoinClusterBootStrap did not find any valid IP" time="2021-07-12T15:12:30Z" level=debug msg="rejoinClusterBootStrap did not find any valid IP" time="2021-07-12T15:12:30Z" level=debug msg="rejoinClusterBootStrap did not find any valid IP" time="2021-07-12T15:12:30Z" level=debug msg="rejoinClusterBootStrap did not find any valid IP" 
time="2021-07-12T15:12:35Z" level=debug msg="rejoinClusterBootStrap did not find any valid IP" time="2021-07-12T15:12:36Z" level=debug msg="rejoinClusterBootStrap did not find any valid IP" time="2021-07-12T15:12:36Z" level=debug msg="rejoinClusterBootStrap did not find any valid IP" time="2021-07-12T15:12:36Z" level=debug msg="rejoinClusterBootStrap did not find any valid IP" networkdb_test.go:839: timeout hit after 20s: node3:Waiting for cluser peers to be established ``` That might be just an unfortunate slowness of my system, but it may be another source of flakiness, which has to be investigated still. I propose not to close the original issue just yet, and wait for a few days/weeks to see if the test is still flaky and make the decision then. **- Description for the changelog** <!-- Write a short (one line) summary that describes the changes in this pull request for inclusion in the changelog: --> **- A picture of a cute animal (not mandatory but encouraged)**
null
2021-07-12 15:48:25+00:00
2021-07-19 13:35:07+00:00
libnetwork/networkdb/networkdb_test.go
package networkdb import ( "fmt" "io/ioutil" "log" "net" "os" "strconv" "sync/atomic" "testing" "time" "github.com/docker/docker/pkg/stringid" "github.com/docker/go-events" "github.com/hashicorp/memberlist" "github.com/sirupsen/logrus" "gotest.tools/v3/assert" is "gotest.tools/v3/assert/cmp" "gotest.tools/v3/poll" // this takes care of the incontainer flag _ "github.com/docker/docker/libnetwork/testutils" ) var dbPort int32 = 10000 func TestMain(m *testing.M) { ioutil.WriteFile("/proc/sys/net/ipv6/conf/lo/disable_ipv6", []byte{'0', '\n'}, 0644) logrus.SetLevel(logrus.ErrorLevel) os.Exit(m.Run()) } func launchNode(t *testing.T, conf Config) *NetworkDB { t.Helper() db, err := New(&conf) assert.NilError(t, err) return db } func createNetworkDBInstances(t *testing.T, num int, namePrefix string, conf *Config) []*NetworkDB { t.Helper() var dbs []*NetworkDB for i := 0; i < num; i++ { localConfig := *conf localConfig.Hostname = fmt.Sprintf("%s%d", namePrefix, i+1) localConfig.NodeID = stringid.TruncateID(stringid.GenerateRandomID()) localConfig.BindPort = int(atomic.AddInt32(&dbPort, 1)) db := launchNode(t, localConfig) if i != 0 { assert.Check(t, db.Join([]string{fmt.Sprintf("localhost:%d", db.config.BindPort-1)})) } dbs = append(dbs, db) } // Wait till the cluster creation is successful check := func(t poll.LogT) poll.Result { // Check that the cluster is properly created for i := 0; i < num; i++ { if num != len(dbs[i].ClusterPeers()) { return poll.Continue("%s:Waiting for cluser peers to be established", dbs[i].config.Hostname) } } return poll.Success() } poll.WaitOn(t, check, poll.WithDelay(2*time.Second), poll.WithTimeout(20*time.Second)) return dbs } func closeNetworkDBInstances(t *testing.T, dbs []*NetworkDB) { t.Helper() log.Print("Closing DB instances...") for _, db := range dbs { db.Close() } } func (db *NetworkDB) verifyNodeExistence(t *testing.T, node string, present bool) { t.Helper() for i := 0; i < 80; i++ { db.RLock() _, ok := db.nodes[node] db.RUnlock() if present && ok { return } if !present && !ok { return } time.Sleep(50 * time.Millisecond) } t.Errorf("%v(%v): Node existence verification for node %s failed", db.config.Hostname, db.config.NodeID, node) } func (db *NetworkDB) verifyNetworkExistence(t *testing.T, node string, id string, present bool) { t.Helper() for i := 0; i < 80; i++ { db.RLock() nn, nnok := db.networks[node] db.RUnlock() if nnok { n, ok := nn[id] if present && ok { return } if !present && ((ok && n.leaving) || !ok) { return } } time.Sleep(50 * time.Millisecond) } t.Error("Network existence verification failed") } func (db *NetworkDB) verifyEntryExistence(t *testing.T, tname, nid, key, value string, present bool) { t.Helper() n := 80 for i := 0; i < n; i++ { entry, err := db.getEntry(tname, nid, key) if present && err == nil && string(entry.value) == value { return } if !present && ((err == nil && entry.deleting) || (err != nil)) { return } if i == n-1 && !present && err != nil { return } time.Sleep(50 * time.Millisecond) } t.Errorf("Entry existence verification test failed for %v(%v)", db.config.Hostname, db.config.NodeID) } func testWatch(t *testing.T, ch chan events.Event, ev interface{}, tname, nid, key, value string) { t.Helper() select { case rcvdEv := <-ch: assert.Check(t, is.Equal(fmt.Sprintf("%T", rcvdEv), fmt.Sprintf("%T", ev))) switch typ := rcvdEv.(type) { case CreateEvent: assert.Check(t, is.Equal(tname, typ.Table)) assert.Check(t, is.Equal(nid, typ.NetworkID)) assert.Check(t, is.Equal(key, typ.Key)) assert.Check(t, is.Equal(value, 
string(typ.Value))) case UpdateEvent: assert.Check(t, is.Equal(tname, typ.Table)) assert.Check(t, is.Equal(nid, typ.NetworkID)) assert.Check(t, is.Equal(key, typ.Key)) assert.Check(t, is.Equal(value, string(typ.Value))) case DeleteEvent: assert.Check(t, is.Equal(tname, typ.Table)) assert.Check(t, is.Equal(nid, typ.NetworkID)) assert.Check(t, is.Equal(key, typ.Key)) } case <-time.After(time.Second): t.Fail() return } } func TestNetworkDBSimple(t *testing.T) { dbs := createNetworkDBInstances(t, 2, "node", DefaultConfig()) closeNetworkDBInstances(t, dbs) } func TestNetworkDBJoinLeaveNetwork(t *testing.T) { dbs := createNetworkDBInstances(t, 2, "node", DefaultConfig()) err := dbs[0].JoinNetwork("network1") assert.NilError(t, err) dbs[1].verifyNetworkExistence(t, dbs[0].config.NodeID, "network1", true) err = dbs[0].LeaveNetwork("network1") assert.NilError(t, err) dbs[1].verifyNetworkExistence(t, dbs[0].config.NodeID, "network1", false) closeNetworkDBInstances(t, dbs) } func TestNetworkDBJoinLeaveNetworks(t *testing.T) { dbs := createNetworkDBInstances(t, 2, "node", DefaultConfig()) n := 10 for i := 1; i <= n; i++ { err := dbs[0].JoinNetwork(fmt.Sprintf("network0%d", i)) assert.NilError(t, err) } for i := 1; i <= n; i++ { err := dbs[1].JoinNetwork(fmt.Sprintf("network1%d", i)) assert.NilError(t, err) } for i := 1; i <= n; i++ { dbs[1].verifyNetworkExistence(t, dbs[0].config.NodeID, fmt.Sprintf("network0%d", i), true) } for i := 1; i <= n; i++ { dbs[0].verifyNetworkExistence(t, dbs[1].config.NodeID, fmt.Sprintf("network1%d", i), true) } for i := 1; i <= n; i++ { err := dbs[0].LeaveNetwork(fmt.Sprintf("network0%d", i)) assert.NilError(t, err) } for i := 1; i <= n; i++ { err := dbs[1].LeaveNetwork(fmt.Sprintf("network1%d", i)) assert.NilError(t, err) } for i := 1; i <= n; i++ { dbs[1].verifyNetworkExistence(t, dbs[0].config.NodeID, fmt.Sprintf("network0%d", i), false) } for i := 1; i <= n; i++ { dbs[0].verifyNetworkExistence(t, dbs[1].config.NodeID, fmt.Sprintf("network1%d", i), false) } closeNetworkDBInstances(t, dbs) } func TestNetworkDBCRUDTableEntry(t *testing.T) { dbs := createNetworkDBInstances(t, 3, "node", DefaultConfig()) err := dbs[0].JoinNetwork("network1") assert.NilError(t, err) dbs[1].verifyNetworkExistence(t, dbs[0].config.NodeID, "network1", true) err = dbs[1].JoinNetwork("network1") assert.NilError(t, err) err = dbs[0].CreateEntry("test_table", "network1", "test_key", []byte("test_value")) assert.NilError(t, err) dbs[1].verifyEntryExistence(t, "test_table", "network1", "test_key", "test_value", true) dbs[2].verifyEntryExistence(t, "test_table", "network1", "test_key", "test_value", false) err = dbs[0].UpdateEntry("test_table", "network1", "test_key", []byte("test_updated_value")) assert.NilError(t, err) dbs[1].verifyEntryExistence(t, "test_table", "network1", "test_key", "test_updated_value", true) err = dbs[0].DeleteEntry("test_table", "network1", "test_key") assert.NilError(t, err) dbs[1].verifyEntryExistence(t, "test_table", "network1", "test_key", "", false) closeNetworkDBInstances(t, dbs) } func TestNetworkDBCRUDTableEntries(t *testing.T) { dbs := createNetworkDBInstances(t, 2, "node", DefaultConfig()) err := dbs[0].JoinNetwork("network1") assert.NilError(t, err) dbs[1].verifyNetworkExistence(t, dbs[0].config.NodeID, "network1", true) err = dbs[1].JoinNetwork("network1") assert.NilError(t, err) n := 10 for i := 1; i <= n; i++ { err = dbs[0].CreateEntry("test_table", "network1", fmt.Sprintf("test_key0%d", i), []byte(fmt.Sprintf("test_value0%d", i))) assert.NilError(t, err) } 
for i := 1; i <= n; i++ { err = dbs[1].CreateEntry("test_table", "network1", fmt.Sprintf("test_key1%d", i), []byte(fmt.Sprintf("test_value1%d", i))) assert.NilError(t, err) } for i := 1; i <= n; i++ { dbs[0].verifyEntryExistence(t, "test_table", "network1", fmt.Sprintf("test_key1%d", i), fmt.Sprintf("test_value1%d", i), true) assert.NilError(t, err) } for i := 1; i <= n; i++ { dbs[1].verifyEntryExistence(t, "test_table", "network1", fmt.Sprintf("test_key0%d", i), fmt.Sprintf("test_value0%d", i), true) assert.NilError(t, err) } // Verify deletes for i := 1; i <= n; i++ { err = dbs[0].DeleteEntry("test_table", "network1", fmt.Sprintf("test_key0%d", i)) assert.NilError(t, err) } for i := 1; i <= n; i++ { err = dbs[1].DeleteEntry("test_table", "network1", fmt.Sprintf("test_key1%d", i)) assert.NilError(t, err) } for i := 1; i <= n; i++ { dbs[0].verifyEntryExistence(t, "test_table", "network1", fmt.Sprintf("test_key1%d", i), "", false) assert.NilError(t, err) } for i := 1; i <= n; i++ { dbs[1].verifyEntryExistence(t, "test_table", "network1", fmt.Sprintf("test_key0%d", i), "", false) assert.NilError(t, err) } closeNetworkDBInstances(t, dbs) } func TestNetworkDBNodeLeave(t *testing.T) { dbs := createNetworkDBInstances(t, 2, "node", DefaultConfig()) err := dbs[0].JoinNetwork("network1") assert.NilError(t, err) err = dbs[1].JoinNetwork("network1") assert.NilError(t, err) err = dbs[0].CreateEntry("test_table", "network1", "test_key", []byte("test_value")) assert.NilError(t, err) dbs[1].verifyEntryExistence(t, "test_table", "network1", "test_key", "test_value", true) dbs[0].Close() dbs[1].verifyEntryExistence(t, "test_table", "network1", "test_key", "test_value", false) dbs[1].Close() } func TestNetworkDBWatch(t *testing.T) { dbs := createNetworkDBInstances(t, 2, "node", DefaultConfig()) err := dbs[0].JoinNetwork("network1") assert.NilError(t, err) err = dbs[1].JoinNetwork("network1") assert.NilError(t, err) ch, cancel := dbs[1].Watch("", "", "") err = dbs[0].CreateEntry("test_table", "network1", "test_key", []byte("test_value")) assert.NilError(t, err) testWatch(t, ch.C, CreateEvent{}, "test_table", "network1", "test_key", "test_value") err = dbs[0].UpdateEntry("test_table", "network1", "test_key", []byte("test_updated_value")) assert.NilError(t, err) testWatch(t, ch.C, UpdateEvent{}, "test_table", "network1", "test_key", "test_updated_value") err = dbs[0].DeleteEntry("test_table", "network1", "test_key") assert.NilError(t, err) testWatch(t, ch.C, DeleteEvent{}, "test_table", "network1", "test_key", "") cancel() closeNetworkDBInstances(t, dbs) } func TestNetworkDBBulkSync(t *testing.T) { dbs := createNetworkDBInstances(t, 2, "node", DefaultConfig()) err := dbs[0].JoinNetwork("network1") assert.NilError(t, err) dbs[1].verifyNetworkExistence(t, dbs[0].config.NodeID, "network1", true) n := 1000 for i := 1; i <= n; i++ { err = dbs[0].CreateEntry("test_table", "network1", fmt.Sprintf("test_key0%d", i), []byte(fmt.Sprintf("test_value0%d", i))) assert.NilError(t, err) } err = dbs[1].JoinNetwork("network1") assert.NilError(t, err) dbs[0].verifyNetworkExistence(t, dbs[1].config.NodeID, "network1", true) for i := 1; i <= n; i++ { dbs[1].verifyEntryExistence(t, "test_table", "network1", fmt.Sprintf("test_key0%d", i), fmt.Sprintf("test_value0%d", i), true) assert.NilError(t, err) } closeNetworkDBInstances(t, dbs) } func TestNetworkDBCRUDMediumCluster(t *testing.T) { n := 5 dbs := createNetworkDBInstances(t, n, "node", DefaultConfig()) for i := 0; i < n; i++ { for j := 0; j < n; j++ { if i == j { continue } 
dbs[i].verifyNodeExistence(t, dbs[j].config.NodeID, true) } } for i := 0; i < n; i++ { err := dbs[i].JoinNetwork("network1") assert.NilError(t, err) } for i := 0; i < n; i++ { for j := 0; j < n; j++ { dbs[i].verifyNetworkExistence(t, dbs[j].config.NodeID, "network1", true) } } err := dbs[0].CreateEntry("test_table", "network1", "test_key", []byte("test_value")) assert.NilError(t, err) for i := 1; i < n; i++ { dbs[i].verifyEntryExistence(t, "test_table", "network1", "test_key", "test_value", true) } err = dbs[0].UpdateEntry("test_table", "network1", "test_key", []byte("test_updated_value")) assert.NilError(t, err) for i := 1; i < n; i++ { dbs[i].verifyEntryExistence(t, "test_table", "network1", "test_key", "test_updated_value", true) } err = dbs[0].DeleteEntry("test_table", "network1", "test_key") assert.NilError(t, err) for i := 1; i < n; i++ { dbs[i].verifyEntryExistence(t, "test_table", "network1", "test_key", "", false) } for i := 1; i < n; i++ { _, err = dbs[i].GetEntry("test_table", "network1", "test_key") assert.Check(t, is.ErrorContains(err, "")) assert.Check(t, is.Contains(err.Error(), "deleted and pending garbage collection"), err) } closeNetworkDBInstances(t, dbs) } func TestNetworkDBNodeJoinLeaveIteration(t *testing.T) { maxRetry := 5 dbs := createNetworkDBInstances(t, 2, "node", DefaultConfig()) // Single node Join/Leave err := dbs[0].JoinNetwork("network1") assert.NilError(t, err) if len(dbs[0].networkNodes["network1"]) != 1 { t.Fatalf("The networkNodes list has to have be 1 instead of %d", len(dbs[0].networkNodes["network1"])) } err = dbs[0].LeaveNetwork("network1") assert.NilError(t, err) if len(dbs[0].networkNodes["network1"]) != 0 { t.Fatalf("The networkNodes list has to have be 0 instead of %d", len(dbs[0].networkNodes["network1"])) } // Multiple nodes Join/Leave err = dbs[0].JoinNetwork("network1") assert.NilError(t, err) err = dbs[1].JoinNetwork("network1") assert.NilError(t, err) // Wait for the propagation on db[0] for i := 0; i < maxRetry; i++ { if len(dbs[0].networkNodes["network1"]) == 2 { break } time.Sleep(1 * time.Second) } if len(dbs[0].networkNodes["network1"]) != 2 { t.Fatalf("The networkNodes list has to have be 2 instead of %d - %v", len(dbs[0].networkNodes["network1"]), dbs[0].networkNodes["network1"]) } if n, ok := dbs[0].networks[dbs[0].config.NodeID]["network1"]; !ok || n.leaving { t.Fatalf("The network should not be marked as leaving:%t", n.leaving) } // Wait for the propagation on db[1] for i := 0; i < maxRetry; i++ { if len(dbs[1].networkNodes["network1"]) == 2 { break } time.Sleep(1 * time.Second) } if len(dbs[1].networkNodes["network1"]) != 2 { t.Fatalf("The networkNodes list has to have be 2 instead of %d - %v", len(dbs[1].networkNodes["network1"]), dbs[1].networkNodes["network1"]) } if n, ok := dbs[1].networks[dbs[1].config.NodeID]["network1"]; !ok || n.leaving { t.Fatalf("The network should not be marked as leaving:%t", n.leaving) } // Try a quick leave/join err = dbs[0].LeaveNetwork("network1") assert.NilError(t, err) err = dbs[0].JoinNetwork("network1") assert.NilError(t, err) for i := 0; i < maxRetry; i++ { if len(dbs[0].networkNodes["network1"]) == 2 { break } time.Sleep(1 * time.Second) } if len(dbs[0].networkNodes["network1"]) != 2 { t.Fatalf("The networkNodes list has to have be 2 instead of %d - %v", len(dbs[0].networkNodes["network1"]), dbs[0].networkNodes["network1"]) } for i := 0; i < maxRetry; i++ { if len(dbs[1].networkNodes["network1"]) == 2 { break } time.Sleep(1 * time.Second) } if len(dbs[1].networkNodes["network1"]) != 2 { 
t.Fatalf("The networkNodes list has to have be 2 instead of %d - %v", len(dbs[1].networkNodes["network1"]), dbs[1].networkNodes["network1"]) } closeNetworkDBInstances(t, dbs) } func TestNetworkDBGarbageCollection(t *testing.T) { keysWriteDelete := 5 config := DefaultConfig() config.reapEntryInterval = 30 * time.Second config.StatsPrintPeriod = 15 * time.Second dbs := createNetworkDBInstances(t, 3, "node", config) // 2 Nodes join network err := dbs[0].JoinNetwork("network1") assert.NilError(t, err) err = dbs[1].JoinNetwork("network1") assert.NilError(t, err) for i := 0; i < keysWriteDelete; i++ { err = dbs[i%2].CreateEntry("testTable", "network1", "key-"+strconv.Itoa(i), []byte("value")) assert.NilError(t, err) } time.Sleep(time.Second) for i := 0; i < keysWriteDelete; i++ { err = dbs[i%2].DeleteEntry("testTable", "network1", "key-"+strconv.Itoa(i)) assert.NilError(t, err) } for i := 0; i < 2; i++ { assert.Check(t, is.Equal(keysWriteDelete, dbs[i].networks[dbs[i].config.NodeID]["network1"].entriesNumber), "entries number should match") } // from this point the timer for the garbage collection started, wait 5 seconds and then join a new node time.Sleep(5 * time.Second) err = dbs[2].JoinNetwork("network1") assert.NilError(t, err) for i := 0; i < 3; i++ { assert.Check(t, is.Equal(keysWriteDelete, dbs[i].networks[dbs[i].config.NodeID]["network1"].entriesNumber), "entries number should match") } // at this point the entries should had been all deleted time.Sleep(30 * time.Second) for i := 0; i < 3; i++ { assert.Check(t, is.Equal(0, dbs[i].networks[dbs[i].config.NodeID]["network1"].entriesNumber), "entries should had been garbage collected") } // make sure that entries are not coming back time.Sleep(15 * time.Second) for i := 0; i < 3; i++ { assert.Check(t, is.Equal(0, dbs[i].networks[dbs[i].config.NodeID]["network1"].entriesNumber), "entries should had been garbage collected") } closeNetworkDBInstances(t, dbs) } func TestFindNode(t *testing.T) { dbs := createNetworkDBInstances(t, 1, "node", DefaultConfig()) dbs[0].nodes["active"] = &node{Node: memberlist.Node{Name: "active"}} dbs[0].failedNodes["failed"] = &node{Node: memberlist.Node{Name: "failed"}} dbs[0].leftNodes["left"] = &node{Node: memberlist.Node{Name: "left"}} // active nodes is 2 because the testing node is in the list assert.Check(t, is.Len(dbs[0].nodes, 2)) assert.Check(t, is.Len(dbs[0].failedNodes, 1)) assert.Check(t, is.Len(dbs[0].leftNodes, 1)) n, currState, m := dbs[0].findNode("active") assert.Check(t, n != nil) assert.Check(t, is.Equal("active", n.Name)) assert.Check(t, is.Equal(nodeActiveState, currState)) assert.Check(t, m != nil) // delete the entry manually delete(m, "active") // test if can be still find n, currState, m = dbs[0].findNode("active") assert.Check(t, is.Nil(n)) assert.Check(t, is.Equal(nodeNotFound, currState)) assert.Check(t, is.Nil(m)) n, currState, m = dbs[0].findNode("failed") assert.Check(t, n != nil) assert.Check(t, is.Equal("failed", n.Name)) assert.Check(t, is.Equal(nodeFailedState, currState)) assert.Check(t, m != nil) // find and remove n, currState, m = dbs[0].findNode("left") assert.Check(t, n != nil) assert.Check(t, is.Equal("left", n.Name)) assert.Check(t, is.Equal(nodeLeftState, currState)) assert.Check(t, m != nil) delete(m, "left") n, currState, m = dbs[0].findNode("left") assert.Check(t, is.Nil(n)) assert.Check(t, is.Equal(nodeNotFound, currState)) assert.Check(t, is.Nil(m)) closeNetworkDBInstances(t, dbs) } func TestChangeNodeState(t *testing.T) { dbs := createNetworkDBInstances(t, 1, 
"node", DefaultConfig()) dbs[0].nodes["node1"] = &node{Node: memberlist.Node{Name: "node1"}} dbs[0].nodes["node2"] = &node{Node: memberlist.Node{Name: "node2"}} dbs[0].nodes["node3"] = &node{Node: memberlist.Node{Name: "node3"}} // active nodes is 4 because the testing node is in the list assert.Check(t, is.Len(dbs[0].nodes, 4)) n, currState, m := dbs[0].findNode("node1") assert.Check(t, n != nil) assert.Check(t, is.Equal(nodeActiveState, currState)) assert.Check(t, is.Equal("node1", n.Name)) assert.Check(t, m != nil) // node1 to failed dbs[0].changeNodeState("node1", nodeFailedState) n, currState, m = dbs[0].findNode("node1") assert.Check(t, n != nil) assert.Check(t, is.Equal(nodeFailedState, currState)) assert.Check(t, is.Equal("node1", n.Name)) assert.Check(t, m != nil) assert.Check(t, time.Duration(0) != n.reapTime) // node1 back to active dbs[0].changeNodeState("node1", nodeActiveState) n, currState, m = dbs[0].findNode("node1") assert.Check(t, n != nil) assert.Check(t, is.Equal(nodeActiveState, currState)) assert.Check(t, is.Equal("node1", n.Name)) assert.Check(t, m != nil) assert.Check(t, is.Equal(time.Duration(0), n.reapTime)) // node1 to left dbs[0].changeNodeState("node1", nodeLeftState) dbs[0].changeNodeState("node2", nodeLeftState) dbs[0].changeNodeState("node3", nodeLeftState) n, currState, m = dbs[0].findNode("node1") assert.Check(t, n != nil) assert.Check(t, is.Equal(nodeLeftState, currState)) assert.Check(t, is.Equal("node1", n.Name)) assert.Check(t, m != nil) assert.Check(t, time.Duration(0) != n.reapTime) n, currState, m = dbs[0].findNode("node2") assert.Check(t, n != nil) assert.Check(t, is.Equal(nodeLeftState, currState)) assert.Check(t, is.Equal("node2", n.Name)) assert.Check(t, m != nil) assert.Check(t, time.Duration(0) != n.reapTime) n, currState, m = dbs[0].findNode("node3") assert.Check(t, n != nil) assert.Check(t, is.Equal(nodeLeftState, currState)) assert.Check(t, is.Equal("node3", n.Name)) assert.Check(t, m != nil) assert.Check(t, time.Duration(0) != n.reapTime) // active nodes is 1 because the testing node is in the list assert.Check(t, is.Len(dbs[0].nodes, 1)) assert.Check(t, is.Len(dbs[0].failedNodes, 0)) assert.Check(t, is.Len(dbs[0].leftNodes, 3)) closeNetworkDBInstances(t, dbs) } func TestNodeReincarnation(t *testing.T) { dbs := createNetworkDBInstances(t, 1, "node", DefaultConfig()) dbs[0].nodes["node1"] = &node{Node: memberlist.Node{Name: "node1", Addr: net.ParseIP("192.168.1.1")}} dbs[0].leftNodes["node2"] = &node{Node: memberlist.Node{Name: "node2", Addr: net.ParseIP("192.168.1.2")}} dbs[0].failedNodes["node3"] = &node{Node: memberlist.Node{Name: "node3", Addr: net.ParseIP("192.168.1.3")}} // active nodes is 2 because the testing node is in the list assert.Check(t, is.Len(dbs[0].nodes, 2)) assert.Check(t, is.Len(dbs[0].failedNodes, 1)) assert.Check(t, is.Len(dbs[0].leftNodes, 1)) b := dbs[0].purgeReincarnation(&memberlist.Node{Name: "node4", Addr: net.ParseIP("192.168.1.1")}) assert.Check(t, b) dbs[0].nodes["node4"] = &node{Node: memberlist.Node{Name: "node4", Addr: net.ParseIP("192.168.1.1")}} b = dbs[0].purgeReincarnation(&memberlist.Node{Name: "node5", Addr: net.ParseIP("192.168.1.2")}) assert.Check(t, b) dbs[0].nodes["node5"] = &node{Node: memberlist.Node{Name: "node5", Addr: net.ParseIP("192.168.1.1")}} b = dbs[0].purgeReincarnation(&memberlist.Node{Name: "node6", Addr: net.ParseIP("192.168.1.3")}) assert.Check(t, b) dbs[0].nodes["node6"] = &node{Node: memberlist.Node{Name: "node6", Addr: net.ParseIP("192.168.1.1")}} b = 
dbs[0].purgeReincarnation(&memberlist.Node{Name: "node6", Addr: net.ParseIP("192.168.1.10")}) assert.Check(t, !b) // active nodes is 1 because the testing node is in the list assert.Check(t, is.Len(dbs[0].nodes, 4)) assert.Check(t, is.Len(dbs[0].failedNodes, 0)) assert.Check(t, is.Len(dbs[0].leftNodes, 3)) closeNetworkDBInstances(t, dbs) } func TestParallelCreate(t *testing.T) { dbs := createNetworkDBInstances(t, 1, "node", DefaultConfig()) startCh := make(chan int) doneCh := make(chan error) var success int32 for i := 0; i < 20; i++ { go func() { <-startCh err := dbs[0].CreateEntry("testTable", "testNetwork", "key", []byte("value")) if err == nil { atomic.AddInt32(&success, 1) } doneCh <- err }() } close(startCh) for i := 0; i < 20; i++ { <-doneCh } close(doneCh) // Only 1 write should have succeeded assert.Check(t, is.Equal(int32(1), success)) closeNetworkDBInstances(t, dbs) } func TestParallelDelete(t *testing.T) { dbs := createNetworkDBInstances(t, 1, "node", DefaultConfig()) err := dbs[0].CreateEntry("testTable", "testNetwork", "key", []byte("value")) assert.NilError(t, err) startCh := make(chan int) doneCh := make(chan error) var success int32 for i := 0; i < 20; i++ { go func() { <-startCh err := dbs[0].DeleteEntry("testTable", "testNetwork", "key") if err == nil { atomic.AddInt32(&success, 1) } doneCh <- err }() } close(startCh) for i := 0; i < 20; i++ { <-doneCh } close(doneCh) // Only 1 write should have succeeded assert.Check(t, is.Equal(int32(1), success)) closeNetworkDBInstances(t, dbs) } func TestNetworkDBIslands(t *testing.T) { logrus.SetLevel(logrus.DebugLevel) dbs := createNetworkDBInstances(t, 5, "node", DefaultConfig()) // Get the node IP used currently node := dbs[0].nodes[dbs[0].config.NodeID] baseIPStr := node.Addr.String() // Node 0,1,2 are going to be the 3 bootstrap nodes members := []string{fmt.Sprintf("%s:%d", baseIPStr, dbs[0].config.BindPort), fmt.Sprintf("%s:%d", baseIPStr, dbs[1].config.BindPort), fmt.Sprintf("%s:%d", baseIPStr, dbs[2].config.BindPort)} // Rejoining will update the list of the bootstrap members for i := 3; i < 5; i++ { t.Logf("Re-joining: %d", i) assert.Check(t, dbs[i].Join(members)) } // Now the 3 bootstrap nodes will cleanly leave, and will be properly removed from the other 2 nodes for i := 0; i < 3; i++ { logrus.Infof("node %d leaving", i) dbs[i].Close() } checkDBs := make(map[string]*NetworkDB) for i := 3; i < 5; i++ { db := dbs[i] checkDBs[db.config.Hostname] = db } // Give some time to let the system propagate the messages and free up the ports check := func(t poll.LogT) poll.Result { // Verify that the nodes are actually all gone and marked appropiately for name, db := range checkDBs { db.RLock() if (len(db.leftNodes) != 3) || (len(db.failedNodes) != 0) { for name := range db.leftNodes { t.Logf("%s: Node %s left", db.config.Hostname, name) } for name := range db.failedNodes { t.Logf("%s: Node %s failed", db.config.Hostname, name) } db.RUnlock() return poll.Continue("%s:Waiting for all nodes to cleanly leave, left: %d, failed nodes: %d", name, len(db.leftNodes), len(db.failedNodes)) } db.RUnlock() t.Logf("%s: OK", name) delete(checkDBs, name) } return poll.Success() } poll.WaitOn(t, check, poll.WithDelay(time.Second), poll.WithTimeout(120*time.Second)) // Spawn again the first 3 nodes with different names but same IP:port for i := 0; i < 3; i++ { logrus.Infof("node %d coming back", i) dbs[i].config.NodeID = stringid.TruncateID(stringid.GenerateRandomID()) dbs[i] = launchNode(t, *dbs[i].config) } // Give some time for the reconnect 
routine to run, it runs every 60s check = func(t poll.LogT) poll.Result { // Verify that the cluster is again all connected. Note that the 3 previous node did not do any join for i := 0; i < 5; i++ { db := dbs[i] db.RLock() if len(db.nodes) != 5 { db.RUnlock() return poll.Continue("%s:Waiting to connect to all nodes", dbs[i].config.Hostname) } if len(db.failedNodes) != 0 { db.RUnlock() return poll.Continue("%s:Waiting for 0 failedNodes", dbs[i].config.Hostname) } if i < 3 { // nodes from 0 to 3 has no left nodes if len(db.leftNodes) != 0 { db.RUnlock() return poll.Continue("%s:Waiting to have no leftNodes", dbs[i].config.Hostname) } } else { // nodes from 4 to 5 has the 3 previous left nodes if len(db.leftNodes) != 3 { db.RUnlock() return poll.Continue("%s:Waiting to have 3 leftNodes", dbs[i].config.Hostname) } } db.RUnlock() } return poll.Success() } poll.WaitOn(t, check, poll.WithDelay(10*time.Second), poll.WithTimeout(120*time.Second)) closeNetworkDBInstances(t, dbs) }
package networkdb import ( "fmt" "io/ioutil" "log" "net" "os" "strconv" "sync/atomic" "testing" "time" "github.com/docker/docker/pkg/stringid" "github.com/docker/go-events" "github.com/hashicorp/memberlist" "github.com/sirupsen/logrus" "gotest.tools/v3/assert" is "gotest.tools/v3/assert/cmp" "gotest.tools/v3/poll" // this takes care of the incontainer flag _ "github.com/docker/docker/libnetwork/testutils" ) var dbPort int32 = 10000 func TestMain(m *testing.M) { ioutil.WriteFile("/proc/sys/net/ipv6/conf/lo/disable_ipv6", []byte{'0', '\n'}, 0644) logrus.SetLevel(logrus.ErrorLevel) os.Exit(m.Run()) } func launchNode(t *testing.T, conf Config) *NetworkDB { t.Helper() db, err := New(&conf) assert.NilError(t, err) return db } func createNetworkDBInstances(t *testing.T, num int, namePrefix string, conf *Config) []*NetworkDB { t.Helper() var dbs []*NetworkDB for i := 0; i < num; i++ { localConfig := *conf localConfig.Hostname = fmt.Sprintf("%s%d", namePrefix, i+1) localConfig.NodeID = stringid.TruncateID(stringid.GenerateRandomID()) localConfig.BindPort = int(atomic.AddInt32(&dbPort, 1)) db := launchNode(t, localConfig) if i != 0 { assert.Check(t, db.Join([]string{fmt.Sprintf("localhost:%d", db.config.BindPort-1)})) } dbs = append(dbs, db) } // Wait till the cluster creation is successful check := func(t poll.LogT) poll.Result { // Check that the cluster is properly created for i := 0; i < num; i++ { if num != len(dbs[i].ClusterPeers()) { return poll.Continue("%s:Waiting for cluser peers to be established", dbs[i].config.Hostname) } } return poll.Success() } poll.WaitOn(t, check, poll.WithDelay(2*time.Second), poll.WithTimeout(20*time.Second)) return dbs } func closeNetworkDBInstances(t *testing.T, dbs []*NetworkDB) { t.Helper() log.Print("Closing DB instances...") for _, db := range dbs { db.Close() } } func (db *NetworkDB) verifyNodeExistence(t *testing.T, node string, present bool) { t.Helper() for i := 0; i < 80; i++ { db.RLock() _, ok := db.nodes[node] db.RUnlock() if present && ok { return } if !present && !ok { return } time.Sleep(50 * time.Millisecond) } t.Errorf("%v(%v): Node existence verification for node %s failed", db.config.Hostname, db.config.NodeID, node) } func (db *NetworkDB) verifyNetworkExistence(t *testing.T, node string, id string, present bool) { t.Helper() for i := 0; i < 80; i++ { db.RLock() nn, nnok := db.networks[node] db.RUnlock() if nnok { n, ok := nn[id] if present && ok { return } if !present && ((ok && n.leaving) || !ok) { return } } time.Sleep(50 * time.Millisecond) } t.Error("Network existence verification failed") } func (db *NetworkDB) verifyEntryExistence(t *testing.T, tname, nid, key, value string, present bool) { t.Helper() n := 80 for i := 0; i < n; i++ { entry, err := db.getEntry(tname, nid, key) if present && err == nil && string(entry.value) == value { return } if !present && ((err == nil && entry.deleting) || (err != nil)) { return } if i == n-1 && !present && err != nil { return } time.Sleep(50 * time.Millisecond) } t.Errorf("Entry existence verification test failed for %v(%v)", db.config.Hostname, db.config.NodeID) } func testWatch(t *testing.T, ch chan events.Event, ev interface{}, tname, nid, key, value string) { t.Helper() select { case rcvdEv := <-ch: assert.Check(t, is.Equal(fmt.Sprintf("%T", rcvdEv), fmt.Sprintf("%T", ev))) switch typ := rcvdEv.(type) { case CreateEvent: assert.Check(t, is.Equal(tname, typ.Table)) assert.Check(t, is.Equal(nid, typ.NetworkID)) assert.Check(t, is.Equal(key, typ.Key)) assert.Check(t, is.Equal(value, 
string(typ.Value))) case UpdateEvent: assert.Check(t, is.Equal(tname, typ.Table)) assert.Check(t, is.Equal(nid, typ.NetworkID)) assert.Check(t, is.Equal(key, typ.Key)) assert.Check(t, is.Equal(value, string(typ.Value))) case DeleteEvent: assert.Check(t, is.Equal(tname, typ.Table)) assert.Check(t, is.Equal(nid, typ.NetworkID)) assert.Check(t, is.Equal(key, typ.Key)) } case <-time.After(time.Second): t.Fail() return } } func TestNetworkDBSimple(t *testing.T) { dbs := createNetworkDBInstances(t, 2, "node", DefaultConfig()) closeNetworkDBInstances(t, dbs) } func TestNetworkDBJoinLeaveNetwork(t *testing.T) { dbs := createNetworkDBInstances(t, 2, "node", DefaultConfig()) err := dbs[0].JoinNetwork("network1") assert.NilError(t, err) dbs[1].verifyNetworkExistence(t, dbs[0].config.NodeID, "network1", true) err = dbs[0].LeaveNetwork("network1") assert.NilError(t, err) dbs[1].verifyNetworkExistence(t, dbs[0].config.NodeID, "network1", false) closeNetworkDBInstances(t, dbs) } func TestNetworkDBJoinLeaveNetworks(t *testing.T) { dbs := createNetworkDBInstances(t, 2, "node", DefaultConfig()) n := 10 for i := 1; i <= n; i++ { err := dbs[0].JoinNetwork(fmt.Sprintf("network0%d", i)) assert.NilError(t, err) } for i := 1; i <= n; i++ { err := dbs[1].JoinNetwork(fmt.Sprintf("network1%d", i)) assert.NilError(t, err) } for i := 1; i <= n; i++ { dbs[1].verifyNetworkExistence(t, dbs[0].config.NodeID, fmt.Sprintf("network0%d", i), true) } for i := 1; i <= n; i++ { dbs[0].verifyNetworkExistence(t, dbs[1].config.NodeID, fmt.Sprintf("network1%d", i), true) } for i := 1; i <= n; i++ { err := dbs[0].LeaveNetwork(fmt.Sprintf("network0%d", i)) assert.NilError(t, err) } for i := 1; i <= n; i++ { err := dbs[1].LeaveNetwork(fmt.Sprintf("network1%d", i)) assert.NilError(t, err) } for i := 1; i <= n; i++ { dbs[1].verifyNetworkExistence(t, dbs[0].config.NodeID, fmt.Sprintf("network0%d", i), false) } for i := 1; i <= n; i++ { dbs[0].verifyNetworkExistence(t, dbs[1].config.NodeID, fmt.Sprintf("network1%d", i), false) } closeNetworkDBInstances(t, dbs) } func TestNetworkDBCRUDTableEntry(t *testing.T) { dbs := createNetworkDBInstances(t, 3, "node", DefaultConfig()) err := dbs[0].JoinNetwork("network1") assert.NilError(t, err) dbs[1].verifyNetworkExistence(t, dbs[0].config.NodeID, "network1", true) err = dbs[1].JoinNetwork("network1") assert.NilError(t, err) err = dbs[0].CreateEntry("test_table", "network1", "test_key", []byte("test_value")) assert.NilError(t, err) dbs[1].verifyEntryExistence(t, "test_table", "network1", "test_key", "test_value", true) dbs[2].verifyEntryExistence(t, "test_table", "network1", "test_key", "test_value", false) err = dbs[0].UpdateEntry("test_table", "network1", "test_key", []byte("test_updated_value")) assert.NilError(t, err) dbs[1].verifyEntryExistence(t, "test_table", "network1", "test_key", "test_updated_value", true) err = dbs[0].DeleteEntry("test_table", "network1", "test_key") assert.NilError(t, err) dbs[1].verifyEntryExistence(t, "test_table", "network1", "test_key", "", false) closeNetworkDBInstances(t, dbs) } func TestNetworkDBCRUDTableEntries(t *testing.T) { dbs := createNetworkDBInstances(t, 2, "node", DefaultConfig()) err := dbs[0].JoinNetwork("network1") assert.NilError(t, err) dbs[1].verifyNetworkExistence(t, dbs[0].config.NodeID, "network1", true) err = dbs[1].JoinNetwork("network1") assert.NilError(t, err) dbs[0].verifyNetworkExistence(t, dbs[1].config.NodeID, "network1", true) n := 10 for i := 1; i <= n; i++ { err = dbs[0].CreateEntry("test_table", "network1", fmt.Sprintf("test_key0%d", 
i), []byte(fmt.Sprintf("test_value0%d", i))) assert.NilError(t, err) } for i := 1; i <= n; i++ { err = dbs[1].CreateEntry("test_table", "network1", fmt.Sprintf("test_key1%d", i), []byte(fmt.Sprintf("test_value1%d", i))) assert.NilError(t, err) } for i := 1; i <= n; i++ { dbs[0].verifyEntryExistence(t, "test_table", "network1", fmt.Sprintf("test_key1%d", i), fmt.Sprintf("test_value1%d", i), true) assert.NilError(t, err) } for i := 1; i <= n; i++ { dbs[1].verifyEntryExistence(t, "test_table", "network1", fmt.Sprintf("test_key0%d", i), fmt.Sprintf("test_value0%d", i), true) assert.NilError(t, err) } // Verify deletes for i := 1; i <= n; i++ { err = dbs[0].DeleteEntry("test_table", "network1", fmt.Sprintf("test_key0%d", i)) assert.NilError(t, err) } for i := 1; i <= n; i++ { err = dbs[1].DeleteEntry("test_table", "network1", fmt.Sprintf("test_key1%d", i)) assert.NilError(t, err) } for i := 1; i <= n; i++ { dbs[0].verifyEntryExistence(t, "test_table", "network1", fmt.Sprintf("test_key1%d", i), "", false) assert.NilError(t, err) } for i := 1; i <= n; i++ { dbs[1].verifyEntryExistence(t, "test_table", "network1", fmt.Sprintf("test_key0%d", i), "", false) assert.NilError(t, err) } closeNetworkDBInstances(t, dbs) } func TestNetworkDBNodeLeave(t *testing.T) { dbs := createNetworkDBInstances(t, 2, "node", DefaultConfig()) err := dbs[0].JoinNetwork("network1") assert.NilError(t, err) err = dbs[1].JoinNetwork("network1") assert.NilError(t, err) err = dbs[0].CreateEntry("test_table", "network1", "test_key", []byte("test_value")) assert.NilError(t, err) dbs[1].verifyEntryExistence(t, "test_table", "network1", "test_key", "test_value", true) dbs[0].Close() dbs[1].verifyEntryExistence(t, "test_table", "network1", "test_key", "test_value", false) dbs[1].Close() } func TestNetworkDBWatch(t *testing.T) { dbs := createNetworkDBInstances(t, 2, "node", DefaultConfig()) err := dbs[0].JoinNetwork("network1") assert.NilError(t, err) err = dbs[1].JoinNetwork("network1") assert.NilError(t, err) ch, cancel := dbs[1].Watch("", "", "") err = dbs[0].CreateEntry("test_table", "network1", "test_key", []byte("test_value")) assert.NilError(t, err) testWatch(t, ch.C, CreateEvent{}, "test_table", "network1", "test_key", "test_value") err = dbs[0].UpdateEntry("test_table", "network1", "test_key", []byte("test_updated_value")) assert.NilError(t, err) testWatch(t, ch.C, UpdateEvent{}, "test_table", "network1", "test_key", "test_updated_value") err = dbs[0].DeleteEntry("test_table", "network1", "test_key") assert.NilError(t, err) testWatch(t, ch.C, DeleteEvent{}, "test_table", "network1", "test_key", "") cancel() closeNetworkDBInstances(t, dbs) } func TestNetworkDBBulkSync(t *testing.T) { dbs := createNetworkDBInstances(t, 2, "node", DefaultConfig()) err := dbs[0].JoinNetwork("network1") assert.NilError(t, err) dbs[1].verifyNetworkExistence(t, dbs[0].config.NodeID, "network1", true) n := 1000 for i := 1; i <= n; i++ { err = dbs[0].CreateEntry("test_table", "network1", fmt.Sprintf("test_key0%d", i), []byte(fmt.Sprintf("test_value0%d", i))) assert.NilError(t, err) } err = dbs[1].JoinNetwork("network1") assert.NilError(t, err) dbs[0].verifyNetworkExistence(t, dbs[1].config.NodeID, "network1", true) for i := 1; i <= n; i++ { dbs[1].verifyEntryExistence(t, "test_table", "network1", fmt.Sprintf("test_key0%d", i), fmt.Sprintf("test_value0%d", i), true) assert.NilError(t, err) } closeNetworkDBInstances(t, dbs) } func TestNetworkDBCRUDMediumCluster(t *testing.T) { n := 5 dbs := createNetworkDBInstances(t, n, "node", DefaultConfig()) for i 
:= 0; i < n; i++ { for j := 0; j < n; j++ { if i == j { continue } dbs[i].verifyNodeExistence(t, dbs[j].config.NodeID, true) } } for i := 0; i < n; i++ { err := dbs[i].JoinNetwork("network1") assert.NilError(t, err) } for i := 0; i < n; i++ { for j := 0; j < n; j++ { dbs[i].verifyNetworkExistence(t, dbs[j].config.NodeID, "network1", true) } } err := dbs[0].CreateEntry("test_table", "network1", "test_key", []byte("test_value")) assert.NilError(t, err) for i := 1; i < n; i++ { dbs[i].verifyEntryExistence(t, "test_table", "network1", "test_key", "test_value", true) } err = dbs[0].UpdateEntry("test_table", "network1", "test_key", []byte("test_updated_value")) assert.NilError(t, err) for i := 1; i < n; i++ { dbs[i].verifyEntryExistence(t, "test_table", "network1", "test_key", "test_updated_value", true) } err = dbs[0].DeleteEntry("test_table", "network1", "test_key") assert.NilError(t, err) for i := 1; i < n; i++ { dbs[i].verifyEntryExistence(t, "test_table", "network1", "test_key", "", false) } for i := 1; i < n; i++ { _, err = dbs[i].GetEntry("test_table", "network1", "test_key") assert.Check(t, is.ErrorContains(err, "")) assert.Check(t, is.Contains(err.Error(), "deleted and pending garbage collection"), err) } closeNetworkDBInstances(t, dbs) } func TestNetworkDBNodeJoinLeaveIteration(t *testing.T) { maxRetry := 5 dbs := createNetworkDBInstances(t, 2, "node", DefaultConfig()) // Single node Join/Leave err := dbs[0].JoinNetwork("network1") assert.NilError(t, err) if len(dbs[0].networkNodes["network1"]) != 1 { t.Fatalf("The networkNodes list has to have be 1 instead of %d", len(dbs[0].networkNodes["network1"])) } err = dbs[0].LeaveNetwork("network1") assert.NilError(t, err) if len(dbs[0].networkNodes["network1"]) != 0 { t.Fatalf("The networkNodes list has to have be 0 instead of %d", len(dbs[0].networkNodes["network1"])) } // Multiple nodes Join/Leave err = dbs[0].JoinNetwork("network1") assert.NilError(t, err) err = dbs[1].JoinNetwork("network1") assert.NilError(t, err) // Wait for the propagation on db[0] dbs[0].verifyNetworkExistence(t, dbs[1].config.NodeID, "network1", true) if len(dbs[0].networkNodes["network1"]) != 2 { t.Fatalf("The networkNodes list has to have be 2 instead of %d - %v", len(dbs[0].networkNodes["network1"]), dbs[0].networkNodes["network1"]) } if n, ok := dbs[0].networks[dbs[0].config.NodeID]["network1"]; !ok || n.leaving { t.Fatalf("The network should not be marked as leaving:%t", n.leaving) } // Wait for the propagation on db[1] dbs[1].verifyNetworkExistence(t, dbs[0].config.NodeID, "network1", true) if len(dbs[1].networkNodes["network1"]) != 2 { t.Fatalf("The networkNodes list has to have be 2 instead of %d - %v", len(dbs[1].networkNodes["network1"]), dbs[1].networkNodes["network1"]) } if n, ok := dbs[1].networks[dbs[1].config.NodeID]["network1"]; !ok || n.leaving { t.Fatalf("The network should not be marked as leaving:%t", n.leaving) } // Try a quick leave/join err = dbs[0].LeaveNetwork("network1") assert.NilError(t, err) err = dbs[0].JoinNetwork("network1") assert.NilError(t, err) for i := 0; i < maxRetry; i++ { if len(dbs[0].networkNodes["network1"]) == 2 { break } time.Sleep(1 * time.Second) } if len(dbs[0].networkNodes["network1"]) != 2 { t.Fatalf("The networkNodes list has to have be 2 instead of %d - %v", len(dbs[0].networkNodes["network1"]), dbs[0].networkNodes["network1"]) } for i := 0; i < maxRetry; i++ { if len(dbs[1].networkNodes["network1"]) == 2 { break } time.Sleep(1 * time.Second) } if len(dbs[1].networkNodes["network1"]) != 2 { t.Fatalf("The 
networkNodes list has to have be 2 instead of %d - %v", len(dbs[1].networkNodes["network1"]), dbs[1].networkNodes["network1"]) } closeNetworkDBInstances(t, dbs) } func TestNetworkDBGarbageCollection(t *testing.T) { keysWriteDelete := 5 config := DefaultConfig() config.reapEntryInterval = 30 * time.Second config.StatsPrintPeriod = 15 * time.Second dbs := createNetworkDBInstances(t, 3, "node", config) // 2 Nodes join network err := dbs[0].JoinNetwork("network1") assert.NilError(t, err) err = dbs[1].JoinNetwork("network1") assert.NilError(t, err) for i := 0; i < keysWriteDelete; i++ { err = dbs[i%2].CreateEntry("testTable", "network1", "key-"+strconv.Itoa(i), []byte("value")) assert.NilError(t, err) } time.Sleep(time.Second) for i := 0; i < keysWriteDelete; i++ { err = dbs[i%2].DeleteEntry("testTable", "network1", "key-"+strconv.Itoa(i)) assert.NilError(t, err) } for i := 0; i < 2; i++ { assert.Check(t, is.Equal(keysWriteDelete, dbs[i].networks[dbs[i].config.NodeID]["network1"].entriesNumber), "entries number should match") } // from this point the timer for the garbage collection started, wait 5 seconds and then join a new node time.Sleep(5 * time.Second) err = dbs[2].JoinNetwork("network1") assert.NilError(t, err) for i := 0; i < 3; i++ { assert.Check(t, is.Equal(keysWriteDelete, dbs[i].networks[dbs[i].config.NodeID]["network1"].entriesNumber), "entries number should match") } // at this point the entries should had been all deleted time.Sleep(30 * time.Second) for i := 0; i < 3; i++ { assert.Check(t, is.Equal(0, dbs[i].networks[dbs[i].config.NodeID]["network1"].entriesNumber), "entries should had been garbage collected") } // make sure that entries are not coming back time.Sleep(15 * time.Second) for i := 0; i < 3; i++ { assert.Check(t, is.Equal(0, dbs[i].networks[dbs[i].config.NodeID]["network1"].entriesNumber), "entries should had been garbage collected") } closeNetworkDBInstances(t, dbs) } func TestFindNode(t *testing.T) { dbs := createNetworkDBInstances(t, 1, "node", DefaultConfig()) dbs[0].nodes["active"] = &node{Node: memberlist.Node{Name: "active"}} dbs[0].failedNodes["failed"] = &node{Node: memberlist.Node{Name: "failed"}} dbs[0].leftNodes["left"] = &node{Node: memberlist.Node{Name: "left"}} // active nodes is 2 because the testing node is in the list assert.Check(t, is.Len(dbs[0].nodes, 2)) assert.Check(t, is.Len(dbs[0].failedNodes, 1)) assert.Check(t, is.Len(dbs[0].leftNodes, 1)) n, currState, m := dbs[0].findNode("active") assert.Check(t, n != nil) assert.Check(t, is.Equal("active", n.Name)) assert.Check(t, is.Equal(nodeActiveState, currState)) assert.Check(t, m != nil) // delete the entry manually delete(m, "active") // test if can be still find n, currState, m = dbs[0].findNode("active") assert.Check(t, is.Nil(n)) assert.Check(t, is.Equal(nodeNotFound, currState)) assert.Check(t, is.Nil(m)) n, currState, m = dbs[0].findNode("failed") assert.Check(t, n != nil) assert.Check(t, is.Equal("failed", n.Name)) assert.Check(t, is.Equal(nodeFailedState, currState)) assert.Check(t, m != nil) // find and remove n, currState, m = dbs[0].findNode("left") assert.Check(t, n != nil) assert.Check(t, is.Equal("left", n.Name)) assert.Check(t, is.Equal(nodeLeftState, currState)) assert.Check(t, m != nil) delete(m, "left") n, currState, m = dbs[0].findNode("left") assert.Check(t, is.Nil(n)) assert.Check(t, is.Equal(nodeNotFound, currState)) assert.Check(t, is.Nil(m)) closeNetworkDBInstances(t, dbs) } func TestChangeNodeState(t *testing.T) { dbs := createNetworkDBInstances(t, 1, "node", 
DefaultConfig()) dbs[0].nodes["node1"] = &node{Node: memberlist.Node{Name: "node1"}} dbs[0].nodes["node2"] = &node{Node: memberlist.Node{Name: "node2"}} dbs[0].nodes["node3"] = &node{Node: memberlist.Node{Name: "node3"}} // active nodes is 4 because the testing node is in the list assert.Check(t, is.Len(dbs[0].nodes, 4)) n, currState, m := dbs[0].findNode("node1") assert.Check(t, n != nil) assert.Check(t, is.Equal(nodeActiveState, currState)) assert.Check(t, is.Equal("node1", n.Name)) assert.Check(t, m != nil) // node1 to failed dbs[0].changeNodeState("node1", nodeFailedState) n, currState, m = dbs[0].findNode("node1") assert.Check(t, n != nil) assert.Check(t, is.Equal(nodeFailedState, currState)) assert.Check(t, is.Equal("node1", n.Name)) assert.Check(t, m != nil) assert.Check(t, time.Duration(0) != n.reapTime) // node1 back to active dbs[0].changeNodeState("node1", nodeActiveState) n, currState, m = dbs[0].findNode("node1") assert.Check(t, n != nil) assert.Check(t, is.Equal(nodeActiveState, currState)) assert.Check(t, is.Equal("node1", n.Name)) assert.Check(t, m != nil) assert.Check(t, is.Equal(time.Duration(0), n.reapTime)) // node1 to left dbs[0].changeNodeState("node1", nodeLeftState) dbs[0].changeNodeState("node2", nodeLeftState) dbs[0].changeNodeState("node3", nodeLeftState) n, currState, m = dbs[0].findNode("node1") assert.Check(t, n != nil) assert.Check(t, is.Equal(nodeLeftState, currState)) assert.Check(t, is.Equal("node1", n.Name)) assert.Check(t, m != nil) assert.Check(t, time.Duration(0) != n.reapTime) n, currState, m = dbs[0].findNode("node2") assert.Check(t, n != nil) assert.Check(t, is.Equal(nodeLeftState, currState)) assert.Check(t, is.Equal("node2", n.Name)) assert.Check(t, m != nil) assert.Check(t, time.Duration(0) != n.reapTime) n, currState, m = dbs[0].findNode("node3") assert.Check(t, n != nil) assert.Check(t, is.Equal(nodeLeftState, currState)) assert.Check(t, is.Equal("node3", n.Name)) assert.Check(t, m != nil) assert.Check(t, time.Duration(0) != n.reapTime) // active nodes is 1 because the testing node is in the list assert.Check(t, is.Len(dbs[0].nodes, 1)) assert.Check(t, is.Len(dbs[0].failedNodes, 0)) assert.Check(t, is.Len(dbs[0].leftNodes, 3)) closeNetworkDBInstances(t, dbs) } func TestNodeReincarnation(t *testing.T) { dbs := createNetworkDBInstances(t, 1, "node", DefaultConfig()) dbs[0].nodes["node1"] = &node{Node: memberlist.Node{Name: "node1", Addr: net.ParseIP("192.168.1.1")}} dbs[0].leftNodes["node2"] = &node{Node: memberlist.Node{Name: "node2", Addr: net.ParseIP("192.168.1.2")}} dbs[0].failedNodes["node3"] = &node{Node: memberlist.Node{Name: "node3", Addr: net.ParseIP("192.168.1.3")}} // active nodes is 2 because the testing node is in the list assert.Check(t, is.Len(dbs[0].nodes, 2)) assert.Check(t, is.Len(dbs[0].failedNodes, 1)) assert.Check(t, is.Len(dbs[0].leftNodes, 1)) b := dbs[0].purgeReincarnation(&memberlist.Node{Name: "node4", Addr: net.ParseIP("192.168.1.1")}) assert.Check(t, b) dbs[0].nodes["node4"] = &node{Node: memberlist.Node{Name: "node4", Addr: net.ParseIP("192.168.1.1")}} b = dbs[0].purgeReincarnation(&memberlist.Node{Name: "node5", Addr: net.ParseIP("192.168.1.2")}) assert.Check(t, b) dbs[0].nodes["node5"] = &node{Node: memberlist.Node{Name: "node5", Addr: net.ParseIP("192.168.1.1")}} b = dbs[0].purgeReincarnation(&memberlist.Node{Name: "node6", Addr: net.ParseIP("192.168.1.3")}) assert.Check(t, b) dbs[0].nodes["node6"] = &node{Node: memberlist.Node{Name: "node6", Addr: net.ParseIP("192.168.1.1")}} b = 
dbs[0].purgeReincarnation(&memberlist.Node{Name: "node6", Addr: net.ParseIP("192.168.1.10")}) assert.Check(t, !b) // active nodes is 1 because the testing node is in the list assert.Check(t, is.Len(dbs[0].nodes, 4)) assert.Check(t, is.Len(dbs[0].failedNodes, 0)) assert.Check(t, is.Len(dbs[0].leftNodes, 3)) closeNetworkDBInstances(t, dbs) } func TestParallelCreate(t *testing.T) { dbs := createNetworkDBInstances(t, 1, "node", DefaultConfig()) startCh := make(chan int) doneCh := make(chan error) var success int32 for i := 0; i < 20; i++ { go func() { <-startCh err := dbs[0].CreateEntry("testTable", "testNetwork", "key", []byte("value")) if err == nil { atomic.AddInt32(&success, 1) } doneCh <- err }() } close(startCh) for i := 0; i < 20; i++ { <-doneCh } close(doneCh) // Only 1 write should have succeeded assert.Check(t, is.Equal(int32(1), success)) closeNetworkDBInstances(t, dbs) } func TestParallelDelete(t *testing.T) { dbs := createNetworkDBInstances(t, 1, "node", DefaultConfig()) err := dbs[0].CreateEntry("testTable", "testNetwork", "key", []byte("value")) assert.NilError(t, err) startCh := make(chan int) doneCh := make(chan error) var success int32 for i := 0; i < 20; i++ { go func() { <-startCh err := dbs[0].DeleteEntry("testTable", "testNetwork", "key") if err == nil { atomic.AddInt32(&success, 1) } doneCh <- err }() } close(startCh) for i := 0; i < 20; i++ { <-doneCh } close(doneCh) // Only 1 write should have succeeded assert.Check(t, is.Equal(int32(1), success)) closeNetworkDBInstances(t, dbs) } func TestNetworkDBIslands(t *testing.T) { pollTimeout := func() time.Duration { const defaultTimeout = 120 * time.Second dl, ok := t.Deadline() if !ok { return defaultTimeout } if d := time.Until(dl); d <= defaultTimeout { return d } return defaultTimeout } logrus.SetLevel(logrus.DebugLevel) conf := DefaultConfig() // Shorten durations to speed up test execution. 
conf.rejoinClusterDuration = conf.rejoinClusterDuration / 10 conf.rejoinClusterInterval = conf.rejoinClusterInterval / 10 dbs := createNetworkDBInstances(t, 5, "node", conf) // Get the node IP used currently node := dbs[0].nodes[dbs[0].config.NodeID] baseIPStr := node.Addr.String() // Node 0,1,2 are going to be the 3 bootstrap nodes members := []string{fmt.Sprintf("%s:%d", baseIPStr, dbs[0].config.BindPort), fmt.Sprintf("%s:%d", baseIPStr, dbs[1].config.BindPort), fmt.Sprintf("%s:%d", baseIPStr, dbs[2].config.BindPort)} // Rejoining will update the list of the bootstrap members for i := 3; i < 5; i++ { t.Logf("Re-joining: %d", i) assert.Check(t, dbs[i].Join(members)) } // Now the 3 bootstrap nodes will cleanly leave, and will be properly removed from the other 2 nodes for i := 0; i < 3; i++ { logrus.Infof("node %d leaving", i) dbs[i].Close() } checkDBs := make(map[string]*NetworkDB) for i := 3; i < 5; i++ { db := dbs[i] checkDBs[db.config.Hostname] = db } // Give some time to let the system propagate the messages and free up the ports check := func(t poll.LogT) poll.Result { // Verify that the nodes are actually all gone and marked appropiately for name, db := range checkDBs { db.RLock() if (len(db.leftNodes) != 3) || (len(db.failedNodes) != 0) { for name := range db.leftNodes { t.Logf("%s: Node %s left", db.config.Hostname, name) } for name := range db.failedNodes { t.Logf("%s: Node %s failed", db.config.Hostname, name) } db.RUnlock() return poll.Continue("%s:Waiting for all nodes to cleanly leave, left: %d, failed nodes: %d", name, len(db.leftNodes), len(db.failedNodes)) } db.RUnlock() t.Logf("%s: OK", name) delete(checkDBs, name) } return poll.Success() } poll.WaitOn(t, check, poll.WithDelay(time.Second), poll.WithTimeout(pollTimeout())) // Spawn again the first 3 nodes with different names but same IP:port for i := 0; i < 3; i++ { logrus.Infof("node %d coming back", i) dbs[i].config.NodeID = stringid.TruncateID(stringid.GenerateRandomID()) dbs[i] = launchNode(t, *dbs[i].config) } // Give some time for the reconnect routine to run, it runs every 6s. check = func(t poll.LogT) poll.Result { // Verify that the cluster is again all connected. Note that the 3 previous node did not do any join for i := 0; i < 5; i++ { db := dbs[i] db.RLock() if len(db.nodes) != 5 { db.RUnlock() return poll.Continue("%s:Waiting to connect to all nodes", dbs[i].config.Hostname) } if len(db.failedNodes) != 0 { db.RUnlock() return poll.Continue("%s:Waiting for 0 failedNodes", dbs[i].config.Hostname) } if i < 3 { // nodes from 0 to 3 has no left nodes if len(db.leftNodes) != 0 { db.RUnlock() return poll.Continue("%s:Waiting to have no leftNodes", dbs[i].config.Hostname) } } else { // nodes from 4 to 5 has the 3 previous left nodes if len(db.leftNodes) != 3 { db.RUnlock() return poll.Continue("%s:Waiting to have 3 leftNodes", dbs[i].config.Hostname) } } db.RUnlock() } return poll.Success() } poll.WaitOn(t, check, poll.WithDelay(time.Second), poll.WithTimeout(pollTimeout())) closeNetworkDBInstances(t, dbs) }
rvolosatovs
34058bc1d2ec49b06d56476c2ef41a7c391eb5ad
3aa7d80e042fc5012574649e7005b0ad6ac263ca
See the commit message; perhaps someone more familiar with the codebase can confirm that we don't need to alter the existing `networkdb` implementation to account for any potential change in `memberlist`? @thaJeztah? Note that if this test was flaky before, we should not have anything to worry about (that would mean the update did not cause the issue, but perhaps just made it fail more often, due to e.g. improved performance).
rvolosatovs
4,545
moby/moby
42,625
Fix flaky libnetwork/networkdb tests
**- What I did** Make `TestNetworkDBIslands` run faster and respect `-timeout` `go test` flag value (Partially?) address #42459 Fix `TestNetworkDBNodeJoinLeaveIteration` and `TestNetworkDBCRUDTableEntries` as well, since they now fail constantly and break CI. Potentially, that can be extracted into another PR, which this one would be blocked on, but I don't think it's worth it, since the changes are minimal, let me know if you think otherwise. **- How I did it** - Make rejoin intervals configurable, specify a lower interval during test and account for `test.Deadline` value - Update `github.com/hashicorp/memberlist`. Note that currently used version dates back to 2017. There have been a few improvements and bugfixes made since, in particular, the logic handling left/failed nodes was improved. **- How to verify it** I believe this commit is of most interest https://github.com/hashicorp/memberlist/commit/237d410aa2bf83254678ef78dd638480780e54a2, but I have not made a thorough analysis, whether this is indeed the "fix" we need. Note, I have encountered the following failure only once in ~100 runs: ``` === Failed === FAIL: libnetwork/networkdb TestNetworkDBIslands (22.03s) time="2021-07-12T15:12:17Z" level=info msg="New memberlist node - Node:node1 will use memberlist nodeID:f135e2881fc7 with config:&{NodeID:f135e2881fc7 Hostname:node1 BindAddr:0.0.0.0 AdvertiseAddr: BindPort:10001 Keys:[] PacketBufferSize:1400 reapEntryInterval:1800000000000 reapNetworkInterval:1825000000000 rejoinClusterDuration:1000000000 rejoinClusterInterval:6000000000 StatsPrintPeriod:5m0s HealthPrintPeriod:1m0s}" time="2021-07-12T15:12:17Z" level=info msg="Node f135e2881fc7/172.17.0.2, joined gossip cluster" time="2021-07-12T15:12:17Z" level=info msg="Node f135e2881fc7/172.17.0.2, added to nodes list" time="2021-07-12T15:12:17Z" level=info msg="New memberlist node - Node:node2 will use memberlist nodeID:12d1d234a539 with config:&{NodeID:12d1d234a539 Hostname:node2 BindAddr:0.0.0.0 AdvertiseAddr: BindPort:10002 Keys:[] PacketBufferSize:1400 reapEntryInterval:1800000000000 reapNetworkInterval:1825000000000 rejoinClusterDuration:1000000000 rejoinClusterInterval:6000000000 StatsPrintPeriod:5m0s HealthPrintPeriod:1m0s}" time="2021-07-12T15:12:17Z" level=info msg="Node 12d1d234a539/172.17.0.2, joined gossip cluster" time="2021-07-12T15:12:17Z" level=info msg="Node 12d1d234a539/172.17.0.2, added to nodes list" time="2021-07-12T15:12:17Z" level=info msg="The new bootstrap node list is:[localhost:10001]" time="2021-07-12T15:12:17Z" level=debug msg="memberlist: Stream connection from=[::1]:36282" time="2021-07-12T15:12:17Z" level=debug msg="memberlist: Initiating push/pull sync with: [::1]:10001" time="2021-07-12T15:12:17Z" level=info msg="Node 12d1d234a539/172.17.0.2, joined gossip cluster" time="2021-07-12T15:12:17Z" level=info msg="Node 12d1d234a539/172.17.0.2, added to nodes list" time="2021-07-12T15:12:17Z" level=info msg="Node f135e2881fc7/172.17.0.2, joined gossip cluster" time="2021-07-12T15:12:17Z" level=info msg="Node f135e2881fc7/172.17.0.2, added to nodes list" time="2021-07-12T15:12:17Z" level=debug msg="memberlist: Initiating push/pull sync with: 127.0.0.1:10001" time="2021-07-12T15:12:17Z" level=debug msg="memberlist: Stream connection from=127.0.0.1:46710" time="2021-07-12T15:12:18Z" level=info msg="New memberlist node - Node:node3 will use memberlist nodeID:e9889fb524ad with config:&{NodeID:e9889fb524ad Hostname:node3 BindAddr:0.0.0.0 AdvertiseAddr: BindPort:10003 Keys:[] PacketBufferSize:1400 
reapEntryInterval:1800000000000 reapNetworkInterval:1825000000000 rejoinClusterDuration:1000000000 rejoinClusterInterval:6000000000 StatsPrintPeriod:5m0s HealthPrintPeriod:1m0s}" time="2021-07-12T15:12:18Z" level=info msg="Node e9889fb524ad/172.17.0.2, joined gossip cluster" time="2021-07-12T15:12:18Z" level=info msg="Node e9889fb524ad/172.17.0.2, added to nodes list" time="2021-07-12T15:12:18Z" level=info msg="The new bootstrap node list is:[localhost:10002]" time="2021-07-12T15:12:18Z" level=debug msg="memberlist: Initiating push/pull sync with: [::1]:10002" time="2021-07-12T15:12:18Z" level=debug msg="memberlist: Stream connection from=[::1]:58118" time="2021-07-12T15:12:18Z" level=info msg="Node e9889fb524ad/172.17.0.2, joined gossip cluster" time="2021-07-12T15:12:18Z" level=info msg="Node e9889fb524ad/172.17.0.2, added to nodes list" time="2021-07-12T15:12:18Z" level=info msg="Node f135e2881fc7/172.17.0.2, joined gossip cluster" time="2021-07-12T15:12:18Z" level=info msg="Node f135e2881fc7/172.17.0.2, added to nodes list" time="2021-07-12T15:12:18Z" level=info msg="Node 12d1d234a539/172.17.0.2, joined gossip cluster" time="2021-07-12T15:12:18Z" level=info msg="Node 12d1d234a539/172.17.0.2, added to nodes list" time="2021-07-12T15:12:18Z" level=debug msg="memberlist: Initiating push/pull sync with: 127.0.0.1:10002" time="2021-07-12T15:12:18Z" level=debug msg="memberlist: Stream connection from=127.0.0.1:43476" time="2021-07-12T15:12:18Z" level=info msg="Node e9889fb524ad/172.17.0.2, joined gossip cluster" time="2021-07-12T15:12:18Z" level=info msg="Node e9889fb524ad/172.17.0.2, added to nodes list" time="2021-07-12T15:12:18Z" level=info msg="New memberlist node - Node:node4 will use memberlist nodeID:dd853f955091 with config:&{NodeID:dd853f955091 Hostname:node4 BindAddr:0.0.0.0 AdvertiseAddr: BindPort:10004 Keys:[] PacketBufferSize:1400 reapEntryInterval:1800000000000 reapNetworkInterval:1825000000000 rejoinClusterDuration:1000000000 rejoinClusterInterval:6000000000 StatsPrintPeriod:5m0s HealthPrintPeriod:1m0s}" time="2021-07-12T15:12:18Z" level=info msg="Node dd853f955091/172.17.0.2, joined gossip cluster" time="2021-07-12T15:12:18Z" level=info msg="Node dd853f955091/172.17.0.2, added to nodes list" time="2021-07-12T15:12:18Z" level=info msg="The new bootstrap node list is:[localhost:10003]" time="2021-07-12T15:12:18Z" level=debug msg="memberlist: Initiating push/pull sync with: [::1]:10003" time="2021-07-12T15:12:18Z" level=debug msg="memberlist: Stream connection from=[::1]:39720" time="2021-07-12T15:12:18Z" level=info msg="Node dd853f955091/172.17.0.2, joined gossip cluster" time="2021-07-12T15:12:18Z" level=info msg="Node dd853f955091/172.17.0.2, added to nodes list" time="2021-07-12T15:12:18Z" level=info msg="Node 12d1d234a539/172.17.0.2, joined gossip cluster" time="2021-07-12T15:12:18Z" level=info msg="Node 12d1d234a539/172.17.0.2, added to nodes list" time="2021-07-12T15:12:18Z" level=info msg="Node e9889fb524ad/172.17.0.2, joined gossip cluster" time="2021-07-12T15:12:18Z" level=info msg="Node e9889fb524ad/172.17.0.2, added to nodes list" time="2021-07-12T15:12:18Z" level=info msg="Node f135e2881fc7/172.17.0.2, joined gossip cluster" time="2021-07-12T15:12:18Z" level=info msg="Node f135e2881fc7/172.17.0.2, added to nodes list" time="2021-07-12T15:12:18Z" level=debug msg="memberlist: Initiating push/pull sync with: 127.0.0.1:10003" time="2021-07-12T15:12:18Z" level=debug msg="memberlist: Stream connection from=127.0.0.1:34386" time="2021-07-12T15:12:18Z" level=info msg="Node 
dd853f955091/172.17.0.2, joined gossip cluster" time="2021-07-12T15:12:18Z" level=info msg="Node dd853f955091/172.17.0.2, joined gossip cluster" time="2021-07-12T15:12:18Z" level=info msg="Node dd853f955091/172.17.0.2, added to nodes list" time="2021-07-12T15:12:18Z" level=info msg="Node dd853f955091/172.17.0.2, added to nodes list" time="2021-07-12T15:12:18Z" level=info msg="New memberlist node - Node:node5 will use memberlist nodeID:644827c34b56 with config:&{NodeID:644827c34b56 Hostname:node5 BindAddr:0.0.0.0 AdvertiseAddr: BindPort:10005 Keys:[] PacketBufferSize:1400 reapEntryInterval:1800000000000 reapNetworkInterval:1825000000000 rejoinClusterDuration:1000000000 rejoinClusterInterval:6000000000 StatsPrintPeriod:5m0s HealthPrintPeriod:1m0s}" time="2021-07-12T15:12:18Z" level=info msg="Node 644827c34b56/172.17.0.2, joined gossip cluster" time="2021-07-12T15:12:18Z" level=info msg="Node 644827c34b56/172.17.0.2, added to nodes list" time="2021-07-12T15:12:18Z" level=info msg="The new bootstrap node list is:[localhost:10004]" time="2021-07-12T15:12:18Z" level=debug msg="memberlist: Initiating push/pull sync with: [::1]:10004" time="2021-07-12T15:12:18Z" level=debug msg="memberlist: Stream connection from=[::1]:35756" time="2021-07-12T15:12:18Z" level=info msg="Node 644827c34b56/172.17.0.2, joined gossip cluster" time="2021-07-12T15:12:18Z" level=info msg="Node 644827c34b56/172.17.0.2, added to nodes list" time="2021-07-12T15:12:18Z" level=info msg="Node e9889fb524ad/172.17.0.2, joined gossip cluster" time="2021-07-12T15:12:18Z" level=info msg="Node e9889fb524ad/172.17.0.2, added to nodes list" time="2021-07-12T15:12:18Z" level=info msg="Node f135e2881fc7/172.17.0.2, joined gossip cluster" time="2021-07-12T15:12:18Z" level=info msg="Node f135e2881fc7/172.17.0.2, added to nodes list" time="2021-07-12T15:12:18Z" level=info msg="Node 12d1d234a539/172.17.0.2, joined gossip cluster" time="2021-07-12T15:12:18Z" level=info msg="Node 12d1d234a539/172.17.0.2, added to nodes list" time="2021-07-12T15:12:18Z" level=info msg="Node dd853f955091/172.17.0.2, joined gossip cluster" time="2021-07-12T15:12:18Z" level=info msg="Node dd853f955091/172.17.0.2, added to nodes list" time="2021-07-12T15:12:18Z" level=debug msg="memberlist: Initiating push/pull sync with: 127.0.0.1:10004" time="2021-07-12T15:12:18Z" level=debug msg="memberlist: Stream connection from=127.0.0.1:43650" time="2021-07-12T15:12:19Z" level=info msg="Node 644827c34b56/172.17.0.2, joined gossip cluster" time="2021-07-12T15:12:19Z" level=info msg="Node 644827c34b56/172.17.0.2, joined gossip cluster" time="2021-07-12T15:12:19Z" level=info msg="Node 644827c34b56/172.17.0.2, added to nodes list" time="2021-07-12T15:12:19Z" level=info msg="Node 644827c34b56/172.17.0.2, added to nodes list" time="2021-07-12T15:12:23Z" level=debug msg="rejoinClusterBootStrap did not find any valid IP" time="2021-07-12T15:12:24Z" level=debug msg="rejoinClusterBootStrap did not find any valid IP" time="2021-07-12T15:12:24Z" level=debug msg="rejoinClusterBootStrap did not find any valid IP" time="2021-07-12T15:12:24Z" level=debug msg="rejoinClusterBootStrap did not find any valid IP" time="2021-07-12T15:12:29Z" level=debug msg="rejoinClusterBootStrap did not find any valid IP" time="2021-07-12T15:12:30Z" level=debug msg="rejoinClusterBootStrap did not find any valid IP" time="2021-07-12T15:12:30Z" level=debug msg="rejoinClusterBootStrap did not find any valid IP" time="2021-07-12T15:12:30Z" level=debug msg="rejoinClusterBootStrap did not find any valid IP" 
time="2021-07-12T15:12:35Z" level=debug msg="rejoinClusterBootStrap did not find any valid IP" time="2021-07-12T15:12:36Z" level=debug msg="rejoinClusterBootStrap did not find any valid IP" time="2021-07-12T15:12:36Z" level=debug msg="rejoinClusterBootStrap did not find any valid IP" time="2021-07-12T15:12:36Z" level=debug msg="rejoinClusterBootStrap did not find any valid IP" networkdb_test.go:839: timeout hit after 20s: node3:Waiting for cluser peers to be established ``` That might be just an unfortunate slowness of my system, but it may be another source of flakiness, which has to be investigated still. I propose not to close the original issue just yet, and wait for a few days/weeks to see if the test is still flaky and make the decision then. **- Description for the changelog** <!-- Write a short (one line) summary that describes the changes in this pull request for inclusion in the changelog: --> **- A picture of a cute animal (not mandatory but encouraged)**
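To make the deadline handling described above concrete, here is a minimal standalone sketch of the idea (the actual patch implements it as a `pollTimeout` closure inside `TestNetworkDBIslands`; the package and helper names below are purely illustrative):

```go
package networkdb_sketch

import (
	"testing"
	"time"
)

// pollTimeout returns a timeout for poll.WaitOn-style loops: it uses
// defaultTimeout, but never waits past the deadline set by `go test -timeout`.
func pollTimeout(t *testing.T, defaultTimeout time.Duration) time.Duration {
	dl, ok := t.Deadline() // ok is false when no -timeout is in effect
	if !ok {
		return defaultTimeout
	}
	if remaining := time.Until(dl); remaining < defaultTimeout {
		return remaining
	}
	return defaultTimeout
}
```

The patch pairs this with shorter cluster-rejoin settings for the test (`rejoinClusterDuration` and `rejoinClusterInterval` divided by 10), so the reconnect routine fires roughly every 6 seconds instead of every 60 and the islands test completes well within the deadline.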
null
2021-07-12 15:48:25+00:00
2021-07-19 13:35:07+00:00
libnetwork/networkdb/networkdb_test.go
package networkdb import ( "fmt" "io/ioutil" "log" "net" "os" "strconv" "sync/atomic" "testing" "time" "github.com/docker/docker/pkg/stringid" "github.com/docker/go-events" "github.com/hashicorp/memberlist" "github.com/sirupsen/logrus" "gotest.tools/v3/assert" is "gotest.tools/v3/assert/cmp" "gotest.tools/v3/poll" // this takes care of the incontainer flag _ "github.com/docker/docker/libnetwork/testutils" ) var dbPort int32 = 10000 func TestMain(m *testing.M) { ioutil.WriteFile("/proc/sys/net/ipv6/conf/lo/disable_ipv6", []byte{'0', '\n'}, 0644) logrus.SetLevel(logrus.ErrorLevel) os.Exit(m.Run()) } func launchNode(t *testing.T, conf Config) *NetworkDB { t.Helper() db, err := New(&conf) assert.NilError(t, err) return db } func createNetworkDBInstances(t *testing.T, num int, namePrefix string, conf *Config) []*NetworkDB { t.Helper() var dbs []*NetworkDB for i := 0; i < num; i++ { localConfig := *conf localConfig.Hostname = fmt.Sprintf("%s%d", namePrefix, i+1) localConfig.NodeID = stringid.TruncateID(stringid.GenerateRandomID()) localConfig.BindPort = int(atomic.AddInt32(&dbPort, 1)) db := launchNode(t, localConfig) if i != 0 { assert.Check(t, db.Join([]string{fmt.Sprintf("localhost:%d", db.config.BindPort-1)})) } dbs = append(dbs, db) } // Wait till the cluster creation is successful check := func(t poll.LogT) poll.Result { // Check that the cluster is properly created for i := 0; i < num; i++ { if num != len(dbs[i].ClusterPeers()) { return poll.Continue("%s:Waiting for cluser peers to be established", dbs[i].config.Hostname) } } return poll.Success() } poll.WaitOn(t, check, poll.WithDelay(2*time.Second), poll.WithTimeout(20*time.Second)) return dbs } func closeNetworkDBInstances(t *testing.T, dbs []*NetworkDB) { t.Helper() log.Print("Closing DB instances...") for _, db := range dbs { db.Close() } } func (db *NetworkDB) verifyNodeExistence(t *testing.T, node string, present bool) { t.Helper() for i := 0; i < 80; i++ { db.RLock() _, ok := db.nodes[node] db.RUnlock() if present && ok { return } if !present && !ok { return } time.Sleep(50 * time.Millisecond) } t.Errorf("%v(%v): Node existence verification for node %s failed", db.config.Hostname, db.config.NodeID, node) } func (db *NetworkDB) verifyNetworkExistence(t *testing.T, node string, id string, present bool) { t.Helper() for i := 0; i < 80; i++ { db.RLock() nn, nnok := db.networks[node] db.RUnlock() if nnok { n, ok := nn[id] if present && ok { return } if !present && ((ok && n.leaving) || !ok) { return } } time.Sleep(50 * time.Millisecond) } t.Error("Network existence verification failed") } func (db *NetworkDB) verifyEntryExistence(t *testing.T, tname, nid, key, value string, present bool) { t.Helper() n := 80 for i := 0; i < n; i++ { entry, err := db.getEntry(tname, nid, key) if present && err == nil && string(entry.value) == value { return } if !present && ((err == nil && entry.deleting) || (err != nil)) { return } if i == n-1 && !present && err != nil { return } time.Sleep(50 * time.Millisecond) } t.Errorf("Entry existence verification test failed for %v(%v)", db.config.Hostname, db.config.NodeID) } func testWatch(t *testing.T, ch chan events.Event, ev interface{}, tname, nid, key, value string) { t.Helper() select { case rcvdEv := <-ch: assert.Check(t, is.Equal(fmt.Sprintf("%T", rcvdEv), fmt.Sprintf("%T", ev))) switch typ := rcvdEv.(type) { case CreateEvent: assert.Check(t, is.Equal(tname, typ.Table)) assert.Check(t, is.Equal(nid, typ.NetworkID)) assert.Check(t, is.Equal(key, typ.Key)) assert.Check(t, is.Equal(value, 
string(typ.Value))) case UpdateEvent: assert.Check(t, is.Equal(tname, typ.Table)) assert.Check(t, is.Equal(nid, typ.NetworkID)) assert.Check(t, is.Equal(key, typ.Key)) assert.Check(t, is.Equal(value, string(typ.Value))) case DeleteEvent: assert.Check(t, is.Equal(tname, typ.Table)) assert.Check(t, is.Equal(nid, typ.NetworkID)) assert.Check(t, is.Equal(key, typ.Key)) } case <-time.After(time.Second): t.Fail() return } } func TestNetworkDBSimple(t *testing.T) { dbs := createNetworkDBInstances(t, 2, "node", DefaultConfig()) closeNetworkDBInstances(t, dbs) } func TestNetworkDBJoinLeaveNetwork(t *testing.T) { dbs := createNetworkDBInstances(t, 2, "node", DefaultConfig()) err := dbs[0].JoinNetwork("network1") assert.NilError(t, err) dbs[1].verifyNetworkExistence(t, dbs[0].config.NodeID, "network1", true) err = dbs[0].LeaveNetwork("network1") assert.NilError(t, err) dbs[1].verifyNetworkExistence(t, dbs[0].config.NodeID, "network1", false) closeNetworkDBInstances(t, dbs) } func TestNetworkDBJoinLeaveNetworks(t *testing.T) { dbs := createNetworkDBInstances(t, 2, "node", DefaultConfig()) n := 10 for i := 1; i <= n; i++ { err := dbs[0].JoinNetwork(fmt.Sprintf("network0%d", i)) assert.NilError(t, err) } for i := 1; i <= n; i++ { err := dbs[1].JoinNetwork(fmt.Sprintf("network1%d", i)) assert.NilError(t, err) } for i := 1; i <= n; i++ { dbs[1].verifyNetworkExistence(t, dbs[0].config.NodeID, fmt.Sprintf("network0%d", i), true) } for i := 1; i <= n; i++ { dbs[0].verifyNetworkExistence(t, dbs[1].config.NodeID, fmt.Sprintf("network1%d", i), true) } for i := 1; i <= n; i++ { err := dbs[0].LeaveNetwork(fmt.Sprintf("network0%d", i)) assert.NilError(t, err) } for i := 1; i <= n; i++ { err := dbs[1].LeaveNetwork(fmt.Sprintf("network1%d", i)) assert.NilError(t, err) } for i := 1; i <= n; i++ { dbs[1].verifyNetworkExistence(t, dbs[0].config.NodeID, fmt.Sprintf("network0%d", i), false) } for i := 1; i <= n; i++ { dbs[0].verifyNetworkExistence(t, dbs[1].config.NodeID, fmt.Sprintf("network1%d", i), false) } closeNetworkDBInstances(t, dbs) } func TestNetworkDBCRUDTableEntry(t *testing.T) { dbs := createNetworkDBInstances(t, 3, "node", DefaultConfig()) err := dbs[0].JoinNetwork("network1") assert.NilError(t, err) dbs[1].verifyNetworkExistence(t, dbs[0].config.NodeID, "network1", true) err = dbs[1].JoinNetwork("network1") assert.NilError(t, err) err = dbs[0].CreateEntry("test_table", "network1", "test_key", []byte("test_value")) assert.NilError(t, err) dbs[1].verifyEntryExistence(t, "test_table", "network1", "test_key", "test_value", true) dbs[2].verifyEntryExistence(t, "test_table", "network1", "test_key", "test_value", false) err = dbs[0].UpdateEntry("test_table", "network1", "test_key", []byte("test_updated_value")) assert.NilError(t, err) dbs[1].verifyEntryExistence(t, "test_table", "network1", "test_key", "test_updated_value", true) err = dbs[0].DeleteEntry("test_table", "network1", "test_key") assert.NilError(t, err) dbs[1].verifyEntryExistence(t, "test_table", "network1", "test_key", "", false) closeNetworkDBInstances(t, dbs) } func TestNetworkDBCRUDTableEntries(t *testing.T) { dbs := createNetworkDBInstances(t, 2, "node", DefaultConfig()) err := dbs[0].JoinNetwork("network1") assert.NilError(t, err) dbs[1].verifyNetworkExistence(t, dbs[0].config.NodeID, "network1", true) err = dbs[1].JoinNetwork("network1") assert.NilError(t, err) n := 10 for i := 1; i <= n; i++ { err = dbs[0].CreateEntry("test_table", "network1", fmt.Sprintf("test_key0%d", i), []byte(fmt.Sprintf("test_value0%d", i))) assert.NilError(t, err) } 
for i := 1; i <= n; i++ { err = dbs[1].CreateEntry("test_table", "network1", fmt.Sprintf("test_key1%d", i), []byte(fmt.Sprintf("test_value1%d", i))) assert.NilError(t, err) } for i := 1; i <= n; i++ { dbs[0].verifyEntryExistence(t, "test_table", "network1", fmt.Sprintf("test_key1%d", i), fmt.Sprintf("test_value1%d", i), true) assert.NilError(t, err) } for i := 1; i <= n; i++ { dbs[1].verifyEntryExistence(t, "test_table", "network1", fmt.Sprintf("test_key0%d", i), fmt.Sprintf("test_value0%d", i), true) assert.NilError(t, err) } // Verify deletes for i := 1; i <= n; i++ { err = dbs[0].DeleteEntry("test_table", "network1", fmt.Sprintf("test_key0%d", i)) assert.NilError(t, err) } for i := 1; i <= n; i++ { err = dbs[1].DeleteEntry("test_table", "network1", fmt.Sprintf("test_key1%d", i)) assert.NilError(t, err) } for i := 1; i <= n; i++ { dbs[0].verifyEntryExistence(t, "test_table", "network1", fmt.Sprintf("test_key1%d", i), "", false) assert.NilError(t, err) } for i := 1; i <= n; i++ { dbs[1].verifyEntryExistence(t, "test_table", "network1", fmt.Sprintf("test_key0%d", i), "", false) assert.NilError(t, err) } closeNetworkDBInstances(t, dbs) } func TestNetworkDBNodeLeave(t *testing.T) { dbs := createNetworkDBInstances(t, 2, "node", DefaultConfig()) err := dbs[0].JoinNetwork("network1") assert.NilError(t, err) err = dbs[1].JoinNetwork("network1") assert.NilError(t, err) err = dbs[0].CreateEntry("test_table", "network1", "test_key", []byte("test_value")) assert.NilError(t, err) dbs[1].verifyEntryExistence(t, "test_table", "network1", "test_key", "test_value", true) dbs[0].Close() dbs[1].verifyEntryExistence(t, "test_table", "network1", "test_key", "test_value", false) dbs[1].Close() } func TestNetworkDBWatch(t *testing.T) { dbs := createNetworkDBInstances(t, 2, "node", DefaultConfig()) err := dbs[0].JoinNetwork("network1") assert.NilError(t, err) err = dbs[1].JoinNetwork("network1") assert.NilError(t, err) ch, cancel := dbs[1].Watch("", "", "") err = dbs[0].CreateEntry("test_table", "network1", "test_key", []byte("test_value")) assert.NilError(t, err) testWatch(t, ch.C, CreateEvent{}, "test_table", "network1", "test_key", "test_value") err = dbs[0].UpdateEntry("test_table", "network1", "test_key", []byte("test_updated_value")) assert.NilError(t, err) testWatch(t, ch.C, UpdateEvent{}, "test_table", "network1", "test_key", "test_updated_value") err = dbs[0].DeleteEntry("test_table", "network1", "test_key") assert.NilError(t, err) testWatch(t, ch.C, DeleteEvent{}, "test_table", "network1", "test_key", "") cancel() closeNetworkDBInstances(t, dbs) } func TestNetworkDBBulkSync(t *testing.T) { dbs := createNetworkDBInstances(t, 2, "node", DefaultConfig()) err := dbs[0].JoinNetwork("network1") assert.NilError(t, err) dbs[1].verifyNetworkExistence(t, dbs[0].config.NodeID, "network1", true) n := 1000 for i := 1; i <= n; i++ { err = dbs[0].CreateEntry("test_table", "network1", fmt.Sprintf("test_key0%d", i), []byte(fmt.Sprintf("test_value0%d", i))) assert.NilError(t, err) } err = dbs[1].JoinNetwork("network1") assert.NilError(t, err) dbs[0].verifyNetworkExistence(t, dbs[1].config.NodeID, "network1", true) for i := 1; i <= n; i++ { dbs[1].verifyEntryExistence(t, "test_table", "network1", fmt.Sprintf("test_key0%d", i), fmt.Sprintf("test_value0%d", i), true) assert.NilError(t, err) } closeNetworkDBInstances(t, dbs) } func TestNetworkDBCRUDMediumCluster(t *testing.T) { n := 5 dbs := createNetworkDBInstances(t, n, "node", DefaultConfig()) for i := 0; i < n; i++ { for j := 0; j < n; j++ { if i == j { continue } 
dbs[i].verifyNodeExistence(t, dbs[j].config.NodeID, true) } } for i := 0; i < n; i++ { err := dbs[i].JoinNetwork("network1") assert.NilError(t, err) } for i := 0; i < n; i++ { for j := 0; j < n; j++ { dbs[i].verifyNetworkExistence(t, dbs[j].config.NodeID, "network1", true) } } err := dbs[0].CreateEntry("test_table", "network1", "test_key", []byte("test_value")) assert.NilError(t, err) for i := 1; i < n; i++ { dbs[i].verifyEntryExistence(t, "test_table", "network1", "test_key", "test_value", true) } err = dbs[0].UpdateEntry("test_table", "network1", "test_key", []byte("test_updated_value")) assert.NilError(t, err) for i := 1; i < n; i++ { dbs[i].verifyEntryExistence(t, "test_table", "network1", "test_key", "test_updated_value", true) } err = dbs[0].DeleteEntry("test_table", "network1", "test_key") assert.NilError(t, err) for i := 1; i < n; i++ { dbs[i].verifyEntryExistence(t, "test_table", "network1", "test_key", "", false) } for i := 1; i < n; i++ { _, err = dbs[i].GetEntry("test_table", "network1", "test_key") assert.Check(t, is.ErrorContains(err, "")) assert.Check(t, is.Contains(err.Error(), "deleted and pending garbage collection"), err) } closeNetworkDBInstances(t, dbs) } func TestNetworkDBNodeJoinLeaveIteration(t *testing.T) { maxRetry := 5 dbs := createNetworkDBInstances(t, 2, "node", DefaultConfig()) // Single node Join/Leave err := dbs[0].JoinNetwork("network1") assert.NilError(t, err) if len(dbs[0].networkNodes["network1"]) != 1 { t.Fatalf("The networkNodes list has to have be 1 instead of %d", len(dbs[0].networkNodes["network1"])) } err = dbs[0].LeaveNetwork("network1") assert.NilError(t, err) if len(dbs[0].networkNodes["network1"]) != 0 { t.Fatalf("The networkNodes list has to have be 0 instead of %d", len(dbs[0].networkNodes["network1"])) } // Multiple nodes Join/Leave err = dbs[0].JoinNetwork("network1") assert.NilError(t, err) err = dbs[1].JoinNetwork("network1") assert.NilError(t, err) // Wait for the propagation on db[0] for i := 0; i < maxRetry; i++ { if len(dbs[0].networkNodes["network1"]) == 2 { break } time.Sleep(1 * time.Second) } if len(dbs[0].networkNodes["network1"]) != 2 { t.Fatalf("The networkNodes list has to have be 2 instead of %d - %v", len(dbs[0].networkNodes["network1"]), dbs[0].networkNodes["network1"]) } if n, ok := dbs[0].networks[dbs[0].config.NodeID]["network1"]; !ok || n.leaving { t.Fatalf("The network should not be marked as leaving:%t", n.leaving) } // Wait for the propagation on db[1] for i := 0; i < maxRetry; i++ { if len(dbs[1].networkNodes["network1"]) == 2 { break } time.Sleep(1 * time.Second) } if len(dbs[1].networkNodes["network1"]) != 2 { t.Fatalf("The networkNodes list has to have be 2 instead of %d - %v", len(dbs[1].networkNodes["network1"]), dbs[1].networkNodes["network1"]) } if n, ok := dbs[1].networks[dbs[1].config.NodeID]["network1"]; !ok || n.leaving { t.Fatalf("The network should not be marked as leaving:%t", n.leaving) } // Try a quick leave/join err = dbs[0].LeaveNetwork("network1") assert.NilError(t, err) err = dbs[0].JoinNetwork("network1") assert.NilError(t, err) for i := 0; i < maxRetry; i++ { if len(dbs[0].networkNodes["network1"]) == 2 { break } time.Sleep(1 * time.Second) } if len(dbs[0].networkNodes["network1"]) != 2 { t.Fatalf("The networkNodes list has to have be 2 instead of %d - %v", len(dbs[0].networkNodes["network1"]), dbs[0].networkNodes["network1"]) } for i := 0; i < maxRetry; i++ { if len(dbs[1].networkNodes["network1"]) == 2 { break } time.Sleep(1 * time.Second) } if len(dbs[1].networkNodes["network1"]) != 2 { 
t.Fatalf("The networkNodes list has to have be 2 instead of %d - %v", len(dbs[1].networkNodes["network1"]), dbs[1].networkNodes["network1"]) } closeNetworkDBInstances(t, dbs) } func TestNetworkDBGarbageCollection(t *testing.T) { keysWriteDelete := 5 config := DefaultConfig() config.reapEntryInterval = 30 * time.Second config.StatsPrintPeriod = 15 * time.Second dbs := createNetworkDBInstances(t, 3, "node", config) // 2 Nodes join network err := dbs[0].JoinNetwork("network1") assert.NilError(t, err) err = dbs[1].JoinNetwork("network1") assert.NilError(t, err) for i := 0; i < keysWriteDelete; i++ { err = dbs[i%2].CreateEntry("testTable", "network1", "key-"+strconv.Itoa(i), []byte("value")) assert.NilError(t, err) } time.Sleep(time.Second) for i := 0; i < keysWriteDelete; i++ { err = dbs[i%2].DeleteEntry("testTable", "network1", "key-"+strconv.Itoa(i)) assert.NilError(t, err) } for i := 0; i < 2; i++ { assert.Check(t, is.Equal(keysWriteDelete, dbs[i].networks[dbs[i].config.NodeID]["network1"].entriesNumber), "entries number should match") } // from this point the timer for the garbage collection started, wait 5 seconds and then join a new node time.Sleep(5 * time.Second) err = dbs[2].JoinNetwork("network1") assert.NilError(t, err) for i := 0; i < 3; i++ { assert.Check(t, is.Equal(keysWriteDelete, dbs[i].networks[dbs[i].config.NodeID]["network1"].entriesNumber), "entries number should match") } // at this point the entries should had been all deleted time.Sleep(30 * time.Second) for i := 0; i < 3; i++ { assert.Check(t, is.Equal(0, dbs[i].networks[dbs[i].config.NodeID]["network1"].entriesNumber), "entries should had been garbage collected") } // make sure that entries are not coming back time.Sleep(15 * time.Second) for i := 0; i < 3; i++ { assert.Check(t, is.Equal(0, dbs[i].networks[dbs[i].config.NodeID]["network1"].entriesNumber), "entries should had been garbage collected") } closeNetworkDBInstances(t, dbs) } func TestFindNode(t *testing.T) { dbs := createNetworkDBInstances(t, 1, "node", DefaultConfig()) dbs[0].nodes["active"] = &node{Node: memberlist.Node{Name: "active"}} dbs[0].failedNodes["failed"] = &node{Node: memberlist.Node{Name: "failed"}} dbs[0].leftNodes["left"] = &node{Node: memberlist.Node{Name: "left"}} // active nodes is 2 because the testing node is in the list assert.Check(t, is.Len(dbs[0].nodes, 2)) assert.Check(t, is.Len(dbs[0].failedNodes, 1)) assert.Check(t, is.Len(dbs[0].leftNodes, 1)) n, currState, m := dbs[0].findNode("active") assert.Check(t, n != nil) assert.Check(t, is.Equal("active", n.Name)) assert.Check(t, is.Equal(nodeActiveState, currState)) assert.Check(t, m != nil) // delete the entry manually delete(m, "active") // test if can be still find n, currState, m = dbs[0].findNode("active") assert.Check(t, is.Nil(n)) assert.Check(t, is.Equal(nodeNotFound, currState)) assert.Check(t, is.Nil(m)) n, currState, m = dbs[0].findNode("failed") assert.Check(t, n != nil) assert.Check(t, is.Equal("failed", n.Name)) assert.Check(t, is.Equal(nodeFailedState, currState)) assert.Check(t, m != nil) // find and remove n, currState, m = dbs[0].findNode("left") assert.Check(t, n != nil) assert.Check(t, is.Equal("left", n.Name)) assert.Check(t, is.Equal(nodeLeftState, currState)) assert.Check(t, m != nil) delete(m, "left") n, currState, m = dbs[0].findNode("left") assert.Check(t, is.Nil(n)) assert.Check(t, is.Equal(nodeNotFound, currState)) assert.Check(t, is.Nil(m)) closeNetworkDBInstances(t, dbs) } func TestChangeNodeState(t *testing.T) { dbs := createNetworkDBInstances(t, 1, 
"node", DefaultConfig()) dbs[0].nodes["node1"] = &node{Node: memberlist.Node{Name: "node1"}} dbs[0].nodes["node2"] = &node{Node: memberlist.Node{Name: "node2"}} dbs[0].nodes["node3"] = &node{Node: memberlist.Node{Name: "node3"}} // active nodes is 4 because the testing node is in the list assert.Check(t, is.Len(dbs[0].nodes, 4)) n, currState, m := dbs[0].findNode("node1") assert.Check(t, n != nil) assert.Check(t, is.Equal(nodeActiveState, currState)) assert.Check(t, is.Equal("node1", n.Name)) assert.Check(t, m != nil) // node1 to failed dbs[0].changeNodeState("node1", nodeFailedState) n, currState, m = dbs[0].findNode("node1") assert.Check(t, n != nil) assert.Check(t, is.Equal(nodeFailedState, currState)) assert.Check(t, is.Equal("node1", n.Name)) assert.Check(t, m != nil) assert.Check(t, time.Duration(0) != n.reapTime) // node1 back to active dbs[0].changeNodeState("node1", nodeActiveState) n, currState, m = dbs[0].findNode("node1") assert.Check(t, n != nil) assert.Check(t, is.Equal(nodeActiveState, currState)) assert.Check(t, is.Equal("node1", n.Name)) assert.Check(t, m != nil) assert.Check(t, is.Equal(time.Duration(0), n.reapTime)) // node1 to left dbs[0].changeNodeState("node1", nodeLeftState) dbs[0].changeNodeState("node2", nodeLeftState) dbs[0].changeNodeState("node3", nodeLeftState) n, currState, m = dbs[0].findNode("node1") assert.Check(t, n != nil) assert.Check(t, is.Equal(nodeLeftState, currState)) assert.Check(t, is.Equal("node1", n.Name)) assert.Check(t, m != nil) assert.Check(t, time.Duration(0) != n.reapTime) n, currState, m = dbs[0].findNode("node2") assert.Check(t, n != nil) assert.Check(t, is.Equal(nodeLeftState, currState)) assert.Check(t, is.Equal("node2", n.Name)) assert.Check(t, m != nil) assert.Check(t, time.Duration(0) != n.reapTime) n, currState, m = dbs[0].findNode("node3") assert.Check(t, n != nil) assert.Check(t, is.Equal(nodeLeftState, currState)) assert.Check(t, is.Equal("node3", n.Name)) assert.Check(t, m != nil) assert.Check(t, time.Duration(0) != n.reapTime) // active nodes is 1 because the testing node is in the list assert.Check(t, is.Len(dbs[0].nodes, 1)) assert.Check(t, is.Len(dbs[0].failedNodes, 0)) assert.Check(t, is.Len(dbs[0].leftNodes, 3)) closeNetworkDBInstances(t, dbs) } func TestNodeReincarnation(t *testing.T) { dbs := createNetworkDBInstances(t, 1, "node", DefaultConfig()) dbs[0].nodes["node1"] = &node{Node: memberlist.Node{Name: "node1", Addr: net.ParseIP("192.168.1.1")}} dbs[0].leftNodes["node2"] = &node{Node: memberlist.Node{Name: "node2", Addr: net.ParseIP("192.168.1.2")}} dbs[0].failedNodes["node3"] = &node{Node: memberlist.Node{Name: "node3", Addr: net.ParseIP("192.168.1.3")}} // active nodes is 2 because the testing node is in the list assert.Check(t, is.Len(dbs[0].nodes, 2)) assert.Check(t, is.Len(dbs[0].failedNodes, 1)) assert.Check(t, is.Len(dbs[0].leftNodes, 1)) b := dbs[0].purgeReincarnation(&memberlist.Node{Name: "node4", Addr: net.ParseIP("192.168.1.1")}) assert.Check(t, b) dbs[0].nodes["node4"] = &node{Node: memberlist.Node{Name: "node4", Addr: net.ParseIP("192.168.1.1")}} b = dbs[0].purgeReincarnation(&memberlist.Node{Name: "node5", Addr: net.ParseIP("192.168.1.2")}) assert.Check(t, b) dbs[0].nodes["node5"] = &node{Node: memberlist.Node{Name: "node5", Addr: net.ParseIP("192.168.1.1")}} b = dbs[0].purgeReincarnation(&memberlist.Node{Name: "node6", Addr: net.ParseIP("192.168.1.3")}) assert.Check(t, b) dbs[0].nodes["node6"] = &node{Node: memberlist.Node{Name: "node6", Addr: net.ParseIP("192.168.1.1")}} b = 
dbs[0].purgeReincarnation(&memberlist.Node{Name: "node6", Addr: net.ParseIP("192.168.1.10")}) assert.Check(t, !b) // active nodes is 1 because the testing node is in the list assert.Check(t, is.Len(dbs[0].nodes, 4)) assert.Check(t, is.Len(dbs[0].failedNodes, 0)) assert.Check(t, is.Len(dbs[0].leftNodes, 3)) closeNetworkDBInstances(t, dbs) } func TestParallelCreate(t *testing.T) { dbs := createNetworkDBInstances(t, 1, "node", DefaultConfig()) startCh := make(chan int) doneCh := make(chan error) var success int32 for i := 0; i < 20; i++ { go func() { <-startCh err := dbs[0].CreateEntry("testTable", "testNetwork", "key", []byte("value")) if err == nil { atomic.AddInt32(&success, 1) } doneCh <- err }() } close(startCh) for i := 0; i < 20; i++ { <-doneCh } close(doneCh) // Only 1 write should have succeeded assert.Check(t, is.Equal(int32(1), success)) closeNetworkDBInstances(t, dbs) } func TestParallelDelete(t *testing.T) { dbs := createNetworkDBInstances(t, 1, "node", DefaultConfig()) err := dbs[0].CreateEntry("testTable", "testNetwork", "key", []byte("value")) assert.NilError(t, err) startCh := make(chan int) doneCh := make(chan error) var success int32 for i := 0; i < 20; i++ { go func() { <-startCh err := dbs[0].DeleteEntry("testTable", "testNetwork", "key") if err == nil { atomic.AddInt32(&success, 1) } doneCh <- err }() } close(startCh) for i := 0; i < 20; i++ { <-doneCh } close(doneCh) // Only 1 write should have succeeded assert.Check(t, is.Equal(int32(1), success)) closeNetworkDBInstances(t, dbs) } func TestNetworkDBIslands(t *testing.T) { logrus.SetLevel(logrus.DebugLevel) dbs := createNetworkDBInstances(t, 5, "node", DefaultConfig()) // Get the node IP used currently node := dbs[0].nodes[dbs[0].config.NodeID] baseIPStr := node.Addr.String() // Node 0,1,2 are going to be the 3 bootstrap nodes members := []string{fmt.Sprintf("%s:%d", baseIPStr, dbs[0].config.BindPort), fmt.Sprintf("%s:%d", baseIPStr, dbs[1].config.BindPort), fmt.Sprintf("%s:%d", baseIPStr, dbs[2].config.BindPort)} // Rejoining will update the list of the bootstrap members for i := 3; i < 5; i++ { t.Logf("Re-joining: %d", i) assert.Check(t, dbs[i].Join(members)) } // Now the 3 bootstrap nodes will cleanly leave, and will be properly removed from the other 2 nodes for i := 0; i < 3; i++ { logrus.Infof("node %d leaving", i) dbs[i].Close() } checkDBs := make(map[string]*NetworkDB) for i := 3; i < 5; i++ { db := dbs[i] checkDBs[db.config.Hostname] = db } // Give some time to let the system propagate the messages and free up the ports check := func(t poll.LogT) poll.Result { // Verify that the nodes are actually all gone and marked appropiately for name, db := range checkDBs { db.RLock() if (len(db.leftNodes) != 3) || (len(db.failedNodes) != 0) { for name := range db.leftNodes { t.Logf("%s: Node %s left", db.config.Hostname, name) } for name := range db.failedNodes { t.Logf("%s: Node %s failed", db.config.Hostname, name) } db.RUnlock() return poll.Continue("%s:Waiting for all nodes to cleanly leave, left: %d, failed nodes: %d", name, len(db.leftNodes), len(db.failedNodes)) } db.RUnlock() t.Logf("%s: OK", name) delete(checkDBs, name) } return poll.Success() } poll.WaitOn(t, check, poll.WithDelay(time.Second), poll.WithTimeout(120*time.Second)) // Spawn again the first 3 nodes with different names but same IP:port for i := 0; i < 3; i++ { logrus.Infof("node %d coming back", i) dbs[i].config.NodeID = stringid.TruncateID(stringid.GenerateRandomID()) dbs[i] = launchNode(t, *dbs[i].config) } // Give some time for the reconnect 
routine to run, it runs every 60s check = func(t poll.LogT) poll.Result { // Verify that the cluster is again all connected. Note that the 3 previous node did not do any join for i := 0; i < 5; i++ { db := dbs[i] db.RLock() if len(db.nodes) != 5 { db.RUnlock() return poll.Continue("%s:Waiting to connect to all nodes", dbs[i].config.Hostname) } if len(db.failedNodes) != 0 { db.RUnlock() return poll.Continue("%s:Waiting for 0 failedNodes", dbs[i].config.Hostname) } if i < 3 { // nodes from 0 to 3 has no left nodes if len(db.leftNodes) != 0 { db.RUnlock() return poll.Continue("%s:Waiting to have no leftNodes", dbs[i].config.Hostname) } } else { // nodes from 4 to 5 has the 3 previous left nodes if len(db.leftNodes) != 3 { db.RUnlock() return poll.Continue("%s:Waiting to have 3 leftNodes", dbs[i].config.Hostname) } } db.RUnlock() } return poll.Success() } poll.WaitOn(t, check, poll.WithDelay(10*time.Second), poll.WithTimeout(120*time.Second)) closeNetworkDBInstances(t, dbs) }
package networkdb import ( "fmt" "io/ioutil" "log" "net" "os" "strconv" "sync/atomic" "testing" "time" "github.com/docker/docker/pkg/stringid" "github.com/docker/go-events" "github.com/hashicorp/memberlist" "github.com/sirupsen/logrus" "gotest.tools/v3/assert" is "gotest.tools/v3/assert/cmp" "gotest.tools/v3/poll" // this takes care of the incontainer flag _ "github.com/docker/docker/libnetwork/testutils" ) var dbPort int32 = 10000 func TestMain(m *testing.M) { ioutil.WriteFile("/proc/sys/net/ipv6/conf/lo/disable_ipv6", []byte{'0', '\n'}, 0644) logrus.SetLevel(logrus.ErrorLevel) os.Exit(m.Run()) } func launchNode(t *testing.T, conf Config) *NetworkDB { t.Helper() db, err := New(&conf) assert.NilError(t, err) return db } func createNetworkDBInstances(t *testing.T, num int, namePrefix string, conf *Config) []*NetworkDB { t.Helper() var dbs []*NetworkDB for i := 0; i < num; i++ { localConfig := *conf localConfig.Hostname = fmt.Sprintf("%s%d", namePrefix, i+1) localConfig.NodeID = stringid.TruncateID(stringid.GenerateRandomID()) localConfig.BindPort = int(atomic.AddInt32(&dbPort, 1)) db := launchNode(t, localConfig) if i != 0 { assert.Check(t, db.Join([]string{fmt.Sprintf("localhost:%d", db.config.BindPort-1)})) } dbs = append(dbs, db) } // Wait till the cluster creation is successful check := func(t poll.LogT) poll.Result { // Check that the cluster is properly created for i := 0; i < num; i++ { if num != len(dbs[i].ClusterPeers()) { return poll.Continue("%s:Waiting for cluser peers to be established", dbs[i].config.Hostname) } } return poll.Success() } poll.WaitOn(t, check, poll.WithDelay(2*time.Second), poll.WithTimeout(20*time.Second)) return dbs } func closeNetworkDBInstances(t *testing.T, dbs []*NetworkDB) { t.Helper() log.Print("Closing DB instances...") for _, db := range dbs { db.Close() } } func (db *NetworkDB) verifyNodeExistence(t *testing.T, node string, present bool) { t.Helper() for i := 0; i < 80; i++ { db.RLock() _, ok := db.nodes[node] db.RUnlock() if present && ok { return } if !present && !ok { return } time.Sleep(50 * time.Millisecond) } t.Errorf("%v(%v): Node existence verification for node %s failed", db.config.Hostname, db.config.NodeID, node) } func (db *NetworkDB) verifyNetworkExistence(t *testing.T, node string, id string, present bool) { t.Helper() for i := 0; i < 80; i++ { db.RLock() nn, nnok := db.networks[node] db.RUnlock() if nnok { n, ok := nn[id] if present && ok { return } if !present && ((ok && n.leaving) || !ok) { return } } time.Sleep(50 * time.Millisecond) } t.Error("Network existence verification failed") } func (db *NetworkDB) verifyEntryExistence(t *testing.T, tname, nid, key, value string, present bool) { t.Helper() n := 80 for i := 0; i < n; i++ { entry, err := db.getEntry(tname, nid, key) if present && err == nil && string(entry.value) == value { return } if !present && ((err == nil && entry.deleting) || (err != nil)) { return } if i == n-1 && !present && err != nil { return } time.Sleep(50 * time.Millisecond) } t.Errorf("Entry existence verification test failed for %v(%v)", db.config.Hostname, db.config.NodeID) } func testWatch(t *testing.T, ch chan events.Event, ev interface{}, tname, nid, key, value string) { t.Helper() select { case rcvdEv := <-ch: assert.Check(t, is.Equal(fmt.Sprintf("%T", rcvdEv), fmt.Sprintf("%T", ev))) switch typ := rcvdEv.(type) { case CreateEvent: assert.Check(t, is.Equal(tname, typ.Table)) assert.Check(t, is.Equal(nid, typ.NetworkID)) assert.Check(t, is.Equal(key, typ.Key)) assert.Check(t, is.Equal(value, 
string(typ.Value))) case UpdateEvent: assert.Check(t, is.Equal(tname, typ.Table)) assert.Check(t, is.Equal(nid, typ.NetworkID)) assert.Check(t, is.Equal(key, typ.Key)) assert.Check(t, is.Equal(value, string(typ.Value))) case DeleteEvent: assert.Check(t, is.Equal(tname, typ.Table)) assert.Check(t, is.Equal(nid, typ.NetworkID)) assert.Check(t, is.Equal(key, typ.Key)) } case <-time.After(time.Second): t.Fail() return } } func TestNetworkDBSimple(t *testing.T) { dbs := createNetworkDBInstances(t, 2, "node", DefaultConfig()) closeNetworkDBInstances(t, dbs) } func TestNetworkDBJoinLeaveNetwork(t *testing.T) { dbs := createNetworkDBInstances(t, 2, "node", DefaultConfig()) err := dbs[0].JoinNetwork("network1") assert.NilError(t, err) dbs[1].verifyNetworkExistence(t, dbs[0].config.NodeID, "network1", true) err = dbs[0].LeaveNetwork("network1") assert.NilError(t, err) dbs[1].verifyNetworkExistence(t, dbs[0].config.NodeID, "network1", false) closeNetworkDBInstances(t, dbs) } func TestNetworkDBJoinLeaveNetworks(t *testing.T) { dbs := createNetworkDBInstances(t, 2, "node", DefaultConfig()) n := 10 for i := 1; i <= n; i++ { err := dbs[0].JoinNetwork(fmt.Sprintf("network0%d", i)) assert.NilError(t, err) } for i := 1; i <= n; i++ { err := dbs[1].JoinNetwork(fmt.Sprintf("network1%d", i)) assert.NilError(t, err) } for i := 1; i <= n; i++ { dbs[1].verifyNetworkExistence(t, dbs[0].config.NodeID, fmt.Sprintf("network0%d", i), true) } for i := 1; i <= n; i++ { dbs[0].verifyNetworkExistence(t, dbs[1].config.NodeID, fmt.Sprintf("network1%d", i), true) } for i := 1; i <= n; i++ { err := dbs[0].LeaveNetwork(fmt.Sprintf("network0%d", i)) assert.NilError(t, err) } for i := 1; i <= n; i++ { err := dbs[1].LeaveNetwork(fmt.Sprintf("network1%d", i)) assert.NilError(t, err) } for i := 1; i <= n; i++ { dbs[1].verifyNetworkExistence(t, dbs[0].config.NodeID, fmt.Sprintf("network0%d", i), false) } for i := 1; i <= n; i++ { dbs[0].verifyNetworkExistence(t, dbs[1].config.NodeID, fmt.Sprintf("network1%d", i), false) } closeNetworkDBInstances(t, dbs) } func TestNetworkDBCRUDTableEntry(t *testing.T) { dbs := createNetworkDBInstances(t, 3, "node", DefaultConfig()) err := dbs[0].JoinNetwork("network1") assert.NilError(t, err) dbs[1].verifyNetworkExistence(t, dbs[0].config.NodeID, "network1", true) err = dbs[1].JoinNetwork("network1") assert.NilError(t, err) err = dbs[0].CreateEntry("test_table", "network1", "test_key", []byte("test_value")) assert.NilError(t, err) dbs[1].verifyEntryExistence(t, "test_table", "network1", "test_key", "test_value", true) dbs[2].verifyEntryExistence(t, "test_table", "network1", "test_key", "test_value", false) err = dbs[0].UpdateEntry("test_table", "network1", "test_key", []byte("test_updated_value")) assert.NilError(t, err) dbs[1].verifyEntryExistence(t, "test_table", "network1", "test_key", "test_updated_value", true) err = dbs[0].DeleteEntry("test_table", "network1", "test_key") assert.NilError(t, err) dbs[1].verifyEntryExistence(t, "test_table", "network1", "test_key", "", false) closeNetworkDBInstances(t, dbs) } func TestNetworkDBCRUDTableEntries(t *testing.T) { dbs := createNetworkDBInstances(t, 2, "node", DefaultConfig()) err := dbs[0].JoinNetwork("network1") assert.NilError(t, err) dbs[1].verifyNetworkExistence(t, dbs[0].config.NodeID, "network1", true) err = dbs[1].JoinNetwork("network1") assert.NilError(t, err) dbs[0].verifyNetworkExistence(t, dbs[1].config.NodeID, "network1", true) n := 10 for i := 1; i <= n; i++ { err = dbs[0].CreateEntry("test_table", "network1", fmt.Sprintf("test_key0%d", 
i), []byte(fmt.Sprintf("test_value0%d", i))) assert.NilError(t, err) } for i := 1; i <= n; i++ { err = dbs[1].CreateEntry("test_table", "network1", fmt.Sprintf("test_key1%d", i), []byte(fmt.Sprintf("test_value1%d", i))) assert.NilError(t, err) } for i := 1; i <= n; i++ { dbs[0].verifyEntryExistence(t, "test_table", "network1", fmt.Sprintf("test_key1%d", i), fmt.Sprintf("test_value1%d", i), true) assert.NilError(t, err) } for i := 1; i <= n; i++ { dbs[1].verifyEntryExistence(t, "test_table", "network1", fmt.Sprintf("test_key0%d", i), fmt.Sprintf("test_value0%d", i), true) assert.NilError(t, err) } // Verify deletes for i := 1; i <= n; i++ { err = dbs[0].DeleteEntry("test_table", "network1", fmt.Sprintf("test_key0%d", i)) assert.NilError(t, err) } for i := 1; i <= n; i++ { err = dbs[1].DeleteEntry("test_table", "network1", fmt.Sprintf("test_key1%d", i)) assert.NilError(t, err) } for i := 1; i <= n; i++ { dbs[0].verifyEntryExistence(t, "test_table", "network1", fmt.Sprintf("test_key1%d", i), "", false) assert.NilError(t, err) } for i := 1; i <= n; i++ { dbs[1].verifyEntryExistence(t, "test_table", "network1", fmt.Sprintf("test_key0%d", i), "", false) assert.NilError(t, err) } closeNetworkDBInstances(t, dbs) } func TestNetworkDBNodeLeave(t *testing.T) { dbs := createNetworkDBInstances(t, 2, "node", DefaultConfig()) err := dbs[0].JoinNetwork("network1") assert.NilError(t, err) err = dbs[1].JoinNetwork("network1") assert.NilError(t, err) err = dbs[0].CreateEntry("test_table", "network1", "test_key", []byte("test_value")) assert.NilError(t, err) dbs[1].verifyEntryExistence(t, "test_table", "network1", "test_key", "test_value", true) dbs[0].Close() dbs[1].verifyEntryExistence(t, "test_table", "network1", "test_key", "test_value", false) dbs[1].Close() } func TestNetworkDBWatch(t *testing.T) { dbs := createNetworkDBInstances(t, 2, "node", DefaultConfig()) err := dbs[0].JoinNetwork("network1") assert.NilError(t, err) err = dbs[1].JoinNetwork("network1") assert.NilError(t, err) ch, cancel := dbs[1].Watch("", "", "") err = dbs[0].CreateEntry("test_table", "network1", "test_key", []byte("test_value")) assert.NilError(t, err) testWatch(t, ch.C, CreateEvent{}, "test_table", "network1", "test_key", "test_value") err = dbs[0].UpdateEntry("test_table", "network1", "test_key", []byte("test_updated_value")) assert.NilError(t, err) testWatch(t, ch.C, UpdateEvent{}, "test_table", "network1", "test_key", "test_updated_value") err = dbs[0].DeleteEntry("test_table", "network1", "test_key") assert.NilError(t, err) testWatch(t, ch.C, DeleteEvent{}, "test_table", "network1", "test_key", "") cancel() closeNetworkDBInstances(t, dbs) } func TestNetworkDBBulkSync(t *testing.T) { dbs := createNetworkDBInstances(t, 2, "node", DefaultConfig()) err := dbs[0].JoinNetwork("network1") assert.NilError(t, err) dbs[1].verifyNetworkExistence(t, dbs[0].config.NodeID, "network1", true) n := 1000 for i := 1; i <= n; i++ { err = dbs[0].CreateEntry("test_table", "network1", fmt.Sprintf("test_key0%d", i), []byte(fmt.Sprintf("test_value0%d", i))) assert.NilError(t, err) } err = dbs[1].JoinNetwork("network1") assert.NilError(t, err) dbs[0].verifyNetworkExistence(t, dbs[1].config.NodeID, "network1", true) for i := 1; i <= n; i++ { dbs[1].verifyEntryExistence(t, "test_table", "network1", fmt.Sprintf("test_key0%d", i), fmt.Sprintf("test_value0%d", i), true) assert.NilError(t, err) } closeNetworkDBInstances(t, dbs) } func TestNetworkDBCRUDMediumCluster(t *testing.T) { n := 5 dbs := createNetworkDBInstances(t, n, "node", DefaultConfig()) for i 
:= 0; i < n; i++ { for j := 0; j < n; j++ { if i == j { continue } dbs[i].verifyNodeExistence(t, dbs[j].config.NodeID, true) } } for i := 0; i < n; i++ { err := dbs[i].JoinNetwork("network1") assert.NilError(t, err) } for i := 0; i < n; i++ { for j := 0; j < n; j++ { dbs[i].verifyNetworkExistence(t, dbs[j].config.NodeID, "network1", true) } } err := dbs[0].CreateEntry("test_table", "network1", "test_key", []byte("test_value")) assert.NilError(t, err) for i := 1; i < n; i++ { dbs[i].verifyEntryExistence(t, "test_table", "network1", "test_key", "test_value", true) } err = dbs[0].UpdateEntry("test_table", "network1", "test_key", []byte("test_updated_value")) assert.NilError(t, err) for i := 1; i < n; i++ { dbs[i].verifyEntryExistence(t, "test_table", "network1", "test_key", "test_updated_value", true) } err = dbs[0].DeleteEntry("test_table", "network1", "test_key") assert.NilError(t, err) for i := 1; i < n; i++ { dbs[i].verifyEntryExistence(t, "test_table", "network1", "test_key", "", false) } for i := 1; i < n; i++ { _, err = dbs[i].GetEntry("test_table", "network1", "test_key") assert.Check(t, is.ErrorContains(err, "")) assert.Check(t, is.Contains(err.Error(), "deleted and pending garbage collection"), err) } closeNetworkDBInstances(t, dbs) } func TestNetworkDBNodeJoinLeaveIteration(t *testing.T) { maxRetry := 5 dbs := createNetworkDBInstances(t, 2, "node", DefaultConfig()) // Single node Join/Leave err := dbs[0].JoinNetwork("network1") assert.NilError(t, err) if len(dbs[0].networkNodes["network1"]) != 1 { t.Fatalf("The networkNodes list has to have be 1 instead of %d", len(dbs[0].networkNodes["network1"])) } err = dbs[0].LeaveNetwork("network1") assert.NilError(t, err) if len(dbs[0].networkNodes["network1"]) != 0 { t.Fatalf("The networkNodes list has to have be 0 instead of %d", len(dbs[0].networkNodes["network1"])) } // Multiple nodes Join/Leave err = dbs[0].JoinNetwork("network1") assert.NilError(t, err) err = dbs[1].JoinNetwork("network1") assert.NilError(t, err) // Wait for the propagation on db[0] dbs[0].verifyNetworkExistence(t, dbs[1].config.NodeID, "network1", true) if len(dbs[0].networkNodes["network1"]) != 2 { t.Fatalf("The networkNodes list has to have be 2 instead of %d - %v", len(dbs[0].networkNodes["network1"]), dbs[0].networkNodes["network1"]) } if n, ok := dbs[0].networks[dbs[0].config.NodeID]["network1"]; !ok || n.leaving { t.Fatalf("The network should not be marked as leaving:%t", n.leaving) } // Wait for the propagation on db[1] dbs[1].verifyNetworkExistence(t, dbs[0].config.NodeID, "network1", true) if len(dbs[1].networkNodes["network1"]) != 2 { t.Fatalf("The networkNodes list has to have be 2 instead of %d - %v", len(dbs[1].networkNodes["network1"]), dbs[1].networkNodes["network1"]) } if n, ok := dbs[1].networks[dbs[1].config.NodeID]["network1"]; !ok || n.leaving { t.Fatalf("The network should not be marked as leaving:%t", n.leaving) } // Try a quick leave/join err = dbs[0].LeaveNetwork("network1") assert.NilError(t, err) err = dbs[0].JoinNetwork("network1") assert.NilError(t, err) for i := 0; i < maxRetry; i++ { if len(dbs[0].networkNodes["network1"]) == 2 { break } time.Sleep(1 * time.Second) } if len(dbs[0].networkNodes["network1"]) != 2 { t.Fatalf("The networkNodes list has to have be 2 instead of %d - %v", len(dbs[0].networkNodes["network1"]), dbs[0].networkNodes["network1"]) } for i := 0; i < maxRetry; i++ { if len(dbs[1].networkNodes["network1"]) == 2 { break } time.Sleep(1 * time.Second) } if len(dbs[1].networkNodes["network1"]) != 2 { t.Fatalf("The 
networkNodes list has to have be 2 instead of %d - %v", len(dbs[1].networkNodes["network1"]), dbs[1].networkNodes["network1"]) } closeNetworkDBInstances(t, dbs) } func TestNetworkDBGarbageCollection(t *testing.T) { keysWriteDelete := 5 config := DefaultConfig() config.reapEntryInterval = 30 * time.Second config.StatsPrintPeriod = 15 * time.Second dbs := createNetworkDBInstances(t, 3, "node", config) // 2 Nodes join network err := dbs[0].JoinNetwork("network1") assert.NilError(t, err) err = dbs[1].JoinNetwork("network1") assert.NilError(t, err) for i := 0; i < keysWriteDelete; i++ { err = dbs[i%2].CreateEntry("testTable", "network1", "key-"+strconv.Itoa(i), []byte("value")) assert.NilError(t, err) } time.Sleep(time.Second) for i := 0; i < keysWriteDelete; i++ { err = dbs[i%2].DeleteEntry("testTable", "network1", "key-"+strconv.Itoa(i)) assert.NilError(t, err) } for i := 0; i < 2; i++ { assert.Check(t, is.Equal(keysWriteDelete, dbs[i].networks[dbs[i].config.NodeID]["network1"].entriesNumber), "entries number should match") } // from this point the timer for the garbage collection started, wait 5 seconds and then join a new node time.Sleep(5 * time.Second) err = dbs[2].JoinNetwork("network1") assert.NilError(t, err) for i := 0; i < 3; i++ { assert.Check(t, is.Equal(keysWriteDelete, dbs[i].networks[dbs[i].config.NodeID]["network1"].entriesNumber), "entries number should match") } // at this point the entries should had been all deleted time.Sleep(30 * time.Second) for i := 0; i < 3; i++ { assert.Check(t, is.Equal(0, dbs[i].networks[dbs[i].config.NodeID]["network1"].entriesNumber), "entries should had been garbage collected") } // make sure that entries are not coming back time.Sleep(15 * time.Second) for i := 0; i < 3; i++ { assert.Check(t, is.Equal(0, dbs[i].networks[dbs[i].config.NodeID]["network1"].entriesNumber), "entries should had been garbage collected") } closeNetworkDBInstances(t, dbs) } func TestFindNode(t *testing.T) { dbs := createNetworkDBInstances(t, 1, "node", DefaultConfig()) dbs[0].nodes["active"] = &node{Node: memberlist.Node{Name: "active"}} dbs[0].failedNodes["failed"] = &node{Node: memberlist.Node{Name: "failed"}} dbs[0].leftNodes["left"] = &node{Node: memberlist.Node{Name: "left"}} // active nodes is 2 because the testing node is in the list assert.Check(t, is.Len(dbs[0].nodes, 2)) assert.Check(t, is.Len(dbs[0].failedNodes, 1)) assert.Check(t, is.Len(dbs[0].leftNodes, 1)) n, currState, m := dbs[0].findNode("active") assert.Check(t, n != nil) assert.Check(t, is.Equal("active", n.Name)) assert.Check(t, is.Equal(nodeActiveState, currState)) assert.Check(t, m != nil) // delete the entry manually delete(m, "active") // test if can be still find n, currState, m = dbs[0].findNode("active") assert.Check(t, is.Nil(n)) assert.Check(t, is.Equal(nodeNotFound, currState)) assert.Check(t, is.Nil(m)) n, currState, m = dbs[0].findNode("failed") assert.Check(t, n != nil) assert.Check(t, is.Equal("failed", n.Name)) assert.Check(t, is.Equal(nodeFailedState, currState)) assert.Check(t, m != nil) // find and remove n, currState, m = dbs[0].findNode("left") assert.Check(t, n != nil) assert.Check(t, is.Equal("left", n.Name)) assert.Check(t, is.Equal(nodeLeftState, currState)) assert.Check(t, m != nil) delete(m, "left") n, currState, m = dbs[0].findNode("left") assert.Check(t, is.Nil(n)) assert.Check(t, is.Equal(nodeNotFound, currState)) assert.Check(t, is.Nil(m)) closeNetworkDBInstances(t, dbs) } func TestChangeNodeState(t *testing.T) { dbs := createNetworkDBInstances(t, 1, "node", 
DefaultConfig()) dbs[0].nodes["node1"] = &node{Node: memberlist.Node{Name: "node1"}} dbs[0].nodes["node2"] = &node{Node: memberlist.Node{Name: "node2"}} dbs[0].nodes["node3"] = &node{Node: memberlist.Node{Name: "node3"}} // active nodes is 4 because the testing node is in the list assert.Check(t, is.Len(dbs[0].nodes, 4)) n, currState, m := dbs[0].findNode("node1") assert.Check(t, n != nil) assert.Check(t, is.Equal(nodeActiveState, currState)) assert.Check(t, is.Equal("node1", n.Name)) assert.Check(t, m != nil) // node1 to failed dbs[0].changeNodeState("node1", nodeFailedState) n, currState, m = dbs[0].findNode("node1") assert.Check(t, n != nil) assert.Check(t, is.Equal(nodeFailedState, currState)) assert.Check(t, is.Equal("node1", n.Name)) assert.Check(t, m != nil) assert.Check(t, time.Duration(0) != n.reapTime) // node1 back to active dbs[0].changeNodeState("node1", nodeActiveState) n, currState, m = dbs[0].findNode("node1") assert.Check(t, n != nil) assert.Check(t, is.Equal(nodeActiveState, currState)) assert.Check(t, is.Equal("node1", n.Name)) assert.Check(t, m != nil) assert.Check(t, is.Equal(time.Duration(0), n.reapTime)) // node1 to left dbs[0].changeNodeState("node1", nodeLeftState) dbs[0].changeNodeState("node2", nodeLeftState) dbs[0].changeNodeState("node3", nodeLeftState) n, currState, m = dbs[0].findNode("node1") assert.Check(t, n != nil) assert.Check(t, is.Equal(nodeLeftState, currState)) assert.Check(t, is.Equal("node1", n.Name)) assert.Check(t, m != nil) assert.Check(t, time.Duration(0) != n.reapTime) n, currState, m = dbs[0].findNode("node2") assert.Check(t, n != nil) assert.Check(t, is.Equal(nodeLeftState, currState)) assert.Check(t, is.Equal("node2", n.Name)) assert.Check(t, m != nil) assert.Check(t, time.Duration(0) != n.reapTime) n, currState, m = dbs[0].findNode("node3") assert.Check(t, n != nil) assert.Check(t, is.Equal(nodeLeftState, currState)) assert.Check(t, is.Equal("node3", n.Name)) assert.Check(t, m != nil) assert.Check(t, time.Duration(0) != n.reapTime) // active nodes is 1 because the testing node is in the list assert.Check(t, is.Len(dbs[0].nodes, 1)) assert.Check(t, is.Len(dbs[0].failedNodes, 0)) assert.Check(t, is.Len(dbs[0].leftNodes, 3)) closeNetworkDBInstances(t, dbs) } func TestNodeReincarnation(t *testing.T) { dbs := createNetworkDBInstances(t, 1, "node", DefaultConfig()) dbs[0].nodes["node1"] = &node{Node: memberlist.Node{Name: "node1", Addr: net.ParseIP("192.168.1.1")}} dbs[0].leftNodes["node2"] = &node{Node: memberlist.Node{Name: "node2", Addr: net.ParseIP("192.168.1.2")}} dbs[0].failedNodes["node3"] = &node{Node: memberlist.Node{Name: "node3", Addr: net.ParseIP("192.168.1.3")}} // active nodes is 2 because the testing node is in the list assert.Check(t, is.Len(dbs[0].nodes, 2)) assert.Check(t, is.Len(dbs[0].failedNodes, 1)) assert.Check(t, is.Len(dbs[0].leftNodes, 1)) b := dbs[0].purgeReincarnation(&memberlist.Node{Name: "node4", Addr: net.ParseIP("192.168.1.1")}) assert.Check(t, b) dbs[0].nodes["node4"] = &node{Node: memberlist.Node{Name: "node4", Addr: net.ParseIP("192.168.1.1")}} b = dbs[0].purgeReincarnation(&memberlist.Node{Name: "node5", Addr: net.ParseIP("192.168.1.2")}) assert.Check(t, b) dbs[0].nodes["node5"] = &node{Node: memberlist.Node{Name: "node5", Addr: net.ParseIP("192.168.1.1")}} b = dbs[0].purgeReincarnation(&memberlist.Node{Name: "node6", Addr: net.ParseIP("192.168.1.3")}) assert.Check(t, b) dbs[0].nodes["node6"] = &node{Node: memberlist.Node{Name: "node6", Addr: net.ParseIP("192.168.1.1")}} b = 
dbs[0].purgeReincarnation(&memberlist.Node{Name: "node6", Addr: net.ParseIP("192.168.1.10")}) assert.Check(t, !b) // active nodes is 1 because the testing node is in the list assert.Check(t, is.Len(dbs[0].nodes, 4)) assert.Check(t, is.Len(dbs[0].failedNodes, 0)) assert.Check(t, is.Len(dbs[0].leftNodes, 3)) closeNetworkDBInstances(t, dbs) } func TestParallelCreate(t *testing.T) { dbs := createNetworkDBInstances(t, 1, "node", DefaultConfig()) startCh := make(chan int) doneCh := make(chan error) var success int32 for i := 0; i < 20; i++ { go func() { <-startCh err := dbs[0].CreateEntry("testTable", "testNetwork", "key", []byte("value")) if err == nil { atomic.AddInt32(&success, 1) } doneCh <- err }() } close(startCh) for i := 0; i < 20; i++ { <-doneCh } close(doneCh) // Only 1 write should have succeeded assert.Check(t, is.Equal(int32(1), success)) closeNetworkDBInstances(t, dbs) } func TestParallelDelete(t *testing.T) { dbs := createNetworkDBInstances(t, 1, "node", DefaultConfig()) err := dbs[0].CreateEntry("testTable", "testNetwork", "key", []byte("value")) assert.NilError(t, err) startCh := make(chan int) doneCh := make(chan error) var success int32 for i := 0; i < 20; i++ { go func() { <-startCh err := dbs[0].DeleteEntry("testTable", "testNetwork", "key") if err == nil { atomic.AddInt32(&success, 1) } doneCh <- err }() } close(startCh) for i := 0; i < 20; i++ { <-doneCh } close(doneCh) // Only 1 write should have succeeded assert.Check(t, is.Equal(int32(1), success)) closeNetworkDBInstances(t, dbs) } func TestNetworkDBIslands(t *testing.T) { pollTimeout := func() time.Duration { const defaultTimeout = 120 * time.Second dl, ok := t.Deadline() if !ok { return defaultTimeout } if d := time.Until(dl); d <= defaultTimeout { return d } return defaultTimeout } logrus.SetLevel(logrus.DebugLevel) conf := DefaultConfig() // Shorten durations to speed up test execution. 
conf.rejoinClusterDuration = conf.rejoinClusterDuration / 10 conf.rejoinClusterInterval = conf.rejoinClusterInterval / 10 dbs := createNetworkDBInstances(t, 5, "node", conf) // Get the node IP used currently node := dbs[0].nodes[dbs[0].config.NodeID] baseIPStr := node.Addr.String() // Node 0,1,2 are going to be the 3 bootstrap nodes members := []string{fmt.Sprintf("%s:%d", baseIPStr, dbs[0].config.BindPort), fmt.Sprintf("%s:%d", baseIPStr, dbs[1].config.BindPort), fmt.Sprintf("%s:%d", baseIPStr, dbs[2].config.BindPort)} // Rejoining will update the list of the bootstrap members for i := 3; i < 5; i++ { t.Logf("Re-joining: %d", i) assert.Check(t, dbs[i].Join(members)) } // Now the 3 bootstrap nodes will cleanly leave, and will be properly removed from the other 2 nodes for i := 0; i < 3; i++ { logrus.Infof("node %d leaving", i) dbs[i].Close() } checkDBs := make(map[string]*NetworkDB) for i := 3; i < 5; i++ { db := dbs[i] checkDBs[db.config.Hostname] = db } // Give some time to let the system propagate the messages and free up the ports check := func(t poll.LogT) poll.Result { // Verify that the nodes are actually all gone and marked appropiately for name, db := range checkDBs { db.RLock() if (len(db.leftNodes) != 3) || (len(db.failedNodes) != 0) { for name := range db.leftNodes { t.Logf("%s: Node %s left", db.config.Hostname, name) } for name := range db.failedNodes { t.Logf("%s: Node %s failed", db.config.Hostname, name) } db.RUnlock() return poll.Continue("%s:Waiting for all nodes to cleanly leave, left: %d, failed nodes: %d", name, len(db.leftNodes), len(db.failedNodes)) } db.RUnlock() t.Logf("%s: OK", name) delete(checkDBs, name) } return poll.Success() } poll.WaitOn(t, check, poll.WithDelay(time.Second), poll.WithTimeout(pollTimeout())) // Spawn again the first 3 nodes with different names but same IP:port for i := 0; i < 3; i++ { logrus.Infof("node %d coming back", i) dbs[i].config.NodeID = stringid.TruncateID(stringid.GenerateRandomID()) dbs[i] = launchNode(t, *dbs[i].config) } // Give some time for the reconnect routine to run, it runs every 6s. check = func(t poll.LogT) poll.Result { // Verify that the cluster is again all connected. Note that the 3 previous node did not do any join for i := 0; i < 5; i++ { db := dbs[i] db.RLock() if len(db.nodes) != 5 { db.RUnlock() return poll.Continue("%s:Waiting to connect to all nodes", dbs[i].config.Hostname) } if len(db.failedNodes) != 0 { db.RUnlock() return poll.Continue("%s:Waiting for 0 failedNodes", dbs[i].config.Hostname) } if i < 3 { // nodes from 0 to 3 has no left nodes if len(db.leftNodes) != 0 { db.RUnlock() return poll.Continue("%s:Waiting to have no leftNodes", dbs[i].config.Hostname) } } else { // nodes from 4 to 5 has the 3 previous left nodes if len(db.leftNodes) != 3 { db.RUnlock() return poll.Continue("%s:Waiting to have 3 leftNodes", dbs[i].config.Hostname) } } db.RUnlock() } return poll.Success() } poll.WaitOn(t, check, poll.WithDelay(time.Second), poll.WithTimeout(pollTimeout())) closeNetworkDBInstances(t, dbs) }
rvolosatovs
34058bc1d2ec49b06d56476c2ef41a7c391eb5ad
3aa7d80e042fc5012574649e7005b0ad6ac263ca
Not very familiar with this either, but seems to make sense. Perhaps @arkodg (if he's still around) would know
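For context, a minimal standalone sketch of the deadline-aware timeout pattern that the reworked `TestNetworkDBIslands` in this record's after-state uses (runnable as a `_test.go` file; the helper name, fallback value, and surrounding test body here are illustrative, not the PR's exact code): the poll timeout is capped at whatever time remains before the `go test` deadline, falling back to a fixed default when no deadline is set.

```go
package example

import (
	"testing"
	"time"
)

// pollTimeout returns the smaller of the fallback timeout and the time left
// before the test deadline (set via `go test -timeout`; t.Deadline() is
// available since Go 1.15).
func pollTimeout(t *testing.T, fallback time.Duration) time.Duration {
	dl, ok := t.Deadline()
	if !ok {
		return fallback
	}
	if remaining := time.Until(dl); remaining < fallback {
		return remaining
	}
	return fallback
}

func TestDeadlineAwarePolling(t *testing.T) {
	timeout := pollTimeout(t, 120*time.Second)
	t.Logf("polling for at most %v", timeout)
}
```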
thaJeztah
4,546
moby/moby
42,623
Remove containerd "platform" dependency from client
Note: once / if https://github.com/moby/moby/pull/42464 is merged/accepted, we should consider replacing `github.com/opencontainers/image-spec/specs-go/v1.Platform` with our own `ImagePlatform`, to remove the OCI spec from the API (if possible). If we do so, we can also remove the `formatPlatform` function here (as it would be provided by `ImagePlatform.String()`). This removes some of the containerd dependencies from the client: - client: remove unused Platform field from configWrapper This field was added in 7a9cb29fb980c0ab3928272cdc24c7089b2fcf64, but appears to be unused, so removing it. - client: remove containerd "platform" dependency After this, there's no _direct_ dependency on containerd, but the `errdefs` package still depends on containerd's errdefs package. I'll have a look if we can refactor so that it doesn't end up in the client package.
null
2021-07-12 10:41:26+00:00
2021-07-31 16:45:29+00:00
client/container_create.go
package client // import "github.com/docker/docker/client" import ( "context" "encoding/json" "net/url" "github.com/containerd/containerd/platforms" "github.com/docker/docker/api/types/container" "github.com/docker/docker/api/types/network" "github.com/docker/docker/api/types/versions" specs "github.com/opencontainers/image-spec/specs-go/v1" ) type configWrapper struct { *container.Config HostConfig *container.HostConfig NetworkingConfig *network.NetworkingConfig Platform *specs.Platform } // ContainerCreate creates a new container based on the given configuration. // It can be associated with a name, but it's not mandatory. func (cli *Client) ContainerCreate(ctx context.Context, config *container.Config, hostConfig *container.HostConfig, networkingConfig *network.NetworkingConfig, platform *specs.Platform, containerName string) (container.ContainerCreateCreatedBody, error) { var response container.ContainerCreateCreatedBody if err := cli.NewVersionError("1.25", "stop timeout"); config != nil && config.StopTimeout != nil && err != nil { return response, err } // When using API 1.24 and under, the client is responsible for removing the container if hostConfig != nil && versions.LessThan(cli.ClientVersion(), "1.25") { hostConfig.AutoRemove = false } if err := cli.NewVersionError("1.41", "specify container image platform"); platform != nil && err != nil { return response, err } query := url.Values{} if platform != nil { query.Set("platform", platforms.Format(*platform)) } if containerName != "" { query.Set("name", containerName) } body := configWrapper{ Config: config, HostConfig: hostConfig, NetworkingConfig: networkingConfig, } serverResp, err := cli.post(ctx, "/containers/create", query, body, nil) defer ensureReaderClosed(serverResp) if err != nil { return response, err } err = json.NewDecoder(serverResp.body).Decode(&response) return response, err }
package client // import "github.com/docker/docker/client" import ( "context" "encoding/json" "net/url" "path" "github.com/docker/docker/api/types/container" "github.com/docker/docker/api/types/network" "github.com/docker/docker/api/types/versions" specs "github.com/opencontainers/image-spec/specs-go/v1" ) type configWrapper struct { *container.Config HostConfig *container.HostConfig NetworkingConfig *network.NetworkingConfig } // ContainerCreate creates a new container based on the given configuration. // It can be associated with a name, but it's not mandatory. func (cli *Client) ContainerCreate(ctx context.Context, config *container.Config, hostConfig *container.HostConfig, networkingConfig *network.NetworkingConfig, platform *specs.Platform, containerName string) (container.ContainerCreateCreatedBody, error) { var response container.ContainerCreateCreatedBody if err := cli.NewVersionError("1.25", "stop timeout"); config != nil && config.StopTimeout != nil && err != nil { return response, err } // When using API 1.24 and under, the client is responsible for removing the container if hostConfig != nil && versions.LessThan(cli.ClientVersion(), "1.25") { hostConfig.AutoRemove = false } if err := cli.NewVersionError("1.41", "specify container image platform"); platform != nil && err != nil { return response, err } query := url.Values{} if p := formatPlatform(platform); p != "" { query.Set("platform", p) } if containerName != "" { query.Set("name", containerName) } body := configWrapper{ Config: config, HostConfig: hostConfig, NetworkingConfig: networkingConfig, } serverResp, err := cli.post(ctx, "/containers/create", query, body, nil) defer ensureReaderClosed(serverResp) if err != nil { return response, err } err = json.NewDecoder(serverResp.body).Decode(&response) return response, err } // formatPlatform returns a formatted string representing platform (e.g. linux/arm/v7). // // Similar to containerd's platforms.Format(), but does allow components to be // omitted (e.g. pass "architecture" only, without "os": // https://github.com/containerd/containerd/blob/v1.5.2/platforms/platforms.go#L243-L263 func formatPlatform(platform *specs.Platform) string { if platform == nil { return "" } return path.Join(platform.OS, platform.Architecture, platform.Variant) }
thaJeztah
f07e53e0bb97c6364032bb14a020c50118eb7394
0b39cc2e57828c411a82ffebf7f52cf724e4fbe2
Let me call out here that containerd's format would return `"unknown"` in this case. As far as I can see, we have no use for the `"unknown"` value, and would treat it as equivalent to `""` / not set, so I went for "don't set the parameter if we don't have anything useful", but perhaps I overlooked a specific scenario 👀 👀 👀 👀
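For reference, a small standalone sketch of the behaviour discussed here (`formatPlatform` mirrors the `path.Join`-based helper from the PR's after-state; the `main` demo values are illustrative only): `path.Join` drops empty components, so an empty platform formats to `""` and the caller can simply skip setting the `platform` query parameter, instead of sending the `"unknown"` value that containerd's `platforms.Format` produces when the OS is empty.

```go
package main

import (
	"fmt"
	"path"

	specs "github.com/opencontainers/image-spec/specs-go/v1"
)

// formatPlatform joins the non-empty platform components, matching the helper
// introduced in this PR; a nil or zero-value platform formats to "".
func formatPlatform(p *specs.Platform) string {
	if p == nil {
		return ""
	}
	return path.Join(p.OS, p.Architecture, p.Variant)
}

func main() {
	fmt.Printf("%q\n", formatPlatform(nil))                                    // ""
	fmt.Printf("%q\n", formatPlatform(&specs.Platform{}))                      // ""
	fmt.Printf("%q\n", formatPlatform(&specs.Platform{Architecture: "arm64"})) // "arm64"
	fmt.Printf("%q\n", formatPlatform(&specs.Platform{
		OS: "linux", Architecture: "arm", Variant: "v7",
	})) // "linux/arm/v7"
}
```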
thaJeztah
4,547
moby/moby
42,623
Remove containerd "platform" dependency from client
Note: once / if https://github.com/moby/moby/pull/42464 is merged/accepted, we should consider replacing `github.com/opencontainers/image-spec/specs-go/v1.Platform` with our own `ImagePlatform`, to remove the OCI spec from the API (if possible). If we do so, we can also remove the `formatPlatform` function here (as it would be provided by `ImagePlatform.String()`). This removes some of the containerd dependencies from the client: - client: remove unused Platform field from configWrapper This field was added in 7a9cb29fb980c0ab3928272cdc24c7089b2fcf64, but appears to be unused, so removing it. - client: remove containerd "platform" dependency After this, there's no _direct_ dependency on containerd, but the `errdefs` package still depends on containerd's errdefs package. I'll have a look if we can refactor so that it doesn't end up in the client package.
null
2021-07-12 10:41:26+00:00
2021-07-31 16:45:29+00:00
client/container_create.go
package client // import "github.com/docker/docker/client" import ( "context" "encoding/json" "net/url" "github.com/containerd/containerd/platforms" "github.com/docker/docker/api/types/container" "github.com/docker/docker/api/types/network" "github.com/docker/docker/api/types/versions" specs "github.com/opencontainers/image-spec/specs-go/v1" ) type configWrapper struct { *container.Config HostConfig *container.HostConfig NetworkingConfig *network.NetworkingConfig Platform *specs.Platform } // ContainerCreate creates a new container based on the given configuration. // It can be associated with a name, but it's not mandatory. func (cli *Client) ContainerCreate(ctx context.Context, config *container.Config, hostConfig *container.HostConfig, networkingConfig *network.NetworkingConfig, platform *specs.Platform, containerName string) (container.ContainerCreateCreatedBody, error) { var response container.ContainerCreateCreatedBody if err := cli.NewVersionError("1.25", "stop timeout"); config != nil && config.StopTimeout != nil && err != nil { return response, err } // When using API 1.24 and under, the client is responsible for removing the container if hostConfig != nil && versions.LessThan(cli.ClientVersion(), "1.25") { hostConfig.AutoRemove = false } if err := cli.NewVersionError("1.41", "specify container image platform"); platform != nil && err != nil { return response, err } query := url.Values{} if platform != nil { query.Set("platform", platforms.Format(*platform)) } if containerName != "" { query.Set("name", containerName) } body := configWrapper{ Config: config, HostConfig: hostConfig, NetworkingConfig: networkingConfig, } serverResp, err := cli.post(ctx, "/containers/create", query, body, nil) defer ensureReaderClosed(serverResp) if err != nil { return response, err } err = json.NewDecoder(serverResp.body).Decode(&response) return response, err }
package client // import "github.com/docker/docker/client" import ( "context" "encoding/json" "net/url" "path" "github.com/docker/docker/api/types/container" "github.com/docker/docker/api/types/network" "github.com/docker/docker/api/types/versions" specs "github.com/opencontainers/image-spec/specs-go/v1" ) type configWrapper struct { *container.Config HostConfig *container.HostConfig NetworkingConfig *network.NetworkingConfig } // ContainerCreate creates a new container based on the given configuration. // It can be associated with a name, but it's not mandatory. func (cli *Client) ContainerCreate(ctx context.Context, config *container.Config, hostConfig *container.HostConfig, networkingConfig *network.NetworkingConfig, platform *specs.Platform, containerName string) (container.ContainerCreateCreatedBody, error) { var response container.ContainerCreateCreatedBody if err := cli.NewVersionError("1.25", "stop timeout"); config != nil && config.StopTimeout != nil && err != nil { return response, err } // When using API 1.24 and under, the client is responsible for removing the container if hostConfig != nil && versions.LessThan(cli.ClientVersion(), "1.25") { hostConfig.AutoRemove = false } if err := cli.NewVersionError("1.41", "specify container image platform"); platform != nil && err != nil { return response, err } query := url.Values{} if p := formatPlatform(platform); p != "" { query.Set("platform", p) } if containerName != "" { query.Set("name", containerName) } body := configWrapper{ Config: config, HostConfig: hostConfig, NetworkingConfig: networkingConfig, } serverResp, err := cli.post(ctx, "/containers/create", query, body, nil) defer ensureReaderClosed(serverResp) if err != nil { return response, err } err = json.NewDecoder(serverResp.body).Decode(&response) return response, err } // formatPlatform returns a formatted string representing platform (e.g. linux/arm/v7). // // Similar to containerd's platforms.Format(), but does allow components to be // omitted (e.g. pass "architecture" only, without "os": // https://github.com/containerd/containerd/blob/v1.5.2/platforms/platforms.go#L243-L263 func formatPlatform(platform *specs.Platform) string { if platform == nil { return "" } return path.Join(platform.OS, platform.Architecture, platform.Variant) }
thaJeztah
f07e53e0bb97c6364032bb14a020c50118eb7394
0b39cc2e57828c411a82ffebf7f52cf724e4fbe2
Seems like we really want `path.Join(platform.OS, platform.Arch....)`. This will handle empty values as desired. https://play.golang.org/p/ztICsyudOrB Not sure if you want to validate that arch is always set if variant is also set.
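For reference, a minimal standalone sketch of how `path.Join` treats empty components (the same behaviour the playground link demonstrates); the last line shows the variant-without-architecture case that the validation remark is about:

```go
package main

import (
	"fmt"
	"path"
)

func main() {
	// path.Join skips empty elements, so partially filled platforms still
	// produce something sensible.
	fmt.Println(path.Join("linux", "arm", "v7")) // "linux/arm/v7"
	fmt.Println(path.Join("linux", "arm64", "")) // "linux/arm64"
	fmt.Println(path.Join("", "arm64", ""))      // "arm64" (architecture only)
	fmt.Println(path.Join("linux", "", ""))      // "linux" (OS only)

	// The ambiguous case: a variant without an architecture collapses into
	// "linux/v7", which is why validating "arch must be set when variant is
	// set" might be worthwhile.
	fmt.Println(path.Join("linux", "", "v7")) // "linux/v7"
}
```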
cpuguy83
4,548
moby/moby
42,623
Remove containerd "platform" dependency from client
Note: once / if https://github.com/moby/moby/pull/42464 is merged/accepted, we should consider replacing `github.com/opencontainers/image-spec/specs-go/v1.Platform` with our own `ImagePlatform`, to remove the OCI spec from the API (if possible). If we do so, we can also remove the `formatPlatform` function here (as it would be provided by `ImagePlatform.String()`). This removes some of the containerd dependencies from the client: - client: remove unused Platform field from configWrapper This field was added in 7a9cb29fb980c0ab3928272cdc24c7089b2fcf64, but appears to be unused, so removing it. - client: remove containerd "platform" dependency After this, there's no _direct_ dependency on containerd, but the `errdefs` package still depends on containerd's errdefs package. I'll have a look if we can refactor so that it doesn't end up in the client package.
null
2021-07-12 10:41:26+00:00
2021-07-31 16:45:29+00:00
client/container_create.go
package client // import "github.com/docker/docker/client" import ( "context" "encoding/json" "net/url" "github.com/containerd/containerd/platforms" "github.com/docker/docker/api/types/container" "github.com/docker/docker/api/types/network" "github.com/docker/docker/api/types/versions" specs "github.com/opencontainers/image-spec/specs-go/v1" ) type configWrapper struct { *container.Config HostConfig *container.HostConfig NetworkingConfig *network.NetworkingConfig Platform *specs.Platform } // ContainerCreate creates a new container based on the given configuration. // It can be associated with a name, but it's not mandatory. func (cli *Client) ContainerCreate(ctx context.Context, config *container.Config, hostConfig *container.HostConfig, networkingConfig *network.NetworkingConfig, platform *specs.Platform, containerName string) (container.ContainerCreateCreatedBody, error) { var response container.ContainerCreateCreatedBody if err := cli.NewVersionError("1.25", "stop timeout"); config != nil && config.StopTimeout != nil && err != nil { return response, err } // When using API 1.24 and under, the client is responsible for removing the container if hostConfig != nil && versions.LessThan(cli.ClientVersion(), "1.25") { hostConfig.AutoRemove = false } if err := cli.NewVersionError("1.41", "specify container image platform"); platform != nil && err != nil { return response, err } query := url.Values{} if platform != nil { query.Set("platform", platforms.Format(*platform)) } if containerName != "" { query.Set("name", containerName) } body := configWrapper{ Config: config, HostConfig: hostConfig, NetworkingConfig: networkingConfig, } serverResp, err := cli.post(ctx, "/containers/create", query, body, nil) defer ensureReaderClosed(serverResp) if err != nil { return response, err } err = json.NewDecoder(serverResp.body).Decode(&response) return response, err }
package client // import "github.com/docker/docker/client" import ( "context" "encoding/json" "net/url" "path" "github.com/docker/docker/api/types/container" "github.com/docker/docker/api/types/network" "github.com/docker/docker/api/types/versions" specs "github.com/opencontainers/image-spec/specs-go/v1" ) type configWrapper struct { *container.Config HostConfig *container.HostConfig NetworkingConfig *network.NetworkingConfig } // ContainerCreate creates a new container based on the given configuration. // It can be associated with a name, but it's not mandatory. func (cli *Client) ContainerCreate(ctx context.Context, config *container.Config, hostConfig *container.HostConfig, networkingConfig *network.NetworkingConfig, platform *specs.Platform, containerName string) (container.ContainerCreateCreatedBody, error) { var response container.ContainerCreateCreatedBody if err := cli.NewVersionError("1.25", "stop timeout"); config != nil && config.StopTimeout != nil && err != nil { return response, err } // When using API 1.24 and under, the client is responsible for removing the container if hostConfig != nil && versions.LessThan(cli.ClientVersion(), "1.25") { hostConfig.AutoRemove = false } if err := cli.NewVersionError("1.41", "specify container image platform"); platform != nil && err != nil { return response, err } query := url.Values{} if p := formatPlatform(platform); p != "" { query.Set("platform", p) } if containerName != "" { query.Set("name", containerName) } body := configWrapper{ Config: config, HostConfig: hostConfig, NetworkingConfig: networkingConfig, } serverResp, err := cli.post(ctx, "/containers/create", query, body, nil) defer ensureReaderClosed(serverResp) if err != nil { return response, err } err = json.NewDecoder(serverResp.body).Decode(&response) return response, err } // formatPlatform returns a formatted string representing platform (e.g. linux/arm/v7). // // Similar to containerd's platforms.Format(), but does allow components to be // omitted (e.g. pass "architecture" only, without "os": // https://github.com/containerd/containerd/blob/v1.5.2/platforms/platforms.go#L243-L263 func formatPlatform(platform *specs.Platform) string { if platform == nil { return "" } return path.Join(platform.OS, platform.Architecture, platform.Variant) }
thaJeztah
f07e53e0bb97c6364032bb14a020c50118eb7394
0b39cc2e57828c411a82ffebf7f52cf724e4fbe2
Oh, nice! TIL `path.Join` skips the empty values (I assumed it would keep those in). Yes, I like that. > Not sure if you want to validate that arch is always set if variant is also set. Yeah, perhaps stop at the first empty value (in which case `path.Join` would not be needed 🤔). We could keep it "simple" (for now), and just let the daemon handle this; wdyt? I was writing down notes on serializing / deserializing and normalizing of oci.Platform (to start a discussion on the OCI spec), because currently it looks like there's no standard approach anywhere. To add to the "silliness": I was looking at the CLI code yesterday: https://github.com/docker/cli/blob/v20.10.7/cli/command/container/create.go#L244-L249 ```go if opts.platform != "" && versions.GreaterThanOrEqualTo(dockerCli.Client().ClientVersion(), "1.41") { p, err := platforms.Parse(opts.platform) if err != nil { return nil, errors.Wrap(err, "error parsing specified platform") } platform = &p ``` So, we: 1. pass a string representation of platform (through the `--platform` flag) 2. parse it on the CLI to convert it to an `oci.Platform{}` 3. pass it to the client as argument 4. convert it to a string 5. send it over the API as a string 6. parse the string and convert it to an `oci.Platform{}` 7. pass it to the backend as argument Looking at that, I think we should consider either having the client (and cli) not bother with `oci.Platform{}` at all (pass a string, and let the daemon deal with it), or making the API accept an `oci.Platform{}` (or equivalent) in the request.
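A rough sketch of the seven-step round trip listed above, under the assumption that both ends use containerd's `platforms.Parse()` (as the quoted CLI snippet does) and that the client formats with the `formatPlatform` helper from this PR; the actual daemon-side code path is not reproduced here:

```go
package main

import (
	"fmt"
	"path"

	"github.com/containerd/containerd/platforms"
	specs "github.com/opencontainers/image-spec/specs-go/v1"
)

// formatPlatform is the client-side helper added in this PR.
func formatPlatform(platform *specs.Platform) string {
	if platform == nil {
		return ""
	}
	return path.Join(platform.OS, platform.Architecture, platform.Variant)
}

func main() {
	// 1-2. CLI: the --platform string is parsed into an oci.Platform.
	p, err := platforms.Parse("linux/arm/v7")
	if err != nil {
		panic(err)
	}

	// 3-5. Client: the struct is turned back into a string and sent as the
	// "platform" query parameter.
	wire := formatPlatform(&p)
	fmt.Println("sent over the API:", wire) // "linux/arm/v7"

	// 6-7. Daemon: the string is parsed into an oci.Platform again before it
	// is handed to the backend.
	back, err := platforms.Parse(wire)
	if err != nil {
		panic(err)
	}
	fmt.Printf("parsed again on the daemon side: %+v\n", back)
}
```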
thaJeztah
4,549
moby/moby
42,623
Remove containerd "platform" dependency from client
Note: once / if https://github.com/moby/moby/pull/42464 is merged/accepted, we should consider replacing `github.com/opencontainers/image-spec/specs-go/v1.Platform` with our own `ImagePlatform`, to remove the OCI spec from the API (if possible). If we do so, we can also remove the `formatPlatform` function here (as it would be provided by `ImagePlatform.String()`). This removes some of the containerd dependencies from the client: - client: remove unused Platform field from configWrapper This field was added in 7a9cb29fb980c0ab3928272cdc24c7089b2fcf64, but appears to be unused, so removing it. - client: remove containerd "platform" dependency After this, there's no _direct_ dependency on containerd, but the `errdefs` package still depends on containerd's errdefs package. I'll have a look if we can refactor so that it doesn't end up in the client package.
null
2021-07-12 10:41:26+00:00
2021-07-31 16:45:29+00:00
client/container_create.go
package client // import "github.com/docker/docker/client" import ( "context" "encoding/json" "net/url" "github.com/containerd/containerd/platforms" "github.com/docker/docker/api/types/container" "github.com/docker/docker/api/types/network" "github.com/docker/docker/api/types/versions" specs "github.com/opencontainers/image-spec/specs-go/v1" ) type configWrapper struct { *container.Config HostConfig *container.HostConfig NetworkingConfig *network.NetworkingConfig Platform *specs.Platform } // ContainerCreate creates a new container based on the given configuration. // It can be associated with a name, but it's not mandatory. func (cli *Client) ContainerCreate(ctx context.Context, config *container.Config, hostConfig *container.HostConfig, networkingConfig *network.NetworkingConfig, platform *specs.Platform, containerName string) (container.ContainerCreateCreatedBody, error) { var response container.ContainerCreateCreatedBody if err := cli.NewVersionError("1.25", "stop timeout"); config != nil && config.StopTimeout != nil && err != nil { return response, err } // When using API 1.24 and under, the client is responsible for removing the container if hostConfig != nil && versions.LessThan(cli.ClientVersion(), "1.25") { hostConfig.AutoRemove = false } if err := cli.NewVersionError("1.41", "specify container image platform"); platform != nil && err != nil { return response, err } query := url.Values{} if platform != nil { query.Set("platform", platforms.Format(*platform)) } if containerName != "" { query.Set("name", containerName) } body := configWrapper{ Config: config, HostConfig: hostConfig, NetworkingConfig: networkingConfig, } serverResp, err := cli.post(ctx, "/containers/create", query, body, nil) defer ensureReaderClosed(serverResp) if err != nil { return response, err } err = json.NewDecoder(serverResp.body).Decode(&response) return response, err }
package client // import "github.com/docker/docker/client" import ( "context" "encoding/json" "net/url" "path" "github.com/docker/docker/api/types/container" "github.com/docker/docker/api/types/network" "github.com/docker/docker/api/types/versions" specs "github.com/opencontainers/image-spec/specs-go/v1" ) type configWrapper struct { *container.Config HostConfig *container.HostConfig NetworkingConfig *network.NetworkingConfig } // ContainerCreate creates a new container based on the given configuration. // It can be associated with a name, but it's not mandatory. func (cli *Client) ContainerCreate(ctx context.Context, config *container.Config, hostConfig *container.HostConfig, networkingConfig *network.NetworkingConfig, platform *specs.Platform, containerName string) (container.ContainerCreateCreatedBody, error) { var response container.ContainerCreateCreatedBody if err := cli.NewVersionError("1.25", "stop timeout"); config != nil && config.StopTimeout != nil && err != nil { return response, err } // When using API 1.24 and under, the client is responsible for removing the container if hostConfig != nil && versions.LessThan(cli.ClientVersion(), "1.25") { hostConfig.AutoRemove = false } if err := cli.NewVersionError("1.41", "specify container image platform"); platform != nil && err != nil { return response, err } query := url.Values{} if p := formatPlatform(platform); p != "" { query.Set("platform", p) } if containerName != "" { query.Set("name", containerName) } body := configWrapper{ Config: config, HostConfig: hostConfig, NetworkingConfig: networkingConfig, } serverResp, err := cli.post(ctx, "/containers/create", query, body, nil) defer ensureReaderClosed(serverResp) if err != nil { return response, err } err = json.NewDecoder(serverResp.body).Decode(&response) return response, err } // formatPlatform returns a formatted string representing platform (e.g. linux/arm/v7). // // Similar to containerd's platforms.Format(), but does allow components to be // omitted (e.g. pass "architecture" only, without "os": // https://github.com/containerd/containerd/blob/v1.5.2/platforms/platforms.go#L243-L263 func formatPlatform(platform *specs.Platform) string { if platform == nil { return "" } return path.Join(platform.OS, platform.Architecture, platform.Variant) }
thaJeztah
f07e53e0bb97c6364032bb14a020c50118eb7394
0b39cc2e57828c411a82ffebf7f52cf724e4fbe2
So, buildkit supports just setting arch... weird that we'd return empty for empty OS.
cpuguy83
4,550
moby/moby
42,623
Remove containerd "platform" dependency from client
Note: once / if https://github.com/moby/moby/pull/42464 is merged/accepted, we should consider replacing `github.com/opencontainers/image-spec/specs-go/v1.Platform` with our own `ImagePlatform`, to remove the OCI spec from the API (if possible). If we do so, we can also remove the `formatPlatform` function here (as it would be provided by `ImagePlatform.String()`). This removes some of the containerd dependencies from the client: - client: remove unused Platform field from configWrapper This field was added in 7a9cb29fb980c0ab3928272cdc24c7089b2fcf64, but appears to be unused, so removing it. - client: remove containerd "platform" dependency After this, there's no _direct_ dependency on containerd, but the `errdefs` package still depends on containerd's errdefs package. I'll have a look if we can refactor so that it doesn't end up in the client package.
null
2021-07-12 10:41:26+00:00
2021-07-31 16:45:29+00:00
client/container_create.go
package client // import "github.com/docker/docker/client" import ( "context" "encoding/json" "net/url" "github.com/containerd/containerd/platforms" "github.com/docker/docker/api/types/container" "github.com/docker/docker/api/types/network" "github.com/docker/docker/api/types/versions" specs "github.com/opencontainers/image-spec/specs-go/v1" ) type configWrapper struct { *container.Config HostConfig *container.HostConfig NetworkingConfig *network.NetworkingConfig Platform *specs.Platform } // ContainerCreate creates a new container based on the given configuration. // It can be associated with a name, but it's not mandatory. func (cli *Client) ContainerCreate(ctx context.Context, config *container.Config, hostConfig *container.HostConfig, networkingConfig *network.NetworkingConfig, platform *specs.Platform, containerName string) (container.ContainerCreateCreatedBody, error) { var response container.ContainerCreateCreatedBody if err := cli.NewVersionError("1.25", "stop timeout"); config != nil && config.StopTimeout != nil && err != nil { return response, err } // When using API 1.24 and under, the client is responsible for removing the container if hostConfig != nil && versions.LessThan(cli.ClientVersion(), "1.25") { hostConfig.AutoRemove = false } if err := cli.NewVersionError("1.41", "specify container image platform"); platform != nil && err != nil { return response, err } query := url.Values{} if platform != nil { query.Set("platform", platforms.Format(*platform)) } if containerName != "" { query.Set("name", containerName) } body := configWrapper{ Config: config, HostConfig: hostConfig, NetworkingConfig: networkingConfig, } serverResp, err := cli.post(ctx, "/containers/create", query, body, nil) defer ensureReaderClosed(serverResp) if err != nil { return response, err } err = json.NewDecoder(serverResp.body).Decode(&response) return response, err }
package client // import "github.com/docker/docker/client" import ( "context" "encoding/json" "net/url" "path" "github.com/docker/docker/api/types/container" "github.com/docker/docker/api/types/network" "github.com/docker/docker/api/types/versions" specs "github.com/opencontainers/image-spec/specs-go/v1" ) type configWrapper struct { *container.Config HostConfig *container.HostConfig NetworkingConfig *network.NetworkingConfig } // ContainerCreate creates a new container based on the given configuration. // It can be associated with a name, but it's not mandatory. func (cli *Client) ContainerCreate(ctx context.Context, config *container.Config, hostConfig *container.HostConfig, networkingConfig *network.NetworkingConfig, platform *specs.Platform, containerName string) (container.ContainerCreateCreatedBody, error) { var response container.ContainerCreateCreatedBody if err := cli.NewVersionError("1.25", "stop timeout"); config != nil && config.StopTimeout != nil && err != nil { return response, err } // When using API 1.24 and under, the client is responsible for removing the container if hostConfig != nil && versions.LessThan(cli.ClientVersion(), "1.25") { hostConfig.AutoRemove = false } if err := cli.NewVersionError("1.41", "specify container image platform"); platform != nil && err != nil { return response, err } query := url.Values{} if p := formatPlatform(platform); p != "" { query.Set("platform", p) } if containerName != "" { query.Set("name", containerName) } body := configWrapper{ Config: config, HostConfig: hostConfig, NetworkingConfig: networkingConfig, } serverResp, err := cli.post(ctx, "/containers/create", query, body, nil) defer ensureReaderClosed(serverResp) if err != nil { return response, err } err = json.NewDecoder(serverResp.body).Decode(&response) return response, err } // formatPlatform returns a formatted string representing platform (e.g. linux/arm/v7). // // Similar to containerd's platforms.Format(), but does allow components to be // omitted (e.g. pass "architecture" only, without "os": // https://github.com/containerd/containerd/blob/v1.5.2/platforms/platforms.go#L243-L263 func formatPlatform(platform *specs.Platform) string { if platform == nil { return "" } return path.Join(platform.OS, platform.Architecture, platform.Variant) }
thaJeztah
f07e53e0bb97c6364032bb14a020c50118eb7394
0b39cc2e57828c411a82ffebf7f52cf724e4fbe2
Any thoughts on this?
cpuguy83
4,551
moby/moby
42,623
Remove containerd "platform" dependency from client
Note: once / if https://github.com/moby/moby/pull/42464 is merged/accepted, we should consider replacing `github.com/opencontainers/image-spec/specs-go/v1.Platform` with our own `ImagePlatform`, to remove the OCI spec from the API (if possible). If we do so, we can also remove the `formatPlatform` function here (as it would be provided by `ImagePlatform.String()`). This removes some of the containerd dependencies from the client: - client: remove unused Platform field from configWrapper This field was added in 7a9cb29fb980c0ab3928272cdc24c7089b2fcf64, but appears to be unused, so removing it. - client: remove containerd "platform" dependency After this, there's no _direct_ dependency on containerd, but the `errdefs` package still depends on containerd's errdefs package. I'll have a look if we can refactor so that it doesn't end up in the client package.
null
2021-07-12 10:41:26+00:00
2021-07-31 16:45:29+00:00
client/container_create.go
package client // import "github.com/docker/docker/client" import ( "context" "encoding/json" "net/url" "github.com/containerd/containerd/platforms" "github.com/docker/docker/api/types/container" "github.com/docker/docker/api/types/network" "github.com/docker/docker/api/types/versions" specs "github.com/opencontainers/image-spec/specs-go/v1" ) type configWrapper struct { *container.Config HostConfig *container.HostConfig NetworkingConfig *network.NetworkingConfig Platform *specs.Platform } // ContainerCreate creates a new container based on the given configuration. // It can be associated with a name, but it's not mandatory. func (cli *Client) ContainerCreate(ctx context.Context, config *container.Config, hostConfig *container.HostConfig, networkingConfig *network.NetworkingConfig, platform *specs.Platform, containerName string) (container.ContainerCreateCreatedBody, error) { var response container.ContainerCreateCreatedBody if err := cli.NewVersionError("1.25", "stop timeout"); config != nil && config.StopTimeout != nil && err != nil { return response, err } // When using API 1.24 and under, the client is responsible for removing the container if hostConfig != nil && versions.LessThan(cli.ClientVersion(), "1.25") { hostConfig.AutoRemove = false } if err := cli.NewVersionError("1.41", "specify container image platform"); platform != nil && err != nil { return response, err } query := url.Values{} if platform != nil { query.Set("platform", platforms.Format(*platform)) } if containerName != "" { query.Set("name", containerName) } body := configWrapper{ Config: config, HostConfig: hostConfig, NetworkingConfig: networkingConfig, } serverResp, err := cli.post(ctx, "/containers/create", query, body, nil) defer ensureReaderClosed(serverResp) if err != nil { return response, err } err = json.NewDecoder(serverResp.body).Decode(&response) return response, err }
package client // import "github.com/docker/docker/client" import ( "context" "encoding/json" "net/url" "path" "github.com/docker/docker/api/types/container" "github.com/docker/docker/api/types/network" "github.com/docker/docker/api/types/versions" specs "github.com/opencontainers/image-spec/specs-go/v1" ) type configWrapper struct { *container.Config HostConfig *container.HostConfig NetworkingConfig *network.NetworkingConfig } // ContainerCreate creates a new container based on the given configuration. // It can be associated with a name, but it's not mandatory. func (cli *Client) ContainerCreate(ctx context.Context, config *container.Config, hostConfig *container.HostConfig, networkingConfig *network.NetworkingConfig, platform *specs.Platform, containerName string) (container.ContainerCreateCreatedBody, error) { var response container.ContainerCreateCreatedBody if err := cli.NewVersionError("1.25", "stop timeout"); config != nil && config.StopTimeout != nil && err != nil { return response, err } // When using API 1.24 and under, the client is responsible for removing the container if hostConfig != nil && versions.LessThan(cli.ClientVersion(), "1.25") { hostConfig.AutoRemove = false } if err := cli.NewVersionError("1.41", "specify container image platform"); platform != nil && err != nil { return response, err } query := url.Values{} if p := formatPlatform(platform); p != "" { query.Set("platform", p) } if containerName != "" { query.Set("name", containerName) } body := configWrapper{ Config: config, HostConfig: hostConfig, NetworkingConfig: networkingConfig, } serverResp, err := cli.post(ctx, "/containers/create", query, body, nil) defer ensureReaderClosed(serverResp) if err != nil { return response, err } err = json.NewDecoder(serverResp.body).Decode(&response) return response, err } // formatPlatform returns a formatted string representing platform (e.g. linux/arm/v7). // // Similar to containerd's platforms.Format(), but does allow components to be // omitted (e.g. pass "architecture" only, without "os": // https://github.com/containerd/containerd/blob/v1.5.2/platforms/platforms.go#L243-L263 func formatPlatform(platform *specs.Platform) string { if platform == nil { return "" } return path.Join(platform.OS, platform.Architecture, platform.Variant) }
thaJeztah
f07e53e0bb97c6364032bb14a020c50118eb7394
0b39cc2e57828c411a82ffebf7f52cf724e4fbe2
Looked up the parsing on the daemon side, and it indeed looks like that should be supported; if only `arch` is specified, the parsing on the daemon side should handle it: https://github.com/moby/moby/blob/ada51d6d29c7517812d5bbfc15614bf505393681/vendor/github.com/containerd/containerd/platforms/platforms.go#L182-L206
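A small, hedged sketch of the arch-only case against the linked parsing code; the expectation that a bare architecture gets the host OS filled in comes from reading that linked containerd code, not from running a daemon:

```go
package main

import (
	"fmt"

	"github.com/containerd/containerd/platforms"
)

func main() {
	// An architecture-only specifier, as produced by formatPlatform() when
	// only Architecture is set on the client side.
	p, err := platforms.Parse("arm64")
	if err != nil {
		panic(err)
	}
	// Per the linked parsing code, a known architecture with no OS should be
	// completed with the host's OS (runtime.GOOS), e.g. "linux/arm64" when
	// the daemon runs on Linux.
	fmt.Printf("%+v\n", p)
}
```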
thaJeztah
4,552
moby/moby
42,621
Improve swagger.yaml to match the 1.41 api version
I recently tried to rely on the swagger.yaml for code generation of an API client. Some parts required manual fixes of the swagger docs to match the current API version 1.41. Each change described below has a dedicated commit, because they target different parts of the API description and aren't related per se. If you'd prefer to merge them all into a single commit, please leave a note. **- What I did** 1. Add RestartPolicy "no" to swagger docs 2. Add "changes" query parameter for /image/create to swagger docs 3. Fix ContainerSummary swagger docs (flattened) 4. Use explicit object names for improved swagger-based code generation (otherwise generic names had been generated) 5. Fix swagger docs to match the [opencontainers image-spec](https://github.com/opencontainers/image-spec/blob/5ced465cc63831baf25330431a428a9d9444e192/specs-go/v1/descriptor.go) **- How I did it** Used the 1.41 swagger.yaml with swagger's codegen to generate Java code. Used that code to perform requests to a 1.41 Docker engine. Applied some fixes and used `hack/validate/swagger` and `make swagger-docs` to validate the changes. **- How to verify it** Compare the actual 1.41 API with the swagger.yaml. **- Description for the changelog** Update the swagger.yaml to match the version 1.41 API. **- A picture of a cute animal (not mandatory but encouraged)** ![062321_JC_mountain-cat_feat](https://user-images.githubusercontent.com/432791/125209896-77588e80-e29c-11eb-9a5f-439f5f04a69b.jpeg)
null
2021-07-11 21:09:21+00:00
2021-08-21 22:28:57+00:00
api/swagger.yaml
# A Swagger 2.0 (a.k.a. OpenAPI) definition of the Engine API. # # This is used for generating API documentation and the types used by the # client/server. See api/README.md for more information. # # Some style notes: # - This file is used by ReDoc, which allows GitHub Flavored Markdown in # descriptions. # - There is no maximum line length, for ease of editing and pretty diffs. # - operationIds are in the format "NounVerb", with a singular noun. swagger: "2.0" schemes: - "http" - "https" produces: - "application/json" - "text/plain" consumes: - "application/json" - "text/plain" basePath: "/v1.42" info: title: "Docker Engine API" version: "1.42" x-logo: url: "https://docs.docker.com/images/logo-docker-main.png" description: | The Engine API is an HTTP API served by Docker Engine. It is the API the Docker client uses to communicate with the Engine, so everything the Docker client can do can be done with the API. Most of the client's commands map directly to API endpoints (e.g. `docker ps` is `GET /containers/json`). The notable exception is running containers, which consists of several API calls. # Errors The API uses standard HTTP status codes to indicate the success or failure of the API call. The body of the response will be JSON in the following format: ``` { "message": "page not found" } ``` # Versioning The API is usually changed in each release, so API calls are versioned to ensure that clients don't break. To lock to a specific version of the API, you prefix the URL with its version, for example, call `/v1.30/info` to use the v1.30 version of the `/info` endpoint. If the API version specified in the URL is not supported by the daemon, a HTTP `400 Bad Request` error message is returned. If you omit the version-prefix, the current version of the API (v1.42) is used. For example, calling `/info` is the same as calling `/v1.42/info`. Using the API without a version-prefix is deprecated and will be removed in a future release. Engine releases in the near future should support this version of the API, so your client will continue to work even if it is talking to a newer Engine. The API uses an open schema model, which means server may add extra properties to responses. Likewise, the server will ignore any extra query parameters and request body properties. When you write clients, you need to ignore additional properties in responses to ensure they do not break when talking to newer daemons. # Authentication Authentication for registries is handled client side. The client has to send authentication details to various endpoints that need to communicate with registries, such as `POST /images/(name)/push`. These are sent as `X-Registry-Auth` header as a [base64url encoded](https://tools.ietf.org/html/rfc4648#section-5) (JSON) string with the following structure: ``` { "username": "string", "password": "string", "email": "string", "serveraddress": "string" } ``` The `serveraddress` is a domain/IP without a protocol. Throughout this structure, double quotes are required. If you have already got an identity token from the [`/auth` endpoint](#operation/SystemAuth), you can just pass this instead of credentials: ``` { "identitytoken": "9cbaf023786cd7..." } ``` # The tags on paths define the menu sections in the ReDoc documentation, so # the usage of tags must make sense for that: # - They should be singular, not plural. # - There should not be too many tags, or the menu becomes unwieldy. 
For # example, it is preferable to add a path to the "System" tag instead of # creating a tag with a single path in it. # - The order of tags in this list defines the order in the menu. tags: # Primary objects - name: "Container" x-displayName: "Containers" description: | Create and manage containers. - name: "Image" x-displayName: "Images" - name: "Network" x-displayName: "Networks" description: | Networks are user-defined networks that containers can be attached to. See the [networking documentation](https://docs.docker.com/network/) for more information. - name: "Volume" x-displayName: "Volumes" description: | Create and manage persistent storage that can be attached to containers. - name: "Exec" x-displayName: "Exec" description: | Run new commands inside running containers. Refer to the [command-line reference](https://docs.docker.com/engine/reference/commandline/exec/) for more information. To exec a command in a container, you first need to create an exec instance, then start it. These two API endpoints are wrapped up in a single command-line command, `docker exec`. # Swarm things - name: "Swarm" x-displayName: "Swarm" description: | Engines can be clustered together in a swarm. Refer to the [swarm mode documentation](https://docs.docker.com/engine/swarm/) for more information. - name: "Node" x-displayName: "Nodes" description: | Nodes are instances of the Engine participating in a swarm. Swarm mode must be enabled for these endpoints to work. - name: "Service" x-displayName: "Services" description: | Services are the definitions of tasks to run on a swarm. Swarm mode must be enabled for these endpoints to work. - name: "Task" x-displayName: "Tasks" description: | A task is a container running on a swarm. It is the atomic scheduling unit of swarm. Swarm mode must be enabled for these endpoints to work. - name: "Secret" x-displayName: "Secrets" description: | Secrets are sensitive data that can be used by services. Swarm mode must be enabled for these endpoints to work. - name: "Config" x-displayName: "Configs" description: | Configs are application configurations that can be used by services. Swarm mode must be enabled for these endpoints to work. 
# System things - name: "Plugin" x-displayName: "Plugins" - name: "System" x-displayName: "System" definitions: Port: type: "object" description: "An open port on a container" required: [PrivatePort, Type] properties: IP: type: "string" format: "ip-address" description: "Host IP address that the container's port is mapped to" PrivatePort: type: "integer" format: "uint16" x-nullable: false description: "Port on the container" PublicPort: type: "integer" format: "uint16" description: "Port exposed on the host" Type: type: "string" x-nullable: false enum: ["tcp", "udp", "sctp"] example: PrivatePort: 8080 PublicPort: 80 Type: "tcp" MountPoint: type: "object" description: "A mount point inside a container" properties: Type: type: "string" Name: type: "string" Source: type: "string" Destination: type: "string" Driver: type: "string" Mode: type: "string" RW: type: "boolean" Propagation: type: "string" DeviceMapping: type: "object" description: "A device mapping between the host and container" properties: PathOnHost: type: "string" PathInContainer: type: "string" CgroupPermissions: type: "string" example: PathOnHost: "/dev/deviceName" PathInContainer: "/dev/deviceName" CgroupPermissions: "mrw" DeviceRequest: type: "object" description: "A request for devices to be sent to device drivers" properties: Driver: type: "string" example: "nvidia" Count: type: "integer" example: -1 DeviceIDs: type: "array" items: type: "string" example: - "0" - "1" - "GPU-fef8089b-4820-abfc-e83e-94318197576e" Capabilities: description: | A list of capabilities; an OR list of AND lists of capabilities. type: "array" items: type: "array" items: type: "string" example: # gpu AND nvidia AND compute - ["gpu", "nvidia", "compute"] Options: description: | Driver-specific options, specified as a key/value pairs. These options are passed directly to the driver. type: "object" additionalProperties: type: "string" ThrottleDevice: type: "object" properties: Path: description: "Device path" type: "string" Rate: description: "Rate" type: "integer" format: "int64" minimum: 0 Mount: type: "object" properties: Target: description: "Container path." type: "string" Source: description: "Mount source (e.g. a volume name, a host path)." type: "string" Type: description: | The mount type. Available types: - `bind` Mounts a file or directory from the host into the container. Must exist prior to creating the container. - `volume` Creates a volume with the given name and options (or uses a pre-existing volume with the same name and options). These are **not** removed when the container is removed. - `tmpfs` Create a tmpfs with the given options. The mount source cannot be specified for tmpfs. - `npipe` Mounts a named pipe from the host into the container. Must exist prior to creating the container. type: "string" enum: - "bind" - "volume" - "tmpfs" - "npipe" ReadOnly: description: "Whether the mount should be read-only." type: "boolean" Consistency: description: "The consistency requirement for the mount: `default`, `consistent`, `cached`, or `delegated`." type: "string" BindOptions: description: "Optional configuration for the `bind` type." type: "object" properties: Propagation: description: "A propagation mode with the value `[r]private`, `[r]shared`, or `[r]slave`." type: "string" enum: - "private" - "rprivate" - "shared" - "rshared" - "slave" - "rslave" NonRecursive: description: "Disable recursive bind mount." type: "boolean" default: false VolumeOptions: description: "Optional configuration for the `volume` type." 
type: "object" properties: NoCopy: description: "Populate volume with data from the target." type: "boolean" default: false Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" DriverConfig: description: "Map of driver specific options" type: "object" properties: Name: description: "Name of the driver to use to create the volume." type: "string" Options: description: "key/value map of driver specific options." type: "object" additionalProperties: type: "string" TmpfsOptions: description: "Optional configuration for the `tmpfs` type." type: "object" properties: SizeBytes: description: "The size for the tmpfs mount in bytes." type: "integer" format: "int64" Mode: description: "The permission mode for the tmpfs mount in an integer." type: "integer" RestartPolicy: description: | The behavior to apply when the container exits. The default is not to restart. An ever increasing delay (double the previous delay, starting at 100ms) is added before each restart to prevent flooding the server. type: "object" properties: Name: type: "string" description: | - Empty string means not to restart - `always` Always restart - `unless-stopped` Restart always except when the user has manually stopped the container - `on-failure` Restart only when the container exit code is non-zero enum: - "" - "always" - "unless-stopped" - "on-failure" MaximumRetryCount: type: "integer" description: | If `on-failure` is used, the number of times to retry before giving up. Resources: description: "A container's resources (cgroups config, ulimits, etc)" type: "object" properties: # Applicable to all platforms CpuShares: description: | An integer value representing this container's relative CPU weight versus other containers. type: "integer" Memory: description: "Memory limit in bytes." type: "integer" format: "int64" default: 0 # Applicable to UNIX platforms CgroupParent: description: | Path to `cgroups` under which the container's `cgroup` is created. If the path is not absolute, the path is considered to be relative to the `cgroups` path of the init process. Cgroups are created if they do not already exist. type: "string" BlkioWeight: description: "Block IO weight (relative weight)." type: "integer" minimum: 0 maximum: 1000 BlkioWeightDevice: description: | Block IO weight (relative device weight) in the form: ``` [{"Path": "device_path", "Weight": weight}] ``` type: "array" items: type: "object" properties: Path: type: "string" Weight: type: "integer" minimum: 0 BlkioDeviceReadBps: description: | Limit read rate (bytes per second) from a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" BlkioDeviceWriteBps: description: | Limit write rate (bytes per second) to a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" BlkioDeviceReadIOps: description: | Limit read rate (IO per second) from a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" BlkioDeviceWriteIOps: description: | Limit write rate (IO per second) to a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" CpuPeriod: description: "The length of a CPU period in microseconds." type: "integer" format: "int64" CpuQuota: description: | Microseconds of CPU time that the container can get in a CPU period. 
type: "integer" format: "int64" CpuRealtimePeriod: description: | The length of a CPU real-time period in microseconds. Set to 0 to allocate no time allocated to real-time tasks. type: "integer" format: "int64" CpuRealtimeRuntime: description: | The length of a CPU real-time runtime in microseconds. Set to 0 to allocate no time allocated to real-time tasks. type: "integer" format: "int64" CpusetCpus: description: | CPUs in which to allow execution (e.g., `0-3`, `0,1`). type: "string" example: "0-3" CpusetMems: description: | Memory nodes (MEMs) in which to allow execution (0-3, 0,1). Only effective on NUMA systems. type: "string" Devices: description: "A list of devices to add to the container." type: "array" items: $ref: "#/definitions/DeviceMapping" DeviceCgroupRules: description: "a list of cgroup rules to apply to the container" type: "array" items: type: "string" example: "c 13:* rwm" DeviceRequests: description: | A list of requests for devices to be sent to device drivers. type: "array" items: $ref: "#/definitions/DeviceRequest" KernelMemory: description: | Kernel memory limit in bytes. <p><br /></p> > **Deprecated**: This field is deprecated as the kernel 5.4 deprecated > `kmem.limit_in_bytes`. type: "integer" format: "int64" example: 209715200 KernelMemoryTCP: description: "Hard limit for kernel TCP buffer memory (in bytes)." type: "integer" format: "int64" MemoryReservation: description: "Memory soft limit in bytes." type: "integer" format: "int64" MemorySwap: description: | Total memory limit (memory + swap). Set as `-1` to enable unlimited swap. type: "integer" format: "int64" MemorySwappiness: description: | Tune a container's memory swappiness behavior. Accepts an integer between 0 and 100. type: "integer" format: "int64" minimum: 0 maximum: 100 NanoCpus: description: "CPU quota in units of 10<sup>-9</sup> CPUs." type: "integer" format: "int64" OomKillDisable: description: "Disable OOM Killer for the container." type: "boolean" Init: description: | Run an init inside the container that forwards signals and reaps processes. This field is omitted if empty, and the default (as configured on the daemon) is used. type: "boolean" x-nullable: true PidsLimit: description: | Tune a container's PIDs limit. Set `0` or `-1` for unlimited, or `null` to not change. type: "integer" format: "int64" x-nullable: true Ulimits: description: | A list of resource limits to set in the container. For example: ``` {"Name": "nofile", "Soft": 1024, "Hard": 2048} ``` type: "array" items: type: "object" properties: Name: description: "Name of ulimit" type: "string" Soft: description: "Soft limit" type: "integer" Hard: description: "Hard limit" type: "integer" # Applicable to Windows CpuCount: description: | The number of usable CPUs (Windows only). On Windows Server containers, the processor resource controls are mutually exclusive. The order of precedence is `CPUCount` first, then `CPUShares`, and `CPUPercent` last. type: "integer" format: "int64" CpuPercent: description: | The usable percentage of the available CPUs (Windows only). On Windows Server containers, the processor resource controls are mutually exclusive. The order of precedence is `CPUCount` first, then `CPUShares`, and `CPUPercent` last. type: "integer" format: "int64" IOMaximumIOps: description: "Maximum IOps for the container system drive (Windows only)" type: "integer" format: "int64" IOMaximumBandwidth: description: | Maximum IO in bytes per second for the container system drive (Windows only). 
type: "integer" format: "int64" Limit: description: | An object describing a limit on resources which can be requested by a task. type: "object" properties: NanoCPUs: type: "integer" format: "int64" example: 4000000000 MemoryBytes: type: "integer" format: "int64" example: 8272408576 Pids: description: | Limits the maximum number of PIDs in the container. Set `0` for unlimited. type: "integer" format: "int64" default: 0 example: 100 ResourceObject: description: | An object describing the resources which can be advertised by a node and requested by a task. type: "object" properties: NanoCPUs: type: "integer" format: "int64" example: 4000000000 MemoryBytes: type: "integer" format: "int64" example: 8272408576 GenericResources: $ref: "#/definitions/GenericResources" GenericResources: description: | User-defined resources can be either Integer resources (e.g, `SSD=3`) or String resources (e.g, `GPU=UUID1`). type: "array" items: type: "object" properties: NamedResourceSpec: type: "object" properties: Kind: type: "string" Value: type: "string" DiscreteResourceSpec: type: "object" properties: Kind: type: "string" Value: type: "integer" format: "int64" example: - DiscreteResourceSpec: Kind: "SSD" Value: 3 - NamedResourceSpec: Kind: "GPU" Value: "UUID1" - NamedResourceSpec: Kind: "GPU" Value: "UUID2" HealthConfig: description: "A test to perform to check that the container is healthy." type: "object" properties: Test: description: | The test to perform. Possible values are: - `[]` inherit healthcheck from image or parent image - `["NONE"]` disable healthcheck - `["CMD", args...]` exec arguments directly - `["CMD-SHELL", command]` run command with system's default shell type: "array" items: type: "string" Interval: description: | The time to wait between checks in nanoseconds. It should be 0 or at least 1000000 (1 ms). 0 means inherit. type: "integer" Timeout: description: | The time to wait before considering the check to have hung. It should be 0 or at least 1000000 (1 ms). 0 means inherit. type: "integer" Retries: description: | The number of consecutive failures needed to consider a container as unhealthy. 0 means inherit. type: "integer" StartPeriod: description: | Start period for the container to initialize before starting health-retries countdown in nanoseconds. It should be 0 or at least 1000000 (1 ms). 0 means inherit. type: "integer" Health: description: | Health stores information about the container's healthcheck results. type: "object" properties: Status: description: | Status is one of `none`, `starting`, `healthy` or `unhealthy` - "none" Indicates there is no healthcheck - "starting" Starting indicates that the container is not yet ready - "healthy" Healthy indicates that the container is running correctly - "unhealthy" Unhealthy indicates that the container has a problem type: "string" enum: - "none" - "starting" - "healthy" - "unhealthy" example: "healthy" FailingStreak: description: "FailingStreak is the number of consecutive failures" type: "integer" example: 0 Log: type: "array" description: | Log contains the last few results (oldest first) items: x-nullable: true $ref: "#/definitions/HealthcheckResult" HealthcheckResult: description: | HealthcheckResult stores information about a single run of a healthcheck probe type: "object" properties: Start: description: | Date and time at which this check started in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. 
type: "string" format: "date-time" example: "2020-01-04T10:44:24.496525531Z" End: description: | Date and time at which this check ended in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2020-01-04T10:45:21.364524523Z" ExitCode: description: | ExitCode meanings: - `0` healthy - `1` unhealthy - `2` reserved (considered unhealthy) - other values: error running probe type: "integer" example: 0 Output: description: "Output from last check" type: "string" HostConfig: description: "Container configuration that depends on the host we are running on" allOf: - $ref: "#/definitions/Resources" - type: "object" properties: # Applicable to all platforms Binds: type: "array" description: | A list of volume bindings for this container. Each volume binding is a string in one of these forms: - `host-src:container-dest[:options]` to bind-mount a host path into the container. Both `host-src`, and `container-dest` must be an _absolute_ path. - `volume-name:container-dest[:options]` to bind-mount a volume managed by a volume driver into the container. `container-dest` must be an _absolute_ path. `options` is an optional, comma-delimited list of: - `nocopy` disables automatic copying of data from the container path to the volume. The `nocopy` flag only applies to named volumes. - `[ro|rw]` mounts a volume read-only or read-write, respectively. If omitted or set to `rw`, volumes are mounted read-write. - `[z|Z]` applies SELinux labels to allow or deny multiple containers to read and write to the same volume. - `z`: a _shared_ content label is applied to the content. This label indicates that multiple containers can share the volume content, for both reading and writing. - `Z`: a _private unshared_ label is applied to the content. This label indicates that only the current container can use a private volume. Labeling systems such as SELinux require proper labels to be placed on volume content that is mounted into a container. Without a label, the security system can prevent a container's processes from using the content. By default, the labels set by the host operating system are not modified. - `[[r]shared|[r]slave|[r]private]` specifies mount [propagation behavior](https://www.kernel.org/doc/Documentation/filesystems/sharedsubtree.txt). This only applies to bind-mounted volumes, not internal volumes or named volumes. Mount propagation requires the source mount point (the location where the source directory is mounted in the host operating system) to have the correct propagation properties. For shared volumes, the source mount point must be set to `shared`. For slave volumes, the mount must be set to either `shared` or `slave`. items: type: "string" ContainerIDFile: type: "string" description: "Path to a file where the container ID is written" LogConfig: type: "object" description: "The logging configuration for this container" properties: Type: type: "string" enum: - "json-file" - "syslog" - "journald" - "gelf" - "fluentd" - "awslogs" - "splunk" - "etwlogs" - "none" Config: type: "object" additionalProperties: type: "string" NetworkMode: type: "string" description: | Network mode to use for this container. Supported standard values are: `bridge`, `host`, `none`, and `container:<name|id>`. Any other value is taken as a custom network's name to which this container should connect to. 
PortBindings: $ref: "#/definitions/PortMap" RestartPolicy: $ref: "#/definitions/RestartPolicy" AutoRemove: type: "boolean" description: | Automatically remove the container when the container's process exits. This has no effect if `RestartPolicy` is set. VolumeDriver: type: "string" description: "Driver that this container uses to mount volumes." VolumesFrom: type: "array" description: | A list of volumes to inherit from another container, specified in the form `<container name>[:<ro|rw>]`. items: type: "string" Mounts: description: | Specification for mounts to be added to the container. type: "array" items: $ref: "#/definitions/Mount" # Applicable to UNIX platforms CapAdd: type: "array" description: | A list of kernel capabilities to add to the container. Conflicts with option 'Capabilities'. items: type: "string" CapDrop: type: "array" description: | A list of kernel capabilities to drop from the container. Conflicts with option 'Capabilities'. items: type: "string" CgroupnsMode: type: "string" enum: - "private" - "host" description: | cgroup namespace mode for the container. Possible values are: - `"private"`: the container runs in its own private cgroup namespace - `"host"`: use the host system's cgroup namespace If not specified, the daemon default is used, which can either be `"private"` or `"host"`, depending on daemon version, kernel support and configuration. Dns: type: "array" description: "A list of DNS servers for the container to use." items: type: "string" DnsOptions: type: "array" description: "A list of DNS options." items: type: "string" DnsSearch: type: "array" description: "A list of DNS search domains." items: type: "string" ExtraHosts: type: "array" description: | A list of hostnames/IP mappings to add to the container's `/etc/hosts` file. Specified in the form `["hostname:IP"]`. items: type: "string" GroupAdd: type: "array" description: | A list of additional groups that the container process will run as. items: type: "string" IpcMode: type: "string" description: | IPC sharing mode for the container. Possible values are: - `"none"`: own private IPC namespace, with /dev/shm not mounted - `"private"`: own private IPC namespace - `"shareable"`: own private IPC namespace, with a possibility to share it with other containers - `"container:<name|id>"`: join another (shareable) container's IPC namespace - `"host"`: use the host system's IPC namespace If not specified, daemon default is used, which can either be `"private"` or `"shareable"`, depending on daemon version and configuration. Cgroup: type: "string" description: "Cgroup to use for the container." Links: type: "array" description: | A list of links for the container in the form `container_name:alias`. items: type: "string" OomScoreAdj: type: "integer" description: | An integer value containing the score given to the container in order to tune OOM killer preferences. example: 500 PidMode: type: "string" description: | Set the PID (Process) Namespace mode for the container. It can be either: - `"container:<name|id>"`: joins another container's PID namespace - `"host"`: use the host's PID namespace inside the container Privileged: type: "boolean" description: "Gives the container full access to the host." PublishAllPorts: type: "boolean" description: | Allocates an ephemeral host port for all of a container's exposed ports. Ports are de-allocated when the container stops and allocated when the container starts. The allocated port might be changed when restarting the container. 
The port is selected from the ephemeral port range that depends on the kernel. For example, on Linux the range is defined by `/proc/sys/net/ipv4/ip_local_port_range`. ReadonlyRootfs: type: "boolean" description: "Mount the container's root filesystem as read only." SecurityOpt: type: "array" description: "A list of string values to customize labels for MLS systems, such as SELinux." items: type: "string" StorageOpt: type: "object" description: | Storage driver options for this container, in the form `{"size": "120G"}`. additionalProperties: type: "string" Tmpfs: type: "object" description: | A map of container directories which should be replaced by tmpfs mounts, and their corresponding mount options. For example: ``` { "/run": "rw,noexec,nosuid,size=65536k" } ``` additionalProperties: type: "string" UTSMode: type: "string" description: "UTS namespace to use for the container." UsernsMode: type: "string" description: | Sets the usernamespace mode for the container when usernamespace remapping option is enabled. ShmSize: type: "integer" description: | Size of `/dev/shm` in bytes. If omitted, the system uses 64MB. minimum: 0 Sysctls: type: "object" description: | A list of kernel parameters (sysctls) to set in the container. For example: ``` {"net.ipv4.ip_forward": "1"} ``` additionalProperties: type: "string" Runtime: type: "string" description: "Runtime to use with this container." # Applicable to Windows ConsoleSize: type: "array" description: | Initial console size, as an `[height, width]` array. (Windows only) minItems: 2 maxItems: 2 items: type: "integer" minimum: 0 Isolation: type: "string" description: | Isolation technology of the container. (Windows only) enum: - "default" - "process" - "hyperv" MaskedPaths: type: "array" description: | The list of paths to be masked inside the container (this overrides the default set of paths). items: type: "string" ReadonlyPaths: type: "array" description: | The list of paths to be set as read-only inside the container (this overrides the default set of paths). items: type: "string" ContainerConfig: description: "Configuration for a container that is portable between hosts" type: "object" properties: Hostname: description: "The hostname to use for the container, as a valid RFC 1123 hostname." type: "string" Domainname: description: "The domain name to use for the container." type: "string" User: description: "The user that commands are run as inside the container." type: "string" AttachStdin: description: "Whether to attach to `stdin`." type: "boolean" default: false AttachStdout: description: "Whether to attach to `stdout`." type: "boolean" default: true AttachStderr: description: "Whether to attach to `stderr`." type: "boolean" default: true ExposedPorts: description: | An object mapping ports to an empty object in the form: `{"<port>/<tcp|udp|sctp>": {}}` type: "object" additionalProperties: type: "object" enum: - {} default: {} Tty: description: | Attach standard streams to a TTY, including `stdin` if it is not closed. type: "boolean" default: false OpenStdin: description: "Open `stdin`" type: "boolean" default: false StdinOnce: description: "Close `stdin` after one attached client disconnects" type: "boolean" default: false Env: description: | A list of environment variables to set inside the container in the form `["VAR=value", ...]`. A variable without `=` is removed from the environment, rather than to have an empty value. type: "array" items: type: "string" Cmd: description: | Command to run specified as a string or an array of strings. 
type: "array" items: type: "string" Healthcheck: $ref: "#/definitions/HealthConfig" ArgsEscaped: description: "Command is already escaped (Windows only)" type: "boolean" Image: description: | The name of the image to use when creating the container/ type: "string" Volumes: description: | An object mapping mount point paths inside the container to empty objects. type: "object" additionalProperties: type: "object" enum: - {} default: {} WorkingDir: description: "The working directory for commands to run in." type: "string" Entrypoint: description: | The entry point for the container as a string or an array of strings. If the array consists of exactly one empty string (`[""]`) then the entry point is reset to system default (i.e., the entry point used by docker when there is no `ENTRYPOINT` instruction in the `Dockerfile`). type: "array" items: type: "string" NetworkDisabled: description: "Disable networking for the container." type: "boolean" MacAddress: description: "MAC address of the container." type: "string" OnBuild: description: | `ONBUILD` metadata that were defined in the image's `Dockerfile`. type: "array" items: type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" StopSignal: description: | Signal to stop a container as a string or unsigned integer. type: "string" default: "SIGTERM" StopTimeout: description: "Timeout to stop a container in seconds." type: "integer" default: 10 Shell: description: | Shell for when `RUN`, `CMD`, and `ENTRYPOINT` uses a shell. type: "array" items: type: "string" NetworkingConfig: description: | NetworkingConfig represents the container's networking configuration for each of its interfaces. It is used for the networking configs specified in the `docker create` and `docker network connect` commands. type: "object" properties: EndpointsConfig: description: | A mapping of network name to endpoint configuration for that network. type: "object" additionalProperties: $ref: "#/definitions/EndpointSettings" example: # putting an example here, instead of using the example values from # /definitions/EndpointSettings, because containers/create currently # does not support attaching to multiple networks, so the example request # would be confusing if it showed that multiple networks can be contained # in the EndpointsConfig. # TODO remove once we support multiple networks on container create (see https://github.com/moby/moby/blob/07e6b843594e061f82baa5fa23c2ff7d536c2a05/daemon/create.go#L323) EndpointsConfig: isolated_nw: IPAMConfig: IPv4Address: "172.20.30.33" IPv6Address: "2001:db8:abcd::3033" LinkLocalIPs: - "169.254.34.68" - "fe80::3468" Links: - "container_1" - "container_2" Aliases: - "server_x" - "server_y" NetworkSettings: description: "NetworkSettings exposes the network settings in the API" type: "object" properties: Bridge: description: Name of the network's bridge (for example, `docker0`). type: "string" example: "docker0" SandboxID: description: SandboxID uniquely represents a container's network stack. type: "string" example: "9d12daf2c33f5959c8bf90aa513e4f65b561738661003029ec84830cd503a0c3" HairpinMode: description: | Indicates if hairpin NAT should be enabled on the virtual interface. type: "boolean" example: false LinkLocalIPv6Address: description: IPv6 unicast address using the link-local prefix. type: "string" example: "fe80::42:acff:fe11:1" LinkLocalIPv6PrefixLen: description: Prefix length of the IPv6 unicast address. 
type: "integer" example: "64" Ports: $ref: "#/definitions/PortMap" SandboxKey: description: SandboxKey identifies the sandbox type: "string" example: "/var/run/docker/netns/8ab54b426c38" # TODO is SecondaryIPAddresses actually used? SecondaryIPAddresses: description: "" type: "array" items: $ref: "#/definitions/Address" x-nullable: true # TODO is SecondaryIPv6Addresses actually used? SecondaryIPv6Addresses: description: "" type: "array" items: $ref: "#/definitions/Address" x-nullable: true # TODO properties below are part of DefaultNetworkSettings, which is # marked as deprecated since Docker 1.9 and to be removed in Docker v17.12 EndpointID: description: | EndpointID uniquely represents a service endpoint in a Sandbox. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "b88f5b905aabf2893f3cbc4ee42d1ea7980bbc0a92e2c8922b1e1795298afb0b" Gateway: description: | Gateway address for the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "172.17.0.1" GlobalIPv6Address: description: | Global IPv6 address for the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "2001:db8::5689" GlobalIPv6PrefixLen: description: | Mask length of the global IPv6 address. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "integer" example: 64 IPAddress: description: | IPv4 address for the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "172.17.0.4" IPPrefixLen: description: | Mask length of the IPv4 address. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "integer" example: 16 IPv6Gateway: description: | IPv6 gateway address for this network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. 
Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "2001:db8:2::100" MacAddress: description: | MAC address for the container on the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "02:42:ac:11:00:04" Networks: description: | Information about all networks that the container is connected to. type: "object" additionalProperties: $ref: "#/definitions/EndpointSettings" Address: description: Address represents an IPv4 or IPv6 IP address. type: "object" properties: Addr: description: IP address. type: "string" PrefixLen: description: Mask length of the IP address. type: "integer" PortMap: description: | PortMap describes the mapping of container ports to host ports, using the container's port-number and protocol as key in the format `<port>/<protocol>`, for example, `80/udp`. If a container's port is mapped for multiple protocols, separate entries are added to the mapping table. type: "object" additionalProperties: type: "array" x-nullable: true items: $ref: "#/definitions/PortBinding" example: "443/tcp": - HostIp: "127.0.0.1" HostPort: "4443" "80/tcp": - HostIp: "0.0.0.0" HostPort: "80" - HostIp: "0.0.0.0" HostPort: "8080" "80/udp": - HostIp: "0.0.0.0" HostPort: "80" "53/udp": - HostIp: "0.0.0.0" HostPort: "53" "2377/tcp": null PortBinding: description: | PortBinding represents a binding between a host IP address and a host port. type: "object" properties: HostIp: description: "Host IP address that the container's port is mapped to." type: "string" example: "127.0.0.1" HostPort: description: "Host port number that the container's port is mapped to." type: "string" example: "4443" GraphDriverData: description: "Information about a container's graph driver." 
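    # Illustrative only, not part of the schema: with the overlay2 storage
    # driver, a GraphDriverData object in an image or container inspect
    # response typically looks like the sketch below. The exact keys under
    # "Data" vary per storage driver, and the paths shown are assumed examples.
    #
    #   "GraphDriver": {
    #     "Name": "overlay2",
    #     "Data": {
    #       "LowerDir": "/var/lib/docker/overlay2/<id>-init/diff",
    #       "MergedDir": "/var/lib/docker/overlay2/<id>/merged",
    #       "UpperDir": "/var/lib/docker/overlay2/<id>/diff",
    #       "WorkDir": "/var/lib/docker/overlay2/<id>/work"
    #     }
    #   }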
type: "object" required: [Name, Data] properties: Name: type: "string" x-nullable: false Data: type: "object" x-nullable: false additionalProperties: type: "string" Image: type: "object" required: - Id - Parent - Comment - Created - Container - DockerVersion - Author - Architecture - Os - Size - VirtualSize - GraphDriver - RootFS properties: Id: type: "string" x-nullable: false RepoTags: type: "array" items: type: "string" RepoDigests: type: "array" items: type: "string" Parent: type: "string" x-nullable: false Comment: type: "string" x-nullable: false Created: type: "string" x-nullable: false Container: type: "string" x-nullable: false ContainerConfig: $ref: "#/definitions/ContainerConfig" DockerVersion: type: "string" x-nullable: false Author: type: "string" x-nullable: false Config: $ref: "#/definitions/ContainerConfig" Architecture: type: "string" x-nullable: false Os: type: "string" x-nullable: false OsVersion: type: "string" Size: type: "integer" format: "int64" x-nullable: false VirtualSize: type: "integer" format: "int64" x-nullable: false GraphDriver: $ref: "#/definitions/GraphDriverData" RootFS: type: "object" required: [Type] properties: Type: type: "string" x-nullable: false Layers: type: "array" items: type: "string" BaseLayer: type: "string" Metadata: type: "object" properties: LastTagTime: type: "string" format: "dateTime" ImageSummary: type: "object" required: - Id - ParentId - RepoTags - RepoDigests - Created - Size - SharedSize - VirtualSize - Labels - Containers properties: Id: type: "string" x-nullable: false ParentId: type: "string" x-nullable: false RepoTags: type: "array" x-nullable: false items: type: "string" RepoDigests: type: "array" x-nullable: false items: type: "string" Created: type: "integer" x-nullable: false Size: type: "integer" x-nullable: false SharedSize: type: "integer" x-nullable: false VirtualSize: type: "integer" x-nullable: false Labels: type: "object" x-nullable: false additionalProperties: type: "string" Containers: x-nullable: false type: "integer" AuthConfig: type: "object" properties: username: type: "string" password: type: "string" email: type: "string" serveraddress: type: "string" example: username: "hannibal" password: "xxxx" serveraddress: "https://index.docker.io/v1/" ProcessConfig: type: "object" properties: privileged: type: "boolean" user: type: "string" tty: type: "boolean" entrypoint: type: "string" arguments: type: "array" items: type: "string" Volume: type: "object" required: [Name, Driver, Mountpoint, Labels, Scope, Options] properties: Name: type: "string" description: "Name of the volume." x-nullable: false Driver: type: "string" description: "Name of the volume driver used by the volume." x-nullable: false Mountpoint: type: "string" description: "Mount path of the volume on the host." x-nullable: false CreatedAt: type: "string" format: "dateTime" description: "Date/Time the volume was created." Status: type: "object" description: | Low-level details about the volume, provided by the volume driver. Details are returned as a map with key/value pairs: `{"key":"value","key2":"value2"}`. The `Status` field is optional, and is omitted if the volume driver does not support this feature. additionalProperties: type: "object" Labels: type: "object" description: "User-defined key/value metadata." x-nullable: false additionalProperties: type: "string" Scope: type: "string" description: | The level at which the volume exists. Either `global` for cluster-wide, or `local` for machine level. 
default: "local" x-nullable: false enum: ["local", "global"] Options: type: "object" description: | The driver specific options used when creating the volume. additionalProperties: type: "string" UsageData: type: "object" x-nullable: true required: [Size, RefCount] description: | Usage details about the volume. This information is used by the `GET /system/df` endpoint, and omitted in other endpoints. properties: Size: type: "integer" default: -1 description: | Amount of disk space used by the volume (in bytes). This information is only available for volumes created with the `"local"` volume driver. For volumes created with other volume drivers, this field is set to `-1` ("not available") x-nullable: false RefCount: type: "integer" default: -1 description: | The number of containers referencing this volume. This field is set to `-1` if the reference-count is not available. x-nullable: false example: Name: "tardis" Driver: "custom" Mountpoint: "/var/lib/docker/volumes/tardis" Status: hello: "world" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Scope: "local" CreatedAt: "2016-06-07T20:31:11.853781916Z" Network: type: "object" properties: Name: type: "string" Id: type: "string" Created: type: "string" format: "dateTime" Scope: type: "string" Driver: type: "string" EnableIPv6: type: "boolean" IPAM: $ref: "#/definitions/IPAM" Internal: type: "boolean" Attachable: type: "boolean" Ingress: type: "boolean" Containers: type: "object" additionalProperties: $ref: "#/definitions/NetworkContainer" Options: type: "object" additionalProperties: type: "string" Labels: type: "object" additionalProperties: type: "string" example: Name: "net01" Id: "7d86d31b1478e7cca9ebed7e73aa0fdeec46c5ca29497431d3007d2d9e15ed99" Created: "2016-10-19T04:33:30.360899459Z" Scope: "local" Driver: "bridge" EnableIPv6: false IPAM: Driver: "default" Config: - Subnet: "172.19.0.0/16" Gateway: "172.19.0.1" Options: foo: "bar" Internal: false Attachable: false Ingress: false Containers: 19a4d5d687db25203351ed79d478946f861258f018fe384f229f2efa4b23513c: Name: "test" EndpointID: "628cadb8bcb92de107b2a1e516cbffe463e321f548feb37697cce00ad694f21a" MacAddress: "02:42:ac:13:00:02" IPv4Address: "172.19.0.2/16" IPv6Address: "" Options: com.docker.network.bridge.default_bridge: "true" com.docker.network.bridge.enable_icc: "true" com.docker.network.bridge.enable_ip_masquerade: "true" com.docker.network.bridge.host_binding_ipv4: "0.0.0.0" com.docker.network.bridge.name: "docker0" com.docker.network.driver.mtu: "1500" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" IPAM: type: "object" properties: Driver: description: "Name of the IPAM driver to use." type: "string" default: "default" Config: description: | List of IPAM configuration options, specified as a map: ``` {"Subnet": <CIDR>, "IPRange": <CIDR>, "Gateway": <IP address>, "AuxAddress": <device_name:IP address>} ``` type: "array" items: type: "object" additionalProperties: type: "string" Options: description: "Driver-specific options, specified as a map." 
type: "object" additionalProperties: type: "string" NetworkContainer: type: "object" properties: Name: type: "string" EndpointID: type: "string" MacAddress: type: "string" IPv4Address: type: "string" IPv6Address: type: "string" BuildInfo: type: "object" properties: id: type: "string" stream: type: "string" error: type: "string" errorDetail: $ref: "#/definitions/ErrorDetail" status: type: "string" progress: type: "string" progressDetail: $ref: "#/definitions/ProgressDetail" aux: $ref: "#/definitions/ImageID" BuildCache: type: "object" properties: ID: type: "string" Parent: type: "string" Type: type: "string" Description: type: "string" InUse: type: "boolean" Shared: type: "boolean" Size: description: | Amount of disk space used by the build cache (in bytes). type: "integer" CreatedAt: description: | Date and time at which the build cache was created in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2016-08-18T10:44:24.496525531Z" LastUsedAt: description: | Date and time at which the build cache was last used in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" x-nullable: true example: "2017-08-09T07:09:37.632105588Z" UsageCount: type: "integer" ImageID: type: "object" description: "Image ID or Digest" properties: ID: type: "string" example: ID: "sha256:85f05633ddc1c50679be2b16a0479ab6f7637f8884e0cfe0f4d20e1ebb3d6e7c" CreateImageInfo: type: "object" properties: id: type: "string" error: type: "string" status: type: "string" progress: type: "string" progressDetail: $ref: "#/definitions/ProgressDetail" PushImageInfo: type: "object" properties: error: type: "string" status: type: "string" progress: type: "string" progressDetail: $ref: "#/definitions/ProgressDetail" ErrorDetail: type: "object" properties: code: type: "integer" message: type: "string" ProgressDetail: type: "object" properties: current: type: "integer" total: type: "integer" ErrorResponse: description: "Represents an error." type: "object" required: ["message"] properties: message: description: "The error message." type: "string" x-nullable: false example: message: "Something went wrong." IdResponse: description: "Response to an API call that returns just an Id" type: "object" required: ["Id"] properties: Id: description: "The id of the newly created object." type: "string" x-nullable: false EndpointSettings: description: "Configuration for a network endpoint." type: "object" properties: # Configurations IPAMConfig: $ref: "#/definitions/EndpointIPAMConfig" Links: type: "array" items: type: "string" example: - "container_1" - "container_2" Aliases: type: "array" items: type: "string" example: - "server_x" - "server_y" # Operational data NetworkID: description: | Unique ID of the network. type: "string" example: "08754567f1f40222263eab4102e1c733ae697e8e354aa9cd6e18d7402835292a" EndpointID: description: | Unique ID for the service endpoint in a Sandbox. type: "string" example: "b88f5b905aabf2893f3cbc4ee42d1ea7980bbc0a92e2c8922b1e1795298afb0b" Gateway: description: | Gateway address for this network. type: "string" example: "172.17.0.1" IPAddress: description: | IPv4 address. type: "string" example: "172.17.0.4" IPPrefixLen: description: | Mask length of the IPv4 address. type: "integer" example: 16 IPv6Gateway: description: | IPv6 gateway address. type: "string" example: "2001:db8:2::100" GlobalIPv6Address: description: | Global IPv6 address. 
type: "string" example: "2001:db8::5689" GlobalIPv6PrefixLen: description: | Mask length of the global IPv6 address. type: "integer" format: "int64" example: 64 MacAddress: description: | MAC address for the endpoint on this network. type: "string" example: "02:42:ac:11:00:04" DriverOpts: description: | DriverOpts is a mapping of driver options and values. These options are passed directly to the driver and are driver specific. type: "object" x-nullable: true additionalProperties: type: "string" example: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" EndpointIPAMConfig: description: | EndpointIPAMConfig represents an endpoint's IPAM configuration. type: "object" x-nullable: true properties: IPv4Address: type: "string" example: "172.20.30.33" IPv6Address: type: "string" example: "2001:db8:abcd::3033" LinkLocalIPs: type: "array" items: type: "string" example: - "169.254.34.68" - "fe80::3468" PluginMount: type: "object" x-nullable: false required: [Name, Description, Settable, Source, Destination, Type, Options] properties: Name: type: "string" x-nullable: false example: "some-mount" Description: type: "string" x-nullable: false example: "This is a mount that's used by the plugin." Settable: type: "array" items: type: "string" Source: type: "string" example: "/var/lib/docker/plugins/" Destination: type: "string" x-nullable: false example: "/mnt/state" Type: type: "string" x-nullable: false example: "bind" Options: type: "array" items: type: "string" example: - "rbind" - "rw" PluginDevice: type: "object" required: [Name, Description, Settable, Path] x-nullable: false properties: Name: type: "string" x-nullable: false Description: type: "string" x-nullable: false Settable: type: "array" items: type: "string" Path: type: "string" example: "/dev/fuse" PluginEnv: type: "object" x-nullable: false required: [Name, Description, Settable, Value] properties: Name: x-nullable: false type: "string" Description: x-nullable: false type: "string" Settable: type: "array" items: type: "string" Value: type: "string" PluginInterfaceType: type: "object" x-nullable: false required: [Prefix, Capability, Version] properties: Prefix: type: "string" x-nullable: false Capability: type: "string" x-nullable: false Version: type: "string" x-nullable: false Plugin: description: "A plugin for the Engine API" type: "object" required: [Settings, Enabled, Config, Name] properties: Id: type: "string" example: "5724e2c8652da337ab2eedd19fc6fc0ec908e4bd907c7421bf6a8dfc70c4c078" Name: type: "string" x-nullable: false example: "tiborvass/sample-volume-plugin" Enabled: description: True if the plugin is running. False if the plugin is not running, only installed. type: "boolean" x-nullable: false example: true Settings: description: "Settings that can be modified by users." type: "object" x-nullable: false required: [Args, Devices, Env, Mounts] properties: Mounts: type: "array" items: $ref: "#/definitions/PluginMount" Env: type: "array" items: type: "string" example: - "DEBUG=0" Args: type: "array" items: type: "string" Devices: type: "array" items: $ref: "#/definitions/PluginDevice" PluginReference: description: "plugin remote reference used to push/pull the plugin" type: "string" x-nullable: false example: "localhost:5000/tiborvass/sample-volume-plugin:latest" Config: description: "The config of a plugin." 
type: "object" x-nullable: false required: - Description - Documentation - Interface - Entrypoint - WorkDir - Network - Linux - PidHost - PropagatedMount - IpcHost - Mounts - Env - Args properties: DockerVersion: description: "Docker Version used to create the plugin" type: "string" x-nullable: false example: "17.06.0-ce" Description: type: "string" x-nullable: false example: "A sample volume plugin for Docker" Documentation: type: "string" x-nullable: false example: "https://docs.docker.com/engine/extend/plugins/" Interface: description: "The interface between Docker and the plugin" x-nullable: false type: "object" required: [Types, Socket] properties: Types: type: "array" items: $ref: "#/definitions/PluginInterfaceType" example: - "docker.volumedriver/1.0" Socket: type: "string" x-nullable: false example: "plugins.sock" ProtocolScheme: type: "string" example: "some.protocol/v1.0" description: "Protocol to use for clients connecting to the plugin." enum: - "" - "moby.plugins.http/v1" Entrypoint: type: "array" items: type: "string" example: - "/usr/bin/sample-volume-plugin" - "/data" WorkDir: type: "string" x-nullable: false example: "/bin/" User: type: "object" x-nullable: false properties: UID: type: "integer" format: "uint32" example: 1000 GID: type: "integer" format: "uint32" example: 1000 Network: type: "object" x-nullable: false required: [Type] properties: Type: x-nullable: false type: "string" example: "host" Linux: type: "object" x-nullable: false required: [Capabilities, AllowAllDevices, Devices] properties: Capabilities: type: "array" items: type: "string" example: - "CAP_SYS_ADMIN" - "CAP_SYSLOG" AllowAllDevices: type: "boolean" x-nullable: false example: false Devices: type: "array" items: $ref: "#/definitions/PluginDevice" PropagatedMount: type: "string" x-nullable: false example: "/mnt/volumes" IpcHost: type: "boolean" x-nullable: false example: false PidHost: type: "boolean" x-nullable: false example: false Mounts: type: "array" items: $ref: "#/definitions/PluginMount" Env: type: "array" items: $ref: "#/definitions/PluginEnv" example: - Name: "DEBUG" Description: "If set, prints debug messages" Settable: null Value: "0" Args: type: "object" x-nullable: false required: [Name, Description, Settable, Value] properties: Name: x-nullable: false type: "string" example: "args" Description: x-nullable: false type: "string" example: "command line arguments" Settable: type: "array" items: type: "string" Value: type: "array" items: type: "string" rootfs: type: "object" properties: type: type: "string" example: "layers" diff_ids: type: "array" items: type: "string" example: - "sha256:675532206fbf3030b8458f88d6e26d4eb1577688a25efec97154c94e8b6b4887" - "sha256:e216a057b1cb1efc11f8a268f37ef62083e70b1b38323ba252e25ac88904a7e8" ObjectVersion: description: | The version number of the object such as node, service, etc. This is needed to avoid conflicting writes. The client must send the version number along with the modified specification when updating these objects. This approach ensures safe concurrency and determinism in that the change on the object may not be applied if the version number has changed from the last read. In other words, if two update requests specify the same base version, only one of the requests can succeed. As a result, two separate update requests that happen at the same time will not unintentionally overwrite each other. 
type: "object" properties: Index: type: "integer" format: "uint64" example: 373531 NodeSpec: type: "object" properties: Name: description: "Name for the node." type: "string" example: "my-node" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" Role: description: "Role of the node." type: "string" enum: - "worker" - "manager" example: "manager" Availability: description: "Availability of the node." type: "string" enum: - "active" - "pause" - "drain" example: "active" example: Availability: "active" Name: "node-name" Role: "manager" Labels: foo: "bar" Node: type: "object" properties: ID: type: "string" example: "24ifsmvkjbyhk" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: description: | Date and time at which the node was added to the swarm in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2016-08-18T10:44:24.496525531Z" UpdatedAt: description: | Date and time at which the node was last updated in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2017-08-09T07:09:37.632105588Z" Spec: $ref: "#/definitions/NodeSpec" Description: $ref: "#/definitions/NodeDescription" Status: $ref: "#/definitions/NodeStatus" ManagerStatus: $ref: "#/definitions/ManagerStatus" NodeDescription: description: | NodeDescription encapsulates the properties of the Node as reported by the agent. type: "object" properties: Hostname: type: "string" example: "bf3067039e47" Platform: $ref: "#/definitions/Platform" Resources: $ref: "#/definitions/ResourceObject" Engine: $ref: "#/definitions/EngineDescription" TLSInfo: $ref: "#/definitions/TLSInfo" Platform: description: | Platform represents the platform (Arch/OS). type: "object" properties: Architecture: description: | Architecture represents the hardware architecture (for example, `x86_64`). type: "string" example: "x86_64" OS: description: | OS represents the Operating System (for example, `linux` or `windows`). type: "string" example: "linux" EngineDescription: description: "EngineDescription provides information about an engine." type: "object" properties: EngineVersion: type: "string" example: "17.06.0" Labels: type: "object" additionalProperties: type: "string" example: foo: "bar" Plugins: type: "array" items: type: "object" properties: Type: type: "string" Name: type: "string" example: - Type: "Log" Name: "awslogs" - Type: "Log" Name: "fluentd" - Type: "Log" Name: "gcplogs" - Type: "Log" Name: "gelf" - Type: "Log" Name: "journald" - Type: "Log" Name: "json-file" - Type: "Log" Name: "logentries" - Type: "Log" Name: "splunk" - Type: "Log" Name: "syslog" - Type: "Network" Name: "bridge" - Type: "Network" Name: "host" - Type: "Network" Name: "ipvlan" - Type: "Network" Name: "macvlan" - Type: "Network" Name: "null" - Type: "Network" Name: "overlay" - Type: "Volume" Name: "local" - Type: "Volume" Name: "localhost:5000/vieux/sshfs:latest" - Type: "Volume" Name: "vieux/sshfs:latest" TLSInfo: description: | Information about the issuer of leaf TLS certificates and the trusted root CA certificate. type: "object" properties: TrustRoot: description: | The root CA certificate(s) that are used to validate leaf TLS certificates. type: "string" CertIssuerSubject: description: The base64-url-safe-encoded raw subject bytes of the issuer. type: "string" CertIssuerPublicKey: description: | The base64-url-safe-encoded raw public key bytes of the issuer. 
type: "string" example: TrustRoot: | -----BEGIN CERTIFICATE----- MIIBajCCARCgAwIBAgIUbYqrLSOSQHoxD8CwG6Bi2PJi9c8wCgYIKoZIzj0EAwIw EzERMA8GA1UEAxMIc3dhcm0tY2EwHhcNMTcwNDI0MjE0MzAwWhcNMzcwNDE5MjE0 MzAwWjATMREwDwYDVQQDEwhzd2FybS1jYTBZMBMGByqGSM49AgEGCCqGSM49AwEH A0IABJk/VyMPYdaqDXJb/VXh5n/1Yuv7iNrxV3Qb3l06XD46seovcDWs3IZNV1lf 3Skyr0ofcchipoiHkXBODojJydSjQjBAMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMB Af8EBTADAQH/MB0GA1UdDgQWBBRUXxuRcnFjDfR/RIAUQab8ZV/n4jAKBggqhkjO PQQDAgNIADBFAiAy+JTe6Uc3KyLCMiqGl2GyWGQqQDEcO3/YG36x7om65AIhAJvz pxv6zFeVEkAEEkqIYi0omA9+CjanB/6Bz4n1uw8H -----END CERTIFICATE----- CertIssuerSubject: "MBMxETAPBgNVBAMTCHN3YXJtLWNh" CertIssuerPublicKey: "MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEmT9XIw9h1qoNclv9VeHmf/Vi6/uI2vFXdBveXTpcPjqx6i9wNazchk1XWV/dKTKvSh9xyGKmiIeRcE4OiMnJ1A==" NodeStatus: description: | NodeStatus represents the status of a node. It provides the current status of the node, as seen by the manager. type: "object" properties: State: $ref: "#/definitions/NodeState" Message: type: "string" example: "" Addr: description: "IP address of the node." type: "string" example: "172.17.0.2" NodeState: description: "NodeState represents the state of a node." type: "string" enum: - "unknown" - "down" - "ready" - "disconnected" example: "ready" ManagerStatus: description: | ManagerStatus represents the status of a manager. It provides the current status of a node's manager component, if the node is a manager. x-nullable: true type: "object" properties: Leader: type: "boolean" default: false example: true Reachability: $ref: "#/definitions/Reachability" Addr: description: | The IP address and port at which the manager is reachable. type: "string" example: "10.0.0.46:2377" Reachability: description: "Reachability represents the reachability of a node." type: "string" enum: - "unknown" - "unreachable" - "reachable" example: "reachable" SwarmSpec: description: "User modifiable swarm configuration." type: "object" properties: Name: description: "Name of the swarm." type: "string" example: "default" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" example: com.example.corp.type: "production" com.example.corp.department: "engineering" Orchestration: description: "Orchestration configuration." type: "object" x-nullable: true properties: TaskHistoryRetentionLimit: description: | The number of historic tasks to keep per instance or node. If negative, never remove completed or failed tasks. type: "integer" format: "int64" example: 10 Raft: description: "Raft configuration." type: "object" properties: SnapshotInterval: description: "The number of log entries between snapshots." type: "integer" format: "uint64" example: 10000 KeepOldSnapshots: description: | The number of snapshots to keep beyond the current snapshot. type: "integer" format: "uint64" LogEntriesForSlowFollowers: description: | The number of log entries to keep around to sync up slow followers after a snapshot is created. type: "integer" format: "uint64" example: 500 ElectionTick: description: | The number of ticks that a follower will wait for a message from the leader before becoming a candidate and starting an election. `ElectionTick` must be greater than `HeartbeatTick`. A tick currently defaults to one second, so these translate directly to seconds currently, but this is NOT guaranteed. type: "integer" example: 3 HeartbeatTick: description: | The number of ticks between heartbeats. Every HeartbeatTick ticks, the leader will send a heartbeat to the followers. 
              A tick currently defaults to one second, so these translate
              directly to seconds currently, but this is NOT guaranteed.
            type: "integer"
            example: 1
      Dispatcher:
        description: "Dispatcher configuration."
        type: "object"
        x-nullable: true
        properties:
          HeartbeatPeriod:
            description: |
              The delay for an agent to send a heartbeat to the dispatcher.
            type: "integer"
            format: "int64"
            example: 5000000000
      CAConfig:
        description: "CA configuration."
        type: "object"
        x-nullable: true
        properties:
          NodeCertExpiry:
            description: "The duration node certificates are issued for."
            type: "integer"
            format: "int64"
            example: 7776000000000000
          ExternalCAs:
            description: |
              Configuration for forwarding signing requests to an external
              certificate authority.
            type: "array"
            items:
              type: "object"
              properties:
                Protocol:
                  description: |
                    Protocol for communication with the external CA (currently
                    only `cfssl` is supported).
                  type: "string"
                  enum:
                    - "cfssl"
                  default: "cfssl"
                URL:
                  description: |
                    URL where certificate signing requests should be sent.
                  type: "string"
                Options:
                  description: |
                    An object with key/value pairs that are interpreted as
                    protocol-specific options for the external CA driver.
                  type: "object"
                  additionalProperties:
                    type: "string"
                CACert:
                  description: |
                    The root CA certificate (in PEM format) this external CA uses
                    to issue TLS certificates (assumed to be to the current swarm
                    root CA certificate if not provided).
                  type: "string"
          SigningCACert:
            description: |
              The desired signing CA certificate for all swarm node TLS leaf
              certificates, in PEM format.
            type: "string"
          SigningCAKey:
            description: |
              The desired signing CA key for all swarm node TLS leaf
              certificates, in PEM format.
            type: "string"
          ForceRotate:
            description: |
              An integer whose purpose is to force swarm to generate a new
              signing CA certificate and key, if none have been specified in
              `SigningCACert` and `SigningCAKey`
            format: "uint64"
            type: "integer"
      EncryptionConfig:
        description: "Parameters related to encryption-at-rest."
        type: "object"
        properties:
          AutoLockManagers:
            description: |
              If set, generate a key and use it to lock data stored on the
              managers.
            type: "boolean"
            example: false
      TaskDefaults:
        description: "Defaults for creating tasks in this cluster."
        type: "object"
        properties:
          LogDriver:
            description: |
              The log driver to use for tasks created in the orchestrator if
              unspecified by a service.

              Updating this value only affects new tasks. Existing tasks continue
              to use their previously configured log driver until recreated.
            type: "object"
            properties:
              Name:
                description: |
                  The log driver to use as a default for new tasks.
                type: "string"
                example: "json-file"
              Options:
                description: |
                  Driver-specific options for the selected log driver, specified
                  as key/value pairs.
                type: "object"
                additionalProperties:
                  type: "string"
                example:
                  "max-file": "10"
                  "max-size": "100m"

  # The Swarm information for `GET /info`. It is the same as `GET /swarm`, but
  # without `JoinTokens`.
  ClusterInfo:
    description: |
      ClusterInfo represents information about the swarm as is returned by the
      "/info" endpoint. Join-tokens are not included.
    x-nullable: true
    type: "object"
    properties:
      ID:
        description: "The ID of the swarm."
        type: "string"
        example: "abajmipo7b4xz5ip2nrla6b11"
      Version:
        $ref: "#/definitions/ObjectVersion"
      CreatedAt:
        description: |
          Date and time at which the swarm was initialised in
          [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds.
type: "string" format: "dateTime" example: "2016-08-18T10:44:24.496525531Z" UpdatedAt: description: | Date and time at which the swarm was last updated in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2017-08-09T07:09:37.632105588Z" Spec: $ref: "#/definitions/SwarmSpec" TLSInfo: $ref: "#/definitions/TLSInfo" RootRotationInProgress: description: | Whether there is currently a root CA rotation in progress for the swarm type: "boolean" example: false DataPathPort: description: | DataPathPort specifies the data path port number for data traffic. Acceptable port range is 1024 to 49151. If no port is set or is set to 0, the default port (4789) is used. type: "integer" format: "uint32" default: 4789 example: 4789 DefaultAddrPool: description: | Default Address Pool specifies default subnet pools for global scope networks. type: "array" items: type: "string" format: "CIDR" example: ["10.10.0.0/16", "20.20.0.0/16"] SubnetSize: description: | SubnetSize specifies the subnet size of the networks created from the default subnet pool. type: "integer" format: "uint32" maximum: 29 default: 24 example: 24 JoinTokens: description: | JoinTokens contains the tokens workers and managers need to join the swarm. type: "object" properties: Worker: description: | The token workers can use to join the swarm. type: "string" example: "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-1awxwuwd3z9j1z3puu7rcgdbx" Manager: description: | The token managers can use to join the swarm. type: "string" example: "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-7p73s1dx5in4tatdymyhg9hu2" Swarm: type: "object" allOf: - $ref: "#/definitions/ClusterInfo" - type: "object" properties: JoinTokens: $ref: "#/definitions/JoinTokens" TaskSpec: description: "User modifiable task configuration." type: "object" properties: PluginSpec: type: "object" description: | Plugin spec for the service. *(Experimental release only.)* <p><br /></p> > **Note**: ContainerSpec, NetworkAttachmentSpec, and PluginSpec are > mutually exclusive. PluginSpec is only used when the Runtime field > is set to `plugin`. NetworkAttachmentSpec is used when the Runtime > field is set to `attachment`. properties: Name: description: "The name or 'alias' to use for the plugin." type: "string" Remote: description: "The plugin image reference to use." type: "string" Disabled: description: "Disable the plugin once scheduled." type: "boolean" PluginPrivilege: type: "array" items: description: | Describes a permission accepted by the user upon installing the plugin. type: "object" properties: Name: type: "string" Description: type: "string" Value: type: "array" items: type: "string" ContainerSpec: type: "object" description: | Container spec for the service. <p><br /></p> > **Note**: ContainerSpec, NetworkAttachmentSpec, and PluginSpec are > mutually exclusive. PluginSpec is only used when the Runtime field > is set to `plugin`. NetworkAttachmentSpec is used when the Runtime > field is set to `attachment`. properties: Image: description: "The image name to use for the container" type: "string" Labels: description: "User-defined key/value data." type: "object" additionalProperties: type: "string" Command: description: "The command to be run in the image." type: "array" items: type: "string" Args: description: "Arguments to the command." 
type: "array" items: type: "string" Hostname: description: | The hostname to use for the container, as a valid [RFC 1123](https://tools.ietf.org/html/rfc1123) hostname. type: "string" Env: description: | A list of environment variables in the form `VAR=value`. type: "array" items: type: "string" Dir: description: "The working directory for commands to run in." type: "string" User: description: "The user inside the container." type: "string" Groups: type: "array" description: | A list of additional groups that the container process will run as. items: type: "string" Privileges: type: "object" description: "Security options for the container" properties: CredentialSpec: type: "object" description: "CredentialSpec for managed service account (Windows only)" properties: Config: type: "string" example: "0bt9dmxjvjiqermk6xrop3ekq" description: | Load credential spec from a Swarm Config with the given ID. The specified config must also be present in the Configs field with the Runtime property set. <p><br /></p> > **Note**: `CredentialSpec.File`, `CredentialSpec.Registry`, > and `CredentialSpec.Config` are mutually exclusive. File: type: "string" example: "spec.json" description: | Load credential spec from this file. The file is read by the daemon, and must be present in the `CredentialSpecs` subdirectory in the docker data directory, which defaults to `C:\ProgramData\Docker\` on Windows. For example, specifying `spec.json` loads `C:\ProgramData\Docker\CredentialSpecs\spec.json`. <p><br /></p> > **Note**: `CredentialSpec.File`, `CredentialSpec.Registry`, > and `CredentialSpec.Config` are mutually exclusive. Registry: type: "string" description: | Load credential spec from this value in the Windows registry. The specified registry value must be located in: `HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization\Containers\CredentialSpecs` <p><br /></p> > **Note**: `CredentialSpec.File`, `CredentialSpec.Registry`, > and `CredentialSpec.Config` are mutually exclusive. SELinuxContext: type: "object" description: "SELinux labels of the container" properties: Disable: type: "boolean" description: "Disable SELinux" User: type: "string" description: "SELinux user label" Role: type: "string" description: "SELinux role label" Type: type: "string" description: "SELinux type label" Level: type: "string" description: "SELinux level label" TTY: description: "Whether a pseudo-TTY should be allocated." type: "boolean" OpenStdin: description: "Open `stdin`" type: "boolean" ReadOnly: description: "Mount the container's root filesystem as read only." type: "boolean" Mounts: description: | Specification for mounts to be added to containers created as part of the service. type: "array" items: $ref: "#/definitions/Mount" StopSignal: description: "Signal to stop the container." type: "string" StopGracePeriod: description: | Amount of time to wait for the container to terminate before forcefully killing it. type: "integer" format: "int64" HealthCheck: $ref: "#/definitions/HealthConfig" Hosts: type: "array" description: | A list of hostname/IP mappings to add to the container's `hosts` file. The format of extra hosts is specified in the [hosts(5)](http://man7.org/linux/man-pages/man5/hosts.5.html) man page: IP_address canonical_hostname [aliases...] items: type: "string" DNSConfig: description: | Specification for DNS related configurations in resolver configuration file (`resolv.conf`). type: "object" properties: Nameservers: description: "The IP addresses of the name servers." 
type: "array" items: type: "string" Search: description: "A search list for host-name lookup." type: "array" items: type: "string" Options: description: | A list of internal resolver variables to be modified (e.g., `debug`, `ndots:3`, etc.). type: "array" items: type: "string" Secrets: description: | Secrets contains references to zero or more secrets that will be exposed to the service. type: "array" items: type: "object" properties: File: description: | File represents a specific target that is backed by a file. type: "object" properties: Name: description: | Name represents the final filename in the filesystem. type: "string" UID: description: "UID represents the file UID." type: "string" GID: description: "GID represents the file GID." type: "string" Mode: description: "Mode represents the FileMode of the file." type: "integer" format: "uint32" SecretID: description: | SecretID represents the ID of the specific secret that we're referencing. type: "string" SecretName: description: | SecretName is the name of the secret that this references, but this is just provided for lookup/display purposes. The secret in the reference will be identified by its ID. type: "string" Configs: description: | Configs contains references to zero or more configs that will be exposed to the service. type: "array" items: type: "object" properties: File: description: | File represents a specific target that is backed by a file. <p><br /><p> > **Note**: `Configs.File` and `Configs.Runtime` are mutually exclusive type: "object" properties: Name: description: | Name represents the final filename in the filesystem. type: "string" UID: description: "UID represents the file UID." type: "string" GID: description: "GID represents the file GID." type: "string" Mode: description: "Mode represents the FileMode of the file." type: "integer" format: "uint32" Runtime: description: | Runtime represents a target that is not mounted into the container but is used by the task <p><br /><p> > **Note**: `Configs.File` and `Configs.Runtime` are mutually > exclusive type: "object" ConfigID: description: | ConfigID represents the ID of the specific config that we're referencing. type: "string" ConfigName: description: | ConfigName is the name of the config that this references, but this is just provided for lookup/display purposes. The config in the reference will be identified by its ID. type: "string" Isolation: type: "string" description: | Isolation technology of the containers running the service. (Windows only) enum: - "default" - "process" - "hyperv" Init: description: | Run an init inside the container that forwards signals and reaps processes. This field is omitted if empty, and the default (as configured on the daemon) is used. type: "boolean" x-nullable: true Sysctls: description: | Set kernel namedspaced parameters (sysctls) in the container. The Sysctls option on services accepts the same sysctls as the are supported on containers. Note that while the same sysctls are supported, no guarantees or checks are made about their suitability for a clustered environment, and it's up to the user to determine whether a given sysctl will work properly in a Service. type: "object" additionalProperties: type: "string" # This option is not used by Windows containers CapabilityAdd: type: "array" description: | A list of kernel capabilities to add to the default set for the container. 
            items:
              type: "string"
            example:
              - "CAP_NET_RAW"
              - "CAP_SYS_ADMIN"
              - "CAP_SYS_CHROOT"
              - "CAP_SYSLOG"
          CapabilityDrop:
            type: "array"
            description: |
              A list of kernel capabilities to drop from the default set
              for the container.
            items:
              type: "string"
            example:
              - "CAP_NET_RAW"
          Ulimits:
            description: |
              A list of resource limits to set in the container. For example:
              `{"Name": "nofile", "Soft": 1024, "Hard": 2048}`
            type: "array"
            items:
              type: "object"
              properties:
                Name:
                  description: "Name of ulimit"
                  type: "string"
                Soft:
                  description: "Soft limit"
                  type: "integer"
                Hard:
                  description: "Hard limit"
                  type: "integer"
      NetworkAttachmentSpec:
        description: |
          Read-only spec type for non-swarm containers attached to swarm overlay
          networks.

          <p><br /></p>

          > **Note**: ContainerSpec, NetworkAttachmentSpec, and PluginSpec are
          > mutually exclusive. PluginSpec is only used when the Runtime field
          > is set to `plugin`. NetworkAttachmentSpec is used when the Runtime
          > field is set to `attachment`.
        type: "object"
        properties:
          ContainerID:
            description: "ID of the container represented by this task"
            type: "string"
      Resources:
        description: |
          Resource requirements which apply to each individual container created
          as part of the service.
        type: "object"
        properties:
          Limits:
            description: "Define resources limits."
            $ref: "#/definitions/Limit"
          Reservation:
            description: "Define resources reservation."
            $ref: "#/definitions/ResourceObject"
      RestartPolicy:
        description: |
          Specification for the restart policy which applies to containers
          created as part of this service.
        type: "object"
        properties:
          Condition:
            description: "Condition for restart."
            type: "string"
            enum:
              - "none"
              - "on-failure"
              - "any"
          Delay:
            description: "Delay between restart attempts."
            type: "integer"
            format: "int64"
          MaxAttempts:
            description: |
              Maximum attempts to restart a given container before giving up
              (default value is 0, which is ignored).
            type: "integer"
            format: "int64"
            default: 0
          Window:
            description: |
              Window is the time window used to evaluate the restart policy
              (default value is 0, which is unbounded).
            type: "integer"
            format: "int64"
            default: 0
      Placement:
        type: "object"
        properties:
          Constraints:
            description: |
              An array of constraint expressions to limit the set of nodes where
              a task can be scheduled. Constraint expressions can either use a
              _match_ (`==`) or _exclude_ (`!=`) rule. Multiple constraints find
              nodes that satisfy every expression (AND match). Constraints can
              match node or Docker Engine labels as follows:

              node attribute       | matches                        | example
              ---------------------|--------------------------------|-----------------------------------------------
              `node.id`            | Node ID                        | `node.id==2ivku8v2gvtg4`
              `node.hostname`      | Node hostname                  | `node.hostname!=node-2`
              `node.role`          | Node role (`manager`/`worker`) | `node.role==manager`
              `node.platform.os`   | Node operating system          | `node.platform.os==windows`
              `node.platform.arch` | Node architecture              | `node.platform.arch==x86_64`
              `node.labels`        | User-defined node labels       | `node.labels.security==high`
              `engine.labels`      | Docker Engine's labels         | `engine.labels.operatingsystem==ubuntu-14.04`

              `engine.labels` apply to Docker Engine labels like operating system,
              drivers, etc. Swarm administrators add `node.labels` for operational
              purposes by using the [`node update endpoint`](#operation/NodeUpdate).
type: "array" items: type: "string" example: - "node.hostname!=node3.corp.example.com" - "node.role!=manager" - "node.labels.type==production" - "node.platform.os==linux" - "node.platform.arch==x86_64" Preferences: description: | Preferences provide a way to make the scheduler aware of factors such as topology. They are provided in order from highest to lowest precedence. type: "array" items: type: "object" properties: Spread: type: "object" properties: SpreadDescriptor: description: | label descriptor, such as `engine.labels.az`. type: "string" example: - Spread: SpreadDescriptor: "node.labels.datacenter" - Spread: SpreadDescriptor: "node.labels.rack" MaxReplicas: description: | Maximum number of replicas for per node (default value is 0, which is unlimited) type: "integer" format: "int64" default: 0 Platforms: description: | Platforms stores all the platforms that the service's image can run on. This field is used in the platform filter for scheduling. If empty, then the platform filter is off, meaning there are no scheduling restrictions. type: "array" items: $ref: "#/definitions/Platform" ForceUpdate: description: | A counter that triggers an update even if no relevant parameters have been changed. type: "integer" Runtime: description: | Runtime is the type of runtime specified for the task executor. type: "string" Networks: description: "Specifies which networks the service should attach to." type: "array" items: $ref: "#/definitions/NetworkAttachmentConfig" LogDriver: description: | Specifies the log driver to use for tasks created from this spec. If not present, the default one for the swarm will be used, finally falling back to the engine default if not specified. type: "object" properties: Name: type: "string" Options: type: "object" additionalProperties: type: "string" TaskState: type: "string" enum: - "new" - "allocated" - "pending" - "assigned" - "accepted" - "preparing" - "ready" - "starting" - "running" - "complete" - "shutdown" - "failed" - "rejected" - "remove" - "orphaned" Task: type: "object" properties: ID: description: "The ID of the task." type: "string" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" UpdatedAt: type: "string" format: "dateTime" Name: description: "Name of the task." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" Spec: $ref: "#/definitions/TaskSpec" ServiceID: description: "The ID of the service this task is part of." type: "string" Slot: type: "integer" NodeID: description: "The ID of the node that this task is on." type: "string" AssignedGenericResources: $ref: "#/definitions/GenericResources" Status: type: "object" properties: Timestamp: type: "string" format: "dateTime" State: $ref: "#/definitions/TaskState" Message: type: "string" Err: type: "string" ContainerStatus: type: "object" properties: ContainerID: type: "string" PID: type: "integer" ExitCode: type: "integer" DesiredState: $ref: "#/definitions/TaskState" JobIteration: description: | If the Service this Task belongs to is a job-mode service, contains the JobIteration of the Service this Task was created for. Absent if the Task was created for a Replicated or Global Service. 
$ref: "#/definitions/ObjectVersion" example: ID: "0kzzo1i0y4jz6027t0k7aezc7" Version: Index: 71 CreatedAt: "2016-06-07T21:07:31.171892745Z" UpdatedAt: "2016-06-07T21:07:31.376370513Z" Spec: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ServiceID: "9mnpnzenvg8p8tdbtq4wvbkcz" Slot: 1 NodeID: "60gvrl6tm78dmak4yl7srz94v" Status: Timestamp: "2016-06-07T21:07:31.290032978Z" State: "running" Message: "started" ContainerStatus: ContainerID: "e5d62702a1b48d01c3e02ca1e0212a250801fa8d67caca0b6f35919ebc12f035" PID: 677 DesiredState: "running" NetworksAttachments: - Network: ID: "4qvuz4ko70xaltuqbt8956gd1" Version: Index: 18 CreatedAt: "2016-06-07T20:31:11.912919752Z" UpdatedAt: "2016-06-07T21:07:29.955277358Z" Spec: Name: "ingress" Labels: com.docker.swarm.internal: "true" DriverConfiguration: {} IPAMOptions: Driver: {} Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" DriverState: Name: "overlay" Options: com.docker.network.driver.overlay.vxlanid_list: "256" IPAMOptions: Driver: Name: "default" Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" Addresses: - "10.255.0.10/16" AssignedGenericResources: - DiscreteResourceSpec: Kind: "SSD" Value: 3 - NamedResourceSpec: Kind: "GPU" Value: "UUID1" - NamedResourceSpec: Kind: "GPU" Value: "UUID2" ServiceSpec: description: "User modifiable configuration for a service." properties: Name: description: "Name of the service." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" TaskTemplate: $ref: "#/definitions/TaskSpec" Mode: description: "Scheduling mode for the service." type: "object" properties: Replicated: type: "object" properties: Replicas: type: "integer" format: "int64" Global: type: "object" ReplicatedJob: description: | The mode used for services with a finite number of tasks that run to a completed state. type: "object" properties: MaxConcurrent: description: | The maximum number of replicas to run simultaneously. type: "integer" format: "int64" default: 1 TotalCompletions: description: | The total number of replicas desired to reach the Completed state. If unset, will default to the value of `MaxConcurrent` type: "integer" format: "int64" GlobalJob: description: | The mode used for services which run a task to the completed state on each valid node. type: "object" UpdateConfig: description: "Specification for the update strategy of the service." type: "object" properties: Parallelism: description: | Maximum number of tasks to be updated in one iteration (0 means unlimited parallelism). type: "integer" format: "int64" Delay: description: "Amount of time between updates, in nanoseconds." type: "integer" format: "int64" FailureAction: description: | Action to take if an updated task fails to run, or stops running during the update. type: "string" enum: - "continue" - "pause" - "rollback" Monitor: description: | Amount of time to monitor each updated task for failures, in nanoseconds. type: "integer" format: "int64" MaxFailureRatio: description: | The fraction of tasks that may fail during an update before the failure action is invoked, specified as a floating point number between 0 and 1. type: "number" default: 0 Order: description: | The order of operations when rolling out an updated task. Either the old task is shut down before the new task is started, or the new task is started before the old task is shut down. 
type: "string" enum: - "stop-first" - "start-first" RollbackConfig: description: "Specification for the rollback strategy of the service." type: "object" properties: Parallelism: description: | Maximum number of tasks to be rolled back in one iteration (0 means unlimited parallelism). type: "integer" format: "int64" Delay: description: | Amount of time between rollback iterations, in nanoseconds. type: "integer" format: "int64" FailureAction: description: | Action to take if an rolled back task fails to run, or stops running during the rollback. type: "string" enum: - "continue" - "pause" Monitor: description: | Amount of time to monitor each rolled back task for failures, in nanoseconds. type: "integer" format: "int64" MaxFailureRatio: description: | The fraction of tasks that may fail during a rollback before the failure action is invoked, specified as a floating point number between 0 and 1. type: "number" default: 0 Order: description: | The order of operations when rolling back a task. Either the old task is shut down before the new task is started, or the new task is started before the old task is shut down. type: "string" enum: - "stop-first" - "start-first" Networks: description: "Specifies which networks the service should attach to." type: "array" items: $ref: "#/definitions/NetworkAttachmentConfig" EndpointSpec: $ref: "#/definitions/EndpointSpec" EndpointPortConfig: type: "object" properties: Name: type: "string" Protocol: type: "string" enum: - "tcp" - "udp" - "sctp" TargetPort: description: "The port inside the container." type: "integer" PublishedPort: description: "The port on the swarm hosts." type: "integer" PublishMode: description: | The mode in which port is published. <p><br /></p> - "ingress" makes the target port accessible on every node, regardless of whether there is a task for the service running on that node or not. - "host" bypasses the routing mesh and publish the port directly on the swarm node where that service is running. type: "string" enum: - "ingress" - "host" default: "ingress" example: "ingress" EndpointSpec: description: "Properties that can be configured to access and load balance a service." type: "object" properties: Mode: description: | The mode of resolution to use for internal load balancing between tasks. type: "string" enum: - "vip" - "dnsrr" default: "vip" Ports: description: | List of exposed ports that this service is accessible on from the outside. Ports can only be provided if `vip` resolution mode is used. type: "array" items: $ref: "#/definitions/EndpointPortConfig" Service: type: "object" properties: ID: type: "string" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" UpdatedAt: type: "string" format: "dateTime" Spec: $ref: "#/definitions/ServiceSpec" Endpoint: type: "object" properties: Spec: $ref: "#/definitions/EndpointSpec" Ports: type: "array" items: $ref: "#/definitions/EndpointPortConfig" VirtualIPs: type: "array" items: type: "object" properties: NetworkID: type: "string" Addr: type: "string" UpdateStatus: description: "The status of a service update." type: "object" properties: State: type: "string" enum: - "updating" - "paused" - "completed" StartedAt: type: "string" format: "dateTime" CompletedAt: type: "string" format: "dateTime" Message: type: "string" ServiceStatus: description: | The status of the service's tasks. Provided only when requested as part of a ServiceList operation. 
type: "object" properties: RunningTasks: description: | The number of tasks for the service currently in the Running state. type: "integer" format: "uint64" example: 7 DesiredTasks: description: | The number of tasks for the service desired to be running. For replicated services, this is the replica count from the service spec. For global services, this is computed by taking count of all tasks for the service with a Desired State other than Shutdown. type: "integer" format: "uint64" example: 10 CompletedTasks: description: | The number of tasks for a job that are in the Completed state. This field must be cross-referenced with the service type, as the value of 0 may mean the service is not in a job mode, or it may mean the job-mode service has no tasks yet Completed. type: "integer" format: "uint64" JobStatus: description: | The status of the service when it is in one of ReplicatedJob or GlobalJob modes. Absent on Replicated and Global mode services. The JobIteration is an ObjectVersion, but unlike the Service's version, does not need to be sent with an update request. type: "object" properties: JobIteration: description: | JobIteration is a value increased each time a Job is executed, successfully or otherwise. "Executed", in this case, means the job as a whole has been started, not that an individual Task has been launched. A job is "Executed" when its ServiceSpec is updated. JobIteration can be used to disambiguate Tasks belonging to different executions of a job. Though JobIteration will increase with each subsequent execution, it may not necessarily increase by 1, and so JobIteration should not be used to $ref: "#/definitions/ObjectVersion" LastExecution: description: | The last time, as observed by the server, that this job was started. type: "string" format: "dateTime" example: ID: "9mnpnzenvg8p8tdbtq4wvbkcz" Version: Index: 19 CreatedAt: "2016-06-07T21:05:51.880065305Z" UpdatedAt: "2016-06-07T21:07:29.962229872Z" Spec: Name: "hopeful_cori" TaskTemplate: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ForceUpdate: 0 Mode: Replicated: Replicas: 1 UpdateConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 RollbackConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 EndpointSpec: Mode: "vip" Ports: - Protocol: "tcp" TargetPort: 6379 PublishedPort: 30001 Endpoint: Spec: Mode: "vip" Ports: - Protocol: "tcp" TargetPort: 6379 PublishedPort: 30001 Ports: - Protocol: "tcp" TargetPort: 6379 PublishedPort: 30001 VirtualIPs: - NetworkID: "4qvuz4ko70xaltuqbt8956gd1" Addr: "10.255.0.2/16" - NetworkID: "4qvuz4ko70xaltuqbt8956gd1" Addr: "10.255.0.3/16" ImageDeleteResponseItem: type: "object" properties: Untagged: description: "The image ID of an image that was untagged" type: "string" Deleted: description: "The image ID of an image that was deleted" type: "string" ServiceUpdateResponse: type: "object" properties: Warnings: description: "Optional warning messages" type: "array" items: type: "string" example: Warning: "unable to pin image doesnotexist:latest to digest: image library/doesnotexist:latest not found" ContainerSummary: type: "array" items: type: "object" properties: Id: description: "The ID of this container" type: "string" x-go-name: "ID" Names: description: "The names that this container has been given" type: "array" items: type: "string" Image: description: "The name of the image used when creating 
this container" type: "string" ImageID: description: "The ID of the image that this container was created from" type: "string" Command: description: "Command to run when starting the container" type: "string" Created: description: "When the container was created" type: "integer" format: "int64" Ports: description: "The ports exposed by this container" type: "array" items: $ref: "#/definitions/Port" SizeRw: description: "The size of files that have been created or changed by this container" type: "integer" format: "int64" SizeRootFs: description: "The total size of all the files in this container" type: "integer" format: "int64" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" State: description: "The state of this container (e.g. `Exited`)" type: "string" Status: description: "Additional human-readable status of this container (e.g. `Exit 0`)" type: "string" HostConfig: type: "object" properties: NetworkMode: type: "string" NetworkSettings: description: "A summary of the container's network settings" type: "object" properties: Networks: type: "object" additionalProperties: $ref: "#/definitions/EndpointSettings" Mounts: type: "array" items: $ref: "#/definitions/Mount" Driver: description: "Driver represents a driver (network, logging, secrets)." type: "object" required: [Name] properties: Name: description: "Name of the driver." type: "string" x-nullable: false example: "some-driver" Options: description: "Key/value map of driver-specific options." type: "object" x-nullable: false additionalProperties: type: "string" example: OptionA: "value for driver-specific option A" OptionB: "value for driver-specific option B" SecretSpec: type: "object" properties: Name: description: "User-defined name of the secret." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" example: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Data: description: | Base64-url-safe-encoded ([RFC 4648](https://tools.ietf.org/html/rfc4648#section-5)) data to store as secret. This field is only used to _create_ a secret, and is not returned by other endpoints. type: "string" example: "" Driver: description: | Name of the secrets driver used to fetch the secret's value from an external secret store. $ref: "#/definitions/Driver" Templating: description: | Templating driver, if applicable Templating controls whether and how to evaluate the config payload as a template. If no driver is set, no templating is used. $ref: "#/definitions/Driver" Secret: type: "object" properties: ID: type: "string" example: "blt1owaxmitz71s9v5zh81zun" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" example: "2017-07-20T13:55:28.678958722Z" UpdatedAt: type: "string" format: "dateTime" example: "2017-07-20T13:55:28.678958722Z" Spec: $ref: "#/definitions/SecretSpec" ConfigSpec: type: "object" properties: Name: description: "User-defined name of the config." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" Data: description: | Base64-url-safe-encoded ([RFC 4648](https://tools.ietf.org/html/rfc4648#section-5)) config data. type: "string" Templating: description: | Templating driver, if applicable Templating controls whether and how to evaluate the config payload as a template. If no driver is set, no templating is used. 
$ref: "#/definitions/Driver" Config: type: "object" properties: ID: type: "string" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" UpdatedAt: type: "string" format: "dateTime" Spec: $ref: "#/definitions/ConfigSpec" ContainerState: description: | ContainerState stores container's running state. It's part of ContainerJSONBase and will be returned by the "inspect" command. type: "object" properties: Status: description: | String representation of the container state. Can be one of "created", "running", "paused", "restarting", "removing", "exited", or "dead". type: "string" enum: ["created", "running", "paused", "restarting", "removing", "exited", "dead"] example: "running" Running: description: | Whether this container is running. Note that a running container can be _paused_. The `Running` and `Paused` booleans are not mutually exclusive: When pausing a container (on Linux), the freezer cgroup is used to suspend all processes in the container. Freezing the process requires the process to be running. As a result, paused containers are both `Running` _and_ `Paused`. Use the `Status` field instead to determine if a container's state is "running". type: "boolean" example: true Paused: description: "Whether this container is paused." type: "boolean" example: false Restarting: description: "Whether this container is restarting." type: "boolean" example: false OOMKilled: description: | Whether this container has been killed because it ran out of memory. type: "boolean" example: false Dead: type: "boolean" example: false Pid: description: "The process ID of this container" type: "integer" example: 1234 ExitCode: description: "The last exit code of this container" type: "integer" example: 0 Error: type: "string" StartedAt: description: "The time when this container was last started." type: "string" example: "2020-01-06T09:06:59.461876391Z" FinishedAt: description: "The time when this container last exited." type: "string" example: "2020-01-06T09:07:59.461876391Z" Health: x-nullable: true $ref: "#/definitions/Health" SystemVersion: type: "object" description: | Response of Engine API: GET "/version" properties: Platform: type: "object" required: [Name] properties: Name: type: "string" Components: type: "array" description: | Information about system components items: type: "object" x-go-name: ComponentVersion required: [Name, Version] properties: Name: description: | Name of the component type: "string" example: "Engine" Version: description: | Version of the component type: "string" x-nullable: false example: "19.03.12" Details: description: | Key/value pairs of strings with additional information about the component. These values are intended for informational purposes only, and their content is not defined, and not part of the API specification. These messages can be printed by the client as information to the user. type: "object" x-nullable: true Version: description: "The version of the daemon" type: "string" example: "19.03.12" ApiVersion: description: | The default (and highest) API version that is supported by the daemon type: "string" example: "1.40" MinAPIVersion: description: | The minimum API version that is supported by the daemon type: "string" example: "1.12" GitCommit: description: | The Git commit of the source code that was used to build the daemon type: "string" example: "48a66213fe" GoVersion: description: | The version Go used to compile the daemon, and the version of the Go runtime in use. 
type: "string" example: "go1.13.14" Os: description: | The operating system that the daemon is running on ("linux" or "windows") type: "string" example: "linux" Arch: description: | The architecture that the daemon is running on type: "string" example: "amd64" KernelVersion: description: | The kernel version (`uname -r`) that the daemon is running on. This field is omitted when empty. type: "string" example: "4.19.76-linuxkit" Experimental: description: | Indicates if the daemon is started with experimental features enabled. This field is omitted when empty / false. type: "boolean" example: true BuildTime: description: | The date and time that the daemon was compiled. type: "string" example: "2020-06-22T15:49:27.000000000+00:00" SystemInfo: type: "object" properties: ID: description: | Unique identifier of the daemon. <p><br /></p> > **Note**: The format of the ID itself is not part of the API, and > should not be considered stable. type: "string" example: "7TRN:IPZB:QYBB:VPBQ:UMPP:KARE:6ZNR:XE6T:7EWV:PKF4:ZOJD:TPYS" Containers: description: "Total number of containers on the host." type: "integer" example: 14 ContainersRunning: description: | Number of containers with status `"running"`. type: "integer" example: 3 ContainersPaused: description: | Number of containers with status `"paused"`. type: "integer" example: 1 ContainersStopped: description: | Number of containers with status `"stopped"`. type: "integer" example: 10 Images: description: | Total number of images on the host. Both _tagged_ and _untagged_ (dangling) images are counted. type: "integer" example: 508 Driver: description: "Name of the storage driver in use." type: "string" example: "overlay2" DriverStatus: description: | Information specific to the storage driver, provided as "label" / "value" pairs. This information is provided by the storage driver, and formatted in a way consistent with the output of `docker info` on the command line. <p><br /></p> > **Note**: The information returned in this field, including the > formatting of values and labels, should not be considered stable, > and may change without notice. type: "array" items: type: "array" items: type: "string" example: - ["Backing Filesystem", "extfs"] - ["Supports d_type", "true"] - ["Native Overlay Diff", "true"] DockerRootDir: description: | Root directory of persistent Docker state. Defaults to `/var/lib/docker` on Linux, and `C:\ProgramData\docker` on Windows. type: "string" example: "/var/lib/docker" Plugins: $ref: "#/definitions/PluginsInfo" MemoryLimit: description: "Indicates if the host has memory limit support enabled." type: "boolean" example: true SwapLimit: description: "Indicates if the host has memory swap limit support enabled." type: "boolean" example: true KernelMemory: description: | Indicates if the host has kernel memory limit support enabled. <p><br /></p> > **Deprecated**: This field is deprecated as the kernel 5.4 deprecated > `kmem.limit_in_bytes`. type: "boolean" example: true CpuCfsPeriod: description: | Indicates if CPU CFS(Completely Fair Scheduler) period is supported by the host. type: "boolean" example: true CpuCfsQuota: description: | Indicates if CPU CFS(Completely Fair Scheduler) quota is supported by the host. type: "boolean" example: true CPUShares: description: | Indicates if CPU Shares limiting is supported by the host. type: "boolean" example: true CPUSet: description: | Indicates if CPUsets (cpuset.cpus, cpuset.mems) are supported by the host. 
See [cpuset(7)](https://www.kernel.org/doc/Documentation/cgroup-v1/cpusets.txt) type: "boolean" example: true PidsLimit: description: "Indicates if the host kernel has PID limit support enabled." type: "boolean" example: true OomKillDisable: description: "Indicates if OOM killer disable is supported on the host." type: "boolean" IPv4Forwarding: description: "Indicates IPv4 forwarding is enabled." type: "boolean" example: true BridgeNfIptables: description: "Indicates if `bridge-nf-call-iptables` is available on the host." type: "boolean" example: true BridgeNfIp6tables: description: "Indicates if `bridge-nf-call-ip6tables` is available on the host." type: "boolean" example: true Debug: description: | Indicates if the daemon is running in debug-mode / with debug-level logging enabled. type: "boolean" example: true NFd: description: | The total number of file Descriptors in use by the daemon process. This information is only returned if debug-mode is enabled. type: "integer" example: 64 NGoroutines: description: | The number of goroutines that currently exist. This information is only returned if debug-mode is enabled. type: "integer" example: 174 SystemTime: description: | Current system-time in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" example: "2017-08-08T20:28:29.06202363Z" LoggingDriver: description: | The logging driver to use as a default for new containers. type: "string" CgroupDriver: description: | The driver to use for managing cgroups. type: "string" enum: ["cgroupfs", "systemd", "none"] default: "cgroupfs" example: "cgroupfs" CgroupVersion: description: | The version of the cgroup. type: "string" enum: ["1", "2"] default: "1" example: "1" NEventsListener: description: "Number of event listeners subscribed." type: "integer" example: 30 KernelVersion: description: | Kernel version of the host. On Linux, this information obtained from `uname`. On Windows this information is queried from the <kbd>HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\</kbd> registry value, for example _"10.0 14393 (14393.1198.amd64fre.rs1_release_sec.170427-1353)"_. type: "string" example: "4.9.38-moby" OperatingSystem: description: | Name of the host's operating system, for example: "Ubuntu 16.04.2 LTS" or "Windows Server 2016 Datacenter" type: "string" example: "Alpine Linux v3.5" OSVersion: description: | Version of the host's operating system <p><br /></p> > **Note**: The information returned in this field, including its > very existence, and the formatting of values, should not be considered > stable, and may change without notice. type: "string" example: "16.04" OSType: description: | Generic type of the operating system of the host, as returned by the Go runtime (`GOOS`). Currently returned values are "linux" and "windows". A full list of possible values can be found in the [Go documentation](https://golang.org/doc/install/source#environment). type: "string" example: "linux" Architecture: description: | Hardware architecture of the host, as returned by the Go runtime (`GOARCH`). A full list of possible values can be found in the [Go documentation](https://golang.org/doc/install/source#environment). type: "string" example: "x86_64" NCPU: description: | The number of logical CPUs usable by the daemon. The number of available CPUs is checked by querying the operating system when the daemon starts. Changes to operating system CPU allocation after the daemon is started are not reflected. 
type: "integer" example: 4 MemTotal: description: | Total amount of physical memory available on the host, in bytes. type: "integer" format: "int64" example: 2095882240 IndexServerAddress: description: | Address / URL of the index server that is used for image search, and as a default for user authentication for Docker Hub and Docker Cloud. default: "https://index.docker.io/v1/" type: "string" example: "https://index.docker.io/v1/" RegistryConfig: $ref: "#/definitions/RegistryServiceConfig" GenericResources: $ref: "#/definitions/GenericResources" HttpProxy: description: | HTTP-proxy configured for the daemon. This value is obtained from the [`HTTP_PROXY`](https://www.gnu.org/software/wget/manual/html_node/Proxies.html) environment variable. Credentials ([user info component](https://tools.ietf.org/html/rfc3986#section-3.2.1)) in the proxy URL are masked in the API response. Containers do not automatically inherit this configuration. type: "string" example: "http://xxxxx:[email protected]:8080" HttpsProxy: description: | HTTPS-proxy configured for the daemon. This value is obtained from the [`HTTPS_PROXY`](https://www.gnu.org/software/wget/manual/html_node/Proxies.html) environment variable. Credentials ([user info component](https://tools.ietf.org/html/rfc3986#section-3.2.1)) in the proxy URL are masked in the API response. Containers do not automatically inherit this configuration. type: "string" example: "https://xxxxx:[email protected]:4443" NoProxy: description: | Comma-separated list of domain extensions for which no proxy should be used. This value is obtained from the [`NO_PROXY`](https://www.gnu.org/software/wget/manual/html_node/Proxies.html) environment variable. Containers do not automatically inherit this configuration. type: "string" example: "*.local, 169.254/16" Name: description: "Hostname of the host." type: "string" example: "node5.corp.example.com" Labels: description: | User-defined labels (key/value metadata) as set on the daemon. <p><br /></p> > **Note**: When part of a Swarm, nodes can both have _daemon_ labels, > set through the daemon configuration, and _node_ labels, set from a > manager node in the Swarm. Node labels are not included in this > field. Node labels can be retrieved using the `/nodes/(id)` endpoint > on a manager node in the Swarm. type: "array" items: type: "string" example: ["storage=ssd", "production"] ExperimentalBuild: description: | Indicates if experimental features are enabled on the daemon. type: "boolean" example: true ServerVersion: description: | Version string of the daemon. > **Note**: the [standalone Swarm API](https://docs.docker.com/swarm/swarm-api/) > returns the Swarm version instead of the daemon version, for example > `swarm/1.2.8`. type: "string" example: "17.06.0-ce" ClusterStore: description: | URL of the distributed storage backend. The storage backend is used for multihost networking (to store network and endpoint information) and by the node discovery mechanism. <p><br /></p> > **Deprecated**: This field is only propagated when using standalone Swarm > mode, and overlay networking using an external k/v store. Overlay > networks with Swarm mode enabled use the built-in raft store, and > this field will be empty. type: "string" example: "consul://consul.corp.example.com:8600/some/path" ClusterAdvertise: description: | The network endpoint that the Engine advertises for the purpose of node discovery. ClusterAdvertise is a `host:port` combination on which the daemon is reachable by other hosts. 
<p><br /></p> > **Deprecated**: This field is only propagated when using standalone Swarm > mode, and overlay networking using an external k/v store. Overlay > networks with Swarm mode enabled use the built-in raft store, and > this field will be empty. type: "string" example: "node5.corp.example.com:8000" Runtimes: description: | List of [OCI compliant](https://github.com/opencontainers/runtime-spec) runtimes configured on the daemon. Keys hold the "name" used to reference the runtime. The Docker daemon relies on an OCI compliant runtime (invoked via the `containerd` daemon) as its interface to the Linux kernel namespaces, cgroups, and SELinux. The default runtime is `runc`, and automatically configured. Additional runtimes can be configured by the user and will be listed here. type: "object" additionalProperties: $ref: "#/definitions/Runtime" default: runc: path: "runc" example: runc: path: "runc" runc-master: path: "/go/bin/runc" custom: path: "/usr/local/bin/my-oci-runtime" runtimeArgs: ["--debug", "--systemd-cgroup=false"] DefaultRuntime: description: | Name of the default OCI runtime that is used when starting containers. The default can be overridden per-container at create time. type: "string" default: "runc" example: "runc" Swarm: $ref: "#/definitions/SwarmInfo" LiveRestoreEnabled: description: | Indicates if live restore is enabled. If enabled, containers are kept running when the daemon is shutdown or upon daemon start if running containers are detected. type: "boolean" default: false example: false Isolation: description: | Represents the isolation technology to use as a default for containers. The supported values are platform-specific. If no isolation value is specified on daemon start, on Windows client, the default is `hyperv`, and on Windows server, the default is `process`. This option is currently not used on other platforms. default: "default" type: "string" enum: - "default" - "hyperv" - "process" InitBinary: description: | Name and, optional, path of the `docker-init` binary. If the path is omitted, the daemon searches the host's `$PATH` for the binary and uses the first result. type: "string" example: "docker-init" ContainerdCommit: $ref: "#/definitions/Commit" RuncCommit: $ref: "#/definitions/Commit" InitCommit: $ref: "#/definitions/Commit" SecurityOptions: description: | List of security features that are enabled on the daemon, such as apparmor, seccomp, SELinux, user-namespaces (userns), and rootless. Additional configuration options for each security feature may be present, and are included as a comma-separated list of key/value pairs. type: "array" items: type: "string" example: - "name=apparmor" - "name=seccomp,profile=default" - "name=selinux" - "name=userns" - "name=rootless" ProductLicense: description: | Reports a summary of the product license on the daemon. If a commercial license has been applied to the daemon, information such as number of nodes, and expiration are included. type: "string" example: "Community Engine" DefaultAddressPools: description: | List of custom default address pools for local networks, which can be specified in the daemon.json file or dockerd option. Example: a Base "10.10.0.0/16" with Size 24 will define the set of 256 10.10.[0-255].0/24 address pools. 
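# Illustrative sketch (not part of the schema): the Runtimes, DefaultRuntime,
# and DefaultAddressPools values reported above are typically configured in the
# daemon's configuration file. The runtime name and path below are examples,
# and the key names assume the daemon.json format.
#
#   {
#     "default-runtime": "runc",
#     "runtimes": {
#       "custom": { "path": "/usr/local/bin/my-oci-runtime", "runtimeArgs": ["--debug"] }
#     },
#     "default-address-pools": [
#       { "base": "10.10.0.0/16", "size": 24 }
#     ]
#   }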
type: "array" items: type: "object" properties: Base: description: "The network address in CIDR format" type: "string" example: "10.10.0.0/16" Size: description: "The network pool size" type: "integer" example: "24" Warnings: description: | List of warnings / informational messages about missing features, or issues related to the daemon configuration. These messages can be printed by the client as information to the user. type: "array" items: type: "string" example: - "WARNING: No memory limit support" - "WARNING: bridge-nf-call-iptables is disabled" - "WARNING: bridge-nf-call-ip6tables is disabled" # PluginsInfo is a temp struct holding Plugins name # registered with docker daemon. It is used by Info struct PluginsInfo: description: | Available plugins per type. <p><br /></p> > **Note**: Only unmanaged (V1) plugins are included in this list. > V1 plugins are "lazily" loaded, and are not returned in this list > if there is no resource using the plugin. type: "object" properties: Volume: description: "Names of available volume-drivers, and network-driver plugins." type: "array" items: type: "string" example: ["local"] Network: description: "Names of available network-drivers, and network-driver plugins." type: "array" items: type: "string" example: ["bridge", "host", "ipvlan", "macvlan", "null", "overlay"] Authorization: description: "Names of available authorization plugins." type: "array" items: type: "string" example: ["img-authz-plugin", "hbm"] Log: description: "Names of available logging-drivers, and logging-driver plugins." type: "array" items: type: "string" example: ["awslogs", "fluentd", "gcplogs", "gelf", "journald", "json-file", "logentries", "splunk", "syslog"] RegistryServiceConfig: description: | RegistryServiceConfig stores daemon registry services configuration. type: "object" x-nullable: true properties: AllowNondistributableArtifactsCIDRs: description: | List of IP ranges to which nondistributable artifacts can be pushed, using the CIDR syntax [RFC 4632](https://tools.ietf.org/html/4632). Some images (for example, Windows base images) contain artifacts whose distribution is restricted by license. When these images are pushed to a registry, restricted artifacts are not included. This configuration override this behavior, and enables the daemon to push nondistributable artifacts to all registries whose resolved IP address is within the subnet described by the CIDR syntax. This option is useful when pushing images containing nondistributable artifacts to a registry on an air-gapped network so hosts on that network can pull the images without connecting to another server. > **Warning**: Nondistributable artifacts typically have restrictions > on how and where they can be distributed and shared. Only use this > feature to push artifacts to private registries and ensure that you > are in compliance with any terms that cover redistributing > nondistributable artifacts. type: "array" items: type: "string" example: ["::1/128", "127.0.0.0/8"] AllowNondistributableArtifactsHostnames: description: | List of registry hostnames to which nondistributable artifacts can be pushed, using the format `<hostname>[:<port>]` or `<IP address>[:<port>]`. Some images (for example, Windows base images) contain artifacts whose distribution is restricted by license. When these images are pushed to a registry, restricted artifacts are not included. This configuration override this behavior for the specified registries. 
This option is useful when pushing images containing nondistributable artifacts to a registry on an air-gapped network so hosts on that network can pull the images without connecting to another server. > **Warning**: Nondistributable artifacts typically have restrictions > on how and where they can be distributed and shared. Only use this > feature to push artifacts to private registries and ensure that you > are in compliance with any terms that cover redistributing > nondistributable artifacts. type: "array" items: type: "string" example: ["registry.internal.corp.example.com:3000", "[2001:db8:a0b:12f0::1]:443"] InsecureRegistryCIDRs: description: | List of IP ranges of insecure registries, using the CIDR syntax ([RFC 4632](https://tools.ietf.org/html/4632)). Insecure registries accept un-encrypted (HTTP) and/or untrusted (HTTPS with certificates from unknown CAs) communication. By default, local registries (`127.0.0.0/8`) are configured as insecure. All other registries are secure. Communicating with an insecure registry is not possible if the daemon assumes that registry is secure. This configuration override this behavior, insecure communication with registries whose resolved IP address is within the subnet described by the CIDR syntax. Registries can also be marked insecure by hostname. Those registries are listed under `IndexConfigs` and have their `Secure` field set to `false`. > **Warning**: Using this option can be useful when running a local > registry, but introduces security vulnerabilities. This option > should therefore ONLY be used for testing purposes. For increased > security, users should add their CA to their system's list of trusted > CAs instead of enabling this option. type: "array" items: type: "string" example: ["::1/128", "127.0.0.0/8"] IndexConfigs: type: "object" additionalProperties: $ref: "#/definitions/IndexInfo" example: "127.0.0.1:5000": "Name": "127.0.0.1:5000" "Mirrors": [] "Secure": false "Official": false "[2001:db8:a0b:12f0::1]:80": "Name": "[2001:db8:a0b:12f0::1]:80" "Mirrors": [] "Secure": false "Official": false "docker.io": Name: "docker.io" Mirrors: ["https://hub-mirror.corp.example.com:5000/"] Secure: true Official: true "registry.internal.corp.example.com:3000": Name: "registry.internal.corp.example.com:3000" Mirrors: [] Secure: false Official: false Mirrors: description: | List of registry URLs that act as a mirror for the official (`docker.io`) registry. type: "array" items: type: "string" example: - "https://hub-mirror.corp.example.com:5000/" - "https://[2001:db8:a0b:12f0::1]/" IndexInfo: description: IndexInfo contains information about a registry. type: "object" x-nullable: true properties: Name: description: | Name of the registry, such as "docker.io". type: "string" example: "docker.io" Mirrors: description: | List of mirrors, expressed as URIs. type: "array" items: type: "string" example: - "https://hub-mirror.corp.example.com:5000/" - "https://registry-2.docker.io/" - "https://registry-3.docker.io/" Secure: description: | Indicates if the registry is part of the list of insecure registries. If `false`, the registry is insecure. Insecure registries accept un-encrypted (HTTP) and/or untrusted (HTTPS with certificates from unknown CAs) communication. > **Warning**: Insecure registries can be useful when running a local > registry. However, because its use creates security vulnerabilities > it should ONLY be enabled for testing purposes. 
For increased > security, users should add their CA to their system's list of > trusted CAs instead of enabling this option. type: "boolean" example: true Official: description: | Indicates whether this is an official registry (i.e., Docker Hub / docker.io) type: "boolean" example: true Runtime: description: | Runtime describes an [OCI compliant](https://github.com/opencontainers/runtime-spec) runtime. The runtime is invoked by the daemon via the `containerd` daemon. OCI runtimes act as an interface to the Linux kernel namespaces, cgroups, and SELinux. type: "object" properties: path: description: | Name and, optional, path, of the OCI executable binary. If the path is omitted, the daemon searches the host's `$PATH` for the binary and uses the first result. type: "string" example: "/usr/local/bin/my-oci-runtime" runtimeArgs: description: | List of command-line arguments to pass to the runtime when invoked. type: "array" x-nullable: true items: type: "string" example: ["--debug", "--systemd-cgroup=false"] Commit: description: | Commit holds the Git-commit (SHA1) that a binary was built from, as reported in the version-string of external tools, such as `containerd`, or `runC`. type: "object" properties: ID: description: "Actual commit ID of external tool." type: "string" example: "cfb82a876ecc11b5ca0977d1733adbe58599088a" Expected: description: | Commit ID of external tool expected by dockerd as set at build time. type: "string" example: "2d41c047c83e09a6d61d464906feb2a2f3c52aa4" SwarmInfo: description: | Represents generic information about swarm. type: "object" properties: NodeID: description: "Unique identifier of for this node in the swarm." type: "string" default: "" example: "k67qz4598weg5unwwffg6z1m1" NodeAddr: description: | IP address at which this node can be reached by other nodes in the swarm. type: "string" default: "" example: "10.0.0.46" LocalNodeState: $ref: "#/definitions/LocalNodeState" ControlAvailable: type: "boolean" default: false example: true Error: type: "string" default: "" RemoteManagers: description: | List of ID's and addresses of other managers in the swarm. type: "array" default: null x-nullable: true items: $ref: "#/definitions/PeerNode" example: - NodeID: "71izy0goik036k48jg985xnds" Addr: "10.0.0.158:2377" - NodeID: "79y6h1o4gv8n120drcprv5nmc" Addr: "10.0.0.159:2377" - NodeID: "k67qz4598weg5unwwffg6z1m1" Addr: "10.0.0.46:2377" Nodes: description: "Total number of nodes in the swarm." type: "integer" x-nullable: true example: 4 Managers: description: "Total number of managers in the swarm." type: "integer" x-nullable: true example: 3 Cluster: $ref: "#/definitions/ClusterInfo" LocalNodeState: description: "Current local status of this node." type: "string" default: "" enum: - "" - "inactive" - "pending" - "active" - "error" - "locked" example: "active" PeerNode: description: "Represents a peer-node in the swarm" properties: NodeID: description: "Unique identifier of for this node in the swarm." type: "string" Addr: description: | IP address and ports at which this node can be reached. type: "string" NetworkAttachmentConfig: description: | Specifies how a service should be attached to a particular network. type: "object" properties: Target: description: | The target network for attachment. Must be a network name or ID. type: "string" Aliases: description: | Discoverable alternate names for the service on this network. type: "array" items: type: "string" DriverOpts: description: | Driver attachment options for the network target. 
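# Illustrative sketch (not part of the schema): a NetworkAttachmentConfig entry
# as it could appear in a service's Networks list, using only the fields
# defined above. The network name, alias, and driver option are made up for
# the example.
#
#   {
#     "Target": "my-overlay-net",
#     "Aliases": ["db"],
#     "DriverOpts": { "com.example.some-option": "some-value" }
#   }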
type: "object" additionalProperties: type: "string" paths: /containers/json: get: summary: "List containers" description: | Returns a list of containers. For details on the format, see the [inspect endpoint](#operation/ContainerInspect). Note that it uses a different, smaller representation of a container than inspecting a single container. For example, the list of linked containers is not propagated . operationId: "ContainerList" produces: - "application/json" parameters: - name: "all" in: "query" description: | Return all containers. By default, only running containers are shown. type: "boolean" default: false - name: "limit" in: "query" description: | Return this number of most recently created containers, including non-running ones. type: "integer" - name: "size" in: "query" description: | Return the size of container as fields `SizeRw` and `SizeRootFs`. type: "boolean" default: false - name: "filters" in: "query" description: | Filters to process on the container list, encoded as JSON (a `map[string][]string`). For example, `{"status": ["paused"]}` will only return paused containers. Available filters: - `ancestor`=(`<image-name>[:<tag>]`, `<image id>`, or `<image@digest>`) - `before`=(`<container id>` or `<container name>`) - `expose`=(`<port>[/<proto>]`|`<startport-endport>/[<proto>]`) - `exited=<int>` containers with exit code of `<int>` - `health`=(`starting`|`healthy`|`unhealthy`|`none`) - `id=<ID>` a container's ID - `isolation=`(`default`|`process`|`hyperv`) (Windows daemon only) - `is-task=`(`true`|`false`) - `label=key` or `label="key=value"` of a container label - `name=<name>` a container's name - `network`=(`<network id>` or `<network name>`) - `publish`=(`<port>[/<proto>]`|`<startport-endport>/[<proto>]`) - `since`=(`<container id>` or `<container name>`) - `status=`(`created`|`restarting`|`running`|`removing`|`paused`|`exited`|`dead`) - `volume`=(`<volume name>` or `<mount point destination>`) type: "string" responses: 200: description: "no error" schema: $ref: "#/definitions/ContainerSummary" examples: application/json: - Id: "8dfafdbc3a40" Names: - "/boring_feynman" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 1" Created: 1367854155 State: "Exited" Status: "Exit 0" Ports: - PrivatePort: 2222 PublicPort: 3333 Type: "tcp" Labels: com.example.vendor: "Acme" com.example.license: "GPL" com.example.version: "1.0" SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "2cdc4edb1ded3631c81f57966563e5c8525b81121bb3706a9a9a3ae102711f3f" Gateway: "172.17.0.1" IPAddress: "172.17.0.2" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:02" Mounts: - Name: "fac362...80535" Source: "/data" Destination: "/data" Driver: "local" Mode: "ro,Z" RW: false Propagation: "" - Id: "9cd87474be90" Names: - "/coolName" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 222222" Created: 1367854155 State: "Exited" Status: "Exit 0" Ports: [] Labels: {} SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "88eaed7b37b38c2a3f0c4bc796494fdf51b270c2d22656412a2ca5d559a64d7a" Gateway: "172.17.0.1" IPAddress: "172.17.0.8" IPPrefixLen: 16 IPv6Gateway: "" 
GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:08" Mounts: [] - Id: "3176a2479c92" Names: - "/sleepy_dog" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 3333333333333333" Created: 1367854154 State: "Exited" Status: "Exit 0" Ports: [] Labels: {} SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "8b27c041c30326d59cd6e6f510d4f8d1d570a228466f956edf7815508f78e30d" Gateway: "172.17.0.1" IPAddress: "172.17.0.6" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:06" Mounts: [] - Id: "4cb07b47f9fb" Names: - "/running_cat" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 444444444444444444444444444444444" Created: 1367854152 State: "Exited" Status: "Exit 0" Ports: [] Labels: {} SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "d91c7b2f0644403d7ef3095985ea0e2370325cd2332ff3a3225c4247328e66e9" Gateway: "172.17.0.1" IPAddress: "172.17.0.5" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:05" Mounts: [] 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Container"] /containers/create: post: summary: "Create a container" operationId: "ContainerCreate" consumes: - "application/json" - "application/octet-stream" produces: - "application/json" parameters: - name: "name" in: "query" description: | Assign the specified name to the container. Must match `/?[a-zA-Z0-9][a-zA-Z0-9_.-]+`. 
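# Illustrative sketch (not part of the specification): a request to the
# /containers/json endpoint documented above, returning all containers and
# filtering on status. The filters value is shown unencoded for readability;
# in a real request it must be sent as URL-encoded JSON.
#
#   GET /containers/json?all=true&filters={"status":["paused"]} HTTP/1.1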
type: "string" pattern: "^/?[a-zA-Z0-9][a-zA-Z0-9_.-]+$" - name: "body" in: "body" description: "Container to create" schema: allOf: - $ref: "#/definitions/ContainerConfig" - type: "object" properties: HostConfig: $ref: "#/definitions/HostConfig" NetworkingConfig: $ref: "#/definitions/NetworkingConfig" example: Hostname: "" Domainname: "" User: "" AttachStdin: false AttachStdout: true AttachStderr: true Tty: false OpenStdin: false StdinOnce: false Env: - "FOO=bar" - "BAZ=quux" Cmd: - "date" Entrypoint: "" Image: "ubuntu" Labels: com.example.vendor: "Acme" com.example.license: "GPL" com.example.version: "1.0" Volumes: /volumes/data: {} WorkingDir: "" NetworkDisabled: false MacAddress: "12:34:56:78:9a:bc" ExposedPorts: 22/tcp: {} StopSignal: "SIGTERM" StopTimeout: 10 HostConfig: Binds: - "/tmp:/tmp" Links: - "redis3:redis" Memory: 0 MemorySwap: 0 MemoryReservation: 0 KernelMemory: 0 NanoCpus: 500000 CpuPercent: 80 CpuShares: 512 CpuPeriod: 100000 CpuRealtimePeriod: 1000000 CpuRealtimeRuntime: 10000 CpuQuota: 50000 CpusetCpus: "0,1" CpusetMems: "0,1" MaximumIOps: 0 MaximumIOBps: 0 BlkioWeight: 300 BlkioWeightDevice: - {} BlkioDeviceReadBps: - {} BlkioDeviceReadIOps: - {} BlkioDeviceWriteBps: - {} BlkioDeviceWriteIOps: - {} DeviceRequests: - Driver: "nvidia" Count: -1 DeviceIDs": ["0", "1", "GPU-fef8089b-4820-abfc-e83e-94318197576e"] Capabilities: [["gpu", "nvidia", "compute"]] Options: property1: "string" property2: "string" MemorySwappiness: 60 OomKillDisable: false OomScoreAdj: 500 PidMode: "" PidsLimit: 0 PortBindings: 22/tcp: - HostPort: "11022" PublishAllPorts: false Privileged: false ReadonlyRootfs: false Dns: - "8.8.8.8" DnsOptions: - "" DnsSearch: - "" VolumesFrom: - "parent" - "other:ro" CapAdd: - "NET_ADMIN" CapDrop: - "MKNOD" GroupAdd: - "newgroup" RestartPolicy: Name: "" MaximumRetryCount: 0 AutoRemove: true NetworkMode: "bridge" Devices: [] Ulimits: - {} LogConfig: Type: "json-file" Config: {} SecurityOpt: [] StorageOpt: {} CgroupParent: "" VolumeDriver: "" ShmSize: 67108864 NetworkingConfig: EndpointsConfig: isolated_nw: IPAMConfig: IPv4Address: "172.20.30.33" IPv6Address: "2001:db8:abcd::3033" LinkLocalIPs: - "169.254.34.68" - "fe80::3468" Links: - "container_1" - "container_2" Aliases: - "server_x" - "server_y" required: true responses: 201: description: "Container created successfully" schema: type: "object" title: "ContainerCreateResponse" description: "OK response to ContainerCreate operation" required: [Id, Warnings] properties: Id: description: "The ID of the created container" type: "string" x-nullable: false Warnings: description: "Warnings encountered when creating the container" type: "array" x-nullable: false items: type: "string" examples: application/json: Id: "e90e34656806" Warnings: [] 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such image" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such image: c2ada9df5af8" 409: description: "conflict" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Container"] /containers/{id}/json: get: summary: "Inspect a container" description: "Return low-level information about a container." 
operationId: "ContainerInspect" produces: - "application/json" responses: 200: description: "no error" schema: type: "object" title: "ContainerInspectResponse" properties: Id: description: "The ID of the container" type: "string" Created: description: "The time the container was created" type: "string" Path: description: "The path to the command being run" type: "string" Args: description: "The arguments to the command being run" type: "array" items: type: "string" State: x-nullable: true $ref: "#/definitions/ContainerState" Image: description: "The container's image ID" type: "string" ResolvConfPath: type: "string" HostnamePath: type: "string" HostsPath: type: "string" LogPath: type: "string" Name: type: "string" RestartCount: type: "integer" Driver: type: "string" Platform: type: "string" MountLabel: type: "string" ProcessLabel: type: "string" AppArmorProfile: type: "string" ExecIDs: description: "IDs of exec instances that are running in the container." type: "array" items: type: "string" x-nullable: true HostConfig: $ref: "#/definitions/HostConfig" GraphDriver: $ref: "#/definitions/GraphDriverData" SizeRw: description: | The size of files that have been created or changed by this container. type: "integer" format: "int64" SizeRootFs: description: "The total size of all the files in this container." type: "integer" format: "int64" Mounts: type: "array" items: $ref: "#/definitions/MountPoint" Config: $ref: "#/definitions/ContainerConfig" NetworkSettings: $ref: "#/definitions/NetworkSettings" examples: application/json: AppArmorProfile: "" Args: - "-c" - "exit 9" Config: AttachStderr: true AttachStdin: false AttachStdout: true Cmd: - "/bin/sh" - "-c" - "exit 9" Domainname: "" Env: - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" Healthcheck: Test: ["CMD-SHELL", "exit 0"] Hostname: "ba033ac44011" Image: "ubuntu" Labels: com.example.vendor: "Acme" com.example.license: "GPL" com.example.version: "1.0" MacAddress: "" NetworkDisabled: false OpenStdin: false StdinOnce: false Tty: false User: "" Volumes: /volumes/data: {} WorkingDir: "" StopSignal: "SIGTERM" StopTimeout: 10 Created: "2015-01-06T15:47:31.485331387Z" Driver: "devicemapper" ExecIDs: - "b35395de42bc8abd327f9dd65d913b9ba28c74d2f0734eeeae84fa1c616a0fca" - "3fc1232e5cd20c8de182ed81178503dc6437f4e7ef12b52cc5e8de020652f1c4" HostConfig: MaximumIOps: 0 MaximumIOBps: 0 BlkioWeight: 0 BlkioWeightDevice: - {} BlkioDeviceReadBps: - {} BlkioDeviceWriteBps: - {} BlkioDeviceReadIOps: - {} BlkioDeviceWriteIOps: - {} ContainerIDFile: "" CpusetCpus: "" CpusetMems: "" CpuPercent: 80 CpuShares: 0 CpuPeriod: 100000 CpuRealtimePeriod: 1000000 CpuRealtimeRuntime: 10000 Devices: [] DeviceRequests: - Driver: "nvidia" Count: -1 DeviceIDs": ["0", "1", "GPU-fef8089b-4820-abfc-e83e-94318197576e"] Capabilities: [["gpu", "nvidia", "compute"]] Options: property1: "string" property2: "string" IpcMode: "" LxcConf: [] Memory: 0 MemorySwap: 0 MemoryReservation: 0 KernelMemory: 0 OomKillDisable: false OomScoreAdj: 500 NetworkMode: "bridge" PidMode: "" PortBindings: {} Privileged: false ReadonlyRootfs: false PublishAllPorts: false RestartPolicy: MaximumRetryCount: 2 Name: "on-failure" LogConfig: Type: "json-file" Sysctls: net.ipv4.ip_forward: "1" Ulimits: - {} VolumeDriver: "" ShmSize: 67108864 HostnamePath: "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/hostname" HostsPath: "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/hosts" LogPath: 
"/var/lib/docker/containers/1eb5fabf5a03807136561b3c00adcd2992b535d624d5e18b6cdc6a6844d9767b/1eb5fabf5a03807136561b3c00adcd2992b535d624d5e18b6cdc6a6844d9767b-json.log" Id: "ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39" Image: "04c5d3b7b0656168630d3ba35d8889bd0e9caafcaeb3004d2bfbc47e7c5d35d2" MountLabel: "" Name: "/boring_euclid" NetworkSettings: Bridge: "" SandboxID: "" HairpinMode: false LinkLocalIPv6Address: "" LinkLocalIPv6PrefixLen: 0 SandboxKey: "" EndpointID: "" Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 IPAddress: "" IPPrefixLen: 0 IPv6Gateway: "" MacAddress: "" Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "7587b82f0dada3656fda26588aee72630c6fab1536d36e394b2bfbcf898c971d" Gateway: "172.17.0.1" IPAddress: "172.17.0.2" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:12:00:02" Path: "/bin/sh" ProcessLabel: "" ResolvConfPath: "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/resolv.conf" RestartCount: 1 State: Error: "" ExitCode: 9 FinishedAt: "2015-01-06T15:47:32.080254511Z" Health: Status: "healthy" FailingStreak: 0 Log: - Start: "2019-12-22T10:59:05.6385933Z" End: "2019-12-22T10:59:05.8078452Z" ExitCode: 0 Output: "" OOMKilled: false Dead: false Paused: false Pid: 0 Restarting: false Running: true StartedAt: "2015-01-06T15:47:32.072697474Z" Status: "running" Mounts: - Name: "fac362...80535" Source: "/data" Destination: "/data" Driver: "local" Mode: "ro,Z" RW: false Propagation: "" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "size" in: "query" type: "boolean" default: false description: "Return the size of container as fields `SizeRw` and `SizeRootFs`" tags: ["Container"] /containers/{id}/top: get: summary: "List processes running inside a container" description: | On Unix systems, this is done by running the `ps` command. This endpoint is not supported on Windows. operationId: "ContainerTop" responses: 200: description: "no error" schema: type: "object" title: "ContainerTopResponse" description: "OK response to ContainerTop operation" properties: Titles: description: "The ps column titles" type: "array" items: type: "string" Processes: description: | Each process running in the container, where each is process is an array of values corresponding to the titles. type: "array" items: type: "array" items: type: "string" examples: application/json: Titles: - "UID" - "PID" - "PPID" - "C" - "STIME" - "TTY" - "TIME" - "CMD" Processes: - - "root" - "13642" - "882" - "0" - "17:03" - "pts/0" - "00:00:00" - "/bin/bash" - - "root" - "13735" - "13642" - "0" - "17:06" - "pts/0" - "00:00:00" - "sleep 10" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "ps_args" in: "query" description: "The arguments to pass to `ps`. 
For example, `aux`" type: "string" default: "-ef" tags: ["Container"] /containers/{id}/logs: get: summary: "Get container logs" description: | Get `stdout` and `stderr` logs from a container. Note: This endpoint works only for containers with the `json-file` or `journald` logging driver. operationId: "ContainerLogs" responses: 200: description: | logs returned as a stream in response body. For the stream format, [see the documentation for the attach endpoint](#operation/ContainerAttach). Note that unlike the attach endpoint, the logs endpoint does not upgrade the connection and does not set Content-Type. schema: type: "string" format: "binary" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "follow" in: "query" description: "Keep connection after returning logs." type: "boolean" default: false - name: "stdout" in: "query" description: "Return logs from `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Return logs from `stderr`" type: "boolean" default: false - name: "since" in: "query" description: "Only return logs since this time, as a UNIX timestamp" type: "integer" default: 0 - name: "until" in: "query" description: "Only return logs before this time, as a UNIX timestamp" type: "integer" default: 0 - name: "timestamps" in: "query" description: "Add timestamps to every log line" type: "boolean" default: false - name: "tail" in: "query" description: | Only return this number of log lines from the end of the logs. Specify as an integer or `all` to output all log lines. type: "string" default: "all" tags: ["Container"] /containers/{id}/changes: get: summary: "Get changes on a container’s filesystem" description: | Returns which files in a container's filesystem have been added, deleted, or modified. The `Kind` of modification can be one of: - `0`: Modified - `1`: Added - `2`: Deleted operationId: "ContainerChanges" produces: ["application/json"] responses: 200: description: "The list of changes" schema: type: "array" items: type: "object" x-go-name: "ContainerChangeResponseItem" title: "ContainerChangeResponseItem" description: "change item in response to ContainerChanges operation" required: [Path, Kind] properties: Path: description: "Path to file that has changed" type: "string" x-nullable: false Kind: description: "Kind of change" type: "integer" format: "uint8" enum: [0, 1, 2] x-nullable: false examples: application/json: - Path: "/dev" Kind: 0 - Path: "/dev/kmsg" Kind: 1 - Path: "/test" Kind: 1 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/export: get: summary: "Export a container" description: "Export the contents of a container as a tarball." 
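# Illustrative sketch (not part of the specification): a request to the
# /containers/{id}/logs endpoint documented above, returning the last 100 log
# lines from stdout and stderr with timestamps. "e90e34656806" is an example
# container ID.
#
#   GET /containers/e90e34656806/logs?stdout=true&stderr=true&tail=100&timestamps=true HTTP/1.1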
operationId: "ContainerExport" produces: - "application/octet-stream" responses: 200: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/stats: get: summary: "Get container stats based on resource usage" description: | This endpoint returns a live stream of a container’s resource usage statistics. The `precpu_stats` is the CPU statistic of the *previous* read, and is used to calculate the CPU usage percentage. It is not an exact copy of the `cpu_stats` field. If either `precpu_stats.online_cpus` or `cpu_stats.online_cpus` is nil then for compatibility with older daemons the length of the corresponding `cpu_usage.percpu_usage` array should be used. On a cgroup v2 host, the following fields are not set * `blkio_stats`: all fields other than `io_service_bytes_recursive` * `cpu_stats`: `cpu_usage.percpu_usage` * `memory_stats`: `max_usage` and `failcnt` Also, `memory_stats.stats` fields are incompatible with cgroup v1. To calculate the values shown by the `stats` command of the docker cli tool the following formulas can be used: * used_memory = `memory_stats.usage - memory_stats.stats.cache` * available_memory = `memory_stats.limit` * Memory usage % = `(used_memory / available_memory) * 100.0` * cpu_delta = `cpu_stats.cpu_usage.total_usage - precpu_stats.cpu_usage.total_usage` * system_cpu_delta = `cpu_stats.system_cpu_usage - precpu_stats.system_cpu_usage` * number_cpus = `lenght(cpu_stats.cpu_usage.percpu_usage)` or `cpu_stats.online_cpus` * CPU usage % = `(cpu_delta / system_cpu_delta) * number_cpus * 100.0` operationId: "ContainerStats" produces: ["application/json"] responses: 200: description: "no error" schema: type: "object" examples: application/json: read: "2015-01-08T22:57:31.547920715Z" pids_stats: current: 3 networks: eth0: rx_bytes: 5338 rx_dropped: 0 rx_errors: 0 rx_packets: 36 tx_bytes: 648 tx_dropped: 0 tx_errors: 0 tx_packets: 8 eth5: rx_bytes: 4641 rx_dropped: 0 rx_errors: 0 rx_packets: 26 tx_bytes: 690 tx_dropped: 0 tx_errors: 0 tx_packets: 9 memory_stats: stats: total_pgmajfault: 0 cache: 0 mapped_file: 0 total_inactive_file: 0 pgpgout: 414 rss: 6537216 total_mapped_file: 0 writeback: 0 unevictable: 0 pgpgin: 477 total_unevictable: 0 pgmajfault: 0 total_rss: 6537216 total_rss_huge: 6291456 total_writeback: 0 total_inactive_anon: 0 rss_huge: 6291456 hierarchical_memory_limit: 67108864 total_pgfault: 964 total_active_file: 0 active_anon: 6537216 total_active_anon: 6537216 total_pgpgout: 414 total_cache: 0 inactive_anon: 0 active_file: 0 pgfault: 964 inactive_file: 0 total_pgpgin: 477 max_usage: 6651904 usage: 6537216 failcnt: 0 limit: 67108864 blkio_stats: {} cpu_stats: cpu_usage: percpu_usage: - 8646879 - 24472255 - 36438778 - 30657443 usage_in_usermode: 50000000 total_usage: 100215355 usage_in_kernelmode: 30000000 system_cpu_usage: 739306590000000 online_cpus: 4 throttling_data: periods: 0 throttled_periods: 0 throttled_time: 0 precpu_stats: cpu_usage: percpu_usage: - 8646879 - 24350896 - 36438778 - 30657443 usage_in_usermode: 50000000 total_usage: 100093996 usage_in_kernelmode: 30000000 system_cpu_usage: 9492140000000 online_cpus: 4 throttling_data: periods: 0 throttled_periods: 0 throttled_time: 0 404: description: "no such 
container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "stream" in: "query" description: | Stream the output. If false, the stats will be output once and then it will disconnect. type: "boolean" default: true - name: "one-shot" in: "query" description: | Only get a single stat instead of waiting for 2 cycles. Must be used with `stream=false`. type: "boolean" default: false tags: ["Container"] /containers/{id}/resize: post: summary: "Resize a container TTY" description: "Resize the TTY for a container." operationId: "ContainerResize" consumes: - "application/octet-stream" produces: - "text/plain" responses: 200: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "cannot resize container" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "h" in: "query" description: "Height of the TTY session in characters" type: "integer" - name: "w" in: "query" description: "Width of the TTY session in characters" type: "integer" tags: ["Container"] /containers/{id}/start: post: summary: "Start a container" operationId: "ContainerStart" responses: 204: description: "no error" 304: description: "container already started" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "detachKeys" in: "query" description: | Override the key sequence for detaching a container. Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,` or `_`. 
type: "string" tags: ["Container"] /containers/{id}/stop: post: summary: "Stop a container" operationId: "ContainerStop" responses: 204: description: "no error" 304: description: "container already stopped" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "t" in: "query" description: "Number of seconds to wait before killing the container" type: "integer" tags: ["Container"] /containers/{id}/restart: post: summary: "Restart a container" operationId: "ContainerRestart" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "t" in: "query" description: "Number of seconds to wait before killing the container" type: "integer" tags: ["Container"] /containers/{id}/kill: post: summary: "Kill a container" description: | Send a POSIX signal to a container, defaulting to killing to the container. operationId: "ContainerKill" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "container is not running" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "Container d37cde0fe4ad63c3a7252023b2f9800282894247d145cb5933ddf6e52cc03a28 is not running" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "signal" in: "query" description: "Signal to send to the container as an integer or string (e.g. `SIGINT`)" type: "string" default: "SIGKILL" tags: ["Container"] /containers/{id}/update: post: summary: "Update a container" description: | Change various configuration options of a container without having to recreate it. operationId: "ContainerUpdate" consumes: ["application/json"] produces: ["application/json"] responses: 200: description: "The container has been updated." 
schema: type: "object" title: "ContainerUpdateResponse" description: "OK response to ContainerUpdate operation" properties: Warnings: type: "array" items: type: "string" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "update" in: "body" required: true schema: allOf: - $ref: "#/definitions/Resources" - type: "object" properties: RestartPolicy: $ref: "#/definitions/RestartPolicy" example: BlkioWeight: 300 CpuShares: 512 CpuPeriod: 100000 CpuQuota: 50000 CpuRealtimePeriod: 1000000 CpuRealtimeRuntime: 10000 CpusetCpus: "0,1" CpusetMems: "0" Memory: 314572800 MemorySwap: 514288000 MemoryReservation: 209715200 KernelMemory: 52428800 RestartPolicy: MaximumRetryCount: 4 Name: "on-failure" tags: ["Container"] /containers/{id}/rename: post: summary: "Rename a container" operationId: "ContainerRename" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "name already in use" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "name" in: "query" required: true description: "New name for the container" type: "string" tags: ["Container"] /containers/{id}/pause: post: summary: "Pause a container" description: | Use the freezer cgroup to suspend all processes in a container. Traditionally, when suspending a process the `SIGSTOP` signal is used, which is observable by the process being suspended. With the freezer cgroup the process is unaware, and unable to capture, that it is being suspended, and subsequently resumed. operationId: "ContainerPause" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/unpause: post: summary: "Unpause a container" description: "Resume a container which has been paused." operationId: "ContainerUnpause" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/attach: post: summary: "Attach to a container" description: | Attach to a container to read its output or send it input. You can attach to the same container multiple times and you can reattach to containers that have been detached. Either the `stream` or `logs` parameter must be `true` for this endpoint to do anything. See the [documentation for the `docker attach` command](https://docs.docker.com/engine/reference/commandline/attach/) for more details. 
### Hijacking This endpoint hijacks the HTTP connection to transport `stdin`, `stdout`, and `stderr` on the same socket. This is the response from the daemon for an attach request: ``` HTTP/1.1 200 OK Content-Type: application/vnd.docker.raw-stream [STREAM] ``` After the headers and two new lines, the TCP connection can now be used for raw, bidirectional communication between the client and server. To hint potential proxies about connection hijacking, the Docker client can also optionally send connection upgrade headers. For example, the client sends this request to upgrade the connection: ``` POST /containers/16253994b7c4/attach?stream=1&stdout=1 HTTP/1.1 Upgrade: tcp Connection: Upgrade ``` The Docker daemon will respond with a `101 UPGRADED` response, and will similarly follow with the raw stream: ``` HTTP/1.1 101 UPGRADED Content-Type: application/vnd.docker.raw-stream Connection: Upgrade Upgrade: tcp [STREAM] ``` ### Stream format When the TTY setting is disabled in [`POST /containers/create`](#operation/ContainerCreate), the stream over the hijacked connection is multiplexed to separate out `stdout` and `stderr`. The stream consists of a series of frames, each containing a header and a payload. The header contains the information about which stream (`stdout` or `stderr`) the frame belongs to. It also contains the size of the associated frame encoded in the last four bytes (`uint32`). It is encoded on the first eight bytes like this: ```go header := [8]byte{STREAM_TYPE, 0, 0, 0, SIZE1, SIZE2, SIZE3, SIZE4} ``` `STREAM_TYPE` can be: - 0: `stdin` (is written on `stdout`) - 1: `stdout` - 2: `stderr` `SIZE1, SIZE2, SIZE3, SIZE4` are the four bytes of the `uint32` size encoded as big endian. Following the header is the payload, which is the specified number of bytes of `STREAM_TYPE`. The simplest way to implement this protocol is the following: 1. Read 8 bytes. 2. Choose `stdout` or `stderr` depending on the first byte. 3. Extract the frame size from the last four bytes. 4. Read the extracted size and output it on the correct output. 5. Goto 1. ### Stream format when using a TTY When the TTY setting is enabled in [`POST /containers/create`](#operation/ContainerCreate), the stream is not multiplexed. The data exchanged over the hijacked connection is simply the raw data from the process PTY and client's `stdin`. operationId: "ContainerAttach" produces: - "application/vnd.docker.raw-stream" responses: 101: description: "no error, hints proxy about hijacking" 200: description: "no error, no upgrade header found" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "detachKeys" in: "query" description: | Override the key sequence for detaching a container. Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,` or `_`. type: "string" - name: "logs" in: "query" description: | Replay previous logs from the container. This is useful for attaching to a container that has already started, when you want to output everything since the container started. If `stream` is also enabled, once all the previous output has been returned, it will seamlessly transition into streaming current output.
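The five-step loop above maps almost directly onto code. Below is a minimal, illustrative demultiplexer written against the frame layout described here (the daemon ships its own implementation in the `stdcopy` package); it is fed two hand-built frames so it runs standalone.

```go
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
	"io"
	"os"
)

// demux reads the multiplexed attach/logs stream frame by frame:
// an 8-byte header (stream type, three zero bytes, big-endian uint32 size)
// followed by that many payload bytes.
func demux(r io.Reader, stdout, stderr io.Writer) error {
	var header [8]byte
	for {
		if _, err := io.ReadFull(r, header[:]); err != nil {
			if err == io.EOF {
				return nil
			}
			return err
		}
		size := int64(binary.BigEndian.Uint32(header[4:8]))

		var dst io.Writer
		switch header[0] {
		case 0, 1: // stdin (written on stdout) and stdout
			dst = stdout
		case 2: // stderr
			dst = stderr
		default:
			return fmt.Errorf("unknown stream type %d", header[0])
		}
		if _, err := io.CopyN(dst, r, size); err != nil {
			return err
		}
	}
}

func main() {
	// Two hand-built frames: "hello\n" on stdout and "oops\n" on stderr.
	var buf bytes.Buffer
	buf.Write([]byte{1, 0, 0, 0, 0, 0, 0, 6})
	buf.WriteString("hello\n")
	buf.Write([]byte{2, 0, 0, 0, 0, 0, 0, 5})
	buf.WriteString("oops\n")

	if err := demux(&buf, os.Stdout, os.Stderr); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```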
type: "boolean" default: false - name: "stream" in: "query" description: | Stream attached streams from the time the request was made onwards. type: "boolean" default: false - name: "stdin" in: "query" description: "Attach to `stdin`" type: "boolean" default: false - name: "stdout" in: "query" description: "Attach to `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Attach to `stderr`" type: "boolean" default: false tags: ["Container"] /containers/{id}/attach/ws: get: summary: "Attach to a container via a websocket" operationId: "ContainerAttachWebsocket" responses: 101: description: "no error, hints proxy about hijacking" 200: description: "no error, no upgrade header found" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "detachKeys" in: "query" description: | Override the key sequence for detaching a container.Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,`, or `_`. type: "string" - name: "logs" in: "query" description: "Return logs" type: "boolean" default: false - name: "stream" in: "query" description: "Return stream" type: "boolean" default: false - name: "stdin" in: "query" description: "Attach to `stdin`" type: "boolean" default: false - name: "stdout" in: "query" description: "Attach to `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Attach to `stderr`" type: "boolean" default: false tags: ["Container"] /containers/{id}/wait: post: summary: "Wait for a container" description: "Block until a container stops, then returns the exit code." operationId: "ContainerWait" produces: ["application/json"] responses: 200: description: "The container has exit." schema: type: "object" title: "ContainerWaitResponse" description: "OK response to ContainerWait operation" required: [StatusCode] properties: StatusCode: description: "Exit code of the container" type: "integer" x-nullable: false Error: description: "container waiting error, if any" type: "object" properties: Message: description: "Details of an error" type: "string" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "condition" in: "query" description: | Wait until a container state reaches the given condition, either 'not-running' (default), 'next-exit', or 'removed'. type: "string" default: "not-running" tags: ["Container"] /containers/{id}: delete: summary: "Remove a container" operationId: "ContainerDelete" responses: 204: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "conflict" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: | You cannot remove a running container: c2ada9df5af8. 
Stop the container before attempting removal or force remove 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "v" in: "query" description: "Remove anonymous volumes associated with the container." type: "boolean" default: false - name: "force" in: "query" description: "If the container is running, kill it before removing it." type: "boolean" default: false - name: "link" in: "query" description: "Remove the specified link associated with the container." type: "boolean" default: false tags: ["Container"] /containers/{id}/archive: head: summary: "Get information about files in a container" description: | A response header `X-Docker-Container-Path-Stat` is returned, containing a base64 - encoded JSON object with some filesystem header information about the path. operationId: "ContainerArchiveInfo" responses: 200: description: "no error" headers: X-Docker-Container-Path-Stat: type: "string" description: | A base64 - encoded JSON object with some filesystem header information about the path 400: description: "Bad parameter" schema: allOf: - $ref: "#/definitions/ErrorResponse" - type: "object" properties: message: description: | The error message. Either "must specify path parameter" (path cannot be empty) or "not a directory" (path was asserted to be a directory but exists as a file). type: "string" x-nullable: false 404: description: "Container or path does not exist" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "path" in: "query" required: true description: "Resource in the container’s filesystem to archive." type: "string" tags: ["Container"] get: summary: "Get an archive of a filesystem resource in a container" description: "Get a tar archive of a resource in the filesystem of container id." operationId: "ContainerArchive" produces: ["application/x-tar"] responses: 200: description: "no error" 400: description: "Bad parameter" schema: allOf: - $ref: "#/definitions/ErrorResponse" - type: "object" properties: message: description: | The error message. Either "must specify path parameter" (path cannot be empty) or "not a directory" (path was asserted to be a directory but exists as a file). type: "string" x-nullable: false 404: description: "Container or path does not exist" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "path" in: "query" required: true description: "Resource in the container’s filesystem to archive." type: "string" tags: ["Container"] put: summary: "Extract an archive of files or folders to a directory in a container" description: "Upload a tar archive to be extracted to a path in the filesystem of container id." 
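A short sketch of reading the `X-Docker-Container-Path-Stat` header described above follows; the socket path, container name and file path are placeholders, the payload is decoded into a generic map because the individual fields are not enumerated in this section, and standard base64 is assumed for the encoding.

```go
package main

import (
	"context"
	"encoding/base64"
	"encoding/json"
	"fmt"
	"net"
	"net/http"
	"os"
)

func main() {
	cli := &http.Client{Transport: &http.Transport{
		DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
			return (&net.Dialer{}).DialContext(ctx, "unix", "/var/run/docker.sock")
		},
	}}

	// HEAD /containers/{id}/archive?path=... returns only the stat header.
	req, _ := http.NewRequest(http.MethodHead,
		"http://localhost/containers/my-container/archive?path=/etc/hostname", nil)
	resp, err := cli.Do(req)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	resp.Body.Close()

	// The header value is a base64-encoded JSON object describing the path.
	raw, err := base64.StdEncoding.DecodeString(resp.Header.Get("X-Docker-Container-Path-Stat"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var stat map[string]any
	if err := json.Unmarshal(raw, &stat); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("%+v\n", stat)
}
```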
operationId: "PutContainerArchive" consumes: ["application/x-tar", "application/octet-stream"] responses: 200: description: "The content was extracted successfully" 400: description: "Bad parameter" schema: $ref: "#/definitions/ErrorResponse" 403: description: "Permission denied, the volume or container rootfs is marked as read-only." schema: $ref: "#/definitions/ErrorResponse" 404: description: "No such container or path does not exist inside the container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "path" in: "query" required: true description: "Path to a directory in the container to extract the archive’s contents into. " type: "string" - name: "noOverwriteDirNonDir" in: "query" description: | If `1`, `true`, or `True` then it will be an error if unpacking the given content would cause an existing directory to be replaced with a non-directory and vice versa. type: "string" - name: "copyUIDGID" in: "query" description: | If `1`, `true`, then it will copy UID/GID maps to the dest file or dir type: "string" - name: "inputStream" in: "body" required: true description: | The input stream must be a tar archive compressed with one of the following algorithms: `identity` (no compression), `gzip`, `bzip2`, or `xz`. schema: type: "string" format: "binary" tags: ["Container"] /containers/prune: post: summary: "Delete stopped containers" produces: - "application/json" operationId: "ContainerPrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `until=<timestamp>` Prune containers created before this timestamp. The `<timestamp>` can be Unix timestamps, date formatted timestamps, or Go duration strings (e.g. `10m`, `1h30m`) computed relative to the daemon machine’s time. - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune containers with (or without, in case `label!=...` is used) the specified labels. type: "string" responses: 200: description: "No error" schema: type: "object" title: "ContainerPruneResponse" properties: ContainersDeleted: description: "Container IDs that were deleted" type: "array" items: type: "string" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Container"] /images/json: get: summary: "List Images" description: "Returns a list of images on the server. Note that it uses a different, smaller representation of an image than inspecting a single image." 
operationId: "ImageList" produces: - "application/json" responses: 200: description: "Summary image data for the images matching the query" schema: type: "array" items: $ref: "#/definitions/ImageSummary" examples: application/json: - Id: "sha256:e216a057b1cb1efc11f8a268f37ef62083e70b1b38323ba252e25ac88904a7e8" ParentId: "" RepoTags: - "ubuntu:12.04" - "ubuntu:precise" RepoDigests: - "ubuntu@sha256:992069aee4016783df6345315302fa59681aae51a8eeb2f889dea59290f21787" Created: 1474925151 Size: 103579269 VirtualSize: 103579269 SharedSize: 0 Labels: {} Containers: 2 - Id: "sha256:3e314f95dcace0f5e4fd37b10862fe8398e3c60ed36600bc0ca5fda78b087175" ParentId: "" RepoTags: - "ubuntu:12.10" - "ubuntu:quantal" RepoDigests: - "ubuntu@sha256:002fba3e3255af10be97ea26e476692a7ebed0bb074a9ab960b2e7a1526b15d7" - "ubuntu@sha256:68ea0200f0b90df725d99d823905b04cf844f6039ef60c60bf3e019915017bd3" Created: 1403128455 Size: 172064416 VirtualSize: 172064416 SharedSize: 0 Labels: {} Containers: 5 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "all" in: "query" description: "Show all images. Only images from a final layer (no children) are shown by default." type: "boolean" default: false - name: "filters" in: "query" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the images list. Available filters: - `before`=(`<image-name>[:<tag>]`, `<image id>` or `<image@digest>`) - `dangling=true` - `label=key` or `label="key=value"` of an image label - `reference`=(`<image-name>[:<tag>]`) - `since`=(`<image-name>[:<tag>]`, `<image id>` or `<image@digest>`) type: "string" - name: "shared-size" in: "query" description: "Compute and show shared size as a `SharedSize` field on each image." type: "boolean" default: false - name: "digests" in: "query" description: "Show digest information as a `RepoDigests` field on each image." type: "boolean" default: false tags: ["Image"] /build: post: summary: "Build an image" description: | Build an image from a tar archive with a `Dockerfile` in it. The `Dockerfile` specifies how the image is built from the tar archive. It is typically in the archive's root, but can be at a different path or have a different name by specifying the `dockerfile` parameter. [See the `Dockerfile` reference for more information](https://docs.docker.com/engine/reference/builder/). The Docker daemon performs a preliminary validation of the `Dockerfile` before starting the build, and returns an error if the syntax is incorrect. After that, each instruction is run one-by-one until the ID of the new image is output. The build is canceled if the client drops the connection by quitting or being killed. operationId: "ImageBuild" consumes: - "application/octet-stream" produces: - "application/json" parameters: - name: "inputStream" in: "body" description: "A tar archive compressed with one of the following algorithms: identity (no compression), gzip, bzip2, xz." schema: type: "string" format: "binary" - name: "dockerfile" in: "query" description: "Path within the build context to the `Dockerfile`. This is ignored if `remote` is specified and points to an external `Dockerfile`." type: "string" default: "Dockerfile" - name: "t" in: "query" description: "A name and optional tag to apply to the image in the `name:tag` format. If you omit the tag the default `latest` value is assumed. You can provide several `t` parameters." 
type: "string" - name: "extrahosts" in: "query" description: "Extra hosts to add to /etc/hosts" type: "string" - name: "remote" in: "query" description: "A Git repository URI or HTTP/HTTPS context URI. If the URI points to a single text file, the file’s contents are placed into a file called `Dockerfile` and the image is built from that file. If the URI points to a tarball, the file is downloaded by the daemon and the contents therein used as the context for the build. If the URI points to a tarball and the `dockerfile` parameter is also specified, there must be a file with the corresponding path inside the tarball." type: "string" - name: "q" in: "query" description: "Suppress verbose build output." type: "boolean" default: false - name: "nocache" in: "query" description: "Do not use the cache when building the image." type: "boolean" default: false - name: "cachefrom" in: "query" description: "JSON array of images used for build cache resolution." type: "string" - name: "pull" in: "query" description: "Attempt to pull the image even if an older image exists locally." type: "string" - name: "rm" in: "query" description: "Remove intermediate containers after a successful build." type: "boolean" default: true - name: "forcerm" in: "query" description: "Always remove intermediate containers, even upon failure." type: "boolean" default: false - name: "memory" in: "query" description: "Set memory limit for build." type: "integer" - name: "memswap" in: "query" description: "Total memory (memory + swap). Set as `-1` to disable swap." type: "integer" - name: "cpushares" in: "query" description: "CPU shares (relative weight)." type: "integer" - name: "cpusetcpus" in: "query" description: "CPUs in which to allow execution (e.g., `0-3`, `0,1`)." type: "string" - name: "cpuperiod" in: "query" description: "The length of a CPU period in microseconds." type: "integer" - name: "cpuquota" in: "query" description: "Microseconds of CPU time that the container can get in a CPU period." type: "integer" - name: "buildargs" in: "query" description: > JSON map of string pairs for build-time variables. Users pass these values at build-time. Docker uses the buildargs as the environment context for commands run via the `Dockerfile` RUN instruction, or for variable expansion in other `Dockerfile` instructions. This is not meant for passing secret values. For example, the build arg `FOO=bar` would become `{"FOO":"bar"}` in JSON. This would result in the query parameter `buildargs={"FOO":"bar"}`. Note that `{"FOO":"bar"}` should be URI component encoded. [Read more about the buildargs instruction.](https://docs.docker.com/engine/reference/builder/#arg) type: "string" - name: "shmsize" in: "query" description: "Size of `/dev/shm` in bytes. The size must be greater than 0. If omitted the system uses 64MB." type: "integer" - name: "squash" in: "query" description: "Squash the resulting images layers into a single layer. *(Experimental release only.)*" type: "boolean" - name: "labels" in: "query" description: "Arbitrary key/value labels to set on the image, as a JSON map of string pairs." type: "string" - name: "networkmode" in: "query" description: | Sets the networking mode for the run commands during build. Supported standard values are: `bridge`, `host`, `none`, and `container:<name|id>`. Any other value is taken as a custom network's name or ID to which this container should connect to. 
type: "string" - name: "Content-type" in: "header" type: "string" enum: - "application/x-tar" default: "application/x-tar" - name: "X-Registry-Config" in: "header" description: | This is a base64-encoded JSON object with auth configurations for multiple registries that a build may refer to. The key is a registry URL, and the value is an auth configuration object, [as described in the authentication section](#section/Authentication). For example: ``` { "docker.example.com": { "username": "janedoe", "password": "hunter2" }, "https://index.docker.io/v1/": { "username": "mobydock", "password": "conta1n3rize14" } } ``` Only the registry domain name (and port if not the default 443) are required. However, for legacy reasons, the Docker Hub registry must be specified with both a `https://` prefix and a `/v1/` suffix even though Docker will prefer to use the v2 registry API. type: "string" - name: "platform" in: "query" description: "Platform in the format os[/arch[/variant]]" type: "string" default: "" - name: "target" in: "query" description: "Target build stage" type: "string" default: "" - name: "outputs" in: "query" description: "BuildKit output configuration" type: "string" default: "" responses: 200: description: "no error" 400: description: "Bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Image"] /build/prune: post: summary: "Delete builder cache" produces: - "application/json" operationId: "BuildPrune" parameters: - name: "keep-storage" in: "query" description: "Amount of disk space in bytes to keep for cache" type: "integer" format: "int64" - name: "all" in: "query" type: "boolean" description: "Remove all types of build cache" - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the list of build cache objects. Available filters: - `until=<duration>`: duration relative to daemon's time, during which build cache was not used, in Go's duration format (e.g., '24h') - `id=<id>` - `parent=<id>` - `type=<string>` - `description=<string>` - `inuse` - `shared` - `private` responses: 200: description: "No error" schema: type: "object" title: "BuildPruneResponse" properties: CachesDeleted: type: "array" items: description: "ID of build cache object" type: "string" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Image"] /images/create: post: summary: "Create an image" description: "Create an image by either pulling it from a registry or importing it." operationId: "ImageCreate" consumes: - "text/plain" - "application/octet-stream" produces: - "application/json" responses: 200: description: "no error" 404: description: "repository does not exist or no read access" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "fromImage" in: "query" description: "Name of the image to pull. The name may include a tag or digest. This parameter may only be used when pulling an image. The pull is cancelled if the HTTP connection is closed." type: "string" - name: "fromSrc" in: "query" description: "Source to import. The value may be a URL from which the image can be retrieved or `-` to read the image from the request body. This parameter may only be used when importing an image." 
type: "string" - name: "repo" in: "query" description: "Repository name given to an image when it is imported. The repo may include a tag. This parameter may only be used when importing an image." type: "string" - name: "tag" in: "query" description: "Tag or digest. If empty when pulling an image, this causes all tags for the given image to be pulled." type: "string" - name: "message" in: "query" description: "Set commit message for imported image." type: "string" - name: "inputImage" in: "body" description: "Image content if the value `-` has been specified in fromSrc query parameter" schema: type: "string" required: false - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration. Refer to the [authentication section](#section/Authentication) for details. type: "string" - name: "platform" in: "query" description: "Platform in the format os[/arch[/variant]]" type: "string" default: "" tags: ["Image"] /images/{name}/json: get: summary: "Inspect an image" description: "Return low-level information about an image." operationId: "ImageInspect" produces: - "application/json" responses: 200: description: "No error" schema: $ref: "#/definitions/Image" examples: application/json: Id: "sha256:85f05633ddc1c50679be2b16a0479ab6f7637f8884e0cfe0f4d20e1ebb3d6e7c" Container: "cb91e48a60d01f1e27028b4fc6819f4f290b3cf12496c8176ec714d0d390984a" Comment: "" Os: "linux" Architecture: "amd64" Parent: "sha256:91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c" ContainerConfig: Tty: false Hostname: "e611e15f9c9d" Domainname: "" AttachStdout: false PublishService: "" AttachStdin: false OpenStdin: false StdinOnce: false NetworkDisabled: false OnBuild: [] Image: "91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c" User: "" WorkingDir: "" MacAddress: "" AttachStderr: false Labels: com.example.license: "GPL" com.example.version: "1.0" com.example.vendor: "Acme" Env: - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" Cmd: - "/bin/sh" - "-c" - "#(nop) LABEL com.example.vendor=Acme com.example.license=GPL com.example.version=1.0" DockerVersion: "1.9.0-dev" VirtualSize: 188359297 Size: 0 Author: "" Created: "2015-09-10T08:30:53.26995814Z" GraphDriver: Name: "aufs" Data: {} RepoDigests: - "localhost:5000/test/busybox/example@sha256:cbbf2f9a99b47fc460d422812b6a5adff7dfee951d8fa2e4a98caa0382cfbdbf" RepoTags: - "example:1.0" - "example:latest" - "example:stable" Config: Image: "91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c" NetworkDisabled: false OnBuild: [] StdinOnce: false PublishService: "" AttachStdin: false OpenStdin: false Domainname: "" AttachStdout: false Tty: false Hostname: "e611e15f9c9d" Cmd: - "/bin/bash" Env: - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" Labels: com.example.vendor: "Acme" com.example.version: "1.0" com.example.license: "GPL" MacAddress: "" AttachStderr: false WorkingDir: "" User: "" RootFS: Type: "layers" Layers: - "sha256:1834950e52ce4d5a88a1bbd131c537f4d0e56d10ff0dd69e66be3b7dfa9df7e6" - "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such image: someimage (tag: latest)" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or id" type: "string" required: true tags: ["Image"] /images/{name}/history: get: summary: "Get the history of an image" 
description: "Return parent layers of an image." operationId: "ImageHistory" produces: ["application/json"] responses: 200: description: "List of image layers" schema: type: "array" items: type: "object" x-go-name: HistoryResponseItem title: "HistoryResponseItem" description: "individual image layer information in response to ImageHistory operation" required: [Id, Created, CreatedBy, Tags, Size, Comment] properties: Id: type: "string" x-nullable: false Created: type: "integer" format: "int64" x-nullable: false CreatedBy: type: "string" x-nullable: false Tags: type: "array" items: type: "string" Size: type: "integer" format: "int64" x-nullable: false Comment: type: "string" x-nullable: false examples: application/json: - Id: "3db9c44f45209632d6050b35958829c3a2aa256d81b9a7be45b362ff85c54710" Created: 1398108230 CreatedBy: "/bin/sh -c #(nop) ADD file:eb15dbd63394e063b805a3c32ca7bf0266ef64676d5a6fab4801f2e81e2a5148 in /" Tags: - "ubuntu:lucid" - "ubuntu:10.04" Size: 182964289 Comment: "" - Id: "6cfa4d1f33fb861d4d114f43b25abd0ac737509268065cdfd69d544a59c85ab8" Created: 1398108222 CreatedBy: "/bin/sh -c #(nop) MAINTAINER Tianon Gravi <[email protected]> - mkimage-debootstrap.sh -i iproute,iputils-ping,ubuntu-minimal -t lucid.tar.xz lucid http://archive.ubuntu.com/ubuntu/" Tags: [] Size: 0 Comment: "" - Id: "511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158" Created: 1371157430 CreatedBy: "" Tags: - "scratch12:latest" - "scratch:latest" Size: 0 Comment: "Imported from -" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID" type: "string" required: true tags: ["Image"] /images/{name}/push: post: summary: "Push an image" description: | Push an image to a registry. If you wish to push an image on to a private registry, that image must already have a tag which references the registry. For example, `registry.example.com/myimage:latest`. The push is cancelled if the HTTP connection is closed. operationId: "ImagePush" consumes: - "application/octet-stream" responses: 200: description: "No error" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID." type: "string" required: true - name: "tag" in: "query" description: "The tag to associate with the image on the registry." type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration. Refer to the [authentication section](#section/Authentication) for details. type: "string" required: true tags: ["Image"] /images/{name}/tag: post: summary: "Tag an image" description: "Tag an image so that it becomes part of a repository." operationId: "ImageTag" responses: 201: description: "No error" 400: description: "Bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Conflict" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID to tag." type: "string" required: true - name: "repo" in: "query" description: "The repository to tag in. For example, `someuser/someimage`." 
type: "string" - name: "tag" in: "query" description: "The name of the new tag." type: "string" tags: ["Image"] /images/{name}: delete: summary: "Remove an image" description: | Remove an image, along with any untagged parent images that were referenced by that image. Images can't be removed if they have descendant images, are being used by a running container or are being used by a build. operationId: "ImageDelete" produces: ["application/json"] responses: 200: description: "The image was deleted successfully" schema: type: "array" items: $ref: "#/definitions/ImageDeleteResponseItem" examples: application/json: - Untagged: "3e2f21a89f" - Deleted: "3e2f21a89f" - Deleted: "53b4f83ac9" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Conflict" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID" type: "string" required: true - name: "force" in: "query" description: "Remove the image even if it is being used by stopped containers or has other tags" type: "boolean" default: false - name: "noprune" in: "query" description: "Do not delete untagged parent images" type: "boolean" default: false tags: ["Image"] /images/search: get: summary: "Search images" description: "Search for an image on Docker Hub." operationId: "ImageSearch" produces: - "application/json" responses: 200: description: "No error" schema: type: "array" items: type: "object" title: "ImageSearchResponseItem" properties: description: type: "string" is_official: type: "boolean" is_automated: type: "boolean" name: type: "string" star_count: type: "integer" examples: application/json: - description: "" is_official: false is_automated: false name: "wma55/u1210sshd" star_count: 0 - description: "" is_official: false is_automated: false name: "jdswinbank/sshd" star_count: 0 - description: "" is_official: false is_automated: false name: "vgauthier/sshd" star_count: 0 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "term" in: "query" description: "Term to search" type: "string" required: true - name: "limit" in: "query" description: "Maximum number of results to return" type: "integer" - name: "filters" in: "query" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the images list. Available filters: - `is-automated=(true|false)` - `is-official=(true|false)` - `stars=<number>` Matches images that has at least 'number' stars. type: "string" tags: ["Image"] /images/prune: post: summary: "Delete unused images" produces: - "application/json" operationId: "ImagePrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `dangling=<boolean>` When set to `true` (or `1`), prune only unused *and* untagged images. When set to `false` (or `0`), all unused images are pruned. - `until=<string>` Prune images created before this timestamp. The `<timestamp>` can be Unix timestamps, date formatted timestamps, or Go duration strings (e.g. `10m`, `1h30m`) computed relative to the daemon machine’s time. - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune images with (or without, in case `label!=...` is used) the specified labels. 
type: "string" responses: 200: description: "No error" schema: type: "object" title: "ImagePruneResponse" properties: ImagesDeleted: description: "Images that were deleted" type: "array" items: $ref: "#/definitions/ImageDeleteResponseItem" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Image"] /auth: post: summary: "Check auth configuration" description: | Validate credentials for a registry and, if available, get an identity token for accessing the registry without password. operationId: "SystemAuth" consumes: ["application/json"] produces: ["application/json"] responses: 200: description: "An identity token was generated successfully." schema: type: "object" title: "SystemAuthResponse" required: [Status] properties: Status: description: "The status of the authentication" type: "string" x-nullable: false IdentityToken: description: "An opaque token used to authenticate a user after a successful login" type: "string" x-nullable: false examples: application/json: Status: "Login Succeeded" IdentityToken: "9cbaf023786cd7..." 204: description: "No error" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "authConfig" in: "body" description: "Authentication to check" schema: $ref: "#/definitions/AuthConfig" tags: ["System"] /info: get: summary: "Get system information" operationId: "SystemInfo" produces: - "application/json" responses: 200: description: "No error" schema: $ref: "#/definitions/SystemInfo" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /version: get: summary: "Get version" description: "Returns the version of Docker that is running and various information about the system that Docker is running on." operationId: "SystemVersion" produces: ["application/json"] responses: 200: description: "no error" schema: $ref: "#/definitions/SystemVersion" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /_ping: get: summary: "Ping" description: "This is a dummy endpoint you can use to test if the server is accessible." operationId: "SystemPing" produces: ["text/plain"] responses: 200: description: "no error" schema: type: "string" example: "OK" headers: API-Version: type: "string" description: "Max API Version the server supports" Builder-Version: type: "string" description: "Default version of docker image builder" Docker-Experimental: type: "boolean" description: "If the server is running with experimental mode enabled" Cache-Control: type: "string" default: "no-cache, no-store, must-revalidate" Pragma: type: "string" default: "no-cache" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" headers: Cache-Control: type: "string" default: "no-cache, no-store, must-revalidate" Pragma: type: "string" default: "no-cache" tags: ["System"] head: summary: "Ping" description: "This is a dummy endpoint you can use to test if the server is accessible." 
operationId: "SystemPingHead" produces: ["text/plain"] responses: 200: description: "no error" schema: type: "string" example: "(empty)" headers: API-Version: type: "string" description: "Max API Version the server supports" Builder-Version: type: "string" description: "Default version of docker image builder" Docker-Experimental: type: "boolean" description: "If the server is running with experimental mode enabled" Cache-Control: type: "string" default: "no-cache, no-store, must-revalidate" Pragma: type: "string" default: "no-cache" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /commit: post: summary: "Create a new image from a container" operationId: "ImageCommit" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "containerConfig" in: "body" description: "The container configuration" schema: $ref: "#/definitions/ContainerConfig" - name: "container" in: "query" description: "The ID or name of the container to commit" type: "string" - name: "repo" in: "query" description: "Repository name for the created image" type: "string" - name: "tag" in: "query" description: "Tag name for the create image" type: "string" - name: "comment" in: "query" description: "Commit message" type: "string" - name: "author" in: "query" description: "Author of the image (e.g., `John Hannibal Smith <[email protected]>`)" type: "string" - name: "pause" in: "query" description: "Whether to pause the container before committing" type: "boolean" default: true - name: "changes" in: "query" description: "`Dockerfile` instructions to apply while committing" type: "string" tags: ["Image"] /events: get: summary: "Monitor events" description: | Stream real-time events from the server. Various objects within Docker report events when something happens to them. 
Containers report these events: `attach`, `commit`, `copy`, `create`, `destroy`, `detach`, `die`, `exec_create`, `exec_detach`, `exec_start`, `exec_die`, `export`, `health_status`, `kill`, `oom`, `pause`, `rename`, `resize`, `restart`, `start`, `stop`, `top`, `unpause`, `update`, and `prune` Images report these events: `delete`, `import`, `load`, `pull`, `push`, `save`, `tag`, `untag`, and `prune` Volumes report these events: `create`, `mount`, `unmount`, `destroy`, and `prune` Networks report these events: `create`, `connect`, `disconnect`, `destroy`, `update`, `remove`, and `prune` The Docker daemon reports these events: `reload` Services report these events: `create`, `update`, and `remove` Nodes report these events: `create`, `update`, and `remove` Secrets report these events: `create`, `update`, and `remove` Configs report these events: `create`, `update`, and `remove` The Builder reports `prune` events operationId: "SystemEvents" produces: - "application/json" responses: 200: description: "no error" schema: type: "object" title: "SystemEventsResponse" properties: Type: description: "The type of object emitting the event" type: "string" Action: description: "The type of event" type: "string" Actor: type: "object" properties: ID: description: "The ID of the object emitting the event" type: "string" Attributes: description: "Various key/value attributes of the object, depending on its type" type: "object" additionalProperties: type: "string" time: description: "Timestamp of event" type: "integer" timeNano: description: "Timestamp of event, with nanosecond accuracy" type: "integer" format: "int64" examples: application/json: Type: "container" Action: "create" Actor: ID: "ede54ee1afda366ab42f824e8a5ffd195155d853ceaec74a927f249ea270c743" Attributes: com.example.some-label: "some-label-value" image: "alpine" name: "my-container" time: 1461943101 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "since" in: "query" description: "Show events created since this timestamp then stream new events." type: "string" - name: "until" in: "query" description: "Show events created until this timestamp then stop streaming." type: "string" - name: "filters" in: "query" description: | A JSON encoded value of filters (a `map[string][]string`) to process on the event list. 
Available filters: - `config=<string>` config name or ID - `container=<string>` container name or ID - `daemon=<string>` daemon name or ID - `event=<string>` event type - `image=<string>` image name or ID - `label=<string>` image or container label - `network=<string>` network name or ID - `node=<string>` node ID - `plugin`=<string> plugin name or ID - `scope`=<string> local or swarm - `secret=<string>` secret name or ID - `service=<string>` service name or ID - `type=<string>` object to filter by, one of `container`, `image`, `volume`, `network`, `daemon`, `plugin`, `node`, `service`, `secret` or `config` - `volume=<string>` volume name type: "string" tags: ["System"] /system/df: get: summary: "Get data usage information" operationId: "SystemDataUsage" responses: 200: description: "no error" schema: type: "object" title: "SystemDataUsageResponse" properties: LayersSize: type: "integer" format: "int64" Images: type: "array" items: $ref: "#/definitions/ImageSummary" Containers: type: "array" items: $ref: "#/definitions/ContainerSummary" Volumes: type: "array" items: $ref: "#/definitions/Volume" BuildCache: type: "array" items: $ref: "#/definitions/BuildCache" example: LayersSize: 1092588 Images: - Id: "sha256:2b8fd9751c4c0f5dd266fcae00707e67a2545ef34f9a29354585f93dac906749" ParentId: "" RepoTags: - "busybox:latest" RepoDigests: - "busybox@sha256:a59906e33509d14c036c8678d687bd4eec81ed7c4b8ce907b888c607f6a1e0e6" Created: 1466724217 Size: 1092588 SharedSize: 0 VirtualSize: 1092588 Labels: {} Containers: 1 Containers: - Id: "e575172ed11dc01bfce087fb27bee502db149e1a0fad7c296ad300bbff178148" Names: - "/top" Image: "busybox" ImageID: "sha256:2b8fd9751c4c0f5dd266fcae00707e67a2545ef34f9a29354585f93dac906749" Command: "top" Created: 1472592424 Ports: [] SizeRootFs: 1092588 Labels: {} State: "exited" Status: "Exited (0) 56 minutes ago" HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: IPAMConfig: null Links: null Aliases: null NetworkID: "d687bc59335f0e5c9ee8193e5612e8aee000c8c62ea170cfb99c098f95899d92" EndpointID: "8ed5115aeaad9abb174f68dcf135b49f11daf597678315231a32ca28441dec6a" Gateway: "172.18.0.1" IPAddress: "172.18.0.2" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:12:00:02" Mounts: [] Volumes: - Name: "my-volume" Driver: "local" Mountpoint: "/var/lib/docker/volumes/my-volume/_data" Labels: null Scope: "local" Options: null UsageData: Size: 10920104 RefCount: 2 BuildCache: - ID: "hw53o5aio51xtltp5xjp8v7fx" Parent: "" Type: "regular" Description: "pulled from docker.io/library/debian@sha256:234cb88d3020898631af0ccbbcca9a66ae7306ecd30c9720690858c1b007d2a0" InUse: false Shared: true Size: 0 CreatedAt: "2021-06-28T13:31:01.474619385Z" LastUsedAt: "2021-07-07T22:02:32.738075951Z" UsageCount: 26 - ID: "ndlpt0hhvkqcdfkputsk4cq9c" Parent: "hw53o5aio51xtltp5xjp8v7fx" Type: "regular" Description: "mount / from exec /bin/sh -c echo 'Binary::apt::APT::Keep-Downloaded-Packages \"true\";' > /etc/apt/apt.conf.d/keep-cache" InUse: false Shared: true Size: 51 CreatedAt: "2021-06-28T13:31:03.002625487Z" LastUsedAt: "2021-07-07T22:02:32.773909517Z" UsageCount: 26 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "type" in: "query" description: | Object types, for which to compute and return data. 
type: "array" collectionFormat: multi items: type: "string" enum: ["container", "image", "volume", "build-cache"] tags: ["System"] /images/{name}/get: get: summary: "Export an image" description: | Get a tarball containing all images and metadata for a repository. If `name` is a specific name and tag (e.g. `ubuntu:latest`), then only that image (and its parents) are returned. If `name` is an image ID, similarly only that image (and its parents) are returned, but with the exclusion of the `repositories` file in the tarball, as there were no image names referenced. ### Image tarball format An image tarball contains one directory per image layer (named using its long ID), each containing these files: - `VERSION`: currently `1.0` - the file format version - `json`: detailed layer information, similar to `docker inspect layer_id` - `layer.tar`: A tarfile containing the filesystem changes in this layer The `layer.tar` file contains `aufs` style `.wh..wh.aufs` files and directories for storing attribute changes and deletions. If the tarball defines a repository, the tarball should also include a `repositories` file at the root that contains a list of repository and tag names mapped to layer IDs. ```json { "hello-world": { "latest": "565a9d68a73f6706862bfe8409a7f659776d4d60a8d096eb4a3cbce6999cc2a1" } } ``` operationId: "ImageGet" produces: - "application/x-tar" responses: 200: description: "no error" schema: type: "string" format: "binary" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID" type: "string" required: true tags: ["Image"] /images/get: get: summary: "Export several images" description: | Get a tarball containing all images and metadata for several image repositories. For each value of the `names` parameter: if it is a specific name and tag (e.g. `ubuntu:latest`), then only that image (and its parents) are returned; if it is an image ID, similarly only that image (and its parents) are returned and there would be no names referenced in the 'repositories' file for this image ID. For details on the format, see the [export image endpoint](#operation/ImageGet). operationId: "ImageGetAll" produces: - "application/x-tar" responses: 200: description: "no error" schema: type: "string" format: "binary" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "names" in: "query" description: "Image names to filter by" type: "array" items: type: "string" tags: ["Image"] /images/load: post: summary: "Import images" description: | Load a set of images and tags into a repository. For details on the format, see the [export image endpoint](#operation/ImageGet). operationId: "ImageLoad" consumes: - "application/x-tar" produces: - "application/json" responses: 200: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "imagesTarball" in: "body" description: "Tar archive containing images" schema: type: "string" format: "binary" - name: "quiet" in: "query" description: "Suppress progress details during load." type: "boolean" default: false tags: ["Image"] /containers/{id}/exec: post: summary: "Create an exec instance" description: "Run a command inside a running container." 
operationId: "ContainerExec" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "container is paused" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "execConfig" in: "body" description: "Exec configuration" schema: type: "object" properties: AttachStdin: type: "boolean" description: "Attach to `stdin` of the exec command." AttachStdout: type: "boolean" description: "Attach to `stdout` of the exec command." AttachStderr: type: "boolean" description: "Attach to `stderr` of the exec command." DetachKeys: type: "string" description: | Override the key sequence for detaching a container. Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,` or `_`. Tty: type: "boolean" description: "Allocate a pseudo-TTY." Env: description: | A list of environment variables in the form `["VAR=value", ...]`. type: "array" items: type: "string" Cmd: type: "array" description: "Command to run, as a string or array of strings." items: type: "string" Privileged: type: "boolean" description: "Runs the exec process with extended privileges." default: false User: type: "string" description: | The user, and optionally, group to run the exec process inside the container. Format is one of: `user`, `user:group`, `uid`, or `uid:gid`. WorkingDir: type: "string" description: | The working directory for the exec process inside the container. example: AttachStdin: false AttachStdout: true AttachStderr: true DetachKeys: "ctrl-p,ctrl-q" Tty: false Cmd: - "date" Env: - "FOO=bar" - "BAZ=quux" required: true - name: "id" in: "path" description: "ID or name of container" type: "string" required: true tags: ["Exec"] /exec/{id}/start: post: summary: "Start an exec instance" description: | Starts a previously set up exec instance. If detach is true, this endpoint returns immediately after starting the command. Otherwise, it sets up an interactive session with the command. operationId: "ExecStart" consumes: - "application/json" produces: - "application/vnd.docker.raw-stream" responses: 200: description: "No error" 404: description: "No such exec instance" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Container is stopped or paused" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "execStartConfig" in: "body" schema: type: "object" properties: Detach: type: "boolean" description: "Detach from the command." Tty: type: "boolean" description: "Allocate a pseudo-TTY." example: Detach: false Tty: false - name: "id" in: "path" description: "Exec instance ID" required: true type: "string" tags: ["Exec"] /exec/{id}/resize: post: summary: "Resize an exec instance" description: | Resize the TTY session used by an exec instance. This endpoint only works if `tty` was specified as part of creating and starting the exec instance. 
operationId: "ExecResize" responses: 201: description: "No error" 404: description: "No such exec instance" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Exec instance ID" required: true type: "string" - name: "h" in: "query" description: "Height of the TTY session in characters" type: "integer" - name: "w" in: "query" description: "Width of the TTY session in characters" type: "integer" tags: ["Exec"] /exec/{id}/json: get: summary: "Inspect an exec instance" description: "Return low-level information about an exec instance." operationId: "ExecInspect" produces: - "application/json" responses: 200: description: "No error" schema: type: "object" title: "ExecInspectResponse" properties: CanRemove: type: "boolean" DetachKeys: type: "string" ID: type: "string" Running: type: "boolean" ExitCode: type: "integer" ProcessConfig: $ref: "#/definitions/ProcessConfig" OpenStdin: type: "boolean" OpenStderr: type: "boolean" OpenStdout: type: "boolean" ContainerID: type: "string" Pid: type: "integer" description: "The system process ID for the exec process." examples: application/json: CanRemove: false ContainerID: "b53ee82b53a40c7dca428523e34f741f3abc51d9f297a14ff874bf761b995126" DetachKeys: "" ExitCode: 2 ID: "f33bbfb39f5b142420f4759b2348913bd4a8d1a6d7fd56499cb41a1bb91d7b3b" OpenStderr: true OpenStdin: true OpenStdout: true ProcessConfig: arguments: - "-c" - "exit 2" entrypoint: "sh" privileged: false tty: true user: "1000" Running: false Pid: 42000 404: description: "No such exec instance" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Exec instance ID" required: true type: "string" tags: ["Exec"] /volumes: get: summary: "List volumes" operationId: "VolumeList" produces: ["application/json"] responses: 200: description: "Summary volume data that matches the query" schema: type: "object" title: "VolumeListResponse" description: "Volume list response" required: [Volumes, Warnings] properties: Volumes: type: "array" x-nullable: false description: "List of volumes" items: $ref: "#/definitions/Volume" Warnings: type: "array" x-nullable: false description: | Warnings that occurred when fetching the list of volumes. items: type: "string" examples: application/json: Volumes: - CreatedAt: "2017-07-19T12:00:26Z" Name: "tardis" Driver: "local" Mountpoint: "/var/lib/docker/volumes/tardis" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Scope: "local" Options: device: "tmpfs" o: "size=100m,uid=1000" type: "tmpfs" Warnings: [] 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" description: | JSON encoded value of the filters (a `map[string][]string`) to process on the volumes list. Available filters: - `dangling=<boolean>` When set to `true` (or `1`), returns all volumes that are not in use by a container. When set to `false` (or `0`), only volumes that are in use by one or more containers are returned. - `driver=<volume-driver-name>` Matches volumes based on their driver. - `label=<key>` or `label=<key>:<value>` Matches volumes based on the presence of a `label` alone or a `label` and a value. - `name=<volume-name>` Matches all or part of a volume name. 
type: "string" format: "json" tags: ["Volume"] /volumes/create: post: summary: "Create a volume" operationId: "VolumeCreate" consumes: ["application/json"] produces: ["application/json"] responses: 201: description: "The volume was created successfully" schema: $ref: "#/definitions/Volume" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "volumeConfig" in: "body" required: true description: "Volume configuration" schema: type: "object" description: "Volume configuration" title: "VolumeConfig" properties: Name: description: | The new volume's name. If not specified, Docker generates a name. type: "string" x-nullable: false Driver: description: "Name of the volume driver to use." type: "string" default: "local" x-nullable: false DriverOpts: description: | A mapping of driver options and values. These options are passed directly to the driver and are driver specific. type: "object" additionalProperties: type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" example: Name: "tardis" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Driver: "custom" tags: ["Volume"] /volumes/{name}: get: summary: "Inspect a volume" operationId: "VolumeInspect" produces: ["application/json"] responses: 200: description: "No error" schema: $ref: "#/definitions/Volume" 404: description: "No such volume" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" required: true description: "Volume name or ID" type: "string" tags: ["Volume"] delete: summary: "Remove a volume" description: "Instruct the driver to remove the volume." operationId: "VolumeDelete" responses: 204: description: "The volume was removed" 404: description: "No such volume or volume driver" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Volume is in use and cannot be removed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" required: true description: "Volume name or ID" type: "string" - name: "force" in: "query" description: "Force the removal of the volume" type: "boolean" default: false tags: ["Volume"] /volumes/prune: post: summary: "Delete unused volumes" produces: - "application/json" operationId: "VolumePrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune volumes with (or without, in case `label!=...` is used) the specified labels. type: "string" responses: 200: description: "No error" schema: type: "object" title: "VolumePruneResponse" properties: VolumesDeleted: description: "Volumes that were deleted" type: "array" items: type: "string" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Volume"] /networks: get: summary: "List networks" description: | Returns a list of networks. For details on the format, see the [network inspect endpoint](#operation/NetworkInspect). Note that it uses a different, smaller representation of a network than inspecting a single network. 
For example, the list of containers attached to the network is not propagated in API versions 1.28 and up. operationId: "NetworkList" produces: - "application/json" responses: 200: description: "No error" schema: type: "array" items: $ref: "#/definitions/Network" examples: application/json: - Name: "bridge" Id: "f2de39df4171b0dc801e8002d1d999b77256983dfc63041c0f34030aa3977566" Created: "2016-10-19T06:21:00.416543526Z" Scope: "local" Driver: "bridge" EnableIPv6: false Internal: false Attachable: false Ingress: false IPAM: Driver: "default" Config: - Subnet: "172.17.0.0/16" Options: com.docker.network.bridge.default_bridge: "true" com.docker.network.bridge.enable_icc: "true" com.docker.network.bridge.enable_ip_masquerade: "true" com.docker.network.bridge.host_binding_ipv4: "0.0.0.0" com.docker.network.bridge.name: "docker0" com.docker.network.driver.mtu: "1500" - Name: "none" Id: "e086a3893b05ab69242d3c44e49483a3bbbd3a26b46baa8f61ab797c1088d794" Created: "0001-01-01T00:00:00Z" Scope: "local" Driver: "null" EnableIPv6: false Internal: false Attachable: false Ingress: false IPAM: Driver: "default" Config: [] Containers: {} Options: {} - Name: "host" Id: "13e871235c677f196c4e1ecebb9dc733b9b2d2ab589e30c539efeda84a24215e" Created: "0001-01-01T00:00:00Z" Scope: "local" Driver: "host" EnableIPv6: false Internal: false Attachable: false Ingress: false IPAM: Driver: "default" Config: [] Containers: {} Options: {} 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" description: | JSON encoded value of the filters (a `map[string][]string`) to process on the networks list. Available filters: - `dangling=<boolean>` When set to `true` (or `1`), returns all networks that are not in use by a container. When set to `false` (or `0`), only networks that are in use by one or more containers are returned. - `driver=<driver-name>` Matches a network's driver. - `id=<network-id>` Matches all or part of a network ID. - `label=<key>` or `label=<key>=<value>` of a network label. - `name=<network-name>` Matches all or part of a network name. - `scope=["swarm"|"global"|"local"]` Filters networks by scope (`swarm`, `global`, or `local`). - `type=["custom"|"builtin"]` Filters networks by type. The `custom` keyword returns all user-defined networks. 
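The same `filters` convention applies to the network list: the value is a JSON-encoded `map[string][]string` and must be URL-escaped when placed in the query string. A small illustrative Go sketch (default Unix socket assumed) that lists bridge-driver networks and decodes the list representation shown above:

```go
package main

import (
	"context"
	"encoding/json"
	"fmt"
	"net"
	"net/http"
	"net/url"
)

func main() {
	// Unix-socket client (an assumption; adjust for your environment).
	client := &http.Client{Transport: &http.Transport{
		DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
			return (&net.Dialer{}).DialContext(ctx, "unix", "/var/run/docker.sock")
		},
	}}

	// Escape the JSON filter before putting it in the query string.
	filters := url.QueryEscape(`{"driver":["bridge"]}`)
	resp, err := client.Get("http://localhost/networks?filters=" + filters)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Decode a subset of the smaller list representation documented above.
	var networks []struct {
		Name, Id, Scope, Driver string
		IPAM                    struct {
			Config []struct{ Subnet, Gateway string }
		}
	}
	if err := json.NewDecoder(resp.Body).Decode(&networks); err != nil {
		panic(err)
	}
	for _, n := range networks {
		fmt.Println(n.Name, n.Driver, n.IPAM.Config)
	}
}
```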
type: "string" tags: ["Network"] /networks/{id}: get: summary: "Inspect a network" operationId: "NetworkInspect" produces: - "application/json" responses: 200: description: "No error" schema: $ref: "#/definitions/Network" 404: description: "Network not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" - name: "verbose" in: "query" description: "Detailed inspect output for troubleshooting" type: "boolean" default: false - name: "scope" in: "query" description: "Filter the network by scope (swarm, global, or local)" type: "string" tags: ["Network"] delete: summary: "Remove a network" operationId: "NetworkDelete" responses: 204: description: "No error" 403: description: "operation not supported for pre-defined networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such network" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" tags: ["Network"] /networks/create: post: summary: "Create a network" operationId: "NetworkCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "No error" schema: type: "object" title: "NetworkCreateResponse" properties: Id: description: "The ID of the created network." type: "string" Warning: type: "string" example: Id: "22be93d5babb089c5aab8dbc369042fad48ff791584ca2da2100db837a1c7c30" Warning: "" 403: description: "operation not supported for pre-defined networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "plugin not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "networkConfig" in: "body" description: "Network configuration" required: true schema: type: "object" required: ["Name"] properties: Name: description: "The network's name." type: "string" CheckDuplicate: description: | Check for networks with duplicate names. Since Network is primarily keyed based on a random ID and not on the name, and network name is strictly a user-friendly alias to the network which is uniquely identified using ID, there is no guaranteed way to check for duplicates. CheckDuplicate is there to provide a best effort checking of any networks which has the same name but it is not guaranteed to catch all name collisions. type: "boolean" Driver: description: "Name of the network driver plugin to use." type: "string" default: "bridge" Internal: description: "Restrict external access to the network." type: "boolean" Attachable: description: | Globally scoped network is manually attachable by regular containers from workers in swarm mode. type: "boolean" Ingress: description: | Ingress network is the network which provides the routing-mesh in swarm mode. type: "boolean" IPAM: description: "Optional custom IP scheme for the network." $ref: "#/definitions/IPAM" EnableIPv6: description: "Enable IPv6 on the network." type: "boolean" Options: description: "Network specific options to be used by the drivers." type: "object" additionalProperties: type: "string" Labels: description: "User-defined key/value metadata." 
type: "object" additionalProperties: type: "string" example: Name: "isolated_nw" CheckDuplicate: false Driver: "bridge" EnableIPv6: true IPAM: Driver: "default" Config: - Subnet: "172.20.0.0/16" IPRange: "172.20.10.0/24" Gateway: "172.20.10.11" - Subnet: "2001:db8:abcd::/64" Gateway: "2001:db8:abcd::1011" Options: foo: "bar" Internal: true Attachable: false Ingress: false Options: com.docker.network.bridge.default_bridge: "true" com.docker.network.bridge.enable_icc: "true" com.docker.network.bridge.enable_ip_masquerade: "true" com.docker.network.bridge.host_binding_ipv4: "0.0.0.0" com.docker.network.bridge.name: "docker0" com.docker.network.driver.mtu: "1500" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" tags: ["Network"] /networks/{id}/connect: post: summary: "Connect a container to a network" operationId: "NetworkConnect" consumes: - "application/json" responses: 200: description: "No error" 403: description: "Operation not supported for swarm scoped networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "Network or container not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" - name: "container" in: "body" required: true schema: type: "object" properties: Container: type: "string" description: "The ID or name of the container to connect to the network." EndpointConfig: $ref: "#/definitions/EndpointSettings" example: Container: "3613f73ba0e4" EndpointConfig: IPAMConfig: IPv4Address: "172.24.56.89" IPv6Address: "2001:db8::5689" tags: ["Network"] /networks/{id}/disconnect: post: summary: "Disconnect a container from a network" operationId: "NetworkDisconnect" consumes: - "application/json" responses: 200: description: "No error" 403: description: "Operation not supported for swarm scoped networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "Network or container not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" - name: "container" in: "body" required: true schema: type: "object" properties: Container: type: "string" description: | The ID or name of the container to disconnect from the network. Force: type: "boolean" description: | Force the container to disconnect from the network. tags: ["Network"] /networks/prune: post: summary: "Delete unused networks" produces: - "application/json" operationId: "NetworkPrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `until=<timestamp>` Prune networks created before this timestamp. The `<timestamp>` can be Unix timestamps, date formatted timestamps, or Go duration strings (e.g. `10m`, `1h30m`) computed relative to the daemon machine’s time. - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune networks with (or without, in case `label!=...` is used) the specified labels. 
type: "string" responses: 200: description: "No error" schema: type: "object" title: "NetworkPruneResponse" properties: NetworksDeleted: description: "Networks that were deleted" type: "array" items: type: "string" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Network"] /plugins: get: summary: "List plugins" operationId: "PluginList" description: "Returns information about installed plugins." produces: ["application/json"] responses: 200: description: "No error" schema: type: "array" items: $ref: "#/definitions/Plugin" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the plugin list. Available filters: - `capability=<capability name>` - `enable=<true>|<false>` tags: ["Plugin"] /plugins/privileges: get: summary: "Get plugin privileges" operationId: "GetPluginPrivileges" responses: 200: description: "no error" schema: type: "array" items: description: | Describes a permission the user has to accept upon installing the plugin. type: "object" title: "PluginPrivilegeItem" properties: Name: type: "string" Description: type: "string" Value: type: "array" items: type: "string" example: - Name: "network" Description: "" Value: - "host" - Name: "mount" Description: "" Value: - "/data" - Name: "device" Description: "" Value: - "/dev/cpu_dma_latency" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "remote" in: "query" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" tags: - "Plugin" /plugins/pull: post: summary: "Install a plugin" operationId: "PluginPull" description: | Pulls and installs a plugin. After the plugin is installed, it can be enabled using the [`POST /plugins/{name}/enable` endpoint](#operation/PostPluginsEnable). produces: - "application/json" responses: 204: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "remote" in: "query" description: | Remote reference for plugin to install. The `:latest` tag is optional, and is used as the default if omitted. required: true type: "string" - name: "name" in: "query" description: | Local name for the pulled plugin. The `:latest` tag is optional, and is used as the default if omitted. required: false type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration to use when pulling a plugin from a registry. Refer to the [authentication section](#section/Authentication) for details. type: "string" - name: "body" in: "body" schema: type: "array" items: description: | Describes a permission accepted by the user upon installing the plugin. 
type: "object" properties: Name: type: "string" Description: type: "string" Value: type: "array" items: type: "string" example: - Name: "network" Description: "" Value: - "host" - Name: "mount" Description: "" Value: - "/data" - Name: "device" Description: "" Value: - "/dev/cpu_dma_latency" tags: ["Plugin"] /plugins/{name}/json: get: summary: "Inspect a plugin" operationId: "PluginInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Plugin" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" tags: ["Plugin"] /plugins/{name}: delete: summary: "Remove a plugin" operationId: "PluginDelete" responses: 200: description: "no error" schema: $ref: "#/definitions/Plugin" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "force" in: "query" description: | Disable the plugin before removing. This may result in issues if the plugin is in use by a container. type: "boolean" default: false tags: ["Plugin"] /plugins/{name}/enable: post: summary: "Enable a plugin" operationId: "PluginEnable" responses: 200: description: "no error" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "timeout" in: "query" description: "Set the HTTP client timeout (in seconds)" type: "integer" default: 0 tags: ["Plugin"] /plugins/{name}/disable: post: summary: "Disable a plugin" operationId: "PluginDisable" responses: 200: description: "no error" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" tags: ["Plugin"] /plugins/{name}/upgrade: post: summary: "Upgrade a plugin" operationId: "PluginUpgrade" responses: 204: description: "no error" 404: description: "plugin not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "remote" in: "query" description: | Remote reference to upgrade to. The `:latest` tag is optional, and is used as the default if omitted. required: true type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration to use when pulling a plugin from a registry. Refer to the [authentication section](#section/Authentication) for details. 
type: "string" - name: "body" in: "body" schema: type: "array" items: description: | Describes a permission accepted by the user upon installing the plugin. type: "object" properties: Name: type: "string" Description: type: "string" Value: type: "array" items: type: "string" example: - Name: "network" Description: "" Value: - "host" - Name: "mount" Description: "" Value: - "/data" - Name: "device" Description: "" Value: - "/dev/cpu_dma_latency" tags: ["Plugin"] /plugins/create: post: summary: "Create a plugin" operationId: "PluginCreate" consumes: - "application/x-tar" responses: 204: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "query" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "tarContext" in: "body" description: "Path to tar containing plugin rootfs and manifest" schema: type: "string" format: "binary" tags: ["Plugin"] /plugins/{name}/push: post: summary: "Push a plugin" operationId: "PluginPush" description: | Push a plugin to the registry. parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" responses: 200: description: "no error" 404: description: "plugin not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Plugin"] /plugins/{name}/set: post: summary: "Configure a plugin" operationId: "PluginSet" consumes: - "application/json" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "body" in: "body" schema: type: "array" items: type: "string" example: ["DEBUG=1"] responses: 204: description: "No error" 404: description: "Plugin not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Plugin"] /nodes: get: summary: "List nodes" operationId: "NodeList" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Node" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" description: | Filters to process on the nodes list, encoded as JSON (a `map[string][]string`). 
Available filters: - `id=<node id>` - `label=<engine label>` - `membership=`(`accepted`|`pending`)` - `name=<node name>` - `node.label=<node label>` - `role=`(`manager`|`worker`)` type: "string" tags: ["Node"] /nodes/{id}: get: summary: "Inspect a node" operationId: "NodeInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Node" 404: description: "no such node" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the node" type: "string" required: true tags: ["Node"] delete: summary: "Delete a node" operationId: "NodeDelete" responses: 200: description: "no error" 404: description: "no such node" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the node" type: "string" required: true - name: "force" in: "query" description: "Force remove a node from the swarm" default: false type: "boolean" tags: ["Node"] /nodes/{id}/update: post: summary: "Update a node" operationId: "NodeUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such node" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID of the node" type: "string" required: true - name: "body" in: "body" schema: $ref: "#/definitions/NodeSpec" - name: "version" in: "query" description: | The version number of the node object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true tags: ["Node"] /swarm: get: summary: "Inspect swarm" operationId: "SwarmInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Swarm" 404: description: "no such swarm" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" tags: ["Swarm"] /swarm/init: post: summary: "Initialize a new swarm" operationId: "SwarmInit" produces: - "application/json" - "text/plain" responses: 200: description: "no error" schema: description: "The node ID" type: "string" example: "7v2t30z9blmxuhnyo6s4cpenp" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is already part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: type: "object" properties: ListenAddr: description: | Listen address used for inter-manager communication, as well as determining the networking interface used for the VXLAN Tunnel Endpoint (VTEP). This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the default swarm listening port is used. 
type: "string" AdvertiseAddr: description: | Externally reachable address advertised to other nodes. This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the port number from the listen address is used. If `AdvertiseAddr` is not specified, it will be automatically detected when possible. type: "string" DataPathAddr: description: | Address or interface to use for data path traffic (format: `<ip|interface>`), for example, `192.168.1.1`, or an interface, like `eth0`. If `DataPathAddr` is unspecified, the same address as `AdvertiseAddr` is used. The `DataPathAddr` specifies the address that global scope network drivers will publish towards other nodes in order to reach the containers running on this node. Using this parameter it is possible to separate the container data traffic from the management traffic of the cluster. type: "string" DataPathPort: description: | DataPathPort specifies the data path port number for data traffic. Acceptable port range is 1024 to 49151. if no port is set or is set to 0, default port 4789 will be used. type: "integer" format: "uint32" DefaultAddrPool: description: | Default Address Pool specifies default subnet pools for global scope networks. type: "array" items: type: "string" example: ["10.10.0.0/16", "20.20.0.0/16"] ForceNewCluster: description: "Force creation of a new swarm." type: "boolean" SubnetSize: description: | SubnetSize specifies the subnet size of the networks created from the default subnet pool. type: "integer" format: "uint32" Spec: $ref: "#/definitions/SwarmSpec" example: ListenAddr: "0.0.0.0:2377" AdvertiseAddr: "192.168.1.1:2377" DataPathPort: 4789 DefaultAddrPool: ["10.10.0.0/8", "20.20.0.0/8"] SubnetSize: 24 ForceNewCluster: false Spec: Orchestration: {} Raft: {} Dispatcher: {} CAConfig: {} EncryptionConfig: AutoLockManagers: false tags: ["Swarm"] /swarm/join: post: summary: "Join an existing swarm" operationId: "SwarmJoin" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is already part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: type: "object" properties: ListenAddr: description: | Listen address used for inter-manager communication if the node gets promoted to manager, as well as determining the networking interface used for the VXLAN Tunnel Endpoint (VTEP). type: "string" AdvertiseAddr: description: | Externally reachable address advertised to other nodes. This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the port number from the listen address is used. If `AdvertiseAddr` is not specified, it will be automatically detected when possible. type: "string" DataPathAddr: description: | Address or interface to use for data path traffic (format: `<ip|interface>`), for example, `192.168.1.1`, or an interface, like `eth0`. If `DataPathAddr` is unspecified, the same address as `AdvertiseAddr` is used. The `DataPathAddr` specifies the address that global scope network drivers will publish towards other nodes in order to reach the containers running on this node. 
Using this parameter it is possible to separate the container data traffic from the management traffic of the cluster. type: "string" RemoteAddrs: description: | Addresses of manager nodes already participating in the swarm. type: "array" items: type: "string" JoinToken: description: "Secret token for joining this swarm." type: "string" example: ListenAddr: "0.0.0.0:2377" AdvertiseAddr: "192.168.1.1:2377" RemoteAddrs: - "node1:2377" JoinToken: "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-7p73s1dx5in4tatdymyhg9hu2" tags: ["Swarm"] /swarm/leave: post: summary: "Leave a swarm" operationId: "SwarmLeave" responses: 200: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "force" description: | Force leave swarm, even if this is the last manager or that it will break the cluster. in: "query" type: "boolean" default: false tags: ["Swarm"] /swarm/update: post: summary: "Update a swarm" operationId: "SwarmUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: $ref: "#/definitions/SwarmSpec" - name: "version" in: "query" description: | The version number of the swarm object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true - name: "rotateWorkerToken" in: "query" description: "Rotate the worker join token." type: "boolean" default: false - name: "rotateManagerToken" in: "query" description: "Rotate the manager join token." type: "boolean" default: false - name: "rotateManagerUnlockKey" in: "query" description: "Rotate the manager unlock key." type: "boolean" default: false tags: ["Swarm"] /swarm/unlockkey: get: summary: "Get the unlock key" operationId: "SwarmUnlockkey" consumes: - "application/json" responses: 200: description: "no error" schema: type: "object" title: "UnlockKeyResponse" properties: UnlockKey: description: "The swarm's unlock key." type: "string" example: UnlockKey: "SWMKEY-1-7c37Cc8654o6p38HnroywCi19pllOnGtbdZEgtKxZu8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" tags: ["Swarm"] /swarm/unlock: post: summary: "Unlock a locked manager" operationId: "SwarmUnlock" consumes: - "application/json" produces: - "application/json" parameters: - name: "body" in: "body" required: true schema: type: "object" properties: UnlockKey: description: "The swarm's unlock key." 
type: "string" example: UnlockKey: "SWMKEY-1-7c37Cc8654o6p38HnroywCi19pllOnGtbdZEgtKxZu8" responses: 200: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" tags: ["Swarm"] /services: get: summary: "List services" operationId: "ServiceList" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Service" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the services list. Available filters: - `id=<service id>` - `label=<service label>` - `mode=["replicated"|"global"]` - `name=<service name>` - name: "status" in: "query" type: "boolean" description: | Include service status, with count of running and desired tasks. tags: ["Service"] /services/create: post: summary: "Create a service" operationId: "ServiceCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: type: "object" title: "ServiceCreateResponse" properties: ID: description: "The ID of the created service." type: "string" Warning: description: "Optional warning message" type: "string" example: ID: "ak7w3gjqoa3kuz8xcpnyy0pvl" Warning: "unable to pin image doesnotexist:latest to digest: image library/doesnotexist:latest not found" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 403: description: "network is not eligible for services" schema: $ref: "#/definitions/ErrorResponse" 409: description: "name conflicts with an existing service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: allOf: - $ref: "#/definitions/ServiceSpec" - type: "object" example: Name: "web" TaskTemplate: ContainerSpec: Image: "nginx:alpine" Mounts: - ReadOnly: true Source: "web-data" Target: "/usr/share/nginx/html" Type: "volume" VolumeOptions: DriverConfig: {} Labels: com.example.something: "something-value" Hosts: ["10.10.10.10 host1", "ABCD:EF01:2345:6789:ABCD:EF01:2345:6789 host2"] User: "33" DNSConfig: Nameservers: ["8.8.8.8"] Search: ["example.org"] Options: ["timeout:3"] Secrets: - File: Name: "www.example.org.key" UID: "33" GID: "33" Mode: 384 SecretID: "fpjqlhnwb19zds35k8wn80lq9" SecretName: "example_org_domain_key" LogDriver: Name: "json-file" Options: max-file: "3" max-size: "10M" Placement: {} Resources: Limits: MemoryBytes: 104857600 Reservations: {} RestartPolicy: Condition: "on-failure" Delay: 10000000000 MaxAttempts: 10 Mode: Replicated: Replicas: 4 UpdateConfig: Parallelism: 2 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 RollbackConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 EndpointSpec: Ports: - Protocol: "tcp" PublishedPort: 8080 TargetPort: 80 Labels: foo: "bar" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration for pulling from private registries. Refer to the [authentication section](#section/Authentication) for details. 
type: "string" tags: ["Service"] /services/{id}: get: summary: "Inspect a service" operationId: "ServiceInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Service" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID or name of service." required: true type: "string" - name: "insertDefaults" in: "query" description: "Fill empty fields with default values." type: "boolean" default: false tags: ["Service"] delete: summary: "Delete a service" operationId: "ServiceDelete" responses: 200: description: "no error" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID or name of service." required: true type: "string" tags: ["Service"] /services/{id}/update: post: summary: "Update a service" operationId: "ServiceUpdate" consumes: ["application/json"] produces: ["application/json"] responses: 200: description: "no error" schema: $ref: "#/definitions/ServiceUpdateResponse" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID or name of service." required: true type: "string" - name: "body" in: "body" required: true schema: allOf: - $ref: "#/definitions/ServiceSpec" - type: "object" example: Name: "top" TaskTemplate: ContainerSpec: Image: "busybox" Args: - "top" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ForceUpdate: 0 Mode: Replicated: Replicas: 1 UpdateConfig: Parallelism: 2 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 RollbackConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 EndpointSpec: Mode: "vip" - name: "version" in: "query" description: | The version number of the service object being updated. This is required to avoid conflicting writes. This version number should be the value as currently set on the service *before* the update. You can find the current version by calling `GET /services/{id}` required: true type: "integer" - name: "registryAuthFrom" in: "query" description: | If the `X-Registry-Auth` header is not specified, this parameter indicates where to find registry authorization credentials. type: "string" enum: ["spec", "previous-spec"] default: "spec" - name: "rollback" in: "query" description: | Set to this parameter to `previous` to cause a server-side rollback to the previous service spec. The supplied spec will be ignored in this case. type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration for pulling from private registries. Refer to the [authentication section](#section/Authentication) for details. 
type: "string" tags: ["Service"] /services/{id}/logs: get: summary: "Get service logs" description: | Get `stdout` and `stderr` logs from a service. See also [`/containers/{id}/logs`](#operation/ContainerLogs). **Note**: This endpoint works only for services with the `local`, `json-file` or `journald` logging drivers. operationId: "ServiceLogs" responses: 200: description: "logs returned as a stream in response body" schema: type: "string" format: "binary" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such service: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the service" type: "string" - name: "details" in: "query" description: "Show service context and extra details provided to logs." type: "boolean" default: false - name: "follow" in: "query" description: "Keep connection after returning logs." type: "boolean" default: false - name: "stdout" in: "query" description: "Return logs from `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Return logs from `stderr`" type: "boolean" default: false - name: "since" in: "query" description: "Only return logs since this time, as a UNIX timestamp" type: "integer" default: 0 - name: "timestamps" in: "query" description: "Add timestamps to every log line" type: "boolean" default: false - name: "tail" in: "query" description: | Only return this number of log lines from the end of the logs. Specify as an integer or `all` to output all log lines. type: "string" default: "all" tags: ["Service"] /tasks: get: summary: "List tasks" operationId: "TaskList" produces: - "application/json" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Task" example: - ID: "0kzzo1i0y4jz6027t0k7aezc7" Version: Index: 71 CreatedAt: "2016-06-07T21:07:31.171892745Z" UpdatedAt: "2016-06-07T21:07:31.376370513Z" Spec: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ServiceID: "9mnpnzenvg8p8tdbtq4wvbkcz" Slot: 1 NodeID: "60gvrl6tm78dmak4yl7srz94v" Status: Timestamp: "2016-06-07T21:07:31.290032978Z" State: "running" Message: "started" ContainerStatus: ContainerID: "e5d62702a1b48d01c3e02ca1e0212a250801fa8d67caca0b6f35919ebc12f035" PID: 677 DesiredState: "running" NetworksAttachments: - Network: ID: "4qvuz4ko70xaltuqbt8956gd1" Version: Index: 18 CreatedAt: "2016-06-07T20:31:11.912919752Z" UpdatedAt: "2016-06-07T21:07:29.955277358Z" Spec: Name: "ingress" Labels: com.docker.swarm.internal: "true" DriverConfiguration: {} IPAMOptions: Driver: {} Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" DriverState: Name: "overlay" Options: com.docker.network.driver.overlay.vxlanid_list: "256" IPAMOptions: Driver: Name: "default" Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" Addresses: - "10.255.0.10/16" - ID: "1yljwbmlr8er2waf8orvqpwms" Version: Index: 30 CreatedAt: "2016-06-07T21:07:30.019104782Z" UpdatedAt: "2016-06-07T21:07:30.231958098Z" Name: "hopeful_cori" Spec: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ServiceID: "9mnpnzenvg8p8tdbtq4wvbkcz" Slot: 1 NodeID: "60gvrl6tm78dmak4yl7srz94v" Status: Timestamp: "2016-06-07T21:07:30.202183143Z" State: 
"shutdown" Message: "shutdown" ContainerStatus: ContainerID: "1cf8d63d18e79668b0004a4be4c6ee58cddfad2dae29506d8781581d0688a213" DesiredState: "shutdown" NetworksAttachments: - Network: ID: "4qvuz4ko70xaltuqbt8956gd1" Version: Index: 18 CreatedAt: "2016-06-07T20:31:11.912919752Z" UpdatedAt: "2016-06-07T21:07:29.955277358Z" Spec: Name: "ingress" Labels: com.docker.swarm.internal: "true" DriverConfiguration: {} IPAMOptions: Driver: {} Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" DriverState: Name: "overlay" Options: com.docker.network.driver.overlay.vxlanid_list: "256" IPAMOptions: Driver: Name: "default" Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" Addresses: - "10.255.0.5/16" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the tasks list. Available filters: - `desired-state=(running | shutdown | accepted)` - `id=<task id>` - `label=key` or `label="key=value"` - `name=<task name>` - `node=<node id or name>` - `service=<service name>` tags: ["Task"] /tasks/{id}: get: summary: "Inspect a task" operationId: "TaskInspect" produces: - "application/json" responses: 200: description: "no error" schema: $ref: "#/definitions/Task" 404: description: "no such task" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID of the task" required: true type: "string" tags: ["Task"] /tasks/{id}/logs: get: summary: "Get task logs" description: | Get `stdout` and `stderr` logs from a task. See also [`/containers/{id}/logs`](#operation/ContainerLogs). **Note**: This endpoint works only for services with the `local`, `json-file` or `journald` logging drivers. operationId: "TaskLogs" responses: 200: description: "logs returned as a stream in response body" schema: type: "string" format: "binary" 404: description: "no such task" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such task: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID of the task" type: "string" - name: "details" in: "query" description: "Show task context and extra details provided to logs." type: "boolean" default: false - name: "follow" in: "query" description: "Keep connection after returning logs." type: "boolean" default: false - name: "stdout" in: "query" description: "Return logs from `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Return logs from `stderr`" type: "boolean" default: false - name: "since" in: "query" description: "Only return logs since this time, as a UNIX timestamp" type: "integer" default: 0 - name: "timestamps" in: "query" description: "Add timestamps to every log line" type: "boolean" default: false - name: "tail" in: "query" description: | Only return this number of log lines from the end of the logs. Specify as an integer or `all` to output all log lines. 
type: "string" default: "all" tags: ["Task"] /secrets: get: summary: "List secrets" operationId: "SecretList" produces: - "application/json" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Secret" example: - ID: "blt1owaxmitz71s9v5zh81zun" Version: Index: 85 CreatedAt: "2017-07-20T13:55:28.678958722Z" UpdatedAt: "2017-07-20T13:55:28.678958722Z" Spec: Name: "mysql-passwd" Labels: some.label: "some.value" Driver: Name: "secret-bucket" Options: OptionA: "value for driver option A" OptionB: "value for driver option B" - ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "app-dev.crt" Labels: foo: "bar" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the secrets list. Available filters: - `id=<secret id>` - `label=<key> or label=<key>=value` - `name=<secret name>` - `names=<secret name>` tags: ["Secret"] /secrets/create: post: summary: "Create a secret" operationId: "SecretCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 409: description: "name conflicts with an existing object" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" schema: allOf: - $ref: "#/definitions/SecretSpec" - type: "object" example: Name: "app-key.crt" Labels: foo: "bar" Data: "VEhJUyBJUyBOT1QgQSBSRUFMIENFUlRJRklDQVRFCg==" Driver: Name: "secret-bucket" Options: OptionA: "value for driver option A" OptionB: "value for driver option B" tags: ["Secret"] /secrets/{id}: get: summary: "Inspect a secret" operationId: "SecretInspect" produces: - "application/json" responses: 200: description: "no error" schema: $ref: "#/definitions/Secret" examples: application/json: ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "app-dev.crt" Labels: foo: "bar" Driver: Name: "secret-bucket" Options: OptionA: "value for driver option A" OptionB: "value for driver option B" 404: description: "secret not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the secret" tags: ["Secret"] delete: summary: "Delete a secret" operationId: "SecretDelete" produces: - "application/json" responses: 204: description: "no error" 404: description: "secret not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the secret" tags: ["Secret"] /secrets/{id}/update: post: summary: "Update a Secret" operationId: "SecretUpdate" responses: 200: description: "no error" 400: 
description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such secret" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the secret" type: "string" required: true - name: "body" in: "body" schema: $ref: "#/definitions/SecretSpec" description: | The spec of the secret to update. Currently, only the Labels field can be updated. All other fields must remain unchanged from the [SecretInspect endpoint](#operation/SecretInspect) response values. - name: "version" in: "query" description: | The version number of the secret object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true tags: ["Secret"] /configs: get: summary: "List configs" operationId: "ConfigList" produces: - "application/json" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Config" example: - ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "server.conf" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the configs list. Available filters: - `id=<config id>` - `label=<key> or label=<key>=value` - `name=<config name>` - `names=<config name>` tags: ["Config"] /configs/create: post: summary: "Create a config" operationId: "ConfigCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 409: description: "name conflicts with an existing object" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" schema: allOf: - $ref: "#/definitions/ConfigSpec" - type: "object" example: Name: "server.conf" Labels: foo: "bar" Data: "VEhJUyBJUyBOT1QgQSBSRUFMIENFUlRJRklDQVRFCg==" tags: ["Config"] /configs/{id}: get: summary: "Inspect a config" operationId: "ConfigInspect" produces: - "application/json" responses: 200: description: "no error" schema: $ref: "#/definitions/Config" examples: application/json: ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "app-dev.crt" 404: description: "config not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the config" tags: ["Config"] delete: summary: "Delete a config" operationId: "ConfigDelete" produces: - "application/json" responses: 204: description: "no error" 404: description: "config not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part 
of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the config" tags: ["Config"] /configs/{id}/update: post: summary: "Update a Config" operationId: "ConfigUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such config" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the config" type: "string" required: true - name: "body" in: "body" schema: $ref: "#/definitions/ConfigSpec" description: | The spec of the config to update. Currently, only the Labels field can be updated. All other fields must remain unchanged from the [ConfigInspect endpoint](#operation/ConfigInspect) response values. - name: "version" in: "query" description: | The version number of the config object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true tags: ["Config"] /distribution/{name}/json: get: summary: "Get image information from the registry" description: | Return image digest and platform information by contacting the registry. operationId: "DistributionInspect" produces: - "application/json" responses: 200: description: "descriptor and platform information" schema: type: "object" x-go-name: DistributionInspect title: "DistributionInspectResponse" required: [Descriptor, Platforms] properties: Descriptor: type: "object" description: | A descriptor struct containing digest, media type, and size. properties: MediaType: type: "string" Size: type: "integer" format: "int64" Digest: type: "string" URLs: type: "array" items: type: "string" Platforms: type: "array" description: | An array containing all platforms supported by the image. items: type: "object" properties: Architecture: type: "string" OS: type: "string" OSVersion: type: "string" OSFeatures: type: "array" items: type: "string" Variant: type: "string" Features: type: "array" items: type: "string" examples: application/json: Descriptor: MediaType: "application/vnd.docker.distribution.manifest.v2+json" Digest: "sha256:c0537ff6a5218ef531ece93d4984efc99bbf3f7497c0a7726c88e2bb7584dc96" Size: 3987495 URLs: - "" Platforms: - Architecture: "amd64" OS: "linux" OSVersion: "" OSFeatures: - "" Variant: "" Features: - "" 401: description: "Failed authentication or no image found" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such image: someimage (tag: latest)" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or id" type: "string" required: true tags: ["Distribution"] /session: post: summary: "Initialize interactive session" description: | Start a new interactive session with a server. Session allows server to call back to the client for advanced capabilities. ### Hijacking This endpoint hijacks the HTTP connection to HTTP2 transport that allows the client to expose gPRC services on that connection. 
        For example, the client sends this request to upgrade the connection:

        ```
        POST /session HTTP/1.1
        Upgrade: h2c
        Connection: Upgrade
        ```

        The Docker daemon responds with a `101 UPGRADED` response, followed by
        the raw stream:

        ```
        HTTP/1.1 101 UPGRADED
        Connection: Upgrade
        Upgrade: h2c
        ```
      operationId: "Session"
      produces:
        - "application/vnd.docker.raw-stream"
      responses:
        101:
          description: "no error, hijacking successful"
        400:
          description: "bad parameter"
          schema:
            $ref: "#/definitions/ErrorResponse"
        500:
          description: "server error"
          schema:
            $ref: "#/definitions/ErrorResponse"
      tags: ["Session"]
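A minimal sketch (not an official client) of the hijack handshake described above: it dials the daemon's Unix socket directly, sends the raw upgrade request, and prints the response headers up to the blank line, after which the connection carries the raw stream. The socket path and the `/v1.42` prefix are assumptions that may differ on your system.

```go
package main

import (
	"bufio"
	"fmt"
	"net"
)

func main() {
	conn, err := net.Dial("unix", "/var/run/docker.sock")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Send the hijack request exactly as shown in the endpoint description.
	fmt.Fprint(conn, "POST /v1.42/session HTTP/1.1\r\n"+
		"Host: docker\r\n"+
		"Upgrade: h2c\r\n"+
		"Connection: Upgrade\r\n\r\n")

	// The first line should read "HTTP/1.1 101 UPGRADED"; everything after
	// the blank header terminator belongs to the hijacked (HTTP/2) stream.
	r := bufio.NewReader(conn)
	for {
		line, err := r.ReadString('\n')
		if err != nil {
			panic(err)
		}
		fmt.Print(line)
		if line == "\r\n" { // end of response headers
			break
		}
	}
}
```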
# A Swagger 2.0 (a.k.a. OpenAPI) definition of the Engine API. # # This is used for generating API documentation and the types used by the # client/server. See api/README.md for more information. # # Some style notes: # - This file is used by ReDoc, which allows GitHub Flavored Markdown in # descriptions. # - There is no maximum line length, for ease of editing and pretty diffs. # - operationIds are in the format "NounVerb", with a singular noun. swagger: "2.0" schemes: - "http" - "https" produces: - "application/json" - "text/plain" consumes: - "application/json" - "text/plain" basePath: "/v1.42" info: title: "Docker Engine API" version: "1.42" x-logo: url: "https://docs.docker.com/images/logo-docker-main.png" description: | The Engine API is an HTTP API served by Docker Engine. It is the API the Docker client uses to communicate with the Engine, so everything the Docker client can do can be done with the API. Most of the client's commands map directly to API endpoints (e.g. `docker ps` is `GET /containers/json`). The notable exception is running containers, which consists of several API calls. # Errors The API uses standard HTTP status codes to indicate the success or failure of the API call. The body of the response will be JSON in the following format: ``` { "message": "page not found" } ``` # Versioning The API is usually changed in each release, so API calls are versioned to ensure that clients don't break. To lock to a specific version of the API, you prefix the URL with its version, for example, call `/v1.30/info` to use the v1.30 version of the `/info` endpoint. If the API version specified in the URL is not supported by the daemon, a HTTP `400 Bad Request` error message is returned. If you omit the version-prefix, the current version of the API (v1.42) is used. For example, calling `/info` is the same as calling `/v1.42/info`. Using the API without a version-prefix is deprecated and will be removed in a future release. Engine releases in the near future should support this version of the API, so your client will continue to work even if it is talking to a newer Engine. The API uses an open schema model, which means server may add extra properties to responses. Likewise, the server will ignore any extra query parameters and request body properties. When you write clients, you need to ignore additional properties in responses to ensure they do not break when talking to newer daemons. # Authentication Authentication for registries is handled client side. The client has to send authentication details to various endpoints that need to communicate with registries, such as `POST /images/(name)/push`. These are sent as `X-Registry-Auth` header as a [base64url encoded](https://tools.ietf.org/html/rfc4648#section-5) (JSON) string with the following structure: ``` { "username": "string", "password": "string", "email": "string", "serveraddress": "string" } ``` The `serveraddress` is a domain/IP without a protocol. Throughout this structure, double quotes are required. If you have already got an identity token from the [`/auth` endpoint](#operation/SystemAuth), you can just pass this instead of credentials: ``` { "identitytoken": "9cbaf023786cd7..." } ``` # The tags on paths define the menu sections in the ReDoc documentation, so # the usage of tags must make sense for that: # - They should be singular, not plural. # - There should not be too many tags, or the menu becomes unwieldy. 
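A minimal sketch of constructing the `X-Registry-Auth` header described in the Authentication section above: the credentials object is JSON-encoded and then base64url-encoded. The credentials are placeholders, the image-pull endpoint shown is just one endpoint that accepts the header, and the Unix-socket path is an assumption.

```go
package main

import (
	"context"
	"encoding/base64"
	"encoding/json"
	"fmt"
	"net"
	"net/http"
)

func main() {
	// Build the header value: JSON, then base64url.
	auth := map[string]string{
		"username":      "hannibal",
		"password":      "xxxx",
		"serveraddress": "https://index.docker.io/v1/",
	}
	buf, _ := json.Marshal(auth)
	header := base64.URLEncoding.EncodeToString(buf)

	// Talk to the daemon over its Unix socket; the host in the URL is
	// arbitrary because the transport dials the socket directly.
	cli := &http.Client{Transport: &http.Transport{
		DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
			return (&net.Dialer{}).DialContext(ctx, "unix", "/var/run/docker.sock")
		},
	}}
	req, _ := http.NewRequest("POST",
		"http://docker/v1.42/images/create?fromImage=alpine&tag=latest", nil)
	req.Header.Set("X-Registry-Auth", header)

	resp, err := cli.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}
```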
For # example, it is preferable to add a path to the "System" tag instead of # creating a tag with a single path in it. # - The order of tags in this list defines the order in the menu. tags: # Primary objects - name: "Container" x-displayName: "Containers" description: | Create and manage containers. - name: "Image" x-displayName: "Images" - name: "Network" x-displayName: "Networks" description: | Networks are user-defined networks that containers can be attached to. See the [networking documentation](https://docs.docker.com/network/) for more information. - name: "Volume" x-displayName: "Volumes" description: | Create and manage persistent storage that can be attached to containers. - name: "Exec" x-displayName: "Exec" description: | Run new commands inside running containers. Refer to the [command-line reference](https://docs.docker.com/engine/reference/commandline/exec/) for more information. To exec a command in a container, you first need to create an exec instance, then start it. These two API endpoints are wrapped up in a single command-line command, `docker exec`. # Swarm things - name: "Swarm" x-displayName: "Swarm" description: | Engines can be clustered together in a swarm. Refer to the [swarm mode documentation](https://docs.docker.com/engine/swarm/) for more information. - name: "Node" x-displayName: "Nodes" description: | Nodes are instances of the Engine participating in a swarm. Swarm mode must be enabled for these endpoints to work. - name: "Service" x-displayName: "Services" description: | Services are the definitions of tasks to run on a swarm. Swarm mode must be enabled for these endpoints to work. - name: "Task" x-displayName: "Tasks" description: | A task is a container running on a swarm. It is the atomic scheduling unit of swarm. Swarm mode must be enabled for these endpoints to work. - name: "Secret" x-displayName: "Secrets" description: | Secrets are sensitive data that can be used by services. Swarm mode must be enabled for these endpoints to work. - name: "Config" x-displayName: "Configs" description: | Configs are application configurations that can be used by services. Swarm mode must be enabled for these endpoints to work. 
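A minimal sketch of the two-step exec flow mentioned in the Exec tag description above: create an exec instance for a running container, then start it. `my-container`, the command, and the socket path are placeholders, and error handling is kept to a minimum; the two endpoints used are defined elsewhere in this specification.

```go
package main

import (
	"bytes"
	"context"
	"encoding/json"
	"fmt"
	"io"
	"net"
	"net/http"
)

func main() {
	cli := &http.Client{Transport: &http.Transport{
		DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
			return (&net.Dialer{}).DialContext(ctx, "unix", "/var/run/docker.sock")
		},
	}}

	// Step 1: create the exec instance.
	createBody, _ := json.Marshal(map[string]interface{}{
		"Cmd":          []string{"ls", "/"},
		"AttachStdout": true,
		"AttachStderr": true,
	})
	resp, err := cli.Post("http://docker/v1.42/containers/my-container/exec",
		"application/json", bytes.NewReader(createBody))
	if err != nil {
		panic(err)
	}
	var created struct{ Id string }
	json.NewDecoder(resp.Body).Decode(&created)
	resp.Body.Close()

	// Step 2: start it; with Detach=false the response carries the output stream.
	resp, err = cli.Post("http://docker/v1.42/exec/"+created.Id+"/start",
		"application/json", bytes.NewReader([]byte(`{"Detach": false, "Tty": false}`)))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	out, _ := io.ReadAll(resp.Body)
	fmt.Printf("exec %s returned %d bytes of (multiplexed) output\n", created.Id, len(out))
}
```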
# System things - name: "Plugin" x-displayName: "Plugins" - name: "System" x-displayName: "System" definitions: Port: type: "object" description: "An open port on a container" required: [PrivatePort, Type] properties: IP: type: "string" format: "ip-address" description: "Host IP address that the container's port is mapped to" PrivatePort: type: "integer" format: "uint16" x-nullable: false description: "Port on the container" PublicPort: type: "integer" format: "uint16" description: "Port exposed on the host" Type: type: "string" x-nullable: false enum: ["tcp", "udp", "sctp"] example: PrivatePort: 8080 PublicPort: 80 Type: "tcp" MountPoint: type: "object" description: "A mount point inside a container" properties: Type: type: "string" Name: type: "string" Source: type: "string" Destination: type: "string" Driver: type: "string" Mode: type: "string" RW: type: "boolean" Propagation: type: "string" DeviceMapping: type: "object" description: "A device mapping between the host and container" properties: PathOnHost: type: "string" PathInContainer: type: "string" CgroupPermissions: type: "string" example: PathOnHost: "/dev/deviceName" PathInContainer: "/dev/deviceName" CgroupPermissions: "mrw" DeviceRequest: type: "object" description: "A request for devices to be sent to device drivers" properties: Driver: type: "string" example: "nvidia" Count: type: "integer" example: -1 DeviceIDs: type: "array" items: type: "string" example: - "0" - "1" - "GPU-fef8089b-4820-abfc-e83e-94318197576e" Capabilities: description: | A list of capabilities; an OR list of AND lists of capabilities. type: "array" items: type: "array" items: type: "string" example: # gpu AND nvidia AND compute - ["gpu", "nvidia", "compute"] Options: description: | Driver-specific options, specified as a key/value pairs. These options are passed directly to the driver. type: "object" additionalProperties: type: "string" ThrottleDevice: type: "object" properties: Path: description: "Device path" type: "string" Rate: description: "Rate" type: "integer" format: "int64" minimum: 0 Mount: type: "object" properties: Target: description: "Container path." type: "string" Source: description: "Mount source (e.g. a volume name, a host path)." type: "string" Type: description: | The mount type. Available types: - `bind` Mounts a file or directory from the host into the container. Must exist prior to creating the container. - `volume` Creates a volume with the given name and options (or uses a pre-existing volume with the same name and options). These are **not** removed when the container is removed. - `tmpfs` Create a tmpfs with the given options. The mount source cannot be specified for tmpfs. - `npipe` Mounts a named pipe from the host into the container. Must exist prior to creating the container. type: "string" enum: - "bind" - "volume" - "tmpfs" - "npipe" ReadOnly: description: "Whether the mount should be read-only." type: "boolean" Consistency: description: "The consistency requirement for the mount: `default`, `consistent`, `cached`, or `delegated`." type: "string" BindOptions: description: "Optional configuration for the `bind` type." type: "object" properties: Propagation: description: "A propagation mode with the value `[r]private`, `[r]shared`, or `[r]slave`." type: "string" enum: - "private" - "rprivate" - "shared" - "rshared" - "slave" - "rslave" NonRecursive: description: "Disable recursive bind mount." type: "boolean" default: false VolumeOptions: description: "Optional configuration for the `volume` type." 
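A small sketch of composing a `Mounts` array from the `Mount` definition above, with one entry per mount type (`bind`, `volume`, `tmpfs`). It only builds and prints the JSON fragment; the paths, volume name, and label are placeholders.

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	mounts := []map[string]interface{}{
		{
			"Type":     "bind",
			"Source":   "/srv/app/config", // must already exist on the host
			"Target":   "/etc/app",
			"ReadOnly": true,
			"BindOptions": map[string]interface{}{
				"Propagation": "rprivate",
			},
		},
		{
			"Type":   "volume",
			"Source": "app-data", // volume name
			"Target": "/var/lib/app",
			"VolumeOptions": map[string]interface{}{
				"Labels": map[string]string{"com.example.usage": "app-data"},
			},
		},
		{
			"Type":   "tmpfs",
			"Target": "/run/app",
			"TmpfsOptions": map[string]interface{}{
				"SizeBytes": 64 * 1024 * 1024,
				"Mode":      0o1777, // octal 01777: world-writable with sticky bit
			},
		},
	}
	out, _ := json.MarshalIndent(map[string]interface{}{"Mounts": mounts}, "", "  ")
	fmt.Println(string(out))
}
```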
type: "object" properties: NoCopy: description: "Populate volume with data from the target." type: "boolean" default: false Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" DriverConfig: description: "Map of driver specific options" type: "object" properties: Name: description: "Name of the driver to use to create the volume." type: "string" Options: description: "key/value map of driver specific options." type: "object" additionalProperties: type: "string" TmpfsOptions: description: "Optional configuration for the `tmpfs` type." type: "object" properties: SizeBytes: description: "The size for the tmpfs mount in bytes." type: "integer" format: "int64" Mode: description: "The permission mode for the tmpfs mount in an integer." type: "integer" RestartPolicy: description: | The behavior to apply when the container exits. The default is not to restart. An ever increasing delay (double the previous delay, starting at 100ms) is added before each restart to prevent flooding the server. type: "object" properties: Name: type: "string" description: | - Empty string means not to restart - `no` Do not automatically restart - `always` Always restart - `unless-stopped` Restart always except when the user has manually stopped the container - `on-failure` Restart only when the container exit code is non-zero enum: - "" - "no" - "always" - "unless-stopped" - "on-failure" MaximumRetryCount: type: "integer" description: | If `on-failure` is used, the number of times to retry before giving up. Resources: description: "A container's resources (cgroups config, ulimits, etc)" type: "object" properties: # Applicable to all platforms CpuShares: description: | An integer value representing this container's relative CPU weight versus other containers. type: "integer" Memory: description: "Memory limit in bytes." type: "integer" format: "int64" default: 0 # Applicable to UNIX platforms CgroupParent: description: | Path to `cgroups` under which the container's `cgroup` is created. If the path is not absolute, the path is considered to be relative to the `cgroups` path of the init process. Cgroups are created if they do not already exist. type: "string" BlkioWeight: description: "Block IO weight (relative weight)." type: "integer" minimum: 0 maximum: 1000 BlkioWeightDevice: description: | Block IO weight (relative device weight) in the form: ``` [{"Path": "device_path", "Weight": weight}] ``` type: "array" items: type: "object" properties: Path: type: "string" Weight: type: "integer" minimum: 0 BlkioDeviceReadBps: description: | Limit read rate (bytes per second) from a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" BlkioDeviceWriteBps: description: | Limit write rate (bytes per second) to a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" BlkioDeviceReadIOps: description: | Limit read rate (IO per second) from a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" BlkioDeviceWriteIOps: description: | Limit write rate (IO per second) to a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" CpuPeriod: description: "The length of a CPU period in microseconds." 
type: "integer" format: "int64" CpuQuota: description: | Microseconds of CPU time that the container can get in a CPU period. type: "integer" format: "int64" CpuRealtimePeriod: description: | The length of a CPU real-time period in microseconds. Set to 0 to allocate no time allocated to real-time tasks. type: "integer" format: "int64" CpuRealtimeRuntime: description: | The length of a CPU real-time runtime in microseconds. Set to 0 to allocate no time allocated to real-time tasks. type: "integer" format: "int64" CpusetCpus: description: | CPUs in which to allow execution (e.g., `0-3`, `0,1`). type: "string" example: "0-3" CpusetMems: description: | Memory nodes (MEMs) in which to allow execution (0-3, 0,1). Only effective on NUMA systems. type: "string" Devices: description: "A list of devices to add to the container." type: "array" items: $ref: "#/definitions/DeviceMapping" DeviceCgroupRules: description: "a list of cgroup rules to apply to the container" type: "array" items: type: "string" example: "c 13:* rwm" DeviceRequests: description: | A list of requests for devices to be sent to device drivers. type: "array" items: $ref: "#/definitions/DeviceRequest" KernelMemory: description: | Kernel memory limit in bytes. <p><br /></p> > **Deprecated**: This field is deprecated as the kernel 5.4 deprecated > `kmem.limit_in_bytes`. type: "integer" format: "int64" example: 209715200 KernelMemoryTCP: description: "Hard limit for kernel TCP buffer memory (in bytes)." type: "integer" format: "int64" MemoryReservation: description: "Memory soft limit in bytes." type: "integer" format: "int64" MemorySwap: description: | Total memory limit (memory + swap). Set as `-1` to enable unlimited swap. type: "integer" format: "int64" MemorySwappiness: description: | Tune a container's memory swappiness behavior. Accepts an integer between 0 and 100. type: "integer" format: "int64" minimum: 0 maximum: 100 NanoCpus: description: "CPU quota in units of 10<sup>-9</sup> CPUs." type: "integer" format: "int64" OomKillDisable: description: "Disable OOM Killer for the container." type: "boolean" Init: description: | Run an init inside the container that forwards signals and reaps processes. This field is omitted if empty, and the default (as configured on the daemon) is used. type: "boolean" x-nullable: true PidsLimit: description: | Tune a container's PIDs limit. Set `0` or `-1` for unlimited, or `null` to not change. type: "integer" format: "int64" x-nullable: true Ulimits: description: | A list of resource limits to set in the container. For example: ``` {"Name": "nofile", "Soft": 1024, "Hard": 2048} ``` type: "array" items: type: "object" properties: Name: description: "Name of ulimit" type: "string" Soft: description: "Soft limit" type: "integer" Hard: description: "Hard limit" type: "integer" # Applicable to Windows CpuCount: description: | The number of usable CPUs (Windows only). On Windows Server containers, the processor resource controls are mutually exclusive. The order of precedence is `CPUCount` first, then `CPUShares`, and `CPUPercent` last. type: "integer" format: "int64" CpuPercent: description: | The usable percentage of the available CPUs (Windows only). On Windows Server containers, the processor resource controls are mutually exclusive. The order of precedence is `CPUCount` first, then `CPUShares`, and `CPUPercent` last. 
type: "integer" format: "int64" IOMaximumIOps: description: "Maximum IOps for the container system drive (Windows only)" type: "integer" format: "int64" IOMaximumBandwidth: description: | Maximum IO in bytes per second for the container system drive (Windows only). type: "integer" format: "int64" Limit: description: | An object describing a limit on resources which can be requested by a task. type: "object" properties: NanoCPUs: type: "integer" format: "int64" example: 4000000000 MemoryBytes: type: "integer" format: "int64" example: 8272408576 Pids: description: | Limits the maximum number of PIDs in the container. Set `0` for unlimited. type: "integer" format: "int64" default: 0 example: 100 ResourceObject: description: | An object describing the resources which can be advertised by a node and requested by a task. type: "object" properties: NanoCPUs: type: "integer" format: "int64" example: 4000000000 MemoryBytes: type: "integer" format: "int64" example: 8272408576 GenericResources: $ref: "#/definitions/GenericResources" GenericResources: description: | User-defined resources can be either Integer resources (e.g, `SSD=3`) or String resources (e.g, `GPU=UUID1`). type: "array" items: type: "object" properties: NamedResourceSpec: type: "object" properties: Kind: type: "string" Value: type: "string" DiscreteResourceSpec: type: "object" properties: Kind: type: "string" Value: type: "integer" format: "int64" example: - DiscreteResourceSpec: Kind: "SSD" Value: 3 - NamedResourceSpec: Kind: "GPU" Value: "UUID1" - NamedResourceSpec: Kind: "GPU" Value: "UUID2" HealthConfig: description: "A test to perform to check that the container is healthy." type: "object" properties: Test: description: | The test to perform. Possible values are: - `[]` inherit healthcheck from image or parent image - `["NONE"]` disable healthcheck - `["CMD", args...]` exec arguments directly - `["CMD-SHELL", command]` run command with system's default shell type: "array" items: type: "string" Interval: description: | The time to wait between checks in nanoseconds. It should be 0 or at least 1000000 (1 ms). 0 means inherit. type: "integer" Timeout: description: | The time to wait before considering the check to have hung. It should be 0 or at least 1000000 (1 ms). 0 means inherit. type: "integer" Retries: description: | The number of consecutive failures needed to consider a container as unhealthy. 0 means inherit. type: "integer" StartPeriod: description: | Start period for the container to initialize before starting health-retries countdown in nanoseconds. It should be 0 or at least 1000000 (1 ms). 0 means inherit. type: "integer" Health: description: | Health stores information about the container's healthcheck results. 
type: "object" properties: Status: description: | Status is one of `none`, `starting`, `healthy` or `unhealthy` - "none" Indicates there is no healthcheck - "starting" Starting indicates that the container is not yet ready - "healthy" Healthy indicates that the container is running correctly - "unhealthy" Unhealthy indicates that the container has a problem type: "string" enum: - "none" - "starting" - "healthy" - "unhealthy" example: "healthy" FailingStreak: description: "FailingStreak is the number of consecutive failures" type: "integer" example: 0 Log: type: "array" description: | Log contains the last few results (oldest first) items: x-nullable: true $ref: "#/definitions/HealthcheckResult" HealthcheckResult: description: | HealthcheckResult stores information about a single run of a healthcheck probe type: "object" properties: Start: description: | Date and time at which this check started in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "date-time" example: "2020-01-04T10:44:24.496525531Z" End: description: | Date and time at which this check ended in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2020-01-04T10:45:21.364524523Z" ExitCode: description: | ExitCode meanings: - `0` healthy - `1` unhealthy - `2` reserved (considered unhealthy) - other values: error running probe type: "integer" example: 0 Output: description: "Output from last check" type: "string" HostConfig: description: "Container configuration that depends on the host we are running on" allOf: - $ref: "#/definitions/Resources" - type: "object" properties: # Applicable to all platforms Binds: type: "array" description: | A list of volume bindings for this container. Each volume binding is a string in one of these forms: - `host-src:container-dest[:options]` to bind-mount a host path into the container. Both `host-src`, and `container-dest` must be an _absolute_ path. - `volume-name:container-dest[:options]` to bind-mount a volume managed by a volume driver into the container. `container-dest` must be an _absolute_ path. `options` is an optional, comma-delimited list of: - `nocopy` disables automatic copying of data from the container path to the volume. The `nocopy` flag only applies to named volumes. - `[ro|rw]` mounts a volume read-only or read-write, respectively. If omitted or set to `rw`, volumes are mounted read-write. - `[z|Z]` applies SELinux labels to allow or deny multiple containers to read and write to the same volume. - `z`: a _shared_ content label is applied to the content. This label indicates that multiple containers can share the volume content, for both reading and writing. - `Z`: a _private unshared_ label is applied to the content. This label indicates that only the current container can use a private volume. Labeling systems such as SELinux require proper labels to be placed on volume content that is mounted into a container. Without a label, the security system can prevent a container's processes from using the content. By default, the labels set by the host operating system are not modified. - `[[r]shared|[r]slave|[r]private]` specifies mount [propagation behavior](https://www.kernel.org/doc/Documentation/filesystems/sharedsubtree.txt). This only applies to bind-mounted volumes, not internal volumes or named volumes. 
Mount propagation requires the source mount point (the location where the source directory is mounted in the host operating system) to have the correct propagation properties. For shared volumes, the source mount point must be set to `shared`. For slave volumes, the mount must be set to either `shared` or `slave`. items: type: "string" ContainerIDFile: type: "string" description: "Path to a file where the container ID is written" LogConfig: type: "object" description: "The logging configuration for this container" properties: Type: type: "string" enum: - "json-file" - "syslog" - "journald" - "gelf" - "fluentd" - "awslogs" - "splunk" - "etwlogs" - "none" Config: type: "object" additionalProperties: type: "string" NetworkMode: type: "string" description: | Network mode to use for this container. Supported standard values are: `bridge`, `host`, `none`, and `container:<name|id>`. Any other value is taken as a custom network's name to which this container should connect to. PortBindings: $ref: "#/definitions/PortMap" RestartPolicy: $ref: "#/definitions/RestartPolicy" AutoRemove: type: "boolean" description: | Automatically remove the container when the container's process exits. This has no effect if `RestartPolicy` is set. VolumeDriver: type: "string" description: "Driver that this container uses to mount volumes." VolumesFrom: type: "array" description: | A list of volumes to inherit from another container, specified in the form `<container name>[:<ro|rw>]`. items: type: "string" Mounts: description: | Specification for mounts to be added to the container. type: "array" items: $ref: "#/definitions/Mount" # Applicable to UNIX platforms CapAdd: type: "array" description: | A list of kernel capabilities to add to the container. Conflicts with option 'Capabilities'. items: type: "string" CapDrop: type: "array" description: | A list of kernel capabilities to drop from the container. Conflicts with option 'Capabilities'. items: type: "string" CgroupnsMode: type: "string" enum: - "private" - "host" description: | cgroup namespace mode for the container. Possible values are: - `"private"`: the container runs in its own private cgroup namespace - `"host"`: use the host system's cgroup namespace If not specified, the daemon default is used, which can either be `"private"` or `"host"`, depending on daemon version, kernel support and configuration. Dns: type: "array" description: "A list of DNS servers for the container to use." items: type: "string" DnsOptions: type: "array" description: "A list of DNS options." items: type: "string" DnsSearch: type: "array" description: "A list of DNS search domains." items: type: "string" ExtraHosts: type: "array" description: | A list of hostnames/IP mappings to add to the container's `/etc/hosts` file. Specified in the form `["hostname:IP"]`. items: type: "string" GroupAdd: type: "array" description: | A list of additional groups that the container process will run as. items: type: "string" IpcMode: type: "string" description: | IPC sharing mode for the container. Possible values are: - `"none"`: own private IPC namespace, with /dev/shm not mounted - `"private"`: own private IPC namespace - `"shareable"`: own private IPC namespace, with a possibility to share it with other containers - `"container:<name|id>"`: join another (shareable) container's IPC namespace - `"host"`: use the host system's IPC namespace If not specified, daemon default is used, which can either be `"private"` or `"shareable"`, depending on daemon version and configuration. 
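A minimal sketch of a container-create request using the `HostConfig` fields described above: a `Binds` entry in `host-src:container-dest:options` form, a `PortBindings` map, and an `ExtraHosts` entry. The image, paths, and ports are placeholders; the create endpoint is defined elsewhere in this specification, and the socket path is an assumption.

```go
package main

import (
	"bytes"
	"context"
	"encoding/json"
	"fmt"
	"net"
	"net/http"
)

func main() {
	body, _ := json.Marshal(map[string]interface{}{
		"Image": "nginx:alpine",
		"HostConfig": map[string]interface{}{
			"Binds": []string{"/srv/www:/usr/share/nginx/html:ro"},
			"PortBindings": map[string]interface{}{
				"80/tcp": []map[string]string{{"HostIp": "127.0.0.1", "HostPort": "8080"}},
			},
			"ExtraHosts": []string{"db.internal:10.0.0.5"},
		},
	})

	cli := &http.Client{Transport: &http.Transport{
		DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
			return (&net.Dialer{}).DialContext(ctx, "unix", "/var/run/docker.sock")
		},
	}}
	resp, err := cli.Post("http://docker/v1.42/containers/create?name=web",
		"application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var created struct{ Id string }
	json.NewDecoder(resp.Body).Decode(&created)
	fmt.Println("created container:", created.Id)
}
```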
Cgroup: type: "string" description: "Cgroup to use for the container." Links: type: "array" description: | A list of links for the container in the form `container_name:alias`. items: type: "string" OomScoreAdj: type: "integer" description: | An integer value containing the score given to the container in order to tune OOM killer preferences. example: 500 PidMode: type: "string" description: | Set the PID (Process) Namespace mode for the container. It can be either: - `"container:<name|id>"`: joins another container's PID namespace - `"host"`: use the host's PID namespace inside the container Privileged: type: "boolean" description: "Gives the container full access to the host." PublishAllPorts: type: "boolean" description: | Allocates an ephemeral host port for all of a container's exposed ports. Ports are de-allocated when the container stops and allocated when the container starts. The allocated port might be changed when restarting the container. The port is selected from the ephemeral port range that depends on the kernel. For example, on Linux the range is defined by `/proc/sys/net/ipv4/ip_local_port_range`. ReadonlyRootfs: type: "boolean" description: "Mount the container's root filesystem as read only." SecurityOpt: type: "array" description: "A list of string values to customize labels for MLS systems, such as SELinux." items: type: "string" StorageOpt: type: "object" description: | Storage driver options for this container, in the form `{"size": "120G"}`. additionalProperties: type: "string" Tmpfs: type: "object" description: | A map of container directories which should be replaced by tmpfs mounts, and their corresponding mount options. For example: ``` { "/run": "rw,noexec,nosuid,size=65536k" } ``` additionalProperties: type: "string" UTSMode: type: "string" description: "UTS namespace to use for the container." UsernsMode: type: "string" description: | Sets the usernamespace mode for the container when usernamespace remapping option is enabled. ShmSize: type: "integer" description: | Size of `/dev/shm` in bytes. If omitted, the system uses 64MB. minimum: 0 Sysctls: type: "object" description: | A list of kernel parameters (sysctls) to set in the container. For example: ``` {"net.ipv4.ip_forward": "1"} ``` additionalProperties: type: "string" Runtime: type: "string" description: "Runtime to use with this container." # Applicable to Windows ConsoleSize: type: "array" description: | Initial console size, as an `[height, width]` array. (Windows only) minItems: 2 maxItems: 2 items: type: "integer" minimum: 0 Isolation: type: "string" description: | Isolation technology of the container. (Windows only) enum: - "default" - "process" - "hyperv" MaskedPaths: type: "array" description: | The list of paths to be masked inside the container (this overrides the default set of paths). items: type: "string" ReadonlyPaths: type: "array" description: | The list of paths to be set as read-only inside the container (this overrides the default set of paths). items: type: "string" ContainerConfig: description: "Configuration for a container that is portable between hosts" type: "object" properties: Hostname: description: "The hostname to use for the container, as a valid RFC 1123 hostname." type: "string" Domainname: description: "The domain name to use for the container." type: "string" User: description: "The user that commands are run as inside the container." type: "string" AttachStdin: description: "Whether to attach to `stdin`." 
type: "boolean" default: false AttachStdout: description: "Whether to attach to `stdout`." type: "boolean" default: true AttachStderr: description: "Whether to attach to `stderr`." type: "boolean" default: true ExposedPorts: description: | An object mapping ports to an empty object in the form: `{"<port>/<tcp|udp|sctp>": {}}` type: "object" additionalProperties: type: "object" enum: - {} default: {} Tty: description: | Attach standard streams to a TTY, including `stdin` if it is not closed. type: "boolean" default: false OpenStdin: description: "Open `stdin`" type: "boolean" default: false StdinOnce: description: "Close `stdin` after one attached client disconnects" type: "boolean" default: false Env: description: | A list of environment variables to set inside the container in the form `["VAR=value", ...]`. A variable without `=` is removed from the environment, rather than to have an empty value. type: "array" items: type: "string" Cmd: description: | Command to run specified as a string or an array of strings. type: "array" items: type: "string" Healthcheck: $ref: "#/definitions/HealthConfig" ArgsEscaped: description: "Command is already escaped (Windows only)" type: "boolean" Image: description: | The name of the image to use when creating the container/ type: "string" Volumes: description: | An object mapping mount point paths inside the container to empty objects. type: "object" additionalProperties: type: "object" enum: - {} default: {} WorkingDir: description: "The working directory for commands to run in." type: "string" Entrypoint: description: | The entry point for the container as a string or an array of strings. If the array consists of exactly one empty string (`[""]`) then the entry point is reset to system default (i.e., the entry point used by docker when there is no `ENTRYPOINT` instruction in the `Dockerfile`). type: "array" items: type: "string" NetworkDisabled: description: "Disable networking for the container." type: "boolean" MacAddress: description: "MAC address of the container." type: "string" OnBuild: description: | `ONBUILD` metadata that were defined in the image's `Dockerfile`. type: "array" items: type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" StopSignal: description: | Signal to stop a container as a string or unsigned integer. type: "string" default: "SIGTERM" StopTimeout: description: "Timeout to stop a container in seconds." type: "integer" default: 10 Shell: description: | Shell for when `RUN`, `CMD`, and `ENTRYPOINT` uses a shell. type: "array" items: type: "string" NetworkingConfig: description: | NetworkingConfig represents the container's networking configuration for each of its interfaces. It is used for the networking configs specified in the `docker create` and `docker network connect` commands. type: "object" properties: EndpointsConfig: description: | A mapping of network name to endpoint configuration for that network. type: "object" additionalProperties: $ref: "#/definitions/EndpointSettings" example: # putting an example here, instead of using the example values from # /definitions/EndpointSettings, because containers/create currently # does not support attaching to multiple networks, so the example request # would be confusing if it showed that multiple networks can be contained # in the EndpointsConfig. 
# TODO remove once we support multiple networks on container create (see https://github.com/moby/moby/blob/07e6b843594e061f82baa5fa23c2ff7d536c2a05/daemon/create.go#L323) EndpointsConfig: isolated_nw: IPAMConfig: IPv4Address: "172.20.30.33" IPv6Address: "2001:db8:abcd::3033" LinkLocalIPs: - "169.254.34.68" - "fe80::3468" Links: - "container_1" - "container_2" Aliases: - "server_x" - "server_y" NetworkSettings: description: "NetworkSettings exposes the network settings in the API" type: "object" properties: Bridge: description: Name of the network's bridge (for example, `docker0`). type: "string" example: "docker0" SandboxID: description: SandboxID uniquely represents a container's network stack. type: "string" example: "9d12daf2c33f5959c8bf90aa513e4f65b561738661003029ec84830cd503a0c3" HairpinMode: description: | Indicates if hairpin NAT should be enabled on the virtual interface. type: "boolean" example: false LinkLocalIPv6Address: description: IPv6 unicast address using the link-local prefix. type: "string" example: "fe80::42:acff:fe11:1" LinkLocalIPv6PrefixLen: description: Prefix length of the IPv6 unicast address. type: "integer" example: "64" Ports: $ref: "#/definitions/PortMap" SandboxKey: description: SandboxKey identifies the sandbox type: "string" example: "/var/run/docker/netns/8ab54b426c38" # TODO is SecondaryIPAddresses actually used? SecondaryIPAddresses: description: "" type: "array" items: $ref: "#/definitions/Address" x-nullable: true # TODO is SecondaryIPv6Addresses actually used? SecondaryIPv6Addresses: description: "" type: "array" items: $ref: "#/definitions/Address" x-nullable: true # TODO properties below are part of DefaultNetworkSettings, which is # marked as deprecated since Docker 1.9 and to be removed in Docker v17.12 EndpointID: description: | EndpointID uniquely represents a service endpoint in a Sandbox. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "b88f5b905aabf2893f3cbc4ee42d1ea7980bbc0a92e2c8922b1e1795298afb0b" Gateway: description: | Gateway address for the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "172.17.0.1" GlobalIPv6Address: description: | Global IPv6 address for the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "2001:db8::5689" GlobalIPv6PrefixLen: description: | Mask length of the global IPv6 address. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. 
This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "integer" example: 64 IPAddress: description: | IPv4 address for the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "172.17.0.4" IPPrefixLen: description: | Mask length of the IPv4 address. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "integer" example: 16 IPv6Gateway: description: | IPv6 gateway address for this network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "2001:db8:2::100" MacAddress: description: | MAC address for the container on the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "02:42:ac:11:00:04" Networks: description: | Information about all networks that the container is connected to. type: "object" additionalProperties: $ref: "#/definitions/EndpointSettings" Address: description: Address represents an IPv4 or IPv6 IP address. type: "object" properties: Addr: description: IP address. type: "string" PrefixLen: description: Mask length of the IP address. type: "integer" PortMap: description: | PortMap describes the mapping of container ports to host ports, using the container's port-number and protocol as key in the format `<port>/<protocol>`, for example, `80/udp`. If a container's port is mapped for multiple protocols, separate entries are added to the mapping table. type: "object" additionalProperties: type: "array" x-nullable: true items: $ref: "#/definitions/PortBinding" example: "443/tcp": - HostIp: "127.0.0.1" HostPort: "4443" "80/tcp": - HostIp: "0.0.0.0" HostPort: "80" - HostIp: "0.0.0.0" HostPort: "8080" "80/udp": - HostIp: "0.0.0.0" HostPort: "80" "53/udp": - HostIp: "0.0.0.0" HostPort: "53" "2377/tcp": null PortBinding: description: | PortBinding represents a binding between a host IP address and a host port. type: "object" properties: HostIp: description: "Host IP address that the container's port is mapped to." type: "string" example: "127.0.0.1" HostPort: description: "Host port number that the container's port is mapped to." type: "string" example: "4443" GraphDriverData: description: "Information about a container's graph driver." 
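A small sketch of reading the `PortMap` structure defined above from a container inspect response: keys are `<port>/<protocol>` strings and values are lists of host bindings, or `null` for ports that are exposed but not published. `my-container` and the socket path are placeholders.

```go
package main

import (
	"context"
	"encoding/json"
	"fmt"
	"net"
	"net/http"
)

type portBinding struct {
	HostIp   string
	HostPort string
}

func main() {
	cli := &http.Client{Transport: &http.Transport{
		DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
			return (&net.Dialer{}).DialContext(ctx, "unix", "/var/run/docker.sock")
		},
	}}
	resp, err := cli.Get("http://docker/v1.42/containers/my-container/json")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var inspect struct {
		NetworkSettings struct {
			Ports map[string][]portBinding
		}
	}
	if err := json.NewDecoder(resp.Body).Decode(&inspect); err != nil {
		panic(err)
	}
	for containerPort, bindings := range inspect.NetworkSettings.Ports {
		if bindings == nil {
			fmt.Printf("%s exposed but not published\n", containerPort)
			continue
		}
		for _, b := range bindings {
			fmt.Printf("%s -> %s:%s\n", containerPort, b.HostIp, b.HostPort)
		}
	}
}
```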
type: "object" required: [Name, Data] properties: Name: type: "string" x-nullable: false Data: type: "object" x-nullable: false additionalProperties: type: "string" Image: type: "object" required: - Id - Parent - Comment - Created - Container - DockerVersion - Author - Architecture - Os - Size - VirtualSize - GraphDriver - RootFS properties: Id: type: "string" x-nullable: false RepoTags: type: "array" items: type: "string" RepoDigests: type: "array" items: type: "string" Parent: type: "string" x-nullable: false Comment: type: "string" x-nullable: false Created: type: "string" x-nullable: false Container: type: "string" x-nullable: false ContainerConfig: $ref: "#/definitions/ContainerConfig" DockerVersion: type: "string" x-nullable: false Author: type: "string" x-nullable: false Config: $ref: "#/definitions/ContainerConfig" Architecture: type: "string" x-nullable: false Os: type: "string" x-nullable: false OsVersion: type: "string" Size: type: "integer" format: "int64" x-nullable: false VirtualSize: type: "integer" format: "int64" x-nullable: false GraphDriver: $ref: "#/definitions/GraphDriverData" RootFS: type: "object" required: [Type] properties: Type: type: "string" x-nullable: false Layers: type: "array" items: type: "string" BaseLayer: type: "string" Metadata: type: "object" properties: LastTagTime: type: "string" format: "dateTime" ImageSummary: type: "object" required: - Id - ParentId - RepoTags - RepoDigests - Created - Size - SharedSize - VirtualSize - Labels - Containers properties: Id: type: "string" x-nullable: false ParentId: type: "string" x-nullable: false RepoTags: type: "array" x-nullable: false items: type: "string" RepoDigests: type: "array" x-nullable: false items: type: "string" Created: type: "integer" x-nullable: false Size: type: "integer" x-nullable: false SharedSize: type: "integer" x-nullable: false VirtualSize: type: "integer" x-nullable: false Labels: type: "object" x-nullable: false additionalProperties: type: "string" Containers: x-nullable: false type: "integer" AuthConfig: type: "object" properties: username: type: "string" password: type: "string" email: type: "string" serveraddress: type: "string" example: username: "hannibal" password: "xxxx" serveraddress: "https://index.docker.io/v1/" ProcessConfig: type: "object" properties: privileged: type: "boolean" user: type: "string" tty: type: "boolean" entrypoint: type: "string" arguments: type: "array" items: type: "string" Volume: type: "object" required: [Name, Driver, Mountpoint, Labels, Scope, Options] properties: Name: type: "string" description: "Name of the volume." x-nullable: false Driver: type: "string" description: "Name of the volume driver used by the volume." x-nullable: false Mountpoint: type: "string" description: "Mount path of the volume on the host." x-nullable: false CreatedAt: type: "string" format: "dateTime" description: "Date/Time the volume was created." Status: type: "object" description: | Low-level details about the volume, provided by the volume driver. Details are returned as a map with key/value pairs: `{"key":"value","key2":"value2"}`. The `Status` field is optional, and is omitted if the volume driver does not support this feature. additionalProperties: type: "object" Labels: type: "object" description: "User-defined key/value metadata." x-nullable: false additionalProperties: type: "string" Scope: type: "string" description: | The level at which the volume exists. Either `global` for cluster-wide, or `local` for machine level. 
default: "local" x-nullable: false enum: ["local", "global"] Options: type: "object" description: | The driver specific options used when creating the volume. additionalProperties: type: "string" UsageData: type: "object" x-nullable: true required: [Size, RefCount] description: | Usage details about the volume. This information is used by the `GET /system/df` endpoint, and omitted in other endpoints. properties: Size: type: "integer" default: -1 description: | Amount of disk space used by the volume (in bytes). This information is only available for volumes created with the `"local"` volume driver. For volumes created with other volume drivers, this field is set to `-1` ("not available") x-nullable: false RefCount: type: "integer" default: -1 description: | The number of containers referencing this volume. This field is set to `-1` if the reference-count is not available. x-nullable: false example: Name: "tardis" Driver: "custom" Mountpoint: "/var/lib/docker/volumes/tardis" Status: hello: "world" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Scope: "local" CreatedAt: "2016-06-07T20:31:11.853781916Z" Network: type: "object" properties: Name: type: "string" Id: type: "string" Created: type: "string" format: "dateTime" Scope: type: "string" Driver: type: "string" EnableIPv6: type: "boolean" IPAM: $ref: "#/definitions/IPAM" Internal: type: "boolean" Attachable: type: "boolean" Ingress: type: "boolean" Containers: type: "object" additionalProperties: $ref: "#/definitions/NetworkContainer" Options: type: "object" additionalProperties: type: "string" Labels: type: "object" additionalProperties: type: "string" example: Name: "net01" Id: "7d86d31b1478e7cca9ebed7e73aa0fdeec46c5ca29497431d3007d2d9e15ed99" Created: "2016-10-19T04:33:30.360899459Z" Scope: "local" Driver: "bridge" EnableIPv6: false IPAM: Driver: "default" Config: - Subnet: "172.19.0.0/16" Gateway: "172.19.0.1" Options: foo: "bar" Internal: false Attachable: false Ingress: false Containers: 19a4d5d687db25203351ed79d478946f861258f018fe384f229f2efa4b23513c: Name: "test" EndpointID: "628cadb8bcb92de107b2a1e516cbffe463e321f548feb37697cce00ad694f21a" MacAddress: "02:42:ac:13:00:02" IPv4Address: "172.19.0.2/16" IPv6Address: "" Options: com.docker.network.bridge.default_bridge: "true" com.docker.network.bridge.enable_icc: "true" com.docker.network.bridge.enable_ip_masquerade: "true" com.docker.network.bridge.host_binding_ipv4: "0.0.0.0" com.docker.network.bridge.name: "docker0" com.docker.network.driver.mtu: "1500" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" IPAM: type: "object" properties: Driver: description: "Name of the IPAM driver to use." type: "string" default: "default" Config: description: | List of IPAM configuration options, specified as a map: ``` {"Subnet": <CIDR>, "IPRange": <CIDR>, "Gateway": <IP address>, "AuxAddress": <device_name:IP address>} ``` type: "array" items: type: "object" additionalProperties: type: "string" Options: description: "Driver-specific options, specified as a map." 
type: "object" additionalProperties: type: "string" NetworkContainer: type: "object" properties: Name: type: "string" EndpointID: type: "string" MacAddress: type: "string" IPv4Address: type: "string" IPv6Address: type: "string" BuildInfo: type: "object" properties: id: type: "string" stream: type: "string" error: type: "string" errorDetail: $ref: "#/definitions/ErrorDetail" status: type: "string" progress: type: "string" progressDetail: $ref: "#/definitions/ProgressDetail" aux: $ref: "#/definitions/ImageID" BuildCache: type: "object" properties: ID: type: "string" Parent: type: "string" Type: type: "string" Description: type: "string" InUse: type: "boolean" Shared: type: "boolean" Size: description: | Amount of disk space used by the build cache (in bytes). type: "integer" CreatedAt: description: | Date and time at which the build cache was created in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2016-08-18T10:44:24.496525531Z" LastUsedAt: description: | Date and time at which the build cache was last used in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" x-nullable: true example: "2017-08-09T07:09:37.632105588Z" UsageCount: type: "integer" ImageID: type: "object" description: "Image ID or Digest" properties: ID: type: "string" example: ID: "sha256:85f05633ddc1c50679be2b16a0479ab6f7637f8884e0cfe0f4d20e1ebb3d6e7c" CreateImageInfo: type: "object" properties: id: type: "string" error: type: "string" status: type: "string" progress: type: "string" progressDetail: $ref: "#/definitions/ProgressDetail" PushImageInfo: type: "object" properties: error: type: "string" status: type: "string" progress: type: "string" progressDetail: $ref: "#/definitions/ProgressDetail" ErrorDetail: type: "object" properties: code: type: "integer" message: type: "string" ProgressDetail: type: "object" properties: current: type: "integer" total: type: "integer" ErrorResponse: description: "Represents an error." type: "object" required: ["message"] properties: message: description: "The error message." type: "string" x-nullable: false example: message: "Something went wrong." IdResponse: description: "Response to an API call that returns just an Id" type: "object" required: ["Id"] properties: Id: description: "The id of the newly created object." type: "string" x-nullable: false EndpointSettings: description: "Configuration for a network endpoint." type: "object" properties: # Configurations IPAMConfig: $ref: "#/definitions/EndpointIPAMConfig" Links: type: "array" items: type: "string" example: - "container_1" - "container_2" Aliases: type: "array" items: type: "string" example: - "server_x" - "server_y" # Operational data NetworkID: description: | Unique ID of the network. type: "string" example: "08754567f1f40222263eab4102e1c733ae697e8e354aa9cd6e18d7402835292a" EndpointID: description: | Unique ID for the service endpoint in a Sandbox. type: "string" example: "b88f5b905aabf2893f3cbc4ee42d1ea7980bbc0a92e2c8922b1e1795298afb0b" Gateway: description: | Gateway address for this network. type: "string" example: "172.17.0.1" IPAddress: description: | IPv4 address. type: "string" example: "172.17.0.4" IPPrefixLen: description: | Mask length of the IPv4 address. type: "integer" example: 16 IPv6Gateway: description: | IPv6 gateway address. type: "string" example: "2001:db8:2::100" GlobalIPv6Address: description: | Global IPv6 address. 
type: "string" example: "2001:db8::5689" GlobalIPv6PrefixLen: description: | Mask length of the global IPv6 address. type: "integer" format: "int64" example: 64 MacAddress: description: | MAC address for the endpoint on this network. type: "string" example: "02:42:ac:11:00:04" DriverOpts: description: | DriverOpts is a mapping of driver options and values. These options are passed directly to the driver and are driver specific. type: "object" x-nullable: true additionalProperties: type: "string" example: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" EndpointIPAMConfig: description: | EndpointIPAMConfig represents an endpoint's IPAM configuration. type: "object" x-nullable: true properties: IPv4Address: type: "string" example: "172.20.30.33" IPv6Address: type: "string" example: "2001:db8:abcd::3033" LinkLocalIPs: type: "array" items: type: "string" example: - "169.254.34.68" - "fe80::3468" PluginMount: type: "object" x-nullable: false required: [Name, Description, Settable, Source, Destination, Type, Options] properties: Name: type: "string" x-nullable: false example: "some-mount" Description: type: "string" x-nullable: false example: "This is a mount that's used by the plugin." Settable: type: "array" items: type: "string" Source: type: "string" example: "/var/lib/docker/plugins/" Destination: type: "string" x-nullable: false example: "/mnt/state" Type: type: "string" x-nullable: false example: "bind" Options: type: "array" items: type: "string" example: - "rbind" - "rw" PluginDevice: type: "object" required: [Name, Description, Settable, Path] x-nullable: false properties: Name: type: "string" x-nullable: false Description: type: "string" x-nullable: false Settable: type: "array" items: type: "string" Path: type: "string" example: "/dev/fuse" PluginEnv: type: "object" x-nullable: false required: [Name, Description, Settable, Value] properties: Name: x-nullable: false type: "string" Description: x-nullable: false type: "string" Settable: type: "array" items: type: "string" Value: type: "string" PluginInterfaceType: type: "object" x-nullable: false required: [Prefix, Capability, Version] properties: Prefix: type: "string" x-nullable: false Capability: type: "string" x-nullable: false Version: type: "string" x-nullable: false PluginPrivilegeItem: description: | Describes a permission the user has to accept upon installing the plugin. type: "object" properties: Name: type: "string" example: "network" Description: type: "string" Value: type: "array" items: type: "string" example: - "host" Plugin: description: "A plugin for the Engine API" type: "object" required: [Settings, Enabled, Config, Name] properties: Id: type: "string" example: "5724e2c8652da337ab2eedd19fc6fc0ec908e4bd907c7421bf6a8dfc70c4c078" Name: type: "string" x-nullable: false example: "tiborvass/sample-volume-plugin" Enabled: description: True if the plugin is running. False if the plugin is not running, only installed. type: "boolean" x-nullable: false example: true Settings: description: "Settings that can be modified by users." 
type: "object" x-nullable: false required: [Args, Devices, Env, Mounts] properties: Mounts: type: "array" items: $ref: "#/definitions/PluginMount" Env: type: "array" items: type: "string" example: - "DEBUG=0" Args: type: "array" items: type: "string" Devices: type: "array" items: $ref: "#/definitions/PluginDevice" PluginReference: description: "plugin remote reference used to push/pull the plugin" type: "string" x-nullable: false example: "localhost:5000/tiborvass/sample-volume-plugin:latest" Config: description: "The config of a plugin." type: "object" x-nullable: false required: - Description - Documentation - Interface - Entrypoint - WorkDir - Network - Linux - PidHost - PropagatedMount - IpcHost - Mounts - Env - Args properties: DockerVersion: description: "Docker Version used to create the plugin" type: "string" x-nullable: false example: "17.06.0-ce" Description: type: "string" x-nullable: false example: "A sample volume plugin for Docker" Documentation: type: "string" x-nullable: false example: "https://docs.docker.com/engine/extend/plugins/" Interface: description: "The interface between Docker and the plugin" x-nullable: false type: "object" required: [Types, Socket] properties: Types: type: "array" items: $ref: "#/definitions/PluginInterfaceType" example: - "docker.volumedriver/1.0" Socket: type: "string" x-nullable: false example: "plugins.sock" ProtocolScheme: type: "string" example: "some.protocol/v1.0" description: "Protocol to use for clients connecting to the plugin." enum: - "" - "moby.plugins.http/v1" Entrypoint: type: "array" items: type: "string" example: - "/usr/bin/sample-volume-plugin" - "/data" WorkDir: type: "string" x-nullable: false example: "/bin/" User: type: "object" x-nullable: false properties: UID: type: "integer" format: "uint32" example: 1000 GID: type: "integer" format: "uint32" example: 1000 Network: type: "object" x-nullable: false required: [Type] properties: Type: x-nullable: false type: "string" example: "host" Linux: type: "object" x-nullable: false required: [Capabilities, AllowAllDevices, Devices] properties: Capabilities: type: "array" items: type: "string" example: - "CAP_SYS_ADMIN" - "CAP_SYSLOG" AllowAllDevices: type: "boolean" x-nullable: false example: false Devices: type: "array" items: $ref: "#/definitions/PluginDevice" PropagatedMount: type: "string" x-nullable: false example: "/mnt/volumes" IpcHost: type: "boolean" x-nullable: false example: false PidHost: type: "boolean" x-nullable: false example: false Mounts: type: "array" items: $ref: "#/definitions/PluginMount" Env: type: "array" items: $ref: "#/definitions/PluginEnv" example: - Name: "DEBUG" Description: "If set, prints debug messages" Settable: null Value: "0" Args: type: "object" x-nullable: false required: [Name, Description, Settable, Value] properties: Name: x-nullable: false type: "string" example: "args" Description: x-nullable: false type: "string" example: "command line arguments" Settable: type: "array" items: type: "string" Value: type: "array" items: type: "string" rootfs: type: "object" properties: type: type: "string" example: "layers" diff_ids: type: "array" items: type: "string" example: - "sha256:675532206fbf3030b8458f88d6e26d4eb1577688a25efec97154c94e8b6b4887" - "sha256:e216a057b1cb1efc11f8a268f37ef62083e70b1b38323ba252e25ac88904a7e8" ObjectVersion: description: | The version number of the object such as node, service, etc. This is needed to avoid conflicting writes. 
The client must send the version number along with the modified specification when updating these objects. This approach ensures safe concurrency and determinism in that the change on the object may not be applied if the version number has changed from the last read. In other words, if two update requests specify the same base version, only one of the requests can succeed. As a result, two separate update requests that happen at the same time will not unintentionally overwrite each other. type: "object" properties: Index: type: "integer" format: "uint64" example: 373531 NodeSpec: type: "object" properties: Name: description: "Name for the node." type: "string" example: "my-node" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" Role: description: "Role of the node." type: "string" enum: - "worker" - "manager" example: "manager" Availability: description: "Availability of the node." type: "string" enum: - "active" - "pause" - "drain" example: "active" example: Availability: "active" Name: "node-name" Role: "manager" Labels: foo: "bar" Node: type: "object" properties: ID: type: "string" example: "24ifsmvkjbyhk" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: description: | Date and time at which the node was added to the swarm in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2016-08-18T10:44:24.496525531Z" UpdatedAt: description: | Date and time at which the node was last updated in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2017-08-09T07:09:37.632105588Z" Spec: $ref: "#/definitions/NodeSpec" Description: $ref: "#/definitions/NodeDescription" Status: $ref: "#/definitions/NodeStatus" ManagerStatus: $ref: "#/definitions/ManagerStatus" NodeDescription: description: | NodeDescription encapsulates the properties of the Node as reported by the agent. type: "object" properties: Hostname: type: "string" example: "bf3067039e47" Platform: $ref: "#/definitions/Platform" Resources: $ref: "#/definitions/ResourceObject" Engine: $ref: "#/definitions/EngineDescription" TLSInfo: $ref: "#/definitions/TLSInfo" Platform: description: | Platform represents the platform (Arch/OS). type: "object" properties: Architecture: description: | Architecture represents the hardware architecture (for example, `x86_64`). type: "string" example: "x86_64" OS: description: | OS represents the Operating System (for example, `linux` or `windows`). type: "string" example: "linux" EngineDescription: description: "EngineDescription provides information about an engine." 
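A minimal sketch of the read-modify-write cycle that `ObjectVersion` above is designed for, using a node as the example: inspect the object, reuse its `Version.Index` in the update request, and let the daemon reject the write if the object changed in between. The node ID and socket path are placeholders, and swarm mode must be enabled.

```go
package main

import (
	"bytes"
	"context"
	"encoding/json"
	"fmt"
	"net"
	"net/http"
)

func main() {
	cli := &http.Client{Transport: &http.Transport{
		DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
			return (&net.Dialer{}).DialContext(ctx, "unix", "/var/run/docker.sock")
		},
	}}

	// Read: fetch the node and remember the version this change is based on.
	resp, err := cli.Get("http://docker/v1.42/nodes/24ifsmvkjbyhk")
	if err != nil {
		panic(err)
	}
	var node struct {
		Version struct{ Index uint64 }
		Spec    map[string]interface{}
	}
	json.NewDecoder(resp.Body).Decode(&node)
	resp.Body.Close()

	// Modify: drain the node, leaving the rest of the spec unchanged.
	node.Spec["Availability"] = "drain"
	body, _ := json.Marshal(node.Spec)

	// Write: the version query parameter guards against conflicting writes.
	url := fmt.Sprintf("http://docker/v1.42/nodes/24ifsmvkjbyhk/update?version=%d",
		node.Version.Index)
	resp, err = cli.Post(url, "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}
```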
type: "object" properties: EngineVersion: type: "string" example: "17.06.0" Labels: type: "object" additionalProperties: type: "string" example: foo: "bar" Plugins: type: "array" items: type: "object" properties: Type: type: "string" Name: type: "string" example: - Type: "Log" Name: "awslogs" - Type: "Log" Name: "fluentd" - Type: "Log" Name: "gcplogs" - Type: "Log" Name: "gelf" - Type: "Log" Name: "journald" - Type: "Log" Name: "json-file" - Type: "Log" Name: "logentries" - Type: "Log" Name: "splunk" - Type: "Log" Name: "syslog" - Type: "Network" Name: "bridge" - Type: "Network" Name: "host" - Type: "Network" Name: "ipvlan" - Type: "Network" Name: "macvlan" - Type: "Network" Name: "null" - Type: "Network" Name: "overlay" - Type: "Volume" Name: "local" - Type: "Volume" Name: "localhost:5000/vieux/sshfs:latest" - Type: "Volume" Name: "vieux/sshfs:latest" TLSInfo: description: | Information about the issuer of leaf TLS certificates and the trusted root CA certificate. type: "object" properties: TrustRoot: description: | The root CA certificate(s) that are used to validate leaf TLS certificates. type: "string" CertIssuerSubject: description: The base64-url-safe-encoded raw subject bytes of the issuer. type: "string" CertIssuerPublicKey: description: | The base64-url-safe-encoded raw public key bytes of the issuer. type: "string" example: TrustRoot: | -----BEGIN CERTIFICATE----- MIIBajCCARCgAwIBAgIUbYqrLSOSQHoxD8CwG6Bi2PJi9c8wCgYIKoZIzj0EAwIw EzERMA8GA1UEAxMIc3dhcm0tY2EwHhcNMTcwNDI0MjE0MzAwWhcNMzcwNDE5MjE0 MzAwWjATMREwDwYDVQQDEwhzd2FybS1jYTBZMBMGByqGSM49AgEGCCqGSM49AwEH A0IABJk/VyMPYdaqDXJb/VXh5n/1Yuv7iNrxV3Qb3l06XD46seovcDWs3IZNV1lf 3Skyr0ofcchipoiHkXBODojJydSjQjBAMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMB Af8EBTADAQH/MB0GA1UdDgQWBBRUXxuRcnFjDfR/RIAUQab8ZV/n4jAKBggqhkjO PQQDAgNIADBFAiAy+JTe6Uc3KyLCMiqGl2GyWGQqQDEcO3/YG36x7om65AIhAJvz pxv6zFeVEkAEEkqIYi0omA9+CjanB/6Bz4n1uw8H -----END CERTIFICATE----- CertIssuerSubject: "MBMxETAPBgNVBAMTCHN3YXJtLWNh" CertIssuerPublicKey: "MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEmT9XIw9h1qoNclv9VeHmf/Vi6/uI2vFXdBveXTpcPjqx6i9wNazchk1XWV/dKTKvSh9xyGKmiIeRcE4OiMnJ1A==" NodeStatus: description: | NodeStatus represents the status of a node. It provides the current status of the node, as seen by the manager. type: "object" properties: State: $ref: "#/definitions/NodeState" Message: type: "string" example: "" Addr: description: "IP address of the node." type: "string" example: "172.17.0.2" NodeState: description: "NodeState represents the state of a node." type: "string" enum: - "unknown" - "down" - "ready" - "disconnected" example: "ready" ManagerStatus: description: | ManagerStatus represents the status of a manager. It provides the current status of a node's manager component, if the node is a manager. x-nullable: true type: "object" properties: Leader: type: "boolean" default: false example: true Reachability: $ref: "#/definitions/Reachability" Addr: description: | The IP address and port at which the manager is reachable. type: "string" example: "10.0.0.46:2377" Reachability: description: "Reachability represents the reachability of a node." type: "string" enum: - "unknown" - "unreachable" - "reachable" example: "reachable" SwarmSpec: description: "User modifiable swarm configuration." type: "object" properties: Name: description: "Name of the swarm." type: "string" example: "default" Labels: description: "User-defined key/value metadata." 
type: "object" additionalProperties: type: "string" example: com.example.corp.type: "production" com.example.corp.department: "engineering" Orchestration: description: "Orchestration configuration." type: "object" x-nullable: true properties: TaskHistoryRetentionLimit: description: | The number of historic tasks to keep per instance or node. If negative, never remove completed or failed tasks. type: "integer" format: "int64" example: 10 Raft: description: "Raft configuration." type: "object" properties: SnapshotInterval: description: "The number of log entries between snapshots." type: "integer" format: "uint64" example: 10000 KeepOldSnapshots: description: | The number of snapshots to keep beyond the current snapshot. type: "integer" format: "uint64" LogEntriesForSlowFollowers: description: | The number of log entries to keep around to sync up slow followers after a snapshot is created. type: "integer" format: "uint64" example: 500 ElectionTick: description: | The number of ticks that a follower will wait for a message from the leader before becoming a candidate and starting an election. `ElectionTick` must be greater than `HeartbeatTick`. A tick currently defaults to one second, so these translate directly to seconds currently, but this is NOT guaranteed. type: "integer" example: 3 HeartbeatTick: description: | The number of ticks between heartbeats. Every HeartbeatTick ticks, the leader will send a heartbeat to the followers. A tick currently defaults to one second, so these translate directly to seconds currently, but this is NOT guaranteed. type: "integer" example: 1 Dispatcher: description: "Dispatcher configuration." type: "object" x-nullable: true properties: HeartbeatPeriod: description: | The delay for an agent to send a heartbeat to the dispatcher. type: "integer" format: "int64" example: 5000000000 CAConfig: description: "CA configuration." type: "object" x-nullable: true properties: NodeCertExpiry: description: "The duration node certificates are issued for." type: "integer" format: "int64" example: 7776000000000000 ExternalCAs: description: | Configuration for forwarding signing requests to an external certificate authority. type: "array" items: type: "object" properties: Protocol: description: | Protocol for communication with the external CA (currently only `cfssl` is supported). type: "string" enum: - "cfssl" default: "cfssl" URL: description: | URL where certificate signing requests should be sent. type: "string" Options: description: | An object with key/value pairs that are interpreted as protocol-specific options for the external CA driver. type: "object" additionalProperties: type: "string" CACert: description: | The root CA certificate (in PEM format) this external CA uses to issue TLS certificates (assumed to be to the current swarm root CA certificate if not provided). type: "string" SigningCACert: description: | The desired signing CA certificate for all swarm node TLS leaf certificates, in PEM format. type: "string" SigningCAKey: description: | The desired signing CA key for all swarm node TLS leaf certificates, in PEM format. type: "string" ForceRotate: description: | An integer whose purpose is to force swarm to generate a new signing CA certificate and key, if none have been specified in `SigningCACert` and `SigningCAKey` format: "uint64" type: "integer" EncryptionConfig: description: "Parameters related to encryption-at-rest." type: "object" properties: AutoLockManagers: description: | If set, generate a key and use it to lock data stored on the managers. 
type: "boolean" example: false TaskDefaults: description: "Defaults for creating tasks in this cluster." type: "object" properties: LogDriver: description: | The log driver to use for tasks created in the orchestrator if unspecified by a service. Updating this value only affects new tasks. Existing tasks continue to use their previously configured log driver until recreated. type: "object" properties: Name: description: | The log driver to use as a default for new tasks. type: "string" example: "json-file" Options: description: | Driver-specific options for the selectd log driver, specified as key/value pairs. type: "object" additionalProperties: type: "string" example: "max-file": "10" "max-size": "100m" # The Swarm information for `GET /info`. It is the same as `GET /swarm`, but # without `JoinTokens`. ClusterInfo: description: | ClusterInfo represents information about the swarm as is returned by the "/info" endpoint. Join-tokens are not included. x-nullable: true type: "object" properties: ID: description: "The ID of the swarm." type: "string" example: "abajmipo7b4xz5ip2nrla6b11" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: description: | Date and time at which the swarm was initialised in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2016-08-18T10:44:24.496525531Z" UpdatedAt: description: | Date and time at which the swarm was last updated in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2017-08-09T07:09:37.632105588Z" Spec: $ref: "#/definitions/SwarmSpec" TLSInfo: $ref: "#/definitions/TLSInfo" RootRotationInProgress: description: | Whether there is currently a root CA rotation in progress for the swarm type: "boolean" example: false DataPathPort: description: | DataPathPort specifies the data path port number for data traffic. Acceptable port range is 1024 to 49151. If no port is set or is set to 0, the default port (4789) is used. type: "integer" format: "uint32" default: 4789 example: 4789 DefaultAddrPool: description: | Default Address Pool specifies default subnet pools for global scope networks. type: "array" items: type: "string" format: "CIDR" example: ["10.10.0.0/16", "20.20.0.0/16"] SubnetSize: description: | SubnetSize specifies the subnet size of the networks created from the default subnet pool. type: "integer" format: "uint32" maximum: 29 default: 24 example: 24 JoinTokens: description: | JoinTokens contains the tokens workers and managers need to join the swarm. type: "object" properties: Worker: description: | The token workers can use to join the swarm. type: "string" example: "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-1awxwuwd3z9j1z3puu7rcgdbx" Manager: description: | The token managers can use to join the swarm. type: "string" example: "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-7p73s1dx5in4tatdymyhg9hu2" Swarm: type: "object" allOf: - $ref: "#/definitions/ClusterInfo" - type: "object" properties: JoinTokens: $ref: "#/definitions/JoinTokens" TaskSpec: description: "User modifiable task configuration." type: "object" properties: PluginSpec: type: "object" description: | Plugin spec for the service. *(Experimental release only.)* <p><br /></p> > **Note**: ContainerSpec, NetworkAttachmentSpec, and PluginSpec are > mutually exclusive. PluginSpec is only used when the Runtime field > is set to `plugin`. NetworkAttachmentSpec is used when the Runtime > field is set to `attachment`. 
properties: Name: description: "The name or 'alias' to use for the plugin." type: "string" Remote: description: "The plugin image reference to use." type: "string" Disabled: description: "Disable the plugin once scheduled." type: "boolean" PluginPrivilege: type: "array" items: $ref: "#/definitions/PluginPrivilegeItem" ContainerSpec: type: "object" description: | Container spec for the service. <p><br /></p> > **Note**: ContainerSpec, NetworkAttachmentSpec, and PluginSpec are > mutually exclusive. PluginSpec is only used when the Runtime field > is set to `plugin`. NetworkAttachmentSpec is used when the Runtime > field is set to `attachment`. properties: Image: description: "The image name to use for the container" type: "string" Labels: description: "User-defined key/value data." type: "object" additionalProperties: type: "string" Command: description: "The command to be run in the image." type: "array" items: type: "string" Args: description: "Arguments to the command." type: "array" items: type: "string" Hostname: description: | The hostname to use for the container, as a valid [RFC 1123](https://tools.ietf.org/html/rfc1123) hostname. type: "string" Env: description: | A list of environment variables in the form `VAR=value`. type: "array" items: type: "string" Dir: description: "The working directory for commands to run in." type: "string" User: description: "The user inside the container." type: "string" Groups: type: "array" description: | A list of additional groups that the container process will run as. items: type: "string" Privileges: type: "object" description: "Security options for the container" properties: CredentialSpec: type: "object" description: "CredentialSpec for managed service account (Windows only)" properties: Config: type: "string" example: "0bt9dmxjvjiqermk6xrop3ekq" description: | Load credential spec from a Swarm Config with the given ID. The specified config must also be present in the Configs field with the Runtime property set. <p><br /></p> > **Note**: `CredentialSpec.File`, `CredentialSpec.Registry`, > and `CredentialSpec.Config` are mutually exclusive. File: type: "string" example: "spec.json" description: | Load credential spec from this file. The file is read by the daemon, and must be present in the `CredentialSpecs` subdirectory in the docker data directory, which defaults to `C:\ProgramData\Docker\` on Windows. For example, specifying `spec.json` loads `C:\ProgramData\Docker\CredentialSpecs\spec.json`. <p><br /></p> > **Note**: `CredentialSpec.File`, `CredentialSpec.Registry`, > and `CredentialSpec.Config` are mutually exclusive. Registry: type: "string" description: | Load credential spec from this value in the Windows registry. The specified registry value must be located in: `HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization\Containers\CredentialSpecs` <p><br /></p> > **Note**: `CredentialSpec.File`, `CredentialSpec.Registry`, > and `CredentialSpec.Config` are mutually exclusive. SELinuxContext: type: "object" description: "SELinux labels of the container" properties: Disable: type: "boolean" description: "Disable SELinux" User: type: "string" description: "SELinux user label" Role: type: "string" description: "SELinux role label" Type: type: "string" description: "SELinux type label" Level: type: "string" description: "SELinux level label" TTY: description: "Whether a pseudo-TTY should be allocated." 
type: "boolean" OpenStdin: description: "Open `stdin`" type: "boolean" ReadOnly: description: "Mount the container's root filesystem as read only." type: "boolean" Mounts: description: | Specification for mounts to be added to containers created as part of the service. type: "array" items: $ref: "#/definitions/Mount" StopSignal: description: "Signal to stop the container." type: "string" StopGracePeriod: description: | Amount of time to wait for the container to terminate before forcefully killing it. type: "integer" format: "int64" HealthCheck: $ref: "#/definitions/HealthConfig" Hosts: type: "array" description: | A list of hostname/IP mappings to add to the container's `hosts` file. The format of extra hosts is specified in the [hosts(5)](http://man7.org/linux/man-pages/man5/hosts.5.html) man page: IP_address canonical_hostname [aliases...] items: type: "string" DNSConfig: description: | Specification for DNS related configurations in resolver configuration file (`resolv.conf`). type: "object" properties: Nameservers: description: "The IP addresses of the name servers." type: "array" items: type: "string" Search: description: "A search list for host-name lookup." type: "array" items: type: "string" Options: description: | A list of internal resolver variables to be modified (e.g., `debug`, `ndots:3`, etc.). type: "array" items: type: "string" Secrets: description: | Secrets contains references to zero or more secrets that will be exposed to the service. type: "array" items: type: "object" properties: File: description: | File represents a specific target that is backed by a file. type: "object" properties: Name: description: | Name represents the final filename in the filesystem. type: "string" UID: description: "UID represents the file UID." type: "string" GID: description: "GID represents the file GID." type: "string" Mode: description: "Mode represents the FileMode of the file." type: "integer" format: "uint32" SecretID: description: | SecretID represents the ID of the specific secret that we're referencing. type: "string" SecretName: description: | SecretName is the name of the secret that this references, but this is just provided for lookup/display purposes. The secret in the reference will be identified by its ID. type: "string" Configs: description: | Configs contains references to zero or more configs that will be exposed to the service. type: "array" items: type: "object" properties: File: description: | File represents a specific target that is backed by a file. <p><br /><p> > **Note**: `Configs.File` and `Configs.Runtime` are mutually exclusive type: "object" properties: Name: description: | Name represents the final filename in the filesystem. type: "string" UID: description: "UID represents the file UID." type: "string" GID: description: "GID represents the file GID." type: "string" Mode: description: "Mode represents the FileMode of the file." type: "integer" format: "uint32" Runtime: description: | Runtime represents a target that is not mounted into the container but is used by the task <p><br /><p> > **Note**: `Configs.File` and `Configs.Runtime` are mutually > exclusive type: "object" ConfigID: description: | ConfigID represents the ID of the specific config that we're referencing. type: "string" ConfigName: description: | ConfigName is the name of the config that this references, but this is just provided for lookup/display purposes. The config in the reference will be identified by its ID. 
type: "string" Isolation: type: "string" description: | Isolation technology of the containers running the service. (Windows only) enum: - "default" - "process" - "hyperv" Init: description: | Run an init inside the container that forwards signals and reaps processes. This field is omitted if empty, and the default (as configured on the daemon) is used. type: "boolean" x-nullable: true Sysctls: description: | Set kernel namedspaced parameters (sysctls) in the container. The Sysctls option on services accepts the same sysctls as the are supported on containers. Note that while the same sysctls are supported, no guarantees or checks are made about their suitability for a clustered environment, and it's up to the user to determine whether a given sysctl will work properly in a Service. type: "object" additionalProperties: type: "string" # This option is not used by Windows containers CapabilityAdd: type: "array" description: | A list of kernel capabilities to add to the default set for the container. items: type: "string" example: - "CAP_NET_RAW" - "CAP_SYS_ADMIN" - "CAP_SYS_CHROOT" - "CAP_SYSLOG" CapabilityDrop: type: "array" description: | A list of kernel capabilities to drop from the default set for the container. items: type: "string" example: - "CAP_NET_RAW" Ulimits: description: | A list of resource limits to set in the container. For example: `{"Name": "nofile", "Soft": 1024, "Hard": 2048}`" type: "array" items: type: "object" properties: Name: description: "Name of ulimit" type: "string" Soft: description: "Soft limit" type: "integer" Hard: description: "Hard limit" type: "integer" NetworkAttachmentSpec: description: | Read-only spec type for non-swarm containers attached to swarm overlay networks. <p><br /></p> > **Note**: ContainerSpec, NetworkAttachmentSpec, and PluginSpec are > mutually exclusive. PluginSpec is only used when the Runtime field > is set to `plugin`. NetworkAttachmentSpec is used when the Runtime > field is set to `attachment`. type: "object" properties: ContainerID: description: "ID of the container represented by this task" type: "string" Resources: description: | Resource requirements which apply to each individual container created as part of the service. type: "object" properties: Limits: description: "Define resources limits." $ref: "#/definitions/Limit" Reservation: description: "Define resources reservation." $ref: "#/definitions/ResourceObject" RestartPolicy: description: | Specification for the restart policy which applies to containers created as part of this service. type: "object" properties: Condition: description: "Condition for restart." type: "string" enum: - "none" - "on-failure" - "any" Delay: description: "Delay between restart attempts." type: "integer" format: "int64" MaxAttempts: description: | Maximum attempts to restart a given container before giving up (default value is 0, which is ignored). type: "integer" format: "int64" default: 0 Window: description: | Windows is the time window used to evaluate the restart policy (default value is 0, which is unbounded). type: "integer" format: "int64" default: 0 Placement: type: "object" properties: Constraints: description: | An array of constraint expressions to limit the set of nodes where a task can be scheduled. Constraint expressions can either use a _match_ (`==`) or _exclude_ (`!=`) rule. Multiple constraints find nodes that satisfy every expression (AND match). 
Constraints can match node or Docker Engine labels as follows: node attribute | matches | example ---------------------|--------------------------------|----------------------------------------------- `node.id` | Node ID | `node.id==2ivku8v2gvtg4` `node.hostname` | Node hostname | `node.hostname!=node-2` `node.role` | Node role (`manager`/`worker`) | `node.role==manager` `node.platform.os` | Node operating system | `node.platform.os==windows` `node.platform.arch` | Node architecture | `node.platform.arch==x86_64` `node.labels` | User-defined node labels | `node.labels.security==high` `engine.labels` | Docker Engine's labels | `engine.labels.operatingsystem==ubuntu-14.04` `engine.labels` apply to Docker Engine labels like operating system, drivers, etc. Swarm administrators add `node.labels` for operational purposes by using the [`node update endpoint`](#operation/NodeUpdate). type: "array" items: type: "string" example: - "node.hostname!=node3.corp.example.com" - "node.role!=manager" - "node.labels.type==production" - "node.platform.os==linux" - "node.platform.arch==x86_64" Preferences: description: | Preferences provide a way to make the scheduler aware of factors such as topology. They are provided in order from highest to lowest precedence. type: "array" items: type: "object" properties: Spread: type: "object" properties: SpreadDescriptor: description: | label descriptor, such as `engine.labels.az`. type: "string" example: - Spread: SpreadDescriptor: "node.labels.datacenter" - Spread: SpreadDescriptor: "node.labels.rack" MaxReplicas: description: | Maximum number of replicas for per node (default value is 0, which is unlimited) type: "integer" format: "int64" default: 0 Platforms: description: | Platforms stores all the platforms that the service's image can run on. This field is used in the platform filter for scheduling. If empty, then the platform filter is off, meaning there are no scheduling restrictions. type: "array" items: $ref: "#/definitions/Platform" ForceUpdate: description: | A counter that triggers an update even if no relevant parameters have been changed. type: "integer" Runtime: description: | Runtime is the type of runtime specified for the task executor. type: "string" Networks: description: "Specifies which networks the service should attach to." type: "array" items: $ref: "#/definitions/NetworkAttachmentConfig" LogDriver: description: | Specifies the log driver to use for tasks created from this spec. If not present, the default one for the swarm will be used, finally falling back to the engine default if not specified. type: "object" properties: Name: type: "string" Options: type: "object" additionalProperties: type: "string" TaskState: type: "string" enum: - "new" - "allocated" - "pending" - "assigned" - "accepted" - "preparing" - "ready" - "starting" - "running" - "complete" - "shutdown" - "failed" - "rejected" - "remove" - "orphaned" Task: type: "object" properties: ID: description: "The ID of the task." type: "string" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" UpdatedAt: type: "string" format: "dateTime" Name: description: "Name of the task." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" Spec: $ref: "#/definitions/TaskSpec" ServiceID: description: "The ID of the service this task is part of." type: "string" Slot: type: "integer" NodeID: description: "The ID of the node that this task is on." 
type: "string" AssignedGenericResources: $ref: "#/definitions/GenericResources" Status: type: "object" properties: Timestamp: type: "string" format: "dateTime" State: $ref: "#/definitions/TaskState" Message: type: "string" Err: type: "string" ContainerStatus: type: "object" properties: ContainerID: type: "string" PID: type: "integer" ExitCode: type: "integer" DesiredState: $ref: "#/definitions/TaskState" JobIteration: description: | If the Service this Task belongs to is a job-mode service, contains the JobIteration of the Service this Task was created for. Absent if the Task was created for a Replicated or Global Service. $ref: "#/definitions/ObjectVersion" example: ID: "0kzzo1i0y4jz6027t0k7aezc7" Version: Index: 71 CreatedAt: "2016-06-07T21:07:31.171892745Z" UpdatedAt: "2016-06-07T21:07:31.376370513Z" Spec: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ServiceID: "9mnpnzenvg8p8tdbtq4wvbkcz" Slot: 1 NodeID: "60gvrl6tm78dmak4yl7srz94v" Status: Timestamp: "2016-06-07T21:07:31.290032978Z" State: "running" Message: "started" ContainerStatus: ContainerID: "e5d62702a1b48d01c3e02ca1e0212a250801fa8d67caca0b6f35919ebc12f035" PID: 677 DesiredState: "running" NetworksAttachments: - Network: ID: "4qvuz4ko70xaltuqbt8956gd1" Version: Index: 18 CreatedAt: "2016-06-07T20:31:11.912919752Z" UpdatedAt: "2016-06-07T21:07:29.955277358Z" Spec: Name: "ingress" Labels: com.docker.swarm.internal: "true" DriverConfiguration: {} IPAMOptions: Driver: {} Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" DriverState: Name: "overlay" Options: com.docker.network.driver.overlay.vxlanid_list: "256" IPAMOptions: Driver: Name: "default" Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" Addresses: - "10.255.0.10/16" AssignedGenericResources: - DiscreteResourceSpec: Kind: "SSD" Value: 3 - NamedResourceSpec: Kind: "GPU" Value: "UUID1" - NamedResourceSpec: Kind: "GPU" Value: "UUID2" ServiceSpec: description: "User modifiable configuration for a service." properties: Name: description: "Name of the service." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" TaskTemplate: $ref: "#/definitions/TaskSpec" Mode: description: "Scheduling mode for the service." type: "object" properties: Replicated: type: "object" properties: Replicas: type: "integer" format: "int64" Global: type: "object" ReplicatedJob: description: | The mode used for services with a finite number of tasks that run to a completed state. type: "object" properties: MaxConcurrent: description: | The maximum number of replicas to run simultaneously. type: "integer" format: "int64" default: 1 TotalCompletions: description: | The total number of replicas desired to reach the Completed state. If unset, will default to the value of `MaxConcurrent` type: "integer" format: "int64" GlobalJob: description: | The mode used for services which run a task to the completed state on each valid node. type: "object" UpdateConfig: description: "Specification for the update strategy of the service." type: "object" properties: Parallelism: description: | Maximum number of tasks to be updated in one iteration (0 means unlimited parallelism). type: "integer" format: "int64" Delay: description: "Amount of time between updates, in nanoseconds." type: "integer" format: "int64" FailureAction: description: | Action to take if an updated task fails to run, or stops running during the update. 
type: "string" enum: - "continue" - "pause" - "rollback" Monitor: description: | Amount of time to monitor each updated task for failures, in nanoseconds. type: "integer" format: "int64" MaxFailureRatio: description: | The fraction of tasks that may fail during an update before the failure action is invoked, specified as a floating point number between 0 and 1. type: "number" default: 0 Order: description: | The order of operations when rolling out an updated task. Either the old task is shut down before the new task is started, or the new task is started before the old task is shut down. type: "string" enum: - "stop-first" - "start-first" RollbackConfig: description: "Specification for the rollback strategy of the service." type: "object" properties: Parallelism: description: | Maximum number of tasks to be rolled back in one iteration (0 means unlimited parallelism). type: "integer" format: "int64" Delay: description: | Amount of time between rollback iterations, in nanoseconds. type: "integer" format: "int64" FailureAction: description: | Action to take if an rolled back task fails to run, or stops running during the rollback. type: "string" enum: - "continue" - "pause" Monitor: description: | Amount of time to monitor each rolled back task for failures, in nanoseconds. type: "integer" format: "int64" MaxFailureRatio: description: | The fraction of tasks that may fail during a rollback before the failure action is invoked, specified as a floating point number between 0 and 1. type: "number" default: 0 Order: description: | The order of operations when rolling back a task. Either the old task is shut down before the new task is started, or the new task is started before the old task is shut down. type: "string" enum: - "stop-first" - "start-first" Networks: description: "Specifies which networks the service should attach to." type: "array" items: $ref: "#/definitions/NetworkAttachmentConfig" EndpointSpec: $ref: "#/definitions/EndpointSpec" EndpointPortConfig: type: "object" properties: Name: type: "string" Protocol: type: "string" enum: - "tcp" - "udp" - "sctp" TargetPort: description: "The port inside the container." type: "integer" PublishedPort: description: "The port on the swarm hosts." type: "integer" PublishMode: description: | The mode in which port is published. <p><br /></p> - "ingress" makes the target port accessible on every node, regardless of whether there is a task for the service running on that node or not. - "host" bypasses the routing mesh and publish the port directly on the swarm node where that service is running. type: "string" enum: - "ingress" - "host" default: "ingress" example: "ingress" EndpointSpec: description: "Properties that can be configured to access and load balance a service." type: "object" properties: Mode: description: | The mode of resolution to use for internal load balancing between tasks. type: "string" enum: - "vip" - "dnsrr" default: "vip" Ports: description: | List of exposed ports that this service is accessible on from the outside. Ports can only be provided if `vip` resolution mode is used. 
type: "array" items: $ref: "#/definitions/EndpointPortConfig" Service: type: "object" properties: ID: type: "string" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" UpdatedAt: type: "string" format: "dateTime" Spec: $ref: "#/definitions/ServiceSpec" Endpoint: type: "object" properties: Spec: $ref: "#/definitions/EndpointSpec" Ports: type: "array" items: $ref: "#/definitions/EndpointPortConfig" VirtualIPs: type: "array" items: type: "object" properties: NetworkID: type: "string" Addr: type: "string" UpdateStatus: description: "The status of a service update." type: "object" properties: State: type: "string" enum: - "updating" - "paused" - "completed" StartedAt: type: "string" format: "dateTime" CompletedAt: type: "string" format: "dateTime" Message: type: "string" ServiceStatus: description: | The status of the service's tasks. Provided only when requested as part of a ServiceList operation. type: "object" properties: RunningTasks: description: | The number of tasks for the service currently in the Running state. type: "integer" format: "uint64" example: 7 DesiredTasks: description: | The number of tasks for the service desired to be running. For replicated services, this is the replica count from the service spec. For global services, this is computed by taking count of all tasks for the service with a Desired State other than Shutdown. type: "integer" format: "uint64" example: 10 CompletedTasks: description: | The number of tasks for a job that are in the Completed state. This field must be cross-referenced with the service type, as the value of 0 may mean the service is not in a job mode, or it may mean the job-mode service has no tasks yet Completed. type: "integer" format: "uint64" JobStatus: description: | The status of the service when it is in one of ReplicatedJob or GlobalJob modes. Absent on Replicated and Global mode services. The JobIteration is an ObjectVersion, but unlike the Service's version, does not need to be sent with an update request. type: "object" properties: JobIteration: description: | JobIteration is a value increased each time a Job is executed, successfully or otherwise. "Executed", in this case, means the job as a whole has been started, not that an individual Task has been launched. A job is "Executed" when its ServiceSpec is updated. JobIteration can be used to disambiguate Tasks belonging to different executions of a job. Though JobIteration will increase with each subsequent execution, it may not necessarily increase by 1, and so JobIteration should not be used to $ref: "#/definitions/ObjectVersion" LastExecution: description: | The last time, as observed by the server, that this job was started. 
type: "string" format: "dateTime" example: ID: "9mnpnzenvg8p8tdbtq4wvbkcz" Version: Index: 19 CreatedAt: "2016-06-07T21:05:51.880065305Z" UpdatedAt: "2016-06-07T21:07:29.962229872Z" Spec: Name: "hopeful_cori" TaskTemplate: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ForceUpdate: 0 Mode: Replicated: Replicas: 1 UpdateConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 RollbackConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 EndpointSpec: Mode: "vip" Ports: - Protocol: "tcp" TargetPort: 6379 PublishedPort: 30001 Endpoint: Spec: Mode: "vip" Ports: - Protocol: "tcp" TargetPort: 6379 PublishedPort: 30001 Ports: - Protocol: "tcp" TargetPort: 6379 PublishedPort: 30001 VirtualIPs: - NetworkID: "4qvuz4ko70xaltuqbt8956gd1" Addr: "10.255.0.2/16" - NetworkID: "4qvuz4ko70xaltuqbt8956gd1" Addr: "10.255.0.3/16" ImageDeleteResponseItem: type: "object" properties: Untagged: description: "The image ID of an image that was untagged" type: "string" Deleted: description: "The image ID of an image that was deleted" type: "string" ServiceUpdateResponse: type: "object" properties: Warnings: description: "Optional warning messages" type: "array" items: type: "string" example: Warning: "unable to pin image doesnotexist:latest to digest: image library/doesnotexist:latest not found" ContainerSummary: type: "object" properties: Id: description: "The ID of this container" type: "string" x-go-name: "ID" Names: description: "The names that this container has been given" type: "array" items: type: "string" Image: description: "The name of the image used when creating this container" type: "string" ImageID: description: "The ID of the image that this container was created from" type: "string" Command: description: "Command to run when starting the container" type: "string" Created: description: "When the container was created" type: "integer" format: "int64" Ports: description: "The ports exposed by this container" type: "array" items: $ref: "#/definitions/Port" SizeRw: description: "The size of files that have been created or changed by this container" type: "integer" format: "int64" SizeRootFs: description: "The total size of all the files in this container" type: "integer" format: "int64" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" State: description: "The state of this container (e.g. `Exited`)" type: "string" Status: description: "Additional human-readable status of this container (e.g. `Exit 0`)" type: "string" HostConfig: type: "object" properties: NetworkMode: type: "string" NetworkSettings: description: "A summary of the container's network settings" type: "object" properties: Networks: type: "object" additionalProperties: $ref: "#/definitions/EndpointSettings" Mounts: type: "array" items: $ref: "#/definitions/Mount" Driver: description: "Driver represents a driver (network, logging, secrets)." type: "object" required: [Name] properties: Name: description: "Name of the driver." type: "string" x-nullable: false example: "some-driver" Options: description: "Key/value map of driver-specific options." type: "object" x-nullable: false additionalProperties: type: "string" example: OptionA: "value for driver-specific option A" OptionB: "value for driver-specific option B" SecretSpec: type: "object" properties: Name: description: "User-defined name of the secret." 
type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" example: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Data: description: | Base64-url-safe-encoded ([RFC 4648](https://tools.ietf.org/html/rfc4648#section-5)) data to store as secret. This field is only used to _create_ a secret, and is not returned by other endpoints. type: "string" example: "" Driver: description: | Name of the secrets driver used to fetch the secret's value from an external secret store. $ref: "#/definitions/Driver" Templating: description: | Templating driver, if applicable Templating controls whether and how to evaluate the config payload as a template. If no driver is set, no templating is used. $ref: "#/definitions/Driver" Secret: type: "object" properties: ID: type: "string" example: "blt1owaxmitz71s9v5zh81zun" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" example: "2017-07-20T13:55:28.678958722Z" UpdatedAt: type: "string" format: "dateTime" example: "2017-07-20T13:55:28.678958722Z" Spec: $ref: "#/definitions/SecretSpec" ConfigSpec: type: "object" properties: Name: description: "User-defined name of the config." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" Data: description: | Base64-url-safe-encoded ([RFC 4648](https://tools.ietf.org/html/rfc4648#section-5)) config data. type: "string" Templating: description: | Templating driver, if applicable Templating controls whether and how to evaluate the config payload as a template. If no driver is set, no templating is used. $ref: "#/definitions/Driver" Config: type: "object" properties: ID: type: "string" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" UpdatedAt: type: "string" format: "dateTime" Spec: $ref: "#/definitions/ConfigSpec" ContainerState: description: | ContainerState stores container's running state. It's part of ContainerJSONBase and will be returned by the "inspect" command. type: "object" properties: Status: description: | String representation of the container state. Can be one of "created", "running", "paused", "restarting", "removing", "exited", or "dead". type: "string" enum: ["created", "running", "paused", "restarting", "removing", "exited", "dead"] example: "running" Running: description: | Whether this container is running. Note that a running container can be _paused_. The `Running` and `Paused` booleans are not mutually exclusive: When pausing a container (on Linux), the freezer cgroup is used to suspend all processes in the container. Freezing the process requires the process to be running. As a result, paused containers are both `Running` _and_ `Paused`. Use the `Status` field instead to determine if a container's state is "running". type: "boolean" example: true Paused: description: "Whether this container is paused." type: "boolean" example: false Restarting: description: "Whether this container is restarting." type: "boolean" example: false OOMKilled: description: | Whether this container has been killed because it ran out of memory. type: "boolean" example: false Dead: type: "boolean" example: false Pid: description: "The process ID of this container" type: "integer" example: 1234 ExitCode: description: "The last exit code of this container" type: "integer" example: 0 Error: type: "string" StartedAt: description: "The time when this container was last started." 
type: "string" example: "2020-01-06T09:06:59.461876391Z" FinishedAt: description: "The time when this container last exited." type: "string" example: "2020-01-06T09:07:59.461876391Z" Health: x-nullable: true $ref: "#/definitions/Health" SystemVersion: type: "object" description: | Response of Engine API: GET "/version" properties: Platform: type: "object" required: [Name] properties: Name: type: "string" Components: type: "array" description: | Information about system components items: type: "object" x-go-name: ComponentVersion required: [Name, Version] properties: Name: description: | Name of the component type: "string" example: "Engine" Version: description: | Version of the component type: "string" x-nullable: false example: "19.03.12" Details: description: | Key/value pairs of strings with additional information about the component. These values are intended for informational purposes only, and their content is not defined, and not part of the API specification. These messages can be printed by the client as information to the user. type: "object" x-nullable: true Version: description: "The version of the daemon" type: "string" example: "19.03.12" ApiVersion: description: | The default (and highest) API version that is supported by the daemon type: "string" example: "1.40" MinAPIVersion: description: | The minimum API version that is supported by the daemon type: "string" example: "1.12" GitCommit: description: | The Git commit of the source code that was used to build the daemon type: "string" example: "48a66213fe" GoVersion: description: | The version Go used to compile the daemon, and the version of the Go runtime in use. type: "string" example: "go1.13.14" Os: description: | The operating system that the daemon is running on ("linux" or "windows") type: "string" example: "linux" Arch: description: | The architecture that the daemon is running on type: "string" example: "amd64" KernelVersion: description: | The kernel version (`uname -r`) that the daemon is running on. This field is omitted when empty. type: "string" example: "4.19.76-linuxkit" Experimental: description: | Indicates if the daemon is started with experimental features enabled. This field is omitted when empty / false. type: "boolean" example: true BuildTime: description: | The date and time that the daemon was compiled. type: "string" example: "2020-06-22T15:49:27.000000000+00:00" SystemInfo: type: "object" properties: ID: description: | Unique identifier of the daemon. <p><br /></p> > **Note**: The format of the ID itself is not part of the API, and > should not be considered stable. type: "string" example: "7TRN:IPZB:QYBB:VPBQ:UMPP:KARE:6ZNR:XE6T:7EWV:PKF4:ZOJD:TPYS" Containers: description: "Total number of containers on the host." type: "integer" example: 14 ContainersRunning: description: | Number of containers with status `"running"`. type: "integer" example: 3 ContainersPaused: description: | Number of containers with status `"paused"`. type: "integer" example: 1 ContainersStopped: description: | Number of containers with status `"stopped"`. type: "integer" example: 10 Images: description: | Total number of images on the host. Both _tagged_ and _untagged_ (dangling) images are counted. type: "integer" example: 508 Driver: description: "Name of the storage driver in use." type: "string" example: "overlay2" DriverStatus: description: | Information specific to the storage driver, provided as "label" / "value" pairs. 
This information is provided by the storage driver, and formatted in a way consistent with the output of `docker info` on the command line. <p><br /></p> > **Note**: The information returned in this field, including the > formatting of values and labels, should not be considered stable, > and may change without notice. type: "array" items: type: "array" items: type: "string" example: - ["Backing Filesystem", "extfs"] - ["Supports d_type", "true"] - ["Native Overlay Diff", "true"] DockerRootDir: description: | Root directory of persistent Docker state. Defaults to `/var/lib/docker` on Linux, and `C:\ProgramData\docker` on Windows. type: "string" example: "/var/lib/docker" Plugins: $ref: "#/definitions/PluginsInfo" MemoryLimit: description: "Indicates if the host has memory limit support enabled." type: "boolean" example: true SwapLimit: description: "Indicates if the host has memory swap limit support enabled." type: "boolean" example: true KernelMemory: description: | Indicates if the host has kernel memory limit support enabled. <p><br /></p> > **Deprecated**: This field is deprecated as the kernel 5.4 deprecated > `kmem.limit_in_bytes`. type: "boolean" example: true CpuCfsPeriod: description: | Indicates if CPU CFS(Completely Fair Scheduler) period is supported by the host. type: "boolean" example: true CpuCfsQuota: description: | Indicates if CPU CFS(Completely Fair Scheduler) quota is supported by the host. type: "boolean" example: true CPUShares: description: | Indicates if CPU Shares limiting is supported by the host. type: "boolean" example: true CPUSet: description: | Indicates if CPUsets (cpuset.cpus, cpuset.mems) are supported by the host. See [cpuset(7)](https://www.kernel.org/doc/Documentation/cgroup-v1/cpusets.txt) type: "boolean" example: true PidsLimit: description: "Indicates if the host kernel has PID limit support enabled." type: "boolean" example: true OomKillDisable: description: "Indicates if OOM killer disable is supported on the host." type: "boolean" IPv4Forwarding: description: "Indicates IPv4 forwarding is enabled." type: "boolean" example: true BridgeNfIptables: description: "Indicates if `bridge-nf-call-iptables` is available on the host." type: "boolean" example: true BridgeNfIp6tables: description: "Indicates if `bridge-nf-call-ip6tables` is available on the host." type: "boolean" example: true Debug: description: | Indicates if the daemon is running in debug-mode / with debug-level logging enabled. type: "boolean" example: true NFd: description: | The total number of file Descriptors in use by the daemon process. This information is only returned if debug-mode is enabled. type: "integer" example: 64 NGoroutines: description: | The number of goroutines that currently exist. This information is only returned if debug-mode is enabled. type: "integer" example: 174 SystemTime: description: | Current system-time in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" example: "2017-08-08T20:28:29.06202363Z" LoggingDriver: description: | The logging driver to use as a default for new containers. type: "string" CgroupDriver: description: | The driver to use for managing cgroups. type: "string" enum: ["cgroupfs", "systemd", "none"] default: "cgroupfs" example: "cgroupfs" CgroupVersion: description: | The version of the cgroup. type: "string" enum: ["1", "2"] default: "1" example: "1" NEventsListener: description: "Number of event listeners subscribed." 
type: "integer" example: 30 KernelVersion: description: | Kernel version of the host. On Linux, this information obtained from `uname`. On Windows this information is queried from the <kbd>HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\</kbd> registry value, for example _"10.0 14393 (14393.1198.amd64fre.rs1_release_sec.170427-1353)"_. type: "string" example: "4.9.38-moby" OperatingSystem: description: | Name of the host's operating system, for example: "Ubuntu 16.04.2 LTS" or "Windows Server 2016 Datacenter" type: "string" example: "Alpine Linux v3.5" OSVersion: description: | Version of the host's operating system <p><br /></p> > **Note**: The information returned in this field, including its > very existence, and the formatting of values, should not be considered > stable, and may change without notice. type: "string" example: "16.04" OSType: description: | Generic type of the operating system of the host, as returned by the Go runtime (`GOOS`). Currently returned values are "linux" and "windows". A full list of possible values can be found in the [Go documentation](https://golang.org/doc/install/source#environment). type: "string" example: "linux" Architecture: description: | Hardware architecture of the host, as returned by the Go runtime (`GOARCH`). A full list of possible values can be found in the [Go documentation](https://golang.org/doc/install/source#environment). type: "string" example: "x86_64" NCPU: description: | The number of logical CPUs usable by the daemon. The number of available CPUs is checked by querying the operating system when the daemon starts. Changes to operating system CPU allocation after the daemon is started are not reflected. type: "integer" example: 4 MemTotal: description: | Total amount of physical memory available on the host, in bytes. type: "integer" format: "int64" example: 2095882240 IndexServerAddress: description: | Address / URL of the index server that is used for image search, and as a default for user authentication for Docker Hub and Docker Cloud. default: "https://index.docker.io/v1/" type: "string" example: "https://index.docker.io/v1/" RegistryConfig: $ref: "#/definitions/RegistryServiceConfig" GenericResources: $ref: "#/definitions/GenericResources" HttpProxy: description: | HTTP-proxy configured for the daemon. This value is obtained from the [`HTTP_PROXY`](https://www.gnu.org/software/wget/manual/html_node/Proxies.html) environment variable. Credentials ([user info component](https://tools.ietf.org/html/rfc3986#section-3.2.1)) in the proxy URL are masked in the API response. Containers do not automatically inherit this configuration. type: "string" example: "http://xxxxx:[email protected]:8080" HttpsProxy: description: | HTTPS-proxy configured for the daemon. This value is obtained from the [`HTTPS_PROXY`](https://www.gnu.org/software/wget/manual/html_node/Proxies.html) environment variable. Credentials ([user info component](https://tools.ietf.org/html/rfc3986#section-3.2.1)) in the proxy URL are masked in the API response. Containers do not automatically inherit this configuration. type: "string" example: "https://xxxxx:[email protected]:4443" NoProxy: description: | Comma-separated list of domain extensions for which no proxy should be used. This value is obtained from the [`NO_PROXY`](https://www.gnu.org/software/wget/manual/html_node/Proxies.html) environment variable. Containers do not automatically inherit this configuration. 
type: "string" example: "*.local, 169.254/16" Name: description: "Hostname of the host." type: "string" example: "node5.corp.example.com" Labels: description: | User-defined labels (key/value metadata) as set on the daemon. <p><br /></p> > **Note**: When part of a Swarm, nodes can both have _daemon_ labels, > set through the daemon configuration, and _node_ labels, set from a > manager node in the Swarm. Node labels are not included in this > field. Node labels can be retrieved using the `/nodes/(id)` endpoint > on a manager node in the Swarm. type: "array" items: type: "string" example: ["storage=ssd", "production"] ExperimentalBuild: description: | Indicates if experimental features are enabled on the daemon. type: "boolean" example: true ServerVersion: description: | Version string of the daemon. > **Note**: the [standalone Swarm API](https://docs.docker.com/swarm/swarm-api/) > returns the Swarm version instead of the daemon version, for example > `swarm/1.2.8`. type: "string" example: "17.06.0-ce" ClusterStore: description: | URL of the distributed storage backend. The storage backend is used for multihost networking (to store network and endpoint information) and by the node discovery mechanism. <p><br /></p> > **Deprecated**: This field is only propagated when using standalone Swarm > mode, and overlay networking using an external k/v store. Overlay > networks with Swarm mode enabled use the built-in raft store, and > this field will be empty. type: "string" example: "consul://consul.corp.example.com:8600/some/path" ClusterAdvertise: description: | The network endpoint that the Engine advertises for the purpose of node discovery. ClusterAdvertise is a `host:port` combination on which the daemon is reachable by other hosts. <p><br /></p> > **Deprecated**: This field is only propagated when using standalone Swarm > mode, and overlay networking using an external k/v store. Overlay > networks with Swarm mode enabled use the built-in raft store, and > this field will be empty. type: "string" example: "node5.corp.example.com:8000" Runtimes: description: | List of [OCI compliant](https://github.com/opencontainers/runtime-spec) runtimes configured on the daemon. Keys hold the "name" used to reference the runtime. The Docker daemon relies on an OCI compliant runtime (invoked via the `containerd` daemon) as its interface to the Linux kernel namespaces, cgroups, and SELinux. The default runtime is `runc`, and automatically configured. Additional runtimes can be configured by the user and will be listed here. type: "object" additionalProperties: $ref: "#/definitions/Runtime" default: runc: path: "runc" example: runc: path: "runc" runc-master: path: "/go/bin/runc" custom: path: "/usr/local/bin/my-oci-runtime" runtimeArgs: ["--debug", "--systemd-cgroup=false"] DefaultRuntime: description: | Name of the default OCI runtime that is used when starting containers. The default can be overridden per-container at create time. type: "string" default: "runc" example: "runc" Swarm: $ref: "#/definitions/SwarmInfo" LiveRestoreEnabled: description: | Indicates if live restore is enabled. If enabled, containers are kept running when the daemon is shutdown or upon daemon start if running containers are detected. type: "boolean" default: false example: false Isolation: description: | Represents the isolation technology to use as a default for containers. The supported values are platform-specific. 
If no isolation value is specified on daemon start, on Windows client, the default is `hyperv`, and on Windows server, the default is `process`. This option is currently not used on other platforms. default: "default" type: "string" enum: - "default" - "hyperv" - "process" InitBinary: description: | Name and, optional, path of the `docker-init` binary. If the path is omitted, the daemon searches the host's `$PATH` for the binary and uses the first result. type: "string" example: "docker-init" ContainerdCommit: $ref: "#/definitions/Commit" RuncCommit: $ref: "#/definitions/Commit" InitCommit: $ref: "#/definitions/Commit" SecurityOptions: description: | List of security features that are enabled on the daemon, such as apparmor, seccomp, SELinux, user-namespaces (userns), and rootless. Additional configuration options for each security feature may be present, and are included as a comma-separated list of key/value pairs. type: "array" items: type: "string" example: - "name=apparmor" - "name=seccomp,profile=default" - "name=selinux" - "name=userns" - "name=rootless" ProductLicense: description: | Reports a summary of the product license on the daemon. If a commercial license has been applied to the daemon, information such as number of nodes, and expiration are included. type: "string" example: "Community Engine" DefaultAddressPools: description: | List of custom default address pools for local networks, which can be specified in the daemon.json file or dockerd option. Example: a Base "10.10.0.0/16" with Size 24 will define the set of 256 10.10.[0-255].0/24 address pools. type: "array" items: type: "object" properties: Base: description: "The network address in CIDR format" type: "string" example: "10.10.0.0/16" Size: description: "The network pool size" type: "integer" example: "24" Warnings: description: | List of warnings / informational messages about missing features, or issues related to the daemon configuration. These messages can be printed by the client as information to the user. type: "array" items: type: "string" example: - "WARNING: No memory limit support" - "WARNING: bridge-nf-call-iptables is disabled" - "WARNING: bridge-nf-call-ip6tables is disabled" # PluginsInfo is a temp struct holding Plugins name # registered with docker daemon. It is used by Info struct PluginsInfo: description: | Available plugins per type. <p><br /></p> > **Note**: Only unmanaged (V1) plugins are included in this list. > V1 plugins are "lazily" loaded, and are not returned in this list > if there is no resource using the plugin. type: "object" properties: Volume: description: "Names of available volume-drivers, and network-driver plugins." type: "array" items: type: "string" example: ["local"] Network: description: "Names of available network-drivers, and network-driver plugins." type: "array" items: type: "string" example: ["bridge", "host", "ipvlan", "macvlan", "null", "overlay"] Authorization: description: "Names of available authorization plugins." type: "array" items: type: "string" example: ["img-authz-plugin", "hbm"] Log: description: "Names of available logging-drivers, and logging-driver plugins." type: "array" items: type: "string" example: ["awslogs", "fluentd", "gcplogs", "gelf", "journald", "json-file", "logentries", "splunk", "syslog"] RegistryServiceConfig: description: | RegistryServiceConfig stores daemon registry services configuration. 
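  # A minimal sketch (illustrative only, assuming the daemon.json configuration
  # file) of how the Runtimes and DefaultAddressPools values reported above are
  # typically configured on the daemon:
  #
  #   {
  #     "runtimes": {
  #       "custom": { "path": "/usr/local/bin/my-oci-runtime" }
  #     },
  #     "default-address-pools": [
  #       { "base": "10.10.0.0/16", "size": 24 }
  #     ]
  #   }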
type: "object" x-nullable: true properties: AllowNondistributableArtifactsCIDRs: description: | List of IP ranges to which nondistributable artifacts can be pushed, using the CIDR syntax [RFC 4632](https://tools.ietf.org/html/4632). Some images (for example, Windows base images) contain artifacts whose distribution is restricted by license. When these images are pushed to a registry, restricted artifacts are not included. This configuration override this behavior, and enables the daemon to push nondistributable artifacts to all registries whose resolved IP address is within the subnet described by the CIDR syntax. This option is useful when pushing images containing nondistributable artifacts to a registry on an air-gapped network so hosts on that network can pull the images without connecting to another server. > **Warning**: Nondistributable artifacts typically have restrictions > on how and where they can be distributed and shared. Only use this > feature to push artifacts to private registries and ensure that you > are in compliance with any terms that cover redistributing > nondistributable artifacts. type: "array" items: type: "string" example: ["::1/128", "127.0.0.0/8"] AllowNondistributableArtifactsHostnames: description: | List of registry hostnames to which nondistributable artifacts can be pushed, using the format `<hostname>[:<port>]` or `<IP address>[:<port>]`. Some images (for example, Windows base images) contain artifacts whose distribution is restricted by license. When these images are pushed to a registry, restricted artifacts are not included. This configuration override this behavior for the specified registries. This option is useful when pushing images containing nondistributable artifacts to a registry on an air-gapped network so hosts on that network can pull the images without connecting to another server. > **Warning**: Nondistributable artifacts typically have restrictions > on how and where they can be distributed and shared. Only use this > feature to push artifacts to private registries and ensure that you > are in compliance with any terms that cover redistributing > nondistributable artifacts. type: "array" items: type: "string" example: ["registry.internal.corp.example.com:3000", "[2001:db8:a0b:12f0::1]:443"] InsecureRegistryCIDRs: description: | List of IP ranges of insecure registries, using the CIDR syntax ([RFC 4632](https://tools.ietf.org/html/4632)). Insecure registries accept un-encrypted (HTTP) and/or untrusted (HTTPS with certificates from unknown CAs) communication. By default, local registries (`127.0.0.0/8`) are configured as insecure. All other registries are secure. Communicating with an insecure registry is not possible if the daemon assumes that registry is secure. This configuration override this behavior, insecure communication with registries whose resolved IP address is within the subnet described by the CIDR syntax. Registries can also be marked insecure by hostname. Those registries are listed under `IndexConfigs` and have their `Secure` field set to `false`. > **Warning**: Using this option can be useful when running a local > registry, but introduces security vulnerabilities. This option > should therefore ONLY be used for testing purposes. For increased > security, users should add their CA to their system's list of trusted > CAs instead of enabling this option. 
type: "array" items: type: "string" example: ["::1/128", "127.0.0.0/8"] IndexConfigs: type: "object" additionalProperties: $ref: "#/definitions/IndexInfo" example: "127.0.0.1:5000": "Name": "127.0.0.1:5000" "Mirrors": [] "Secure": false "Official": false "[2001:db8:a0b:12f0::1]:80": "Name": "[2001:db8:a0b:12f0::1]:80" "Mirrors": [] "Secure": false "Official": false "docker.io": Name: "docker.io" Mirrors: ["https://hub-mirror.corp.example.com:5000/"] Secure: true Official: true "registry.internal.corp.example.com:3000": Name: "registry.internal.corp.example.com:3000" Mirrors: [] Secure: false Official: false Mirrors: description: | List of registry URLs that act as a mirror for the official (`docker.io`) registry. type: "array" items: type: "string" example: - "https://hub-mirror.corp.example.com:5000/" - "https://[2001:db8:a0b:12f0::1]/" IndexInfo: description: IndexInfo contains information about a registry. type: "object" x-nullable: true properties: Name: description: | Name of the registry, such as "docker.io". type: "string" example: "docker.io" Mirrors: description: | List of mirrors, expressed as URIs. type: "array" items: type: "string" example: - "https://hub-mirror.corp.example.com:5000/" - "https://registry-2.docker.io/" - "https://registry-3.docker.io/" Secure: description: | Indicates if the registry is part of the list of insecure registries. If `false`, the registry is insecure. Insecure registries accept un-encrypted (HTTP) and/or untrusted (HTTPS with certificates from unknown CAs) communication. > **Warning**: Insecure registries can be useful when running a local > registry. However, because its use creates security vulnerabilities > it should ONLY be enabled for testing purposes. For increased > security, users should add their CA to their system's list of > trusted CAs instead of enabling this option. type: "boolean" example: true Official: description: | Indicates whether this is an official registry (i.e., Docker Hub / docker.io) type: "boolean" example: true Runtime: description: | Runtime describes an [OCI compliant](https://github.com/opencontainers/runtime-spec) runtime. The runtime is invoked by the daemon via the `containerd` daemon. OCI runtimes act as an interface to the Linux kernel namespaces, cgroups, and SELinux. type: "object" properties: path: description: | Name and, optional, path, of the OCI executable binary. If the path is omitted, the daemon searches the host's `$PATH` for the binary and uses the first result. type: "string" example: "/usr/local/bin/my-oci-runtime" runtimeArgs: description: | List of command-line arguments to pass to the runtime when invoked. type: "array" x-nullable: true items: type: "string" example: ["--debug", "--systemd-cgroup=false"] Commit: description: | Commit holds the Git-commit (SHA1) that a binary was built from, as reported in the version-string of external tools, such as `containerd`, or `runC`. type: "object" properties: ID: description: "Actual commit ID of external tool." type: "string" example: "cfb82a876ecc11b5ca0977d1733adbe58599088a" Expected: description: | Commit ID of external tool expected by dockerd as set at build time. type: "string" example: "2d41c047c83e09a6d61d464906feb2a2f3c52aa4" SwarmInfo: description: | Represents generic information about swarm. type: "object" properties: NodeID: description: "Unique identifier of for this node in the swarm." 
type: "string" default: "" example: "k67qz4598weg5unwwffg6z1m1" NodeAddr: description: | IP address at which this node can be reached by other nodes in the swarm. type: "string" default: "" example: "10.0.0.46" LocalNodeState: $ref: "#/definitions/LocalNodeState" ControlAvailable: type: "boolean" default: false example: true Error: type: "string" default: "" RemoteManagers: description: | List of ID's and addresses of other managers in the swarm. type: "array" default: null x-nullable: true items: $ref: "#/definitions/PeerNode" example: - NodeID: "71izy0goik036k48jg985xnds" Addr: "10.0.0.158:2377" - NodeID: "79y6h1o4gv8n120drcprv5nmc" Addr: "10.0.0.159:2377" - NodeID: "k67qz4598weg5unwwffg6z1m1" Addr: "10.0.0.46:2377" Nodes: description: "Total number of nodes in the swarm." type: "integer" x-nullable: true example: 4 Managers: description: "Total number of managers in the swarm." type: "integer" x-nullable: true example: 3 Cluster: $ref: "#/definitions/ClusterInfo" LocalNodeState: description: "Current local status of this node." type: "string" default: "" enum: - "" - "inactive" - "pending" - "active" - "error" - "locked" example: "active" PeerNode: description: "Represents a peer-node in the swarm" properties: NodeID: description: "Unique identifier of for this node in the swarm." type: "string" Addr: description: | IP address and ports at which this node can be reached. type: "string" NetworkAttachmentConfig: description: | Specifies how a service should be attached to a particular network. type: "object" properties: Target: description: | The target network for attachment. Must be a network name or ID. type: "string" Aliases: description: | Discoverable alternate names for the service on this network. type: "array" items: type: "string" DriverOpts: description: | Driver attachment options for the network target. type: "object" additionalProperties: type: "string" paths: /containers/json: get: summary: "List containers" description: | Returns a list of containers. For details on the format, see the [inspect endpoint](#operation/ContainerInspect). Note that it uses a different, smaller representation of a container than inspecting a single container. For example, the list of linked containers is not propagated . operationId: "ContainerList" produces: - "application/json" parameters: - name: "all" in: "query" description: | Return all containers. By default, only running containers are shown. type: "boolean" default: false - name: "limit" in: "query" description: | Return this number of most recently created containers, including non-running ones. type: "integer" - name: "size" in: "query" description: | Return the size of container as fields `SizeRw` and `SizeRootFs`. type: "boolean" default: false - name: "filters" in: "query" description: | Filters to process on the container list, encoded as JSON (a `map[string][]string`). For example, `{"status": ["paused"]}` will only return paused containers. 
Available filters: - `ancestor`=(`<image-name>[:<tag>]`, `<image id>`, or `<image@digest>`) - `before`=(`<container id>` or `<container name>`) - `expose`=(`<port>[/<proto>]`|`<startport-endport>/[<proto>]`) - `exited=<int>` containers with exit code of `<int>` - `health`=(`starting`|`healthy`|`unhealthy`|`none`) - `id=<ID>` a container's ID - `isolation=`(`default`|`process`|`hyperv`) (Windows daemon only) - `is-task=`(`true`|`false`) - `label=key` or `label="key=value"` of a container label - `name=<name>` a container's name - `network`=(`<network id>` or `<network name>`) - `publish`=(`<port>[/<proto>]`|`<startport-endport>/[<proto>]`) - `since`=(`<container id>` or `<container name>`) - `status=`(`created`|`restarting`|`running`|`removing`|`paused`|`exited`|`dead`) - `volume`=(`<volume name>` or `<mount point destination>`) type: "string" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/ContainerSummary" examples: application/json: - Id: "8dfafdbc3a40" Names: - "/boring_feynman" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 1" Created: 1367854155 State: "Exited" Status: "Exit 0" Ports: - PrivatePort: 2222 PublicPort: 3333 Type: "tcp" Labels: com.example.vendor: "Acme" com.example.license: "GPL" com.example.version: "1.0" SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "2cdc4edb1ded3631c81f57966563e5c8525b81121bb3706a9a9a3ae102711f3f" Gateway: "172.17.0.1" IPAddress: "172.17.0.2" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:02" Mounts: - Name: "fac362...80535" Source: "/data" Destination: "/data" Driver: "local" Mode: "ro,Z" RW: false Propagation: "" - Id: "9cd87474be90" Names: - "/coolName" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 222222" Created: 1367854155 State: "Exited" Status: "Exit 0" Ports: [] Labels: {} SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "88eaed7b37b38c2a3f0c4bc796494fdf51b270c2d22656412a2ca5d559a64d7a" Gateway: "172.17.0.1" IPAddress: "172.17.0.8" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:08" Mounts: [] - Id: "3176a2479c92" Names: - "/sleepy_dog" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 3333333333333333" Created: 1367854154 State: "Exited" Status: "Exit 0" Ports: [] Labels: {} SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "8b27c041c30326d59cd6e6f510d4f8d1d570a228466f956edf7815508f78e30d" Gateway: "172.17.0.1" IPAddress: "172.17.0.6" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:06" Mounts: [] - Id: "4cb07b47f9fb" Names: - "/running_cat" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 444444444444444444444444444444444" Created: 1367854152 State: "Exited" Status: "Exit 0" Ports: [] Labels: {} SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" 
NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "d91c7b2f0644403d7ef3095985ea0e2370325cd2332ff3a3225c4247328e66e9" Gateway: "172.17.0.1" IPAddress: "172.17.0.5" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:05" Mounts: [] 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Container"] /containers/create: post: summary: "Create a container" operationId: "ContainerCreate" consumes: - "application/json" - "application/octet-stream" produces: - "application/json" parameters: - name: "name" in: "query" description: | Assign the specified name to the container. Must match `/?[a-zA-Z0-9][a-zA-Z0-9_.-]+`. type: "string" pattern: "^/?[a-zA-Z0-9][a-zA-Z0-9_.-]+$" - name: "body" in: "body" description: "Container to create" schema: allOf: - $ref: "#/definitions/ContainerConfig" - type: "object" properties: HostConfig: $ref: "#/definitions/HostConfig" NetworkingConfig: $ref: "#/definitions/NetworkingConfig" example: Hostname: "" Domainname: "" User: "" AttachStdin: false AttachStdout: true AttachStderr: true Tty: false OpenStdin: false StdinOnce: false Env: - "FOO=bar" - "BAZ=quux" Cmd: - "date" Entrypoint: "" Image: "ubuntu" Labels: com.example.vendor: "Acme" com.example.license: "GPL" com.example.version: "1.0" Volumes: /volumes/data: {} WorkingDir: "" NetworkDisabled: false MacAddress: "12:34:56:78:9a:bc" ExposedPorts: 22/tcp: {} StopSignal: "SIGTERM" StopTimeout: 10 HostConfig: Binds: - "/tmp:/tmp" Links: - "redis3:redis" Memory: 0 MemorySwap: 0 MemoryReservation: 0 KernelMemory: 0 NanoCpus: 500000 CpuPercent: 80 CpuShares: 512 CpuPeriod: 100000 CpuRealtimePeriod: 1000000 CpuRealtimeRuntime: 10000 CpuQuota: 50000 CpusetCpus: "0,1" CpusetMems: "0,1" MaximumIOps: 0 MaximumIOBps: 0 BlkioWeight: 300 BlkioWeightDevice: - {} BlkioDeviceReadBps: - {} BlkioDeviceReadIOps: - {} BlkioDeviceWriteBps: - {} BlkioDeviceWriteIOps: - {} DeviceRequests: - Driver: "nvidia" Count: -1 DeviceIDs": ["0", "1", "GPU-fef8089b-4820-abfc-e83e-94318197576e"] Capabilities: [["gpu", "nvidia", "compute"]] Options: property1: "string" property2: "string" MemorySwappiness: 60 OomKillDisable: false OomScoreAdj: 500 PidMode: "" PidsLimit: 0 PortBindings: 22/tcp: - HostPort: "11022" PublishAllPorts: false Privileged: false ReadonlyRootfs: false Dns: - "8.8.8.8" DnsOptions: - "" DnsSearch: - "" VolumesFrom: - "parent" - "other:ro" CapAdd: - "NET_ADMIN" CapDrop: - "MKNOD" GroupAdd: - "newgroup" RestartPolicy: Name: "" MaximumRetryCount: 0 AutoRemove: true NetworkMode: "bridge" Devices: [] Ulimits: - {} LogConfig: Type: "json-file" Config: {} SecurityOpt: [] StorageOpt: {} CgroupParent: "" VolumeDriver: "" ShmSize: 67108864 NetworkingConfig: EndpointsConfig: isolated_nw: IPAMConfig: IPv4Address: "172.20.30.33" IPv6Address: "2001:db8:abcd::3033" LinkLocalIPs: - "169.254.34.68" - "fe80::3468" Links: - "container_1" - "container_2" Aliases: - "server_x" - "server_y" required: true responses: 201: description: "Container created successfully" schema: type: "object" title: "ContainerCreateResponse" description: "OK response to ContainerCreate operation" required: [Id, Warnings] properties: Id: description: "The ID of the created container" type: "string" x-nullable: false Warnings: description: "Warnings encountered when creating the container" type: "array" x-nullable: false items: type: "string" 
examples: application/json: Id: "e90e34656806" Warnings: [] 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such image" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such image: c2ada9df5af8" 409: description: "conflict" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Container"] /containers/{id}/json: get: summary: "Inspect a container" description: "Return low-level information about a container." operationId: "ContainerInspect" produces: - "application/json" responses: 200: description: "no error" schema: type: "object" title: "ContainerInspectResponse" properties: Id: description: "The ID of the container" type: "string" Created: description: "The time the container was created" type: "string" Path: description: "The path to the command being run" type: "string" Args: description: "The arguments to the command being run" type: "array" items: type: "string" State: x-nullable: true $ref: "#/definitions/ContainerState" Image: description: "The container's image ID" type: "string" ResolvConfPath: type: "string" HostnamePath: type: "string" HostsPath: type: "string" LogPath: type: "string" Name: type: "string" RestartCount: type: "integer" Driver: type: "string" Platform: type: "string" MountLabel: type: "string" ProcessLabel: type: "string" AppArmorProfile: type: "string" ExecIDs: description: "IDs of exec instances that are running in the container." type: "array" items: type: "string" x-nullable: true HostConfig: $ref: "#/definitions/HostConfig" GraphDriver: $ref: "#/definitions/GraphDriverData" SizeRw: description: | The size of files that have been created or changed by this container. type: "integer" format: "int64" SizeRootFs: description: "The total size of all the files in this container." 
type: "integer" format: "int64" Mounts: type: "array" items: $ref: "#/definitions/MountPoint" Config: $ref: "#/definitions/ContainerConfig" NetworkSettings: $ref: "#/definitions/NetworkSettings" examples: application/json: AppArmorProfile: "" Args: - "-c" - "exit 9" Config: AttachStderr: true AttachStdin: false AttachStdout: true Cmd: - "/bin/sh" - "-c" - "exit 9" Domainname: "" Env: - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" Healthcheck: Test: ["CMD-SHELL", "exit 0"] Hostname: "ba033ac44011" Image: "ubuntu" Labels: com.example.vendor: "Acme" com.example.license: "GPL" com.example.version: "1.0" MacAddress: "" NetworkDisabled: false OpenStdin: false StdinOnce: false Tty: false User: "" Volumes: /volumes/data: {} WorkingDir: "" StopSignal: "SIGTERM" StopTimeout: 10 Created: "2015-01-06T15:47:31.485331387Z" Driver: "devicemapper" ExecIDs: - "b35395de42bc8abd327f9dd65d913b9ba28c74d2f0734eeeae84fa1c616a0fca" - "3fc1232e5cd20c8de182ed81178503dc6437f4e7ef12b52cc5e8de020652f1c4" HostConfig: MaximumIOps: 0 MaximumIOBps: 0 BlkioWeight: 0 BlkioWeightDevice: - {} BlkioDeviceReadBps: - {} BlkioDeviceWriteBps: - {} BlkioDeviceReadIOps: - {} BlkioDeviceWriteIOps: - {} ContainerIDFile: "" CpusetCpus: "" CpusetMems: "" CpuPercent: 80 CpuShares: 0 CpuPeriod: 100000 CpuRealtimePeriod: 1000000 CpuRealtimeRuntime: 10000 Devices: [] DeviceRequests: - Driver: "nvidia" Count: -1 DeviceIDs": ["0", "1", "GPU-fef8089b-4820-abfc-e83e-94318197576e"] Capabilities: [["gpu", "nvidia", "compute"]] Options: property1: "string" property2: "string" IpcMode: "" LxcConf: [] Memory: 0 MemorySwap: 0 MemoryReservation: 0 KernelMemory: 0 OomKillDisable: false OomScoreAdj: 500 NetworkMode: "bridge" PidMode: "" PortBindings: {} Privileged: false ReadonlyRootfs: false PublishAllPorts: false RestartPolicy: MaximumRetryCount: 2 Name: "on-failure" LogConfig: Type: "json-file" Sysctls: net.ipv4.ip_forward: "1" Ulimits: - {} VolumeDriver: "" ShmSize: 67108864 HostnamePath: "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/hostname" HostsPath: "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/hosts" LogPath: "/var/lib/docker/containers/1eb5fabf5a03807136561b3c00adcd2992b535d624d5e18b6cdc6a6844d9767b/1eb5fabf5a03807136561b3c00adcd2992b535d624d5e18b6cdc6a6844d9767b-json.log" Id: "ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39" Image: "04c5d3b7b0656168630d3ba35d8889bd0e9caafcaeb3004d2bfbc47e7c5d35d2" MountLabel: "" Name: "/boring_euclid" NetworkSettings: Bridge: "" SandboxID: "" HairpinMode: false LinkLocalIPv6Address: "" LinkLocalIPv6PrefixLen: 0 SandboxKey: "" EndpointID: "" Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 IPAddress: "" IPPrefixLen: 0 IPv6Gateway: "" MacAddress: "" Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "7587b82f0dada3656fda26588aee72630c6fab1536d36e394b2bfbcf898c971d" Gateway: "172.17.0.1" IPAddress: "172.17.0.2" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:12:00:02" Path: "/bin/sh" ProcessLabel: "" ResolvConfPath: "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/resolv.conf" RestartCount: 1 State: Error: "" ExitCode: 9 FinishedAt: "2015-01-06T15:47:32.080254511Z" Health: Status: "healthy" FailingStreak: 0 Log: - Start: "2019-12-22T10:59:05.6385933Z" End: "2019-12-22T10:59:05.8078452Z" ExitCode: 0 Output: "" OOMKilled: 
false Dead: false Paused: false Pid: 0 Restarting: false Running: true StartedAt: "2015-01-06T15:47:32.072697474Z" Status: "running" Mounts: - Name: "fac362...80535" Source: "/data" Destination: "/data" Driver: "local" Mode: "ro,Z" RW: false Propagation: "" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "size" in: "query" type: "boolean" default: false description: "Return the size of container as fields `SizeRw` and `SizeRootFs`" tags: ["Container"] /containers/{id}/top: get: summary: "List processes running inside a container" description: | On Unix systems, this is done by running the `ps` command. This endpoint is not supported on Windows. operationId: "ContainerTop" responses: 200: description: "no error" schema: type: "object" title: "ContainerTopResponse" description: "OK response to ContainerTop operation" properties: Titles: description: "The ps column titles" type: "array" items: type: "string" Processes: description: | Each process running in the container, where each is process is an array of values corresponding to the titles. type: "array" items: type: "array" items: type: "string" examples: application/json: Titles: - "UID" - "PID" - "PPID" - "C" - "STIME" - "TTY" - "TIME" - "CMD" Processes: - - "root" - "13642" - "882" - "0" - "17:03" - "pts/0" - "00:00:00" - "/bin/bash" - - "root" - "13735" - "13642" - "0" - "17:06" - "pts/0" - "00:00:00" - "sleep 10" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "ps_args" in: "query" description: "The arguments to pass to `ps`. For example, `aux`" type: "string" default: "-ef" tags: ["Container"] /containers/{id}/logs: get: summary: "Get container logs" description: | Get `stdout` and `stderr` logs from a container. Note: This endpoint works only for containers with the `json-file` or `journald` logging driver. operationId: "ContainerLogs" responses: 200: description: | logs returned as a stream in response body. For the stream format, [see the documentation for the attach endpoint](#operation/ContainerAttach). Note that unlike the attach endpoint, the logs endpoint does not upgrade the connection and does not set Content-Type. schema: type: "string" format: "binary" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "follow" in: "query" description: "Keep connection after returning logs." 
type: "boolean" default: false - name: "stdout" in: "query" description: "Return logs from `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Return logs from `stderr`" type: "boolean" default: false - name: "since" in: "query" description: "Only return logs since this time, as a UNIX timestamp" type: "integer" default: 0 - name: "until" in: "query" description: "Only return logs before this time, as a UNIX timestamp" type: "integer" default: 0 - name: "timestamps" in: "query" description: "Add timestamps to every log line" type: "boolean" default: false - name: "tail" in: "query" description: | Only return this number of log lines from the end of the logs. Specify as an integer or `all` to output all log lines. type: "string" default: "all" tags: ["Container"] /containers/{id}/changes: get: summary: "Get changes on a container’s filesystem" description: | Returns which files in a container's filesystem have been added, deleted, or modified. The `Kind` of modification can be one of: - `0`: Modified - `1`: Added - `2`: Deleted operationId: "ContainerChanges" produces: ["application/json"] responses: 200: description: "The list of changes" schema: type: "array" items: type: "object" x-go-name: "ContainerChangeResponseItem" title: "ContainerChangeResponseItem" description: "change item in response to ContainerChanges operation" required: [Path, Kind] properties: Path: description: "Path to file that has changed" type: "string" x-nullable: false Kind: description: "Kind of change" type: "integer" format: "uint8" enum: [0, 1, 2] x-nullable: false examples: application/json: - Path: "/dev" Kind: 0 - Path: "/dev/kmsg" Kind: 1 - Path: "/test" Kind: 1 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/export: get: summary: "Export a container" description: "Export the contents of a container as a tarball." operationId: "ContainerExport" produces: - "application/octet-stream" responses: 200: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/stats: get: summary: "Get container stats based on resource usage" description: | This endpoint returns a live stream of a container’s resource usage statistics. The `precpu_stats` is the CPU statistic of the *previous* read, and is used to calculate the CPU usage percentage. It is not an exact copy of the `cpu_stats` field. If either `precpu_stats.online_cpus` or `cpu_stats.online_cpus` is nil then for compatibility with older daemons the length of the corresponding `cpu_usage.percpu_usage` array should be used. On a cgroup v2 host, the following fields are not set * `blkio_stats`: all fields other than `io_service_bytes_recursive` * `cpu_stats`: `cpu_usage.percpu_usage` * `memory_stats`: `max_usage` and `failcnt` Also, `memory_stats.stats` fields are incompatible with cgroup v1. 
To calculate the values shown by the `stats` command of the docker cli tool the following formulas can be used: * used_memory = `memory_stats.usage - memory_stats.stats.cache` * available_memory = `memory_stats.limit` * Memory usage % = `(used_memory / available_memory) * 100.0` * cpu_delta = `cpu_stats.cpu_usage.total_usage - precpu_stats.cpu_usage.total_usage` * system_cpu_delta = `cpu_stats.system_cpu_usage - precpu_stats.system_cpu_usage` * number_cpus = `lenght(cpu_stats.cpu_usage.percpu_usage)` or `cpu_stats.online_cpus` * CPU usage % = `(cpu_delta / system_cpu_delta) * number_cpus * 100.0` operationId: "ContainerStats" produces: ["application/json"] responses: 200: description: "no error" schema: type: "object" examples: application/json: read: "2015-01-08T22:57:31.547920715Z" pids_stats: current: 3 networks: eth0: rx_bytes: 5338 rx_dropped: 0 rx_errors: 0 rx_packets: 36 tx_bytes: 648 tx_dropped: 0 tx_errors: 0 tx_packets: 8 eth5: rx_bytes: 4641 rx_dropped: 0 rx_errors: 0 rx_packets: 26 tx_bytes: 690 tx_dropped: 0 tx_errors: 0 tx_packets: 9 memory_stats: stats: total_pgmajfault: 0 cache: 0 mapped_file: 0 total_inactive_file: 0 pgpgout: 414 rss: 6537216 total_mapped_file: 0 writeback: 0 unevictable: 0 pgpgin: 477 total_unevictable: 0 pgmajfault: 0 total_rss: 6537216 total_rss_huge: 6291456 total_writeback: 0 total_inactive_anon: 0 rss_huge: 6291456 hierarchical_memory_limit: 67108864 total_pgfault: 964 total_active_file: 0 active_anon: 6537216 total_active_anon: 6537216 total_pgpgout: 414 total_cache: 0 inactive_anon: 0 active_file: 0 pgfault: 964 inactive_file: 0 total_pgpgin: 477 max_usage: 6651904 usage: 6537216 failcnt: 0 limit: 67108864 blkio_stats: {} cpu_stats: cpu_usage: percpu_usage: - 8646879 - 24472255 - 36438778 - 30657443 usage_in_usermode: 50000000 total_usage: 100215355 usage_in_kernelmode: 30000000 system_cpu_usage: 739306590000000 online_cpus: 4 throttling_data: periods: 0 throttled_periods: 0 throttled_time: 0 precpu_stats: cpu_usage: percpu_usage: - 8646879 - 24350896 - 36438778 - 30657443 usage_in_usermode: 50000000 total_usage: 100093996 usage_in_kernelmode: 30000000 system_cpu_usage: 9492140000000 online_cpus: 4 throttling_data: periods: 0 throttled_periods: 0 throttled_time: 0 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "stream" in: "query" description: | Stream the output. If false, the stats will be output once and then it will disconnect. type: "boolean" default: true - name: "one-shot" in: "query" description: | Only get a single stat instead of waiting for 2 cycles. Must be used with `stream=false`. type: "boolean" default: false tags: ["Container"] /containers/{id}/resize: post: summary: "Resize a container TTY" description: "Resize the TTY for a container." 
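Referring back to the `/containers/{id}/stats` formulas above, the following is a minimal Go sketch of that arithmetic. The function names and the sample values in `main` are illustrative only (the sample numbers are taken from the example response above); this is not an official client helper.

```go
package main

import "fmt"

// cpuPercent implements:
//   cpu_delta        = cpu_stats.cpu_usage.total_usage - precpu_stats.cpu_usage.total_usage
//   system_cpu_delta = cpu_stats.system_cpu_usage      - precpu_stats.system_cpu_usage
//   CPU usage %      = (cpu_delta / system_cpu_delta) * number_cpus * 100.0
func cpuPercent(totalUsage, preTotalUsage, systemUsage, preSystemUsage float64, numberCPUs int) float64 {
	cpuDelta := totalUsage - preTotalUsage
	systemDelta := systemUsage - preSystemUsage
	if systemDelta <= 0 || cpuDelta < 0 {
		return 0
	}
	return (cpuDelta / systemDelta) * float64(numberCPUs) * 100.0
}

// memoryPercent implements:
//   used_memory      = memory_stats.usage - memory_stats.stats.cache
//   available_memory = memory_stats.limit
//   Memory usage %   = (used_memory / available_memory) * 100.0
func memoryPercent(usage, cache, limit float64) float64 {
	if limit == 0 {
		return 0
	}
	return (usage - cache) / limit * 100.0
}

func main() {
	// Values taken from the example stats response above.
	fmt.Printf("CPU usage: %.6f%%\n", cpuPercent(100215355, 100093996, 739306590000000, 9492140000000, 4))
	fmt.Printf("Memory usage: %.2f%%\n", memoryPercent(6537216, 0, 67108864))
}
```

On cgroup v2 hosts, where `percpu_usage` is not set, `cpu_stats.online_cpus` supplies the CPU count, as noted in the endpoint description.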
operationId: "ContainerResize" consumes: - "application/octet-stream" produces: - "text/plain" responses: 200: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "cannot resize container" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "h" in: "query" description: "Height of the TTY session in characters" type: "integer" - name: "w" in: "query" description: "Width of the TTY session in characters" type: "integer" tags: ["Container"] /containers/{id}/start: post: summary: "Start a container" operationId: "ContainerStart" responses: 204: description: "no error" 304: description: "container already started" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "detachKeys" in: "query" description: | Override the key sequence for detaching a container. Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,` or `_`. type: "string" tags: ["Container"] /containers/{id}/stop: post: summary: "Stop a container" operationId: "ContainerStop" responses: 204: description: "no error" 304: description: "container already stopped" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "t" in: "query" description: "Number of seconds to wait before killing the container" type: "integer" tags: ["Container"] /containers/{id}/restart: post: summary: "Restart a container" operationId: "ContainerRestart" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "t" in: "query" description: "Number of seconds to wait before killing the container" type: "integer" tags: ["Container"] /containers/{id}/kill: post: summary: "Kill a container" description: | Send a POSIX signal to a container, defaulting to killing to the container. 
operationId: "ContainerKill" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "container is not running" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "Container d37cde0fe4ad63c3a7252023b2f9800282894247d145cb5933ddf6e52cc03a28 is not running" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "signal" in: "query" description: "Signal to send to the container as an integer or string (e.g. `SIGINT`)" type: "string" default: "SIGKILL" tags: ["Container"] /containers/{id}/update: post: summary: "Update a container" description: | Change various configuration options of a container without having to recreate it. operationId: "ContainerUpdate" consumes: ["application/json"] produces: ["application/json"] responses: 200: description: "The container has been updated." schema: type: "object" title: "ContainerUpdateResponse" description: "OK response to ContainerUpdate operation" properties: Warnings: type: "array" items: type: "string" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "update" in: "body" required: true schema: allOf: - $ref: "#/definitions/Resources" - type: "object" properties: RestartPolicy: $ref: "#/definitions/RestartPolicy" example: BlkioWeight: 300 CpuShares: 512 CpuPeriod: 100000 CpuQuota: 50000 CpuRealtimePeriod: 1000000 CpuRealtimeRuntime: 10000 CpusetCpus: "0,1" CpusetMems: "0" Memory: 314572800 MemorySwap: 514288000 MemoryReservation: 209715200 KernelMemory: 52428800 RestartPolicy: MaximumRetryCount: 4 Name: "on-failure" tags: ["Container"] /containers/{id}/rename: post: summary: "Rename a container" operationId: "ContainerRename" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "name already in use" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "name" in: "query" required: true description: "New name for the container" type: "string" tags: ["Container"] /containers/{id}/pause: post: summary: "Pause a container" description: | Use the freezer cgroup to suspend all processes in a container. Traditionally, when suspending a process the `SIGSTOP` signal is used, which is observable by the process being suspended. With the freezer cgroup the process is unaware, and unable to capture, that it is being suspended, and subsequently resumed. 
operationId: "ContainerPause" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/unpause: post: summary: "Unpause a container" description: "Resume a container which has been paused." operationId: "ContainerUnpause" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/attach: post: summary: "Attach to a container" description: | Attach to a container to read its output or send it input. You can attach to the same container multiple times and you can reattach to containers that have been detached. Either the `stream` or `logs` parameter must be `true` for this endpoint to do anything. See the [documentation for the `docker attach` command](https://docs.docker.com/engine/reference/commandline/attach/) for more details. ### Hijacking This endpoint hijacks the HTTP connection to transport `stdin`, `stdout`, and `stderr` on the same socket. This is the response from the daemon for an attach request: ``` HTTP/1.1 200 OK Content-Type: application/vnd.docker.raw-stream [STREAM] ``` After the headers and two new lines, the TCP connection can now be used for raw, bidirectional communication between the client and server. To hint potential proxies about connection hijacking, the Docker client can also optionally send connection upgrade headers. For example, the client sends this request to upgrade the connection: ``` POST /containers/16253994b7c4/attach?stream=1&stdout=1 HTTP/1.1 Upgrade: tcp Connection: Upgrade ``` The Docker daemon will respond with a `101 UPGRADED` response, and will similarly follow with the raw stream: ``` HTTP/1.1 101 UPGRADED Content-Type: application/vnd.docker.raw-stream Connection: Upgrade Upgrade: tcp [STREAM] ``` ### Stream format When the TTY setting is disabled in [`POST /containers/create`](#operation/ContainerCreate), the stream over the hijacked connected is multiplexed to separate out `stdout` and `stderr`. The stream consists of a series of frames, each containing a header and a payload. The header contains the information which the stream writes (`stdout` or `stderr`). It also contains the size of the associated frame encoded in the last four bytes (`uint32`). It is encoded on the first eight bytes like this: ```go header := [8]byte{STREAM_TYPE, 0, 0, 0, SIZE1, SIZE2, SIZE3, SIZE4} ``` `STREAM_TYPE` can be: - 0: `stdin` (is written on `stdout`) - 1: `stdout` - 2: `stderr` `SIZE1, SIZE2, SIZE3, SIZE4` are the four bytes of the `uint32` size encoded as big endian. Following the header is the payload, which is the specified number of bytes of `STREAM_TYPE`. The simplest way to implement this protocol is the following: 1. Read 8 bytes. 2. Choose `stdout` or `stderr` depending on the first byte. 3. Extract the frame size from the last four bytes. 4. Read the extracted size and output it on the correct output. 5. Goto 1. 
### Stream format when using a TTY When the TTY setting is enabled in [`POST /containers/create`](#operation/ContainerCreate), the stream is not multiplexed. The data exchanged over the hijacked connection is simply the raw data from the process PTY and client's `stdin`. operationId: "ContainerAttach" produces: - "application/vnd.docker.raw-stream" responses: 101: description: "no error, hints proxy about hijacking" 200: description: "no error, no upgrade header found" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "detachKeys" in: "query" description: | Override the key sequence for detaching a container.Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,` or `_`. type: "string" - name: "logs" in: "query" description: | Replay previous logs from the container. This is useful for attaching to a container that has started and you want to output everything since the container started. If `stream` is also enabled, once all the previous output has been returned, it will seamlessly transition into streaming current output. type: "boolean" default: false - name: "stream" in: "query" description: | Stream attached streams from the time the request was made onwards. type: "boolean" default: false - name: "stdin" in: "query" description: "Attach to `stdin`" type: "boolean" default: false - name: "stdout" in: "query" description: "Attach to `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Attach to `stderr`" type: "boolean" default: false tags: ["Container"] /containers/{id}/attach/ws: get: summary: "Attach to a container via a websocket" operationId: "ContainerAttachWebsocket" responses: 101: description: "no error, hints proxy about hijacking" 200: description: "no error, no upgrade header found" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "detachKeys" in: "query" description: | Override the key sequence for detaching a container.Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,`, or `_`. type: "string" - name: "logs" in: "query" description: "Return logs" type: "boolean" default: false - name: "stream" in: "query" description: "Return stream" type: "boolean" default: false - name: "stdin" in: "query" description: "Attach to `stdin`" type: "boolean" default: false - name: "stdout" in: "query" description: "Attach to `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Attach to `stderr`" type: "boolean" default: false tags: ["Container"] /containers/{id}/wait: post: summary: "Wait for a container" description: "Block until a container stops, then returns the exit code." 
operationId: "ContainerWait" produces: ["application/json"] responses: 200: description: "The container has exit." schema: type: "object" title: "ContainerWaitResponse" description: "OK response to ContainerWait operation" required: [StatusCode] properties: StatusCode: description: "Exit code of the container" type: "integer" x-nullable: false Error: description: "container waiting error, if any" type: "object" properties: Message: description: "Details of an error" type: "string" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "condition" in: "query" description: | Wait until a container state reaches the given condition, either 'not-running' (default), 'next-exit', or 'removed'. type: "string" default: "not-running" tags: ["Container"] /containers/{id}: delete: summary: "Remove a container" operationId: "ContainerDelete" responses: 204: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "conflict" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: | You cannot remove a running container: c2ada9df5af8. Stop the container before attempting removal or force remove 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "v" in: "query" description: "Remove anonymous volumes associated with the container." type: "boolean" default: false - name: "force" in: "query" description: "If the container is running, kill it before removing it." type: "boolean" default: false - name: "link" in: "query" description: "Remove the specified link associated with the container." type: "boolean" default: false tags: ["Container"] /containers/{id}/archive: head: summary: "Get information about files in a container" description: | A response header `X-Docker-Container-Path-Stat` is returned, containing a base64 - encoded JSON object with some filesystem header information about the path. operationId: "ContainerArchiveInfo" responses: 200: description: "no error" headers: X-Docker-Container-Path-Stat: type: "string" description: | A base64 - encoded JSON object with some filesystem header information about the path 400: description: "Bad parameter" schema: allOf: - $ref: "#/definitions/ErrorResponse" - type: "object" properties: message: description: | The error message. Either "must specify path parameter" (path cannot be empty) or "not a directory" (path was asserted to be a directory but exists as a file). type: "string" x-nullable: false 404: description: "Container or path does not exist" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "path" in: "query" required: true description: "Resource in the container’s filesystem to archive." 
type: "string" tags: ["Container"] get: summary: "Get an archive of a filesystem resource in a container" description: "Get a tar archive of a resource in the filesystem of container id." operationId: "ContainerArchive" produces: ["application/x-tar"] responses: 200: description: "no error" 400: description: "Bad parameter" schema: allOf: - $ref: "#/definitions/ErrorResponse" - type: "object" properties: message: description: | The error message. Either "must specify path parameter" (path cannot be empty) or "not a directory" (path was asserted to be a directory but exists as a file). type: "string" x-nullable: false 404: description: "Container or path does not exist" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "path" in: "query" required: true description: "Resource in the container’s filesystem to archive." type: "string" tags: ["Container"] put: summary: "Extract an archive of files or folders to a directory in a container" description: "Upload a tar archive to be extracted to a path in the filesystem of container id." operationId: "PutContainerArchive" consumes: ["application/x-tar", "application/octet-stream"] responses: 200: description: "The content was extracted successfully" 400: description: "Bad parameter" schema: $ref: "#/definitions/ErrorResponse" 403: description: "Permission denied, the volume or container rootfs is marked as read-only." schema: $ref: "#/definitions/ErrorResponse" 404: description: "No such container or path does not exist inside the container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "path" in: "query" required: true description: "Path to a directory in the container to extract the archive’s contents into. " type: "string" - name: "noOverwriteDirNonDir" in: "query" description: | If `1`, `true`, or `True` then it will be an error if unpacking the given content would cause an existing directory to be replaced with a non-directory and vice versa. type: "string" - name: "copyUIDGID" in: "query" description: | If `1`, `true`, then it will copy UID/GID maps to the dest file or dir type: "string" - name: "inputStream" in: "body" required: true description: | The input stream must be a tar archive compressed with one of the following algorithms: `identity` (no compression), `gzip`, `bzip2`, or `xz`. schema: type: "string" format: "binary" tags: ["Container"] /containers/prune: post: summary: "Delete stopped containers" produces: - "application/json" operationId: "ContainerPrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `until=<timestamp>` Prune containers created before this timestamp. The `<timestamp>` can be Unix timestamps, date formatted timestamps, or Go duration strings (e.g. `10m`, `1h30m`) computed relative to the daemon machine’s time. - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune containers with (or without, in case `label!=...` is used) the specified labels. 
type: "string" responses: 200: description: "No error" schema: type: "object" title: "ContainerPruneResponse" properties: ContainersDeleted: description: "Container IDs that were deleted" type: "array" items: type: "string" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Container"] /images/json: get: summary: "List Images" description: "Returns a list of images on the server. Note that it uses a different, smaller representation of an image than inspecting a single image." operationId: "ImageList" produces: - "application/json" responses: 200: description: "Summary image data for the images matching the query" schema: type: "array" items: $ref: "#/definitions/ImageSummary" examples: application/json: - Id: "sha256:e216a057b1cb1efc11f8a268f37ef62083e70b1b38323ba252e25ac88904a7e8" ParentId: "" RepoTags: - "ubuntu:12.04" - "ubuntu:precise" RepoDigests: - "ubuntu@sha256:992069aee4016783df6345315302fa59681aae51a8eeb2f889dea59290f21787" Created: 1474925151 Size: 103579269 VirtualSize: 103579269 SharedSize: 0 Labels: {} Containers: 2 - Id: "sha256:3e314f95dcace0f5e4fd37b10862fe8398e3c60ed36600bc0ca5fda78b087175" ParentId: "" RepoTags: - "ubuntu:12.10" - "ubuntu:quantal" RepoDigests: - "ubuntu@sha256:002fba3e3255af10be97ea26e476692a7ebed0bb074a9ab960b2e7a1526b15d7" - "ubuntu@sha256:68ea0200f0b90df725d99d823905b04cf844f6039ef60c60bf3e019915017bd3" Created: 1403128455 Size: 172064416 VirtualSize: 172064416 SharedSize: 0 Labels: {} Containers: 5 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "all" in: "query" description: "Show all images. Only images from a final layer (no children) are shown by default." type: "boolean" default: false - name: "filters" in: "query" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the images list. Available filters: - `before`=(`<image-name>[:<tag>]`, `<image id>` or `<image@digest>`) - `dangling=true` - `label=key` or `label="key=value"` of an image label - `reference`=(`<image-name>[:<tag>]`) - `since`=(`<image-name>[:<tag>]`, `<image id>` or `<image@digest>`) type: "string" - name: "shared-size" in: "query" description: "Compute and show shared size as a `SharedSize` field on each image." type: "boolean" default: false - name: "digests" in: "query" description: "Show digest information as a `RepoDigests` field on each image." type: "boolean" default: false tags: ["Image"] /build: post: summary: "Build an image" description: | Build an image from a tar archive with a `Dockerfile` in it. The `Dockerfile` specifies how the image is built from the tar archive. It is typically in the archive's root, but can be at a different path or have a different name by specifying the `dockerfile` parameter. [See the `Dockerfile` reference for more information](https://docs.docker.com/engine/reference/builder/). The Docker daemon performs a preliminary validation of the `Dockerfile` before starting the build, and returns an error if the syntax is incorrect. After that, each instruction is run one-by-one until the ID of the new image is output. The build is canceled if the client drops the connection by quitting or being killed. 
operationId: "ImageBuild" consumes: - "application/octet-stream" produces: - "application/json" parameters: - name: "inputStream" in: "body" description: "A tar archive compressed with one of the following algorithms: identity (no compression), gzip, bzip2, xz." schema: type: "string" format: "binary" - name: "dockerfile" in: "query" description: "Path within the build context to the `Dockerfile`. This is ignored if `remote` is specified and points to an external `Dockerfile`." type: "string" default: "Dockerfile" - name: "t" in: "query" description: "A name and optional tag to apply to the image in the `name:tag` format. If you omit the tag the default `latest` value is assumed. You can provide several `t` parameters." type: "string" - name: "extrahosts" in: "query" description: "Extra hosts to add to /etc/hosts" type: "string" - name: "remote" in: "query" description: "A Git repository URI or HTTP/HTTPS context URI. If the URI points to a single text file, the file’s contents are placed into a file called `Dockerfile` and the image is built from that file. If the URI points to a tarball, the file is downloaded by the daemon and the contents therein used as the context for the build. If the URI points to a tarball and the `dockerfile` parameter is also specified, there must be a file with the corresponding path inside the tarball." type: "string" - name: "q" in: "query" description: "Suppress verbose build output." type: "boolean" default: false - name: "nocache" in: "query" description: "Do not use the cache when building the image." type: "boolean" default: false - name: "cachefrom" in: "query" description: "JSON array of images used for build cache resolution." type: "string" - name: "pull" in: "query" description: "Attempt to pull the image even if an older image exists locally." type: "string" - name: "rm" in: "query" description: "Remove intermediate containers after a successful build." type: "boolean" default: true - name: "forcerm" in: "query" description: "Always remove intermediate containers, even upon failure." type: "boolean" default: false - name: "memory" in: "query" description: "Set memory limit for build." type: "integer" - name: "memswap" in: "query" description: "Total memory (memory + swap). Set as `-1` to disable swap." type: "integer" - name: "cpushares" in: "query" description: "CPU shares (relative weight)." type: "integer" - name: "cpusetcpus" in: "query" description: "CPUs in which to allow execution (e.g., `0-3`, `0,1`)." type: "string" - name: "cpuperiod" in: "query" description: "The length of a CPU period in microseconds." type: "integer" - name: "cpuquota" in: "query" description: "Microseconds of CPU time that the container can get in a CPU period." type: "integer" - name: "buildargs" in: "query" description: > JSON map of string pairs for build-time variables. Users pass these values at build-time. Docker uses the buildargs as the environment context for commands run via the `Dockerfile` RUN instruction, or for variable expansion in other `Dockerfile` instructions. This is not meant for passing secret values. For example, the build arg `FOO=bar` would become `{"FOO":"bar"}` in JSON. This would result in the query parameter `buildargs={"FOO":"bar"}`. Note that `{"FOO":"bar"}` should be URI component encoded. [Read more about the buildargs instruction.](https://docs.docker.com/engine/reference/builder/#arg) type: "string" - name: "shmsize" in: "query" description: "Size of `/dev/shm` in bytes. The size must be greater than 0. 
If omitted the system uses 64MB." type: "integer" - name: "squash" in: "query" description: "Squash the resulting images layers into a single layer. *(Experimental release only.)*" type: "boolean" - name: "labels" in: "query" description: "Arbitrary key/value labels to set on the image, as a JSON map of string pairs." type: "string" - name: "networkmode" in: "query" description: | Sets the networking mode for the run commands during build. Supported standard values are: `bridge`, `host`, `none`, and `container:<name|id>`. Any other value is taken as a custom network's name or ID to which this container should connect to. type: "string" - name: "Content-type" in: "header" type: "string" enum: - "application/x-tar" default: "application/x-tar" - name: "X-Registry-Config" in: "header" description: | This is a base64-encoded JSON object with auth configurations for multiple registries that a build may refer to. The key is a registry URL, and the value is an auth configuration object, [as described in the authentication section](#section/Authentication). For example: ``` { "docker.example.com": { "username": "janedoe", "password": "hunter2" }, "https://index.docker.io/v1/": { "username": "mobydock", "password": "conta1n3rize14" } } ``` Only the registry domain name (and port if not the default 443) are required. However, for legacy reasons, the Docker Hub registry must be specified with both a `https://` prefix and a `/v1/` suffix even though Docker will prefer to use the v2 registry API. type: "string" - name: "platform" in: "query" description: "Platform in the format os[/arch[/variant]]" type: "string" default: "" - name: "target" in: "query" description: "Target build stage" type: "string" default: "" - name: "outputs" in: "query" description: "BuildKit output configuration" type: "string" default: "" responses: 200: description: "no error" 400: description: "Bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Image"] /build/prune: post: summary: "Delete builder cache" produces: - "application/json" operationId: "BuildPrune" parameters: - name: "keep-storage" in: "query" description: "Amount of disk space in bytes to keep for cache" type: "integer" format: "int64" - name: "all" in: "query" type: "boolean" description: "Remove all types of build cache" - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the list of build cache objects. Available filters: - `until=<duration>`: duration relative to daemon's time, during which build cache was not used, in Go's duration format (e.g., '24h') - `id=<id>` - `parent=<id>` - `type=<string>` - `description=<string>` - `inuse` - `shared` - `private` responses: 200: description: "No error" schema: type: "object" title: "BuildPruneResponse" properties: CachesDeleted: type: "array" items: description: "ID of build cache object" type: "string" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Image"] /images/create: post: summary: "Create an image" description: "Create an image by either pulling it from a registry or importing it." 
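To make the `/build` parameters above concrete, here is a hedged sketch of the same call through the Go client. The `context.tar` file and the `example:latest` tag are placeholders, and the build context is assumed to have been archived beforehand:

```go
package main

import (
	"context"
	"io"
	"log"
	"os"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	// The request body is a tar archive of the build context
	// (for example, produced with `tar -cf context.tar .`).
	buildCtx, err := os.Open("context.tar")
	if err != nil {
		log.Fatal(err)
	}
	defer buildCtx.Close()

	// Roughly POST /build?t=example:latest&dockerfile=Dockerfile&rm=1
	resp, err := cli.ImageBuild(context.Background(), buildCtx, types.ImageBuildOptions{
		Tags:       []string{"example:latest"},
		Dockerfile: "Dockerfile",
		Remove:     true,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// The body is the same JSON progress stream the raw endpoint returns.
	io.Copy(os.Stdout, resp.Body)
}
```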
operationId: "ImageCreate" consumes: - "text/plain" - "application/octet-stream" produces: - "application/json" responses: 200: description: "no error" 404: description: "repository does not exist or no read access" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "fromImage" in: "query" description: "Name of the image to pull. The name may include a tag or digest. This parameter may only be used when pulling an image. The pull is cancelled if the HTTP connection is closed." type: "string" - name: "fromSrc" in: "query" description: "Source to import. The value may be a URL from which the image can be retrieved or `-` to read the image from the request body. This parameter may only be used when importing an image." type: "string" - name: "repo" in: "query" description: "Repository name given to an image when it is imported. The repo may include a tag. This parameter may only be used when importing an image." type: "string" - name: "tag" in: "query" description: "Tag or digest. If empty when pulling an image, this causes all tags for the given image to be pulled." type: "string" - name: "message" in: "query" description: "Set commit message for imported image." type: "string" - name: "inputImage" in: "body" description: "Image content if the value `-` has been specified in fromSrc query parameter" schema: type: "string" required: false - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration. Refer to the [authentication section](#section/Authentication) for details. type: "string" - name: "changes" in: "query" description: | Apply `Dockerfile` instructions to the image that is created, for example: `changes=ENV DEBUG=true`. Note that `ENV DEBUG=true` should be URI component encoded. Supported `Dockerfile` instructions: `CMD`|`ENTRYPOINT`|`ENV`|`EXPOSE`|`ONBUILD`|`USER`|`VOLUME`|`WORKDIR` type: "array" items: type: "string" - name: "platform" in: "query" description: "Platform in the format os[/arch[/variant]]" type: "string" default: "" tags: ["Image"] /images/{name}/json: get: summary: "Inspect an image" description: "Return low-level information about an image." 
operationId: "ImageInspect" produces: - "application/json" responses: 200: description: "No error" schema: $ref: "#/definitions/Image" examples: application/json: Id: "sha256:85f05633ddc1c50679be2b16a0479ab6f7637f8884e0cfe0f4d20e1ebb3d6e7c" Container: "cb91e48a60d01f1e27028b4fc6819f4f290b3cf12496c8176ec714d0d390984a" Comment: "" Os: "linux" Architecture: "amd64" Parent: "sha256:91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c" ContainerConfig: Tty: false Hostname: "e611e15f9c9d" Domainname: "" AttachStdout: false PublishService: "" AttachStdin: false OpenStdin: false StdinOnce: false NetworkDisabled: false OnBuild: [] Image: "91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c" User: "" WorkingDir: "" MacAddress: "" AttachStderr: false Labels: com.example.license: "GPL" com.example.version: "1.0" com.example.vendor: "Acme" Env: - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" Cmd: - "/bin/sh" - "-c" - "#(nop) LABEL com.example.vendor=Acme com.example.license=GPL com.example.version=1.0" DockerVersion: "1.9.0-dev" VirtualSize: 188359297 Size: 0 Author: "" Created: "2015-09-10T08:30:53.26995814Z" GraphDriver: Name: "aufs" Data: {} RepoDigests: - "localhost:5000/test/busybox/example@sha256:cbbf2f9a99b47fc460d422812b6a5adff7dfee951d8fa2e4a98caa0382cfbdbf" RepoTags: - "example:1.0" - "example:latest" - "example:stable" Config: Image: "91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c" NetworkDisabled: false OnBuild: [] StdinOnce: false PublishService: "" AttachStdin: false OpenStdin: false Domainname: "" AttachStdout: false Tty: false Hostname: "e611e15f9c9d" Cmd: - "/bin/bash" Env: - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" Labels: com.example.vendor: "Acme" com.example.version: "1.0" com.example.license: "GPL" MacAddress: "" AttachStderr: false WorkingDir: "" User: "" RootFS: Type: "layers" Layers: - "sha256:1834950e52ce4d5a88a1bbd131c537f4d0e56d10ff0dd69e66be3b7dfa9df7e6" - "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such image: someimage (tag: latest)" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or id" type: "string" required: true tags: ["Image"] /images/{name}/history: get: summary: "Get the history of an image" description: "Return parent layers of an image." 
operationId: "ImageHistory" produces: ["application/json"] responses: 200: description: "List of image layers" schema: type: "array" items: type: "object" x-go-name: HistoryResponseItem title: "HistoryResponseItem" description: "individual image layer information in response to ImageHistory operation" required: [Id, Created, CreatedBy, Tags, Size, Comment] properties: Id: type: "string" x-nullable: false Created: type: "integer" format: "int64" x-nullable: false CreatedBy: type: "string" x-nullable: false Tags: type: "array" items: type: "string" Size: type: "integer" format: "int64" x-nullable: false Comment: type: "string" x-nullable: false examples: application/json: - Id: "3db9c44f45209632d6050b35958829c3a2aa256d81b9a7be45b362ff85c54710" Created: 1398108230 CreatedBy: "/bin/sh -c #(nop) ADD file:eb15dbd63394e063b805a3c32ca7bf0266ef64676d5a6fab4801f2e81e2a5148 in /" Tags: - "ubuntu:lucid" - "ubuntu:10.04" Size: 182964289 Comment: "" - Id: "6cfa4d1f33fb861d4d114f43b25abd0ac737509268065cdfd69d544a59c85ab8" Created: 1398108222 CreatedBy: "/bin/sh -c #(nop) MAINTAINER Tianon Gravi <[email protected]> - mkimage-debootstrap.sh -i iproute,iputils-ping,ubuntu-minimal -t lucid.tar.xz lucid http://archive.ubuntu.com/ubuntu/" Tags: [] Size: 0 Comment: "" - Id: "511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158" Created: 1371157430 CreatedBy: "" Tags: - "scratch12:latest" - "scratch:latest" Size: 0 Comment: "Imported from -" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID" type: "string" required: true tags: ["Image"] /images/{name}/push: post: summary: "Push an image" description: | Push an image to a registry. If you wish to push an image on to a private registry, that image must already have a tag which references the registry. For example, `registry.example.com/myimage:latest`. The push is cancelled if the HTTP connection is closed. operationId: "ImagePush" consumes: - "application/octet-stream" responses: 200: description: "No error" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID." type: "string" required: true - name: "tag" in: "query" description: "The tag to associate with the image on the registry." type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration. Refer to the [authentication section](#section/Authentication) for details. type: "string" required: true tags: ["Image"] /images/{name}/tag: post: summary: "Tag an image" description: "Tag an image so that it becomes part of a repository." operationId: "ImageTag" responses: 201: description: "No error" 400: description: "Bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Conflict" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID to tag." type: "string" required: true - name: "repo" in: "query" description: "The repository to tag in. For example, `someuser/someimage`." type: "string" - name: "tag" in: "query" description: "The name of the new tag." 
type: "string" tags: ["Image"] /images/{name}: delete: summary: "Remove an image" description: | Remove an image, along with any untagged parent images that were referenced by that image. Images can't be removed if they have descendant images, are being used by a running container or are being used by a build. operationId: "ImageDelete" produces: ["application/json"] responses: 200: description: "The image was deleted successfully" schema: type: "array" items: $ref: "#/definitions/ImageDeleteResponseItem" examples: application/json: - Untagged: "3e2f21a89f" - Deleted: "3e2f21a89f" - Deleted: "53b4f83ac9" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Conflict" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID" type: "string" required: true - name: "force" in: "query" description: "Remove the image even if it is being used by stopped containers or has other tags" type: "boolean" default: false - name: "noprune" in: "query" description: "Do not delete untagged parent images" type: "boolean" default: false tags: ["Image"] /images/search: get: summary: "Search images" description: "Search for an image on Docker Hub." operationId: "ImageSearch" produces: - "application/json" responses: 200: description: "No error" schema: type: "array" items: type: "object" title: "ImageSearchResponseItem" properties: description: type: "string" is_official: type: "boolean" is_automated: type: "boolean" name: type: "string" star_count: type: "integer" examples: application/json: - description: "" is_official: false is_automated: false name: "wma55/u1210sshd" star_count: 0 - description: "" is_official: false is_automated: false name: "jdswinbank/sshd" star_count: 0 - description: "" is_official: false is_automated: false name: "vgauthier/sshd" star_count: 0 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "term" in: "query" description: "Term to search" type: "string" required: true - name: "limit" in: "query" description: "Maximum number of results to return" type: "integer" - name: "filters" in: "query" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the images list. Available filters: - `is-automated=(true|false)` - `is-official=(true|false)` - `stars=<number>` Matches images that has at least 'number' stars. type: "string" tags: ["Image"] /images/prune: post: summary: "Delete unused images" produces: - "application/json" operationId: "ImagePrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `dangling=<boolean>` When set to `true` (or `1`), prune only unused *and* untagged images. When set to `false` (or `0`), all unused images are pruned. - `until=<string>` Prune images created before this timestamp. The `<timestamp>` can be Unix timestamps, date formatted timestamps, or Go duration strings (e.g. `10m`, `1h30m`) computed relative to the daemon machine’s time. - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune images with (or without, in case `label!=...` is used) the specified labels. 
type: "string" responses: 200: description: "No error" schema: type: "object" title: "ImagePruneResponse" properties: ImagesDeleted: description: "Images that were deleted" type: "array" items: $ref: "#/definitions/ImageDeleteResponseItem" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Image"] /auth: post: summary: "Check auth configuration" description: | Validate credentials for a registry and, if available, get an identity token for accessing the registry without password. operationId: "SystemAuth" consumes: ["application/json"] produces: ["application/json"] responses: 200: description: "An identity token was generated successfully." schema: type: "object" title: "SystemAuthResponse" required: [Status] properties: Status: description: "The status of the authentication" type: "string" x-nullable: false IdentityToken: description: "An opaque token used to authenticate a user after a successful login" type: "string" x-nullable: false examples: application/json: Status: "Login Succeeded" IdentityToken: "9cbaf023786cd7..." 204: description: "No error" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "authConfig" in: "body" description: "Authentication to check" schema: $ref: "#/definitions/AuthConfig" tags: ["System"] /info: get: summary: "Get system information" operationId: "SystemInfo" produces: - "application/json" responses: 200: description: "No error" schema: $ref: "#/definitions/SystemInfo" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /version: get: summary: "Get version" description: "Returns the version of Docker that is running and various information about the system that Docker is running on." operationId: "SystemVersion" produces: ["application/json"] responses: 200: description: "no error" schema: $ref: "#/definitions/SystemVersion" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /_ping: get: summary: "Ping" description: "This is a dummy endpoint you can use to test if the server is accessible." operationId: "SystemPing" produces: ["text/plain"] responses: 200: description: "no error" schema: type: "string" example: "OK" headers: API-Version: type: "string" description: "Max API Version the server supports" Builder-Version: type: "string" description: "Default version of docker image builder" Docker-Experimental: type: "boolean" description: "If the server is running with experimental mode enabled" Cache-Control: type: "string" default: "no-cache, no-store, must-revalidate" Pragma: type: "string" default: "no-cache" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" headers: Cache-Control: type: "string" default: "no-cache, no-store, must-revalidate" Pragma: type: "string" default: "no-cache" tags: ["System"] head: summary: "Ping" description: "This is a dummy endpoint you can use to test if the server is accessible." 
operationId: "SystemPingHead" produces: ["text/plain"] responses: 200: description: "no error" schema: type: "string" example: "(empty)" headers: API-Version: type: "string" description: "Max API Version the server supports" Builder-Version: type: "string" description: "Default version of docker image builder" Docker-Experimental: type: "boolean" description: "If the server is running with experimental mode enabled" Cache-Control: type: "string" default: "no-cache, no-store, must-revalidate" Pragma: type: "string" default: "no-cache" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /commit: post: summary: "Create a new image from a container" operationId: "ImageCommit" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "containerConfig" in: "body" description: "The container configuration" schema: $ref: "#/definitions/ContainerConfig" - name: "container" in: "query" description: "The ID or name of the container to commit" type: "string" - name: "repo" in: "query" description: "Repository name for the created image" type: "string" - name: "tag" in: "query" description: "Tag name for the create image" type: "string" - name: "comment" in: "query" description: "Commit message" type: "string" - name: "author" in: "query" description: "Author of the image (e.g., `John Hannibal Smith <[email protected]>`)" type: "string" - name: "pause" in: "query" description: "Whether to pause the container before committing" type: "boolean" default: true - name: "changes" in: "query" description: "`Dockerfile` instructions to apply while committing" type: "string" tags: ["Image"] /events: get: summary: "Monitor events" description: | Stream real-time events from the server. Various objects within Docker report events when something happens to them. 
Containers report these events: `attach`, `commit`, `copy`, `create`, `destroy`, `detach`, `die`, `exec_create`, `exec_detach`, `exec_start`, `exec_die`, `export`, `health_status`, `kill`, `oom`, `pause`, `rename`, `resize`, `restart`, `start`, `stop`, `top`, `unpause`, `update`, and `prune` Images report these events: `delete`, `import`, `load`, `pull`, `push`, `save`, `tag`, `untag`, and `prune` Volumes report these events: `create`, `mount`, `unmount`, `destroy`, and `prune` Networks report these events: `create`, `connect`, `disconnect`, `destroy`, `update`, `remove`, and `prune` The Docker daemon reports these events: `reload` Services report these events: `create`, `update`, and `remove` Nodes report these events: `create`, `update`, and `remove` Secrets report these events: `create`, `update`, and `remove` Configs report these events: `create`, `update`, and `remove` The Builder reports `prune` events operationId: "SystemEvents" produces: - "application/json" responses: 200: description: "no error" schema: type: "object" title: "SystemEventsResponse" properties: Type: description: "The type of object emitting the event" type: "string" Action: description: "The type of event" type: "string" Actor: type: "object" properties: ID: description: "The ID of the object emitting the event" type: "string" Attributes: description: "Various key/value attributes of the object, depending on its type" type: "object" additionalProperties: type: "string" time: description: "Timestamp of event" type: "integer" timeNano: description: "Timestamp of event, with nanosecond accuracy" type: "integer" format: "int64" examples: application/json: Type: "container" Action: "create" Actor: ID: "ede54ee1afda366ab42f824e8a5ffd195155d853ceaec74a927f249ea270c743" Attributes: com.example.some-label: "some-label-value" image: "alpine" name: "my-container" time: 1461943101 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "since" in: "query" description: "Show events created since this timestamp then stream new events." type: "string" - name: "until" in: "query" description: "Show events created until this timestamp then stop streaming." type: "string" - name: "filters" in: "query" description: | A JSON encoded value of filters (a `map[string][]string`) to process on the event list. 
Available filters: - `config=<string>` config name or ID - `container=<string>` container name or ID - `daemon=<string>` daemon name or ID - `event=<string>` event type - `image=<string>` image name or ID - `label=<string>` image or container label - `network=<string>` network name or ID - `node=<string>` node ID - `plugin`=<string> plugin name or ID - `scope`=<string> local or swarm - `secret=<string>` secret name or ID - `service=<string>` service name or ID - `type=<string>` object to filter by, one of `container`, `image`, `volume`, `network`, `daemon`, `plugin`, `node`, `service`, `secret` or `config` - `volume=<string>` volume name type: "string" tags: ["System"] /system/df: get: summary: "Get data usage information" operationId: "SystemDataUsage" responses: 200: description: "no error" schema: type: "object" title: "SystemDataUsageResponse" properties: LayersSize: type: "integer" format: "int64" Images: type: "array" items: $ref: "#/definitions/ImageSummary" Containers: type: "array" items: $ref: "#/definitions/ContainerSummary" Volumes: type: "array" items: $ref: "#/definitions/Volume" BuildCache: type: "array" items: $ref: "#/definitions/BuildCache" example: LayersSize: 1092588 Images: - Id: "sha256:2b8fd9751c4c0f5dd266fcae00707e67a2545ef34f9a29354585f93dac906749" ParentId: "" RepoTags: - "busybox:latest" RepoDigests: - "busybox@sha256:a59906e33509d14c036c8678d687bd4eec81ed7c4b8ce907b888c607f6a1e0e6" Created: 1466724217 Size: 1092588 SharedSize: 0 VirtualSize: 1092588 Labels: {} Containers: 1 Containers: - Id: "e575172ed11dc01bfce087fb27bee502db149e1a0fad7c296ad300bbff178148" Names: - "/top" Image: "busybox" ImageID: "sha256:2b8fd9751c4c0f5dd266fcae00707e67a2545ef34f9a29354585f93dac906749" Command: "top" Created: 1472592424 Ports: [] SizeRootFs: 1092588 Labels: {} State: "exited" Status: "Exited (0) 56 minutes ago" HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: IPAMConfig: null Links: null Aliases: null NetworkID: "d687bc59335f0e5c9ee8193e5612e8aee000c8c62ea170cfb99c098f95899d92" EndpointID: "8ed5115aeaad9abb174f68dcf135b49f11daf597678315231a32ca28441dec6a" Gateway: "172.18.0.1" IPAddress: "172.18.0.2" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:12:00:02" Mounts: [] Volumes: - Name: "my-volume" Driver: "local" Mountpoint: "/var/lib/docker/volumes/my-volume/_data" Labels: null Scope: "local" Options: null UsageData: Size: 10920104 RefCount: 2 BuildCache: - ID: "hw53o5aio51xtltp5xjp8v7fx" Parent: "" Type: "regular" Description: "pulled from docker.io/library/debian@sha256:234cb88d3020898631af0ccbbcca9a66ae7306ecd30c9720690858c1b007d2a0" InUse: false Shared: true Size: 0 CreatedAt: "2021-06-28T13:31:01.474619385Z" LastUsedAt: "2021-07-07T22:02:32.738075951Z" UsageCount: 26 - ID: "ndlpt0hhvkqcdfkputsk4cq9c" Parent: "hw53o5aio51xtltp5xjp8v7fx" Type: "regular" Description: "mount / from exec /bin/sh -c echo 'Binary::apt::APT::Keep-Downloaded-Packages \"true\";' > /etc/apt/apt.conf.d/keep-cache" InUse: false Shared: true Size: 51 CreatedAt: "2021-06-28T13:31:03.002625487Z" LastUsedAt: "2021-07-07T22:02:32.773909517Z" UsageCount: 26 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "type" in: "query" description: | Object types, for which to compute and return data. 
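A hedged sketch of consuming the `/events` stream described above with the Go client; the `type` and `event` filter values are illustrative only, and the loop runs until the context is cancelled or the server closes the stream:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/api/types/filters"
	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	// Roughly GET /events?filters={"type":["container"],"event":["start"]}
	msgs, errs := cli.Events(ctx, types.EventsOptions{
		Filters: filters.NewArgs(
			filters.Arg("type", "container"),
			filters.Arg("event", "start"),
		),
	})
	for {
		select {
		case m := <-msgs:
			fmt.Println(m.Type, m.Action, m.Actor.ID, m.Actor.Attributes["name"])
		case err := <-errs:
			log.Fatal(err)
		}
	}
}
```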
type: "array" collectionFormat: multi items: type: "string" enum: ["container", "image", "volume", "build-cache"] tags: ["System"] /images/{name}/get: get: summary: "Export an image" description: | Get a tarball containing all images and metadata for a repository. If `name` is a specific name and tag (e.g. `ubuntu:latest`), then only that image (and its parents) are returned. If `name` is an image ID, similarly only that image (and its parents) are returned, but with the exclusion of the `repositories` file in the tarball, as there were no image names referenced. ### Image tarball format An image tarball contains one directory per image layer (named using its long ID), each containing these files: - `VERSION`: currently `1.0` - the file format version - `json`: detailed layer information, similar to `docker inspect layer_id` - `layer.tar`: A tarfile containing the filesystem changes in this layer The `layer.tar` file contains `aufs` style `.wh..wh.aufs` files and directories for storing attribute changes and deletions. If the tarball defines a repository, the tarball should also include a `repositories` file at the root that contains a list of repository and tag names mapped to layer IDs. ```json { "hello-world": { "latest": "565a9d68a73f6706862bfe8409a7f659776d4d60a8d096eb4a3cbce6999cc2a1" } } ``` operationId: "ImageGet" produces: - "application/x-tar" responses: 200: description: "no error" schema: type: "string" format: "binary" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID" type: "string" required: true tags: ["Image"] /images/get: get: summary: "Export several images" description: | Get a tarball containing all images and metadata for several image repositories. For each value of the `names` parameter: if it is a specific name and tag (e.g. `ubuntu:latest`), then only that image (and its parents) are returned; if it is an image ID, similarly only that image (and its parents) are returned and there would be no names referenced in the 'repositories' file for this image ID. For details on the format, see the [export image endpoint](#operation/ImageGet). operationId: "ImageGetAll" produces: - "application/x-tar" responses: 200: description: "no error" schema: type: "string" format: "binary" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "names" in: "query" description: "Image names to filter by" type: "array" items: type: "string" tags: ["Image"] /images/load: post: summary: "Import images" description: | Load a set of images and tags into a repository. For details on the format, see the [export image endpoint](#operation/ImageGet). operationId: "ImageLoad" consumes: - "application/x-tar" produces: - "application/json" responses: 200: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "imagesTarball" in: "body" description: "Tar archive containing images" schema: type: "string" format: "binary" - name: "quiet" in: "query" description: "Suppress progress details during load." type: "boolean" default: false tags: ["Image"] /containers/{id}/exec: post: summary: "Create an exec instance" description: "Run a command inside a running container." 
operationId: "ContainerExec" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "container is paused" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "execConfig" in: "body" description: "Exec configuration" schema: type: "object" title: "ExecConfig" properties: AttachStdin: type: "boolean" description: "Attach to `stdin` of the exec command." AttachStdout: type: "boolean" description: "Attach to `stdout` of the exec command." AttachStderr: type: "boolean" description: "Attach to `stderr` of the exec command." DetachKeys: type: "string" description: | Override the key sequence for detaching a container. Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,` or `_`. Tty: type: "boolean" description: "Allocate a pseudo-TTY." Env: description: | A list of environment variables in the form `["VAR=value", ...]`. type: "array" items: type: "string" Cmd: type: "array" description: "Command to run, as a string or array of strings." items: type: "string" Privileged: type: "boolean" description: "Runs the exec process with extended privileges." default: false User: type: "string" description: | The user, and optionally, group to run the exec process inside the container. Format is one of: `user`, `user:group`, `uid`, or `uid:gid`. WorkingDir: type: "string" description: | The working directory for the exec process inside the container. example: AttachStdin: false AttachStdout: true AttachStderr: true DetachKeys: "ctrl-p,ctrl-q" Tty: false Cmd: - "date" Env: - "FOO=bar" - "BAZ=quux" required: true - name: "id" in: "path" description: "ID or name of container" type: "string" required: true tags: ["Exec"] /exec/{id}/start: post: summary: "Start an exec instance" description: | Starts a previously set up exec instance. If detach is true, this endpoint returns immediately after starting the command. Otherwise, it sets up an interactive session with the command. operationId: "ExecStart" consumes: - "application/json" produces: - "application/vnd.docker.raw-stream" responses: 200: description: "No error" 404: description: "No such exec instance" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Container is stopped or paused" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "execStartConfig" in: "body" schema: type: "object" title: "ExecStartConfig" properties: Detach: type: "boolean" description: "Detach from the command." Tty: type: "boolean" description: "Allocate a pseudo-TTY." example: Detach: false Tty: false - name: "id" in: "path" description: "Exec instance ID" required: true type: "string" tags: ["Exec"] /exec/{id}/resize: post: summary: "Resize an exec instance" description: | Resize the TTY session used by an exec instance. This endpoint only works if `tty` was specified as part of creating and starting the exec instance. 
operationId: "ExecResize" responses: 201: description: "No error" 404: description: "No such exec instance" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Exec instance ID" required: true type: "string" - name: "h" in: "query" description: "Height of the TTY session in characters" type: "integer" - name: "w" in: "query" description: "Width of the TTY session in characters" type: "integer" tags: ["Exec"] /exec/{id}/json: get: summary: "Inspect an exec instance" description: "Return low-level information about an exec instance." operationId: "ExecInspect" produces: - "application/json" responses: 200: description: "No error" schema: type: "object" title: "ExecInspectResponse" properties: CanRemove: type: "boolean" DetachKeys: type: "string" ID: type: "string" Running: type: "boolean" ExitCode: type: "integer" ProcessConfig: $ref: "#/definitions/ProcessConfig" OpenStdin: type: "boolean" OpenStderr: type: "boolean" OpenStdout: type: "boolean" ContainerID: type: "string" Pid: type: "integer" description: "The system process ID for the exec process." examples: application/json: CanRemove: false ContainerID: "b53ee82b53a40c7dca428523e34f741f3abc51d9f297a14ff874bf761b995126" DetachKeys: "" ExitCode: 2 ID: "f33bbfb39f5b142420f4759b2348913bd4a8d1a6d7fd56499cb41a1bb91d7b3b" OpenStderr: true OpenStdin: true OpenStdout: true ProcessConfig: arguments: - "-c" - "exit 2" entrypoint: "sh" privileged: false tty: true user: "1000" Running: false Pid: 42000 404: description: "No such exec instance" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Exec instance ID" required: true type: "string" tags: ["Exec"] /volumes: get: summary: "List volumes" operationId: "VolumeList" produces: ["application/json"] responses: 200: description: "Summary volume data that matches the query" schema: type: "object" title: "VolumeListResponse" description: "Volume list response" required: [Volumes, Warnings] properties: Volumes: type: "array" x-nullable: false description: "List of volumes" items: $ref: "#/definitions/Volume" Warnings: type: "array" x-nullable: false description: | Warnings that occurred when fetching the list of volumes. items: type: "string" examples: application/json: Volumes: - CreatedAt: "2017-07-19T12:00:26Z" Name: "tardis" Driver: "local" Mountpoint: "/var/lib/docker/volumes/tardis" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Scope: "local" Options: device: "tmpfs" o: "size=100m,uid=1000" type: "tmpfs" Warnings: [] 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" description: | JSON encoded value of the filters (a `map[string][]string`) to process on the volumes list. Available filters: - `dangling=<boolean>` When set to `true` (or `1`), returns all volumes that are not in use by a container. When set to `false` (or `0`), only volumes that are in use by one or more containers are returned. - `driver=<volume-driver-name>` Matches volumes based on their driver. - `label=<key>` or `label=<key>:<value>` Matches volumes based on the presence of a `label` alone or a `label` and a value. - `name=<volume-name>` Matches all or part of a volume name. 
type: "string" format: "json" tags: ["Volume"] /volumes/create: post: summary: "Create a volume" operationId: "VolumeCreate" consumes: ["application/json"] produces: ["application/json"] responses: 201: description: "The volume was created successfully" schema: $ref: "#/definitions/Volume" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "volumeConfig" in: "body" required: true description: "Volume configuration" schema: type: "object" description: "Volume configuration" title: "VolumeConfig" properties: Name: description: | The new volume's name. If not specified, Docker generates a name. type: "string" x-nullable: false Driver: description: "Name of the volume driver to use." type: "string" default: "local" x-nullable: false DriverOpts: description: | A mapping of driver options and values. These options are passed directly to the driver and are driver specific. type: "object" additionalProperties: type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" example: Name: "tardis" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Driver: "custom" tags: ["Volume"] /volumes/{name}: get: summary: "Inspect a volume" operationId: "VolumeInspect" produces: ["application/json"] responses: 200: description: "No error" schema: $ref: "#/definitions/Volume" 404: description: "No such volume" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" required: true description: "Volume name or ID" type: "string" tags: ["Volume"] delete: summary: "Remove a volume" description: "Instruct the driver to remove the volume." operationId: "VolumeDelete" responses: 204: description: "The volume was removed" 404: description: "No such volume or volume driver" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Volume is in use and cannot be removed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" required: true description: "Volume name or ID" type: "string" - name: "force" in: "query" description: "Force the removal of the volume" type: "boolean" default: false tags: ["Volume"] /volumes/prune: post: summary: "Delete unused volumes" produces: - "application/json" operationId: "VolumePrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune volumes with (or without, in case `label!=...` is used) the specified labels. type: "string" responses: 200: description: "No error" schema: type: "object" title: "VolumePruneResponse" properties: VolumesDeleted: description: "Volumes that were deleted" type: "array" items: type: "string" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Volume"] /networks: get: summary: "List networks" description: | Returns a list of networks. For details on the format, see the [network inspect endpoint](#operation/NetworkInspect). Note that it uses a different, smaller representation of a network than inspecting a single network. 
For example, the list of containers attached to the network is not propagated in API versions 1.28 and up. operationId: "NetworkList" produces: - "application/json" responses: 200: description: "No error" schema: type: "array" items: $ref: "#/definitions/Network" examples: application/json: - Name: "bridge" Id: "f2de39df4171b0dc801e8002d1d999b77256983dfc63041c0f34030aa3977566" Created: "2016-10-19T06:21:00.416543526Z" Scope: "local" Driver: "bridge" EnableIPv6: false Internal: false Attachable: false Ingress: false IPAM: Driver: "default" Config: - Subnet: "172.17.0.0/16" Options: com.docker.network.bridge.default_bridge: "true" com.docker.network.bridge.enable_icc: "true" com.docker.network.bridge.enable_ip_masquerade: "true" com.docker.network.bridge.host_binding_ipv4: "0.0.0.0" com.docker.network.bridge.name: "docker0" com.docker.network.driver.mtu: "1500" - Name: "none" Id: "e086a3893b05ab69242d3c44e49483a3bbbd3a26b46baa8f61ab797c1088d794" Created: "0001-01-01T00:00:00Z" Scope: "local" Driver: "null" EnableIPv6: false Internal: false Attachable: false Ingress: false IPAM: Driver: "default" Config: [] Containers: {} Options: {} - Name: "host" Id: "13e871235c677f196c4e1ecebb9dc733b9b2d2ab589e30c539efeda84a24215e" Created: "0001-01-01T00:00:00Z" Scope: "local" Driver: "host" EnableIPv6: false Internal: false Attachable: false Ingress: false IPAM: Driver: "default" Config: [] Containers: {} Options: {} 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" description: | JSON encoded value of the filters (a `map[string][]string`) to process on the networks list. Available filters: - `dangling=<boolean>` When set to `true` (or `1`), returns all networks that are not in use by a container. When set to `false` (or `0`), only networks that are in use by one or more containers are returned. - `driver=<driver-name>` Matches a network's driver. - `id=<network-id>` Matches all or part of a network ID. - `label=<key>` or `label=<key>=<value>` of a network label. - `name=<network-name>` Matches all or part of a network name. - `scope=["swarm"|"global"|"local"]` Filters networks by scope (`swarm`, `global`, or `local`). - `type=["custom"|"builtin"]` Filters networks by type. The `custom` keyword returns all user-defined networks. 
type: "string" tags: ["Network"] /networks/{id}: get: summary: "Inspect a network" operationId: "NetworkInspect" produces: - "application/json" responses: 200: description: "No error" schema: $ref: "#/definitions/Network" 404: description: "Network not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" - name: "verbose" in: "query" description: "Detailed inspect output for troubleshooting" type: "boolean" default: false - name: "scope" in: "query" description: "Filter the network by scope (swarm, global, or local)" type: "string" tags: ["Network"] delete: summary: "Remove a network" operationId: "NetworkDelete" responses: 204: description: "No error" 403: description: "operation not supported for pre-defined networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such network" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" tags: ["Network"] /networks/create: post: summary: "Create a network" operationId: "NetworkCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "No error" schema: type: "object" title: "NetworkCreateResponse" properties: Id: description: "The ID of the created network." type: "string" Warning: type: "string" example: Id: "22be93d5babb089c5aab8dbc369042fad48ff791584ca2da2100db837a1c7c30" Warning: "" 403: description: "operation not supported for pre-defined networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "plugin not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "networkConfig" in: "body" description: "Network configuration" required: true schema: type: "object" title: "NetworkCreateRequest" required: ["Name"] properties: Name: description: "The network's name." type: "string" CheckDuplicate: description: | Check for networks with duplicate names. Since Network is primarily keyed based on a random ID and not on the name, and network name is strictly a user-friendly alias to the network which is uniquely identified using ID, there is no guaranteed way to check for duplicates. CheckDuplicate is there to provide a best effort checking of any networks which has the same name but it is not guaranteed to catch all name collisions. type: "boolean" Driver: description: "Name of the network driver plugin to use." type: "string" default: "bridge" Internal: description: "Restrict external access to the network." type: "boolean" Attachable: description: | Globally scoped network is manually attachable by regular containers from workers in swarm mode. type: "boolean" Ingress: description: | Ingress network is the network which provides the routing-mesh in swarm mode. type: "boolean" IPAM: description: "Optional custom IP scheme for the network." $ref: "#/definitions/IPAM" EnableIPv6: description: "Enable IPv6 on the network." type: "boolean" Options: description: "Network specific options to be used by the drivers." type: "object" additionalProperties: type: "string" Labels: description: "User-defined key/value metadata." 
type: "object" additionalProperties: type: "string" example: Name: "isolated_nw" CheckDuplicate: false Driver: "bridge" EnableIPv6: true IPAM: Driver: "default" Config: - Subnet: "172.20.0.0/16" IPRange: "172.20.10.0/24" Gateway: "172.20.10.11" - Subnet: "2001:db8:abcd::/64" Gateway: "2001:db8:abcd::1011" Options: foo: "bar" Internal: true Attachable: false Ingress: false Options: com.docker.network.bridge.default_bridge: "true" com.docker.network.bridge.enable_icc: "true" com.docker.network.bridge.enable_ip_masquerade: "true" com.docker.network.bridge.host_binding_ipv4: "0.0.0.0" com.docker.network.bridge.name: "docker0" com.docker.network.driver.mtu: "1500" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" tags: ["Network"] /networks/{id}/connect: post: summary: "Connect a container to a network" operationId: "NetworkConnect" consumes: - "application/json" responses: 200: description: "No error" 403: description: "Operation not supported for swarm scoped networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "Network or container not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" - name: "container" in: "body" required: true schema: type: "object" title: "NetworkConnectRequest" properties: Container: type: "string" description: "The ID or name of the container to connect to the network." EndpointConfig: $ref: "#/definitions/EndpointSettings" example: Container: "3613f73ba0e4" EndpointConfig: IPAMConfig: IPv4Address: "172.24.56.89" IPv6Address: "2001:db8::5689" tags: ["Network"] /networks/{id}/disconnect: post: summary: "Disconnect a container from a network" operationId: "NetworkDisconnect" consumes: - "application/json" responses: 200: description: "No error" 403: description: "Operation not supported for swarm scoped networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "Network or container not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" - name: "container" in: "body" required: true schema: type: "object" title: "NetworkDisconnectRequest" properties: Container: type: "string" description: | The ID or name of the container to disconnect from the network. Force: type: "boolean" description: | Force the container to disconnect from the network. tags: ["Network"] /networks/prune: post: summary: "Delete unused networks" produces: - "application/json" operationId: "NetworkPrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `until=<timestamp>` Prune networks created before this timestamp. The `<timestamp>` can be Unix timestamps, date formatted timestamps, or Go duration strings (e.g. `10m`, `1h30m`) computed relative to the daemon machine’s time. - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune networks with (or without, in case `label!=...` is used) the specified labels. 
type: "string" responses: 200: description: "No error" schema: type: "object" title: "NetworkPruneResponse" properties: NetworksDeleted: description: "Networks that were deleted" type: "array" items: type: "string" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Network"] /plugins: get: summary: "List plugins" operationId: "PluginList" description: "Returns information about installed plugins." produces: ["application/json"] responses: 200: description: "No error" schema: type: "array" items: $ref: "#/definitions/Plugin" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the plugin list. Available filters: - `capability=<capability name>` - `enable=<true>|<false>` tags: ["Plugin"] /plugins/privileges: get: summary: "Get plugin privileges" operationId: "GetPluginPrivileges" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/PluginPrivilegeItem" example: - Name: "network" Description: "" Value: - "host" - Name: "mount" Description: "" Value: - "/data" - Name: "device" Description: "" Value: - "/dev/cpu_dma_latency" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "remote" in: "query" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" tags: - "Plugin" /plugins/pull: post: summary: "Install a plugin" operationId: "PluginPull" description: | Pulls and installs a plugin. After the plugin is installed, it can be enabled using the [`POST /plugins/{name}/enable` endpoint](#operation/PostPluginsEnable). produces: - "application/json" responses: 204: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "remote" in: "query" description: | Remote reference for plugin to install. The `:latest` tag is optional, and is used as the default if omitted. required: true type: "string" - name: "name" in: "query" description: | Local name for the pulled plugin. The `:latest` tag is optional, and is used as the default if omitted. required: false type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration to use when pulling a plugin from a registry. Refer to the [authentication section](#section/Authentication) for details. type: "string" - name: "body" in: "body" schema: type: "array" items: $ref: "#/definitions/PluginPrivilegeItem" example: - Name: "network" Description: "" Value: - "host" - Name: "mount" Description: "" Value: - "/data" - Name: "device" Description: "" Value: - "/dev/cpu_dma_latency" tags: ["Plugin"] /plugins/{name}/json: get: summary: "Inspect a plugin" operationId: "PluginInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Plugin" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. 
required: true type: "string" tags: ["Plugin"] /plugins/{name}: delete: summary: "Remove a plugin" operationId: "PluginDelete" responses: 200: description: "no error" schema: $ref: "#/definitions/Plugin" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "force" in: "query" description: | Disable the plugin before removing. This may result in issues if the plugin is in use by a container. type: "boolean" default: false tags: ["Plugin"] /plugins/{name}/enable: post: summary: "Enable a plugin" operationId: "PluginEnable" responses: 200: description: "no error" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "timeout" in: "query" description: "Set the HTTP client timeout (in seconds)" type: "integer" default: 0 tags: ["Plugin"] /plugins/{name}/disable: post: summary: "Disable a plugin" operationId: "PluginDisable" responses: 200: description: "no error" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" tags: ["Plugin"] /plugins/{name}/upgrade: post: summary: "Upgrade a plugin" operationId: "PluginUpgrade" responses: 204: description: "no error" 404: description: "plugin not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "remote" in: "query" description: | Remote reference to upgrade to. The `:latest` tag is optional, and is used as the default if omitted. required: true type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration to use when pulling a plugin from a registry. Refer to the [authentication section](#section/Authentication) for details. type: "string" - name: "body" in: "body" schema: type: "array" items: $ref: "#/definitions/PluginPrivilegeItem" example: - Name: "network" Description: "" Value: - "host" - Name: "mount" Description: "" Value: - "/data" - Name: "device" Description: "" Value: - "/dev/cpu_dma_latency" tags: ["Plugin"] /plugins/create: post: summary: "Create a plugin" operationId: "PluginCreate" consumes: - "application/x-tar" responses: 204: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "query" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. 
required: true type: "string" - name: "tarContext" in: "body" description: "Path to tar containing plugin rootfs and manifest" schema: type: "string" format: "binary" tags: ["Plugin"] /plugins/{name}/push: post: summary: "Push a plugin" operationId: "PluginPush" description: | Push a plugin to the registry. parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" responses: 200: description: "no error" 404: description: "plugin not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Plugin"] /plugins/{name}/set: post: summary: "Configure a plugin" operationId: "PluginSet" consumes: - "application/json" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "body" in: "body" schema: type: "array" items: type: "string" example: ["DEBUG=1"] responses: 204: description: "No error" 404: description: "Plugin not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Plugin"] /nodes: get: summary: "List nodes" operationId: "NodeList" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Node" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" description: | Filters to process on the nodes list, encoded as JSON (a `map[string][]string`). Available filters: - `id=<node id>` - `label=<engine label>` - `membership=`(`accepted`|`pending`)` - `name=<node name>` - `node.label=<node label>` - `role=`(`manager`|`worker`)` type: "string" tags: ["Node"] /nodes/{id}: get: summary: "Inspect a node" operationId: "NodeInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Node" 404: description: "no such node" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the node" type: "string" required: true tags: ["Node"] delete: summary: "Delete a node" operationId: "NodeDelete" responses: 200: description: "no error" 404: description: "no such node" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the node" type: "string" required: true - name: "force" in: "query" description: "Force remove a node from the swarm" default: false type: "boolean" tags: ["Node"] /nodes/{id}/update: post: summary: "Update a node" operationId: "NodeUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such node" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID of 
the node" type: "string" required: true - name: "body" in: "body" schema: $ref: "#/definitions/NodeSpec" - name: "version" in: "query" description: | The version number of the node object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true tags: ["Node"] /swarm: get: summary: "Inspect swarm" operationId: "SwarmInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Swarm" 404: description: "no such swarm" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" tags: ["Swarm"] /swarm/init: post: summary: "Initialize a new swarm" operationId: "SwarmInit" produces: - "application/json" - "text/plain" responses: 200: description: "no error" schema: description: "The node ID" type: "string" example: "7v2t30z9blmxuhnyo6s4cpenp" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is already part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: type: "object" title: "SwarmInitRequest" properties: ListenAddr: description: | Listen address used for inter-manager communication, as well as determining the networking interface used for the VXLAN Tunnel Endpoint (VTEP). This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the default swarm listening port is used. type: "string" AdvertiseAddr: description: | Externally reachable address advertised to other nodes. This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the port number from the listen address is used. If `AdvertiseAddr` is not specified, it will be automatically detected when possible. type: "string" DataPathAddr: description: | Address or interface to use for data path traffic (format: `<ip|interface>`), for example, `192.168.1.1`, or an interface, like `eth0`. If `DataPathAddr` is unspecified, the same address as `AdvertiseAddr` is used. The `DataPathAddr` specifies the address that global scope network drivers will publish towards other nodes in order to reach the containers running on this node. Using this parameter it is possible to separate the container data traffic from the management traffic of the cluster. type: "string" DataPathPort: description: | DataPathPort specifies the data path port number for data traffic. Acceptable port range is 1024 to 49151. if no port is set or is set to 0, default port 4789 will be used. type: "integer" format: "uint32" DefaultAddrPool: description: | Default Address Pool specifies default subnet pools for global scope networks. type: "array" items: type: "string" example: ["10.10.0.0/16", "20.20.0.0/16"] ForceNewCluster: description: "Force creation of a new swarm." type: "boolean" SubnetSize: description: | SubnetSize specifies the subnet size of the networks created from the default subnet pool. 
type: "integer" format: "uint32" Spec: $ref: "#/definitions/SwarmSpec" example: ListenAddr: "0.0.0.0:2377" AdvertiseAddr: "192.168.1.1:2377" DataPathPort: 4789 DefaultAddrPool: ["10.10.0.0/8", "20.20.0.0/8"] SubnetSize: 24 ForceNewCluster: false Spec: Orchestration: {} Raft: {} Dispatcher: {} CAConfig: {} EncryptionConfig: AutoLockManagers: false tags: ["Swarm"] /swarm/join: post: summary: "Join an existing swarm" operationId: "SwarmJoin" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is already part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: type: "object" title: "SwarmJoinRequest" properties: ListenAddr: description: | Listen address used for inter-manager communication if the node gets promoted to manager, as well as determining the networking interface used for the VXLAN Tunnel Endpoint (VTEP). type: "string" AdvertiseAddr: description: | Externally reachable address advertised to other nodes. This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the port number from the listen address is used. If `AdvertiseAddr` is not specified, it will be automatically detected when possible. type: "string" DataPathAddr: description: | Address or interface to use for data path traffic (format: `<ip|interface>`), for example, `192.168.1.1`, or an interface, like `eth0`. If `DataPathAddr` is unspecified, the same address as `AdvertiseAddr` is used. The `DataPathAddr` specifies the address that global scope network drivers will publish towards other nodes in order to reach the containers running on this node. Using this parameter it is possible to separate the container data traffic from the management traffic of the cluster. type: "string" RemoteAddrs: description: | Addresses of manager nodes already participating in the swarm. type: "array" items: type: "string" JoinToken: description: "Secret token for joining this swarm." type: "string" example: ListenAddr: "0.0.0.0:2377" AdvertiseAddr: "192.168.1.1:2377" RemoteAddrs: - "node1:2377" JoinToken: "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-7p73s1dx5in4tatdymyhg9hu2" tags: ["Swarm"] /swarm/leave: post: summary: "Leave a swarm" operationId: "SwarmLeave" responses: 200: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "force" description: | Force leave swarm, even if this is the last manager or that it will break the cluster. in: "query" type: "boolean" default: false tags: ["Swarm"] /swarm/update: post: summary: "Update a swarm" operationId: "SwarmUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: $ref: "#/definitions/SwarmSpec" - name: "version" in: "query" description: | The version number of the swarm object being updated. This is required to avoid conflicting writes. 
type: "integer" format: "int64" required: true - name: "rotateWorkerToken" in: "query" description: "Rotate the worker join token." type: "boolean" default: false - name: "rotateManagerToken" in: "query" description: "Rotate the manager join token." type: "boolean" default: false - name: "rotateManagerUnlockKey" in: "query" description: "Rotate the manager unlock key." type: "boolean" default: false tags: ["Swarm"] /swarm/unlockkey: get: summary: "Get the unlock key" operationId: "SwarmUnlockkey" consumes: - "application/json" responses: 200: description: "no error" schema: type: "object" title: "UnlockKeyResponse" properties: UnlockKey: description: "The swarm's unlock key." type: "string" example: UnlockKey: "SWMKEY-1-7c37Cc8654o6p38HnroywCi19pllOnGtbdZEgtKxZu8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" tags: ["Swarm"] /swarm/unlock: post: summary: "Unlock a locked manager" operationId: "SwarmUnlock" consumes: - "application/json" produces: - "application/json" parameters: - name: "body" in: "body" required: true schema: type: "object" title: "SwarmUnlockRequest" properties: UnlockKey: description: "The swarm's unlock key." type: "string" example: UnlockKey: "SWMKEY-1-7c37Cc8654o6p38HnroywCi19pllOnGtbdZEgtKxZu8" responses: 200: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" tags: ["Swarm"] /services: get: summary: "List services" operationId: "ServiceList" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Service" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the services list. Available filters: - `id=<service id>` - `label=<service label>` - `mode=["replicated"|"global"]` - `name=<service name>` - name: "status" in: "query" type: "boolean" description: | Include service status, with count of running and desired tasks. tags: ["Service"] /services/create: post: summary: "Create a service" operationId: "ServiceCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: type: "object" title: "ServiceCreateResponse" properties: ID: description: "The ID of the created service." 
type: "string" Warning: description: "Optional warning message" type: "string" example: ID: "ak7w3gjqoa3kuz8xcpnyy0pvl" Warning: "unable to pin image doesnotexist:latest to digest: image library/doesnotexist:latest not found" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 403: description: "network is not eligible for services" schema: $ref: "#/definitions/ErrorResponse" 409: description: "name conflicts with an existing service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: allOf: - $ref: "#/definitions/ServiceSpec" - type: "object" example: Name: "web" TaskTemplate: ContainerSpec: Image: "nginx:alpine" Mounts: - ReadOnly: true Source: "web-data" Target: "/usr/share/nginx/html" Type: "volume" VolumeOptions: DriverConfig: {} Labels: com.example.something: "something-value" Hosts: ["10.10.10.10 host1", "ABCD:EF01:2345:6789:ABCD:EF01:2345:6789 host2"] User: "33" DNSConfig: Nameservers: ["8.8.8.8"] Search: ["example.org"] Options: ["timeout:3"] Secrets: - File: Name: "www.example.org.key" UID: "33" GID: "33" Mode: 384 SecretID: "fpjqlhnwb19zds35k8wn80lq9" SecretName: "example_org_domain_key" LogDriver: Name: "json-file" Options: max-file: "3" max-size: "10M" Placement: {} Resources: Limits: MemoryBytes: 104857600 Reservations: {} RestartPolicy: Condition: "on-failure" Delay: 10000000000 MaxAttempts: 10 Mode: Replicated: Replicas: 4 UpdateConfig: Parallelism: 2 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 RollbackConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 EndpointSpec: Ports: - Protocol: "tcp" PublishedPort: 8080 TargetPort: 80 Labels: foo: "bar" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration for pulling from private registries. Refer to the [authentication section](#section/Authentication) for details. type: "string" tags: ["Service"] /services/{id}: get: summary: "Inspect a service" operationId: "ServiceInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Service" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID or name of service." required: true type: "string" - name: "insertDefaults" in: "query" description: "Fill empty fields with default values." type: "boolean" default: false tags: ["Service"] delete: summary: "Delete a service" operationId: "ServiceDelete" responses: 200: description: "no error" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID or name of service." 
required: true type: "string" tags: ["Service"] /services/{id}/update: post: summary: "Update a service" operationId: "ServiceUpdate" consumes: ["application/json"] produces: ["application/json"] responses: 200: description: "no error" schema: $ref: "#/definitions/ServiceUpdateResponse" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID or name of service." required: true type: "string" - name: "body" in: "body" required: true schema: allOf: - $ref: "#/definitions/ServiceSpec" - type: "object" example: Name: "top" TaskTemplate: ContainerSpec: Image: "busybox" Args: - "top" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ForceUpdate: 0 Mode: Replicated: Replicas: 1 UpdateConfig: Parallelism: 2 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 RollbackConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 EndpointSpec: Mode: "vip" - name: "version" in: "query" description: | The version number of the service object being updated. This is required to avoid conflicting writes. This version number should be the value as currently set on the service *before* the update. You can find the current version by calling `GET /services/{id}` required: true type: "integer" - name: "registryAuthFrom" in: "query" description: | If the `X-Registry-Auth` header is not specified, this parameter indicates where to find registry authorization credentials. type: "string" enum: ["spec", "previous-spec"] default: "spec" - name: "rollback" in: "query" description: | Set to this parameter to `previous` to cause a server-side rollback to the previous service spec. The supplied spec will be ignored in this case. type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration for pulling from private registries. Refer to the [authentication section](#section/Authentication) for details. type: "string" tags: ["Service"] /services/{id}/logs: get: summary: "Get service logs" description: | Get `stdout` and `stderr` logs from a service. See also [`/containers/{id}/logs`](#operation/ContainerLogs). **Note**: This endpoint works only for services with the `local`, `json-file` or `journald` logging drivers. operationId: "ServiceLogs" responses: 200: description: "logs returned as a stream in response body" schema: type: "string" format: "binary" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such service: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the service" type: "string" - name: "details" in: "query" description: "Show service context and extra details provided to logs." type: "boolean" default: false - name: "follow" in: "query" description: "Keep connection after returning logs." 
type: "boolean" default: false - name: "stdout" in: "query" description: "Return logs from `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Return logs from `stderr`" type: "boolean" default: false - name: "since" in: "query" description: "Only return logs since this time, as a UNIX timestamp" type: "integer" default: 0 - name: "timestamps" in: "query" description: "Add timestamps to every log line" type: "boolean" default: false - name: "tail" in: "query" description: | Only return this number of log lines from the end of the logs. Specify as an integer or `all` to output all log lines. type: "string" default: "all" tags: ["Service"] /tasks: get: summary: "List tasks" operationId: "TaskList" produces: - "application/json" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Task" example: - ID: "0kzzo1i0y4jz6027t0k7aezc7" Version: Index: 71 CreatedAt: "2016-06-07T21:07:31.171892745Z" UpdatedAt: "2016-06-07T21:07:31.376370513Z" Spec: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ServiceID: "9mnpnzenvg8p8tdbtq4wvbkcz" Slot: 1 NodeID: "60gvrl6tm78dmak4yl7srz94v" Status: Timestamp: "2016-06-07T21:07:31.290032978Z" State: "running" Message: "started" ContainerStatus: ContainerID: "e5d62702a1b48d01c3e02ca1e0212a250801fa8d67caca0b6f35919ebc12f035" PID: 677 DesiredState: "running" NetworksAttachments: - Network: ID: "4qvuz4ko70xaltuqbt8956gd1" Version: Index: 18 CreatedAt: "2016-06-07T20:31:11.912919752Z" UpdatedAt: "2016-06-07T21:07:29.955277358Z" Spec: Name: "ingress" Labels: com.docker.swarm.internal: "true" DriverConfiguration: {} IPAMOptions: Driver: {} Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" DriverState: Name: "overlay" Options: com.docker.network.driver.overlay.vxlanid_list: "256" IPAMOptions: Driver: Name: "default" Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" Addresses: - "10.255.0.10/16" - ID: "1yljwbmlr8er2waf8orvqpwms" Version: Index: 30 CreatedAt: "2016-06-07T21:07:30.019104782Z" UpdatedAt: "2016-06-07T21:07:30.231958098Z" Name: "hopeful_cori" Spec: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ServiceID: "9mnpnzenvg8p8tdbtq4wvbkcz" Slot: 1 NodeID: "60gvrl6tm78dmak4yl7srz94v" Status: Timestamp: "2016-06-07T21:07:30.202183143Z" State: "shutdown" Message: "shutdown" ContainerStatus: ContainerID: "1cf8d63d18e79668b0004a4be4c6ee58cddfad2dae29506d8781581d0688a213" DesiredState: "shutdown" NetworksAttachments: - Network: ID: "4qvuz4ko70xaltuqbt8956gd1" Version: Index: 18 CreatedAt: "2016-06-07T20:31:11.912919752Z" UpdatedAt: "2016-06-07T21:07:29.955277358Z" Spec: Name: "ingress" Labels: com.docker.swarm.internal: "true" DriverConfiguration: {} IPAMOptions: Driver: {} Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" DriverState: Name: "overlay" Options: com.docker.network.driver.overlay.vxlanid_list: "256" IPAMOptions: Driver: Name: "default" Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" Addresses: - "10.255.0.5/16" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the tasks list. 
Available filters: - `desired-state=(running | shutdown | accepted)` - `id=<task id>` - `label=key` or `label="key=value"` - `name=<task name>` - `node=<node id or name>` - `service=<service name>` tags: ["Task"] /tasks/{id}: get: summary: "Inspect a task" operationId: "TaskInspect" produces: - "application/json" responses: 200: description: "no error" schema: $ref: "#/definitions/Task" 404: description: "no such task" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID of the task" required: true type: "string" tags: ["Task"] /tasks/{id}/logs: get: summary: "Get task logs" description: | Get `stdout` and `stderr` logs from a task. See also [`/containers/{id}/logs`](#operation/ContainerLogs). **Note**: This endpoint works only for services with the `local`, `json-file` or `journald` logging drivers. operationId: "TaskLogs" responses: 200: description: "logs returned as a stream in response body" schema: type: "string" format: "binary" 404: description: "no such task" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such task: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID of the task" type: "string" - name: "details" in: "query" description: "Show task context and extra details provided to logs." type: "boolean" default: false - name: "follow" in: "query" description: "Keep connection after returning logs." type: "boolean" default: false - name: "stdout" in: "query" description: "Return logs from `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Return logs from `stderr`" type: "boolean" default: false - name: "since" in: "query" description: "Only return logs since this time, as a UNIX timestamp" type: "integer" default: 0 - name: "timestamps" in: "query" description: "Add timestamps to every log line" type: "boolean" default: false - name: "tail" in: "query" description: | Only return this number of log lines from the end of the logs. Specify as an integer or `all` to output all log lines. type: "string" default: "all" tags: ["Task"] /secrets: get: summary: "List secrets" operationId: "SecretList" produces: - "application/json" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Secret" example: - ID: "blt1owaxmitz71s9v5zh81zun" Version: Index: 85 CreatedAt: "2017-07-20T13:55:28.678958722Z" UpdatedAt: "2017-07-20T13:55:28.678958722Z" Spec: Name: "mysql-passwd" Labels: some.label: "some.value" Driver: Name: "secret-bucket" Options: OptionA: "value for driver option A" OptionB: "value for driver option B" - ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "app-dev.crt" Labels: foo: "bar" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the secrets list. 
Available filters: - `id=<secret id>` - `label=<key> or label=<key>=value` - `name=<secret name>` - `names=<secret name>` tags: ["Secret"] /secrets/create: post: summary: "Create a secret" operationId: "SecretCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 409: description: "name conflicts with an existing object" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" schema: allOf: - $ref: "#/definitions/SecretSpec" - type: "object" example: Name: "app-key.crt" Labels: foo: "bar" Data: "VEhJUyBJUyBOT1QgQSBSRUFMIENFUlRJRklDQVRFCg==" Driver: Name: "secret-bucket" Options: OptionA: "value for driver option A" OptionB: "value for driver option B" tags: ["Secret"] /secrets/{id}: get: summary: "Inspect a secret" operationId: "SecretInspect" produces: - "application/json" responses: 200: description: "no error" schema: $ref: "#/definitions/Secret" examples: application/json: ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "app-dev.crt" Labels: foo: "bar" Driver: Name: "secret-bucket" Options: OptionA: "value for driver option A" OptionB: "value for driver option B" 404: description: "secret not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the secret" tags: ["Secret"] delete: summary: "Delete a secret" operationId: "SecretDelete" produces: - "application/json" responses: 204: description: "no error" 404: description: "secret not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the secret" tags: ["Secret"] /secrets/{id}/update: post: summary: "Update a Secret" operationId: "SecretUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such secret" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the secret" type: "string" required: true - name: "body" in: "body" schema: $ref: "#/definitions/SecretSpec" description: | The spec of the secret to update. Currently, only the Labels field can be updated. All other fields must remain unchanged from the [SecretInspect endpoint](#operation/SecretInspect) response values. - name: "version" in: "query" description: | The version number of the secret object being updated. This is required to avoid conflicting writes. 
type: "integer" format: "int64" required: true tags: ["Secret"] /configs: get: summary: "List configs" operationId: "ConfigList" produces: - "application/json" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Config" example: - ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "server.conf" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the configs list. Available filters: - `id=<config id>` - `label=<key> or label=<key>=value` - `name=<config name>` - `names=<config name>` tags: ["Config"] /configs/create: post: summary: "Create a config" operationId: "ConfigCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 409: description: "name conflicts with an existing object" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" schema: allOf: - $ref: "#/definitions/ConfigSpec" - type: "object" example: Name: "server.conf" Labels: foo: "bar" Data: "VEhJUyBJUyBOT1QgQSBSRUFMIENFUlRJRklDQVRFCg==" tags: ["Config"] /configs/{id}: get: summary: "Inspect a config" operationId: "ConfigInspect" produces: - "application/json" responses: 200: description: "no error" schema: $ref: "#/definitions/Config" examples: application/json: ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "app-dev.crt" 404: description: "config not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the config" tags: ["Config"] delete: summary: "Delete a config" operationId: "ConfigDelete" produces: - "application/json" responses: 204: description: "no error" 404: description: "config not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the config" tags: ["Config"] /configs/{id}/update: post: summary: "Update a Config" operationId: "ConfigUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such config" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the config" type: "string" required: true - name: "body" in: "body" schema: $ref: "#/definitions/ConfigSpec" description: | The spec of the config to update. 
Currently, only the Labels field can be updated. All other fields must remain unchanged from the [ConfigInspect endpoint](#operation/ConfigInspect) response values. - name: "version" in: "query" description: | The version number of the config object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true tags: ["Config"] /distribution/{name}/json: get: summary: "Get image information from the registry" description: | Return image digest and platform information by contacting the registry. operationId: "DistributionInspect" produces: - "application/json" responses: 200: description: "descriptor and platform information" schema: type: "object" x-go-name: DistributionInspect title: "DistributionInspectResponse" required: [Descriptor, Platforms] properties: Descriptor: type: "object" description: | A descriptor struct containing digest, media type, and size. properties: mediaType: type: "string" size: type: "integer" format: "int64" digest: type: "string" urls: type: "array" items: type: "string" Platforms: type: "array" description: | An array containing all platforms supported by the image. items: type: "object" properties: architecture: type: "string" os: type: "string" os.version: type: "string" os.features: type: "array" items: type: "string" variant: type: "string" Features: type: "array" items: type: "string" examples: application/json: Descriptor: MediaType: "application/vnd.docker.distribution.manifest.v2+json" Digest: "sha256:c0537ff6a5218ef531ece93d4984efc99bbf3f7497c0a7726c88e2bb7584dc96" Size: 3987495 URLs: - "" Platforms: - architecture: "amd64" os: "linux" os.version: "" os.features: - "" variant: "" Features: - "" 401: description: "Failed authentication or no image found" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such image: someimage (tag: latest)" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or id" type: "string" required: true tags: ["Distribution"] /session: post: summary: "Initialize interactive session" description: | Start a new interactive session with a server. Session allows server to call back to the client for advanced capabilities. ### Hijacking This endpoint hijacks the HTTP connection to HTTP2 transport that allows the client to expose gPRC services on that connection. For example, the client sends this request to upgrade the connection: ``` POST /session HTTP/1.1 Upgrade: h2c Connection: Upgrade ``` The Docker daemon responds with a `101 UPGRADED` response follow with the raw stream: ``` HTTP/1.1 101 UPGRADED Connection: Upgrade Upgrade: h2c ``` operationId: "Session" produces: - "application/vnd.docker.raw-stream" responses: 101: description: "no error, hijacking successful" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Session"]
gesellix
41568dfc665ad4033d7fc860a5dac8102915008e
9bc0c4903f7f02ce287b9918f64795368e507f9d
Looks like this definition is referenced in two places in the swagger;

- `/system/df` -> `responses.200.Containers`, which uses it as items in an array; so the swagger incorrectly defined it as an "array of arrays"
- `/containers/json` -> `responses -> 200`, which used it directly, so an "array of objects"

I think your change makes sense; i.e. change `ContainerSummary` to an object (not an array of objects), so to make swagger validation pass, the `/containers/json` response needs to be updated.

If you amend the third commit ("Fix ContainerSummary swagger docs ") with the patch below, swagger validation should pass:

```patch
diff --git a/api/swagger.yaml b/api/swagger.yaml
index fa71b38f09..efa2b81587 100644
--- a/api/swagger.yaml
+++ b/api/swagger.yaml
@@ -5261,7 +5261,9 @@ paths:
         200:
           description: "no error"
           schema:
-            $ref: "#/definitions/ContainerSummary"
+            type: "array"
+            items:
+              $ref: "#/definitions/ContainerSummary"
           examples:
             application/json:
             - Id: "8dfafdbc3a40"
```
thaJeztah
4,553
moby/moby
42,621
Improve swagger.yaml to match the 1.41 api version
I recently tried to rely on the swagger.yaml for code generation of an api client. Some parts required manual fixes of the swagger docs to match the current api version 1.41. Each of the changes described below is in a dedicated commit, because they aim at different parts of the api description and aren't related per se. If you'd prefer to merge them all into a single commit, please leave a note.

**- What I did**

1. Add RestartPolicy "no" to swagger docs
2. Add "changes" query parameter for /image/create to swagger docs
3. Fix ContainerSummary swagger docs (flattened)
4. Use explicit object names for improved swagger-based code generation (otherwise generic names had been generated; see the sketch after this description)
5. Fix swagger docs to match the [opencontainers image-spec](https://github.com/opencontainers/image-spec/blob/5ced465cc63831baf25330431a428a9d9444e192/specs-go/v1/descriptor.go)

**- How I did it**

Used the 1.41 swagger.yaml with swagger's codegen to generate Java code. Used that code to perform requests to a 1.41 Docker engine. Applied some fixes, and used `hack/validate/swagger` and `make swagger-docs` to validate the changes.

**- How to verify it**

Compare the actual 1.41 api with the swagger.yaml.

**- Description for the changelog**

Update the swagger.yaml to match the version 1.41 api.

**- A picture of a cute animal (not mandatory but encouraged)**

![062321_JC_mountain-cat_feat](https://user-images.githubusercontent.com/432791/125209896-77588e80-e29c-11eb-9a5f-439f5f04a69b.jpeg)
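For context on item 4: the spec names inline request/response schemas by giving them a `title` (and occasionally an `x-go-name`), as with `SwarmInitRequest` and `DistributionInspectResponse` elsewhere in this file, so that code generators emit a proper model name instead of a generic one such as `InlineResponse200`. The snippet below is only an illustrative sketch of that pattern; the endpoint, operation, and property names are hypothetical and are not part of the actual patch.

```yaml
# Hypothetical excerpt (not from the real swagger.yaml), showing the naming pattern:
paths:
  /example:
    get:
      summary: "Illustrative endpoint"
      operationId: "ExampleGet"
      responses:
        200:
          description: "no error"
          schema:
            type: "object"
            # Without an explicit name, many generators fall back to a generic
            # name like "InlineResponse200"; the title yields a stable model name.
            title: "ExampleGetResponse"
            properties:
              Message:
                type: "string"
```

Most Swagger 2.0 code generators pick up the schema `title` when resolving such inline models, which is why the change adds explicit names rather than leaving the schemas anonymous.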
null
2021-07-11 21:09:21+00:00
2021-08-21 22:28:57+00:00
api/swagger.yaml
# A Swagger 2.0 (a.k.a. OpenAPI) definition of the Engine API. # # This is used for generating API documentation and the types used by the # client/server. See api/README.md for more information. # # Some style notes: # - This file is used by ReDoc, which allows GitHub Flavored Markdown in # descriptions. # - There is no maximum line length, for ease of editing and pretty diffs. # - operationIds are in the format "NounVerb", with a singular noun. swagger: "2.0" schemes: - "http" - "https" produces: - "application/json" - "text/plain" consumes: - "application/json" - "text/plain" basePath: "/v1.42" info: title: "Docker Engine API" version: "1.42" x-logo: url: "https://docs.docker.com/images/logo-docker-main.png" description: | The Engine API is an HTTP API served by Docker Engine. It is the API the Docker client uses to communicate with the Engine, so everything the Docker client can do can be done with the API. Most of the client's commands map directly to API endpoints (e.g. `docker ps` is `GET /containers/json`). The notable exception is running containers, which consists of several API calls. # Errors The API uses standard HTTP status codes to indicate the success or failure of the API call. The body of the response will be JSON in the following format: ``` { "message": "page not found" } ``` # Versioning The API is usually changed in each release, so API calls are versioned to ensure that clients don't break. To lock to a specific version of the API, you prefix the URL with its version, for example, call `/v1.30/info` to use the v1.30 version of the `/info` endpoint. If the API version specified in the URL is not supported by the daemon, a HTTP `400 Bad Request` error message is returned. If you omit the version-prefix, the current version of the API (v1.42) is used. For example, calling `/info` is the same as calling `/v1.42/info`. Using the API without a version-prefix is deprecated and will be removed in a future release. Engine releases in the near future should support this version of the API, so your client will continue to work even if it is talking to a newer Engine. The API uses an open schema model, which means server may add extra properties to responses. Likewise, the server will ignore any extra query parameters and request body properties. When you write clients, you need to ignore additional properties in responses to ensure they do not break when talking to newer daemons. # Authentication Authentication for registries is handled client side. The client has to send authentication details to various endpoints that need to communicate with registries, such as `POST /images/(name)/push`. These are sent as `X-Registry-Auth` header as a [base64url encoded](https://tools.ietf.org/html/rfc4648#section-5) (JSON) string with the following structure: ``` { "username": "string", "password": "string", "email": "string", "serveraddress": "string" } ``` The `serveraddress` is a domain/IP without a protocol. Throughout this structure, double quotes are required. If you have already got an identity token from the [`/auth` endpoint](#operation/SystemAuth), you can just pass this instead of credentials: ``` { "identitytoken": "9cbaf023786cd7..." } ``` # The tags on paths define the menu sections in the ReDoc documentation, so # the usage of tags must make sense for that: # - They should be singular, not plural. # - There should not be too many tags, or the menu becomes unwieldy. 
For # example, it is preferable to add a path to the "System" tag instead of # creating a tag with a single path in it. # - The order of tags in this list defines the order in the menu. tags: # Primary objects - name: "Container" x-displayName: "Containers" description: | Create and manage containers. - name: "Image" x-displayName: "Images" - name: "Network" x-displayName: "Networks" description: | Networks are user-defined networks that containers can be attached to. See the [networking documentation](https://docs.docker.com/network/) for more information. - name: "Volume" x-displayName: "Volumes" description: | Create and manage persistent storage that can be attached to containers. - name: "Exec" x-displayName: "Exec" description: | Run new commands inside running containers. Refer to the [command-line reference](https://docs.docker.com/engine/reference/commandline/exec/) for more information. To exec a command in a container, you first need to create an exec instance, then start it. These two API endpoints are wrapped up in a single command-line command, `docker exec`. # Swarm things - name: "Swarm" x-displayName: "Swarm" description: | Engines can be clustered together in a swarm. Refer to the [swarm mode documentation](https://docs.docker.com/engine/swarm/) for more information. - name: "Node" x-displayName: "Nodes" description: | Nodes are instances of the Engine participating in a swarm. Swarm mode must be enabled for these endpoints to work. - name: "Service" x-displayName: "Services" description: | Services are the definitions of tasks to run on a swarm. Swarm mode must be enabled for these endpoints to work. - name: "Task" x-displayName: "Tasks" description: | A task is a container running on a swarm. It is the atomic scheduling unit of swarm. Swarm mode must be enabled for these endpoints to work. - name: "Secret" x-displayName: "Secrets" description: | Secrets are sensitive data that can be used by services. Swarm mode must be enabled for these endpoints to work. - name: "Config" x-displayName: "Configs" description: | Configs are application configurations that can be used by services. Swarm mode must be enabled for these endpoints to work. 
# System things - name: "Plugin" x-displayName: "Plugins" - name: "System" x-displayName: "System" definitions: Port: type: "object" description: "An open port on a container" required: [PrivatePort, Type] properties: IP: type: "string" format: "ip-address" description: "Host IP address that the container's port is mapped to" PrivatePort: type: "integer" format: "uint16" x-nullable: false description: "Port on the container" PublicPort: type: "integer" format: "uint16" description: "Port exposed on the host" Type: type: "string" x-nullable: false enum: ["tcp", "udp", "sctp"] example: PrivatePort: 8080 PublicPort: 80 Type: "tcp" MountPoint: type: "object" description: "A mount point inside a container" properties: Type: type: "string" Name: type: "string" Source: type: "string" Destination: type: "string" Driver: type: "string" Mode: type: "string" RW: type: "boolean" Propagation: type: "string" DeviceMapping: type: "object" description: "A device mapping between the host and container" properties: PathOnHost: type: "string" PathInContainer: type: "string" CgroupPermissions: type: "string" example: PathOnHost: "/dev/deviceName" PathInContainer: "/dev/deviceName" CgroupPermissions: "mrw" DeviceRequest: type: "object" description: "A request for devices to be sent to device drivers" properties: Driver: type: "string" example: "nvidia" Count: type: "integer" example: -1 DeviceIDs: type: "array" items: type: "string" example: - "0" - "1" - "GPU-fef8089b-4820-abfc-e83e-94318197576e" Capabilities: description: | A list of capabilities; an OR list of AND lists of capabilities. type: "array" items: type: "array" items: type: "string" example: # gpu AND nvidia AND compute - ["gpu", "nvidia", "compute"] Options: description: | Driver-specific options, specified as a key/value pairs. These options are passed directly to the driver. type: "object" additionalProperties: type: "string" ThrottleDevice: type: "object" properties: Path: description: "Device path" type: "string" Rate: description: "Rate" type: "integer" format: "int64" minimum: 0 Mount: type: "object" properties: Target: description: "Container path." type: "string" Source: description: "Mount source (e.g. a volume name, a host path)." type: "string" Type: description: | The mount type. Available types: - `bind` Mounts a file or directory from the host into the container. Must exist prior to creating the container. - `volume` Creates a volume with the given name and options (or uses a pre-existing volume with the same name and options). These are **not** removed when the container is removed. - `tmpfs` Create a tmpfs with the given options. The mount source cannot be specified for tmpfs. - `npipe` Mounts a named pipe from the host into the container. Must exist prior to creating the container. type: "string" enum: - "bind" - "volume" - "tmpfs" - "npipe" ReadOnly: description: "Whether the mount should be read-only." type: "boolean" Consistency: description: "The consistency requirement for the mount: `default`, `consistent`, `cached`, or `delegated`." type: "string" BindOptions: description: "Optional configuration for the `bind` type." type: "object" properties: Propagation: description: "A propagation mode with the value `[r]private`, `[r]shared`, or `[r]slave`." type: "string" enum: - "private" - "rprivate" - "shared" - "rshared" - "slave" - "rslave" NonRecursive: description: "Disable recursive bind mount." type: "boolean" default: false VolumeOptions: description: "Optional configuration for the `volume` type." 
type: "object" properties: NoCopy: description: "Populate volume with data from the target." type: "boolean" default: false Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" DriverConfig: description: "Map of driver specific options" type: "object" properties: Name: description: "Name of the driver to use to create the volume." type: "string" Options: description: "key/value map of driver specific options." type: "object" additionalProperties: type: "string" TmpfsOptions: description: "Optional configuration for the `tmpfs` type." type: "object" properties: SizeBytes: description: "The size for the tmpfs mount in bytes." type: "integer" format: "int64" Mode: description: "The permission mode for the tmpfs mount in an integer." type: "integer" RestartPolicy: description: | The behavior to apply when the container exits. The default is not to restart. An ever increasing delay (double the previous delay, starting at 100ms) is added before each restart to prevent flooding the server. type: "object" properties: Name: type: "string" description: | - Empty string means not to restart - `always` Always restart - `unless-stopped` Restart always except when the user has manually stopped the container - `on-failure` Restart only when the container exit code is non-zero enum: - "" - "always" - "unless-stopped" - "on-failure" MaximumRetryCount: type: "integer" description: | If `on-failure` is used, the number of times to retry before giving up. Resources: description: "A container's resources (cgroups config, ulimits, etc)" type: "object" properties: # Applicable to all platforms CpuShares: description: | An integer value representing this container's relative CPU weight versus other containers. type: "integer" Memory: description: "Memory limit in bytes." type: "integer" format: "int64" default: 0 # Applicable to UNIX platforms CgroupParent: description: | Path to `cgroups` under which the container's `cgroup` is created. If the path is not absolute, the path is considered to be relative to the `cgroups` path of the init process. Cgroups are created if they do not already exist. type: "string" BlkioWeight: description: "Block IO weight (relative weight)." type: "integer" minimum: 0 maximum: 1000 BlkioWeightDevice: description: | Block IO weight (relative device weight) in the form: ``` [{"Path": "device_path", "Weight": weight}] ``` type: "array" items: type: "object" properties: Path: type: "string" Weight: type: "integer" minimum: 0 BlkioDeviceReadBps: description: | Limit read rate (bytes per second) from a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" BlkioDeviceWriteBps: description: | Limit write rate (bytes per second) to a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" BlkioDeviceReadIOps: description: | Limit read rate (IO per second) from a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" BlkioDeviceWriteIOps: description: | Limit write rate (IO per second) to a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" CpuPeriod: description: "The length of a CPU period in microseconds." type: "integer" format: "int64" CpuQuota: description: | Microseconds of CPU time that the container can get in a CPU period. 
type: "integer" format: "int64" CpuRealtimePeriod: description: | The length of a CPU real-time period in microseconds. Set to 0 to allocate no time allocated to real-time tasks. type: "integer" format: "int64" CpuRealtimeRuntime: description: | The length of a CPU real-time runtime in microseconds. Set to 0 to allocate no time allocated to real-time tasks. type: "integer" format: "int64" CpusetCpus: description: | CPUs in which to allow execution (e.g., `0-3`, `0,1`). type: "string" example: "0-3" CpusetMems: description: | Memory nodes (MEMs) in which to allow execution (0-3, 0,1). Only effective on NUMA systems. type: "string" Devices: description: "A list of devices to add to the container." type: "array" items: $ref: "#/definitions/DeviceMapping" DeviceCgroupRules: description: "a list of cgroup rules to apply to the container" type: "array" items: type: "string" example: "c 13:* rwm" DeviceRequests: description: | A list of requests for devices to be sent to device drivers. type: "array" items: $ref: "#/definitions/DeviceRequest" KernelMemory: description: | Kernel memory limit in bytes. <p><br /></p> > **Deprecated**: This field is deprecated as the kernel 5.4 deprecated > `kmem.limit_in_bytes`. type: "integer" format: "int64" example: 209715200 KernelMemoryTCP: description: "Hard limit for kernel TCP buffer memory (in bytes)." type: "integer" format: "int64" MemoryReservation: description: "Memory soft limit in bytes." type: "integer" format: "int64" MemorySwap: description: | Total memory limit (memory + swap). Set as `-1` to enable unlimited swap. type: "integer" format: "int64" MemorySwappiness: description: | Tune a container's memory swappiness behavior. Accepts an integer between 0 and 100. type: "integer" format: "int64" minimum: 0 maximum: 100 NanoCpus: description: "CPU quota in units of 10<sup>-9</sup> CPUs." type: "integer" format: "int64" OomKillDisable: description: "Disable OOM Killer for the container." type: "boolean" Init: description: | Run an init inside the container that forwards signals and reaps processes. This field is omitted if empty, and the default (as configured on the daemon) is used. type: "boolean" x-nullable: true PidsLimit: description: | Tune a container's PIDs limit. Set `0` or `-1` for unlimited, or `null` to not change. type: "integer" format: "int64" x-nullable: true Ulimits: description: | A list of resource limits to set in the container. For example: ``` {"Name": "nofile", "Soft": 1024, "Hard": 2048} ``` type: "array" items: type: "object" properties: Name: description: "Name of ulimit" type: "string" Soft: description: "Soft limit" type: "integer" Hard: description: "Hard limit" type: "integer" # Applicable to Windows CpuCount: description: | The number of usable CPUs (Windows only). On Windows Server containers, the processor resource controls are mutually exclusive. The order of precedence is `CPUCount` first, then `CPUShares`, and `CPUPercent` last. type: "integer" format: "int64" CpuPercent: description: | The usable percentage of the available CPUs (Windows only). On Windows Server containers, the processor resource controls are mutually exclusive. The order of precedence is `CPUCount` first, then `CPUShares`, and `CPUPercent` last. type: "integer" format: "int64" IOMaximumIOps: description: "Maximum IOps for the container system drive (Windows only)" type: "integer" format: "int64" IOMaximumBandwidth: description: | Maximum IO in bytes per second for the container system drive (Windows only). 
type: "integer" format: "int64" Limit: description: | An object describing a limit on resources which can be requested by a task. type: "object" properties: NanoCPUs: type: "integer" format: "int64" example: 4000000000 MemoryBytes: type: "integer" format: "int64" example: 8272408576 Pids: description: | Limits the maximum number of PIDs in the container. Set `0` for unlimited. type: "integer" format: "int64" default: 0 example: 100 ResourceObject: description: | An object describing the resources which can be advertised by a node and requested by a task. type: "object" properties: NanoCPUs: type: "integer" format: "int64" example: 4000000000 MemoryBytes: type: "integer" format: "int64" example: 8272408576 GenericResources: $ref: "#/definitions/GenericResources" GenericResources: description: | User-defined resources can be either Integer resources (e.g, `SSD=3`) or String resources (e.g, `GPU=UUID1`). type: "array" items: type: "object" properties: NamedResourceSpec: type: "object" properties: Kind: type: "string" Value: type: "string" DiscreteResourceSpec: type: "object" properties: Kind: type: "string" Value: type: "integer" format: "int64" example: - DiscreteResourceSpec: Kind: "SSD" Value: 3 - NamedResourceSpec: Kind: "GPU" Value: "UUID1" - NamedResourceSpec: Kind: "GPU" Value: "UUID2" HealthConfig: description: "A test to perform to check that the container is healthy." type: "object" properties: Test: description: | The test to perform. Possible values are: - `[]` inherit healthcheck from image or parent image - `["NONE"]` disable healthcheck - `["CMD", args...]` exec arguments directly - `["CMD-SHELL", command]` run command with system's default shell type: "array" items: type: "string" Interval: description: | The time to wait between checks in nanoseconds. It should be 0 or at least 1000000 (1 ms). 0 means inherit. type: "integer" Timeout: description: | The time to wait before considering the check to have hung. It should be 0 or at least 1000000 (1 ms). 0 means inherit. type: "integer" Retries: description: | The number of consecutive failures needed to consider a container as unhealthy. 0 means inherit. type: "integer" StartPeriod: description: | Start period for the container to initialize before starting health-retries countdown in nanoseconds. It should be 0 or at least 1000000 (1 ms). 0 means inherit. type: "integer" Health: description: | Health stores information about the container's healthcheck results. type: "object" properties: Status: description: | Status is one of `none`, `starting`, `healthy` or `unhealthy` - "none" Indicates there is no healthcheck - "starting" Starting indicates that the container is not yet ready - "healthy" Healthy indicates that the container is running correctly - "unhealthy" Unhealthy indicates that the container has a problem type: "string" enum: - "none" - "starting" - "healthy" - "unhealthy" example: "healthy" FailingStreak: description: "FailingStreak is the number of consecutive failures" type: "integer" example: 0 Log: type: "array" description: | Log contains the last few results (oldest first) items: x-nullable: true $ref: "#/definitions/HealthcheckResult" HealthcheckResult: description: | HealthcheckResult stores information about a single run of a healthcheck probe type: "object" properties: Start: description: | Date and time at which this check started in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. 
type: "string" format: "date-time" example: "2020-01-04T10:44:24.496525531Z" End: description: | Date and time at which this check ended in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2020-01-04T10:45:21.364524523Z" ExitCode: description: | ExitCode meanings: - `0` healthy - `1` unhealthy - `2` reserved (considered unhealthy) - other values: error running probe type: "integer" example: 0 Output: description: "Output from last check" type: "string" HostConfig: description: "Container configuration that depends on the host we are running on" allOf: - $ref: "#/definitions/Resources" - type: "object" properties: # Applicable to all platforms Binds: type: "array" description: | A list of volume bindings for this container. Each volume binding is a string in one of these forms: - `host-src:container-dest[:options]` to bind-mount a host path into the container. Both `host-src`, and `container-dest` must be an _absolute_ path. - `volume-name:container-dest[:options]` to bind-mount a volume managed by a volume driver into the container. `container-dest` must be an _absolute_ path. `options` is an optional, comma-delimited list of: - `nocopy` disables automatic copying of data from the container path to the volume. The `nocopy` flag only applies to named volumes. - `[ro|rw]` mounts a volume read-only or read-write, respectively. If omitted or set to `rw`, volumes are mounted read-write. - `[z|Z]` applies SELinux labels to allow or deny multiple containers to read and write to the same volume. - `z`: a _shared_ content label is applied to the content. This label indicates that multiple containers can share the volume content, for both reading and writing. - `Z`: a _private unshared_ label is applied to the content. This label indicates that only the current container can use a private volume. Labeling systems such as SELinux require proper labels to be placed on volume content that is mounted into a container. Without a label, the security system can prevent a container's processes from using the content. By default, the labels set by the host operating system are not modified. - `[[r]shared|[r]slave|[r]private]` specifies mount [propagation behavior](https://www.kernel.org/doc/Documentation/filesystems/sharedsubtree.txt). This only applies to bind-mounted volumes, not internal volumes or named volumes. Mount propagation requires the source mount point (the location where the source directory is mounted in the host operating system) to have the correct propagation properties. For shared volumes, the source mount point must be set to `shared`. For slave volumes, the mount must be set to either `shared` or `slave`. items: type: "string" ContainerIDFile: type: "string" description: "Path to a file where the container ID is written" LogConfig: type: "object" description: "The logging configuration for this container" properties: Type: type: "string" enum: - "json-file" - "syslog" - "journald" - "gelf" - "fluentd" - "awslogs" - "splunk" - "etwlogs" - "none" Config: type: "object" additionalProperties: type: "string" NetworkMode: type: "string" description: | Network mode to use for this container. Supported standard values are: `bridge`, `host`, `none`, and `container:<name|id>`. Any other value is taken as a custom network's name to which this container should connect to. 
PortBindings: $ref: "#/definitions/PortMap" RestartPolicy: $ref: "#/definitions/RestartPolicy" AutoRemove: type: "boolean" description: | Automatically remove the container when the container's process exits. This has no effect if `RestartPolicy` is set. VolumeDriver: type: "string" description: "Driver that this container uses to mount volumes." VolumesFrom: type: "array" description: | A list of volumes to inherit from another container, specified in the form `<container name>[:<ro|rw>]`. items: type: "string" Mounts: description: | Specification for mounts to be added to the container. type: "array" items: $ref: "#/definitions/Mount" # Applicable to UNIX platforms CapAdd: type: "array" description: | A list of kernel capabilities to add to the container. Conflicts with option 'Capabilities'. items: type: "string" CapDrop: type: "array" description: | A list of kernel capabilities to drop from the container. Conflicts with option 'Capabilities'. items: type: "string" CgroupnsMode: type: "string" enum: - "private" - "host" description: | cgroup namespace mode for the container. Possible values are: - `"private"`: the container runs in its own private cgroup namespace - `"host"`: use the host system's cgroup namespace If not specified, the daemon default is used, which can either be `"private"` or `"host"`, depending on daemon version, kernel support and configuration. Dns: type: "array" description: "A list of DNS servers for the container to use." items: type: "string" DnsOptions: type: "array" description: "A list of DNS options." items: type: "string" DnsSearch: type: "array" description: "A list of DNS search domains." items: type: "string" ExtraHosts: type: "array" description: | A list of hostnames/IP mappings to add to the container's `/etc/hosts` file. Specified in the form `["hostname:IP"]`. items: type: "string" GroupAdd: type: "array" description: | A list of additional groups that the container process will run as. items: type: "string" IpcMode: type: "string" description: | IPC sharing mode for the container. Possible values are: - `"none"`: own private IPC namespace, with /dev/shm not mounted - `"private"`: own private IPC namespace - `"shareable"`: own private IPC namespace, with a possibility to share it with other containers - `"container:<name|id>"`: join another (shareable) container's IPC namespace - `"host"`: use the host system's IPC namespace If not specified, daemon default is used, which can either be `"private"` or `"shareable"`, depending on daemon version and configuration. Cgroup: type: "string" description: "Cgroup to use for the container." Links: type: "array" description: | A list of links for the container in the form `container_name:alias`. items: type: "string" OomScoreAdj: type: "integer" description: | An integer value containing the score given to the container in order to tune OOM killer preferences. example: 500 PidMode: type: "string" description: | Set the PID (Process) Namespace mode for the container. It can be either: - `"container:<name|id>"`: joins another container's PID namespace - `"host"`: use the host's PID namespace inside the container Privileged: type: "boolean" description: "Gives the container full access to the host." PublishAllPorts: type: "boolean" description: | Allocates an ephemeral host port for all of a container's exposed ports. Ports are de-allocated when the container stops and allocated when the container starts. The allocated port might be changed when restarting the container. 
The port is selected from the ephemeral port range that depends on the kernel. For example, on Linux the range is defined by `/proc/sys/net/ipv4/ip_local_port_range`. ReadonlyRootfs: type: "boolean" description: "Mount the container's root filesystem as read only." SecurityOpt: type: "array" description: "A list of string values to customize labels for MLS systems, such as SELinux." items: type: "string" StorageOpt: type: "object" description: | Storage driver options for this container, in the form `{"size": "120G"}`. additionalProperties: type: "string" Tmpfs: type: "object" description: | A map of container directories which should be replaced by tmpfs mounts, and their corresponding mount options. For example: ``` { "/run": "rw,noexec,nosuid,size=65536k" } ``` additionalProperties: type: "string" UTSMode: type: "string" description: "UTS namespace to use for the container." UsernsMode: type: "string" description: | Sets the usernamespace mode for the container when usernamespace remapping option is enabled. ShmSize: type: "integer" description: | Size of `/dev/shm` in bytes. If omitted, the system uses 64MB. minimum: 0 Sysctls: type: "object" description: | A list of kernel parameters (sysctls) to set in the container. For example: ``` {"net.ipv4.ip_forward": "1"} ``` additionalProperties: type: "string" Runtime: type: "string" description: "Runtime to use with this container." # Applicable to Windows ConsoleSize: type: "array" description: | Initial console size, as an `[height, width]` array. (Windows only) minItems: 2 maxItems: 2 items: type: "integer" minimum: 0 Isolation: type: "string" description: | Isolation technology of the container. (Windows only) enum: - "default" - "process" - "hyperv" MaskedPaths: type: "array" description: | The list of paths to be masked inside the container (this overrides the default set of paths). items: type: "string" ReadonlyPaths: type: "array" description: | The list of paths to be set as read-only inside the container (this overrides the default set of paths). items: type: "string" ContainerConfig: description: "Configuration for a container that is portable between hosts" type: "object" properties: Hostname: description: "The hostname to use for the container, as a valid RFC 1123 hostname." type: "string" Domainname: description: "The domain name to use for the container." type: "string" User: description: "The user that commands are run as inside the container." type: "string" AttachStdin: description: "Whether to attach to `stdin`." type: "boolean" default: false AttachStdout: description: "Whether to attach to `stdout`." type: "boolean" default: true AttachStderr: description: "Whether to attach to `stderr`." type: "boolean" default: true ExposedPorts: description: | An object mapping ports to an empty object in the form: `{"<port>/<tcp|udp|sctp>": {}}` type: "object" additionalProperties: type: "object" enum: - {} default: {} Tty: description: | Attach standard streams to a TTY, including `stdin` if it is not closed. type: "boolean" default: false OpenStdin: description: "Open `stdin`" type: "boolean" default: false StdinOnce: description: "Close `stdin` after one attached client disconnects" type: "boolean" default: false Env: description: | A list of environment variables to set inside the container in the form `["VAR=value", ...]`. A variable without `=` is removed from the environment, rather than to have an empty value. type: "array" items: type: "string" Cmd: description: | Command to run specified as a string or an array of strings. 
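# Editor's note (illustrative only): a minimal container configuration using the fields
# above; the image, variables, and port are hypothetical. An `Env` entry without `=`
# (here `DEBUG`) removes that variable from the environment.
#
#   {
#     "Image": "nginx:alpine",
#     "Env": ["APP_MODE=production", "DEBUG"],
#     "ExposedPorts": { "80/tcp": {} },
#     "Cmd": ["nginx", "-g", "daemon off;"]
#   }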
type: "array" items: type: "string" Healthcheck: $ref: "#/definitions/HealthConfig" ArgsEscaped: description: "Command is already escaped (Windows only)" type: "boolean" Image: description: | The name of the image to use when creating the container/ type: "string" Volumes: description: | An object mapping mount point paths inside the container to empty objects. type: "object" additionalProperties: type: "object" enum: - {} default: {} WorkingDir: description: "The working directory for commands to run in." type: "string" Entrypoint: description: | The entry point for the container as a string or an array of strings. If the array consists of exactly one empty string (`[""]`) then the entry point is reset to system default (i.e., the entry point used by docker when there is no `ENTRYPOINT` instruction in the `Dockerfile`). type: "array" items: type: "string" NetworkDisabled: description: "Disable networking for the container." type: "boolean" MacAddress: description: "MAC address of the container." type: "string" OnBuild: description: | `ONBUILD` metadata that were defined in the image's `Dockerfile`. type: "array" items: type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" StopSignal: description: | Signal to stop a container as a string or unsigned integer. type: "string" default: "SIGTERM" StopTimeout: description: "Timeout to stop a container in seconds." type: "integer" default: 10 Shell: description: | Shell for when `RUN`, `CMD`, and `ENTRYPOINT` uses a shell. type: "array" items: type: "string" NetworkingConfig: description: | NetworkingConfig represents the container's networking configuration for each of its interfaces. It is used for the networking configs specified in the `docker create` and `docker network connect` commands. type: "object" properties: EndpointsConfig: description: | A mapping of network name to endpoint configuration for that network. type: "object" additionalProperties: $ref: "#/definitions/EndpointSettings" example: # putting an example here, instead of using the example values from # /definitions/EndpointSettings, because containers/create currently # does not support attaching to multiple networks, so the example request # would be confusing if it showed that multiple networks can be contained # in the EndpointsConfig. # TODO remove once we support multiple networks on container create (see https://github.com/moby/moby/blob/07e6b843594e061f82baa5fa23c2ff7d536c2a05/daemon/create.go#L323) EndpointsConfig: isolated_nw: IPAMConfig: IPv4Address: "172.20.30.33" IPv6Address: "2001:db8:abcd::3033" LinkLocalIPs: - "169.254.34.68" - "fe80::3468" Links: - "container_1" - "container_2" Aliases: - "server_x" - "server_y" NetworkSettings: description: "NetworkSettings exposes the network settings in the API" type: "object" properties: Bridge: description: Name of the network's bridge (for example, `docker0`). type: "string" example: "docker0" SandboxID: description: SandboxID uniquely represents a container's network stack. type: "string" example: "9d12daf2c33f5959c8bf90aa513e4f65b561738661003029ec84830cd503a0c3" HairpinMode: description: | Indicates if hairpin NAT should be enabled on the virtual interface. type: "boolean" example: false LinkLocalIPv6Address: description: IPv6 unicast address using the link-local prefix. type: "string" example: "fe80::42:acff:fe11:1" LinkLocalIPv6PrefixLen: description: Prefix length of the IPv6 unicast address. 
type: "integer" example: "64" Ports: $ref: "#/definitions/PortMap" SandboxKey: description: SandboxKey identifies the sandbox type: "string" example: "/var/run/docker/netns/8ab54b426c38" # TODO is SecondaryIPAddresses actually used? SecondaryIPAddresses: description: "" type: "array" items: $ref: "#/definitions/Address" x-nullable: true # TODO is SecondaryIPv6Addresses actually used? SecondaryIPv6Addresses: description: "" type: "array" items: $ref: "#/definitions/Address" x-nullable: true # TODO properties below are part of DefaultNetworkSettings, which is # marked as deprecated since Docker 1.9 and to be removed in Docker v17.12 EndpointID: description: | EndpointID uniquely represents a service endpoint in a Sandbox. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "b88f5b905aabf2893f3cbc4ee42d1ea7980bbc0a92e2c8922b1e1795298afb0b" Gateway: description: | Gateway address for the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "172.17.0.1" GlobalIPv6Address: description: | Global IPv6 address for the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "2001:db8::5689" GlobalIPv6PrefixLen: description: | Mask length of the global IPv6 address. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "integer" example: 64 IPAddress: description: | IPv4 address for the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "172.17.0.4" IPPrefixLen: description: | Mask length of the IPv4 address. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "integer" example: 16 IPv6Gateway: description: | IPv6 gateway address for this network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. 
Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "2001:db8:2::100" MacAddress: description: | MAC address for the container on the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "02:42:ac:11:00:04" Networks: description: | Information about all networks that the container is connected to. type: "object" additionalProperties: $ref: "#/definitions/EndpointSettings" Address: description: Address represents an IPv4 or IPv6 IP address. type: "object" properties: Addr: description: IP address. type: "string" PrefixLen: description: Mask length of the IP address. type: "integer" PortMap: description: | PortMap describes the mapping of container ports to host ports, using the container's port-number and protocol as key in the format `<port>/<protocol>`, for example, `80/udp`. If a container's port is mapped for multiple protocols, separate entries are added to the mapping table. type: "object" additionalProperties: type: "array" x-nullable: true items: $ref: "#/definitions/PortBinding" example: "443/tcp": - HostIp: "127.0.0.1" HostPort: "4443" "80/tcp": - HostIp: "0.0.0.0" HostPort: "80" - HostIp: "0.0.0.0" HostPort: "8080" "80/udp": - HostIp: "0.0.0.0" HostPort: "80" "53/udp": - HostIp: "0.0.0.0" HostPort: "53" "2377/tcp": null PortBinding: description: | PortBinding represents a binding between a host IP address and a host port. type: "object" properties: HostIp: description: "Host IP address that the container's port is mapped to." type: "string" example: "127.0.0.1" HostPort: description: "Host port number that the container's port is mapped to." type: "string" example: "4443" GraphDriverData: description: "Information about a container's graph driver." 
type: "object" required: [Name, Data] properties: Name: type: "string" x-nullable: false Data: type: "object" x-nullable: false additionalProperties: type: "string" Image: type: "object" required: - Id - Parent - Comment - Created - Container - DockerVersion - Author - Architecture - Os - Size - VirtualSize - GraphDriver - RootFS properties: Id: type: "string" x-nullable: false RepoTags: type: "array" items: type: "string" RepoDigests: type: "array" items: type: "string" Parent: type: "string" x-nullable: false Comment: type: "string" x-nullable: false Created: type: "string" x-nullable: false Container: type: "string" x-nullable: false ContainerConfig: $ref: "#/definitions/ContainerConfig" DockerVersion: type: "string" x-nullable: false Author: type: "string" x-nullable: false Config: $ref: "#/definitions/ContainerConfig" Architecture: type: "string" x-nullable: false Os: type: "string" x-nullable: false OsVersion: type: "string" Size: type: "integer" format: "int64" x-nullable: false VirtualSize: type: "integer" format: "int64" x-nullable: false GraphDriver: $ref: "#/definitions/GraphDriverData" RootFS: type: "object" required: [Type] properties: Type: type: "string" x-nullable: false Layers: type: "array" items: type: "string" BaseLayer: type: "string" Metadata: type: "object" properties: LastTagTime: type: "string" format: "dateTime" ImageSummary: type: "object" required: - Id - ParentId - RepoTags - RepoDigests - Created - Size - SharedSize - VirtualSize - Labels - Containers properties: Id: type: "string" x-nullable: false ParentId: type: "string" x-nullable: false RepoTags: type: "array" x-nullable: false items: type: "string" RepoDigests: type: "array" x-nullable: false items: type: "string" Created: type: "integer" x-nullable: false Size: type: "integer" x-nullable: false SharedSize: type: "integer" x-nullable: false VirtualSize: type: "integer" x-nullable: false Labels: type: "object" x-nullable: false additionalProperties: type: "string" Containers: x-nullable: false type: "integer" AuthConfig: type: "object" properties: username: type: "string" password: type: "string" email: type: "string" serveraddress: type: "string" example: username: "hannibal" password: "xxxx" serveraddress: "https://index.docker.io/v1/" ProcessConfig: type: "object" properties: privileged: type: "boolean" user: type: "string" tty: type: "boolean" entrypoint: type: "string" arguments: type: "array" items: type: "string" Volume: type: "object" required: [Name, Driver, Mountpoint, Labels, Scope, Options] properties: Name: type: "string" description: "Name of the volume." x-nullable: false Driver: type: "string" description: "Name of the volume driver used by the volume." x-nullable: false Mountpoint: type: "string" description: "Mount path of the volume on the host." x-nullable: false CreatedAt: type: "string" format: "dateTime" description: "Date/Time the volume was created." Status: type: "object" description: | Low-level details about the volume, provided by the volume driver. Details are returned as a map with key/value pairs: `{"key":"value","key2":"value2"}`. The `Status` field is optional, and is omitted if the volume driver does not support this feature. additionalProperties: type: "object" Labels: type: "object" description: "User-defined key/value metadata." x-nullable: false additionalProperties: type: "string" Scope: type: "string" description: | The level at which the volume exists. Either `global` for cluster-wide, or `local` for machine level. 
default: "local" x-nullable: false enum: ["local", "global"] Options: type: "object" description: | The driver specific options used when creating the volume. additionalProperties: type: "string" UsageData: type: "object" x-nullable: true required: [Size, RefCount] description: | Usage details about the volume. This information is used by the `GET /system/df` endpoint, and omitted in other endpoints. properties: Size: type: "integer" default: -1 description: | Amount of disk space used by the volume (in bytes). This information is only available for volumes created with the `"local"` volume driver. For volumes created with other volume drivers, this field is set to `-1` ("not available") x-nullable: false RefCount: type: "integer" default: -1 description: | The number of containers referencing this volume. This field is set to `-1` if the reference-count is not available. x-nullable: false example: Name: "tardis" Driver: "custom" Mountpoint: "/var/lib/docker/volumes/tardis" Status: hello: "world" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Scope: "local" CreatedAt: "2016-06-07T20:31:11.853781916Z" Network: type: "object" properties: Name: type: "string" Id: type: "string" Created: type: "string" format: "dateTime" Scope: type: "string" Driver: type: "string" EnableIPv6: type: "boolean" IPAM: $ref: "#/definitions/IPAM" Internal: type: "boolean" Attachable: type: "boolean" Ingress: type: "boolean" Containers: type: "object" additionalProperties: $ref: "#/definitions/NetworkContainer" Options: type: "object" additionalProperties: type: "string" Labels: type: "object" additionalProperties: type: "string" example: Name: "net01" Id: "7d86d31b1478e7cca9ebed7e73aa0fdeec46c5ca29497431d3007d2d9e15ed99" Created: "2016-10-19T04:33:30.360899459Z" Scope: "local" Driver: "bridge" EnableIPv6: false IPAM: Driver: "default" Config: - Subnet: "172.19.0.0/16" Gateway: "172.19.0.1" Options: foo: "bar" Internal: false Attachable: false Ingress: false Containers: 19a4d5d687db25203351ed79d478946f861258f018fe384f229f2efa4b23513c: Name: "test" EndpointID: "628cadb8bcb92de107b2a1e516cbffe463e321f548feb37697cce00ad694f21a" MacAddress: "02:42:ac:13:00:02" IPv4Address: "172.19.0.2/16" IPv6Address: "" Options: com.docker.network.bridge.default_bridge: "true" com.docker.network.bridge.enable_icc: "true" com.docker.network.bridge.enable_ip_masquerade: "true" com.docker.network.bridge.host_binding_ipv4: "0.0.0.0" com.docker.network.bridge.name: "docker0" com.docker.network.driver.mtu: "1500" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" IPAM: type: "object" properties: Driver: description: "Name of the IPAM driver to use." type: "string" default: "default" Config: description: | List of IPAM configuration options, specified as a map: ``` {"Subnet": <CIDR>, "IPRange": <CIDR>, "Gateway": <IP address>, "AuxAddress": <device_name:IP address>} ``` type: "array" items: type: "object" additionalProperties: type: "string" Options: description: "Driver-specific options, specified as a map." 
type: "object" additionalProperties: type: "string" NetworkContainer: type: "object" properties: Name: type: "string" EndpointID: type: "string" MacAddress: type: "string" IPv4Address: type: "string" IPv6Address: type: "string" BuildInfo: type: "object" properties: id: type: "string" stream: type: "string" error: type: "string" errorDetail: $ref: "#/definitions/ErrorDetail" status: type: "string" progress: type: "string" progressDetail: $ref: "#/definitions/ProgressDetail" aux: $ref: "#/definitions/ImageID" BuildCache: type: "object" properties: ID: type: "string" Parent: type: "string" Type: type: "string" Description: type: "string" InUse: type: "boolean" Shared: type: "boolean" Size: description: | Amount of disk space used by the build cache (in bytes). type: "integer" CreatedAt: description: | Date and time at which the build cache was created in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2016-08-18T10:44:24.496525531Z" LastUsedAt: description: | Date and time at which the build cache was last used in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" x-nullable: true example: "2017-08-09T07:09:37.632105588Z" UsageCount: type: "integer" ImageID: type: "object" description: "Image ID or Digest" properties: ID: type: "string" example: ID: "sha256:85f05633ddc1c50679be2b16a0479ab6f7637f8884e0cfe0f4d20e1ebb3d6e7c" CreateImageInfo: type: "object" properties: id: type: "string" error: type: "string" status: type: "string" progress: type: "string" progressDetail: $ref: "#/definitions/ProgressDetail" PushImageInfo: type: "object" properties: error: type: "string" status: type: "string" progress: type: "string" progressDetail: $ref: "#/definitions/ProgressDetail" ErrorDetail: type: "object" properties: code: type: "integer" message: type: "string" ProgressDetail: type: "object" properties: current: type: "integer" total: type: "integer" ErrorResponse: description: "Represents an error." type: "object" required: ["message"] properties: message: description: "The error message." type: "string" x-nullable: false example: message: "Something went wrong." IdResponse: description: "Response to an API call that returns just an Id" type: "object" required: ["Id"] properties: Id: description: "The id of the newly created object." type: "string" x-nullable: false EndpointSettings: description: "Configuration for a network endpoint." type: "object" properties: # Configurations IPAMConfig: $ref: "#/definitions/EndpointIPAMConfig" Links: type: "array" items: type: "string" example: - "container_1" - "container_2" Aliases: type: "array" items: type: "string" example: - "server_x" - "server_y" # Operational data NetworkID: description: | Unique ID of the network. type: "string" example: "08754567f1f40222263eab4102e1c733ae697e8e354aa9cd6e18d7402835292a" EndpointID: description: | Unique ID for the service endpoint in a Sandbox. type: "string" example: "b88f5b905aabf2893f3cbc4ee42d1ea7980bbc0a92e2c8922b1e1795298afb0b" Gateway: description: | Gateway address for this network. type: "string" example: "172.17.0.1" IPAddress: description: | IPv4 address. type: "string" example: "172.17.0.4" IPPrefixLen: description: | Mask length of the IPv4 address. type: "integer" example: 16 IPv6Gateway: description: | IPv6 gateway address. type: "string" example: "2001:db8:2::100" GlobalIPv6Address: description: | Global IPv6 address. 
type: "string" example: "2001:db8::5689" GlobalIPv6PrefixLen: description: | Mask length of the global IPv6 address. type: "integer" format: "int64" example: 64 MacAddress: description: | MAC address for the endpoint on this network. type: "string" example: "02:42:ac:11:00:04" DriverOpts: description: | DriverOpts is a mapping of driver options and values. These options are passed directly to the driver and are driver specific. type: "object" x-nullable: true additionalProperties: type: "string" example: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" EndpointIPAMConfig: description: | EndpointIPAMConfig represents an endpoint's IPAM configuration. type: "object" x-nullable: true properties: IPv4Address: type: "string" example: "172.20.30.33" IPv6Address: type: "string" example: "2001:db8:abcd::3033" LinkLocalIPs: type: "array" items: type: "string" example: - "169.254.34.68" - "fe80::3468" PluginMount: type: "object" x-nullable: false required: [Name, Description, Settable, Source, Destination, Type, Options] properties: Name: type: "string" x-nullable: false example: "some-mount" Description: type: "string" x-nullable: false example: "This is a mount that's used by the plugin." Settable: type: "array" items: type: "string" Source: type: "string" example: "/var/lib/docker/plugins/" Destination: type: "string" x-nullable: false example: "/mnt/state" Type: type: "string" x-nullable: false example: "bind" Options: type: "array" items: type: "string" example: - "rbind" - "rw" PluginDevice: type: "object" required: [Name, Description, Settable, Path] x-nullable: false properties: Name: type: "string" x-nullable: false Description: type: "string" x-nullable: false Settable: type: "array" items: type: "string" Path: type: "string" example: "/dev/fuse" PluginEnv: type: "object" x-nullable: false required: [Name, Description, Settable, Value] properties: Name: x-nullable: false type: "string" Description: x-nullable: false type: "string" Settable: type: "array" items: type: "string" Value: type: "string" PluginInterfaceType: type: "object" x-nullable: false required: [Prefix, Capability, Version] properties: Prefix: type: "string" x-nullable: false Capability: type: "string" x-nullable: false Version: type: "string" x-nullable: false Plugin: description: "A plugin for the Engine API" type: "object" required: [Settings, Enabled, Config, Name] properties: Id: type: "string" example: "5724e2c8652da337ab2eedd19fc6fc0ec908e4bd907c7421bf6a8dfc70c4c078" Name: type: "string" x-nullable: false example: "tiborvass/sample-volume-plugin" Enabled: description: True if the plugin is running. False if the plugin is not running, only installed. type: "boolean" x-nullable: false example: true Settings: description: "Settings that can be modified by users." type: "object" x-nullable: false required: [Args, Devices, Env, Mounts] properties: Mounts: type: "array" items: $ref: "#/definitions/PluginMount" Env: type: "array" items: type: "string" example: - "DEBUG=0" Args: type: "array" items: type: "string" Devices: type: "array" items: $ref: "#/definitions/PluginDevice" PluginReference: description: "plugin remote reference used to push/pull the plugin" type: "string" x-nullable: false example: "localhost:5000/tiborvass/sample-volume-plugin:latest" Config: description: "The config of a plugin." 
type: "object" x-nullable: false required: - Description - Documentation - Interface - Entrypoint - WorkDir - Network - Linux - PidHost - PropagatedMount - IpcHost - Mounts - Env - Args properties: DockerVersion: description: "Docker Version used to create the plugin" type: "string" x-nullable: false example: "17.06.0-ce" Description: type: "string" x-nullable: false example: "A sample volume plugin for Docker" Documentation: type: "string" x-nullable: false example: "https://docs.docker.com/engine/extend/plugins/" Interface: description: "The interface between Docker and the plugin" x-nullable: false type: "object" required: [Types, Socket] properties: Types: type: "array" items: $ref: "#/definitions/PluginInterfaceType" example: - "docker.volumedriver/1.0" Socket: type: "string" x-nullable: false example: "plugins.sock" ProtocolScheme: type: "string" example: "some.protocol/v1.0" description: "Protocol to use for clients connecting to the plugin." enum: - "" - "moby.plugins.http/v1" Entrypoint: type: "array" items: type: "string" example: - "/usr/bin/sample-volume-plugin" - "/data" WorkDir: type: "string" x-nullable: false example: "/bin/" User: type: "object" x-nullable: false properties: UID: type: "integer" format: "uint32" example: 1000 GID: type: "integer" format: "uint32" example: 1000 Network: type: "object" x-nullable: false required: [Type] properties: Type: x-nullable: false type: "string" example: "host" Linux: type: "object" x-nullable: false required: [Capabilities, AllowAllDevices, Devices] properties: Capabilities: type: "array" items: type: "string" example: - "CAP_SYS_ADMIN" - "CAP_SYSLOG" AllowAllDevices: type: "boolean" x-nullable: false example: false Devices: type: "array" items: $ref: "#/definitions/PluginDevice" PropagatedMount: type: "string" x-nullable: false example: "/mnt/volumes" IpcHost: type: "boolean" x-nullable: false example: false PidHost: type: "boolean" x-nullable: false example: false Mounts: type: "array" items: $ref: "#/definitions/PluginMount" Env: type: "array" items: $ref: "#/definitions/PluginEnv" example: - Name: "DEBUG" Description: "If set, prints debug messages" Settable: null Value: "0" Args: type: "object" x-nullable: false required: [Name, Description, Settable, Value] properties: Name: x-nullable: false type: "string" example: "args" Description: x-nullable: false type: "string" example: "command line arguments" Settable: type: "array" items: type: "string" Value: type: "array" items: type: "string" rootfs: type: "object" properties: type: type: "string" example: "layers" diff_ids: type: "array" items: type: "string" example: - "sha256:675532206fbf3030b8458f88d6e26d4eb1577688a25efec97154c94e8b6b4887" - "sha256:e216a057b1cb1efc11f8a268f37ef62083e70b1b38323ba252e25ac88904a7e8" ObjectVersion: description: | The version number of the object such as node, service, etc. This is needed to avoid conflicting writes. The client must send the version number along with the modified specification when updating these objects. This approach ensures safe concurrency and determinism in that the change on the object may not be applied if the version number has changed from the last read. In other words, if two update requests specify the same base version, only one of the requests can succeed. As a result, two separate update requests that happen at the same time will not unintentionally overwrite each other. 
type: "object" properties: Index: type: "integer" format: "uint64" example: 373531 NodeSpec: type: "object" properties: Name: description: "Name for the node." type: "string" example: "my-node" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" Role: description: "Role of the node." type: "string" enum: - "worker" - "manager" example: "manager" Availability: description: "Availability of the node." type: "string" enum: - "active" - "pause" - "drain" example: "active" example: Availability: "active" Name: "node-name" Role: "manager" Labels: foo: "bar" Node: type: "object" properties: ID: type: "string" example: "24ifsmvkjbyhk" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: description: | Date and time at which the node was added to the swarm in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2016-08-18T10:44:24.496525531Z" UpdatedAt: description: | Date and time at which the node was last updated in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2017-08-09T07:09:37.632105588Z" Spec: $ref: "#/definitions/NodeSpec" Description: $ref: "#/definitions/NodeDescription" Status: $ref: "#/definitions/NodeStatus" ManagerStatus: $ref: "#/definitions/ManagerStatus" NodeDescription: description: | NodeDescription encapsulates the properties of the Node as reported by the agent. type: "object" properties: Hostname: type: "string" example: "bf3067039e47" Platform: $ref: "#/definitions/Platform" Resources: $ref: "#/definitions/ResourceObject" Engine: $ref: "#/definitions/EngineDescription" TLSInfo: $ref: "#/definitions/TLSInfo" Platform: description: | Platform represents the platform (Arch/OS). type: "object" properties: Architecture: description: | Architecture represents the hardware architecture (for example, `x86_64`). type: "string" example: "x86_64" OS: description: | OS represents the Operating System (for example, `linux` or `windows`). type: "string" example: "linux" EngineDescription: description: "EngineDescription provides information about an engine." type: "object" properties: EngineVersion: type: "string" example: "17.06.0" Labels: type: "object" additionalProperties: type: "string" example: foo: "bar" Plugins: type: "array" items: type: "object" properties: Type: type: "string" Name: type: "string" example: - Type: "Log" Name: "awslogs" - Type: "Log" Name: "fluentd" - Type: "Log" Name: "gcplogs" - Type: "Log" Name: "gelf" - Type: "Log" Name: "journald" - Type: "Log" Name: "json-file" - Type: "Log" Name: "logentries" - Type: "Log" Name: "splunk" - Type: "Log" Name: "syslog" - Type: "Network" Name: "bridge" - Type: "Network" Name: "host" - Type: "Network" Name: "ipvlan" - Type: "Network" Name: "macvlan" - Type: "Network" Name: "null" - Type: "Network" Name: "overlay" - Type: "Volume" Name: "local" - Type: "Volume" Name: "localhost:5000/vieux/sshfs:latest" - Type: "Volume" Name: "vieux/sshfs:latest" TLSInfo: description: | Information about the issuer of leaf TLS certificates and the trusted root CA certificate. type: "object" properties: TrustRoot: description: | The root CA certificate(s) that are used to validate leaf TLS certificates. type: "string" CertIssuerSubject: description: The base64-url-safe-encoded raw subject bytes of the issuer. type: "string" CertIssuerPublicKey: description: | The base64-url-safe-encoded raw public key bytes of the issuer. 
type: "string" example: TrustRoot: | -----BEGIN CERTIFICATE----- MIIBajCCARCgAwIBAgIUbYqrLSOSQHoxD8CwG6Bi2PJi9c8wCgYIKoZIzj0EAwIw EzERMA8GA1UEAxMIc3dhcm0tY2EwHhcNMTcwNDI0MjE0MzAwWhcNMzcwNDE5MjE0 MzAwWjATMREwDwYDVQQDEwhzd2FybS1jYTBZMBMGByqGSM49AgEGCCqGSM49AwEH A0IABJk/VyMPYdaqDXJb/VXh5n/1Yuv7iNrxV3Qb3l06XD46seovcDWs3IZNV1lf 3Skyr0ofcchipoiHkXBODojJydSjQjBAMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMB Af8EBTADAQH/MB0GA1UdDgQWBBRUXxuRcnFjDfR/RIAUQab8ZV/n4jAKBggqhkjO PQQDAgNIADBFAiAy+JTe6Uc3KyLCMiqGl2GyWGQqQDEcO3/YG36x7om65AIhAJvz pxv6zFeVEkAEEkqIYi0omA9+CjanB/6Bz4n1uw8H -----END CERTIFICATE----- CertIssuerSubject: "MBMxETAPBgNVBAMTCHN3YXJtLWNh" CertIssuerPublicKey: "MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEmT9XIw9h1qoNclv9VeHmf/Vi6/uI2vFXdBveXTpcPjqx6i9wNazchk1XWV/dKTKvSh9xyGKmiIeRcE4OiMnJ1A==" NodeStatus: description: | NodeStatus represents the status of a node. It provides the current status of the node, as seen by the manager. type: "object" properties: State: $ref: "#/definitions/NodeState" Message: type: "string" example: "" Addr: description: "IP address of the node." type: "string" example: "172.17.0.2" NodeState: description: "NodeState represents the state of a node." type: "string" enum: - "unknown" - "down" - "ready" - "disconnected" example: "ready" ManagerStatus: description: | ManagerStatus represents the status of a manager. It provides the current status of a node's manager component, if the node is a manager. x-nullable: true type: "object" properties: Leader: type: "boolean" default: false example: true Reachability: $ref: "#/definitions/Reachability" Addr: description: | The IP address and port at which the manager is reachable. type: "string" example: "10.0.0.46:2377" Reachability: description: "Reachability represents the reachability of a node." type: "string" enum: - "unknown" - "unreachable" - "reachable" example: "reachable" SwarmSpec: description: "User modifiable swarm configuration." type: "object" properties: Name: description: "Name of the swarm." type: "string" example: "default" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" example: com.example.corp.type: "production" com.example.corp.department: "engineering" Orchestration: description: "Orchestration configuration." type: "object" x-nullable: true properties: TaskHistoryRetentionLimit: description: | The number of historic tasks to keep per instance or node. If negative, never remove completed or failed tasks. type: "integer" format: "int64" example: 10 Raft: description: "Raft configuration." type: "object" properties: SnapshotInterval: description: "The number of log entries between snapshots." type: "integer" format: "uint64" example: 10000 KeepOldSnapshots: description: | The number of snapshots to keep beyond the current snapshot. type: "integer" format: "uint64" LogEntriesForSlowFollowers: description: | The number of log entries to keep around to sync up slow followers after a snapshot is created. type: "integer" format: "uint64" example: 500 ElectionTick: description: | The number of ticks that a follower will wait for a message from the leader before becoming a candidate and starting an election. `ElectionTick` must be greater than `HeartbeatTick`. A tick currently defaults to one second, so these translate directly to seconds currently, but this is NOT guaranteed. type: "integer" example: 3 HeartbeatTick: description: | The number of ticks between heartbeats. Every HeartbeatTick ticks, the leader will send a heartbeat to the followers. 
A tick currently defaults to one second, so these translate directly to seconds currently, but this is NOT guaranteed. type: "integer" example: 1 Dispatcher: description: "Dispatcher configuration." type: "object" x-nullable: true properties: HeartbeatPeriod: description: | The delay for an agent to send a heartbeat to the dispatcher. type: "integer" format: "int64" example: 5000000000 CAConfig: description: "CA configuration." type: "object" x-nullable: true properties: NodeCertExpiry: description: "The duration node certificates are issued for." type: "integer" format: "int64" example: 7776000000000000 ExternalCAs: description: | Configuration for forwarding signing requests to an external certificate authority. type: "array" items: type: "object" properties: Protocol: description: | Protocol for communication with the external CA (currently only `cfssl` is supported). type: "string" enum: - "cfssl" default: "cfssl" URL: description: | URL where certificate signing requests should be sent. type: "string" Options: description: | An object with key/value pairs that are interpreted as protocol-specific options for the external CA driver. type: "object" additionalProperties: type: "string" CACert: description: | The root CA certificate (in PEM format) this external CA uses to issue TLS certificates (assumed to be the current swarm root CA certificate if not provided). type: "string" SigningCACert: description: | The desired signing CA certificate for all swarm node TLS leaf certificates, in PEM format. type: "string" SigningCAKey: description: | The desired signing CA key for all swarm node TLS leaf certificates, in PEM format. type: "string" ForceRotate: description: | An integer whose purpose is to force swarm to generate a new signing CA certificate and key, if none have been specified in `SigningCACert` and `SigningCAKey` format: "uint64" type: "integer" EncryptionConfig: description: "Parameters related to encryption-at-rest." type: "object" properties: AutoLockManagers: description: | If set, generate a key and use it to lock data stored on the managers. type: "boolean" example: false TaskDefaults: description: "Defaults for creating tasks in this cluster." type: "object" properties: LogDriver: description: | The log driver to use for tasks created in the orchestrator if unspecified by a service. Updating this value only affects new tasks. Existing tasks continue to use their previously configured log driver until recreated. type: "object" properties: Name: description: | The log driver to use as a default for new tasks. type: "string" example: "json-file" Options: description: | Driver-specific options for the selected log driver, specified as key/value pairs. type: "object" additionalProperties: type: "string" example: "max-file": "10" "max-size": "100m" # The Swarm information for `GET /info`. It is the same as `GET /swarm`, but # without `JoinTokens`. ClusterInfo: description: | ClusterInfo represents information about the swarm as is returned by the "/info" endpoint. Join-tokens are not included. x-nullable: true type: "object" properties: ID: description: "The ID of the swarm." type: "string" example: "abajmipo7b4xz5ip2nrla6b11" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: description: | Date and time at which the swarm was initialised in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds.
type: "string" format: "dateTime" example: "2016-08-18T10:44:24.496525531Z" UpdatedAt: description: | Date and time at which the swarm was last updated in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2017-08-09T07:09:37.632105588Z" Spec: $ref: "#/definitions/SwarmSpec" TLSInfo: $ref: "#/definitions/TLSInfo" RootRotationInProgress: description: | Whether there is currently a root CA rotation in progress for the swarm type: "boolean" example: false DataPathPort: description: | DataPathPort specifies the data path port number for data traffic. Acceptable port range is 1024 to 49151. If no port is set or is set to 0, the default port (4789) is used. type: "integer" format: "uint32" default: 4789 example: 4789 DefaultAddrPool: description: | Default Address Pool specifies default subnet pools for global scope networks. type: "array" items: type: "string" format: "CIDR" example: ["10.10.0.0/16", "20.20.0.0/16"] SubnetSize: description: | SubnetSize specifies the subnet size of the networks created from the default subnet pool. type: "integer" format: "uint32" maximum: 29 default: 24 example: 24 JoinTokens: description: | JoinTokens contains the tokens workers and managers need to join the swarm. type: "object" properties: Worker: description: | The token workers can use to join the swarm. type: "string" example: "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-1awxwuwd3z9j1z3puu7rcgdbx" Manager: description: | The token managers can use to join the swarm. type: "string" example: "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-7p73s1dx5in4tatdymyhg9hu2" Swarm: type: "object" allOf: - $ref: "#/definitions/ClusterInfo" - type: "object" properties: JoinTokens: $ref: "#/definitions/JoinTokens" TaskSpec: description: "User modifiable task configuration." type: "object" properties: PluginSpec: type: "object" description: | Plugin spec for the service. *(Experimental release only.)* <p><br /></p> > **Note**: ContainerSpec, NetworkAttachmentSpec, and PluginSpec are > mutually exclusive. PluginSpec is only used when the Runtime field > is set to `plugin`. NetworkAttachmentSpec is used when the Runtime > field is set to `attachment`. properties: Name: description: "The name or 'alias' to use for the plugin." type: "string" Remote: description: "The plugin image reference to use." type: "string" Disabled: description: "Disable the plugin once scheduled." type: "boolean" PluginPrivilege: type: "array" items: description: | Describes a permission accepted by the user upon installing the plugin. type: "object" properties: Name: type: "string" Description: type: "string" Value: type: "array" items: type: "string" ContainerSpec: type: "object" description: | Container spec for the service. <p><br /></p> > **Note**: ContainerSpec, NetworkAttachmentSpec, and PluginSpec are > mutually exclusive. PluginSpec is only used when the Runtime field > is set to `plugin`. NetworkAttachmentSpec is used when the Runtime > field is set to `attachment`. properties: Image: description: "The image name to use for the container" type: "string" Labels: description: "User-defined key/value data." type: "object" additionalProperties: type: "string" Command: description: "The command to be run in the image." type: "array" items: type: "string" Args: description: "Arguments to the command." 
type: "array" items: type: "string" Hostname: description: | The hostname to use for the container, as a valid [RFC 1123](https://tools.ietf.org/html/rfc1123) hostname. type: "string" Env: description: | A list of environment variables in the form `VAR=value`. type: "array" items: type: "string" Dir: description: "The working directory for commands to run in." type: "string" User: description: "The user inside the container." type: "string" Groups: type: "array" description: | A list of additional groups that the container process will run as. items: type: "string" Privileges: type: "object" description: "Security options for the container" properties: CredentialSpec: type: "object" description: "CredentialSpec for managed service account (Windows only)" properties: Config: type: "string" example: "0bt9dmxjvjiqermk6xrop3ekq" description: | Load credential spec from a Swarm Config with the given ID. The specified config must also be present in the Configs field with the Runtime property set. <p><br /></p> > **Note**: `CredentialSpec.File`, `CredentialSpec.Registry`, > and `CredentialSpec.Config` are mutually exclusive. File: type: "string" example: "spec.json" description: | Load credential spec from this file. The file is read by the daemon, and must be present in the `CredentialSpecs` subdirectory in the docker data directory, which defaults to `C:\ProgramData\Docker\` on Windows. For example, specifying `spec.json` loads `C:\ProgramData\Docker\CredentialSpecs\spec.json`. <p><br /></p> > **Note**: `CredentialSpec.File`, `CredentialSpec.Registry`, > and `CredentialSpec.Config` are mutually exclusive. Registry: type: "string" description: | Load credential spec from this value in the Windows registry. The specified registry value must be located in: `HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization\Containers\CredentialSpecs` <p><br /></p> > **Note**: `CredentialSpec.File`, `CredentialSpec.Registry`, > and `CredentialSpec.Config` are mutually exclusive. SELinuxContext: type: "object" description: "SELinux labels of the container" properties: Disable: type: "boolean" description: "Disable SELinux" User: type: "string" description: "SELinux user label" Role: type: "string" description: "SELinux role label" Type: type: "string" description: "SELinux type label" Level: type: "string" description: "SELinux level label" TTY: description: "Whether a pseudo-TTY should be allocated." type: "boolean" OpenStdin: description: "Open `stdin`" type: "boolean" ReadOnly: description: "Mount the container's root filesystem as read only." type: "boolean" Mounts: description: | Specification for mounts to be added to containers created as part of the service. type: "array" items: $ref: "#/definitions/Mount" StopSignal: description: "Signal to stop the container." type: "string" StopGracePeriod: description: | Amount of time to wait for the container to terminate before forcefully killing it. type: "integer" format: "int64" HealthCheck: $ref: "#/definitions/HealthConfig" Hosts: type: "array" description: | A list of hostname/IP mappings to add to the container's `hosts` file. The format of extra hosts is specified in the [hosts(5)](http://man7.org/linux/man-pages/man5/hosts.5.html) man page: IP_address canonical_hostname [aliases...] items: type: "string" DNSConfig: description: | Specification for DNS related configurations in resolver configuration file (`resolv.conf`). type: "object" properties: Nameservers: description: "The IP addresses of the name servers." 
type: "array" items: type: "string" Search: description: "A search list for host-name lookup." type: "array" items: type: "string" Options: description: | A list of internal resolver variables to be modified (e.g., `debug`, `ndots:3`, etc.). type: "array" items: type: "string" Secrets: description: | Secrets contains references to zero or more secrets that will be exposed to the service. type: "array" items: type: "object" properties: File: description: | File represents a specific target that is backed by a file. type: "object" properties: Name: description: | Name represents the final filename in the filesystem. type: "string" UID: description: "UID represents the file UID." type: "string" GID: description: "GID represents the file GID." type: "string" Mode: description: "Mode represents the FileMode of the file." type: "integer" format: "uint32" SecretID: description: | SecretID represents the ID of the specific secret that we're referencing. type: "string" SecretName: description: | SecretName is the name of the secret that this references, but this is just provided for lookup/display purposes. The secret in the reference will be identified by its ID. type: "string" Configs: description: | Configs contains references to zero or more configs that will be exposed to the service. type: "array" items: type: "object" properties: File: description: | File represents a specific target that is backed by a file. <p><br /><p> > **Note**: `Configs.File` and `Configs.Runtime` are mutually exclusive type: "object" properties: Name: description: | Name represents the final filename in the filesystem. type: "string" UID: description: "UID represents the file UID." type: "string" GID: description: "GID represents the file GID." type: "string" Mode: description: "Mode represents the FileMode of the file." type: "integer" format: "uint32" Runtime: description: | Runtime represents a target that is not mounted into the container but is used by the task <p><br /><p> > **Note**: `Configs.File` and `Configs.Runtime` are mutually > exclusive type: "object" ConfigID: description: | ConfigID represents the ID of the specific config that we're referencing. type: "string" ConfigName: description: | ConfigName is the name of the config that this references, but this is just provided for lookup/display purposes. The config in the reference will be identified by its ID. type: "string" Isolation: type: "string" description: | Isolation technology of the containers running the service. (Windows only) enum: - "default" - "process" - "hyperv" Init: description: | Run an init inside the container that forwards signals and reaps processes. This field is omitted if empty, and the default (as configured on the daemon) is used. type: "boolean" x-nullable: true Sysctls: description: | Set kernel namedspaced parameters (sysctls) in the container. The Sysctls option on services accepts the same sysctls as the are supported on containers. Note that while the same sysctls are supported, no guarantees or checks are made about their suitability for a clustered environment, and it's up to the user to determine whether a given sysctl will work properly in a Service. type: "object" additionalProperties: type: "string" # This option is not used by Windows containers CapabilityAdd: type: "array" description: | A list of kernel capabilities to add to the default set for the container. 
items: type: "string" example: - "CAP_NET_RAW" - "CAP_SYS_ADMIN" - "CAP_SYS_CHROOT" - "CAP_SYSLOG" CapabilityDrop: type: "array" description: | A list of kernel capabilities to drop from the default set for the container. items: type: "string" example: - "CAP_NET_RAW" Ulimits: description: | A list of resource limits to set in the container. For example: `{"Name": "nofile", "Soft": 1024, "Hard": 2048}`" type: "array" items: type: "object" properties: Name: description: "Name of ulimit" type: "string" Soft: description: "Soft limit" type: "integer" Hard: description: "Hard limit" type: "integer" NetworkAttachmentSpec: description: | Read-only spec type for non-swarm containers attached to swarm overlay networks. <p><br /></p> > **Note**: ContainerSpec, NetworkAttachmentSpec, and PluginSpec are > mutually exclusive. PluginSpec is only used when the Runtime field > is set to `plugin`. NetworkAttachmentSpec is used when the Runtime > field is set to `attachment`. type: "object" properties: ContainerID: description: "ID of the container represented by this task" type: "string" Resources: description: | Resource requirements which apply to each individual container created as part of the service. type: "object" properties: Limits: description: "Define resources limits." $ref: "#/definitions/Limit" Reservation: description: "Define resources reservation." $ref: "#/definitions/ResourceObject" RestartPolicy: description: | Specification for the restart policy which applies to containers created as part of this service. type: "object" properties: Condition: description: "Condition for restart." type: "string" enum: - "none" - "on-failure" - "any" Delay: description: "Delay between restart attempts." type: "integer" format: "int64" MaxAttempts: description: | Maximum attempts to restart a given container before giving up (default value is 0, which is ignored). type: "integer" format: "int64" default: 0 Window: description: | Windows is the time window used to evaluate the restart policy (default value is 0, which is unbounded). type: "integer" format: "int64" default: 0 Placement: type: "object" properties: Constraints: description: | An array of constraint expressions to limit the set of nodes where a task can be scheduled. Constraint expressions can either use a _match_ (`==`) or _exclude_ (`!=`) rule. Multiple constraints find nodes that satisfy every expression (AND match). Constraints can match node or Docker Engine labels as follows: node attribute | matches | example ---------------------|--------------------------------|----------------------------------------------- `node.id` | Node ID | `node.id==2ivku8v2gvtg4` `node.hostname` | Node hostname | `node.hostname!=node-2` `node.role` | Node role (`manager`/`worker`) | `node.role==manager` `node.platform.os` | Node operating system | `node.platform.os==windows` `node.platform.arch` | Node architecture | `node.platform.arch==x86_64` `node.labels` | User-defined node labels | `node.labels.security==high` `engine.labels` | Docker Engine's labels | `engine.labels.operatingsystem==ubuntu-14.04` `engine.labels` apply to Docker Engine labels like operating system, drivers, etc. Swarm administrators add `node.labels` for operational purposes by using the [`node update endpoint`](#operation/NodeUpdate). 
type: "array" items: type: "string" example: - "node.hostname!=node3.corp.example.com" - "node.role!=manager" - "node.labels.type==production" - "node.platform.os==linux" - "node.platform.arch==x86_64" Preferences: description: | Preferences provide a way to make the scheduler aware of factors such as topology. They are provided in order from highest to lowest precedence. type: "array" items: type: "object" properties: Spread: type: "object" properties: SpreadDescriptor: description: | label descriptor, such as `engine.labels.az`. type: "string" example: - Spread: SpreadDescriptor: "node.labels.datacenter" - Spread: SpreadDescriptor: "node.labels.rack" MaxReplicas: description: | Maximum number of replicas for per node (default value is 0, which is unlimited) type: "integer" format: "int64" default: 0 Platforms: description: | Platforms stores all the platforms that the service's image can run on. This field is used in the platform filter for scheduling. If empty, then the platform filter is off, meaning there are no scheduling restrictions. type: "array" items: $ref: "#/definitions/Platform" ForceUpdate: description: | A counter that triggers an update even if no relevant parameters have been changed. type: "integer" Runtime: description: | Runtime is the type of runtime specified for the task executor. type: "string" Networks: description: "Specifies which networks the service should attach to." type: "array" items: $ref: "#/definitions/NetworkAttachmentConfig" LogDriver: description: | Specifies the log driver to use for tasks created from this spec. If not present, the default one for the swarm will be used, finally falling back to the engine default if not specified. type: "object" properties: Name: type: "string" Options: type: "object" additionalProperties: type: "string" TaskState: type: "string" enum: - "new" - "allocated" - "pending" - "assigned" - "accepted" - "preparing" - "ready" - "starting" - "running" - "complete" - "shutdown" - "failed" - "rejected" - "remove" - "orphaned" Task: type: "object" properties: ID: description: "The ID of the task." type: "string" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" UpdatedAt: type: "string" format: "dateTime" Name: description: "Name of the task." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" Spec: $ref: "#/definitions/TaskSpec" ServiceID: description: "The ID of the service this task is part of." type: "string" Slot: type: "integer" NodeID: description: "The ID of the node that this task is on." type: "string" AssignedGenericResources: $ref: "#/definitions/GenericResources" Status: type: "object" properties: Timestamp: type: "string" format: "dateTime" State: $ref: "#/definitions/TaskState" Message: type: "string" Err: type: "string" ContainerStatus: type: "object" properties: ContainerID: type: "string" PID: type: "integer" ExitCode: type: "integer" DesiredState: $ref: "#/definitions/TaskState" JobIteration: description: | If the Service this Task belongs to is a job-mode service, contains the JobIteration of the Service this Task was created for. Absent if the Task was created for a Replicated or Global Service. 
$ref: "#/definitions/ObjectVersion" example: ID: "0kzzo1i0y4jz6027t0k7aezc7" Version: Index: 71 CreatedAt: "2016-06-07T21:07:31.171892745Z" UpdatedAt: "2016-06-07T21:07:31.376370513Z" Spec: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ServiceID: "9mnpnzenvg8p8tdbtq4wvbkcz" Slot: 1 NodeID: "60gvrl6tm78dmak4yl7srz94v" Status: Timestamp: "2016-06-07T21:07:31.290032978Z" State: "running" Message: "started" ContainerStatus: ContainerID: "e5d62702a1b48d01c3e02ca1e0212a250801fa8d67caca0b6f35919ebc12f035" PID: 677 DesiredState: "running" NetworksAttachments: - Network: ID: "4qvuz4ko70xaltuqbt8956gd1" Version: Index: 18 CreatedAt: "2016-06-07T20:31:11.912919752Z" UpdatedAt: "2016-06-07T21:07:29.955277358Z" Spec: Name: "ingress" Labels: com.docker.swarm.internal: "true" DriverConfiguration: {} IPAMOptions: Driver: {} Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" DriverState: Name: "overlay" Options: com.docker.network.driver.overlay.vxlanid_list: "256" IPAMOptions: Driver: Name: "default" Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" Addresses: - "10.255.0.10/16" AssignedGenericResources: - DiscreteResourceSpec: Kind: "SSD" Value: 3 - NamedResourceSpec: Kind: "GPU" Value: "UUID1" - NamedResourceSpec: Kind: "GPU" Value: "UUID2" ServiceSpec: description: "User modifiable configuration for a service." properties: Name: description: "Name of the service." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" TaskTemplate: $ref: "#/definitions/TaskSpec" Mode: description: "Scheduling mode for the service." type: "object" properties: Replicated: type: "object" properties: Replicas: type: "integer" format: "int64" Global: type: "object" ReplicatedJob: description: | The mode used for services with a finite number of tasks that run to a completed state. type: "object" properties: MaxConcurrent: description: | The maximum number of replicas to run simultaneously. type: "integer" format: "int64" default: 1 TotalCompletions: description: | The total number of replicas desired to reach the Completed state. If unset, will default to the value of `MaxConcurrent` type: "integer" format: "int64" GlobalJob: description: | The mode used for services which run a task to the completed state on each valid node. type: "object" UpdateConfig: description: "Specification for the update strategy of the service." type: "object" properties: Parallelism: description: | Maximum number of tasks to be updated in one iteration (0 means unlimited parallelism). type: "integer" format: "int64" Delay: description: "Amount of time between updates, in nanoseconds." type: "integer" format: "int64" FailureAction: description: | Action to take if an updated task fails to run, or stops running during the update. type: "string" enum: - "continue" - "pause" - "rollback" Monitor: description: | Amount of time to monitor each updated task for failures, in nanoseconds. type: "integer" format: "int64" MaxFailureRatio: description: | The fraction of tasks that may fail during an update before the failure action is invoked, specified as a floating point number between 0 and 1. type: "number" default: 0 Order: description: | The order of operations when rolling out an updated task. Either the old task is shut down before the new task is started, or the new task is started before the old task is shut down. 
type: "string" enum: - "stop-first" - "start-first" RollbackConfig: description: "Specification for the rollback strategy of the service." type: "object" properties: Parallelism: description: | Maximum number of tasks to be rolled back in one iteration (0 means unlimited parallelism). type: "integer" format: "int64" Delay: description: | Amount of time between rollback iterations, in nanoseconds. type: "integer" format: "int64" FailureAction: description: | Action to take if an rolled back task fails to run, or stops running during the rollback. type: "string" enum: - "continue" - "pause" Monitor: description: | Amount of time to monitor each rolled back task for failures, in nanoseconds. type: "integer" format: "int64" MaxFailureRatio: description: | The fraction of tasks that may fail during a rollback before the failure action is invoked, specified as a floating point number between 0 and 1. type: "number" default: 0 Order: description: | The order of operations when rolling back a task. Either the old task is shut down before the new task is started, or the new task is started before the old task is shut down. type: "string" enum: - "stop-first" - "start-first" Networks: description: "Specifies which networks the service should attach to." type: "array" items: $ref: "#/definitions/NetworkAttachmentConfig" EndpointSpec: $ref: "#/definitions/EndpointSpec" EndpointPortConfig: type: "object" properties: Name: type: "string" Protocol: type: "string" enum: - "tcp" - "udp" - "sctp" TargetPort: description: "The port inside the container." type: "integer" PublishedPort: description: "The port on the swarm hosts." type: "integer" PublishMode: description: | The mode in which port is published. <p><br /></p> - "ingress" makes the target port accessible on every node, regardless of whether there is a task for the service running on that node or not. - "host" bypasses the routing mesh and publish the port directly on the swarm node where that service is running. type: "string" enum: - "ingress" - "host" default: "ingress" example: "ingress" EndpointSpec: description: "Properties that can be configured to access and load balance a service." type: "object" properties: Mode: description: | The mode of resolution to use for internal load balancing between tasks. type: "string" enum: - "vip" - "dnsrr" default: "vip" Ports: description: | List of exposed ports that this service is accessible on from the outside. Ports can only be provided if `vip` resolution mode is used. type: "array" items: $ref: "#/definitions/EndpointPortConfig" Service: type: "object" properties: ID: type: "string" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" UpdatedAt: type: "string" format: "dateTime" Spec: $ref: "#/definitions/ServiceSpec" Endpoint: type: "object" properties: Spec: $ref: "#/definitions/EndpointSpec" Ports: type: "array" items: $ref: "#/definitions/EndpointPortConfig" VirtualIPs: type: "array" items: type: "object" properties: NetworkID: type: "string" Addr: type: "string" UpdateStatus: description: "The status of a service update." type: "object" properties: State: type: "string" enum: - "updating" - "paused" - "completed" StartedAt: type: "string" format: "dateTime" CompletedAt: type: "string" format: "dateTime" Message: type: "string" ServiceStatus: description: | The status of the service's tasks. Provided only when requested as part of a ServiceList operation. 
type: "object" properties: RunningTasks: description: | The number of tasks for the service currently in the Running state. type: "integer" format: "uint64" example: 7 DesiredTasks: description: | The number of tasks for the service desired to be running. For replicated services, this is the replica count from the service spec. For global services, this is computed by taking count of all tasks for the service with a Desired State other than Shutdown. type: "integer" format: "uint64" example: 10 CompletedTasks: description: | The number of tasks for a job that are in the Completed state. This field must be cross-referenced with the service type, as the value of 0 may mean the service is not in a job mode, or it may mean the job-mode service has no tasks yet Completed. type: "integer" format: "uint64" JobStatus: description: | The status of the service when it is in one of ReplicatedJob or GlobalJob modes. Absent on Replicated and Global mode services. The JobIteration is an ObjectVersion, but unlike the Service's version, does not need to be sent with an update request. type: "object" properties: JobIteration: description: | JobIteration is a value increased each time a Job is executed, successfully or otherwise. "Executed", in this case, means the job as a whole has been started, not that an individual Task has been launched. A job is "Executed" when its ServiceSpec is updated. JobIteration can be used to disambiguate Tasks belonging to different executions of a job. Though JobIteration will increase with each subsequent execution, it may not necessarily increase by 1, and so JobIteration should not be used to $ref: "#/definitions/ObjectVersion" LastExecution: description: | The last time, as observed by the server, that this job was started. type: "string" format: "dateTime" example: ID: "9mnpnzenvg8p8tdbtq4wvbkcz" Version: Index: 19 CreatedAt: "2016-06-07T21:05:51.880065305Z" UpdatedAt: "2016-06-07T21:07:29.962229872Z" Spec: Name: "hopeful_cori" TaskTemplate: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ForceUpdate: 0 Mode: Replicated: Replicas: 1 UpdateConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 RollbackConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 EndpointSpec: Mode: "vip" Ports: - Protocol: "tcp" TargetPort: 6379 PublishedPort: 30001 Endpoint: Spec: Mode: "vip" Ports: - Protocol: "tcp" TargetPort: 6379 PublishedPort: 30001 Ports: - Protocol: "tcp" TargetPort: 6379 PublishedPort: 30001 VirtualIPs: - NetworkID: "4qvuz4ko70xaltuqbt8956gd1" Addr: "10.255.0.2/16" - NetworkID: "4qvuz4ko70xaltuqbt8956gd1" Addr: "10.255.0.3/16" ImageDeleteResponseItem: type: "object" properties: Untagged: description: "The image ID of an image that was untagged" type: "string" Deleted: description: "The image ID of an image that was deleted" type: "string" ServiceUpdateResponse: type: "object" properties: Warnings: description: "Optional warning messages" type: "array" items: type: "string" example: Warning: "unable to pin image doesnotexist:latest to digest: image library/doesnotexist:latest not found" ContainerSummary: type: "array" items: type: "object" properties: Id: description: "The ID of this container" type: "string" x-go-name: "ID" Names: description: "The names that this container has been given" type: "array" items: type: "string" Image: description: "The name of the image used when creating 
this container" type: "string" ImageID: description: "The ID of the image that this container was created from" type: "string" Command: description: "Command to run when starting the container" type: "string" Created: description: "When the container was created" type: "integer" format: "int64" Ports: description: "The ports exposed by this container" type: "array" items: $ref: "#/definitions/Port" SizeRw: description: "The size of files that have been created or changed by this container" type: "integer" format: "int64" SizeRootFs: description: "The total size of all the files in this container" type: "integer" format: "int64" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" State: description: "The state of this container (e.g. `Exited`)" type: "string" Status: description: "Additional human-readable status of this container (e.g. `Exit 0`)" type: "string" HostConfig: type: "object" properties: NetworkMode: type: "string" NetworkSettings: description: "A summary of the container's network settings" type: "object" properties: Networks: type: "object" additionalProperties: $ref: "#/definitions/EndpointSettings" Mounts: type: "array" items: $ref: "#/definitions/Mount" Driver: description: "Driver represents a driver (network, logging, secrets)." type: "object" required: [Name] properties: Name: description: "Name of the driver." type: "string" x-nullable: false example: "some-driver" Options: description: "Key/value map of driver-specific options." type: "object" x-nullable: false additionalProperties: type: "string" example: OptionA: "value for driver-specific option A" OptionB: "value for driver-specific option B" SecretSpec: type: "object" properties: Name: description: "User-defined name of the secret." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" example: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Data: description: | Base64-url-safe-encoded ([RFC 4648](https://tools.ietf.org/html/rfc4648#section-5)) data to store as secret. This field is only used to _create_ a secret, and is not returned by other endpoints. type: "string" example: "" Driver: description: | Name of the secrets driver used to fetch the secret's value from an external secret store. $ref: "#/definitions/Driver" Templating: description: | Templating driver, if applicable Templating controls whether and how to evaluate the config payload as a template. If no driver is set, no templating is used. $ref: "#/definitions/Driver" Secret: type: "object" properties: ID: type: "string" example: "blt1owaxmitz71s9v5zh81zun" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" example: "2017-07-20T13:55:28.678958722Z" UpdatedAt: type: "string" format: "dateTime" example: "2017-07-20T13:55:28.678958722Z" Spec: $ref: "#/definitions/SecretSpec" ConfigSpec: type: "object" properties: Name: description: "User-defined name of the config." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" Data: description: | Base64-url-safe-encoded ([RFC 4648](https://tools.ietf.org/html/rfc4648#section-5)) config data. type: "string" Templating: description: | Templating driver, if applicable Templating controls whether and how to evaluate the config payload as a template. If no driver is set, no templating is used. 
$ref: "#/definitions/Driver" Config: type: "object" properties: ID: type: "string" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" UpdatedAt: type: "string" format: "dateTime" Spec: $ref: "#/definitions/ConfigSpec" ContainerState: description: | ContainerState stores container's running state. It's part of ContainerJSONBase and will be returned by the "inspect" command. type: "object" properties: Status: description: | String representation of the container state. Can be one of "created", "running", "paused", "restarting", "removing", "exited", or "dead". type: "string" enum: ["created", "running", "paused", "restarting", "removing", "exited", "dead"] example: "running" Running: description: | Whether this container is running. Note that a running container can be _paused_. The `Running` and `Paused` booleans are not mutually exclusive: When pausing a container (on Linux), the freezer cgroup is used to suspend all processes in the container. Freezing the process requires the process to be running. As a result, paused containers are both `Running` _and_ `Paused`. Use the `Status` field instead to determine if a container's state is "running". type: "boolean" example: true Paused: description: "Whether this container is paused." type: "boolean" example: false Restarting: description: "Whether this container is restarting." type: "boolean" example: false OOMKilled: description: | Whether this container has been killed because it ran out of memory. type: "boolean" example: false Dead: type: "boolean" example: false Pid: description: "The process ID of this container" type: "integer" example: 1234 ExitCode: description: "The last exit code of this container" type: "integer" example: 0 Error: type: "string" StartedAt: description: "The time when this container was last started." type: "string" example: "2020-01-06T09:06:59.461876391Z" FinishedAt: description: "The time when this container last exited." type: "string" example: "2020-01-06T09:07:59.461876391Z" Health: x-nullable: true $ref: "#/definitions/Health" SystemVersion: type: "object" description: | Response of Engine API: GET "/version" properties: Platform: type: "object" required: [Name] properties: Name: type: "string" Components: type: "array" description: | Information about system components items: type: "object" x-go-name: ComponentVersion required: [Name, Version] properties: Name: description: | Name of the component type: "string" example: "Engine" Version: description: | Version of the component type: "string" x-nullable: false example: "19.03.12" Details: description: | Key/value pairs of strings with additional information about the component. These values are intended for informational purposes only, and their content is not defined, and not part of the API specification. These messages can be printed by the client as information to the user. type: "object" x-nullable: true Version: description: "The version of the daemon" type: "string" example: "19.03.12" ApiVersion: description: | The default (and highest) API version that is supported by the daemon type: "string" example: "1.40" MinAPIVersion: description: | The minimum API version that is supported by the daemon type: "string" example: "1.12" GitCommit: description: | The Git commit of the source code that was used to build the daemon type: "string" example: "48a66213fe" GoVersion: description: | The version Go used to compile the daemon, and the version of the Go runtime in use. 
type: "string" example: "go1.13.14" Os: description: | The operating system that the daemon is running on ("linux" or "windows") type: "string" example: "linux" Arch: description: | The architecture that the daemon is running on type: "string" example: "amd64" KernelVersion: description: | The kernel version (`uname -r`) that the daemon is running on. This field is omitted when empty. type: "string" example: "4.19.76-linuxkit" Experimental: description: | Indicates if the daemon is started with experimental features enabled. This field is omitted when empty / false. type: "boolean" example: true BuildTime: description: | The date and time that the daemon was compiled. type: "string" example: "2020-06-22T15:49:27.000000000+00:00" SystemInfo: type: "object" properties: ID: description: | Unique identifier of the daemon. <p><br /></p> > **Note**: The format of the ID itself is not part of the API, and > should not be considered stable. type: "string" example: "7TRN:IPZB:QYBB:VPBQ:UMPP:KARE:6ZNR:XE6T:7EWV:PKF4:ZOJD:TPYS" Containers: description: "Total number of containers on the host." type: "integer" example: 14 ContainersRunning: description: | Number of containers with status `"running"`. type: "integer" example: 3 ContainersPaused: description: | Number of containers with status `"paused"`. type: "integer" example: 1 ContainersStopped: description: | Number of containers with status `"stopped"`. type: "integer" example: 10 Images: description: | Total number of images on the host. Both _tagged_ and _untagged_ (dangling) images are counted. type: "integer" example: 508 Driver: description: "Name of the storage driver in use." type: "string" example: "overlay2" DriverStatus: description: | Information specific to the storage driver, provided as "label" / "value" pairs. This information is provided by the storage driver, and formatted in a way consistent with the output of `docker info` on the command line. <p><br /></p> > **Note**: The information returned in this field, including the > formatting of values and labels, should not be considered stable, > and may change without notice. type: "array" items: type: "array" items: type: "string" example: - ["Backing Filesystem", "extfs"] - ["Supports d_type", "true"] - ["Native Overlay Diff", "true"] DockerRootDir: description: | Root directory of persistent Docker state. Defaults to `/var/lib/docker` on Linux, and `C:\ProgramData\docker` on Windows. type: "string" example: "/var/lib/docker" Plugins: $ref: "#/definitions/PluginsInfo" MemoryLimit: description: "Indicates if the host has memory limit support enabled." type: "boolean" example: true SwapLimit: description: "Indicates if the host has memory swap limit support enabled." type: "boolean" example: true KernelMemory: description: | Indicates if the host has kernel memory limit support enabled. <p><br /></p> > **Deprecated**: This field is deprecated as the kernel 5.4 deprecated > `kmem.limit_in_bytes`. type: "boolean" example: true CpuCfsPeriod: description: | Indicates if CPU CFS(Completely Fair Scheduler) period is supported by the host. type: "boolean" example: true CpuCfsQuota: description: | Indicates if CPU CFS(Completely Fair Scheduler) quota is supported by the host. type: "boolean" example: true CPUShares: description: | Indicates if CPU Shares limiting is supported by the host. type: "boolean" example: true CPUSet: description: | Indicates if CPUsets (cpuset.cpus, cpuset.mems) are supported by the host. 
See [cpuset(7)](https://www.kernel.org/doc/Documentation/cgroup-v1/cpusets.txt) type: "boolean" example: true PidsLimit: description: "Indicates if the host kernel has PID limit support enabled." type: "boolean" example: true OomKillDisable: description: "Indicates if OOM killer disable is supported on the host." type: "boolean" IPv4Forwarding: description: "Indicates IPv4 forwarding is enabled." type: "boolean" example: true BridgeNfIptables: description: "Indicates if `bridge-nf-call-iptables` is available on the host." type: "boolean" example: true BridgeNfIp6tables: description: "Indicates if `bridge-nf-call-ip6tables` is available on the host." type: "boolean" example: true Debug: description: | Indicates if the daemon is running in debug-mode / with debug-level logging enabled. type: "boolean" example: true NFd: description: | The total number of file Descriptors in use by the daemon process. This information is only returned if debug-mode is enabled. type: "integer" example: 64 NGoroutines: description: | The number of goroutines that currently exist. This information is only returned if debug-mode is enabled. type: "integer" example: 174 SystemTime: description: | Current system-time in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" example: "2017-08-08T20:28:29.06202363Z" LoggingDriver: description: | The logging driver to use as a default for new containers. type: "string" CgroupDriver: description: | The driver to use for managing cgroups. type: "string" enum: ["cgroupfs", "systemd", "none"] default: "cgroupfs" example: "cgroupfs" CgroupVersion: description: | The version of the cgroup. type: "string" enum: ["1", "2"] default: "1" example: "1" NEventsListener: description: "Number of event listeners subscribed." type: "integer" example: 30 KernelVersion: description: | Kernel version of the host. On Linux, this information obtained from `uname`. On Windows this information is queried from the <kbd>HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\</kbd> registry value, for example _"10.0 14393 (14393.1198.amd64fre.rs1_release_sec.170427-1353)"_. type: "string" example: "4.9.38-moby" OperatingSystem: description: | Name of the host's operating system, for example: "Ubuntu 16.04.2 LTS" or "Windows Server 2016 Datacenter" type: "string" example: "Alpine Linux v3.5" OSVersion: description: | Version of the host's operating system <p><br /></p> > **Note**: The information returned in this field, including its > very existence, and the formatting of values, should not be considered > stable, and may change without notice. type: "string" example: "16.04" OSType: description: | Generic type of the operating system of the host, as returned by the Go runtime (`GOOS`). Currently returned values are "linux" and "windows". A full list of possible values can be found in the [Go documentation](https://golang.org/doc/install/source#environment). type: "string" example: "linux" Architecture: description: | Hardware architecture of the host, as returned by the Go runtime (`GOARCH`). A full list of possible values can be found in the [Go documentation](https://golang.org/doc/install/source#environment). type: "string" example: "x86_64" NCPU: description: | The number of logical CPUs usable by the daemon. The number of available CPUs is checked by querying the operating system when the daemon starts. Changes to operating system CPU allocation after the daemon is started are not reflected. 
type: "integer" example: 4 MemTotal: description: | Total amount of physical memory available on the host, in bytes. type: "integer" format: "int64" example: 2095882240 IndexServerAddress: description: | Address / URL of the index server that is used for image search, and as a default for user authentication for Docker Hub and Docker Cloud. default: "https://index.docker.io/v1/" type: "string" example: "https://index.docker.io/v1/" RegistryConfig: $ref: "#/definitions/RegistryServiceConfig" GenericResources: $ref: "#/definitions/GenericResources" HttpProxy: description: | HTTP-proxy configured for the daemon. This value is obtained from the [`HTTP_PROXY`](https://www.gnu.org/software/wget/manual/html_node/Proxies.html) environment variable. Credentials ([user info component](https://tools.ietf.org/html/rfc3986#section-3.2.1)) in the proxy URL are masked in the API response. Containers do not automatically inherit this configuration. type: "string" example: "http://xxxxx:[email protected]:8080" HttpsProxy: description: | HTTPS-proxy configured for the daemon. This value is obtained from the [`HTTPS_PROXY`](https://www.gnu.org/software/wget/manual/html_node/Proxies.html) environment variable. Credentials ([user info component](https://tools.ietf.org/html/rfc3986#section-3.2.1)) in the proxy URL are masked in the API response. Containers do not automatically inherit this configuration. type: "string" example: "https://xxxxx:[email protected]:4443" NoProxy: description: | Comma-separated list of domain extensions for which no proxy should be used. This value is obtained from the [`NO_PROXY`](https://www.gnu.org/software/wget/manual/html_node/Proxies.html) environment variable. Containers do not automatically inherit this configuration. type: "string" example: "*.local, 169.254/16" Name: description: "Hostname of the host." type: "string" example: "node5.corp.example.com" Labels: description: | User-defined labels (key/value metadata) as set on the daemon. <p><br /></p> > **Note**: When part of a Swarm, nodes can both have _daemon_ labels, > set through the daemon configuration, and _node_ labels, set from a > manager node in the Swarm. Node labels are not included in this > field. Node labels can be retrieved using the `/nodes/(id)` endpoint > on a manager node in the Swarm. type: "array" items: type: "string" example: ["storage=ssd", "production"] ExperimentalBuild: description: | Indicates if experimental features are enabled on the daemon. type: "boolean" example: true ServerVersion: description: | Version string of the daemon. > **Note**: the [standalone Swarm API](https://docs.docker.com/swarm/swarm-api/) > returns the Swarm version instead of the daemon version, for example > `swarm/1.2.8`. type: "string" example: "17.06.0-ce" ClusterStore: description: | URL of the distributed storage backend. The storage backend is used for multihost networking (to store network and endpoint information) and by the node discovery mechanism. <p><br /></p> > **Deprecated**: This field is only propagated when using standalone Swarm > mode, and overlay networking using an external k/v store. Overlay > networks with Swarm mode enabled use the built-in raft store, and > this field will be empty. type: "string" example: "consul://consul.corp.example.com:8600/some/path" ClusterAdvertise: description: | The network endpoint that the Engine advertises for the purpose of node discovery. ClusterAdvertise is a `host:port` combination on which the daemon is reachable by other hosts. 
<p><br /></p> > **Deprecated**: This field is only propagated when using standalone Swarm > mode, and overlay networking using an external k/v store. Overlay > networks with Swarm mode enabled use the built-in raft store, and > this field will be empty. type: "string" example: "node5.corp.example.com:8000" Runtimes: description: | List of [OCI compliant](https://github.com/opencontainers/runtime-spec) runtimes configured on the daemon. Keys hold the "name" used to reference the runtime. The Docker daemon relies on an OCI compliant runtime (invoked via the `containerd` daemon) as its interface to the Linux kernel namespaces, cgroups, and SELinux. The default runtime is `runc`, and automatically configured. Additional runtimes can be configured by the user and will be listed here. type: "object" additionalProperties: $ref: "#/definitions/Runtime" default: runc: path: "runc" example: runc: path: "runc" runc-master: path: "/go/bin/runc" custom: path: "/usr/local/bin/my-oci-runtime" runtimeArgs: ["--debug", "--systemd-cgroup=false"] DefaultRuntime: description: | Name of the default OCI runtime that is used when starting containers. The default can be overridden per-container at create time. type: "string" default: "runc" example: "runc" Swarm: $ref: "#/definitions/SwarmInfo" LiveRestoreEnabled: description: | Indicates if live restore is enabled. If enabled, containers are kept running when the daemon is shutdown or upon daemon start if running containers are detected. type: "boolean" default: false example: false Isolation: description: | Represents the isolation technology to use as a default for containers. The supported values are platform-specific. If no isolation value is specified on daemon start, on Windows client, the default is `hyperv`, and on Windows server, the default is `process`. This option is currently not used on other platforms. default: "default" type: "string" enum: - "default" - "hyperv" - "process" InitBinary: description: | Name and, optional, path of the `docker-init` binary. If the path is omitted, the daemon searches the host's `$PATH` for the binary and uses the first result. type: "string" example: "docker-init" ContainerdCommit: $ref: "#/definitions/Commit" RuncCommit: $ref: "#/definitions/Commit" InitCommit: $ref: "#/definitions/Commit" SecurityOptions: description: | List of security features that are enabled on the daemon, such as apparmor, seccomp, SELinux, user-namespaces (userns), and rootless. Additional configuration options for each security feature may be present, and are included as a comma-separated list of key/value pairs. type: "array" items: type: "string" example: - "name=apparmor" - "name=seccomp,profile=default" - "name=selinux" - "name=userns" - "name=rootless" ProductLicense: description: | Reports a summary of the product license on the daemon. If a commercial license has been applied to the daemon, information such as number of nodes, and expiration are included. type: "string" example: "Community Engine" DefaultAddressPools: description: | List of custom default address pools for local networks, which can be specified in the daemon.json file or dockerd option. Example: a Base "10.10.0.0/16" with Size 24 will define the set of 256 10.10.[0-255].0/24 address pools. 
type: "array" items: type: "object" properties: Base: description: "The network address in CIDR format" type: "string" example: "10.10.0.0/16" Size: description: "The network pool size" type: "integer" example: "24" Warnings: description: | List of warnings / informational messages about missing features, or issues related to the daemon configuration. These messages can be printed by the client as information to the user. type: "array" items: type: "string" example: - "WARNING: No memory limit support" - "WARNING: bridge-nf-call-iptables is disabled" - "WARNING: bridge-nf-call-ip6tables is disabled" # PluginsInfo is a temp struct holding Plugins name # registered with docker daemon. It is used by Info struct PluginsInfo: description: | Available plugins per type. <p><br /></p> > **Note**: Only unmanaged (V1) plugins are included in this list. > V1 plugins are "lazily" loaded, and are not returned in this list > if there is no resource using the plugin. type: "object" properties: Volume: description: "Names of available volume-drivers, and network-driver plugins." type: "array" items: type: "string" example: ["local"] Network: description: "Names of available network-drivers, and network-driver plugins." type: "array" items: type: "string" example: ["bridge", "host", "ipvlan", "macvlan", "null", "overlay"] Authorization: description: "Names of available authorization plugins." type: "array" items: type: "string" example: ["img-authz-plugin", "hbm"] Log: description: "Names of available logging-drivers, and logging-driver plugins." type: "array" items: type: "string" example: ["awslogs", "fluentd", "gcplogs", "gelf", "journald", "json-file", "logentries", "splunk", "syslog"] RegistryServiceConfig: description: | RegistryServiceConfig stores daemon registry services configuration. type: "object" x-nullable: true properties: AllowNondistributableArtifactsCIDRs: description: | List of IP ranges to which nondistributable artifacts can be pushed, using the CIDR syntax [RFC 4632](https://tools.ietf.org/html/4632). Some images (for example, Windows base images) contain artifacts whose distribution is restricted by license. When these images are pushed to a registry, restricted artifacts are not included. This configuration override this behavior, and enables the daemon to push nondistributable artifacts to all registries whose resolved IP address is within the subnet described by the CIDR syntax. This option is useful when pushing images containing nondistributable artifacts to a registry on an air-gapped network so hosts on that network can pull the images without connecting to another server. > **Warning**: Nondistributable artifacts typically have restrictions > on how and where they can be distributed and shared. Only use this > feature to push artifacts to private registries and ensure that you > are in compliance with any terms that cover redistributing > nondistributable artifacts. type: "array" items: type: "string" example: ["::1/128", "127.0.0.0/8"] AllowNondistributableArtifactsHostnames: description: | List of registry hostnames to which nondistributable artifacts can be pushed, using the format `<hostname>[:<port>]` or `<IP address>[:<port>]`. Some images (for example, Windows base images) contain artifacts whose distribution is restricted by license. When these images are pushed to a registry, restricted artifacts are not included. This configuration override this behavior for the specified registries. 
This option is useful when pushing images containing nondistributable artifacts to a registry on an air-gapped network so hosts on that network can pull the images without connecting to another server. > **Warning**: Nondistributable artifacts typically have restrictions > on how and where they can be distributed and shared. Only use this > feature to push artifacts to private registries and ensure that you > are in compliance with any terms that cover redistributing > nondistributable artifacts. type: "array" items: type: "string" example: ["registry.internal.corp.example.com:3000", "[2001:db8:a0b:12f0::1]:443"] InsecureRegistryCIDRs: description: | List of IP ranges of insecure registries, using the CIDR syntax ([RFC 4632](https://tools.ietf.org/html/rfc4632)). Insecure registries accept un-encrypted (HTTP) and/or untrusted (HTTPS with certificates from unknown CAs) communication. By default, local registries (`127.0.0.0/8`) are configured as insecure. All other registries are secure. Communicating with an insecure registry is not possible if the daemon assumes that registry is secure. This configuration overrides this behavior, and enables insecure communication with registries whose resolved IP address is within the subnet described by the CIDR syntax. Registries can also be marked insecure by hostname. Those registries are listed under `IndexConfigs` and have their `Secure` field set to `false`. > **Warning**: Using this option can be useful when running a local > registry, but introduces security vulnerabilities. This option > should therefore ONLY be used for testing purposes. For increased > security, users should add their CA to their system's list of trusted > CAs instead of enabling this option. type: "array" items: type: "string" example: ["::1/128", "127.0.0.0/8"] IndexConfigs: type: "object" additionalProperties: $ref: "#/definitions/IndexInfo" example: "127.0.0.1:5000": "Name": "127.0.0.1:5000" "Mirrors": [] "Secure": false "Official": false "[2001:db8:a0b:12f0::1]:80": "Name": "[2001:db8:a0b:12f0::1]:80" "Mirrors": [] "Secure": false "Official": false "docker.io": Name: "docker.io" Mirrors: ["https://hub-mirror.corp.example.com:5000/"] Secure: true Official: true "registry.internal.corp.example.com:3000": Name: "registry.internal.corp.example.com:3000" Mirrors: [] Secure: false Official: false Mirrors: description: | List of registry URLs that act as a mirror for the official (`docker.io`) registry. type: "array" items: type: "string" example: - "https://hub-mirror.corp.example.com:5000/" - "https://[2001:db8:a0b:12f0::1]/" IndexInfo: description: IndexInfo contains information about a registry. type: "object" x-nullable: true properties: Name: description: | Name of the registry, such as "docker.io". type: "string" example: "docker.io" Mirrors: description: | List of mirrors, expressed as URIs. type: "array" items: type: "string" example: - "https://hub-mirror.corp.example.com:5000/" - "https://registry-2.docker.io/" - "https://registry-3.docker.io/" Secure: description: | Indicates whether the registry is considered secure. If `false`, the registry is configured as an insecure registry. Insecure registries accept un-encrypted (HTTP) and/or untrusted (HTTPS with certificates from unknown CAs) communication. > **Warning**: Insecure registries can be useful when running a local > registry. However, because its use creates security vulnerabilities > it should ONLY be enabled for testing purposes.
For increased > security, users should add their CA to their system's list of > trusted CAs instead of enabling this option. type: "boolean" example: true Official: description: | Indicates whether this is an official registry (i.e., Docker Hub / docker.io) type: "boolean" example: true Runtime: description: | Runtime describes an [OCI compliant](https://github.com/opencontainers/runtime-spec) runtime. The runtime is invoked by the daemon via the `containerd` daemon. OCI runtimes act as an interface to the Linux kernel namespaces, cgroups, and SELinux. type: "object" properties: path: description: | Name and, optionally, path of the OCI executable binary. If the path is omitted, the daemon searches the host's `$PATH` for the binary and uses the first result. type: "string" example: "/usr/local/bin/my-oci-runtime" runtimeArgs: description: | List of command-line arguments to pass to the runtime when invoked. type: "array" x-nullable: true items: type: "string" example: ["--debug", "--systemd-cgroup=false"] Commit: description: | Commit holds the Git-commit (SHA1) that a binary was built from, as reported in the version-string of external tools, such as `containerd`, or `runC`. type: "object" properties: ID: description: "Actual commit ID of external tool." type: "string" example: "cfb82a876ecc11b5ca0977d1733adbe58599088a" Expected: description: | Commit ID of external tool expected by dockerd as set at build time. type: "string" example: "2d41c047c83e09a6d61d464906feb2a2f3c52aa4" SwarmInfo: description: | Represents generic information about swarm. type: "object" properties: NodeID: description: "Unique identifier of this node in the swarm." type: "string" default: "" example: "k67qz4598weg5unwwffg6z1m1" NodeAddr: description: | IP address at which this node can be reached by other nodes in the swarm. type: "string" default: "" example: "10.0.0.46" LocalNodeState: $ref: "#/definitions/LocalNodeState" ControlAvailable: type: "boolean" default: false example: true Error: type: "string" default: "" RemoteManagers: description: | List of IDs and addresses of other managers in the swarm. type: "array" default: null x-nullable: true items: $ref: "#/definitions/PeerNode" example: - NodeID: "71izy0goik036k48jg985xnds" Addr: "10.0.0.158:2377" - NodeID: "79y6h1o4gv8n120drcprv5nmc" Addr: "10.0.0.159:2377" - NodeID: "k67qz4598weg5unwwffg6z1m1" Addr: "10.0.0.46:2377" Nodes: description: "Total number of nodes in the swarm." type: "integer" x-nullable: true example: 4 Managers: description: "Total number of managers in the swarm." type: "integer" x-nullable: true example: 3 Cluster: $ref: "#/definitions/ClusterInfo" LocalNodeState: description: "Current local status of this node." type: "string" default: "" enum: - "" - "inactive" - "pending" - "active" - "error" - "locked" example: "active" PeerNode: description: "Represents a peer-node in the swarm" properties: NodeID: description: "Unique identifier of this node in the swarm." type: "string" Addr: description: | IP address and ports at which this node can be reached. type: "string" NetworkAttachmentConfig: description: | Specifies how a service should be attached to a particular network. type: "object" properties: Target: description: | The target network for attachment. Must be a network name or ID. type: "string" Aliases: description: | Discoverable alternate names for the service on this network. type: "array" items: type: "string" DriverOpts: description: | Driver attachment options for the network target.
type: "object" additionalProperties: type: "string" paths: /containers/json: get: summary: "List containers" description: | Returns a list of containers. For details on the format, see the [inspect endpoint](#operation/ContainerInspect). Note that it uses a different, smaller representation of a container than inspecting a single container. For example, the list of linked containers is not propagated . operationId: "ContainerList" produces: - "application/json" parameters: - name: "all" in: "query" description: | Return all containers. By default, only running containers are shown. type: "boolean" default: false - name: "limit" in: "query" description: | Return this number of most recently created containers, including non-running ones. type: "integer" - name: "size" in: "query" description: | Return the size of container as fields `SizeRw` and `SizeRootFs`. type: "boolean" default: false - name: "filters" in: "query" description: | Filters to process on the container list, encoded as JSON (a `map[string][]string`). For example, `{"status": ["paused"]}` will only return paused containers. Available filters: - `ancestor`=(`<image-name>[:<tag>]`, `<image id>`, or `<image@digest>`) - `before`=(`<container id>` or `<container name>`) - `expose`=(`<port>[/<proto>]`|`<startport-endport>/[<proto>]`) - `exited=<int>` containers with exit code of `<int>` - `health`=(`starting`|`healthy`|`unhealthy`|`none`) - `id=<ID>` a container's ID - `isolation=`(`default`|`process`|`hyperv`) (Windows daemon only) - `is-task=`(`true`|`false`) - `label=key` or `label="key=value"` of a container label - `name=<name>` a container's name - `network`=(`<network id>` or `<network name>`) - `publish`=(`<port>[/<proto>]`|`<startport-endport>/[<proto>]`) - `since`=(`<container id>` or `<container name>`) - `status=`(`created`|`restarting`|`running`|`removing`|`paused`|`exited`|`dead`) - `volume`=(`<volume name>` or `<mount point destination>`) type: "string" responses: 200: description: "no error" schema: $ref: "#/definitions/ContainerSummary" examples: application/json: - Id: "8dfafdbc3a40" Names: - "/boring_feynman" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 1" Created: 1367854155 State: "Exited" Status: "Exit 0" Ports: - PrivatePort: 2222 PublicPort: 3333 Type: "tcp" Labels: com.example.vendor: "Acme" com.example.license: "GPL" com.example.version: "1.0" SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "2cdc4edb1ded3631c81f57966563e5c8525b81121bb3706a9a9a3ae102711f3f" Gateway: "172.17.0.1" IPAddress: "172.17.0.2" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:02" Mounts: - Name: "fac362...80535" Source: "/data" Destination: "/data" Driver: "local" Mode: "ro,Z" RW: false Propagation: "" - Id: "9cd87474be90" Names: - "/coolName" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 222222" Created: 1367854155 State: "Exited" Status: "Exit 0" Ports: [] Labels: {} SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "88eaed7b37b38c2a3f0c4bc796494fdf51b270c2d22656412a2ca5d559a64d7a" Gateway: "172.17.0.1" IPAddress: "172.17.0.8" IPPrefixLen: 16 IPv6Gateway: "" 
GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:08" Mounts: [] - Id: "3176a2479c92" Names: - "/sleepy_dog" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 3333333333333333" Created: 1367854154 State: "Exited" Status: "Exit 0" Ports: [] Labels: {} SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "8b27c041c30326d59cd6e6f510d4f8d1d570a228466f956edf7815508f78e30d" Gateway: "172.17.0.1" IPAddress: "172.17.0.6" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:06" Mounts: [] - Id: "4cb07b47f9fb" Names: - "/running_cat" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 444444444444444444444444444444444" Created: 1367854152 State: "Exited" Status: "Exit 0" Ports: [] Labels: {} SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "d91c7b2f0644403d7ef3095985ea0e2370325cd2332ff3a3225c4247328e66e9" Gateway: "172.17.0.1" IPAddress: "172.17.0.5" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:05" Mounts: [] 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Container"] /containers/create: post: summary: "Create a container" operationId: "ContainerCreate" consumes: - "application/json" - "application/octet-stream" produces: - "application/json" parameters: - name: "name" in: "query" description: | Assign the specified name to the container. Must match `/?[a-zA-Z0-9][a-zA-Z0-9_.-]+`. 
type: "string" pattern: "^/?[a-zA-Z0-9][a-zA-Z0-9_.-]+$" - name: "body" in: "body" description: "Container to create" schema: allOf: - $ref: "#/definitions/ContainerConfig" - type: "object" properties: HostConfig: $ref: "#/definitions/HostConfig" NetworkingConfig: $ref: "#/definitions/NetworkingConfig" example: Hostname: "" Domainname: "" User: "" AttachStdin: false AttachStdout: true AttachStderr: true Tty: false OpenStdin: false StdinOnce: false Env: - "FOO=bar" - "BAZ=quux" Cmd: - "date" Entrypoint: "" Image: "ubuntu" Labels: com.example.vendor: "Acme" com.example.license: "GPL" com.example.version: "1.0" Volumes: /volumes/data: {} WorkingDir: "" NetworkDisabled: false MacAddress: "12:34:56:78:9a:bc" ExposedPorts: 22/tcp: {} StopSignal: "SIGTERM" StopTimeout: 10 HostConfig: Binds: - "/tmp:/tmp" Links: - "redis3:redis" Memory: 0 MemorySwap: 0 MemoryReservation: 0 KernelMemory: 0 NanoCpus: 500000 CpuPercent: 80 CpuShares: 512 CpuPeriod: 100000 CpuRealtimePeriod: 1000000 CpuRealtimeRuntime: 10000 CpuQuota: 50000 CpusetCpus: "0,1" CpusetMems: "0,1" MaximumIOps: 0 MaximumIOBps: 0 BlkioWeight: 300 BlkioWeightDevice: - {} BlkioDeviceReadBps: - {} BlkioDeviceReadIOps: - {} BlkioDeviceWriteBps: - {} BlkioDeviceWriteIOps: - {} DeviceRequests: - Driver: "nvidia" Count: -1 DeviceIDs": ["0", "1", "GPU-fef8089b-4820-abfc-e83e-94318197576e"] Capabilities: [["gpu", "nvidia", "compute"]] Options: property1: "string" property2: "string" MemorySwappiness: 60 OomKillDisable: false OomScoreAdj: 500 PidMode: "" PidsLimit: 0 PortBindings: 22/tcp: - HostPort: "11022" PublishAllPorts: false Privileged: false ReadonlyRootfs: false Dns: - "8.8.8.8" DnsOptions: - "" DnsSearch: - "" VolumesFrom: - "parent" - "other:ro" CapAdd: - "NET_ADMIN" CapDrop: - "MKNOD" GroupAdd: - "newgroup" RestartPolicy: Name: "" MaximumRetryCount: 0 AutoRemove: true NetworkMode: "bridge" Devices: [] Ulimits: - {} LogConfig: Type: "json-file" Config: {} SecurityOpt: [] StorageOpt: {} CgroupParent: "" VolumeDriver: "" ShmSize: 67108864 NetworkingConfig: EndpointsConfig: isolated_nw: IPAMConfig: IPv4Address: "172.20.30.33" IPv6Address: "2001:db8:abcd::3033" LinkLocalIPs: - "169.254.34.68" - "fe80::3468" Links: - "container_1" - "container_2" Aliases: - "server_x" - "server_y" required: true responses: 201: description: "Container created successfully" schema: type: "object" title: "ContainerCreateResponse" description: "OK response to ContainerCreate operation" required: [Id, Warnings] properties: Id: description: "The ID of the created container" type: "string" x-nullable: false Warnings: description: "Warnings encountered when creating the container" type: "array" x-nullable: false items: type: "string" examples: application/json: Id: "e90e34656806" Warnings: [] 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such image" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such image: c2ada9df5af8" 409: description: "conflict" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Container"] /containers/{id}/json: get: summary: "Inspect a container" description: "Return low-level information about a container." 
operationId: "ContainerInspect" produces: - "application/json" responses: 200: description: "no error" schema: type: "object" title: "ContainerInspectResponse" properties: Id: description: "The ID of the container" type: "string" Created: description: "The time the container was created" type: "string" Path: description: "The path to the command being run" type: "string" Args: description: "The arguments to the command being run" type: "array" items: type: "string" State: x-nullable: true $ref: "#/definitions/ContainerState" Image: description: "The container's image ID" type: "string" ResolvConfPath: type: "string" HostnamePath: type: "string" HostsPath: type: "string" LogPath: type: "string" Name: type: "string" RestartCount: type: "integer" Driver: type: "string" Platform: type: "string" MountLabel: type: "string" ProcessLabel: type: "string" AppArmorProfile: type: "string" ExecIDs: description: "IDs of exec instances that are running in the container." type: "array" items: type: "string" x-nullable: true HostConfig: $ref: "#/definitions/HostConfig" GraphDriver: $ref: "#/definitions/GraphDriverData" SizeRw: description: | The size of files that have been created or changed by this container. type: "integer" format: "int64" SizeRootFs: description: "The total size of all the files in this container." type: "integer" format: "int64" Mounts: type: "array" items: $ref: "#/definitions/MountPoint" Config: $ref: "#/definitions/ContainerConfig" NetworkSettings: $ref: "#/definitions/NetworkSettings" examples: application/json: AppArmorProfile: "" Args: - "-c" - "exit 9" Config: AttachStderr: true AttachStdin: false AttachStdout: true Cmd: - "/bin/sh" - "-c" - "exit 9" Domainname: "" Env: - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" Healthcheck: Test: ["CMD-SHELL", "exit 0"] Hostname: "ba033ac44011" Image: "ubuntu" Labels: com.example.vendor: "Acme" com.example.license: "GPL" com.example.version: "1.0" MacAddress: "" NetworkDisabled: false OpenStdin: false StdinOnce: false Tty: false User: "" Volumes: /volumes/data: {} WorkingDir: "" StopSignal: "SIGTERM" StopTimeout: 10 Created: "2015-01-06T15:47:31.485331387Z" Driver: "devicemapper" ExecIDs: - "b35395de42bc8abd327f9dd65d913b9ba28c74d2f0734eeeae84fa1c616a0fca" - "3fc1232e5cd20c8de182ed81178503dc6437f4e7ef12b52cc5e8de020652f1c4" HostConfig: MaximumIOps: 0 MaximumIOBps: 0 BlkioWeight: 0 BlkioWeightDevice: - {} BlkioDeviceReadBps: - {} BlkioDeviceWriteBps: - {} BlkioDeviceReadIOps: - {} BlkioDeviceWriteIOps: - {} ContainerIDFile: "" CpusetCpus: "" CpusetMems: "" CpuPercent: 80 CpuShares: 0 CpuPeriod: 100000 CpuRealtimePeriod: 1000000 CpuRealtimeRuntime: 10000 Devices: [] DeviceRequests: - Driver: "nvidia" Count: -1 DeviceIDs": ["0", "1", "GPU-fef8089b-4820-abfc-e83e-94318197576e"] Capabilities: [["gpu", "nvidia", "compute"]] Options: property1: "string" property2: "string" IpcMode: "" LxcConf: [] Memory: 0 MemorySwap: 0 MemoryReservation: 0 KernelMemory: 0 OomKillDisable: false OomScoreAdj: 500 NetworkMode: "bridge" PidMode: "" PortBindings: {} Privileged: false ReadonlyRootfs: false PublishAllPorts: false RestartPolicy: MaximumRetryCount: 2 Name: "on-failure" LogConfig: Type: "json-file" Sysctls: net.ipv4.ip_forward: "1" Ulimits: - {} VolumeDriver: "" ShmSize: 67108864 HostnamePath: "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/hostname" HostsPath: "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/hosts" LogPath: 
"/var/lib/docker/containers/1eb5fabf5a03807136561b3c00adcd2992b535d624d5e18b6cdc6a6844d9767b/1eb5fabf5a03807136561b3c00adcd2992b535d624d5e18b6cdc6a6844d9767b-json.log" Id: "ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39" Image: "04c5d3b7b0656168630d3ba35d8889bd0e9caafcaeb3004d2bfbc47e7c5d35d2" MountLabel: "" Name: "/boring_euclid" NetworkSettings: Bridge: "" SandboxID: "" HairpinMode: false LinkLocalIPv6Address: "" LinkLocalIPv6PrefixLen: 0 SandboxKey: "" EndpointID: "" Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 IPAddress: "" IPPrefixLen: 0 IPv6Gateway: "" MacAddress: "" Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "7587b82f0dada3656fda26588aee72630c6fab1536d36e394b2bfbcf898c971d" Gateway: "172.17.0.1" IPAddress: "172.17.0.2" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:12:00:02" Path: "/bin/sh" ProcessLabel: "" ResolvConfPath: "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/resolv.conf" RestartCount: 1 State: Error: "" ExitCode: 9 FinishedAt: "2015-01-06T15:47:32.080254511Z" Health: Status: "healthy" FailingStreak: 0 Log: - Start: "2019-12-22T10:59:05.6385933Z" End: "2019-12-22T10:59:05.8078452Z" ExitCode: 0 Output: "" OOMKilled: false Dead: false Paused: false Pid: 0 Restarting: false Running: true StartedAt: "2015-01-06T15:47:32.072697474Z" Status: "running" Mounts: - Name: "fac362...80535" Source: "/data" Destination: "/data" Driver: "local" Mode: "ro,Z" RW: false Propagation: "" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "size" in: "query" type: "boolean" default: false description: "Return the size of container as fields `SizeRw` and `SizeRootFs`" tags: ["Container"] /containers/{id}/top: get: summary: "List processes running inside a container" description: | On Unix systems, this is done by running the `ps` command. This endpoint is not supported on Windows. operationId: "ContainerTop" responses: 200: description: "no error" schema: type: "object" title: "ContainerTopResponse" description: "OK response to ContainerTop operation" properties: Titles: description: "The ps column titles" type: "array" items: type: "string" Processes: description: | Each process running in the container, where each is process is an array of values corresponding to the titles. type: "array" items: type: "array" items: type: "string" examples: application/json: Titles: - "UID" - "PID" - "PPID" - "C" - "STIME" - "TTY" - "TIME" - "CMD" Processes: - - "root" - "13642" - "882" - "0" - "17:03" - "pts/0" - "00:00:00" - "/bin/bash" - - "root" - "13735" - "13642" - "0" - "17:06" - "pts/0" - "00:00:00" - "sleep 10" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "ps_args" in: "query" description: "The arguments to pass to `ps`. 
For example, `aux`" type: "string" default: "-ef" tags: ["Container"] /containers/{id}/logs: get: summary: "Get container logs" description: | Get `stdout` and `stderr` logs from a container. Note: This endpoint works only for containers with the `json-file` or `journald` logging driver. operationId: "ContainerLogs" responses: 200: description: | logs returned as a stream in response body. For the stream format, [see the documentation for the attach endpoint](#operation/ContainerAttach). Note that unlike the attach endpoint, the logs endpoint does not upgrade the connection and does not set Content-Type. schema: type: "string" format: "binary" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "follow" in: "query" description: "Keep connection after returning logs." type: "boolean" default: false - name: "stdout" in: "query" description: "Return logs from `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Return logs from `stderr`" type: "boolean" default: false - name: "since" in: "query" description: "Only return logs since this time, as a UNIX timestamp" type: "integer" default: 0 - name: "until" in: "query" description: "Only return logs before this time, as a UNIX timestamp" type: "integer" default: 0 - name: "timestamps" in: "query" description: "Add timestamps to every log line" type: "boolean" default: false - name: "tail" in: "query" description: | Only return this number of log lines from the end of the logs. Specify as an integer or `all` to output all log lines. type: "string" default: "all" tags: ["Container"] /containers/{id}/changes: get: summary: "Get changes on a container’s filesystem" description: | Returns which files in a container's filesystem have been added, deleted, or modified. The `Kind` of modification can be one of: - `0`: Modified - `1`: Added - `2`: Deleted operationId: "ContainerChanges" produces: ["application/json"] responses: 200: description: "The list of changes" schema: type: "array" items: type: "object" x-go-name: "ContainerChangeResponseItem" title: "ContainerChangeResponseItem" description: "change item in response to ContainerChanges operation" required: [Path, Kind] properties: Path: description: "Path to file that has changed" type: "string" x-nullable: false Kind: description: "Kind of change" type: "integer" format: "uint8" enum: [0, 1, 2] x-nullable: false examples: application/json: - Path: "/dev" Kind: 0 - Path: "/dev/kmsg" Kind: 1 - Path: "/test" Kind: 1 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/export: get: summary: "Export a container" description: "Export the contents of a container as a tarball." 
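# A short, illustrative Go sketch of decoding the ContainerChanges response documented above;
# the payload is the example shown for that endpoint and the Kind-to-name mapping follows the
# 0/1/2 values listed there.
#
#   package main
#
#   import (
#       "encoding/json"
#       "fmt"
#   )
#
#   func main() {
#       payload := []byte(`[{"Path":"/dev","Kind":0},{"Path":"/dev/kmsg","Kind":1},{"Path":"/test","Kind":1}]`)
#       var changes []struct {
#           Path string
#           Kind int
#       }
#       if err := json.Unmarshal(payload, &changes); err != nil {
#           panic(err)
#       }
#       kinds := map[int]string{0: "Modified", 1: "Added", 2: "Deleted"}
#       for _, c := range changes {
#           fmt.Printf("%-8s %s\n", kinds[c.Kind], c.Path)
#       }
#   }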
operationId: "ContainerExport" produces: - "application/octet-stream" responses: 200: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/stats: get: summary: "Get container stats based on resource usage" description: | This endpoint returns a live stream of a container’s resource usage statistics. The `precpu_stats` is the CPU statistic of the *previous* read, and is used to calculate the CPU usage percentage. It is not an exact copy of the `cpu_stats` field. If either `precpu_stats.online_cpus` or `cpu_stats.online_cpus` is nil then for compatibility with older daemons the length of the corresponding `cpu_usage.percpu_usage` array should be used. On a cgroup v2 host, the following fields are not set * `blkio_stats`: all fields other than `io_service_bytes_recursive` * `cpu_stats`: `cpu_usage.percpu_usage` * `memory_stats`: `max_usage` and `failcnt` Also, `memory_stats.stats` fields are incompatible with cgroup v1. To calculate the values shown by the `stats` command of the docker cli tool the following formulas can be used: * used_memory = `memory_stats.usage - memory_stats.stats.cache` * available_memory = `memory_stats.limit` * Memory usage % = `(used_memory / available_memory) * 100.0` * cpu_delta = `cpu_stats.cpu_usage.total_usage - precpu_stats.cpu_usage.total_usage` * system_cpu_delta = `cpu_stats.system_cpu_usage - precpu_stats.system_cpu_usage` * number_cpus = `lenght(cpu_stats.cpu_usage.percpu_usage)` or `cpu_stats.online_cpus` * CPU usage % = `(cpu_delta / system_cpu_delta) * number_cpus * 100.0` operationId: "ContainerStats" produces: ["application/json"] responses: 200: description: "no error" schema: type: "object" examples: application/json: read: "2015-01-08T22:57:31.547920715Z" pids_stats: current: 3 networks: eth0: rx_bytes: 5338 rx_dropped: 0 rx_errors: 0 rx_packets: 36 tx_bytes: 648 tx_dropped: 0 tx_errors: 0 tx_packets: 8 eth5: rx_bytes: 4641 rx_dropped: 0 rx_errors: 0 rx_packets: 26 tx_bytes: 690 tx_dropped: 0 tx_errors: 0 tx_packets: 9 memory_stats: stats: total_pgmajfault: 0 cache: 0 mapped_file: 0 total_inactive_file: 0 pgpgout: 414 rss: 6537216 total_mapped_file: 0 writeback: 0 unevictable: 0 pgpgin: 477 total_unevictable: 0 pgmajfault: 0 total_rss: 6537216 total_rss_huge: 6291456 total_writeback: 0 total_inactive_anon: 0 rss_huge: 6291456 hierarchical_memory_limit: 67108864 total_pgfault: 964 total_active_file: 0 active_anon: 6537216 total_active_anon: 6537216 total_pgpgout: 414 total_cache: 0 inactive_anon: 0 active_file: 0 pgfault: 964 inactive_file: 0 total_pgpgin: 477 max_usage: 6651904 usage: 6537216 failcnt: 0 limit: 67108864 blkio_stats: {} cpu_stats: cpu_usage: percpu_usage: - 8646879 - 24472255 - 36438778 - 30657443 usage_in_usermode: 50000000 total_usage: 100215355 usage_in_kernelmode: 30000000 system_cpu_usage: 739306590000000 online_cpus: 4 throttling_data: periods: 0 throttled_periods: 0 throttled_time: 0 precpu_stats: cpu_usage: percpu_usage: - 8646879 - 24350896 - 36438778 - 30657443 usage_in_usermode: 50000000 total_usage: 100093996 usage_in_kernelmode: 30000000 system_cpu_usage: 9492140000000 online_cpus: 4 throttling_data: periods: 0 throttled_periods: 0 throttled_time: 0 404: description: "no such 
container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "stream" in: "query" description: | Stream the output. If false, the stats will be output once and then it will disconnect. type: "boolean" default: true - name: "one-shot" in: "query" description: | Only get a single stat instead of waiting for 2 cycles. Must be used with `stream=false`. type: "boolean" default: false tags: ["Container"] /containers/{id}/resize: post: summary: "Resize a container TTY" description: "Resize the TTY for a container." operationId: "ContainerResize" consumes: - "application/octet-stream" produces: - "text/plain" responses: 200: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "cannot resize container" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "h" in: "query" description: "Height of the TTY session in characters" type: "integer" - name: "w" in: "query" description: "Width of the TTY session in characters" type: "integer" tags: ["Container"] /containers/{id}/start: post: summary: "Start a container" operationId: "ContainerStart" responses: 204: description: "no error" 304: description: "container already started" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "detachKeys" in: "query" description: | Override the key sequence for detaching a container. Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,` or `_`. 
type: "string" tags: ["Container"] /containers/{id}/stop: post: summary: "Stop a container" operationId: "ContainerStop" responses: 204: description: "no error" 304: description: "container already stopped" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "t" in: "query" description: "Number of seconds to wait before killing the container" type: "integer" tags: ["Container"] /containers/{id}/restart: post: summary: "Restart a container" operationId: "ContainerRestart" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "t" in: "query" description: "Number of seconds to wait before killing the container" type: "integer" tags: ["Container"] /containers/{id}/kill: post: summary: "Kill a container" description: | Send a POSIX signal to a container, defaulting to killing to the container. operationId: "ContainerKill" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "container is not running" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "Container d37cde0fe4ad63c3a7252023b2f9800282894247d145cb5933ddf6e52cc03a28 is not running" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "signal" in: "query" description: "Signal to send to the container as an integer or string (e.g. `SIGINT`)" type: "string" default: "SIGKILL" tags: ["Container"] /containers/{id}/update: post: summary: "Update a container" description: | Change various configuration options of a container without having to recreate it. operationId: "ContainerUpdate" consumes: ["application/json"] produces: ["application/json"] responses: 200: description: "The container has been updated." 
schema: type: "object" title: "ContainerUpdateResponse" description: "OK response to ContainerUpdate operation" properties: Warnings: type: "array" items: type: "string" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "update" in: "body" required: true schema: allOf: - $ref: "#/definitions/Resources" - type: "object" properties: RestartPolicy: $ref: "#/definitions/RestartPolicy" example: BlkioWeight: 300 CpuShares: 512 CpuPeriod: 100000 CpuQuota: 50000 CpuRealtimePeriod: 1000000 CpuRealtimeRuntime: 10000 CpusetCpus: "0,1" CpusetMems: "0" Memory: 314572800 MemorySwap: 514288000 MemoryReservation: 209715200 KernelMemory: 52428800 RestartPolicy: MaximumRetryCount: 4 Name: "on-failure" tags: ["Container"] /containers/{id}/rename: post: summary: "Rename a container" operationId: "ContainerRename" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "name already in use" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "name" in: "query" required: true description: "New name for the container" type: "string" tags: ["Container"] /containers/{id}/pause: post: summary: "Pause a container" description: | Use the freezer cgroup to suspend all processes in a container. Traditionally, when suspending a process the `SIGSTOP` signal is used, which is observable by the process being suspended. With the freezer cgroup the process is unaware, and unable to capture, that it is being suspended, and subsequently resumed. operationId: "ContainerPause" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/unpause: post: summary: "Unpause a container" description: "Resume a container which has been paused." operationId: "ContainerUnpause" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/attach: post: summary: "Attach to a container" description: | Attach to a container to read its output or send it input. You can attach to the same container multiple times and you can reattach to containers that have been detached. Either the `stream` or `logs` parameter must be `true` for this endpoint to do anything. See the [documentation for the `docker attach` command](https://docs.docker.com/engine/reference/commandline/attach/) for more details. 
### Hijacking This endpoint hijacks the HTTP connection to transport `stdin`, `stdout`, and `stderr` on the same socket. This is the response from the daemon for an attach request: ``` HTTP/1.1 200 OK Content-Type: application/vnd.docker.raw-stream [STREAM] ``` After the headers and two new lines, the TCP connection can now be used for raw, bidirectional communication between the client and server. To hint potential proxies about connection hijacking, the Docker client can also optionally send connection upgrade headers. For example, the client sends this request to upgrade the connection: ``` POST /containers/16253994b7c4/attach?stream=1&stdout=1 HTTP/1.1 Upgrade: tcp Connection: Upgrade ``` The Docker daemon will respond with a `101 UPGRADED` response, and will similarly follow with the raw stream: ``` HTTP/1.1 101 UPGRADED Content-Type: application/vnd.docker.raw-stream Connection: Upgrade Upgrade: tcp [STREAM] ``` ### Stream format When the TTY setting is disabled in [`POST /containers/create`](#operation/ContainerCreate), the stream over the hijacked connection is multiplexed to separate out `stdout` and `stderr`. The stream consists of a series of frames, each containing a header and a payload. The header identifies the stream that the payload belongs to (`stdout` or `stderr`). It also contains the size of the associated frame encoded in the last four bytes (`uint32`). It is encoded on the first eight bytes like this: ```go header := [8]byte{STREAM_TYPE, 0, 0, 0, SIZE1, SIZE2, SIZE3, SIZE4} ``` `STREAM_TYPE` can be: - 0: `stdin` (is written on `stdout`) - 1: `stdout` - 2: `stderr` `SIZE1, SIZE2, SIZE3, SIZE4` are the four bytes of the `uint32` size encoded as big endian. Following the header is the payload, which is the specified number of bytes of `STREAM_TYPE`. The simplest way to implement this protocol is the following: 1. Read 8 bytes. 2. Choose `stdout` or `stderr` depending on the first byte. 3. Extract the frame size from the last four bytes. 4. Read the extracted size and output it on the correct output. 5. Goto 1. ### Stream format when using a TTY When the TTY setting is enabled in [`POST /containers/create`](#operation/ContainerCreate), the stream is not multiplexed. The data exchanged over the hijacked connection is simply the raw data from the process PTY and client's `stdin`. operationId: "ContainerAttach" produces: - "application/vnd.docker.raw-stream" responses: 101: description: "no error, hints proxy about hijacking" 200: description: "no error, no upgrade header found" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "detachKeys" in: "query" description: | Override the key sequence for detaching a container. Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,` or `_`. type: "string" - name: "logs" in: "query" description: | Replay previous logs from the container. This is useful for attaching to a container that has started and you want to output everything since the container started. If `stream` is also enabled, once all the previous output has been returned, it will seamlessly transition into streaming current output.
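# An illustrative Go implementation of the five-step demultiplexing loop described above for the
# raw (non-TTY) attach stream; the reader would typically be the hijacked connection returned by
# this endpoint.
#
#   package main
#
#   import (
#       "encoding/binary"
#       "errors"
#       "fmt"
#       "io"
#       "os"
#   )
#
#   func demux(r io.Reader, stdout, stderr io.Writer) error {
#       header := make([]byte, 8)
#       for {
#           // 1. Read the 8-byte frame header.
#           if _, err := io.ReadFull(r, header); err != nil {
#               if errors.Is(err, io.EOF) {
#                   return nil
#               }
#               return err
#           }
#           // 3. The frame size is a big-endian uint32 in the last four bytes.
#           size := binary.BigEndian.Uint32(header[4:8])
#           // 2. Pick the destination from the first byte (STREAM_TYPE).
#           var dst io.Writer
#           switch header[0] {
#           case 0, 1: // stdin (written on stdout) and stdout
#               dst = stdout
#           case 2: // stderr
#               dst = stderr
#           default:
#               return fmt.Errorf("unknown stream type %d", header[0])
#           }
#           // 4. Copy exactly `size` payload bytes, then 5. repeat.
#           if _, err := io.CopyN(dst, r, int64(size)); err != nil {
#               return err
#           }
#       }
#   }
#
#   func main() {
#       // Stand-in reader; in practice this is the hijacked connection.
#       _ = demux(os.Stdin, os.Stdout, os.Stderr)
#   }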
type: "boolean" default: false - name: "stream" in: "query" description: | Stream attached streams from the time the request was made onwards. type: "boolean" default: false - name: "stdin" in: "query" description: "Attach to `stdin`" type: "boolean" default: false - name: "stdout" in: "query" description: "Attach to `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Attach to `stderr`" type: "boolean" default: false tags: ["Container"] /containers/{id}/attach/ws: get: summary: "Attach to a container via a websocket" operationId: "ContainerAttachWebsocket" responses: 101: description: "no error, hints proxy about hijacking" 200: description: "no error, no upgrade header found" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "detachKeys" in: "query" description: | Override the key sequence for detaching a container.Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,`, or `_`. type: "string" - name: "logs" in: "query" description: "Return logs" type: "boolean" default: false - name: "stream" in: "query" description: "Return stream" type: "boolean" default: false - name: "stdin" in: "query" description: "Attach to `stdin`" type: "boolean" default: false - name: "stdout" in: "query" description: "Attach to `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Attach to `stderr`" type: "boolean" default: false tags: ["Container"] /containers/{id}/wait: post: summary: "Wait for a container" description: "Block until a container stops, then returns the exit code." operationId: "ContainerWait" produces: ["application/json"] responses: 200: description: "The container has exit." schema: type: "object" title: "ContainerWaitResponse" description: "OK response to ContainerWait operation" required: [StatusCode] properties: StatusCode: description: "Exit code of the container" type: "integer" x-nullable: false Error: description: "container waiting error, if any" type: "object" properties: Message: description: "Details of an error" type: "string" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "condition" in: "query" description: | Wait until a container state reaches the given condition, either 'not-running' (default), 'next-exit', or 'removed'. type: "string" default: "not-running" tags: ["Container"] /containers/{id}: delete: summary: "Remove a container" operationId: "ContainerDelete" responses: 204: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "conflict" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: | You cannot remove a running container: c2ada9df5af8. 
Stop the container before attempting removal or force remove 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "v" in: "query" description: "Remove anonymous volumes associated with the container." type: "boolean" default: false - name: "force" in: "query" description: "If the container is running, kill it before removing it." type: "boolean" default: false - name: "link" in: "query" description: "Remove the specified link associated with the container." type: "boolean" default: false tags: ["Container"] /containers/{id}/archive: head: summary: "Get information about files in a container" description: | A response header `X-Docker-Container-Path-Stat` is returned, containing a base64 - encoded JSON object with some filesystem header information about the path. operationId: "ContainerArchiveInfo" responses: 200: description: "no error" headers: X-Docker-Container-Path-Stat: type: "string" description: | A base64 - encoded JSON object with some filesystem header information about the path 400: description: "Bad parameter" schema: allOf: - $ref: "#/definitions/ErrorResponse" - type: "object" properties: message: description: | The error message. Either "must specify path parameter" (path cannot be empty) or "not a directory" (path was asserted to be a directory but exists as a file). type: "string" x-nullable: false 404: description: "Container or path does not exist" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "path" in: "query" required: true description: "Resource in the container’s filesystem to archive." type: "string" tags: ["Container"] get: summary: "Get an archive of a filesystem resource in a container" description: "Get a tar archive of a resource in the filesystem of container id." operationId: "ContainerArchive" produces: ["application/x-tar"] responses: 200: description: "no error" 400: description: "Bad parameter" schema: allOf: - $ref: "#/definitions/ErrorResponse" - type: "object" properties: message: description: | The error message. Either "must specify path parameter" (path cannot be empty) or "not a directory" (path was asserted to be a directory but exists as a file). type: "string" x-nullable: false 404: description: "Container or path does not exist" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "path" in: "query" required: true description: "Resource in the container’s filesystem to archive." type: "string" tags: ["Container"] put: summary: "Extract an archive of files or folders to a directory in a container" description: "Upload a tar archive to be extracted to a path in the filesystem of container id." 
operationId: "PutContainerArchive" consumes: ["application/x-tar", "application/octet-stream"] responses: 200: description: "The content was extracted successfully" 400: description: "Bad parameter" schema: $ref: "#/definitions/ErrorResponse" 403: description: "Permission denied, the volume or container rootfs is marked as read-only." schema: $ref: "#/definitions/ErrorResponse" 404: description: "No such container or path does not exist inside the container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "path" in: "query" required: true description: "Path to a directory in the container to extract the archive’s contents into. " type: "string" - name: "noOverwriteDirNonDir" in: "query" description: | If `1`, `true`, or `True` then it will be an error if unpacking the given content would cause an existing directory to be replaced with a non-directory and vice versa. type: "string" - name: "copyUIDGID" in: "query" description: | If `1`, `true`, then it will copy UID/GID maps to the dest file or dir type: "string" - name: "inputStream" in: "body" required: true description: | The input stream must be a tar archive compressed with one of the following algorithms: `identity` (no compression), `gzip`, `bzip2`, or `xz`. schema: type: "string" format: "binary" tags: ["Container"] /containers/prune: post: summary: "Delete stopped containers" produces: - "application/json" operationId: "ContainerPrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `until=<timestamp>` Prune containers created before this timestamp. The `<timestamp>` can be Unix timestamps, date formatted timestamps, or Go duration strings (e.g. `10m`, `1h30m`) computed relative to the daemon machine’s time. - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune containers with (or without, in case `label!=...` is used) the specified labels. type: "string" responses: 200: description: "No error" schema: type: "object" title: "ContainerPruneResponse" properties: ContainersDeleted: description: "Container IDs that were deleted" type: "array" items: type: "string" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Container"] /images/json: get: summary: "List Images" description: "Returns a list of images on the server. Note that it uses a different, smaller representation of an image than inspecting a single image." 
operationId: "ImageList" produces: - "application/json" responses: 200: description: "Summary image data for the images matching the query" schema: type: "array" items: $ref: "#/definitions/ImageSummary" examples: application/json: - Id: "sha256:e216a057b1cb1efc11f8a268f37ef62083e70b1b38323ba252e25ac88904a7e8" ParentId: "" RepoTags: - "ubuntu:12.04" - "ubuntu:precise" RepoDigests: - "ubuntu@sha256:992069aee4016783df6345315302fa59681aae51a8eeb2f889dea59290f21787" Created: 1474925151 Size: 103579269 VirtualSize: 103579269 SharedSize: 0 Labels: {} Containers: 2 - Id: "sha256:3e314f95dcace0f5e4fd37b10862fe8398e3c60ed36600bc0ca5fda78b087175" ParentId: "" RepoTags: - "ubuntu:12.10" - "ubuntu:quantal" RepoDigests: - "ubuntu@sha256:002fba3e3255af10be97ea26e476692a7ebed0bb074a9ab960b2e7a1526b15d7" - "ubuntu@sha256:68ea0200f0b90df725d99d823905b04cf844f6039ef60c60bf3e019915017bd3" Created: 1403128455 Size: 172064416 VirtualSize: 172064416 SharedSize: 0 Labels: {} Containers: 5 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "all" in: "query" description: "Show all images. Only images from a final layer (no children) are shown by default." type: "boolean" default: false - name: "filters" in: "query" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the images list. Available filters: - `before`=(`<image-name>[:<tag>]`, `<image id>` or `<image@digest>`) - `dangling=true` - `label=key` or `label="key=value"` of an image label - `reference`=(`<image-name>[:<tag>]`) - `since`=(`<image-name>[:<tag>]`, `<image id>` or `<image@digest>`) type: "string" - name: "shared-size" in: "query" description: "Compute and show shared size as a `SharedSize` field on each image." type: "boolean" default: false - name: "digests" in: "query" description: "Show digest information as a `RepoDigests` field on each image." type: "boolean" default: false tags: ["Image"] /build: post: summary: "Build an image" description: | Build an image from a tar archive with a `Dockerfile` in it. The `Dockerfile` specifies how the image is built from the tar archive. It is typically in the archive's root, but can be at a different path or have a different name by specifying the `dockerfile` parameter. [See the `Dockerfile` reference for more information](https://docs.docker.com/engine/reference/builder/). The Docker daemon performs a preliminary validation of the `Dockerfile` before starting the build, and returns an error if the syntax is incorrect. After that, each instruction is run one-by-one until the ID of the new image is output. The build is canceled if the client drops the connection by quitting or being killed. operationId: "ImageBuild" consumes: - "application/octet-stream" produces: - "application/json" parameters: - name: "inputStream" in: "body" description: "A tar archive compressed with one of the following algorithms: identity (no compression), gzip, bzip2, xz." schema: type: "string" format: "binary" - name: "dockerfile" in: "query" description: "Path within the build context to the `Dockerfile`. This is ignored if `remote` is specified and points to an external `Dockerfile`." type: "string" default: "Dockerfile" - name: "t" in: "query" description: "A name and optional tag to apply to the image in the `name:tag` format. If you omit the tag the default `latest` value is assumed. You can provide several `t` parameters." 
type: "string" - name: "extrahosts" in: "query" description: "Extra hosts to add to /etc/hosts" type: "string" - name: "remote" in: "query" description: "A Git repository URI or HTTP/HTTPS context URI. If the URI points to a single text file, the file’s contents are placed into a file called `Dockerfile` and the image is built from that file. If the URI points to a tarball, the file is downloaded by the daemon and the contents therein used as the context for the build. If the URI points to a tarball and the `dockerfile` parameter is also specified, there must be a file with the corresponding path inside the tarball." type: "string" - name: "q" in: "query" description: "Suppress verbose build output." type: "boolean" default: false - name: "nocache" in: "query" description: "Do not use the cache when building the image." type: "boolean" default: false - name: "cachefrom" in: "query" description: "JSON array of images used for build cache resolution." type: "string" - name: "pull" in: "query" description: "Attempt to pull the image even if an older image exists locally." type: "string" - name: "rm" in: "query" description: "Remove intermediate containers after a successful build." type: "boolean" default: true - name: "forcerm" in: "query" description: "Always remove intermediate containers, even upon failure." type: "boolean" default: false - name: "memory" in: "query" description: "Set memory limit for build." type: "integer" - name: "memswap" in: "query" description: "Total memory (memory + swap). Set as `-1` to disable swap." type: "integer" - name: "cpushares" in: "query" description: "CPU shares (relative weight)." type: "integer" - name: "cpusetcpus" in: "query" description: "CPUs in which to allow execution (e.g., `0-3`, `0,1`)." type: "string" - name: "cpuperiod" in: "query" description: "The length of a CPU period in microseconds." type: "integer" - name: "cpuquota" in: "query" description: "Microseconds of CPU time that the container can get in a CPU period." type: "integer" - name: "buildargs" in: "query" description: > JSON map of string pairs for build-time variables. Users pass these values at build-time. Docker uses the buildargs as the environment context for commands run via the `Dockerfile` RUN instruction, or for variable expansion in other `Dockerfile` instructions. This is not meant for passing secret values. For example, the build arg `FOO=bar` would become `{"FOO":"bar"}` in JSON. This would result in the query parameter `buildargs={"FOO":"bar"}`. Note that `{"FOO":"bar"}` should be URI component encoded. [Read more about the buildargs instruction.](https://docs.docker.com/engine/reference/builder/#arg) type: "string" - name: "shmsize" in: "query" description: "Size of `/dev/shm` in bytes. The size must be greater than 0. If omitted the system uses 64MB." type: "integer" - name: "squash" in: "query" description: "Squash the resulting images layers into a single layer. *(Experimental release only.)*" type: "boolean" - name: "labels" in: "query" description: "Arbitrary key/value labels to set on the image, as a JSON map of string pairs." type: "string" - name: "networkmode" in: "query" description: | Sets the networking mode for the run commands during build. Supported standard values are: `bridge`, `host`, `none`, and `container:<name|id>`. Any other value is taken as a custom network's name or ID to which this container should connect to. 
type: "string" - name: "Content-type" in: "header" type: "string" enum: - "application/x-tar" default: "application/x-tar" - name: "X-Registry-Config" in: "header" description: | This is a base64-encoded JSON object with auth configurations for multiple registries that a build may refer to. The key is a registry URL, and the value is an auth configuration object, [as described in the authentication section](#section/Authentication). For example: ``` { "docker.example.com": { "username": "janedoe", "password": "hunter2" }, "https://index.docker.io/v1/": { "username": "mobydock", "password": "conta1n3rize14" } } ``` Only the registry domain name (and port if not the default 443) are required. However, for legacy reasons, the Docker Hub registry must be specified with both a `https://` prefix and a `/v1/` suffix even though Docker will prefer to use the v2 registry API. type: "string" - name: "platform" in: "query" description: "Platform in the format os[/arch[/variant]]" type: "string" default: "" - name: "target" in: "query" description: "Target build stage" type: "string" default: "" - name: "outputs" in: "query" description: "BuildKit output configuration" type: "string" default: "" responses: 200: description: "no error" 400: description: "Bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Image"] /build/prune: post: summary: "Delete builder cache" produces: - "application/json" operationId: "BuildPrune" parameters: - name: "keep-storage" in: "query" description: "Amount of disk space in bytes to keep for cache" type: "integer" format: "int64" - name: "all" in: "query" type: "boolean" description: "Remove all types of build cache" - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the list of build cache objects. Available filters: - `until=<duration>`: duration relative to daemon's time, during which build cache was not used, in Go's duration format (e.g., '24h') - `id=<id>` - `parent=<id>` - `type=<string>` - `description=<string>` - `inuse` - `shared` - `private` responses: 200: description: "No error" schema: type: "object" title: "BuildPruneResponse" properties: CachesDeleted: type: "array" items: description: "ID of build cache object" type: "string" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Image"] /images/create: post: summary: "Create an image" description: "Create an image by either pulling it from a registry or importing it." operationId: "ImageCreate" consumes: - "text/plain" - "application/octet-stream" produces: - "application/json" responses: 200: description: "no error" 404: description: "repository does not exist or no read access" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "fromImage" in: "query" description: "Name of the image to pull. The name may include a tag or digest. This parameter may only be used when pulling an image. The pull is cancelled if the HTTP connection is closed." type: "string" - name: "fromSrc" in: "query" description: "Source to import. The value may be a URL from which the image can be retrieved or `-` to read the image from the request body. This parameter may only be used when importing an image." 
type: "string" - name: "repo" in: "query" description: "Repository name given to an image when it is imported. The repo may include a tag. This parameter may only be used when importing an image." type: "string" - name: "tag" in: "query" description: "Tag or digest. If empty when pulling an image, this causes all tags for the given image to be pulled." type: "string" - name: "message" in: "query" description: "Set commit message for imported image." type: "string" - name: "inputImage" in: "body" description: "Image content if the value `-` has been specified in fromSrc query parameter" schema: type: "string" required: false - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration. Refer to the [authentication section](#section/Authentication) for details. type: "string" - name: "platform" in: "query" description: "Platform in the format os[/arch[/variant]]" type: "string" default: "" tags: ["Image"] /images/{name}/json: get: summary: "Inspect an image" description: "Return low-level information about an image." operationId: "ImageInspect" produces: - "application/json" responses: 200: description: "No error" schema: $ref: "#/definitions/Image" examples: application/json: Id: "sha256:85f05633ddc1c50679be2b16a0479ab6f7637f8884e0cfe0f4d20e1ebb3d6e7c" Container: "cb91e48a60d01f1e27028b4fc6819f4f290b3cf12496c8176ec714d0d390984a" Comment: "" Os: "linux" Architecture: "amd64" Parent: "sha256:91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c" ContainerConfig: Tty: false Hostname: "e611e15f9c9d" Domainname: "" AttachStdout: false PublishService: "" AttachStdin: false OpenStdin: false StdinOnce: false NetworkDisabled: false OnBuild: [] Image: "91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c" User: "" WorkingDir: "" MacAddress: "" AttachStderr: false Labels: com.example.license: "GPL" com.example.version: "1.0" com.example.vendor: "Acme" Env: - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" Cmd: - "/bin/sh" - "-c" - "#(nop) LABEL com.example.vendor=Acme com.example.license=GPL com.example.version=1.0" DockerVersion: "1.9.0-dev" VirtualSize: 188359297 Size: 0 Author: "" Created: "2015-09-10T08:30:53.26995814Z" GraphDriver: Name: "aufs" Data: {} RepoDigests: - "localhost:5000/test/busybox/example@sha256:cbbf2f9a99b47fc460d422812b6a5adff7dfee951d8fa2e4a98caa0382cfbdbf" RepoTags: - "example:1.0" - "example:latest" - "example:stable" Config: Image: "91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c" NetworkDisabled: false OnBuild: [] StdinOnce: false PublishService: "" AttachStdin: false OpenStdin: false Domainname: "" AttachStdout: false Tty: false Hostname: "e611e15f9c9d" Cmd: - "/bin/bash" Env: - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" Labels: com.example.vendor: "Acme" com.example.version: "1.0" com.example.license: "GPL" MacAddress: "" AttachStderr: false WorkingDir: "" User: "" RootFS: Type: "layers" Layers: - "sha256:1834950e52ce4d5a88a1bbd131c537f4d0e56d10ff0dd69e66be3b7dfa9df7e6" - "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such image: someimage (tag: latest)" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or id" type: "string" required: true tags: ["Image"] /images/{name}/history: get: summary: "Get the history of an image" 
description: "Return parent layers of an image." operationId: "ImageHistory" produces: ["application/json"] responses: 200: description: "List of image layers" schema: type: "array" items: type: "object" x-go-name: HistoryResponseItem title: "HistoryResponseItem" description: "individual image layer information in response to ImageHistory operation" required: [Id, Created, CreatedBy, Tags, Size, Comment] properties: Id: type: "string" x-nullable: false Created: type: "integer" format: "int64" x-nullable: false CreatedBy: type: "string" x-nullable: false Tags: type: "array" items: type: "string" Size: type: "integer" format: "int64" x-nullable: false Comment: type: "string" x-nullable: false examples: application/json: - Id: "3db9c44f45209632d6050b35958829c3a2aa256d81b9a7be45b362ff85c54710" Created: 1398108230 CreatedBy: "/bin/sh -c #(nop) ADD file:eb15dbd63394e063b805a3c32ca7bf0266ef64676d5a6fab4801f2e81e2a5148 in /" Tags: - "ubuntu:lucid" - "ubuntu:10.04" Size: 182964289 Comment: "" - Id: "6cfa4d1f33fb861d4d114f43b25abd0ac737509268065cdfd69d544a59c85ab8" Created: 1398108222 CreatedBy: "/bin/sh -c #(nop) MAINTAINER Tianon Gravi <[email protected]> - mkimage-debootstrap.sh -i iproute,iputils-ping,ubuntu-minimal -t lucid.tar.xz lucid http://archive.ubuntu.com/ubuntu/" Tags: [] Size: 0 Comment: "" - Id: "511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158" Created: 1371157430 CreatedBy: "" Tags: - "scratch12:latest" - "scratch:latest" Size: 0 Comment: "Imported from -" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID" type: "string" required: true tags: ["Image"] /images/{name}/push: post: summary: "Push an image" description: | Push an image to a registry. If you wish to push an image on to a private registry, that image must already have a tag which references the registry. For example, `registry.example.com/myimage:latest`. The push is cancelled if the HTTP connection is closed. operationId: "ImagePush" consumes: - "application/octet-stream" responses: 200: description: "No error" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID." type: "string" required: true - name: "tag" in: "query" description: "The tag to associate with the image on the registry." type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration. Refer to the [authentication section](#section/Authentication) for details. type: "string" required: true tags: ["Image"] /images/{name}/tag: post: summary: "Tag an image" description: "Tag an image so that it becomes part of a repository." operationId: "ImageTag" responses: 201: description: "No error" 400: description: "Bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Conflict" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID to tag." type: "string" required: true - name: "repo" in: "query" description: "The repository to tag in. For example, `someuser/someimage`." 
type: "string" - name: "tag" in: "query" description: "The name of the new tag." type: "string" tags: ["Image"] /images/{name}: delete: summary: "Remove an image" description: | Remove an image, along with any untagged parent images that were referenced by that image. Images can't be removed if they have descendant images, are being used by a running container or are being used by a build. operationId: "ImageDelete" produces: ["application/json"] responses: 200: description: "The image was deleted successfully" schema: type: "array" items: $ref: "#/definitions/ImageDeleteResponseItem" examples: application/json: - Untagged: "3e2f21a89f" - Deleted: "3e2f21a89f" - Deleted: "53b4f83ac9" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Conflict" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID" type: "string" required: true - name: "force" in: "query" description: "Remove the image even if it is being used by stopped containers or has other tags" type: "boolean" default: false - name: "noprune" in: "query" description: "Do not delete untagged parent images" type: "boolean" default: false tags: ["Image"] /images/search: get: summary: "Search images" description: "Search for an image on Docker Hub." operationId: "ImageSearch" produces: - "application/json" responses: 200: description: "No error" schema: type: "array" items: type: "object" title: "ImageSearchResponseItem" properties: description: type: "string" is_official: type: "boolean" is_automated: type: "boolean" name: type: "string" star_count: type: "integer" examples: application/json: - description: "" is_official: false is_automated: false name: "wma55/u1210sshd" star_count: 0 - description: "" is_official: false is_automated: false name: "jdswinbank/sshd" star_count: 0 - description: "" is_official: false is_automated: false name: "vgauthier/sshd" star_count: 0 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "term" in: "query" description: "Term to search" type: "string" required: true - name: "limit" in: "query" description: "Maximum number of results to return" type: "integer" - name: "filters" in: "query" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the images list. Available filters: - `is-automated=(true|false)` - `is-official=(true|false)` - `stars=<number>` Matches images that has at least 'number' stars. type: "string" tags: ["Image"] /images/prune: post: summary: "Delete unused images" produces: - "application/json" operationId: "ImagePrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `dangling=<boolean>` When set to `true` (or `1`), prune only unused *and* untagged images. When set to `false` (or `0`), all unused images are pruned. - `until=<string>` Prune images created before this timestamp. The `<timestamp>` can be Unix timestamps, date formatted timestamps, or Go duration strings (e.g. `10m`, `1h30m`) computed relative to the daemon machine’s time. - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune images with (or without, in case `label!=...` is used) the specified labels. 
type: "string" responses: 200: description: "No error" schema: type: "object" title: "ImagePruneResponse" properties: ImagesDeleted: description: "Images that were deleted" type: "array" items: $ref: "#/definitions/ImageDeleteResponseItem" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Image"] /auth: post: summary: "Check auth configuration" description: | Validate credentials for a registry and, if available, get an identity token for accessing the registry without password. operationId: "SystemAuth" consumes: ["application/json"] produces: ["application/json"] responses: 200: description: "An identity token was generated successfully." schema: type: "object" title: "SystemAuthResponse" required: [Status] properties: Status: description: "The status of the authentication" type: "string" x-nullable: false IdentityToken: description: "An opaque token used to authenticate a user after a successful login" type: "string" x-nullable: false examples: application/json: Status: "Login Succeeded" IdentityToken: "9cbaf023786cd7..." 204: description: "No error" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "authConfig" in: "body" description: "Authentication to check" schema: $ref: "#/definitions/AuthConfig" tags: ["System"] /info: get: summary: "Get system information" operationId: "SystemInfo" produces: - "application/json" responses: 200: description: "No error" schema: $ref: "#/definitions/SystemInfo" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /version: get: summary: "Get version" description: "Returns the version of Docker that is running and various information about the system that Docker is running on." operationId: "SystemVersion" produces: ["application/json"] responses: 200: description: "no error" schema: $ref: "#/definitions/SystemVersion" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /_ping: get: summary: "Ping" description: "This is a dummy endpoint you can use to test if the server is accessible." operationId: "SystemPing" produces: ["text/plain"] responses: 200: description: "no error" schema: type: "string" example: "OK" headers: API-Version: type: "string" description: "Max API Version the server supports" Builder-Version: type: "string" description: "Default version of docker image builder" Docker-Experimental: type: "boolean" description: "If the server is running with experimental mode enabled" Cache-Control: type: "string" default: "no-cache, no-store, must-revalidate" Pragma: type: "string" default: "no-cache" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" headers: Cache-Control: type: "string" default: "no-cache, no-store, must-revalidate" Pragma: type: "string" default: "no-cache" tags: ["System"] head: summary: "Ping" description: "This is a dummy endpoint you can use to test if the server is accessible." 
operationId: "SystemPingHead" produces: ["text/plain"] responses: 200: description: "no error" schema: type: "string" example: "(empty)" headers: API-Version: type: "string" description: "Max API Version the server supports" Builder-Version: type: "string" description: "Default version of docker image builder" Docker-Experimental: type: "boolean" description: "If the server is running with experimental mode enabled" Cache-Control: type: "string" default: "no-cache, no-store, must-revalidate" Pragma: type: "string" default: "no-cache" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /commit: post: summary: "Create a new image from a container" operationId: "ImageCommit" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "containerConfig" in: "body" description: "The container configuration" schema: $ref: "#/definitions/ContainerConfig" - name: "container" in: "query" description: "The ID or name of the container to commit" type: "string" - name: "repo" in: "query" description: "Repository name for the created image" type: "string" - name: "tag" in: "query" description: "Tag name for the create image" type: "string" - name: "comment" in: "query" description: "Commit message" type: "string" - name: "author" in: "query" description: "Author of the image (e.g., `John Hannibal Smith <[email protected]>`)" type: "string" - name: "pause" in: "query" description: "Whether to pause the container before committing" type: "boolean" default: true - name: "changes" in: "query" description: "`Dockerfile` instructions to apply while committing" type: "string" tags: ["Image"] /events: get: summary: "Monitor events" description: | Stream real-time events from the server. Various objects within Docker report events when something happens to them. 
Containers report these events: `attach`, `commit`, `copy`, `create`, `destroy`, `detach`, `die`, `exec_create`, `exec_detach`, `exec_start`, `exec_die`, `export`, `health_status`, `kill`, `oom`, `pause`, `rename`, `resize`, `restart`, `start`, `stop`, `top`, `unpause`, `update`, and `prune` Images report these events: `delete`, `import`, `load`, `pull`, `push`, `save`, `tag`, `untag`, and `prune` Volumes report these events: `create`, `mount`, `unmount`, `destroy`, and `prune` Networks report these events: `create`, `connect`, `disconnect`, `destroy`, `update`, `remove`, and `prune` The Docker daemon reports these events: `reload` Services report these events: `create`, `update`, and `remove` Nodes report these events: `create`, `update`, and `remove` Secrets report these events: `create`, `update`, and `remove` Configs report these events: `create`, `update`, and `remove` The Builder reports `prune` events operationId: "SystemEvents" produces: - "application/json" responses: 200: description: "no error" schema: type: "object" title: "SystemEventsResponse" properties: Type: description: "The type of object emitting the event" type: "string" Action: description: "The type of event" type: "string" Actor: type: "object" properties: ID: description: "The ID of the object emitting the event" type: "string" Attributes: description: "Various key/value attributes of the object, depending on its type" type: "object" additionalProperties: type: "string" time: description: "Timestamp of event" type: "integer" timeNano: description: "Timestamp of event, with nanosecond accuracy" type: "integer" format: "int64" examples: application/json: Type: "container" Action: "create" Actor: ID: "ede54ee1afda366ab42f824e8a5ffd195155d853ceaec74a927f249ea270c743" Attributes: com.example.some-label: "some-label-value" image: "alpine" name: "my-container" time: 1461943101 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "since" in: "query" description: "Show events created since this timestamp then stream new events." type: "string" - name: "until" in: "query" description: "Show events created until this timestamp then stop streaming." type: "string" - name: "filters" in: "query" description: | A JSON encoded value of filters (a `map[string][]string`) to process on the event list. 
Available filters: - `config=<string>` config name or ID - `container=<string>` container name or ID - `daemon=<string>` daemon name or ID - `event=<string>` event type - `image=<string>` image name or ID - `label=<string>` image or container label - `network=<string>` network name or ID - `node=<string>` node ID - `plugin`=<string> plugin name or ID - `scope`=<string> local or swarm - `secret=<string>` secret name or ID - `service=<string>` service name or ID - `type=<string>` object to filter by, one of `container`, `image`, `volume`, `network`, `daemon`, `plugin`, `node`, `service`, `secret` or `config` - `volume=<string>` volume name type: "string" tags: ["System"] /system/df: get: summary: "Get data usage information" operationId: "SystemDataUsage" responses: 200: description: "no error" schema: type: "object" title: "SystemDataUsageResponse" properties: LayersSize: type: "integer" format: "int64" Images: type: "array" items: $ref: "#/definitions/ImageSummary" Containers: type: "array" items: $ref: "#/definitions/ContainerSummary" Volumes: type: "array" items: $ref: "#/definitions/Volume" BuildCache: type: "array" items: $ref: "#/definitions/BuildCache" example: LayersSize: 1092588 Images: - Id: "sha256:2b8fd9751c4c0f5dd266fcae00707e67a2545ef34f9a29354585f93dac906749" ParentId: "" RepoTags: - "busybox:latest" RepoDigests: - "busybox@sha256:a59906e33509d14c036c8678d687bd4eec81ed7c4b8ce907b888c607f6a1e0e6" Created: 1466724217 Size: 1092588 SharedSize: 0 VirtualSize: 1092588 Labels: {} Containers: 1 Containers: - Id: "e575172ed11dc01bfce087fb27bee502db149e1a0fad7c296ad300bbff178148" Names: - "/top" Image: "busybox" ImageID: "sha256:2b8fd9751c4c0f5dd266fcae00707e67a2545ef34f9a29354585f93dac906749" Command: "top" Created: 1472592424 Ports: [] SizeRootFs: 1092588 Labels: {} State: "exited" Status: "Exited (0) 56 minutes ago" HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: IPAMConfig: null Links: null Aliases: null NetworkID: "d687bc59335f0e5c9ee8193e5612e8aee000c8c62ea170cfb99c098f95899d92" EndpointID: "8ed5115aeaad9abb174f68dcf135b49f11daf597678315231a32ca28441dec6a" Gateway: "172.18.0.1" IPAddress: "172.18.0.2" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:12:00:02" Mounts: [] Volumes: - Name: "my-volume" Driver: "local" Mountpoint: "/var/lib/docker/volumes/my-volume/_data" Labels: null Scope: "local" Options: null UsageData: Size: 10920104 RefCount: 2 BuildCache: - ID: "hw53o5aio51xtltp5xjp8v7fx" Parent: "" Type: "regular" Description: "pulled from docker.io/library/debian@sha256:234cb88d3020898631af0ccbbcca9a66ae7306ecd30c9720690858c1b007d2a0" InUse: false Shared: true Size: 0 CreatedAt: "2021-06-28T13:31:01.474619385Z" LastUsedAt: "2021-07-07T22:02:32.738075951Z" UsageCount: 26 - ID: "ndlpt0hhvkqcdfkputsk4cq9c" Parent: "hw53o5aio51xtltp5xjp8v7fx" Type: "regular" Description: "mount / from exec /bin/sh -c echo 'Binary::apt::APT::Keep-Downloaded-Packages \"true\";' > /etc/apt/apt.conf.d/keep-cache" InUse: false Shared: true Size: 51 CreatedAt: "2021-06-28T13:31:03.002625487Z" LastUsedAt: "2021-07-07T22:02:32.773909517Z" UsageCount: 26 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "type" in: "query" description: | Object types, for which to compute and return data. 
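# Usage sketch (illustrative only, not part of the spec): the repeatable "type"
# parameter defined below limits which object types are accounted for; its availability
# depends on the API version, and the socket path is an assumption.
#   curl --unix-socket /var/run/docker.sock "http://localhost/system/df?type=image&type=volume"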
type: "array" collectionFormat: multi items: type: "string" enum: ["container", "image", "volume", "build-cache"] tags: ["System"] /images/{name}/get: get: summary: "Export an image" description: | Get a tarball containing all images and metadata for a repository. If `name` is a specific name and tag (e.g. `ubuntu:latest`), then only that image (and its parents) are returned. If `name` is an image ID, similarly only that image (and its parents) are returned, but with the exclusion of the `repositories` file in the tarball, as there were no image names referenced. ### Image tarball format An image tarball contains one directory per image layer (named using its long ID), each containing these files: - `VERSION`: currently `1.0` - the file format version - `json`: detailed layer information, similar to `docker inspect layer_id` - `layer.tar`: A tarfile containing the filesystem changes in this layer The `layer.tar` file contains `aufs` style `.wh..wh.aufs` files and directories for storing attribute changes and deletions. If the tarball defines a repository, the tarball should also include a `repositories` file at the root that contains a list of repository and tag names mapped to layer IDs. ```json { "hello-world": { "latest": "565a9d68a73f6706862bfe8409a7f659776d4d60a8d096eb4a3cbce6999cc2a1" } } ``` operationId: "ImageGet" produces: - "application/x-tar" responses: 200: description: "no error" schema: type: "string" format: "binary" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID" type: "string" required: true tags: ["Image"] /images/get: get: summary: "Export several images" description: | Get a tarball containing all images and metadata for several image repositories. For each value of the `names` parameter: if it is a specific name and tag (e.g. `ubuntu:latest`), then only that image (and its parents) are returned; if it is an image ID, similarly only that image (and its parents) are returned and there would be no names referenced in the 'repositories' file for this image ID. For details on the format, see the [export image endpoint](#operation/ImageGet). operationId: "ImageGetAll" produces: - "application/x-tar" responses: 200: description: "no error" schema: type: "string" format: "binary" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "names" in: "query" description: "Image names to filter by" type: "array" items: type: "string" tags: ["Image"] /images/load: post: summary: "Import images" description: | Load a set of images and tags into a repository. For details on the format, see the [export image endpoint](#operation/ImageGet). operationId: "ImageLoad" consumes: - "application/x-tar" produces: - "application/json" responses: 200: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "imagesTarball" in: "body" description: "Tar archive containing images" schema: type: "string" format: "binary" - name: "quiet" in: "query" description: "Suppress progress details during load." type: "boolean" default: false tags: ["Image"] /containers/{id}/exec: post: summary: "Create an exec instance" description: "Run a command inside a running container." 
operationId: "ContainerExec" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "container is paused" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "execConfig" in: "body" description: "Exec configuration" schema: type: "object" properties: AttachStdin: type: "boolean" description: "Attach to `stdin` of the exec command." AttachStdout: type: "boolean" description: "Attach to `stdout` of the exec command." AttachStderr: type: "boolean" description: "Attach to `stderr` of the exec command." DetachKeys: type: "string" description: | Override the key sequence for detaching a container. Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,` or `_`. Tty: type: "boolean" description: "Allocate a pseudo-TTY." Env: description: | A list of environment variables in the form `["VAR=value", ...]`. type: "array" items: type: "string" Cmd: type: "array" description: "Command to run, as a string or array of strings." items: type: "string" Privileged: type: "boolean" description: "Runs the exec process with extended privileges." default: false User: type: "string" description: | The user, and optionally, group to run the exec process inside the container. Format is one of: `user`, `user:group`, `uid`, or `uid:gid`. WorkingDir: type: "string" description: | The working directory for the exec process inside the container. example: AttachStdin: false AttachStdout: true AttachStderr: true DetachKeys: "ctrl-p,ctrl-q" Tty: false Cmd: - "date" Env: - "FOO=bar" - "BAZ=quux" required: true - name: "id" in: "path" description: "ID or name of container" type: "string" required: true tags: ["Exec"] /exec/{id}/start: post: summary: "Start an exec instance" description: | Starts a previously set up exec instance. If detach is true, this endpoint returns immediately after starting the command. Otherwise, it sets up an interactive session with the command. operationId: "ExecStart" consumes: - "application/json" produces: - "application/vnd.docker.raw-stream" responses: 200: description: "No error" 404: description: "No such exec instance" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Container is stopped or paused" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "execStartConfig" in: "body" schema: type: "object" properties: Detach: type: "boolean" description: "Detach from the command." Tty: type: "boolean" description: "Allocate a pseudo-TTY." example: Detach: false Tty: false - name: "id" in: "path" description: "Exec instance ID" required: true type: "string" tags: ["Exec"] /exec/{id}/resize: post: summary: "Resize an exec instance" description: | Resize the TTY session used by an exec instance. This endpoint only works if `tty` was specified as part of creating and starting the exec instance. 
operationId: "ExecResize" responses: 201: description: "No error" 404: description: "No such exec instance" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Exec instance ID" required: true type: "string" - name: "h" in: "query" description: "Height of the TTY session in characters" type: "integer" - name: "w" in: "query" description: "Width of the TTY session in characters" type: "integer" tags: ["Exec"] /exec/{id}/json: get: summary: "Inspect an exec instance" description: "Return low-level information about an exec instance." operationId: "ExecInspect" produces: - "application/json" responses: 200: description: "No error" schema: type: "object" title: "ExecInspectResponse" properties: CanRemove: type: "boolean" DetachKeys: type: "string" ID: type: "string" Running: type: "boolean" ExitCode: type: "integer" ProcessConfig: $ref: "#/definitions/ProcessConfig" OpenStdin: type: "boolean" OpenStderr: type: "boolean" OpenStdout: type: "boolean" ContainerID: type: "string" Pid: type: "integer" description: "The system process ID for the exec process." examples: application/json: CanRemove: false ContainerID: "b53ee82b53a40c7dca428523e34f741f3abc51d9f297a14ff874bf761b995126" DetachKeys: "" ExitCode: 2 ID: "f33bbfb39f5b142420f4759b2348913bd4a8d1a6d7fd56499cb41a1bb91d7b3b" OpenStderr: true OpenStdin: true OpenStdout: true ProcessConfig: arguments: - "-c" - "exit 2" entrypoint: "sh" privileged: false tty: true user: "1000" Running: false Pid: 42000 404: description: "No such exec instance" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Exec instance ID" required: true type: "string" tags: ["Exec"] /volumes: get: summary: "List volumes" operationId: "VolumeList" produces: ["application/json"] responses: 200: description: "Summary volume data that matches the query" schema: type: "object" title: "VolumeListResponse" description: "Volume list response" required: [Volumes, Warnings] properties: Volumes: type: "array" x-nullable: false description: "List of volumes" items: $ref: "#/definitions/Volume" Warnings: type: "array" x-nullable: false description: | Warnings that occurred when fetching the list of volumes. items: type: "string" examples: application/json: Volumes: - CreatedAt: "2017-07-19T12:00:26Z" Name: "tardis" Driver: "local" Mountpoint: "/var/lib/docker/volumes/tardis" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Scope: "local" Options: device: "tmpfs" o: "size=100m,uid=1000" type: "tmpfs" Warnings: [] 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" description: | JSON encoded value of the filters (a `map[string][]string`) to process on the volumes list. Available filters: - `dangling=<boolean>` When set to `true` (or `1`), returns all volumes that are not in use by a container. When set to `false` (or `0`), only volumes that are in use by one or more containers are returned. - `driver=<volume-driver-name>` Matches volumes based on their driver. - `label=<key>` or `label=<key>:<value>` Matches volumes based on the presence of a `label` alone or a `label` and a value. - `name=<volume-name>` Matches all or part of a volume name. 
type: "string" format: "json" tags: ["Volume"] /volumes/create: post: summary: "Create a volume" operationId: "VolumeCreate" consumes: ["application/json"] produces: ["application/json"] responses: 201: description: "The volume was created successfully" schema: $ref: "#/definitions/Volume" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "volumeConfig" in: "body" required: true description: "Volume configuration" schema: type: "object" description: "Volume configuration" title: "VolumeConfig" properties: Name: description: | The new volume's name. If not specified, Docker generates a name. type: "string" x-nullable: false Driver: description: "Name of the volume driver to use." type: "string" default: "local" x-nullable: false DriverOpts: description: | A mapping of driver options and values. These options are passed directly to the driver and are driver specific. type: "object" additionalProperties: type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" example: Name: "tardis" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Driver: "custom" tags: ["Volume"] /volumes/{name}: get: summary: "Inspect a volume" operationId: "VolumeInspect" produces: ["application/json"] responses: 200: description: "No error" schema: $ref: "#/definitions/Volume" 404: description: "No such volume" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" required: true description: "Volume name or ID" type: "string" tags: ["Volume"] delete: summary: "Remove a volume" description: "Instruct the driver to remove the volume." operationId: "VolumeDelete" responses: 204: description: "The volume was removed" 404: description: "No such volume or volume driver" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Volume is in use and cannot be removed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" required: true description: "Volume name or ID" type: "string" - name: "force" in: "query" description: "Force the removal of the volume" type: "boolean" default: false tags: ["Volume"] /volumes/prune: post: summary: "Delete unused volumes" produces: - "application/json" operationId: "VolumePrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune volumes with (or without, in case `label!=...` is used) the specified labels. type: "string" responses: 200: description: "No error" schema: type: "object" title: "VolumePruneResponse" properties: VolumesDeleted: description: "Volumes that were deleted" type: "array" items: type: "string" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Volume"] /networks: get: summary: "List networks" description: | Returns a list of networks. For details on the format, see the [network inspect endpoint](#operation/NetworkInspect). Note that it uses a different, smaller representation of a network than inspecting a single network. 
For example, the list of containers attached to the network is not propagated in API versions 1.28 and up. operationId: "NetworkList" produces: - "application/json" responses: 200: description: "No error" schema: type: "array" items: $ref: "#/definitions/Network" examples: application/json: - Name: "bridge" Id: "f2de39df4171b0dc801e8002d1d999b77256983dfc63041c0f34030aa3977566" Created: "2016-10-19T06:21:00.416543526Z" Scope: "local" Driver: "bridge" EnableIPv6: false Internal: false Attachable: false Ingress: false IPAM: Driver: "default" Config: - Subnet: "172.17.0.0/16" Options: com.docker.network.bridge.default_bridge: "true" com.docker.network.bridge.enable_icc: "true" com.docker.network.bridge.enable_ip_masquerade: "true" com.docker.network.bridge.host_binding_ipv4: "0.0.0.0" com.docker.network.bridge.name: "docker0" com.docker.network.driver.mtu: "1500" - Name: "none" Id: "e086a3893b05ab69242d3c44e49483a3bbbd3a26b46baa8f61ab797c1088d794" Created: "0001-01-01T00:00:00Z" Scope: "local" Driver: "null" EnableIPv6: false Internal: false Attachable: false Ingress: false IPAM: Driver: "default" Config: [] Containers: {} Options: {} - Name: "host" Id: "13e871235c677f196c4e1ecebb9dc733b9b2d2ab589e30c539efeda84a24215e" Created: "0001-01-01T00:00:00Z" Scope: "local" Driver: "host" EnableIPv6: false Internal: false Attachable: false Ingress: false IPAM: Driver: "default" Config: [] Containers: {} Options: {} 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" description: | JSON encoded value of the filters (a `map[string][]string`) to process on the networks list. Available filters: - `dangling=<boolean>` When set to `true` (or `1`), returns all networks that are not in use by a container. When set to `false` (or `0`), only networks that are in use by one or more containers are returned. - `driver=<driver-name>` Matches a network's driver. - `id=<network-id>` Matches all or part of a network ID. - `label=<key>` or `label=<key>=<value>` of a network label. - `name=<network-name>` Matches all or part of a network name. - `scope=["swarm"|"global"|"local"]` Filters networks by scope (`swarm`, `global`, or `local`). - `type=["custom"|"builtin"]` Filters networks by type. The `custom` keyword returns all user-defined networks. 
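# Usage sketch (illustrative only, not part of the spec): listing only user-defined
# networks via the URL-encoded JSON filter {"type":["custom"]}.
#   curl --unix-socket /var/run/docker.sock "http://localhost/networks?filters=%7B%22type%22%3A%5B%22custom%22%5D%7D"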
type: "string" tags: ["Network"] /networks/{id}: get: summary: "Inspect a network" operationId: "NetworkInspect" produces: - "application/json" responses: 200: description: "No error" schema: $ref: "#/definitions/Network" 404: description: "Network not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" - name: "verbose" in: "query" description: "Detailed inspect output for troubleshooting" type: "boolean" default: false - name: "scope" in: "query" description: "Filter the network by scope (swarm, global, or local)" type: "string" tags: ["Network"] delete: summary: "Remove a network" operationId: "NetworkDelete" responses: 204: description: "No error" 403: description: "operation not supported for pre-defined networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such network" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" tags: ["Network"] /networks/create: post: summary: "Create a network" operationId: "NetworkCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "No error" schema: type: "object" title: "NetworkCreateResponse" properties: Id: description: "The ID of the created network." type: "string" Warning: type: "string" example: Id: "22be93d5babb089c5aab8dbc369042fad48ff791584ca2da2100db837a1c7c30" Warning: "" 403: description: "operation not supported for pre-defined networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "plugin not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "networkConfig" in: "body" description: "Network configuration" required: true schema: type: "object" required: ["Name"] properties: Name: description: "The network's name." type: "string" CheckDuplicate: description: | Check for networks with duplicate names. Since Network is primarily keyed based on a random ID and not on the name, and network name is strictly a user-friendly alias to the network which is uniquely identified using ID, there is no guaranteed way to check for duplicates. CheckDuplicate is there to provide a best effort checking of any networks which has the same name but it is not guaranteed to catch all name collisions. type: "boolean" Driver: description: "Name of the network driver plugin to use." type: "string" default: "bridge" Internal: description: "Restrict external access to the network." type: "boolean" Attachable: description: | Globally scoped network is manually attachable by regular containers from workers in swarm mode. type: "boolean" Ingress: description: | Ingress network is the network which provides the routing-mesh in swarm mode. type: "boolean" IPAM: description: "Optional custom IP scheme for the network." $ref: "#/definitions/IPAM" EnableIPv6: description: "Enable IPv6 on the network." type: "boolean" Options: description: "Network specific options to be used by the drivers." type: "object" additionalProperties: type: "string" Labels: description: "User-defined key/value metadata." 
type: "object" additionalProperties: type: "string" example: Name: "isolated_nw" CheckDuplicate: false Driver: "bridge" EnableIPv6: true IPAM: Driver: "default" Config: - Subnet: "172.20.0.0/16" IPRange: "172.20.10.0/24" Gateway: "172.20.10.11" - Subnet: "2001:db8:abcd::/64" Gateway: "2001:db8:abcd::1011" Options: foo: "bar" Internal: true Attachable: false Ingress: false Options: com.docker.network.bridge.default_bridge: "true" com.docker.network.bridge.enable_icc: "true" com.docker.network.bridge.enable_ip_masquerade: "true" com.docker.network.bridge.host_binding_ipv4: "0.0.0.0" com.docker.network.bridge.name: "docker0" com.docker.network.driver.mtu: "1500" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" tags: ["Network"] /networks/{id}/connect: post: summary: "Connect a container to a network" operationId: "NetworkConnect" consumes: - "application/json" responses: 200: description: "No error" 403: description: "Operation not supported for swarm scoped networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "Network or container not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" - name: "container" in: "body" required: true schema: type: "object" properties: Container: type: "string" description: "The ID or name of the container to connect to the network." EndpointConfig: $ref: "#/definitions/EndpointSettings" example: Container: "3613f73ba0e4" EndpointConfig: IPAMConfig: IPv4Address: "172.24.56.89" IPv6Address: "2001:db8::5689" tags: ["Network"] /networks/{id}/disconnect: post: summary: "Disconnect a container from a network" operationId: "NetworkDisconnect" consumes: - "application/json" responses: 200: description: "No error" 403: description: "Operation not supported for swarm scoped networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "Network or container not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" - name: "container" in: "body" required: true schema: type: "object" properties: Container: type: "string" description: | The ID or name of the container to disconnect from the network. Force: type: "boolean" description: | Force the container to disconnect from the network. tags: ["Network"] /networks/prune: post: summary: "Delete unused networks" produces: - "application/json" operationId: "NetworkPrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `until=<timestamp>` Prune networks created before this timestamp. The `<timestamp>` can be Unix timestamps, date formatted timestamps, or Go duration strings (e.g. `10m`, `1h30m`) computed relative to the daemon machine’s time. - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune networks with (or without, in case `label!=...` is used) the specified labels. 
type: "string" responses: 200: description: "No error" schema: type: "object" title: "NetworkPruneResponse" properties: NetworksDeleted: description: "Networks that were deleted" type: "array" items: type: "string" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Network"] /plugins: get: summary: "List plugins" operationId: "PluginList" description: "Returns information about installed plugins." produces: ["application/json"] responses: 200: description: "No error" schema: type: "array" items: $ref: "#/definitions/Plugin" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the plugin list. Available filters: - `capability=<capability name>` - `enable=<true>|<false>` tags: ["Plugin"] /plugins/privileges: get: summary: "Get plugin privileges" operationId: "GetPluginPrivileges" responses: 200: description: "no error" schema: type: "array" items: description: | Describes a permission the user has to accept upon installing the plugin. type: "object" title: "PluginPrivilegeItem" properties: Name: type: "string" Description: type: "string" Value: type: "array" items: type: "string" example: - Name: "network" Description: "" Value: - "host" - Name: "mount" Description: "" Value: - "/data" - Name: "device" Description: "" Value: - "/dev/cpu_dma_latency" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "remote" in: "query" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" tags: - "Plugin" /plugins/pull: post: summary: "Install a plugin" operationId: "PluginPull" description: | Pulls and installs a plugin. After the plugin is installed, it can be enabled using the [`POST /plugins/{name}/enable` endpoint](#operation/PostPluginsEnable). produces: - "application/json" responses: 204: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "remote" in: "query" description: | Remote reference for plugin to install. The `:latest` tag is optional, and is used as the default if omitted. required: true type: "string" - name: "name" in: "query" description: | Local name for the pulled plugin. The `:latest` tag is optional, and is used as the default if omitted. required: false type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration to use when pulling a plugin from a registry. Refer to the [authentication section](#section/Authentication) for details. type: "string" - name: "body" in: "body" schema: type: "array" items: description: | Describes a permission accepted by the user upon installing the plugin. 
type: "object" properties: Name: type: "string" Description: type: "string" Value: type: "array" items: type: "string" example: - Name: "network" Description: "" Value: - "host" - Name: "mount" Description: "" Value: - "/data" - Name: "device" Description: "" Value: - "/dev/cpu_dma_latency" tags: ["Plugin"] /plugins/{name}/json: get: summary: "Inspect a plugin" operationId: "PluginInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Plugin" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" tags: ["Plugin"] /plugins/{name}: delete: summary: "Remove a plugin" operationId: "PluginDelete" responses: 200: description: "no error" schema: $ref: "#/definitions/Plugin" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "force" in: "query" description: | Disable the plugin before removing. This may result in issues if the plugin is in use by a container. type: "boolean" default: false tags: ["Plugin"] /plugins/{name}/enable: post: summary: "Enable a plugin" operationId: "PluginEnable" responses: 200: description: "no error" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "timeout" in: "query" description: "Set the HTTP client timeout (in seconds)" type: "integer" default: 0 tags: ["Plugin"] /plugins/{name}/disable: post: summary: "Disable a plugin" operationId: "PluginDisable" responses: 200: description: "no error" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" tags: ["Plugin"] /plugins/{name}/upgrade: post: summary: "Upgrade a plugin" operationId: "PluginUpgrade" responses: 204: description: "no error" 404: description: "plugin not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "remote" in: "query" description: | Remote reference to upgrade to. The `:latest` tag is optional, and is used as the default if omitted. required: true type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration to use when pulling a plugin from a registry. Refer to the [authentication section](#section/Authentication) for details. 
type: "string" - name: "body" in: "body" schema: type: "array" items: description: | Describes a permission accepted by the user upon installing the plugin. type: "object" properties: Name: type: "string" Description: type: "string" Value: type: "array" items: type: "string" example: - Name: "network" Description: "" Value: - "host" - Name: "mount" Description: "" Value: - "/data" - Name: "device" Description: "" Value: - "/dev/cpu_dma_latency" tags: ["Plugin"] /plugins/create: post: summary: "Create a plugin" operationId: "PluginCreate" consumes: - "application/x-tar" responses: 204: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "query" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "tarContext" in: "body" description: "Path to tar containing plugin rootfs and manifest" schema: type: "string" format: "binary" tags: ["Plugin"] /plugins/{name}/push: post: summary: "Push a plugin" operationId: "PluginPush" description: | Push a plugin to the registry. parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" responses: 200: description: "no error" 404: description: "plugin not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Plugin"] /plugins/{name}/set: post: summary: "Configure a plugin" operationId: "PluginSet" consumes: - "application/json" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "body" in: "body" schema: type: "array" items: type: "string" example: ["DEBUG=1"] responses: 204: description: "No error" 404: description: "Plugin not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Plugin"] /nodes: get: summary: "List nodes" operationId: "NodeList" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Node" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" description: | Filters to process on the nodes list, encoded as JSON (a `map[string][]string`). 
Available filters: - `id=<node id>` - `label=<engine label>` - `membership=`(`accepted`|`pending`)` - `name=<node name>` - `node.label=<node label>` - `role=`(`manager`|`worker`)` type: "string" tags: ["Node"] /nodes/{id}: get: summary: "Inspect a node" operationId: "NodeInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Node" 404: description: "no such node" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the node" type: "string" required: true tags: ["Node"] delete: summary: "Delete a node" operationId: "NodeDelete" responses: 200: description: "no error" 404: description: "no such node" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the node" type: "string" required: true - name: "force" in: "query" description: "Force remove a node from the swarm" default: false type: "boolean" tags: ["Node"] /nodes/{id}/update: post: summary: "Update a node" operationId: "NodeUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such node" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID of the node" type: "string" required: true - name: "body" in: "body" schema: $ref: "#/definitions/NodeSpec" - name: "version" in: "query" description: | The version number of the node object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true tags: ["Node"] /swarm: get: summary: "Inspect swarm" operationId: "SwarmInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Swarm" 404: description: "no such swarm" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" tags: ["Swarm"] /swarm/init: post: summary: "Initialize a new swarm" operationId: "SwarmInit" produces: - "application/json" - "text/plain" responses: 200: description: "no error" schema: description: "The node ID" type: "string" example: "7v2t30z9blmxuhnyo6s4cpenp" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is already part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: type: "object" properties: ListenAddr: description: | Listen address used for inter-manager communication, as well as determining the networking interface used for the VXLAN Tunnel Endpoint (VTEP). This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the default swarm listening port is used. 
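# Usage sketch (illustrative only, not part of the spec): initializing a single-node
# swarm with the default listen address; all other fields keep their daemon defaults.
#   curl --unix-socket /var/run/docker.sock -H "Content-Type: application/json" -d '{"ListenAddr": "0.0.0.0:2377"}' http://localhost/swarm/init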
type: "string" AdvertiseAddr: description: | Externally reachable address advertised to other nodes. This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the port number from the listen address is used. If `AdvertiseAddr` is not specified, it will be automatically detected when possible. type: "string" DataPathAddr: description: | Address or interface to use for data path traffic (format: `<ip|interface>`), for example, `192.168.1.1`, or an interface, like `eth0`. If `DataPathAddr` is unspecified, the same address as `AdvertiseAddr` is used. The `DataPathAddr` specifies the address that global scope network drivers will publish towards other nodes in order to reach the containers running on this node. Using this parameter it is possible to separate the container data traffic from the management traffic of the cluster. type: "string" DataPathPort: description: | DataPathPort specifies the data path port number for data traffic. Acceptable port range is 1024 to 49151. if no port is set or is set to 0, default port 4789 will be used. type: "integer" format: "uint32" DefaultAddrPool: description: | Default Address Pool specifies default subnet pools for global scope networks. type: "array" items: type: "string" example: ["10.10.0.0/16", "20.20.0.0/16"] ForceNewCluster: description: "Force creation of a new swarm." type: "boolean" SubnetSize: description: | SubnetSize specifies the subnet size of the networks created from the default subnet pool. type: "integer" format: "uint32" Spec: $ref: "#/definitions/SwarmSpec" example: ListenAddr: "0.0.0.0:2377" AdvertiseAddr: "192.168.1.1:2377" DataPathPort: 4789 DefaultAddrPool: ["10.10.0.0/8", "20.20.0.0/8"] SubnetSize: 24 ForceNewCluster: false Spec: Orchestration: {} Raft: {} Dispatcher: {} CAConfig: {} EncryptionConfig: AutoLockManagers: false tags: ["Swarm"] /swarm/join: post: summary: "Join an existing swarm" operationId: "SwarmJoin" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is already part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: type: "object" properties: ListenAddr: description: | Listen address used for inter-manager communication if the node gets promoted to manager, as well as determining the networking interface used for the VXLAN Tunnel Endpoint (VTEP). type: "string" AdvertiseAddr: description: | Externally reachable address advertised to other nodes. This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the port number from the listen address is used. If `AdvertiseAddr` is not specified, it will be automatically detected when possible. type: "string" DataPathAddr: description: | Address or interface to use for data path traffic (format: `<ip|interface>`), for example, `192.168.1.1`, or an interface, like `eth0`. If `DataPathAddr` is unspecified, the same address as `AdvertiseAddr` is used. The `DataPathAddr` specifies the address that global scope network drivers will publish towards other nodes in order to reach the containers running on this node. 
Using this parameter it is possible to separate the container data traffic from the management traffic of the cluster. type: "string" RemoteAddrs: description: | Addresses of manager nodes already participating in the swarm. type: "array" items: type: "string" JoinToken: description: "Secret token for joining this swarm." type: "string" example: ListenAddr: "0.0.0.0:2377" AdvertiseAddr: "192.168.1.1:2377" RemoteAddrs: - "node1:2377" JoinToken: "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-7p73s1dx5in4tatdymyhg9hu2" tags: ["Swarm"] /swarm/leave: post: summary: "Leave a swarm" operationId: "SwarmLeave" responses: 200: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "force" description: | Force leave swarm, even if this is the last manager or that it will break the cluster. in: "query" type: "boolean" default: false tags: ["Swarm"] /swarm/update: post: summary: "Update a swarm" operationId: "SwarmUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: $ref: "#/definitions/SwarmSpec" - name: "version" in: "query" description: | The version number of the swarm object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true - name: "rotateWorkerToken" in: "query" description: "Rotate the worker join token." type: "boolean" default: false - name: "rotateManagerToken" in: "query" description: "Rotate the manager join token." type: "boolean" default: false - name: "rotateManagerUnlockKey" in: "query" description: "Rotate the manager unlock key." type: "boolean" default: false tags: ["Swarm"] /swarm/unlockkey: get: summary: "Get the unlock key" operationId: "SwarmUnlockkey" consumes: - "application/json" responses: 200: description: "no error" schema: type: "object" title: "UnlockKeyResponse" properties: UnlockKey: description: "The swarm's unlock key." type: "string" example: UnlockKey: "SWMKEY-1-7c37Cc8654o6p38HnroywCi19pllOnGtbdZEgtKxZu8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" tags: ["Swarm"] /swarm/unlock: post: summary: "Unlock a locked manager" operationId: "SwarmUnlock" consumes: - "application/json" produces: - "application/json" parameters: - name: "body" in: "body" required: true schema: type: "object" properties: UnlockKey: description: "The swarm's unlock key." 
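# Usage sketch (illustrative only, not part of the spec): unlocking a locked manager
# with the key obtained from GET /swarm/unlockkey; the key shown is a truncated placeholder.
#   curl --unix-socket /var/run/docker.sock -H "Content-Type: application/json" -d '{"UnlockKey": "SWMKEY-1-..."}' http://localhost/swarm/unlock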
type: "string" example: UnlockKey: "SWMKEY-1-7c37Cc8654o6p38HnroywCi19pllOnGtbdZEgtKxZu8" responses: 200: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" tags: ["Swarm"] /services: get: summary: "List services" operationId: "ServiceList" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Service" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the services list. Available filters: - `id=<service id>` - `label=<service label>` - `mode=["replicated"|"global"]` - `name=<service name>` - name: "status" in: "query" type: "boolean" description: | Include service status, with count of running and desired tasks. tags: ["Service"] /services/create: post: summary: "Create a service" operationId: "ServiceCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: type: "object" title: "ServiceCreateResponse" properties: ID: description: "The ID of the created service." type: "string" Warning: description: "Optional warning message" type: "string" example: ID: "ak7w3gjqoa3kuz8xcpnyy0pvl" Warning: "unable to pin image doesnotexist:latest to digest: image library/doesnotexist:latest not found" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 403: description: "network is not eligible for services" schema: $ref: "#/definitions/ErrorResponse" 409: description: "name conflicts with an existing service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: allOf: - $ref: "#/definitions/ServiceSpec" - type: "object" example: Name: "web" TaskTemplate: ContainerSpec: Image: "nginx:alpine" Mounts: - ReadOnly: true Source: "web-data" Target: "/usr/share/nginx/html" Type: "volume" VolumeOptions: DriverConfig: {} Labels: com.example.something: "something-value" Hosts: ["10.10.10.10 host1", "ABCD:EF01:2345:6789:ABCD:EF01:2345:6789 host2"] User: "33" DNSConfig: Nameservers: ["8.8.8.8"] Search: ["example.org"] Options: ["timeout:3"] Secrets: - File: Name: "www.example.org.key" UID: "33" GID: "33" Mode: 384 SecretID: "fpjqlhnwb19zds35k8wn80lq9" SecretName: "example_org_domain_key" LogDriver: Name: "json-file" Options: max-file: "3" max-size: "10M" Placement: {} Resources: Limits: MemoryBytes: 104857600 Reservations: {} RestartPolicy: Condition: "on-failure" Delay: 10000000000 MaxAttempts: 10 Mode: Replicated: Replicas: 4 UpdateConfig: Parallelism: 2 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 RollbackConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 EndpointSpec: Ports: - Protocol: "tcp" PublishedPort: 8080 TargetPort: 80 Labels: foo: "bar" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration for pulling from private registries. Refer to the [authentication section](#section/Authentication) for details. 
type: "string" tags: ["Service"] /services/{id}: get: summary: "Inspect a service" operationId: "ServiceInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Service" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID or name of service." required: true type: "string" - name: "insertDefaults" in: "query" description: "Fill empty fields with default values." type: "boolean" default: false tags: ["Service"] delete: summary: "Delete a service" operationId: "ServiceDelete" responses: 200: description: "no error" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID or name of service." required: true type: "string" tags: ["Service"] /services/{id}/update: post: summary: "Update a service" operationId: "ServiceUpdate" consumes: ["application/json"] produces: ["application/json"] responses: 200: description: "no error" schema: $ref: "#/definitions/ServiceUpdateResponse" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID or name of service." required: true type: "string" - name: "body" in: "body" required: true schema: allOf: - $ref: "#/definitions/ServiceSpec" - type: "object" example: Name: "top" TaskTemplate: ContainerSpec: Image: "busybox" Args: - "top" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ForceUpdate: 0 Mode: Replicated: Replicas: 1 UpdateConfig: Parallelism: 2 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 RollbackConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 EndpointSpec: Mode: "vip" - name: "version" in: "query" description: | The version number of the service object being updated. This is required to avoid conflicting writes. This version number should be the value as currently set on the service *before* the update. You can find the current version by calling `GET /services/{id}` required: true type: "integer" - name: "registryAuthFrom" in: "query" description: | If the `X-Registry-Auth` header is not specified, this parameter indicates where to find registry authorization credentials. type: "string" enum: ["spec", "previous-spec"] default: "spec" - name: "rollback" in: "query" description: | Set to this parameter to `previous` to cause a server-side rollback to the previous service spec. The supplied spec will be ignored in this case. type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration for pulling from private registries. Refer to the [authentication section](#section/Authentication) for details. 
type: "string" tags: ["Service"] /services/{id}/logs: get: summary: "Get service logs" description: | Get `stdout` and `stderr` logs from a service. See also [`/containers/{id}/logs`](#operation/ContainerLogs). **Note**: This endpoint works only for services with the `local`, `json-file` or `journald` logging drivers. operationId: "ServiceLogs" responses: 200: description: "logs returned as a stream in response body" schema: type: "string" format: "binary" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such service: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the service" type: "string" - name: "details" in: "query" description: "Show service context and extra details provided to logs." type: "boolean" default: false - name: "follow" in: "query" description: "Keep connection after returning logs." type: "boolean" default: false - name: "stdout" in: "query" description: "Return logs from `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Return logs from `stderr`" type: "boolean" default: false - name: "since" in: "query" description: "Only return logs since this time, as a UNIX timestamp" type: "integer" default: 0 - name: "timestamps" in: "query" description: "Add timestamps to every log line" type: "boolean" default: false - name: "tail" in: "query" description: | Only return this number of log lines from the end of the logs. Specify as an integer or `all` to output all log lines. type: "string" default: "all" tags: ["Service"] /tasks: get: summary: "List tasks" operationId: "TaskList" produces: - "application/json" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Task" example: - ID: "0kzzo1i0y4jz6027t0k7aezc7" Version: Index: 71 CreatedAt: "2016-06-07T21:07:31.171892745Z" UpdatedAt: "2016-06-07T21:07:31.376370513Z" Spec: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ServiceID: "9mnpnzenvg8p8tdbtq4wvbkcz" Slot: 1 NodeID: "60gvrl6tm78dmak4yl7srz94v" Status: Timestamp: "2016-06-07T21:07:31.290032978Z" State: "running" Message: "started" ContainerStatus: ContainerID: "e5d62702a1b48d01c3e02ca1e0212a250801fa8d67caca0b6f35919ebc12f035" PID: 677 DesiredState: "running" NetworksAttachments: - Network: ID: "4qvuz4ko70xaltuqbt8956gd1" Version: Index: 18 CreatedAt: "2016-06-07T20:31:11.912919752Z" UpdatedAt: "2016-06-07T21:07:29.955277358Z" Spec: Name: "ingress" Labels: com.docker.swarm.internal: "true" DriverConfiguration: {} IPAMOptions: Driver: {} Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" DriverState: Name: "overlay" Options: com.docker.network.driver.overlay.vxlanid_list: "256" IPAMOptions: Driver: Name: "default" Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" Addresses: - "10.255.0.10/16" - ID: "1yljwbmlr8er2waf8orvqpwms" Version: Index: 30 CreatedAt: "2016-06-07T21:07:30.019104782Z" UpdatedAt: "2016-06-07T21:07:30.231958098Z" Name: "hopeful_cori" Spec: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ServiceID: "9mnpnzenvg8p8tdbtq4wvbkcz" Slot: 1 NodeID: "60gvrl6tm78dmak4yl7srz94v" Status: Timestamp: "2016-06-07T21:07:30.202183143Z" State: 
"shutdown" Message: "shutdown" ContainerStatus: ContainerID: "1cf8d63d18e79668b0004a4be4c6ee58cddfad2dae29506d8781581d0688a213" DesiredState: "shutdown" NetworksAttachments: - Network: ID: "4qvuz4ko70xaltuqbt8956gd1" Version: Index: 18 CreatedAt: "2016-06-07T20:31:11.912919752Z" UpdatedAt: "2016-06-07T21:07:29.955277358Z" Spec: Name: "ingress" Labels: com.docker.swarm.internal: "true" DriverConfiguration: {} IPAMOptions: Driver: {} Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" DriverState: Name: "overlay" Options: com.docker.network.driver.overlay.vxlanid_list: "256" IPAMOptions: Driver: Name: "default" Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" Addresses: - "10.255.0.5/16" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the tasks list. Available filters: - `desired-state=(running | shutdown | accepted)` - `id=<task id>` - `label=key` or `label="key=value"` - `name=<task name>` - `node=<node id or name>` - `service=<service name>` tags: ["Task"] /tasks/{id}: get: summary: "Inspect a task" operationId: "TaskInspect" produces: - "application/json" responses: 200: description: "no error" schema: $ref: "#/definitions/Task" 404: description: "no such task" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID of the task" required: true type: "string" tags: ["Task"] /tasks/{id}/logs: get: summary: "Get task logs" description: | Get `stdout` and `stderr` logs from a task. See also [`/containers/{id}/logs`](#operation/ContainerLogs). **Note**: This endpoint works only for services with the `local`, `json-file` or `journald` logging drivers. operationId: "TaskLogs" responses: 200: description: "logs returned as a stream in response body" schema: type: "string" format: "binary" 404: description: "no such task" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such task: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID of the task" type: "string" - name: "details" in: "query" description: "Show task context and extra details provided to logs." type: "boolean" default: false - name: "follow" in: "query" description: "Keep connection after returning logs." type: "boolean" default: false - name: "stdout" in: "query" description: "Return logs from `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Return logs from `stderr`" type: "boolean" default: false - name: "since" in: "query" description: "Only return logs since this time, as a UNIX timestamp" type: "integer" default: 0 - name: "timestamps" in: "query" description: "Add timestamps to every log line" type: "boolean" default: false - name: "tail" in: "query" description: | Only return this number of log lines from the end of the logs. Specify as an integer or `all` to output all log lines. 
type: "string" default: "all" tags: ["Task"] /secrets: get: summary: "List secrets" operationId: "SecretList" produces: - "application/json" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Secret" example: - ID: "blt1owaxmitz71s9v5zh81zun" Version: Index: 85 CreatedAt: "2017-07-20T13:55:28.678958722Z" UpdatedAt: "2017-07-20T13:55:28.678958722Z" Spec: Name: "mysql-passwd" Labels: some.label: "some.value" Driver: Name: "secret-bucket" Options: OptionA: "value for driver option A" OptionB: "value for driver option B" - ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "app-dev.crt" Labels: foo: "bar" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the secrets list. Available filters: - `id=<secret id>` - `label=<key> or label=<key>=value` - `name=<secret name>` - `names=<secret name>` tags: ["Secret"] /secrets/create: post: summary: "Create a secret" operationId: "SecretCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 409: description: "name conflicts with an existing object" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" schema: allOf: - $ref: "#/definitions/SecretSpec" - type: "object" example: Name: "app-key.crt" Labels: foo: "bar" Data: "VEhJUyBJUyBOT1QgQSBSRUFMIENFUlRJRklDQVRFCg==" Driver: Name: "secret-bucket" Options: OptionA: "value for driver option A" OptionB: "value for driver option B" tags: ["Secret"] /secrets/{id}: get: summary: "Inspect a secret" operationId: "SecretInspect" produces: - "application/json" responses: 200: description: "no error" schema: $ref: "#/definitions/Secret" examples: application/json: ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "app-dev.crt" Labels: foo: "bar" Driver: Name: "secret-bucket" Options: OptionA: "value for driver option A" OptionB: "value for driver option B" 404: description: "secret not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the secret" tags: ["Secret"] delete: summary: "Delete a secret" operationId: "SecretDelete" produces: - "application/json" responses: 204: description: "no error" 404: description: "secret not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the secret" tags: ["Secret"] /secrets/{id}/update: post: summary: "Update a Secret" operationId: "SecretUpdate" responses: 200: description: "no error" 400: 
description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such secret" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the secret" type: "string" required: true - name: "body" in: "body" schema: $ref: "#/definitions/SecretSpec" description: | The spec of the secret to update. Currently, only the Labels field can be updated. All other fields must remain unchanged from the [SecretInspect endpoint](#operation/SecretInspect) response values. - name: "version" in: "query" description: | The version number of the secret object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true tags: ["Secret"] /configs: get: summary: "List configs" operationId: "ConfigList" produces: - "application/json" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Config" example: - ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "server.conf" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the configs list. Available filters: - `id=<config id>` - `label=<key> or label=<key>=value` - `name=<config name>` - `names=<config name>` tags: ["Config"] /configs/create: post: summary: "Create a config" operationId: "ConfigCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 409: description: "name conflicts with an existing object" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" schema: allOf: - $ref: "#/definitions/ConfigSpec" - type: "object" example: Name: "server.conf" Labels: foo: "bar" Data: "VEhJUyBJUyBOT1QgQSBSRUFMIENFUlRJRklDQVRFCg==" tags: ["Config"] /configs/{id}: get: summary: "Inspect a config" operationId: "ConfigInspect" produces: - "application/json" responses: 200: description: "no error" schema: $ref: "#/definitions/Config" examples: application/json: ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "app-dev.crt" 404: description: "config not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the config" tags: ["Config"] delete: summary: "Delete a config" operationId: "ConfigDelete" produces: - "application/json" responses: 204: description: "no error" 404: description: "config not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part 
of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the config" tags: ["Config"] /configs/{id}/update: post: summary: "Update a Config" operationId: "ConfigUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such config" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the config" type: "string" required: true - name: "body" in: "body" schema: $ref: "#/definitions/ConfigSpec" description: | The spec of the config to update. Currently, only the Labels field can be updated. All other fields must remain unchanged from the [ConfigInspect endpoint](#operation/ConfigInspect) response values. - name: "version" in: "query" description: | The version number of the config object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true tags: ["Config"] /distribution/{name}/json: get: summary: "Get image information from the registry" description: | Return image digest and platform information by contacting the registry. operationId: "DistributionInspect" produces: - "application/json" responses: 200: description: "descriptor and platform information" schema: type: "object" x-go-name: DistributionInspect title: "DistributionInspectResponse" required: [Descriptor, Platforms] properties: Descriptor: type: "object" description: | A descriptor struct containing digest, media type, and size. properties: MediaType: type: "string" Size: type: "integer" format: "int64" Digest: type: "string" URLs: type: "array" items: type: "string" Platforms: type: "array" description: | An array containing all platforms supported by the image. items: type: "object" properties: Architecture: type: "string" OS: type: "string" OSVersion: type: "string" OSFeatures: type: "array" items: type: "string" Variant: type: "string" Features: type: "array" items: type: "string" examples: application/json: Descriptor: MediaType: "application/vnd.docker.distribution.manifest.v2+json" Digest: "sha256:c0537ff6a5218ef531ece93d4984efc99bbf3f7497c0a7726c88e2bb7584dc96" Size: 3987495 URLs: - "" Platforms: - Architecture: "amd64" OS: "linux" OSVersion: "" OSFeatures: - "" Variant: "" Features: - "" 401: description: "Failed authentication or no image found" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such image: someimage (tag: latest)" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or id" type: "string" required: true tags: ["Distribution"] /session: post: summary: "Initialize interactive session" description: | Start a new interactive session with a server. Session allows server to call back to the client for advanced capabilities. ### Hijacking This endpoint hijacks the HTTP connection to HTTP2 transport that allows the client to expose gPRC services on that connection. 
For example, the client sends this request to upgrade the connection: ``` POST /session HTTP/1.1 Upgrade: h2c Connection: Upgrade ``` The Docker daemon responds with a `101 UPGRADED` response followed by the raw stream: ``` HTTP/1.1 101 UPGRADED Connection: Upgrade Upgrade: h2c ``` operationId: "Session" produces: - "application/vnd.docker.raw-stream" responses: 101: description: "no error, hijacking successful" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Session"]
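As a rough illustration of the upgrade flow described above, a client can talk to a local daemon over its Unix socket, send the `POST /session` request with the `Upgrade: h2c` headers, and keep using the raw connection once the `101 UPGRADED` response arrives. The sketch below is not part of the specification: it assumes the default socket path `/var/run/docker.sock` and a daemon that supports `/session`, and it stops right after the handshake instead of actually exposing gRPC services over the upgraded connection.

```go
// Minimal sketch of the /session connection upgrade (hijack) handshake.
// Assumptions: local daemon on /var/run/docker.sock; error handling is
// reduced to panics for brevity.
package main

import (
	"bufio"
	"fmt"
	"net"
	"net/http"
)

func main() {
	// Dial the daemon's Unix socket directly so we keep control of the raw
	// connection after the HTTP/1.1 upgrade handshake.
	conn, err := net.Dial("unix", "/var/run/docker.sock")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Build the upgrade request shown in the endpoint description.
	req, err := http.NewRequest(http.MethodPost, "/session", nil)
	if err != nil {
		panic(err)
	}
	req.Host = "docker" // dummy host; the socket path identifies the daemon
	req.Header.Set("Upgrade", "h2c")
	req.Header.Set("Connection", "Upgrade")

	// Write the request in wire format onto the raw connection.
	if err := req.Write(conn); err != nil {
		panic(err)
	}

	// Expect "101 UPGRADED"; from here on the same connection carries the
	// HTTP/2 (h2c) transport on which the client exposes its services.
	resp, err := http.ReadResponse(bufio.NewReader(conn), req)
	if err != nil {
		panic(err)
	}
	fmt.Println("daemon answered:", resp.Status)
}
```

In practice this handshake is handled by the Docker client libraries; the point here is only to show what travels over the wire during the hijack.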
# A Swagger 2.0 (a.k.a. OpenAPI) definition of the Engine API. # # This is used for generating API documentation and the types used by the # client/server. See api/README.md for more information. # # Some style notes: # - This file is used by ReDoc, which allows GitHub Flavored Markdown in # descriptions. # - There is no maximum line length, for ease of editing and pretty diffs. # - operationIds are in the format "NounVerb", with a singular noun. swagger: "2.0" schemes: - "http" - "https" produces: - "application/json" - "text/plain" consumes: - "application/json" - "text/plain" basePath: "/v1.42" info: title: "Docker Engine API" version: "1.42" x-logo: url: "https://docs.docker.com/images/logo-docker-main.png" description: | The Engine API is an HTTP API served by Docker Engine. It is the API the Docker client uses to communicate with the Engine, so everything the Docker client can do can be done with the API. Most of the client's commands map directly to API endpoints (e.g. `docker ps` is `GET /containers/json`). The notable exception is running containers, which consists of several API calls. # Errors The API uses standard HTTP status codes to indicate the success or failure of the API call. The body of the response will be JSON in the following format: ``` { "message": "page not found" } ``` # Versioning The API is usually changed in each release, so API calls are versioned to ensure that clients don't break. To lock to a specific version of the API, you prefix the URL with its version, for example, call `/v1.30/info` to use the v1.30 version of the `/info` endpoint. If the API version specified in the URL is not supported by the daemon, a HTTP `400 Bad Request` error message is returned. If you omit the version-prefix, the current version of the API (v1.42) is used. For example, calling `/info` is the same as calling `/v1.42/info`. Using the API without a version-prefix is deprecated and will be removed in a future release. Engine releases in the near future should support this version of the API, so your client will continue to work even if it is talking to a newer Engine. The API uses an open schema model, which means server may add extra properties to responses. Likewise, the server will ignore any extra query parameters and request body properties. When you write clients, you need to ignore additional properties in responses to ensure they do not break when talking to newer daemons. # Authentication Authentication for registries is handled client side. The client has to send authentication details to various endpoints that need to communicate with registries, such as `POST /images/(name)/push`. These are sent as `X-Registry-Auth` header as a [base64url encoded](https://tools.ietf.org/html/rfc4648#section-5) (JSON) string with the following structure: ``` { "username": "string", "password": "string", "email": "string", "serveraddress": "string" } ``` The `serveraddress` is a domain/IP without a protocol. Throughout this structure, double quotes are required. If you have already got an identity token from the [`/auth` endpoint](#operation/SystemAuth), you can just pass this instead of credentials: ``` { "identitytoken": "9cbaf023786cd7..." } ``` # The tags on paths define the menu sections in the ReDoc documentation, so # the usage of tags must make sense for that: # - They should be singular, not plural. # - There should not be too many tags, or the menu becomes unwieldy. 
For # example, it is preferable to add a path to the "System" tag instead of # creating a tag with a single path in it. # - The order of tags in this list defines the order in the menu. tags: # Primary objects - name: "Container" x-displayName: "Containers" description: | Create and manage containers. - name: "Image" x-displayName: "Images" - name: "Network" x-displayName: "Networks" description: | Networks are user-defined networks that containers can be attached to. See the [networking documentation](https://docs.docker.com/network/) for more information. - name: "Volume" x-displayName: "Volumes" description: | Create and manage persistent storage that can be attached to containers. - name: "Exec" x-displayName: "Exec" description: | Run new commands inside running containers. Refer to the [command-line reference](https://docs.docker.com/engine/reference/commandline/exec/) for more information. To exec a command in a container, you first need to create an exec instance, then start it. These two API endpoints are wrapped up in a single command-line command, `docker exec`. # Swarm things - name: "Swarm" x-displayName: "Swarm" description: | Engines can be clustered together in a swarm. Refer to the [swarm mode documentation](https://docs.docker.com/engine/swarm/) for more information. - name: "Node" x-displayName: "Nodes" description: | Nodes are instances of the Engine participating in a swarm. Swarm mode must be enabled for these endpoints to work. - name: "Service" x-displayName: "Services" description: | Services are the definitions of tasks to run on a swarm. Swarm mode must be enabled for these endpoints to work. - name: "Task" x-displayName: "Tasks" description: | A task is a container running on a swarm. It is the atomic scheduling unit of swarm. Swarm mode must be enabled for these endpoints to work. - name: "Secret" x-displayName: "Secrets" description: | Secrets are sensitive data that can be used by services. Swarm mode must be enabled for these endpoints to work. - name: "Config" x-displayName: "Configs" description: | Configs are application configurations that can be used by services. Swarm mode must be enabled for these endpoints to work. 
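The Authentication section above boils down to JSON-encoding the credentials and base64url-encoding the result (RFC 4648 §5) before sending them in the `X-Registry-Auth` header. The following is a minimal sketch of how a client might build that header value; the struct name and the placeholder credentials (reusing the values from the `AuthConfig` example later in this file) are illustrative only.

```go
// Minimal sketch: constructing the X-Registry-Auth header value described
// in the Authentication section. Credentials below are placeholders.
package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
)

// authConfig mirrors the JSON structure shown in the Authentication section.
type authConfig struct {
	Username      string `json:"username"`
	Password      string `json:"password"`
	Email         string `json:"email,omitempty"`
	ServerAddress string `json:"serveraddress"`
}

func main() {
	auth := authConfig{
		Username:      "hannibal",
		Password:      "xxxx",
		ServerAddress: "https://index.docker.io/v1/",
	}

	buf, err := json.Marshal(auth)
	if err != nil {
		panic(err)
	}

	// URL-and-filename-safe base64 alphabet, as the section above requires.
	header := base64.URLEncoding.EncodeToString(buf)
	fmt.Println("X-Registry-Auth:", header)
}
```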
# System things - name: "Plugin" x-displayName: "Plugins" - name: "System" x-displayName: "System" definitions: Port: type: "object" description: "An open port on a container" required: [PrivatePort, Type] properties: IP: type: "string" format: "ip-address" description: "Host IP address that the container's port is mapped to" PrivatePort: type: "integer" format: "uint16" x-nullable: false description: "Port on the container" PublicPort: type: "integer" format: "uint16" description: "Port exposed on the host" Type: type: "string" x-nullable: false enum: ["tcp", "udp", "sctp"] example: PrivatePort: 8080 PublicPort: 80 Type: "tcp" MountPoint: type: "object" description: "A mount point inside a container" properties: Type: type: "string" Name: type: "string" Source: type: "string" Destination: type: "string" Driver: type: "string" Mode: type: "string" RW: type: "boolean" Propagation: type: "string" DeviceMapping: type: "object" description: "A device mapping between the host and container" properties: PathOnHost: type: "string" PathInContainer: type: "string" CgroupPermissions: type: "string" example: PathOnHost: "/dev/deviceName" PathInContainer: "/dev/deviceName" CgroupPermissions: "mrw" DeviceRequest: type: "object" description: "A request for devices to be sent to device drivers" properties: Driver: type: "string" example: "nvidia" Count: type: "integer" example: -1 DeviceIDs: type: "array" items: type: "string" example: - "0" - "1" - "GPU-fef8089b-4820-abfc-e83e-94318197576e" Capabilities: description: | A list of capabilities; an OR list of AND lists of capabilities. type: "array" items: type: "array" items: type: "string" example: # gpu AND nvidia AND compute - ["gpu", "nvidia", "compute"] Options: description: | Driver-specific options, specified as a key/value pairs. These options are passed directly to the driver. type: "object" additionalProperties: type: "string" ThrottleDevice: type: "object" properties: Path: description: "Device path" type: "string" Rate: description: "Rate" type: "integer" format: "int64" minimum: 0 Mount: type: "object" properties: Target: description: "Container path." type: "string" Source: description: "Mount source (e.g. a volume name, a host path)." type: "string" Type: description: | The mount type. Available types: - `bind` Mounts a file or directory from the host into the container. Must exist prior to creating the container. - `volume` Creates a volume with the given name and options (or uses a pre-existing volume with the same name and options). These are **not** removed when the container is removed. - `tmpfs` Create a tmpfs with the given options. The mount source cannot be specified for tmpfs. - `npipe` Mounts a named pipe from the host into the container. Must exist prior to creating the container. type: "string" enum: - "bind" - "volume" - "tmpfs" - "npipe" ReadOnly: description: "Whether the mount should be read-only." type: "boolean" Consistency: description: "The consistency requirement for the mount: `default`, `consistent`, `cached`, or `delegated`." type: "string" BindOptions: description: "Optional configuration for the `bind` type." type: "object" properties: Propagation: description: "A propagation mode with the value `[r]private`, `[r]shared`, or `[r]slave`." type: "string" enum: - "private" - "rprivate" - "shared" - "rshared" - "slave" - "rslave" NonRecursive: description: "Disable recursive bind mount." type: "boolean" default: false VolumeOptions: description: "Optional configuration for the `volume` type." 
type: "object" properties: NoCopy: description: "Populate volume with data from the target." type: "boolean" default: false Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" DriverConfig: description: "Map of driver specific options" type: "object" properties: Name: description: "Name of the driver to use to create the volume." type: "string" Options: description: "key/value map of driver specific options." type: "object" additionalProperties: type: "string" TmpfsOptions: description: "Optional configuration for the `tmpfs` type." type: "object" properties: SizeBytes: description: "The size for the tmpfs mount in bytes." type: "integer" format: "int64" Mode: description: "The permission mode for the tmpfs mount in an integer." type: "integer" RestartPolicy: description: | The behavior to apply when the container exits. The default is not to restart. An ever increasing delay (double the previous delay, starting at 100ms) is added before each restart to prevent flooding the server. type: "object" properties: Name: type: "string" description: | - Empty string means not to restart - `no` Do not automatically restart - `always` Always restart - `unless-stopped` Restart always except when the user has manually stopped the container - `on-failure` Restart only when the container exit code is non-zero enum: - "" - "no" - "always" - "unless-stopped" - "on-failure" MaximumRetryCount: type: "integer" description: | If `on-failure` is used, the number of times to retry before giving up. Resources: description: "A container's resources (cgroups config, ulimits, etc)" type: "object" properties: # Applicable to all platforms CpuShares: description: | An integer value representing this container's relative CPU weight versus other containers. type: "integer" Memory: description: "Memory limit in bytes." type: "integer" format: "int64" default: 0 # Applicable to UNIX platforms CgroupParent: description: | Path to `cgroups` under which the container's `cgroup` is created. If the path is not absolute, the path is considered to be relative to the `cgroups` path of the init process. Cgroups are created if they do not already exist. type: "string" BlkioWeight: description: "Block IO weight (relative weight)." type: "integer" minimum: 0 maximum: 1000 BlkioWeightDevice: description: | Block IO weight (relative device weight) in the form: ``` [{"Path": "device_path", "Weight": weight}] ``` type: "array" items: type: "object" properties: Path: type: "string" Weight: type: "integer" minimum: 0 BlkioDeviceReadBps: description: | Limit read rate (bytes per second) from a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" BlkioDeviceWriteBps: description: | Limit write rate (bytes per second) to a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" BlkioDeviceReadIOps: description: | Limit read rate (IO per second) from a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" BlkioDeviceWriteIOps: description: | Limit write rate (IO per second) to a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" CpuPeriod: description: "The length of a CPU period in microseconds." 
type: "integer" format: "int64" CpuQuota: description: | Microseconds of CPU time that the container can get in a CPU period. type: "integer" format: "int64" CpuRealtimePeriod: description: | The length of a CPU real-time period in microseconds. Set to 0 to allocate no time allocated to real-time tasks. type: "integer" format: "int64" CpuRealtimeRuntime: description: | The length of a CPU real-time runtime in microseconds. Set to 0 to allocate no time allocated to real-time tasks. type: "integer" format: "int64" CpusetCpus: description: | CPUs in which to allow execution (e.g., `0-3`, `0,1`). type: "string" example: "0-3" CpusetMems: description: | Memory nodes (MEMs) in which to allow execution (0-3, 0,1). Only effective on NUMA systems. type: "string" Devices: description: "A list of devices to add to the container." type: "array" items: $ref: "#/definitions/DeviceMapping" DeviceCgroupRules: description: "a list of cgroup rules to apply to the container" type: "array" items: type: "string" example: "c 13:* rwm" DeviceRequests: description: | A list of requests for devices to be sent to device drivers. type: "array" items: $ref: "#/definitions/DeviceRequest" KernelMemory: description: | Kernel memory limit in bytes. <p><br /></p> > **Deprecated**: This field is deprecated as the kernel 5.4 deprecated > `kmem.limit_in_bytes`. type: "integer" format: "int64" example: 209715200 KernelMemoryTCP: description: "Hard limit for kernel TCP buffer memory (in bytes)." type: "integer" format: "int64" MemoryReservation: description: "Memory soft limit in bytes." type: "integer" format: "int64" MemorySwap: description: | Total memory limit (memory + swap). Set as `-1` to enable unlimited swap. type: "integer" format: "int64" MemorySwappiness: description: | Tune a container's memory swappiness behavior. Accepts an integer between 0 and 100. type: "integer" format: "int64" minimum: 0 maximum: 100 NanoCpus: description: "CPU quota in units of 10<sup>-9</sup> CPUs." type: "integer" format: "int64" OomKillDisable: description: "Disable OOM Killer for the container." type: "boolean" Init: description: | Run an init inside the container that forwards signals and reaps processes. This field is omitted if empty, and the default (as configured on the daemon) is used. type: "boolean" x-nullable: true PidsLimit: description: | Tune a container's PIDs limit. Set `0` or `-1` for unlimited, or `null` to not change. type: "integer" format: "int64" x-nullable: true Ulimits: description: | A list of resource limits to set in the container. For example: ``` {"Name": "nofile", "Soft": 1024, "Hard": 2048} ``` type: "array" items: type: "object" properties: Name: description: "Name of ulimit" type: "string" Soft: description: "Soft limit" type: "integer" Hard: description: "Hard limit" type: "integer" # Applicable to Windows CpuCount: description: | The number of usable CPUs (Windows only). On Windows Server containers, the processor resource controls are mutually exclusive. The order of precedence is `CPUCount` first, then `CPUShares`, and `CPUPercent` last. type: "integer" format: "int64" CpuPercent: description: | The usable percentage of the available CPUs (Windows only). On Windows Server containers, the processor resource controls are mutually exclusive. The order of precedence is `CPUCount` first, then `CPUShares`, and `CPUPercent` last. 
type: "integer" format: "int64" IOMaximumIOps: description: "Maximum IOps for the container system drive (Windows only)" type: "integer" format: "int64" IOMaximumBandwidth: description: | Maximum IO in bytes per second for the container system drive (Windows only). type: "integer" format: "int64" Limit: description: | An object describing a limit on resources which can be requested by a task. type: "object" properties: NanoCPUs: type: "integer" format: "int64" example: 4000000000 MemoryBytes: type: "integer" format: "int64" example: 8272408576 Pids: description: | Limits the maximum number of PIDs in the container. Set `0` for unlimited. type: "integer" format: "int64" default: 0 example: 100 ResourceObject: description: | An object describing the resources which can be advertised by a node and requested by a task. type: "object" properties: NanoCPUs: type: "integer" format: "int64" example: 4000000000 MemoryBytes: type: "integer" format: "int64" example: 8272408576 GenericResources: $ref: "#/definitions/GenericResources" GenericResources: description: | User-defined resources can be either Integer resources (e.g, `SSD=3`) or String resources (e.g, `GPU=UUID1`). type: "array" items: type: "object" properties: NamedResourceSpec: type: "object" properties: Kind: type: "string" Value: type: "string" DiscreteResourceSpec: type: "object" properties: Kind: type: "string" Value: type: "integer" format: "int64" example: - DiscreteResourceSpec: Kind: "SSD" Value: 3 - NamedResourceSpec: Kind: "GPU" Value: "UUID1" - NamedResourceSpec: Kind: "GPU" Value: "UUID2" HealthConfig: description: "A test to perform to check that the container is healthy." type: "object" properties: Test: description: | The test to perform. Possible values are: - `[]` inherit healthcheck from image or parent image - `["NONE"]` disable healthcheck - `["CMD", args...]` exec arguments directly - `["CMD-SHELL", command]` run command with system's default shell type: "array" items: type: "string" Interval: description: | The time to wait between checks in nanoseconds. It should be 0 or at least 1000000 (1 ms). 0 means inherit. type: "integer" Timeout: description: | The time to wait before considering the check to have hung. It should be 0 or at least 1000000 (1 ms). 0 means inherit. type: "integer" Retries: description: | The number of consecutive failures needed to consider a container as unhealthy. 0 means inherit. type: "integer" StartPeriod: description: | Start period for the container to initialize before starting health-retries countdown in nanoseconds. It should be 0 or at least 1000000 (1 ms). 0 means inherit. type: "integer" Health: description: | Health stores information about the container's healthcheck results. 
type: "object" properties: Status: description: | Status is one of `none`, `starting`, `healthy` or `unhealthy` - "none" Indicates there is no healthcheck - "starting" Starting indicates that the container is not yet ready - "healthy" Healthy indicates that the container is running correctly - "unhealthy" Unhealthy indicates that the container has a problem type: "string" enum: - "none" - "starting" - "healthy" - "unhealthy" example: "healthy" FailingStreak: description: "FailingStreak is the number of consecutive failures" type: "integer" example: 0 Log: type: "array" description: | Log contains the last few results (oldest first) items: x-nullable: true $ref: "#/definitions/HealthcheckResult" HealthcheckResult: description: | HealthcheckResult stores information about a single run of a healthcheck probe type: "object" properties: Start: description: | Date and time at which this check started in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "date-time" example: "2020-01-04T10:44:24.496525531Z" End: description: | Date and time at which this check ended in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2020-01-04T10:45:21.364524523Z" ExitCode: description: | ExitCode meanings: - `0` healthy - `1` unhealthy - `2` reserved (considered unhealthy) - other values: error running probe type: "integer" example: 0 Output: description: "Output from last check" type: "string" HostConfig: description: "Container configuration that depends on the host we are running on" allOf: - $ref: "#/definitions/Resources" - type: "object" properties: # Applicable to all platforms Binds: type: "array" description: | A list of volume bindings for this container. Each volume binding is a string in one of these forms: - `host-src:container-dest[:options]` to bind-mount a host path into the container. Both `host-src`, and `container-dest` must be an _absolute_ path. - `volume-name:container-dest[:options]` to bind-mount a volume managed by a volume driver into the container. `container-dest` must be an _absolute_ path. `options` is an optional, comma-delimited list of: - `nocopy` disables automatic copying of data from the container path to the volume. The `nocopy` flag only applies to named volumes. - `[ro|rw]` mounts a volume read-only or read-write, respectively. If omitted or set to `rw`, volumes are mounted read-write. - `[z|Z]` applies SELinux labels to allow or deny multiple containers to read and write to the same volume. - `z`: a _shared_ content label is applied to the content. This label indicates that multiple containers can share the volume content, for both reading and writing. - `Z`: a _private unshared_ label is applied to the content. This label indicates that only the current container can use a private volume. Labeling systems such as SELinux require proper labels to be placed on volume content that is mounted into a container. Without a label, the security system can prevent a container's processes from using the content. By default, the labels set by the host operating system are not modified. - `[[r]shared|[r]slave|[r]private]` specifies mount [propagation behavior](https://www.kernel.org/doc/Documentation/filesystems/sharedsubtree.txt). This only applies to bind-mounted volumes, not internal volumes or named volumes. 
Mount propagation requires the source mount point (the location where the source directory is mounted in the host operating system) to have the correct propagation properties. For shared volumes, the source mount point must be set to `shared`. For slave volumes, the mount must be set to either `shared` or `slave`. items: type: "string" ContainerIDFile: type: "string" description: "Path to a file where the container ID is written" LogConfig: type: "object" description: "The logging configuration for this container" properties: Type: type: "string" enum: - "json-file" - "syslog" - "journald" - "gelf" - "fluentd" - "awslogs" - "splunk" - "etwlogs" - "none" Config: type: "object" additionalProperties: type: "string" NetworkMode: type: "string" description: | Network mode to use for this container. Supported standard values are: `bridge`, `host`, `none`, and `container:<name|id>`. Any other value is taken as a custom network's name to which this container should connect to. PortBindings: $ref: "#/definitions/PortMap" RestartPolicy: $ref: "#/definitions/RestartPolicy" AutoRemove: type: "boolean" description: | Automatically remove the container when the container's process exits. This has no effect if `RestartPolicy` is set. VolumeDriver: type: "string" description: "Driver that this container uses to mount volumes." VolumesFrom: type: "array" description: | A list of volumes to inherit from another container, specified in the form `<container name>[:<ro|rw>]`. items: type: "string" Mounts: description: | Specification for mounts to be added to the container. type: "array" items: $ref: "#/definitions/Mount" # Applicable to UNIX platforms CapAdd: type: "array" description: | A list of kernel capabilities to add to the container. Conflicts with option 'Capabilities'. items: type: "string" CapDrop: type: "array" description: | A list of kernel capabilities to drop from the container. Conflicts with option 'Capabilities'. items: type: "string" CgroupnsMode: type: "string" enum: - "private" - "host" description: | cgroup namespace mode for the container. Possible values are: - `"private"`: the container runs in its own private cgroup namespace - `"host"`: use the host system's cgroup namespace If not specified, the daemon default is used, which can either be `"private"` or `"host"`, depending on daemon version, kernel support and configuration. Dns: type: "array" description: "A list of DNS servers for the container to use." items: type: "string" DnsOptions: type: "array" description: "A list of DNS options." items: type: "string" DnsSearch: type: "array" description: "A list of DNS search domains." items: type: "string" ExtraHosts: type: "array" description: | A list of hostnames/IP mappings to add to the container's `/etc/hosts` file. Specified in the form `["hostname:IP"]`. items: type: "string" GroupAdd: type: "array" description: | A list of additional groups that the container process will run as. items: type: "string" IpcMode: type: "string" description: | IPC sharing mode for the container. Possible values are: - `"none"`: own private IPC namespace, with /dev/shm not mounted - `"private"`: own private IPC namespace - `"shareable"`: own private IPC namespace, with a possibility to share it with other containers - `"container:<name|id>"`: join another (shareable) container's IPC namespace - `"host"`: use the host system's IPC namespace If not specified, daemon default is used, which can either be `"private"` or `"shareable"`, depending on daemon version and configuration. 
Cgroup: type: "string" description: "Cgroup to use for the container." Links: type: "array" description: | A list of links for the container in the form `container_name:alias`. items: type: "string" OomScoreAdj: type: "integer" description: | An integer value containing the score given to the container in order to tune OOM killer preferences. example: 500 PidMode: type: "string" description: | Set the PID (Process) Namespace mode for the container. It can be either: - `"container:<name|id>"`: joins another container's PID namespace - `"host"`: use the host's PID namespace inside the container Privileged: type: "boolean" description: "Gives the container full access to the host." PublishAllPorts: type: "boolean" description: | Allocates an ephemeral host port for all of a container's exposed ports. Ports are de-allocated when the container stops and allocated when the container starts. The allocated port might be changed when restarting the container. The port is selected from the ephemeral port range that depends on the kernel. For example, on Linux the range is defined by `/proc/sys/net/ipv4/ip_local_port_range`. ReadonlyRootfs: type: "boolean" description: "Mount the container's root filesystem as read only." SecurityOpt: type: "array" description: "A list of string values to customize labels for MLS systems, such as SELinux." items: type: "string" StorageOpt: type: "object" description: | Storage driver options for this container, in the form `{"size": "120G"}`. additionalProperties: type: "string" Tmpfs: type: "object" description: | A map of container directories which should be replaced by tmpfs mounts, and their corresponding mount options. For example: ``` { "/run": "rw,noexec,nosuid,size=65536k" } ``` additionalProperties: type: "string" UTSMode: type: "string" description: "UTS namespace to use for the container." UsernsMode: type: "string" description: | Sets the usernamespace mode for the container when usernamespace remapping option is enabled. ShmSize: type: "integer" description: | Size of `/dev/shm` in bytes. If omitted, the system uses 64MB. minimum: 0 Sysctls: type: "object" description: | A list of kernel parameters (sysctls) to set in the container. For example: ``` {"net.ipv4.ip_forward": "1"} ``` additionalProperties: type: "string" Runtime: type: "string" description: "Runtime to use with this container." # Applicable to Windows ConsoleSize: type: "array" description: | Initial console size, as an `[height, width]` array. (Windows only) minItems: 2 maxItems: 2 items: type: "integer" minimum: 0 Isolation: type: "string" description: | Isolation technology of the container. (Windows only) enum: - "default" - "process" - "hyperv" MaskedPaths: type: "array" description: | The list of paths to be masked inside the container (this overrides the default set of paths). items: type: "string" ReadonlyPaths: type: "array" description: | The list of paths to be set as read-only inside the container (this overrides the default set of paths). items: type: "string" ContainerConfig: description: "Configuration for a container that is portable between hosts" type: "object" properties: Hostname: description: "The hostname to use for the container, as a valid RFC 1123 hostname." type: "string" Domainname: description: "The domain name to use for the container." type: "string" User: description: "The user that commands are run as inside the container." type: "string" AttachStdin: description: "Whether to attach to `stdin`." 
type: "boolean" default: false AttachStdout: description: "Whether to attach to `stdout`." type: "boolean" default: true AttachStderr: description: "Whether to attach to `stderr`." type: "boolean" default: true ExposedPorts: description: | An object mapping ports to an empty object in the form: `{"<port>/<tcp|udp|sctp>": {}}` type: "object" additionalProperties: type: "object" enum: - {} default: {} Tty: description: | Attach standard streams to a TTY, including `stdin` if it is not closed. type: "boolean" default: false OpenStdin: description: "Open `stdin`" type: "boolean" default: false StdinOnce: description: "Close `stdin` after one attached client disconnects" type: "boolean" default: false Env: description: | A list of environment variables to set inside the container in the form `["VAR=value", ...]`. A variable without `=` is removed from the environment, rather than to have an empty value. type: "array" items: type: "string" Cmd: description: | Command to run specified as a string or an array of strings. type: "array" items: type: "string" Healthcheck: $ref: "#/definitions/HealthConfig" ArgsEscaped: description: "Command is already escaped (Windows only)" type: "boolean" Image: description: | The name of the image to use when creating the container/ type: "string" Volumes: description: | An object mapping mount point paths inside the container to empty objects. type: "object" additionalProperties: type: "object" enum: - {} default: {} WorkingDir: description: "The working directory for commands to run in." type: "string" Entrypoint: description: | The entry point for the container as a string or an array of strings. If the array consists of exactly one empty string (`[""]`) then the entry point is reset to system default (i.e., the entry point used by docker when there is no `ENTRYPOINT` instruction in the `Dockerfile`). type: "array" items: type: "string" NetworkDisabled: description: "Disable networking for the container." type: "boolean" MacAddress: description: "MAC address of the container." type: "string" OnBuild: description: | `ONBUILD` metadata that were defined in the image's `Dockerfile`. type: "array" items: type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" StopSignal: description: | Signal to stop a container as a string or unsigned integer. type: "string" default: "SIGTERM" StopTimeout: description: "Timeout to stop a container in seconds." type: "integer" default: 10 Shell: description: | Shell for when `RUN`, `CMD`, and `ENTRYPOINT` uses a shell. type: "array" items: type: "string" NetworkingConfig: description: | NetworkingConfig represents the container's networking configuration for each of its interfaces. It is used for the networking configs specified in the `docker create` and `docker network connect` commands. type: "object" properties: EndpointsConfig: description: | A mapping of network name to endpoint configuration for that network. type: "object" additionalProperties: $ref: "#/definitions/EndpointSettings" example: # putting an example here, instead of using the example values from # /definitions/EndpointSettings, because containers/create currently # does not support attaching to multiple networks, so the example request # would be confusing if it showed that multiple networks can be contained # in the EndpointsConfig. 
# TODO remove once we support multiple networks on container create (see https://github.com/moby/moby/blob/07e6b843594e061f82baa5fa23c2ff7d536c2a05/daemon/create.go#L323) EndpointsConfig: isolated_nw: IPAMConfig: IPv4Address: "172.20.30.33" IPv6Address: "2001:db8:abcd::3033" LinkLocalIPs: - "169.254.34.68" - "fe80::3468" Links: - "container_1" - "container_2" Aliases: - "server_x" - "server_y" NetworkSettings: description: "NetworkSettings exposes the network settings in the API" type: "object" properties: Bridge: description: Name of the network's bridge (for example, `docker0`). type: "string" example: "docker0" SandboxID: description: SandboxID uniquely represents a container's network stack. type: "string" example: "9d12daf2c33f5959c8bf90aa513e4f65b561738661003029ec84830cd503a0c3" HairpinMode: description: | Indicates if hairpin NAT should be enabled on the virtual interface. type: "boolean" example: false LinkLocalIPv6Address: description: IPv6 unicast address using the link-local prefix. type: "string" example: "fe80::42:acff:fe11:1" LinkLocalIPv6PrefixLen: description: Prefix length of the IPv6 unicast address. type: "integer" example: "64" Ports: $ref: "#/definitions/PortMap" SandboxKey: description: SandboxKey identifies the sandbox type: "string" example: "/var/run/docker/netns/8ab54b426c38" # TODO is SecondaryIPAddresses actually used? SecondaryIPAddresses: description: "" type: "array" items: $ref: "#/definitions/Address" x-nullable: true # TODO is SecondaryIPv6Addresses actually used? SecondaryIPv6Addresses: description: "" type: "array" items: $ref: "#/definitions/Address" x-nullable: true # TODO properties below are part of DefaultNetworkSettings, which is # marked as deprecated since Docker 1.9 and to be removed in Docker v17.12 EndpointID: description: | EndpointID uniquely represents a service endpoint in a Sandbox. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "b88f5b905aabf2893f3cbc4ee42d1ea7980bbc0a92e2c8922b1e1795298afb0b" Gateway: description: | Gateway address for the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "172.17.0.1" GlobalIPv6Address: description: | Global IPv6 address for the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "2001:db8::5689" GlobalIPv6PrefixLen: description: | Mask length of the global IPv6 address. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. 
This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "integer" example: 64 IPAddress: description: | IPv4 address for the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "172.17.0.4" IPPrefixLen: description: | Mask length of the IPv4 address. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "integer" example: 16 IPv6Gateway: description: | IPv6 gateway address for this network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "2001:db8:2::100" MacAddress: description: | MAC address for the container on the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "02:42:ac:11:00:04" Networks: description: | Information about all networks that the container is connected to. type: "object" additionalProperties: $ref: "#/definitions/EndpointSettings" Address: description: Address represents an IPv4 or IPv6 IP address. type: "object" properties: Addr: description: IP address. type: "string" PrefixLen: description: Mask length of the IP address. type: "integer" PortMap: description: | PortMap describes the mapping of container ports to host ports, using the container's port-number and protocol as key in the format `<port>/<protocol>`, for example, `80/udp`. If a container's port is mapped for multiple protocols, separate entries are added to the mapping table. type: "object" additionalProperties: type: "array" x-nullable: true items: $ref: "#/definitions/PortBinding" example: "443/tcp": - HostIp: "127.0.0.1" HostPort: "4443" "80/tcp": - HostIp: "0.0.0.0" HostPort: "80" - HostIp: "0.0.0.0" HostPort: "8080" "80/udp": - HostIp: "0.0.0.0" HostPort: "80" "53/udp": - HostIp: "0.0.0.0" HostPort: "53" "2377/tcp": null PortBinding: description: | PortBinding represents a binding between a host IP address and a host port. type: "object" properties: HostIp: description: "Host IP address that the container's port is mapped to." type: "string" example: "127.0.0.1" HostPort: description: "Host port number that the container's port is mapped to." type: "string" example: "4443" GraphDriverData: description: "Information about a container's graph driver." 
type: "object" required: [Name, Data] properties: Name: type: "string" x-nullable: false Data: type: "object" x-nullable: false additionalProperties: type: "string" Image: type: "object" required: - Id - Parent - Comment - Created - Container - DockerVersion - Author - Architecture - Os - Size - VirtualSize - GraphDriver - RootFS properties: Id: type: "string" x-nullable: false RepoTags: type: "array" items: type: "string" RepoDigests: type: "array" items: type: "string" Parent: type: "string" x-nullable: false Comment: type: "string" x-nullable: false Created: type: "string" x-nullable: false Container: type: "string" x-nullable: false ContainerConfig: $ref: "#/definitions/ContainerConfig" DockerVersion: type: "string" x-nullable: false Author: type: "string" x-nullable: false Config: $ref: "#/definitions/ContainerConfig" Architecture: type: "string" x-nullable: false Os: type: "string" x-nullable: false OsVersion: type: "string" Size: type: "integer" format: "int64" x-nullable: false VirtualSize: type: "integer" format: "int64" x-nullable: false GraphDriver: $ref: "#/definitions/GraphDriverData" RootFS: type: "object" required: [Type] properties: Type: type: "string" x-nullable: false Layers: type: "array" items: type: "string" BaseLayer: type: "string" Metadata: type: "object" properties: LastTagTime: type: "string" format: "dateTime" ImageSummary: type: "object" required: - Id - ParentId - RepoTags - RepoDigests - Created - Size - SharedSize - VirtualSize - Labels - Containers properties: Id: type: "string" x-nullable: false ParentId: type: "string" x-nullable: false RepoTags: type: "array" x-nullable: false items: type: "string" RepoDigests: type: "array" x-nullable: false items: type: "string" Created: type: "integer" x-nullable: false Size: type: "integer" x-nullable: false SharedSize: type: "integer" x-nullable: false VirtualSize: type: "integer" x-nullable: false Labels: type: "object" x-nullable: false additionalProperties: type: "string" Containers: x-nullable: false type: "integer" AuthConfig: type: "object" properties: username: type: "string" password: type: "string" email: type: "string" serveraddress: type: "string" example: username: "hannibal" password: "xxxx" serveraddress: "https://index.docker.io/v1/" ProcessConfig: type: "object" properties: privileged: type: "boolean" user: type: "string" tty: type: "boolean" entrypoint: type: "string" arguments: type: "array" items: type: "string" Volume: type: "object" required: [Name, Driver, Mountpoint, Labels, Scope, Options] properties: Name: type: "string" description: "Name of the volume." x-nullable: false Driver: type: "string" description: "Name of the volume driver used by the volume." x-nullable: false Mountpoint: type: "string" description: "Mount path of the volume on the host." x-nullable: false CreatedAt: type: "string" format: "dateTime" description: "Date/Time the volume was created." Status: type: "object" description: | Low-level details about the volume, provided by the volume driver. Details are returned as a map with key/value pairs: `{"key":"value","key2":"value2"}`. The `Status` field is optional, and is omitted if the volume driver does not support this feature. additionalProperties: type: "object" Labels: type: "object" description: "User-defined key/value metadata." x-nullable: false additionalProperties: type: "string" Scope: type: "string" description: | The level at which the volume exists. Either `global` for cluster-wide, or `local` for machine level. 
default: "local" x-nullable: false enum: ["local", "global"] Options: type: "object" description: | The driver specific options used when creating the volume. additionalProperties: type: "string" UsageData: type: "object" x-nullable: true required: [Size, RefCount] description: | Usage details about the volume. This information is used by the `GET /system/df` endpoint, and omitted in other endpoints. properties: Size: type: "integer" default: -1 description: | Amount of disk space used by the volume (in bytes). This information is only available for volumes created with the `"local"` volume driver. For volumes created with other volume drivers, this field is set to `-1` ("not available") x-nullable: false RefCount: type: "integer" default: -1 description: | The number of containers referencing this volume. This field is set to `-1` if the reference-count is not available. x-nullable: false example: Name: "tardis" Driver: "custom" Mountpoint: "/var/lib/docker/volumes/tardis" Status: hello: "world" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Scope: "local" CreatedAt: "2016-06-07T20:31:11.853781916Z" Network: type: "object" properties: Name: type: "string" Id: type: "string" Created: type: "string" format: "dateTime" Scope: type: "string" Driver: type: "string" EnableIPv6: type: "boolean" IPAM: $ref: "#/definitions/IPAM" Internal: type: "boolean" Attachable: type: "boolean" Ingress: type: "boolean" Containers: type: "object" additionalProperties: $ref: "#/definitions/NetworkContainer" Options: type: "object" additionalProperties: type: "string" Labels: type: "object" additionalProperties: type: "string" example: Name: "net01" Id: "7d86d31b1478e7cca9ebed7e73aa0fdeec46c5ca29497431d3007d2d9e15ed99" Created: "2016-10-19T04:33:30.360899459Z" Scope: "local" Driver: "bridge" EnableIPv6: false IPAM: Driver: "default" Config: - Subnet: "172.19.0.0/16" Gateway: "172.19.0.1" Options: foo: "bar" Internal: false Attachable: false Ingress: false Containers: 19a4d5d687db25203351ed79d478946f861258f018fe384f229f2efa4b23513c: Name: "test" EndpointID: "628cadb8bcb92de107b2a1e516cbffe463e321f548feb37697cce00ad694f21a" MacAddress: "02:42:ac:13:00:02" IPv4Address: "172.19.0.2/16" IPv6Address: "" Options: com.docker.network.bridge.default_bridge: "true" com.docker.network.bridge.enable_icc: "true" com.docker.network.bridge.enable_ip_masquerade: "true" com.docker.network.bridge.host_binding_ipv4: "0.0.0.0" com.docker.network.bridge.name: "docker0" com.docker.network.driver.mtu: "1500" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" IPAM: type: "object" properties: Driver: description: "Name of the IPAM driver to use." type: "string" default: "default" Config: description: | List of IPAM configuration options, specified as a map: ``` {"Subnet": <CIDR>, "IPRange": <CIDR>, "Gateway": <IP address>, "AuxAddress": <device_name:IP address>} ``` type: "array" items: type: "object" additionalProperties: type: "string" Options: description: "Driver-specific options, specified as a map." 
type: "object" additionalProperties: type: "string" NetworkContainer: type: "object" properties: Name: type: "string" EndpointID: type: "string" MacAddress: type: "string" IPv4Address: type: "string" IPv6Address: type: "string" BuildInfo: type: "object" properties: id: type: "string" stream: type: "string" error: type: "string" errorDetail: $ref: "#/definitions/ErrorDetail" status: type: "string" progress: type: "string" progressDetail: $ref: "#/definitions/ProgressDetail" aux: $ref: "#/definitions/ImageID" BuildCache: type: "object" properties: ID: type: "string" Parent: type: "string" Type: type: "string" Description: type: "string" InUse: type: "boolean" Shared: type: "boolean" Size: description: | Amount of disk space used by the build cache (in bytes). type: "integer" CreatedAt: description: | Date and time at which the build cache was created in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2016-08-18T10:44:24.496525531Z" LastUsedAt: description: | Date and time at which the build cache was last used in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" x-nullable: true example: "2017-08-09T07:09:37.632105588Z" UsageCount: type: "integer" ImageID: type: "object" description: "Image ID or Digest" properties: ID: type: "string" example: ID: "sha256:85f05633ddc1c50679be2b16a0479ab6f7637f8884e0cfe0f4d20e1ebb3d6e7c" CreateImageInfo: type: "object" properties: id: type: "string" error: type: "string" status: type: "string" progress: type: "string" progressDetail: $ref: "#/definitions/ProgressDetail" PushImageInfo: type: "object" properties: error: type: "string" status: type: "string" progress: type: "string" progressDetail: $ref: "#/definitions/ProgressDetail" ErrorDetail: type: "object" properties: code: type: "integer" message: type: "string" ProgressDetail: type: "object" properties: current: type: "integer" total: type: "integer" ErrorResponse: description: "Represents an error." type: "object" required: ["message"] properties: message: description: "The error message." type: "string" x-nullable: false example: message: "Something went wrong." IdResponse: description: "Response to an API call that returns just an Id" type: "object" required: ["Id"] properties: Id: description: "The id of the newly created object." type: "string" x-nullable: false EndpointSettings: description: "Configuration for a network endpoint." type: "object" properties: # Configurations IPAMConfig: $ref: "#/definitions/EndpointIPAMConfig" Links: type: "array" items: type: "string" example: - "container_1" - "container_2" Aliases: type: "array" items: type: "string" example: - "server_x" - "server_y" # Operational data NetworkID: description: | Unique ID of the network. type: "string" example: "08754567f1f40222263eab4102e1c733ae697e8e354aa9cd6e18d7402835292a" EndpointID: description: | Unique ID for the service endpoint in a Sandbox. type: "string" example: "b88f5b905aabf2893f3cbc4ee42d1ea7980bbc0a92e2c8922b1e1795298afb0b" Gateway: description: | Gateway address for this network. type: "string" example: "172.17.0.1" IPAddress: description: | IPv4 address. type: "string" example: "172.17.0.4" IPPrefixLen: description: | Mask length of the IPv4 address. type: "integer" example: 16 IPv6Gateway: description: | IPv6 gateway address. type: "string" example: "2001:db8:2::100" GlobalIPv6Address: description: | Global IPv6 address. 
type: "string" example: "2001:db8::5689" GlobalIPv6PrefixLen: description: | Mask length of the global IPv6 address. type: "integer" format: "int64" example: 64 MacAddress: description: | MAC address for the endpoint on this network. type: "string" example: "02:42:ac:11:00:04" DriverOpts: description: | DriverOpts is a mapping of driver options and values. These options are passed directly to the driver and are driver specific. type: "object" x-nullable: true additionalProperties: type: "string" example: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" EndpointIPAMConfig: description: | EndpointIPAMConfig represents an endpoint's IPAM configuration. type: "object" x-nullable: true properties: IPv4Address: type: "string" example: "172.20.30.33" IPv6Address: type: "string" example: "2001:db8:abcd::3033" LinkLocalIPs: type: "array" items: type: "string" example: - "169.254.34.68" - "fe80::3468" PluginMount: type: "object" x-nullable: false required: [Name, Description, Settable, Source, Destination, Type, Options] properties: Name: type: "string" x-nullable: false example: "some-mount" Description: type: "string" x-nullable: false example: "This is a mount that's used by the plugin." Settable: type: "array" items: type: "string" Source: type: "string" example: "/var/lib/docker/plugins/" Destination: type: "string" x-nullable: false example: "/mnt/state" Type: type: "string" x-nullable: false example: "bind" Options: type: "array" items: type: "string" example: - "rbind" - "rw" PluginDevice: type: "object" required: [Name, Description, Settable, Path] x-nullable: false properties: Name: type: "string" x-nullable: false Description: type: "string" x-nullable: false Settable: type: "array" items: type: "string" Path: type: "string" example: "/dev/fuse" PluginEnv: type: "object" x-nullable: false required: [Name, Description, Settable, Value] properties: Name: x-nullable: false type: "string" Description: x-nullable: false type: "string" Settable: type: "array" items: type: "string" Value: type: "string" PluginInterfaceType: type: "object" x-nullable: false required: [Prefix, Capability, Version] properties: Prefix: type: "string" x-nullable: false Capability: type: "string" x-nullable: false Version: type: "string" x-nullable: false PluginPrivilegeItem: description: | Describes a permission the user has to accept upon installing the plugin. type: "object" properties: Name: type: "string" example: "network" Description: type: "string" Value: type: "array" items: type: "string" example: - "host" Plugin: description: "A plugin for the Engine API" type: "object" required: [Settings, Enabled, Config, Name] properties: Id: type: "string" example: "5724e2c8652da337ab2eedd19fc6fc0ec908e4bd907c7421bf6a8dfc70c4c078" Name: type: "string" x-nullable: false example: "tiborvass/sample-volume-plugin" Enabled: description: True if the plugin is running. False if the plugin is not running, only installed. type: "boolean" x-nullable: false example: true Settings: description: "Settings that can be modified by users." 
type: "object" x-nullable: false required: [Args, Devices, Env, Mounts] properties: Mounts: type: "array" items: $ref: "#/definitions/PluginMount" Env: type: "array" items: type: "string" example: - "DEBUG=0" Args: type: "array" items: type: "string" Devices: type: "array" items: $ref: "#/definitions/PluginDevice" PluginReference: description: "plugin remote reference used to push/pull the plugin" type: "string" x-nullable: false example: "localhost:5000/tiborvass/sample-volume-plugin:latest" Config: description: "The config of a plugin." type: "object" x-nullable: false required: - Description - Documentation - Interface - Entrypoint - WorkDir - Network - Linux - PidHost - PropagatedMount - IpcHost - Mounts - Env - Args properties: DockerVersion: description: "Docker Version used to create the plugin" type: "string" x-nullable: false example: "17.06.0-ce" Description: type: "string" x-nullable: false example: "A sample volume plugin for Docker" Documentation: type: "string" x-nullable: false example: "https://docs.docker.com/engine/extend/plugins/" Interface: description: "The interface between Docker and the plugin" x-nullable: false type: "object" required: [Types, Socket] properties: Types: type: "array" items: $ref: "#/definitions/PluginInterfaceType" example: - "docker.volumedriver/1.0" Socket: type: "string" x-nullable: false example: "plugins.sock" ProtocolScheme: type: "string" example: "some.protocol/v1.0" description: "Protocol to use for clients connecting to the plugin." enum: - "" - "moby.plugins.http/v1" Entrypoint: type: "array" items: type: "string" example: - "/usr/bin/sample-volume-plugin" - "/data" WorkDir: type: "string" x-nullable: false example: "/bin/" User: type: "object" x-nullable: false properties: UID: type: "integer" format: "uint32" example: 1000 GID: type: "integer" format: "uint32" example: 1000 Network: type: "object" x-nullable: false required: [Type] properties: Type: x-nullable: false type: "string" example: "host" Linux: type: "object" x-nullable: false required: [Capabilities, AllowAllDevices, Devices] properties: Capabilities: type: "array" items: type: "string" example: - "CAP_SYS_ADMIN" - "CAP_SYSLOG" AllowAllDevices: type: "boolean" x-nullable: false example: false Devices: type: "array" items: $ref: "#/definitions/PluginDevice" PropagatedMount: type: "string" x-nullable: false example: "/mnt/volumes" IpcHost: type: "boolean" x-nullable: false example: false PidHost: type: "boolean" x-nullable: false example: false Mounts: type: "array" items: $ref: "#/definitions/PluginMount" Env: type: "array" items: $ref: "#/definitions/PluginEnv" example: - Name: "DEBUG" Description: "If set, prints debug messages" Settable: null Value: "0" Args: type: "object" x-nullable: false required: [Name, Description, Settable, Value] properties: Name: x-nullable: false type: "string" example: "args" Description: x-nullable: false type: "string" example: "command line arguments" Settable: type: "array" items: type: "string" Value: type: "array" items: type: "string" rootfs: type: "object" properties: type: type: "string" example: "layers" diff_ids: type: "array" items: type: "string" example: - "sha256:675532206fbf3030b8458f88d6e26d4eb1577688a25efec97154c94e8b6b4887" - "sha256:e216a057b1cb1efc11f8a268f37ef62083e70b1b38323ba252e25ac88904a7e8" ObjectVersion: description: | The version number of the object such as node, service, etc. This is needed to avoid conflicting writes. 
The client must send the version number along with the modified specification when updating these objects. This approach ensures safe concurrency and determinism in that the change on the object may not be applied if the version number has changed from the last read. In other words, if two update requests specify the same base version, only one of the requests can succeed. As a result, two separate update requests that happen at the same time will not unintentionally overwrite each other. type: "object" properties: Index: type: "integer" format: "uint64" example: 373531 NodeSpec: type: "object" properties: Name: description: "Name for the node." type: "string" example: "my-node" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" Role: description: "Role of the node." type: "string" enum: - "worker" - "manager" example: "manager" Availability: description: "Availability of the node." type: "string" enum: - "active" - "pause" - "drain" example: "active" example: Availability: "active" Name: "node-name" Role: "manager" Labels: foo: "bar" Node: type: "object" properties: ID: type: "string" example: "24ifsmvkjbyhk" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: description: | Date and time at which the node was added to the swarm in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2016-08-18T10:44:24.496525531Z" UpdatedAt: description: | Date and time at which the node was last updated in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2017-08-09T07:09:37.632105588Z" Spec: $ref: "#/definitions/NodeSpec" Description: $ref: "#/definitions/NodeDescription" Status: $ref: "#/definitions/NodeStatus" ManagerStatus: $ref: "#/definitions/ManagerStatus" NodeDescription: description: | NodeDescription encapsulates the properties of the Node as reported by the agent. type: "object" properties: Hostname: type: "string" example: "bf3067039e47" Platform: $ref: "#/definitions/Platform" Resources: $ref: "#/definitions/ResourceObject" Engine: $ref: "#/definitions/EngineDescription" TLSInfo: $ref: "#/definitions/TLSInfo" Platform: description: | Platform represents the platform (Arch/OS). type: "object" properties: Architecture: description: | Architecture represents the hardware architecture (for example, `x86_64`). type: "string" example: "x86_64" OS: description: | OS represents the Operating System (for example, `linux` or `windows`). type: "string" example: "linux" EngineDescription: description: "EngineDescription provides information about an engine." 
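# Illustrative only, not part of the API definition: a sketch showing how the
# ObjectVersion described above is sent back when updating a node's spec (here,
# draining it). The node ID is a hypothetical example; a swarm manager is assumed.
#
#   package main
#
#   import (
#       "context"
#
#       "github.com/docker/docker/api/types/swarm"
#       "github.com/docker/docker/client"
#   )
#
#   func main() {
#       cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
#       if err != nil {
#           panic(err)
#       }
#       ctx := context.Background()
#       node, _, err := cli.NodeInspectWithRaw(ctx, "24ifsmvkjbyhk")
#       if err != nil {
#           panic(err)
#       }
#       spec := node.Spec
#       spec.Availability = swarm.NodeAvailabilityDrain
#       // The last-read Version must accompany the update (optimistic concurrency).
#       if err := cli.NodeUpdate(ctx, node.ID, node.Version, spec); err != nil {
#           panic(err)
#       }
#   }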
type: "object" properties: EngineVersion: type: "string" example: "17.06.0" Labels: type: "object" additionalProperties: type: "string" example: foo: "bar" Plugins: type: "array" items: type: "object" properties: Type: type: "string" Name: type: "string" example: - Type: "Log" Name: "awslogs" - Type: "Log" Name: "fluentd" - Type: "Log" Name: "gcplogs" - Type: "Log" Name: "gelf" - Type: "Log" Name: "journald" - Type: "Log" Name: "json-file" - Type: "Log" Name: "logentries" - Type: "Log" Name: "splunk" - Type: "Log" Name: "syslog" - Type: "Network" Name: "bridge" - Type: "Network" Name: "host" - Type: "Network" Name: "ipvlan" - Type: "Network" Name: "macvlan" - Type: "Network" Name: "null" - Type: "Network" Name: "overlay" - Type: "Volume" Name: "local" - Type: "Volume" Name: "localhost:5000/vieux/sshfs:latest" - Type: "Volume" Name: "vieux/sshfs:latest" TLSInfo: description: | Information about the issuer of leaf TLS certificates and the trusted root CA certificate. type: "object" properties: TrustRoot: description: | The root CA certificate(s) that are used to validate leaf TLS certificates. type: "string" CertIssuerSubject: description: The base64-url-safe-encoded raw subject bytes of the issuer. type: "string" CertIssuerPublicKey: description: | The base64-url-safe-encoded raw public key bytes of the issuer. type: "string" example: TrustRoot: | -----BEGIN CERTIFICATE----- MIIBajCCARCgAwIBAgIUbYqrLSOSQHoxD8CwG6Bi2PJi9c8wCgYIKoZIzj0EAwIw EzERMA8GA1UEAxMIc3dhcm0tY2EwHhcNMTcwNDI0MjE0MzAwWhcNMzcwNDE5MjE0 MzAwWjATMREwDwYDVQQDEwhzd2FybS1jYTBZMBMGByqGSM49AgEGCCqGSM49AwEH A0IABJk/VyMPYdaqDXJb/VXh5n/1Yuv7iNrxV3Qb3l06XD46seovcDWs3IZNV1lf 3Skyr0ofcchipoiHkXBODojJydSjQjBAMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMB Af8EBTADAQH/MB0GA1UdDgQWBBRUXxuRcnFjDfR/RIAUQab8ZV/n4jAKBggqhkjO PQQDAgNIADBFAiAy+JTe6Uc3KyLCMiqGl2GyWGQqQDEcO3/YG36x7om65AIhAJvz pxv6zFeVEkAEEkqIYi0omA9+CjanB/6Bz4n1uw8H -----END CERTIFICATE----- CertIssuerSubject: "MBMxETAPBgNVBAMTCHN3YXJtLWNh" CertIssuerPublicKey: "MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEmT9XIw9h1qoNclv9VeHmf/Vi6/uI2vFXdBveXTpcPjqx6i9wNazchk1XWV/dKTKvSh9xyGKmiIeRcE4OiMnJ1A==" NodeStatus: description: | NodeStatus represents the status of a node. It provides the current status of the node, as seen by the manager. type: "object" properties: State: $ref: "#/definitions/NodeState" Message: type: "string" example: "" Addr: description: "IP address of the node." type: "string" example: "172.17.0.2" NodeState: description: "NodeState represents the state of a node." type: "string" enum: - "unknown" - "down" - "ready" - "disconnected" example: "ready" ManagerStatus: description: | ManagerStatus represents the status of a manager. It provides the current status of a node's manager component, if the node is a manager. x-nullable: true type: "object" properties: Leader: type: "boolean" default: false example: true Reachability: $ref: "#/definitions/Reachability" Addr: description: | The IP address and port at which the manager is reachable. type: "string" example: "10.0.0.46:2377" Reachability: description: "Reachability represents the reachability of a node." type: "string" enum: - "unknown" - "unreachable" - "reachable" example: "reachable" SwarmSpec: description: "User modifiable swarm configuration." type: "object" properties: Name: description: "Name of the swarm." type: "string" example: "default" Labels: description: "User-defined key/value metadata." 
type: "object" additionalProperties: type: "string" example: com.example.corp.type: "production" com.example.corp.department: "engineering" Orchestration: description: "Orchestration configuration." type: "object" x-nullable: true properties: TaskHistoryRetentionLimit: description: | The number of historic tasks to keep per instance or node. If negative, never remove completed or failed tasks. type: "integer" format: "int64" example: 10 Raft: description: "Raft configuration." type: "object" properties: SnapshotInterval: description: "The number of log entries between snapshots." type: "integer" format: "uint64" example: 10000 KeepOldSnapshots: description: | The number of snapshots to keep beyond the current snapshot. type: "integer" format: "uint64" LogEntriesForSlowFollowers: description: | The number of log entries to keep around to sync up slow followers after a snapshot is created. type: "integer" format: "uint64" example: 500 ElectionTick: description: | The number of ticks that a follower will wait for a message from the leader before becoming a candidate and starting an election. `ElectionTick` must be greater than `HeartbeatTick`. A tick currently defaults to one second, so these translate directly to seconds currently, but this is NOT guaranteed. type: "integer" example: 3 HeartbeatTick: description: | The number of ticks between heartbeats. Every HeartbeatTick ticks, the leader will send a heartbeat to the followers. A tick currently defaults to one second, so these translate directly to seconds currently, but this is NOT guaranteed. type: "integer" example: 1 Dispatcher: description: "Dispatcher configuration." type: "object" x-nullable: true properties: HeartbeatPeriod: description: | The delay for an agent to send a heartbeat to the dispatcher. type: "integer" format: "int64" example: 5000000000 CAConfig: description: "CA configuration." type: "object" x-nullable: true properties: NodeCertExpiry: description: "The duration node certificates are issued for." type: "integer" format: "int64" example: 7776000000000000 ExternalCAs: description: | Configuration for forwarding signing requests to an external certificate authority. type: "array" items: type: "object" properties: Protocol: description: | Protocol for communication with the external CA (currently only `cfssl` is supported). type: "string" enum: - "cfssl" default: "cfssl" URL: description: | URL where certificate signing requests should be sent. type: "string" Options: description: | An object with key/value pairs that are interpreted as protocol-specific options for the external CA driver. type: "object" additionalProperties: type: "string" CACert: description: | The root CA certificate (in PEM format) this external CA uses to issue TLS certificates (assumed to be to the current swarm root CA certificate if not provided). type: "string" SigningCACert: description: | The desired signing CA certificate for all swarm node TLS leaf certificates, in PEM format. type: "string" SigningCAKey: description: | The desired signing CA key for all swarm node TLS leaf certificates, in PEM format. type: "string" ForceRotate: description: | An integer whose purpose is to force swarm to generate a new signing CA certificate and key, if none have been specified in `SigningCACert` and `SigningCAKey` format: "uint64" type: "integer" EncryptionConfig: description: "Parameters related to encryption-at-rest." type: "object" properties: AutoLockManagers: description: | If set, generate a key and use it to lock data stored on the managers. 
type: "boolean" example: false TaskDefaults: description: "Defaults for creating tasks in this cluster." type: "object" properties: LogDriver: description: | The log driver to use for tasks created in the orchestrator if unspecified by a service. Updating this value only affects new tasks. Existing tasks continue to use their previously configured log driver until recreated. type: "object" properties: Name: description: | The log driver to use as a default for new tasks. type: "string" example: "json-file" Options: description: | Driver-specific options for the selectd log driver, specified as key/value pairs. type: "object" additionalProperties: type: "string" example: "max-file": "10" "max-size": "100m" # The Swarm information for `GET /info`. It is the same as `GET /swarm`, but # without `JoinTokens`. ClusterInfo: description: | ClusterInfo represents information about the swarm as is returned by the "/info" endpoint. Join-tokens are not included. x-nullable: true type: "object" properties: ID: description: "The ID of the swarm." type: "string" example: "abajmipo7b4xz5ip2nrla6b11" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: description: | Date and time at which the swarm was initialised in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2016-08-18T10:44:24.496525531Z" UpdatedAt: description: | Date and time at which the swarm was last updated in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2017-08-09T07:09:37.632105588Z" Spec: $ref: "#/definitions/SwarmSpec" TLSInfo: $ref: "#/definitions/TLSInfo" RootRotationInProgress: description: | Whether there is currently a root CA rotation in progress for the swarm type: "boolean" example: false DataPathPort: description: | DataPathPort specifies the data path port number for data traffic. Acceptable port range is 1024 to 49151. If no port is set or is set to 0, the default port (4789) is used. type: "integer" format: "uint32" default: 4789 example: 4789 DefaultAddrPool: description: | Default Address Pool specifies default subnet pools for global scope networks. type: "array" items: type: "string" format: "CIDR" example: ["10.10.0.0/16", "20.20.0.0/16"] SubnetSize: description: | SubnetSize specifies the subnet size of the networks created from the default subnet pool. type: "integer" format: "uint32" maximum: 29 default: 24 example: 24 JoinTokens: description: | JoinTokens contains the tokens workers and managers need to join the swarm. type: "object" properties: Worker: description: | The token workers can use to join the swarm. type: "string" example: "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-1awxwuwd3z9j1z3puu7rcgdbx" Manager: description: | The token managers can use to join the swarm. type: "string" example: "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-7p73s1dx5in4tatdymyhg9hu2" Swarm: type: "object" allOf: - $ref: "#/definitions/ClusterInfo" - type: "object" properties: JoinTokens: $ref: "#/definitions/JoinTokens" TaskSpec: description: "User modifiable task configuration." type: "object" properties: PluginSpec: type: "object" description: | Plugin spec for the service. *(Experimental release only.)* <p><br /></p> > **Note**: ContainerSpec, NetworkAttachmentSpec, and PluginSpec are > mutually exclusive. PluginSpec is only used when the Runtime field > is set to `plugin`. NetworkAttachmentSpec is used when the Runtime > field is set to `attachment`. 
properties: Name: description: "The name or 'alias' to use for the plugin." type: "string" Remote: description: "The plugin image reference to use." type: "string" Disabled: description: "Disable the plugin once scheduled." type: "boolean" PluginPrivilege: type: "array" items: $ref: "#/definitions/PluginPrivilegeItem" ContainerSpec: type: "object" description: | Container spec for the service. <p><br /></p> > **Note**: ContainerSpec, NetworkAttachmentSpec, and PluginSpec are > mutually exclusive. PluginSpec is only used when the Runtime field > is set to `plugin`. NetworkAttachmentSpec is used when the Runtime > field is set to `attachment`. properties: Image: description: "The image name to use for the container" type: "string" Labels: description: "User-defined key/value data." type: "object" additionalProperties: type: "string" Command: description: "The command to be run in the image." type: "array" items: type: "string" Args: description: "Arguments to the command." type: "array" items: type: "string" Hostname: description: | The hostname to use for the container, as a valid [RFC 1123](https://tools.ietf.org/html/rfc1123) hostname. type: "string" Env: description: | A list of environment variables in the form `VAR=value`. type: "array" items: type: "string" Dir: description: "The working directory for commands to run in." type: "string" User: description: "The user inside the container." type: "string" Groups: type: "array" description: | A list of additional groups that the container process will run as. items: type: "string" Privileges: type: "object" description: "Security options for the container" properties: CredentialSpec: type: "object" description: "CredentialSpec for managed service account (Windows only)" properties: Config: type: "string" example: "0bt9dmxjvjiqermk6xrop3ekq" description: | Load credential spec from a Swarm Config with the given ID. The specified config must also be present in the Configs field with the Runtime property set. <p><br /></p> > **Note**: `CredentialSpec.File`, `CredentialSpec.Registry`, > and `CredentialSpec.Config` are mutually exclusive. File: type: "string" example: "spec.json" description: | Load credential spec from this file. The file is read by the daemon, and must be present in the `CredentialSpecs` subdirectory in the docker data directory, which defaults to `C:\ProgramData\Docker\` on Windows. For example, specifying `spec.json` loads `C:\ProgramData\Docker\CredentialSpecs\spec.json`. <p><br /></p> > **Note**: `CredentialSpec.File`, `CredentialSpec.Registry`, > and `CredentialSpec.Config` are mutually exclusive. Registry: type: "string" description: | Load credential spec from this value in the Windows registry. The specified registry value must be located in: `HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization\Containers\CredentialSpecs` <p><br /></p> > **Note**: `CredentialSpec.File`, `CredentialSpec.Registry`, > and `CredentialSpec.Config` are mutually exclusive. SELinuxContext: type: "object" description: "SELinux labels of the container" properties: Disable: type: "boolean" description: "Disable SELinux" User: type: "string" description: "SELinux user label" Role: type: "string" description: "SELinux role label" Type: type: "string" description: "SELinux type label" Level: type: "string" description: "SELinux level label" TTY: description: "Whether a pseudo-TTY should be allocated." 
type: "boolean" OpenStdin: description: "Open `stdin`" type: "boolean" ReadOnly: description: "Mount the container's root filesystem as read only." type: "boolean" Mounts: description: | Specification for mounts to be added to containers created as part of the service. type: "array" items: $ref: "#/definitions/Mount" StopSignal: description: "Signal to stop the container." type: "string" StopGracePeriod: description: | Amount of time to wait for the container to terminate before forcefully killing it. type: "integer" format: "int64" HealthCheck: $ref: "#/definitions/HealthConfig" Hosts: type: "array" description: | A list of hostname/IP mappings to add to the container's `hosts` file. The format of extra hosts is specified in the [hosts(5)](http://man7.org/linux/man-pages/man5/hosts.5.html) man page: IP_address canonical_hostname [aliases...] items: type: "string" DNSConfig: description: | Specification for DNS related configurations in resolver configuration file (`resolv.conf`). type: "object" properties: Nameservers: description: "The IP addresses of the name servers." type: "array" items: type: "string" Search: description: "A search list for host-name lookup." type: "array" items: type: "string" Options: description: | A list of internal resolver variables to be modified (e.g., `debug`, `ndots:3`, etc.). type: "array" items: type: "string" Secrets: description: | Secrets contains references to zero or more secrets that will be exposed to the service. type: "array" items: type: "object" properties: File: description: | File represents a specific target that is backed by a file. type: "object" properties: Name: description: | Name represents the final filename in the filesystem. type: "string" UID: description: "UID represents the file UID." type: "string" GID: description: "GID represents the file GID." type: "string" Mode: description: "Mode represents the FileMode of the file." type: "integer" format: "uint32" SecretID: description: | SecretID represents the ID of the specific secret that we're referencing. type: "string" SecretName: description: | SecretName is the name of the secret that this references, but this is just provided for lookup/display purposes. The secret in the reference will be identified by its ID. type: "string" Configs: description: | Configs contains references to zero or more configs that will be exposed to the service. type: "array" items: type: "object" properties: File: description: | File represents a specific target that is backed by a file. <p><br /><p> > **Note**: `Configs.File` and `Configs.Runtime` are mutually exclusive type: "object" properties: Name: description: | Name represents the final filename in the filesystem. type: "string" UID: description: "UID represents the file UID." type: "string" GID: description: "GID represents the file GID." type: "string" Mode: description: "Mode represents the FileMode of the file." type: "integer" format: "uint32" Runtime: description: | Runtime represents a target that is not mounted into the container but is used by the task <p><br /><p> > **Note**: `Configs.File` and `Configs.Runtime` are mutually > exclusive type: "object" ConfigID: description: | ConfigID represents the ID of the specific config that we're referencing. type: "string" ConfigName: description: | ConfigName is the name of the config that this references, but this is just provided for lookup/display purposes. The config in the reference will be identified by its ID. 
type: "string" Isolation: type: "string" description: | Isolation technology of the containers running the service. (Windows only) enum: - "default" - "process" - "hyperv" Init: description: | Run an init inside the container that forwards signals and reaps processes. This field is omitted if empty, and the default (as configured on the daemon) is used. type: "boolean" x-nullable: true Sysctls: description: | Set kernel namedspaced parameters (sysctls) in the container. The Sysctls option on services accepts the same sysctls as the are supported on containers. Note that while the same sysctls are supported, no guarantees or checks are made about their suitability for a clustered environment, and it's up to the user to determine whether a given sysctl will work properly in a Service. type: "object" additionalProperties: type: "string" # This option is not used by Windows containers CapabilityAdd: type: "array" description: | A list of kernel capabilities to add to the default set for the container. items: type: "string" example: - "CAP_NET_RAW" - "CAP_SYS_ADMIN" - "CAP_SYS_CHROOT" - "CAP_SYSLOG" CapabilityDrop: type: "array" description: | A list of kernel capabilities to drop from the default set for the container. items: type: "string" example: - "CAP_NET_RAW" Ulimits: description: | A list of resource limits to set in the container. For example: `{"Name": "nofile", "Soft": 1024, "Hard": 2048}`" type: "array" items: type: "object" properties: Name: description: "Name of ulimit" type: "string" Soft: description: "Soft limit" type: "integer" Hard: description: "Hard limit" type: "integer" NetworkAttachmentSpec: description: | Read-only spec type for non-swarm containers attached to swarm overlay networks. <p><br /></p> > **Note**: ContainerSpec, NetworkAttachmentSpec, and PluginSpec are > mutually exclusive. PluginSpec is only used when the Runtime field > is set to `plugin`. NetworkAttachmentSpec is used when the Runtime > field is set to `attachment`. type: "object" properties: ContainerID: description: "ID of the container represented by this task" type: "string" Resources: description: | Resource requirements which apply to each individual container created as part of the service. type: "object" properties: Limits: description: "Define resources limits." $ref: "#/definitions/Limit" Reservation: description: "Define resources reservation." $ref: "#/definitions/ResourceObject" RestartPolicy: description: | Specification for the restart policy which applies to containers created as part of this service. type: "object" properties: Condition: description: "Condition for restart." type: "string" enum: - "none" - "on-failure" - "any" Delay: description: "Delay between restart attempts." type: "integer" format: "int64" MaxAttempts: description: | Maximum attempts to restart a given container before giving up (default value is 0, which is ignored). type: "integer" format: "int64" default: 0 Window: description: | Windows is the time window used to evaluate the restart policy (default value is 0, which is unbounded). type: "integer" format: "int64" default: 0 Placement: type: "object" properties: Constraints: description: | An array of constraint expressions to limit the set of nodes where a task can be scheduled. Constraint expressions can either use a _match_ (`==`) or _exclude_ (`!=`) rule. Multiple constraints find nodes that satisfy every expression (AND match). 
Constraints can match node or Docker Engine labels as follows: node attribute | matches | example ---------------------|--------------------------------|----------------------------------------------- `node.id` | Node ID | `node.id==2ivku8v2gvtg4` `node.hostname` | Node hostname | `node.hostname!=node-2` `node.role` | Node role (`manager`/`worker`) | `node.role==manager` `node.platform.os` | Node operating system | `node.platform.os==windows` `node.platform.arch` | Node architecture | `node.platform.arch==x86_64` `node.labels` | User-defined node labels | `node.labels.security==high` `engine.labels` | Docker Engine's labels | `engine.labels.operatingsystem==ubuntu-14.04` `engine.labels` apply to Docker Engine labels like operating system, drivers, etc. Swarm administrators add `node.labels` for operational purposes by using the [`node update endpoint`](#operation/NodeUpdate). type: "array" items: type: "string" example: - "node.hostname!=node3.corp.example.com" - "node.role!=manager" - "node.labels.type==production" - "node.platform.os==linux" - "node.platform.arch==x86_64" Preferences: description: | Preferences provide a way to make the scheduler aware of factors such as topology. They are provided in order from highest to lowest precedence. type: "array" items: type: "object" properties: Spread: type: "object" properties: SpreadDescriptor: description: | label descriptor, such as `engine.labels.az`. type: "string" example: - Spread: SpreadDescriptor: "node.labels.datacenter" - Spread: SpreadDescriptor: "node.labels.rack" MaxReplicas: description: | Maximum number of replicas per node (default value is 0, which is unlimited) type: "integer" format: "int64" default: 0 Platforms: description: | Platforms stores all the platforms that the service's image can run on. This field is used in the platform filter for scheduling. If empty, then the platform filter is off, meaning there are no scheduling restrictions. type: "array" items: $ref: "#/definitions/Platform" ForceUpdate: description: | A counter that triggers an update even if no relevant parameters have been changed. type: "integer" Runtime: description: | Runtime is the type of runtime specified for the task executor. type: "string" Networks: description: "Specifies which networks the service should attach to." type: "array" items: $ref: "#/definitions/NetworkAttachmentConfig" LogDriver: description: | Specifies the log driver to use for tasks created from this spec. If not present, the default one for the swarm will be used, finally falling back to the engine default if not specified. type: "object" properties: Name: type: "string" Options: type: "object" additionalProperties: type: "string" TaskState: type: "string" enum: - "new" - "allocated" - "pending" - "assigned" - "accepted" - "preparing" - "ready" - "starting" - "running" - "complete" - "shutdown" - "failed" - "rejected" - "remove" - "orphaned" Task: type: "object" properties: ID: description: "The ID of the task." type: "string" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" UpdatedAt: type: "string" format: "dateTime" Name: description: "Name of the task." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" Spec: $ref: "#/definitions/TaskSpec" ServiceID: description: "The ID of the service this task is part of." type: "string" Slot: type: "integer" NodeID: description: "The ID of the node that this task is on."
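# Illustrative only, not part of the API definition: creating a replicated service whose
# TaskSpec uses the Placement constraints documented above, via the Go client in this
# repository. Image, name, and constraint values are examples; a swarm manager is assumed.
#
#   package main
#
#   import (
#       "context"
#       "fmt"
#
#       "github.com/docker/docker/api/types"
#       "github.com/docker/docker/api/types/swarm"
#       "github.com/docker/docker/client"
#   )
#
#   func main() {
#       cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
#       if err != nil {
#           panic(err)
#       }
#       replicas := uint64(3)
#       spec := swarm.ServiceSpec{
#           Annotations: swarm.Annotations{Name: "web"},
#           TaskTemplate: swarm.TaskSpec{
#               ContainerSpec: &swarm.ContainerSpec{Image: "nginx:alpine"},
#               Placement: &swarm.Placement{
#                   // Only schedule on Linux worker nodes.
#                   Constraints: []string{"node.role==worker", "node.platform.os==linux"},
#               },
#           },
#           Mode: swarm.ServiceMode{Replicated: &swarm.ReplicatedService{Replicas: &replicas}},
#       }
#       resp, err := cli.ServiceCreate(context.Background(), spec, types.ServiceCreateOptions{})
#       if err != nil {
#           panic(err)
#       }
#       fmt.Println("service ID:", resp.ID)
#   }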
type: "string" AssignedGenericResources: $ref: "#/definitions/GenericResources" Status: type: "object" properties: Timestamp: type: "string" format: "dateTime" State: $ref: "#/definitions/TaskState" Message: type: "string" Err: type: "string" ContainerStatus: type: "object" properties: ContainerID: type: "string" PID: type: "integer" ExitCode: type: "integer" DesiredState: $ref: "#/definitions/TaskState" JobIteration: description: | If the Service this Task belongs to is a job-mode service, contains the JobIteration of the Service this Task was created for. Absent if the Task was created for a Replicated or Global Service. $ref: "#/definitions/ObjectVersion" example: ID: "0kzzo1i0y4jz6027t0k7aezc7" Version: Index: 71 CreatedAt: "2016-06-07T21:07:31.171892745Z" UpdatedAt: "2016-06-07T21:07:31.376370513Z" Spec: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ServiceID: "9mnpnzenvg8p8tdbtq4wvbkcz" Slot: 1 NodeID: "60gvrl6tm78dmak4yl7srz94v" Status: Timestamp: "2016-06-07T21:07:31.290032978Z" State: "running" Message: "started" ContainerStatus: ContainerID: "e5d62702a1b48d01c3e02ca1e0212a250801fa8d67caca0b6f35919ebc12f035" PID: 677 DesiredState: "running" NetworksAttachments: - Network: ID: "4qvuz4ko70xaltuqbt8956gd1" Version: Index: 18 CreatedAt: "2016-06-07T20:31:11.912919752Z" UpdatedAt: "2016-06-07T21:07:29.955277358Z" Spec: Name: "ingress" Labels: com.docker.swarm.internal: "true" DriverConfiguration: {} IPAMOptions: Driver: {} Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" DriverState: Name: "overlay" Options: com.docker.network.driver.overlay.vxlanid_list: "256" IPAMOptions: Driver: Name: "default" Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" Addresses: - "10.255.0.10/16" AssignedGenericResources: - DiscreteResourceSpec: Kind: "SSD" Value: 3 - NamedResourceSpec: Kind: "GPU" Value: "UUID1" - NamedResourceSpec: Kind: "GPU" Value: "UUID2" ServiceSpec: description: "User modifiable configuration for a service." properties: Name: description: "Name of the service." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" TaskTemplate: $ref: "#/definitions/TaskSpec" Mode: description: "Scheduling mode for the service." type: "object" properties: Replicated: type: "object" properties: Replicas: type: "integer" format: "int64" Global: type: "object" ReplicatedJob: description: | The mode used for services with a finite number of tasks that run to a completed state. type: "object" properties: MaxConcurrent: description: | The maximum number of replicas to run simultaneously. type: "integer" format: "int64" default: 1 TotalCompletions: description: | The total number of replicas desired to reach the Completed state. If unset, will default to the value of `MaxConcurrent` type: "integer" format: "int64" GlobalJob: description: | The mode used for services which run a task to the completed state on each valid node. type: "object" UpdateConfig: description: "Specification for the update strategy of the service." type: "object" properties: Parallelism: description: | Maximum number of tasks to be updated in one iteration (0 means unlimited parallelism). type: "integer" format: "int64" Delay: description: "Amount of time between updates, in nanoseconds." type: "integer" format: "int64" FailureAction: description: | Action to take if an updated task fails to run, or stops running during the update. 
type: "string" enum: - "continue" - "pause" - "rollback" Monitor: description: | Amount of time to monitor each updated task for failures, in nanoseconds. type: "integer" format: "int64" MaxFailureRatio: description: | The fraction of tasks that may fail during an update before the failure action is invoked, specified as a floating point number between 0 and 1. type: "number" default: 0 Order: description: | The order of operations when rolling out an updated task. Either the old task is shut down before the new task is started, or the new task is started before the old task is shut down. type: "string" enum: - "stop-first" - "start-first" RollbackConfig: description: "Specification for the rollback strategy of the service." type: "object" properties: Parallelism: description: | Maximum number of tasks to be rolled back in one iteration (0 means unlimited parallelism). type: "integer" format: "int64" Delay: description: | Amount of time between rollback iterations, in nanoseconds. type: "integer" format: "int64" FailureAction: description: | Action to take if an rolled back task fails to run, or stops running during the rollback. type: "string" enum: - "continue" - "pause" Monitor: description: | Amount of time to monitor each rolled back task for failures, in nanoseconds. type: "integer" format: "int64" MaxFailureRatio: description: | The fraction of tasks that may fail during a rollback before the failure action is invoked, specified as a floating point number between 0 and 1. type: "number" default: 0 Order: description: | The order of operations when rolling back a task. Either the old task is shut down before the new task is started, or the new task is started before the old task is shut down. type: "string" enum: - "stop-first" - "start-first" Networks: description: "Specifies which networks the service should attach to." type: "array" items: $ref: "#/definitions/NetworkAttachmentConfig" EndpointSpec: $ref: "#/definitions/EndpointSpec" EndpointPortConfig: type: "object" properties: Name: type: "string" Protocol: type: "string" enum: - "tcp" - "udp" - "sctp" TargetPort: description: "The port inside the container." type: "integer" PublishedPort: description: "The port on the swarm hosts." type: "integer" PublishMode: description: | The mode in which port is published. <p><br /></p> - "ingress" makes the target port accessible on every node, regardless of whether there is a task for the service running on that node or not. - "host" bypasses the routing mesh and publish the port directly on the swarm node where that service is running. type: "string" enum: - "ingress" - "host" default: "ingress" example: "ingress" EndpointSpec: description: "Properties that can be configured to access and load balance a service." type: "object" properties: Mode: description: | The mode of resolution to use for internal load balancing between tasks. type: "string" enum: - "vip" - "dnsrr" default: "vip" Ports: description: | List of exposed ports that this service is accessible on from the outside. Ports can only be provided if `vip` resolution mode is used. 
type: "array" items: $ref: "#/definitions/EndpointPortConfig" Service: type: "object" properties: ID: type: "string" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" UpdatedAt: type: "string" format: "dateTime" Spec: $ref: "#/definitions/ServiceSpec" Endpoint: type: "object" properties: Spec: $ref: "#/definitions/EndpointSpec" Ports: type: "array" items: $ref: "#/definitions/EndpointPortConfig" VirtualIPs: type: "array" items: type: "object" properties: NetworkID: type: "string" Addr: type: "string" UpdateStatus: description: "The status of a service update." type: "object" properties: State: type: "string" enum: - "updating" - "paused" - "completed" StartedAt: type: "string" format: "dateTime" CompletedAt: type: "string" format: "dateTime" Message: type: "string" ServiceStatus: description: | The status of the service's tasks. Provided only when requested as part of a ServiceList operation. type: "object" properties: RunningTasks: description: | The number of tasks for the service currently in the Running state. type: "integer" format: "uint64" example: 7 DesiredTasks: description: | The number of tasks for the service desired to be running. For replicated services, this is the replica count from the service spec. For global services, this is computed by taking count of all tasks for the service with a Desired State other than Shutdown. type: "integer" format: "uint64" example: 10 CompletedTasks: description: | The number of tasks for a job that are in the Completed state. This field must be cross-referenced with the service type, as the value of 0 may mean the service is not in a job mode, or it may mean the job-mode service has no tasks yet Completed. type: "integer" format: "uint64" JobStatus: description: | The status of the service when it is in one of ReplicatedJob or GlobalJob modes. Absent on Replicated and Global mode services. The JobIteration is an ObjectVersion, but unlike the Service's version, does not need to be sent with an update request. type: "object" properties: JobIteration: description: | JobIteration is a value increased each time a Job is executed, successfully or otherwise. "Executed", in this case, means the job as a whole has been started, not that an individual Task has been launched. A job is "Executed" when its ServiceSpec is updated. JobIteration can be used to disambiguate Tasks belonging to different executions of a job. Though JobIteration will increase with each subsequent execution, it may not necessarily increase by 1, and so JobIteration should not be used to $ref: "#/definitions/ObjectVersion" LastExecution: description: | The last time, as observed by the server, that this job was started. 
type: "string" format: "dateTime" example: ID: "9mnpnzenvg8p8tdbtq4wvbkcz" Version: Index: 19 CreatedAt: "2016-06-07T21:05:51.880065305Z" UpdatedAt: "2016-06-07T21:07:29.962229872Z" Spec: Name: "hopeful_cori" TaskTemplate: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ForceUpdate: 0 Mode: Replicated: Replicas: 1 UpdateConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 RollbackConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 EndpointSpec: Mode: "vip" Ports: - Protocol: "tcp" TargetPort: 6379 PublishedPort: 30001 Endpoint: Spec: Mode: "vip" Ports: - Protocol: "tcp" TargetPort: 6379 PublishedPort: 30001 Ports: - Protocol: "tcp" TargetPort: 6379 PublishedPort: 30001 VirtualIPs: - NetworkID: "4qvuz4ko70xaltuqbt8956gd1" Addr: "10.255.0.2/16" - NetworkID: "4qvuz4ko70xaltuqbt8956gd1" Addr: "10.255.0.3/16" ImageDeleteResponseItem: type: "object" properties: Untagged: description: "The image ID of an image that was untagged" type: "string" Deleted: description: "The image ID of an image that was deleted" type: "string" ServiceUpdateResponse: type: "object" properties: Warnings: description: "Optional warning messages" type: "array" items: type: "string" example: Warning: "unable to pin image doesnotexist:latest to digest: image library/doesnotexist:latest not found" ContainerSummary: type: "object" properties: Id: description: "The ID of this container" type: "string" x-go-name: "ID" Names: description: "The names that this container has been given" type: "array" items: type: "string" Image: description: "The name of the image used when creating this container" type: "string" ImageID: description: "The ID of the image that this container was created from" type: "string" Command: description: "Command to run when starting the container" type: "string" Created: description: "When the container was created" type: "integer" format: "int64" Ports: description: "The ports exposed by this container" type: "array" items: $ref: "#/definitions/Port" SizeRw: description: "The size of files that have been created or changed by this container" type: "integer" format: "int64" SizeRootFs: description: "The total size of all the files in this container" type: "integer" format: "int64" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" State: description: "The state of this container (e.g. `Exited`)" type: "string" Status: description: "Additional human-readable status of this container (e.g. `Exit 0`)" type: "string" HostConfig: type: "object" properties: NetworkMode: type: "string" NetworkSettings: description: "A summary of the container's network settings" type: "object" properties: Networks: type: "object" additionalProperties: $ref: "#/definitions/EndpointSettings" Mounts: type: "array" items: $ref: "#/definitions/Mount" Driver: description: "Driver represents a driver (network, logging, secrets)." type: "object" required: [Name] properties: Name: description: "Name of the driver." type: "string" x-nullable: false example: "some-driver" Options: description: "Key/value map of driver-specific options." type: "object" x-nullable: false additionalProperties: type: "string" example: OptionA: "value for driver-specific option A" OptionB: "value for driver-specific option B" SecretSpec: type: "object" properties: Name: description: "User-defined name of the secret." 
type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" example: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Data: description: | Base64-url-safe-encoded ([RFC 4648](https://tools.ietf.org/html/rfc4648#section-5)) data to store as secret. This field is only used to _create_ a secret, and is not returned by other endpoints. type: "string" example: "" Driver: description: | Name of the secrets driver used to fetch the secret's value from an external secret store. $ref: "#/definitions/Driver" Templating: description: | Templating driver, if applicable Templating controls whether and how to evaluate the config payload as a template. If no driver is set, no templating is used. $ref: "#/definitions/Driver" Secret: type: "object" properties: ID: type: "string" example: "blt1owaxmitz71s9v5zh81zun" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" example: "2017-07-20T13:55:28.678958722Z" UpdatedAt: type: "string" format: "dateTime" example: "2017-07-20T13:55:28.678958722Z" Spec: $ref: "#/definitions/SecretSpec" ConfigSpec: type: "object" properties: Name: description: "User-defined name of the config." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" Data: description: | Base64-url-safe-encoded ([RFC 4648](https://tools.ietf.org/html/rfc4648#section-5)) config data. type: "string" Templating: description: | Templating driver, if applicable Templating controls whether and how to evaluate the config payload as a template. If no driver is set, no templating is used. $ref: "#/definitions/Driver" Config: type: "object" properties: ID: type: "string" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" UpdatedAt: type: "string" format: "dateTime" Spec: $ref: "#/definitions/ConfigSpec" ContainerState: description: | ContainerState stores container's running state. It's part of ContainerJSONBase and will be returned by the "inspect" command. type: "object" properties: Status: description: | String representation of the container state. Can be one of "created", "running", "paused", "restarting", "removing", "exited", or "dead". type: "string" enum: ["created", "running", "paused", "restarting", "removing", "exited", "dead"] example: "running" Running: description: | Whether this container is running. Note that a running container can be _paused_. The `Running` and `Paused` booleans are not mutually exclusive: When pausing a container (on Linux), the freezer cgroup is used to suspend all processes in the container. Freezing the process requires the process to be running. As a result, paused containers are both `Running` _and_ `Paused`. Use the `Status` field instead to determine if a container's state is "running". type: "boolean" example: true Paused: description: "Whether this container is paused." type: "boolean" example: false Restarting: description: "Whether this container is restarting." type: "boolean" example: false OOMKilled: description: | Whether this container has been killed because it ran out of memory. type: "boolean" example: false Dead: type: "boolean" example: false Pid: description: "The process ID of this container" type: "integer" example: 1234 ExitCode: description: "The last exit code of this container" type: "integer" example: 0 Error: type: "string" StartedAt: description: "The time when this container was last started." 
type: "string" example: "2020-01-06T09:06:59.461876391Z" FinishedAt: description: "The time when this container last exited." type: "string" example: "2020-01-06T09:07:59.461876391Z" Health: x-nullable: true $ref: "#/definitions/Health" SystemVersion: type: "object" description: | Response of Engine API: GET "/version" properties: Platform: type: "object" required: [Name] properties: Name: type: "string" Components: type: "array" description: | Information about system components items: type: "object" x-go-name: ComponentVersion required: [Name, Version] properties: Name: description: | Name of the component type: "string" example: "Engine" Version: description: | Version of the component type: "string" x-nullable: false example: "19.03.12" Details: description: | Key/value pairs of strings with additional information about the component. These values are intended for informational purposes only, and their content is not defined, and not part of the API specification. These messages can be printed by the client as information to the user. type: "object" x-nullable: true Version: description: "The version of the daemon" type: "string" example: "19.03.12" ApiVersion: description: | The default (and highest) API version that is supported by the daemon type: "string" example: "1.40" MinAPIVersion: description: | The minimum API version that is supported by the daemon type: "string" example: "1.12" GitCommit: description: | The Git commit of the source code that was used to build the daemon type: "string" example: "48a66213fe" GoVersion: description: | The version Go used to compile the daemon, and the version of the Go runtime in use. type: "string" example: "go1.13.14" Os: description: | The operating system that the daemon is running on ("linux" or "windows") type: "string" example: "linux" Arch: description: | The architecture that the daemon is running on type: "string" example: "amd64" KernelVersion: description: | The kernel version (`uname -r`) that the daemon is running on. This field is omitted when empty. type: "string" example: "4.19.76-linuxkit" Experimental: description: | Indicates if the daemon is started with experimental features enabled. This field is omitted when empty / false. type: "boolean" example: true BuildTime: description: | The date and time that the daemon was compiled. type: "string" example: "2020-06-22T15:49:27.000000000+00:00" SystemInfo: type: "object" properties: ID: description: | Unique identifier of the daemon. <p><br /></p> > **Note**: The format of the ID itself is not part of the API, and > should not be considered stable. type: "string" example: "7TRN:IPZB:QYBB:VPBQ:UMPP:KARE:6ZNR:XE6T:7EWV:PKF4:ZOJD:TPYS" Containers: description: "Total number of containers on the host." type: "integer" example: 14 ContainersRunning: description: | Number of containers with status `"running"`. type: "integer" example: 3 ContainersPaused: description: | Number of containers with status `"paused"`. type: "integer" example: 1 ContainersStopped: description: | Number of containers with status `"stopped"`. type: "integer" example: 10 Images: description: | Total number of images on the host. Both _tagged_ and _untagged_ (dangling) images are counted. type: "integer" example: 508 Driver: description: "Name of the storage driver in use." type: "string" example: "overlay2" DriverStatus: description: | Information specific to the storage driver, provided as "label" / "value" pairs. 
This information is provided by the storage driver, and formatted in a way consistent with the output of `docker info` on the command line. <p><br /></p> > **Note**: The information returned in this field, including the > formatting of values and labels, should not be considered stable, > and may change without notice. type: "array" items: type: "array" items: type: "string" example: - ["Backing Filesystem", "extfs"] - ["Supports d_type", "true"] - ["Native Overlay Diff", "true"] DockerRootDir: description: | Root directory of persistent Docker state. Defaults to `/var/lib/docker` on Linux, and `C:\ProgramData\docker` on Windows. type: "string" example: "/var/lib/docker" Plugins: $ref: "#/definitions/PluginsInfo" MemoryLimit: description: "Indicates if the host has memory limit support enabled." type: "boolean" example: true SwapLimit: description: "Indicates if the host has memory swap limit support enabled." type: "boolean" example: true KernelMemory: description: | Indicates if the host has kernel memory limit support enabled. <p><br /></p> > **Deprecated**: This field is deprecated as the kernel 5.4 deprecated > `kmem.limit_in_bytes`. type: "boolean" example: true CpuCfsPeriod: description: | Indicates if CPU CFS(Completely Fair Scheduler) period is supported by the host. type: "boolean" example: true CpuCfsQuota: description: | Indicates if CPU CFS(Completely Fair Scheduler) quota is supported by the host. type: "boolean" example: true CPUShares: description: | Indicates if CPU Shares limiting is supported by the host. type: "boolean" example: true CPUSet: description: | Indicates if CPUsets (cpuset.cpus, cpuset.mems) are supported by the host. See [cpuset(7)](https://www.kernel.org/doc/Documentation/cgroup-v1/cpusets.txt) type: "boolean" example: true PidsLimit: description: "Indicates if the host kernel has PID limit support enabled." type: "boolean" example: true OomKillDisable: description: "Indicates if OOM killer disable is supported on the host." type: "boolean" IPv4Forwarding: description: "Indicates IPv4 forwarding is enabled." type: "boolean" example: true BridgeNfIptables: description: "Indicates if `bridge-nf-call-iptables` is available on the host." type: "boolean" example: true BridgeNfIp6tables: description: "Indicates if `bridge-nf-call-ip6tables` is available on the host." type: "boolean" example: true Debug: description: | Indicates if the daemon is running in debug-mode / with debug-level logging enabled. type: "boolean" example: true NFd: description: | The total number of file Descriptors in use by the daemon process. This information is only returned if debug-mode is enabled. type: "integer" example: 64 NGoroutines: description: | The number of goroutines that currently exist. This information is only returned if debug-mode is enabled. type: "integer" example: 174 SystemTime: description: | Current system-time in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" example: "2017-08-08T20:28:29.06202363Z" LoggingDriver: description: | The logging driver to use as a default for new containers. type: "string" CgroupDriver: description: | The driver to use for managing cgroups. type: "string" enum: ["cgroupfs", "systemd", "none"] default: "cgroupfs" example: "cgroupfs" CgroupVersion: description: | The version of the cgroup. type: "string" enum: ["1", "2"] default: "1" example: "1" NEventsListener: description: "Number of event listeners subscribed." 
type: "integer" example: 30 KernelVersion: description: | Kernel version of the host. On Linux, this information obtained from `uname`. On Windows this information is queried from the <kbd>HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\</kbd> registry value, for example _"10.0 14393 (14393.1198.amd64fre.rs1_release_sec.170427-1353)"_. type: "string" example: "4.9.38-moby" OperatingSystem: description: | Name of the host's operating system, for example: "Ubuntu 16.04.2 LTS" or "Windows Server 2016 Datacenter" type: "string" example: "Alpine Linux v3.5" OSVersion: description: | Version of the host's operating system <p><br /></p> > **Note**: The information returned in this field, including its > very existence, and the formatting of values, should not be considered > stable, and may change without notice. type: "string" example: "16.04" OSType: description: | Generic type of the operating system of the host, as returned by the Go runtime (`GOOS`). Currently returned values are "linux" and "windows". A full list of possible values can be found in the [Go documentation](https://golang.org/doc/install/source#environment). type: "string" example: "linux" Architecture: description: | Hardware architecture of the host, as returned by the Go runtime (`GOARCH`). A full list of possible values can be found in the [Go documentation](https://golang.org/doc/install/source#environment). type: "string" example: "x86_64" NCPU: description: | The number of logical CPUs usable by the daemon. The number of available CPUs is checked by querying the operating system when the daemon starts. Changes to operating system CPU allocation after the daemon is started are not reflected. type: "integer" example: 4 MemTotal: description: | Total amount of physical memory available on the host, in bytes. type: "integer" format: "int64" example: 2095882240 IndexServerAddress: description: | Address / URL of the index server that is used for image search, and as a default for user authentication for Docker Hub and Docker Cloud. default: "https://index.docker.io/v1/" type: "string" example: "https://index.docker.io/v1/" RegistryConfig: $ref: "#/definitions/RegistryServiceConfig" GenericResources: $ref: "#/definitions/GenericResources" HttpProxy: description: | HTTP-proxy configured for the daemon. This value is obtained from the [`HTTP_PROXY`](https://www.gnu.org/software/wget/manual/html_node/Proxies.html) environment variable. Credentials ([user info component](https://tools.ietf.org/html/rfc3986#section-3.2.1)) in the proxy URL are masked in the API response. Containers do not automatically inherit this configuration. type: "string" example: "http://xxxxx:[email protected]:8080" HttpsProxy: description: | HTTPS-proxy configured for the daemon. This value is obtained from the [`HTTPS_PROXY`](https://www.gnu.org/software/wget/manual/html_node/Proxies.html) environment variable. Credentials ([user info component](https://tools.ietf.org/html/rfc3986#section-3.2.1)) in the proxy URL are masked in the API response. Containers do not automatically inherit this configuration. type: "string" example: "https://xxxxx:[email protected]:4443" NoProxy: description: | Comma-separated list of domain extensions for which no proxy should be used. This value is obtained from the [`NO_PROXY`](https://www.gnu.org/software/wget/manual/html_node/Proxies.html) environment variable. Containers do not automatically inherit this configuration. 
type: "string" example: "*.local, 169.254/16" Name: description: "Hostname of the host." type: "string" example: "node5.corp.example.com" Labels: description: | User-defined labels (key/value metadata) as set on the daemon. <p><br /></p> > **Note**: When part of a Swarm, nodes can both have _daemon_ labels, > set through the daemon configuration, and _node_ labels, set from a > manager node in the Swarm. Node labels are not included in this > field. Node labels can be retrieved using the `/nodes/(id)` endpoint > on a manager node in the Swarm. type: "array" items: type: "string" example: ["storage=ssd", "production"] ExperimentalBuild: description: | Indicates if experimental features are enabled on the daemon. type: "boolean" example: true ServerVersion: description: | Version string of the daemon. > **Note**: the [standalone Swarm API](https://docs.docker.com/swarm/swarm-api/) > returns the Swarm version instead of the daemon version, for example > `swarm/1.2.8`. type: "string" example: "17.06.0-ce" ClusterStore: description: | URL of the distributed storage backend. The storage backend is used for multihost networking (to store network and endpoint information) and by the node discovery mechanism. <p><br /></p> > **Deprecated**: This field is only propagated when using standalone Swarm > mode, and overlay networking using an external k/v store. Overlay > networks with Swarm mode enabled use the built-in raft store, and > this field will be empty. type: "string" example: "consul://consul.corp.example.com:8600/some/path" ClusterAdvertise: description: | The network endpoint that the Engine advertises for the purpose of node discovery. ClusterAdvertise is a `host:port` combination on which the daemon is reachable by other hosts. <p><br /></p> > **Deprecated**: This field is only propagated when using standalone Swarm > mode, and overlay networking using an external k/v store. Overlay > networks with Swarm mode enabled use the built-in raft store, and > this field will be empty. type: "string" example: "node5.corp.example.com:8000" Runtimes: description: | List of [OCI compliant](https://github.com/opencontainers/runtime-spec) runtimes configured on the daemon. Keys hold the "name" used to reference the runtime. The Docker daemon relies on an OCI compliant runtime (invoked via the `containerd` daemon) as its interface to the Linux kernel namespaces, cgroups, and SELinux. The default runtime is `runc`, and automatically configured. Additional runtimes can be configured by the user and will be listed here. type: "object" additionalProperties: $ref: "#/definitions/Runtime" default: runc: path: "runc" example: runc: path: "runc" runc-master: path: "/go/bin/runc" custom: path: "/usr/local/bin/my-oci-runtime" runtimeArgs: ["--debug", "--systemd-cgroup=false"] DefaultRuntime: description: | Name of the default OCI runtime that is used when starting containers. The default can be overridden per-container at create time. type: "string" default: "runc" example: "runc" Swarm: $ref: "#/definitions/SwarmInfo" LiveRestoreEnabled: description: | Indicates if live restore is enabled. If enabled, containers are kept running when the daemon is shutdown or upon daemon start if running containers are detected. type: "boolean" default: false example: false Isolation: description: | Represents the isolation technology to use as a default for containers. The supported values are platform-specific. 
If no isolation value is specified on daemon start, on Windows client, the default is `hyperv`, and on Windows server, the default is `process`. This option is currently not used on other platforms. default: "default" type: "string" enum: - "default" - "hyperv" - "process" InitBinary: description: | Name and, optional, path of the `docker-init` binary. If the path is omitted, the daemon searches the host's `$PATH` for the binary and uses the first result. type: "string" example: "docker-init" ContainerdCommit: $ref: "#/definitions/Commit" RuncCommit: $ref: "#/definitions/Commit" InitCommit: $ref: "#/definitions/Commit" SecurityOptions: description: | List of security features that are enabled on the daemon, such as apparmor, seccomp, SELinux, user-namespaces (userns), and rootless. Additional configuration options for each security feature may be present, and are included as a comma-separated list of key/value pairs. type: "array" items: type: "string" example: - "name=apparmor" - "name=seccomp,profile=default" - "name=selinux" - "name=userns" - "name=rootless" ProductLicense: description: | Reports a summary of the product license on the daemon. If a commercial license has been applied to the daemon, information such as number of nodes, and expiration are included. type: "string" example: "Community Engine" DefaultAddressPools: description: | List of custom default address pools for local networks, which can be specified in the daemon.json file or dockerd option. Example: a Base "10.10.0.0/16" with Size 24 will define the set of 256 10.10.[0-255].0/24 address pools. type: "array" items: type: "object" properties: Base: description: "The network address in CIDR format" type: "string" example: "10.10.0.0/16" Size: description: "The network pool size" type: "integer" example: "24" Warnings: description: | List of warnings / informational messages about missing features, or issues related to the daemon configuration. These messages can be printed by the client as information to the user. type: "array" items: type: "string" example: - "WARNING: No memory limit support" - "WARNING: bridge-nf-call-iptables is disabled" - "WARNING: bridge-nf-call-ip6tables is disabled" # PluginsInfo is a temp struct holding Plugins name # registered with docker daemon. It is used by Info struct PluginsInfo: description: | Available plugins per type. <p><br /></p> > **Note**: Only unmanaged (V1) plugins are included in this list. > V1 plugins are "lazily" loaded, and are not returned in this list > if there is no resource using the plugin. type: "object" properties: Volume: description: "Names of available volume-drivers, and network-driver plugins." type: "array" items: type: "string" example: ["local"] Network: description: "Names of available network-drivers, and network-driver plugins." type: "array" items: type: "string" example: ["bridge", "host", "ipvlan", "macvlan", "null", "overlay"] Authorization: description: "Names of available authorization plugins." type: "array" items: type: "string" example: ["img-authz-plugin", "hbm"] Log: description: "Names of available logging-drivers, and logging-driver plugins." type: "array" items: type: "string" example: ["awslogs", "fluentd", "gcplogs", "gelf", "journald", "json-file", "logentries", "splunk", "syslog"] RegistryServiceConfig: description: | RegistryServiceConfig stores daemon registry services configuration. 
type: "object" x-nullable: true properties: AllowNondistributableArtifactsCIDRs: description: | List of IP ranges to which nondistributable artifacts can be pushed, using the CIDR syntax [RFC 4632](https://tools.ietf.org/html/4632). Some images (for example, Windows base images) contain artifacts whose distribution is restricted by license. When these images are pushed to a registry, restricted artifacts are not included. This configuration override this behavior, and enables the daemon to push nondistributable artifacts to all registries whose resolved IP address is within the subnet described by the CIDR syntax. This option is useful when pushing images containing nondistributable artifacts to a registry on an air-gapped network so hosts on that network can pull the images without connecting to another server. > **Warning**: Nondistributable artifacts typically have restrictions > on how and where they can be distributed and shared. Only use this > feature to push artifacts to private registries and ensure that you > are in compliance with any terms that cover redistributing > nondistributable artifacts. type: "array" items: type: "string" example: ["::1/128", "127.0.0.0/8"] AllowNondistributableArtifactsHostnames: description: | List of registry hostnames to which nondistributable artifacts can be pushed, using the format `<hostname>[:<port>]` or `<IP address>[:<port>]`. Some images (for example, Windows base images) contain artifacts whose distribution is restricted by license. When these images are pushed to a registry, restricted artifacts are not included. This configuration override this behavior for the specified registries. This option is useful when pushing images containing nondistributable artifacts to a registry on an air-gapped network so hosts on that network can pull the images without connecting to another server. > **Warning**: Nondistributable artifacts typically have restrictions > on how and where they can be distributed and shared. Only use this > feature to push artifacts to private registries and ensure that you > are in compliance with any terms that cover redistributing > nondistributable artifacts. type: "array" items: type: "string" example: ["registry.internal.corp.example.com:3000", "[2001:db8:a0b:12f0::1]:443"] InsecureRegistryCIDRs: description: | List of IP ranges of insecure registries, using the CIDR syntax ([RFC 4632](https://tools.ietf.org/html/4632)). Insecure registries accept un-encrypted (HTTP) and/or untrusted (HTTPS with certificates from unknown CAs) communication. By default, local registries (`127.0.0.0/8`) are configured as insecure. All other registries are secure. Communicating with an insecure registry is not possible if the daemon assumes that registry is secure. This configuration override this behavior, insecure communication with registries whose resolved IP address is within the subnet described by the CIDR syntax. Registries can also be marked insecure by hostname. Those registries are listed under `IndexConfigs` and have their `Secure` field set to `false`. > **Warning**: Using this option can be useful when running a local > registry, but introduces security vulnerabilities. This option > should therefore ONLY be used for testing purposes. For increased > security, users should add their CA to their system's list of trusted > CAs instead of enabling this option. 
type: "array" items: type: "string" example: ["::1/128", "127.0.0.0/8"] IndexConfigs: type: "object" additionalProperties: $ref: "#/definitions/IndexInfo" example: "127.0.0.1:5000": "Name": "127.0.0.1:5000" "Mirrors": [] "Secure": false "Official": false "[2001:db8:a0b:12f0::1]:80": "Name": "[2001:db8:a0b:12f0::1]:80" "Mirrors": [] "Secure": false "Official": false "docker.io": Name: "docker.io" Mirrors: ["https://hub-mirror.corp.example.com:5000/"] Secure: true Official: true "registry.internal.corp.example.com:3000": Name: "registry.internal.corp.example.com:3000" Mirrors: [] Secure: false Official: false Mirrors: description: | List of registry URLs that act as a mirror for the official (`docker.io`) registry. type: "array" items: type: "string" example: - "https://hub-mirror.corp.example.com:5000/" - "https://[2001:db8:a0b:12f0::1]/" IndexInfo: description: IndexInfo contains information about a registry. type: "object" x-nullable: true properties: Name: description: | Name of the registry, such as "docker.io". type: "string" example: "docker.io" Mirrors: description: | List of mirrors, expressed as URIs. type: "array" items: type: "string" example: - "https://hub-mirror.corp.example.com:5000/" - "https://registry-2.docker.io/" - "https://registry-3.docker.io/" Secure: description: | Indicates if the registry is part of the list of insecure registries. If `false`, the registry is insecure. Insecure registries accept un-encrypted (HTTP) and/or untrusted (HTTPS with certificates from unknown CAs) communication. > **Warning**: Insecure registries can be useful when running a local > registry. However, because its use creates security vulnerabilities > it should ONLY be enabled for testing purposes. For increased > security, users should add their CA to their system's list of > trusted CAs instead of enabling this option. type: "boolean" example: true Official: description: | Indicates whether this is an official registry (i.e., Docker Hub / docker.io) type: "boolean" example: true Runtime: description: | Runtime describes an [OCI compliant](https://github.com/opencontainers/runtime-spec) runtime. The runtime is invoked by the daemon via the `containerd` daemon. OCI runtimes act as an interface to the Linux kernel namespaces, cgroups, and SELinux. type: "object" properties: path: description: | Name and, optional, path, of the OCI executable binary. If the path is omitted, the daemon searches the host's `$PATH` for the binary and uses the first result. type: "string" example: "/usr/local/bin/my-oci-runtime" runtimeArgs: description: | List of command-line arguments to pass to the runtime when invoked. type: "array" x-nullable: true items: type: "string" example: ["--debug", "--systemd-cgroup=false"] Commit: description: | Commit holds the Git-commit (SHA1) that a binary was built from, as reported in the version-string of external tools, such as `containerd`, or `runC`. type: "object" properties: ID: description: "Actual commit ID of external tool." type: "string" example: "cfb82a876ecc11b5ca0977d1733adbe58599088a" Expected: description: | Commit ID of external tool expected by dockerd as set at build time. type: "string" example: "2d41c047c83e09a6d61d464906feb2a2f3c52aa4" SwarmInfo: description: | Represents generic information about swarm. type: "object" properties: NodeID: description: "Unique identifier of for this node in the swarm." 
type: "string" default: "" example: "k67qz4598weg5unwwffg6z1m1" NodeAddr: description: | IP address at which this node can be reached by other nodes in the swarm. type: "string" default: "" example: "10.0.0.46" LocalNodeState: $ref: "#/definitions/LocalNodeState" ControlAvailable: type: "boolean" default: false example: true Error: type: "string" default: "" RemoteManagers: description: | List of ID's and addresses of other managers in the swarm. type: "array" default: null x-nullable: true items: $ref: "#/definitions/PeerNode" example: - NodeID: "71izy0goik036k48jg985xnds" Addr: "10.0.0.158:2377" - NodeID: "79y6h1o4gv8n120drcprv5nmc" Addr: "10.0.0.159:2377" - NodeID: "k67qz4598weg5unwwffg6z1m1" Addr: "10.0.0.46:2377" Nodes: description: "Total number of nodes in the swarm." type: "integer" x-nullable: true example: 4 Managers: description: "Total number of managers in the swarm." type: "integer" x-nullable: true example: 3 Cluster: $ref: "#/definitions/ClusterInfo" LocalNodeState: description: "Current local status of this node." type: "string" default: "" enum: - "" - "inactive" - "pending" - "active" - "error" - "locked" example: "active" PeerNode: description: "Represents a peer-node in the swarm" properties: NodeID: description: "Unique identifier of for this node in the swarm." type: "string" Addr: description: | IP address and ports at which this node can be reached. type: "string" NetworkAttachmentConfig: description: | Specifies how a service should be attached to a particular network. type: "object" properties: Target: description: | The target network for attachment. Must be a network name or ID. type: "string" Aliases: description: | Discoverable alternate names for the service on this network. type: "array" items: type: "string" DriverOpts: description: | Driver attachment options for the network target. type: "object" additionalProperties: type: "string" paths: /containers/json: get: summary: "List containers" description: | Returns a list of containers. For details on the format, see the [inspect endpoint](#operation/ContainerInspect). Note that it uses a different, smaller representation of a container than inspecting a single container. For example, the list of linked containers is not propagated . operationId: "ContainerList" produces: - "application/json" parameters: - name: "all" in: "query" description: | Return all containers. By default, only running containers are shown. type: "boolean" default: false - name: "limit" in: "query" description: | Return this number of most recently created containers, including non-running ones. type: "integer" - name: "size" in: "query" description: | Return the size of container as fields `SizeRw` and `SizeRootFs`. type: "boolean" default: false - name: "filters" in: "query" description: | Filters to process on the container list, encoded as JSON (a `map[string][]string`). For example, `{"status": ["paused"]}` will only return paused containers. 
Available filters: - `ancestor`=(`<image-name>[:<tag>]`, `<image id>`, or `<image@digest>`) - `before`=(`<container id>` or `<container name>`) - `expose`=(`<port>[/<proto>]`|`<startport-endport>/[<proto>]`) - `exited=<int>` containers with exit code of `<int>` - `health`=(`starting`|`healthy`|`unhealthy`|`none`) - `id=<ID>` a container's ID - `isolation=`(`default`|`process`|`hyperv`) (Windows daemon only) - `is-task=`(`true`|`false`) - `label=key` or `label="key=value"` of a container label - `name=<name>` a container's name - `network`=(`<network id>` or `<network name>`) - `publish`=(`<port>[/<proto>]`|`<startport-endport>/[<proto>]`) - `since`=(`<container id>` or `<container name>`) - `status=`(`created`|`restarting`|`running`|`removing`|`paused`|`exited`|`dead`) - `volume`=(`<volume name>` or `<mount point destination>`) type: "string" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/ContainerSummary" examples: application/json: - Id: "8dfafdbc3a40" Names: - "/boring_feynman" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 1" Created: 1367854155 State: "Exited" Status: "Exit 0" Ports: - PrivatePort: 2222 PublicPort: 3333 Type: "tcp" Labels: com.example.vendor: "Acme" com.example.license: "GPL" com.example.version: "1.0" SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "2cdc4edb1ded3631c81f57966563e5c8525b81121bb3706a9a9a3ae102711f3f" Gateway: "172.17.0.1" IPAddress: "172.17.0.2" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:02" Mounts: - Name: "fac362...80535" Source: "/data" Destination: "/data" Driver: "local" Mode: "ro,Z" RW: false Propagation: "" - Id: "9cd87474be90" Names: - "/coolName" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 222222" Created: 1367854155 State: "Exited" Status: "Exit 0" Ports: [] Labels: {} SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "88eaed7b37b38c2a3f0c4bc796494fdf51b270c2d22656412a2ca5d559a64d7a" Gateway: "172.17.0.1" IPAddress: "172.17.0.8" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:08" Mounts: [] - Id: "3176a2479c92" Names: - "/sleepy_dog" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 3333333333333333" Created: 1367854154 State: "Exited" Status: "Exit 0" Ports: [] Labels: {} SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "8b27c041c30326d59cd6e6f510d4f8d1d570a228466f956edf7815508f78e30d" Gateway: "172.17.0.1" IPAddress: "172.17.0.6" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:06" Mounts: [] - Id: "4cb07b47f9fb" Names: - "/running_cat" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 444444444444444444444444444444444" Created: 1367854152 State: "Exited" Status: "Exit 0" Ports: [] Labels: {} SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" 
NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "d91c7b2f0644403d7ef3095985ea0e2370325cd2332ff3a3225c4247328e66e9" Gateway: "172.17.0.1" IPAddress: "172.17.0.5" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:05" Mounts: [] 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Container"] /containers/create: post: summary: "Create a container" operationId: "ContainerCreate" consumes: - "application/json" - "application/octet-stream" produces: - "application/json" parameters: - name: "name" in: "query" description: | Assign the specified name to the container. Must match `/?[a-zA-Z0-9][a-zA-Z0-9_.-]+`. type: "string" pattern: "^/?[a-zA-Z0-9][a-zA-Z0-9_.-]+$" - name: "body" in: "body" description: "Container to create" schema: allOf: - $ref: "#/definitions/ContainerConfig" - type: "object" properties: HostConfig: $ref: "#/definitions/HostConfig" NetworkingConfig: $ref: "#/definitions/NetworkingConfig" example: Hostname: "" Domainname: "" User: "" AttachStdin: false AttachStdout: true AttachStderr: true Tty: false OpenStdin: false StdinOnce: false Env: - "FOO=bar" - "BAZ=quux" Cmd: - "date" Entrypoint: "" Image: "ubuntu" Labels: com.example.vendor: "Acme" com.example.license: "GPL" com.example.version: "1.0" Volumes: /volumes/data: {} WorkingDir: "" NetworkDisabled: false MacAddress: "12:34:56:78:9a:bc" ExposedPorts: 22/tcp: {} StopSignal: "SIGTERM" StopTimeout: 10 HostConfig: Binds: - "/tmp:/tmp" Links: - "redis3:redis" Memory: 0 MemorySwap: 0 MemoryReservation: 0 KernelMemory: 0 NanoCpus: 500000 CpuPercent: 80 CpuShares: 512 CpuPeriod: 100000 CpuRealtimePeriod: 1000000 CpuRealtimeRuntime: 10000 CpuQuota: 50000 CpusetCpus: "0,1" CpusetMems: "0,1" MaximumIOps: 0 MaximumIOBps: 0 BlkioWeight: 300 BlkioWeightDevice: - {} BlkioDeviceReadBps: - {} BlkioDeviceReadIOps: - {} BlkioDeviceWriteBps: - {} BlkioDeviceWriteIOps: - {} DeviceRequests: - Driver: "nvidia" Count: -1 DeviceIDs": ["0", "1", "GPU-fef8089b-4820-abfc-e83e-94318197576e"] Capabilities: [["gpu", "nvidia", "compute"]] Options: property1: "string" property2: "string" MemorySwappiness: 60 OomKillDisable: false OomScoreAdj: 500 PidMode: "" PidsLimit: 0 PortBindings: 22/tcp: - HostPort: "11022" PublishAllPorts: false Privileged: false ReadonlyRootfs: false Dns: - "8.8.8.8" DnsOptions: - "" DnsSearch: - "" VolumesFrom: - "parent" - "other:ro" CapAdd: - "NET_ADMIN" CapDrop: - "MKNOD" GroupAdd: - "newgroup" RestartPolicy: Name: "" MaximumRetryCount: 0 AutoRemove: true NetworkMode: "bridge" Devices: [] Ulimits: - {} LogConfig: Type: "json-file" Config: {} SecurityOpt: [] StorageOpt: {} CgroupParent: "" VolumeDriver: "" ShmSize: 67108864 NetworkingConfig: EndpointsConfig: isolated_nw: IPAMConfig: IPv4Address: "172.20.30.33" IPv6Address: "2001:db8:abcd::3033" LinkLocalIPs: - "169.254.34.68" - "fe80::3468" Links: - "container_1" - "container_2" Aliases: - "server_x" - "server_y" required: true responses: 201: description: "Container created successfully" schema: type: "object" title: "ContainerCreateResponse" description: "OK response to ContainerCreate operation" required: [Id, Warnings] properties: Id: description: "The ID of the created container" type: "string" x-nullable: false Warnings: description: "Warnings encountered when creating the container" type: "array" x-nullable: false items: type: "string" 
examples: application/json: Id: "e90e34656806" Warnings: [] 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such image" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such image: c2ada9df5af8" 409: description: "conflict" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Container"] /containers/{id}/json: get: summary: "Inspect a container" description: "Return low-level information about a container." operationId: "ContainerInspect" produces: - "application/json" responses: 200: description: "no error" schema: type: "object" title: "ContainerInspectResponse" properties: Id: description: "The ID of the container" type: "string" Created: description: "The time the container was created" type: "string" Path: description: "The path to the command being run" type: "string" Args: description: "The arguments to the command being run" type: "array" items: type: "string" State: x-nullable: true $ref: "#/definitions/ContainerState" Image: description: "The container's image ID" type: "string" ResolvConfPath: type: "string" HostnamePath: type: "string" HostsPath: type: "string" LogPath: type: "string" Name: type: "string" RestartCount: type: "integer" Driver: type: "string" Platform: type: "string" MountLabel: type: "string" ProcessLabel: type: "string" AppArmorProfile: type: "string" ExecIDs: description: "IDs of exec instances that are running in the container." type: "array" items: type: "string" x-nullable: true HostConfig: $ref: "#/definitions/HostConfig" GraphDriver: $ref: "#/definitions/GraphDriverData" SizeRw: description: | The size of files that have been created or changed by this container. type: "integer" format: "int64" SizeRootFs: description: "The total size of all the files in this container." 
type: "integer" format: "int64" Mounts: type: "array" items: $ref: "#/definitions/MountPoint" Config: $ref: "#/definitions/ContainerConfig" NetworkSettings: $ref: "#/definitions/NetworkSettings" examples: application/json: AppArmorProfile: "" Args: - "-c" - "exit 9" Config: AttachStderr: true AttachStdin: false AttachStdout: true Cmd: - "/bin/sh" - "-c" - "exit 9" Domainname: "" Env: - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" Healthcheck: Test: ["CMD-SHELL", "exit 0"] Hostname: "ba033ac44011" Image: "ubuntu" Labels: com.example.vendor: "Acme" com.example.license: "GPL" com.example.version: "1.0" MacAddress: "" NetworkDisabled: false OpenStdin: false StdinOnce: false Tty: false User: "" Volumes: /volumes/data: {} WorkingDir: "" StopSignal: "SIGTERM" StopTimeout: 10 Created: "2015-01-06T15:47:31.485331387Z" Driver: "devicemapper" ExecIDs: - "b35395de42bc8abd327f9dd65d913b9ba28c74d2f0734eeeae84fa1c616a0fca" - "3fc1232e5cd20c8de182ed81178503dc6437f4e7ef12b52cc5e8de020652f1c4" HostConfig: MaximumIOps: 0 MaximumIOBps: 0 BlkioWeight: 0 BlkioWeightDevice: - {} BlkioDeviceReadBps: - {} BlkioDeviceWriteBps: - {} BlkioDeviceReadIOps: - {} BlkioDeviceWriteIOps: - {} ContainerIDFile: "" CpusetCpus: "" CpusetMems: "" CpuPercent: 80 CpuShares: 0 CpuPeriod: 100000 CpuRealtimePeriod: 1000000 CpuRealtimeRuntime: 10000 Devices: [] DeviceRequests: - Driver: "nvidia" Count: -1 DeviceIDs": ["0", "1", "GPU-fef8089b-4820-abfc-e83e-94318197576e"] Capabilities: [["gpu", "nvidia", "compute"]] Options: property1: "string" property2: "string" IpcMode: "" LxcConf: [] Memory: 0 MemorySwap: 0 MemoryReservation: 0 KernelMemory: 0 OomKillDisable: false OomScoreAdj: 500 NetworkMode: "bridge" PidMode: "" PortBindings: {} Privileged: false ReadonlyRootfs: false PublishAllPorts: false RestartPolicy: MaximumRetryCount: 2 Name: "on-failure" LogConfig: Type: "json-file" Sysctls: net.ipv4.ip_forward: "1" Ulimits: - {} VolumeDriver: "" ShmSize: 67108864 HostnamePath: "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/hostname" HostsPath: "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/hosts" LogPath: "/var/lib/docker/containers/1eb5fabf5a03807136561b3c00adcd2992b535d624d5e18b6cdc6a6844d9767b/1eb5fabf5a03807136561b3c00adcd2992b535d624d5e18b6cdc6a6844d9767b-json.log" Id: "ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39" Image: "04c5d3b7b0656168630d3ba35d8889bd0e9caafcaeb3004d2bfbc47e7c5d35d2" MountLabel: "" Name: "/boring_euclid" NetworkSettings: Bridge: "" SandboxID: "" HairpinMode: false LinkLocalIPv6Address: "" LinkLocalIPv6PrefixLen: 0 SandboxKey: "" EndpointID: "" Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 IPAddress: "" IPPrefixLen: 0 IPv6Gateway: "" MacAddress: "" Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "7587b82f0dada3656fda26588aee72630c6fab1536d36e394b2bfbcf898c971d" Gateway: "172.17.0.1" IPAddress: "172.17.0.2" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:12:00:02" Path: "/bin/sh" ProcessLabel: "" ResolvConfPath: "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/resolv.conf" RestartCount: 1 State: Error: "" ExitCode: 9 FinishedAt: "2015-01-06T15:47:32.080254511Z" Health: Status: "healthy" FailingStreak: 0 Log: - Start: "2019-12-22T10:59:05.6385933Z" End: "2019-12-22T10:59:05.8078452Z" ExitCode: 0 Output: "" OOMKilled: 
false Dead: false Paused: false Pid: 0 Restarting: false Running: true StartedAt: "2015-01-06T15:47:32.072697474Z" Status: "running" Mounts: - Name: "fac362...80535" Source: "/data" Destination: "/data" Driver: "local" Mode: "ro,Z" RW: false Propagation: "" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "size" in: "query" type: "boolean" default: false description: "Return the size of container as fields `SizeRw` and `SizeRootFs`" tags: ["Container"] /containers/{id}/top: get: summary: "List processes running inside a container" description: | On Unix systems, this is done by running the `ps` command. This endpoint is not supported on Windows. operationId: "ContainerTop" responses: 200: description: "no error" schema: type: "object" title: "ContainerTopResponse" description: "OK response to ContainerTop operation" properties: Titles: description: "The ps column titles" type: "array" items: type: "string" Processes: description: | Each process running in the container, where each is process is an array of values corresponding to the titles. type: "array" items: type: "array" items: type: "string" examples: application/json: Titles: - "UID" - "PID" - "PPID" - "C" - "STIME" - "TTY" - "TIME" - "CMD" Processes: - - "root" - "13642" - "882" - "0" - "17:03" - "pts/0" - "00:00:00" - "/bin/bash" - - "root" - "13735" - "13642" - "0" - "17:06" - "pts/0" - "00:00:00" - "sleep 10" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "ps_args" in: "query" description: "The arguments to pass to `ps`. For example, `aux`" type: "string" default: "-ef" tags: ["Container"] /containers/{id}/logs: get: summary: "Get container logs" description: | Get `stdout` and `stderr` logs from a container. Note: This endpoint works only for containers with the `json-file` or `journald` logging driver. operationId: "ContainerLogs" responses: 200: description: | logs returned as a stream in response body. For the stream format, [see the documentation for the attach endpoint](#operation/ContainerAttach). Note that unlike the attach endpoint, the logs endpoint does not upgrade the connection and does not set Content-Type. schema: type: "string" format: "binary" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "follow" in: "query" description: "Keep connection after returning logs." 
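          # Illustrative sketch only (not part of the API schema): one way to
          # follow a container's log stream from Go over the local Unix socket,
          # assuming the usual context, net, net/http, io, and os imports
          # (the socket path and container name below are placeholders):
          #
          #   tr := &http.Transport{
          #       DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
          #           return (&net.Dialer{}).DialContext(ctx, "unix", "/var/run/docker.sock")
          #       },
          #   }
          #   c := &http.Client{Transport: tr}
          #   resp, _ := c.Get("http://localhost/containers/my-container/logs?follow=true&stdout=true&stderr=true")
          #   defer resp.Body.Close()
          #   io.Copy(os.Stdout, resp.Body) // multiplexed frames unless the container uses a TTY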
type: "boolean" default: false - name: "stdout" in: "query" description: "Return logs from `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Return logs from `stderr`" type: "boolean" default: false - name: "since" in: "query" description: "Only return logs since this time, as a UNIX timestamp" type: "integer" default: 0 - name: "until" in: "query" description: "Only return logs before this time, as a UNIX timestamp" type: "integer" default: 0 - name: "timestamps" in: "query" description: "Add timestamps to every log line" type: "boolean" default: false - name: "tail" in: "query" description: | Only return this number of log lines from the end of the logs. Specify as an integer or `all` to output all log lines. type: "string" default: "all" tags: ["Container"] /containers/{id}/changes: get: summary: "Get changes on a container’s filesystem" description: | Returns which files in a container's filesystem have been added, deleted, or modified. The `Kind` of modification can be one of: - `0`: Modified - `1`: Added - `2`: Deleted operationId: "ContainerChanges" produces: ["application/json"] responses: 200: description: "The list of changes" schema: type: "array" items: type: "object" x-go-name: "ContainerChangeResponseItem" title: "ContainerChangeResponseItem" description: "change item in response to ContainerChanges operation" required: [Path, Kind] properties: Path: description: "Path to file that has changed" type: "string" x-nullable: false Kind: description: "Kind of change" type: "integer" format: "uint8" enum: [0, 1, 2] x-nullable: false examples: application/json: - Path: "/dev" Kind: 0 - Path: "/dev/kmsg" Kind: 1 - Path: "/test" Kind: 1 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/export: get: summary: "Export a container" description: "Export the contents of a container as a tarball." operationId: "ContainerExport" produces: - "application/octet-stream" responses: 200: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/stats: get: summary: "Get container stats based on resource usage" description: | This endpoint returns a live stream of a container’s resource usage statistics. The `precpu_stats` is the CPU statistic of the *previous* read, and is used to calculate the CPU usage percentage. It is not an exact copy of the `cpu_stats` field. If either `precpu_stats.online_cpus` or `cpu_stats.online_cpus` is nil then for compatibility with older daemons the length of the corresponding `cpu_usage.percpu_usage` array should be used. On a cgroup v2 host, the following fields are not set * `blkio_stats`: all fields other than `io_service_bytes_recursive` * `cpu_stats`: `cpu_usage.percpu_usage` * `memory_stats`: `max_usage` and `failcnt` Also, `memory_stats.stats` fields are incompatible with cgroup v1. 

        To calculate the values shown by the `stats` command of the docker cli tool
        the following formulas can be used:
        * used_memory = `memory_stats.usage - memory_stats.stats.cache`
        * available_memory = `memory_stats.limit`
        * Memory usage % = `(used_memory / available_memory) * 100.0`
        * cpu_delta = `cpu_stats.cpu_usage.total_usage - precpu_stats.cpu_usage.total_usage`
        * system_cpu_delta = `cpu_stats.system_cpu_usage - precpu_stats.system_cpu_usage`
        * number_cpus = `length(cpu_stats.cpu_usage.percpu_usage)` or `cpu_stats.online_cpus`
        * CPU usage % = `(cpu_delta / system_cpu_delta) * number_cpus * 100.0`
      operationId: "ContainerStats"
      produces: ["application/json"]
      responses:
        200:
          description: "no error"
          schema:
            type: "object"
          examples:
            application/json:
              read: "2015-01-08T22:57:31.547920715Z"
              pids_stats:
                current: 3
              networks:
                eth0:
                  rx_bytes: 5338
                  rx_dropped: 0
                  rx_errors: 0
                  rx_packets: 36
                  tx_bytes: 648
                  tx_dropped: 0
                  tx_errors: 0
                  tx_packets: 8
                eth5:
                  rx_bytes: 4641
                  rx_dropped: 0
                  rx_errors: 0
                  rx_packets: 26
                  tx_bytes: 690
                  tx_dropped: 0
                  tx_errors: 0
                  tx_packets: 9
              memory_stats:
                stats:
                  total_pgmajfault: 0
                  cache: 0
                  mapped_file: 0
                  total_inactive_file: 0
                  pgpgout: 414
                  rss: 6537216
                  total_mapped_file: 0
                  writeback: 0
                  unevictable: 0
                  pgpgin: 477
                  total_unevictable: 0
                  pgmajfault: 0
                  total_rss: 6537216
                  total_rss_huge: 6291456
                  total_writeback: 0
                  total_inactive_anon: 0
                  rss_huge: 6291456
                  hierarchical_memory_limit: 67108864
                  total_pgfault: 964
                  total_active_file: 0
                  active_anon: 6537216
                  total_active_anon: 6537216
                  total_pgpgout: 414
                  total_cache: 0
                  inactive_anon: 0
                  active_file: 0
                  pgfault: 964
                  inactive_file: 0
                  total_pgpgin: 477
                max_usage: 6651904
                usage: 6537216
                failcnt: 0
                limit: 67108864
              blkio_stats: {}
              cpu_stats:
                cpu_usage:
                  percpu_usage:
                    - 8646879
                    - 24472255
                    - 36438778
                    - 30657443
                  usage_in_usermode: 50000000
                  total_usage: 100215355
                  usage_in_kernelmode: 30000000
                system_cpu_usage: 739306590000000
                online_cpus: 4
                throttling_data:
                  periods: 0
                  throttled_periods: 0
                  throttled_time: 0
              precpu_stats:
                cpu_usage:
                  percpu_usage:
                    - 8646879
                    - 24350896
                    - 36438778
                    - 30657443
                  usage_in_usermode: 50000000
                  total_usage: 100093996
                  usage_in_kernelmode: 30000000
                system_cpu_usage: 9492140000000
                online_cpus: 4
                throttling_data:
                  periods: 0
                  throttled_periods: 0
                  throttled_time: 0
        404:
          description: "no such container"
          schema:
            $ref: "#/definitions/ErrorResponse"
          examples:
            application/json:
              message: "No such container: c2ada9df5af8"
        500:
          description: "server error"
          schema:
            $ref: "#/definitions/ErrorResponse"
      parameters:
        - name: "id"
          in: "path"
          required: true
          description: "ID or name of the container"
          type: "string"
        - name: "stream"
          in: "query"
          description: |
            Stream the output. If false, the stats will be output once and then
            it will disconnect.
          type: "boolean"
          default: true
        - name: "one-shot"
          in: "query"
          description: |
            Only get a single stat instead of waiting for 2 cycles. Must be
            used with `stream=false`.
          type: "boolean"
          default: false
      tags: ["Container"]
  /containers/{id}/resize:
    post:
      summary: "Resize a container TTY"
      description: "Resize the TTY for a container."
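      # Illustrative sketch only (not part of the API schema): the CPU usage
      # formulas documented for the stats endpoint above could be written in
      # Go roughly as below, where `cur` and `pre` are assumed to be structs
      # mirroring the `cpu_stats` and `precpu_stats` JSON objects:
      #
      #   cpuDelta := float64(cur.CPUUsage.TotalUsage - pre.CPUUsage.TotalUsage)
      #   systemDelta := float64(cur.SystemUsage - pre.SystemUsage)
      #   numCPUs := float64(cur.OnlineCPUs)
      #   if numCPUs == 0 {
      #       numCPUs = float64(len(cur.CPUUsage.PercpuUsage))
      #   }
      #   cpuPercent := (cpuDelta / systemDelta) * numCPUs * 100.0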
operationId: "ContainerResize" consumes: - "application/octet-stream" produces: - "text/plain" responses: 200: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "cannot resize container" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "h" in: "query" description: "Height of the TTY session in characters" type: "integer" - name: "w" in: "query" description: "Width of the TTY session in characters" type: "integer" tags: ["Container"] /containers/{id}/start: post: summary: "Start a container" operationId: "ContainerStart" responses: 204: description: "no error" 304: description: "container already started" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "detachKeys" in: "query" description: | Override the key sequence for detaching a container. Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,` or `_`. type: "string" tags: ["Container"] /containers/{id}/stop: post: summary: "Stop a container" operationId: "ContainerStop" responses: 204: description: "no error" 304: description: "container already stopped" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "t" in: "query" description: "Number of seconds to wait before killing the container" type: "integer" tags: ["Container"] /containers/{id}/restart: post: summary: "Restart a container" operationId: "ContainerRestart" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "t" in: "query" description: "Number of seconds to wait before killing the container" type: "integer" tags: ["Container"] /containers/{id}/kill: post: summary: "Kill a container" description: | Send a POSIX signal to a container, defaulting to killing to the container. 
operationId: "ContainerKill" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "container is not running" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "Container d37cde0fe4ad63c3a7252023b2f9800282894247d145cb5933ddf6e52cc03a28 is not running" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "signal" in: "query" description: "Signal to send to the container as an integer or string (e.g. `SIGINT`)" type: "string" default: "SIGKILL" tags: ["Container"] /containers/{id}/update: post: summary: "Update a container" description: | Change various configuration options of a container without having to recreate it. operationId: "ContainerUpdate" consumes: ["application/json"] produces: ["application/json"] responses: 200: description: "The container has been updated." schema: type: "object" title: "ContainerUpdateResponse" description: "OK response to ContainerUpdate operation" properties: Warnings: type: "array" items: type: "string" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "update" in: "body" required: true schema: allOf: - $ref: "#/definitions/Resources" - type: "object" properties: RestartPolicy: $ref: "#/definitions/RestartPolicy" example: BlkioWeight: 300 CpuShares: 512 CpuPeriod: 100000 CpuQuota: 50000 CpuRealtimePeriod: 1000000 CpuRealtimeRuntime: 10000 CpusetCpus: "0,1" CpusetMems: "0" Memory: 314572800 MemorySwap: 514288000 MemoryReservation: 209715200 KernelMemory: 52428800 RestartPolicy: MaximumRetryCount: 4 Name: "on-failure" tags: ["Container"] /containers/{id}/rename: post: summary: "Rename a container" operationId: "ContainerRename" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "name already in use" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "name" in: "query" required: true description: "New name for the container" type: "string" tags: ["Container"] /containers/{id}/pause: post: summary: "Pause a container" description: | Use the freezer cgroup to suspend all processes in a container. Traditionally, when suspending a process the `SIGSTOP` signal is used, which is observable by the process being suspended. With the freezer cgroup the process is unaware, and unable to capture, that it is being suspended, and subsequently resumed. 
operationId: "ContainerPause" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/unpause: post: summary: "Unpause a container" description: "Resume a container which has been paused." operationId: "ContainerUnpause" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/attach: post: summary: "Attach to a container" description: | Attach to a container to read its output or send it input. You can attach to the same container multiple times and you can reattach to containers that have been detached. Either the `stream` or `logs` parameter must be `true` for this endpoint to do anything. See the [documentation for the `docker attach` command](https://docs.docker.com/engine/reference/commandline/attach/) for more details. ### Hijacking This endpoint hijacks the HTTP connection to transport `stdin`, `stdout`, and `stderr` on the same socket. This is the response from the daemon for an attach request: ``` HTTP/1.1 200 OK Content-Type: application/vnd.docker.raw-stream [STREAM] ``` After the headers and two new lines, the TCP connection can now be used for raw, bidirectional communication between the client and server. To hint potential proxies about connection hijacking, the Docker client can also optionally send connection upgrade headers. For example, the client sends this request to upgrade the connection: ``` POST /containers/16253994b7c4/attach?stream=1&stdout=1 HTTP/1.1 Upgrade: tcp Connection: Upgrade ``` The Docker daemon will respond with a `101 UPGRADED` response, and will similarly follow with the raw stream: ``` HTTP/1.1 101 UPGRADED Content-Type: application/vnd.docker.raw-stream Connection: Upgrade Upgrade: tcp [STREAM] ``` ### Stream format When the TTY setting is disabled in [`POST /containers/create`](#operation/ContainerCreate), the stream over the hijacked connected is multiplexed to separate out `stdout` and `stderr`. The stream consists of a series of frames, each containing a header and a payload. The header contains the information which the stream writes (`stdout` or `stderr`). It also contains the size of the associated frame encoded in the last four bytes (`uint32`). It is encoded on the first eight bytes like this: ```go header := [8]byte{STREAM_TYPE, 0, 0, 0, SIZE1, SIZE2, SIZE3, SIZE4} ``` `STREAM_TYPE` can be: - 0: `stdin` (is written on `stdout`) - 1: `stdout` - 2: `stderr` `SIZE1, SIZE2, SIZE3, SIZE4` are the four bytes of the `uint32` size encoded as big endian. Following the header is the payload, which is the specified number of bytes of `STREAM_TYPE`. The simplest way to implement this protocol is the following: 1. Read 8 bytes. 2. Choose `stdout` or `stderr` depending on the first byte. 3. Extract the frame size from the last four bytes. 4. Read the extracted size and output it on the correct output. 5. Goto 1. 
### Stream format when using a TTY When the TTY setting is enabled in [`POST /containers/create`](#operation/ContainerCreate), the stream is not multiplexed. The data exchanged over the hijacked connection is simply the raw data from the process PTY and client's `stdin`. operationId: "ContainerAttach" produces: - "application/vnd.docker.raw-stream" responses: 101: description: "no error, hints proxy about hijacking" 200: description: "no error, no upgrade header found" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "detachKeys" in: "query" description: | Override the key sequence for detaching a container.Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,` or `_`. type: "string" - name: "logs" in: "query" description: | Replay previous logs from the container. This is useful for attaching to a container that has started and you want to output everything since the container started. If `stream` is also enabled, once all the previous output has been returned, it will seamlessly transition into streaming current output. type: "boolean" default: false - name: "stream" in: "query" description: | Stream attached streams from the time the request was made onwards. type: "boolean" default: false - name: "stdin" in: "query" description: "Attach to `stdin`" type: "boolean" default: false - name: "stdout" in: "query" description: "Attach to `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Attach to `stderr`" type: "boolean" default: false tags: ["Container"] /containers/{id}/attach/ws: get: summary: "Attach to a container via a websocket" operationId: "ContainerAttachWebsocket" responses: 101: description: "no error, hints proxy about hijacking" 200: description: "no error, no upgrade header found" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "detachKeys" in: "query" description: | Override the key sequence for detaching a container.Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,`, or `_`. type: "string" - name: "logs" in: "query" description: "Return logs" type: "boolean" default: false - name: "stream" in: "query" description: "Return stream" type: "boolean" default: false - name: "stdin" in: "query" description: "Attach to `stdin`" type: "boolean" default: false - name: "stdout" in: "query" description: "Attach to `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Attach to `stderr`" type: "boolean" default: false tags: ["Container"] /containers/{id}/wait: post: summary: "Wait for a container" description: "Block until a container stops, then returns the exit code." 
operationId: "ContainerWait" produces: ["application/json"] responses: 200: description: "The container has exit." schema: type: "object" title: "ContainerWaitResponse" description: "OK response to ContainerWait operation" required: [StatusCode] properties: StatusCode: description: "Exit code of the container" type: "integer" x-nullable: false Error: description: "container waiting error, if any" type: "object" properties: Message: description: "Details of an error" type: "string" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "condition" in: "query" description: | Wait until a container state reaches the given condition, either 'not-running' (default), 'next-exit', or 'removed'. type: "string" default: "not-running" tags: ["Container"] /containers/{id}: delete: summary: "Remove a container" operationId: "ContainerDelete" responses: 204: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "conflict" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: | You cannot remove a running container: c2ada9df5af8. Stop the container before attempting removal or force remove 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "v" in: "query" description: "Remove anonymous volumes associated with the container." type: "boolean" default: false - name: "force" in: "query" description: "If the container is running, kill it before removing it." type: "boolean" default: false - name: "link" in: "query" description: "Remove the specified link associated with the container." type: "boolean" default: false tags: ["Container"] /containers/{id}/archive: head: summary: "Get information about files in a container" description: | A response header `X-Docker-Container-Path-Stat` is returned, containing a base64 - encoded JSON object with some filesystem header information about the path. operationId: "ContainerArchiveInfo" responses: 200: description: "no error" headers: X-Docker-Container-Path-Stat: type: "string" description: | A base64 - encoded JSON object with some filesystem header information about the path 400: description: "Bad parameter" schema: allOf: - $ref: "#/definitions/ErrorResponse" - type: "object" properties: message: description: | The error message. Either "must specify path parameter" (path cannot be empty) or "not a directory" (path was asserted to be a directory but exists as a file). type: "string" x-nullable: false 404: description: "Container or path does not exist" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "path" in: "query" required: true description: "Resource in the container’s filesystem to archive." 
type: "string" tags: ["Container"] get: summary: "Get an archive of a filesystem resource in a container" description: "Get a tar archive of a resource in the filesystem of container id." operationId: "ContainerArchive" produces: ["application/x-tar"] responses: 200: description: "no error" 400: description: "Bad parameter" schema: allOf: - $ref: "#/definitions/ErrorResponse" - type: "object" properties: message: description: | The error message. Either "must specify path parameter" (path cannot be empty) or "not a directory" (path was asserted to be a directory but exists as a file). type: "string" x-nullable: false 404: description: "Container or path does not exist" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "path" in: "query" required: true description: "Resource in the container’s filesystem to archive." type: "string" tags: ["Container"] put: summary: "Extract an archive of files or folders to a directory in a container" description: "Upload a tar archive to be extracted to a path in the filesystem of container id." operationId: "PutContainerArchive" consumes: ["application/x-tar", "application/octet-stream"] responses: 200: description: "The content was extracted successfully" 400: description: "Bad parameter" schema: $ref: "#/definitions/ErrorResponse" 403: description: "Permission denied, the volume or container rootfs is marked as read-only." schema: $ref: "#/definitions/ErrorResponse" 404: description: "No such container or path does not exist inside the container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "path" in: "query" required: true description: "Path to a directory in the container to extract the archive’s contents into. " type: "string" - name: "noOverwriteDirNonDir" in: "query" description: | If `1`, `true`, or `True` then it will be an error if unpacking the given content would cause an existing directory to be replaced with a non-directory and vice versa. type: "string" - name: "copyUIDGID" in: "query" description: | If `1`, `true`, then it will copy UID/GID maps to the dest file or dir type: "string" - name: "inputStream" in: "body" required: true description: | The input stream must be a tar archive compressed with one of the following algorithms: `identity` (no compression), `gzip`, `bzip2`, or `xz`. schema: type: "string" format: "binary" tags: ["Container"] /containers/prune: post: summary: "Delete stopped containers" produces: - "application/json" operationId: "ContainerPrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `until=<timestamp>` Prune containers created before this timestamp. The `<timestamp>` can be Unix timestamps, date formatted timestamps, or Go duration strings (e.g. `10m`, `1h30m`) computed relative to the daemon machine’s time. - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune containers with (or without, in case `label!=...` is used) the specified labels. 
type: "string" responses: 200: description: "No error" schema: type: "object" title: "ContainerPruneResponse" properties: ContainersDeleted: description: "Container IDs that were deleted" type: "array" items: type: "string" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Container"] /images/json: get: summary: "List Images" description: "Returns a list of images on the server. Note that it uses a different, smaller representation of an image than inspecting a single image." operationId: "ImageList" produces: - "application/json" responses: 200: description: "Summary image data for the images matching the query" schema: type: "array" items: $ref: "#/definitions/ImageSummary" examples: application/json: - Id: "sha256:e216a057b1cb1efc11f8a268f37ef62083e70b1b38323ba252e25ac88904a7e8" ParentId: "" RepoTags: - "ubuntu:12.04" - "ubuntu:precise" RepoDigests: - "ubuntu@sha256:992069aee4016783df6345315302fa59681aae51a8eeb2f889dea59290f21787" Created: 1474925151 Size: 103579269 VirtualSize: 103579269 SharedSize: 0 Labels: {} Containers: 2 - Id: "sha256:3e314f95dcace0f5e4fd37b10862fe8398e3c60ed36600bc0ca5fda78b087175" ParentId: "" RepoTags: - "ubuntu:12.10" - "ubuntu:quantal" RepoDigests: - "ubuntu@sha256:002fba3e3255af10be97ea26e476692a7ebed0bb074a9ab960b2e7a1526b15d7" - "ubuntu@sha256:68ea0200f0b90df725d99d823905b04cf844f6039ef60c60bf3e019915017bd3" Created: 1403128455 Size: 172064416 VirtualSize: 172064416 SharedSize: 0 Labels: {} Containers: 5 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "all" in: "query" description: "Show all images. Only images from a final layer (no children) are shown by default." type: "boolean" default: false - name: "filters" in: "query" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the images list. Available filters: - `before`=(`<image-name>[:<tag>]`, `<image id>` or `<image@digest>`) - `dangling=true` - `label=key` or `label="key=value"` of an image label - `reference`=(`<image-name>[:<tag>]`) - `since`=(`<image-name>[:<tag>]`, `<image id>` or `<image@digest>`) type: "string" - name: "shared-size" in: "query" description: "Compute and show shared size as a `SharedSize` field on each image." type: "boolean" default: false - name: "digests" in: "query" description: "Show digest information as a `RepoDigests` field on each image." type: "boolean" default: false tags: ["Image"] /build: post: summary: "Build an image" description: | Build an image from a tar archive with a `Dockerfile` in it. The `Dockerfile` specifies how the image is built from the tar archive. It is typically in the archive's root, but can be at a different path or have a different name by specifying the `dockerfile` parameter. [See the `Dockerfile` reference for more information](https://docs.docker.com/engine/reference/builder/). The Docker daemon performs a preliminary validation of the `Dockerfile` before starting the build, and returns an error if the syntax is incorrect. After that, each instruction is run one-by-one until the ID of the new image is output. The build is canceled if the client drops the connection by quitting or being killed. 
operationId: "ImageBuild" consumes: - "application/octet-stream" produces: - "application/json" parameters: - name: "inputStream" in: "body" description: "A tar archive compressed with one of the following algorithms: identity (no compression), gzip, bzip2, xz." schema: type: "string" format: "binary" - name: "dockerfile" in: "query" description: "Path within the build context to the `Dockerfile`. This is ignored if `remote` is specified and points to an external `Dockerfile`." type: "string" default: "Dockerfile" - name: "t" in: "query" description: "A name and optional tag to apply to the image in the `name:tag` format. If you omit the tag the default `latest` value is assumed. You can provide several `t` parameters." type: "string" - name: "extrahosts" in: "query" description: "Extra hosts to add to /etc/hosts" type: "string" - name: "remote" in: "query" description: "A Git repository URI or HTTP/HTTPS context URI. If the URI points to a single text file, the file’s contents are placed into a file called `Dockerfile` and the image is built from that file. If the URI points to a tarball, the file is downloaded by the daemon and the contents therein used as the context for the build. If the URI points to a tarball and the `dockerfile` parameter is also specified, there must be a file with the corresponding path inside the tarball." type: "string" - name: "q" in: "query" description: "Suppress verbose build output." type: "boolean" default: false - name: "nocache" in: "query" description: "Do not use the cache when building the image." type: "boolean" default: false - name: "cachefrom" in: "query" description: "JSON array of images used for build cache resolution." type: "string" - name: "pull" in: "query" description: "Attempt to pull the image even if an older image exists locally." type: "string" - name: "rm" in: "query" description: "Remove intermediate containers after a successful build." type: "boolean" default: true - name: "forcerm" in: "query" description: "Always remove intermediate containers, even upon failure." type: "boolean" default: false - name: "memory" in: "query" description: "Set memory limit for build." type: "integer" - name: "memswap" in: "query" description: "Total memory (memory + swap). Set as `-1` to disable swap." type: "integer" - name: "cpushares" in: "query" description: "CPU shares (relative weight)." type: "integer" - name: "cpusetcpus" in: "query" description: "CPUs in which to allow execution (e.g., `0-3`, `0,1`)." type: "string" - name: "cpuperiod" in: "query" description: "The length of a CPU period in microseconds." type: "integer" - name: "cpuquota" in: "query" description: "Microseconds of CPU time that the container can get in a CPU period." type: "integer" - name: "buildargs" in: "query" description: > JSON map of string pairs for build-time variables. Users pass these values at build-time. Docker uses the buildargs as the environment context for commands run via the `Dockerfile` RUN instruction, or for variable expansion in other `Dockerfile` instructions. This is not meant for passing secret values. For example, the build arg `FOO=bar` would become `{"FOO":"bar"}` in JSON. This would result in the query parameter `buildargs={"FOO":"bar"}`. Note that `{"FOO":"bar"}` should be URI component encoded. [Read more about the buildargs instruction.](https://docs.docker.com/engine/reference/builder/#arg) type: "string" - name: "shmsize" in: "query" description: "Size of `/dev/shm` in bytes. The size must be greater than 0. 
If omitted the system uses 64MB." type: "integer" - name: "squash" in: "query" description: "Squash the resulting images layers into a single layer. *(Experimental release only.)*" type: "boolean" - name: "labels" in: "query" description: "Arbitrary key/value labels to set on the image, as a JSON map of string pairs." type: "string" - name: "networkmode" in: "query" description: | Sets the networking mode for the run commands during build. Supported standard values are: `bridge`, `host`, `none`, and `container:<name|id>`. Any other value is taken as a custom network's name or ID to which this container should connect to. type: "string" - name: "Content-type" in: "header" type: "string" enum: - "application/x-tar" default: "application/x-tar" - name: "X-Registry-Config" in: "header" description: | This is a base64-encoded JSON object with auth configurations for multiple registries that a build may refer to. The key is a registry URL, and the value is an auth configuration object, [as described in the authentication section](#section/Authentication). For example: ``` { "docker.example.com": { "username": "janedoe", "password": "hunter2" }, "https://index.docker.io/v1/": { "username": "mobydock", "password": "conta1n3rize14" } } ``` Only the registry domain name (and port if not the default 443) are required. However, for legacy reasons, the Docker Hub registry must be specified with both a `https://` prefix and a `/v1/` suffix even though Docker will prefer to use the v2 registry API. type: "string" - name: "platform" in: "query" description: "Platform in the format os[/arch[/variant]]" type: "string" default: "" - name: "target" in: "query" description: "Target build stage" type: "string" default: "" - name: "outputs" in: "query" description: "BuildKit output configuration" type: "string" default: "" responses: 200: description: "no error" 400: description: "Bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Image"] /build/prune: post: summary: "Delete builder cache" produces: - "application/json" operationId: "BuildPrune" parameters: - name: "keep-storage" in: "query" description: "Amount of disk space in bytes to keep for cache" type: "integer" format: "int64" - name: "all" in: "query" type: "boolean" description: "Remove all types of build cache" - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the list of build cache objects. Available filters: - `until=<duration>`: duration relative to daemon's time, during which build cache was not used, in Go's duration format (e.g., '24h') - `id=<id>` - `parent=<id>` - `type=<string>` - `description=<string>` - `inuse` - `shared` - `private` responses: 200: description: "No error" schema: type: "object" title: "BuildPruneResponse" properties: CachesDeleted: type: "array" items: description: "ID of build cache object" type: "string" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Image"] /images/create: post: summary: "Create an image" description: "Create an image by either pulling it from a registry or importing it." 
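Pulling from a registry that requires authentication means supplying the base64url-encoded auth object in the `X-Registry-Auth` header (the per-registry map in `X-Registry-Config` above is encoded the same way). A sketch of producing such a header value; the credentials are placeholders and the field names follow the auth object described in the authentication section:

```go
package authdemo

import (
	"encoding/base64"
	"encoding/json"
)

// authConfig carries the basic registry credential fields.
type authConfig struct {
	Username      string `json:"username"`
	Password      string `json:"password"`
	ServerAddress string `json:"serveraddress,omitempty"`
}

// encodeAuthHeader returns a value suitable for the X-Registry-Auth header.
func encodeAuthHeader(user, pass, registry string) (string, error) {
	raw, err := json.Marshal(authConfig{
		Username:      user,
		Password:      pass,
		ServerAddress: registry,
	})
	if err != nil {
		return "", err
	}
	// The header is the base64url encoding of the JSON object.
	return base64.URLEncoding.EncodeToString(raw), nil
}
```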
operationId: "ImageCreate" consumes: - "text/plain" - "application/octet-stream" produces: - "application/json" responses: 200: description: "no error" 404: description: "repository does not exist or no read access" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "fromImage" in: "query" description: "Name of the image to pull. The name may include a tag or digest. This parameter may only be used when pulling an image. The pull is cancelled if the HTTP connection is closed." type: "string" - name: "fromSrc" in: "query" description: "Source to import. The value may be a URL from which the image can be retrieved or `-` to read the image from the request body. This parameter may only be used when importing an image." type: "string" - name: "repo" in: "query" description: "Repository name given to an image when it is imported. The repo may include a tag. This parameter may only be used when importing an image." type: "string" - name: "tag" in: "query" description: "Tag or digest. If empty when pulling an image, this causes all tags for the given image to be pulled." type: "string" - name: "message" in: "query" description: "Set commit message for imported image." type: "string" - name: "inputImage" in: "body" description: "Image content if the value `-` has been specified in fromSrc query parameter" schema: type: "string" required: false - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration. Refer to the [authentication section](#section/Authentication) for details. type: "string" - name: "changes" in: "query" description: | Apply `Dockerfile` instructions to the image that is created, for example: `changes=ENV DEBUG=true`. Note that `ENV DEBUG=true` should be URI component encoded. Supported `Dockerfile` instructions: `CMD`|`ENTRYPOINT`|`ENV`|`EXPOSE`|`ONBUILD`|`USER`|`VOLUME`|`WORKDIR` type: "array" items: type: "string" - name: "platform" in: "query" description: "Platform in the format os[/arch[/variant]]" type: "string" default: "" tags: ["Image"] /images/{name}/json: get: summary: "Inspect an image" description: "Return low-level information about an image." 
operationId: "ImageInspect" produces: - "application/json" responses: 200: description: "No error" schema: $ref: "#/definitions/Image" examples: application/json: Id: "sha256:85f05633ddc1c50679be2b16a0479ab6f7637f8884e0cfe0f4d20e1ebb3d6e7c" Container: "cb91e48a60d01f1e27028b4fc6819f4f290b3cf12496c8176ec714d0d390984a" Comment: "" Os: "linux" Architecture: "amd64" Parent: "sha256:91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c" ContainerConfig: Tty: false Hostname: "e611e15f9c9d" Domainname: "" AttachStdout: false PublishService: "" AttachStdin: false OpenStdin: false StdinOnce: false NetworkDisabled: false OnBuild: [] Image: "91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c" User: "" WorkingDir: "" MacAddress: "" AttachStderr: false Labels: com.example.license: "GPL" com.example.version: "1.0" com.example.vendor: "Acme" Env: - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" Cmd: - "/bin/sh" - "-c" - "#(nop) LABEL com.example.vendor=Acme com.example.license=GPL com.example.version=1.0" DockerVersion: "1.9.0-dev" VirtualSize: 188359297 Size: 0 Author: "" Created: "2015-09-10T08:30:53.26995814Z" GraphDriver: Name: "aufs" Data: {} RepoDigests: - "localhost:5000/test/busybox/example@sha256:cbbf2f9a99b47fc460d422812b6a5adff7dfee951d8fa2e4a98caa0382cfbdbf" RepoTags: - "example:1.0" - "example:latest" - "example:stable" Config: Image: "91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c" NetworkDisabled: false OnBuild: [] StdinOnce: false PublishService: "" AttachStdin: false OpenStdin: false Domainname: "" AttachStdout: false Tty: false Hostname: "e611e15f9c9d" Cmd: - "/bin/bash" Env: - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" Labels: com.example.vendor: "Acme" com.example.version: "1.0" com.example.license: "GPL" MacAddress: "" AttachStderr: false WorkingDir: "" User: "" RootFS: Type: "layers" Layers: - "sha256:1834950e52ce4d5a88a1bbd131c537f4d0e56d10ff0dd69e66be3b7dfa9df7e6" - "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such image: someimage (tag: latest)" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or id" type: "string" required: true tags: ["Image"] /images/{name}/history: get: summary: "Get the history of an image" description: "Return parent layers of an image." 
operationId: "ImageHistory" produces: ["application/json"] responses: 200: description: "List of image layers" schema: type: "array" items: type: "object" x-go-name: HistoryResponseItem title: "HistoryResponseItem" description: "individual image layer information in response to ImageHistory operation" required: [Id, Created, CreatedBy, Tags, Size, Comment] properties: Id: type: "string" x-nullable: false Created: type: "integer" format: "int64" x-nullable: false CreatedBy: type: "string" x-nullable: false Tags: type: "array" items: type: "string" Size: type: "integer" format: "int64" x-nullable: false Comment: type: "string" x-nullable: false examples: application/json: - Id: "3db9c44f45209632d6050b35958829c3a2aa256d81b9a7be45b362ff85c54710" Created: 1398108230 CreatedBy: "/bin/sh -c #(nop) ADD file:eb15dbd63394e063b805a3c32ca7bf0266ef64676d5a6fab4801f2e81e2a5148 in /" Tags: - "ubuntu:lucid" - "ubuntu:10.04" Size: 182964289 Comment: "" - Id: "6cfa4d1f33fb861d4d114f43b25abd0ac737509268065cdfd69d544a59c85ab8" Created: 1398108222 CreatedBy: "/bin/sh -c #(nop) MAINTAINER Tianon Gravi <[email protected]> - mkimage-debootstrap.sh -i iproute,iputils-ping,ubuntu-minimal -t lucid.tar.xz lucid http://archive.ubuntu.com/ubuntu/" Tags: [] Size: 0 Comment: "" - Id: "511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158" Created: 1371157430 CreatedBy: "" Tags: - "scratch12:latest" - "scratch:latest" Size: 0 Comment: "Imported from -" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID" type: "string" required: true tags: ["Image"] /images/{name}/push: post: summary: "Push an image" description: | Push an image to a registry. If you wish to push an image on to a private registry, that image must already have a tag which references the registry. For example, `registry.example.com/myimage:latest`. The push is cancelled if the HTTP connection is closed. operationId: "ImagePush" consumes: - "application/octet-stream" responses: 200: description: "No error" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID." type: "string" required: true - name: "tag" in: "query" description: "The tag to associate with the image on the registry." type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration. Refer to the [authentication section](#section/Authentication) for details. type: "string" required: true tags: ["Image"] /images/{name}/tag: post: summary: "Tag an image" description: "Tag an image so that it becomes part of a repository." operationId: "ImageTag" responses: 201: description: "No error" 400: description: "Bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Conflict" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID to tag." type: "string" required: true - name: "repo" in: "query" description: "The repository to tag in. For example, `someuser/someimage`." type: "string" - name: "tag" in: "query" description: "The name of the new tag." 
type: "string" tags: ["Image"] /images/{name}: delete: summary: "Remove an image" description: | Remove an image, along with any untagged parent images that were referenced by that image. Images can't be removed if they have descendant images, are being used by a running container or are being used by a build. operationId: "ImageDelete" produces: ["application/json"] responses: 200: description: "The image was deleted successfully" schema: type: "array" items: $ref: "#/definitions/ImageDeleteResponseItem" examples: application/json: - Untagged: "3e2f21a89f" - Deleted: "3e2f21a89f" - Deleted: "53b4f83ac9" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Conflict" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID" type: "string" required: true - name: "force" in: "query" description: "Remove the image even if it is being used by stopped containers or has other tags" type: "boolean" default: false - name: "noprune" in: "query" description: "Do not delete untagged parent images" type: "boolean" default: false tags: ["Image"] /images/search: get: summary: "Search images" description: "Search for an image on Docker Hub." operationId: "ImageSearch" produces: - "application/json" responses: 200: description: "No error" schema: type: "array" items: type: "object" title: "ImageSearchResponseItem" properties: description: type: "string" is_official: type: "boolean" is_automated: type: "boolean" name: type: "string" star_count: type: "integer" examples: application/json: - description: "" is_official: false is_automated: false name: "wma55/u1210sshd" star_count: 0 - description: "" is_official: false is_automated: false name: "jdswinbank/sshd" star_count: 0 - description: "" is_official: false is_automated: false name: "vgauthier/sshd" star_count: 0 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "term" in: "query" description: "Term to search" type: "string" required: true - name: "limit" in: "query" description: "Maximum number of results to return" type: "integer" - name: "filters" in: "query" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the images list. Available filters: - `is-automated=(true|false)` - `is-official=(true|false)` - `stars=<number>` Matches images that has at least 'number' stars. type: "string" tags: ["Image"] /images/prune: post: summary: "Delete unused images" produces: - "application/json" operationId: "ImagePrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `dangling=<boolean>` When set to `true` (or `1`), prune only unused *and* untagged images. When set to `false` (or `0`), all unused images are pruned. - `until=<string>` Prune images created before this timestamp. The `<timestamp>` can be Unix timestamps, date formatted timestamps, or Go duration strings (e.g. `10m`, `1h30m`) computed relative to the daemon machine’s time. - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune images with (or without, in case `label!=...` is used) the specified labels. 
type: "string" responses: 200: description: "No error" schema: type: "object" title: "ImagePruneResponse" properties: ImagesDeleted: description: "Images that were deleted" type: "array" items: $ref: "#/definitions/ImageDeleteResponseItem" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Image"] /auth: post: summary: "Check auth configuration" description: | Validate credentials for a registry and, if available, get an identity token for accessing the registry without password. operationId: "SystemAuth" consumes: ["application/json"] produces: ["application/json"] responses: 200: description: "An identity token was generated successfully." schema: type: "object" title: "SystemAuthResponse" required: [Status] properties: Status: description: "The status of the authentication" type: "string" x-nullable: false IdentityToken: description: "An opaque token used to authenticate a user after a successful login" type: "string" x-nullable: false examples: application/json: Status: "Login Succeeded" IdentityToken: "9cbaf023786cd7..." 204: description: "No error" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "authConfig" in: "body" description: "Authentication to check" schema: $ref: "#/definitions/AuthConfig" tags: ["System"] /info: get: summary: "Get system information" operationId: "SystemInfo" produces: - "application/json" responses: 200: description: "No error" schema: $ref: "#/definitions/SystemInfo" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /version: get: summary: "Get version" description: "Returns the version of Docker that is running and various information about the system that Docker is running on." operationId: "SystemVersion" produces: ["application/json"] responses: 200: description: "no error" schema: $ref: "#/definitions/SystemVersion" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /_ping: get: summary: "Ping" description: "This is a dummy endpoint you can use to test if the server is accessible." operationId: "SystemPing" produces: ["text/plain"] responses: 200: description: "no error" schema: type: "string" example: "OK" headers: API-Version: type: "string" description: "Max API Version the server supports" Builder-Version: type: "string" description: "Default version of docker image builder" Docker-Experimental: type: "boolean" description: "If the server is running with experimental mode enabled" Cache-Control: type: "string" default: "no-cache, no-store, must-revalidate" Pragma: type: "string" default: "no-cache" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" headers: Cache-Control: type: "string" default: "no-cache, no-store, must-revalidate" Pragma: type: "string" default: "no-cache" tags: ["System"] head: summary: "Ping" description: "This is a dummy endpoint you can use to test if the server is accessible." 
operationId: "SystemPingHead" produces: ["text/plain"] responses: 200: description: "no error" schema: type: "string" example: "(empty)" headers: API-Version: type: "string" description: "Max API Version the server supports" Builder-Version: type: "string" description: "Default version of docker image builder" Docker-Experimental: type: "boolean" description: "If the server is running with experimental mode enabled" Cache-Control: type: "string" default: "no-cache, no-store, must-revalidate" Pragma: type: "string" default: "no-cache" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /commit: post: summary: "Create a new image from a container" operationId: "ImageCommit" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "containerConfig" in: "body" description: "The container configuration" schema: $ref: "#/definitions/ContainerConfig" - name: "container" in: "query" description: "The ID or name of the container to commit" type: "string" - name: "repo" in: "query" description: "Repository name for the created image" type: "string" - name: "tag" in: "query" description: "Tag name for the create image" type: "string" - name: "comment" in: "query" description: "Commit message" type: "string" - name: "author" in: "query" description: "Author of the image (e.g., `John Hannibal Smith <[email protected]>`)" type: "string" - name: "pause" in: "query" description: "Whether to pause the container before committing" type: "boolean" default: true - name: "changes" in: "query" description: "`Dockerfile` instructions to apply while committing" type: "string" tags: ["Image"] /events: get: summary: "Monitor events" description: | Stream real-time events from the server. Various objects within Docker report events when something happens to them. 
Containers report these events: `attach`, `commit`, `copy`, `create`, `destroy`, `detach`, `die`, `exec_create`, `exec_detach`, `exec_start`, `exec_die`, `export`, `health_status`, `kill`, `oom`, `pause`, `rename`, `resize`, `restart`, `start`, `stop`, `top`, `unpause`, `update`, and `prune` Images report these events: `delete`, `import`, `load`, `pull`, `push`, `save`, `tag`, `untag`, and `prune` Volumes report these events: `create`, `mount`, `unmount`, `destroy`, and `prune` Networks report these events: `create`, `connect`, `disconnect`, `destroy`, `update`, `remove`, and `prune` The Docker daemon reports these events: `reload` Services report these events: `create`, `update`, and `remove` Nodes report these events: `create`, `update`, and `remove` Secrets report these events: `create`, `update`, and `remove` Configs report these events: `create`, `update`, and `remove` The Builder reports `prune` events operationId: "SystemEvents" produces: - "application/json" responses: 200: description: "no error" schema: type: "object" title: "SystemEventsResponse" properties: Type: description: "The type of object emitting the event" type: "string" Action: description: "The type of event" type: "string" Actor: type: "object" properties: ID: description: "The ID of the object emitting the event" type: "string" Attributes: description: "Various key/value attributes of the object, depending on its type" type: "object" additionalProperties: type: "string" time: description: "Timestamp of event" type: "integer" timeNano: description: "Timestamp of event, with nanosecond accuracy" type: "integer" format: "int64" examples: application/json: Type: "container" Action: "create" Actor: ID: "ede54ee1afda366ab42f824e8a5ffd195155d853ceaec74a927f249ea270c743" Attributes: com.example.some-label: "some-label-value" image: "alpine" name: "my-container" time: 1461943101 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "since" in: "query" description: "Show events created since this timestamp then stream new events." type: "string" - name: "until" in: "query" description: "Show events created until this timestamp then stop streaming." type: "string" - name: "filters" in: "query" description: | A JSON encoded value of filters (a `map[string][]string`) to process on the event list. 
Available filters: - `config=<string>` config name or ID - `container=<string>` container name or ID - `daemon=<string>` daemon name or ID - `event=<string>` event type - `image=<string>` image name or ID - `label=<string>` image or container label - `network=<string>` network name or ID - `node=<string>` node ID - `plugin`=<string> plugin name or ID - `scope`=<string> local or swarm - `secret=<string>` secret name or ID - `service=<string>` service name or ID - `type=<string>` object to filter by, one of `container`, `image`, `volume`, `network`, `daemon`, `plugin`, `node`, `service`, `secret` or `config` - `volume=<string>` volume name type: "string" tags: ["System"] /system/df: get: summary: "Get data usage information" operationId: "SystemDataUsage" responses: 200: description: "no error" schema: type: "object" title: "SystemDataUsageResponse" properties: LayersSize: type: "integer" format: "int64" Images: type: "array" items: $ref: "#/definitions/ImageSummary" Containers: type: "array" items: $ref: "#/definitions/ContainerSummary" Volumes: type: "array" items: $ref: "#/definitions/Volume" BuildCache: type: "array" items: $ref: "#/definitions/BuildCache" example: LayersSize: 1092588 Images: - Id: "sha256:2b8fd9751c4c0f5dd266fcae00707e67a2545ef34f9a29354585f93dac906749" ParentId: "" RepoTags: - "busybox:latest" RepoDigests: - "busybox@sha256:a59906e33509d14c036c8678d687bd4eec81ed7c4b8ce907b888c607f6a1e0e6" Created: 1466724217 Size: 1092588 SharedSize: 0 VirtualSize: 1092588 Labels: {} Containers: 1 Containers: - Id: "e575172ed11dc01bfce087fb27bee502db149e1a0fad7c296ad300bbff178148" Names: - "/top" Image: "busybox" ImageID: "sha256:2b8fd9751c4c0f5dd266fcae00707e67a2545ef34f9a29354585f93dac906749" Command: "top" Created: 1472592424 Ports: [] SizeRootFs: 1092588 Labels: {} State: "exited" Status: "Exited (0) 56 minutes ago" HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: IPAMConfig: null Links: null Aliases: null NetworkID: "d687bc59335f0e5c9ee8193e5612e8aee000c8c62ea170cfb99c098f95899d92" EndpointID: "8ed5115aeaad9abb174f68dcf135b49f11daf597678315231a32ca28441dec6a" Gateway: "172.18.0.1" IPAddress: "172.18.0.2" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:12:00:02" Mounts: [] Volumes: - Name: "my-volume" Driver: "local" Mountpoint: "/var/lib/docker/volumes/my-volume/_data" Labels: null Scope: "local" Options: null UsageData: Size: 10920104 RefCount: 2 BuildCache: - ID: "hw53o5aio51xtltp5xjp8v7fx" Parent: "" Type: "regular" Description: "pulled from docker.io/library/debian@sha256:234cb88d3020898631af0ccbbcca9a66ae7306ecd30c9720690858c1b007d2a0" InUse: false Shared: true Size: 0 CreatedAt: "2021-06-28T13:31:01.474619385Z" LastUsedAt: "2021-07-07T22:02:32.738075951Z" UsageCount: 26 - ID: "ndlpt0hhvkqcdfkputsk4cq9c" Parent: "hw53o5aio51xtltp5xjp8v7fx" Type: "regular" Description: "mount / from exec /bin/sh -c echo 'Binary::apt::APT::Keep-Downloaded-Packages \"true\";' > /etc/apt/apt.conf.d/keep-cache" InUse: false Shared: true Size: 51 CreatedAt: "2021-06-28T13:31:03.002625487Z" LastUsedAt: "2021-07-07T22:02:32.773909517Z" UsageCount: 26 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "type" in: "query" description: | Object types, for which to compute and return data. 
type: "array" collectionFormat: multi items: type: "string" enum: ["container", "image", "volume", "build-cache"] tags: ["System"] /images/{name}/get: get: summary: "Export an image" description: | Get a tarball containing all images and metadata for a repository. If `name` is a specific name and tag (e.g. `ubuntu:latest`), then only that image (and its parents) are returned. If `name` is an image ID, similarly only that image (and its parents) are returned, but with the exclusion of the `repositories` file in the tarball, as there were no image names referenced. ### Image tarball format An image tarball contains one directory per image layer (named using its long ID), each containing these files: - `VERSION`: currently `1.0` - the file format version - `json`: detailed layer information, similar to `docker inspect layer_id` - `layer.tar`: A tarfile containing the filesystem changes in this layer The `layer.tar` file contains `aufs` style `.wh..wh.aufs` files and directories for storing attribute changes and deletions. If the tarball defines a repository, the tarball should also include a `repositories` file at the root that contains a list of repository and tag names mapped to layer IDs. ```json { "hello-world": { "latest": "565a9d68a73f6706862bfe8409a7f659776d4d60a8d096eb4a3cbce6999cc2a1" } } ``` operationId: "ImageGet" produces: - "application/x-tar" responses: 200: description: "no error" schema: type: "string" format: "binary" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID" type: "string" required: true tags: ["Image"] /images/get: get: summary: "Export several images" description: | Get a tarball containing all images and metadata for several image repositories. For each value of the `names` parameter: if it is a specific name and tag (e.g. `ubuntu:latest`), then only that image (and its parents) are returned; if it is an image ID, similarly only that image (and its parents) are returned and there would be no names referenced in the 'repositories' file for this image ID. For details on the format, see the [export image endpoint](#operation/ImageGet). operationId: "ImageGetAll" produces: - "application/x-tar" responses: 200: description: "no error" schema: type: "string" format: "binary" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "names" in: "query" description: "Image names to filter by" type: "array" items: type: "string" tags: ["Image"] /images/load: post: summary: "Import images" description: | Load a set of images and tags into a repository. For details on the format, see the [export image endpoint](#operation/ImageGet). operationId: "ImageLoad" consumes: - "application/x-tar" produces: - "application/json" responses: 200: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "imagesTarball" in: "body" description: "Tar archive containing images" schema: type: "string" format: "binary" - name: "quiet" in: "query" description: "Suppress progress details during load." type: "boolean" default: false tags: ["Image"] /containers/{id}/exec: post: summary: "Create an exec instance" description: "Run a command inside a running container." 
operationId: "ContainerExec" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "container is paused" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "execConfig" in: "body" description: "Exec configuration" schema: type: "object" title: "ExecConfig" properties: AttachStdin: type: "boolean" description: "Attach to `stdin` of the exec command." AttachStdout: type: "boolean" description: "Attach to `stdout` of the exec command." AttachStderr: type: "boolean" description: "Attach to `stderr` of the exec command." DetachKeys: type: "string" description: | Override the key sequence for detaching a container. Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,` or `_`. Tty: type: "boolean" description: "Allocate a pseudo-TTY." Env: description: | A list of environment variables in the form `["VAR=value", ...]`. type: "array" items: type: "string" Cmd: type: "array" description: "Command to run, as a string or array of strings." items: type: "string" Privileged: type: "boolean" description: "Runs the exec process with extended privileges." default: false User: type: "string" description: | The user, and optionally, group to run the exec process inside the container. Format is one of: `user`, `user:group`, `uid`, or `uid:gid`. WorkingDir: type: "string" description: | The working directory for the exec process inside the container. example: AttachStdin: false AttachStdout: true AttachStderr: true DetachKeys: "ctrl-p,ctrl-q" Tty: false Cmd: - "date" Env: - "FOO=bar" - "BAZ=quux" required: true - name: "id" in: "path" description: "ID or name of container" type: "string" required: true tags: ["Exec"] /exec/{id}/start: post: summary: "Start an exec instance" description: | Starts a previously set up exec instance. If detach is true, this endpoint returns immediately after starting the command. Otherwise, it sets up an interactive session with the command. operationId: "ExecStart" consumes: - "application/json" produces: - "application/vnd.docker.raw-stream" responses: 200: description: "No error" 404: description: "No such exec instance" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Container is stopped or paused" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "execStartConfig" in: "body" schema: type: "object" title: "ExecStartConfig" properties: Detach: type: "boolean" description: "Detach from the command." Tty: type: "boolean" description: "Allocate a pseudo-TTY." example: Detach: false Tty: false - name: "id" in: "path" description: "Exec instance ID" required: true type: "string" tags: ["Exec"] /exec/{id}/resize: post: summary: "Resize an exec instance" description: | Resize the TTY session used by an exec instance. This endpoint only works if `tty` was specified as part of creating and starting the exec instance. 
operationId: "ExecResize" responses: 201: description: "No error" 404: description: "No such exec instance" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Exec instance ID" required: true type: "string" - name: "h" in: "query" description: "Height of the TTY session in characters" type: "integer" - name: "w" in: "query" description: "Width of the TTY session in characters" type: "integer" tags: ["Exec"] /exec/{id}/json: get: summary: "Inspect an exec instance" description: "Return low-level information about an exec instance." operationId: "ExecInspect" produces: - "application/json" responses: 200: description: "No error" schema: type: "object" title: "ExecInspectResponse" properties: CanRemove: type: "boolean" DetachKeys: type: "string" ID: type: "string" Running: type: "boolean" ExitCode: type: "integer" ProcessConfig: $ref: "#/definitions/ProcessConfig" OpenStdin: type: "boolean" OpenStderr: type: "boolean" OpenStdout: type: "boolean" ContainerID: type: "string" Pid: type: "integer" description: "The system process ID for the exec process." examples: application/json: CanRemove: false ContainerID: "b53ee82b53a40c7dca428523e34f741f3abc51d9f297a14ff874bf761b995126" DetachKeys: "" ExitCode: 2 ID: "f33bbfb39f5b142420f4759b2348913bd4a8d1a6d7fd56499cb41a1bb91d7b3b" OpenStderr: true OpenStdin: true OpenStdout: true ProcessConfig: arguments: - "-c" - "exit 2" entrypoint: "sh" privileged: false tty: true user: "1000" Running: false Pid: 42000 404: description: "No such exec instance" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Exec instance ID" required: true type: "string" tags: ["Exec"] /volumes: get: summary: "List volumes" operationId: "VolumeList" produces: ["application/json"] responses: 200: description: "Summary volume data that matches the query" schema: type: "object" title: "VolumeListResponse" description: "Volume list response" required: [Volumes, Warnings] properties: Volumes: type: "array" x-nullable: false description: "List of volumes" items: $ref: "#/definitions/Volume" Warnings: type: "array" x-nullable: false description: | Warnings that occurred when fetching the list of volumes. items: type: "string" examples: application/json: Volumes: - CreatedAt: "2017-07-19T12:00:26Z" Name: "tardis" Driver: "local" Mountpoint: "/var/lib/docker/volumes/tardis" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Scope: "local" Options: device: "tmpfs" o: "size=100m,uid=1000" type: "tmpfs" Warnings: [] 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" description: | JSON encoded value of the filters (a `map[string][]string`) to process on the volumes list. Available filters: - `dangling=<boolean>` When set to `true` (or `1`), returns all volumes that are not in use by a container. When set to `false` (or `0`), only volumes that are in use by one or more containers are returned. - `driver=<volume-driver-name>` Matches volumes based on their driver. - `label=<key>` or `label=<key>:<value>` Matches volumes based on the presence of a `label` alone or a `label` and a value. - `name=<volume-name>` Matches all or part of a volume name. 
type: "string" format: "json" tags: ["Volume"] /volumes/create: post: summary: "Create a volume" operationId: "VolumeCreate" consumes: ["application/json"] produces: ["application/json"] responses: 201: description: "The volume was created successfully" schema: $ref: "#/definitions/Volume" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "volumeConfig" in: "body" required: true description: "Volume configuration" schema: type: "object" description: "Volume configuration" title: "VolumeConfig" properties: Name: description: | The new volume's name. If not specified, Docker generates a name. type: "string" x-nullable: false Driver: description: "Name of the volume driver to use." type: "string" default: "local" x-nullable: false DriverOpts: description: | A mapping of driver options and values. These options are passed directly to the driver and are driver specific. type: "object" additionalProperties: type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" example: Name: "tardis" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Driver: "custom" tags: ["Volume"] /volumes/{name}: get: summary: "Inspect a volume" operationId: "VolumeInspect" produces: ["application/json"] responses: 200: description: "No error" schema: $ref: "#/definitions/Volume" 404: description: "No such volume" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" required: true description: "Volume name or ID" type: "string" tags: ["Volume"] delete: summary: "Remove a volume" description: "Instruct the driver to remove the volume." operationId: "VolumeDelete" responses: 204: description: "The volume was removed" 404: description: "No such volume or volume driver" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Volume is in use and cannot be removed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" required: true description: "Volume name or ID" type: "string" - name: "force" in: "query" description: "Force the removal of the volume" type: "boolean" default: false tags: ["Volume"] /volumes/prune: post: summary: "Delete unused volumes" produces: - "application/json" operationId: "VolumePrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune volumes with (or without, in case `label!=...` is used) the specified labels. type: "string" responses: 200: description: "No error" schema: type: "object" title: "VolumePruneResponse" properties: VolumesDeleted: description: "Volumes that were deleted" type: "array" items: type: "string" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Volume"] /networks: get: summary: "List networks" description: | Returns a list of networks. For details on the format, see the [network inspect endpoint](#operation/NetworkInspect). Note that it uses a different, smaller representation of a network than inspecting a single network. 
For example, the list of containers attached to the network is not propagated in API versions 1.28 and up. operationId: "NetworkList" produces: - "application/json" responses: 200: description: "No error" schema: type: "array" items: $ref: "#/definitions/Network" examples: application/json: - Name: "bridge" Id: "f2de39df4171b0dc801e8002d1d999b77256983dfc63041c0f34030aa3977566" Created: "2016-10-19T06:21:00.416543526Z" Scope: "local" Driver: "bridge" EnableIPv6: false Internal: false Attachable: false Ingress: false IPAM: Driver: "default" Config: - Subnet: "172.17.0.0/16" Options: com.docker.network.bridge.default_bridge: "true" com.docker.network.bridge.enable_icc: "true" com.docker.network.bridge.enable_ip_masquerade: "true" com.docker.network.bridge.host_binding_ipv4: "0.0.0.0" com.docker.network.bridge.name: "docker0" com.docker.network.driver.mtu: "1500" - Name: "none" Id: "e086a3893b05ab69242d3c44e49483a3bbbd3a26b46baa8f61ab797c1088d794" Created: "0001-01-01T00:00:00Z" Scope: "local" Driver: "null" EnableIPv6: false Internal: false Attachable: false Ingress: false IPAM: Driver: "default" Config: [] Containers: {} Options: {} - Name: "host" Id: "13e871235c677f196c4e1ecebb9dc733b9b2d2ab589e30c539efeda84a24215e" Created: "0001-01-01T00:00:00Z" Scope: "local" Driver: "host" EnableIPv6: false Internal: false Attachable: false Ingress: false IPAM: Driver: "default" Config: [] Containers: {} Options: {} 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" description: | JSON encoded value of the filters (a `map[string][]string`) to process on the networks list. Available filters: - `dangling=<boolean>` When set to `true` (or `1`), returns all networks that are not in use by a container. When set to `false` (or `0`), only networks that are in use by one or more containers are returned. - `driver=<driver-name>` Matches a network's driver. - `id=<network-id>` Matches all or part of a network ID. - `label=<key>` or `label=<key>=<value>` of a network label. - `name=<network-name>` Matches all or part of a network name. - `scope=["swarm"|"global"|"local"]` Filters networks by scope (`swarm`, `global`, or `local`). - `type=["custom"|"builtin"]` Filters networks by type. The `custom` keyword returns all user-defined networks. 
type: "string" tags: ["Network"] /networks/{id}: get: summary: "Inspect a network" operationId: "NetworkInspect" produces: - "application/json" responses: 200: description: "No error" schema: $ref: "#/definitions/Network" 404: description: "Network not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" - name: "verbose" in: "query" description: "Detailed inspect output for troubleshooting" type: "boolean" default: false - name: "scope" in: "query" description: "Filter the network by scope (swarm, global, or local)" type: "string" tags: ["Network"] delete: summary: "Remove a network" operationId: "NetworkDelete" responses: 204: description: "No error" 403: description: "operation not supported for pre-defined networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such network" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" tags: ["Network"] /networks/create: post: summary: "Create a network" operationId: "NetworkCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "No error" schema: type: "object" title: "NetworkCreateResponse" properties: Id: description: "The ID of the created network." type: "string" Warning: type: "string" example: Id: "22be93d5babb089c5aab8dbc369042fad48ff791584ca2da2100db837a1c7c30" Warning: "" 403: description: "operation not supported for pre-defined networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "plugin not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "networkConfig" in: "body" description: "Network configuration" required: true schema: type: "object" title: "NetworkCreateRequest" required: ["Name"] properties: Name: description: "The network's name." type: "string" CheckDuplicate: description: | Check for networks with duplicate names. Since Network is primarily keyed based on a random ID and not on the name, and network name is strictly a user-friendly alias to the network which is uniquely identified using ID, there is no guaranteed way to check for duplicates. CheckDuplicate is there to provide a best effort checking of any networks which has the same name but it is not guaranteed to catch all name collisions. type: "boolean" Driver: description: "Name of the network driver plugin to use." type: "string" default: "bridge" Internal: description: "Restrict external access to the network." type: "boolean" Attachable: description: | Globally scoped network is manually attachable by regular containers from workers in swarm mode. type: "boolean" Ingress: description: | Ingress network is the network which provides the routing-mesh in swarm mode. type: "boolean" IPAM: description: "Optional custom IP scheme for the network." $ref: "#/definitions/IPAM" EnableIPv6: description: "Enable IPv6 on the network." type: "boolean" Options: description: "Network specific options to be used by the drivers." type: "object" additionalProperties: type: "string" Labels: description: "User-defined key/value metadata." 
type: "object" additionalProperties: type: "string" example: Name: "isolated_nw" CheckDuplicate: false Driver: "bridge" EnableIPv6: true IPAM: Driver: "default" Config: - Subnet: "172.20.0.0/16" IPRange: "172.20.10.0/24" Gateway: "172.20.10.11" - Subnet: "2001:db8:abcd::/64" Gateway: "2001:db8:abcd::1011" Options: foo: "bar" Internal: true Attachable: false Ingress: false Options: com.docker.network.bridge.default_bridge: "true" com.docker.network.bridge.enable_icc: "true" com.docker.network.bridge.enable_ip_masquerade: "true" com.docker.network.bridge.host_binding_ipv4: "0.0.0.0" com.docker.network.bridge.name: "docker0" com.docker.network.driver.mtu: "1500" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" tags: ["Network"] /networks/{id}/connect: post: summary: "Connect a container to a network" operationId: "NetworkConnect" consumes: - "application/json" responses: 200: description: "No error" 403: description: "Operation not supported for swarm scoped networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "Network or container not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" - name: "container" in: "body" required: true schema: type: "object" title: "NetworkConnectRequest" properties: Container: type: "string" description: "The ID or name of the container to connect to the network." EndpointConfig: $ref: "#/definitions/EndpointSettings" example: Container: "3613f73ba0e4" EndpointConfig: IPAMConfig: IPv4Address: "172.24.56.89" IPv6Address: "2001:db8::5689" tags: ["Network"] /networks/{id}/disconnect: post: summary: "Disconnect a container from a network" operationId: "NetworkDisconnect" consumes: - "application/json" responses: 200: description: "No error" 403: description: "Operation not supported for swarm scoped networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "Network or container not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" - name: "container" in: "body" required: true schema: type: "object" title: "NetworkDisconnectRequest" properties: Container: type: "string" description: | The ID or name of the container to disconnect from the network. Force: type: "boolean" description: | Force the container to disconnect from the network. tags: ["Network"] /networks/prune: post: summary: "Delete unused networks" produces: - "application/json" operationId: "NetworkPrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `until=<timestamp>` Prune networks created before this timestamp. The `<timestamp>` can be Unix timestamps, date formatted timestamps, or Go duration strings (e.g. `10m`, `1h30m`) computed relative to the daemon machine’s time. - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune networks with (or without, in case `label!=...` is used) the specified labels. 
type: "string" responses: 200: description: "No error" schema: type: "object" title: "NetworkPruneResponse" properties: NetworksDeleted: description: "Networks that were deleted" type: "array" items: type: "string" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Network"] /plugins: get: summary: "List plugins" operationId: "PluginList" description: "Returns information about installed plugins." produces: ["application/json"] responses: 200: description: "No error" schema: type: "array" items: $ref: "#/definitions/Plugin" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the plugin list. Available filters: - `capability=<capability name>` - `enable=<true>|<false>` tags: ["Plugin"] /plugins/privileges: get: summary: "Get plugin privileges" operationId: "GetPluginPrivileges" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/PluginPrivilegeItem" example: - Name: "network" Description: "" Value: - "host" - Name: "mount" Description: "" Value: - "/data" - Name: "device" Description: "" Value: - "/dev/cpu_dma_latency" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "remote" in: "query" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" tags: - "Plugin" /plugins/pull: post: summary: "Install a plugin" operationId: "PluginPull" description: | Pulls and installs a plugin. After the plugin is installed, it can be enabled using the [`POST /plugins/{name}/enable` endpoint](#operation/PostPluginsEnable). produces: - "application/json" responses: 204: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "remote" in: "query" description: | Remote reference for plugin to install. The `:latest` tag is optional, and is used as the default if omitted. required: true type: "string" - name: "name" in: "query" description: | Local name for the pulled plugin. The `:latest` tag is optional, and is used as the default if omitted. required: false type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration to use when pulling a plugin from a registry. Refer to the [authentication section](#section/Authentication) for details. type: "string" - name: "body" in: "body" schema: type: "array" items: $ref: "#/definitions/PluginPrivilegeItem" example: - Name: "network" Description: "" Value: - "host" - Name: "mount" Description: "" Value: - "/data" - Name: "device" Description: "" Value: - "/dev/cpu_dma_latency" tags: ["Plugin"] /plugins/{name}/json: get: summary: "Inspect a plugin" operationId: "PluginInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Plugin" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. 
required: true type: "string" tags: ["Plugin"] /plugins/{name}: delete: summary: "Remove a plugin" operationId: "PluginDelete" responses: 200: description: "no error" schema: $ref: "#/definitions/Plugin" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "force" in: "query" description: | Disable the plugin before removing. This may result in issues if the plugin is in use by a container. type: "boolean" default: false tags: ["Plugin"] /plugins/{name}/enable: post: summary: "Enable a plugin" operationId: "PluginEnable" responses: 200: description: "no error" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "timeout" in: "query" description: "Set the HTTP client timeout (in seconds)" type: "integer" default: 0 tags: ["Plugin"] /plugins/{name}/disable: post: summary: "Disable a plugin" operationId: "PluginDisable" responses: 200: description: "no error" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" tags: ["Plugin"] /plugins/{name}/upgrade: post: summary: "Upgrade a plugin" operationId: "PluginUpgrade" responses: 204: description: "no error" 404: description: "plugin not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "remote" in: "query" description: | Remote reference to upgrade to. The `:latest` tag is optional, and is used as the default if omitted. required: true type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration to use when pulling a plugin from a registry. Refer to the [authentication section](#section/Authentication) for details. type: "string" - name: "body" in: "body" schema: type: "array" items: $ref: "#/definitions/PluginPrivilegeItem" example: - Name: "network" Description: "" Value: - "host" - Name: "mount" Description: "" Value: - "/data" - Name: "device" Description: "" Value: - "/dev/cpu_dma_latency" tags: ["Plugin"] /plugins/create: post: summary: "Create a plugin" operationId: "PluginCreate" consumes: - "application/x-tar" responses: 204: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "query" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. 
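After a pull, a plugin still has to be enabled explicitly; `/plugins/{name}/enable` and `/plugins/{name}/disable` are plain POSTs with no body, with options such as `timeout` carried in the query string. A short sketch under the same assumptions as the earlier examples (plugin name is a placeholder):

```go
package main

import (
	"context"
	"fmt"
	"net"
	"net/http"
)

func main() {
	cli := &http.Client{Transport: &http.Transport{
		DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
			return (&net.Dialer{}).DialContext(ctx, "unix", "/var/run/docker.sock")
		},
	}}

	name := "vieux/sshfs:latest" // placeholder; ":latest" is implied if omitted

	// Enable the plugin; "timeout" is the HTTP client timeout in seconds (0 = default).
	resp, err := cli.Post("http://docker/v1.41/plugins/"+name+"/enable?timeout=0", "application/json", nil)
	if err != nil {
		panic(err)
	}
	resp.Body.Close()
	fmt.Println("enable:", resp.Status)

	// Disabling is symmetric: POST /plugins/{name}/disable (404 if it is not installed).
	resp, err = cli.Post("http://docker/v1.41/plugins/"+name+"/disable", "application/json", nil)
	if err != nil {
		panic(err)
	}
	resp.Body.Close()
	fmt.Println("disable:", resp.Status)
}
```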
required: true type: "string" - name: "tarContext" in: "body" description: "Path to tar containing plugin rootfs and manifest" schema: type: "string" format: "binary" tags: ["Plugin"] /plugins/{name}/push: post: summary: "Push a plugin" operationId: "PluginPush" description: | Push a plugin to the registry. parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" responses: 200: description: "no error" 404: description: "plugin not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Plugin"] /plugins/{name}/set: post: summary: "Configure a plugin" operationId: "PluginSet" consumes: - "application/json" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "body" in: "body" schema: type: "array" items: type: "string" example: ["DEBUG=1"] responses: 204: description: "No error" 404: description: "Plugin not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Plugin"] /nodes: get: summary: "List nodes" operationId: "NodeList" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Node" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" description: | Filters to process on the nodes list, encoded as JSON (a `map[string][]string`). Available filters: - `id=<node id>` - `label=<engine label>` - `membership=`(`accepted`|`pending`)` - `name=<node name>` - `node.label=<node label>` - `role=`(`manager`|`worker`)` type: "string" tags: ["Node"] /nodes/{id}: get: summary: "Inspect a node" operationId: "NodeInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Node" 404: description: "no such node" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the node" type: "string" required: true tags: ["Node"] delete: summary: "Delete a node" operationId: "NodeDelete" responses: 200: description: "no error" 404: description: "no such node" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the node" type: "string" required: true - name: "force" in: "query" description: "Force remove a node from the swarm" default: false type: "boolean" tags: ["Node"] /nodes/{id}/update: post: summary: "Update a node" operationId: "NodeUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such node" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID of 
the node" type: "string" required: true - name: "body" in: "body" schema: $ref: "#/definitions/NodeSpec" - name: "version" in: "query" description: | The version number of the node object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true tags: ["Node"] /swarm: get: summary: "Inspect swarm" operationId: "SwarmInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Swarm" 404: description: "no such swarm" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" tags: ["Swarm"] /swarm/init: post: summary: "Initialize a new swarm" operationId: "SwarmInit" produces: - "application/json" - "text/plain" responses: 200: description: "no error" schema: description: "The node ID" type: "string" example: "7v2t30z9blmxuhnyo6s4cpenp" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is already part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: type: "object" title: "SwarmInitRequest" properties: ListenAddr: description: | Listen address used for inter-manager communication, as well as determining the networking interface used for the VXLAN Tunnel Endpoint (VTEP). This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the default swarm listening port is used. type: "string" AdvertiseAddr: description: | Externally reachable address advertised to other nodes. This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the port number from the listen address is used. If `AdvertiseAddr` is not specified, it will be automatically detected when possible. type: "string" DataPathAddr: description: | Address or interface to use for data path traffic (format: `<ip|interface>`), for example, `192.168.1.1`, or an interface, like `eth0`. If `DataPathAddr` is unspecified, the same address as `AdvertiseAddr` is used. The `DataPathAddr` specifies the address that global scope network drivers will publish towards other nodes in order to reach the containers running on this node. Using this parameter it is possible to separate the container data traffic from the management traffic of the cluster. type: "string" DataPathPort: description: | DataPathPort specifies the data path port number for data traffic. Acceptable port range is 1024 to 49151. if no port is set or is set to 0, default port 4789 will be used. type: "integer" format: "uint32" DefaultAddrPool: description: | Default Address Pool specifies default subnet pools for global scope networks. type: "array" items: type: "string" example: ["10.10.0.0/16", "20.20.0.0/16"] ForceNewCluster: description: "Force creation of a new swarm." type: "boolean" SubnetSize: description: | SubnetSize specifies the subnet size of the networks created from the default subnet pool. 
type: "integer" format: "uint32" Spec: $ref: "#/definitions/SwarmSpec" example: ListenAddr: "0.0.0.0:2377" AdvertiseAddr: "192.168.1.1:2377" DataPathPort: 4789 DefaultAddrPool: ["10.10.0.0/8", "20.20.0.0/8"] SubnetSize: 24 ForceNewCluster: false Spec: Orchestration: {} Raft: {} Dispatcher: {} CAConfig: {} EncryptionConfig: AutoLockManagers: false tags: ["Swarm"] /swarm/join: post: summary: "Join an existing swarm" operationId: "SwarmJoin" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is already part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: type: "object" title: "SwarmJoinRequest" properties: ListenAddr: description: | Listen address used for inter-manager communication if the node gets promoted to manager, as well as determining the networking interface used for the VXLAN Tunnel Endpoint (VTEP). type: "string" AdvertiseAddr: description: | Externally reachable address advertised to other nodes. This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the port number from the listen address is used. If `AdvertiseAddr` is not specified, it will be automatically detected when possible. type: "string" DataPathAddr: description: | Address or interface to use for data path traffic (format: `<ip|interface>`), for example, `192.168.1.1`, or an interface, like `eth0`. If `DataPathAddr` is unspecified, the same address as `AdvertiseAddr` is used. The `DataPathAddr` specifies the address that global scope network drivers will publish towards other nodes in order to reach the containers running on this node. Using this parameter it is possible to separate the container data traffic from the management traffic of the cluster. type: "string" RemoteAddrs: description: | Addresses of manager nodes already participating in the swarm. type: "array" items: type: "string" JoinToken: description: "Secret token for joining this swarm." type: "string" example: ListenAddr: "0.0.0.0:2377" AdvertiseAddr: "192.168.1.1:2377" RemoteAddrs: - "node1:2377" JoinToken: "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-7p73s1dx5in4tatdymyhg9hu2" tags: ["Swarm"] /swarm/leave: post: summary: "Leave a swarm" operationId: "SwarmLeave" responses: 200: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "force" description: | Force leave swarm, even if this is the last manager or that it will break the cluster. in: "query" type: "boolean" default: false tags: ["Swarm"] /swarm/update: post: summary: "Update a swarm" operationId: "SwarmUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: $ref: "#/definitions/SwarmSpec" - name: "version" in: "query" description: | The version number of the swarm object being updated. This is required to avoid conflicting writes. 
type: "integer" format: "int64" required: true - name: "rotateWorkerToken" in: "query" description: "Rotate the worker join token." type: "boolean" default: false - name: "rotateManagerToken" in: "query" description: "Rotate the manager join token." type: "boolean" default: false - name: "rotateManagerUnlockKey" in: "query" description: "Rotate the manager unlock key." type: "boolean" default: false tags: ["Swarm"] /swarm/unlockkey: get: summary: "Get the unlock key" operationId: "SwarmUnlockkey" consumes: - "application/json" responses: 200: description: "no error" schema: type: "object" title: "UnlockKeyResponse" properties: UnlockKey: description: "The swarm's unlock key." type: "string" example: UnlockKey: "SWMKEY-1-7c37Cc8654o6p38HnroywCi19pllOnGtbdZEgtKxZu8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" tags: ["Swarm"] /swarm/unlock: post: summary: "Unlock a locked manager" operationId: "SwarmUnlock" consumes: - "application/json" produces: - "application/json" parameters: - name: "body" in: "body" required: true schema: type: "object" title: "SwarmUnlockRequest" properties: UnlockKey: description: "The swarm's unlock key." type: "string" example: UnlockKey: "SWMKEY-1-7c37Cc8654o6p38HnroywCi19pllOnGtbdZEgtKxZu8" responses: 200: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" tags: ["Swarm"] /services: get: summary: "List services" operationId: "ServiceList" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Service" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the services list. Available filters: - `id=<service id>` - `label=<service label>` - `mode=["replicated"|"global"]` - `name=<service name>` - name: "status" in: "query" type: "boolean" description: | Include service status, with count of running and desired tasks. tags: ["Service"] /services/create: post: summary: "Create a service" operationId: "ServiceCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: type: "object" title: "ServiceCreateResponse" properties: ID: description: "The ID of the created service." 
type: "string" Warning: description: "Optional warning message" type: "string" example: ID: "ak7w3gjqoa3kuz8xcpnyy0pvl" Warning: "unable to pin image doesnotexist:latest to digest: image library/doesnotexist:latest not found" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 403: description: "network is not eligible for services" schema: $ref: "#/definitions/ErrorResponse" 409: description: "name conflicts with an existing service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: allOf: - $ref: "#/definitions/ServiceSpec" - type: "object" example: Name: "web" TaskTemplate: ContainerSpec: Image: "nginx:alpine" Mounts: - ReadOnly: true Source: "web-data" Target: "/usr/share/nginx/html" Type: "volume" VolumeOptions: DriverConfig: {} Labels: com.example.something: "something-value" Hosts: ["10.10.10.10 host1", "ABCD:EF01:2345:6789:ABCD:EF01:2345:6789 host2"] User: "33" DNSConfig: Nameservers: ["8.8.8.8"] Search: ["example.org"] Options: ["timeout:3"] Secrets: - File: Name: "www.example.org.key" UID: "33" GID: "33" Mode: 384 SecretID: "fpjqlhnwb19zds35k8wn80lq9" SecretName: "example_org_domain_key" LogDriver: Name: "json-file" Options: max-file: "3" max-size: "10M" Placement: {} Resources: Limits: MemoryBytes: 104857600 Reservations: {} RestartPolicy: Condition: "on-failure" Delay: 10000000000 MaxAttempts: 10 Mode: Replicated: Replicas: 4 UpdateConfig: Parallelism: 2 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 RollbackConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 EndpointSpec: Ports: - Protocol: "tcp" PublishedPort: 8080 TargetPort: 80 Labels: foo: "bar" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration for pulling from private registries. Refer to the [authentication section](#section/Authentication) for details. type: "string" tags: ["Service"] /services/{id}: get: summary: "Inspect a service" operationId: "ServiceInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Service" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID or name of service." required: true type: "string" - name: "insertDefaults" in: "query" description: "Fill empty fields with default values." type: "boolean" default: false tags: ["Service"] delete: summary: "Delete a service" operationId: "ServiceDelete" responses: 200: description: "no error" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID or name of service." 
required: true type: "string" tags: ["Service"] /services/{id}/update: post: summary: "Update a service" operationId: "ServiceUpdate" consumes: ["application/json"] produces: ["application/json"] responses: 200: description: "no error" schema: $ref: "#/definitions/ServiceUpdateResponse" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID or name of service." required: true type: "string" - name: "body" in: "body" required: true schema: allOf: - $ref: "#/definitions/ServiceSpec" - type: "object" example: Name: "top" TaskTemplate: ContainerSpec: Image: "busybox" Args: - "top" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ForceUpdate: 0 Mode: Replicated: Replicas: 1 UpdateConfig: Parallelism: 2 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 RollbackConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 EndpointSpec: Mode: "vip" - name: "version" in: "query" description: | The version number of the service object being updated. This is required to avoid conflicting writes. This version number should be the value as currently set on the service *before* the update. You can find the current version by calling `GET /services/{id}` required: true type: "integer" - name: "registryAuthFrom" in: "query" description: | If the `X-Registry-Auth` header is not specified, this parameter indicates where to find registry authorization credentials. type: "string" enum: ["spec", "previous-spec"] default: "spec" - name: "rollback" in: "query" description: | Set to this parameter to `previous` to cause a server-side rollback to the previous service spec. The supplied spec will be ignored in this case. type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration for pulling from private registries. Refer to the [authentication section](#section/Authentication) for details. type: "string" tags: ["Service"] /services/{id}/logs: get: summary: "Get service logs" description: | Get `stdout` and `stderr` logs from a service. See also [`/containers/{id}/logs`](#operation/ContainerLogs). **Note**: This endpoint works only for services with the `local`, `json-file` or `journald` logging drivers. operationId: "ServiceLogs" responses: 200: description: "logs returned as a stream in response body" schema: type: "string" format: "binary" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such service: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the service" type: "string" - name: "details" in: "query" description: "Show service context and extra details provided to logs." type: "boolean" default: false - name: "follow" in: "query" description: "Keep connection after returning logs." 
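Service logs are requested per stream (`stdout`, `stderr`), can be limited with `tail`, and can be streamed live with `follow`. A sketch tailing the last 50 lines of a service, where the service name `web` is a placeholder and the response body is copied out as the raw stream the spec describes:

```go
package main

import (
	"context"
	"fmt"
	"io"
	"net"
	"net/http"
	"os"
)

func main() {
	cli := &http.Client{Transport: &http.Transport{
		DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
			return (&net.Dialer{}).DialContext(ctx, "unix", "/var/run/docker.sock")
		},
	}}

	// "web" is a placeholder service name; an ID works as well.
	resp, err := cli.Get("http://docker/v1.41/services/web/logs?stdout=true&stderr=true&tail=50")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status) // 404 if there is no such service

	// The body is a raw log stream; copy it to the terminal as-is.
	io.Copy(os.Stdout, resp.Body)
}
```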
type: "boolean" default: false - name: "stdout" in: "query" description: "Return logs from `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Return logs from `stderr`" type: "boolean" default: false - name: "since" in: "query" description: "Only return logs since this time, as a UNIX timestamp" type: "integer" default: 0 - name: "timestamps" in: "query" description: "Add timestamps to every log line" type: "boolean" default: false - name: "tail" in: "query" description: | Only return this number of log lines from the end of the logs. Specify as an integer or `all` to output all log lines. type: "string" default: "all" tags: ["Service"] /tasks: get: summary: "List tasks" operationId: "TaskList" produces: - "application/json" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Task" example: - ID: "0kzzo1i0y4jz6027t0k7aezc7" Version: Index: 71 CreatedAt: "2016-06-07T21:07:31.171892745Z" UpdatedAt: "2016-06-07T21:07:31.376370513Z" Spec: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ServiceID: "9mnpnzenvg8p8tdbtq4wvbkcz" Slot: 1 NodeID: "60gvrl6tm78dmak4yl7srz94v" Status: Timestamp: "2016-06-07T21:07:31.290032978Z" State: "running" Message: "started" ContainerStatus: ContainerID: "e5d62702a1b48d01c3e02ca1e0212a250801fa8d67caca0b6f35919ebc12f035" PID: 677 DesiredState: "running" NetworksAttachments: - Network: ID: "4qvuz4ko70xaltuqbt8956gd1" Version: Index: 18 CreatedAt: "2016-06-07T20:31:11.912919752Z" UpdatedAt: "2016-06-07T21:07:29.955277358Z" Spec: Name: "ingress" Labels: com.docker.swarm.internal: "true" DriverConfiguration: {} IPAMOptions: Driver: {} Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" DriverState: Name: "overlay" Options: com.docker.network.driver.overlay.vxlanid_list: "256" IPAMOptions: Driver: Name: "default" Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" Addresses: - "10.255.0.10/16" - ID: "1yljwbmlr8er2waf8orvqpwms" Version: Index: 30 CreatedAt: "2016-06-07T21:07:30.019104782Z" UpdatedAt: "2016-06-07T21:07:30.231958098Z" Name: "hopeful_cori" Spec: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ServiceID: "9mnpnzenvg8p8tdbtq4wvbkcz" Slot: 1 NodeID: "60gvrl6tm78dmak4yl7srz94v" Status: Timestamp: "2016-06-07T21:07:30.202183143Z" State: "shutdown" Message: "shutdown" ContainerStatus: ContainerID: "1cf8d63d18e79668b0004a4be4c6ee58cddfad2dae29506d8781581d0688a213" DesiredState: "shutdown" NetworksAttachments: - Network: ID: "4qvuz4ko70xaltuqbt8956gd1" Version: Index: 18 CreatedAt: "2016-06-07T20:31:11.912919752Z" UpdatedAt: "2016-06-07T21:07:29.955277358Z" Spec: Name: "ingress" Labels: com.docker.swarm.internal: "true" DriverConfiguration: {} IPAMOptions: Driver: {} Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" DriverState: Name: "overlay" Options: com.docker.network.driver.overlay.vxlanid_list: "256" IPAMOptions: Driver: Name: "default" Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" Addresses: - "10.255.0.5/16" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the tasks list. 
Available filters: - `desired-state=(running | shutdown | accepted)` - `id=<task id>` - `label=key` or `label="key=value"` - `name=<task name>` - `node=<node id or name>` - `service=<service name>` tags: ["Task"] /tasks/{id}: get: summary: "Inspect a task" operationId: "TaskInspect" produces: - "application/json" responses: 200: description: "no error" schema: $ref: "#/definitions/Task" 404: description: "no such task" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID of the task" required: true type: "string" tags: ["Task"] /tasks/{id}/logs: get: summary: "Get task logs" description: | Get `stdout` and `stderr` logs from a task. See also [`/containers/{id}/logs`](#operation/ContainerLogs). **Note**: This endpoint works only for services with the `local`, `json-file` or `journald` logging drivers. operationId: "TaskLogs" responses: 200: description: "logs returned as a stream in response body" schema: type: "string" format: "binary" 404: description: "no such task" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such task: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID of the task" type: "string" - name: "details" in: "query" description: "Show task context and extra details provided to logs." type: "boolean" default: false - name: "follow" in: "query" description: "Keep connection after returning logs." type: "boolean" default: false - name: "stdout" in: "query" description: "Return logs from `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Return logs from `stderr`" type: "boolean" default: false - name: "since" in: "query" description: "Only return logs since this time, as a UNIX timestamp" type: "integer" default: 0 - name: "timestamps" in: "query" description: "Add timestamps to every log line" type: "boolean" default: false - name: "tail" in: "query" description: | Only return this number of log lines from the end of the logs. Specify as an integer or `all` to output all log lines. type: "string" default: "all" tags: ["Task"] /secrets: get: summary: "List secrets" operationId: "SecretList" produces: - "application/json" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Secret" example: - ID: "blt1owaxmitz71s9v5zh81zun" Version: Index: 85 CreatedAt: "2017-07-20T13:55:28.678958722Z" UpdatedAt: "2017-07-20T13:55:28.678958722Z" Spec: Name: "mysql-passwd" Labels: some.label: "some.value" Driver: Name: "secret-bucket" Options: OptionA: "value for driver option A" OptionB: "value for driver option B" - ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "app-dev.crt" Labels: foo: "bar" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the secrets list. 
Available filters: - `id=<secret id>` - `label=<key> or label=<key>=value` - `name=<secret name>` - `names=<secret name>` tags: ["Secret"] /secrets/create: post: summary: "Create a secret" operationId: "SecretCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 409: description: "name conflicts with an existing object" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" schema: allOf: - $ref: "#/definitions/SecretSpec" - type: "object" example: Name: "app-key.crt" Labels: foo: "bar" Data: "VEhJUyBJUyBOT1QgQSBSRUFMIENFUlRJRklDQVRFCg==" Driver: Name: "secret-bucket" Options: OptionA: "value for driver option A" OptionB: "value for driver option B" tags: ["Secret"] /secrets/{id}: get: summary: "Inspect a secret" operationId: "SecretInspect" produces: - "application/json" responses: 200: description: "no error" schema: $ref: "#/definitions/Secret" examples: application/json: ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "app-dev.crt" Labels: foo: "bar" Driver: Name: "secret-bucket" Options: OptionA: "value for driver option A" OptionB: "value for driver option B" 404: description: "secret not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the secret" tags: ["Secret"] delete: summary: "Delete a secret" operationId: "SecretDelete" produces: - "application/json" responses: 204: description: "no error" 404: description: "secret not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the secret" tags: ["Secret"] /secrets/{id}/update: post: summary: "Update a Secret" operationId: "SecretUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such secret" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the secret" type: "string" required: true - name: "body" in: "body" schema: $ref: "#/definitions/SecretSpec" description: | The spec of the secret to update. Currently, only the Labels field can be updated. All other fields must remain unchanged from the [SecretInspect endpoint](#operation/SecretInspect) response values. - name: "version" in: "query" description: | The version number of the secret object being updated. This is required to avoid conflicting writes. 
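The secret payload travels base64-encoded in the `Data` field of the spec (the example value above decodes to plain text). A sketch creating a secret from an in-memory string, where the name and value are placeholders and standard base64 encoding is assumed:

```go
package main

import (
	"context"
	"encoding/base64"
	"encoding/json"
	"fmt"
	"net"
	"net/http"
	"strings"
)

func main() {
	cli := &http.Client{Transport: &http.Transport{
		DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
			return (&net.Dialer{}).DialContext(ctx, "unix", "/var/run/docker.sock")
		},
	}}

	// Base64-encode the secret material before putting it into the spec.
	data := base64.StdEncoding.EncodeToString([]byte("s3cr3t-value"))
	spec := fmt.Sprintf(`{"Name": "app-key", "Labels": {"foo": "bar"}, "Data": %q}`, data)

	resp, err := cli.Post("http://docker/v1.41/secrets/create", "application/json", strings.NewReader(spec))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var out struct{ ID string }
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		panic(err)
	}
	fmt.Println(resp.Status, out.ID) // 409 if a secret with that name already exists
}
```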
type: "integer" format: "int64" required: true tags: ["Secret"] /configs: get: summary: "List configs" operationId: "ConfigList" produces: - "application/json" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Config" example: - ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "server.conf" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the configs list. Available filters: - `id=<config id>` - `label=<key> or label=<key>=value` - `name=<config name>` - `names=<config name>` tags: ["Config"] /configs/create: post: summary: "Create a config" operationId: "ConfigCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 409: description: "name conflicts with an existing object" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" schema: allOf: - $ref: "#/definitions/ConfigSpec" - type: "object" example: Name: "server.conf" Labels: foo: "bar" Data: "VEhJUyBJUyBOT1QgQSBSRUFMIENFUlRJRklDQVRFCg==" tags: ["Config"] /configs/{id}: get: summary: "Inspect a config" operationId: "ConfigInspect" produces: - "application/json" responses: 200: description: "no error" schema: $ref: "#/definitions/Config" examples: application/json: ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "app-dev.crt" 404: description: "config not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the config" tags: ["Config"] delete: summary: "Delete a config" operationId: "ConfigDelete" produces: - "application/json" responses: 204: description: "no error" 404: description: "config not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the config" tags: ["Config"] /configs/{id}/update: post: summary: "Update a Config" operationId: "ConfigUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such config" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the config" type: "string" required: true - name: "body" in: "body" schema: $ref: "#/definitions/ConfigSpec" description: | The spec of the config to update. 
Currently, only the Labels field can be updated. All other fields must remain unchanged from the [ConfigInspect endpoint](#operation/ConfigInspect) response values. - name: "version" in: "query" description: | The version number of the config object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true tags: ["Config"] /distribution/{name}/json: get: summary: "Get image information from the registry" description: | Return image digest and platform information by contacting the registry. operationId: "DistributionInspect" produces: - "application/json" responses: 200: description: "descriptor and platform information" schema: type: "object" x-go-name: DistributionInspect title: "DistributionInspectResponse" required: [Descriptor, Platforms] properties: Descriptor: type: "object" description: | A descriptor struct containing digest, media type, and size. properties: mediaType: type: "string" size: type: "integer" format: "int64" digest: type: "string" urls: type: "array" items: type: "string" Platforms: type: "array" description: | An array containing all platforms supported by the image. items: type: "object" properties: architecture: type: "string" os: type: "string" os.version: type: "string" os.features: type: "array" items: type: "string" variant: type: "string" Features: type: "array" items: type: "string" examples: application/json: Descriptor: MediaType: "application/vnd.docker.distribution.manifest.v2+json" Digest: "sha256:c0537ff6a5218ef531ece93d4984efc99bbf3f7497c0a7726c88e2bb7584dc96" Size: 3987495 URLs: - "" Platforms: - architecture: "amd64" os: "linux" os.version: "" os.features: - "" variant: "" Features: - "" 401: description: "Failed authentication or no image found" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such image: someimage (tag: latest)" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or id" type: "string" required: true tags: ["Distribution"] /session: post: summary: "Initialize interactive session" description: | Start a new interactive session with a server. Session allows server to call back to the client for advanced capabilities. ### Hijacking This endpoint hijacks the HTTP connection to HTTP2 transport that allows the client to expose gPRC services on that connection. For example, the client sends this request to upgrade the connection: ``` POST /session HTTP/1.1 Upgrade: h2c Connection: Upgrade ``` The Docker daemon responds with a `101 UPGRADED` response follow with the raw stream: ``` HTTP/1.1 101 UPGRADED Connection: Upgrade Upgrade: h2c ``` operationId: "Session" produces: - "application/vnd.docker.raw-stream" responses: 101: description: "no error, hijacking successful" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Session"]
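`/distribution/{name}/json` is useful for resolving a tag to a digest and checking which platforms a registry provides for an image, without pulling it. A sketch of that call, with `nginx:alpine` as a placeholder image and the same Unix-socket client assumptions as the earlier examples:

```go
package main

import (
	"context"
	"encoding/json"
	"fmt"
	"net"
	"net/http"
)

func main() {
	cli := &http.Client{Transport: &http.Transport{
		DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
			return (&net.Dialer{}).DialContext(ctx, "unix", "/var/run/docker.sock")
		},
	}}

	resp, err := cli.Get("http://docker/v1.41/distribution/nginx:alpine/json")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var info struct {
		Descriptor struct {
			MediaType string `json:"mediaType"`
			Digest    string `json:"digest"`
			Size      int64  `json:"size"`
		}
		Platforms []struct{ Architecture, Os string }
	}
	if err := json.NewDecoder(resp.Body).Decode(&info); err != nil {
		panic(err)
	}
	fmt.Println("digest:", info.Descriptor.Digest)
	for _, p := range info.Platforms {
		fmt.Println("platform:", p.Os+"/"+p.Architecture)
	}
}
```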
gesellix
41568dfc665ad4033d7fc860a5dac8102915008e
9bc0c4903f7f02ce287b9918f64795368e507f9d
Thanks for your suggestion, looks perfect. I just applied your patch, let's see what the CI will find :)
gesellix
4,554
moby/moby
42,621
Improve swagger.yaml to match the 1.41 api version
I recently tried to rely on the swagger.yaml for code generation of an api client. Some parts required manual fixes of the swagger docs to match the current api version 1.41. Each change described below lives in a dedicated commit, because they aim at different parts of the api description and aren't related per se. If you'd prefer to merge them all into a single commit, please leave a note. **- What I did** 1. Add RestartPolicy "no" to swagger docs 2. Add "changes" query parameter for /image/create to swagger docs 3. Fix ContainerSummary swagger docs (flattened) 4. Use explicit object names for improved swagger-based code generation (otherwise generic names would have been generated) 5. Fix swagger docs to match the [opencontainers image-spec](https://github.com/opencontainers/image-spec/blob/5ced465cc63831baf25330431a428a9d9444e192/specs-go/v1/descriptor.go) **- How I did it** Used the 1.41 swagger.yaml with swagger's codegen to generate Java code. Used that code to perform requests to a 1.41 Docker engine. Applied some fixes and ran `hack/validate/swagger` and `make swagger-docs` to validate the changes. **- How to verify it** Compare the actual 1.41 api with the swagger.yaml. **- Description for the changelog** Update the swagger.yaml to match the version 1.41 api. **- A picture of a cute animal (not mandatory but encouraged)** ![062321_JC_mountain-cat_feat](https://user-images.githubusercontent.com/432791/125209896-77588e80-e29c-11eb-9a5f-439f5f04a69b.jpeg)
null
2021-07-11 21:09:21+00:00
2021-08-21 22:28:57+00:00
api/swagger.yaml
# A Swagger 2.0 (a.k.a. OpenAPI) definition of the Engine API. # # This is used for generating API documentation and the types used by the # client/server. See api/README.md for more information. # # Some style notes: # - This file is used by ReDoc, which allows GitHub Flavored Markdown in # descriptions. # - There is no maximum line length, for ease of editing and pretty diffs. # - operationIds are in the format "NounVerb", with a singular noun. swagger: "2.0" schemes: - "http" - "https" produces: - "application/json" - "text/plain" consumes: - "application/json" - "text/plain" basePath: "/v1.42" info: title: "Docker Engine API" version: "1.42" x-logo: url: "https://docs.docker.com/images/logo-docker-main.png" description: | The Engine API is an HTTP API served by Docker Engine. It is the API the Docker client uses to communicate with the Engine, so everything the Docker client can do can be done with the API. Most of the client's commands map directly to API endpoints (e.g. `docker ps` is `GET /containers/json`). The notable exception is running containers, which consists of several API calls. # Errors The API uses standard HTTP status codes to indicate the success or failure of the API call. The body of the response will be JSON in the following format: ``` { "message": "page not found" } ``` # Versioning The API is usually changed in each release, so API calls are versioned to ensure that clients don't break. To lock to a specific version of the API, you prefix the URL with its version, for example, call `/v1.30/info` to use the v1.30 version of the `/info` endpoint. If the API version specified in the URL is not supported by the daemon, a HTTP `400 Bad Request` error message is returned. If you omit the version-prefix, the current version of the API (v1.42) is used. For example, calling `/info` is the same as calling `/v1.42/info`. Using the API without a version-prefix is deprecated and will be removed in a future release. Engine releases in the near future should support this version of the API, so your client will continue to work even if it is talking to a newer Engine. The API uses an open schema model, which means server may add extra properties to responses. Likewise, the server will ignore any extra query parameters and request body properties. When you write clients, you need to ignore additional properties in responses to ensure they do not break when talking to newer daemons. # Authentication Authentication for registries is handled client side. The client has to send authentication details to various endpoints that need to communicate with registries, such as `POST /images/(name)/push`. These are sent as `X-Registry-Auth` header as a [base64url encoded](https://tools.ietf.org/html/rfc4648#section-5) (JSON) string with the following structure: ``` { "username": "string", "password": "string", "email": "string", "serveraddress": "string" } ``` The `serveraddress` is a domain/IP without a protocol. Throughout this structure, double quotes are required. If you have already got an identity token from the [`/auth` endpoint](#operation/SystemAuth), you can just pass this instead of credentials: ``` { "identitytoken": "9cbaf023786cd7..." } ``` # The tags on paths define the menu sections in the ReDoc documentation, so # the usage of tags must make sense for that: # - They should be singular, not plural. # - There should not be too many tags, or the menu becomes unwieldy. 
For # example, it is preferable to add a path to the "System" tag instead of # creating a tag with a single path in it. # - The order of tags in this list defines the order in the menu. tags: # Primary objects - name: "Container" x-displayName: "Containers" description: | Create and manage containers. - name: "Image" x-displayName: "Images" - name: "Network" x-displayName: "Networks" description: | Networks are user-defined networks that containers can be attached to. See the [networking documentation](https://docs.docker.com/network/) for more information. - name: "Volume" x-displayName: "Volumes" description: | Create and manage persistent storage that can be attached to containers. - name: "Exec" x-displayName: "Exec" description: | Run new commands inside running containers. Refer to the [command-line reference](https://docs.docker.com/engine/reference/commandline/exec/) for more information. To exec a command in a container, you first need to create an exec instance, then start it. These two API endpoints are wrapped up in a single command-line command, `docker exec`. # Swarm things - name: "Swarm" x-displayName: "Swarm" description: | Engines can be clustered together in a swarm. Refer to the [swarm mode documentation](https://docs.docker.com/engine/swarm/) for more information. - name: "Node" x-displayName: "Nodes" description: | Nodes are instances of the Engine participating in a swarm. Swarm mode must be enabled for these endpoints to work. - name: "Service" x-displayName: "Services" description: | Services are the definitions of tasks to run on a swarm. Swarm mode must be enabled for these endpoints to work. - name: "Task" x-displayName: "Tasks" description: | A task is a container running on a swarm. It is the atomic scheduling unit of swarm. Swarm mode must be enabled for these endpoints to work. - name: "Secret" x-displayName: "Secrets" description: | Secrets are sensitive data that can be used by services. Swarm mode must be enabled for these endpoints to work. - name: "Config" x-displayName: "Configs" description: | Configs are application configurations that can be used by services. Swarm mode must be enabled for these endpoints to work. 
# System things - name: "Plugin" x-displayName: "Plugins" - name: "System" x-displayName: "System" definitions: Port: type: "object" description: "An open port on a container" required: [PrivatePort, Type] properties: IP: type: "string" format: "ip-address" description: "Host IP address that the container's port is mapped to" PrivatePort: type: "integer" format: "uint16" x-nullable: false description: "Port on the container" PublicPort: type: "integer" format: "uint16" description: "Port exposed on the host" Type: type: "string" x-nullable: false enum: ["tcp", "udp", "sctp"] example: PrivatePort: 8080 PublicPort: 80 Type: "tcp" MountPoint: type: "object" description: "A mount point inside a container" properties: Type: type: "string" Name: type: "string" Source: type: "string" Destination: type: "string" Driver: type: "string" Mode: type: "string" RW: type: "boolean" Propagation: type: "string" DeviceMapping: type: "object" description: "A device mapping between the host and container" properties: PathOnHost: type: "string" PathInContainer: type: "string" CgroupPermissions: type: "string" example: PathOnHost: "/dev/deviceName" PathInContainer: "/dev/deviceName" CgroupPermissions: "mrw" DeviceRequest: type: "object" description: "A request for devices to be sent to device drivers" properties: Driver: type: "string" example: "nvidia" Count: type: "integer" example: -1 DeviceIDs: type: "array" items: type: "string" example: - "0" - "1" - "GPU-fef8089b-4820-abfc-e83e-94318197576e" Capabilities: description: | A list of capabilities; an OR list of AND lists of capabilities. type: "array" items: type: "array" items: type: "string" example: # gpu AND nvidia AND compute - ["gpu", "nvidia", "compute"] Options: description: | Driver-specific options, specified as a key/value pairs. These options are passed directly to the driver. type: "object" additionalProperties: type: "string" ThrottleDevice: type: "object" properties: Path: description: "Device path" type: "string" Rate: description: "Rate" type: "integer" format: "int64" minimum: 0 Mount: type: "object" properties: Target: description: "Container path." type: "string" Source: description: "Mount source (e.g. a volume name, a host path)." type: "string" Type: description: | The mount type. Available types: - `bind` Mounts a file or directory from the host into the container. Must exist prior to creating the container. - `volume` Creates a volume with the given name and options (or uses a pre-existing volume with the same name and options). These are **not** removed when the container is removed. - `tmpfs` Create a tmpfs with the given options. The mount source cannot be specified for tmpfs. - `npipe` Mounts a named pipe from the host into the container. Must exist prior to creating the container. type: "string" enum: - "bind" - "volume" - "tmpfs" - "npipe" ReadOnly: description: "Whether the mount should be read-only." type: "boolean" Consistency: description: "The consistency requirement for the mount: `default`, `consistent`, `cached`, or `delegated`." type: "string" BindOptions: description: "Optional configuration for the `bind` type." type: "object" properties: Propagation: description: "A propagation mode with the value `[r]private`, `[r]shared`, or `[r]slave`." type: "string" enum: - "private" - "rprivate" - "shared" - "rshared" - "slave" - "rslave" NonRecursive: description: "Disable recursive bind mount." type: "boolean" default: false VolumeOptions: description: "Optional configuration for the `volume` type." 
type: "object" properties: NoCopy: description: "Populate volume with data from the target." type: "boolean" default: false Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" DriverConfig: description: "Map of driver specific options" type: "object" properties: Name: description: "Name of the driver to use to create the volume." type: "string" Options: description: "key/value map of driver specific options." type: "object" additionalProperties: type: "string" TmpfsOptions: description: "Optional configuration for the `tmpfs` type." type: "object" properties: SizeBytes: description: "The size for the tmpfs mount in bytes." type: "integer" format: "int64" Mode: description: "The permission mode for the tmpfs mount in an integer." type: "integer" RestartPolicy: description: | The behavior to apply when the container exits. The default is not to restart. An ever increasing delay (double the previous delay, starting at 100ms) is added before each restart to prevent flooding the server. type: "object" properties: Name: type: "string" description: | - Empty string means not to restart - `always` Always restart - `unless-stopped` Restart always except when the user has manually stopped the container - `on-failure` Restart only when the container exit code is non-zero enum: - "" - "always" - "unless-stopped" - "on-failure" MaximumRetryCount: type: "integer" description: | If `on-failure` is used, the number of times to retry before giving up. Resources: description: "A container's resources (cgroups config, ulimits, etc)" type: "object" properties: # Applicable to all platforms CpuShares: description: | An integer value representing this container's relative CPU weight versus other containers. type: "integer" Memory: description: "Memory limit in bytes." type: "integer" format: "int64" default: 0 # Applicable to UNIX platforms CgroupParent: description: | Path to `cgroups` under which the container's `cgroup` is created. If the path is not absolute, the path is considered to be relative to the `cgroups` path of the init process. Cgroups are created if they do not already exist. type: "string" BlkioWeight: description: "Block IO weight (relative weight)." type: "integer" minimum: 0 maximum: 1000 BlkioWeightDevice: description: | Block IO weight (relative device weight) in the form: ``` [{"Path": "device_path", "Weight": weight}] ``` type: "array" items: type: "object" properties: Path: type: "string" Weight: type: "integer" minimum: 0 BlkioDeviceReadBps: description: | Limit read rate (bytes per second) from a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" BlkioDeviceWriteBps: description: | Limit write rate (bytes per second) to a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" BlkioDeviceReadIOps: description: | Limit read rate (IO per second) from a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" BlkioDeviceWriteIOps: description: | Limit write rate (IO per second) to a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" CpuPeriod: description: "The length of a CPU period in microseconds." type: "integer" format: "int64" CpuQuota: description: | Microseconds of CPU time that the container can get in a CPU period. 
type: "integer" format: "int64" CpuRealtimePeriod: description: | The length of a CPU real-time period in microseconds. Set to 0 to allocate no time allocated to real-time tasks. type: "integer" format: "int64" CpuRealtimeRuntime: description: | The length of a CPU real-time runtime in microseconds. Set to 0 to allocate no time allocated to real-time tasks. type: "integer" format: "int64" CpusetCpus: description: | CPUs in which to allow execution (e.g., `0-3`, `0,1`). type: "string" example: "0-3" CpusetMems: description: | Memory nodes (MEMs) in which to allow execution (0-3, 0,1). Only effective on NUMA systems. type: "string" Devices: description: "A list of devices to add to the container." type: "array" items: $ref: "#/definitions/DeviceMapping" DeviceCgroupRules: description: "a list of cgroup rules to apply to the container" type: "array" items: type: "string" example: "c 13:* rwm" DeviceRequests: description: | A list of requests for devices to be sent to device drivers. type: "array" items: $ref: "#/definitions/DeviceRequest" KernelMemory: description: | Kernel memory limit in bytes. <p><br /></p> > **Deprecated**: This field is deprecated as the kernel 5.4 deprecated > `kmem.limit_in_bytes`. type: "integer" format: "int64" example: 209715200 KernelMemoryTCP: description: "Hard limit for kernel TCP buffer memory (in bytes)." type: "integer" format: "int64" MemoryReservation: description: "Memory soft limit in bytes." type: "integer" format: "int64" MemorySwap: description: | Total memory limit (memory + swap). Set as `-1` to enable unlimited swap. type: "integer" format: "int64" MemorySwappiness: description: | Tune a container's memory swappiness behavior. Accepts an integer between 0 and 100. type: "integer" format: "int64" minimum: 0 maximum: 100 NanoCpus: description: "CPU quota in units of 10<sup>-9</sup> CPUs." type: "integer" format: "int64" OomKillDisable: description: "Disable OOM Killer for the container." type: "boolean" Init: description: | Run an init inside the container that forwards signals and reaps processes. This field is omitted if empty, and the default (as configured on the daemon) is used. type: "boolean" x-nullable: true PidsLimit: description: | Tune a container's PIDs limit. Set `0` or `-1` for unlimited, or `null` to not change. type: "integer" format: "int64" x-nullable: true Ulimits: description: | A list of resource limits to set in the container. For example: ``` {"Name": "nofile", "Soft": 1024, "Hard": 2048} ``` type: "array" items: type: "object" properties: Name: description: "Name of ulimit" type: "string" Soft: description: "Soft limit" type: "integer" Hard: description: "Hard limit" type: "integer" # Applicable to Windows CpuCount: description: | The number of usable CPUs (Windows only). On Windows Server containers, the processor resource controls are mutually exclusive. The order of precedence is `CPUCount` first, then `CPUShares`, and `CPUPercent` last. type: "integer" format: "int64" CpuPercent: description: | The usable percentage of the available CPUs (Windows only). On Windows Server containers, the processor resource controls are mutually exclusive. The order of precedence is `CPUCount` first, then `CPUShares`, and `CPUPercent` last. type: "integer" format: "int64" IOMaximumIOps: description: "Maximum IOps for the container system drive (Windows only)" type: "integer" format: "int64" IOMaximumBandwidth: description: | Maximum IO in bytes per second for the container system drive (Windows only). 
type: "integer" format: "int64" Limit: description: | An object describing a limit on resources which can be requested by a task. type: "object" properties: NanoCPUs: type: "integer" format: "int64" example: 4000000000 MemoryBytes: type: "integer" format: "int64" example: 8272408576 Pids: description: | Limits the maximum number of PIDs in the container. Set `0` for unlimited. type: "integer" format: "int64" default: 0 example: 100 ResourceObject: description: | An object describing the resources which can be advertised by a node and requested by a task. type: "object" properties: NanoCPUs: type: "integer" format: "int64" example: 4000000000 MemoryBytes: type: "integer" format: "int64" example: 8272408576 GenericResources: $ref: "#/definitions/GenericResources" GenericResources: description: | User-defined resources can be either Integer resources (e.g, `SSD=3`) or String resources (e.g, `GPU=UUID1`). type: "array" items: type: "object" properties: NamedResourceSpec: type: "object" properties: Kind: type: "string" Value: type: "string" DiscreteResourceSpec: type: "object" properties: Kind: type: "string" Value: type: "integer" format: "int64" example: - DiscreteResourceSpec: Kind: "SSD" Value: 3 - NamedResourceSpec: Kind: "GPU" Value: "UUID1" - NamedResourceSpec: Kind: "GPU" Value: "UUID2" HealthConfig: description: "A test to perform to check that the container is healthy." type: "object" properties: Test: description: | The test to perform. Possible values are: - `[]` inherit healthcheck from image or parent image - `["NONE"]` disable healthcheck - `["CMD", args...]` exec arguments directly - `["CMD-SHELL", command]` run command with system's default shell type: "array" items: type: "string" Interval: description: | The time to wait between checks in nanoseconds. It should be 0 or at least 1000000 (1 ms). 0 means inherit. type: "integer" Timeout: description: | The time to wait before considering the check to have hung. It should be 0 or at least 1000000 (1 ms). 0 means inherit. type: "integer" Retries: description: | The number of consecutive failures needed to consider a container as unhealthy. 0 means inherit. type: "integer" StartPeriod: description: | Start period for the container to initialize before starting health-retries countdown in nanoseconds. It should be 0 or at least 1000000 (1 ms). 0 means inherit. type: "integer" Health: description: | Health stores information about the container's healthcheck results. type: "object" properties: Status: description: | Status is one of `none`, `starting`, `healthy` or `unhealthy` - "none" Indicates there is no healthcheck - "starting" Starting indicates that the container is not yet ready - "healthy" Healthy indicates that the container is running correctly - "unhealthy" Unhealthy indicates that the container has a problem type: "string" enum: - "none" - "starting" - "healthy" - "unhealthy" example: "healthy" FailingStreak: description: "FailingStreak is the number of consecutive failures" type: "integer" example: 0 Log: type: "array" description: | Log contains the last few results (oldest first) items: x-nullable: true $ref: "#/definitions/HealthcheckResult" HealthcheckResult: description: | HealthcheckResult stores information about a single run of a healthcheck probe type: "object" properties: Start: description: | Date and time at which this check started in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. 
type: "string" format: "date-time" example: "2020-01-04T10:44:24.496525531Z" End: description: | Date and time at which this check ended in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2020-01-04T10:45:21.364524523Z" ExitCode: description: | ExitCode meanings: - `0` healthy - `1` unhealthy - `2` reserved (considered unhealthy) - other values: error running probe type: "integer" example: 0 Output: description: "Output from last check" type: "string" HostConfig: description: "Container configuration that depends on the host we are running on" allOf: - $ref: "#/definitions/Resources" - type: "object" properties: # Applicable to all platforms Binds: type: "array" description: | A list of volume bindings for this container. Each volume binding is a string in one of these forms: - `host-src:container-dest[:options]` to bind-mount a host path into the container. Both `host-src`, and `container-dest` must be an _absolute_ path. - `volume-name:container-dest[:options]` to bind-mount a volume managed by a volume driver into the container. `container-dest` must be an _absolute_ path. `options` is an optional, comma-delimited list of: - `nocopy` disables automatic copying of data from the container path to the volume. The `nocopy` flag only applies to named volumes. - `[ro|rw]` mounts a volume read-only or read-write, respectively. If omitted or set to `rw`, volumes are mounted read-write. - `[z|Z]` applies SELinux labels to allow or deny multiple containers to read and write to the same volume. - `z`: a _shared_ content label is applied to the content. This label indicates that multiple containers can share the volume content, for both reading and writing. - `Z`: a _private unshared_ label is applied to the content. This label indicates that only the current container can use a private volume. Labeling systems such as SELinux require proper labels to be placed on volume content that is mounted into a container. Without a label, the security system can prevent a container's processes from using the content. By default, the labels set by the host operating system are not modified. - `[[r]shared|[r]slave|[r]private]` specifies mount [propagation behavior](https://www.kernel.org/doc/Documentation/filesystems/sharedsubtree.txt). This only applies to bind-mounted volumes, not internal volumes or named volumes. Mount propagation requires the source mount point (the location where the source directory is mounted in the host operating system) to have the correct propagation properties. For shared volumes, the source mount point must be set to `shared`. For slave volumes, the mount must be set to either `shared` or `slave`. items: type: "string" ContainerIDFile: type: "string" description: "Path to a file where the container ID is written" LogConfig: type: "object" description: "The logging configuration for this container" properties: Type: type: "string" enum: - "json-file" - "syslog" - "journald" - "gelf" - "fluentd" - "awslogs" - "splunk" - "etwlogs" - "none" Config: type: "object" additionalProperties: type: "string" NetworkMode: type: "string" description: | Network mode to use for this container. Supported standard values are: `bridge`, `host`, `none`, and `container:<name|id>`. Any other value is taken as a custom network's name to which this container should connect to. 
PortBindings: $ref: "#/definitions/PortMap" RestartPolicy: $ref: "#/definitions/RestartPolicy" AutoRemove: type: "boolean" description: | Automatically remove the container when the container's process exits. This has no effect if `RestartPolicy` is set. VolumeDriver: type: "string" description: "Driver that this container uses to mount volumes." VolumesFrom: type: "array" description: | A list of volumes to inherit from another container, specified in the form `<container name>[:<ro|rw>]`. items: type: "string" Mounts: description: | Specification for mounts to be added to the container. type: "array" items: $ref: "#/definitions/Mount" # Applicable to UNIX platforms CapAdd: type: "array" description: | A list of kernel capabilities to add to the container. Conflicts with option 'Capabilities'. items: type: "string" CapDrop: type: "array" description: | A list of kernel capabilities to drop from the container. Conflicts with option 'Capabilities'. items: type: "string" CgroupnsMode: type: "string" enum: - "private" - "host" description: | cgroup namespace mode for the container. Possible values are: - `"private"`: the container runs in its own private cgroup namespace - `"host"`: use the host system's cgroup namespace If not specified, the daemon default is used, which can either be `"private"` or `"host"`, depending on daemon version, kernel support and configuration. Dns: type: "array" description: "A list of DNS servers for the container to use." items: type: "string" DnsOptions: type: "array" description: "A list of DNS options." items: type: "string" DnsSearch: type: "array" description: "A list of DNS search domains." items: type: "string" ExtraHosts: type: "array" description: | A list of hostnames/IP mappings to add to the container's `/etc/hosts` file. Specified in the form `["hostname:IP"]`. items: type: "string" GroupAdd: type: "array" description: | A list of additional groups that the container process will run as. items: type: "string" IpcMode: type: "string" description: | IPC sharing mode for the container. Possible values are: - `"none"`: own private IPC namespace, with /dev/shm not mounted - `"private"`: own private IPC namespace - `"shareable"`: own private IPC namespace, with a possibility to share it with other containers - `"container:<name|id>"`: join another (shareable) container's IPC namespace - `"host"`: use the host system's IPC namespace If not specified, daemon default is used, which can either be `"private"` or `"shareable"`, depending on daemon version and configuration. Cgroup: type: "string" description: "Cgroup to use for the container." Links: type: "array" description: | A list of links for the container in the form `container_name:alias`. items: type: "string" OomScoreAdj: type: "integer" description: | An integer value containing the score given to the container in order to tune OOM killer preferences. example: 500 PidMode: type: "string" description: | Set the PID (Process) Namespace mode for the container. It can be either: - `"container:<name|id>"`: joins another container's PID namespace - `"host"`: use the host's PID namespace inside the container Privileged: type: "boolean" description: "Gives the container full access to the host." PublishAllPorts: type: "boolean" description: | Allocates an ephemeral host port for all of a container's exposed ports. Ports are de-allocated when the container stops and allocated when the container starts. The allocated port might be changed when restarting the container. 
The port is selected from the ephemeral port range that depends on the kernel. For example, on Linux the range is defined by `/proc/sys/net/ipv4/ip_local_port_range`. ReadonlyRootfs: type: "boolean" description: "Mount the container's root filesystem as read only." SecurityOpt: type: "array" description: "A list of string values to customize labels for MLS systems, such as SELinux." items: type: "string" StorageOpt: type: "object" description: | Storage driver options for this container, in the form `{"size": "120G"}`. additionalProperties: type: "string" Tmpfs: type: "object" description: | A map of container directories which should be replaced by tmpfs mounts, and their corresponding mount options. For example: ``` { "/run": "rw,noexec,nosuid,size=65536k" } ``` additionalProperties: type: "string" UTSMode: type: "string" description: "UTS namespace to use for the container." UsernsMode: type: "string" description: | Sets the usernamespace mode for the container when usernamespace remapping option is enabled. ShmSize: type: "integer" description: | Size of `/dev/shm` in bytes. If omitted, the system uses 64MB. minimum: 0 Sysctls: type: "object" description: | A list of kernel parameters (sysctls) to set in the container. For example: ``` {"net.ipv4.ip_forward": "1"} ``` additionalProperties: type: "string" Runtime: type: "string" description: "Runtime to use with this container." # Applicable to Windows ConsoleSize: type: "array" description: | Initial console size, as an `[height, width]` array. (Windows only) minItems: 2 maxItems: 2 items: type: "integer" minimum: 0 Isolation: type: "string" description: | Isolation technology of the container. (Windows only) enum: - "default" - "process" - "hyperv" MaskedPaths: type: "array" description: | The list of paths to be masked inside the container (this overrides the default set of paths). items: type: "string" ReadonlyPaths: type: "array" description: | The list of paths to be set as read-only inside the container (this overrides the default set of paths). items: type: "string" ContainerConfig: description: "Configuration for a container that is portable between hosts" type: "object" properties: Hostname: description: "The hostname to use for the container, as a valid RFC 1123 hostname." type: "string" Domainname: description: "The domain name to use for the container." type: "string" User: description: "The user that commands are run as inside the container." type: "string" AttachStdin: description: "Whether to attach to `stdin`." type: "boolean" default: false AttachStdout: description: "Whether to attach to `stdout`." type: "boolean" default: true AttachStderr: description: "Whether to attach to `stderr`." type: "boolean" default: true ExposedPorts: description: | An object mapping ports to an empty object in the form: `{"<port>/<tcp|udp|sctp>": {}}` type: "object" additionalProperties: type: "object" enum: - {} default: {} Tty: description: | Attach standard streams to a TTY, including `stdin` if it is not closed. type: "boolean" default: false OpenStdin: description: "Open `stdin`" type: "boolean" default: false StdinOnce: description: "Close `stdin` after one attached client disconnects" type: "boolean" default: false Env: description: | A list of environment variables to set inside the container in the form `["VAR=value", ...]`. A variable without `=` is removed from the environment, rather than to have an empty value. type: "array" items: type: "string" Cmd: description: | Command to run specified as a string or an array of strings. 
type: "array" items: type: "string" Healthcheck: $ref: "#/definitions/HealthConfig" ArgsEscaped: description: "Command is already escaped (Windows only)" type: "boolean" Image: description: | The name of the image to use when creating the container/ type: "string" Volumes: description: | An object mapping mount point paths inside the container to empty objects. type: "object" additionalProperties: type: "object" enum: - {} default: {} WorkingDir: description: "The working directory for commands to run in." type: "string" Entrypoint: description: | The entry point for the container as a string or an array of strings. If the array consists of exactly one empty string (`[""]`) then the entry point is reset to system default (i.e., the entry point used by docker when there is no `ENTRYPOINT` instruction in the `Dockerfile`). type: "array" items: type: "string" NetworkDisabled: description: "Disable networking for the container." type: "boolean" MacAddress: description: "MAC address of the container." type: "string" OnBuild: description: | `ONBUILD` metadata that were defined in the image's `Dockerfile`. type: "array" items: type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" StopSignal: description: | Signal to stop a container as a string or unsigned integer. type: "string" default: "SIGTERM" StopTimeout: description: "Timeout to stop a container in seconds." type: "integer" default: 10 Shell: description: | Shell for when `RUN`, `CMD`, and `ENTRYPOINT` uses a shell. type: "array" items: type: "string" NetworkingConfig: description: | NetworkingConfig represents the container's networking configuration for each of its interfaces. It is used for the networking configs specified in the `docker create` and `docker network connect` commands. type: "object" properties: EndpointsConfig: description: | A mapping of network name to endpoint configuration for that network. type: "object" additionalProperties: $ref: "#/definitions/EndpointSettings" example: # putting an example here, instead of using the example values from # /definitions/EndpointSettings, because containers/create currently # does not support attaching to multiple networks, so the example request # would be confusing if it showed that multiple networks can be contained # in the EndpointsConfig. # TODO remove once we support multiple networks on container create (see https://github.com/moby/moby/blob/07e6b843594e061f82baa5fa23c2ff7d536c2a05/daemon/create.go#L323) EndpointsConfig: isolated_nw: IPAMConfig: IPv4Address: "172.20.30.33" IPv6Address: "2001:db8:abcd::3033" LinkLocalIPs: - "169.254.34.68" - "fe80::3468" Links: - "container_1" - "container_2" Aliases: - "server_x" - "server_y" NetworkSettings: description: "NetworkSettings exposes the network settings in the API" type: "object" properties: Bridge: description: Name of the network's bridge (for example, `docker0`). type: "string" example: "docker0" SandboxID: description: SandboxID uniquely represents a container's network stack. type: "string" example: "9d12daf2c33f5959c8bf90aa513e4f65b561738661003029ec84830cd503a0c3" HairpinMode: description: | Indicates if hairpin NAT should be enabled on the virtual interface. type: "boolean" example: false LinkLocalIPv6Address: description: IPv6 unicast address using the link-local prefix. type: "string" example: "fe80::42:acff:fe11:1" LinkLocalIPv6PrefixLen: description: Prefix length of the IPv6 unicast address. 
type: "integer" example: "64" Ports: $ref: "#/definitions/PortMap" SandboxKey: description: SandboxKey identifies the sandbox type: "string" example: "/var/run/docker/netns/8ab54b426c38" # TODO is SecondaryIPAddresses actually used? SecondaryIPAddresses: description: "" type: "array" items: $ref: "#/definitions/Address" x-nullable: true # TODO is SecondaryIPv6Addresses actually used? SecondaryIPv6Addresses: description: "" type: "array" items: $ref: "#/definitions/Address" x-nullable: true # TODO properties below are part of DefaultNetworkSettings, which is # marked as deprecated since Docker 1.9 and to be removed in Docker v17.12 EndpointID: description: | EndpointID uniquely represents a service endpoint in a Sandbox. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "b88f5b905aabf2893f3cbc4ee42d1ea7980bbc0a92e2c8922b1e1795298afb0b" Gateway: description: | Gateway address for the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "172.17.0.1" GlobalIPv6Address: description: | Global IPv6 address for the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "2001:db8::5689" GlobalIPv6PrefixLen: description: | Mask length of the global IPv6 address. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "integer" example: 64 IPAddress: description: | IPv4 address for the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "172.17.0.4" IPPrefixLen: description: | Mask length of the IPv4 address. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "integer" example: 16 IPv6Gateway: description: | IPv6 gateway address for this network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. 
Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "2001:db8:2::100" MacAddress: description: | MAC address for the container on the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "02:42:ac:11:00:04" Networks: description: | Information about all networks that the container is connected to. type: "object" additionalProperties: $ref: "#/definitions/EndpointSettings" Address: description: Address represents an IPv4 or IPv6 IP address. type: "object" properties: Addr: description: IP address. type: "string" PrefixLen: description: Mask length of the IP address. type: "integer" PortMap: description: | PortMap describes the mapping of container ports to host ports, using the container's port-number and protocol as key in the format `<port>/<protocol>`, for example, `80/udp`. If a container's port is mapped for multiple protocols, separate entries are added to the mapping table. type: "object" additionalProperties: type: "array" x-nullable: true items: $ref: "#/definitions/PortBinding" example: "443/tcp": - HostIp: "127.0.0.1" HostPort: "4443" "80/tcp": - HostIp: "0.0.0.0" HostPort: "80" - HostIp: "0.0.0.0" HostPort: "8080" "80/udp": - HostIp: "0.0.0.0" HostPort: "80" "53/udp": - HostIp: "0.0.0.0" HostPort: "53" "2377/tcp": null PortBinding: description: | PortBinding represents a binding between a host IP address and a host port. type: "object" properties: HostIp: description: "Host IP address that the container's port is mapped to." type: "string" example: "127.0.0.1" HostPort: description: "Host port number that the container's port is mapped to." type: "string" example: "4443" GraphDriverData: description: "Information about a container's graph driver." 
type: "object" required: [Name, Data] properties: Name: type: "string" x-nullable: false Data: type: "object" x-nullable: false additionalProperties: type: "string" Image: type: "object" required: - Id - Parent - Comment - Created - Container - DockerVersion - Author - Architecture - Os - Size - VirtualSize - GraphDriver - RootFS properties: Id: type: "string" x-nullable: false RepoTags: type: "array" items: type: "string" RepoDigests: type: "array" items: type: "string" Parent: type: "string" x-nullable: false Comment: type: "string" x-nullable: false Created: type: "string" x-nullable: false Container: type: "string" x-nullable: false ContainerConfig: $ref: "#/definitions/ContainerConfig" DockerVersion: type: "string" x-nullable: false Author: type: "string" x-nullable: false Config: $ref: "#/definitions/ContainerConfig" Architecture: type: "string" x-nullable: false Os: type: "string" x-nullable: false OsVersion: type: "string" Size: type: "integer" format: "int64" x-nullable: false VirtualSize: type: "integer" format: "int64" x-nullable: false GraphDriver: $ref: "#/definitions/GraphDriverData" RootFS: type: "object" required: [Type] properties: Type: type: "string" x-nullable: false Layers: type: "array" items: type: "string" BaseLayer: type: "string" Metadata: type: "object" properties: LastTagTime: type: "string" format: "dateTime" ImageSummary: type: "object" required: - Id - ParentId - RepoTags - RepoDigests - Created - Size - SharedSize - VirtualSize - Labels - Containers properties: Id: type: "string" x-nullable: false ParentId: type: "string" x-nullable: false RepoTags: type: "array" x-nullable: false items: type: "string" RepoDigests: type: "array" x-nullable: false items: type: "string" Created: type: "integer" x-nullable: false Size: type: "integer" x-nullable: false SharedSize: type: "integer" x-nullable: false VirtualSize: type: "integer" x-nullable: false Labels: type: "object" x-nullable: false additionalProperties: type: "string" Containers: x-nullable: false type: "integer" AuthConfig: type: "object" properties: username: type: "string" password: type: "string" email: type: "string" serveraddress: type: "string" example: username: "hannibal" password: "xxxx" serveraddress: "https://index.docker.io/v1/" ProcessConfig: type: "object" properties: privileged: type: "boolean" user: type: "string" tty: type: "boolean" entrypoint: type: "string" arguments: type: "array" items: type: "string" Volume: type: "object" required: [Name, Driver, Mountpoint, Labels, Scope, Options] properties: Name: type: "string" description: "Name of the volume." x-nullable: false Driver: type: "string" description: "Name of the volume driver used by the volume." x-nullable: false Mountpoint: type: "string" description: "Mount path of the volume on the host." x-nullable: false CreatedAt: type: "string" format: "dateTime" description: "Date/Time the volume was created." Status: type: "object" description: | Low-level details about the volume, provided by the volume driver. Details are returned as a map with key/value pairs: `{"key":"value","key2":"value2"}`. The `Status` field is optional, and is omitted if the volume driver does not support this feature. additionalProperties: type: "object" Labels: type: "object" description: "User-defined key/value metadata." x-nullable: false additionalProperties: type: "string" Scope: type: "string" description: | The level at which the volume exists. Either `global` for cluster-wide, or `local` for machine level. 
default: "local" x-nullable: false enum: ["local", "global"] Options: type: "object" description: | The driver specific options used when creating the volume. additionalProperties: type: "string" UsageData: type: "object" x-nullable: true required: [Size, RefCount] description: | Usage details about the volume. This information is used by the `GET /system/df` endpoint, and omitted in other endpoints. properties: Size: type: "integer" default: -1 description: | Amount of disk space used by the volume (in bytes). This information is only available for volumes created with the `"local"` volume driver. For volumes created with other volume drivers, this field is set to `-1` ("not available") x-nullable: false RefCount: type: "integer" default: -1 description: | The number of containers referencing this volume. This field is set to `-1` if the reference-count is not available. x-nullable: false example: Name: "tardis" Driver: "custom" Mountpoint: "/var/lib/docker/volumes/tardis" Status: hello: "world" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Scope: "local" CreatedAt: "2016-06-07T20:31:11.853781916Z" Network: type: "object" properties: Name: type: "string" Id: type: "string" Created: type: "string" format: "dateTime" Scope: type: "string" Driver: type: "string" EnableIPv6: type: "boolean" IPAM: $ref: "#/definitions/IPAM" Internal: type: "boolean" Attachable: type: "boolean" Ingress: type: "boolean" Containers: type: "object" additionalProperties: $ref: "#/definitions/NetworkContainer" Options: type: "object" additionalProperties: type: "string" Labels: type: "object" additionalProperties: type: "string" example: Name: "net01" Id: "7d86d31b1478e7cca9ebed7e73aa0fdeec46c5ca29497431d3007d2d9e15ed99" Created: "2016-10-19T04:33:30.360899459Z" Scope: "local" Driver: "bridge" EnableIPv6: false IPAM: Driver: "default" Config: - Subnet: "172.19.0.0/16" Gateway: "172.19.0.1" Options: foo: "bar" Internal: false Attachable: false Ingress: false Containers: 19a4d5d687db25203351ed79d478946f861258f018fe384f229f2efa4b23513c: Name: "test" EndpointID: "628cadb8bcb92de107b2a1e516cbffe463e321f548feb37697cce00ad694f21a" MacAddress: "02:42:ac:13:00:02" IPv4Address: "172.19.0.2/16" IPv6Address: "" Options: com.docker.network.bridge.default_bridge: "true" com.docker.network.bridge.enable_icc: "true" com.docker.network.bridge.enable_ip_masquerade: "true" com.docker.network.bridge.host_binding_ipv4: "0.0.0.0" com.docker.network.bridge.name: "docker0" com.docker.network.driver.mtu: "1500" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" IPAM: type: "object" properties: Driver: description: "Name of the IPAM driver to use." type: "string" default: "default" Config: description: | List of IPAM configuration options, specified as a map: ``` {"Subnet": <CIDR>, "IPRange": <CIDR>, "Gateway": <IP address>, "AuxAddress": <device_name:IP address>} ``` type: "array" items: type: "object" additionalProperties: type: "string" Options: description: "Driver-specific options, specified as a map." 
type: "object" additionalProperties: type: "string" NetworkContainer: type: "object" properties: Name: type: "string" EndpointID: type: "string" MacAddress: type: "string" IPv4Address: type: "string" IPv6Address: type: "string" BuildInfo: type: "object" properties: id: type: "string" stream: type: "string" error: type: "string" errorDetail: $ref: "#/definitions/ErrorDetail" status: type: "string" progress: type: "string" progressDetail: $ref: "#/definitions/ProgressDetail" aux: $ref: "#/definitions/ImageID" BuildCache: type: "object" properties: ID: type: "string" Parent: type: "string" Type: type: "string" Description: type: "string" InUse: type: "boolean" Shared: type: "boolean" Size: description: | Amount of disk space used by the build cache (in bytes). type: "integer" CreatedAt: description: | Date and time at which the build cache was created in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2016-08-18T10:44:24.496525531Z" LastUsedAt: description: | Date and time at which the build cache was last used in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" x-nullable: true example: "2017-08-09T07:09:37.632105588Z" UsageCount: type: "integer" ImageID: type: "object" description: "Image ID or Digest" properties: ID: type: "string" example: ID: "sha256:85f05633ddc1c50679be2b16a0479ab6f7637f8884e0cfe0f4d20e1ebb3d6e7c" CreateImageInfo: type: "object" properties: id: type: "string" error: type: "string" status: type: "string" progress: type: "string" progressDetail: $ref: "#/definitions/ProgressDetail" PushImageInfo: type: "object" properties: error: type: "string" status: type: "string" progress: type: "string" progressDetail: $ref: "#/definitions/ProgressDetail" ErrorDetail: type: "object" properties: code: type: "integer" message: type: "string" ProgressDetail: type: "object" properties: current: type: "integer" total: type: "integer" ErrorResponse: description: "Represents an error." type: "object" required: ["message"] properties: message: description: "The error message." type: "string" x-nullable: false example: message: "Something went wrong." IdResponse: description: "Response to an API call that returns just an Id" type: "object" required: ["Id"] properties: Id: description: "The id of the newly created object." type: "string" x-nullable: false EndpointSettings: description: "Configuration for a network endpoint." type: "object" properties: # Configurations IPAMConfig: $ref: "#/definitions/EndpointIPAMConfig" Links: type: "array" items: type: "string" example: - "container_1" - "container_2" Aliases: type: "array" items: type: "string" example: - "server_x" - "server_y" # Operational data NetworkID: description: | Unique ID of the network. type: "string" example: "08754567f1f40222263eab4102e1c733ae697e8e354aa9cd6e18d7402835292a" EndpointID: description: | Unique ID for the service endpoint in a Sandbox. type: "string" example: "b88f5b905aabf2893f3cbc4ee42d1ea7980bbc0a92e2c8922b1e1795298afb0b" Gateway: description: | Gateway address for this network. type: "string" example: "172.17.0.1" IPAddress: description: | IPv4 address. type: "string" example: "172.17.0.4" IPPrefixLen: description: | Mask length of the IPv4 address. type: "integer" example: 16 IPv6Gateway: description: | IPv6 gateway address. type: "string" example: "2001:db8:2::100" GlobalIPv6Address: description: | Global IPv6 address. 
type: "string" example: "2001:db8::5689" GlobalIPv6PrefixLen: description: | Mask length of the global IPv6 address. type: "integer" format: "int64" example: 64 MacAddress: description: | MAC address for the endpoint on this network. type: "string" example: "02:42:ac:11:00:04" DriverOpts: description: | DriverOpts is a mapping of driver options and values. These options are passed directly to the driver and are driver specific. type: "object" x-nullable: true additionalProperties: type: "string" example: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" EndpointIPAMConfig: description: | EndpointIPAMConfig represents an endpoint's IPAM configuration. type: "object" x-nullable: true properties: IPv4Address: type: "string" example: "172.20.30.33" IPv6Address: type: "string" example: "2001:db8:abcd::3033" LinkLocalIPs: type: "array" items: type: "string" example: - "169.254.34.68" - "fe80::3468" PluginMount: type: "object" x-nullable: false required: [Name, Description, Settable, Source, Destination, Type, Options] properties: Name: type: "string" x-nullable: false example: "some-mount" Description: type: "string" x-nullable: false example: "This is a mount that's used by the plugin." Settable: type: "array" items: type: "string" Source: type: "string" example: "/var/lib/docker/plugins/" Destination: type: "string" x-nullable: false example: "/mnt/state" Type: type: "string" x-nullable: false example: "bind" Options: type: "array" items: type: "string" example: - "rbind" - "rw" PluginDevice: type: "object" required: [Name, Description, Settable, Path] x-nullable: false properties: Name: type: "string" x-nullable: false Description: type: "string" x-nullable: false Settable: type: "array" items: type: "string" Path: type: "string" example: "/dev/fuse" PluginEnv: type: "object" x-nullable: false required: [Name, Description, Settable, Value] properties: Name: x-nullable: false type: "string" Description: x-nullable: false type: "string" Settable: type: "array" items: type: "string" Value: type: "string" PluginInterfaceType: type: "object" x-nullable: false required: [Prefix, Capability, Version] properties: Prefix: type: "string" x-nullable: false Capability: type: "string" x-nullable: false Version: type: "string" x-nullable: false Plugin: description: "A plugin for the Engine API" type: "object" required: [Settings, Enabled, Config, Name] properties: Id: type: "string" example: "5724e2c8652da337ab2eedd19fc6fc0ec908e4bd907c7421bf6a8dfc70c4c078" Name: type: "string" x-nullable: false example: "tiborvass/sample-volume-plugin" Enabled: description: True if the plugin is running. False if the plugin is not running, only installed. type: "boolean" x-nullable: false example: true Settings: description: "Settings that can be modified by users." type: "object" x-nullable: false required: [Args, Devices, Env, Mounts] properties: Mounts: type: "array" items: $ref: "#/definitions/PluginMount" Env: type: "array" items: type: "string" example: - "DEBUG=0" Args: type: "array" items: type: "string" Devices: type: "array" items: $ref: "#/definitions/PluginDevice" PluginReference: description: "plugin remote reference used to push/pull the plugin" type: "string" x-nullable: false example: "localhost:5000/tiborvass/sample-volume-plugin:latest" Config: description: "The config of a plugin." 
type: "object" x-nullable: false required: - Description - Documentation - Interface - Entrypoint - WorkDir - Network - Linux - PidHost - PropagatedMount - IpcHost - Mounts - Env - Args properties: DockerVersion: description: "Docker Version used to create the plugin" type: "string" x-nullable: false example: "17.06.0-ce" Description: type: "string" x-nullable: false example: "A sample volume plugin for Docker" Documentation: type: "string" x-nullable: false example: "https://docs.docker.com/engine/extend/plugins/" Interface: description: "The interface between Docker and the plugin" x-nullable: false type: "object" required: [Types, Socket] properties: Types: type: "array" items: $ref: "#/definitions/PluginInterfaceType" example: - "docker.volumedriver/1.0" Socket: type: "string" x-nullable: false example: "plugins.sock" ProtocolScheme: type: "string" example: "some.protocol/v1.0" description: "Protocol to use for clients connecting to the plugin." enum: - "" - "moby.plugins.http/v1" Entrypoint: type: "array" items: type: "string" example: - "/usr/bin/sample-volume-plugin" - "/data" WorkDir: type: "string" x-nullable: false example: "/bin/" User: type: "object" x-nullable: false properties: UID: type: "integer" format: "uint32" example: 1000 GID: type: "integer" format: "uint32" example: 1000 Network: type: "object" x-nullable: false required: [Type] properties: Type: x-nullable: false type: "string" example: "host" Linux: type: "object" x-nullable: false required: [Capabilities, AllowAllDevices, Devices] properties: Capabilities: type: "array" items: type: "string" example: - "CAP_SYS_ADMIN" - "CAP_SYSLOG" AllowAllDevices: type: "boolean" x-nullable: false example: false Devices: type: "array" items: $ref: "#/definitions/PluginDevice" PropagatedMount: type: "string" x-nullable: false example: "/mnt/volumes" IpcHost: type: "boolean" x-nullable: false example: false PidHost: type: "boolean" x-nullable: false example: false Mounts: type: "array" items: $ref: "#/definitions/PluginMount" Env: type: "array" items: $ref: "#/definitions/PluginEnv" example: - Name: "DEBUG" Description: "If set, prints debug messages" Settable: null Value: "0" Args: type: "object" x-nullable: false required: [Name, Description, Settable, Value] properties: Name: x-nullable: false type: "string" example: "args" Description: x-nullable: false type: "string" example: "command line arguments" Settable: type: "array" items: type: "string" Value: type: "array" items: type: "string" rootfs: type: "object" properties: type: type: "string" example: "layers" diff_ids: type: "array" items: type: "string" example: - "sha256:675532206fbf3030b8458f88d6e26d4eb1577688a25efec97154c94e8b6b4887" - "sha256:e216a057b1cb1efc11f8a268f37ef62083e70b1b38323ba252e25ac88904a7e8" ObjectVersion: description: | The version number of the object such as node, service, etc. This is needed to avoid conflicting writes. The client must send the version number along with the modified specification when updating these objects. This approach ensures safe concurrency and determinism in that the change on the object may not be applied if the version number has changed from the last read. In other words, if two update requests specify the same base version, only one of the requests can succeed. As a result, two separate update requests that happen at the same time will not unintentionally overwrite each other. 
type: "object" properties: Index: type: "integer" format: "uint64" example: 373531 NodeSpec: type: "object" properties: Name: description: "Name for the node." type: "string" example: "my-node" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" Role: description: "Role of the node." type: "string" enum: - "worker" - "manager" example: "manager" Availability: description: "Availability of the node." type: "string" enum: - "active" - "pause" - "drain" example: "active" example: Availability: "active" Name: "node-name" Role: "manager" Labels: foo: "bar" Node: type: "object" properties: ID: type: "string" example: "24ifsmvkjbyhk" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: description: | Date and time at which the node was added to the swarm in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2016-08-18T10:44:24.496525531Z" UpdatedAt: description: | Date and time at which the node was last updated in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2017-08-09T07:09:37.632105588Z" Spec: $ref: "#/definitions/NodeSpec" Description: $ref: "#/definitions/NodeDescription" Status: $ref: "#/definitions/NodeStatus" ManagerStatus: $ref: "#/definitions/ManagerStatus" NodeDescription: description: | NodeDescription encapsulates the properties of the Node as reported by the agent. type: "object" properties: Hostname: type: "string" example: "bf3067039e47" Platform: $ref: "#/definitions/Platform" Resources: $ref: "#/definitions/ResourceObject" Engine: $ref: "#/definitions/EngineDescription" TLSInfo: $ref: "#/definitions/TLSInfo" Platform: description: | Platform represents the platform (Arch/OS). type: "object" properties: Architecture: description: | Architecture represents the hardware architecture (for example, `x86_64`). type: "string" example: "x86_64" OS: description: | OS represents the Operating System (for example, `linux` or `windows`). type: "string" example: "linux" EngineDescription: description: "EngineDescription provides information about an engine." type: "object" properties: EngineVersion: type: "string" example: "17.06.0" Labels: type: "object" additionalProperties: type: "string" example: foo: "bar" Plugins: type: "array" items: type: "object" properties: Type: type: "string" Name: type: "string" example: - Type: "Log" Name: "awslogs" - Type: "Log" Name: "fluentd" - Type: "Log" Name: "gcplogs" - Type: "Log" Name: "gelf" - Type: "Log" Name: "journald" - Type: "Log" Name: "json-file" - Type: "Log" Name: "logentries" - Type: "Log" Name: "splunk" - Type: "Log" Name: "syslog" - Type: "Network" Name: "bridge" - Type: "Network" Name: "host" - Type: "Network" Name: "ipvlan" - Type: "Network" Name: "macvlan" - Type: "Network" Name: "null" - Type: "Network" Name: "overlay" - Type: "Volume" Name: "local" - Type: "Volume" Name: "localhost:5000/vieux/sshfs:latest" - Type: "Volume" Name: "vieux/sshfs:latest" TLSInfo: description: | Information about the issuer of leaf TLS certificates and the trusted root CA certificate. type: "object" properties: TrustRoot: description: | The root CA certificate(s) that are used to validate leaf TLS certificates. type: "string" CertIssuerSubject: description: The base64-url-safe-encoded raw subject bytes of the issuer. type: "string" CertIssuerPublicKey: description: | The base64-url-safe-encoded raw public key bytes of the issuer. 
type: "string" example: TrustRoot: | -----BEGIN CERTIFICATE----- MIIBajCCARCgAwIBAgIUbYqrLSOSQHoxD8CwG6Bi2PJi9c8wCgYIKoZIzj0EAwIw EzERMA8GA1UEAxMIc3dhcm0tY2EwHhcNMTcwNDI0MjE0MzAwWhcNMzcwNDE5MjE0 MzAwWjATMREwDwYDVQQDEwhzd2FybS1jYTBZMBMGByqGSM49AgEGCCqGSM49AwEH A0IABJk/VyMPYdaqDXJb/VXh5n/1Yuv7iNrxV3Qb3l06XD46seovcDWs3IZNV1lf 3Skyr0ofcchipoiHkXBODojJydSjQjBAMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMB Af8EBTADAQH/MB0GA1UdDgQWBBRUXxuRcnFjDfR/RIAUQab8ZV/n4jAKBggqhkjO PQQDAgNIADBFAiAy+JTe6Uc3KyLCMiqGl2GyWGQqQDEcO3/YG36x7om65AIhAJvz pxv6zFeVEkAEEkqIYi0omA9+CjanB/6Bz4n1uw8H -----END CERTIFICATE----- CertIssuerSubject: "MBMxETAPBgNVBAMTCHN3YXJtLWNh" CertIssuerPublicKey: "MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEmT9XIw9h1qoNclv9VeHmf/Vi6/uI2vFXdBveXTpcPjqx6i9wNazchk1XWV/dKTKvSh9xyGKmiIeRcE4OiMnJ1A==" NodeStatus: description: | NodeStatus represents the status of a node. It provides the current status of the node, as seen by the manager. type: "object" properties: State: $ref: "#/definitions/NodeState" Message: type: "string" example: "" Addr: description: "IP address of the node." type: "string" example: "172.17.0.2" NodeState: description: "NodeState represents the state of a node." type: "string" enum: - "unknown" - "down" - "ready" - "disconnected" example: "ready" ManagerStatus: description: | ManagerStatus represents the status of a manager. It provides the current status of a node's manager component, if the node is a manager. x-nullable: true type: "object" properties: Leader: type: "boolean" default: false example: true Reachability: $ref: "#/definitions/Reachability" Addr: description: | The IP address and port at which the manager is reachable. type: "string" example: "10.0.0.46:2377" Reachability: description: "Reachability represents the reachability of a node." type: "string" enum: - "unknown" - "unreachable" - "reachable" example: "reachable" SwarmSpec: description: "User modifiable swarm configuration." type: "object" properties: Name: description: "Name of the swarm." type: "string" example: "default" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" example: com.example.corp.type: "production" com.example.corp.department: "engineering" Orchestration: description: "Orchestration configuration." type: "object" x-nullable: true properties: TaskHistoryRetentionLimit: description: | The number of historic tasks to keep per instance or node. If negative, never remove completed or failed tasks. type: "integer" format: "int64" example: 10 Raft: description: "Raft configuration." type: "object" properties: SnapshotInterval: description: "The number of log entries between snapshots." type: "integer" format: "uint64" example: 10000 KeepOldSnapshots: description: | The number of snapshots to keep beyond the current snapshot. type: "integer" format: "uint64" LogEntriesForSlowFollowers: description: | The number of log entries to keep around to sync up slow followers after a snapshot is created. type: "integer" format: "uint64" example: 500 ElectionTick: description: | The number of ticks that a follower will wait for a message from the leader before becoming a candidate and starting an election. `ElectionTick` must be greater than `HeartbeatTick`. A tick currently defaults to one second, so these translate directly to seconds currently, but this is NOT guaranteed. type: "integer" example: 3 HeartbeatTick: description: | The number of ticks between heartbeats. Every HeartbeatTick ticks, the leader will send a heartbeat to the followers. 
A tick currently defaults to one second, so these translate directly to seconds currently, but this is NOT guaranteed. type: "integer" example: 1 Dispatcher: description: "Dispatcher configuration." type: "object" x-nullable: true properties: HeartbeatPeriod: description: | The delay for an agent to send a heartbeat to the dispatcher. type: "integer" format: "int64" example: 5000000000 CAConfig: description: "CA configuration." type: "object" x-nullable: true properties: NodeCertExpiry: description: "The duration node certificates are issued for." type: "integer" format: "int64" example: 7776000000000000 ExternalCAs: description: | Configuration for forwarding signing requests to an external certificate authority. type: "array" items: type: "object" properties: Protocol: description: | Protocol for communication with the external CA (currently only `cfssl` is supported). type: "string" enum: - "cfssl" default: "cfssl" URL: description: | URL where certificate signing requests should be sent. type: "string" Options: description: | An object with key/value pairs that are interpreted as protocol-specific options for the external CA driver. type: "object" additionalProperties: type: "string" CACert: description: | The root CA certificate (in PEM format) this external CA uses to issue TLS certificates (assumed to be to the current swarm root CA certificate if not provided). type: "string" SigningCACert: description: | The desired signing CA certificate for all swarm node TLS leaf certificates, in PEM format. type: "string" SigningCAKey: description: | The desired signing CA key for all swarm node TLS leaf certificates, in PEM format. type: "string" ForceRotate: description: | An integer whose purpose is to force swarm to generate a new signing CA certificate and key, if none have been specified in `SigningCACert` and `SigningCAKey` format: "uint64" type: "integer" EncryptionConfig: description: "Parameters related to encryption-at-rest." type: "object" properties: AutoLockManagers: description: | If set, generate a key and use it to lock data stored on the managers. type: "boolean" example: false TaskDefaults: description: "Defaults for creating tasks in this cluster." type: "object" properties: LogDriver: description: | The log driver to use for tasks created in the orchestrator if unspecified by a service. Updating this value only affects new tasks. Existing tasks continue to use their previously configured log driver until recreated. type: "object" properties: Name: description: | The log driver to use as a default for new tasks. type: "string" example: "json-file" Options: description: | Driver-specific options for the selectd log driver, specified as key/value pairs. type: "object" additionalProperties: type: "string" example: "max-file": "10" "max-size": "100m" # The Swarm information for `GET /info`. It is the same as `GET /swarm`, but # without `JoinTokens`. ClusterInfo: description: | ClusterInfo represents information about the swarm as is returned by the "/info" endpoint. Join-tokens are not included. x-nullable: true type: "object" properties: ID: description: "The ID of the swarm." type: "string" example: "abajmipo7b4xz5ip2nrla6b11" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: description: | Date and time at which the swarm was initialised in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. 
type: "string" format: "dateTime" example: "2016-08-18T10:44:24.496525531Z" UpdatedAt: description: | Date and time at which the swarm was last updated in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2017-08-09T07:09:37.632105588Z" Spec: $ref: "#/definitions/SwarmSpec" TLSInfo: $ref: "#/definitions/TLSInfo" RootRotationInProgress: description: | Whether there is currently a root CA rotation in progress for the swarm type: "boolean" example: false DataPathPort: description: | DataPathPort specifies the data path port number for data traffic. Acceptable port range is 1024 to 49151. If no port is set or is set to 0, the default port (4789) is used. type: "integer" format: "uint32" default: 4789 example: 4789 DefaultAddrPool: description: | Default Address Pool specifies default subnet pools for global scope networks. type: "array" items: type: "string" format: "CIDR" example: ["10.10.0.0/16", "20.20.0.0/16"] SubnetSize: description: | SubnetSize specifies the subnet size of the networks created from the default subnet pool. type: "integer" format: "uint32" maximum: 29 default: 24 example: 24 JoinTokens: description: | JoinTokens contains the tokens workers and managers need to join the swarm. type: "object" properties: Worker: description: | The token workers can use to join the swarm. type: "string" example: "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-1awxwuwd3z9j1z3puu7rcgdbx" Manager: description: | The token managers can use to join the swarm. type: "string" example: "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-7p73s1dx5in4tatdymyhg9hu2" Swarm: type: "object" allOf: - $ref: "#/definitions/ClusterInfo" - type: "object" properties: JoinTokens: $ref: "#/definitions/JoinTokens" TaskSpec: description: "User modifiable task configuration." type: "object" properties: PluginSpec: type: "object" description: | Plugin spec for the service. *(Experimental release only.)* <p><br /></p> > **Note**: ContainerSpec, NetworkAttachmentSpec, and PluginSpec are > mutually exclusive. PluginSpec is only used when the Runtime field > is set to `plugin`. NetworkAttachmentSpec is used when the Runtime > field is set to `attachment`. properties: Name: description: "The name or 'alias' to use for the plugin." type: "string" Remote: description: "The plugin image reference to use." type: "string" Disabled: description: "Disable the plugin once scheduled." type: "boolean" PluginPrivilege: type: "array" items: description: | Describes a permission accepted by the user upon installing the plugin. type: "object" properties: Name: type: "string" Description: type: "string" Value: type: "array" items: type: "string" ContainerSpec: type: "object" description: | Container spec for the service. <p><br /></p> > **Note**: ContainerSpec, NetworkAttachmentSpec, and PluginSpec are > mutually exclusive. PluginSpec is only used when the Runtime field > is set to `plugin`. NetworkAttachmentSpec is used when the Runtime > field is set to `attachment`. properties: Image: description: "The image name to use for the container" type: "string" Labels: description: "User-defined key/value data." type: "object" additionalProperties: type: "string" Command: description: "The command to be run in the image." type: "array" items: type: "string" Args: description: "Arguments to the command." 
type: "array" items: type: "string" Hostname: description: | The hostname to use for the container, as a valid [RFC 1123](https://tools.ietf.org/html/rfc1123) hostname. type: "string" Env: description: | A list of environment variables in the form `VAR=value`. type: "array" items: type: "string" Dir: description: "The working directory for commands to run in." type: "string" User: description: "The user inside the container." type: "string" Groups: type: "array" description: | A list of additional groups that the container process will run as. items: type: "string" Privileges: type: "object" description: "Security options for the container" properties: CredentialSpec: type: "object" description: "CredentialSpec for managed service account (Windows only)" properties: Config: type: "string" example: "0bt9dmxjvjiqermk6xrop3ekq" description: | Load credential spec from a Swarm Config with the given ID. The specified config must also be present in the Configs field with the Runtime property set. <p><br /></p> > **Note**: `CredentialSpec.File`, `CredentialSpec.Registry`, > and `CredentialSpec.Config` are mutually exclusive. File: type: "string" example: "spec.json" description: | Load credential spec from this file. The file is read by the daemon, and must be present in the `CredentialSpecs` subdirectory in the docker data directory, which defaults to `C:\ProgramData\Docker\` on Windows. For example, specifying `spec.json` loads `C:\ProgramData\Docker\CredentialSpecs\spec.json`. <p><br /></p> > **Note**: `CredentialSpec.File`, `CredentialSpec.Registry`, > and `CredentialSpec.Config` are mutually exclusive. Registry: type: "string" description: | Load credential spec from this value in the Windows registry. The specified registry value must be located in: `HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization\Containers\CredentialSpecs` <p><br /></p> > **Note**: `CredentialSpec.File`, `CredentialSpec.Registry`, > and `CredentialSpec.Config` are mutually exclusive. SELinuxContext: type: "object" description: "SELinux labels of the container" properties: Disable: type: "boolean" description: "Disable SELinux" User: type: "string" description: "SELinux user label" Role: type: "string" description: "SELinux role label" Type: type: "string" description: "SELinux type label" Level: type: "string" description: "SELinux level label" TTY: description: "Whether a pseudo-TTY should be allocated." type: "boolean" OpenStdin: description: "Open `stdin`" type: "boolean" ReadOnly: description: "Mount the container's root filesystem as read only." type: "boolean" Mounts: description: | Specification for mounts to be added to containers created as part of the service. type: "array" items: $ref: "#/definitions/Mount" StopSignal: description: "Signal to stop the container." type: "string" StopGracePeriod: description: | Amount of time to wait for the container to terminate before forcefully killing it. type: "integer" format: "int64" HealthCheck: $ref: "#/definitions/HealthConfig" Hosts: type: "array" description: | A list of hostname/IP mappings to add to the container's `hosts` file. The format of extra hosts is specified in the [hosts(5)](http://man7.org/linux/man-pages/man5/hosts.5.html) man page: IP_address canonical_hostname [aliases...] items: type: "string" DNSConfig: description: | Specification for DNS related configurations in resolver configuration file (`resolv.conf`). type: "object" properties: Nameservers: description: "The IP addresses of the name servers." 
type: "array" items: type: "string" Search: description: "A search list for host-name lookup." type: "array" items: type: "string" Options: description: | A list of internal resolver variables to be modified (e.g., `debug`, `ndots:3`, etc.). type: "array" items: type: "string" Secrets: description: | Secrets contains references to zero or more secrets that will be exposed to the service. type: "array" items: type: "object" properties: File: description: | File represents a specific target that is backed by a file. type: "object" properties: Name: description: | Name represents the final filename in the filesystem. type: "string" UID: description: "UID represents the file UID." type: "string" GID: description: "GID represents the file GID." type: "string" Mode: description: "Mode represents the FileMode of the file." type: "integer" format: "uint32" SecretID: description: | SecretID represents the ID of the specific secret that we're referencing. type: "string" SecretName: description: | SecretName is the name of the secret that this references, but this is just provided for lookup/display purposes. The secret in the reference will be identified by its ID. type: "string" Configs: description: | Configs contains references to zero or more configs that will be exposed to the service. type: "array" items: type: "object" properties: File: description: | File represents a specific target that is backed by a file. <p><br /><p> > **Note**: `Configs.File` and `Configs.Runtime` are mutually exclusive type: "object" properties: Name: description: | Name represents the final filename in the filesystem. type: "string" UID: description: "UID represents the file UID." type: "string" GID: description: "GID represents the file GID." type: "string" Mode: description: "Mode represents the FileMode of the file." type: "integer" format: "uint32" Runtime: description: | Runtime represents a target that is not mounted into the container but is used by the task <p><br /><p> > **Note**: `Configs.File` and `Configs.Runtime` are mutually > exclusive type: "object" ConfigID: description: | ConfigID represents the ID of the specific config that we're referencing. type: "string" ConfigName: description: | ConfigName is the name of the config that this references, but this is just provided for lookup/display purposes. The config in the reference will be identified by its ID. type: "string" Isolation: type: "string" description: | Isolation technology of the containers running the service. (Windows only) enum: - "default" - "process" - "hyperv" Init: description: | Run an init inside the container that forwards signals and reaps processes. This field is omitted if empty, and the default (as configured on the daemon) is used. type: "boolean" x-nullable: true Sysctls: description: | Set kernel namedspaced parameters (sysctls) in the container. The Sysctls option on services accepts the same sysctls as the are supported on containers. Note that while the same sysctls are supported, no guarantees or checks are made about their suitability for a clustered environment, and it's up to the user to determine whether a given sysctl will work properly in a Service. type: "object" additionalProperties: type: "string" # This option is not used by Windows containers CapabilityAdd: type: "array" description: | A list of kernel capabilities to add to the default set for the container. 
items: type: "string" example: - "CAP_NET_RAW" - "CAP_SYS_ADMIN" - "CAP_SYS_CHROOT" - "CAP_SYSLOG" CapabilityDrop: type: "array" description: | A list of kernel capabilities to drop from the default set for the container. items: type: "string" example: - "CAP_NET_RAW" Ulimits: description: | A list of resource limits to set in the container. For example: `{"Name": "nofile", "Soft": 1024, "Hard": 2048}`" type: "array" items: type: "object" properties: Name: description: "Name of ulimit" type: "string" Soft: description: "Soft limit" type: "integer" Hard: description: "Hard limit" type: "integer" NetworkAttachmentSpec: description: | Read-only spec type for non-swarm containers attached to swarm overlay networks. <p><br /></p> > **Note**: ContainerSpec, NetworkAttachmentSpec, and PluginSpec are > mutually exclusive. PluginSpec is only used when the Runtime field > is set to `plugin`. NetworkAttachmentSpec is used when the Runtime > field is set to `attachment`. type: "object" properties: ContainerID: description: "ID of the container represented by this task" type: "string" Resources: description: | Resource requirements which apply to each individual container created as part of the service. type: "object" properties: Limits: description: "Define resources limits." $ref: "#/definitions/Limit" Reservation: description: "Define resources reservation." $ref: "#/definitions/ResourceObject" RestartPolicy: description: | Specification for the restart policy which applies to containers created as part of this service. type: "object" properties: Condition: description: "Condition for restart." type: "string" enum: - "none" - "on-failure" - "any" Delay: description: "Delay between restart attempts." type: "integer" format: "int64" MaxAttempts: description: | Maximum attempts to restart a given container before giving up (default value is 0, which is ignored). type: "integer" format: "int64" default: 0 Window: description: | Windows is the time window used to evaluate the restart policy (default value is 0, which is unbounded). type: "integer" format: "int64" default: 0 Placement: type: "object" properties: Constraints: description: | An array of constraint expressions to limit the set of nodes where a task can be scheduled. Constraint expressions can either use a _match_ (`==`) or _exclude_ (`!=`) rule. Multiple constraints find nodes that satisfy every expression (AND match). Constraints can match node or Docker Engine labels as follows: node attribute | matches | example ---------------------|--------------------------------|----------------------------------------------- `node.id` | Node ID | `node.id==2ivku8v2gvtg4` `node.hostname` | Node hostname | `node.hostname!=node-2` `node.role` | Node role (`manager`/`worker`) | `node.role==manager` `node.platform.os` | Node operating system | `node.platform.os==windows` `node.platform.arch` | Node architecture | `node.platform.arch==x86_64` `node.labels` | User-defined node labels | `node.labels.security==high` `engine.labels` | Docker Engine's labels | `engine.labels.operatingsystem==ubuntu-14.04` `engine.labels` apply to Docker Engine labels like operating system, drivers, etc. Swarm administrators add `node.labels` for operational purposes by using the [`node update endpoint`](#operation/NodeUpdate). 
type: "array" items: type: "string" example: - "node.hostname!=node3.corp.example.com" - "node.role!=manager" - "node.labels.type==production" - "node.platform.os==linux" - "node.platform.arch==x86_64" Preferences: description: | Preferences provide a way to make the scheduler aware of factors such as topology. They are provided in order from highest to lowest precedence. type: "array" items: type: "object" properties: Spread: type: "object" properties: SpreadDescriptor: description: | label descriptor, such as `engine.labels.az`. type: "string" example: - Spread: SpreadDescriptor: "node.labels.datacenter" - Spread: SpreadDescriptor: "node.labels.rack" MaxReplicas: description: | Maximum number of replicas for per node (default value is 0, which is unlimited) type: "integer" format: "int64" default: 0 Platforms: description: | Platforms stores all the platforms that the service's image can run on. This field is used in the platform filter for scheduling. If empty, then the platform filter is off, meaning there are no scheduling restrictions. type: "array" items: $ref: "#/definitions/Platform" ForceUpdate: description: | A counter that triggers an update even if no relevant parameters have been changed. type: "integer" Runtime: description: | Runtime is the type of runtime specified for the task executor. type: "string" Networks: description: "Specifies which networks the service should attach to." type: "array" items: $ref: "#/definitions/NetworkAttachmentConfig" LogDriver: description: | Specifies the log driver to use for tasks created from this spec. If not present, the default one for the swarm will be used, finally falling back to the engine default if not specified. type: "object" properties: Name: type: "string" Options: type: "object" additionalProperties: type: "string" TaskState: type: "string" enum: - "new" - "allocated" - "pending" - "assigned" - "accepted" - "preparing" - "ready" - "starting" - "running" - "complete" - "shutdown" - "failed" - "rejected" - "remove" - "orphaned" Task: type: "object" properties: ID: description: "The ID of the task." type: "string" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" UpdatedAt: type: "string" format: "dateTime" Name: description: "Name of the task." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" Spec: $ref: "#/definitions/TaskSpec" ServiceID: description: "The ID of the service this task is part of." type: "string" Slot: type: "integer" NodeID: description: "The ID of the node that this task is on." type: "string" AssignedGenericResources: $ref: "#/definitions/GenericResources" Status: type: "object" properties: Timestamp: type: "string" format: "dateTime" State: $ref: "#/definitions/TaskState" Message: type: "string" Err: type: "string" ContainerStatus: type: "object" properties: ContainerID: type: "string" PID: type: "integer" ExitCode: type: "integer" DesiredState: $ref: "#/definitions/TaskState" JobIteration: description: | If the Service this Task belongs to is a job-mode service, contains the JobIteration of the Service this Task was created for. Absent if the Task was created for a Replicated or Global Service. 
$ref: "#/definitions/ObjectVersion" example: ID: "0kzzo1i0y4jz6027t0k7aezc7" Version: Index: 71 CreatedAt: "2016-06-07T21:07:31.171892745Z" UpdatedAt: "2016-06-07T21:07:31.376370513Z" Spec: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ServiceID: "9mnpnzenvg8p8tdbtq4wvbkcz" Slot: 1 NodeID: "60gvrl6tm78dmak4yl7srz94v" Status: Timestamp: "2016-06-07T21:07:31.290032978Z" State: "running" Message: "started" ContainerStatus: ContainerID: "e5d62702a1b48d01c3e02ca1e0212a250801fa8d67caca0b6f35919ebc12f035" PID: 677 DesiredState: "running" NetworksAttachments: - Network: ID: "4qvuz4ko70xaltuqbt8956gd1" Version: Index: 18 CreatedAt: "2016-06-07T20:31:11.912919752Z" UpdatedAt: "2016-06-07T21:07:29.955277358Z" Spec: Name: "ingress" Labels: com.docker.swarm.internal: "true" DriverConfiguration: {} IPAMOptions: Driver: {} Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" DriverState: Name: "overlay" Options: com.docker.network.driver.overlay.vxlanid_list: "256" IPAMOptions: Driver: Name: "default" Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" Addresses: - "10.255.0.10/16" AssignedGenericResources: - DiscreteResourceSpec: Kind: "SSD" Value: 3 - NamedResourceSpec: Kind: "GPU" Value: "UUID1" - NamedResourceSpec: Kind: "GPU" Value: "UUID2" ServiceSpec: description: "User modifiable configuration for a service." properties: Name: description: "Name of the service." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" TaskTemplate: $ref: "#/definitions/TaskSpec" Mode: description: "Scheduling mode for the service." type: "object" properties: Replicated: type: "object" properties: Replicas: type: "integer" format: "int64" Global: type: "object" ReplicatedJob: description: | The mode used for services with a finite number of tasks that run to a completed state. type: "object" properties: MaxConcurrent: description: | The maximum number of replicas to run simultaneously. type: "integer" format: "int64" default: 1 TotalCompletions: description: | The total number of replicas desired to reach the Completed state. If unset, will default to the value of `MaxConcurrent` type: "integer" format: "int64" GlobalJob: description: | The mode used for services which run a task to the completed state on each valid node. type: "object" UpdateConfig: description: "Specification for the update strategy of the service." type: "object" properties: Parallelism: description: | Maximum number of tasks to be updated in one iteration (0 means unlimited parallelism). type: "integer" format: "int64" Delay: description: "Amount of time between updates, in nanoseconds." type: "integer" format: "int64" FailureAction: description: | Action to take if an updated task fails to run, or stops running during the update. type: "string" enum: - "continue" - "pause" - "rollback" Monitor: description: | Amount of time to monitor each updated task for failures, in nanoseconds. type: "integer" format: "int64" MaxFailureRatio: description: | The fraction of tasks that may fail during an update before the failure action is invoked, specified as a floating point number between 0 and 1. type: "number" default: 0 Order: description: | The order of operations when rolling out an updated task. Either the old task is shut down before the new task is started, or the new task is started before the old task is shut down. 
type: "string" enum: - "stop-first" - "start-first" RollbackConfig: description: "Specification for the rollback strategy of the service." type: "object" properties: Parallelism: description: | Maximum number of tasks to be rolled back in one iteration (0 means unlimited parallelism). type: "integer" format: "int64" Delay: description: | Amount of time between rollback iterations, in nanoseconds. type: "integer" format: "int64" FailureAction: description: | Action to take if an rolled back task fails to run, or stops running during the rollback. type: "string" enum: - "continue" - "pause" Monitor: description: | Amount of time to monitor each rolled back task for failures, in nanoseconds. type: "integer" format: "int64" MaxFailureRatio: description: | The fraction of tasks that may fail during a rollback before the failure action is invoked, specified as a floating point number between 0 and 1. type: "number" default: 0 Order: description: | The order of operations when rolling back a task. Either the old task is shut down before the new task is started, or the new task is started before the old task is shut down. type: "string" enum: - "stop-first" - "start-first" Networks: description: "Specifies which networks the service should attach to." type: "array" items: $ref: "#/definitions/NetworkAttachmentConfig" EndpointSpec: $ref: "#/definitions/EndpointSpec" EndpointPortConfig: type: "object" properties: Name: type: "string" Protocol: type: "string" enum: - "tcp" - "udp" - "sctp" TargetPort: description: "The port inside the container." type: "integer" PublishedPort: description: "The port on the swarm hosts." type: "integer" PublishMode: description: | The mode in which port is published. <p><br /></p> - "ingress" makes the target port accessible on every node, regardless of whether there is a task for the service running on that node or not. - "host" bypasses the routing mesh and publish the port directly on the swarm node where that service is running. type: "string" enum: - "ingress" - "host" default: "ingress" example: "ingress" EndpointSpec: description: "Properties that can be configured to access and load balance a service." type: "object" properties: Mode: description: | The mode of resolution to use for internal load balancing between tasks. type: "string" enum: - "vip" - "dnsrr" default: "vip" Ports: description: | List of exposed ports that this service is accessible on from the outside. Ports can only be provided if `vip` resolution mode is used. type: "array" items: $ref: "#/definitions/EndpointPortConfig" Service: type: "object" properties: ID: type: "string" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" UpdatedAt: type: "string" format: "dateTime" Spec: $ref: "#/definitions/ServiceSpec" Endpoint: type: "object" properties: Spec: $ref: "#/definitions/EndpointSpec" Ports: type: "array" items: $ref: "#/definitions/EndpointPortConfig" VirtualIPs: type: "array" items: type: "object" properties: NetworkID: type: "string" Addr: type: "string" UpdateStatus: description: "The status of a service update." type: "object" properties: State: type: "string" enum: - "updating" - "paused" - "completed" StartedAt: type: "string" format: "dateTime" CompletedAt: type: "string" format: "dateTime" Message: type: "string" ServiceStatus: description: | The status of the service's tasks. Provided only when requested as part of a ServiceList operation. 
type: "object" properties: RunningTasks: description: | The number of tasks for the service currently in the Running state. type: "integer" format: "uint64" example: 7 DesiredTasks: description: | The number of tasks for the service desired to be running. For replicated services, this is the replica count from the service spec. For global services, this is computed by taking count of all tasks for the service with a Desired State other than Shutdown. type: "integer" format: "uint64" example: 10 CompletedTasks: description: | The number of tasks for a job that are in the Completed state. This field must be cross-referenced with the service type, as the value of 0 may mean the service is not in a job mode, or it may mean the job-mode service has no tasks yet Completed. type: "integer" format: "uint64" JobStatus: description: | The status of the service when it is in one of ReplicatedJob or GlobalJob modes. Absent on Replicated and Global mode services. The JobIteration is an ObjectVersion, but unlike the Service's version, does not need to be sent with an update request. type: "object" properties: JobIteration: description: | JobIteration is a value increased each time a Job is executed, successfully or otherwise. "Executed", in this case, means the job as a whole has been started, not that an individual Task has been launched. A job is "Executed" when its ServiceSpec is updated. JobIteration can be used to disambiguate Tasks belonging to different executions of a job. Though JobIteration will increase with each subsequent execution, it may not necessarily increase by 1, and so JobIteration should not be used to $ref: "#/definitions/ObjectVersion" LastExecution: description: | The last time, as observed by the server, that this job was started. type: "string" format: "dateTime" example: ID: "9mnpnzenvg8p8tdbtq4wvbkcz" Version: Index: 19 CreatedAt: "2016-06-07T21:05:51.880065305Z" UpdatedAt: "2016-06-07T21:07:29.962229872Z" Spec: Name: "hopeful_cori" TaskTemplate: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ForceUpdate: 0 Mode: Replicated: Replicas: 1 UpdateConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 RollbackConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 EndpointSpec: Mode: "vip" Ports: - Protocol: "tcp" TargetPort: 6379 PublishedPort: 30001 Endpoint: Spec: Mode: "vip" Ports: - Protocol: "tcp" TargetPort: 6379 PublishedPort: 30001 Ports: - Protocol: "tcp" TargetPort: 6379 PublishedPort: 30001 VirtualIPs: - NetworkID: "4qvuz4ko70xaltuqbt8956gd1" Addr: "10.255.0.2/16" - NetworkID: "4qvuz4ko70xaltuqbt8956gd1" Addr: "10.255.0.3/16" ImageDeleteResponseItem: type: "object" properties: Untagged: description: "The image ID of an image that was untagged" type: "string" Deleted: description: "The image ID of an image that was deleted" type: "string" ServiceUpdateResponse: type: "object" properties: Warnings: description: "Optional warning messages" type: "array" items: type: "string" example: Warning: "unable to pin image doesnotexist:latest to digest: image library/doesnotexist:latest not found" ContainerSummary: type: "array" items: type: "object" properties: Id: description: "The ID of this container" type: "string" x-go-name: "ID" Names: description: "The names that this container has been given" type: "array" items: type: "string" Image: description: "The name of the image used when creating 
this container" type: "string" ImageID: description: "The ID of the image that this container was created from" type: "string" Command: description: "Command to run when starting the container" type: "string" Created: description: "When the container was created" type: "integer" format: "int64" Ports: description: "The ports exposed by this container" type: "array" items: $ref: "#/definitions/Port" SizeRw: description: "The size of files that have been created or changed by this container" type: "integer" format: "int64" SizeRootFs: description: "The total size of all the files in this container" type: "integer" format: "int64" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" State: description: "The state of this container (e.g. `Exited`)" type: "string" Status: description: "Additional human-readable status of this container (e.g. `Exit 0`)" type: "string" HostConfig: type: "object" properties: NetworkMode: type: "string" NetworkSettings: description: "A summary of the container's network settings" type: "object" properties: Networks: type: "object" additionalProperties: $ref: "#/definitions/EndpointSettings" Mounts: type: "array" items: $ref: "#/definitions/Mount" Driver: description: "Driver represents a driver (network, logging, secrets)." type: "object" required: [Name] properties: Name: description: "Name of the driver." type: "string" x-nullable: false example: "some-driver" Options: description: "Key/value map of driver-specific options." type: "object" x-nullable: false additionalProperties: type: "string" example: OptionA: "value for driver-specific option A" OptionB: "value for driver-specific option B" SecretSpec: type: "object" properties: Name: description: "User-defined name of the secret." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" example: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Data: description: | Base64-url-safe-encoded ([RFC 4648](https://tools.ietf.org/html/rfc4648#section-5)) data to store as secret. This field is only used to _create_ a secret, and is not returned by other endpoints. type: "string" example: "" Driver: description: | Name of the secrets driver used to fetch the secret's value from an external secret store. $ref: "#/definitions/Driver" Templating: description: | Templating driver, if applicable Templating controls whether and how to evaluate the config payload as a template. If no driver is set, no templating is used. $ref: "#/definitions/Driver" Secret: type: "object" properties: ID: type: "string" example: "blt1owaxmitz71s9v5zh81zun" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" example: "2017-07-20T13:55:28.678958722Z" UpdatedAt: type: "string" format: "dateTime" example: "2017-07-20T13:55:28.678958722Z" Spec: $ref: "#/definitions/SecretSpec" ConfigSpec: type: "object" properties: Name: description: "User-defined name of the config." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" Data: description: | Base64-url-safe-encoded ([RFC 4648](https://tools.ietf.org/html/rfc4648#section-5)) config data. type: "string" Templating: description: | Templating driver, if applicable Templating controls whether and how to evaluate the config payload as a template. If no driver is set, no templating is used. 
$ref: "#/definitions/Driver" Config: type: "object" properties: ID: type: "string" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" UpdatedAt: type: "string" format: "dateTime" Spec: $ref: "#/definitions/ConfigSpec" ContainerState: description: | ContainerState stores container's running state. It's part of ContainerJSONBase and will be returned by the "inspect" command. type: "object" properties: Status: description: | String representation of the container state. Can be one of "created", "running", "paused", "restarting", "removing", "exited", or "dead". type: "string" enum: ["created", "running", "paused", "restarting", "removing", "exited", "dead"] example: "running" Running: description: | Whether this container is running. Note that a running container can be _paused_. The `Running` and `Paused` booleans are not mutually exclusive: When pausing a container (on Linux), the freezer cgroup is used to suspend all processes in the container. Freezing the process requires the process to be running. As a result, paused containers are both `Running` _and_ `Paused`. Use the `Status` field instead to determine if a container's state is "running". type: "boolean" example: true Paused: description: "Whether this container is paused." type: "boolean" example: false Restarting: description: "Whether this container is restarting." type: "boolean" example: false OOMKilled: description: | Whether this container has been killed because it ran out of memory. type: "boolean" example: false Dead: type: "boolean" example: false Pid: description: "The process ID of this container" type: "integer" example: 1234 ExitCode: description: "The last exit code of this container" type: "integer" example: 0 Error: type: "string" StartedAt: description: "The time when this container was last started." type: "string" example: "2020-01-06T09:06:59.461876391Z" FinishedAt: description: "The time when this container last exited." type: "string" example: "2020-01-06T09:07:59.461876391Z" Health: x-nullable: true $ref: "#/definitions/Health" SystemVersion: type: "object" description: | Response of Engine API: GET "/version" properties: Platform: type: "object" required: [Name] properties: Name: type: "string" Components: type: "array" description: | Information about system components items: type: "object" x-go-name: ComponentVersion required: [Name, Version] properties: Name: description: | Name of the component type: "string" example: "Engine" Version: description: | Version of the component type: "string" x-nullable: false example: "19.03.12" Details: description: | Key/value pairs of strings with additional information about the component. These values are intended for informational purposes only, and their content is not defined, and not part of the API specification. These messages can be printed by the client as information to the user. type: "object" x-nullable: true Version: description: "The version of the daemon" type: "string" example: "19.03.12" ApiVersion: description: | The default (and highest) API version that is supported by the daemon type: "string" example: "1.40" MinAPIVersion: description: | The minimum API version that is supported by the daemon type: "string" example: "1.12" GitCommit: description: | The Git commit of the source code that was used to build the daemon type: "string" example: "48a66213fe" GoVersion: description: | The version Go used to compile the daemon, and the version of the Go runtime in use. 
type: "string" example: "go1.13.14" Os: description: | The operating system that the daemon is running on ("linux" or "windows") type: "string" example: "linux" Arch: description: | The architecture that the daemon is running on type: "string" example: "amd64" KernelVersion: description: | The kernel version (`uname -r`) that the daemon is running on. This field is omitted when empty. type: "string" example: "4.19.76-linuxkit" Experimental: description: | Indicates if the daemon is started with experimental features enabled. This field is omitted when empty / false. type: "boolean" example: true BuildTime: description: | The date and time that the daemon was compiled. type: "string" example: "2020-06-22T15:49:27.000000000+00:00" SystemInfo: type: "object" properties: ID: description: | Unique identifier of the daemon. <p><br /></p> > **Note**: The format of the ID itself is not part of the API, and > should not be considered stable. type: "string" example: "7TRN:IPZB:QYBB:VPBQ:UMPP:KARE:6ZNR:XE6T:7EWV:PKF4:ZOJD:TPYS" Containers: description: "Total number of containers on the host." type: "integer" example: 14 ContainersRunning: description: | Number of containers with status `"running"`. type: "integer" example: 3 ContainersPaused: description: | Number of containers with status `"paused"`. type: "integer" example: 1 ContainersStopped: description: | Number of containers with status `"stopped"`. type: "integer" example: 10 Images: description: | Total number of images on the host. Both _tagged_ and _untagged_ (dangling) images are counted. type: "integer" example: 508 Driver: description: "Name of the storage driver in use." type: "string" example: "overlay2" DriverStatus: description: | Information specific to the storage driver, provided as "label" / "value" pairs. This information is provided by the storage driver, and formatted in a way consistent with the output of `docker info` on the command line. <p><br /></p> > **Note**: The information returned in this field, including the > formatting of values and labels, should not be considered stable, > and may change without notice. type: "array" items: type: "array" items: type: "string" example: - ["Backing Filesystem", "extfs"] - ["Supports d_type", "true"] - ["Native Overlay Diff", "true"] DockerRootDir: description: | Root directory of persistent Docker state. Defaults to `/var/lib/docker` on Linux, and `C:\ProgramData\docker` on Windows. type: "string" example: "/var/lib/docker" Plugins: $ref: "#/definitions/PluginsInfo" MemoryLimit: description: "Indicates if the host has memory limit support enabled." type: "boolean" example: true SwapLimit: description: "Indicates if the host has memory swap limit support enabled." type: "boolean" example: true KernelMemory: description: | Indicates if the host has kernel memory limit support enabled. <p><br /></p> > **Deprecated**: This field is deprecated as the kernel 5.4 deprecated > `kmem.limit_in_bytes`. type: "boolean" example: true CpuCfsPeriod: description: | Indicates if CPU CFS(Completely Fair Scheduler) period is supported by the host. type: "boolean" example: true CpuCfsQuota: description: | Indicates if CPU CFS(Completely Fair Scheduler) quota is supported by the host. type: "boolean" example: true CPUShares: description: | Indicates if CPU Shares limiting is supported by the host. type: "boolean" example: true CPUSet: description: | Indicates if CPUsets (cpuset.cpus, cpuset.mems) are supported by the host. 
See [cpuset(7)](https://www.kernel.org/doc/Documentation/cgroup-v1/cpusets.txt) type: "boolean" example: true PidsLimit: description: "Indicates if the host kernel has PID limit support enabled." type: "boolean" example: true OomKillDisable: description: "Indicates if OOM killer disable is supported on the host." type: "boolean" IPv4Forwarding: description: "Indicates IPv4 forwarding is enabled." type: "boolean" example: true BridgeNfIptables: description: "Indicates if `bridge-nf-call-iptables` is available on the host." type: "boolean" example: true BridgeNfIp6tables: description: "Indicates if `bridge-nf-call-ip6tables` is available on the host." type: "boolean" example: true Debug: description: | Indicates if the daemon is running in debug-mode / with debug-level logging enabled. type: "boolean" example: true NFd: description: | The total number of file Descriptors in use by the daemon process. This information is only returned if debug-mode is enabled. type: "integer" example: 64 NGoroutines: description: | The number of goroutines that currently exist. This information is only returned if debug-mode is enabled. type: "integer" example: 174 SystemTime: description: | Current system-time in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" example: "2017-08-08T20:28:29.06202363Z" LoggingDriver: description: | The logging driver to use as a default for new containers. type: "string" CgroupDriver: description: | The driver to use for managing cgroups. type: "string" enum: ["cgroupfs", "systemd", "none"] default: "cgroupfs" example: "cgroupfs" CgroupVersion: description: | The version of the cgroup. type: "string" enum: ["1", "2"] default: "1" example: "1" NEventsListener: description: "Number of event listeners subscribed." type: "integer" example: 30 KernelVersion: description: | Kernel version of the host. On Linux, this information obtained from `uname`. On Windows this information is queried from the <kbd>HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\</kbd> registry value, for example _"10.0 14393 (14393.1198.amd64fre.rs1_release_sec.170427-1353)"_. type: "string" example: "4.9.38-moby" OperatingSystem: description: | Name of the host's operating system, for example: "Ubuntu 16.04.2 LTS" or "Windows Server 2016 Datacenter" type: "string" example: "Alpine Linux v3.5" OSVersion: description: | Version of the host's operating system <p><br /></p> > **Note**: The information returned in this field, including its > very existence, and the formatting of values, should not be considered > stable, and may change without notice. type: "string" example: "16.04" OSType: description: | Generic type of the operating system of the host, as returned by the Go runtime (`GOOS`). Currently returned values are "linux" and "windows". A full list of possible values can be found in the [Go documentation](https://golang.org/doc/install/source#environment). type: "string" example: "linux" Architecture: description: | Hardware architecture of the host, as returned by the Go runtime (`GOARCH`). A full list of possible values can be found in the [Go documentation](https://golang.org/doc/install/source#environment). type: "string" example: "x86_64" NCPU: description: | The number of logical CPUs usable by the daemon. The number of available CPUs is checked by querying the operating system when the daemon starts. Changes to operating system CPU allocation after the daemon is started are not reflected. 
type: "integer" example: 4 MemTotal: description: | Total amount of physical memory available on the host, in bytes. type: "integer" format: "int64" example: 2095882240 IndexServerAddress: description: | Address / URL of the index server that is used for image search, and as a default for user authentication for Docker Hub and Docker Cloud. default: "https://index.docker.io/v1/" type: "string" example: "https://index.docker.io/v1/" RegistryConfig: $ref: "#/definitions/RegistryServiceConfig" GenericResources: $ref: "#/definitions/GenericResources" HttpProxy: description: | HTTP-proxy configured for the daemon. This value is obtained from the [`HTTP_PROXY`](https://www.gnu.org/software/wget/manual/html_node/Proxies.html) environment variable. Credentials ([user info component](https://tools.ietf.org/html/rfc3986#section-3.2.1)) in the proxy URL are masked in the API response. Containers do not automatically inherit this configuration. type: "string" example: "http://xxxxx:[email protected]:8080" HttpsProxy: description: | HTTPS-proxy configured for the daemon. This value is obtained from the [`HTTPS_PROXY`](https://www.gnu.org/software/wget/manual/html_node/Proxies.html) environment variable. Credentials ([user info component](https://tools.ietf.org/html/rfc3986#section-3.2.1)) in the proxy URL are masked in the API response. Containers do not automatically inherit this configuration. type: "string" example: "https://xxxxx:[email protected]:4443" NoProxy: description: | Comma-separated list of domain extensions for which no proxy should be used. This value is obtained from the [`NO_PROXY`](https://www.gnu.org/software/wget/manual/html_node/Proxies.html) environment variable. Containers do not automatically inherit this configuration. type: "string" example: "*.local, 169.254/16" Name: description: "Hostname of the host." type: "string" example: "node5.corp.example.com" Labels: description: | User-defined labels (key/value metadata) as set on the daemon. <p><br /></p> > **Note**: When part of a Swarm, nodes can both have _daemon_ labels, > set through the daemon configuration, and _node_ labels, set from a > manager node in the Swarm. Node labels are not included in this > field. Node labels can be retrieved using the `/nodes/(id)` endpoint > on a manager node in the Swarm. type: "array" items: type: "string" example: ["storage=ssd", "production"] ExperimentalBuild: description: | Indicates if experimental features are enabled on the daemon. type: "boolean" example: true ServerVersion: description: | Version string of the daemon. > **Note**: the [standalone Swarm API](https://docs.docker.com/swarm/swarm-api/) > returns the Swarm version instead of the daemon version, for example > `swarm/1.2.8`. type: "string" example: "17.06.0-ce" ClusterStore: description: | URL of the distributed storage backend. The storage backend is used for multihost networking (to store network and endpoint information) and by the node discovery mechanism. <p><br /></p> > **Deprecated**: This field is only propagated when using standalone Swarm > mode, and overlay networking using an external k/v store. Overlay > networks with Swarm mode enabled use the built-in raft store, and > this field will be empty. type: "string" example: "consul://consul.corp.example.com:8600/some/path" ClusterAdvertise: description: | The network endpoint that the Engine advertises for the purpose of node discovery. ClusterAdvertise is a `host:port` combination on which the daemon is reachable by other hosts. 
<p><br /></p> > **Deprecated**: This field is only propagated when using standalone Swarm > mode, and overlay networking using an external k/v store. Overlay > networks with Swarm mode enabled use the built-in raft store, and > this field will be empty. type: "string" example: "node5.corp.example.com:8000" Runtimes: description: | List of [OCI compliant](https://github.com/opencontainers/runtime-spec) runtimes configured on the daemon. Keys hold the "name" used to reference the runtime. The Docker daemon relies on an OCI compliant runtime (invoked via the `containerd` daemon) as its interface to the Linux kernel namespaces, cgroups, and SELinux. The default runtime is `runc`, and automatically configured. Additional runtimes can be configured by the user and will be listed here. type: "object" additionalProperties: $ref: "#/definitions/Runtime" default: runc: path: "runc" example: runc: path: "runc" runc-master: path: "/go/bin/runc" custom: path: "/usr/local/bin/my-oci-runtime" runtimeArgs: ["--debug", "--systemd-cgroup=false"] DefaultRuntime: description: | Name of the default OCI runtime that is used when starting containers. The default can be overridden per-container at create time. type: "string" default: "runc" example: "runc" Swarm: $ref: "#/definitions/SwarmInfo" LiveRestoreEnabled: description: | Indicates if live restore is enabled. If enabled, containers are kept running when the daemon is shutdown or upon daemon start if running containers are detected. type: "boolean" default: false example: false Isolation: description: | Represents the isolation technology to use as a default for containers. The supported values are platform-specific. If no isolation value is specified on daemon start, on Windows client, the default is `hyperv`, and on Windows server, the default is `process`. This option is currently not used on other platforms. default: "default" type: "string" enum: - "default" - "hyperv" - "process" InitBinary: description: | Name and, optional, path of the `docker-init` binary. If the path is omitted, the daemon searches the host's `$PATH` for the binary and uses the first result. type: "string" example: "docker-init" ContainerdCommit: $ref: "#/definitions/Commit" RuncCommit: $ref: "#/definitions/Commit" InitCommit: $ref: "#/definitions/Commit" SecurityOptions: description: | List of security features that are enabled on the daemon, such as apparmor, seccomp, SELinux, user-namespaces (userns), and rootless. Additional configuration options for each security feature may be present, and are included as a comma-separated list of key/value pairs. type: "array" items: type: "string" example: - "name=apparmor" - "name=seccomp,profile=default" - "name=selinux" - "name=userns" - "name=rootless" ProductLicense: description: | Reports a summary of the product license on the daemon. If a commercial license has been applied to the daemon, information such as number of nodes, and expiration are included. type: "string" example: "Community Engine" DefaultAddressPools: description: | List of custom default address pools for local networks, which can be specified in the daemon.json file or dockerd option. Example: a Base "10.10.0.0/16" with Size 24 will define the set of 256 10.10.[0-255].0/24 address pools. 
type: "array" items: type: "object" properties: Base: description: "The network address in CIDR format" type: "string" example: "10.10.0.0/16" Size: description: "The network pool size" type: "integer" example: "24" Warnings: description: | List of warnings / informational messages about missing features, or issues related to the daemon configuration. These messages can be printed by the client as information to the user. type: "array" items: type: "string" example: - "WARNING: No memory limit support" - "WARNING: bridge-nf-call-iptables is disabled" - "WARNING: bridge-nf-call-ip6tables is disabled" # PluginsInfo is a temp struct holding Plugins name # registered with docker daemon. It is used by Info struct PluginsInfo: description: | Available plugins per type. <p><br /></p> > **Note**: Only unmanaged (V1) plugins are included in this list. > V1 plugins are "lazily" loaded, and are not returned in this list > if there is no resource using the plugin. type: "object" properties: Volume: description: "Names of available volume-drivers, and network-driver plugins." type: "array" items: type: "string" example: ["local"] Network: description: "Names of available network-drivers, and network-driver plugins." type: "array" items: type: "string" example: ["bridge", "host", "ipvlan", "macvlan", "null", "overlay"] Authorization: description: "Names of available authorization plugins." type: "array" items: type: "string" example: ["img-authz-plugin", "hbm"] Log: description: "Names of available logging-drivers, and logging-driver plugins." type: "array" items: type: "string" example: ["awslogs", "fluentd", "gcplogs", "gelf", "journald", "json-file", "logentries", "splunk", "syslog"] RegistryServiceConfig: description: | RegistryServiceConfig stores daemon registry services configuration. type: "object" x-nullable: true properties: AllowNondistributableArtifactsCIDRs: description: | List of IP ranges to which nondistributable artifacts can be pushed, using the CIDR syntax [RFC 4632](https://tools.ietf.org/html/4632). Some images (for example, Windows base images) contain artifacts whose distribution is restricted by license. When these images are pushed to a registry, restricted artifacts are not included. This configuration override this behavior, and enables the daemon to push nondistributable artifacts to all registries whose resolved IP address is within the subnet described by the CIDR syntax. This option is useful when pushing images containing nondistributable artifacts to a registry on an air-gapped network so hosts on that network can pull the images without connecting to another server. > **Warning**: Nondistributable artifacts typically have restrictions > on how and where they can be distributed and shared. Only use this > feature to push artifacts to private registries and ensure that you > are in compliance with any terms that cover redistributing > nondistributable artifacts. type: "array" items: type: "string" example: ["::1/128", "127.0.0.0/8"] AllowNondistributableArtifactsHostnames: description: | List of registry hostnames to which nondistributable artifacts can be pushed, using the format `<hostname>[:<port>]` or `<IP address>[:<port>]`. Some images (for example, Windows base images) contain artifacts whose distribution is restricted by license. When these images are pushed to a registry, restricted artifacts are not included. This configuration override this behavior for the specified registries. 
This option is useful when pushing images containing nondistributable artifacts to a registry on an air-gapped network so hosts on that network can pull the images without connecting to another server. > **Warning**: Nondistributable artifacts typically have restrictions > on how and where they can be distributed and shared. Only use this > feature to push artifacts to private registries and ensure that you > are in compliance with any terms that cover redistributing > nondistributable artifacts. type: "array" items: type: "string" example: ["registry.internal.corp.example.com:3000", "[2001:db8:a0b:12f0::1]:443"] InsecureRegistryCIDRs: description: | List of IP ranges of insecure registries, using the CIDR syntax ([RFC 4632](https://tools.ietf.org/html/rfc4632)). Insecure registries accept un-encrypted (HTTP) and/or untrusted (HTTPS with certificates from unknown CAs) communication. By default, local registries (`127.0.0.0/8`) are configured as insecure. All other registries are secure. Communicating with an insecure registry is not possible if the daemon assumes that registry is secure. This configuration overrides this behavior, and allows insecure communication with registries whose resolved IP address is within the subnet described by the CIDR syntax. Registries can also be marked insecure by hostname. Those registries are listed under `IndexConfigs` and have their `Secure` field set to `false`. > **Warning**: Using this option can be useful when running a local > registry, but introduces security vulnerabilities. This option > should therefore ONLY be used for testing purposes. For increased > security, users should add their CA to their system's list of trusted > CAs instead of enabling this option. type: "array" items: type: "string" example: ["::1/128", "127.0.0.0/8"] IndexConfigs: type: "object" additionalProperties: $ref: "#/definitions/IndexInfo" example: "127.0.0.1:5000": "Name": "127.0.0.1:5000" "Mirrors": [] "Secure": false "Official": false "[2001:db8:a0b:12f0::1]:80": "Name": "[2001:db8:a0b:12f0::1]:80" "Mirrors": [] "Secure": false "Official": false "docker.io": Name: "docker.io" Mirrors: ["https://hub-mirror.corp.example.com:5000/"] Secure: true Official: true "registry.internal.corp.example.com:3000": Name: "registry.internal.corp.example.com:3000" Mirrors: [] Secure: false Official: false Mirrors: description: | List of registry URLs that act as a mirror for the official (`docker.io`) registry. type: "array" items: type: "string" example: - "https://hub-mirror.corp.example.com:5000/" - "https://[2001:db8:a0b:12f0::1]/" IndexInfo: description: IndexInfo contains information about a registry. type: "object" x-nullable: true properties: Name: description: | Name of the registry, such as "docker.io". type: "string" example: "docker.io" Mirrors: description: | List of mirrors, expressed as URIs. type: "array" items: type: "string" example: - "https://hub-mirror.corp.example.com:5000/" - "https://registry-2.docker.io/" - "https://registry-3.docker.io/" Secure: description: | Indicates if the registry is configured as a secure registry (i.e., it is not part of the list of insecure registries). If `false`, the registry is insecure. Insecure registries accept un-encrypted (HTTP) and/or untrusted (HTTPS with certificates from unknown CAs) communication. > **Warning**: Insecure registries can be useful when running a local > registry. However, because its use creates security vulnerabilities > it should ONLY be enabled for testing purposes. 
For increased > security, users should add their CA to their system's list of > trusted CAs instead of enabling this option. type: "boolean" example: true Official: description: | Indicates whether this is an official registry (i.e., Docker Hub / docker.io) type: "boolean" example: true Runtime: description: | Runtime describes an [OCI compliant](https://github.com/opencontainers/runtime-spec) runtime. The runtime is invoked by the daemon via the `containerd` daemon. OCI runtimes act as an interface to the Linux kernel namespaces, cgroups, and SELinux. type: "object" properties: path: description: | Name and, optionally, path of the OCI executable binary. If the path is omitted, the daemon searches the host's `$PATH` for the binary and uses the first result. type: "string" example: "/usr/local/bin/my-oci-runtime" runtimeArgs: description: | List of command-line arguments to pass to the runtime when invoked. type: "array" x-nullable: true items: type: "string" example: ["--debug", "--systemd-cgroup=false"] Commit: description: | Commit holds the Git-commit (SHA1) that a binary was built from, as reported in the version-string of external tools, such as `containerd`, or `runC`. type: "object" properties: ID: description: "Actual commit ID of external tool." type: "string" example: "cfb82a876ecc11b5ca0977d1733adbe58599088a" Expected: description: | Commit ID of external tool expected by dockerd as set at build time. type: "string" example: "2d41c047c83e09a6d61d464906feb2a2f3c52aa4" SwarmInfo: description: | Represents generic information about the swarm. type: "object" properties: NodeID: description: "Unique identifier for this node in the swarm." type: "string" default: "" example: "k67qz4598weg5unwwffg6z1m1" NodeAddr: description: | IP address at which this node can be reached by other nodes in the swarm. type: "string" default: "" example: "10.0.0.46" LocalNodeState: $ref: "#/definitions/LocalNodeState" ControlAvailable: type: "boolean" default: false example: true Error: type: "string" default: "" RemoteManagers: description: | List of IDs and addresses of other managers in the swarm. type: "array" default: null x-nullable: true items: $ref: "#/definitions/PeerNode" example: - NodeID: "71izy0goik036k48jg985xnds" Addr: "10.0.0.158:2377" - NodeID: "79y6h1o4gv8n120drcprv5nmc" Addr: "10.0.0.159:2377" - NodeID: "k67qz4598weg5unwwffg6z1m1" Addr: "10.0.0.46:2377" Nodes: description: "Total number of nodes in the swarm." type: "integer" x-nullable: true example: 4 Managers: description: "Total number of managers in the swarm." type: "integer" x-nullable: true example: 3 Cluster: $ref: "#/definitions/ClusterInfo" LocalNodeState: description: "Current local status of this node." type: "string" default: "" enum: - "" - "inactive" - "pending" - "active" - "error" - "locked" example: "active" PeerNode: description: "Represents a peer-node in the swarm" properties: NodeID: description: "Unique identifier for this node in the swarm." type: "string" Addr: description: | IP address and ports at which this node can be reached. type: "string" NetworkAttachmentConfig: description: | Specifies how a service should be attached to a particular network. type: "object" properties: Target: description: | The target network for attachment. Must be a network name or ID. type: "string" Aliases: description: | Discoverable alternate names for the service on this network. type: "array" items: type: "string" DriverOpts: description: | Driver attachment options for the network target. 
type: "object" additionalProperties: type: "string" paths: /containers/json: get: summary: "List containers" description: | Returns a list of containers. For details on the format, see the [inspect endpoint](#operation/ContainerInspect). Note that it uses a different, smaller representation of a container than inspecting a single container. For example, the list of linked containers is not propagated . operationId: "ContainerList" produces: - "application/json" parameters: - name: "all" in: "query" description: | Return all containers. By default, only running containers are shown. type: "boolean" default: false - name: "limit" in: "query" description: | Return this number of most recently created containers, including non-running ones. type: "integer" - name: "size" in: "query" description: | Return the size of container as fields `SizeRw` and `SizeRootFs`. type: "boolean" default: false - name: "filters" in: "query" description: | Filters to process on the container list, encoded as JSON (a `map[string][]string`). For example, `{"status": ["paused"]}` will only return paused containers. Available filters: - `ancestor`=(`<image-name>[:<tag>]`, `<image id>`, or `<image@digest>`) - `before`=(`<container id>` or `<container name>`) - `expose`=(`<port>[/<proto>]`|`<startport-endport>/[<proto>]`) - `exited=<int>` containers with exit code of `<int>` - `health`=(`starting`|`healthy`|`unhealthy`|`none`) - `id=<ID>` a container's ID - `isolation=`(`default`|`process`|`hyperv`) (Windows daemon only) - `is-task=`(`true`|`false`) - `label=key` or `label="key=value"` of a container label - `name=<name>` a container's name - `network`=(`<network id>` or `<network name>`) - `publish`=(`<port>[/<proto>]`|`<startport-endport>/[<proto>]`) - `since`=(`<container id>` or `<container name>`) - `status=`(`created`|`restarting`|`running`|`removing`|`paused`|`exited`|`dead`) - `volume`=(`<volume name>` or `<mount point destination>`) type: "string" responses: 200: description: "no error" schema: $ref: "#/definitions/ContainerSummary" examples: application/json: - Id: "8dfafdbc3a40" Names: - "/boring_feynman" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 1" Created: 1367854155 State: "Exited" Status: "Exit 0" Ports: - PrivatePort: 2222 PublicPort: 3333 Type: "tcp" Labels: com.example.vendor: "Acme" com.example.license: "GPL" com.example.version: "1.0" SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "2cdc4edb1ded3631c81f57966563e5c8525b81121bb3706a9a9a3ae102711f3f" Gateway: "172.17.0.1" IPAddress: "172.17.0.2" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:02" Mounts: - Name: "fac362...80535" Source: "/data" Destination: "/data" Driver: "local" Mode: "ro,Z" RW: false Propagation: "" - Id: "9cd87474be90" Names: - "/coolName" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 222222" Created: 1367854155 State: "Exited" Status: "Exit 0" Ports: [] Labels: {} SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "88eaed7b37b38c2a3f0c4bc796494fdf51b270c2d22656412a2ca5d559a64d7a" Gateway: "172.17.0.1" IPAddress: "172.17.0.8" IPPrefixLen: 16 IPv6Gateway: "" 
GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:08" Mounts: [] - Id: "3176a2479c92" Names: - "/sleepy_dog" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 3333333333333333" Created: 1367854154 State: "Exited" Status: "Exit 0" Ports: [] Labels: {} SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "8b27c041c30326d59cd6e6f510d4f8d1d570a228466f956edf7815508f78e30d" Gateway: "172.17.0.1" IPAddress: "172.17.0.6" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:06" Mounts: [] - Id: "4cb07b47f9fb" Names: - "/running_cat" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 444444444444444444444444444444444" Created: 1367854152 State: "Exited" Status: "Exit 0" Ports: [] Labels: {} SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "d91c7b2f0644403d7ef3095985ea0e2370325cd2332ff3a3225c4247328e66e9" Gateway: "172.17.0.1" IPAddress: "172.17.0.5" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:05" Mounts: [] 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Container"] /containers/create: post: summary: "Create a container" operationId: "ContainerCreate" consumes: - "application/json" - "application/octet-stream" produces: - "application/json" parameters: - name: "name" in: "query" description: | Assign the specified name to the container. Must match `/?[a-zA-Z0-9][a-zA-Z0-9_.-]+`. 
type: "string" pattern: "^/?[a-zA-Z0-9][a-zA-Z0-9_.-]+$" - name: "body" in: "body" description: "Container to create" schema: allOf: - $ref: "#/definitions/ContainerConfig" - type: "object" properties: HostConfig: $ref: "#/definitions/HostConfig" NetworkingConfig: $ref: "#/definitions/NetworkingConfig" example: Hostname: "" Domainname: "" User: "" AttachStdin: false AttachStdout: true AttachStderr: true Tty: false OpenStdin: false StdinOnce: false Env: - "FOO=bar" - "BAZ=quux" Cmd: - "date" Entrypoint: "" Image: "ubuntu" Labels: com.example.vendor: "Acme" com.example.license: "GPL" com.example.version: "1.0" Volumes: /volumes/data: {} WorkingDir: "" NetworkDisabled: false MacAddress: "12:34:56:78:9a:bc" ExposedPorts: 22/tcp: {} StopSignal: "SIGTERM" StopTimeout: 10 HostConfig: Binds: - "/tmp:/tmp" Links: - "redis3:redis" Memory: 0 MemorySwap: 0 MemoryReservation: 0 KernelMemory: 0 NanoCpus: 500000 CpuPercent: 80 CpuShares: 512 CpuPeriod: 100000 CpuRealtimePeriod: 1000000 CpuRealtimeRuntime: 10000 CpuQuota: 50000 CpusetCpus: "0,1" CpusetMems: "0,1" MaximumIOps: 0 MaximumIOBps: 0 BlkioWeight: 300 BlkioWeightDevice: - {} BlkioDeviceReadBps: - {} BlkioDeviceReadIOps: - {} BlkioDeviceWriteBps: - {} BlkioDeviceWriteIOps: - {} DeviceRequests: - Driver: "nvidia" Count: -1 DeviceIDs": ["0", "1", "GPU-fef8089b-4820-abfc-e83e-94318197576e"] Capabilities: [["gpu", "nvidia", "compute"]] Options: property1: "string" property2: "string" MemorySwappiness: 60 OomKillDisable: false OomScoreAdj: 500 PidMode: "" PidsLimit: 0 PortBindings: 22/tcp: - HostPort: "11022" PublishAllPorts: false Privileged: false ReadonlyRootfs: false Dns: - "8.8.8.8" DnsOptions: - "" DnsSearch: - "" VolumesFrom: - "parent" - "other:ro" CapAdd: - "NET_ADMIN" CapDrop: - "MKNOD" GroupAdd: - "newgroup" RestartPolicy: Name: "" MaximumRetryCount: 0 AutoRemove: true NetworkMode: "bridge" Devices: [] Ulimits: - {} LogConfig: Type: "json-file" Config: {} SecurityOpt: [] StorageOpt: {} CgroupParent: "" VolumeDriver: "" ShmSize: 67108864 NetworkingConfig: EndpointsConfig: isolated_nw: IPAMConfig: IPv4Address: "172.20.30.33" IPv6Address: "2001:db8:abcd::3033" LinkLocalIPs: - "169.254.34.68" - "fe80::3468" Links: - "container_1" - "container_2" Aliases: - "server_x" - "server_y" required: true responses: 201: description: "Container created successfully" schema: type: "object" title: "ContainerCreateResponse" description: "OK response to ContainerCreate operation" required: [Id, Warnings] properties: Id: description: "The ID of the created container" type: "string" x-nullable: false Warnings: description: "Warnings encountered when creating the container" type: "array" x-nullable: false items: type: "string" examples: application/json: Id: "e90e34656806" Warnings: [] 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such image" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such image: c2ada9df5af8" 409: description: "conflict" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Container"] /containers/{id}/json: get: summary: "Inspect a container" description: "Return low-level information about a container." 
operationId: "ContainerInspect" produces: - "application/json" responses: 200: description: "no error" schema: type: "object" title: "ContainerInspectResponse" properties: Id: description: "The ID of the container" type: "string" Created: description: "The time the container was created" type: "string" Path: description: "The path to the command being run" type: "string" Args: description: "The arguments to the command being run" type: "array" items: type: "string" State: x-nullable: true $ref: "#/definitions/ContainerState" Image: description: "The container's image ID" type: "string" ResolvConfPath: type: "string" HostnamePath: type: "string" HostsPath: type: "string" LogPath: type: "string" Name: type: "string" RestartCount: type: "integer" Driver: type: "string" Platform: type: "string" MountLabel: type: "string" ProcessLabel: type: "string" AppArmorProfile: type: "string" ExecIDs: description: "IDs of exec instances that are running in the container." type: "array" items: type: "string" x-nullable: true HostConfig: $ref: "#/definitions/HostConfig" GraphDriver: $ref: "#/definitions/GraphDriverData" SizeRw: description: | The size of files that have been created or changed by this container. type: "integer" format: "int64" SizeRootFs: description: "The total size of all the files in this container." type: "integer" format: "int64" Mounts: type: "array" items: $ref: "#/definitions/MountPoint" Config: $ref: "#/definitions/ContainerConfig" NetworkSettings: $ref: "#/definitions/NetworkSettings" examples: application/json: AppArmorProfile: "" Args: - "-c" - "exit 9" Config: AttachStderr: true AttachStdin: false AttachStdout: true Cmd: - "/bin/sh" - "-c" - "exit 9" Domainname: "" Env: - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" Healthcheck: Test: ["CMD-SHELL", "exit 0"] Hostname: "ba033ac44011" Image: "ubuntu" Labels: com.example.vendor: "Acme" com.example.license: "GPL" com.example.version: "1.0" MacAddress: "" NetworkDisabled: false OpenStdin: false StdinOnce: false Tty: false User: "" Volumes: /volumes/data: {} WorkingDir: "" StopSignal: "SIGTERM" StopTimeout: 10 Created: "2015-01-06T15:47:31.485331387Z" Driver: "devicemapper" ExecIDs: - "b35395de42bc8abd327f9dd65d913b9ba28c74d2f0734eeeae84fa1c616a0fca" - "3fc1232e5cd20c8de182ed81178503dc6437f4e7ef12b52cc5e8de020652f1c4" HostConfig: MaximumIOps: 0 MaximumIOBps: 0 BlkioWeight: 0 BlkioWeightDevice: - {} BlkioDeviceReadBps: - {} BlkioDeviceWriteBps: - {} BlkioDeviceReadIOps: - {} BlkioDeviceWriteIOps: - {} ContainerIDFile: "" CpusetCpus: "" CpusetMems: "" CpuPercent: 80 CpuShares: 0 CpuPeriod: 100000 CpuRealtimePeriod: 1000000 CpuRealtimeRuntime: 10000 Devices: [] DeviceRequests: - Driver: "nvidia" Count: -1 DeviceIDs": ["0", "1", "GPU-fef8089b-4820-abfc-e83e-94318197576e"] Capabilities: [["gpu", "nvidia", "compute"]] Options: property1: "string" property2: "string" IpcMode: "" LxcConf: [] Memory: 0 MemorySwap: 0 MemoryReservation: 0 KernelMemory: 0 OomKillDisable: false OomScoreAdj: 500 NetworkMode: "bridge" PidMode: "" PortBindings: {} Privileged: false ReadonlyRootfs: false PublishAllPorts: false RestartPolicy: MaximumRetryCount: 2 Name: "on-failure" LogConfig: Type: "json-file" Sysctls: net.ipv4.ip_forward: "1" Ulimits: - {} VolumeDriver: "" ShmSize: 67108864 HostnamePath: "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/hostname" HostsPath: "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/hosts" LogPath: 
"/var/lib/docker/containers/1eb5fabf5a03807136561b3c00adcd2992b535d624d5e18b6cdc6a6844d9767b/1eb5fabf5a03807136561b3c00adcd2992b535d624d5e18b6cdc6a6844d9767b-json.log" Id: "ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39" Image: "04c5d3b7b0656168630d3ba35d8889bd0e9caafcaeb3004d2bfbc47e7c5d35d2" MountLabel: "" Name: "/boring_euclid" NetworkSettings: Bridge: "" SandboxID: "" HairpinMode: false LinkLocalIPv6Address: "" LinkLocalIPv6PrefixLen: 0 SandboxKey: "" EndpointID: "" Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 IPAddress: "" IPPrefixLen: 0 IPv6Gateway: "" MacAddress: "" Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "7587b82f0dada3656fda26588aee72630c6fab1536d36e394b2bfbcf898c971d" Gateway: "172.17.0.1" IPAddress: "172.17.0.2" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:12:00:02" Path: "/bin/sh" ProcessLabel: "" ResolvConfPath: "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/resolv.conf" RestartCount: 1 State: Error: "" ExitCode: 9 FinishedAt: "2015-01-06T15:47:32.080254511Z" Health: Status: "healthy" FailingStreak: 0 Log: - Start: "2019-12-22T10:59:05.6385933Z" End: "2019-12-22T10:59:05.8078452Z" ExitCode: 0 Output: "" OOMKilled: false Dead: false Paused: false Pid: 0 Restarting: false Running: true StartedAt: "2015-01-06T15:47:32.072697474Z" Status: "running" Mounts: - Name: "fac362...80535" Source: "/data" Destination: "/data" Driver: "local" Mode: "ro,Z" RW: false Propagation: "" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "size" in: "query" type: "boolean" default: false description: "Return the size of container as fields `SizeRw` and `SizeRootFs`" tags: ["Container"] /containers/{id}/top: get: summary: "List processes running inside a container" description: | On Unix systems, this is done by running the `ps` command. This endpoint is not supported on Windows. operationId: "ContainerTop" responses: 200: description: "no error" schema: type: "object" title: "ContainerTopResponse" description: "OK response to ContainerTop operation" properties: Titles: description: "The ps column titles" type: "array" items: type: "string" Processes: description: | Each process running in the container, where each is process is an array of values corresponding to the titles. type: "array" items: type: "array" items: type: "string" examples: application/json: Titles: - "UID" - "PID" - "PPID" - "C" - "STIME" - "TTY" - "TIME" - "CMD" Processes: - - "root" - "13642" - "882" - "0" - "17:03" - "pts/0" - "00:00:00" - "/bin/bash" - - "root" - "13735" - "13642" - "0" - "17:06" - "pts/0" - "00:00:00" - "sleep 10" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "ps_args" in: "query" description: "The arguments to pass to `ps`. 
For example, `aux`" type: "string" default: "-ef" tags: ["Container"] /containers/{id}/logs: get: summary: "Get container logs" description: | Get `stdout` and `stderr` logs from a container. Note: This endpoint works only for containers with the `json-file` or `journald` logging driver. operationId: "ContainerLogs" responses: 200: description: | logs returned as a stream in response body. For the stream format, [see the documentation for the attach endpoint](#operation/ContainerAttach). Note that unlike the attach endpoint, the logs endpoint does not upgrade the connection and does not set Content-Type. schema: type: "string" format: "binary" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "follow" in: "query" description: "Keep connection after returning logs." type: "boolean" default: false - name: "stdout" in: "query" description: "Return logs from `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Return logs from `stderr`" type: "boolean" default: false - name: "since" in: "query" description: "Only return logs since this time, as a UNIX timestamp" type: "integer" default: 0 - name: "until" in: "query" description: "Only return logs before this time, as a UNIX timestamp" type: "integer" default: 0 - name: "timestamps" in: "query" description: "Add timestamps to every log line" type: "boolean" default: false - name: "tail" in: "query" description: | Only return this number of log lines from the end of the logs. Specify as an integer or `all` to output all log lines. type: "string" default: "all" tags: ["Container"] /containers/{id}/changes: get: summary: "Get changes on a container’s filesystem" description: | Returns which files in a container's filesystem have been added, deleted, or modified. The `Kind` of modification can be one of: - `0`: Modified - `1`: Added - `2`: Deleted operationId: "ContainerChanges" produces: ["application/json"] responses: 200: description: "The list of changes" schema: type: "array" items: type: "object" x-go-name: "ContainerChangeResponseItem" title: "ContainerChangeResponseItem" description: "change item in response to ContainerChanges operation" required: [Path, Kind] properties: Path: description: "Path to file that has changed" type: "string" x-nullable: false Kind: description: "Kind of change" type: "integer" format: "uint8" enum: [0, 1, 2] x-nullable: false examples: application/json: - Path: "/dev" Kind: 0 - Path: "/dev/kmsg" Kind: 1 - Path: "/test" Kind: 1 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/export: get: summary: "Export a container" description: "Export the contents of a container as a tarball." 
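As an illustration of the export endpoint above (not part of this specification), the Go sketch below streams the returned tarball straight to disk. The socket path, container name, and output filename are assumptions for the example.

```go
package main

import (
	"context"
	"io"
	"net"
	"net/http"
	"os"
)

func main() {
	httpc := &http.Client{Transport: &http.Transport{
		DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
			return (&net.Dialer{}).DialContext(ctx, "unix", "/var/run/docker.sock")
		},
	}}

	// GET /containers/{id}/export returns the container filesystem as a tar stream.
	resp, err := httpc.Get("http://localhost/containers/my-container/export")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	out, err := os.Create("my-container.tar")
	if err != nil {
		panic(err)
	}
	defer out.Close()

	// Stream the body straight to disk; the payload can be large.
	if _, err := io.Copy(out, resp.Body); err != nil {
		panic(err)
	}
}
```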
operationId: "ContainerExport" produces: - "application/octet-stream" responses: 200: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/stats: get: summary: "Get container stats based on resource usage" description: | This endpoint returns a live stream of a container’s resource usage statistics. The `precpu_stats` is the CPU statistic of the *previous* read, and is used to calculate the CPU usage percentage. It is not an exact copy of the `cpu_stats` field. If either `precpu_stats.online_cpus` or `cpu_stats.online_cpus` is nil then for compatibility with older daemons the length of the corresponding `cpu_usage.percpu_usage` array should be used. On a cgroup v2 host, the following fields are not set * `blkio_stats`: all fields other than `io_service_bytes_recursive` * `cpu_stats`: `cpu_usage.percpu_usage` * `memory_stats`: `max_usage` and `failcnt` Also, `memory_stats.stats` fields are incompatible with cgroup v1. To calculate the values shown by the `stats` command of the docker cli tool the following formulas can be used: * used_memory = `memory_stats.usage - memory_stats.stats.cache` * available_memory = `memory_stats.limit` * Memory usage % = `(used_memory / available_memory) * 100.0` * cpu_delta = `cpu_stats.cpu_usage.total_usage - precpu_stats.cpu_usage.total_usage` * system_cpu_delta = `cpu_stats.system_cpu_usage - precpu_stats.system_cpu_usage` * number_cpus = `lenght(cpu_stats.cpu_usage.percpu_usage)` or `cpu_stats.online_cpus` * CPU usage % = `(cpu_delta / system_cpu_delta) * number_cpus * 100.0` operationId: "ContainerStats" produces: ["application/json"] responses: 200: description: "no error" schema: type: "object" examples: application/json: read: "2015-01-08T22:57:31.547920715Z" pids_stats: current: 3 networks: eth0: rx_bytes: 5338 rx_dropped: 0 rx_errors: 0 rx_packets: 36 tx_bytes: 648 tx_dropped: 0 tx_errors: 0 tx_packets: 8 eth5: rx_bytes: 4641 rx_dropped: 0 rx_errors: 0 rx_packets: 26 tx_bytes: 690 tx_dropped: 0 tx_errors: 0 tx_packets: 9 memory_stats: stats: total_pgmajfault: 0 cache: 0 mapped_file: 0 total_inactive_file: 0 pgpgout: 414 rss: 6537216 total_mapped_file: 0 writeback: 0 unevictable: 0 pgpgin: 477 total_unevictable: 0 pgmajfault: 0 total_rss: 6537216 total_rss_huge: 6291456 total_writeback: 0 total_inactive_anon: 0 rss_huge: 6291456 hierarchical_memory_limit: 67108864 total_pgfault: 964 total_active_file: 0 active_anon: 6537216 total_active_anon: 6537216 total_pgpgout: 414 total_cache: 0 inactive_anon: 0 active_file: 0 pgfault: 964 inactive_file: 0 total_pgpgin: 477 max_usage: 6651904 usage: 6537216 failcnt: 0 limit: 67108864 blkio_stats: {} cpu_stats: cpu_usage: percpu_usage: - 8646879 - 24472255 - 36438778 - 30657443 usage_in_usermode: 50000000 total_usage: 100215355 usage_in_kernelmode: 30000000 system_cpu_usage: 739306590000000 online_cpus: 4 throttling_data: periods: 0 throttled_periods: 0 throttled_time: 0 precpu_stats: cpu_usage: percpu_usage: - 8646879 - 24350896 - 36438778 - 30657443 usage_in_usermode: 50000000 total_usage: 100093996 usage_in_kernelmode: 30000000 system_cpu_usage: 9492140000000 online_cpus: 4 throttling_data: periods: 0 throttled_periods: 0 throttled_time: 0 404: description: "no such 
container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "stream" in: "query" description: | Stream the output. If false, the stats will be output once and then it will disconnect. type: "boolean" default: true - name: "one-shot" in: "query" description: | Only get a single stat instead of waiting for 2 cycles. Must be used with `stream=false`. type: "boolean" default: false tags: ["Container"] /containers/{id}/resize: post: summary: "Resize a container TTY" description: "Resize the TTY for a container." operationId: "ContainerResize" consumes: - "application/octet-stream" produces: - "text/plain" responses: 200: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "cannot resize container" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "h" in: "query" description: "Height of the TTY session in characters" type: "integer" - name: "w" in: "query" description: "Width of the TTY session in characters" type: "integer" tags: ["Container"] /containers/{id}/start: post: summary: "Start a container" operationId: "ContainerStart" responses: 204: description: "no error" 304: description: "container already started" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "detachKeys" in: "query" description: | Override the key sequence for detaching a container. Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,` or `_`. 
type: "string" tags: ["Container"] /containers/{id}/stop: post: summary: "Stop a container" operationId: "ContainerStop" responses: 204: description: "no error" 304: description: "container already stopped" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "t" in: "query" description: "Number of seconds to wait before killing the container" type: "integer" tags: ["Container"] /containers/{id}/restart: post: summary: "Restart a container" operationId: "ContainerRestart" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "t" in: "query" description: "Number of seconds to wait before killing the container" type: "integer" tags: ["Container"] /containers/{id}/kill: post: summary: "Kill a container" description: | Send a POSIX signal to a container, defaulting to killing to the container. operationId: "ContainerKill" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "container is not running" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "Container d37cde0fe4ad63c3a7252023b2f9800282894247d145cb5933ddf6e52cc03a28 is not running" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "signal" in: "query" description: "Signal to send to the container as an integer or string (e.g. `SIGINT`)" type: "string" default: "SIGKILL" tags: ["Container"] /containers/{id}/update: post: summary: "Update a container" description: | Change various configuration options of a container without having to recreate it. operationId: "ContainerUpdate" consumes: ["application/json"] produces: ["application/json"] responses: 200: description: "The container has been updated." 
schema: type: "object" title: "ContainerUpdateResponse" description: "OK response to ContainerUpdate operation" properties: Warnings: type: "array" items: type: "string" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "update" in: "body" required: true schema: allOf: - $ref: "#/definitions/Resources" - type: "object" properties: RestartPolicy: $ref: "#/definitions/RestartPolicy" example: BlkioWeight: 300 CpuShares: 512 CpuPeriod: 100000 CpuQuota: 50000 CpuRealtimePeriod: 1000000 CpuRealtimeRuntime: 10000 CpusetCpus: "0,1" CpusetMems: "0" Memory: 314572800 MemorySwap: 514288000 MemoryReservation: 209715200 KernelMemory: 52428800 RestartPolicy: MaximumRetryCount: 4 Name: "on-failure" tags: ["Container"] /containers/{id}/rename: post: summary: "Rename a container" operationId: "ContainerRename" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "name already in use" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "name" in: "query" required: true description: "New name for the container" type: "string" tags: ["Container"] /containers/{id}/pause: post: summary: "Pause a container" description: | Use the freezer cgroup to suspend all processes in a container. Traditionally, when suspending a process the `SIGSTOP` signal is used, which is observable by the process being suspended. With the freezer cgroup the process is unaware, and unable to capture, that it is being suspended, and subsequently resumed. operationId: "ContainerPause" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/unpause: post: summary: "Unpause a container" description: "Resume a container which has been paused." operationId: "ContainerUnpause" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/attach: post: summary: "Attach to a container" description: | Attach to a container to read its output or send it input. You can attach to the same container multiple times and you can reattach to containers that have been detached. Either the `stream` or `logs` parameter must be `true` for this endpoint to do anything. See the [documentation for the `docker attach` command](https://docs.docker.com/engine/reference/commandline/attach/) for more details. 
### Hijacking This endpoint hijacks the HTTP connection to transport `stdin`, `stdout`, and `stderr` on the same socket. This is the response from the daemon for an attach request: ``` HTTP/1.1 200 OK Content-Type: application/vnd.docker.raw-stream [STREAM] ``` After the headers and two new lines, the TCP connection can now be used for raw, bidirectional communication between the client and server. To hint potential proxies about connection hijacking, the Docker client can also optionally send connection upgrade headers. For example, the client sends this request to upgrade the connection: ``` POST /containers/16253994b7c4/attach?stream=1&stdout=1 HTTP/1.1 Upgrade: tcp Connection: Upgrade ``` The Docker daemon will respond with a `101 UPGRADED` response, and will similarly follow with the raw stream: ``` HTTP/1.1 101 UPGRADED Content-Type: application/vnd.docker.raw-stream Connection: Upgrade Upgrade: tcp [STREAM] ``` ### Stream format When the TTY setting is disabled in [`POST /containers/create`](#operation/ContainerCreate), the stream over the hijacked connection is multiplexed to separate out `stdout` and `stderr`. The stream consists of a series of frames, each containing a header and a payload. The header identifies the stream that the frame belongs to (`stdout` or `stderr`). It also contains the size of the associated frame encoded in the last four bytes (`uint32`). It is encoded on the first eight bytes like this: ```go header := [8]byte{STREAM_TYPE, 0, 0, 0, SIZE1, SIZE2, SIZE3, SIZE4} ``` `STREAM_TYPE` can be: - 0: `stdin` (is written on `stdout`) - 1: `stdout` - 2: `stderr` `SIZE1, SIZE2, SIZE3, SIZE4` are the four bytes of the `uint32` size encoded as big endian. Following the header is the payload, which is the specified number of bytes of `STREAM_TYPE`. The simplest way to implement this protocol is the following: 1. Read 8 bytes. 2. Choose `stdout` or `stderr` depending on the first byte. 3. Extract the frame size from the last four bytes. 4. Read the extracted size and output it on the correct output. 5. Goto 1. ### Stream format when using a TTY When the TTY setting is enabled in [`POST /containers/create`](#operation/ContainerCreate), the stream is not multiplexed. The data exchanged over the hijacked connection is simply the raw data from the process PTY and client's `stdin`. operationId: "ContainerAttach" produces: - "application/vnd.docker.raw-stream" responses: 101: description: "no error, hints proxy about hijacking" 200: description: "no error, no upgrade header found" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "detachKeys" in: "query" description: | Override the key sequence for detaching a container. Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,` or `_`. type: "string" - name: "logs" in: "query" description: | Replay previous logs from the container. This is useful for attaching to a container that has started and you want to output everything since the container started. If `stream` is also enabled, once all the previous output has been returned, it will seamlessly transition into streaming current output.
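The demultiplexing steps described above translate directly into code. The Go sketch below is illustrative only and not part of this specification; it reads frames from any `io.Reader` carrying the multiplexed stream (for example, a hijacked attach connection or a logs response body).

```go
package main

import (
	"encoding/binary"
	"io"
	"os"
)

// demux copies a multiplexed attach/logs stream from r to stdout and stderr,
// following the 8-byte frame header described above.
func demux(r io.Reader) error {
	hdr := make([]byte, 8)
	for {
		// Step 1: read the 8-byte header.
		if _, err := io.ReadFull(r, hdr); err != nil {
			if err == io.EOF {
				return nil
			}
			return err
		}
		// Step 2: choose the output based on the first byte (STREAM_TYPE).
		var dst io.Writer = os.Stdout
		if hdr[0] == 2 { // STREAM_TYPE 2 is stderr
			dst = os.Stderr
		}
		// Step 3: the frame size is a big-endian uint32 in the last four bytes.
		size := int64(binary.BigEndian.Uint32(hdr[4:8]))
		// Step 4: copy exactly that many payload bytes, then loop (step 5).
		if _, err := io.CopyN(dst, r, size); err != nil {
			return err
		}
	}
}

func main() {
	// For illustration, demultiplex a stream piped in on stdin
	// (for example, a saved response body from this endpoint).
	if err := demux(os.Stdin); err != nil {
		panic(err)
	}
}
```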
type: "boolean" default: false - name: "stream" in: "query" description: | Stream attached streams from the time the request was made onwards. type: "boolean" default: false - name: "stdin" in: "query" description: "Attach to `stdin`" type: "boolean" default: false - name: "stdout" in: "query" description: "Attach to `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Attach to `stderr`" type: "boolean" default: false tags: ["Container"] /containers/{id}/attach/ws: get: summary: "Attach to a container via a websocket" operationId: "ContainerAttachWebsocket" responses: 101: description: "no error, hints proxy about hijacking" 200: description: "no error, no upgrade header found" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "detachKeys" in: "query" description: | Override the key sequence for detaching a container.Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,`, or `_`. type: "string" - name: "logs" in: "query" description: "Return logs" type: "boolean" default: false - name: "stream" in: "query" description: "Return stream" type: "boolean" default: false - name: "stdin" in: "query" description: "Attach to `stdin`" type: "boolean" default: false - name: "stdout" in: "query" description: "Attach to `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Attach to `stderr`" type: "boolean" default: false tags: ["Container"] /containers/{id}/wait: post: summary: "Wait for a container" description: "Block until a container stops, then returns the exit code." operationId: "ContainerWait" produces: ["application/json"] responses: 200: description: "The container has exit." schema: type: "object" title: "ContainerWaitResponse" description: "OK response to ContainerWait operation" required: [StatusCode] properties: StatusCode: description: "Exit code of the container" type: "integer" x-nullable: false Error: description: "container waiting error, if any" type: "object" properties: Message: description: "Details of an error" type: "string" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "condition" in: "query" description: | Wait until a container state reaches the given condition, either 'not-running' (default), 'next-exit', or 'removed'. type: "string" default: "not-running" tags: ["Container"] /containers/{id}: delete: summary: "Remove a container" operationId: "ContainerDelete" responses: 204: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "conflict" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: | You cannot remove a running container: c2ada9df5af8. 
Stop the container before attempting removal or force remove 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "v" in: "query" description: "Remove anonymous volumes associated with the container." type: "boolean" default: false - name: "force" in: "query" description: "If the container is running, kill it before removing it." type: "boolean" default: false - name: "link" in: "query" description: "Remove the specified link associated with the container." type: "boolean" default: false tags: ["Container"] /containers/{id}/archive: head: summary: "Get information about files in a container" description: | A response header `X-Docker-Container-Path-Stat` is returned, containing a base64 - encoded JSON object with some filesystem header information about the path. operationId: "ContainerArchiveInfo" responses: 200: description: "no error" headers: X-Docker-Container-Path-Stat: type: "string" description: | A base64 - encoded JSON object with some filesystem header information about the path 400: description: "Bad parameter" schema: allOf: - $ref: "#/definitions/ErrorResponse" - type: "object" properties: message: description: | The error message. Either "must specify path parameter" (path cannot be empty) or "not a directory" (path was asserted to be a directory but exists as a file). type: "string" x-nullable: false 404: description: "Container or path does not exist" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "path" in: "query" required: true description: "Resource in the container’s filesystem to archive." type: "string" tags: ["Container"] get: summary: "Get an archive of a filesystem resource in a container" description: "Get a tar archive of a resource in the filesystem of container id." operationId: "ContainerArchive" produces: ["application/x-tar"] responses: 200: description: "no error" 400: description: "Bad parameter" schema: allOf: - $ref: "#/definitions/ErrorResponse" - type: "object" properties: message: description: | The error message. Either "must specify path parameter" (path cannot be empty) or "not a directory" (path was asserted to be a directory but exists as a file). type: "string" x-nullable: false 404: description: "Container or path does not exist" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "path" in: "query" required: true description: "Resource in the container’s filesystem to archive." type: "string" tags: ["Container"] put: summary: "Extract an archive of files or folders to a directory in a container" description: "Upload a tar archive to be extracted to a path in the filesystem of container id." 
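As an illustration of the archive upload endpoint above (not part of this specification), the Go sketch below builds a small tar archive in memory and uploads it with `PUT /containers/{id}/archive`. The socket path, container name, target path `/tmp`, and file contents are assumptions for the example.

```go
package main

import (
	"archive/tar"
	"bytes"
	"context"
	"net"
	"net/http"
	"net/url"
)

func main() {
	// Build a small tar archive in memory containing a single file.
	var buf bytes.Buffer
	tw := tar.NewWriter(&buf)
	content := []byte("hello from the API\n")
	if err := tw.WriteHeader(&tar.Header{Name: "hello.txt", Mode: 0o644, Size: int64(len(content))}); err != nil {
		panic(err)
	}
	if _, err := tw.Write(content); err != nil {
		panic(err)
	}
	if err := tw.Close(); err != nil {
		panic(err)
	}

	httpc := &http.Client{Transport: &http.Transport{
		DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
			return (&net.Dialer{}).DialContext(ctx, "unix", "/var/run/docker.sock")
		},
	}}

	// PUT /containers/{id}/archive?path=... extracts the archive at the given path.
	endpoint := "http://localhost/containers/my-container/archive?path=" + url.QueryEscape("/tmp")
	req, err := http.NewRequest(http.MethodPut, endpoint, &buf)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Content-Type", "application/x-tar")
	resp, err := httpc.Do(req)
	if err != nil {
		panic(err)
	}
	resp.Body.Close()
}
```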
operationId: "PutContainerArchive" consumes: ["application/x-tar", "application/octet-stream"] responses: 200: description: "The content was extracted successfully" 400: description: "Bad parameter" schema: $ref: "#/definitions/ErrorResponse" 403: description: "Permission denied, the volume or container rootfs is marked as read-only." schema: $ref: "#/definitions/ErrorResponse" 404: description: "No such container or path does not exist inside the container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "path" in: "query" required: true description: "Path to a directory in the container to extract the archive’s contents into. " type: "string" - name: "noOverwriteDirNonDir" in: "query" description: | If `1`, `true`, or `True` then it will be an error if unpacking the given content would cause an existing directory to be replaced with a non-directory and vice versa. type: "string" - name: "copyUIDGID" in: "query" description: | If `1`, `true`, then it will copy UID/GID maps to the dest file or dir type: "string" - name: "inputStream" in: "body" required: true description: | The input stream must be a tar archive compressed with one of the following algorithms: `identity` (no compression), `gzip`, `bzip2`, or `xz`. schema: type: "string" format: "binary" tags: ["Container"] /containers/prune: post: summary: "Delete stopped containers" produces: - "application/json" operationId: "ContainerPrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `until=<timestamp>` Prune containers created before this timestamp. The `<timestamp>` can be Unix timestamps, date formatted timestamps, or Go duration strings (e.g. `10m`, `1h30m`) computed relative to the daemon machine’s time. - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune containers with (or without, in case `label!=...` is used) the specified labels. type: "string" responses: 200: description: "No error" schema: type: "object" title: "ContainerPruneResponse" properties: ContainersDeleted: description: "Container IDs that were deleted" type: "array" items: type: "string" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Container"] /images/json: get: summary: "List Images" description: "Returns a list of images on the server. Note that it uses a different, smaller representation of an image than inspecting a single image." 
operationId: "ImageList" produces: - "application/json" responses: 200: description: "Summary image data for the images matching the query" schema: type: "array" items: $ref: "#/definitions/ImageSummary" examples: application/json: - Id: "sha256:e216a057b1cb1efc11f8a268f37ef62083e70b1b38323ba252e25ac88904a7e8" ParentId: "" RepoTags: - "ubuntu:12.04" - "ubuntu:precise" RepoDigests: - "ubuntu@sha256:992069aee4016783df6345315302fa59681aae51a8eeb2f889dea59290f21787" Created: 1474925151 Size: 103579269 VirtualSize: 103579269 SharedSize: 0 Labels: {} Containers: 2 - Id: "sha256:3e314f95dcace0f5e4fd37b10862fe8398e3c60ed36600bc0ca5fda78b087175" ParentId: "" RepoTags: - "ubuntu:12.10" - "ubuntu:quantal" RepoDigests: - "ubuntu@sha256:002fba3e3255af10be97ea26e476692a7ebed0bb074a9ab960b2e7a1526b15d7" - "ubuntu@sha256:68ea0200f0b90df725d99d823905b04cf844f6039ef60c60bf3e019915017bd3" Created: 1403128455 Size: 172064416 VirtualSize: 172064416 SharedSize: 0 Labels: {} Containers: 5 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "all" in: "query" description: "Show all images. Only images from a final layer (no children) are shown by default." type: "boolean" default: false - name: "filters" in: "query" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the images list. Available filters: - `before`=(`<image-name>[:<tag>]`, `<image id>` or `<image@digest>`) - `dangling=true` - `label=key` or `label="key=value"` of an image label - `reference`=(`<image-name>[:<tag>]`) - `since`=(`<image-name>[:<tag>]`, `<image id>` or `<image@digest>`) type: "string" - name: "shared-size" in: "query" description: "Compute and show shared size as a `SharedSize` field on each image." type: "boolean" default: false - name: "digests" in: "query" description: "Show digest information as a `RepoDigests` field on each image." type: "boolean" default: false tags: ["Image"] /build: post: summary: "Build an image" description: | Build an image from a tar archive with a `Dockerfile` in it. The `Dockerfile` specifies how the image is built from the tar archive. It is typically in the archive's root, but can be at a different path or have a different name by specifying the `dockerfile` parameter. [See the `Dockerfile` reference for more information](https://docs.docker.com/engine/reference/builder/). The Docker daemon performs a preliminary validation of the `Dockerfile` before starting the build, and returns an error if the syntax is incorrect. After that, each instruction is run one-by-one until the ID of the new image is output. The build is canceled if the client drops the connection by quitting or being killed. operationId: "ImageBuild" consumes: - "application/octet-stream" produces: - "application/json" parameters: - name: "inputStream" in: "body" description: "A tar archive compressed with one of the following algorithms: identity (no compression), gzip, bzip2, xz." schema: type: "string" format: "binary" - name: "dockerfile" in: "query" description: "Path within the build context to the `Dockerfile`. This is ignored if `remote` is specified and points to an external `Dockerfile`." type: "string" default: "Dockerfile" - name: "t" in: "query" description: "A name and optional tag to apply to the image in the `name:tag` format. If you omit the tag the default `latest` value is assumed. You can provide several `t` parameters." 
type: "string" - name: "extrahosts" in: "query" description: "Extra hosts to add to /etc/hosts" type: "string" - name: "remote" in: "query" description: "A Git repository URI or HTTP/HTTPS context URI. If the URI points to a single text file, the file’s contents are placed into a file called `Dockerfile` and the image is built from that file. If the URI points to a tarball, the file is downloaded by the daemon and the contents therein used as the context for the build. If the URI points to a tarball and the `dockerfile` parameter is also specified, there must be a file with the corresponding path inside the tarball." type: "string" - name: "q" in: "query" description: "Suppress verbose build output." type: "boolean" default: false - name: "nocache" in: "query" description: "Do not use the cache when building the image." type: "boolean" default: false - name: "cachefrom" in: "query" description: "JSON array of images used for build cache resolution." type: "string" - name: "pull" in: "query" description: "Attempt to pull the image even if an older image exists locally." type: "string" - name: "rm" in: "query" description: "Remove intermediate containers after a successful build." type: "boolean" default: true - name: "forcerm" in: "query" description: "Always remove intermediate containers, even upon failure." type: "boolean" default: false - name: "memory" in: "query" description: "Set memory limit for build." type: "integer" - name: "memswap" in: "query" description: "Total memory (memory + swap). Set as `-1` to disable swap." type: "integer" - name: "cpushares" in: "query" description: "CPU shares (relative weight)." type: "integer" - name: "cpusetcpus" in: "query" description: "CPUs in which to allow execution (e.g., `0-3`, `0,1`)." type: "string" - name: "cpuperiod" in: "query" description: "The length of a CPU period in microseconds." type: "integer" - name: "cpuquota" in: "query" description: "Microseconds of CPU time that the container can get in a CPU period." type: "integer" - name: "buildargs" in: "query" description: > JSON map of string pairs for build-time variables. Users pass these values at build-time. Docker uses the buildargs as the environment context for commands run via the `Dockerfile` RUN instruction, or for variable expansion in other `Dockerfile` instructions. This is not meant for passing secret values. For example, the build arg `FOO=bar` would become `{"FOO":"bar"}` in JSON. This would result in the query parameter `buildargs={"FOO":"bar"}`. Note that `{"FOO":"bar"}` should be URI component encoded. [Read more about the buildargs instruction.](https://docs.docker.com/engine/reference/builder/#arg) type: "string" - name: "shmsize" in: "query" description: "Size of `/dev/shm` in bytes. The size must be greater than 0. If omitted the system uses 64MB." type: "integer" - name: "squash" in: "query" description: "Squash the resulting images layers into a single layer. *(Experimental release only.)*" type: "boolean" - name: "labels" in: "query" description: "Arbitrary key/value labels to set on the image, as a JSON map of string pairs." type: "string" - name: "networkmode" in: "query" description: | Sets the networking mode for the run commands during build. Supported standard values are: `bridge`, `host`, `none`, and `container:<name|id>`. Any other value is taken as a custom network's name or ID to which this container should connect to. 
type: "string" - name: "Content-type" in: "header" type: "string" enum: - "application/x-tar" default: "application/x-tar" - name: "X-Registry-Config" in: "header" description: | This is a base64-encoded JSON object with auth configurations for multiple registries that a build may refer to. The key is a registry URL, and the value is an auth configuration object, [as described in the authentication section](#section/Authentication). For example: ``` { "docker.example.com": { "username": "janedoe", "password": "hunter2" }, "https://index.docker.io/v1/": { "username": "mobydock", "password": "conta1n3rize14" } } ``` Only the registry domain name (and port if not the default 443) are required. However, for legacy reasons, the Docker Hub registry must be specified with both a `https://` prefix and a `/v1/` suffix even though Docker will prefer to use the v2 registry API. type: "string" - name: "platform" in: "query" description: "Platform in the format os[/arch[/variant]]" type: "string" default: "" - name: "target" in: "query" description: "Target build stage" type: "string" default: "" - name: "outputs" in: "query" description: "BuildKit output configuration" type: "string" default: "" responses: 200: description: "no error" 400: description: "Bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Image"] /build/prune: post: summary: "Delete builder cache" produces: - "application/json" operationId: "BuildPrune" parameters: - name: "keep-storage" in: "query" description: "Amount of disk space in bytes to keep for cache" type: "integer" format: "int64" - name: "all" in: "query" type: "boolean" description: "Remove all types of build cache" - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the list of build cache objects. Available filters: - `until=<duration>`: duration relative to daemon's time, during which build cache was not used, in Go's duration format (e.g., '24h') - `id=<id>` - `parent=<id>` - `type=<string>` - `description=<string>` - `inuse` - `shared` - `private` responses: 200: description: "No error" schema: type: "object" title: "BuildPruneResponse" properties: CachesDeleted: type: "array" items: description: "ID of build cache object" type: "string" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Image"] /images/create: post: summary: "Create an image" description: "Create an image by either pulling it from a registry or importing it." operationId: "ImageCreate" consumes: - "text/plain" - "application/octet-stream" produces: - "application/json" responses: 200: description: "no error" 404: description: "repository does not exist or no read access" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "fromImage" in: "query" description: "Name of the image to pull. The name may include a tag or digest. This parameter may only be used when pulling an image. The pull is cancelled if the HTTP connection is closed." type: "string" - name: "fromSrc" in: "query" description: "Source to import. The value may be a URL from which the image can be retrieved or `-` to read the image from the request body. This parameter may only be used when importing an image." 
type: "string" - name: "repo" in: "query" description: "Repository name given to an image when it is imported. The repo may include a tag. This parameter may only be used when importing an image." type: "string" - name: "tag" in: "query" description: "Tag or digest. If empty when pulling an image, this causes all tags for the given image to be pulled." type: "string" - name: "message" in: "query" description: "Set commit message for imported image." type: "string" - name: "inputImage" in: "body" description: "Image content if the value `-` has been specified in fromSrc query parameter" schema: type: "string" required: false - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration. Refer to the [authentication section](#section/Authentication) for details. type: "string" - name: "platform" in: "query" description: "Platform in the format os[/arch[/variant]]" type: "string" default: "" tags: ["Image"] /images/{name}/json: get: summary: "Inspect an image" description: "Return low-level information about an image." operationId: "ImageInspect" produces: - "application/json" responses: 200: description: "No error" schema: $ref: "#/definitions/Image" examples: application/json: Id: "sha256:85f05633ddc1c50679be2b16a0479ab6f7637f8884e0cfe0f4d20e1ebb3d6e7c" Container: "cb91e48a60d01f1e27028b4fc6819f4f290b3cf12496c8176ec714d0d390984a" Comment: "" Os: "linux" Architecture: "amd64" Parent: "sha256:91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c" ContainerConfig: Tty: false Hostname: "e611e15f9c9d" Domainname: "" AttachStdout: false PublishService: "" AttachStdin: false OpenStdin: false StdinOnce: false NetworkDisabled: false OnBuild: [] Image: "91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c" User: "" WorkingDir: "" MacAddress: "" AttachStderr: false Labels: com.example.license: "GPL" com.example.version: "1.0" com.example.vendor: "Acme" Env: - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" Cmd: - "/bin/sh" - "-c" - "#(nop) LABEL com.example.vendor=Acme com.example.license=GPL com.example.version=1.0" DockerVersion: "1.9.0-dev" VirtualSize: 188359297 Size: 0 Author: "" Created: "2015-09-10T08:30:53.26995814Z" GraphDriver: Name: "aufs" Data: {} RepoDigests: - "localhost:5000/test/busybox/example@sha256:cbbf2f9a99b47fc460d422812b6a5adff7dfee951d8fa2e4a98caa0382cfbdbf" RepoTags: - "example:1.0" - "example:latest" - "example:stable" Config: Image: "91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c" NetworkDisabled: false OnBuild: [] StdinOnce: false PublishService: "" AttachStdin: false OpenStdin: false Domainname: "" AttachStdout: false Tty: false Hostname: "e611e15f9c9d" Cmd: - "/bin/bash" Env: - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" Labels: com.example.vendor: "Acme" com.example.version: "1.0" com.example.license: "GPL" MacAddress: "" AttachStderr: false WorkingDir: "" User: "" RootFS: Type: "layers" Layers: - "sha256:1834950e52ce4d5a88a1bbd131c537f4d0e56d10ff0dd69e66be3b7dfa9df7e6" - "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such image: someimage (tag: latest)" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or id" type: "string" required: true tags: ["Image"] /images/{name}/history: get: summary: "Get the history of an image" 
description: "Return parent layers of an image." operationId: "ImageHistory" produces: ["application/json"] responses: 200: description: "List of image layers" schema: type: "array" items: type: "object" x-go-name: HistoryResponseItem title: "HistoryResponseItem" description: "individual image layer information in response to ImageHistory operation" required: [Id, Created, CreatedBy, Tags, Size, Comment] properties: Id: type: "string" x-nullable: false Created: type: "integer" format: "int64" x-nullable: false CreatedBy: type: "string" x-nullable: false Tags: type: "array" items: type: "string" Size: type: "integer" format: "int64" x-nullable: false Comment: type: "string" x-nullable: false examples: application/json: - Id: "3db9c44f45209632d6050b35958829c3a2aa256d81b9a7be45b362ff85c54710" Created: 1398108230 CreatedBy: "/bin/sh -c #(nop) ADD file:eb15dbd63394e063b805a3c32ca7bf0266ef64676d5a6fab4801f2e81e2a5148 in /" Tags: - "ubuntu:lucid" - "ubuntu:10.04" Size: 182964289 Comment: "" - Id: "6cfa4d1f33fb861d4d114f43b25abd0ac737509268065cdfd69d544a59c85ab8" Created: 1398108222 CreatedBy: "/bin/sh -c #(nop) MAINTAINER Tianon Gravi <[email protected]> - mkimage-debootstrap.sh -i iproute,iputils-ping,ubuntu-minimal -t lucid.tar.xz lucid http://archive.ubuntu.com/ubuntu/" Tags: [] Size: 0 Comment: "" - Id: "511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158" Created: 1371157430 CreatedBy: "" Tags: - "scratch12:latest" - "scratch:latest" Size: 0 Comment: "Imported from -" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID" type: "string" required: true tags: ["Image"] /images/{name}/push: post: summary: "Push an image" description: | Push an image to a registry. If you wish to push an image on to a private registry, that image must already have a tag which references the registry. For example, `registry.example.com/myimage:latest`. The push is cancelled if the HTTP connection is closed. operationId: "ImagePush" consumes: - "application/octet-stream" responses: 200: description: "No error" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID." type: "string" required: true - name: "tag" in: "query" description: "The tag to associate with the image on the registry." type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration. Refer to the [authentication section](#section/Authentication) for details. type: "string" required: true tags: ["Image"] /images/{name}/tag: post: summary: "Tag an image" description: "Tag an image so that it becomes part of a repository." operationId: "ImageTag" responses: 201: description: "No error" 400: description: "Bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Conflict" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID to tag." type: "string" required: true - name: "repo" in: "query" description: "The repository to tag in. For example, `someuser/someimage`." 
type: "string" - name: "tag" in: "query" description: "The name of the new tag." type: "string" tags: ["Image"] /images/{name}: delete: summary: "Remove an image" description: | Remove an image, along with any untagged parent images that were referenced by that image. Images can't be removed if they have descendant images, are being used by a running container or are being used by a build. operationId: "ImageDelete" produces: ["application/json"] responses: 200: description: "The image was deleted successfully" schema: type: "array" items: $ref: "#/definitions/ImageDeleteResponseItem" examples: application/json: - Untagged: "3e2f21a89f" - Deleted: "3e2f21a89f" - Deleted: "53b4f83ac9" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Conflict" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID" type: "string" required: true - name: "force" in: "query" description: "Remove the image even if it is being used by stopped containers or has other tags" type: "boolean" default: false - name: "noprune" in: "query" description: "Do not delete untagged parent images" type: "boolean" default: false tags: ["Image"] /images/search: get: summary: "Search images" description: "Search for an image on Docker Hub." operationId: "ImageSearch" produces: - "application/json" responses: 200: description: "No error" schema: type: "array" items: type: "object" title: "ImageSearchResponseItem" properties: description: type: "string" is_official: type: "boolean" is_automated: type: "boolean" name: type: "string" star_count: type: "integer" examples: application/json: - description: "" is_official: false is_automated: false name: "wma55/u1210sshd" star_count: 0 - description: "" is_official: false is_automated: false name: "jdswinbank/sshd" star_count: 0 - description: "" is_official: false is_automated: false name: "vgauthier/sshd" star_count: 0 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "term" in: "query" description: "Term to search" type: "string" required: true - name: "limit" in: "query" description: "Maximum number of results to return" type: "integer" - name: "filters" in: "query" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the images list. Available filters: - `is-automated=(true|false)` - `is-official=(true|false)` - `stars=<number>` Matches images that has at least 'number' stars. type: "string" tags: ["Image"] /images/prune: post: summary: "Delete unused images" produces: - "application/json" operationId: "ImagePrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `dangling=<boolean>` When set to `true` (or `1`), prune only unused *and* untagged images. When set to `false` (or `0`), all unused images are pruned. - `until=<string>` Prune images created before this timestamp. The `<timestamp>` can be Unix timestamps, date formatted timestamps, or Go duration strings (e.g. `10m`, `1h30m`) computed relative to the daemon machine’s time. - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune images with (or without, in case `label!=...` is used) the specified labels. 
type: "string" responses: 200: description: "No error" schema: type: "object" title: "ImagePruneResponse" properties: ImagesDeleted: description: "Images that were deleted" type: "array" items: $ref: "#/definitions/ImageDeleteResponseItem" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Image"] /auth: post: summary: "Check auth configuration" description: | Validate credentials for a registry and, if available, get an identity token for accessing the registry without password. operationId: "SystemAuth" consumes: ["application/json"] produces: ["application/json"] responses: 200: description: "An identity token was generated successfully." schema: type: "object" title: "SystemAuthResponse" required: [Status] properties: Status: description: "The status of the authentication" type: "string" x-nullable: false IdentityToken: description: "An opaque token used to authenticate a user after a successful login" type: "string" x-nullable: false examples: application/json: Status: "Login Succeeded" IdentityToken: "9cbaf023786cd7..." 204: description: "No error" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "authConfig" in: "body" description: "Authentication to check" schema: $ref: "#/definitions/AuthConfig" tags: ["System"] /info: get: summary: "Get system information" operationId: "SystemInfo" produces: - "application/json" responses: 200: description: "No error" schema: $ref: "#/definitions/SystemInfo" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /version: get: summary: "Get version" description: "Returns the version of Docker that is running and various information about the system that Docker is running on." operationId: "SystemVersion" produces: ["application/json"] responses: 200: description: "no error" schema: $ref: "#/definitions/SystemVersion" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /_ping: get: summary: "Ping" description: "This is a dummy endpoint you can use to test if the server is accessible." operationId: "SystemPing" produces: ["text/plain"] responses: 200: description: "no error" schema: type: "string" example: "OK" headers: API-Version: type: "string" description: "Max API Version the server supports" Builder-Version: type: "string" description: "Default version of docker image builder" Docker-Experimental: type: "boolean" description: "If the server is running with experimental mode enabled" Cache-Control: type: "string" default: "no-cache, no-store, must-revalidate" Pragma: type: "string" default: "no-cache" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" headers: Cache-Control: type: "string" default: "no-cache, no-store, must-revalidate" Pragma: type: "string" default: "no-cache" tags: ["System"] head: summary: "Ping" description: "This is a dummy endpoint you can use to test if the server is accessible." 
operationId: "SystemPingHead" produces: ["text/plain"] responses: 200: description: "no error" schema: type: "string" example: "(empty)" headers: API-Version: type: "string" description: "Max API Version the server supports" Builder-Version: type: "string" description: "Default version of docker image builder" Docker-Experimental: type: "boolean" description: "If the server is running with experimental mode enabled" Cache-Control: type: "string" default: "no-cache, no-store, must-revalidate" Pragma: type: "string" default: "no-cache" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /commit: post: summary: "Create a new image from a container" operationId: "ImageCommit" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "containerConfig" in: "body" description: "The container configuration" schema: $ref: "#/definitions/ContainerConfig" - name: "container" in: "query" description: "The ID or name of the container to commit" type: "string" - name: "repo" in: "query" description: "Repository name for the created image" type: "string" - name: "tag" in: "query" description: "Tag name for the create image" type: "string" - name: "comment" in: "query" description: "Commit message" type: "string" - name: "author" in: "query" description: "Author of the image (e.g., `John Hannibal Smith <[email protected]>`)" type: "string" - name: "pause" in: "query" description: "Whether to pause the container before committing" type: "boolean" default: true - name: "changes" in: "query" description: "`Dockerfile` instructions to apply while committing" type: "string" tags: ["Image"] /events: get: summary: "Monitor events" description: | Stream real-time events from the server. Various objects within Docker report events when something happens to them. 
Containers report these events: `attach`, `commit`, `copy`, `create`, `destroy`, `detach`, `die`, `exec_create`, `exec_detach`, `exec_start`, `exec_die`, `export`, `health_status`, `kill`, `oom`, `pause`, `rename`, `resize`, `restart`, `start`, `stop`, `top`, `unpause`, `update`, and `prune` Images report these events: `delete`, `import`, `load`, `pull`, `push`, `save`, `tag`, `untag`, and `prune` Volumes report these events: `create`, `mount`, `unmount`, `destroy`, and `prune` Networks report these events: `create`, `connect`, `disconnect`, `destroy`, `update`, `remove`, and `prune` The Docker daemon reports these events: `reload` Services report these events: `create`, `update`, and `remove` Nodes report these events: `create`, `update`, and `remove` Secrets report these events: `create`, `update`, and `remove` Configs report these events: `create`, `update`, and `remove` The Builder reports `prune` events operationId: "SystemEvents" produces: - "application/json" responses: 200: description: "no error" schema: type: "object" title: "SystemEventsResponse" properties: Type: description: "The type of object emitting the event" type: "string" Action: description: "The type of event" type: "string" Actor: type: "object" properties: ID: description: "The ID of the object emitting the event" type: "string" Attributes: description: "Various key/value attributes of the object, depending on its type" type: "object" additionalProperties: type: "string" time: description: "Timestamp of event" type: "integer" timeNano: description: "Timestamp of event, with nanosecond accuracy" type: "integer" format: "int64" examples: application/json: Type: "container" Action: "create" Actor: ID: "ede54ee1afda366ab42f824e8a5ffd195155d853ceaec74a927f249ea270c743" Attributes: com.example.some-label: "some-label-value" image: "alpine" name: "my-container" time: 1461943101 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "since" in: "query" description: "Show events created since this timestamp then stream new events." type: "string" - name: "until" in: "query" description: "Show events created until this timestamp then stop streaming." type: "string" - name: "filters" in: "query" description: | A JSON encoded value of filters (a `map[string][]string`) to process on the event list. 
Available filters: - `config=<string>` config name or ID - `container=<string>` container name or ID - `daemon=<string>` daemon name or ID - `event=<string>` event type - `image=<string>` image name or ID - `label=<string>` image or container label - `network=<string>` network name or ID - `node=<string>` node ID - `plugin`=<string> plugin name or ID - `scope`=<string> local or swarm - `secret=<string>` secret name or ID - `service=<string>` service name or ID - `type=<string>` object to filter by, one of `container`, `image`, `volume`, `network`, `daemon`, `plugin`, `node`, `service`, `secret` or `config` - `volume=<string>` volume name type: "string" tags: ["System"] /system/df: get: summary: "Get data usage information" operationId: "SystemDataUsage" responses: 200: description: "no error" schema: type: "object" title: "SystemDataUsageResponse" properties: LayersSize: type: "integer" format: "int64" Images: type: "array" items: $ref: "#/definitions/ImageSummary" Containers: type: "array" items: $ref: "#/definitions/ContainerSummary" Volumes: type: "array" items: $ref: "#/definitions/Volume" BuildCache: type: "array" items: $ref: "#/definitions/BuildCache" example: LayersSize: 1092588 Images: - Id: "sha256:2b8fd9751c4c0f5dd266fcae00707e67a2545ef34f9a29354585f93dac906749" ParentId: "" RepoTags: - "busybox:latest" RepoDigests: - "busybox@sha256:a59906e33509d14c036c8678d687bd4eec81ed7c4b8ce907b888c607f6a1e0e6" Created: 1466724217 Size: 1092588 SharedSize: 0 VirtualSize: 1092588 Labels: {} Containers: 1 Containers: - Id: "e575172ed11dc01bfce087fb27bee502db149e1a0fad7c296ad300bbff178148" Names: - "/top" Image: "busybox" ImageID: "sha256:2b8fd9751c4c0f5dd266fcae00707e67a2545ef34f9a29354585f93dac906749" Command: "top" Created: 1472592424 Ports: [] SizeRootFs: 1092588 Labels: {} State: "exited" Status: "Exited (0) 56 minutes ago" HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: IPAMConfig: null Links: null Aliases: null NetworkID: "d687bc59335f0e5c9ee8193e5612e8aee000c8c62ea170cfb99c098f95899d92" EndpointID: "8ed5115aeaad9abb174f68dcf135b49f11daf597678315231a32ca28441dec6a" Gateway: "172.18.0.1" IPAddress: "172.18.0.2" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:12:00:02" Mounts: [] Volumes: - Name: "my-volume" Driver: "local" Mountpoint: "/var/lib/docker/volumes/my-volume/_data" Labels: null Scope: "local" Options: null UsageData: Size: 10920104 RefCount: 2 BuildCache: - ID: "hw53o5aio51xtltp5xjp8v7fx" Parent: "" Type: "regular" Description: "pulled from docker.io/library/debian@sha256:234cb88d3020898631af0ccbbcca9a66ae7306ecd30c9720690858c1b007d2a0" InUse: false Shared: true Size: 0 CreatedAt: "2021-06-28T13:31:01.474619385Z" LastUsedAt: "2021-07-07T22:02:32.738075951Z" UsageCount: 26 - ID: "ndlpt0hhvkqcdfkputsk4cq9c" Parent: "hw53o5aio51xtltp5xjp8v7fx" Type: "regular" Description: "mount / from exec /bin/sh -c echo 'Binary::apt::APT::Keep-Downloaded-Packages \"true\";' > /etc/apt/apt.conf.d/keep-cache" InUse: false Shared: true Size: 51 CreatedAt: "2021-06-28T13:31:03.002625487Z" LastUsedAt: "2021-07-07T22:02:32.773909517Z" UsageCount: 26 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "type" in: "query" description: | Object types, for which to compute and return data. 
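# Illustrative sketch (not part of the API definition): streaming container events only.
# The `filters` value is the URL-encoded form of {"type":["container"]}; `-N` disables
# curl's output buffering so events appear as they happen; the default Unix socket is assumed.
#
#   curl --unix-socket /var/run/docker.sock -N \
#     "http://localhost/events?filters=%7B%22type%22%3A%5B%22container%22%5D%7D"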
type: "array" collectionFormat: multi items: type: "string" enum: ["container", "image", "volume", "build-cache"] tags: ["System"] /images/{name}/get: get: summary: "Export an image" description: | Get a tarball containing all images and metadata for a repository. If `name` is a specific name and tag (e.g. `ubuntu:latest`), then only that image (and its parents) are returned. If `name` is an image ID, similarly only that image (and its parents) are returned, but with the exclusion of the `repositories` file in the tarball, as there were no image names referenced. ### Image tarball format An image tarball contains one directory per image layer (named using its long ID), each containing these files: - `VERSION`: currently `1.0` - the file format version - `json`: detailed layer information, similar to `docker inspect layer_id` - `layer.tar`: A tarfile containing the filesystem changes in this layer The `layer.tar` file contains `aufs` style `.wh..wh.aufs` files and directories for storing attribute changes and deletions. If the tarball defines a repository, the tarball should also include a `repositories` file at the root that contains a list of repository and tag names mapped to layer IDs. ```json { "hello-world": { "latest": "565a9d68a73f6706862bfe8409a7f659776d4d60a8d096eb4a3cbce6999cc2a1" } } ``` operationId: "ImageGet" produces: - "application/x-tar" responses: 200: description: "no error" schema: type: "string" format: "binary" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID" type: "string" required: true tags: ["Image"] /images/get: get: summary: "Export several images" description: | Get a tarball containing all images and metadata for several image repositories. For each value of the `names` parameter: if it is a specific name and tag (e.g. `ubuntu:latest`), then only that image (and its parents) are returned; if it is an image ID, similarly only that image (and its parents) are returned and there would be no names referenced in the 'repositories' file for this image ID. For details on the format, see the [export image endpoint](#operation/ImageGet). operationId: "ImageGetAll" produces: - "application/x-tar" responses: 200: description: "no error" schema: type: "string" format: "binary" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "names" in: "query" description: "Image names to filter by" type: "array" items: type: "string" tags: ["Image"] /images/load: post: summary: "Import images" description: | Load a set of images and tags into a repository. For details on the format, see the [export image endpoint](#operation/ImageGet). operationId: "ImageLoad" consumes: - "application/x-tar" produces: - "application/json" responses: 200: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "imagesTarball" in: "body" description: "Tar archive containing images" schema: type: "string" format: "binary" - name: "quiet" in: "query" description: "Suppress progress details during load." type: "boolean" default: false tags: ["Image"] /containers/{id}/exec: post: summary: "Create an exec instance" description: "Run a command inside a running container." 
operationId: "ContainerExec" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "container is paused" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "execConfig" in: "body" description: "Exec configuration" schema: type: "object" properties: AttachStdin: type: "boolean" description: "Attach to `stdin` of the exec command." AttachStdout: type: "boolean" description: "Attach to `stdout` of the exec command." AttachStderr: type: "boolean" description: "Attach to `stderr` of the exec command." DetachKeys: type: "string" description: | Override the key sequence for detaching a container. Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,` or `_`. Tty: type: "boolean" description: "Allocate a pseudo-TTY." Env: description: | A list of environment variables in the form `["VAR=value", ...]`. type: "array" items: type: "string" Cmd: type: "array" description: "Command to run, as a string or array of strings." items: type: "string" Privileged: type: "boolean" description: "Runs the exec process with extended privileges." default: false User: type: "string" description: | The user, and optionally, group to run the exec process inside the container. Format is one of: `user`, `user:group`, `uid`, or `uid:gid`. WorkingDir: type: "string" description: | The working directory for the exec process inside the container. example: AttachStdin: false AttachStdout: true AttachStderr: true DetachKeys: "ctrl-p,ctrl-q" Tty: false Cmd: - "date" Env: - "FOO=bar" - "BAZ=quux" required: true - name: "id" in: "path" description: "ID or name of container" type: "string" required: true tags: ["Exec"] /exec/{id}/start: post: summary: "Start an exec instance" description: | Starts a previously set up exec instance. If detach is true, this endpoint returns immediately after starting the command. Otherwise, it sets up an interactive session with the command. operationId: "ExecStart" consumes: - "application/json" produces: - "application/vnd.docker.raw-stream" responses: 200: description: "No error" 404: description: "No such exec instance" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Container is stopped or paused" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "execStartConfig" in: "body" schema: type: "object" properties: Detach: type: "boolean" description: "Detach from the command." Tty: type: "boolean" description: "Allocate a pseudo-TTY." example: Detach: false Tty: false - name: "id" in: "path" description: "Exec instance ID" required: true type: "string" tags: ["Exec"] /exec/{id}/resize: post: summary: "Resize an exec instance" description: | Resize the TTY session used by an exec instance. This endpoint only works if `tty` was specified as part of creating and starting the exec instance. 
operationId: "ExecResize" responses: 201: description: "No error" 404: description: "No such exec instance" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Exec instance ID" required: true type: "string" - name: "h" in: "query" description: "Height of the TTY session in characters" type: "integer" - name: "w" in: "query" description: "Width of the TTY session in characters" type: "integer" tags: ["Exec"] /exec/{id}/json: get: summary: "Inspect an exec instance" description: "Return low-level information about an exec instance." operationId: "ExecInspect" produces: - "application/json" responses: 200: description: "No error" schema: type: "object" title: "ExecInspectResponse" properties: CanRemove: type: "boolean" DetachKeys: type: "string" ID: type: "string" Running: type: "boolean" ExitCode: type: "integer" ProcessConfig: $ref: "#/definitions/ProcessConfig" OpenStdin: type: "boolean" OpenStderr: type: "boolean" OpenStdout: type: "boolean" ContainerID: type: "string" Pid: type: "integer" description: "The system process ID for the exec process." examples: application/json: CanRemove: false ContainerID: "b53ee82b53a40c7dca428523e34f741f3abc51d9f297a14ff874bf761b995126" DetachKeys: "" ExitCode: 2 ID: "f33bbfb39f5b142420f4759b2348913bd4a8d1a6d7fd56499cb41a1bb91d7b3b" OpenStderr: true OpenStdin: true OpenStdout: true ProcessConfig: arguments: - "-c" - "exit 2" entrypoint: "sh" privileged: false tty: true user: "1000" Running: false Pid: 42000 404: description: "No such exec instance" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Exec instance ID" required: true type: "string" tags: ["Exec"] /volumes: get: summary: "List volumes" operationId: "VolumeList" produces: ["application/json"] responses: 200: description: "Summary volume data that matches the query" schema: type: "object" title: "VolumeListResponse" description: "Volume list response" required: [Volumes, Warnings] properties: Volumes: type: "array" x-nullable: false description: "List of volumes" items: $ref: "#/definitions/Volume" Warnings: type: "array" x-nullable: false description: | Warnings that occurred when fetching the list of volumes. items: type: "string" examples: application/json: Volumes: - CreatedAt: "2017-07-19T12:00:26Z" Name: "tardis" Driver: "local" Mountpoint: "/var/lib/docker/volumes/tardis" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Scope: "local" Options: device: "tmpfs" o: "size=100m,uid=1000" type: "tmpfs" Warnings: [] 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" description: | JSON encoded value of the filters (a `map[string][]string`) to process on the volumes list. Available filters: - `dangling=<boolean>` When set to `true` (or `1`), returns all volumes that are not in use by a container. When set to `false` (or `0`), only volumes that are in use by one or more containers are returned. - `driver=<volume-driver-name>` Matches volumes based on their driver. - `label=<key>` or `label=<key>:<value>` Matches volumes based on the presence of a `label` alone or a `label` and a value. - `name=<volume-name>` Matches all or part of a volume name. 
type: "string" format: "json" tags: ["Volume"] /volumes/create: post: summary: "Create a volume" operationId: "VolumeCreate" consumes: ["application/json"] produces: ["application/json"] responses: 201: description: "The volume was created successfully" schema: $ref: "#/definitions/Volume" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "volumeConfig" in: "body" required: true description: "Volume configuration" schema: type: "object" description: "Volume configuration" title: "VolumeConfig" properties: Name: description: | The new volume's name. If not specified, Docker generates a name. type: "string" x-nullable: false Driver: description: "Name of the volume driver to use." type: "string" default: "local" x-nullable: false DriverOpts: description: | A mapping of driver options and values. These options are passed directly to the driver and are driver specific. type: "object" additionalProperties: type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" example: Name: "tardis" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Driver: "custom" tags: ["Volume"] /volumes/{name}: get: summary: "Inspect a volume" operationId: "VolumeInspect" produces: ["application/json"] responses: 200: description: "No error" schema: $ref: "#/definitions/Volume" 404: description: "No such volume" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" required: true description: "Volume name or ID" type: "string" tags: ["Volume"] delete: summary: "Remove a volume" description: "Instruct the driver to remove the volume." operationId: "VolumeDelete" responses: 204: description: "The volume was removed" 404: description: "No such volume or volume driver" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Volume is in use and cannot be removed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" required: true description: "Volume name or ID" type: "string" - name: "force" in: "query" description: "Force the removal of the volume" type: "boolean" default: false tags: ["Volume"] /volumes/prune: post: summary: "Delete unused volumes" produces: - "application/json" operationId: "VolumePrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune volumes with (or without, in case `label!=...` is used) the specified labels. type: "string" responses: 200: description: "No error" schema: type: "object" title: "VolumePruneResponse" properties: VolumesDeleted: description: "Volumes that were deleted" type: "array" items: type: "string" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Volume"] /networks: get: summary: "List networks" description: | Returns a list of networks. For details on the format, see the [network inspect endpoint](#operation/NetworkInspect). Note that it uses a different, smaller representation of a network than inspecting a single network. 
For example, the list of containers attached to the network is not propagated in API versions 1.28 and up. operationId: "NetworkList" produces: - "application/json" responses: 200: description: "No error" schema: type: "array" items: $ref: "#/definitions/Network" examples: application/json: - Name: "bridge" Id: "f2de39df4171b0dc801e8002d1d999b77256983dfc63041c0f34030aa3977566" Created: "2016-10-19T06:21:00.416543526Z" Scope: "local" Driver: "bridge" EnableIPv6: false Internal: false Attachable: false Ingress: false IPAM: Driver: "default" Config: - Subnet: "172.17.0.0/16" Options: com.docker.network.bridge.default_bridge: "true" com.docker.network.bridge.enable_icc: "true" com.docker.network.bridge.enable_ip_masquerade: "true" com.docker.network.bridge.host_binding_ipv4: "0.0.0.0" com.docker.network.bridge.name: "docker0" com.docker.network.driver.mtu: "1500" - Name: "none" Id: "e086a3893b05ab69242d3c44e49483a3bbbd3a26b46baa8f61ab797c1088d794" Created: "0001-01-01T00:00:00Z" Scope: "local" Driver: "null" EnableIPv6: false Internal: false Attachable: false Ingress: false IPAM: Driver: "default" Config: [] Containers: {} Options: {} - Name: "host" Id: "13e871235c677f196c4e1ecebb9dc733b9b2d2ab589e30c539efeda84a24215e" Created: "0001-01-01T00:00:00Z" Scope: "local" Driver: "host" EnableIPv6: false Internal: false Attachable: false Ingress: false IPAM: Driver: "default" Config: [] Containers: {} Options: {} 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" description: | JSON encoded value of the filters (a `map[string][]string`) to process on the networks list. Available filters: - `dangling=<boolean>` When set to `true` (or `1`), returns all networks that are not in use by a container. When set to `false` (or `0`), only networks that are in use by one or more containers are returned. - `driver=<driver-name>` Matches a network's driver. - `id=<network-id>` Matches all or part of a network ID. - `label=<key>` or `label=<key>=<value>` of a network label. - `name=<network-name>` Matches all or part of a network name. - `scope=["swarm"|"global"|"local"]` Filters networks by scope (`swarm`, `global`, or `local`). - `type=["custom"|"builtin"]` Filters networks by type. The `custom` keyword returns all user-defined networks. 
type: "string" tags: ["Network"] /networks/{id}: get: summary: "Inspect a network" operationId: "NetworkInspect" produces: - "application/json" responses: 200: description: "No error" schema: $ref: "#/definitions/Network" 404: description: "Network not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" - name: "verbose" in: "query" description: "Detailed inspect output for troubleshooting" type: "boolean" default: false - name: "scope" in: "query" description: "Filter the network by scope (swarm, global, or local)" type: "string" tags: ["Network"] delete: summary: "Remove a network" operationId: "NetworkDelete" responses: 204: description: "No error" 403: description: "operation not supported for pre-defined networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such network" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" tags: ["Network"] /networks/create: post: summary: "Create a network" operationId: "NetworkCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "No error" schema: type: "object" title: "NetworkCreateResponse" properties: Id: description: "The ID of the created network." type: "string" Warning: type: "string" example: Id: "22be93d5babb089c5aab8dbc369042fad48ff791584ca2da2100db837a1c7c30" Warning: "" 403: description: "operation not supported for pre-defined networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "plugin not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "networkConfig" in: "body" description: "Network configuration" required: true schema: type: "object" required: ["Name"] properties: Name: description: "The network's name." type: "string" CheckDuplicate: description: | Check for networks with duplicate names. Since Network is primarily keyed based on a random ID and not on the name, and network name is strictly a user-friendly alias to the network which is uniquely identified using ID, there is no guaranteed way to check for duplicates. CheckDuplicate is there to provide a best effort checking of any networks which has the same name but it is not guaranteed to catch all name collisions. type: "boolean" Driver: description: "Name of the network driver plugin to use." type: "string" default: "bridge" Internal: description: "Restrict external access to the network." type: "boolean" Attachable: description: | Globally scoped network is manually attachable by regular containers from workers in swarm mode. type: "boolean" Ingress: description: | Ingress network is the network which provides the routing-mesh in swarm mode. type: "boolean" IPAM: description: "Optional custom IP scheme for the network." $ref: "#/definitions/IPAM" EnableIPv6: description: "Enable IPv6 on the network." type: "boolean" Options: description: "Network specific options to be used by the drivers." type: "object" additionalProperties: type: "string" Labels: description: "User-defined key/value metadata." 
type: "object" additionalProperties: type: "string" example: Name: "isolated_nw" CheckDuplicate: false Driver: "bridge" EnableIPv6: true IPAM: Driver: "default" Config: - Subnet: "172.20.0.0/16" IPRange: "172.20.10.0/24" Gateway: "172.20.10.11" - Subnet: "2001:db8:abcd::/64" Gateway: "2001:db8:abcd::1011" Options: foo: "bar" Internal: true Attachable: false Ingress: false Options: com.docker.network.bridge.default_bridge: "true" com.docker.network.bridge.enable_icc: "true" com.docker.network.bridge.enable_ip_masquerade: "true" com.docker.network.bridge.host_binding_ipv4: "0.0.0.0" com.docker.network.bridge.name: "docker0" com.docker.network.driver.mtu: "1500" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" tags: ["Network"] /networks/{id}/connect: post: summary: "Connect a container to a network" operationId: "NetworkConnect" consumes: - "application/json" responses: 200: description: "No error" 403: description: "Operation not supported for swarm scoped networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "Network or container not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" - name: "container" in: "body" required: true schema: type: "object" properties: Container: type: "string" description: "The ID or name of the container to connect to the network." EndpointConfig: $ref: "#/definitions/EndpointSettings" example: Container: "3613f73ba0e4" EndpointConfig: IPAMConfig: IPv4Address: "172.24.56.89" IPv6Address: "2001:db8::5689" tags: ["Network"] /networks/{id}/disconnect: post: summary: "Disconnect a container from a network" operationId: "NetworkDisconnect" consumes: - "application/json" responses: 200: description: "No error" 403: description: "Operation not supported for swarm scoped networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "Network or container not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" - name: "container" in: "body" required: true schema: type: "object" properties: Container: type: "string" description: | The ID or name of the container to disconnect from the network. Force: type: "boolean" description: | Force the container to disconnect from the network. tags: ["Network"] /networks/prune: post: summary: "Delete unused networks" produces: - "application/json" operationId: "NetworkPrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `until=<timestamp>` Prune networks created before this timestamp. The `<timestamp>` can be Unix timestamps, date formatted timestamps, or Go duration strings (e.g. `10m`, `1h30m`) computed relative to the daemon machine’s time. - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune networks with (or without, in case `label!=...` is used) the specified labels. 
type: "string" responses: 200: description: "No error" schema: type: "object" title: "NetworkPruneResponse" properties: NetworksDeleted: description: "Networks that were deleted" type: "array" items: type: "string" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Network"] /plugins: get: summary: "List plugins" operationId: "PluginList" description: "Returns information about installed plugins." produces: ["application/json"] responses: 200: description: "No error" schema: type: "array" items: $ref: "#/definitions/Plugin" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the plugin list. Available filters: - `capability=<capability name>` - `enable=<true>|<false>` tags: ["Plugin"] /plugins/privileges: get: summary: "Get plugin privileges" operationId: "GetPluginPrivileges" responses: 200: description: "no error" schema: type: "array" items: description: | Describes a permission the user has to accept upon installing the plugin. type: "object" title: "PluginPrivilegeItem" properties: Name: type: "string" Description: type: "string" Value: type: "array" items: type: "string" example: - Name: "network" Description: "" Value: - "host" - Name: "mount" Description: "" Value: - "/data" - Name: "device" Description: "" Value: - "/dev/cpu_dma_latency" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "remote" in: "query" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" tags: - "Plugin" /plugins/pull: post: summary: "Install a plugin" operationId: "PluginPull" description: | Pulls and installs a plugin. After the plugin is installed, it can be enabled using the [`POST /plugins/{name}/enable` endpoint](#operation/PostPluginsEnable). produces: - "application/json" responses: 204: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "remote" in: "query" description: | Remote reference for plugin to install. The `:latest` tag is optional, and is used as the default if omitted. required: true type: "string" - name: "name" in: "query" description: | Local name for the pulled plugin. The `:latest` tag is optional, and is used as the default if omitted. required: false type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration to use when pulling a plugin from a registry. Refer to the [authentication section](#section/Authentication) for details. type: "string" - name: "body" in: "body" schema: type: "array" items: description: | Describes a permission accepted by the user upon installing the plugin. 
type: "object" properties: Name: type: "string" Description: type: "string" Value: type: "array" items: type: "string" example: - Name: "network" Description: "" Value: - "host" - Name: "mount" Description: "" Value: - "/data" - Name: "device" Description: "" Value: - "/dev/cpu_dma_latency" tags: ["Plugin"] /plugins/{name}/json: get: summary: "Inspect a plugin" operationId: "PluginInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Plugin" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" tags: ["Plugin"] /plugins/{name}: delete: summary: "Remove a plugin" operationId: "PluginDelete" responses: 200: description: "no error" schema: $ref: "#/definitions/Plugin" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "force" in: "query" description: | Disable the plugin before removing. This may result in issues if the plugin is in use by a container. type: "boolean" default: false tags: ["Plugin"] /plugins/{name}/enable: post: summary: "Enable a plugin" operationId: "PluginEnable" responses: 200: description: "no error" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "timeout" in: "query" description: "Set the HTTP client timeout (in seconds)" type: "integer" default: 0 tags: ["Plugin"] /plugins/{name}/disable: post: summary: "Disable a plugin" operationId: "PluginDisable" responses: 200: description: "no error" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" tags: ["Plugin"] /plugins/{name}/upgrade: post: summary: "Upgrade a plugin" operationId: "PluginUpgrade" responses: 204: description: "no error" 404: description: "plugin not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "remote" in: "query" description: | Remote reference to upgrade to. The `:latest` tag is optional, and is used as the default if omitted. required: true type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration to use when pulling a plugin from a registry. Refer to the [authentication section](#section/Authentication) for details. 
type: "string" - name: "body" in: "body" schema: type: "array" items: description: | Describes a permission accepted by the user upon installing the plugin. type: "object" properties: Name: type: "string" Description: type: "string" Value: type: "array" items: type: "string" example: - Name: "network" Description: "" Value: - "host" - Name: "mount" Description: "" Value: - "/data" - Name: "device" Description: "" Value: - "/dev/cpu_dma_latency" tags: ["Plugin"] /plugins/create: post: summary: "Create a plugin" operationId: "PluginCreate" consumes: - "application/x-tar" responses: 204: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "query" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "tarContext" in: "body" description: "Path to tar containing plugin rootfs and manifest" schema: type: "string" format: "binary" tags: ["Plugin"] /plugins/{name}/push: post: summary: "Push a plugin" operationId: "PluginPush" description: | Push a plugin to the registry. parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" responses: 200: description: "no error" 404: description: "plugin not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Plugin"] /plugins/{name}/set: post: summary: "Configure a plugin" operationId: "PluginSet" consumes: - "application/json" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "body" in: "body" schema: type: "array" items: type: "string" example: ["DEBUG=1"] responses: 204: description: "No error" 404: description: "Plugin not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Plugin"] /nodes: get: summary: "List nodes" operationId: "NodeList" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Node" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" description: | Filters to process on the nodes list, encoded as JSON (a `map[string][]string`). 
Available filters: - `id=<node id>` - `label=<engine label>` - `membership=`(`accepted`|`pending`)` - `name=<node name>` - `node.label=<node label>` - `role=`(`manager`|`worker`)` type: "string" tags: ["Node"] /nodes/{id}: get: summary: "Inspect a node" operationId: "NodeInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Node" 404: description: "no such node" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the node" type: "string" required: true tags: ["Node"] delete: summary: "Delete a node" operationId: "NodeDelete" responses: 200: description: "no error" 404: description: "no such node" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the node" type: "string" required: true - name: "force" in: "query" description: "Force remove a node from the swarm" default: false type: "boolean" tags: ["Node"] /nodes/{id}/update: post: summary: "Update a node" operationId: "NodeUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such node" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID of the node" type: "string" required: true - name: "body" in: "body" schema: $ref: "#/definitions/NodeSpec" - name: "version" in: "query" description: | The version number of the node object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true tags: ["Node"] /swarm: get: summary: "Inspect swarm" operationId: "SwarmInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Swarm" 404: description: "no such swarm" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" tags: ["Swarm"] /swarm/init: post: summary: "Initialize a new swarm" operationId: "SwarmInit" produces: - "application/json" - "text/plain" responses: 200: description: "no error" schema: description: "The node ID" type: "string" example: "7v2t30z9blmxuhnyo6s4cpenp" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is already part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: type: "object" properties: ListenAddr: description: | Listen address used for inter-manager communication, as well as determining the networking interface used for the VXLAN Tunnel Endpoint (VTEP). This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the default swarm listening port is used. 
type: "string" AdvertiseAddr: description: | Externally reachable address advertised to other nodes. This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the port number from the listen address is used. If `AdvertiseAddr` is not specified, it will be automatically detected when possible. type: "string" DataPathAddr: description: | Address or interface to use for data path traffic (format: `<ip|interface>`), for example, `192.168.1.1`, or an interface, like `eth0`. If `DataPathAddr` is unspecified, the same address as `AdvertiseAddr` is used. The `DataPathAddr` specifies the address that global scope network drivers will publish towards other nodes in order to reach the containers running on this node. Using this parameter it is possible to separate the container data traffic from the management traffic of the cluster. type: "string" DataPathPort: description: | DataPathPort specifies the data path port number for data traffic. Acceptable port range is 1024 to 49151. if no port is set or is set to 0, default port 4789 will be used. type: "integer" format: "uint32" DefaultAddrPool: description: | Default Address Pool specifies default subnet pools for global scope networks. type: "array" items: type: "string" example: ["10.10.0.0/16", "20.20.0.0/16"] ForceNewCluster: description: "Force creation of a new swarm." type: "boolean" SubnetSize: description: | SubnetSize specifies the subnet size of the networks created from the default subnet pool. type: "integer" format: "uint32" Spec: $ref: "#/definitions/SwarmSpec" example: ListenAddr: "0.0.0.0:2377" AdvertiseAddr: "192.168.1.1:2377" DataPathPort: 4789 DefaultAddrPool: ["10.10.0.0/8", "20.20.0.0/8"] SubnetSize: 24 ForceNewCluster: false Spec: Orchestration: {} Raft: {} Dispatcher: {} CAConfig: {} EncryptionConfig: AutoLockManagers: false tags: ["Swarm"] /swarm/join: post: summary: "Join an existing swarm" operationId: "SwarmJoin" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is already part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: type: "object" properties: ListenAddr: description: | Listen address used for inter-manager communication if the node gets promoted to manager, as well as determining the networking interface used for the VXLAN Tunnel Endpoint (VTEP). type: "string" AdvertiseAddr: description: | Externally reachable address advertised to other nodes. This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the port number from the listen address is used. If `AdvertiseAddr` is not specified, it will be automatically detected when possible. type: "string" DataPathAddr: description: | Address or interface to use for data path traffic (format: `<ip|interface>`), for example, `192.168.1.1`, or an interface, like `eth0`. If `DataPathAddr` is unspecified, the same address as `AdvertiseAddr` is used. The `DataPathAddr` specifies the address that global scope network drivers will publish towards other nodes in order to reach the containers running on this node. 
Using this parameter it is possible to separate the container data traffic from the management traffic of the cluster. type: "string" RemoteAddrs: description: | Addresses of manager nodes already participating in the swarm. type: "array" items: type: "string" JoinToken: description: "Secret token for joining this swarm." type: "string" example: ListenAddr: "0.0.0.0:2377" AdvertiseAddr: "192.168.1.1:2377" RemoteAddrs: - "node1:2377" JoinToken: "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-7p73s1dx5in4tatdymyhg9hu2" tags: ["Swarm"] /swarm/leave: post: summary: "Leave a swarm" operationId: "SwarmLeave" responses: 200: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "force" description: | Force leave swarm, even if this is the last manager or that it will break the cluster. in: "query" type: "boolean" default: false tags: ["Swarm"] /swarm/update: post: summary: "Update a swarm" operationId: "SwarmUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: $ref: "#/definitions/SwarmSpec" - name: "version" in: "query" description: | The version number of the swarm object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true - name: "rotateWorkerToken" in: "query" description: "Rotate the worker join token." type: "boolean" default: false - name: "rotateManagerToken" in: "query" description: "Rotate the manager join token." type: "boolean" default: false - name: "rotateManagerUnlockKey" in: "query" description: "Rotate the manager unlock key." type: "boolean" default: false tags: ["Swarm"] /swarm/unlockkey: get: summary: "Get the unlock key" operationId: "SwarmUnlockkey" consumes: - "application/json" responses: 200: description: "no error" schema: type: "object" title: "UnlockKeyResponse" properties: UnlockKey: description: "The swarm's unlock key." type: "string" example: UnlockKey: "SWMKEY-1-7c37Cc8654o6p38HnroywCi19pllOnGtbdZEgtKxZu8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" tags: ["Swarm"] /swarm/unlock: post: summary: "Unlock a locked manager" operationId: "SwarmUnlock" consumes: - "application/json" produces: - "application/json" parameters: - name: "body" in: "body" required: true schema: type: "object" properties: UnlockKey: description: "The swarm's unlock key." 
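# Illustrative sketch (not part of the API definition): initializing a swarm on one node
# and joining it from another, assuming the default Unix socket on each node; the
# addresses are examples and <worker-join-token> is a placeholder obtained from the
# manager (for example, from the JoinTokens returned by GET /swarm).
#
#   curl --unix-socket /var/run/docker.sock -X POST \
#     -H "Content-Type: application/json" \
#     -d '{"ListenAddr": "0.0.0.0:2377", "AdvertiseAddr": "192.168.1.1:2377"}' \
#     http://localhost/swarm/init
#
#   curl --unix-socket /var/run/docker.sock -X POST \
#     -H "Content-Type: application/json" \
#     -d '{"ListenAddr": "0.0.0.0:2377", "RemoteAddrs": ["192.168.1.1:2377"], "JoinToken": "<worker-join-token>"}' \
#     http://localhost/swarm/join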
type: "string" example: UnlockKey: "SWMKEY-1-7c37Cc8654o6p38HnroywCi19pllOnGtbdZEgtKxZu8" responses: 200: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" tags: ["Swarm"] /services: get: summary: "List services" operationId: "ServiceList" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Service" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the services list. Available filters: - `id=<service id>` - `label=<service label>` - `mode=["replicated"|"global"]` - `name=<service name>` - name: "status" in: "query" type: "boolean" description: | Include service status, with count of running and desired tasks. tags: ["Service"] /services/create: post: summary: "Create a service" operationId: "ServiceCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: type: "object" title: "ServiceCreateResponse" properties: ID: description: "The ID of the created service." type: "string" Warning: description: "Optional warning message" type: "string" example: ID: "ak7w3gjqoa3kuz8xcpnyy0pvl" Warning: "unable to pin image doesnotexist:latest to digest: image library/doesnotexist:latest not found" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 403: description: "network is not eligible for services" schema: $ref: "#/definitions/ErrorResponse" 409: description: "name conflicts with an existing service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: allOf: - $ref: "#/definitions/ServiceSpec" - type: "object" example: Name: "web" TaskTemplate: ContainerSpec: Image: "nginx:alpine" Mounts: - ReadOnly: true Source: "web-data" Target: "/usr/share/nginx/html" Type: "volume" VolumeOptions: DriverConfig: {} Labels: com.example.something: "something-value" Hosts: ["10.10.10.10 host1", "ABCD:EF01:2345:6789:ABCD:EF01:2345:6789 host2"] User: "33" DNSConfig: Nameservers: ["8.8.8.8"] Search: ["example.org"] Options: ["timeout:3"] Secrets: - File: Name: "www.example.org.key" UID: "33" GID: "33" Mode: 384 SecretID: "fpjqlhnwb19zds35k8wn80lq9" SecretName: "example_org_domain_key" LogDriver: Name: "json-file" Options: max-file: "3" max-size: "10M" Placement: {} Resources: Limits: MemoryBytes: 104857600 Reservations: {} RestartPolicy: Condition: "on-failure" Delay: 10000000000 MaxAttempts: 10 Mode: Replicated: Replicas: 4 UpdateConfig: Parallelism: 2 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 RollbackConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 EndpointSpec: Ports: - Protocol: "tcp" PublishedPort: 8080 TargetPort: 80 Labels: foo: "bar" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration for pulling from private registries. Refer to the [authentication section](#section/Authentication) for details. 
type: "string" tags: ["Service"] /services/{id}: get: summary: "Inspect a service" operationId: "ServiceInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Service" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID or name of service." required: true type: "string" - name: "insertDefaults" in: "query" description: "Fill empty fields with default values." type: "boolean" default: false tags: ["Service"] delete: summary: "Delete a service" operationId: "ServiceDelete" responses: 200: description: "no error" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID or name of service." required: true type: "string" tags: ["Service"] /services/{id}/update: post: summary: "Update a service" operationId: "ServiceUpdate" consumes: ["application/json"] produces: ["application/json"] responses: 200: description: "no error" schema: $ref: "#/definitions/ServiceUpdateResponse" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID or name of service." required: true type: "string" - name: "body" in: "body" required: true schema: allOf: - $ref: "#/definitions/ServiceSpec" - type: "object" example: Name: "top" TaskTemplate: ContainerSpec: Image: "busybox" Args: - "top" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ForceUpdate: 0 Mode: Replicated: Replicas: 1 UpdateConfig: Parallelism: 2 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 RollbackConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 EndpointSpec: Mode: "vip" - name: "version" in: "query" description: | The version number of the service object being updated. This is required to avoid conflicting writes. This version number should be the value as currently set on the service *before* the update. You can find the current version by calling `GET /services/{id}` required: true type: "integer" - name: "registryAuthFrom" in: "query" description: | If the `X-Registry-Auth` header is not specified, this parameter indicates where to find registry authorization credentials. type: "string" enum: ["spec", "previous-spec"] default: "spec" - name: "rollback" in: "query" description: | Set to this parameter to `previous` to cause a server-side rollback to the previous service spec. The supplied spec will be ignored in this case. type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration for pulling from private registries. Refer to the [authentication section](#section/Authentication) for details. 
type: "string" tags: ["Service"] /services/{id}/logs: get: summary: "Get service logs" description: | Get `stdout` and `stderr` logs from a service. See also [`/containers/{id}/logs`](#operation/ContainerLogs). **Note**: This endpoint works only for services with the `local`, `json-file` or `journald` logging drivers. operationId: "ServiceLogs" responses: 200: description: "logs returned as a stream in response body" schema: type: "string" format: "binary" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such service: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the service" type: "string" - name: "details" in: "query" description: "Show service context and extra details provided to logs." type: "boolean" default: false - name: "follow" in: "query" description: "Keep connection after returning logs." type: "boolean" default: false - name: "stdout" in: "query" description: "Return logs from `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Return logs from `stderr`" type: "boolean" default: false - name: "since" in: "query" description: "Only return logs since this time, as a UNIX timestamp" type: "integer" default: 0 - name: "timestamps" in: "query" description: "Add timestamps to every log line" type: "boolean" default: false - name: "tail" in: "query" description: | Only return this number of log lines from the end of the logs. Specify as an integer or `all` to output all log lines. type: "string" default: "all" tags: ["Service"] /tasks: get: summary: "List tasks" operationId: "TaskList" produces: - "application/json" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Task" example: - ID: "0kzzo1i0y4jz6027t0k7aezc7" Version: Index: 71 CreatedAt: "2016-06-07T21:07:31.171892745Z" UpdatedAt: "2016-06-07T21:07:31.376370513Z" Spec: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ServiceID: "9mnpnzenvg8p8tdbtq4wvbkcz" Slot: 1 NodeID: "60gvrl6tm78dmak4yl7srz94v" Status: Timestamp: "2016-06-07T21:07:31.290032978Z" State: "running" Message: "started" ContainerStatus: ContainerID: "e5d62702a1b48d01c3e02ca1e0212a250801fa8d67caca0b6f35919ebc12f035" PID: 677 DesiredState: "running" NetworksAttachments: - Network: ID: "4qvuz4ko70xaltuqbt8956gd1" Version: Index: 18 CreatedAt: "2016-06-07T20:31:11.912919752Z" UpdatedAt: "2016-06-07T21:07:29.955277358Z" Spec: Name: "ingress" Labels: com.docker.swarm.internal: "true" DriverConfiguration: {} IPAMOptions: Driver: {} Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" DriverState: Name: "overlay" Options: com.docker.network.driver.overlay.vxlanid_list: "256" IPAMOptions: Driver: Name: "default" Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" Addresses: - "10.255.0.10/16" - ID: "1yljwbmlr8er2waf8orvqpwms" Version: Index: 30 CreatedAt: "2016-06-07T21:07:30.019104782Z" UpdatedAt: "2016-06-07T21:07:30.231958098Z" Name: "hopeful_cori" Spec: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ServiceID: "9mnpnzenvg8p8tdbtq4wvbkcz" Slot: 1 NodeID: "60gvrl6tm78dmak4yl7srz94v" Status: Timestamp: "2016-06-07T21:07:30.202183143Z" State: 
"shutdown" Message: "shutdown" ContainerStatus: ContainerID: "1cf8d63d18e79668b0004a4be4c6ee58cddfad2dae29506d8781581d0688a213" DesiredState: "shutdown" NetworksAttachments: - Network: ID: "4qvuz4ko70xaltuqbt8956gd1" Version: Index: 18 CreatedAt: "2016-06-07T20:31:11.912919752Z" UpdatedAt: "2016-06-07T21:07:29.955277358Z" Spec: Name: "ingress" Labels: com.docker.swarm.internal: "true" DriverConfiguration: {} IPAMOptions: Driver: {} Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" DriverState: Name: "overlay" Options: com.docker.network.driver.overlay.vxlanid_list: "256" IPAMOptions: Driver: Name: "default" Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" Addresses: - "10.255.0.5/16" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the tasks list. Available filters: - `desired-state=(running | shutdown | accepted)` - `id=<task id>` - `label=key` or `label="key=value"` - `name=<task name>` - `node=<node id or name>` - `service=<service name>` tags: ["Task"] /tasks/{id}: get: summary: "Inspect a task" operationId: "TaskInspect" produces: - "application/json" responses: 200: description: "no error" schema: $ref: "#/definitions/Task" 404: description: "no such task" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID of the task" required: true type: "string" tags: ["Task"] /tasks/{id}/logs: get: summary: "Get task logs" description: | Get `stdout` and `stderr` logs from a task. See also [`/containers/{id}/logs`](#operation/ContainerLogs). **Note**: This endpoint works only for services with the `local`, `json-file` or `journald` logging drivers. operationId: "TaskLogs" responses: 200: description: "logs returned as a stream in response body" schema: type: "string" format: "binary" 404: description: "no such task" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such task: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID of the task" type: "string" - name: "details" in: "query" description: "Show task context and extra details provided to logs." type: "boolean" default: false - name: "follow" in: "query" description: "Keep connection after returning logs." type: "boolean" default: false - name: "stdout" in: "query" description: "Return logs from `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Return logs from `stderr`" type: "boolean" default: false - name: "since" in: "query" description: "Only return logs since this time, as a UNIX timestamp" type: "integer" default: 0 - name: "timestamps" in: "query" description: "Add timestamps to every log line" type: "boolean" default: false - name: "tail" in: "query" description: | Only return this number of log lines from the end of the logs. Specify as an integer or `all` to output all log lines. 
type: "string" default: "all" tags: ["Task"] /secrets: get: summary: "List secrets" operationId: "SecretList" produces: - "application/json" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Secret" example: - ID: "blt1owaxmitz71s9v5zh81zun" Version: Index: 85 CreatedAt: "2017-07-20T13:55:28.678958722Z" UpdatedAt: "2017-07-20T13:55:28.678958722Z" Spec: Name: "mysql-passwd" Labels: some.label: "some.value" Driver: Name: "secret-bucket" Options: OptionA: "value for driver option A" OptionB: "value for driver option B" - ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "app-dev.crt" Labels: foo: "bar" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the secrets list. Available filters: - `id=<secret id>` - `label=<key> or label=<key>=value` - `name=<secret name>` - `names=<secret name>` tags: ["Secret"] /secrets/create: post: summary: "Create a secret" operationId: "SecretCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 409: description: "name conflicts with an existing object" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" schema: allOf: - $ref: "#/definitions/SecretSpec" - type: "object" example: Name: "app-key.crt" Labels: foo: "bar" Data: "VEhJUyBJUyBOT1QgQSBSRUFMIENFUlRJRklDQVRFCg==" Driver: Name: "secret-bucket" Options: OptionA: "value for driver option A" OptionB: "value for driver option B" tags: ["Secret"] /secrets/{id}: get: summary: "Inspect a secret" operationId: "SecretInspect" produces: - "application/json" responses: 200: description: "no error" schema: $ref: "#/definitions/Secret" examples: application/json: ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "app-dev.crt" Labels: foo: "bar" Driver: Name: "secret-bucket" Options: OptionA: "value for driver option A" OptionB: "value for driver option B" 404: description: "secret not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the secret" tags: ["Secret"] delete: summary: "Delete a secret" operationId: "SecretDelete" produces: - "application/json" responses: 204: description: "no error" 404: description: "secret not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the secret" tags: ["Secret"] /secrets/{id}/update: post: summary: "Update a Secret" operationId: "SecretUpdate" responses: 200: description: "no error" 400: 
description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such secret" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the secret" type: "string" required: true - name: "body" in: "body" schema: $ref: "#/definitions/SecretSpec" description: | The spec of the secret to update. Currently, only the Labels field can be updated. All other fields must remain unchanged from the [SecretInspect endpoint](#operation/SecretInspect) response values. - name: "version" in: "query" description: | The version number of the secret object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true tags: ["Secret"] /configs: get: summary: "List configs" operationId: "ConfigList" produces: - "application/json" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Config" example: - ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "server.conf" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the configs list. Available filters: - `id=<config id>` - `label=<key> or label=<key>=value` - `name=<config name>` - `names=<config name>` tags: ["Config"] /configs/create: post: summary: "Create a config" operationId: "ConfigCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 409: description: "name conflicts with an existing object" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" schema: allOf: - $ref: "#/definitions/ConfigSpec" - type: "object" example: Name: "server.conf" Labels: foo: "bar" Data: "VEhJUyBJUyBOT1QgQSBSRUFMIENFUlRJRklDQVRFCg==" tags: ["Config"] /configs/{id}: get: summary: "Inspect a config" operationId: "ConfigInspect" produces: - "application/json" responses: 200: description: "no error" schema: $ref: "#/definitions/Config" examples: application/json: ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "app-dev.crt" 404: description: "config not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the config" tags: ["Config"] delete: summary: "Delete a config" operationId: "ConfigDelete" produces: - "application/json" responses: 204: description: "no error" 404: description: "config not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part 
of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the config" tags: ["Config"] /configs/{id}/update: post: summary: "Update a Config" operationId: "ConfigUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such config" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the config" type: "string" required: true - name: "body" in: "body" schema: $ref: "#/definitions/ConfigSpec" description: | The spec of the config to update. Currently, only the Labels field can be updated. All other fields must remain unchanged from the [ConfigInspect endpoint](#operation/ConfigInspect) response values. - name: "version" in: "query" description: | The version number of the config object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true tags: ["Config"] /distribution/{name}/json: get: summary: "Get image information from the registry" description: | Return image digest and platform information by contacting the registry. operationId: "DistributionInspect" produces: - "application/json" responses: 200: description: "descriptor and platform information" schema: type: "object" x-go-name: DistributionInspect title: "DistributionInspectResponse" required: [Descriptor, Platforms] properties: Descriptor: type: "object" description: | A descriptor struct containing digest, media type, and size. properties: MediaType: type: "string" Size: type: "integer" format: "int64" Digest: type: "string" URLs: type: "array" items: type: "string" Platforms: type: "array" description: | An array containing all platforms supported by the image. items: type: "object" properties: Architecture: type: "string" OS: type: "string" OSVersion: type: "string" OSFeatures: type: "array" items: type: "string" Variant: type: "string" Features: type: "array" items: type: "string" examples: application/json: Descriptor: MediaType: "application/vnd.docker.distribution.manifest.v2+json" Digest: "sha256:c0537ff6a5218ef531ece93d4984efc99bbf3f7497c0a7726c88e2bb7584dc96" Size: 3987495 URLs: - "" Platforms: - Architecture: "amd64" OS: "linux" OSVersion: "" OSFeatures: - "" Variant: "" Features: - "" 401: description: "Failed authentication or no image found" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such image: someimage (tag: latest)" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or id" type: "string" required: true tags: ["Distribution"] /session: post: summary: "Initialize interactive session" description: | Start a new interactive session with a server. Session allows server to call back to the client for advanced capabilities. ### Hijacking This endpoint hijacks the HTTP connection to HTTP2 transport that allows the client to expose gPRC services on that connection. 
For example, the client sends this request to upgrade the connection: ``` POST /session HTTP/1.1 Upgrade: h2c Connection: Upgrade ``` The Docker daemon responds with a `101 UPGRADED` response followed by the raw stream: ``` HTTP/1.1 101 UPGRADED Connection: Upgrade Upgrade: h2c ``` operationId: "Session" produces: - "application/vnd.docker.raw-stream" responses: 101: description: "no error, hijacking successful" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Session"]
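# Illustrative usage sketch (not part of the spec above): one way a client
# could exercise the ServiceList endpoint documented earlier, filtering for
# replicated-mode services. The unix-socket path and the `v1.41` version
# prefix are assumptions and may differ per installation; `filters` must be
# a JSON-encoded `map[string][]string`, which curl URL-encodes here.
#
#   curl --silent --unix-socket /var/run/docker.sock -G \
#     --data-urlencode 'filters={"mode":["replicated"]}' \
#     http://localhost/v1.41/services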
# A Swagger 2.0 (a.k.a. OpenAPI) definition of the Engine API. # # This is used for generating API documentation and the types used by the # client/server. See api/README.md for more information. # # Some style notes: # - This file is used by ReDoc, which allows GitHub Flavored Markdown in # descriptions. # - There is no maximum line length, for ease of editing and pretty diffs. # - operationIds are in the format "NounVerb", with a singular noun. swagger: "2.0" schemes: - "http" - "https" produces: - "application/json" - "text/plain" consumes: - "application/json" - "text/plain" basePath: "/v1.42" info: title: "Docker Engine API" version: "1.42" x-logo: url: "https://docs.docker.com/images/logo-docker-main.png" description: | The Engine API is an HTTP API served by Docker Engine. It is the API the Docker client uses to communicate with the Engine, so everything the Docker client can do can be done with the API. Most of the client's commands map directly to API endpoints (e.g. `docker ps` is `GET /containers/json`). The notable exception is running containers, which consists of several API calls. # Errors The API uses standard HTTP status codes to indicate the success or failure of the API call. The body of the response will be JSON in the following format: ``` { "message": "page not found" } ``` # Versioning The API is usually changed in each release, so API calls are versioned to ensure that clients don't break. To lock to a specific version of the API, you prefix the URL with its version, for example, call `/v1.30/info` to use the v1.30 version of the `/info` endpoint. If the API version specified in the URL is not supported by the daemon, a HTTP `400 Bad Request` error message is returned. If you omit the version-prefix, the current version of the API (v1.42) is used. For example, calling `/info` is the same as calling `/v1.42/info`. Using the API without a version-prefix is deprecated and will be removed in a future release. Engine releases in the near future should support this version of the API, so your client will continue to work even if it is talking to a newer Engine. The API uses an open schema model, which means server may add extra properties to responses. Likewise, the server will ignore any extra query parameters and request body properties. When you write clients, you need to ignore additional properties in responses to ensure they do not break when talking to newer daemons. # Authentication Authentication for registries is handled client side. The client has to send authentication details to various endpoints that need to communicate with registries, such as `POST /images/(name)/push`. These are sent as `X-Registry-Auth` header as a [base64url encoded](https://tools.ietf.org/html/rfc4648#section-5) (JSON) string with the following structure: ``` { "username": "string", "password": "string", "email": "string", "serveraddress": "string" } ``` The `serveraddress` is a domain/IP without a protocol. Throughout this structure, double quotes are required. If you have already got an identity token from the [`/auth` endpoint](#operation/SystemAuth), you can just pass this instead of credentials: ``` { "identitytoken": "9cbaf023786cd7..." } ``` # The tags on paths define the menu sections in the ReDoc documentation, so # the usage of tags must make sense for that: # - They should be singular, not plural. # - There should not be too many tags, or the menu becomes unwieldy. 
For # example, it is preferable to add a path to the "System" tag instead of # creating a tag with a single path in it. # - The order of tags in this list defines the order in the menu. tags: # Primary objects - name: "Container" x-displayName: "Containers" description: | Create and manage containers. - name: "Image" x-displayName: "Images" - name: "Network" x-displayName: "Networks" description: | Networks are user-defined networks that containers can be attached to. See the [networking documentation](https://docs.docker.com/network/) for more information. - name: "Volume" x-displayName: "Volumes" description: | Create and manage persistent storage that can be attached to containers. - name: "Exec" x-displayName: "Exec" description: | Run new commands inside running containers. Refer to the [command-line reference](https://docs.docker.com/engine/reference/commandline/exec/) for more information. To exec a command in a container, you first need to create an exec instance, then start it. These two API endpoints are wrapped up in a single command-line command, `docker exec`. # Swarm things - name: "Swarm" x-displayName: "Swarm" description: | Engines can be clustered together in a swarm. Refer to the [swarm mode documentation](https://docs.docker.com/engine/swarm/) for more information. - name: "Node" x-displayName: "Nodes" description: | Nodes are instances of the Engine participating in a swarm. Swarm mode must be enabled for these endpoints to work. - name: "Service" x-displayName: "Services" description: | Services are the definitions of tasks to run on a swarm. Swarm mode must be enabled for these endpoints to work. - name: "Task" x-displayName: "Tasks" description: | A task is a container running on a swarm. It is the atomic scheduling unit of swarm. Swarm mode must be enabled for these endpoints to work. - name: "Secret" x-displayName: "Secrets" description: | Secrets are sensitive data that can be used by services. Swarm mode must be enabled for these endpoints to work. - name: "Config" x-displayName: "Configs" description: | Configs are application configurations that can be used by services. Swarm mode must be enabled for these endpoints to work. 
# System things - name: "Plugin" x-displayName: "Plugins" - name: "System" x-displayName: "System" definitions: Port: type: "object" description: "An open port on a container" required: [PrivatePort, Type] properties: IP: type: "string" format: "ip-address" description: "Host IP address that the container's port is mapped to" PrivatePort: type: "integer" format: "uint16" x-nullable: false description: "Port on the container" PublicPort: type: "integer" format: "uint16" description: "Port exposed on the host" Type: type: "string" x-nullable: false enum: ["tcp", "udp", "sctp"] example: PrivatePort: 8080 PublicPort: 80 Type: "tcp" MountPoint: type: "object" description: "A mount point inside a container" properties: Type: type: "string" Name: type: "string" Source: type: "string" Destination: type: "string" Driver: type: "string" Mode: type: "string" RW: type: "boolean" Propagation: type: "string" DeviceMapping: type: "object" description: "A device mapping between the host and container" properties: PathOnHost: type: "string" PathInContainer: type: "string" CgroupPermissions: type: "string" example: PathOnHost: "/dev/deviceName" PathInContainer: "/dev/deviceName" CgroupPermissions: "mrw" DeviceRequest: type: "object" description: "A request for devices to be sent to device drivers" properties: Driver: type: "string" example: "nvidia" Count: type: "integer" example: -1 DeviceIDs: type: "array" items: type: "string" example: - "0" - "1" - "GPU-fef8089b-4820-abfc-e83e-94318197576e" Capabilities: description: | A list of capabilities; an OR list of AND lists of capabilities. type: "array" items: type: "array" items: type: "string" example: # gpu AND nvidia AND compute - ["gpu", "nvidia", "compute"] Options: description: | Driver-specific options, specified as a key/value pairs. These options are passed directly to the driver. type: "object" additionalProperties: type: "string" ThrottleDevice: type: "object" properties: Path: description: "Device path" type: "string" Rate: description: "Rate" type: "integer" format: "int64" minimum: 0 Mount: type: "object" properties: Target: description: "Container path." type: "string" Source: description: "Mount source (e.g. a volume name, a host path)." type: "string" Type: description: | The mount type. Available types: - `bind` Mounts a file or directory from the host into the container. Must exist prior to creating the container. - `volume` Creates a volume with the given name and options (or uses a pre-existing volume with the same name and options). These are **not** removed when the container is removed. - `tmpfs` Create a tmpfs with the given options. The mount source cannot be specified for tmpfs. - `npipe` Mounts a named pipe from the host into the container. Must exist prior to creating the container. type: "string" enum: - "bind" - "volume" - "tmpfs" - "npipe" ReadOnly: description: "Whether the mount should be read-only." type: "boolean" Consistency: description: "The consistency requirement for the mount: `default`, `consistent`, `cached`, or `delegated`." type: "string" BindOptions: description: "Optional configuration for the `bind` type." type: "object" properties: Propagation: description: "A propagation mode with the value `[r]private`, `[r]shared`, or `[r]slave`." type: "string" enum: - "private" - "rprivate" - "shared" - "rshared" - "slave" - "rslave" NonRecursive: description: "Disable recursive bind mount." type: "boolean" default: false VolumeOptions: description: "Optional configuration for the `volume` type." 
type: "object" properties: NoCopy: description: "Populate volume with data from the target." type: "boolean" default: false Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" DriverConfig: description: "Map of driver specific options" type: "object" properties: Name: description: "Name of the driver to use to create the volume." type: "string" Options: description: "key/value map of driver specific options." type: "object" additionalProperties: type: "string" TmpfsOptions: description: "Optional configuration for the `tmpfs` type." type: "object" properties: SizeBytes: description: "The size for the tmpfs mount in bytes." type: "integer" format: "int64" Mode: description: "The permission mode for the tmpfs mount in an integer." type: "integer" RestartPolicy: description: | The behavior to apply when the container exits. The default is not to restart. An ever increasing delay (double the previous delay, starting at 100ms) is added before each restart to prevent flooding the server. type: "object" properties: Name: type: "string" description: | - Empty string means not to restart - `no` Do not automatically restart - `always` Always restart - `unless-stopped` Restart always except when the user has manually stopped the container - `on-failure` Restart only when the container exit code is non-zero enum: - "" - "no" - "always" - "unless-stopped" - "on-failure" MaximumRetryCount: type: "integer" description: | If `on-failure` is used, the number of times to retry before giving up. Resources: description: "A container's resources (cgroups config, ulimits, etc)" type: "object" properties: # Applicable to all platforms CpuShares: description: | An integer value representing this container's relative CPU weight versus other containers. type: "integer" Memory: description: "Memory limit in bytes." type: "integer" format: "int64" default: 0 # Applicable to UNIX platforms CgroupParent: description: | Path to `cgroups` under which the container's `cgroup` is created. If the path is not absolute, the path is considered to be relative to the `cgroups` path of the init process. Cgroups are created if they do not already exist. type: "string" BlkioWeight: description: "Block IO weight (relative weight)." type: "integer" minimum: 0 maximum: 1000 BlkioWeightDevice: description: | Block IO weight (relative device weight) in the form: ``` [{"Path": "device_path", "Weight": weight}] ``` type: "array" items: type: "object" properties: Path: type: "string" Weight: type: "integer" minimum: 0 BlkioDeviceReadBps: description: | Limit read rate (bytes per second) from a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" BlkioDeviceWriteBps: description: | Limit write rate (bytes per second) to a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" BlkioDeviceReadIOps: description: | Limit read rate (IO per second) from a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" BlkioDeviceWriteIOps: description: | Limit write rate (IO per second) to a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" CpuPeriod: description: "The length of a CPU period in microseconds." 
type: "integer" format: "int64" CpuQuota: description: | Microseconds of CPU time that the container can get in a CPU period. type: "integer" format: "int64" CpuRealtimePeriod: description: | The length of a CPU real-time period in microseconds. Set to 0 to allocate no time allocated to real-time tasks. type: "integer" format: "int64" CpuRealtimeRuntime: description: | The length of a CPU real-time runtime in microseconds. Set to 0 to allocate no time allocated to real-time tasks. type: "integer" format: "int64" CpusetCpus: description: | CPUs in which to allow execution (e.g., `0-3`, `0,1`). type: "string" example: "0-3" CpusetMems: description: | Memory nodes (MEMs) in which to allow execution (0-3, 0,1). Only effective on NUMA systems. type: "string" Devices: description: "A list of devices to add to the container." type: "array" items: $ref: "#/definitions/DeviceMapping" DeviceCgroupRules: description: "a list of cgroup rules to apply to the container" type: "array" items: type: "string" example: "c 13:* rwm" DeviceRequests: description: | A list of requests for devices to be sent to device drivers. type: "array" items: $ref: "#/definitions/DeviceRequest" KernelMemory: description: | Kernel memory limit in bytes. <p><br /></p> > **Deprecated**: This field is deprecated as the kernel 5.4 deprecated > `kmem.limit_in_bytes`. type: "integer" format: "int64" example: 209715200 KernelMemoryTCP: description: "Hard limit for kernel TCP buffer memory (in bytes)." type: "integer" format: "int64" MemoryReservation: description: "Memory soft limit in bytes." type: "integer" format: "int64" MemorySwap: description: | Total memory limit (memory + swap). Set as `-1` to enable unlimited swap. type: "integer" format: "int64" MemorySwappiness: description: | Tune a container's memory swappiness behavior. Accepts an integer between 0 and 100. type: "integer" format: "int64" minimum: 0 maximum: 100 NanoCpus: description: "CPU quota in units of 10<sup>-9</sup> CPUs." type: "integer" format: "int64" OomKillDisable: description: "Disable OOM Killer for the container." type: "boolean" Init: description: | Run an init inside the container that forwards signals and reaps processes. This field is omitted if empty, and the default (as configured on the daemon) is used. type: "boolean" x-nullable: true PidsLimit: description: | Tune a container's PIDs limit. Set `0` or `-1` for unlimited, or `null` to not change. type: "integer" format: "int64" x-nullable: true Ulimits: description: | A list of resource limits to set in the container. For example: ``` {"Name": "nofile", "Soft": 1024, "Hard": 2048} ``` type: "array" items: type: "object" properties: Name: description: "Name of ulimit" type: "string" Soft: description: "Soft limit" type: "integer" Hard: description: "Hard limit" type: "integer" # Applicable to Windows CpuCount: description: | The number of usable CPUs (Windows only). On Windows Server containers, the processor resource controls are mutually exclusive. The order of precedence is `CPUCount` first, then `CPUShares`, and `CPUPercent` last. type: "integer" format: "int64" CpuPercent: description: | The usable percentage of the available CPUs (Windows only). On Windows Server containers, the processor resource controls are mutually exclusive. The order of precedence is `CPUCount` first, then `CPUShares`, and `CPUPercent` last. 
type: "integer" format: "int64" IOMaximumIOps: description: "Maximum IOps for the container system drive (Windows only)" type: "integer" format: "int64" IOMaximumBandwidth: description: | Maximum IO in bytes per second for the container system drive (Windows only). type: "integer" format: "int64" Limit: description: | An object describing a limit on resources which can be requested by a task. type: "object" properties: NanoCPUs: type: "integer" format: "int64" example: 4000000000 MemoryBytes: type: "integer" format: "int64" example: 8272408576 Pids: description: | Limits the maximum number of PIDs in the container. Set `0` for unlimited. type: "integer" format: "int64" default: 0 example: 100 ResourceObject: description: | An object describing the resources which can be advertised by a node and requested by a task. type: "object" properties: NanoCPUs: type: "integer" format: "int64" example: 4000000000 MemoryBytes: type: "integer" format: "int64" example: 8272408576 GenericResources: $ref: "#/definitions/GenericResources" GenericResources: description: | User-defined resources can be either Integer resources (e.g, `SSD=3`) or String resources (e.g, `GPU=UUID1`). type: "array" items: type: "object" properties: NamedResourceSpec: type: "object" properties: Kind: type: "string" Value: type: "string" DiscreteResourceSpec: type: "object" properties: Kind: type: "string" Value: type: "integer" format: "int64" example: - DiscreteResourceSpec: Kind: "SSD" Value: 3 - NamedResourceSpec: Kind: "GPU" Value: "UUID1" - NamedResourceSpec: Kind: "GPU" Value: "UUID2" HealthConfig: description: "A test to perform to check that the container is healthy." type: "object" properties: Test: description: | The test to perform. Possible values are: - `[]` inherit healthcheck from image or parent image - `["NONE"]` disable healthcheck - `["CMD", args...]` exec arguments directly - `["CMD-SHELL", command]` run command with system's default shell type: "array" items: type: "string" Interval: description: | The time to wait between checks in nanoseconds. It should be 0 or at least 1000000 (1 ms). 0 means inherit. type: "integer" Timeout: description: | The time to wait before considering the check to have hung. It should be 0 or at least 1000000 (1 ms). 0 means inherit. type: "integer" Retries: description: | The number of consecutive failures needed to consider a container as unhealthy. 0 means inherit. type: "integer" StartPeriod: description: | Start period for the container to initialize before starting health-retries countdown in nanoseconds. It should be 0 or at least 1000000 (1 ms). 0 means inherit. type: "integer" Health: description: | Health stores information about the container's healthcheck results. 
type: "object" properties: Status: description: | Status is one of `none`, `starting`, `healthy` or `unhealthy` - "none" Indicates there is no healthcheck - "starting" Starting indicates that the container is not yet ready - "healthy" Healthy indicates that the container is running correctly - "unhealthy" Unhealthy indicates that the container has a problem type: "string" enum: - "none" - "starting" - "healthy" - "unhealthy" example: "healthy" FailingStreak: description: "FailingStreak is the number of consecutive failures" type: "integer" example: 0 Log: type: "array" description: | Log contains the last few results (oldest first) items: x-nullable: true $ref: "#/definitions/HealthcheckResult" HealthcheckResult: description: | HealthcheckResult stores information about a single run of a healthcheck probe type: "object" properties: Start: description: | Date and time at which this check started in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "date-time" example: "2020-01-04T10:44:24.496525531Z" End: description: | Date and time at which this check ended in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2020-01-04T10:45:21.364524523Z" ExitCode: description: | ExitCode meanings: - `0` healthy - `1` unhealthy - `2` reserved (considered unhealthy) - other values: error running probe type: "integer" example: 0 Output: description: "Output from last check" type: "string" HostConfig: description: "Container configuration that depends on the host we are running on" allOf: - $ref: "#/definitions/Resources" - type: "object" properties: # Applicable to all platforms Binds: type: "array" description: | A list of volume bindings for this container. Each volume binding is a string in one of these forms: - `host-src:container-dest[:options]` to bind-mount a host path into the container. Both `host-src`, and `container-dest` must be an _absolute_ path. - `volume-name:container-dest[:options]` to bind-mount a volume managed by a volume driver into the container. `container-dest` must be an _absolute_ path. `options` is an optional, comma-delimited list of: - `nocopy` disables automatic copying of data from the container path to the volume. The `nocopy` flag only applies to named volumes. - `[ro|rw]` mounts a volume read-only or read-write, respectively. If omitted or set to `rw`, volumes are mounted read-write. - `[z|Z]` applies SELinux labels to allow or deny multiple containers to read and write to the same volume. - `z`: a _shared_ content label is applied to the content. This label indicates that multiple containers can share the volume content, for both reading and writing. - `Z`: a _private unshared_ label is applied to the content. This label indicates that only the current container can use a private volume. Labeling systems such as SELinux require proper labels to be placed on volume content that is mounted into a container. Without a label, the security system can prevent a container's processes from using the content. By default, the labels set by the host operating system are not modified. - `[[r]shared|[r]slave|[r]private]` specifies mount [propagation behavior](https://www.kernel.org/doc/Documentation/filesystems/sharedsubtree.txt). This only applies to bind-mounted volumes, not internal volumes or named volumes. 
Mount propagation requires the source mount point (the location where the source directory is mounted in the host operating system) to have the correct propagation properties. For shared volumes, the source mount point must be set to `shared`. For slave volumes, the mount must be set to either `shared` or `slave`. items: type: "string" ContainerIDFile: type: "string" description: "Path to a file where the container ID is written" LogConfig: type: "object" description: "The logging configuration for this container" properties: Type: type: "string" enum: - "json-file" - "syslog" - "journald" - "gelf" - "fluentd" - "awslogs" - "splunk" - "etwlogs" - "none" Config: type: "object" additionalProperties: type: "string" NetworkMode: type: "string" description: | Network mode to use for this container. Supported standard values are: `bridge`, `host`, `none`, and `container:<name|id>`. Any other value is taken as a custom network's name to which this container should connect to. PortBindings: $ref: "#/definitions/PortMap" RestartPolicy: $ref: "#/definitions/RestartPolicy" AutoRemove: type: "boolean" description: | Automatically remove the container when the container's process exits. This has no effect if `RestartPolicy` is set. VolumeDriver: type: "string" description: "Driver that this container uses to mount volumes." VolumesFrom: type: "array" description: | A list of volumes to inherit from another container, specified in the form `<container name>[:<ro|rw>]`. items: type: "string" Mounts: description: | Specification for mounts to be added to the container. type: "array" items: $ref: "#/definitions/Mount" # Applicable to UNIX platforms CapAdd: type: "array" description: | A list of kernel capabilities to add to the container. Conflicts with option 'Capabilities'. items: type: "string" CapDrop: type: "array" description: | A list of kernel capabilities to drop from the container. Conflicts with option 'Capabilities'. items: type: "string" CgroupnsMode: type: "string" enum: - "private" - "host" description: | cgroup namespace mode for the container. Possible values are: - `"private"`: the container runs in its own private cgroup namespace - `"host"`: use the host system's cgroup namespace If not specified, the daemon default is used, which can either be `"private"` or `"host"`, depending on daemon version, kernel support and configuration. Dns: type: "array" description: "A list of DNS servers for the container to use." items: type: "string" DnsOptions: type: "array" description: "A list of DNS options." items: type: "string" DnsSearch: type: "array" description: "A list of DNS search domains." items: type: "string" ExtraHosts: type: "array" description: | A list of hostnames/IP mappings to add to the container's `/etc/hosts` file. Specified in the form `["hostname:IP"]`. items: type: "string" GroupAdd: type: "array" description: | A list of additional groups that the container process will run as. items: type: "string" IpcMode: type: "string" description: | IPC sharing mode for the container. Possible values are: - `"none"`: own private IPC namespace, with /dev/shm not mounted - `"private"`: own private IPC namespace - `"shareable"`: own private IPC namespace, with a possibility to share it with other containers - `"container:<name|id>"`: join another (shareable) container's IPC namespace - `"host"`: use the host system's IPC namespace If not specified, daemon default is used, which can either be `"private"` or `"shareable"`, depending on daemon version and configuration. 
Cgroup: type: "string" description: "Cgroup to use for the container." Links: type: "array" description: | A list of links for the container in the form `container_name:alias`. items: type: "string" OomScoreAdj: type: "integer" description: | An integer value containing the score given to the container in order to tune OOM killer preferences. example: 500 PidMode: type: "string" description: | Set the PID (Process) Namespace mode for the container. It can be either: - `"container:<name|id>"`: joins another container's PID namespace - `"host"`: use the host's PID namespace inside the container Privileged: type: "boolean" description: "Gives the container full access to the host." PublishAllPorts: type: "boolean" description: | Allocates an ephemeral host port for all of a container's exposed ports. Ports are de-allocated when the container stops and allocated when the container starts. The allocated port might be changed when restarting the container. The port is selected from the ephemeral port range that depends on the kernel. For example, on Linux the range is defined by `/proc/sys/net/ipv4/ip_local_port_range`. ReadonlyRootfs: type: "boolean" description: "Mount the container's root filesystem as read only." SecurityOpt: type: "array" description: "A list of string values to customize labels for MLS systems, such as SELinux." items: type: "string" StorageOpt: type: "object" description: | Storage driver options for this container, in the form `{"size": "120G"}`. additionalProperties: type: "string" Tmpfs: type: "object" description: | A map of container directories which should be replaced by tmpfs mounts, and their corresponding mount options. For example: ``` { "/run": "rw,noexec,nosuid,size=65536k" } ``` additionalProperties: type: "string" UTSMode: type: "string" description: "UTS namespace to use for the container." UsernsMode: type: "string" description: | Sets the usernamespace mode for the container when usernamespace remapping option is enabled. ShmSize: type: "integer" description: | Size of `/dev/shm` in bytes. If omitted, the system uses 64MB. minimum: 0 Sysctls: type: "object" description: | A list of kernel parameters (sysctls) to set in the container. For example: ``` {"net.ipv4.ip_forward": "1"} ``` additionalProperties: type: "string" Runtime: type: "string" description: "Runtime to use with this container." # Applicable to Windows ConsoleSize: type: "array" description: | Initial console size, as an `[height, width]` array. (Windows only) minItems: 2 maxItems: 2 items: type: "integer" minimum: 0 Isolation: type: "string" description: | Isolation technology of the container. (Windows only) enum: - "default" - "process" - "hyperv" MaskedPaths: type: "array" description: | The list of paths to be masked inside the container (this overrides the default set of paths). items: type: "string" ReadonlyPaths: type: "array" description: | The list of paths to be set as read-only inside the container (this overrides the default set of paths). items: type: "string" ContainerConfig: description: "Configuration for a container that is portable between hosts" type: "object" properties: Hostname: description: "The hostname to use for the container, as a valid RFC 1123 hostname." type: "string" Domainname: description: "The domain name to use for the container." type: "string" User: description: "The user that commands are run as inside the container." type: "string" AttachStdin: description: "Whether to attach to `stdin`." 
type: "boolean" default: false AttachStdout: description: "Whether to attach to `stdout`." type: "boolean" default: true AttachStderr: description: "Whether to attach to `stderr`." type: "boolean" default: true ExposedPorts: description: | An object mapping ports to an empty object in the form: `{"<port>/<tcp|udp|sctp>": {}}` type: "object" additionalProperties: type: "object" enum: - {} default: {} Tty: description: | Attach standard streams to a TTY, including `stdin` if it is not closed. type: "boolean" default: false OpenStdin: description: "Open `stdin`" type: "boolean" default: false StdinOnce: description: "Close `stdin` after one attached client disconnects" type: "boolean" default: false Env: description: | A list of environment variables to set inside the container in the form `["VAR=value", ...]`. A variable without `=` is removed from the environment, rather than to have an empty value. type: "array" items: type: "string" Cmd: description: | Command to run specified as a string or an array of strings. type: "array" items: type: "string" Healthcheck: $ref: "#/definitions/HealthConfig" ArgsEscaped: description: "Command is already escaped (Windows only)" type: "boolean" Image: description: | The name of the image to use when creating the container/ type: "string" Volumes: description: | An object mapping mount point paths inside the container to empty objects. type: "object" additionalProperties: type: "object" enum: - {} default: {} WorkingDir: description: "The working directory for commands to run in." type: "string" Entrypoint: description: | The entry point for the container as a string or an array of strings. If the array consists of exactly one empty string (`[""]`) then the entry point is reset to system default (i.e., the entry point used by docker when there is no `ENTRYPOINT` instruction in the `Dockerfile`). type: "array" items: type: "string" NetworkDisabled: description: "Disable networking for the container." type: "boolean" MacAddress: description: "MAC address of the container." type: "string" OnBuild: description: | `ONBUILD` metadata that were defined in the image's `Dockerfile`. type: "array" items: type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" StopSignal: description: | Signal to stop a container as a string or unsigned integer. type: "string" default: "SIGTERM" StopTimeout: description: "Timeout to stop a container in seconds." type: "integer" default: 10 Shell: description: | Shell for when `RUN`, `CMD`, and `ENTRYPOINT` uses a shell. type: "array" items: type: "string" NetworkingConfig: description: | NetworkingConfig represents the container's networking configuration for each of its interfaces. It is used for the networking configs specified in the `docker create` and `docker network connect` commands. type: "object" properties: EndpointsConfig: description: | A mapping of network name to endpoint configuration for that network. type: "object" additionalProperties: $ref: "#/definitions/EndpointSettings" example: # putting an example here, instead of using the example values from # /definitions/EndpointSettings, because containers/create currently # does not support attaching to multiple networks, so the example request # would be confusing if it showed that multiple networks can be contained # in the EndpointsConfig. 
# TODO remove once we support multiple networks on container create (see https://github.com/moby/moby/blob/07e6b843594e061f82baa5fa23c2ff7d536c2a05/daemon/create.go#L323) EndpointsConfig: isolated_nw: IPAMConfig: IPv4Address: "172.20.30.33" IPv6Address: "2001:db8:abcd::3033" LinkLocalIPs: - "169.254.34.68" - "fe80::3468" Links: - "container_1" - "container_2" Aliases: - "server_x" - "server_y" NetworkSettings: description: "NetworkSettings exposes the network settings in the API" type: "object" properties: Bridge: description: Name of the network's bridge (for example, `docker0`). type: "string" example: "docker0" SandboxID: description: SandboxID uniquely represents a container's network stack. type: "string" example: "9d12daf2c33f5959c8bf90aa513e4f65b561738661003029ec84830cd503a0c3" HairpinMode: description: | Indicates if hairpin NAT should be enabled on the virtual interface. type: "boolean" example: false LinkLocalIPv6Address: description: IPv6 unicast address using the link-local prefix. type: "string" example: "fe80::42:acff:fe11:1" LinkLocalIPv6PrefixLen: description: Prefix length of the IPv6 unicast address. type: "integer" example: "64" Ports: $ref: "#/definitions/PortMap" SandboxKey: description: SandboxKey identifies the sandbox type: "string" example: "/var/run/docker/netns/8ab54b426c38" # TODO is SecondaryIPAddresses actually used? SecondaryIPAddresses: description: "" type: "array" items: $ref: "#/definitions/Address" x-nullable: true # TODO is SecondaryIPv6Addresses actually used? SecondaryIPv6Addresses: description: "" type: "array" items: $ref: "#/definitions/Address" x-nullable: true # TODO properties below are part of DefaultNetworkSettings, which is # marked as deprecated since Docker 1.9 and to be removed in Docker v17.12 EndpointID: description: | EndpointID uniquely represents a service endpoint in a Sandbox. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "b88f5b905aabf2893f3cbc4ee42d1ea7980bbc0a92e2c8922b1e1795298afb0b" Gateway: description: | Gateway address for the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "172.17.0.1" GlobalIPv6Address: description: | Global IPv6 address for the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "2001:db8::5689" GlobalIPv6PrefixLen: description: | Mask length of the global IPv6 address. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. 
This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "integer" example: 64 IPAddress: description: | IPv4 address for the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "172.17.0.4" IPPrefixLen: description: | Mask length of the IPv4 address. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "integer" example: 16 IPv6Gateway: description: | IPv6 gateway address for this network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "2001:db8:2::100" MacAddress: description: | MAC address for the container on the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "02:42:ac:11:00:04" Networks: description: | Information about all networks that the container is connected to. type: "object" additionalProperties: $ref: "#/definitions/EndpointSettings" Address: description: Address represents an IPv4 or IPv6 IP address. type: "object" properties: Addr: description: IP address. type: "string" PrefixLen: description: Mask length of the IP address. type: "integer" PortMap: description: | PortMap describes the mapping of container ports to host ports, using the container's port-number and protocol as key in the format `<port>/<protocol>`, for example, `80/udp`. If a container's port is mapped for multiple protocols, separate entries are added to the mapping table. type: "object" additionalProperties: type: "array" x-nullable: true items: $ref: "#/definitions/PortBinding" example: "443/tcp": - HostIp: "127.0.0.1" HostPort: "4443" "80/tcp": - HostIp: "0.0.0.0" HostPort: "80" - HostIp: "0.0.0.0" HostPort: "8080" "80/udp": - HostIp: "0.0.0.0" HostPort: "80" "53/udp": - HostIp: "0.0.0.0" HostPort: "53" "2377/tcp": null PortBinding: description: | PortBinding represents a binding between a host IP address and a host port. type: "object" properties: HostIp: description: "Host IP address that the container's port is mapped to." type: "string" example: "127.0.0.1" HostPort: description: "Host port number that the container's port is mapped to." type: "string" example: "4443" GraphDriverData: description: "Information about a container's graph driver." 
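Because the top-level bridge fields on `NetworkSettings` are deprecated, per-network details are better read from the `Networks` map, and published ports from the `Ports` PortMap. A hedged sketch that inspects a container via the standard `GET /containers/{id}/json` endpoint and decodes only those fields (the container name `web` and the version prefix are assumptions):

```go
package main

import (
	"context"
	"encoding/json"
	"fmt"
	"net"
	"net/http"
)

func main() {
	cli := &http.Client{Transport: &http.Transport{
		DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
			return (&net.Dialer{}).DialContext(ctx, "unix", "/var/run/docker.sock")
		},
	}}

	resp, err := cli.Get("http://localhost/v1.41/containers/web/json") // "web" is illustrative
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Only the fields used here are declared; they mirror the NetworkSettings,
	// EndpointSettings and PortMap definitions above.
	var inspect struct {
		NetworkSettings struct {
			Networks map[string]struct {
				IPAddress  string
				Gateway    string
				MacAddress string
			}
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}
	if err := json.NewDecoder(resp.Body).Decode(&inspect); err != nil {
		panic(err)
	}
	for name, ep := range inspect.NetworkSettings.Networks {
		fmt.Printf("network %s: ip=%s gw=%s mac=%s\n", name, ep.IPAddress, ep.Gateway, ep.MacAddress)
	}
	for port, bindings := range inspect.NetworkSettings.Ports {
		for _, b := range bindings {
			fmt.Printf("%s -> %s:%s\n", port, b.HostIp, b.HostPort)
		}
	}
}
```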
type: "object" required: [Name, Data] properties: Name: type: "string" x-nullable: false Data: type: "object" x-nullable: false additionalProperties: type: "string" Image: type: "object" required: - Id - Parent - Comment - Created - Container - DockerVersion - Author - Architecture - Os - Size - VirtualSize - GraphDriver - RootFS properties: Id: type: "string" x-nullable: false RepoTags: type: "array" items: type: "string" RepoDigests: type: "array" items: type: "string" Parent: type: "string" x-nullable: false Comment: type: "string" x-nullable: false Created: type: "string" x-nullable: false Container: type: "string" x-nullable: false ContainerConfig: $ref: "#/definitions/ContainerConfig" DockerVersion: type: "string" x-nullable: false Author: type: "string" x-nullable: false Config: $ref: "#/definitions/ContainerConfig" Architecture: type: "string" x-nullable: false Os: type: "string" x-nullable: false OsVersion: type: "string" Size: type: "integer" format: "int64" x-nullable: false VirtualSize: type: "integer" format: "int64" x-nullable: false GraphDriver: $ref: "#/definitions/GraphDriverData" RootFS: type: "object" required: [Type] properties: Type: type: "string" x-nullable: false Layers: type: "array" items: type: "string" BaseLayer: type: "string" Metadata: type: "object" properties: LastTagTime: type: "string" format: "dateTime" ImageSummary: type: "object" required: - Id - ParentId - RepoTags - RepoDigests - Created - Size - SharedSize - VirtualSize - Labels - Containers properties: Id: type: "string" x-nullable: false ParentId: type: "string" x-nullable: false RepoTags: type: "array" x-nullable: false items: type: "string" RepoDigests: type: "array" x-nullable: false items: type: "string" Created: type: "integer" x-nullable: false Size: type: "integer" x-nullable: false SharedSize: type: "integer" x-nullable: false VirtualSize: type: "integer" x-nullable: false Labels: type: "object" x-nullable: false additionalProperties: type: "string" Containers: x-nullable: false type: "integer" AuthConfig: type: "object" properties: username: type: "string" password: type: "string" email: type: "string" serveraddress: type: "string" example: username: "hannibal" password: "xxxx" serveraddress: "https://index.docker.io/v1/" ProcessConfig: type: "object" properties: privileged: type: "boolean" user: type: "string" tty: type: "boolean" entrypoint: type: "string" arguments: type: "array" items: type: "string" Volume: type: "object" required: [Name, Driver, Mountpoint, Labels, Scope, Options] properties: Name: type: "string" description: "Name of the volume." x-nullable: false Driver: type: "string" description: "Name of the volume driver used by the volume." x-nullable: false Mountpoint: type: "string" description: "Mount path of the volume on the host." x-nullable: false CreatedAt: type: "string" format: "dateTime" description: "Date/Time the volume was created." Status: type: "object" description: | Low-level details about the volume, provided by the volume driver. Details are returned as a map with key/value pairs: `{"key":"value","key2":"value2"}`. The `Status` field is optional, and is omitted if the volume driver does not support this feature. additionalProperties: type: "object" Labels: type: "object" description: "User-defined key/value metadata." x-nullable: false additionalProperties: type: "string" Scope: type: "string" description: | The level at which the volume exists. Either `global` for cluster-wide, or `local` for machine level. 
default: "local" x-nullable: false enum: ["local", "global"] Options: type: "object" description: | The driver specific options used when creating the volume. additionalProperties: type: "string" UsageData: type: "object" x-nullable: true required: [Size, RefCount] description: | Usage details about the volume. This information is used by the `GET /system/df` endpoint, and omitted in other endpoints. properties: Size: type: "integer" default: -1 description: | Amount of disk space used by the volume (in bytes). This information is only available for volumes created with the `"local"` volume driver. For volumes created with other volume drivers, this field is set to `-1` ("not available") x-nullable: false RefCount: type: "integer" default: -1 description: | The number of containers referencing this volume. This field is set to `-1` if the reference-count is not available. x-nullable: false example: Name: "tardis" Driver: "custom" Mountpoint: "/var/lib/docker/volumes/tardis" Status: hello: "world" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Scope: "local" CreatedAt: "2016-06-07T20:31:11.853781916Z" Network: type: "object" properties: Name: type: "string" Id: type: "string" Created: type: "string" format: "dateTime" Scope: type: "string" Driver: type: "string" EnableIPv6: type: "boolean" IPAM: $ref: "#/definitions/IPAM" Internal: type: "boolean" Attachable: type: "boolean" Ingress: type: "boolean" Containers: type: "object" additionalProperties: $ref: "#/definitions/NetworkContainer" Options: type: "object" additionalProperties: type: "string" Labels: type: "object" additionalProperties: type: "string" example: Name: "net01" Id: "7d86d31b1478e7cca9ebed7e73aa0fdeec46c5ca29497431d3007d2d9e15ed99" Created: "2016-10-19T04:33:30.360899459Z" Scope: "local" Driver: "bridge" EnableIPv6: false IPAM: Driver: "default" Config: - Subnet: "172.19.0.0/16" Gateway: "172.19.0.1" Options: foo: "bar" Internal: false Attachable: false Ingress: false Containers: 19a4d5d687db25203351ed79d478946f861258f018fe384f229f2efa4b23513c: Name: "test" EndpointID: "628cadb8bcb92de107b2a1e516cbffe463e321f548feb37697cce00ad694f21a" MacAddress: "02:42:ac:13:00:02" IPv4Address: "172.19.0.2/16" IPv6Address: "" Options: com.docker.network.bridge.default_bridge: "true" com.docker.network.bridge.enable_icc: "true" com.docker.network.bridge.enable_ip_masquerade: "true" com.docker.network.bridge.host_binding_ipv4: "0.0.0.0" com.docker.network.bridge.name: "docker0" com.docker.network.driver.mtu: "1500" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" IPAM: type: "object" properties: Driver: description: "Name of the IPAM driver to use." type: "string" default: "default" Config: description: | List of IPAM configuration options, specified as a map: ``` {"Subnet": <CIDR>, "IPRange": <CIDR>, "Gateway": <IP address>, "AuxAddress": <device_name:IP address>} ``` type: "array" items: type: "object" additionalProperties: type: "string" Options: description: "Driver-specific options, specified as a map." 
type: "object" additionalProperties: type: "string" NetworkContainer: type: "object" properties: Name: type: "string" EndpointID: type: "string" MacAddress: type: "string" IPv4Address: type: "string" IPv6Address: type: "string" BuildInfo: type: "object" properties: id: type: "string" stream: type: "string" error: type: "string" errorDetail: $ref: "#/definitions/ErrorDetail" status: type: "string" progress: type: "string" progressDetail: $ref: "#/definitions/ProgressDetail" aux: $ref: "#/definitions/ImageID" BuildCache: type: "object" properties: ID: type: "string" Parent: type: "string" Type: type: "string" Description: type: "string" InUse: type: "boolean" Shared: type: "boolean" Size: description: | Amount of disk space used by the build cache (in bytes). type: "integer" CreatedAt: description: | Date and time at which the build cache was created in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2016-08-18T10:44:24.496525531Z" LastUsedAt: description: | Date and time at which the build cache was last used in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" x-nullable: true example: "2017-08-09T07:09:37.632105588Z" UsageCount: type: "integer" ImageID: type: "object" description: "Image ID or Digest" properties: ID: type: "string" example: ID: "sha256:85f05633ddc1c50679be2b16a0479ab6f7637f8884e0cfe0f4d20e1ebb3d6e7c" CreateImageInfo: type: "object" properties: id: type: "string" error: type: "string" status: type: "string" progress: type: "string" progressDetail: $ref: "#/definitions/ProgressDetail" PushImageInfo: type: "object" properties: error: type: "string" status: type: "string" progress: type: "string" progressDetail: $ref: "#/definitions/ProgressDetail" ErrorDetail: type: "object" properties: code: type: "integer" message: type: "string" ProgressDetail: type: "object" properties: current: type: "integer" total: type: "integer" ErrorResponse: description: "Represents an error." type: "object" required: ["message"] properties: message: description: "The error message." type: "string" x-nullable: false example: message: "Something went wrong." IdResponse: description: "Response to an API call that returns just an Id" type: "object" required: ["Id"] properties: Id: description: "The id of the newly created object." type: "string" x-nullable: false EndpointSettings: description: "Configuration for a network endpoint." type: "object" properties: # Configurations IPAMConfig: $ref: "#/definitions/EndpointIPAMConfig" Links: type: "array" items: type: "string" example: - "container_1" - "container_2" Aliases: type: "array" items: type: "string" example: - "server_x" - "server_y" # Operational data NetworkID: description: | Unique ID of the network. type: "string" example: "08754567f1f40222263eab4102e1c733ae697e8e354aa9cd6e18d7402835292a" EndpointID: description: | Unique ID for the service endpoint in a Sandbox. type: "string" example: "b88f5b905aabf2893f3cbc4ee42d1ea7980bbc0a92e2c8922b1e1795298afb0b" Gateway: description: | Gateway address for this network. type: "string" example: "172.17.0.1" IPAddress: description: | IPv4 address. type: "string" example: "172.17.0.4" IPPrefixLen: description: | Mask length of the IPv4 address. type: "integer" example: 16 IPv6Gateway: description: | IPv6 gateway address. type: "string" example: "2001:db8:2::100" GlobalIPv6Address: description: | Global IPv6 address. 
type: "string" example: "2001:db8::5689" GlobalIPv6PrefixLen: description: | Mask length of the global IPv6 address. type: "integer" format: "int64" example: 64 MacAddress: description: | MAC address for the endpoint on this network. type: "string" example: "02:42:ac:11:00:04" DriverOpts: description: | DriverOpts is a mapping of driver options and values. These options are passed directly to the driver and are driver specific. type: "object" x-nullable: true additionalProperties: type: "string" example: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" EndpointIPAMConfig: description: | EndpointIPAMConfig represents an endpoint's IPAM configuration. type: "object" x-nullable: true properties: IPv4Address: type: "string" example: "172.20.30.33" IPv6Address: type: "string" example: "2001:db8:abcd::3033" LinkLocalIPs: type: "array" items: type: "string" example: - "169.254.34.68" - "fe80::3468" PluginMount: type: "object" x-nullable: false required: [Name, Description, Settable, Source, Destination, Type, Options] properties: Name: type: "string" x-nullable: false example: "some-mount" Description: type: "string" x-nullable: false example: "This is a mount that's used by the plugin." Settable: type: "array" items: type: "string" Source: type: "string" example: "/var/lib/docker/plugins/" Destination: type: "string" x-nullable: false example: "/mnt/state" Type: type: "string" x-nullable: false example: "bind" Options: type: "array" items: type: "string" example: - "rbind" - "rw" PluginDevice: type: "object" required: [Name, Description, Settable, Path] x-nullable: false properties: Name: type: "string" x-nullable: false Description: type: "string" x-nullable: false Settable: type: "array" items: type: "string" Path: type: "string" example: "/dev/fuse" PluginEnv: type: "object" x-nullable: false required: [Name, Description, Settable, Value] properties: Name: x-nullable: false type: "string" Description: x-nullable: false type: "string" Settable: type: "array" items: type: "string" Value: type: "string" PluginInterfaceType: type: "object" x-nullable: false required: [Prefix, Capability, Version] properties: Prefix: type: "string" x-nullable: false Capability: type: "string" x-nullable: false Version: type: "string" x-nullable: false PluginPrivilegeItem: description: | Describes a permission the user has to accept upon installing the plugin. type: "object" properties: Name: type: "string" example: "network" Description: type: "string" Value: type: "array" items: type: "string" example: - "host" Plugin: description: "A plugin for the Engine API" type: "object" required: [Settings, Enabled, Config, Name] properties: Id: type: "string" example: "5724e2c8652da337ab2eedd19fc6fc0ec908e4bd907c7421bf6a8dfc70c4c078" Name: type: "string" x-nullable: false example: "tiborvass/sample-volume-plugin" Enabled: description: True if the plugin is running. False if the plugin is not running, only installed. type: "boolean" x-nullable: false example: true Settings: description: "Settings that can be modified by users." 
type: "object" x-nullable: false required: [Args, Devices, Env, Mounts] properties: Mounts: type: "array" items: $ref: "#/definitions/PluginMount" Env: type: "array" items: type: "string" example: - "DEBUG=0" Args: type: "array" items: type: "string" Devices: type: "array" items: $ref: "#/definitions/PluginDevice" PluginReference: description: "plugin remote reference used to push/pull the plugin" type: "string" x-nullable: false example: "localhost:5000/tiborvass/sample-volume-plugin:latest" Config: description: "The config of a plugin." type: "object" x-nullable: false required: - Description - Documentation - Interface - Entrypoint - WorkDir - Network - Linux - PidHost - PropagatedMount - IpcHost - Mounts - Env - Args properties: DockerVersion: description: "Docker Version used to create the plugin" type: "string" x-nullable: false example: "17.06.0-ce" Description: type: "string" x-nullable: false example: "A sample volume plugin for Docker" Documentation: type: "string" x-nullable: false example: "https://docs.docker.com/engine/extend/plugins/" Interface: description: "The interface between Docker and the plugin" x-nullable: false type: "object" required: [Types, Socket] properties: Types: type: "array" items: $ref: "#/definitions/PluginInterfaceType" example: - "docker.volumedriver/1.0" Socket: type: "string" x-nullable: false example: "plugins.sock" ProtocolScheme: type: "string" example: "some.protocol/v1.0" description: "Protocol to use for clients connecting to the plugin." enum: - "" - "moby.plugins.http/v1" Entrypoint: type: "array" items: type: "string" example: - "/usr/bin/sample-volume-plugin" - "/data" WorkDir: type: "string" x-nullable: false example: "/bin/" User: type: "object" x-nullable: false properties: UID: type: "integer" format: "uint32" example: 1000 GID: type: "integer" format: "uint32" example: 1000 Network: type: "object" x-nullable: false required: [Type] properties: Type: x-nullable: false type: "string" example: "host" Linux: type: "object" x-nullable: false required: [Capabilities, AllowAllDevices, Devices] properties: Capabilities: type: "array" items: type: "string" example: - "CAP_SYS_ADMIN" - "CAP_SYSLOG" AllowAllDevices: type: "boolean" x-nullable: false example: false Devices: type: "array" items: $ref: "#/definitions/PluginDevice" PropagatedMount: type: "string" x-nullable: false example: "/mnt/volumes" IpcHost: type: "boolean" x-nullable: false example: false PidHost: type: "boolean" x-nullable: false example: false Mounts: type: "array" items: $ref: "#/definitions/PluginMount" Env: type: "array" items: $ref: "#/definitions/PluginEnv" example: - Name: "DEBUG" Description: "If set, prints debug messages" Settable: null Value: "0" Args: type: "object" x-nullable: false required: [Name, Description, Settable, Value] properties: Name: x-nullable: false type: "string" example: "args" Description: x-nullable: false type: "string" example: "command line arguments" Settable: type: "array" items: type: "string" Value: type: "array" items: type: "string" rootfs: type: "object" properties: type: type: "string" example: "layers" diff_ids: type: "array" items: type: "string" example: - "sha256:675532206fbf3030b8458f88d6e26d4eb1577688a25efec97154c94e8b6b4887" - "sha256:e216a057b1cb1efc11f8a268f37ef62083e70b1b38323ba252e25ac88904a7e8" ObjectVersion: description: | The version number of the object such as node, service, etc. This is needed to avoid conflicting writes. 
The client must send the version number along with the modified specification when updating these objects. This approach ensures safe concurrency and determinism in that the change on the object may not be applied if the version number has changed from the last read. In other words, if two update requests specify the same base version, only one of the requests can succeed. As a result, two separate update requests that happen at the same time will not unintentionally overwrite each other. type: "object" properties: Index: type: "integer" format: "uint64" example: 373531 NodeSpec: type: "object" properties: Name: description: "Name for the node." type: "string" example: "my-node" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" Role: description: "Role of the node." type: "string" enum: - "worker" - "manager" example: "manager" Availability: description: "Availability of the node." type: "string" enum: - "active" - "pause" - "drain" example: "active" example: Availability: "active" Name: "node-name" Role: "manager" Labels: foo: "bar" Node: type: "object" properties: ID: type: "string" example: "24ifsmvkjbyhk" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: description: | Date and time at which the node was added to the swarm in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2016-08-18T10:44:24.496525531Z" UpdatedAt: description: | Date and time at which the node was last updated in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2017-08-09T07:09:37.632105588Z" Spec: $ref: "#/definitions/NodeSpec" Description: $ref: "#/definitions/NodeDescription" Status: $ref: "#/definitions/NodeStatus" ManagerStatus: $ref: "#/definitions/ManagerStatus" NodeDescription: description: | NodeDescription encapsulates the properties of the Node as reported by the agent. type: "object" properties: Hostname: type: "string" example: "bf3067039e47" Platform: $ref: "#/definitions/Platform" Resources: $ref: "#/definitions/ResourceObject" Engine: $ref: "#/definitions/EngineDescription" TLSInfo: $ref: "#/definitions/TLSInfo" Platform: description: | Platform represents the platform (Arch/OS). type: "object" properties: Architecture: description: | Architecture represents the hardware architecture (for example, `x86_64`). type: "string" example: "x86_64" OS: description: | OS represents the Operating System (for example, `linux` or `windows`). type: "string" example: "linux" EngineDescription: description: "EngineDescription provides information about an engine." 
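`ObjectVersion` drives optimistic concurrency for swarm objects: read the current `Version.Index`, modify the spec, and send the index back as a `version` query parameter. A sketch for a node, assuming the standard `GET /nodes/{id}` and `POST /nodes/{id}/update` endpoints (the node ID is a placeholder):

```go
package main

import (
	"bytes"
	"context"
	"encoding/json"
	"fmt"
	"net"
	"net/http"
)

func main() {
	cli := &http.Client{Transport: &http.Transport{
		DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
			return (&net.Dialer{}).DialContext(ctx, "unix", "/var/run/docker.sock")
		},
	}}

	const nodeID = "24ifsmvkjbyhk" // placeholder node ID

	// Read the current ObjectVersion.Index and NodeSpec.
	resp, err := cli.Get("http://localhost/v1.41/nodes/" + nodeID)
	if err != nil {
		panic(err)
	}
	var node struct {
		Version struct{ Index uint64 }
		Spec    map[string]interface{}
	}
	_ = json.NewDecoder(resp.Body).Decode(&node)
	resp.Body.Close()
	if node.Spec == nil {
		node.Spec = map[string]interface{}{}
	}

	// Modify the NodeSpec and send it back together with the version we read.
	node.Spec["Availability"] = "drain"
	body, _ := json.Marshal(node.Spec)
	url := fmt.Sprintf("http://localhost/v1.41/nodes/%s/update?version=%d", nodeID, node.Version.Index)
	resp, err = cli.Post(url, "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}
```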
type: "object" properties: EngineVersion: type: "string" example: "17.06.0" Labels: type: "object" additionalProperties: type: "string" example: foo: "bar" Plugins: type: "array" items: type: "object" properties: Type: type: "string" Name: type: "string" example: - Type: "Log" Name: "awslogs" - Type: "Log" Name: "fluentd" - Type: "Log" Name: "gcplogs" - Type: "Log" Name: "gelf" - Type: "Log" Name: "journald" - Type: "Log" Name: "json-file" - Type: "Log" Name: "logentries" - Type: "Log" Name: "splunk" - Type: "Log" Name: "syslog" - Type: "Network" Name: "bridge" - Type: "Network" Name: "host" - Type: "Network" Name: "ipvlan" - Type: "Network" Name: "macvlan" - Type: "Network" Name: "null" - Type: "Network" Name: "overlay" - Type: "Volume" Name: "local" - Type: "Volume" Name: "localhost:5000/vieux/sshfs:latest" - Type: "Volume" Name: "vieux/sshfs:latest" TLSInfo: description: | Information about the issuer of leaf TLS certificates and the trusted root CA certificate. type: "object" properties: TrustRoot: description: | The root CA certificate(s) that are used to validate leaf TLS certificates. type: "string" CertIssuerSubject: description: The base64-url-safe-encoded raw subject bytes of the issuer. type: "string" CertIssuerPublicKey: description: | The base64-url-safe-encoded raw public key bytes of the issuer. type: "string" example: TrustRoot: | -----BEGIN CERTIFICATE----- MIIBajCCARCgAwIBAgIUbYqrLSOSQHoxD8CwG6Bi2PJi9c8wCgYIKoZIzj0EAwIw EzERMA8GA1UEAxMIc3dhcm0tY2EwHhcNMTcwNDI0MjE0MzAwWhcNMzcwNDE5MjE0 MzAwWjATMREwDwYDVQQDEwhzd2FybS1jYTBZMBMGByqGSM49AgEGCCqGSM49AwEH A0IABJk/VyMPYdaqDXJb/VXh5n/1Yuv7iNrxV3Qb3l06XD46seovcDWs3IZNV1lf 3Skyr0ofcchipoiHkXBODojJydSjQjBAMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMB Af8EBTADAQH/MB0GA1UdDgQWBBRUXxuRcnFjDfR/RIAUQab8ZV/n4jAKBggqhkjO PQQDAgNIADBFAiAy+JTe6Uc3KyLCMiqGl2GyWGQqQDEcO3/YG36x7om65AIhAJvz pxv6zFeVEkAEEkqIYi0omA9+CjanB/6Bz4n1uw8H -----END CERTIFICATE----- CertIssuerSubject: "MBMxETAPBgNVBAMTCHN3YXJtLWNh" CertIssuerPublicKey: "MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEmT9XIw9h1qoNclv9VeHmf/Vi6/uI2vFXdBveXTpcPjqx6i9wNazchk1XWV/dKTKvSh9xyGKmiIeRcE4OiMnJ1A==" NodeStatus: description: | NodeStatus represents the status of a node. It provides the current status of the node, as seen by the manager. type: "object" properties: State: $ref: "#/definitions/NodeState" Message: type: "string" example: "" Addr: description: "IP address of the node." type: "string" example: "172.17.0.2" NodeState: description: "NodeState represents the state of a node." type: "string" enum: - "unknown" - "down" - "ready" - "disconnected" example: "ready" ManagerStatus: description: | ManagerStatus represents the status of a manager. It provides the current status of a node's manager component, if the node is a manager. x-nullable: true type: "object" properties: Leader: type: "boolean" default: false example: true Reachability: $ref: "#/definitions/Reachability" Addr: description: | The IP address and port at which the manager is reachable. type: "string" example: "10.0.0.46:2377" Reachability: description: "Reachability represents the reachability of a node." type: "string" enum: - "unknown" - "unreachable" - "reachable" example: "reachable" SwarmSpec: description: "User modifiable swarm configuration." type: "object" properties: Name: description: "Name of the swarm." type: "string" example: "default" Labels: description: "User-defined key/value metadata." 
type: "object" additionalProperties: type: "string" example: com.example.corp.type: "production" com.example.corp.department: "engineering" Orchestration: description: "Orchestration configuration." type: "object" x-nullable: true properties: TaskHistoryRetentionLimit: description: | The number of historic tasks to keep per instance or node. If negative, never remove completed or failed tasks. type: "integer" format: "int64" example: 10 Raft: description: "Raft configuration." type: "object" properties: SnapshotInterval: description: "The number of log entries between snapshots." type: "integer" format: "uint64" example: 10000 KeepOldSnapshots: description: | The number of snapshots to keep beyond the current snapshot. type: "integer" format: "uint64" LogEntriesForSlowFollowers: description: | The number of log entries to keep around to sync up slow followers after a snapshot is created. type: "integer" format: "uint64" example: 500 ElectionTick: description: | The number of ticks that a follower will wait for a message from the leader before becoming a candidate and starting an election. `ElectionTick` must be greater than `HeartbeatTick`. A tick currently defaults to one second, so these translate directly to seconds currently, but this is NOT guaranteed. type: "integer" example: 3 HeartbeatTick: description: | The number of ticks between heartbeats. Every HeartbeatTick ticks, the leader will send a heartbeat to the followers. A tick currently defaults to one second, so these translate directly to seconds currently, but this is NOT guaranteed. type: "integer" example: 1 Dispatcher: description: "Dispatcher configuration." type: "object" x-nullable: true properties: HeartbeatPeriod: description: | The delay for an agent to send a heartbeat to the dispatcher. type: "integer" format: "int64" example: 5000000000 CAConfig: description: "CA configuration." type: "object" x-nullable: true properties: NodeCertExpiry: description: "The duration node certificates are issued for." type: "integer" format: "int64" example: 7776000000000000 ExternalCAs: description: | Configuration for forwarding signing requests to an external certificate authority. type: "array" items: type: "object" properties: Protocol: description: | Protocol for communication with the external CA (currently only `cfssl` is supported). type: "string" enum: - "cfssl" default: "cfssl" URL: description: | URL where certificate signing requests should be sent. type: "string" Options: description: | An object with key/value pairs that are interpreted as protocol-specific options for the external CA driver. type: "object" additionalProperties: type: "string" CACert: description: | The root CA certificate (in PEM format) this external CA uses to issue TLS certificates (assumed to be to the current swarm root CA certificate if not provided). type: "string" SigningCACert: description: | The desired signing CA certificate for all swarm node TLS leaf certificates, in PEM format. type: "string" SigningCAKey: description: | The desired signing CA key for all swarm node TLS leaf certificates, in PEM format. type: "string" ForceRotate: description: | An integer whose purpose is to force swarm to generate a new signing CA certificate and key, if none have been specified in `SigningCACert` and `SigningCAKey` format: "uint64" type: "integer" EncryptionConfig: description: "Parameters related to encryption-at-rest." type: "object" properties: AutoLockManagers: description: | If set, generate a key and use it to lock data stored on the managers. 
type: "boolean" example: false TaskDefaults: description: "Defaults for creating tasks in this cluster." type: "object" properties: LogDriver: description: | The log driver to use for tasks created in the orchestrator if unspecified by a service. Updating this value only affects new tasks. Existing tasks continue to use their previously configured log driver until recreated. type: "object" properties: Name: description: | The log driver to use as a default for new tasks. type: "string" example: "json-file" Options: description: | Driver-specific options for the selectd log driver, specified as key/value pairs. type: "object" additionalProperties: type: "string" example: "max-file": "10" "max-size": "100m" # The Swarm information for `GET /info`. It is the same as `GET /swarm`, but # without `JoinTokens`. ClusterInfo: description: | ClusterInfo represents information about the swarm as is returned by the "/info" endpoint. Join-tokens are not included. x-nullable: true type: "object" properties: ID: description: "The ID of the swarm." type: "string" example: "abajmipo7b4xz5ip2nrla6b11" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: description: | Date and time at which the swarm was initialised in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2016-08-18T10:44:24.496525531Z" UpdatedAt: description: | Date and time at which the swarm was last updated in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2017-08-09T07:09:37.632105588Z" Spec: $ref: "#/definitions/SwarmSpec" TLSInfo: $ref: "#/definitions/TLSInfo" RootRotationInProgress: description: | Whether there is currently a root CA rotation in progress for the swarm type: "boolean" example: false DataPathPort: description: | DataPathPort specifies the data path port number for data traffic. Acceptable port range is 1024 to 49151. If no port is set or is set to 0, the default port (4789) is used. type: "integer" format: "uint32" default: 4789 example: 4789 DefaultAddrPool: description: | Default Address Pool specifies default subnet pools for global scope networks. type: "array" items: type: "string" format: "CIDR" example: ["10.10.0.0/16", "20.20.0.0/16"] SubnetSize: description: | SubnetSize specifies the subnet size of the networks created from the default subnet pool. type: "integer" format: "uint32" maximum: 29 default: 24 example: 24 JoinTokens: description: | JoinTokens contains the tokens workers and managers need to join the swarm. type: "object" properties: Worker: description: | The token workers can use to join the swarm. type: "string" example: "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-1awxwuwd3z9j1z3puu7rcgdbx" Manager: description: | The token managers can use to join the swarm. type: "string" example: "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-7p73s1dx5in4tatdymyhg9hu2" Swarm: type: "object" allOf: - $ref: "#/definitions/ClusterInfo" - type: "object" properties: JoinTokens: $ref: "#/definitions/JoinTokens" TaskSpec: description: "User modifiable task configuration." type: "object" properties: PluginSpec: type: "object" description: | Plugin spec for the service. *(Experimental release only.)* <p><br /></p> > **Note**: ContainerSpec, NetworkAttachmentSpec, and PluginSpec are > mutually exclusive. PluginSpec is only used when the Runtime field > is set to `plugin`. NetworkAttachmentSpec is used when the Runtime > field is set to `attachment`. 
properties: Name: description: "The name or 'alias' to use for the plugin." type: "string" Remote: description: "The plugin image reference to use." type: "string" Disabled: description: "Disable the plugin once scheduled." type: "boolean" PluginPrivilege: type: "array" items: $ref: "#/definitions/PluginPrivilegeItem" ContainerSpec: type: "object" description: | Container spec for the service. <p><br /></p> > **Note**: ContainerSpec, NetworkAttachmentSpec, and PluginSpec are > mutually exclusive. PluginSpec is only used when the Runtime field > is set to `plugin`. NetworkAttachmentSpec is used when the Runtime > field is set to `attachment`. properties: Image: description: "The image name to use for the container" type: "string" Labels: description: "User-defined key/value data." type: "object" additionalProperties: type: "string" Command: description: "The command to be run in the image." type: "array" items: type: "string" Args: description: "Arguments to the command." type: "array" items: type: "string" Hostname: description: | The hostname to use for the container, as a valid [RFC 1123](https://tools.ietf.org/html/rfc1123) hostname. type: "string" Env: description: | A list of environment variables in the form `VAR=value`. type: "array" items: type: "string" Dir: description: "The working directory for commands to run in." type: "string" User: description: "The user inside the container." type: "string" Groups: type: "array" description: | A list of additional groups that the container process will run as. items: type: "string" Privileges: type: "object" description: "Security options for the container" properties: CredentialSpec: type: "object" description: "CredentialSpec for managed service account (Windows only)" properties: Config: type: "string" example: "0bt9dmxjvjiqermk6xrop3ekq" description: | Load credential spec from a Swarm Config with the given ID. The specified config must also be present in the Configs field with the Runtime property set. <p><br /></p> > **Note**: `CredentialSpec.File`, `CredentialSpec.Registry`, > and `CredentialSpec.Config` are mutually exclusive. File: type: "string" example: "spec.json" description: | Load credential spec from this file. The file is read by the daemon, and must be present in the `CredentialSpecs` subdirectory in the docker data directory, which defaults to `C:\ProgramData\Docker\` on Windows. For example, specifying `spec.json` loads `C:\ProgramData\Docker\CredentialSpecs\spec.json`. <p><br /></p> > **Note**: `CredentialSpec.File`, `CredentialSpec.Registry`, > and `CredentialSpec.Config` are mutually exclusive. Registry: type: "string" description: | Load credential spec from this value in the Windows registry. The specified registry value must be located in: `HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization\Containers\CredentialSpecs` <p><br /></p> > **Note**: `CredentialSpec.File`, `CredentialSpec.Registry`, > and `CredentialSpec.Config` are mutually exclusive. SELinuxContext: type: "object" description: "SELinux labels of the container" properties: Disable: type: "boolean" description: "Disable SELinux" User: type: "string" description: "SELinux user label" Role: type: "string" description: "SELinux role label" Type: type: "string" description: "SELinux type label" Level: type: "string" description: "SELinux level label" TTY: description: "Whether a pseudo-TTY should be allocated." 
type: "boolean" OpenStdin: description: "Open `stdin`" type: "boolean" ReadOnly: description: "Mount the container's root filesystem as read only." type: "boolean" Mounts: description: | Specification for mounts to be added to containers created as part of the service. type: "array" items: $ref: "#/definitions/Mount" StopSignal: description: "Signal to stop the container." type: "string" StopGracePeriod: description: | Amount of time to wait for the container to terminate before forcefully killing it. type: "integer" format: "int64" HealthCheck: $ref: "#/definitions/HealthConfig" Hosts: type: "array" description: | A list of hostname/IP mappings to add to the container's `hosts` file. The format of extra hosts is specified in the [hosts(5)](http://man7.org/linux/man-pages/man5/hosts.5.html) man page: IP_address canonical_hostname [aliases...] items: type: "string" DNSConfig: description: | Specification for DNS related configurations in resolver configuration file (`resolv.conf`). type: "object" properties: Nameservers: description: "The IP addresses of the name servers." type: "array" items: type: "string" Search: description: "A search list for host-name lookup." type: "array" items: type: "string" Options: description: | A list of internal resolver variables to be modified (e.g., `debug`, `ndots:3`, etc.). type: "array" items: type: "string" Secrets: description: | Secrets contains references to zero or more secrets that will be exposed to the service. type: "array" items: type: "object" properties: File: description: | File represents a specific target that is backed by a file. type: "object" properties: Name: description: | Name represents the final filename in the filesystem. type: "string" UID: description: "UID represents the file UID." type: "string" GID: description: "GID represents the file GID." type: "string" Mode: description: "Mode represents the FileMode of the file." type: "integer" format: "uint32" SecretID: description: | SecretID represents the ID of the specific secret that we're referencing. type: "string" SecretName: description: | SecretName is the name of the secret that this references, but this is just provided for lookup/display purposes. The secret in the reference will be identified by its ID. type: "string" Configs: description: | Configs contains references to zero or more configs that will be exposed to the service. type: "array" items: type: "object" properties: File: description: | File represents a specific target that is backed by a file. <p><br /><p> > **Note**: `Configs.File` and `Configs.Runtime` are mutually exclusive type: "object" properties: Name: description: | Name represents the final filename in the filesystem. type: "string" UID: description: "UID represents the file UID." type: "string" GID: description: "GID represents the file GID." type: "string" Mode: description: "Mode represents the FileMode of the file." type: "integer" format: "uint32" Runtime: description: | Runtime represents a target that is not mounted into the container but is used by the task <p><br /><p> > **Note**: `Configs.File` and `Configs.Runtime` are mutually > exclusive type: "object" ConfigID: description: | ConfigID represents the ID of the specific config that we're referencing. type: "string" ConfigName: description: | ConfigName is the name of the config that this references, but this is just provided for lookup/display purposes. The config in the reference will be identified by its ID. 
type: "string" Isolation: type: "string" description: | Isolation technology of the containers running the service. (Windows only) enum: - "default" - "process" - "hyperv" Init: description: | Run an init inside the container that forwards signals and reaps processes. This field is omitted if empty, and the default (as configured on the daemon) is used. type: "boolean" x-nullable: true Sysctls: description: | Set kernel namedspaced parameters (sysctls) in the container. The Sysctls option on services accepts the same sysctls as the are supported on containers. Note that while the same sysctls are supported, no guarantees or checks are made about their suitability for a clustered environment, and it's up to the user to determine whether a given sysctl will work properly in a Service. type: "object" additionalProperties: type: "string" # This option is not used by Windows containers CapabilityAdd: type: "array" description: | A list of kernel capabilities to add to the default set for the container. items: type: "string" example: - "CAP_NET_RAW" - "CAP_SYS_ADMIN" - "CAP_SYS_CHROOT" - "CAP_SYSLOG" CapabilityDrop: type: "array" description: | A list of kernel capabilities to drop from the default set for the container. items: type: "string" example: - "CAP_NET_RAW" Ulimits: description: | A list of resource limits to set in the container. For example: `{"Name": "nofile", "Soft": 1024, "Hard": 2048}`" type: "array" items: type: "object" properties: Name: description: "Name of ulimit" type: "string" Soft: description: "Soft limit" type: "integer" Hard: description: "Hard limit" type: "integer" NetworkAttachmentSpec: description: | Read-only spec type for non-swarm containers attached to swarm overlay networks. <p><br /></p> > **Note**: ContainerSpec, NetworkAttachmentSpec, and PluginSpec are > mutually exclusive. PluginSpec is only used when the Runtime field > is set to `plugin`. NetworkAttachmentSpec is used when the Runtime > field is set to `attachment`. type: "object" properties: ContainerID: description: "ID of the container represented by this task" type: "string" Resources: description: | Resource requirements which apply to each individual container created as part of the service. type: "object" properties: Limits: description: "Define resources limits." $ref: "#/definitions/Limit" Reservation: description: "Define resources reservation." $ref: "#/definitions/ResourceObject" RestartPolicy: description: | Specification for the restart policy which applies to containers created as part of this service. type: "object" properties: Condition: description: "Condition for restart." type: "string" enum: - "none" - "on-failure" - "any" Delay: description: "Delay between restart attempts." type: "integer" format: "int64" MaxAttempts: description: | Maximum attempts to restart a given container before giving up (default value is 0, which is ignored). type: "integer" format: "int64" default: 0 Window: description: | Windows is the time window used to evaluate the restart policy (default value is 0, which is unbounded). type: "integer" format: "int64" default: 0 Placement: type: "object" properties: Constraints: description: | An array of constraint expressions to limit the set of nodes where a task can be scheduled. Constraint expressions can either use a _match_ (`==`) or _exclude_ (`!=`) rule. Multiple constraints find nodes that satisfy every expression (AND match). 
Constraints can match node or Docker Engine labels as follows: node attribute | matches | example ---------------------|--------------------------------|----------------------------------------------- `node.id` | Node ID | `node.id==2ivku8v2gvtg4` `node.hostname` | Node hostname | `node.hostname!=node-2` `node.role` | Node role (`manager`/`worker`) | `node.role==manager` `node.platform.os` | Node operating system | `node.platform.os==windows` `node.platform.arch` | Node architecture | `node.platform.arch==x86_64` `node.labels` | User-defined node labels | `node.labels.security==high` `engine.labels` | Docker Engine's labels | `engine.labels.operatingsystem==ubuntu-14.04` `engine.labels` apply to Docker Engine labels like operating system, drivers, etc. Swarm administrators add `node.labels` for operational purposes by using the [`node update endpoint`](#operation/NodeUpdate). type: "array" items: type: "string" example: - "node.hostname!=node3.corp.example.com" - "node.role!=manager" - "node.labels.type==production" - "node.platform.os==linux" - "node.platform.arch==x86_64" Preferences: description: | Preferences provide a way to make the scheduler aware of factors such as topology. They are provided in order from highest to lowest precedence. type: "array" items: type: "object" properties: Spread: type: "object" properties: SpreadDescriptor: description: | label descriptor, such as `engine.labels.az`. type: "string" example: - Spread: SpreadDescriptor: "node.labels.datacenter" - Spread: SpreadDescriptor: "node.labels.rack" MaxReplicas: description: | Maximum number of replicas for per node (default value is 0, which is unlimited) type: "integer" format: "int64" default: 0 Platforms: description: | Platforms stores all the platforms that the service's image can run on. This field is used in the platform filter for scheduling. If empty, then the platform filter is off, meaning there are no scheduling restrictions. type: "array" items: $ref: "#/definitions/Platform" ForceUpdate: description: | A counter that triggers an update even if no relevant parameters have been changed. type: "integer" Runtime: description: | Runtime is the type of runtime specified for the task executor. type: "string" Networks: description: "Specifies which networks the service should attach to." type: "array" items: $ref: "#/definitions/NetworkAttachmentConfig" LogDriver: description: | Specifies the log driver to use for tasks created from this spec. If not present, the default one for the swarm will be used, finally falling back to the engine default if not specified. type: "object" properties: Name: type: "string" Options: type: "object" additionalProperties: type: "string" TaskState: type: "string" enum: - "new" - "allocated" - "pending" - "assigned" - "accepted" - "preparing" - "ready" - "starting" - "running" - "complete" - "shutdown" - "failed" - "rejected" - "remove" - "orphaned" Task: type: "object" properties: ID: description: "The ID of the task." type: "string" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" UpdatedAt: type: "string" format: "dateTime" Name: description: "Name of the task." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" Spec: $ref: "#/definitions/TaskSpec" ServiceID: description: "The ID of the service this task is part of." type: "string" Slot: type: "integer" NodeID: description: "The ID of the node that this task is on." 
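A `TaskSpec` is normally submitted as the `TaskTemplate` of a `ServiceSpec` (defined just below) when creating a service. The sketch assumes the standard `POST /services/create` endpoint; the image, constraint expressions, port numbers and the `MemoryBytes` field of the `Limit` object (defined earlier in this file) are illustrative:

```go
package main

import (
	"bytes"
	"context"
	"encoding/json"
	"fmt"
	"net"
	"net/http"
)

func main() {
	cli := &http.Client{Transport: &http.Transport{
		DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
			return (&net.Dialer{}).DialContext(ctx, "unix", "/var/run/docker.sock")
		},
	}}

	// ServiceSpec with a TaskSpec as TaskTemplate, mirroring the definitions
	// in this section (values are illustrative).
	body, _ := json.Marshal(map[string]interface{}{
		"Name": "web",
		"TaskTemplate": map[string]interface{}{
			"ContainerSpec": map[string]interface{}{
				"Image": "nginx:alpine",
				"Env":   []string{"APP_MODE=production"},
			},
			"Resources": map[string]interface{}{
				"Limits": map[string]interface{}{"MemoryBytes": 268435456},
			},
			"RestartPolicy": map[string]interface{}{"Condition": "on-failure", "MaxAttempts": 3},
			"Placement": map[string]interface{}{
				"Constraints": []string{"node.role==worker", "node.labels.type==production"},
			},
		},
		"Mode": map[string]interface{}{
			"Replicated": map[string]interface{}{"Replicas": 3},
		},
		"EndpointSpec": map[string]interface{}{
			"Ports": []map[string]interface{}{
				{"Protocol": "tcp", "TargetPort": 80, "PublishedPort": 8080},
			},
		},
	})

	resp, err := cli.Post("http://localhost/v1.41/services/create",
		"application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var out struct{ ID string }
	_ = json.NewDecoder(resp.Body).Decode(&out)
	fmt.Println(resp.Status, out.ID)
}
```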
type: "string" AssignedGenericResources: $ref: "#/definitions/GenericResources" Status: type: "object" properties: Timestamp: type: "string" format: "dateTime" State: $ref: "#/definitions/TaskState" Message: type: "string" Err: type: "string" ContainerStatus: type: "object" properties: ContainerID: type: "string" PID: type: "integer" ExitCode: type: "integer" DesiredState: $ref: "#/definitions/TaskState" JobIteration: description: | If the Service this Task belongs to is a job-mode service, contains the JobIteration of the Service this Task was created for. Absent if the Task was created for a Replicated or Global Service. $ref: "#/definitions/ObjectVersion" example: ID: "0kzzo1i0y4jz6027t0k7aezc7" Version: Index: 71 CreatedAt: "2016-06-07T21:07:31.171892745Z" UpdatedAt: "2016-06-07T21:07:31.376370513Z" Spec: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ServiceID: "9mnpnzenvg8p8tdbtq4wvbkcz" Slot: 1 NodeID: "60gvrl6tm78dmak4yl7srz94v" Status: Timestamp: "2016-06-07T21:07:31.290032978Z" State: "running" Message: "started" ContainerStatus: ContainerID: "e5d62702a1b48d01c3e02ca1e0212a250801fa8d67caca0b6f35919ebc12f035" PID: 677 DesiredState: "running" NetworksAttachments: - Network: ID: "4qvuz4ko70xaltuqbt8956gd1" Version: Index: 18 CreatedAt: "2016-06-07T20:31:11.912919752Z" UpdatedAt: "2016-06-07T21:07:29.955277358Z" Spec: Name: "ingress" Labels: com.docker.swarm.internal: "true" DriverConfiguration: {} IPAMOptions: Driver: {} Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" DriverState: Name: "overlay" Options: com.docker.network.driver.overlay.vxlanid_list: "256" IPAMOptions: Driver: Name: "default" Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" Addresses: - "10.255.0.10/16" AssignedGenericResources: - DiscreteResourceSpec: Kind: "SSD" Value: 3 - NamedResourceSpec: Kind: "GPU" Value: "UUID1" - NamedResourceSpec: Kind: "GPU" Value: "UUID2" ServiceSpec: description: "User modifiable configuration for a service." properties: Name: description: "Name of the service." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" TaskTemplate: $ref: "#/definitions/TaskSpec" Mode: description: "Scheduling mode for the service." type: "object" properties: Replicated: type: "object" properties: Replicas: type: "integer" format: "int64" Global: type: "object" ReplicatedJob: description: | The mode used for services with a finite number of tasks that run to a completed state. type: "object" properties: MaxConcurrent: description: | The maximum number of replicas to run simultaneously. type: "integer" format: "int64" default: 1 TotalCompletions: description: | The total number of replicas desired to reach the Completed state. If unset, will default to the value of `MaxConcurrent` type: "integer" format: "int64" GlobalJob: description: | The mode used for services which run a task to the completed state on each valid node. type: "object" UpdateConfig: description: "Specification for the update strategy of the service." type: "object" properties: Parallelism: description: | Maximum number of tasks to be updated in one iteration (0 means unlimited parallelism). type: "integer" format: "int64" Delay: description: "Amount of time between updates, in nanoseconds." type: "integer" format: "int64" FailureAction: description: | Action to take if an updated task fails to run, or stops running during the update. 
type: "string" enum: - "continue" - "pause" - "rollback" Monitor: description: | Amount of time to monitor each updated task for failures, in nanoseconds. type: "integer" format: "int64" MaxFailureRatio: description: | The fraction of tasks that may fail during an update before the failure action is invoked, specified as a floating point number between 0 and 1. type: "number" default: 0 Order: description: | The order of operations when rolling out an updated task. Either the old task is shut down before the new task is started, or the new task is started before the old task is shut down. type: "string" enum: - "stop-first" - "start-first" RollbackConfig: description: "Specification for the rollback strategy of the service." type: "object" properties: Parallelism: description: | Maximum number of tasks to be rolled back in one iteration (0 means unlimited parallelism). type: "integer" format: "int64" Delay: description: | Amount of time between rollback iterations, in nanoseconds. type: "integer" format: "int64" FailureAction: description: | Action to take if an rolled back task fails to run, or stops running during the rollback. type: "string" enum: - "continue" - "pause" Monitor: description: | Amount of time to monitor each rolled back task for failures, in nanoseconds. type: "integer" format: "int64" MaxFailureRatio: description: | The fraction of tasks that may fail during a rollback before the failure action is invoked, specified as a floating point number between 0 and 1. type: "number" default: 0 Order: description: | The order of operations when rolling back a task. Either the old task is shut down before the new task is started, or the new task is started before the old task is shut down. type: "string" enum: - "stop-first" - "start-first" Networks: description: "Specifies which networks the service should attach to." type: "array" items: $ref: "#/definitions/NetworkAttachmentConfig" EndpointSpec: $ref: "#/definitions/EndpointSpec" EndpointPortConfig: type: "object" properties: Name: type: "string" Protocol: type: "string" enum: - "tcp" - "udp" - "sctp" TargetPort: description: "The port inside the container." type: "integer" PublishedPort: description: "The port on the swarm hosts." type: "integer" PublishMode: description: | The mode in which port is published. <p><br /></p> - "ingress" makes the target port accessible on every node, regardless of whether there is a task for the service running on that node or not. - "host" bypasses the routing mesh and publish the port directly on the swarm node where that service is running. type: "string" enum: - "ingress" - "host" default: "ingress" example: "ingress" EndpointSpec: description: "Properties that can be configured to access and load balance a service." type: "object" properties: Mode: description: | The mode of resolution to use for internal load balancing between tasks. type: "string" enum: - "vip" - "dnsrr" default: "vip" Ports: description: | List of exposed ports that this service is accessible on from the outside. Ports can only be provided if `vip` resolution mode is used. 
type: "array" items: $ref: "#/definitions/EndpointPortConfig" Service: type: "object" properties: ID: type: "string" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" UpdatedAt: type: "string" format: "dateTime" Spec: $ref: "#/definitions/ServiceSpec" Endpoint: type: "object" properties: Spec: $ref: "#/definitions/EndpointSpec" Ports: type: "array" items: $ref: "#/definitions/EndpointPortConfig" VirtualIPs: type: "array" items: type: "object" properties: NetworkID: type: "string" Addr: type: "string" UpdateStatus: description: "The status of a service update." type: "object" properties: State: type: "string" enum: - "updating" - "paused" - "completed" StartedAt: type: "string" format: "dateTime" CompletedAt: type: "string" format: "dateTime" Message: type: "string" ServiceStatus: description: | The status of the service's tasks. Provided only when requested as part of a ServiceList operation. type: "object" properties: RunningTasks: description: | The number of tasks for the service currently in the Running state. type: "integer" format: "uint64" example: 7 DesiredTasks: description: | The number of tasks for the service desired to be running. For replicated services, this is the replica count from the service spec. For global services, this is computed by taking count of all tasks for the service with a Desired State other than Shutdown. type: "integer" format: "uint64" example: 10 CompletedTasks: description: | The number of tasks for a job that are in the Completed state. This field must be cross-referenced with the service type, as the value of 0 may mean the service is not in a job mode, or it may mean the job-mode service has no tasks yet Completed. type: "integer" format: "uint64" JobStatus: description: | The status of the service when it is in one of ReplicatedJob or GlobalJob modes. Absent on Replicated and Global mode services. The JobIteration is an ObjectVersion, but unlike the Service's version, does not need to be sent with an update request. type: "object" properties: JobIteration: description: | JobIteration is a value increased each time a Job is executed, successfully or otherwise. "Executed", in this case, means the job as a whole has been started, not that an individual Task has been launched. A job is "Executed" when its ServiceSpec is updated. JobIteration can be used to disambiguate Tasks belonging to different executions of a job. Though JobIteration will increase with each subsequent execution, it may not necessarily increase by 1, and so JobIteration should not be used to $ref: "#/definitions/ObjectVersion" LastExecution: description: | The last time, as observed by the server, that this job was started. 
type: "string" format: "dateTime" example: ID: "9mnpnzenvg8p8tdbtq4wvbkcz" Version: Index: 19 CreatedAt: "2016-06-07T21:05:51.880065305Z" UpdatedAt: "2016-06-07T21:07:29.962229872Z" Spec: Name: "hopeful_cori" TaskTemplate: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ForceUpdate: 0 Mode: Replicated: Replicas: 1 UpdateConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 RollbackConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 EndpointSpec: Mode: "vip" Ports: - Protocol: "tcp" TargetPort: 6379 PublishedPort: 30001 Endpoint: Spec: Mode: "vip" Ports: - Protocol: "tcp" TargetPort: 6379 PublishedPort: 30001 Ports: - Protocol: "tcp" TargetPort: 6379 PublishedPort: 30001 VirtualIPs: - NetworkID: "4qvuz4ko70xaltuqbt8956gd1" Addr: "10.255.0.2/16" - NetworkID: "4qvuz4ko70xaltuqbt8956gd1" Addr: "10.255.0.3/16" ImageDeleteResponseItem: type: "object" properties: Untagged: description: "The image ID of an image that was untagged" type: "string" Deleted: description: "The image ID of an image that was deleted" type: "string" ServiceUpdateResponse: type: "object" properties: Warnings: description: "Optional warning messages" type: "array" items: type: "string" example: Warning: "unable to pin image doesnotexist:latest to digest: image library/doesnotexist:latest not found" ContainerSummary: type: "object" properties: Id: description: "The ID of this container" type: "string" x-go-name: "ID" Names: description: "The names that this container has been given" type: "array" items: type: "string" Image: description: "The name of the image used when creating this container" type: "string" ImageID: description: "The ID of the image that this container was created from" type: "string" Command: description: "Command to run when starting the container" type: "string" Created: description: "When the container was created" type: "integer" format: "int64" Ports: description: "The ports exposed by this container" type: "array" items: $ref: "#/definitions/Port" SizeRw: description: "The size of files that have been created or changed by this container" type: "integer" format: "int64" SizeRootFs: description: "The total size of all the files in this container" type: "integer" format: "int64" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" State: description: "The state of this container (e.g. `Exited`)" type: "string" Status: description: "Additional human-readable status of this container (e.g. `Exit 0`)" type: "string" HostConfig: type: "object" properties: NetworkMode: type: "string" NetworkSettings: description: "A summary of the container's network settings" type: "object" properties: Networks: type: "object" additionalProperties: $ref: "#/definitions/EndpointSettings" Mounts: type: "array" items: $ref: "#/definitions/Mount" Driver: description: "Driver represents a driver (network, logging, secrets)." type: "object" required: [Name] properties: Name: description: "Name of the driver." type: "string" x-nullable: false example: "some-driver" Options: description: "Key/value map of driver-specific options." type: "object" x-nullable: false additionalProperties: type: "string" example: OptionA: "value for driver-specific option A" OptionB: "value for driver-specific option B" SecretSpec: type: "object" properties: Name: description: "User-defined name of the secret." 
type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" example: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Data: description: | Base64-url-safe-encoded ([RFC 4648](https://tools.ietf.org/html/rfc4648#section-5)) data to store as secret. This field is only used to _create_ a secret, and is not returned by other endpoints. type: "string" example: "" Driver: description: | Name of the secrets driver used to fetch the secret's value from an external secret store. $ref: "#/definitions/Driver" Templating: description: | Templating driver, if applicable Templating controls whether and how to evaluate the config payload as a template. If no driver is set, no templating is used. $ref: "#/definitions/Driver" Secret: type: "object" properties: ID: type: "string" example: "blt1owaxmitz71s9v5zh81zun" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" example: "2017-07-20T13:55:28.678958722Z" UpdatedAt: type: "string" format: "dateTime" example: "2017-07-20T13:55:28.678958722Z" Spec: $ref: "#/definitions/SecretSpec" ConfigSpec: type: "object" properties: Name: description: "User-defined name of the config." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" Data: description: | Base64-url-safe-encoded ([RFC 4648](https://tools.ietf.org/html/rfc4648#section-5)) config data. type: "string" Templating: description: | Templating driver, if applicable Templating controls whether and how to evaluate the config payload as a template. If no driver is set, no templating is used. $ref: "#/definitions/Driver" Config: type: "object" properties: ID: type: "string" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" UpdatedAt: type: "string" format: "dateTime" Spec: $ref: "#/definitions/ConfigSpec" ContainerState: description: | ContainerState stores container's running state. It's part of ContainerJSONBase and will be returned by the "inspect" command. type: "object" properties: Status: description: | String representation of the container state. Can be one of "created", "running", "paused", "restarting", "removing", "exited", or "dead". type: "string" enum: ["created", "running", "paused", "restarting", "removing", "exited", "dead"] example: "running" Running: description: | Whether this container is running. Note that a running container can be _paused_. The `Running` and `Paused` booleans are not mutually exclusive: When pausing a container (on Linux), the freezer cgroup is used to suspend all processes in the container. Freezing the process requires the process to be running. As a result, paused containers are both `Running` _and_ `Paused`. Use the `Status` field instead to determine if a container's state is "running". type: "boolean" example: true Paused: description: "Whether this container is paused." type: "boolean" example: false Restarting: description: "Whether this container is restarting." type: "boolean" example: false OOMKilled: description: | Whether this container has been killed because it ran out of memory. type: "boolean" example: false Dead: type: "boolean" example: false Pid: description: "The process ID of this container" type: "integer" example: 1234 ExitCode: description: "The last exit code of this container" type: "integer" example: 0 Error: type: "string" StartedAt: description: "The time when this container was last started." 
type: "string" example: "2020-01-06T09:06:59.461876391Z" FinishedAt: description: "The time when this container last exited." type: "string" example: "2020-01-06T09:07:59.461876391Z" Health: x-nullable: true $ref: "#/definitions/Health" SystemVersion: type: "object" description: | Response of Engine API: GET "/version" properties: Platform: type: "object" required: [Name] properties: Name: type: "string" Components: type: "array" description: | Information about system components items: type: "object" x-go-name: ComponentVersion required: [Name, Version] properties: Name: description: | Name of the component type: "string" example: "Engine" Version: description: | Version of the component type: "string" x-nullable: false example: "19.03.12" Details: description: | Key/value pairs of strings with additional information about the component. These values are intended for informational purposes only, and their content is not defined, and not part of the API specification. These messages can be printed by the client as information to the user. type: "object" x-nullable: true Version: description: "The version of the daemon" type: "string" example: "19.03.12" ApiVersion: description: | The default (and highest) API version that is supported by the daemon type: "string" example: "1.40" MinAPIVersion: description: | The minimum API version that is supported by the daemon type: "string" example: "1.12" GitCommit: description: | The Git commit of the source code that was used to build the daemon type: "string" example: "48a66213fe" GoVersion: description: | The version Go used to compile the daemon, and the version of the Go runtime in use. type: "string" example: "go1.13.14" Os: description: | The operating system that the daemon is running on ("linux" or "windows") type: "string" example: "linux" Arch: description: | The architecture that the daemon is running on type: "string" example: "amd64" KernelVersion: description: | The kernel version (`uname -r`) that the daemon is running on. This field is omitted when empty. type: "string" example: "4.19.76-linuxkit" Experimental: description: | Indicates if the daemon is started with experimental features enabled. This field is omitted when empty / false. type: "boolean" example: true BuildTime: description: | The date and time that the daemon was compiled. type: "string" example: "2020-06-22T15:49:27.000000000+00:00" SystemInfo: type: "object" properties: ID: description: | Unique identifier of the daemon. <p><br /></p> > **Note**: The format of the ID itself is not part of the API, and > should not be considered stable. type: "string" example: "7TRN:IPZB:QYBB:VPBQ:UMPP:KARE:6ZNR:XE6T:7EWV:PKF4:ZOJD:TPYS" Containers: description: "Total number of containers on the host." type: "integer" example: 14 ContainersRunning: description: | Number of containers with status `"running"`. type: "integer" example: 3 ContainersPaused: description: | Number of containers with status `"paused"`. type: "integer" example: 1 ContainersStopped: description: | Number of containers with status `"stopped"`. type: "integer" example: 10 Images: description: | Total number of images on the host. Both _tagged_ and _untagged_ (dangling) images are counted. type: "integer" example: 508 Driver: description: "Name of the storage driver in use." type: "string" example: "overlay2" DriverStatus: description: | Information specific to the storage driver, provided as "label" / "value" pairs. 
This information is provided by the storage driver, and formatted in a way consistent with the output of `docker info` on the command line. <p><br /></p> > **Note**: The information returned in this field, including the > formatting of values and labels, should not be considered stable, > and may change without notice. type: "array" items: type: "array" items: type: "string" example: - ["Backing Filesystem", "extfs"] - ["Supports d_type", "true"] - ["Native Overlay Diff", "true"] DockerRootDir: description: | Root directory of persistent Docker state. Defaults to `/var/lib/docker` on Linux, and `C:\ProgramData\docker` on Windows. type: "string" example: "/var/lib/docker" Plugins: $ref: "#/definitions/PluginsInfo" MemoryLimit: description: "Indicates if the host has memory limit support enabled." type: "boolean" example: true SwapLimit: description: "Indicates if the host has memory swap limit support enabled." type: "boolean" example: true KernelMemory: description: | Indicates if the host has kernel memory limit support enabled. <p><br /></p> > **Deprecated**: This field is deprecated as the kernel 5.4 deprecated > `kmem.limit_in_bytes`. type: "boolean" example: true CpuCfsPeriod: description: | Indicates if CPU CFS(Completely Fair Scheduler) period is supported by the host. type: "boolean" example: true CpuCfsQuota: description: | Indicates if CPU CFS(Completely Fair Scheduler) quota is supported by the host. type: "boolean" example: true CPUShares: description: | Indicates if CPU Shares limiting is supported by the host. type: "boolean" example: true CPUSet: description: | Indicates if CPUsets (cpuset.cpus, cpuset.mems) are supported by the host. See [cpuset(7)](https://www.kernel.org/doc/Documentation/cgroup-v1/cpusets.txt) type: "boolean" example: true PidsLimit: description: "Indicates if the host kernel has PID limit support enabled." type: "boolean" example: true OomKillDisable: description: "Indicates if OOM killer disable is supported on the host." type: "boolean" IPv4Forwarding: description: "Indicates IPv4 forwarding is enabled." type: "boolean" example: true BridgeNfIptables: description: "Indicates if `bridge-nf-call-iptables` is available on the host." type: "boolean" example: true BridgeNfIp6tables: description: "Indicates if `bridge-nf-call-ip6tables` is available on the host." type: "boolean" example: true Debug: description: | Indicates if the daemon is running in debug-mode / with debug-level logging enabled. type: "boolean" example: true NFd: description: | The total number of file Descriptors in use by the daemon process. This information is only returned if debug-mode is enabled. type: "integer" example: 64 NGoroutines: description: | The number of goroutines that currently exist. This information is only returned if debug-mode is enabled. type: "integer" example: 174 SystemTime: description: | Current system-time in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" example: "2017-08-08T20:28:29.06202363Z" LoggingDriver: description: | The logging driver to use as a default for new containers. type: "string" CgroupDriver: description: | The driver to use for managing cgroups. type: "string" enum: ["cgroupfs", "systemd", "none"] default: "cgroupfs" example: "cgroupfs" CgroupVersion: description: | The version of the cgroup. type: "string" enum: ["1", "2"] default: "1" example: "1" NEventsListener: description: "Number of event listeners subscribed." 
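# Most of the SystemInfo and SystemVersion fields documented here can be read with
# a single call each. A sketch assuming the Go SDK (struct names may differ by
# version; newer SDKs return system.Info rather than types.Info). Assumes ctx,
# cli, and imports of "fmt" and "log".
#
#   info, err := cli.Info(ctx)
#   if err != nil {
#       log.Fatal(err)
#   }
#   fmt.Printf("driver=%s cgroup-driver=%s cgroup-v%s containers=%d\n",
#       info.Driver, info.CgroupDriver, info.CgroupVersion, info.Containers)
#
#   ver, err := cli.ServerVersion(ctx)
#   if err != nil {
#       log.Fatal(err)
#   }
#   fmt.Printf("engine %s (API %s, %s/%s)\n", ver.Version, ver.APIVersion, ver.Os, ver.Arch)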
type: "integer" example: 30 KernelVersion: description: | Kernel version of the host. On Linux, this information obtained from `uname`. On Windows this information is queried from the <kbd>HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\</kbd> registry value, for example _"10.0 14393 (14393.1198.amd64fre.rs1_release_sec.170427-1353)"_. type: "string" example: "4.9.38-moby" OperatingSystem: description: | Name of the host's operating system, for example: "Ubuntu 16.04.2 LTS" or "Windows Server 2016 Datacenter" type: "string" example: "Alpine Linux v3.5" OSVersion: description: | Version of the host's operating system <p><br /></p> > **Note**: The information returned in this field, including its > very existence, and the formatting of values, should not be considered > stable, and may change without notice. type: "string" example: "16.04" OSType: description: | Generic type of the operating system of the host, as returned by the Go runtime (`GOOS`). Currently returned values are "linux" and "windows". A full list of possible values can be found in the [Go documentation](https://golang.org/doc/install/source#environment). type: "string" example: "linux" Architecture: description: | Hardware architecture of the host, as returned by the Go runtime (`GOARCH`). A full list of possible values can be found in the [Go documentation](https://golang.org/doc/install/source#environment). type: "string" example: "x86_64" NCPU: description: | The number of logical CPUs usable by the daemon. The number of available CPUs is checked by querying the operating system when the daemon starts. Changes to operating system CPU allocation after the daemon is started are not reflected. type: "integer" example: 4 MemTotal: description: | Total amount of physical memory available on the host, in bytes. type: "integer" format: "int64" example: 2095882240 IndexServerAddress: description: | Address / URL of the index server that is used for image search, and as a default for user authentication for Docker Hub and Docker Cloud. default: "https://index.docker.io/v1/" type: "string" example: "https://index.docker.io/v1/" RegistryConfig: $ref: "#/definitions/RegistryServiceConfig" GenericResources: $ref: "#/definitions/GenericResources" HttpProxy: description: | HTTP-proxy configured for the daemon. This value is obtained from the [`HTTP_PROXY`](https://www.gnu.org/software/wget/manual/html_node/Proxies.html) environment variable. Credentials ([user info component](https://tools.ietf.org/html/rfc3986#section-3.2.1)) in the proxy URL are masked in the API response. Containers do not automatically inherit this configuration. type: "string" example: "http://xxxxx:[email protected]:8080" HttpsProxy: description: | HTTPS-proxy configured for the daemon. This value is obtained from the [`HTTPS_PROXY`](https://www.gnu.org/software/wget/manual/html_node/Proxies.html) environment variable. Credentials ([user info component](https://tools.ietf.org/html/rfc3986#section-3.2.1)) in the proxy URL are masked in the API response. Containers do not automatically inherit this configuration. type: "string" example: "https://xxxxx:[email protected]:4443" NoProxy: description: | Comma-separated list of domain extensions for which no proxy should be used. This value is obtained from the [`NO_PROXY`](https://www.gnu.org/software/wget/manual/html_node/Proxies.html) environment variable. Containers do not automatically inherit this configuration. 
type: "string" example: "*.local, 169.254/16" Name: description: "Hostname of the host." type: "string" example: "node5.corp.example.com" Labels: description: | User-defined labels (key/value metadata) as set on the daemon. <p><br /></p> > **Note**: When part of a Swarm, nodes can both have _daemon_ labels, > set through the daemon configuration, and _node_ labels, set from a > manager node in the Swarm. Node labels are not included in this > field. Node labels can be retrieved using the `/nodes/(id)` endpoint > on a manager node in the Swarm. type: "array" items: type: "string" example: ["storage=ssd", "production"] ExperimentalBuild: description: | Indicates if experimental features are enabled on the daemon. type: "boolean" example: true ServerVersion: description: | Version string of the daemon. > **Note**: the [standalone Swarm API](https://docs.docker.com/swarm/swarm-api/) > returns the Swarm version instead of the daemon version, for example > `swarm/1.2.8`. type: "string" example: "17.06.0-ce" ClusterStore: description: | URL of the distributed storage backend. The storage backend is used for multihost networking (to store network and endpoint information) and by the node discovery mechanism. <p><br /></p> > **Deprecated**: This field is only propagated when using standalone Swarm > mode, and overlay networking using an external k/v store. Overlay > networks with Swarm mode enabled use the built-in raft store, and > this field will be empty. type: "string" example: "consul://consul.corp.example.com:8600/some/path" ClusterAdvertise: description: | The network endpoint that the Engine advertises for the purpose of node discovery. ClusterAdvertise is a `host:port` combination on which the daemon is reachable by other hosts. <p><br /></p> > **Deprecated**: This field is only propagated when using standalone Swarm > mode, and overlay networking using an external k/v store. Overlay > networks with Swarm mode enabled use the built-in raft store, and > this field will be empty. type: "string" example: "node5.corp.example.com:8000" Runtimes: description: | List of [OCI compliant](https://github.com/opencontainers/runtime-spec) runtimes configured on the daemon. Keys hold the "name" used to reference the runtime. The Docker daemon relies on an OCI compliant runtime (invoked via the `containerd` daemon) as its interface to the Linux kernel namespaces, cgroups, and SELinux. The default runtime is `runc`, and automatically configured. Additional runtimes can be configured by the user and will be listed here. type: "object" additionalProperties: $ref: "#/definitions/Runtime" default: runc: path: "runc" example: runc: path: "runc" runc-master: path: "/go/bin/runc" custom: path: "/usr/local/bin/my-oci-runtime" runtimeArgs: ["--debug", "--systemd-cgroup=false"] DefaultRuntime: description: | Name of the default OCI runtime that is used when starting containers. The default can be overridden per-container at create time. type: "string" default: "runc" example: "runc" Swarm: $ref: "#/definitions/SwarmInfo" LiveRestoreEnabled: description: | Indicates if live restore is enabled. If enabled, containers are kept running when the daemon is shutdown or upon daemon start if running containers are detected. type: "boolean" default: false example: false Isolation: description: | Represents the isolation technology to use as a default for containers. The supported values are platform-specific. 
If no isolation value is specified on daemon start, on Windows client, the default is `hyperv`, and on Windows server, the default is `process`. This option is currently not used on other platforms. default: "default" type: "string" enum: - "default" - "hyperv" - "process" InitBinary: description: | Name and, optional, path of the `docker-init` binary. If the path is omitted, the daemon searches the host's `$PATH` for the binary and uses the first result. type: "string" example: "docker-init" ContainerdCommit: $ref: "#/definitions/Commit" RuncCommit: $ref: "#/definitions/Commit" InitCommit: $ref: "#/definitions/Commit" SecurityOptions: description: | List of security features that are enabled on the daemon, such as apparmor, seccomp, SELinux, user-namespaces (userns), and rootless. Additional configuration options for each security feature may be present, and are included as a comma-separated list of key/value pairs. type: "array" items: type: "string" example: - "name=apparmor" - "name=seccomp,profile=default" - "name=selinux" - "name=userns" - "name=rootless" ProductLicense: description: | Reports a summary of the product license on the daemon. If a commercial license has been applied to the daemon, information such as number of nodes, and expiration are included. type: "string" example: "Community Engine" DefaultAddressPools: description: | List of custom default address pools for local networks, which can be specified in the daemon.json file or dockerd option. Example: a Base "10.10.0.0/16" with Size 24 will define the set of 256 10.10.[0-255].0/24 address pools. type: "array" items: type: "object" properties: Base: description: "The network address in CIDR format" type: "string" example: "10.10.0.0/16" Size: description: "The network pool size" type: "integer" example: "24" Warnings: description: | List of warnings / informational messages about missing features, or issues related to the daemon configuration. These messages can be printed by the client as information to the user. type: "array" items: type: "string" example: - "WARNING: No memory limit support" - "WARNING: bridge-nf-call-iptables is disabled" - "WARNING: bridge-nf-call-ip6tables is disabled" # PluginsInfo is a temp struct holding Plugins name # registered with docker daemon. It is used by Info struct PluginsInfo: description: | Available plugins per type. <p><br /></p> > **Note**: Only unmanaged (V1) plugins are included in this list. > V1 plugins are "lazily" loaded, and are not returned in this list > if there is no resource using the plugin. type: "object" properties: Volume: description: "Names of available volume-drivers, and network-driver plugins." type: "array" items: type: "string" example: ["local"] Network: description: "Names of available network-drivers, and network-driver plugins." type: "array" items: type: "string" example: ["bridge", "host", "ipvlan", "macvlan", "null", "overlay"] Authorization: description: "Names of available authorization plugins." type: "array" items: type: "string" example: ["img-authz-plugin", "hbm"] Log: description: "Names of available logging-drivers, and logging-driver plugins." type: "array" items: type: "string" example: ["awslogs", "fluentd", "gcplogs", "gelf", "journald", "json-file", "logentries", "splunk", "syslog"] RegistryServiceConfig: description: | RegistryServiceConfig stores daemon registry services configuration. 
type: "object" x-nullable: true properties: AllowNondistributableArtifactsCIDRs: description: | List of IP ranges to which nondistributable artifacts can be pushed, using the CIDR syntax [RFC 4632](https://tools.ietf.org/html/4632). Some images (for example, Windows base images) contain artifacts whose distribution is restricted by license. When these images are pushed to a registry, restricted artifacts are not included. This configuration override this behavior, and enables the daemon to push nondistributable artifacts to all registries whose resolved IP address is within the subnet described by the CIDR syntax. This option is useful when pushing images containing nondistributable artifacts to a registry on an air-gapped network so hosts on that network can pull the images without connecting to another server. > **Warning**: Nondistributable artifacts typically have restrictions > on how and where they can be distributed and shared. Only use this > feature to push artifacts to private registries and ensure that you > are in compliance with any terms that cover redistributing > nondistributable artifacts. type: "array" items: type: "string" example: ["::1/128", "127.0.0.0/8"] AllowNondistributableArtifactsHostnames: description: | List of registry hostnames to which nondistributable artifacts can be pushed, using the format `<hostname>[:<port>]` or `<IP address>[:<port>]`. Some images (for example, Windows base images) contain artifacts whose distribution is restricted by license. When these images are pushed to a registry, restricted artifacts are not included. This configuration override this behavior for the specified registries. This option is useful when pushing images containing nondistributable artifacts to a registry on an air-gapped network so hosts on that network can pull the images without connecting to another server. > **Warning**: Nondistributable artifacts typically have restrictions > on how and where they can be distributed and shared. Only use this > feature to push artifacts to private registries and ensure that you > are in compliance with any terms that cover redistributing > nondistributable artifacts. type: "array" items: type: "string" example: ["registry.internal.corp.example.com:3000", "[2001:db8:a0b:12f0::1]:443"] InsecureRegistryCIDRs: description: | List of IP ranges of insecure registries, using the CIDR syntax ([RFC 4632](https://tools.ietf.org/html/4632)). Insecure registries accept un-encrypted (HTTP) and/or untrusted (HTTPS with certificates from unknown CAs) communication. By default, local registries (`127.0.0.0/8`) are configured as insecure. All other registries are secure. Communicating with an insecure registry is not possible if the daemon assumes that registry is secure. This configuration override this behavior, insecure communication with registries whose resolved IP address is within the subnet described by the CIDR syntax. Registries can also be marked insecure by hostname. Those registries are listed under `IndexConfigs` and have their `Secure` field set to `false`. > **Warning**: Using this option can be useful when running a local > registry, but introduces security vulnerabilities. This option > should therefore ONLY be used for testing purposes. For increased > security, users should add their CA to their system's list of trusted > CAs instead of enabling this option. 
type: "array" items: type: "string" example: ["::1/128", "127.0.0.0/8"] IndexConfigs: type: "object" additionalProperties: $ref: "#/definitions/IndexInfo" example: "127.0.0.1:5000": "Name": "127.0.0.1:5000" "Mirrors": [] "Secure": false "Official": false "[2001:db8:a0b:12f0::1]:80": "Name": "[2001:db8:a0b:12f0::1]:80" "Mirrors": [] "Secure": false "Official": false "docker.io": Name: "docker.io" Mirrors: ["https://hub-mirror.corp.example.com:5000/"] Secure: true Official: true "registry.internal.corp.example.com:3000": Name: "registry.internal.corp.example.com:3000" Mirrors: [] Secure: false Official: false Mirrors: description: | List of registry URLs that act as a mirror for the official (`docker.io`) registry. type: "array" items: type: "string" example: - "https://hub-mirror.corp.example.com:5000/" - "https://[2001:db8:a0b:12f0::1]/" IndexInfo: description: IndexInfo contains information about a registry. type: "object" x-nullable: true properties: Name: description: | Name of the registry, such as "docker.io". type: "string" example: "docker.io" Mirrors: description: | List of mirrors, expressed as URIs. type: "array" items: type: "string" example: - "https://hub-mirror.corp.example.com:5000/" - "https://registry-2.docker.io/" - "https://registry-3.docker.io/" Secure: description: | Indicates if the registry is part of the list of insecure registries. If `false`, the registry is insecure. Insecure registries accept un-encrypted (HTTP) and/or untrusted (HTTPS with certificates from unknown CAs) communication. > **Warning**: Insecure registries can be useful when running a local > registry. However, because its use creates security vulnerabilities > it should ONLY be enabled for testing purposes. For increased > security, users should add their CA to their system's list of > trusted CAs instead of enabling this option. type: "boolean" example: true Official: description: | Indicates whether this is an official registry (i.e., Docker Hub / docker.io) type: "boolean" example: true Runtime: description: | Runtime describes an [OCI compliant](https://github.com/opencontainers/runtime-spec) runtime. The runtime is invoked by the daemon via the `containerd` daemon. OCI runtimes act as an interface to the Linux kernel namespaces, cgroups, and SELinux. type: "object" properties: path: description: | Name and, optional, path, of the OCI executable binary. If the path is omitted, the daemon searches the host's `$PATH` for the binary and uses the first result. type: "string" example: "/usr/local/bin/my-oci-runtime" runtimeArgs: description: | List of command-line arguments to pass to the runtime when invoked. type: "array" x-nullable: true items: type: "string" example: ["--debug", "--systemd-cgroup=false"] Commit: description: | Commit holds the Git-commit (SHA1) that a binary was built from, as reported in the version-string of external tools, such as `containerd`, or `runC`. type: "object" properties: ID: description: "Actual commit ID of external tool." type: "string" example: "cfb82a876ecc11b5ca0977d1733adbe58599088a" Expected: description: | Commit ID of external tool expected by dockerd as set at build time. type: "string" example: "2d41c047c83e09a6d61d464906feb2a2f3c52aa4" SwarmInfo: description: | Represents generic information about swarm. type: "object" properties: NodeID: description: "Unique identifier of for this node in the swarm." 
type: "string" default: "" example: "k67qz4598weg5unwwffg6z1m1" NodeAddr: description: | IP address at which this node can be reached by other nodes in the swarm. type: "string" default: "" example: "10.0.0.46" LocalNodeState: $ref: "#/definitions/LocalNodeState" ControlAvailable: type: "boolean" default: false example: true Error: type: "string" default: "" RemoteManagers: description: | List of ID's and addresses of other managers in the swarm. type: "array" default: null x-nullable: true items: $ref: "#/definitions/PeerNode" example: - NodeID: "71izy0goik036k48jg985xnds" Addr: "10.0.0.158:2377" - NodeID: "79y6h1o4gv8n120drcprv5nmc" Addr: "10.0.0.159:2377" - NodeID: "k67qz4598weg5unwwffg6z1m1" Addr: "10.0.0.46:2377" Nodes: description: "Total number of nodes in the swarm." type: "integer" x-nullable: true example: 4 Managers: description: "Total number of managers in the swarm." type: "integer" x-nullable: true example: 3 Cluster: $ref: "#/definitions/ClusterInfo" LocalNodeState: description: "Current local status of this node." type: "string" default: "" enum: - "" - "inactive" - "pending" - "active" - "error" - "locked" example: "active" PeerNode: description: "Represents a peer-node in the swarm" properties: NodeID: description: "Unique identifier of for this node in the swarm." type: "string" Addr: description: | IP address and ports at which this node can be reached. type: "string" NetworkAttachmentConfig: description: | Specifies how a service should be attached to a particular network. type: "object" properties: Target: description: | The target network for attachment. Must be a network name or ID. type: "string" Aliases: description: | Discoverable alternate names for the service on this network. type: "array" items: type: "string" DriverOpts: description: | Driver attachment options for the network target. type: "object" additionalProperties: type: "string" paths: /containers/json: get: summary: "List containers" description: | Returns a list of containers. For details on the format, see the [inspect endpoint](#operation/ContainerInspect). Note that it uses a different, smaller representation of a container than inspecting a single container. For example, the list of linked containers is not propagated . operationId: "ContainerList" produces: - "application/json" parameters: - name: "all" in: "query" description: | Return all containers. By default, only running containers are shown. type: "boolean" default: false - name: "limit" in: "query" description: | Return this number of most recently created containers, including non-running ones. type: "integer" - name: "size" in: "query" description: | Return the size of container as fields `SizeRw` and `SizeRootFs`. type: "boolean" default: false - name: "filters" in: "query" description: | Filters to process on the container list, encoded as JSON (a `map[string][]string`). For example, `{"status": ["paused"]}` will only return paused containers. 
Available filters: - `ancestor`=(`<image-name>[:<tag>]`, `<image id>`, or `<image@digest>`) - `before`=(`<container id>` or `<container name>`) - `expose`=(`<port>[/<proto>]`|`<startport-endport>/[<proto>]`) - `exited=<int>` containers with exit code of `<int>` - `health`=(`starting`|`healthy`|`unhealthy`|`none`) - `id=<ID>` a container's ID - `isolation=`(`default`|`process`|`hyperv`) (Windows daemon only) - `is-task=`(`true`|`false`) - `label=key` or `label="key=value"` of a container label - `name=<name>` a container's name - `network`=(`<network id>` or `<network name>`) - `publish`=(`<port>[/<proto>]`|`<startport-endport>/[<proto>]`) - `since`=(`<container id>` or `<container name>`) - `status=`(`created`|`restarting`|`running`|`removing`|`paused`|`exited`|`dead`) - `volume`=(`<volume name>` or `<mount point destination>`) type: "string" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/ContainerSummary" examples: application/json: - Id: "8dfafdbc3a40" Names: - "/boring_feynman" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 1" Created: 1367854155 State: "Exited" Status: "Exit 0" Ports: - PrivatePort: 2222 PublicPort: 3333 Type: "tcp" Labels: com.example.vendor: "Acme" com.example.license: "GPL" com.example.version: "1.0" SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "2cdc4edb1ded3631c81f57966563e5c8525b81121bb3706a9a9a3ae102711f3f" Gateway: "172.17.0.1" IPAddress: "172.17.0.2" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:02" Mounts: - Name: "fac362...80535" Source: "/data" Destination: "/data" Driver: "local" Mode: "ro,Z" RW: false Propagation: "" - Id: "9cd87474be90" Names: - "/coolName" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 222222" Created: 1367854155 State: "Exited" Status: "Exit 0" Ports: [] Labels: {} SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "88eaed7b37b38c2a3f0c4bc796494fdf51b270c2d22656412a2ca5d559a64d7a" Gateway: "172.17.0.1" IPAddress: "172.17.0.8" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:08" Mounts: [] - Id: "3176a2479c92" Names: - "/sleepy_dog" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 3333333333333333" Created: 1367854154 State: "Exited" Status: "Exit 0" Ports: [] Labels: {} SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "8b27c041c30326d59cd6e6f510d4f8d1d570a228466f956edf7815508f78e30d" Gateway: "172.17.0.1" IPAddress: "172.17.0.6" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:06" Mounts: [] - Id: "4cb07b47f9fb" Names: - "/running_cat" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 444444444444444444444444444444444" Created: 1367854152 State: "Exited" Status: "Exit 0" Ports: [] Labels: {} SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" 
NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "d91c7b2f0644403d7ef3095985ea0e2370325cd2332ff3a3225c4247328e66e9" Gateway: "172.17.0.1" IPAddress: "172.17.0.5" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:05" Mounts: [] 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Container"] /containers/create: post: summary: "Create a container" operationId: "ContainerCreate" consumes: - "application/json" - "application/octet-stream" produces: - "application/json" parameters: - name: "name" in: "query" description: | Assign the specified name to the container. Must match `/?[a-zA-Z0-9][a-zA-Z0-9_.-]+`. type: "string" pattern: "^/?[a-zA-Z0-9][a-zA-Z0-9_.-]+$" - name: "body" in: "body" description: "Container to create" schema: allOf: - $ref: "#/definitions/ContainerConfig" - type: "object" properties: HostConfig: $ref: "#/definitions/HostConfig" NetworkingConfig: $ref: "#/definitions/NetworkingConfig" example: Hostname: "" Domainname: "" User: "" AttachStdin: false AttachStdout: true AttachStderr: true Tty: false OpenStdin: false StdinOnce: false Env: - "FOO=bar" - "BAZ=quux" Cmd: - "date" Entrypoint: "" Image: "ubuntu" Labels: com.example.vendor: "Acme" com.example.license: "GPL" com.example.version: "1.0" Volumes: /volumes/data: {} WorkingDir: "" NetworkDisabled: false MacAddress: "12:34:56:78:9a:bc" ExposedPorts: 22/tcp: {} StopSignal: "SIGTERM" StopTimeout: 10 HostConfig: Binds: - "/tmp:/tmp" Links: - "redis3:redis" Memory: 0 MemorySwap: 0 MemoryReservation: 0 KernelMemory: 0 NanoCpus: 500000 CpuPercent: 80 CpuShares: 512 CpuPeriod: 100000 CpuRealtimePeriod: 1000000 CpuRealtimeRuntime: 10000 CpuQuota: 50000 CpusetCpus: "0,1" CpusetMems: "0,1" MaximumIOps: 0 MaximumIOBps: 0 BlkioWeight: 300 BlkioWeightDevice: - {} BlkioDeviceReadBps: - {} BlkioDeviceReadIOps: - {} BlkioDeviceWriteBps: - {} BlkioDeviceWriteIOps: - {} DeviceRequests: - Driver: "nvidia" Count: -1 DeviceIDs": ["0", "1", "GPU-fef8089b-4820-abfc-e83e-94318197576e"] Capabilities: [["gpu", "nvidia", "compute"]] Options: property1: "string" property2: "string" MemorySwappiness: 60 OomKillDisable: false OomScoreAdj: 500 PidMode: "" PidsLimit: 0 PortBindings: 22/tcp: - HostPort: "11022" PublishAllPorts: false Privileged: false ReadonlyRootfs: false Dns: - "8.8.8.8" DnsOptions: - "" DnsSearch: - "" VolumesFrom: - "parent" - "other:ro" CapAdd: - "NET_ADMIN" CapDrop: - "MKNOD" GroupAdd: - "newgroup" RestartPolicy: Name: "" MaximumRetryCount: 0 AutoRemove: true NetworkMode: "bridge" Devices: [] Ulimits: - {} LogConfig: Type: "json-file" Config: {} SecurityOpt: [] StorageOpt: {} CgroupParent: "" VolumeDriver: "" ShmSize: 67108864 NetworkingConfig: EndpointsConfig: isolated_nw: IPAMConfig: IPv4Address: "172.20.30.33" IPv6Address: "2001:db8:abcd::3033" LinkLocalIPs: - "169.254.34.68" - "fe80::3468" Links: - "container_1" - "container_2" Aliases: - "server_x" - "server_y" required: true responses: 201: description: "Container created successfully" schema: type: "object" title: "ContainerCreateResponse" description: "OK response to ContainerCreate operation" required: [Id, Warnings] properties: Id: description: "The ID of the created container" type: "string" x-nullable: false Warnings: description: "Warnings encountered when creating the container" type: "array" x-nullable: false items: type: "string" 
examples: application/json: Id: "e90e34656806" Warnings: [] 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such image" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such image: c2ada9df5af8" 409: description: "conflict" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Container"] /containers/{id}/json: get: summary: "Inspect a container" description: "Return low-level information about a container." operationId: "ContainerInspect" produces: - "application/json" responses: 200: description: "no error" schema: type: "object" title: "ContainerInspectResponse" properties: Id: description: "The ID of the container" type: "string" Created: description: "The time the container was created" type: "string" Path: description: "The path to the command being run" type: "string" Args: description: "The arguments to the command being run" type: "array" items: type: "string" State: x-nullable: true $ref: "#/definitions/ContainerState" Image: description: "The container's image ID" type: "string" ResolvConfPath: type: "string" HostnamePath: type: "string" HostsPath: type: "string" LogPath: type: "string" Name: type: "string" RestartCount: type: "integer" Driver: type: "string" Platform: type: "string" MountLabel: type: "string" ProcessLabel: type: "string" AppArmorProfile: type: "string" ExecIDs: description: "IDs of exec instances that are running in the container." type: "array" items: type: "string" x-nullable: true HostConfig: $ref: "#/definitions/HostConfig" GraphDriver: $ref: "#/definitions/GraphDriverData" SizeRw: description: | The size of files that have been created or changed by this container. type: "integer" format: "int64" SizeRootFs: description: "The total size of all the files in this container." 
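# A sketch tying together the list and create endpoints above using the Go SDK
# (assumed; option struct names differ slightly between SDK versions, and newer
# ContainerCreate signatures take an additional platform argument). Assumes ctx,
# cli, and imports of "fmt", "log", api/types, api/types/container, and
# api/types/filters.
#
#   // List all containers that are currently paused.
#   f := filters.NewArgs(filters.Arg("status", "paused"))
#   paused, err := cli.ContainerList(ctx, types.ContainerListOptions{All: true, Filters: f})
#   if err != nil {
#       log.Fatal(err)
#   }
#   fmt.Println("paused containers:", len(paused))
#
#   // Create and start a container, roughly matching the JSON body shown above.
#   created, err := cli.ContainerCreate(ctx,
#       &container.Config{Image: "ubuntu", Cmd: []string{"date"}},
#       &container.HostConfig{},
#       nil, // networking config
#       nil, // platform (recent SDKs)
#       "my-container")
#   if err != nil {
#       log.Fatal(err)
#   }
#   if err := cli.ContainerStart(ctx, created.ID, types.ContainerStartOptions{}); err != nil {
#       log.Fatal(err)
#   }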
type: "integer" format: "int64" Mounts: type: "array" items: $ref: "#/definitions/MountPoint" Config: $ref: "#/definitions/ContainerConfig" NetworkSettings: $ref: "#/definitions/NetworkSettings" examples: application/json: AppArmorProfile: "" Args: - "-c" - "exit 9" Config: AttachStderr: true AttachStdin: false AttachStdout: true Cmd: - "/bin/sh" - "-c" - "exit 9" Domainname: "" Env: - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" Healthcheck: Test: ["CMD-SHELL", "exit 0"] Hostname: "ba033ac44011" Image: "ubuntu" Labels: com.example.vendor: "Acme" com.example.license: "GPL" com.example.version: "1.0" MacAddress: "" NetworkDisabled: false OpenStdin: false StdinOnce: false Tty: false User: "" Volumes: /volumes/data: {} WorkingDir: "" StopSignal: "SIGTERM" StopTimeout: 10 Created: "2015-01-06T15:47:31.485331387Z" Driver: "devicemapper" ExecIDs: - "b35395de42bc8abd327f9dd65d913b9ba28c74d2f0734eeeae84fa1c616a0fca" - "3fc1232e5cd20c8de182ed81178503dc6437f4e7ef12b52cc5e8de020652f1c4" HostConfig: MaximumIOps: 0 MaximumIOBps: 0 BlkioWeight: 0 BlkioWeightDevice: - {} BlkioDeviceReadBps: - {} BlkioDeviceWriteBps: - {} BlkioDeviceReadIOps: - {} BlkioDeviceWriteIOps: - {} ContainerIDFile: "" CpusetCpus: "" CpusetMems: "" CpuPercent: 80 CpuShares: 0 CpuPeriod: 100000 CpuRealtimePeriod: 1000000 CpuRealtimeRuntime: 10000 Devices: [] DeviceRequests: - Driver: "nvidia" Count: -1 DeviceIDs": ["0", "1", "GPU-fef8089b-4820-abfc-e83e-94318197576e"] Capabilities: [["gpu", "nvidia", "compute"]] Options: property1: "string" property2: "string" IpcMode: "" LxcConf: [] Memory: 0 MemorySwap: 0 MemoryReservation: 0 KernelMemory: 0 OomKillDisable: false OomScoreAdj: 500 NetworkMode: "bridge" PidMode: "" PortBindings: {} Privileged: false ReadonlyRootfs: false PublishAllPorts: false RestartPolicy: MaximumRetryCount: 2 Name: "on-failure" LogConfig: Type: "json-file" Sysctls: net.ipv4.ip_forward: "1" Ulimits: - {} VolumeDriver: "" ShmSize: 67108864 HostnamePath: "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/hostname" HostsPath: "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/hosts" LogPath: "/var/lib/docker/containers/1eb5fabf5a03807136561b3c00adcd2992b535d624d5e18b6cdc6a6844d9767b/1eb5fabf5a03807136561b3c00adcd2992b535d624d5e18b6cdc6a6844d9767b-json.log" Id: "ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39" Image: "04c5d3b7b0656168630d3ba35d8889bd0e9caafcaeb3004d2bfbc47e7c5d35d2" MountLabel: "" Name: "/boring_euclid" NetworkSettings: Bridge: "" SandboxID: "" HairpinMode: false LinkLocalIPv6Address: "" LinkLocalIPv6PrefixLen: 0 SandboxKey: "" EndpointID: "" Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 IPAddress: "" IPPrefixLen: 0 IPv6Gateway: "" MacAddress: "" Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "7587b82f0dada3656fda26588aee72630c6fab1536d36e394b2bfbcf898c971d" Gateway: "172.17.0.1" IPAddress: "172.17.0.2" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:12:00:02" Path: "/bin/sh" ProcessLabel: "" ResolvConfPath: "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/resolv.conf" RestartCount: 1 State: Error: "" ExitCode: 9 FinishedAt: "2015-01-06T15:47:32.080254511Z" Health: Status: "healthy" FailingStreak: 0 Log: - Start: "2019-12-22T10:59:05.6385933Z" End: "2019-12-22T10:59:05.8078452Z" ExitCode: 0 Output: "" OOMKilled: 
false Dead: false Paused: false Pid: 0 Restarting: false Running: true StartedAt: "2015-01-06T15:47:32.072697474Z" Status: "running" Mounts: - Name: "fac362...80535" Source: "/data" Destination: "/data" Driver: "local" Mode: "ro,Z" RW: false Propagation: "" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "size" in: "query" type: "boolean" default: false description: "Return the size of container as fields `SizeRw` and `SizeRootFs`" tags: ["Container"] /containers/{id}/top: get: summary: "List processes running inside a container" description: | On Unix systems, this is done by running the `ps` command. This endpoint is not supported on Windows. operationId: "ContainerTop" responses: 200: description: "no error" schema: type: "object" title: "ContainerTopResponse" description: "OK response to ContainerTop operation" properties: Titles: description: "The ps column titles" type: "array" items: type: "string" Processes: description: | Each process running in the container, where each is process is an array of values corresponding to the titles. type: "array" items: type: "array" items: type: "string" examples: application/json: Titles: - "UID" - "PID" - "PPID" - "C" - "STIME" - "TTY" - "TIME" - "CMD" Processes: - - "root" - "13642" - "882" - "0" - "17:03" - "pts/0" - "00:00:00" - "/bin/bash" - - "root" - "13735" - "13642" - "0" - "17:06" - "pts/0" - "00:00:00" - "sleep 10" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "ps_args" in: "query" description: "The arguments to pass to `ps`. For example, `aux`" type: "string" default: "-ef" tags: ["Container"] /containers/{id}/logs: get: summary: "Get container logs" description: | Get `stdout` and `stderr` logs from a container. Note: This endpoint works only for containers with the `json-file` or `journald` logging driver. operationId: "ContainerLogs" responses: 200: description: | logs returned as a stream in response body. For the stream format, [see the documentation for the attach endpoint](#operation/ContainerAttach). Note that unlike the attach endpoint, the logs endpoint does not upgrade the connection and does not set Content-Type. schema: type: "string" format: "binary" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "follow" in: "query" description: "Keep connection after returning logs." 
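# For containers created without a TTY, the logs endpoint above returns the same
# multiplexed stream format as attach. A sketch of demultiplexing it with the Go
# SDK and its stdcopy helper (assumed; the options struct lives in different
# packages depending on SDK version). Assumes ctx, cli, and imports of "log",
# "os", api/types, and pkg/stdcopy.
#
#   rd, err := cli.ContainerLogs(ctx, "my-container", types.ContainerLogsOptions{
#       ShowStdout: true,
#       ShowStderr: true,
#       Timestamps: true,
#       Tail:       "100",
#   })
#   if err != nil {
#       log.Fatal(err)
#   }
#   defer rd.Close()
#   // Split the multiplexed stream back into stdout and stderr.
#   if _, err := stdcopy.StdCopy(os.Stdout, os.Stderr, rd); err != nil {
#       log.Fatal(err)
#   }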
type: "boolean" default: false - name: "stdout" in: "query" description: "Return logs from `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Return logs from `stderr`" type: "boolean" default: false - name: "since" in: "query" description: "Only return logs since this time, as a UNIX timestamp" type: "integer" default: 0 - name: "until" in: "query" description: "Only return logs before this time, as a UNIX timestamp" type: "integer" default: 0 - name: "timestamps" in: "query" description: "Add timestamps to every log line" type: "boolean" default: false - name: "tail" in: "query" description: | Only return this number of log lines from the end of the logs. Specify as an integer or `all` to output all log lines. type: "string" default: "all" tags: ["Container"] /containers/{id}/changes: get: summary: "Get changes on a container’s filesystem" description: | Returns which files in a container's filesystem have been added, deleted, or modified. The `Kind` of modification can be one of: - `0`: Modified - `1`: Added - `2`: Deleted operationId: "ContainerChanges" produces: ["application/json"] responses: 200: description: "The list of changes" schema: type: "array" items: type: "object" x-go-name: "ContainerChangeResponseItem" title: "ContainerChangeResponseItem" description: "change item in response to ContainerChanges operation" required: [Path, Kind] properties: Path: description: "Path to file that has changed" type: "string" x-nullable: false Kind: description: "Kind of change" type: "integer" format: "uint8" enum: [0, 1, 2] x-nullable: false examples: application/json: - Path: "/dev" Kind: 0 - Path: "/dev/kmsg" Kind: 1 - Path: "/test" Kind: 1 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/export: get: summary: "Export a container" description: "Export the contents of a container as a tarball." operationId: "ContainerExport" produces: - "application/octet-stream" responses: 200: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/stats: get: summary: "Get container stats based on resource usage" description: | This endpoint returns a live stream of a container’s resource usage statistics. The `precpu_stats` is the CPU statistic of the *previous* read, and is used to calculate the CPU usage percentage. It is not an exact copy of the `cpu_stats` field. If either `precpu_stats.online_cpus` or `cpu_stats.online_cpus` is nil then for compatibility with older daemons the length of the corresponding `cpu_usage.percpu_usage` array should be used. On a cgroup v2 host, the following fields are not set * `blkio_stats`: all fields other than `io_service_bytes_recursive` * `cpu_stats`: `cpu_usage.percpu_usage` * `memory_stats`: `max_usage` and `failcnt` Also, `memory_stats.stats` fields are incompatible with cgroup v1. 
To calculate the values shown by the `stats` command of the docker cli tool the following formulas can be used: * used_memory = `memory_stats.usage - memory_stats.stats.cache` * available_memory = `memory_stats.limit` * Memory usage % = `(used_memory / available_memory) * 100.0` * cpu_delta = `cpu_stats.cpu_usage.total_usage - precpu_stats.cpu_usage.total_usage` * system_cpu_delta = `cpu_stats.system_cpu_usage - precpu_stats.system_cpu_usage` * number_cpus = `lenght(cpu_stats.cpu_usage.percpu_usage)` or `cpu_stats.online_cpus` * CPU usage % = `(cpu_delta / system_cpu_delta) * number_cpus * 100.0` operationId: "ContainerStats" produces: ["application/json"] responses: 200: description: "no error" schema: type: "object" examples: application/json: read: "2015-01-08T22:57:31.547920715Z" pids_stats: current: 3 networks: eth0: rx_bytes: 5338 rx_dropped: 0 rx_errors: 0 rx_packets: 36 tx_bytes: 648 tx_dropped: 0 tx_errors: 0 tx_packets: 8 eth5: rx_bytes: 4641 rx_dropped: 0 rx_errors: 0 rx_packets: 26 tx_bytes: 690 tx_dropped: 0 tx_errors: 0 tx_packets: 9 memory_stats: stats: total_pgmajfault: 0 cache: 0 mapped_file: 0 total_inactive_file: 0 pgpgout: 414 rss: 6537216 total_mapped_file: 0 writeback: 0 unevictable: 0 pgpgin: 477 total_unevictable: 0 pgmajfault: 0 total_rss: 6537216 total_rss_huge: 6291456 total_writeback: 0 total_inactive_anon: 0 rss_huge: 6291456 hierarchical_memory_limit: 67108864 total_pgfault: 964 total_active_file: 0 active_anon: 6537216 total_active_anon: 6537216 total_pgpgout: 414 total_cache: 0 inactive_anon: 0 active_file: 0 pgfault: 964 inactive_file: 0 total_pgpgin: 477 max_usage: 6651904 usage: 6537216 failcnt: 0 limit: 67108864 blkio_stats: {} cpu_stats: cpu_usage: percpu_usage: - 8646879 - 24472255 - 36438778 - 30657443 usage_in_usermode: 50000000 total_usage: 100215355 usage_in_kernelmode: 30000000 system_cpu_usage: 739306590000000 online_cpus: 4 throttling_data: periods: 0 throttled_periods: 0 throttled_time: 0 precpu_stats: cpu_usage: percpu_usage: - 8646879 - 24350896 - 36438778 - 30657443 usage_in_usermode: 50000000 total_usage: 100093996 usage_in_kernelmode: 30000000 system_cpu_usage: 9492140000000 online_cpus: 4 throttling_data: periods: 0 throttled_periods: 0 throttled_time: 0 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "stream" in: "query" description: | Stream the output. If false, the stats will be output once and then it will disconnect. type: "boolean" default: true - name: "one-shot" in: "query" description: | Only get a single stat instead of waiting for 2 cycles. Must be used with `stream=false`. type: "boolean" default: false tags: ["Container"] /containers/{id}/resize: post: summary: "Resize a container TTY" description: "Resize the TTY for a container." 
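# The CPU / memory formulas in the stats description above translate fairly
# directly into code. A sketch decoding one stats sample with the Go SDK (assumed;
# the stats types have moved packages in newer SDK versions). Assumes ctx, cli,
# and imports of "encoding/json", "fmt", "log", and api/types.
#
#   resp, err := cli.ContainerStats(ctx, "my-container", false) // single sample, no stream
#   if err != nil {
#       log.Fatal(err)
#   }
#   defer resp.Body.Close()
#
#   var v types.StatsJSON
#   if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
#       log.Fatal(err)
#   }
#
#   // used_memory = usage - cache (the "cache" key is absent on cgroup v2 hosts).
#   usedMemory := v.MemoryStats.Usage - v.MemoryStats.Stats["cache"]
#   memPercent := float64(usedMemory) / float64(v.MemoryStats.Limit) * 100.0
#
#   cpuDelta := float64(v.CPUStats.CPUUsage.TotalUsage - v.PreCPUStats.CPUUsage.TotalUsage)
#   sysDelta := float64(v.CPUStats.SystemUsage - v.PreCPUStats.SystemUsage)
#   onlineCPUs := float64(v.CPUStats.OnlineCPUs)
#   if onlineCPUs == 0 {
#       onlineCPUs = float64(len(v.CPUStats.CPUUsage.PercpuUsage))
#   }
#   cpuPercent := cpuDelta / sysDelta * onlineCPUs * 100.0
#   fmt.Printf("cpu=%.2f%% mem=%.2f%%\n", cpuPercent, memPercent)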
operationId: "ContainerResize" consumes: - "application/octet-stream" produces: - "text/plain" responses: 200: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "cannot resize container" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "h" in: "query" description: "Height of the TTY session in characters" type: "integer" - name: "w" in: "query" description: "Width of the TTY session in characters" type: "integer" tags: ["Container"] /containers/{id}/start: post: summary: "Start a container" operationId: "ContainerStart" responses: 204: description: "no error" 304: description: "container already started" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "detachKeys" in: "query" description: | Override the key sequence for detaching a container. Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,` or `_`. type: "string" tags: ["Container"] /containers/{id}/stop: post: summary: "Stop a container" operationId: "ContainerStop" responses: 204: description: "no error" 304: description: "container already stopped" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "t" in: "query" description: "Number of seconds to wait before killing the container" type: "integer" tags: ["Container"] /containers/{id}/restart: post: summary: "Restart a container" operationId: "ContainerRestart" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "t" in: "query" description: "Number of seconds to wait before killing the container" type: "integer" tags: ["Container"] /containers/{id}/kill: post: summary: "Kill a container" description: | Send a POSIX signal to a container, defaulting to killing to the container. 
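The start and stop endpoints take no request body, so calling them only requires an HTTP client pointed at the daemon. The sketch below is a non-authoritative example: it assumes the daemon listens on the default Unix socket at `/var/run/docker.sock`, uses unversioned request paths, and a made-up container name; the `docker` host in the URL is a placeholder that is ignored once the connection is dialed over the socket.

```go
package main

import (
	"context"
	"fmt"
	"net"
	"net/http"
)

// newUnixSocketClient returns an http.Client that sends every request over the
// local daemon socket (the path is an assumption; adjust it for your host).
func newUnixSocketClient(socketPath string) *http.Client {
	return &http.Client{
		Transport: &http.Transport{
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				return (&net.Dialer{}).DialContext(ctx, "unix", socketPath)
			},
		},
	}
}

func main() {
	client := newUnixSocketClient("/var/run/docker.sock")
	const containerID = "my-container" // hypothetical container name

	// POST /containers/{id}/start — 204 on success, 304 if already started.
	resp, err := client.Post("http://docker/containers/"+containerID+"/start", "", nil)
	if err != nil {
		panic(err)
	}
	resp.Body.Close()
	fmt.Println("start:", resp.Status)

	// POST /containers/{id}/stop?t=10 — wait up to 10 seconds before killing.
	resp, err = client.Post("http://docker/containers/"+containerID+"/stop?t=10", "", nil)
	if err != nil {
		panic(err)
	}
	resp.Body.Close()
	fmt.Println("stop:", resp.Status)
}
```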
operationId: "ContainerKill" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "container is not running" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "Container d37cde0fe4ad63c3a7252023b2f9800282894247d145cb5933ddf6e52cc03a28 is not running" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "signal" in: "query" description: "Signal to send to the container as an integer or string (e.g. `SIGINT`)" type: "string" default: "SIGKILL" tags: ["Container"] /containers/{id}/update: post: summary: "Update a container" description: | Change various configuration options of a container without having to recreate it. operationId: "ContainerUpdate" consumes: ["application/json"] produces: ["application/json"] responses: 200: description: "The container has been updated." schema: type: "object" title: "ContainerUpdateResponse" description: "OK response to ContainerUpdate operation" properties: Warnings: type: "array" items: type: "string" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "update" in: "body" required: true schema: allOf: - $ref: "#/definitions/Resources" - type: "object" properties: RestartPolicy: $ref: "#/definitions/RestartPolicy" example: BlkioWeight: 300 CpuShares: 512 CpuPeriod: 100000 CpuQuota: 50000 CpuRealtimePeriod: 1000000 CpuRealtimeRuntime: 10000 CpusetCpus: "0,1" CpusetMems: "0" Memory: 314572800 MemorySwap: 514288000 MemoryReservation: 209715200 KernelMemory: 52428800 RestartPolicy: MaximumRetryCount: 4 Name: "on-failure" tags: ["Container"] /containers/{id}/rename: post: summary: "Rename a container" operationId: "ContainerRename" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "name already in use" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "name" in: "query" required: true description: "New name for the container" type: "string" tags: ["Container"] /containers/{id}/pause: post: summary: "Pause a container" description: | Use the freezer cgroup to suspend all processes in a container. Traditionally, when suspending a process the `SIGSTOP` signal is used, which is observable by the process being suspended. With the freezer cgroup the process is unaware, and unable to capture, that it is being suspended, and subsequently resumed. 
operationId: "ContainerPause" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/unpause: post: summary: "Unpause a container" description: "Resume a container which has been paused." operationId: "ContainerUnpause" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/attach: post: summary: "Attach to a container" description: | Attach to a container to read its output or send it input. You can attach to the same container multiple times and you can reattach to containers that have been detached. Either the `stream` or `logs` parameter must be `true` for this endpoint to do anything. See the [documentation for the `docker attach` command](https://docs.docker.com/engine/reference/commandline/attach/) for more details. ### Hijacking This endpoint hijacks the HTTP connection to transport `stdin`, `stdout`, and `stderr` on the same socket. This is the response from the daemon for an attach request: ``` HTTP/1.1 200 OK Content-Type: application/vnd.docker.raw-stream [STREAM] ``` After the headers and two new lines, the TCP connection can now be used for raw, bidirectional communication between the client and server. To hint potential proxies about connection hijacking, the Docker client can also optionally send connection upgrade headers. For example, the client sends this request to upgrade the connection: ``` POST /containers/16253994b7c4/attach?stream=1&stdout=1 HTTP/1.1 Upgrade: tcp Connection: Upgrade ``` The Docker daemon will respond with a `101 UPGRADED` response, and will similarly follow with the raw stream: ``` HTTP/1.1 101 UPGRADED Content-Type: application/vnd.docker.raw-stream Connection: Upgrade Upgrade: tcp [STREAM] ``` ### Stream format When the TTY setting is disabled in [`POST /containers/create`](#operation/ContainerCreate), the stream over the hijacked connected is multiplexed to separate out `stdout` and `stderr`. The stream consists of a series of frames, each containing a header and a payload. The header contains the information which the stream writes (`stdout` or `stderr`). It also contains the size of the associated frame encoded in the last four bytes (`uint32`). It is encoded on the first eight bytes like this: ```go header := [8]byte{STREAM_TYPE, 0, 0, 0, SIZE1, SIZE2, SIZE3, SIZE4} ``` `STREAM_TYPE` can be: - 0: `stdin` (is written on `stdout`) - 1: `stdout` - 2: `stderr` `SIZE1, SIZE2, SIZE3, SIZE4` are the four bytes of the `uint32` size encoded as big endian. Following the header is the payload, which is the specified number of bytes of `STREAM_TYPE`. The simplest way to implement this protocol is the following: 1. Read 8 bytes. 2. Choose `stdout` or `stderr` depending on the first byte. 3. Extract the frame size from the last four bytes. 4. Read the extracted size and output it on the correct output. 5. Goto 1. 
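The five-step loop above can be implemented directly with the standard library. The following sketch assumes the hijacked connection is available as an `io.Reader`; the function name is illustrative, and the same demultiplexing logic is also available as a reusable helper in the engine's own `pkg/stdcopy` package.

```go
package example

import (
	"encoding/binary"
	"errors"
	"io"
)

// demultiplex reads 8-byte frame headers from the hijacked connection and
// copies each payload to stdout or stderr until the stream ends.
func demultiplex(src io.Reader, stdout, stderr io.Writer) error {
	header := make([]byte, 8)
	for {
		if _, err := io.ReadFull(src, header); err != nil {
			if errors.Is(err, io.EOF) {
				return nil // clean end of stream
			}
			return err
		}

		// Byte 0 selects the stream, bytes 4-7 hold the big-endian frame size.
		var dst io.Writer
		switch header[0] {
		case 0, 1: // stdin (written on stdout) and stdout
			dst = stdout
		case 2: // stderr
			dst = stderr
		default:
			return errors.New("unknown stream type in frame header")
		}

		size := int64(binary.BigEndian.Uint32(header[4:8]))
		if _, err := io.CopyN(dst, src, size); err != nil {
			return err
		}
	}
}
```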
### Stream format when using a TTY When the TTY setting is enabled in [`POST /containers/create`](#operation/ContainerCreate), the stream is not multiplexed. The data exchanged over the hijacked connection is simply the raw data from the process PTY and client's `stdin`. operationId: "ContainerAttach" produces: - "application/vnd.docker.raw-stream" responses: 101: description: "no error, hints proxy about hijacking" 200: description: "no error, no upgrade header found" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "detachKeys" in: "query" description: | Override the key sequence for detaching a container.Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,` or `_`. type: "string" - name: "logs" in: "query" description: | Replay previous logs from the container. This is useful for attaching to a container that has started and you want to output everything since the container started. If `stream` is also enabled, once all the previous output has been returned, it will seamlessly transition into streaming current output. type: "boolean" default: false - name: "stream" in: "query" description: | Stream attached streams from the time the request was made onwards. type: "boolean" default: false - name: "stdin" in: "query" description: "Attach to `stdin`" type: "boolean" default: false - name: "stdout" in: "query" description: "Attach to `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Attach to `stderr`" type: "boolean" default: false tags: ["Container"] /containers/{id}/attach/ws: get: summary: "Attach to a container via a websocket" operationId: "ContainerAttachWebsocket" responses: 101: description: "no error, hints proxy about hijacking" 200: description: "no error, no upgrade header found" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "detachKeys" in: "query" description: | Override the key sequence for detaching a container.Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,`, or `_`. type: "string" - name: "logs" in: "query" description: "Return logs" type: "boolean" default: false - name: "stream" in: "query" description: "Return stream" type: "boolean" default: false - name: "stdin" in: "query" description: "Attach to `stdin`" type: "boolean" default: false - name: "stdout" in: "query" description: "Attach to `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Attach to `stderr`" type: "boolean" default: false tags: ["Container"] /containers/{id}/wait: post: summary: "Wait for a container" description: "Block until a container stops, then returns the exit code." 
operationId: "ContainerWait" produces: ["application/json"] responses: 200: description: "The container has exit." schema: type: "object" title: "ContainerWaitResponse" description: "OK response to ContainerWait operation" required: [StatusCode] properties: StatusCode: description: "Exit code of the container" type: "integer" x-nullable: false Error: description: "container waiting error, if any" type: "object" properties: Message: description: "Details of an error" type: "string" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "condition" in: "query" description: | Wait until a container state reaches the given condition, either 'not-running' (default), 'next-exit', or 'removed'. type: "string" default: "not-running" tags: ["Container"] /containers/{id}: delete: summary: "Remove a container" operationId: "ContainerDelete" responses: 204: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "conflict" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: | You cannot remove a running container: c2ada9df5af8. Stop the container before attempting removal or force remove 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "v" in: "query" description: "Remove anonymous volumes associated with the container." type: "boolean" default: false - name: "force" in: "query" description: "If the container is running, kill it before removing it." type: "boolean" default: false - name: "link" in: "query" description: "Remove the specified link associated with the container." type: "boolean" default: false tags: ["Container"] /containers/{id}/archive: head: summary: "Get information about files in a container" description: | A response header `X-Docker-Container-Path-Stat` is returned, containing a base64 - encoded JSON object with some filesystem header information about the path. operationId: "ContainerArchiveInfo" responses: 200: description: "no error" headers: X-Docker-Container-Path-Stat: type: "string" description: | A base64 - encoded JSON object with some filesystem header information about the path 400: description: "Bad parameter" schema: allOf: - $ref: "#/definitions/ErrorResponse" - type: "object" properties: message: description: | The error message. Either "must specify path parameter" (path cannot be empty) or "not a directory" (path was asserted to be a directory but exists as a file). type: "string" x-nullable: false 404: description: "Container or path does not exist" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "path" in: "query" required: true description: "Resource in the container’s filesystem to archive." 
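A minimal sketch of waiting on a container and reading the exit code from the response schema above. It assumes an `*http.Client` already wired to the daemon socket as in the earlier start/stop example; the function name and error handling are illustrative.

```go
package example

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
)

// waitForContainer blocks until the container reaches the given condition
// (e.g. "not-running") and returns its exit code.
func waitForContainer(client *http.Client, id, condition string) (int, error) {
	u := "http://docker/containers/" + id + "/wait?condition=" + url.QueryEscape(condition)
	resp, err := client.Post(u, "", nil)
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()

	// The body mirrors the ContainerWait response: StatusCode plus an
	// optional Error object.
	var result struct {
		StatusCode int
		Error      *struct{ Message string }
	}
	if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
		return 0, err
	}
	if result.Error != nil && result.Error.Message != "" {
		return result.StatusCode, fmt.Errorf("wait error: %s", result.Error.Message)
	}
	return result.StatusCode, nil
}
```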
type: "string" tags: ["Container"] get: summary: "Get an archive of a filesystem resource in a container" description: "Get a tar archive of a resource in the filesystem of container id." operationId: "ContainerArchive" produces: ["application/x-tar"] responses: 200: description: "no error" 400: description: "Bad parameter" schema: allOf: - $ref: "#/definitions/ErrorResponse" - type: "object" properties: message: description: | The error message. Either "must specify path parameter" (path cannot be empty) or "not a directory" (path was asserted to be a directory but exists as a file). type: "string" x-nullable: false 404: description: "Container or path does not exist" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "path" in: "query" required: true description: "Resource in the container’s filesystem to archive." type: "string" tags: ["Container"] put: summary: "Extract an archive of files or folders to a directory in a container" description: "Upload a tar archive to be extracted to a path in the filesystem of container id." operationId: "PutContainerArchive" consumes: ["application/x-tar", "application/octet-stream"] responses: 200: description: "The content was extracted successfully" 400: description: "Bad parameter" schema: $ref: "#/definitions/ErrorResponse" 403: description: "Permission denied, the volume or container rootfs is marked as read-only." schema: $ref: "#/definitions/ErrorResponse" 404: description: "No such container or path does not exist inside the container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "path" in: "query" required: true description: "Path to a directory in the container to extract the archive’s contents into. " type: "string" - name: "noOverwriteDirNonDir" in: "query" description: | If `1`, `true`, or `True` then it will be an error if unpacking the given content would cause an existing directory to be replaced with a non-directory and vice versa. type: "string" - name: "copyUIDGID" in: "query" description: | If `1`, `true`, then it will copy UID/GID maps to the dest file or dir type: "string" - name: "inputStream" in: "body" required: true description: | The input stream must be a tar archive compressed with one of the following algorithms: `identity` (no compression), `gzip`, `bzip2`, or `xz`. schema: type: "string" format: "binary" tags: ["Container"] /containers/prune: post: summary: "Delete stopped containers" produces: - "application/json" operationId: "ContainerPrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `until=<timestamp>` Prune containers created before this timestamp. The `<timestamp>` can be Unix timestamps, date formatted timestamps, or Go duration strings (e.g. `10m`, `1h30m`) computed relative to the daemon machine’s time. - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune containers with (or without, in case `label!=...` is used) the specified labels. 
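As a sketch of the archive endpoint, the function below fetches a path from a container's filesystem as a tar stream and lists its entries with the standard `archive/tar` package. The client wiring and function name follow the same assumptions as the earlier examples.

```go
package example

import (
	"archive/tar"
	"errors"
	"fmt"
	"io"
	"net/http"
	"net/url"
)

// listContainerPath downloads a resource from the container's filesystem as a
// tar archive and prints the entry names and sizes.
func listContainerPath(client *http.Client, id, path string) error {
	u := "http://docker/containers/" + id + "/archive?path=" + url.QueryEscape(path)
	resp, err := client.Get(u)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("unexpected status: %s", resp.Status)
	}

	// The body is a plain tar archive, so the standard library can walk it.
	tr := tar.NewReader(resp.Body)
	for {
		hdr, err := tr.Next()
		if errors.Is(err, io.EOF) {
			return nil
		}
		if err != nil {
			return err
		}
		fmt.Printf("%s (%d bytes)\n", hdr.Name, hdr.Size)
	}
}
```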
type: "string" responses: 200: description: "No error" schema: type: "object" title: "ContainerPruneResponse" properties: ContainersDeleted: description: "Container IDs that were deleted" type: "array" items: type: "string" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Container"] /images/json: get: summary: "List Images" description: "Returns a list of images on the server. Note that it uses a different, smaller representation of an image than inspecting a single image." operationId: "ImageList" produces: - "application/json" responses: 200: description: "Summary image data for the images matching the query" schema: type: "array" items: $ref: "#/definitions/ImageSummary" examples: application/json: - Id: "sha256:e216a057b1cb1efc11f8a268f37ef62083e70b1b38323ba252e25ac88904a7e8" ParentId: "" RepoTags: - "ubuntu:12.04" - "ubuntu:precise" RepoDigests: - "ubuntu@sha256:992069aee4016783df6345315302fa59681aae51a8eeb2f889dea59290f21787" Created: 1474925151 Size: 103579269 VirtualSize: 103579269 SharedSize: 0 Labels: {} Containers: 2 - Id: "sha256:3e314f95dcace0f5e4fd37b10862fe8398e3c60ed36600bc0ca5fda78b087175" ParentId: "" RepoTags: - "ubuntu:12.10" - "ubuntu:quantal" RepoDigests: - "ubuntu@sha256:002fba3e3255af10be97ea26e476692a7ebed0bb074a9ab960b2e7a1526b15d7" - "ubuntu@sha256:68ea0200f0b90df725d99d823905b04cf844f6039ef60c60bf3e019915017bd3" Created: 1403128455 Size: 172064416 VirtualSize: 172064416 SharedSize: 0 Labels: {} Containers: 5 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "all" in: "query" description: "Show all images. Only images from a final layer (no children) are shown by default." type: "boolean" default: false - name: "filters" in: "query" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the images list. Available filters: - `before`=(`<image-name>[:<tag>]`, `<image id>` or `<image@digest>`) - `dangling=true` - `label=key` or `label="key=value"` of an image label - `reference`=(`<image-name>[:<tag>]`) - `since`=(`<image-name>[:<tag>]`, `<image id>` or `<image@digest>`) type: "string" - name: "shared-size" in: "query" description: "Compute and show shared size as a `SharedSize` field on each image." type: "boolean" default: false - name: "digests" in: "query" description: "Show digest information as a `RepoDigests` field on each image." type: "boolean" default: false tags: ["Image"] /build: post: summary: "Build an image" description: | Build an image from a tar archive with a `Dockerfile` in it. The `Dockerfile` specifies how the image is built from the tar archive. It is typically in the archive's root, but can be at a different path or have a different name by specifying the `dockerfile` parameter. [See the `Dockerfile` reference for more information](https://docs.docker.com/engine/reference/builder/). The Docker daemon performs a preliminary validation of the `Dockerfile` before starting the build, and returns an error if the syntax is incorrect. After that, each instruction is run one-by-one until the ID of the new image is output. The build is canceled if the client drops the connection by quitting or being killed. 
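The `filters` parameter of the image list is a JSON-encoded `map[string][]string` placed in the query string. A hedged sketch, reusing the socket-wired client from the earlier examples and decoding only a few `ImageSummary` fields:

```go
package example

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
)

// listDanglingImages lists images matching the dangling=true filter.
func listDanglingImages(client *http.Client) error {
	filters, err := json.Marshal(map[string][]string{"dangling": {"true"}})
	if err != nil {
		return err
	}

	u := "http://docker/images/json?filters=" + url.QueryEscape(string(filters))
	resp, err := client.Get(u)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	// Only a subset of the ImageSummary fields is decoded here.
	var images []struct {
		Id       string
		RepoTags []string
		Size     int64
	}
	if err := json.NewDecoder(resp.Body).Decode(&images); err != nil {
		return err
	}
	for _, img := range images {
		fmt.Println(img.Id, img.RepoTags, img.Size)
	}
	return nil
}
```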
operationId: "ImageBuild" consumes: - "application/octet-stream" produces: - "application/json" parameters: - name: "inputStream" in: "body" description: "A tar archive compressed with one of the following algorithms: identity (no compression), gzip, bzip2, xz." schema: type: "string" format: "binary" - name: "dockerfile" in: "query" description: "Path within the build context to the `Dockerfile`. This is ignored if `remote` is specified and points to an external `Dockerfile`." type: "string" default: "Dockerfile" - name: "t" in: "query" description: "A name and optional tag to apply to the image in the `name:tag` format. If you omit the tag the default `latest` value is assumed. You can provide several `t` parameters." type: "string" - name: "extrahosts" in: "query" description: "Extra hosts to add to /etc/hosts" type: "string" - name: "remote" in: "query" description: "A Git repository URI or HTTP/HTTPS context URI. If the URI points to a single text file, the file’s contents are placed into a file called `Dockerfile` and the image is built from that file. If the URI points to a tarball, the file is downloaded by the daemon and the contents therein used as the context for the build. If the URI points to a tarball and the `dockerfile` parameter is also specified, there must be a file with the corresponding path inside the tarball." type: "string" - name: "q" in: "query" description: "Suppress verbose build output." type: "boolean" default: false - name: "nocache" in: "query" description: "Do not use the cache when building the image." type: "boolean" default: false - name: "cachefrom" in: "query" description: "JSON array of images used for build cache resolution." type: "string" - name: "pull" in: "query" description: "Attempt to pull the image even if an older image exists locally." type: "string" - name: "rm" in: "query" description: "Remove intermediate containers after a successful build." type: "boolean" default: true - name: "forcerm" in: "query" description: "Always remove intermediate containers, even upon failure." type: "boolean" default: false - name: "memory" in: "query" description: "Set memory limit for build." type: "integer" - name: "memswap" in: "query" description: "Total memory (memory + swap). Set as `-1` to disable swap." type: "integer" - name: "cpushares" in: "query" description: "CPU shares (relative weight)." type: "integer" - name: "cpusetcpus" in: "query" description: "CPUs in which to allow execution (e.g., `0-3`, `0,1`)." type: "string" - name: "cpuperiod" in: "query" description: "The length of a CPU period in microseconds." type: "integer" - name: "cpuquota" in: "query" description: "Microseconds of CPU time that the container can get in a CPU period." type: "integer" - name: "buildargs" in: "query" description: > JSON map of string pairs for build-time variables. Users pass these values at build-time. Docker uses the buildargs as the environment context for commands run via the `Dockerfile` RUN instruction, or for variable expansion in other `Dockerfile` instructions. This is not meant for passing secret values. For example, the build arg `FOO=bar` would become `{"FOO":"bar"}` in JSON. This would result in the query parameter `buildargs={"FOO":"bar"}`. Note that `{"FOO":"bar"}` should be URI component encoded. [Read more about the buildargs instruction.](https://docs.docker.com/engine/reference/builder/#arg) type: "string" - name: "shmsize" in: "query" description: "Size of `/dev/shm` in bytes. The size must be greater than 0. 
If omitted the system uses 64MB." type: "integer" - name: "squash" in: "query" description: "Squash the resulting images layers into a single layer. *(Experimental release only.)*" type: "boolean" - name: "labels" in: "query" description: "Arbitrary key/value labels to set on the image, as a JSON map of string pairs." type: "string" - name: "networkmode" in: "query" description: | Sets the networking mode for the run commands during build. Supported standard values are: `bridge`, `host`, `none`, and `container:<name|id>`. Any other value is taken as a custom network's name or ID to which this container should connect to. type: "string" - name: "Content-type" in: "header" type: "string" enum: - "application/x-tar" default: "application/x-tar" - name: "X-Registry-Config" in: "header" description: | This is a base64-encoded JSON object with auth configurations for multiple registries that a build may refer to. The key is a registry URL, and the value is an auth configuration object, [as described in the authentication section](#section/Authentication). For example: ``` { "docker.example.com": { "username": "janedoe", "password": "hunter2" }, "https://index.docker.io/v1/": { "username": "mobydock", "password": "conta1n3rize14" } } ``` Only the registry domain name (and port if not the default 443) are required. However, for legacy reasons, the Docker Hub registry must be specified with both a `https://` prefix and a `/v1/` suffix even though Docker will prefer to use the v2 registry API. type: "string" - name: "platform" in: "query" description: "Platform in the format os[/arch[/variant]]" type: "string" default: "" - name: "target" in: "query" description: "Target build stage" type: "string" default: "" - name: "outputs" in: "query" description: "BuildKit output configuration" type: "string" default: "" responses: 200: description: "no error" 400: description: "Bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Image"] /build/prune: post: summary: "Delete builder cache" produces: - "application/json" operationId: "BuildPrune" parameters: - name: "keep-storage" in: "query" description: "Amount of disk space in bytes to keep for cache" type: "integer" format: "int64" - name: "all" in: "query" type: "boolean" description: "Remove all types of build cache" - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the list of build cache objects. Available filters: - `until=<duration>`: duration relative to daemon's time, during which build cache was not used, in Go's duration format (e.g., '24h') - `id=<id>` - `parent=<id>` - `type=<string>` - `description=<string>` - `inuse` - `shared` - `private` responses: 200: description: "No error" schema: type: "object" title: "BuildPruneResponse" properties: CachesDeleted: type: "array" items: description: "ID of build cache object" type: "string" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Image"] /images/create: post: summary: "Create an image" description: "Create an image by either pulling it from a registry or importing it." 
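To tie the build parameters together, here is a rough sketch that creates a one-file build context in memory, URI-encodes `buildargs` as described above, and streams the JSON progress messages back. The image tag, build argument, and Dockerfile contents are arbitrary, and the client is assumed to be wired to the daemon socket as in the earlier examples.

```go
package example

import (
	"archive/tar"
	"bytes"
	"encoding/json"
	"io"
	"net/http"
	"net/url"
	"os"
)

// buildImage posts a minimal in-memory build context to /build and echoes the
// streamed progress messages to stdout.
func buildImage(client *http.Client) error {
	// A build context is just a tar archive; this one holds a single Dockerfile.
	dockerfile := []byte("FROM busybox\nARG FOO\nRUN echo $FOO\n")
	var buildCtx bytes.Buffer
	tw := tar.NewWriter(&buildCtx)
	if err := tw.WriteHeader(&tar.Header{
		Name:     "Dockerfile",
		Mode:     0o644,
		Size:     int64(len(dockerfile)),
		Typeflag: tar.TypeReg,
	}); err != nil {
		return err
	}
	if _, err := tw.Write(dockerfile); err != nil {
		return err
	}
	if err := tw.Close(); err != nil {
		return err
	}

	// buildargs is a URI-component-encoded JSON map, as described above.
	buildArgs, err := json.Marshal(map[string]string{"FOO": "bar"})
	if err != nil {
		return err
	}
	u := "http://docker/build?t=example:latest&buildargs=" + url.QueryEscape(string(buildArgs))

	resp, err := client.Post(u, "application/x-tar", &buildCtx)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	// The response body is a stream of JSON progress messages; just echo it.
	_, err = io.Copy(os.Stdout, resp.Body)
	return err
}
```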
operationId: "ImageCreate" consumes: - "text/plain" - "application/octet-stream" produces: - "application/json" responses: 200: description: "no error" 404: description: "repository does not exist or no read access" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "fromImage" in: "query" description: "Name of the image to pull. The name may include a tag or digest. This parameter may only be used when pulling an image. The pull is cancelled if the HTTP connection is closed." type: "string" - name: "fromSrc" in: "query" description: "Source to import. The value may be a URL from which the image can be retrieved or `-` to read the image from the request body. This parameter may only be used when importing an image." type: "string" - name: "repo" in: "query" description: "Repository name given to an image when it is imported. The repo may include a tag. This parameter may only be used when importing an image." type: "string" - name: "tag" in: "query" description: "Tag or digest. If empty when pulling an image, this causes all tags for the given image to be pulled." type: "string" - name: "message" in: "query" description: "Set commit message for imported image." type: "string" - name: "inputImage" in: "body" description: "Image content if the value `-` has been specified in fromSrc query parameter" schema: type: "string" required: false - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration. Refer to the [authentication section](#section/Authentication) for details. type: "string" - name: "changes" in: "query" description: | Apply `Dockerfile` instructions to the image that is created, for example: `changes=ENV DEBUG=true`. Note that `ENV DEBUG=true` should be URI component encoded. Supported `Dockerfile` instructions: `CMD`|`ENTRYPOINT`|`ENV`|`EXPOSE`|`ONBUILD`|`USER`|`VOLUME`|`WORKDIR` type: "array" items: type: "string" - name: "platform" in: "query" description: "Platform in the format os[/arch[/variant]]" type: "string" default: "" tags: ["Image"] /images/{name}/json: get: summary: "Inspect an image" description: "Return low-level information about an image." 
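A hedged sketch of pulling an image through `/images/create`. The credential field names mirror the registry auth object shown for `X-Registry-Config` and are placeholders; the header is only needed for registries that require authentication, and the client wiring follows the earlier examples.

```go
package example

import (
	"encoding/base64"
	"encoding/json"
	"io"
	"net/http"
	"os"
)

// pullImage pulls busybox:latest and echoes the streamed progress messages.
func pullImage(client *http.Client) error {
	// Hypothetical credentials, only required for authenticated registries.
	authJSON, err := json.Marshal(map[string]string{
		"username": "janedoe",
		"password": "hunter2",
	})
	if err != nil {
		return err
	}

	req, err := http.NewRequest(http.MethodPost,
		"http://docker/images/create?fromImage=busybox&tag=latest", nil)
	if err != nil {
		return err
	}
	req.Header.Set("X-Registry-Auth", base64.URLEncoding.EncodeToString(authJSON))

	resp, err := client.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	// Progress is streamed as JSON messages until the pull finishes.
	_, err = io.Copy(os.Stdout, resp.Body)
	return err
}
```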
operationId: "ImageInspect" produces: - "application/json" responses: 200: description: "No error" schema: $ref: "#/definitions/Image" examples: application/json: Id: "sha256:85f05633ddc1c50679be2b16a0479ab6f7637f8884e0cfe0f4d20e1ebb3d6e7c" Container: "cb91e48a60d01f1e27028b4fc6819f4f290b3cf12496c8176ec714d0d390984a" Comment: "" Os: "linux" Architecture: "amd64" Parent: "sha256:91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c" ContainerConfig: Tty: false Hostname: "e611e15f9c9d" Domainname: "" AttachStdout: false PublishService: "" AttachStdin: false OpenStdin: false StdinOnce: false NetworkDisabled: false OnBuild: [] Image: "91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c" User: "" WorkingDir: "" MacAddress: "" AttachStderr: false Labels: com.example.license: "GPL" com.example.version: "1.0" com.example.vendor: "Acme" Env: - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" Cmd: - "/bin/sh" - "-c" - "#(nop) LABEL com.example.vendor=Acme com.example.license=GPL com.example.version=1.0" DockerVersion: "1.9.0-dev" VirtualSize: 188359297 Size: 0 Author: "" Created: "2015-09-10T08:30:53.26995814Z" GraphDriver: Name: "aufs" Data: {} RepoDigests: - "localhost:5000/test/busybox/example@sha256:cbbf2f9a99b47fc460d422812b6a5adff7dfee951d8fa2e4a98caa0382cfbdbf" RepoTags: - "example:1.0" - "example:latest" - "example:stable" Config: Image: "91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c" NetworkDisabled: false OnBuild: [] StdinOnce: false PublishService: "" AttachStdin: false OpenStdin: false Domainname: "" AttachStdout: false Tty: false Hostname: "e611e15f9c9d" Cmd: - "/bin/bash" Env: - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" Labels: com.example.vendor: "Acme" com.example.version: "1.0" com.example.license: "GPL" MacAddress: "" AttachStderr: false WorkingDir: "" User: "" RootFS: Type: "layers" Layers: - "sha256:1834950e52ce4d5a88a1bbd131c537f4d0e56d10ff0dd69e66be3b7dfa9df7e6" - "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such image: someimage (tag: latest)" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or id" type: "string" required: true tags: ["Image"] /images/{name}/history: get: summary: "Get the history of an image" description: "Return parent layers of an image." 
operationId: "ImageHistory" produces: ["application/json"] responses: 200: description: "List of image layers" schema: type: "array" items: type: "object" x-go-name: HistoryResponseItem title: "HistoryResponseItem" description: "individual image layer information in response to ImageHistory operation" required: [Id, Created, CreatedBy, Tags, Size, Comment] properties: Id: type: "string" x-nullable: false Created: type: "integer" format: "int64" x-nullable: false CreatedBy: type: "string" x-nullable: false Tags: type: "array" items: type: "string" Size: type: "integer" format: "int64" x-nullable: false Comment: type: "string" x-nullable: false examples: application/json: - Id: "3db9c44f45209632d6050b35958829c3a2aa256d81b9a7be45b362ff85c54710" Created: 1398108230 CreatedBy: "/bin/sh -c #(nop) ADD file:eb15dbd63394e063b805a3c32ca7bf0266ef64676d5a6fab4801f2e81e2a5148 in /" Tags: - "ubuntu:lucid" - "ubuntu:10.04" Size: 182964289 Comment: "" - Id: "6cfa4d1f33fb861d4d114f43b25abd0ac737509268065cdfd69d544a59c85ab8" Created: 1398108222 CreatedBy: "/bin/sh -c #(nop) MAINTAINER Tianon Gravi <[email protected]> - mkimage-debootstrap.sh -i iproute,iputils-ping,ubuntu-minimal -t lucid.tar.xz lucid http://archive.ubuntu.com/ubuntu/" Tags: [] Size: 0 Comment: "" - Id: "511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158" Created: 1371157430 CreatedBy: "" Tags: - "scratch12:latest" - "scratch:latest" Size: 0 Comment: "Imported from -" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID" type: "string" required: true tags: ["Image"] /images/{name}/push: post: summary: "Push an image" description: | Push an image to a registry. If you wish to push an image on to a private registry, that image must already have a tag which references the registry. For example, `registry.example.com/myimage:latest`. The push is cancelled if the HTTP connection is closed. operationId: "ImagePush" consumes: - "application/octet-stream" responses: 200: description: "No error" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID." type: "string" required: true - name: "tag" in: "query" description: "The tag to associate with the image on the registry." type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration. Refer to the [authentication section](#section/Authentication) for details. type: "string" required: true tags: ["Image"] /images/{name}/tag: post: summary: "Tag an image" description: "Tag an image so that it becomes part of a repository." operationId: "ImageTag" responses: 201: description: "No error" 400: description: "Bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Conflict" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID to tag." type: "string" required: true - name: "repo" in: "query" description: "The repository to tag in. For example, `someuser/someimage`." type: "string" - name: "tag" in: "query" description: "The name of the new tag." 
type: "string" tags: ["Image"] /images/{name}: delete: summary: "Remove an image" description: | Remove an image, along with any untagged parent images that were referenced by that image. Images can't be removed if they have descendant images, are being used by a running container or are being used by a build. operationId: "ImageDelete" produces: ["application/json"] responses: 200: description: "The image was deleted successfully" schema: type: "array" items: $ref: "#/definitions/ImageDeleteResponseItem" examples: application/json: - Untagged: "3e2f21a89f" - Deleted: "3e2f21a89f" - Deleted: "53b4f83ac9" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Conflict" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID" type: "string" required: true - name: "force" in: "query" description: "Remove the image even if it is being used by stopped containers or has other tags" type: "boolean" default: false - name: "noprune" in: "query" description: "Do not delete untagged parent images" type: "boolean" default: false tags: ["Image"] /images/search: get: summary: "Search images" description: "Search for an image on Docker Hub." operationId: "ImageSearch" produces: - "application/json" responses: 200: description: "No error" schema: type: "array" items: type: "object" title: "ImageSearchResponseItem" properties: description: type: "string" is_official: type: "boolean" is_automated: type: "boolean" name: type: "string" star_count: type: "integer" examples: application/json: - description: "" is_official: false is_automated: false name: "wma55/u1210sshd" star_count: 0 - description: "" is_official: false is_automated: false name: "jdswinbank/sshd" star_count: 0 - description: "" is_official: false is_automated: false name: "vgauthier/sshd" star_count: 0 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "term" in: "query" description: "Term to search" type: "string" required: true - name: "limit" in: "query" description: "Maximum number of results to return" type: "integer" - name: "filters" in: "query" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the images list. Available filters: - `is-automated=(true|false)` - `is-official=(true|false)` - `stars=<number>` Matches images that has at least 'number' stars. type: "string" tags: ["Image"] /images/prune: post: summary: "Delete unused images" produces: - "application/json" operationId: "ImagePrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `dangling=<boolean>` When set to `true` (or `1`), prune only unused *and* untagged images. When set to `false` (or `0`), all unused images are pruned. - `until=<string>` Prune images created before this timestamp. The `<timestamp>` can be Unix timestamps, date formatted timestamps, or Go duration strings (e.g. `10m`, `1h30m`) computed relative to the daemon machine’s time. - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune images with (or without, in case `label!=...` is used) the specified labels. 
type: "string" responses: 200: description: "No error" schema: type: "object" title: "ImagePruneResponse" properties: ImagesDeleted: description: "Images that were deleted" type: "array" items: $ref: "#/definitions/ImageDeleteResponseItem" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Image"] /auth: post: summary: "Check auth configuration" description: | Validate credentials for a registry and, if available, get an identity token for accessing the registry without password. operationId: "SystemAuth" consumes: ["application/json"] produces: ["application/json"] responses: 200: description: "An identity token was generated successfully." schema: type: "object" title: "SystemAuthResponse" required: [Status] properties: Status: description: "The status of the authentication" type: "string" x-nullable: false IdentityToken: description: "An opaque token used to authenticate a user after a successful login" type: "string" x-nullable: false examples: application/json: Status: "Login Succeeded" IdentityToken: "9cbaf023786cd7..." 204: description: "No error" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "authConfig" in: "body" description: "Authentication to check" schema: $ref: "#/definitions/AuthConfig" tags: ["System"] /info: get: summary: "Get system information" operationId: "SystemInfo" produces: - "application/json" responses: 200: description: "No error" schema: $ref: "#/definitions/SystemInfo" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /version: get: summary: "Get version" description: "Returns the version of Docker that is running and various information about the system that Docker is running on." operationId: "SystemVersion" produces: ["application/json"] responses: 200: description: "no error" schema: $ref: "#/definitions/SystemVersion" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /_ping: get: summary: "Ping" description: "This is a dummy endpoint you can use to test if the server is accessible." operationId: "SystemPing" produces: ["text/plain"] responses: 200: description: "no error" schema: type: "string" example: "OK" headers: API-Version: type: "string" description: "Max API Version the server supports" Builder-Version: type: "string" description: "Default version of docker image builder" Docker-Experimental: type: "boolean" description: "If the server is running with experimental mode enabled" Cache-Control: type: "string" default: "no-cache, no-store, must-revalidate" Pragma: type: "string" default: "no-cache" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" headers: Cache-Control: type: "string" default: "no-cache, no-store, must-revalidate" Pragma: type: "string" default: "no-cache" tags: ["System"] head: summary: "Ping" description: "This is a dummy endpoint you can use to test if the server is accessible." 
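A small sketch of the ping endpoint, reading the informational headers documented above; the client is assumed to be wired to the daemon socket as in the earlier examples.

```go
package example

import (
	"fmt"
	"io"
	"net/http"
)

// pingDaemon checks that the daemon is reachable and reports the API
// information from the response headers.
func pingDaemon(client *http.Client) error {
	resp, err := client.Get("http://docker/_ping")
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body) // "OK" on success
	if err != nil {
		return err
	}
	fmt.Printf("body=%q api-version=%s builder-version=%s\n",
		body,
		resp.Header.Get("API-Version"),
		resp.Header.Get("Builder-Version"))
	return nil
}
```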
operationId: "SystemPingHead" produces: ["text/plain"] responses: 200: description: "no error" schema: type: "string" example: "(empty)" headers: API-Version: type: "string" description: "Max API Version the server supports" Builder-Version: type: "string" description: "Default version of docker image builder" Docker-Experimental: type: "boolean" description: "If the server is running with experimental mode enabled" Cache-Control: type: "string" default: "no-cache, no-store, must-revalidate" Pragma: type: "string" default: "no-cache" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /commit: post: summary: "Create a new image from a container" operationId: "ImageCommit" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "containerConfig" in: "body" description: "The container configuration" schema: $ref: "#/definitions/ContainerConfig" - name: "container" in: "query" description: "The ID or name of the container to commit" type: "string" - name: "repo" in: "query" description: "Repository name for the created image" type: "string" - name: "tag" in: "query" description: "Tag name for the create image" type: "string" - name: "comment" in: "query" description: "Commit message" type: "string" - name: "author" in: "query" description: "Author of the image (e.g., `John Hannibal Smith <[email protected]>`)" type: "string" - name: "pause" in: "query" description: "Whether to pause the container before committing" type: "boolean" default: true - name: "changes" in: "query" description: "`Dockerfile` instructions to apply while committing" type: "string" tags: ["Image"] /events: get: summary: "Monitor events" description: | Stream real-time events from the server. Various objects within Docker report events when something happens to them. 
Containers report these events: `attach`, `commit`, `copy`, `create`, `destroy`, `detach`, `die`, `exec_create`, `exec_detach`, `exec_start`, `exec_die`, `export`, `health_status`, `kill`, `oom`, `pause`, `rename`, `resize`, `restart`, `start`, `stop`, `top`, `unpause`, `update`, and `prune` Images report these events: `delete`, `import`, `load`, `pull`, `push`, `save`, `tag`, `untag`, and `prune` Volumes report these events: `create`, `mount`, `unmount`, `destroy`, and `prune` Networks report these events: `create`, `connect`, `disconnect`, `destroy`, `update`, `remove`, and `prune` The Docker daemon reports these events: `reload` Services report these events: `create`, `update`, and `remove` Nodes report these events: `create`, `update`, and `remove` Secrets report these events: `create`, `update`, and `remove` Configs report these events: `create`, `update`, and `remove` The Builder reports `prune` events operationId: "SystemEvents" produces: - "application/json" responses: 200: description: "no error" schema: type: "object" title: "SystemEventsResponse" properties: Type: description: "The type of object emitting the event" type: "string" Action: description: "The type of event" type: "string" Actor: type: "object" properties: ID: description: "The ID of the object emitting the event" type: "string" Attributes: description: "Various key/value attributes of the object, depending on its type" type: "object" additionalProperties: type: "string" time: description: "Timestamp of event" type: "integer" timeNano: description: "Timestamp of event, with nanosecond accuracy" type: "integer" format: "int64" examples: application/json: Type: "container" Action: "create" Actor: ID: "ede54ee1afda366ab42f824e8a5ffd195155d853ceaec74a927f249ea270c743" Attributes: com.example.some-label: "some-label-value" image: "alpine" name: "my-container" time: 1461943101 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "since" in: "query" description: "Show events created since this timestamp then stream new events." type: "string" - name: "until" in: "query" description: "Show events created until this timestamp then stop streaming." type: "string" - name: "filters" in: "query" description: | A JSON encoded value of filters (a `map[string][]string`) to process on the event list. 
Available filters: - `config=<string>` config name or ID - `container=<string>` container name or ID - `daemon=<string>` daemon name or ID - `event=<string>` event type - `image=<string>` image name or ID - `label=<string>` image or container label - `network=<string>` network name or ID - `node=<string>` node ID - `plugin`=<string> plugin name or ID - `scope`=<string> local or swarm - `secret=<string>` secret name or ID - `service=<string>` service name or ID - `type=<string>` object to filter by, one of `container`, `image`, `volume`, `network`, `daemon`, `plugin`, `node`, `service`, `secret` or `config` - `volume=<string>` volume name type: "string" tags: ["System"] /system/df: get: summary: "Get data usage information" operationId: "SystemDataUsage" responses: 200: description: "no error" schema: type: "object" title: "SystemDataUsageResponse" properties: LayersSize: type: "integer" format: "int64" Images: type: "array" items: $ref: "#/definitions/ImageSummary" Containers: type: "array" items: $ref: "#/definitions/ContainerSummary" Volumes: type: "array" items: $ref: "#/definitions/Volume" BuildCache: type: "array" items: $ref: "#/definitions/BuildCache" example: LayersSize: 1092588 Images: - Id: "sha256:2b8fd9751c4c0f5dd266fcae00707e67a2545ef34f9a29354585f93dac906749" ParentId: "" RepoTags: - "busybox:latest" RepoDigests: - "busybox@sha256:a59906e33509d14c036c8678d687bd4eec81ed7c4b8ce907b888c607f6a1e0e6" Created: 1466724217 Size: 1092588 SharedSize: 0 VirtualSize: 1092588 Labels: {} Containers: 1 Containers: - Id: "e575172ed11dc01bfce087fb27bee502db149e1a0fad7c296ad300bbff178148" Names: - "/top" Image: "busybox" ImageID: "sha256:2b8fd9751c4c0f5dd266fcae00707e67a2545ef34f9a29354585f93dac906749" Command: "top" Created: 1472592424 Ports: [] SizeRootFs: 1092588 Labels: {} State: "exited" Status: "Exited (0) 56 minutes ago" HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: IPAMConfig: null Links: null Aliases: null NetworkID: "d687bc59335f0e5c9ee8193e5612e8aee000c8c62ea170cfb99c098f95899d92" EndpointID: "8ed5115aeaad9abb174f68dcf135b49f11daf597678315231a32ca28441dec6a" Gateway: "172.18.0.1" IPAddress: "172.18.0.2" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:12:00:02" Mounts: [] Volumes: - Name: "my-volume" Driver: "local" Mountpoint: "/var/lib/docker/volumes/my-volume/_data" Labels: null Scope: "local" Options: null UsageData: Size: 10920104 RefCount: 2 BuildCache: - ID: "hw53o5aio51xtltp5xjp8v7fx" Parent: "" Type: "regular" Description: "pulled from docker.io/library/debian@sha256:234cb88d3020898631af0ccbbcca9a66ae7306ecd30c9720690858c1b007d2a0" InUse: false Shared: true Size: 0 CreatedAt: "2021-06-28T13:31:01.474619385Z" LastUsedAt: "2021-07-07T22:02:32.738075951Z" UsageCount: 26 - ID: "ndlpt0hhvkqcdfkputsk4cq9c" Parent: "hw53o5aio51xtltp5xjp8v7fx" Type: "regular" Description: "mount / from exec /bin/sh -c echo 'Binary::apt::APT::Keep-Downloaded-Packages \"true\";' > /etc/apt/apt.conf.d/keep-cache" InUse: false Shared: true Size: 51 CreatedAt: "2021-06-28T13:31:03.002625487Z" LastUsedAt: "2021-07-07T22:02:32.773909517Z" UsageCount: 26 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "type" in: "query" description: | Object types, for which to compute and return data. 
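To consume the event stream described above, a client keeps the connection open and decodes one JSON object at a time. The sketch below filters down to container events only; the decoded fields are a partial selection of the schema, and the client wiring follows the earlier examples.

```go
package example

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
)

// streamContainerEvents follows /events, restricted to container events, and
// prints each event as it arrives until the stream is closed.
func streamContainerEvents(client *http.Client) error {
	filters, err := json.Marshal(map[string][]string{"type": {"container"}})
	if err != nil {
		return err
	}

	resp, err := client.Get("http://docker/events?filters=" + url.QueryEscape(string(filters)))
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	// Events arrive as a stream of JSON objects matching the schema above.
	dec := json.NewDecoder(resp.Body)
	for {
		var ev struct {
			Type   string
			Action string
			Actor  struct {
				ID         string
				Attributes map[string]string
			}
			Time int64 `json:"time"`
		}
		if err := dec.Decode(&ev); err != nil {
			return err // io.EOF once the stream ends
		}
		fmt.Printf("%d %s %s %s\n", ev.Time, ev.Type, ev.Action, ev.Actor.ID)
	}
}
```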
type: "array" collectionFormat: multi items: type: "string" enum: ["container", "image", "volume", "build-cache"] tags: ["System"] /images/{name}/get: get: summary: "Export an image" description: | Get a tarball containing all images and metadata for a repository. If `name` is a specific name and tag (e.g. `ubuntu:latest`), then only that image (and its parents) are returned. If `name` is an image ID, similarly only that image (and its parents) are returned, but with the exclusion of the `repositories` file in the tarball, as there were no image names referenced. ### Image tarball format An image tarball contains one directory per image layer (named using its long ID), each containing these files: - `VERSION`: currently `1.0` - the file format version - `json`: detailed layer information, similar to `docker inspect layer_id` - `layer.tar`: A tarfile containing the filesystem changes in this layer The `layer.tar` file contains `aufs` style `.wh..wh.aufs` files and directories for storing attribute changes and deletions. If the tarball defines a repository, the tarball should also include a `repositories` file at the root that contains a list of repository and tag names mapped to layer IDs. ```json { "hello-world": { "latest": "565a9d68a73f6706862bfe8409a7f659776d4d60a8d096eb4a3cbce6999cc2a1" } } ``` operationId: "ImageGet" produces: - "application/x-tar" responses: 200: description: "no error" schema: type: "string" format: "binary" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID" type: "string" required: true tags: ["Image"] /images/get: get: summary: "Export several images" description: | Get a tarball containing all images and metadata for several image repositories. For each value of the `names` parameter: if it is a specific name and tag (e.g. `ubuntu:latest`), then only that image (and its parents) are returned; if it is an image ID, similarly only that image (and its parents) are returned and there would be no names referenced in the 'repositories' file for this image ID. For details on the format, see the [export image endpoint](#operation/ImageGet). operationId: "ImageGetAll" produces: - "application/x-tar" responses: 200: description: "no error" schema: type: "string" format: "binary" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "names" in: "query" description: "Image names to filter by" type: "array" items: type: "string" tags: ["Image"] /images/load: post: summary: "Import images" description: | Load a set of images and tags into a repository. For details on the format, see the [export image endpoint](#operation/ImageGet). operationId: "ImageLoad" consumes: - "application/x-tar" produces: - "application/json" responses: 200: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "imagesTarball" in: "body" description: "Tar archive containing images" schema: type: "string" format: "binary" - name: "quiet" in: "query" description: "Suppress progress details during load." type: "boolean" default: false tags: ["Image"] /containers/{id}/exec: post: summary: "Create an exec instance" description: "Run a command inside a running container." 
operationId: "ContainerExec" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "container is paused" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "execConfig" in: "body" description: "Exec configuration" schema: type: "object" title: "ExecConfig" properties: AttachStdin: type: "boolean" description: "Attach to `stdin` of the exec command." AttachStdout: type: "boolean" description: "Attach to `stdout` of the exec command." AttachStderr: type: "boolean" description: "Attach to `stderr` of the exec command." DetachKeys: type: "string" description: | Override the key sequence for detaching a container. Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,` or `_`. Tty: type: "boolean" description: "Allocate a pseudo-TTY." Env: description: | A list of environment variables in the form `["VAR=value", ...]`. type: "array" items: type: "string" Cmd: type: "array" description: "Command to run, as a string or array of strings." items: type: "string" Privileged: type: "boolean" description: "Runs the exec process with extended privileges." default: false User: type: "string" description: | The user, and optionally, group to run the exec process inside the container. Format is one of: `user`, `user:group`, `uid`, or `uid:gid`. WorkingDir: type: "string" description: | The working directory for the exec process inside the container. example: AttachStdin: false AttachStdout: true AttachStderr: true DetachKeys: "ctrl-p,ctrl-q" Tty: false Cmd: - "date" Env: - "FOO=bar" - "BAZ=quux" required: true - name: "id" in: "path" description: "ID or name of container" type: "string" required: true tags: ["Exec"] /exec/{id}/start: post: summary: "Start an exec instance" description: | Starts a previously set up exec instance. If detach is true, this endpoint returns immediately after starting the command. Otherwise, it sets up an interactive session with the command. operationId: "ExecStart" consumes: - "application/json" produces: - "application/vnd.docker.raw-stream" responses: 200: description: "No error" 404: description: "No such exec instance" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Container is stopped or paused" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "execStartConfig" in: "body" schema: type: "object" title: "ExecStartConfig" properties: Detach: type: "boolean" description: "Detach from the command." Tty: type: "boolean" description: "Allocate a pseudo-TTY." example: Detach: false Tty: false - name: "id" in: "path" description: "Exec instance ID" required: true type: "string" tags: ["Exec"] /exec/{id}/resize: post: summary: "Resize an exec instance" description: | Resize the TTY session used by an exec instance. This endpoint only works if `tty` was specified as part of creating and starting the exec instance. 
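A rough end-to-end sketch of the exec flow: create the exec instance, then start it and read its output. With `Tty` disabled the output uses the multiplexed stream format from the attach section, so a real consumer would feed it through the demultiplexer sketched there; here it is copied verbatim for brevity. The client wiring and container name follow the earlier examples.

```go
package example

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"os"
)

// runExec creates an exec instance inside a running container, starts it, and
// streams the raw (multiplexed) output to stdout.
func runExec(client *http.Client, containerID string) error {
	createBody, err := json.Marshal(map[string]interface{}{
		"AttachStdout": true,
		"AttachStderr": true,
		"Cmd":          []string{"date"},
	})
	if err != nil {
		return err
	}
	resp, err := client.Post("http://docker/containers/"+containerID+"/exec",
		"application/json", bytes.NewReader(createBody))
	if err != nil {
		return err
	}
	var created struct{ Id string }
	err = json.NewDecoder(resp.Body).Decode(&created)
	resp.Body.Close()
	if err != nil {
		return err
	}
	fmt.Println("exec id:", created.Id)

	// Start without detaching so the command's output comes back on this body.
	startBody := bytes.NewReader([]byte(`{"Detach": false, "Tty": false}`))
	resp, err = client.Post("http://docker/exec/"+created.Id+"/start",
		"application/json", startBody)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	_, err = io.Copy(os.Stdout, resp.Body)
	return err
}
```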
operationId: "ExecResize" responses: 201: description: "No error" 404: description: "No such exec instance" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Exec instance ID" required: true type: "string" - name: "h" in: "query" description: "Height of the TTY session in characters" type: "integer" - name: "w" in: "query" description: "Width of the TTY session in characters" type: "integer" tags: ["Exec"] /exec/{id}/json: get: summary: "Inspect an exec instance" description: "Return low-level information about an exec instance." operationId: "ExecInspect" produces: - "application/json" responses: 200: description: "No error" schema: type: "object" title: "ExecInspectResponse" properties: CanRemove: type: "boolean" DetachKeys: type: "string" ID: type: "string" Running: type: "boolean" ExitCode: type: "integer" ProcessConfig: $ref: "#/definitions/ProcessConfig" OpenStdin: type: "boolean" OpenStderr: type: "boolean" OpenStdout: type: "boolean" ContainerID: type: "string" Pid: type: "integer" description: "The system process ID for the exec process." examples: application/json: CanRemove: false ContainerID: "b53ee82b53a40c7dca428523e34f741f3abc51d9f297a14ff874bf761b995126" DetachKeys: "" ExitCode: 2 ID: "f33bbfb39f5b142420f4759b2348913bd4a8d1a6d7fd56499cb41a1bb91d7b3b" OpenStderr: true OpenStdin: true OpenStdout: true ProcessConfig: arguments: - "-c" - "exit 2" entrypoint: "sh" privileged: false tty: true user: "1000" Running: false Pid: 42000 404: description: "No such exec instance" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Exec instance ID" required: true type: "string" tags: ["Exec"] /volumes: get: summary: "List volumes" operationId: "VolumeList" produces: ["application/json"] responses: 200: description: "Summary volume data that matches the query" schema: type: "object" title: "VolumeListResponse" description: "Volume list response" required: [Volumes, Warnings] properties: Volumes: type: "array" x-nullable: false description: "List of volumes" items: $ref: "#/definitions/Volume" Warnings: type: "array" x-nullable: false description: | Warnings that occurred when fetching the list of volumes. items: type: "string" examples: application/json: Volumes: - CreatedAt: "2017-07-19T12:00:26Z" Name: "tardis" Driver: "local" Mountpoint: "/var/lib/docker/volumes/tardis" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Scope: "local" Options: device: "tmpfs" o: "size=100m,uid=1000" type: "tmpfs" Warnings: [] 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" description: | JSON encoded value of the filters (a `map[string][]string`) to process on the volumes list. Available filters: - `dangling=<boolean>` When set to `true` (or `1`), returns all volumes that are not in use by a container. When set to `false` (or `0`), only volumes that are in use by one or more containers are returned. - `driver=<volume-driver-name>` Matches volumes based on their driver. - `label=<key>` or `label=<key>:<value>` Matches volumes based on the presence of a `label` alone or a `label` and a value. - `name=<volume-name>` Matches all or part of a volume name. 
type: "string" format: "json" tags: ["Volume"] /volumes/create: post: summary: "Create a volume" operationId: "VolumeCreate" consumes: ["application/json"] produces: ["application/json"] responses: 201: description: "The volume was created successfully" schema: $ref: "#/definitions/Volume" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "volumeConfig" in: "body" required: true description: "Volume configuration" schema: type: "object" description: "Volume configuration" title: "VolumeConfig" properties: Name: description: | The new volume's name. If not specified, Docker generates a name. type: "string" x-nullable: false Driver: description: "Name of the volume driver to use." type: "string" default: "local" x-nullable: false DriverOpts: description: | A mapping of driver options and values. These options are passed directly to the driver and are driver specific. type: "object" additionalProperties: type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" example: Name: "tardis" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Driver: "custom" tags: ["Volume"] /volumes/{name}: get: summary: "Inspect a volume" operationId: "VolumeInspect" produces: ["application/json"] responses: 200: description: "No error" schema: $ref: "#/definitions/Volume" 404: description: "No such volume" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" required: true description: "Volume name or ID" type: "string" tags: ["Volume"] delete: summary: "Remove a volume" description: "Instruct the driver to remove the volume." operationId: "VolumeDelete" responses: 204: description: "The volume was removed" 404: description: "No such volume or volume driver" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Volume is in use and cannot be removed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" required: true description: "Volume name or ID" type: "string" - name: "force" in: "query" description: "Force the removal of the volume" type: "boolean" default: false tags: ["Volume"] /volumes/prune: post: summary: "Delete unused volumes" produces: - "application/json" operationId: "VolumePrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune volumes with (or without, in case `label!=...` is used) the specified labels. type: "string" responses: 200: description: "No error" schema: type: "object" title: "VolumePruneResponse" properties: VolumesDeleted: description: "Volumes that were deleted" type: "array" items: type: "string" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Volume"] /networks: get: summary: "List networks" description: | Returns a list of networks. For details on the format, see the [network inspect endpoint](#operation/NetworkInspect). Note that it uses a different, smaller representation of a network than inspecting a single network. 
For example, the list of containers attached to the network is not propagated in API versions 1.28 and up. operationId: "NetworkList" produces: - "application/json" responses: 200: description: "No error" schema: type: "array" items: $ref: "#/definitions/Network" examples: application/json: - Name: "bridge" Id: "f2de39df4171b0dc801e8002d1d999b77256983dfc63041c0f34030aa3977566" Created: "2016-10-19T06:21:00.416543526Z" Scope: "local" Driver: "bridge" EnableIPv6: false Internal: false Attachable: false Ingress: false IPAM: Driver: "default" Config: - Subnet: "172.17.0.0/16" Options: com.docker.network.bridge.default_bridge: "true" com.docker.network.bridge.enable_icc: "true" com.docker.network.bridge.enable_ip_masquerade: "true" com.docker.network.bridge.host_binding_ipv4: "0.0.0.0" com.docker.network.bridge.name: "docker0" com.docker.network.driver.mtu: "1500" - Name: "none" Id: "e086a3893b05ab69242d3c44e49483a3bbbd3a26b46baa8f61ab797c1088d794" Created: "0001-01-01T00:00:00Z" Scope: "local" Driver: "null" EnableIPv6: false Internal: false Attachable: false Ingress: false IPAM: Driver: "default" Config: [] Containers: {} Options: {} - Name: "host" Id: "13e871235c677f196c4e1ecebb9dc733b9b2d2ab589e30c539efeda84a24215e" Created: "0001-01-01T00:00:00Z" Scope: "local" Driver: "host" EnableIPv6: false Internal: false Attachable: false Ingress: false IPAM: Driver: "default" Config: [] Containers: {} Options: {} 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" description: | JSON encoded value of the filters (a `map[string][]string`) to process on the networks list. Available filters: - `dangling=<boolean>` When set to `true` (or `1`), returns all networks that are not in use by a container. When set to `false` (or `0`), only networks that are in use by one or more containers are returned. - `driver=<driver-name>` Matches a network's driver. - `id=<network-id>` Matches all or part of a network ID. - `label=<key>` or `label=<key>=<value>` of a network label. - `name=<network-name>` Matches all or part of a network name. - `scope=["swarm"|"global"|"local"]` Filters networks by scope (`swarm`, `global`, or `local`). - `type=["custom"|"builtin"]` Filters networks by type. The `custom` keyword returns all user-defined networks. 
type: "string" tags: ["Network"] /networks/{id}: get: summary: "Inspect a network" operationId: "NetworkInspect" produces: - "application/json" responses: 200: description: "No error" schema: $ref: "#/definitions/Network" 404: description: "Network not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" - name: "verbose" in: "query" description: "Detailed inspect output for troubleshooting" type: "boolean" default: false - name: "scope" in: "query" description: "Filter the network by scope (swarm, global, or local)" type: "string" tags: ["Network"] delete: summary: "Remove a network" operationId: "NetworkDelete" responses: 204: description: "No error" 403: description: "operation not supported for pre-defined networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such network" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" tags: ["Network"] /networks/create: post: summary: "Create a network" operationId: "NetworkCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "No error" schema: type: "object" title: "NetworkCreateResponse" properties: Id: description: "The ID of the created network." type: "string" Warning: type: "string" example: Id: "22be93d5babb089c5aab8dbc369042fad48ff791584ca2da2100db837a1c7c30" Warning: "" 403: description: "operation not supported for pre-defined networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "plugin not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "networkConfig" in: "body" description: "Network configuration" required: true schema: type: "object" title: "NetworkCreateRequest" required: ["Name"] properties: Name: description: "The network's name." type: "string" CheckDuplicate: description: | Check for networks with duplicate names. Since Network is primarily keyed based on a random ID and not on the name, and network name is strictly a user-friendly alias to the network which is uniquely identified using ID, there is no guaranteed way to check for duplicates. CheckDuplicate is there to provide a best effort checking of any networks which has the same name but it is not guaranteed to catch all name collisions. type: "boolean" Driver: description: "Name of the network driver plugin to use." type: "string" default: "bridge" Internal: description: "Restrict external access to the network." type: "boolean" Attachable: description: | Globally scoped network is manually attachable by regular containers from workers in swarm mode. type: "boolean" Ingress: description: | Ingress network is the network which provides the routing-mesh in swarm mode. type: "boolean" IPAM: description: "Optional custom IP scheme for the network." $ref: "#/definitions/IPAM" EnableIPv6: description: "Enable IPv6 on the network." type: "boolean" Options: description: "Network specific options to be used by the drivers." type: "object" additionalProperties: type: "string" Labels: description: "User-defined key/value metadata." 
type: "object" additionalProperties: type: "string" example: Name: "isolated_nw" CheckDuplicate: false Driver: "bridge" EnableIPv6: true IPAM: Driver: "default" Config: - Subnet: "172.20.0.0/16" IPRange: "172.20.10.0/24" Gateway: "172.20.10.11" - Subnet: "2001:db8:abcd::/64" Gateway: "2001:db8:abcd::1011" Options: foo: "bar" Internal: true Attachable: false Ingress: false Options: com.docker.network.bridge.default_bridge: "true" com.docker.network.bridge.enable_icc: "true" com.docker.network.bridge.enable_ip_masquerade: "true" com.docker.network.bridge.host_binding_ipv4: "0.0.0.0" com.docker.network.bridge.name: "docker0" com.docker.network.driver.mtu: "1500" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" tags: ["Network"] /networks/{id}/connect: post: summary: "Connect a container to a network" operationId: "NetworkConnect" consumes: - "application/json" responses: 200: description: "No error" 403: description: "Operation not supported for swarm scoped networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "Network or container not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" - name: "container" in: "body" required: true schema: type: "object" title: "NetworkConnectRequest" properties: Container: type: "string" description: "The ID or name of the container to connect to the network." EndpointConfig: $ref: "#/definitions/EndpointSettings" example: Container: "3613f73ba0e4" EndpointConfig: IPAMConfig: IPv4Address: "172.24.56.89" IPv6Address: "2001:db8::5689" tags: ["Network"] /networks/{id}/disconnect: post: summary: "Disconnect a container from a network" operationId: "NetworkDisconnect" consumes: - "application/json" responses: 200: description: "No error" 403: description: "Operation not supported for swarm scoped networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "Network or container not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" - name: "container" in: "body" required: true schema: type: "object" title: "NetworkDisconnectRequest" properties: Container: type: "string" description: | The ID or name of the container to disconnect from the network. Force: type: "boolean" description: | Force the container to disconnect from the network. tags: ["Network"] /networks/prune: post: summary: "Delete unused networks" produces: - "application/json" operationId: "NetworkPrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `until=<timestamp>` Prune networks created before this timestamp. The `<timestamp>` can be Unix timestamps, date formatted timestamps, or Go duration strings (e.g. `10m`, `1h30m`) computed relative to the daemon machine’s time. - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune networks with (or without, in case `label!=...` is used) the specified labels. 
type: "string" responses: 200: description: "No error" schema: type: "object" title: "NetworkPruneResponse" properties: NetworksDeleted: description: "Networks that were deleted" type: "array" items: type: "string" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Network"] /plugins: get: summary: "List plugins" operationId: "PluginList" description: "Returns information about installed plugins." produces: ["application/json"] responses: 200: description: "No error" schema: type: "array" items: $ref: "#/definitions/Plugin" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the plugin list. Available filters: - `capability=<capability name>` - `enable=<true>|<false>` tags: ["Plugin"] /plugins/privileges: get: summary: "Get plugin privileges" operationId: "GetPluginPrivileges" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/PluginPrivilegeItem" example: - Name: "network" Description: "" Value: - "host" - Name: "mount" Description: "" Value: - "/data" - Name: "device" Description: "" Value: - "/dev/cpu_dma_latency" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "remote" in: "query" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" tags: - "Plugin" /plugins/pull: post: summary: "Install a plugin" operationId: "PluginPull" description: | Pulls and installs a plugin. After the plugin is installed, it can be enabled using the [`POST /plugins/{name}/enable` endpoint](#operation/PostPluginsEnable). produces: - "application/json" responses: 204: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "remote" in: "query" description: | Remote reference for plugin to install. The `:latest` tag is optional, and is used as the default if omitted. required: true type: "string" - name: "name" in: "query" description: | Local name for the pulled plugin. The `:latest` tag is optional, and is used as the default if omitted. required: false type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration to use when pulling a plugin from a registry. Refer to the [authentication section](#section/Authentication) for details. type: "string" - name: "body" in: "body" schema: type: "array" items: $ref: "#/definitions/PluginPrivilegeItem" example: - Name: "network" Description: "" Value: - "host" - Name: "mount" Description: "" Value: - "/data" - Name: "device" Description: "" Value: - "/dev/cpu_dma_latency" tags: ["Plugin"] /plugins/{name}/json: get: summary: "Inspect a plugin" operationId: "PluginInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Plugin" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. 
required: true type: "string" tags: ["Plugin"] /plugins/{name}: delete: summary: "Remove a plugin" operationId: "PluginDelete" responses: 200: description: "no error" schema: $ref: "#/definitions/Plugin" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "force" in: "query" description: | Disable the plugin before removing. This may result in issues if the plugin is in use by a container. type: "boolean" default: false tags: ["Plugin"] /plugins/{name}/enable: post: summary: "Enable a plugin" operationId: "PluginEnable" responses: 200: description: "no error" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "timeout" in: "query" description: "Set the HTTP client timeout (in seconds)" type: "integer" default: 0 tags: ["Plugin"] /plugins/{name}/disable: post: summary: "Disable a plugin" operationId: "PluginDisable" responses: 200: description: "no error" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" tags: ["Plugin"] /plugins/{name}/upgrade: post: summary: "Upgrade a plugin" operationId: "PluginUpgrade" responses: 204: description: "no error" 404: description: "plugin not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "remote" in: "query" description: | Remote reference to upgrade to. The `:latest` tag is optional, and is used as the default if omitted. required: true type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration to use when pulling a plugin from a registry. Refer to the [authentication section](#section/Authentication) for details. type: "string" - name: "body" in: "body" schema: type: "array" items: $ref: "#/definitions/PluginPrivilegeItem" example: - Name: "network" Description: "" Value: - "host" - Name: "mount" Description: "" Value: - "/data" - Name: "device" Description: "" Value: - "/dev/cpu_dma_latency" tags: ["Plugin"] /plugins/create: post: summary: "Create a plugin" operationId: "PluginCreate" consumes: - "application/x-tar" responses: 204: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "query" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. 
required: true type: "string" - name: "tarContext" in: "body" description: "Path to tar containing plugin rootfs and manifest" schema: type: "string" format: "binary" tags: ["Plugin"] /plugins/{name}/push: post: summary: "Push a plugin" operationId: "PluginPush" description: | Push a plugin to the registry. parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" responses: 200: description: "no error" 404: description: "plugin not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Plugin"] /plugins/{name}/set: post: summary: "Configure a plugin" operationId: "PluginSet" consumes: - "application/json" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "body" in: "body" schema: type: "array" items: type: "string" example: ["DEBUG=1"] responses: 204: description: "No error" 404: description: "Plugin not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Plugin"] /nodes: get: summary: "List nodes" operationId: "NodeList" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Node" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" description: | Filters to process on the nodes list, encoded as JSON (a `map[string][]string`). Available filters: - `id=<node id>` - `label=<engine label>` - `membership=`(`accepted`|`pending`)` - `name=<node name>` - `node.label=<node label>` - `role=`(`manager`|`worker`)` type: "string" tags: ["Node"] /nodes/{id}: get: summary: "Inspect a node" operationId: "NodeInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Node" 404: description: "no such node" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the node" type: "string" required: true tags: ["Node"] delete: summary: "Delete a node" operationId: "NodeDelete" responses: 200: description: "no error" 404: description: "no such node" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the node" type: "string" required: true - name: "force" in: "query" description: "Force remove a node from the swarm" default: false type: "boolean" tags: ["Node"] /nodes/{id}/update: post: summary: "Update a node" operationId: "NodeUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such node" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID of 
the node" type: "string" required: true - name: "body" in: "body" schema: $ref: "#/definitions/NodeSpec" - name: "version" in: "query" description: | The version number of the node object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true tags: ["Node"] /swarm: get: summary: "Inspect swarm" operationId: "SwarmInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Swarm" 404: description: "no such swarm" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" tags: ["Swarm"] /swarm/init: post: summary: "Initialize a new swarm" operationId: "SwarmInit" produces: - "application/json" - "text/plain" responses: 200: description: "no error" schema: description: "The node ID" type: "string" example: "7v2t30z9blmxuhnyo6s4cpenp" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is already part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: type: "object" title: "SwarmInitRequest" properties: ListenAddr: description: | Listen address used for inter-manager communication, as well as determining the networking interface used for the VXLAN Tunnel Endpoint (VTEP). This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the default swarm listening port is used. type: "string" AdvertiseAddr: description: | Externally reachable address advertised to other nodes. This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the port number from the listen address is used. If `AdvertiseAddr` is not specified, it will be automatically detected when possible. type: "string" DataPathAddr: description: | Address or interface to use for data path traffic (format: `<ip|interface>`), for example, `192.168.1.1`, or an interface, like `eth0`. If `DataPathAddr` is unspecified, the same address as `AdvertiseAddr` is used. The `DataPathAddr` specifies the address that global scope network drivers will publish towards other nodes in order to reach the containers running on this node. Using this parameter it is possible to separate the container data traffic from the management traffic of the cluster. type: "string" DataPathPort: description: | DataPathPort specifies the data path port number for data traffic. Acceptable port range is 1024 to 49151. if no port is set or is set to 0, default port 4789 will be used. type: "integer" format: "uint32" DefaultAddrPool: description: | Default Address Pool specifies default subnet pools for global scope networks. type: "array" items: type: "string" example: ["10.10.0.0/16", "20.20.0.0/16"] ForceNewCluster: description: "Force creation of a new swarm." type: "boolean" SubnetSize: description: | SubnetSize specifies the subnet size of the networks created from the default subnet pool. 
type: "integer" format: "uint32" Spec: $ref: "#/definitions/SwarmSpec" example: ListenAddr: "0.0.0.0:2377" AdvertiseAddr: "192.168.1.1:2377" DataPathPort: 4789 DefaultAddrPool: ["10.10.0.0/8", "20.20.0.0/8"] SubnetSize: 24 ForceNewCluster: false Spec: Orchestration: {} Raft: {} Dispatcher: {} CAConfig: {} EncryptionConfig: AutoLockManagers: false tags: ["Swarm"] /swarm/join: post: summary: "Join an existing swarm" operationId: "SwarmJoin" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is already part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: type: "object" title: "SwarmJoinRequest" properties: ListenAddr: description: | Listen address used for inter-manager communication if the node gets promoted to manager, as well as determining the networking interface used for the VXLAN Tunnel Endpoint (VTEP). type: "string" AdvertiseAddr: description: | Externally reachable address advertised to other nodes. This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the port number from the listen address is used. If `AdvertiseAddr` is not specified, it will be automatically detected when possible. type: "string" DataPathAddr: description: | Address or interface to use for data path traffic (format: `<ip|interface>`), for example, `192.168.1.1`, or an interface, like `eth0`. If `DataPathAddr` is unspecified, the same address as `AdvertiseAddr` is used. The `DataPathAddr` specifies the address that global scope network drivers will publish towards other nodes in order to reach the containers running on this node. Using this parameter it is possible to separate the container data traffic from the management traffic of the cluster. type: "string" RemoteAddrs: description: | Addresses of manager nodes already participating in the swarm. type: "array" items: type: "string" JoinToken: description: "Secret token for joining this swarm." type: "string" example: ListenAddr: "0.0.0.0:2377" AdvertiseAddr: "192.168.1.1:2377" RemoteAddrs: - "node1:2377" JoinToken: "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-7p73s1dx5in4tatdymyhg9hu2" tags: ["Swarm"] /swarm/leave: post: summary: "Leave a swarm" operationId: "SwarmLeave" responses: 200: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "force" description: | Force leave swarm, even if this is the last manager or that it will break the cluster. in: "query" type: "boolean" default: false tags: ["Swarm"] /swarm/update: post: summary: "Update a swarm" operationId: "SwarmUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: $ref: "#/definitions/SwarmSpec" - name: "version" in: "query" description: | The version number of the swarm object being updated. This is required to avoid conflicting writes. 
type: "integer" format: "int64" required: true - name: "rotateWorkerToken" in: "query" description: "Rotate the worker join token." type: "boolean" default: false - name: "rotateManagerToken" in: "query" description: "Rotate the manager join token." type: "boolean" default: false - name: "rotateManagerUnlockKey" in: "query" description: "Rotate the manager unlock key." type: "boolean" default: false tags: ["Swarm"] /swarm/unlockkey: get: summary: "Get the unlock key" operationId: "SwarmUnlockkey" consumes: - "application/json" responses: 200: description: "no error" schema: type: "object" title: "UnlockKeyResponse" properties: UnlockKey: description: "The swarm's unlock key." type: "string" example: UnlockKey: "SWMKEY-1-7c37Cc8654o6p38HnroywCi19pllOnGtbdZEgtKxZu8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" tags: ["Swarm"] /swarm/unlock: post: summary: "Unlock a locked manager" operationId: "SwarmUnlock" consumes: - "application/json" produces: - "application/json" parameters: - name: "body" in: "body" required: true schema: type: "object" title: "SwarmUnlockRequest" properties: UnlockKey: description: "The swarm's unlock key." type: "string" example: UnlockKey: "SWMKEY-1-7c37Cc8654o6p38HnroywCi19pllOnGtbdZEgtKxZu8" responses: 200: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" tags: ["Swarm"] /services: get: summary: "List services" operationId: "ServiceList" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Service" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the services list. Available filters: - `id=<service id>` - `label=<service label>` - `mode=["replicated"|"global"]` - `name=<service name>` - name: "status" in: "query" type: "boolean" description: | Include service status, with count of running and desired tasks. tags: ["Service"] /services/create: post: summary: "Create a service" operationId: "ServiceCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: type: "object" title: "ServiceCreateResponse" properties: ID: description: "The ID of the created service." 
type: "string" Warning: description: "Optional warning message" type: "string" example: ID: "ak7w3gjqoa3kuz8xcpnyy0pvl" Warning: "unable to pin image doesnotexist:latest to digest: image library/doesnotexist:latest not found" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 403: description: "network is not eligible for services" schema: $ref: "#/definitions/ErrorResponse" 409: description: "name conflicts with an existing service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: allOf: - $ref: "#/definitions/ServiceSpec" - type: "object" example: Name: "web" TaskTemplate: ContainerSpec: Image: "nginx:alpine" Mounts: - ReadOnly: true Source: "web-data" Target: "/usr/share/nginx/html" Type: "volume" VolumeOptions: DriverConfig: {} Labels: com.example.something: "something-value" Hosts: ["10.10.10.10 host1", "ABCD:EF01:2345:6789:ABCD:EF01:2345:6789 host2"] User: "33" DNSConfig: Nameservers: ["8.8.8.8"] Search: ["example.org"] Options: ["timeout:3"] Secrets: - File: Name: "www.example.org.key" UID: "33" GID: "33" Mode: 384 SecretID: "fpjqlhnwb19zds35k8wn80lq9" SecretName: "example_org_domain_key" LogDriver: Name: "json-file" Options: max-file: "3" max-size: "10M" Placement: {} Resources: Limits: MemoryBytes: 104857600 Reservations: {} RestartPolicy: Condition: "on-failure" Delay: 10000000000 MaxAttempts: 10 Mode: Replicated: Replicas: 4 UpdateConfig: Parallelism: 2 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 RollbackConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 EndpointSpec: Ports: - Protocol: "tcp" PublishedPort: 8080 TargetPort: 80 Labels: foo: "bar" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration for pulling from private registries. Refer to the [authentication section](#section/Authentication) for details. type: "string" tags: ["Service"] /services/{id}: get: summary: "Inspect a service" operationId: "ServiceInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Service" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID or name of service." required: true type: "string" - name: "insertDefaults" in: "query" description: "Fill empty fields with default values." type: "boolean" default: false tags: ["Service"] delete: summary: "Delete a service" operationId: "ServiceDelete" responses: 200: description: "no error" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID or name of service." 
required: true type: "string" tags: ["Service"] /services/{id}/update: post: summary: "Update a service" operationId: "ServiceUpdate" consumes: ["application/json"] produces: ["application/json"] responses: 200: description: "no error" schema: $ref: "#/definitions/ServiceUpdateResponse" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID or name of service." required: true type: "string" - name: "body" in: "body" required: true schema: allOf: - $ref: "#/definitions/ServiceSpec" - type: "object" example: Name: "top" TaskTemplate: ContainerSpec: Image: "busybox" Args: - "top" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ForceUpdate: 0 Mode: Replicated: Replicas: 1 UpdateConfig: Parallelism: 2 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 RollbackConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 EndpointSpec: Mode: "vip" - name: "version" in: "query" description: | The version number of the service object being updated. This is required to avoid conflicting writes. This version number should be the value as currently set on the service *before* the update. You can find the current version by calling `GET /services/{id}` required: true type: "integer" - name: "registryAuthFrom" in: "query" description: | If the `X-Registry-Auth` header is not specified, this parameter indicates where to find registry authorization credentials. type: "string" enum: ["spec", "previous-spec"] default: "spec" - name: "rollback" in: "query" description: | Set to this parameter to `previous` to cause a server-side rollback to the previous service spec. The supplied spec will be ignored in this case. type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration for pulling from private registries. Refer to the [authentication section](#section/Authentication) for details. type: "string" tags: ["Service"] /services/{id}/logs: get: summary: "Get service logs" description: | Get `stdout` and `stderr` logs from a service. See also [`/containers/{id}/logs`](#operation/ContainerLogs). **Note**: This endpoint works only for services with the `local`, `json-file` or `journald` logging drivers. operationId: "ServiceLogs" responses: 200: description: "logs returned as a stream in response body" schema: type: "string" format: "binary" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such service: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the service" type: "string" - name: "details" in: "query" description: "Show service context and extra details provided to logs." type: "boolean" default: false - name: "follow" in: "query" description: "Keep connection after returning logs." 
type: "boolean" default: false - name: "stdout" in: "query" description: "Return logs from `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Return logs from `stderr`" type: "boolean" default: false - name: "since" in: "query" description: "Only return logs since this time, as a UNIX timestamp" type: "integer" default: 0 - name: "timestamps" in: "query" description: "Add timestamps to every log line" type: "boolean" default: false - name: "tail" in: "query" description: | Only return this number of log lines from the end of the logs. Specify as an integer or `all` to output all log lines. type: "string" default: "all" tags: ["Service"] /tasks: get: summary: "List tasks" operationId: "TaskList" produces: - "application/json" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Task" example: - ID: "0kzzo1i0y4jz6027t0k7aezc7" Version: Index: 71 CreatedAt: "2016-06-07T21:07:31.171892745Z" UpdatedAt: "2016-06-07T21:07:31.376370513Z" Spec: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ServiceID: "9mnpnzenvg8p8tdbtq4wvbkcz" Slot: 1 NodeID: "60gvrl6tm78dmak4yl7srz94v" Status: Timestamp: "2016-06-07T21:07:31.290032978Z" State: "running" Message: "started" ContainerStatus: ContainerID: "e5d62702a1b48d01c3e02ca1e0212a250801fa8d67caca0b6f35919ebc12f035" PID: 677 DesiredState: "running" NetworksAttachments: - Network: ID: "4qvuz4ko70xaltuqbt8956gd1" Version: Index: 18 CreatedAt: "2016-06-07T20:31:11.912919752Z" UpdatedAt: "2016-06-07T21:07:29.955277358Z" Spec: Name: "ingress" Labels: com.docker.swarm.internal: "true" DriverConfiguration: {} IPAMOptions: Driver: {} Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" DriverState: Name: "overlay" Options: com.docker.network.driver.overlay.vxlanid_list: "256" IPAMOptions: Driver: Name: "default" Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" Addresses: - "10.255.0.10/16" - ID: "1yljwbmlr8er2waf8orvqpwms" Version: Index: 30 CreatedAt: "2016-06-07T21:07:30.019104782Z" UpdatedAt: "2016-06-07T21:07:30.231958098Z" Name: "hopeful_cori" Spec: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ServiceID: "9mnpnzenvg8p8tdbtq4wvbkcz" Slot: 1 NodeID: "60gvrl6tm78dmak4yl7srz94v" Status: Timestamp: "2016-06-07T21:07:30.202183143Z" State: "shutdown" Message: "shutdown" ContainerStatus: ContainerID: "1cf8d63d18e79668b0004a4be4c6ee58cddfad2dae29506d8781581d0688a213" DesiredState: "shutdown" NetworksAttachments: - Network: ID: "4qvuz4ko70xaltuqbt8956gd1" Version: Index: 18 CreatedAt: "2016-06-07T20:31:11.912919752Z" UpdatedAt: "2016-06-07T21:07:29.955277358Z" Spec: Name: "ingress" Labels: com.docker.swarm.internal: "true" DriverConfiguration: {} IPAMOptions: Driver: {} Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" DriverState: Name: "overlay" Options: com.docker.network.driver.overlay.vxlanid_list: "256" IPAMOptions: Driver: Name: "default" Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" Addresses: - "10.255.0.5/16" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the tasks list. 
Available filters: - `desired-state=(running | shutdown | accepted)` - `id=<task id>` - `label=key` or `label="key=value"` - `name=<task name>` - `node=<node id or name>` - `service=<service name>` tags: ["Task"] /tasks/{id}: get: summary: "Inspect a task" operationId: "TaskInspect" produces: - "application/json" responses: 200: description: "no error" schema: $ref: "#/definitions/Task" 404: description: "no such task" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID of the task" required: true type: "string" tags: ["Task"] /tasks/{id}/logs: get: summary: "Get task logs" description: | Get `stdout` and `stderr` logs from a task. See also [`/containers/{id}/logs`](#operation/ContainerLogs). **Note**: This endpoint works only for services with the `local`, `json-file` or `journald` logging drivers. operationId: "TaskLogs" responses: 200: description: "logs returned as a stream in response body" schema: type: "string" format: "binary" 404: description: "no such task" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such task: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID of the task" type: "string" - name: "details" in: "query" description: "Show task context and extra details provided to logs." type: "boolean" default: false - name: "follow" in: "query" description: "Keep connection after returning logs." type: "boolean" default: false - name: "stdout" in: "query" description: "Return logs from `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Return logs from `stderr`" type: "boolean" default: false - name: "since" in: "query" description: "Only return logs since this time, as a UNIX timestamp" type: "integer" default: 0 - name: "timestamps" in: "query" description: "Add timestamps to every log line" type: "boolean" default: false - name: "tail" in: "query" description: | Only return this number of log lines from the end of the logs. Specify as an integer or `all` to output all log lines. type: "string" default: "all" tags: ["Task"] /secrets: get: summary: "List secrets" operationId: "SecretList" produces: - "application/json" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Secret" example: - ID: "blt1owaxmitz71s9v5zh81zun" Version: Index: 85 CreatedAt: "2017-07-20T13:55:28.678958722Z" UpdatedAt: "2017-07-20T13:55:28.678958722Z" Spec: Name: "mysql-passwd" Labels: some.label: "some.value" Driver: Name: "secret-bucket" Options: OptionA: "value for driver option A" OptionB: "value for driver option B" - ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "app-dev.crt" Labels: foo: "bar" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the secrets list. 
Available filters: - `id=<secret id>` - `label=<key> or label=<key>=value` - `name=<secret name>` - `names=<secret name>` tags: ["Secret"] /secrets/create: post: summary: "Create a secret" operationId: "SecretCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 409: description: "name conflicts with an existing object" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" schema: allOf: - $ref: "#/definitions/SecretSpec" - type: "object" example: Name: "app-key.crt" Labels: foo: "bar" Data: "VEhJUyBJUyBOT1QgQSBSRUFMIENFUlRJRklDQVRFCg==" Driver: Name: "secret-bucket" Options: OptionA: "value for driver option A" OptionB: "value for driver option B" tags: ["Secret"] /secrets/{id}: get: summary: "Inspect a secret" operationId: "SecretInspect" produces: - "application/json" responses: 200: description: "no error" schema: $ref: "#/definitions/Secret" examples: application/json: ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "app-dev.crt" Labels: foo: "bar" Driver: Name: "secret-bucket" Options: OptionA: "value for driver option A" OptionB: "value for driver option B" 404: description: "secret not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the secret" tags: ["Secret"] delete: summary: "Delete a secret" operationId: "SecretDelete" produces: - "application/json" responses: 204: description: "no error" 404: description: "secret not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the secret" tags: ["Secret"] /secrets/{id}/update: post: summary: "Update a Secret" operationId: "SecretUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such secret" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the secret" type: "string" required: true - name: "body" in: "body" schema: $ref: "#/definitions/SecretSpec" description: | The spec of the secret to update. Currently, only the Labels field can be updated. All other fields must remain unchanged from the [SecretInspect endpoint](#operation/SecretInspect) response values. - name: "version" in: "query" description: | The version number of the secret object being updated. This is required to avoid conflicting writes. 
type: "integer" format: "int64" required: true tags: ["Secret"] /configs: get: summary: "List configs" operationId: "ConfigList" produces: - "application/json" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Config" example: - ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "server.conf" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the configs list. Available filters: - `id=<config id>` - `label=<key> or label=<key>=value` - `name=<config name>` - `names=<config name>` tags: ["Config"] /configs/create: post: summary: "Create a config" operationId: "ConfigCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 409: description: "name conflicts with an existing object" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" schema: allOf: - $ref: "#/definitions/ConfigSpec" - type: "object" example: Name: "server.conf" Labels: foo: "bar" Data: "VEhJUyBJUyBOT1QgQSBSRUFMIENFUlRJRklDQVRFCg==" tags: ["Config"] /configs/{id}: get: summary: "Inspect a config" operationId: "ConfigInspect" produces: - "application/json" responses: 200: description: "no error" schema: $ref: "#/definitions/Config" examples: application/json: ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "app-dev.crt" 404: description: "config not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the config" tags: ["Config"] delete: summary: "Delete a config" operationId: "ConfigDelete" produces: - "application/json" responses: 204: description: "no error" 404: description: "config not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the config" tags: ["Config"] /configs/{id}/update: post: summary: "Update a Config" operationId: "ConfigUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such config" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the config" type: "string" required: true - name: "body" in: "body" schema: $ref: "#/definitions/ConfigSpec" description: | The spec of the config to update. 
Currently, only the Labels field can be updated. All other fields must remain unchanged from the [ConfigInspect endpoint](#operation/ConfigInspect) response values. - name: "version" in: "query" description: | The version number of the config object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true tags: ["Config"] /distribution/{name}/json: get: summary: "Get image information from the registry" description: | Return image digest and platform information by contacting the registry. operationId: "DistributionInspect" produces: - "application/json" responses: 200: description: "descriptor and platform information" schema: type: "object" x-go-name: DistributionInspect title: "DistributionInspectResponse" required: [Descriptor, Platforms] properties: Descriptor: type: "object" description: | A descriptor struct containing digest, media type, and size. properties: mediaType: type: "string" size: type: "integer" format: "int64" digest: type: "string" urls: type: "array" items: type: "string" Platforms: type: "array" description: | An array containing all platforms supported by the image. items: type: "object" properties: architecture: type: "string" os: type: "string" os.version: type: "string" os.features: type: "array" items: type: "string" variant: type: "string" Features: type: "array" items: type: "string" examples: application/json: Descriptor: MediaType: "application/vnd.docker.distribution.manifest.v2+json" Digest: "sha256:c0537ff6a5218ef531ece93d4984efc99bbf3f7497c0a7726c88e2bb7584dc96" Size: 3987495 URLs: - "" Platforms: - architecture: "amd64" os: "linux" os.version: "" os.features: - "" variant: "" Features: - "" 401: description: "Failed authentication or no image found" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such image: someimage (tag: latest)" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or id" type: "string" required: true tags: ["Distribution"] /session: post: summary: "Initialize interactive session" description: | Start a new interactive session with a server. Session allows server to call back to the client for advanced capabilities. ### Hijacking This endpoint hijacks the HTTP connection to HTTP2 transport that allows the client to expose gPRC services on that connection. For example, the client sends this request to upgrade the connection: ``` POST /session HTTP/1.1 Upgrade: h2c Connection: Upgrade ``` The Docker daemon responds with a `101 UPGRADED` response follow with the raw stream: ``` HTTP/1.1 101 UPGRADED Connection: Upgrade Upgrade: h2c ``` operationId: "Session" produces: - "application/vnd.docker.raw-stream" responses: 101: description: "no error, hijacking successful" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Session"]
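The exec-related endpoints documented in the spec above (`ContainerExec`, `ExecStart`, `ExecInspect`) are plain HTTP calls against the daemon socket. As a rough illustration — not part of this PR — the following Go sketch creates and starts an exec instance using only the request/response shapes shown in the spec; the container name `my-container`, the `/var/run/docker.sock` path, and the `v1.41` URL prefix are assumptions made for the example.

```go
// Minimal sketch (not from the PR): drives the ExecCreate/ExecStart endpoints
// described above over the local Docker socket. Container name, socket path,
// and API version prefix are placeholders.
package main

import (
	"bytes"
	"context"
	"encoding/json"
	"fmt"
	"io"
	"net"
	"net/http"
)

func main() {
	// Talk HTTP over the daemon's unix socket; the host part of the URL is arbitrary.
	client := &http.Client{
		Transport: &http.Transport{
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				return (&net.Dialer{}).DialContext(ctx, "unix", "/var/run/docker.sock")
			},
		},
	}

	// POST /containers/{id}/exec with fields from the ExecConfig schema.
	execConfig := map[string]interface{}{
		"AttachStdout": true,
		"AttachStderr": true,
		"Cmd":          []string{"date"},
	}
	body, _ := json.Marshal(execConfig)
	resp, err := client.Post("http://docker/v1.41/containers/my-container/exec", "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// The 201 response carries an IdResponse with the exec instance ID.
	var created struct{ Id string }
	if err := json.NewDecoder(resp.Body).Decode(&created); err != nil {
		panic(err)
	}

	// POST /exec/{id}/start; with Detach=false the command output is streamed back.
	startBody, _ := json.Marshal(map[string]interface{}{"Detach": false, "Tty": false})
	resp2, err := client.Post("http://docker/v1.41/exec/"+created.Id+"/start", "application/json", bytes.NewReader(startBody))
	if err != nil {
		panic(err)
	}
	defer resp2.Body.Close()
	out, _ := io.ReadAll(resp2.Body)
	fmt.Printf("exec output (raw-stream framed): %q\n", out)
}
```

Note that a non-TTY exec streams back in the multiplexed raw-stream framing, so a real client would demultiplex the output rather than print it verbatim.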
gesellix
41568dfc665ad4033d7fc860a5dac8102915008e
9bc0c4903f7f02ce287b9918f64795368e507f9d
I hope adding the `x-nullable: false` and the `required` constraints don't break anything - or do they?
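For context on the comment above, the two constraints being discussed look roughly like this in a Swagger 2.0 definition. This is a minimal, hypothetical sketch (the `ExamplePort` definition name is invented for illustration); the real definitions in `api/swagger.yaml`, such as `Port`, combine a `required` list with the `x-nullable: false` vendor extension that generators such as go-swagger use to decide between pointer and value types:

```yaml
# Hypothetical sketch of the constraints discussed in the review comment.
# `required` forces the listed properties to be present in payloads and in
# generated models; `x-nullable: false` tells generators (e.g. go-swagger)
# to emit non-pointer value types, so the field can never be null.
definitions:
  ExamplePort:
    type: "object"
    required: [PrivatePort, Type]
    properties:
      PrivatePort:
        type: "integer"
        format: "uint16"
        x-nullable: false
      Type:
        type: "string"
        x-nullable: false
        enum: ["tcp", "udp", "sctp"]
```

Whether this breaks consumers mostly depends on whether previously generated clients modelled these fields as optional or nullable; the stricter constraints tighten validation and generated types rather than change the wire format.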
gesellix
4,555
moby/moby
42,621
Improve swagger.yaml to match the 1.41 api version
I recently tried to rely on the swagger.yaml for code generation of an api client. Some parts required manual fixes of the swagger docs to match the current api version 1.41. Each change described below is in a dedicated commit, because they aim at different parts of the api description and aren't related per se. If you'd prefer to merge them all into a single commit, please leave a note. **- What I did** 1. Add RestartPolicy "no" to swagger docs 2. Add "changes" query parameter for /image/create to swagger docs 3. Fix ContainerSummary swagger docs (flattened) 4. Use explicit object names for improved swagger-based code generation (otherwise generic names would have been generated) 5. Fix swagger docs to match the [opencontainers image-spec](https://github.com/opencontainers/image-spec/blob/5ced465cc63831baf25330431a428a9d9444e192/specs-go/v1/descriptor.go) **- How I did it** Used the 1.41 swagger.yaml with swagger's codegen to generate Java code. Used that code to perform requests to a 1.41 Docker engine. Applied some fixes and ran `hack/validate/swagger` and `make swagger-docs` to validate the changes. **- How to verify it** Compare the actual 1.41 api with the swagger.yaml. **- Description for the changelog** Update the swagger.yaml to match the version 1.41 api. **- A picture of a cute animal (not mandatory but encouraged)** ![062321_JC_mountain-cat_feat](https://user-images.githubusercontent.com/432791/125209896-77588e80-e29c-11eb-9a5f-439f5f04a69b.jpeg)
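As a rough illustration of item 1 above: the `RestartPolicy.Name` enum in the spec gains an explicit `"no"` value next to the empty string. A minimal sketch of the relevant area, with the field descriptions and the `MaximumRetryCount` property elided, might look like this (not the exact diff from the PR):

```yaml
# Sketch only: RestartPolicy with an explicit "no" value added to the enum.
# The empty string remains accepted for backwards compatibility.
RestartPolicy:
  type: "object"
  properties:
    Name:
      type: "string"
      enum:
        - ""
        - "no"
        - "always"
        - "unless-stopped"
        - "on-failure"
```

Such a change would then be checked with `hack/validate/swagger` and `make swagger-docs`, as described in the "How I did it" section above.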
null
2021-07-11 21:09:21+00:00
2021-08-21 22:28:57+00:00
api/swagger.yaml
# A Swagger 2.0 (a.k.a. OpenAPI) definition of the Engine API. # # This is used for generating API documentation and the types used by the # client/server. See api/README.md for more information. # # Some style notes: # - This file is used by ReDoc, which allows GitHub Flavored Markdown in # descriptions. # - There is no maximum line length, for ease of editing and pretty diffs. # - operationIds are in the format "NounVerb", with a singular noun. swagger: "2.0" schemes: - "http" - "https" produces: - "application/json" - "text/plain" consumes: - "application/json" - "text/plain" basePath: "/v1.42" info: title: "Docker Engine API" version: "1.42" x-logo: url: "https://docs.docker.com/images/logo-docker-main.png" description: | The Engine API is an HTTP API served by Docker Engine. It is the API the Docker client uses to communicate with the Engine, so everything the Docker client can do can be done with the API. Most of the client's commands map directly to API endpoints (e.g. `docker ps` is `GET /containers/json`). The notable exception is running containers, which consists of several API calls. # Errors The API uses standard HTTP status codes to indicate the success or failure of the API call. The body of the response will be JSON in the following format: ``` { "message": "page not found" } ``` # Versioning The API is usually changed in each release, so API calls are versioned to ensure that clients don't break. To lock to a specific version of the API, you prefix the URL with its version, for example, call `/v1.30/info` to use the v1.30 version of the `/info` endpoint. If the API version specified in the URL is not supported by the daemon, a HTTP `400 Bad Request` error message is returned. If you omit the version-prefix, the current version of the API (v1.42) is used. For example, calling `/info` is the same as calling `/v1.42/info`. Using the API without a version-prefix is deprecated and will be removed in a future release. Engine releases in the near future should support this version of the API, so your client will continue to work even if it is talking to a newer Engine. The API uses an open schema model, which means server may add extra properties to responses. Likewise, the server will ignore any extra query parameters and request body properties. When you write clients, you need to ignore additional properties in responses to ensure they do not break when talking to newer daemons. # Authentication Authentication for registries is handled client side. The client has to send authentication details to various endpoints that need to communicate with registries, such as `POST /images/(name)/push`. These are sent as `X-Registry-Auth` header as a [base64url encoded](https://tools.ietf.org/html/rfc4648#section-5) (JSON) string with the following structure: ``` { "username": "string", "password": "string", "email": "string", "serveraddress": "string" } ``` The `serveraddress` is a domain/IP without a protocol. Throughout this structure, double quotes are required. If you have already got an identity token from the [`/auth` endpoint](#operation/SystemAuth), you can just pass this instead of credentials: ``` { "identitytoken": "9cbaf023786cd7..." } ``` # The tags on paths define the menu sections in the ReDoc documentation, so # the usage of tags must make sense for that: # - They should be singular, not plural. # - There should not be too many tags, or the menu becomes unwieldy. 
For # example, it is preferable to add a path to the "System" tag instead of # creating a tag with a single path in it. # - The order of tags in this list defines the order in the menu. tags: # Primary objects - name: "Container" x-displayName: "Containers" description: | Create and manage containers. - name: "Image" x-displayName: "Images" - name: "Network" x-displayName: "Networks" description: | Networks are user-defined networks that containers can be attached to. See the [networking documentation](https://docs.docker.com/network/) for more information. - name: "Volume" x-displayName: "Volumes" description: | Create and manage persistent storage that can be attached to containers. - name: "Exec" x-displayName: "Exec" description: | Run new commands inside running containers. Refer to the [command-line reference](https://docs.docker.com/engine/reference/commandline/exec/) for more information. To exec a command in a container, you first need to create an exec instance, then start it. These two API endpoints are wrapped up in a single command-line command, `docker exec`. # Swarm things - name: "Swarm" x-displayName: "Swarm" description: | Engines can be clustered together in a swarm. Refer to the [swarm mode documentation](https://docs.docker.com/engine/swarm/) for more information. - name: "Node" x-displayName: "Nodes" description: | Nodes are instances of the Engine participating in a swarm. Swarm mode must be enabled for these endpoints to work. - name: "Service" x-displayName: "Services" description: | Services are the definitions of tasks to run on a swarm. Swarm mode must be enabled for these endpoints to work. - name: "Task" x-displayName: "Tasks" description: | A task is a container running on a swarm. It is the atomic scheduling unit of swarm. Swarm mode must be enabled for these endpoints to work. - name: "Secret" x-displayName: "Secrets" description: | Secrets are sensitive data that can be used by services. Swarm mode must be enabled for these endpoints to work. - name: "Config" x-displayName: "Configs" description: | Configs are application configurations that can be used by services. Swarm mode must be enabled for these endpoints to work. 
# System things - name: "Plugin" x-displayName: "Plugins" - name: "System" x-displayName: "System" definitions: Port: type: "object" description: "An open port on a container" required: [PrivatePort, Type] properties: IP: type: "string" format: "ip-address" description: "Host IP address that the container's port is mapped to" PrivatePort: type: "integer" format: "uint16" x-nullable: false description: "Port on the container" PublicPort: type: "integer" format: "uint16" description: "Port exposed on the host" Type: type: "string" x-nullable: false enum: ["tcp", "udp", "sctp"] example: PrivatePort: 8080 PublicPort: 80 Type: "tcp" MountPoint: type: "object" description: "A mount point inside a container" properties: Type: type: "string" Name: type: "string" Source: type: "string" Destination: type: "string" Driver: type: "string" Mode: type: "string" RW: type: "boolean" Propagation: type: "string" DeviceMapping: type: "object" description: "A device mapping between the host and container" properties: PathOnHost: type: "string" PathInContainer: type: "string" CgroupPermissions: type: "string" example: PathOnHost: "/dev/deviceName" PathInContainer: "/dev/deviceName" CgroupPermissions: "mrw" DeviceRequest: type: "object" description: "A request for devices to be sent to device drivers" properties: Driver: type: "string" example: "nvidia" Count: type: "integer" example: -1 DeviceIDs: type: "array" items: type: "string" example: - "0" - "1" - "GPU-fef8089b-4820-abfc-e83e-94318197576e" Capabilities: description: | A list of capabilities; an OR list of AND lists of capabilities. type: "array" items: type: "array" items: type: "string" example: # gpu AND nvidia AND compute - ["gpu", "nvidia", "compute"] Options: description: | Driver-specific options, specified as a key/value pairs. These options are passed directly to the driver. type: "object" additionalProperties: type: "string" ThrottleDevice: type: "object" properties: Path: description: "Device path" type: "string" Rate: description: "Rate" type: "integer" format: "int64" minimum: 0 Mount: type: "object" properties: Target: description: "Container path." type: "string" Source: description: "Mount source (e.g. a volume name, a host path)." type: "string" Type: description: | The mount type. Available types: - `bind` Mounts a file or directory from the host into the container. Must exist prior to creating the container. - `volume` Creates a volume with the given name and options (or uses a pre-existing volume with the same name and options). These are **not** removed when the container is removed. - `tmpfs` Create a tmpfs with the given options. The mount source cannot be specified for tmpfs. - `npipe` Mounts a named pipe from the host into the container. Must exist prior to creating the container. type: "string" enum: - "bind" - "volume" - "tmpfs" - "npipe" ReadOnly: description: "Whether the mount should be read-only." type: "boolean" Consistency: description: "The consistency requirement for the mount: `default`, `consistent`, `cached`, or `delegated`." type: "string" BindOptions: description: "Optional configuration for the `bind` type." type: "object" properties: Propagation: description: "A propagation mode with the value `[r]private`, `[r]shared`, or `[r]slave`." type: "string" enum: - "private" - "rprivate" - "shared" - "rshared" - "slave" - "rslave" NonRecursive: description: "Disable recursive bind mount." type: "boolean" default: false VolumeOptions: description: "Optional configuration for the `volume` type." 
type: "object" properties: NoCopy: description: "Populate volume with data from the target." type: "boolean" default: false Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" DriverConfig: description: "Map of driver specific options" type: "object" properties: Name: description: "Name of the driver to use to create the volume." type: "string" Options: description: "key/value map of driver specific options." type: "object" additionalProperties: type: "string" TmpfsOptions: description: "Optional configuration for the `tmpfs` type." type: "object" properties: SizeBytes: description: "The size for the tmpfs mount in bytes." type: "integer" format: "int64" Mode: description: "The permission mode for the tmpfs mount in an integer." type: "integer" RestartPolicy: description: | The behavior to apply when the container exits. The default is not to restart. An ever increasing delay (double the previous delay, starting at 100ms) is added before each restart to prevent flooding the server. type: "object" properties: Name: type: "string" description: | - Empty string means not to restart - `always` Always restart - `unless-stopped` Restart always except when the user has manually stopped the container - `on-failure` Restart only when the container exit code is non-zero enum: - "" - "always" - "unless-stopped" - "on-failure" MaximumRetryCount: type: "integer" description: | If `on-failure` is used, the number of times to retry before giving up. Resources: description: "A container's resources (cgroups config, ulimits, etc)" type: "object" properties: # Applicable to all platforms CpuShares: description: | An integer value representing this container's relative CPU weight versus other containers. type: "integer" Memory: description: "Memory limit in bytes." type: "integer" format: "int64" default: 0 # Applicable to UNIX platforms CgroupParent: description: | Path to `cgroups` under which the container's `cgroup` is created. If the path is not absolute, the path is considered to be relative to the `cgroups` path of the init process. Cgroups are created if they do not already exist. type: "string" BlkioWeight: description: "Block IO weight (relative weight)." type: "integer" minimum: 0 maximum: 1000 BlkioWeightDevice: description: | Block IO weight (relative device weight) in the form: ``` [{"Path": "device_path", "Weight": weight}] ``` type: "array" items: type: "object" properties: Path: type: "string" Weight: type: "integer" minimum: 0 BlkioDeviceReadBps: description: | Limit read rate (bytes per second) from a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" BlkioDeviceWriteBps: description: | Limit write rate (bytes per second) to a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" BlkioDeviceReadIOps: description: | Limit read rate (IO per second) from a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" BlkioDeviceWriteIOps: description: | Limit write rate (IO per second) to a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" CpuPeriod: description: "The length of a CPU period in microseconds." type: "integer" format: "int64" CpuQuota: description: | Microseconds of CPU time that the container can get in a CPU period. 
type: "integer" format: "int64" CpuRealtimePeriod: description: | The length of a CPU real-time period in microseconds. Set to 0 to allocate no time allocated to real-time tasks. type: "integer" format: "int64" CpuRealtimeRuntime: description: | The length of a CPU real-time runtime in microseconds. Set to 0 to allocate no time allocated to real-time tasks. type: "integer" format: "int64" CpusetCpus: description: | CPUs in which to allow execution (e.g., `0-3`, `0,1`). type: "string" example: "0-3" CpusetMems: description: | Memory nodes (MEMs) in which to allow execution (0-3, 0,1). Only effective on NUMA systems. type: "string" Devices: description: "A list of devices to add to the container." type: "array" items: $ref: "#/definitions/DeviceMapping" DeviceCgroupRules: description: "a list of cgroup rules to apply to the container" type: "array" items: type: "string" example: "c 13:* rwm" DeviceRequests: description: | A list of requests for devices to be sent to device drivers. type: "array" items: $ref: "#/definitions/DeviceRequest" KernelMemory: description: | Kernel memory limit in bytes. <p><br /></p> > **Deprecated**: This field is deprecated as the kernel 5.4 deprecated > `kmem.limit_in_bytes`. type: "integer" format: "int64" example: 209715200 KernelMemoryTCP: description: "Hard limit for kernel TCP buffer memory (in bytes)." type: "integer" format: "int64" MemoryReservation: description: "Memory soft limit in bytes." type: "integer" format: "int64" MemorySwap: description: | Total memory limit (memory + swap). Set as `-1` to enable unlimited swap. type: "integer" format: "int64" MemorySwappiness: description: | Tune a container's memory swappiness behavior. Accepts an integer between 0 and 100. type: "integer" format: "int64" minimum: 0 maximum: 100 NanoCpus: description: "CPU quota in units of 10<sup>-9</sup> CPUs." type: "integer" format: "int64" OomKillDisable: description: "Disable OOM Killer for the container." type: "boolean" Init: description: | Run an init inside the container that forwards signals and reaps processes. This field is omitted if empty, and the default (as configured on the daemon) is used. type: "boolean" x-nullable: true PidsLimit: description: | Tune a container's PIDs limit. Set `0` or `-1` for unlimited, or `null` to not change. type: "integer" format: "int64" x-nullable: true Ulimits: description: | A list of resource limits to set in the container. For example: ``` {"Name": "nofile", "Soft": 1024, "Hard": 2048} ``` type: "array" items: type: "object" properties: Name: description: "Name of ulimit" type: "string" Soft: description: "Soft limit" type: "integer" Hard: description: "Hard limit" type: "integer" # Applicable to Windows CpuCount: description: | The number of usable CPUs (Windows only). On Windows Server containers, the processor resource controls are mutually exclusive. The order of precedence is `CPUCount` first, then `CPUShares`, and `CPUPercent` last. type: "integer" format: "int64" CpuPercent: description: | The usable percentage of the available CPUs (Windows only). On Windows Server containers, the processor resource controls are mutually exclusive. The order of precedence is `CPUCount` first, then `CPUShares`, and `CPUPercent` last. type: "integer" format: "int64" IOMaximumIOps: description: "Maximum IOps for the container system drive (Windows only)" type: "integer" format: "int64" IOMaximumBandwidth: description: | Maximum IO in bytes per second for the container system drive (Windows only). 
type: "integer" format: "int64" Limit: description: | An object describing a limit on resources which can be requested by a task. type: "object" properties: NanoCPUs: type: "integer" format: "int64" example: 4000000000 MemoryBytes: type: "integer" format: "int64" example: 8272408576 Pids: description: | Limits the maximum number of PIDs in the container. Set `0` for unlimited. type: "integer" format: "int64" default: 0 example: 100 ResourceObject: description: | An object describing the resources which can be advertised by a node and requested by a task. type: "object" properties: NanoCPUs: type: "integer" format: "int64" example: 4000000000 MemoryBytes: type: "integer" format: "int64" example: 8272408576 GenericResources: $ref: "#/definitions/GenericResources" GenericResources: description: | User-defined resources can be either Integer resources (e.g, `SSD=3`) or String resources (e.g, `GPU=UUID1`). type: "array" items: type: "object" properties: NamedResourceSpec: type: "object" properties: Kind: type: "string" Value: type: "string" DiscreteResourceSpec: type: "object" properties: Kind: type: "string" Value: type: "integer" format: "int64" example: - DiscreteResourceSpec: Kind: "SSD" Value: 3 - NamedResourceSpec: Kind: "GPU" Value: "UUID1" - NamedResourceSpec: Kind: "GPU" Value: "UUID2" HealthConfig: description: "A test to perform to check that the container is healthy." type: "object" properties: Test: description: | The test to perform. Possible values are: - `[]` inherit healthcheck from image or parent image - `["NONE"]` disable healthcheck - `["CMD", args...]` exec arguments directly - `["CMD-SHELL", command]` run command with system's default shell type: "array" items: type: "string" Interval: description: | The time to wait between checks in nanoseconds. It should be 0 or at least 1000000 (1 ms). 0 means inherit. type: "integer" Timeout: description: | The time to wait before considering the check to have hung. It should be 0 or at least 1000000 (1 ms). 0 means inherit. type: "integer" Retries: description: | The number of consecutive failures needed to consider a container as unhealthy. 0 means inherit. type: "integer" StartPeriod: description: | Start period for the container to initialize before starting health-retries countdown in nanoseconds. It should be 0 or at least 1000000 (1 ms). 0 means inherit. type: "integer" Health: description: | Health stores information about the container's healthcheck results. type: "object" properties: Status: description: | Status is one of `none`, `starting`, `healthy` or `unhealthy` - "none" Indicates there is no healthcheck - "starting" Starting indicates that the container is not yet ready - "healthy" Healthy indicates that the container is running correctly - "unhealthy" Unhealthy indicates that the container has a problem type: "string" enum: - "none" - "starting" - "healthy" - "unhealthy" example: "healthy" FailingStreak: description: "FailingStreak is the number of consecutive failures" type: "integer" example: 0 Log: type: "array" description: | Log contains the last few results (oldest first) items: x-nullable: true $ref: "#/definitions/HealthcheckResult" HealthcheckResult: description: | HealthcheckResult stores information about a single run of a healthcheck probe type: "object" properties: Start: description: | Date and time at which this check started in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. 
type: "string" format: "date-time" example: "2020-01-04T10:44:24.496525531Z" End: description: | Date and time at which this check ended in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2020-01-04T10:45:21.364524523Z" ExitCode: description: | ExitCode meanings: - `0` healthy - `1` unhealthy - `2` reserved (considered unhealthy) - other values: error running probe type: "integer" example: 0 Output: description: "Output from last check" type: "string" HostConfig: description: "Container configuration that depends on the host we are running on" allOf: - $ref: "#/definitions/Resources" - type: "object" properties: # Applicable to all platforms Binds: type: "array" description: | A list of volume bindings for this container. Each volume binding is a string in one of these forms: - `host-src:container-dest[:options]` to bind-mount a host path into the container. Both `host-src`, and `container-dest` must be an _absolute_ path. - `volume-name:container-dest[:options]` to bind-mount a volume managed by a volume driver into the container. `container-dest` must be an _absolute_ path. `options` is an optional, comma-delimited list of: - `nocopy` disables automatic copying of data from the container path to the volume. The `nocopy` flag only applies to named volumes. - `[ro|rw]` mounts a volume read-only or read-write, respectively. If omitted or set to `rw`, volumes are mounted read-write. - `[z|Z]` applies SELinux labels to allow or deny multiple containers to read and write to the same volume. - `z`: a _shared_ content label is applied to the content. This label indicates that multiple containers can share the volume content, for both reading and writing. - `Z`: a _private unshared_ label is applied to the content. This label indicates that only the current container can use a private volume. Labeling systems such as SELinux require proper labels to be placed on volume content that is mounted into a container. Without a label, the security system can prevent a container's processes from using the content. By default, the labels set by the host operating system are not modified. - `[[r]shared|[r]slave|[r]private]` specifies mount [propagation behavior](https://www.kernel.org/doc/Documentation/filesystems/sharedsubtree.txt). This only applies to bind-mounted volumes, not internal volumes or named volumes. Mount propagation requires the source mount point (the location where the source directory is mounted in the host operating system) to have the correct propagation properties. For shared volumes, the source mount point must be set to `shared`. For slave volumes, the mount must be set to either `shared` or `slave`. items: type: "string" ContainerIDFile: type: "string" description: "Path to a file where the container ID is written" LogConfig: type: "object" description: "The logging configuration for this container" properties: Type: type: "string" enum: - "json-file" - "syslog" - "journald" - "gelf" - "fluentd" - "awslogs" - "splunk" - "etwlogs" - "none" Config: type: "object" additionalProperties: type: "string" NetworkMode: type: "string" description: | Network mode to use for this container. Supported standard values are: `bridge`, `host`, `none`, and `container:<name|id>`. Any other value is taken as a custom network's name to which this container should connect to. 
PortBindings: $ref: "#/definitions/PortMap" RestartPolicy: $ref: "#/definitions/RestartPolicy" AutoRemove: type: "boolean" description: | Automatically remove the container when the container's process exits. This has no effect if `RestartPolicy` is set. VolumeDriver: type: "string" description: "Driver that this container uses to mount volumes." VolumesFrom: type: "array" description: | A list of volumes to inherit from another container, specified in the form `<container name>[:<ro|rw>]`. items: type: "string" Mounts: description: | Specification for mounts to be added to the container. type: "array" items: $ref: "#/definitions/Mount" # Applicable to UNIX platforms CapAdd: type: "array" description: | A list of kernel capabilities to add to the container. Conflicts with option 'Capabilities'. items: type: "string" CapDrop: type: "array" description: | A list of kernel capabilities to drop from the container. Conflicts with option 'Capabilities'. items: type: "string" CgroupnsMode: type: "string" enum: - "private" - "host" description: | cgroup namespace mode for the container. Possible values are: - `"private"`: the container runs in its own private cgroup namespace - `"host"`: use the host system's cgroup namespace If not specified, the daemon default is used, which can either be `"private"` or `"host"`, depending on daemon version, kernel support and configuration. Dns: type: "array" description: "A list of DNS servers for the container to use." items: type: "string" DnsOptions: type: "array" description: "A list of DNS options." items: type: "string" DnsSearch: type: "array" description: "A list of DNS search domains." items: type: "string" ExtraHosts: type: "array" description: | A list of hostnames/IP mappings to add to the container's `/etc/hosts` file. Specified in the form `["hostname:IP"]`. items: type: "string" GroupAdd: type: "array" description: | A list of additional groups that the container process will run as. items: type: "string" IpcMode: type: "string" description: | IPC sharing mode for the container. Possible values are: - `"none"`: own private IPC namespace, with /dev/shm not mounted - `"private"`: own private IPC namespace - `"shareable"`: own private IPC namespace, with a possibility to share it with other containers - `"container:<name|id>"`: join another (shareable) container's IPC namespace - `"host"`: use the host system's IPC namespace If not specified, daemon default is used, which can either be `"private"` or `"shareable"`, depending on daemon version and configuration. Cgroup: type: "string" description: "Cgroup to use for the container." Links: type: "array" description: | A list of links for the container in the form `container_name:alias`. items: type: "string" OomScoreAdj: type: "integer" description: | An integer value containing the score given to the container in order to tune OOM killer preferences. example: 500 PidMode: type: "string" description: | Set the PID (Process) Namespace mode for the container. It can be either: - `"container:<name|id>"`: joins another container's PID namespace - `"host"`: use the host's PID namespace inside the container Privileged: type: "boolean" description: "Gives the container full access to the host." PublishAllPorts: type: "boolean" description: | Allocates an ephemeral host port for all of a container's exposed ports. Ports are de-allocated when the container stops and allocated when the container starts. The allocated port might be changed when restarting the container. 
The port is selected from the ephemeral port range that depends on the kernel. For example, on Linux the range is defined by `/proc/sys/net/ipv4/ip_local_port_range`. ReadonlyRootfs: type: "boolean" description: "Mount the container's root filesystem as read only." SecurityOpt: type: "array" description: "A list of string values to customize labels for MLS systems, such as SELinux." items: type: "string" StorageOpt: type: "object" description: | Storage driver options for this container, in the form `{"size": "120G"}`. additionalProperties: type: "string" Tmpfs: type: "object" description: | A map of container directories which should be replaced by tmpfs mounts, and their corresponding mount options. For example: ``` { "/run": "rw,noexec,nosuid,size=65536k" } ``` additionalProperties: type: "string" UTSMode: type: "string" description: "UTS namespace to use for the container." UsernsMode: type: "string" description: | Sets the usernamespace mode for the container when usernamespace remapping option is enabled. ShmSize: type: "integer" description: | Size of `/dev/shm` in bytes. If omitted, the system uses 64MB. minimum: 0 Sysctls: type: "object" description: | A list of kernel parameters (sysctls) to set in the container. For example: ``` {"net.ipv4.ip_forward": "1"} ``` additionalProperties: type: "string" Runtime: type: "string" description: "Runtime to use with this container." # Applicable to Windows ConsoleSize: type: "array" description: | Initial console size, as an `[height, width]` array. (Windows only) minItems: 2 maxItems: 2 items: type: "integer" minimum: 0 Isolation: type: "string" description: | Isolation technology of the container. (Windows only) enum: - "default" - "process" - "hyperv" MaskedPaths: type: "array" description: | The list of paths to be masked inside the container (this overrides the default set of paths). items: type: "string" ReadonlyPaths: type: "array" description: | The list of paths to be set as read-only inside the container (this overrides the default set of paths). items: type: "string" ContainerConfig: description: "Configuration for a container that is portable between hosts" type: "object" properties: Hostname: description: "The hostname to use for the container, as a valid RFC 1123 hostname." type: "string" Domainname: description: "The domain name to use for the container." type: "string" User: description: "The user that commands are run as inside the container." type: "string" AttachStdin: description: "Whether to attach to `stdin`." type: "boolean" default: false AttachStdout: description: "Whether to attach to `stdout`." type: "boolean" default: true AttachStderr: description: "Whether to attach to `stderr`." type: "boolean" default: true ExposedPorts: description: | An object mapping ports to an empty object in the form: `{"<port>/<tcp|udp|sctp>": {}}` type: "object" additionalProperties: type: "object" enum: - {} default: {} Tty: description: | Attach standard streams to a TTY, including `stdin` if it is not closed. type: "boolean" default: false OpenStdin: description: "Open `stdin`" type: "boolean" default: false StdinOnce: description: "Close `stdin` after one attached client disconnects" type: "boolean" default: false Env: description: | A list of environment variables to set inside the container in the form `["VAR=value", ...]`. A variable without `=` is removed from the environment, rather than to have an empty value. type: "array" items: type: "string" Cmd: description: | Command to run specified as a string or an array of strings. 
type: "array" items: type: "string" Healthcheck: $ref: "#/definitions/HealthConfig" ArgsEscaped: description: "Command is already escaped (Windows only)" type: "boolean" Image: description: | The name of the image to use when creating the container/ type: "string" Volumes: description: | An object mapping mount point paths inside the container to empty objects. type: "object" additionalProperties: type: "object" enum: - {} default: {} WorkingDir: description: "The working directory for commands to run in." type: "string" Entrypoint: description: | The entry point for the container as a string or an array of strings. If the array consists of exactly one empty string (`[""]`) then the entry point is reset to system default (i.e., the entry point used by docker when there is no `ENTRYPOINT` instruction in the `Dockerfile`). type: "array" items: type: "string" NetworkDisabled: description: "Disable networking for the container." type: "boolean" MacAddress: description: "MAC address of the container." type: "string" OnBuild: description: | `ONBUILD` metadata that were defined in the image's `Dockerfile`. type: "array" items: type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" StopSignal: description: | Signal to stop a container as a string or unsigned integer. type: "string" default: "SIGTERM" StopTimeout: description: "Timeout to stop a container in seconds." type: "integer" default: 10 Shell: description: | Shell for when `RUN`, `CMD`, and `ENTRYPOINT` uses a shell. type: "array" items: type: "string" NetworkingConfig: description: | NetworkingConfig represents the container's networking configuration for each of its interfaces. It is used for the networking configs specified in the `docker create` and `docker network connect` commands. type: "object" properties: EndpointsConfig: description: | A mapping of network name to endpoint configuration for that network. type: "object" additionalProperties: $ref: "#/definitions/EndpointSettings" example: # putting an example here, instead of using the example values from # /definitions/EndpointSettings, because containers/create currently # does not support attaching to multiple networks, so the example request # would be confusing if it showed that multiple networks can be contained # in the EndpointsConfig. # TODO remove once we support multiple networks on container create (see https://github.com/moby/moby/blob/07e6b843594e061f82baa5fa23c2ff7d536c2a05/daemon/create.go#L323) EndpointsConfig: isolated_nw: IPAMConfig: IPv4Address: "172.20.30.33" IPv6Address: "2001:db8:abcd::3033" LinkLocalIPs: - "169.254.34.68" - "fe80::3468" Links: - "container_1" - "container_2" Aliases: - "server_x" - "server_y" NetworkSettings: description: "NetworkSettings exposes the network settings in the API" type: "object" properties: Bridge: description: Name of the network's bridge (for example, `docker0`). type: "string" example: "docker0" SandboxID: description: SandboxID uniquely represents a container's network stack. type: "string" example: "9d12daf2c33f5959c8bf90aa513e4f65b561738661003029ec84830cd503a0c3" HairpinMode: description: | Indicates if hairpin NAT should be enabled on the virtual interface. type: "boolean" example: false LinkLocalIPv6Address: description: IPv6 unicast address using the link-local prefix. type: "string" example: "fe80::42:acff:fe11:1" LinkLocalIPv6PrefixLen: description: Prefix length of the IPv6 unicast address. 
type: "integer" example: "64" Ports: $ref: "#/definitions/PortMap" SandboxKey: description: SandboxKey identifies the sandbox type: "string" example: "/var/run/docker/netns/8ab54b426c38" # TODO is SecondaryIPAddresses actually used? SecondaryIPAddresses: description: "" type: "array" items: $ref: "#/definitions/Address" x-nullable: true # TODO is SecondaryIPv6Addresses actually used? SecondaryIPv6Addresses: description: "" type: "array" items: $ref: "#/definitions/Address" x-nullable: true # TODO properties below are part of DefaultNetworkSettings, which is # marked as deprecated since Docker 1.9 and to be removed in Docker v17.12 EndpointID: description: | EndpointID uniquely represents a service endpoint in a Sandbox. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "b88f5b905aabf2893f3cbc4ee42d1ea7980bbc0a92e2c8922b1e1795298afb0b" Gateway: description: | Gateway address for the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "172.17.0.1" GlobalIPv6Address: description: | Global IPv6 address for the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "2001:db8::5689" GlobalIPv6PrefixLen: description: | Mask length of the global IPv6 address. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "integer" example: 64 IPAddress: description: | IPv4 address for the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "172.17.0.4" IPPrefixLen: description: | Mask length of the IPv4 address. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "integer" example: 16 IPv6Gateway: description: | IPv6 gateway address for this network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. 
Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "2001:db8:2::100" MacAddress: description: | MAC address for the container on the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "02:42:ac:11:00:04" Networks: description: | Information about all networks that the container is connected to. type: "object" additionalProperties: $ref: "#/definitions/EndpointSettings" Address: description: Address represents an IPv4 or IPv6 IP address. type: "object" properties: Addr: description: IP address. type: "string" PrefixLen: description: Mask length of the IP address. type: "integer" PortMap: description: | PortMap describes the mapping of container ports to host ports, using the container's port-number and protocol as key in the format `<port>/<protocol>`, for example, `80/udp`. If a container's port is mapped for multiple protocols, separate entries are added to the mapping table. type: "object" additionalProperties: type: "array" x-nullable: true items: $ref: "#/definitions/PortBinding" example: "443/tcp": - HostIp: "127.0.0.1" HostPort: "4443" "80/tcp": - HostIp: "0.0.0.0" HostPort: "80" - HostIp: "0.0.0.0" HostPort: "8080" "80/udp": - HostIp: "0.0.0.0" HostPort: "80" "53/udp": - HostIp: "0.0.0.0" HostPort: "53" "2377/tcp": null PortBinding: description: | PortBinding represents a binding between a host IP address and a host port. type: "object" properties: HostIp: description: "Host IP address that the container's port is mapped to." type: "string" example: "127.0.0.1" HostPort: description: "Host port number that the container's port is mapped to." type: "string" example: "4443" GraphDriverData: description: "Information about a container's graph driver." 
type: "object" required: [Name, Data] properties: Name: type: "string" x-nullable: false Data: type: "object" x-nullable: false additionalProperties: type: "string" Image: type: "object" required: - Id - Parent - Comment - Created - Container - DockerVersion - Author - Architecture - Os - Size - VirtualSize - GraphDriver - RootFS properties: Id: type: "string" x-nullable: false RepoTags: type: "array" items: type: "string" RepoDigests: type: "array" items: type: "string" Parent: type: "string" x-nullable: false Comment: type: "string" x-nullable: false Created: type: "string" x-nullable: false Container: type: "string" x-nullable: false ContainerConfig: $ref: "#/definitions/ContainerConfig" DockerVersion: type: "string" x-nullable: false Author: type: "string" x-nullable: false Config: $ref: "#/definitions/ContainerConfig" Architecture: type: "string" x-nullable: false Os: type: "string" x-nullable: false OsVersion: type: "string" Size: type: "integer" format: "int64" x-nullable: false VirtualSize: type: "integer" format: "int64" x-nullable: false GraphDriver: $ref: "#/definitions/GraphDriverData" RootFS: type: "object" required: [Type] properties: Type: type: "string" x-nullable: false Layers: type: "array" items: type: "string" BaseLayer: type: "string" Metadata: type: "object" properties: LastTagTime: type: "string" format: "dateTime" ImageSummary: type: "object" required: - Id - ParentId - RepoTags - RepoDigests - Created - Size - SharedSize - VirtualSize - Labels - Containers properties: Id: type: "string" x-nullable: false ParentId: type: "string" x-nullable: false RepoTags: type: "array" x-nullable: false items: type: "string" RepoDigests: type: "array" x-nullable: false items: type: "string" Created: type: "integer" x-nullable: false Size: type: "integer" x-nullable: false SharedSize: type: "integer" x-nullable: false VirtualSize: type: "integer" x-nullable: false Labels: type: "object" x-nullable: false additionalProperties: type: "string" Containers: x-nullable: false type: "integer" AuthConfig: type: "object" properties: username: type: "string" password: type: "string" email: type: "string" serveraddress: type: "string" example: username: "hannibal" password: "xxxx" serveraddress: "https://index.docker.io/v1/" ProcessConfig: type: "object" properties: privileged: type: "boolean" user: type: "string" tty: type: "boolean" entrypoint: type: "string" arguments: type: "array" items: type: "string" Volume: type: "object" required: [Name, Driver, Mountpoint, Labels, Scope, Options] properties: Name: type: "string" description: "Name of the volume." x-nullable: false Driver: type: "string" description: "Name of the volume driver used by the volume." x-nullable: false Mountpoint: type: "string" description: "Mount path of the volume on the host." x-nullable: false CreatedAt: type: "string" format: "dateTime" description: "Date/Time the volume was created." Status: type: "object" description: | Low-level details about the volume, provided by the volume driver. Details are returned as a map with key/value pairs: `{"key":"value","key2":"value2"}`. The `Status` field is optional, and is omitted if the volume driver does not support this feature. additionalProperties: type: "object" Labels: type: "object" description: "User-defined key/value metadata." x-nullable: false additionalProperties: type: "string" Scope: type: "string" description: | The level at which the volume exists. Either `global` for cluster-wide, or `local` for machine level. 
default: "local" x-nullable: false enum: ["local", "global"] Options: type: "object" description: | The driver specific options used when creating the volume. additionalProperties: type: "string" UsageData: type: "object" x-nullable: true required: [Size, RefCount] description: | Usage details about the volume. This information is used by the `GET /system/df` endpoint, and omitted in other endpoints. properties: Size: type: "integer" default: -1 description: | Amount of disk space used by the volume (in bytes). This information is only available for volumes created with the `"local"` volume driver. For volumes created with other volume drivers, this field is set to `-1` ("not available") x-nullable: false RefCount: type: "integer" default: -1 description: | The number of containers referencing this volume. This field is set to `-1` if the reference-count is not available. x-nullable: false example: Name: "tardis" Driver: "custom" Mountpoint: "/var/lib/docker/volumes/tardis" Status: hello: "world" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Scope: "local" CreatedAt: "2016-06-07T20:31:11.853781916Z" Network: type: "object" properties: Name: type: "string" Id: type: "string" Created: type: "string" format: "dateTime" Scope: type: "string" Driver: type: "string" EnableIPv6: type: "boolean" IPAM: $ref: "#/definitions/IPAM" Internal: type: "boolean" Attachable: type: "boolean" Ingress: type: "boolean" Containers: type: "object" additionalProperties: $ref: "#/definitions/NetworkContainer" Options: type: "object" additionalProperties: type: "string" Labels: type: "object" additionalProperties: type: "string" example: Name: "net01" Id: "7d86d31b1478e7cca9ebed7e73aa0fdeec46c5ca29497431d3007d2d9e15ed99" Created: "2016-10-19T04:33:30.360899459Z" Scope: "local" Driver: "bridge" EnableIPv6: false IPAM: Driver: "default" Config: - Subnet: "172.19.0.0/16" Gateway: "172.19.0.1" Options: foo: "bar" Internal: false Attachable: false Ingress: false Containers: 19a4d5d687db25203351ed79d478946f861258f018fe384f229f2efa4b23513c: Name: "test" EndpointID: "628cadb8bcb92de107b2a1e516cbffe463e321f548feb37697cce00ad694f21a" MacAddress: "02:42:ac:13:00:02" IPv4Address: "172.19.0.2/16" IPv6Address: "" Options: com.docker.network.bridge.default_bridge: "true" com.docker.network.bridge.enable_icc: "true" com.docker.network.bridge.enable_ip_masquerade: "true" com.docker.network.bridge.host_binding_ipv4: "0.0.0.0" com.docker.network.bridge.name: "docker0" com.docker.network.driver.mtu: "1500" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" IPAM: type: "object" properties: Driver: description: "Name of the IPAM driver to use." type: "string" default: "default" Config: description: | List of IPAM configuration options, specified as a map: ``` {"Subnet": <CIDR>, "IPRange": <CIDR>, "Gateway": <IP address>, "AuxAddress": <device_name:IP address>} ``` type: "array" items: type: "object" additionalProperties: type: "string" Options: description: "Driver-specific options, specified as a map." 
type: "object" additionalProperties: type: "string" NetworkContainer: type: "object" properties: Name: type: "string" EndpointID: type: "string" MacAddress: type: "string" IPv4Address: type: "string" IPv6Address: type: "string" BuildInfo: type: "object" properties: id: type: "string" stream: type: "string" error: type: "string" errorDetail: $ref: "#/definitions/ErrorDetail" status: type: "string" progress: type: "string" progressDetail: $ref: "#/definitions/ProgressDetail" aux: $ref: "#/definitions/ImageID" BuildCache: type: "object" properties: ID: type: "string" Parent: type: "string" Type: type: "string" Description: type: "string" InUse: type: "boolean" Shared: type: "boolean" Size: description: | Amount of disk space used by the build cache (in bytes). type: "integer" CreatedAt: description: | Date and time at which the build cache was created in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2016-08-18T10:44:24.496525531Z" LastUsedAt: description: | Date and time at which the build cache was last used in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" x-nullable: true example: "2017-08-09T07:09:37.632105588Z" UsageCount: type: "integer" ImageID: type: "object" description: "Image ID or Digest" properties: ID: type: "string" example: ID: "sha256:85f05633ddc1c50679be2b16a0479ab6f7637f8884e0cfe0f4d20e1ebb3d6e7c" CreateImageInfo: type: "object" properties: id: type: "string" error: type: "string" status: type: "string" progress: type: "string" progressDetail: $ref: "#/definitions/ProgressDetail" PushImageInfo: type: "object" properties: error: type: "string" status: type: "string" progress: type: "string" progressDetail: $ref: "#/definitions/ProgressDetail" ErrorDetail: type: "object" properties: code: type: "integer" message: type: "string" ProgressDetail: type: "object" properties: current: type: "integer" total: type: "integer" ErrorResponse: description: "Represents an error." type: "object" required: ["message"] properties: message: description: "The error message." type: "string" x-nullable: false example: message: "Something went wrong." IdResponse: description: "Response to an API call that returns just an Id" type: "object" required: ["Id"] properties: Id: description: "The id of the newly created object." type: "string" x-nullable: false EndpointSettings: description: "Configuration for a network endpoint." type: "object" properties: # Configurations IPAMConfig: $ref: "#/definitions/EndpointIPAMConfig" Links: type: "array" items: type: "string" example: - "container_1" - "container_2" Aliases: type: "array" items: type: "string" example: - "server_x" - "server_y" # Operational data NetworkID: description: | Unique ID of the network. type: "string" example: "08754567f1f40222263eab4102e1c733ae697e8e354aa9cd6e18d7402835292a" EndpointID: description: | Unique ID for the service endpoint in a Sandbox. type: "string" example: "b88f5b905aabf2893f3cbc4ee42d1ea7980bbc0a92e2c8922b1e1795298afb0b" Gateway: description: | Gateway address for this network. type: "string" example: "172.17.0.1" IPAddress: description: | IPv4 address. type: "string" example: "172.17.0.4" IPPrefixLen: description: | Mask length of the IPv4 address. type: "integer" example: 16 IPv6Gateway: description: | IPv6 gateway address. type: "string" example: "2001:db8:2::100" GlobalIPv6Address: description: | Global IPv6 address. 
type: "string" example: "2001:db8::5689" GlobalIPv6PrefixLen: description: | Mask length of the global IPv6 address. type: "integer" format: "int64" example: 64 MacAddress: description: | MAC address for the endpoint on this network. type: "string" example: "02:42:ac:11:00:04" DriverOpts: description: | DriverOpts is a mapping of driver options and values. These options are passed directly to the driver and are driver specific. type: "object" x-nullable: true additionalProperties: type: "string" example: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" EndpointIPAMConfig: description: | EndpointIPAMConfig represents an endpoint's IPAM configuration. type: "object" x-nullable: true properties: IPv4Address: type: "string" example: "172.20.30.33" IPv6Address: type: "string" example: "2001:db8:abcd::3033" LinkLocalIPs: type: "array" items: type: "string" example: - "169.254.34.68" - "fe80::3468" PluginMount: type: "object" x-nullable: false required: [Name, Description, Settable, Source, Destination, Type, Options] properties: Name: type: "string" x-nullable: false example: "some-mount" Description: type: "string" x-nullable: false example: "This is a mount that's used by the plugin." Settable: type: "array" items: type: "string" Source: type: "string" example: "/var/lib/docker/plugins/" Destination: type: "string" x-nullable: false example: "/mnt/state" Type: type: "string" x-nullable: false example: "bind" Options: type: "array" items: type: "string" example: - "rbind" - "rw" PluginDevice: type: "object" required: [Name, Description, Settable, Path] x-nullable: false properties: Name: type: "string" x-nullable: false Description: type: "string" x-nullable: false Settable: type: "array" items: type: "string" Path: type: "string" example: "/dev/fuse" PluginEnv: type: "object" x-nullable: false required: [Name, Description, Settable, Value] properties: Name: x-nullable: false type: "string" Description: x-nullable: false type: "string" Settable: type: "array" items: type: "string" Value: type: "string" PluginInterfaceType: type: "object" x-nullable: false required: [Prefix, Capability, Version] properties: Prefix: type: "string" x-nullable: false Capability: type: "string" x-nullable: false Version: type: "string" x-nullable: false Plugin: description: "A plugin for the Engine API" type: "object" required: [Settings, Enabled, Config, Name] properties: Id: type: "string" example: "5724e2c8652da337ab2eedd19fc6fc0ec908e4bd907c7421bf6a8dfc70c4c078" Name: type: "string" x-nullable: false example: "tiborvass/sample-volume-plugin" Enabled: description: True if the plugin is running. False if the plugin is not running, only installed. type: "boolean" x-nullable: false example: true Settings: description: "Settings that can be modified by users." type: "object" x-nullable: false required: [Args, Devices, Env, Mounts] properties: Mounts: type: "array" items: $ref: "#/definitions/PluginMount" Env: type: "array" items: type: "string" example: - "DEBUG=0" Args: type: "array" items: type: "string" Devices: type: "array" items: $ref: "#/definitions/PluginDevice" PluginReference: description: "plugin remote reference used to push/pull the plugin" type: "string" x-nullable: false example: "localhost:5000/tiborvass/sample-volume-plugin:latest" Config: description: "The config of a plugin." 
type: "object" x-nullable: false required: - Description - Documentation - Interface - Entrypoint - WorkDir - Network - Linux - PidHost - PropagatedMount - IpcHost - Mounts - Env - Args properties: DockerVersion: description: "Docker Version used to create the plugin" type: "string" x-nullable: false example: "17.06.0-ce" Description: type: "string" x-nullable: false example: "A sample volume plugin for Docker" Documentation: type: "string" x-nullable: false example: "https://docs.docker.com/engine/extend/plugins/" Interface: description: "The interface between Docker and the plugin" x-nullable: false type: "object" required: [Types, Socket] properties: Types: type: "array" items: $ref: "#/definitions/PluginInterfaceType" example: - "docker.volumedriver/1.0" Socket: type: "string" x-nullable: false example: "plugins.sock" ProtocolScheme: type: "string" example: "some.protocol/v1.0" description: "Protocol to use for clients connecting to the plugin." enum: - "" - "moby.plugins.http/v1" Entrypoint: type: "array" items: type: "string" example: - "/usr/bin/sample-volume-plugin" - "/data" WorkDir: type: "string" x-nullable: false example: "/bin/" User: type: "object" x-nullable: false properties: UID: type: "integer" format: "uint32" example: 1000 GID: type: "integer" format: "uint32" example: 1000 Network: type: "object" x-nullable: false required: [Type] properties: Type: x-nullable: false type: "string" example: "host" Linux: type: "object" x-nullable: false required: [Capabilities, AllowAllDevices, Devices] properties: Capabilities: type: "array" items: type: "string" example: - "CAP_SYS_ADMIN" - "CAP_SYSLOG" AllowAllDevices: type: "boolean" x-nullable: false example: false Devices: type: "array" items: $ref: "#/definitions/PluginDevice" PropagatedMount: type: "string" x-nullable: false example: "/mnt/volumes" IpcHost: type: "boolean" x-nullable: false example: false PidHost: type: "boolean" x-nullable: false example: false Mounts: type: "array" items: $ref: "#/definitions/PluginMount" Env: type: "array" items: $ref: "#/definitions/PluginEnv" example: - Name: "DEBUG" Description: "If set, prints debug messages" Settable: null Value: "0" Args: type: "object" x-nullable: false required: [Name, Description, Settable, Value] properties: Name: x-nullable: false type: "string" example: "args" Description: x-nullable: false type: "string" example: "command line arguments" Settable: type: "array" items: type: "string" Value: type: "array" items: type: "string" rootfs: type: "object" properties: type: type: "string" example: "layers" diff_ids: type: "array" items: type: "string" example: - "sha256:675532206fbf3030b8458f88d6e26d4eb1577688a25efec97154c94e8b6b4887" - "sha256:e216a057b1cb1efc11f8a268f37ef62083e70b1b38323ba252e25ac88904a7e8" ObjectVersion: description: | The version number of the object such as node, service, etc. This is needed to avoid conflicting writes. The client must send the version number along with the modified specification when updating these objects. This approach ensures safe concurrency and determinism in that the change on the object may not be applied if the version number has changed from the last read. In other words, if two update requests specify the same base version, only one of the requests can succeed. As a result, two separate update requests that happen at the same time will not unintentionally overwrite each other. 
type: "object" properties: Index: type: "integer" format: "uint64" example: 373531 NodeSpec: type: "object" properties: Name: description: "Name for the node." type: "string" example: "my-node" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" Role: description: "Role of the node." type: "string" enum: - "worker" - "manager" example: "manager" Availability: description: "Availability of the node." type: "string" enum: - "active" - "pause" - "drain" example: "active" example: Availability: "active" Name: "node-name" Role: "manager" Labels: foo: "bar" Node: type: "object" properties: ID: type: "string" example: "24ifsmvkjbyhk" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: description: | Date and time at which the node was added to the swarm in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2016-08-18T10:44:24.496525531Z" UpdatedAt: description: | Date and time at which the node was last updated in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2017-08-09T07:09:37.632105588Z" Spec: $ref: "#/definitions/NodeSpec" Description: $ref: "#/definitions/NodeDescription" Status: $ref: "#/definitions/NodeStatus" ManagerStatus: $ref: "#/definitions/ManagerStatus" NodeDescription: description: | NodeDescription encapsulates the properties of the Node as reported by the agent. type: "object" properties: Hostname: type: "string" example: "bf3067039e47" Platform: $ref: "#/definitions/Platform" Resources: $ref: "#/definitions/ResourceObject" Engine: $ref: "#/definitions/EngineDescription" TLSInfo: $ref: "#/definitions/TLSInfo" Platform: description: | Platform represents the platform (Arch/OS). type: "object" properties: Architecture: description: | Architecture represents the hardware architecture (for example, `x86_64`). type: "string" example: "x86_64" OS: description: | OS represents the Operating System (for example, `linux` or `windows`). type: "string" example: "linux" EngineDescription: description: "EngineDescription provides information about an engine." type: "object" properties: EngineVersion: type: "string" example: "17.06.0" Labels: type: "object" additionalProperties: type: "string" example: foo: "bar" Plugins: type: "array" items: type: "object" properties: Type: type: "string" Name: type: "string" example: - Type: "Log" Name: "awslogs" - Type: "Log" Name: "fluentd" - Type: "Log" Name: "gcplogs" - Type: "Log" Name: "gelf" - Type: "Log" Name: "journald" - Type: "Log" Name: "json-file" - Type: "Log" Name: "logentries" - Type: "Log" Name: "splunk" - Type: "Log" Name: "syslog" - Type: "Network" Name: "bridge" - Type: "Network" Name: "host" - Type: "Network" Name: "ipvlan" - Type: "Network" Name: "macvlan" - Type: "Network" Name: "null" - Type: "Network" Name: "overlay" - Type: "Volume" Name: "local" - Type: "Volume" Name: "localhost:5000/vieux/sshfs:latest" - Type: "Volume" Name: "vieux/sshfs:latest" TLSInfo: description: | Information about the issuer of leaf TLS certificates and the trusted root CA certificate. type: "object" properties: TrustRoot: description: | The root CA certificate(s) that are used to validate leaf TLS certificates. type: "string" CertIssuerSubject: description: The base64-url-safe-encoded raw subject bytes of the issuer. type: "string" CertIssuerPublicKey: description: | The base64-url-safe-encoded raw public key bytes of the issuer. 
type: "string" example: TrustRoot: | -----BEGIN CERTIFICATE----- MIIBajCCARCgAwIBAgIUbYqrLSOSQHoxD8CwG6Bi2PJi9c8wCgYIKoZIzj0EAwIw EzERMA8GA1UEAxMIc3dhcm0tY2EwHhcNMTcwNDI0MjE0MzAwWhcNMzcwNDE5MjE0 MzAwWjATMREwDwYDVQQDEwhzd2FybS1jYTBZMBMGByqGSM49AgEGCCqGSM49AwEH A0IABJk/VyMPYdaqDXJb/VXh5n/1Yuv7iNrxV3Qb3l06XD46seovcDWs3IZNV1lf 3Skyr0ofcchipoiHkXBODojJydSjQjBAMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMB Af8EBTADAQH/MB0GA1UdDgQWBBRUXxuRcnFjDfR/RIAUQab8ZV/n4jAKBggqhkjO PQQDAgNIADBFAiAy+JTe6Uc3KyLCMiqGl2GyWGQqQDEcO3/YG36x7om65AIhAJvz pxv6zFeVEkAEEkqIYi0omA9+CjanB/6Bz4n1uw8H -----END CERTIFICATE----- CertIssuerSubject: "MBMxETAPBgNVBAMTCHN3YXJtLWNh" CertIssuerPublicKey: "MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEmT9XIw9h1qoNclv9VeHmf/Vi6/uI2vFXdBveXTpcPjqx6i9wNazchk1XWV/dKTKvSh9xyGKmiIeRcE4OiMnJ1A==" NodeStatus: description: | NodeStatus represents the status of a node. It provides the current status of the node, as seen by the manager. type: "object" properties: State: $ref: "#/definitions/NodeState" Message: type: "string" example: "" Addr: description: "IP address of the node." type: "string" example: "172.17.0.2" NodeState: description: "NodeState represents the state of a node." type: "string" enum: - "unknown" - "down" - "ready" - "disconnected" example: "ready" ManagerStatus: description: | ManagerStatus represents the status of a manager. It provides the current status of a node's manager component, if the node is a manager. x-nullable: true type: "object" properties: Leader: type: "boolean" default: false example: true Reachability: $ref: "#/definitions/Reachability" Addr: description: | The IP address and port at which the manager is reachable. type: "string" example: "10.0.0.46:2377" Reachability: description: "Reachability represents the reachability of a node." type: "string" enum: - "unknown" - "unreachable" - "reachable" example: "reachable" SwarmSpec: description: "User modifiable swarm configuration." type: "object" properties: Name: description: "Name of the swarm." type: "string" example: "default" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" example: com.example.corp.type: "production" com.example.corp.department: "engineering" Orchestration: description: "Orchestration configuration." type: "object" x-nullable: true properties: TaskHistoryRetentionLimit: description: | The number of historic tasks to keep per instance or node. If negative, never remove completed or failed tasks. type: "integer" format: "int64" example: 10 Raft: description: "Raft configuration." type: "object" properties: SnapshotInterval: description: "The number of log entries between snapshots." type: "integer" format: "uint64" example: 10000 KeepOldSnapshots: description: | The number of snapshots to keep beyond the current snapshot. type: "integer" format: "uint64" LogEntriesForSlowFollowers: description: | The number of log entries to keep around to sync up slow followers after a snapshot is created. type: "integer" format: "uint64" example: 500 ElectionTick: description: | The number of ticks that a follower will wait for a message from the leader before becoming a candidate and starting an election. `ElectionTick` must be greater than `HeartbeatTick`. A tick currently defaults to one second, so these translate directly to seconds currently, but this is NOT guaranteed. type: "integer" example: 3 HeartbeatTick: description: | The number of ticks between heartbeats. Every HeartbeatTick ticks, the leader will send a heartbeat to the followers. 
A tick currently defaults to one second, so these translate directly to seconds currently, but this is NOT guaranteed. type: "integer" example: 1 Dispatcher: description: "Dispatcher configuration." type: "object" x-nullable: true properties: HeartbeatPeriod: description: | The delay for an agent to send a heartbeat to the dispatcher. type: "integer" format: "int64" example: 5000000000 CAConfig: description: "CA configuration." type: "object" x-nullable: true properties: NodeCertExpiry: description: "The duration node certificates are issued for." type: "integer" format: "int64" example: 7776000000000000 ExternalCAs: description: | Configuration for forwarding signing requests to an external certificate authority. type: "array" items: type: "object" properties: Protocol: description: | Protocol for communication with the external CA (currently only `cfssl` is supported). type: "string" enum: - "cfssl" default: "cfssl" URL: description: | URL where certificate signing requests should be sent. type: "string" Options: description: | An object with key/value pairs that are interpreted as protocol-specific options for the external CA driver. type: "object" additionalProperties: type: "string" CACert: description: | The root CA certificate (in PEM format) this external CA uses to issue TLS certificates (assumed to be to the current swarm root CA certificate if not provided). type: "string" SigningCACert: description: | The desired signing CA certificate for all swarm node TLS leaf certificates, in PEM format. type: "string" SigningCAKey: description: | The desired signing CA key for all swarm node TLS leaf certificates, in PEM format. type: "string" ForceRotate: description: | An integer whose purpose is to force swarm to generate a new signing CA certificate and key, if none have been specified in `SigningCACert` and `SigningCAKey`. format: "uint64" type: "integer" EncryptionConfig: description: "Parameters related to encryption-at-rest." type: "object" properties: AutoLockManagers: description: | If set, generate a key and use it to lock data stored on the managers. type: "boolean" example: false TaskDefaults: description: "Defaults for creating tasks in this cluster." type: "object" properties: LogDriver: description: | The log driver to use for tasks created in the orchestrator if unspecified by a service. Updating this value only affects new tasks. Existing tasks continue to use their previously configured log driver until recreated. type: "object" properties: Name: description: | The log driver to use as a default for new tasks. type: "string" example: "json-file" Options: description: | Driver-specific options for the selected log driver, specified as key/value pairs. type: "object" additionalProperties: type: "string" example: "max-file": "10" "max-size": "100m" # The Swarm information for `GET /info`. It is the same as `GET /swarm`, but # without `JoinTokens`. ClusterInfo: description: | ClusterInfo represents information about the swarm as is returned by the "/info" endpoint. Join-tokens are not included. x-nullable: true type: "object" properties: ID: description: "The ID of the swarm." type: "string" example: "abajmipo7b4xz5ip2nrla6b11" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: description: | Date and time at which the swarm was initialised in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds.
type: "string" format: "dateTime" example: "2016-08-18T10:44:24.496525531Z" UpdatedAt: description: | Date and time at which the swarm was last updated in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2017-08-09T07:09:37.632105588Z" Spec: $ref: "#/definitions/SwarmSpec" TLSInfo: $ref: "#/definitions/TLSInfo" RootRotationInProgress: description: | Whether there is currently a root CA rotation in progress for the swarm type: "boolean" example: false DataPathPort: description: | DataPathPort specifies the data path port number for data traffic. Acceptable port range is 1024 to 49151. If no port is set or is set to 0, the default port (4789) is used. type: "integer" format: "uint32" default: 4789 example: 4789 DefaultAddrPool: description: | Default Address Pool specifies default subnet pools for global scope networks. type: "array" items: type: "string" format: "CIDR" example: ["10.10.0.0/16", "20.20.0.0/16"] SubnetSize: description: | SubnetSize specifies the subnet size of the networks created from the default subnet pool. type: "integer" format: "uint32" maximum: 29 default: 24 example: 24 JoinTokens: description: | JoinTokens contains the tokens workers and managers need to join the swarm. type: "object" properties: Worker: description: | The token workers can use to join the swarm. type: "string" example: "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-1awxwuwd3z9j1z3puu7rcgdbx" Manager: description: | The token managers can use to join the swarm. type: "string" example: "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-7p73s1dx5in4tatdymyhg9hu2" Swarm: type: "object" allOf: - $ref: "#/definitions/ClusterInfo" - type: "object" properties: JoinTokens: $ref: "#/definitions/JoinTokens" TaskSpec: description: "User modifiable task configuration." type: "object" properties: PluginSpec: type: "object" description: | Plugin spec for the service. *(Experimental release only.)* <p><br /></p> > **Note**: ContainerSpec, NetworkAttachmentSpec, and PluginSpec are > mutually exclusive. PluginSpec is only used when the Runtime field > is set to `plugin`. NetworkAttachmentSpec is used when the Runtime > field is set to `attachment`. properties: Name: description: "The name or 'alias' to use for the plugin." type: "string" Remote: description: "The plugin image reference to use." type: "string" Disabled: description: "Disable the plugin once scheduled." type: "boolean" PluginPrivilege: type: "array" items: description: | Describes a permission accepted by the user upon installing the plugin. type: "object" properties: Name: type: "string" Description: type: "string" Value: type: "array" items: type: "string" ContainerSpec: type: "object" description: | Container spec for the service. <p><br /></p> > **Note**: ContainerSpec, NetworkAttachmentSpec, and PluginSpec are > mutually exclusive. PluginSpec is only used when the Runtime field > is set to `plugin`. NetworkAttachmentSpec is used when the Runtime > field is set to `attachment`. properties: Image: description: "The image name to use for the container" type: "string" Labels: description: "User-defined key/value data." type: "object" additionalProperties: type: "string" Command: description: "The command to be run in the image." type: "array" items: type: "string" Args: description: "Arguments to the command." 
type: "array" items: type: "string" Hostname: description: | The hostname to use for the container, as a valid [RFC 1123](https://tools.ietf.org/html/rfc1123) hostname. type: "string" Env: description: | A list of environment variables in the form `VAR=value`. type: "array" items: type: "string" Dir: description: "The working directory for commands to run in." type: "string" User: description: "The user inside the container." type: "string" Groups: type: "array" description: | A list of additional groups that the container process will run as. items: type: "string" Privileges: type: "object" description: "Security options for the container" properties: CredentialSpec: type: "object" description: "CredentialSpec for managed service account (Windows only)" properties: Config: type: "string" example: "0bt9dmxjvjiqermk6xrop3ekq" description: | Load credential spec from a Swarm Config with the given ID. The specified config must also be present in the Configs field with the Runtime property set. <p><br /></p> > **Note**: `CredentialSpec.File`, `CredentialSpec.Registry`, > and `CredentialSpec.Config` are mutually exclusive. File: type: "string" example: "spec.json" description: | Load credential spec from this file. The file is read by the daemon, and must be present in the `CredentialSpecs` subdirectory in the docker data directory, which defaults to `C:\ProgramData\Docker\` on Windows. For example, specifying `spec.json` loads `C:\ProgramData\Docker\CredentialSpecs\spec.json`. <p><br /></p> > **Note**: `CredentialSpec.File`, `CredentialSpec.Registry`, > and `CredentialSpec.Config` are mutually exclusive. Registry: type: "string" description: | Load credential spec from this value in the Windows registry. The specified registry value must be located in: `HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization\Containers\CredentialSpecs` <p><br /></p> > **Note**: `CredentialSpec.File`, `CredentialSpec.Registry`, > and `CredentialSpec.Config` are mutually exclusive. SELinuxContext: type: "object" description: "SELinux labels of the container" properties: Disable: type: "boolean" description: "Disable SELinux" User: type: "string" description: "SELinux user label" Role: type: "string" description: "SELinux role label" Type: type: "string" description: "SELinux type label" Level: type: "string" description: "SELinux level label" TTY: description: "Whether a pseudo-TTY should be allocated." type: "boolean" OpenStdin: description: "Open `stdin`" type: "boolean" ReadOnly: description: "Mount the container's root filesystem as read only." type: "boolean" Mounts: description: | Specification for mounts to be added to containers created as part of the service. type: "array" items: $ref: "#/definitions/Mount" StopSignal: description: "Signal to stop the container." type: "string" StopGracePeriod: description: | Amount of time to wait for the container to terminate before forcefully killing it. type: "integer" format: "int64" HealthCheck: $ref: "#/definitions/HealthConfig" Hosts: type: "array" description: | A list of hostname/IP mappings to add to the container's `hosts` file. The format of extra hosts is specified in the [hosts(5)](http://man7.org/linux/man-pages/man5/hosts.5.html) man page: IP_address canonical_hostname [aliases...] items: type: "string" DNSConfig: description: | Specification for DNS related configurations in resolver configuration file (`resolv.conf`). type: "object" properties: Nameservers: description: "The IP addresses of the name servers." 
type: "array" items: type: "string" Search: description: "A search list for host-name lookup." type: "array" items: type: "string" Options: description: | A list of internal resolver variables to be modified (e.g., `debug`, `ndots:3`, etc.). type: "array" items: type: "string" Secrets: description: | Secrets contains references to zero or more secrets that will be exposed to the service. type: "array" items: type: "object" properties: File: description: | File represents a specific target that is backed by a file. type: "object" properties: Name: description: | Name represents the final filename in the filesystem. type: "string" UID: description: "UID represents the file UID." type: "string" GID: description: "GID represents the file GID." type: "string" Mode: description: "Mode represents the FileMode of the file." type: "integer" format: "uint32" SecretID: description: | SecretID represents the ID of the specific secret that we're referencing. type: "string" SecretName: description: | SecretName is the name of the secret that this references, but this is just provided for lookup/display purposes. The secret in the reference will be identified by its ID. type: "string" Configs: description: | Configs contains references to zero or more configs that will be exposed to the service. type: "array" items: type: "object" properties: File: description: | File represents a specific target that is backed by a file. <p><br /><p> > **Note**: `Configs.File` and `Configs.Runtime` are mutually exclusive type: "object" properties: Name: description: | Name represents the final filename in the filesystem. type: "string" UID: description: "UID represents the file UID." type: "string" GID: description: "GID represents the file GID." type: "string" Mode: description: "Mode represents the FileMode of the file." type: "integer" format: "uint32" Runtime: description: | Runtime represents a target that is not mounted into the container but is used by the task <p><br /><p> > **Note**: `Configs.File` and `Configs.Runtime` are mutually > exclusive type: "object" ConfigID: description: | ConfigID represents the ID of the specific config that we're referencing. type: "string" ConfigName: description: | ConfigName is the name of the config that this references, but this is just provided for lookup/display purposes. The config in the reference will be identified by its ID. type: "string" Isolation: type: "string" description: | Isolation technology of the containers running the service. (Windows only) enum: - "default" - "process" - "hyperv" Init: description: | Run an init inside the container that forwards signals and reaps processes. This field is omitted if empty, and the default (as configured on the daemon) is used. type: "boolean" x-nullable: true Sysctls: description: | Set kernel namedspaced parameters (sysctls) in the container. The Sysctls option on services accepts the same sysctls as the are supported on containers. Note that while the same sysctls are supported, no guarantees or checks are made about their suitability for a clustered environment, and it's up to the user to determine whether a given sysctl will work properly in a Service. type: "object" additionalProperties: type: "string" # This option is not used by Windows containers CapabilityAdd: type: "array" description: | A list of kernel capabilities to add to the default set for the container. 
items: type: "string" example: - "CAP_NET_RAW" - "CAP_SYS_ADMIN" - "CAP_SYS_CHROOT" - "CAP_SYSLOG" CapabilityDrop: type: "array" description: | A list of kernel capabilities to drop from the default set for the container. items: type: "string" example: - "CAP_NET_RAW" Ulimits: description: | A list of resource limits to set in the container. For example: `{"Name": "nofile", "Soft": 1024, "Hard": 2048}` type: "array" items: type: "object" properties: Name: description: "Name of ulimit" type: "string" Soft: description: "Soft limit" type: "integer" Hard: description: "Hard limit" type: "integer" NetworkAttachmentSpec: description: | Read-only spec type for non-swarm containers attached to swarm overlay networks. <p><br /></p> > **Note**: ContainerSpec, NetworkAttachmentSpec, and PluginSpec are > mutually exclusive. PluginSpec is only used when the Runtime field > is set to `plugin`. NetworkAttachmentSpec is used when the Runtime > field is set to `attachment`. type: "object" properties: ContainerID: description: "ID of the container represented by this task" type: "string" Resources: description: | Resource requirements which apply to each individual container created as part of the service. type: "object" properties: Limits: description: "Define resource limits." $ref: "#/definitions/Limit" Reservation: description: "Define resource reservations." $ref: "#/definitions/ResourceObject" RestartPolicy: description: | Specification for the restart policy which applies to containers created as part of this service. type: "object" properties: Condition: description: "Condition for restart." type: "string" enum: - "none" - "on-failure" - "any" Delay: description: "Delay between restart attempts." type: "integer" format: "int64" MaxAttempts: description: | Maximum attempts to restart a given container before giving up (default value is 0, which is ignored). type: "integer" format: "int64" default: 0 Window: description: | Window is the time window used to evaluate the restart policy (default value is 0, which is unbounded). type: "integer" format: "int64" default: 0 Placement: type: "object" properties: Constraints: description: | An array of constraint expressions to limit the set of nodes where a task can be scheduled. Constraint expressions can either use a _match_ (`==`) or _exclude_ (`!=`) rule. Multiple constraints find nodes that satisfy every expression (AND match). Constraints can match node or Docker Engine labels as follows: node attribute | matches | example ---------------------|--------------------------------|----------------------------------------------- `node.id` | Node ID | `node.id==2ivku8v2gvtg4` `node.hostname` | Node hostname | `node.hostname!=node-2` `node.role` | Node role (`manager`/`worker`) | `node.role==manager` `node.platform.os` | Node operating system | `node.platform.os==windows` `node.platform.arch` | Node architecture | `node.platform.arch==x86_64` `node.labels` | User-defined node labels | `node.labels.security==high` `engine.labels` | Docker Engine's labels | `engine.labels.operatingsystem==ubuntu-14.04` `engine.labels` apply to Docker Engine labels like operating system, drivers, etc. Swarm administrators add `node.labels` for operational purposes by using the [`node update endpoint`](#operation/NodeUpdate).
type: "array" items: type: "string" example: - "node.hostname!=node3.corp.example.com" - "node.role!=manager" - "node.labels.type==production" - "node.platform.os==linux" - "node.platform.arch==x86_64" Preferences: description: | Preferences provide a way to make the scheduler aware of factors such as topology. They are provided in order from highest to lowest precedence. type: "array" items: type: "object" properties: Spread: type: "object" properties: SpreadDescriptor: description: | label descriptor, such as `engine.labels.az`. type: "string" example: - Spread: SpreadDescriptor: "node.labels.datacenter" - Spread: SpreadDescriptor: "node.labels.rack" MaxReplicas: description: | Maximum number of replicas for per node (default value is 0, which is unlimited) type: "integer" format: "int64" default: 0 Platforms: description: | Platforms stores all the platforms that the service's image can run on. This field is used in the platform filter for scheduling. If empty, then the platform filter is off, meaning there are no scheduling restrictions. type: "array" items: $ref: "#/definitions/Platform" ForceUpdate: description: | A counter that triggers an update even if no relevant parameters have been changed. type: "integer" Runtime: description: | Runtime is the type of runtime specified for the task executor. type: "string" Networks: description: "Specifies which networks the service should attach to." type: "array" items: $ref: "#/definitions/NetworkAttachmentConfig" LogDriver: description: | Specifies the log driver to use for tasks created from this spec. If not present, the default one for the swarm will be used, finally falling back to the engine default if not specified. type: "object" properties: Name: type: "string" Options: type: "object" additionalProperties: type: "string" TaskState: type: "string" enum: - "new" - "allocated" - "pending" - "assigned" - "accepted" - "preparing" - "ready" - "starting" - "running" - "complete" - "shutdown" - "failed" - "rejected" - "remove" - "orphaned" Task: type: "object" properties: ID: description: "The ID of the task." type: "string" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" UpdatedAt: type: "string" format: "dateTime" Name: description: "Name of the task." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" Spec: $ref: "#/definitions/TaskSpec" ServiceID: description: "The ID of the service this task is part of." type: "string" Slot: type: "integer" NodeID: description: "The ID of the node that this task is on." type: "string" AssignedGenericResources: $ref: "#/definitions/GenericResources" Status: type: "object" properties: Timestamp: type: "string" format: "dateTime" State: $ref: "#/definitions/TaskState" Message: type: "string" Err: type: "string" ContainerStatus: type: "object" properties: ContainerID: type: "string" PID: type: "integer" ExitCode: type: "integer" DesiredState: $ref: "#/definitions/TaskState" JobIteration: description: | If the Service this Task belongs to is a job-mode service, contains the JobIteration of the Service this Task was created for. Absent if the Task was created for a Replicated or Global Service. 
$ref: "#/definitions/ObjectVersion" example: ID: "0kzzo1i0y4jz6027t0k7aezc7" Version: Index: 71 CreatedAt: "2016-06-07T21:07:31.171892745Z" UpdatedAt: "2016-06-07T21:07:31.376370513Z" Spec: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ServiceID: "9mnpnzenvg8p8tdbtq4wvbkcz" Slot: 1 NodeID: "60gvrl6tm78dmak4yl7srz94v" Status: Timestamp: "2016-06-07T21:07:31.290032978Z" State: "running" Message: "started" ContainerStatus: ContainerID: "e5d62702a1b48d01c3e02ca1e0212a250801fa8d67caca0b6f35919ebc12f035" PID: 677 DesiredState: "running" NetworksAttachments: - Network: ID: "4qvuz4ko70xaltuqbt8956gd1" Version: Index: 18 CreatedAt: "2016-06-07T20:31:11.912919752Z" UpdatedAt: "2016-06-07T21:07:29.955277358Z" Spec: Name: "ingress" Labels: com.docker.swarm.internal: "true" DriverConfiguration: {} IPAMOptions: Driver: {} Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" DriverState: Name: "overlay" Options: com.docker.network.driver.overlay.vxlanid_list: "256" IPAMOptions: Driver: Name: "default" Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" Addresses: - "10.255.0.10/16" AssignedGenericResources: - DiscreteResourceSpec: Kind: "SSD" Value: 3 - NamedResourceSpec: Kind: "GPU" Value: "UUID1" - NamedResourceSpec: Kind: "GPU" Value: "UUID2" ServiceSpec: description: "User modifiable configuration for a service." properties: Name: description: "Name of the service." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" TaskTemplate: $ref: "#/definitions/TaskSpec" Mode: description: "Scheduling mode for the service." type: "object" properties: Replicated: type: "object" properties: Replicas: type: "integer" format: "int64" Global: type: "object" ReplicatedJob: description: | The mode used for services with a finite number of tasks that run to a completed state. type: "object" properties: MaxConcurrent: description: | The maximum number of replicas to run simultaneously. type: "integer" format: "int64" default: 1 TotalCompletions: description: | The total number of replicas desired to reach the Completed state. If unset, will default to the value of `MaxConcurrent` type: "integer" format: "int64" GlobalJob: description: | The mode used for services which run a task to the completed state on each valid node. type: "object" UpdateConfig: description: "Specification for the update strategy of the service." type: "object" properties: Parallelism: description: | Maximum number of tasks to be updated in one iteration (0 means unlimited parallelism). type: "integer" format: "int64" Delay: description: "Amount of time between updates, in nanoseconds." type: "integer" format: "int64" FailureAction: description: | Action to take if an updated task fails to run, or stops running during the update. type: "string" enum: - "continue" - "pause" - "rollback" Monitor: description: | Amount of time to monitor each updated task for failures, in nanoseconds. type: "integer" format: "int64" MaxFailureRatio: description: | The fraction of tasks that may fail during an update before the failure action is invoked, specified as a floating point number between 0 and 1. type: "number" default: 0 Order: description: | The order of operations when rolling out an updated task. Either the old task is shut down before the new task is started, or the new task is started before the old task is shut down. 
type: "string" enum: - "stop-first" - "start-first" RollbackConfig: description: "Specification for the rollback strategy of the service." type: "object" properties: Parallelism: description: | Maximum number of tasks to be rolled back in one iteration (0 means unlimited parallelism). type: "integer" format: "int64" Delay: description: | Amount of time between rollback iterations, in nanoseconds. type: "integer" format: "int64" FailureAction: description: | Action to take if an rolled back task fails to run, or stops running during the rollback. type: "string" enum: - "continue" - "pause" Monitor: description: | Amount of time to monitor each rolled back task for failures, in nanoseconds. type: "integer" format: "int64" MaxFailureRatio: description: | The fraction of tasks that may fail during a rollback before the failure action is invoked, specified as a floating point number between 0 and 1. type: "number" default: 0 Order: description: | The order of operations when rolling back a task. Either the old task is shut down before the new task is started, or the new task is started before the old task is shut down. type: "string" enum: - "stop-first" - "start-first" Networks: description: "Specifies which networks the service should attach to." type: "array" items: $ref: "#/definitions/NetworkAttachmentConfig" EndpointSpec: $ref: "#/definitions/EndpointSpec" EndpointPortConfig: type: "object" properties: Name: type: "string" Protocol: type: "string" enum: - "tcp" - "udp" - "sctp" TargetPort: description: "The port inside the container." type: "integer" PublishedPort: description: "The port on the swarm hosts." type: "integer" PublishMode: description: | The mode in which port is published. <p><br /></p> - "ingress" makes the target port accessible on every node, regardless of whether there is a task for the service running on that node or not. - "host" bypasses the routing mesh and publish the port directly on the swarm node where that service is running. type: "string" enum: - "ingress" - "host" default: "ingress" example: "ingress" EndpointSpec: description: "Properties that can be configured to access and load balance a service." type: "object" properties: Mode: description: | The mode of resolution to use for internal load balancing between tasks. type: "string" enum: - "vip" - "dnsrr" default: "vip" Ports: description: | List of exposed ports that this service is accessible on from the outside. Ports can only be provided if `vip` resolution mode is used. type: "array" items: $ref: "#/definitions/EndpointPortConfig" Service: type: "object" properties: ID: type: "string" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" UpdatedAt: type: "string" format: "dateTime" Spec: $ref: "#/definitions/ServiceSpec" Endpoint: type: "object" properties: Spec: $ref: "#/definitions/EndpointSpec" Ports: type: "array" items: $ref: "#/definitions/EndpointPortConfig" VirtualIPs: type: "array" items: type: "object" properties: NetworkID: type: "string" Addr: type: "string" UpdateStatus: description: "The status of a service update." type: "object" properties: State: type: "string" enum: - "updating" - "paused" - "completed" StartedAt: type: "string" format: "dateTime" CompletedAt: type: "string" format: "dateTime" Message: type: "string" ServiceStatus: description: | The status of the service's tasks. Provided only when requested as part of a ServiceList operation. 
type: "object" properties: RunningTasks: description: | The number of tasks for the service currently in the Running state. type: "integer" format: "uint64" example: 7 DesiredTasks: description: | The number of tasks for the service desired to be running. For replicated services, this is the replica count from the service spec. For global services, this is computed by taking count of all tasks for the service with a Desired State other than Shutdown. type: "integer" format: "uint64" example: 10 CompletedTasks: description: | The number of tasks for a job that are in the Completed state. This field must be cross-referenced with the service type, as the value of 0 may mean the service is not in a job mode, or it may mean the job-mode service has no tasks yet Completed. type: "integer" format: "uint64" JobStatus: description: | The status of the service when it is in one of ReplicatedJob or GlobalJob modes. Absent on Replicated and Global mode services. The JobIteration is an ObjectVersion, but unlike the Service's version, does not need to be sent with an update request. type: "object" properties: JobIteration: description: | JobIteration is a value increased each time a Job is executed, successfully or otherwise. "Executed", in this case, means the job as a whole has been started, not that an individual Task has been launched. A job is "Executed" when its ServiceSpec is updated. JobIteration can be used to disambiguate Tasks belonging to different executions of a job. Though JobIteration will increase with each subsequent execution, it may not necessarily increase by 1, and so JobIteration should not be used to $ref: "#/definitions/ObjectVersion" LastExecution: description: | The last time, as observed by the server, that this job was started. type: "string" format: "dateTime" example: ID: "9mnpnzenvg8p8tdbtq4wvbkcz" Version: Index: 19 CreatedAt: "2016-06-07T21:05:51.880065305Z" UpdatedAt: "2016-06-07T21:07:29.962229872Z" Spec: Name: "hopeful_cori" TaskTemplate: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ForceUpdate: 0 Mode: Replicated: Replicas: 1 UpdateConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 RollbackConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 EndpointSpec: Mode: "vip" Ports: - Protocol: "tcp" TargetPort: 6379 PublishedPort: 30001 Endpoint: Spec: Mode: "vip" Ports: - Protocol: "tcp" TargetPort: 6379 PublishedPort: 30001 Ports: - Protocol: "tcp" TargetPort: 6379 PublishedPort: 30001 VirtualIPs: - NetworkID: "4qvuz4ko70xaltuqbt8956gd1" Addr: "10.255.0.2/16" - NetworkID: "4qvuz4ko70xaltuqbt8956gd1" Addr: "10.255.0.3/16" ImageDeleteResponseItem: type: "object" properties: Untagged: description: "The image ID of an image that was untagged" type: "string" Deleted: description: "The image ID of an image that was deleted" type: "string" ServiceUpdateResponse: type: "object" properties: Warnings: description: "Optional warning messages" type: "array" items: type: "string" example: Warning: "unable to pin image doesnotexist:latest to digest: image library/doesnotexist:latest not found" ContainerSummary: type: "array" items: type: "object" properties: Id: description: "The ID of this container" type: "string" x-go-name: "ID" Names: description: "The names that this container has been given" type: "array" items: type: "string" Image: description: "The name of the image used when creating 
this container" type: "string" ImageID: description: "The ID of the image that this container was created from" type: "string" Command: description: "Command to run when starting the container" type: "string" Created: description: "When the container was created" type: "integer" format: "int64" Ports: description: "The ports exposed by this container" type: "array" items: $ref: "#/definitions/Port" SizeRw: description: "The size of files that have been created or changed by this container" type: "integer" format: "int64" SizeRootFs: description: "The total size of all the files in this container" type: "integer" format: "int64" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" State: description: "The state of this container (e.g. `Exited`)" type: "string" Status: description: "Additional human-readable status of this container (e.g. `Exit 0`)" type: "string" HostConfig: type: "object" properties: NetworkMode: type: "string" NetworkSettings: description: "A summary of the container's network settings" type: "object" properties: Networks: type: "object" additionalProperties: $ref: "#/definitions/EndpointSettings" Mounts: type: "array" items: $ref: "#/definitions/Mount" Driver: description: "Driver represents a driver (network, logging, secrets)." type: "object" required: [Name] properties: Name: description: "Name of the driver." type: "string" x-nullable: false example: "some-driver" Options: description: "Key/value map of driver-specific options." type: "object" x-nullable: false additionalProperties: type: "string" example: OptionA: "value for driver-specific option A" OptionB: "value for driver-specific option B" SecretSpec: type: "object" properties: Name: description: "User-defined name of the secret." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" example: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Data: description: | Base64-url-safe-encoded ([RFC 4648](https://tools.ietf.org/html/rfc4648#section-5)) data to store as secret. This field is only used to _create_ a secret, and is not returned by other endpoints. type: "string" example: "" Driver: description: | Name of the secrets driver used to fetch the secret's value from an external secret store. $ref: "#/definitions/Driver" Templating: description: | Templating driver, if applicable Templating controls whether and how to evaluate the config payload as a template. If no driver is set, no templating is used. $ref: "#/definitions/Driver" Secret: type: "object" properties: ID: type: "string" example: "blt1owaxmitz71s9v5zh81zun" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" example: "2017-07-20T13:55:28.678958722Z" UpdatedAt: type: "string" format: "dateTime" example: "2017-07-20T13:55:28.678958722Z" Spec: $ref: "#/definitions/SecretSpec" ConfigSpec: type: "object" properties: Name: description: "User-defined name of the config." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" Data: description: | Base64-url-safe-encoded ([RFC 4648](https://tools.ietf.org/html/rfc4648#section-5)) config data. type: "string" Templating: description: | Templating driver, if applicable Templating controls whether and how to evaluate the config payload as a template. If no driver is set, no templating is used. 
$ref: "#/definitions/Driver" Config: type: "object" properties: ID: type: "string" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" UpdatedAt: type: "string" format: "dateTime" Spec: $ref: "#/definitions/ConfigSpec" ContainerState: description: | ContainerState stores container's running state. It's part of ContainerJSONBase and will be returned by the "inspect" command. type: "object" properties: Status: description: | String representation of the container state. Can be one of "created", "running", "paused", "restarting", "removing", "exited", or "dead". type: "string" enum: ["created", "running", "paused", "restarting", "removing", "exited", "dead"] example: "running" Running: description: | Whether this container is running. Note that a running container can be _paused_. The `Running` and `Paused` booleans are not mutually exclusive: When pausing a container (on Linux), the freezer cgroup is used to suspend all processes in the container. Freezing the process requires the process to be running. As a result, paused containers are both `Running` _and_ `Paused`. Use the `Status` field instead to determine if a container's state is "running". type: "boolean" example: true Paused: description: "Whether this container is paused." type: "boolean" example: false Restarting: description: "Whether this container is restarting." type: "boolean" example: false OOMKilled: description: | Whether this container has been killed because it ran out of memory. type: "boolean" example: false Dead: type: "boolean" example: false Pid: description: "The process ID of this container" type: "integer" example: 1234 ExitCode: description: "The last exit code of this container" type: "integer" example: 0 Error: type: "string" StartedAt: description: "The time when this container was last started." type: "string" example: "2020-01-06T09:06:59.461876391Z" FinishedAt: description: "The time when this container last exited." type: "string" example: "2020-01-06T09:07:59.461876391Z" Health: x-nullable: true $ref: "#/definitions/Health" SystemVersion: type: "object" description: | Response of Engine API: GET "/version" properties: Platform: type: "object" required: [Name] properties: Name: type: "string" Components: type: "array" description: | Information about system components items: type: "object" x-go-name: ComponentVersion required: [Name, Version] properties: Name: description: | Name of the component type: "string" example: "Engine" Version: description: | Version of the component type: "string" x-nullable: false example: "19.03.12" Details: description: | Key/value pairs of strings with additional information about the component. These values are intended for informational purposes only, and their content is not defined, and not part of the API specification. These messages can be printed by the client as information to the user. type: "object" x-nullable: true Version: description: "The version of the daemon" type: "string" example: "19.03.12" ApiVersion: description: | The default (and highest) API version that is supported by the daemon type: "string" example: "1.40" MinAPIVersion: description: | The minimum API version that is supported by the daemon type: "string" example: "1.12" GitCommit: description: | The Git commit of the source code that was used to build the daemon type: "string" example: "48a66213fe" GoVersion: description: | The version Go used to compile the daemon, and the version of the Go runtime in use. 
type: "string" example: "go1.13.14" Os: description: | The operating system that the daemon is running on ("linux" or "windows") type: "string" example: "linux" Arch: description: | The architecture that the daemon is running on type: "string" example: "amd64" KernelVersion: description: | The kernel version (`uname -r`) that the daemon is running on. This field is omitted when empty. type: "string" example: "4.19.76-linuxkit" Experimental: description: | Indicates if the daemon is started with experimental features enabled. This field is omitted when empty / false. type: "boolean" example: true BuildTime: description: | The date and time that the daemon was compiled. type: "string" example: "2020-06-22T15:49:27.000000000+00:00" SystemInfo: type: "object" properties: ID: description: | Unique identifier of the daemon. <p><br /></p> > **Note**: The format of the ID itself is not part of the API, and > should not be considered stable. type: "string" example: "7TRN:IPZB:QYBB:VPBQ:UMPP:KARE:6ZNR:XE6T:7EWV:PKF4:ZOJD:TPYS" Containers: description: "Total number of containers on the host." type: "integer" example: 14 ContainersRunning: description: | Number of containers with status `"running"`. type: "integer" example: 3 ContainersPaused: description: | Number of containers with status `"paused"`. type: "integer" example: 1 ContainersStopped: description: | Number of containers with status `"stopped"`. type: "integer" example: 10 Images: description: | Total number of images on the host. Both _tagged_ and _untagged_ (dangling) images are counted. type: "integer" example: 508 Driver: description: "Name of the storage driver in use." type: "string" example: "overlay2" DriverStatus: description: | Information specific to the storage driver, provided as "label" / "value" pairs. This information is provided by the storage driver, and formatted in a way consistent with the output of `docker info` on the command line. <p><br /></p> > **Note**: The information returned in this field, including the > formatting of values and labels, should not be considered stable, > and may change without notice. type: "array" items: type: "array" items: type: "string" example: - ["Backing Filesystem", "extfs"] - ["Supports d_type", "true"] - ["Native Overlay Diff", "true"] DockerRootDir: description: | Root directory of persistent Docker state. Defaults to `/var/lib/docker` on Linux, and `C:\ProgramData\docker` on Windows. type: "string" example: "/var/lib/docker" Plugins: $ref: "#/definitions/PluginsInfo" MemoryLimit: description: "Indicates if the host has memory limit support enabled." type: "boolean" example: true SwapLimit: description: "Indicates if the host has memory swap limit support enabled." type: "boolean" example: true KernelMemory: description: | Indicates if the host has kernel memory limit support enabled. <p><br /></p> > **Deprecated**: This field is deprecated as the kernel 5.4 deprecated > `kmem.limit_in_bytes`. type: "boolean" example: true CpuCfsPeriod: description: | Indicates if CPU CFS(Completely Fair Scheduler) period is supported by the host. type: "boolean" example: true CpuCfsQuota: description: | Indicates if CPU CFS(Completely Fair Scheduler) quota is supported by the host. type: "boolean" example: true CPUShares: description: | Indicates if CPU Shares limiting is supported by the host. type: "boolean" example: true CPUSet: description: | Indicates if CPUsets (cpuset.cpus, cpuset.mems) are supported by the host. 
See [cpuset(7)](https://www.kernel.org/doc/Documentation/cgroup-v1/cpusets.txt) type: "boolean" example: true PidsLimit: description: "Indicates if the host kernel has PID limit support enabled." type: "boolean" example: true OomKillDisable: description: "Indicates if OOM killer disable is supported on the host." type: "boolean" IPv4Forwarding: description: "Indicates IPv4 forwarding is enabled." type: "boolean" example: true BridgeNfIptables: description: "Indicates if `bridge-nf-call-iptables` is available on the host." type: "boolean" example: true BridgeNfIp6tables: description: "Indicates if `bridge-nf-call-ip6tables` is available on the host." type: "boolean" example: true Debug: description: | Indicates if the daemon is running in debug-mode / with debug-level logging enabled. type: "boolean" example: true NFd: description: | The total number of file Descriptors in use by the daemon process. This information is only returned if debug-mode is enabled. type: "integer" example: 64 NGoroutines: description: | The number of goroutines that currently exist. This information is only returned if debug-mode is enabled. type: "integer" example: 174 SystemTime: description: | Current system-time in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" example: "2017-08-08T20:28:29.06202363Z" LoggingDriver: description: | The logging driver to use as a default for new containers. type: "string" CgroupDriver: description: | The driver to use for managing cgroups. type: "string" enum: ["cgroupfs", "systemd", "none"] default: "cgroupfs" example: "cgroupfs" CgroupVersion: description: | The version of the cgroup. type: "string" enum: ["1", "2"] default: "1" example: "1" NEventsListener: description: "Number of event listeners subscribed." type: "integer" example: 30 KernelVersion: description: | Kernel version of the host. On Linux, this information obtained from `uname`. On Windows this information is queried from the <kbd>HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\</kbd> registry value, for example _"10.0 14393 (14393.1198.amd64fre.rs1_release_sec.170427-1353)"_. type: "string" example: "4.9.38-moby" OperatingSystem: description: | Name of the host's operating system, for example: "Ubuntu 16.04.2 LTS" or "Windows Server 2016 Datacenter" type: "string" example: "Alpine Linux v3.5" OSVersion: description: | Version of the host's operating system <p><br /></p> > **Note**: The information returned in this field, including its > very existence, and the formatting of values, should not be considered > stable, and may change without notice. type: "string" example: "16.04" OSType: description: | Generic type of the operating system of the host, as returned by the Go runtime (`GOOS`). Currently returned values are "linux" and "windows". A full list of possible values can be found in the [Go documentation](https://golang.org/doc/install/source#environment). type: "string" example: "linux" Architecture: description: | Hardware architecture of the host, as returned by the Go runtime (`GOARCH`). A full list of possible values can be found in the [Go documentation](https://golang.org/doc/install/source#environment). type: "string" example: "x86_64" NCPU: description: | The number of logical CPUs usable by the daemon. The number of available CPUs is checked by querying the operating system when the daemon starts. Changes to operating system CPU allocation after the daemon is started are not reflected. 
type: "integer" example: 4 MemTotal: description: | Total amount of physical memory available on the host, in bytes. type: "integer" format: "int64" example: 2095882240 IndexServerAddress: description: | Address / URL of the index server that is used for image search, and as a default for user authentication for Docker Hub and Docker Cloud. default: "https://index.docker.io/v1/" type: "string" example: "https://index.docker.io/v1/" RegistryConfig: $ref: "#/definitions/RegistryServiceConfig" GenericResources: $ref: "#/definitions/GenericResources" HttpProxy: description: | HTTP-proxy configured for the daemon. This value is obtained from the [`HTTP_PROXY`](https://www.gnu.org/software/wget/manual/html_node/Proxies.html) environment variable. Credentials ([user info component](https://tools.ietf.org/html/rfc3986#section-3.2.1)) in the proxy URL are masked in the API response. Containers do not automatically inherit this configuration. type: "string" example: "http://xxxxx:[email protected]:8080" HttpsProxy: description: | HTTPS-proxy configured for the daemon. This value is obtained from the [`HTTPS_PROXY`](https://www.gnu.org/software/wget/manual/html_node/Proxies.html) environment variable. Credentials ([user info component](https://tools.ietf.org/html/rfc3986#section-3.2.1)) in the proxy URL are masked in the API response. Containers do not automatically inherit this configuration. type: "string" example: "https://xxxxx:[email protected]:4443" NoProxy: description: | Comma-separated list of domain extensions for which no proxy should be used. This value is obtained from the [`NO_PROXY`](https://www.gnu.org/software/wget/manual/html_node/Proxies.html) environment variable. Containers do not automatically inherit this configuration. type: "string" example: "*.local, 169.254/16" Name: description: "Hostname of the host." type: "string" example: "node5.corp.example.com" Labels: description: | User-defined labels (key/value metadata) as set on the daemon. <p><br /></p> > **Note**: When part of a Swarm, nodes can both have _daemon_ labels, > set through the daemon configuration, and _node_ labels, set from a > manager node in the Swarm. Node labels are not included in this > field. Node labels can be retrieved using the `/nodes/(id)` endpoint > on a manager node in the Swarm. type: "array" items: type: "string" example: ["storage=ssd", "production"] ExperimentalBuild: description: | Indicates if experimental features are enabled on the daemon. type: "boolean" example: true ServerVersion: description: | Version string of the daemon. > **Note**: the [standalone Swarm API](https://docs.docker.com/swarm/swarm-api/) > returns the Swarm version instead of the daemon version, for example > `swarm/1.2.8`. type: "string" example: "17.06.0-ce" ClusterStore: description: | URL of the distributed storage backend. The storage backend is used for multihost networking (to store network and endpoint information) and by the node discovery mechanism. <p><br /></p> > **Deprecated**: This field is only propagated when using standalone Swarm > mode, and overlay networking using an external k/v store. Overlay > networks with Swarm mode enabled use the built-in raft store, and > this field will be empty. type: "string" example: "consul://consul.corp.example.com:8600/some/path" ClusterAdvertise: description: | The network endpoint that the Engine advertises for the purpose of node discovery. ClusterAdvertise is a `host:port` combination on which the daemon is reachable by other hosts. 
<p><br /></p> > **Deprecated**: This field is only propagated when using standalone Swarm > mode, and overlay networking using an external k/v store. Overlay > networks with Swarm mode enabled use the built-in raft store, and > this field will be empty. type: "string" example: "node5.corp.example.com:8000" Runtimes: description: | List of [OCI compliant](https://github.com/opencontainers/runtime-spec) runtimes configured on the daemon. Keys hold the "name" used to reference the runtime. The Docker daemon relies on an OCI compliant runtime (invoked via the `containerd` daemon) as its interface to the Linux kernel namespaces, cgroups, and SELinux. The default runtime is `runc`, and automatically configured. Additional runtimes can be configured by the user and will be listed here. type: "object" additionalProperties: $ref: "#/definitions/Runtime" default: runc: path: "runc" example: runc: path: "runc" runc-master: path: "/go/bin/runc" custom: path: "/usr/local/bin/my-oci-runtime" runtimeArgs: ["--debug", "--systemd-cgroup=false"] DefaultRuntime: description: | Name of the default OCI runtime that is used when starting containers. The default can be overridden per-container at create time. type: "string" default: "runc" example: "runc" Swarm: $ref: "#/definitions/SwarmInfo" LiveRestoreEnabled: description: | Indicates if live restore is enabled. If enabled, containers are kept running when the daemon is shutdown or upon daemon start if running containers are detected. type: "boolean" default: false example: false Isolation: description: | Represents the isolation technology to use as a default for containers. The supported values are platform-specific. If no isolation value is specified on daemon start, on Windows client, the default is `hyperv`, and on Windows server, the default is `process`. This option is currently not used on other platforms. default: "default" type: "string" enum: - "default" - "hyperv" - "process" InitBinary: description: | Name and, optional, path of the `docker-init` binary. If the path is omitted, the daemon searches the host's `$PATH` for the binary and uses the first result. type: "string" example: "docker-init" ContainerdCommit: $ref: "#/definitions/Commit" RuncCommit: $ref: "#/definitions/Commit" InitCommit: $ref: "#/definitions/Commit" SecurityOptions: description: | List of security features that are enabled on the daemon, such as apparmor, seccomp, SELinux, user-namespaces (userns), and rootless. Additional configuration options for each security feature may be present, and are included as a comma-separated list of key/value pairs. type: "array" items: type: "string" example: - "name=apparmor" - "name=seccomp,profile=default" - "name=selinux" - "name=userns" - "name=rootless" ProductLicense: description: | Reports a summary of the product license on the daemon. If a commercial license has been applied to the daemon, information such as number of nodes, and expiration are included. type: "string" example: "Community Engine" DefaultAddressPools: description: | List of custom default address pools for local networks, which can be specified in the daemon.json file or dockerd option. Example: a Base "10.10.0.0/16" with Size 24 will define the set of 256 10.10.[0-255].0/24 address pools. 
type: "array" items: type: "object" properties: Base: description: "The network address in CIDR format" type: "string" example: "10.10.0.0/16" Size: description: "The network pool size" type: "integer" example: "24" Warnings: description: | List of warnings / informational messages about missing features, or issues related to the daemon configuration. These messages can be printed by the client as information to the user. type: "array" items: type: "string" example: - "WARNING: No memory limit support" - "WARNING: bridge-nf-call-iptables is disabled" - "WARNING: bridge-nf-call-ip6tables is disabled" # PluginsInfo is a temp struct holding Plugins name # registered with docker daemon. It is used by Info struct PluginsInfo: description: | Available plugins per type. <p><br /></p> > **Note**: Only unmanaged (V1) plugins are included in this list. > V1 plugins are "lazily" loaded, and are not returned in this list > if there is no resource using the plugin. type: "object" properties: Volume: description: "Names of available volume-drivers, and network-driver plugins." type: "array" items: type: "string" example: ["local"] Network: description: "Names of available network-drivers, and network-driver plugins." type: "array" items: type: "string" example: ["bridge", "host", "ipvlan", "macvlan", "null", "overlay"] Authorization: description: "Names of available authorization plugins." type: "array" items: type: "string" example: ["img-authz-plugin", "hbm"] Log: description: "Names of available logging-drivers, and logging-driver plugins." type: "array" items: type: "string" example: ["awslogs", "fluentd", "gcplogs", "gelf", "journald", "json-file", "logentries", "splunk", "syslog"] RegistryServiceConfig: description: | RegistryServiceConfig stores daemon registry services configuration. type: "object" x-nullable: true properties: AllowNondistributableArtifactsCIDRs: description: | List of IP ranges to which nondistributable artifacts can be pushed, using the CIDR syntax [RFC 4632](https://tools.ietf.org/html/4632). Some images (for example, Windows base images) contain artifacts whose distribution is restricted by license. When these images are pushed to a registry, restricted artifacts are not included. This configuration override this behavior, and enables the daemon to push nondistributable artifacts to all registries whose resolved IP address is within the subnet described by the CIDR syntax. This option is useful when pushing images containing nondistributable artifacts to a registry on an air-gapped network so hosts on that network can pull the images without connecting to another server. > **Warning**: Nondistributable artifacts typically have restrictions > on how and where they can be distributed and shared. Only use this > feature to push artifacts to private registries and ensure that you > are in compliance with any terms that cover redistributing > nondistributable artifacts. type: "array" items: type: "string" example: ["::1/128", "127.0.0.0/8"] AllowNondistributableArtifactsHostnames: description: | List of registry hostnames to which nondistributable artifacts can be pushed, using the format `<hostname>[:<port>]` or `<IP address>[:<port>]`. Some images (for example, Windows base images) contain artifacts whose distribution is restricted by license. When these images are pushed to a registry, restricted artifacts are not included. This configuration override this behavior for the specified registries. 
This option is useful when pushing images containing nondistributable artifacts to a registry on an air-gapped network so hosts on that network can pull the images without connecting to another server. > **Warning**: Nondistributable artifacts typically have restrictions > on how and where they can be distributed and shared. Only use this > feature to push artifacts to private registries and ensure that you > are in compliance with any terms that cover redistributing > nondistributable artifacts. type: "array" items: type: "string" example: ["registry.internal.corp.example.com:3000", "[2001:db8:a0b:12f0::1]:443"] InsecureRegistryCIDRs: description: | List of IP ranges of insecure registries, using the CIDR syntax ([RFC 4632](https://tools.ietf.org/html/rfc4632)). Insecure registries accept un-encrypted (HTTP) and/or untrusted (HTTPS with certificates from unknown CAs) communication. By default, local registries (`127.0.0.0/8`) are configured as insecure. All other registries are secure. Communicating with an insecure registry is not possible if the daemon assumes that registry is secure. This configuration overrides this behavior, and enables insecure communication with registries whose resolved IP address is within the subnet described by the CIDR syntax. Registries can also be marked insecure by hostname. Those registries are listed under `IndexConfigs` and have their `Secure` field set to `false`. > **Warning**: Using this option can be useful when running a local > registry, but introduces security vulnerabilities. This option > should therefore ONLY be used for testing purposes. For increased > security, users should add their CA to their system's list of trusted > CAs instead of enabling this option. type: "array" items: type: "string" example: ["::1/128", "127.0.0.0/8"] IndexConfigs: type: "object" additionalProperties: $ref: "#/definitions/IndexInfo" example: "127.0.0.1:5000": "Name": "127.0.0.1:5000" "Mirrors": [] "Secure": false "Official": false "[2001:db8:a0b:12f0::1]:80": "Name": "[2001:db8:a0b:12f0::1]:80" "Mirrors": [] "Secure": false "Official": false "docker.io": Name: "docker.io" Mirrors: ["https://hub-mirror.corp.example.com:5000/"] Secure: true Official: true "registry.internal.corp.example.com:3000": Name: "registry.internal.corp.example.com:3000" Mirrors: [] Secure: false Official: false Mirrors: description: | List of registry URLs that act as a mirror for the official (`docker.io`) registry. type: "array" items: type: "string" example: - "https://hub-mirror.corp.example.com:5000/" - "https://[2001:db8:a0b:12f0::1]/" IndexInfo: description: IndexInfo contains information about a registry. type: "object" x-nullable: true properties: Name: description: | Name of the registry, such as "docker.io". type: "string" example: "docker.io" Mirrors: description: | List of mirrors, expressed as URIs. type: "array" items: type: "string" example: - "https://hub-mirror.corp.example.com:5000/" - "https://registry-2.docker.io/" - "https://registry-3.docker.io/" Secure: description: | Indicates if the registry is part of the list of insecure registries. If `false`, the registry is insecure. Insecure registries accept un-encrypted (HTTP) and/or untrusted (HTTPS with certificates from unknown CAs) communication. > **Warning**: Insecure registries can be useful when running a local > registry. However, because its use creates security vulnerabilities > it should ONLY be enabled for testing purposes.
For increased > security, users should add their CA to their system's list of > trusted CAs instead of enabling this option. type: "boolean" example: true Official: description: | Indicates whether this is an official registry (i.e., Docker Hub / docker.io) type: "boolean" example: true Runtime: description: | Runtime describes an [OCI compliant](https://github.com/opencontainers/runtime-spec) runtime. The runtime is invoked by the daemon via the `containerd` daemon. OCI runtimes act as an interface to the Linux kernel namespaces, cgroups, and SELinux. type: "object" properties: path: description: | Name and, optionally, path of the OCI executable binary. If the path is omitted, the daemon searches the host's `$PATH` for the binary and uses the first result. type: "string" example: "/usr/local/bin/my-oci-runtime" runtimeArgs: description: | List of command-line arguments to pass to the runtime when invoked. type: "array" x-nullable: true items: type: "string" example: ["--debug", "--systemd-cgroup=false"] Commit: description: | Commit holds the Git-commit (SHA1) that a binary was built from, as reported in the version-string of external tools, such as `containerd`, or `runC`. type: "object" properties: ID: description: "Actual commit ID of external tool." type: "string" example: "cfb82a876ecc11b5ca0977d1733adbe58599088a" Expected: description: | Commit ID of external tool expected by dockerd as set at build time. type: "string" example: "2d41c047c83e09a6d61d464906feb2a2f3c52aa4" SwarmInfo: description: | Represents generic information about the swarm. type: "object" properties: NodeID: description: "Unique identifier of this node in the swarm." type: "string" default: "" example: "k67qz4598weg5unwwffg6z1m1" NodeAddr: description: | IP address at which this node can be reached by other nodes in the swarm. type: "string" default: "" example: "10.0.0.46" LocalNodeState: $ref: "#/definitions/LocalNodeState" ControlAvailable: type: "boolean" default: false example: true Error: type: "string" default: "" RemoteManagers: description: | List of IDs and addresses of other managers in the swarm. type: "array" default: null x-nullable: true items: $ref: "#/definitions/PeerNode" example: - NodeID: "71izy0goik036k48jg985xnds" Addr: "10.0.0.158:2377" - NodeID: "79y6h1o4gv8n120drcprv5nmc" Addr: "10.0.0.159:2377" - NodeID: "k67qz4598weg5unwwffg6z1m1" Addr: "10.0.0.46:2377" Nodes: description: "Total number of nodes in the swarm." type: "integer" x-nullable: true example: 4 Managers: description: "Total number of managers in the swarm." type: "integer" x-nullable: true example: 3 Cluster: $ref: "#/definitions/ClusterInfo" LocalNodeState: description: "Current local status of this node." type: "string" default: "" enum: - "" - "inactive" - "pending" - "active" - "error" - "locked" example: "active" PeerNode: description: "Represents a peer-node in the swarm" properties: NodeID: description: "Unique identifier of this node in the swarm." type: "string" Addr: description: | IP address and ports at which this node can be reached. type: "string" NetworkAttachmentConfig: description: | Specifies how a service should be attached to a particular network. type: "object" properties: Target: description: | The target network for attachment. Must be a network name or ID. type: "string" Aliases: description: | Discoverable alternate names for the service on this network. type: "array" items: type: "string" DriverOpts: description: | Driver attachment options for the network target.
type: "object" additionalProperties: type: "string" paths: /containers/json: get: summary: "List containers" description: | Returns a list of containers. For details on the format, see the [inspect endpoint](#operation/ContainerInspect). Note that it uses a different, smaller representation of a container than inspecting a single container. For example, the list of linked containers is not propagated . operationId: "ContainerList" produces: - "application/json" parameters: - name: "all" in: "query" description: | Return all containers. By default, only running containers are shown. type: "boolean" default: false - name: "limit" in: "query" description: | Return this number of most recently created containers, including non-running ones. type: "integer" - name: "size" in: "query" description: | Return the size of container as fields `SizeRw` and `SizeRootFs`. type: "boolean" default: false - name: "filters" in: "query" description: | Filters to process on the container list, encoded as JSON (a `map[string][]string`). For example, `{"status": ["paused"]}` will only return paused containers. Available filters: - `ancestor`=(`<image-name>[:<tag>]`, `<image id>`, or `<image@digest>`) - `before`=(`<container id>` or `<container name>`) - `expose`=(`<port>[/<proto>]`|`<startport-endport>/[<proto>]`) - `exited=<int>` containers with exit code of `<int>` - `health`=(`starting`|`healthy`|`unhealthy`|`none`) - `id=<ID>` a container's ID - `isolation=`(`default`|`process`|`hyperv`) (Windows daemon only) - `is-task=`(`true`|`false`) - `label=key` or `label="key=value"` of a container label - `name=<name>` a container's name - `network`=(`<network id>` or `<network name>`) - `publish`=(`<port>[/<proto>]`|`<startport-endport>/[<proto>]`) - `since`=(`<container id>` or `<container name>`) - `status=`(`created`|`restarting`|`running`|`removing`|`paused`|`exited`|`dead`) - `volume`=(`<volume name>` or `<mount point destination>`) type: "string" responses: 200: description: "no error" schema: $ref: "#/definitions/ContainerSummary" examples: application/json: - Id: "8dfafdbc3a40" Names: - "/boring_feynman" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 1" Created: 1367854155 State: "Exited" Status: "Exit 0" Ports: - PrivatePort: 2222 PublicPort: 3333 Type: "tcp" Labels: com.example.vendor: "Acme" com.example.license: "GPL" com.example.version: "1.0" SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "2cdc4edb1ded3631c81f57966563e5c8525b81121bb3706a9a9a3ae102711f3f" Gateway: "172.17.0.1" IPAddress: "172.17.0.2" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:02" Mounts: - Name: "fac362...80535" Source: "/data" Destination: "/data" Driver: "local" Mode: "ro,Z" RW: false Propagation: "" - Id: "9cd87474be90" Names: - "/coolName" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 222222" Created: 1367854155 State: "Exited" Status: "Exit 0" Ports: [] Labels: {} SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "88eaed7b37b38c2a3f0c4bc796494fdf51b270c2d22656412a2ca5d559a64d7a" Gateway: "172.17.0.1" IPAddress: "172.17.0.8" IPPrefixLen: 16 IPv6Gateway: "" 
GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:08" Mounts: [] - Id: "3176a2479c92" Names: - "/sleepy_dog" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 3333333333333333" Created: 1367854154 State: "Exited" Status: "Exit 0" Ports: [] Labels: {} SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "8b27c041c30326d59cd6e6f510d4f8d1d570a228466f956edf7815508f78e30d" Gateway: "172.17.0.1" IPAddress: "172.17.0.6" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:06" Mounts: [] - Id: "4cb07b47f9fb" Names: - "/running_cat" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 444444444444444444444444444444444" Created: 1367854152 State: "Exited" Status: "Exit 0" Ports: [] Labels: {} SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "d91c7b2f0644403d7ef3095985ea0e2370325cd2332ff3a3225c4247328e66e9" Gateway: "172.17.0.1" IPAddress: "172.17.0.5" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:05" Mounts: [] 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Container"] /containers/create: post: summary: "Create a container" operationId: "ContainerCreate" consumes: - "application/json" - "application/octet-stream" produces: - "application/json" parameters: - name: "name" in: "query" description: | Assign the specified name to the container. Must match `/?[a-zA-Z0-9][a-zA-Z0-9_.-]+`. 
type: "string" pattern: "^/?[a-zA-Z0-9][a-zA-Z0-9_.-]+$" - name: "body" in: "body" description: "Container to create" schema: allOf: - $ref: "#/definitions/ContainerConfig" - type: "object" properties: HostConfig: $ref: "#/definitions/HostConfig" NetworkingConfig: $ref: "#/definitions/NetworkingConfig" example: Hostname: "" Domainname: "" User: "" AttachStdin: false AttachStdout: true AttachStderr: true Tty: false OpenStdin: false StdinOnce: false Env: - "FOO=bar" - "BAZ=quux" Cmd: - "date" Entrypoint: "" Image: "ubuntu" Labels: com.example.vendor: "Acme" com.example.license: "GPL" com.example.version: "1.0" Volumes: /volumes/data: {} WorkingDir: "" NetworkDisabled: false MacAddress: "12:34:56:78:9a:bc" ExposedPorts: 22/tcp: {} StopSignal: "SIGTERM" StopTimeout: 10 HostConfig: Binds: - "/tmp:/tmp" Links: - "redis3:redis" Memory: 0 MemorySwap: 0 MemoryReservation: 0 KernelMemory: 0 NanoCpus: 500000 CpuPercent: 80 CpuShares: 512 CpuPeriod: 100000 CpuRealtimePeriod: 1000000 CpuRealtimeRuntime: 10000 CpuQuota: 50000 CpusetCpus: "0,1" CpusetMems: "0,1" MaximumIOps: 0 MaximumIOBps: 0 BlkioWeight: 300 BlkioWeightDevice: - {} BlkioDeviceReadBps: - {} BlkioDeviceReadIOps: - {} BlkioDeviceWriteBps: - {} BlkioDeviceWriteIOps: - {} DeviceRequests: - Driver: "nvidia" Count: -1 DeviceIDs": ["0", "1", "GPU-fef8089b-4820-abfc-e83e-94318197576e"] Capabilities: [["gpu", "nvidia", "compute"]] Options: property1: "string" property2: "string" MemorySwappiness: 60 OomKillDisable: false OomScoreAdj: 500 PidMode: "" PidsLimit: 0 PortBindings: 22/tcp: - HostPort: "11022" PublishAllPorts: false Privileged: false ReadonlyRootfs: false Dns: - "8.8.8.8" DnsOptions: - "" DnsSearch: - "" VolumesFrom: - "parent" - "other:ro" CapAdd: - "NET_ADMIN" CapDrop: - "MKNOD" GroupAdd: - "newgroup" RestartPolicy: Name: "" MaximumRetryCount: 0 AutoRemove: true NetworkMode: "bridge" Devices: [] Ulimits: - {} LogConfig: Type: "json-file" Config: {} SecurityOpt: [] StorageOpt: {} CgroupParent: "" VolumeDriver: "" ShmSize: 67108864 NetworkingConfig: EndpointsConfig: isolated_nw: IPAMConfig: IPv4Address: "172.20.30.33" IPv6Address: "2001:db8:abcd::3033" LinkLocalIPs: - "169.254.34.68" - "fe80::3468" Links: - "container_1" - "container_2" Aliases: - "server_x" - "server_y" required: true responses: 201: description: "Container created successfully" schema: type: "object" title: "ContainerCreateResponse" description: "OK response to ContainerCreate operation" required: [Id, Warnings] properties: Id: description: "The ID of the created container" type: "string" x-nullable: false Warnings: description: "Warnings encountered when creating the container" type: "array" x-nullable: false items: type: "string" examples: application/json: Id: "e90e34656806" Warnings: [] 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such image" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such image: c2ada9df5af8" 409: description: "conflict" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Container"] /containers/{id}/json: get: summary: "Inspect a container" description: "Return low-level information about a container." 
operationId: "ContainerInspect" produces: - "application/json" responses: 200: description: "no error" schema: type: "object" title: "ContainerInspectResponse" properties: Id: description: "The ID of the container" type: "string" Created: description: "The time the container was created" type: "string" Path: description: "The path to the command being run" type: "string" Args: description: "The arguments to the command being run" type: "array" items: type: "string" State: x-nullable: true $ref: "#/definitions/ContainerState" Image: description: "The container's image ID" type: "string" ResolvConfPath: type: "string" HostnamePath: type: "string" HostsPath: type: "string" LogPath: type: "string" Name: type: "string" RestartCount: type: "integer" Driver: type: "string" Platform: type: "string" MountLabel: type: "string" ProcessLabel: type: "string" AppArmorProfile: type: "string" ExecIDs: description: "IDs of exec instances that are running in the container." type: "array" items: type: "string" x-nullable: true HostConfig: $ref: "#/definitions/HostConfig" GraphDriver: $ref: "#/definitions/GraphDriverData" SizeRw: description: | The size of files that have been created or changed by this container. type: "integer" format: "int64" SizeRootFs: description: "The total size of all the files in this container." type: "integer" format: "int64" Mounts: type: "array" items: $ref: "#/definitions/MountPoint" Config: $ref: "#/definitions/ContainerConfig" NetworkSettings: $ref: "#/definitions/NetworkSettings" examples: application/json: AppArmorProfile: "" Args: - "-c" - "exit 9" Config: AttachStderr: true AttachStdin: false AttachStdout: true Cmd: - "/bin/sh" - "-c" - "exit 9" Domainname: "" Env: - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" Healthcheck: Test: ["CMD-SHELL", "exit 0"] Hostname: "ba033ac44011" Image: "ubuntu" Labels: com.example.vendor: "Acme" com.example.license: "GPL" com.example.version: "1.0" MacAddress: "" NetworkDisabled: false OpenStdin: false StdinOnce: false Tty: false User: "" Volumes: /volumes/data: {} WorkingDir: "" StopSignal: "SIGTERM" StopTimeout: 10 Created: "2015-01-06T15:47:31.485331387Z" Driver: "devicemapper" ExecIDs: - "b35395de42bc8abd327f9dd65d913b9ba28c74d2f0734eeeae84fa1c616a0fca" - "3fc1232e5cd20c8de182ed81178503dc6437f4e7ef12b52cc5e8de020652f1c4" HostConfig: MaximumIOps: 0 MaximumIOBps: 0 BlkioWeight: 0 BlkioWeightDevice: - {} BlkioDeviceReadBps: - {} BlkioDeviceWriteBps: - {} BlkioDeviceReadIOps: - {} BlkioDeviceWriteIOps: - {} ContainerIDFile: "" CpusetCpus: "" CpusetMems: "" CpuPercent: 80 CpuShares: 0 CpuPeriod: 100000 CpuRealtimePeriod: 1000000 CpuRealtimeRuntime: 10000 Devices: [] DeviceRequests: - Driver: "nvidia" Count: -1 DeviceIDs": ["0", "1", "GPU-fef8089b-4820-abfc-e83e-94318197576e"] Capabilities: [["gpu", "nvidia", "compute"]] Options: property1: "string" property2: "string" IpcMode: "" LxcConf: [] Memory: 0 MemorySwap: 0 MemoryReservation: 0 KernelMemory: 0 OomKillDisable: false OomScoreAdj: 500 NetworkMode: "bridge" PidMode: "" PortBindings: {} Privileged: false ReadonlyRootfs: false PublishAllPorts: false RestartPolicy: MaximumRetryCount: 2 Name: "on-failure" LogConfig: Type: "json-file" Sysctls: net.ipv4.ip_forward: "1" Ulimits: - {} VolumeDriver: "" ShmSize: 67108864 HostnamePath: "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/hostname" HostsPath: "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/hosts" LogPath: 
"/var/lib/docker/containers/1eb5fabf5a03807136561b3c00adcd2992b535d624d5e18b6cdc6a6844d9767b/1eb5fabf5a03807136561b3c00adcd2992b535d624d5e18b6cdc6a6844d9767b-json.log" Id: "ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39" Image: "04c5d3b7b0656168630d3ba35d8889bd0e9caafcaeb3004d2bfbc47e7c5d35d2" MountLabel: "" Name: "/boring_euclid" NetworkSettings: Bridge: "" SandboxID: "" HairpinMode: false LinkLocalIPv6Address: "" LinkLocalIPv6PrefixLen: 0 SandboxKey: "" EndpointID: "" Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 IPAddress: "" IPPrefixLen: 0 IPv6Gateway: "" MacAddress: "" Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "7587b82f0dada3656fda26588aee72630c6fab1536d36e394b2bfbcf898c971d" Gateway: "172.17.0.1" IPAddress: "172.17.0.2" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:12:00:02" Path: "/bin/sh" ProcessLabel: "" ResolvConfPath: "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/resolv.conf" RestartCount: 1 State: Error: "" ExitCode: 9 FinishedAt: "2015-01-06T15:47:32.080254511Z" Health: Status: "healthy" FailingStreak: 0 Log: - Start: "2019-12-22T10:59:05.6385933Z" End: "2019-12-22T10:59:05.8078452Z" ExitCode: 0 Output: "" OOMKilled: false Dead: false Paused: false Pid: 0 Restarting: false Running: true StartedAt: "2015-01-06T15:47:32.072697474Z" Status: "running" Mounts: - Name: "fac362...80535" Source: "/data" Destination: "/data" Driver: "local" Mode: "ro,Z" RW: false Propagation: "" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "size" in: "query" type: "boolean" default: false description: "Return the size of container as fields `SizeRw` and `SizeRootFs`" tags: ["Container"] /containers/{id}/top: get: summary: "List processes running inside a container" description: | On Unix systems, this is done by running the `ps` command. This endpoint is not supported on Windows. operationId: "ContainerTop" responses: 200: description: "no error" schema: type: "object" title: "ContainerTopResponse" description: "OK response to ContainerTop operation" properties: Titles: description: "The ps column titles" type: "array" items: type: "string" Processes: description: | Each process running in the container, where each is process is an array of values corresponding to the titles. type: "array" items: type: "array" items: type: "string" examples: application/json: Titles: - "UID" - "PID" - "PPID" - "C" - "STIME" - "TTY" - "TIME" - "CMD" Processes: - - "root" - "13642" - "882" - "0" - "17:03" - "pts/0" - "00:00:00" - "/bin/bash" - - "root" - "13735" - "13642" - "0" - "17:06" - "pts/0" - "00:00:00" - "sleep 10" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "ps_args" in: "query" description: "The arguments to pass to `ps`. 
For example, `aux`" type: "string" default: "-ef" tags: ["Container"] /containers/{id}/logs: get: summary: "Get container logs" description: | Get `stdout` and `stderr` logs from a container. Note: This endpoint works only for containers with the `json-file` or `journald` logging driver. operationId: "ContainerLogs" responses: 200: description: | logs returned as a stream in response body. For the stream format, [see the documentation for the attach endpoint](#operation/ContainerAttach). Note that unlike the attach endpoint, the logs endpoint does not upgrade the connection and does not set Content-Type. schema: type: "string" format: "binary" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "follow" in: "query" description: "Keep connection after returning logs." type: "boolean" default: false - name: "stdout" in: "query" description: "Return logs from `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Return logs from `stderr`" type: "boolean" default: false - name: "since" in: "query" description: "Only return logs since this time, as a UNIX timestamp" type: "integer" default: 0 - name: "until" in: "query" description: "Only return logs before this time, as a UNIX timestamp" type: "integer" default: 0 - name: "timestamps" in: "query" description: "Add timestamps to every log line" type: "boolean" default: false - name: "tail" in: "query" description: | Only return this number of log lines from the end of the logs. Specify as an integer or `all` to output all log lines. type: "string" default: "all" tags: ["Container"] /containers/{id}/changes: get: summary: "Get changes on a container’s filesystem" description: | Returns which files in a container's filesystem have been added, deleted, or modified. The `Kind` of modification can be one of: - `0`: Modified - `1`: Added - `2`: Deleted operationId: "ContainerChanges" produces: ["application/json"] responses: 200: description: "The list of changes" schema: type: "array" items: type: "object" x-go-name: "ContainerChangeResponseItem" title: "ContainerChangeResponseItem" description: "change item in response to ContainerChanges operation" required: [Path, Kind] properties: Path: description: "Path to file that has changed" type: "string" x-nullable: false Kind: description: "Kind of change" type: "integer" format: "uint8" enum: [0, 1, 2] x-nullable: false examples: application/json: - Path: "/dev" Kind: 0 - Path: "/dev/kmsg" Kind: 1 - Path: "/test" Kind: 1 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/export: get: summary: "Export a container" description: "Export the contents of a container as a tarball." 
operationId: "ContainerExport" produces: - "application/octet-stream" responses: 200: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/stats: get: summary: "Get container stats based on resource usage" description: | This endpoint returns a live stream of a container’s resource usage statistics. The `precpu_stats` is the CPU statistic of the *previous* read, and is used to calculate the CPU usage percentage. It is not an exact copy of the `cpu_stats` field. If either `precpu_stats.online_cpus` or `cpu_stats.online_cpus` is nil then for compatibility with older daemons the length of the corresponding `cpu_usage.percpu_usage` array should be used. On a cgroup v2 host, the following fields are not set * `blkio_stats`: all fields other than `io_service_bytes_recursive` * `cpu_stats`: `cpu_usage.percpu_usage` * `memory_stats`: `max_usage` and `failcnt` Also, `memory_stats.stats` fields are incompatible with cgroup v1. To calculate the values shown by the `stats` command of the docker cli tool the following formulas can be used: * used_memory = `memory_stats.usage - memory_stats.stats.cache` * available_memory = `memory_stats.limit` * Memory usage % = `(used_memory / available_memory) * 100.0` * cpu_delta = `cpu_stats.cpu_usage.total_usage - precpu_stats.cpu_usage.total_usage` * system_cpu_delta = `cpu_stats.system_cpu_usage - precpu_stats.system_cpu_usage` * number_cpus = `lenght(cpu_stats.cpu_usage.percpu_usage)` or `cpu_stats.online_cpus` * CPU usage % = `(cpu_delta / system_cpu_delta) * number_cpus * 100.0` operationId: "ContainerStats" produces: ["application/json"] responses: 200: description: "no error" schema: type: "object" examples: application/json: read: "2015-01-08T22:57:31.547920715Z" pids_stats: current: 3 networks: eth0: rx_bytes: 5338 rx_dropped: 0 rx_errors: 0 rx_packets: 36 tx_bytes: 648 tx_dropped: 0 tx_errors: 0 tx_packets: 8 eth5: rx_bytes: 4641 rx_dropped: 0 rx_errors: 0 rx_packets: 26 tx_bytes: 690 tx_dropped: 0 tx_errors: 0 tx_packets: 9 memory_stats: stats: total_pgmajfault: 0 cache: 0 mapped_file: 0 total_inactive_file: 0 pgpgout: 414 rss: 6537216 total_mapped_file: 0 writeback: 0 unevictable: 0 pgpgin: 477 total_unevictable: 0 pgmajfault: 0 total_rss: 6537216 total_rss_huge: 6291456 total_writeback: 0 total_inactive_anon: 0 rss_huge: 6291456 hierarchical_memory_limit: 67108864 total_pgfault: 964 total_active_file: 0 active_anon: 6537216 total_active_anon: 6537216 total_pgpgout: 414 total_cache: 0 inactive_anon: 0 active_file: 0 pgfault: 964 inactive_file: 0 total_pgpgin: 477 max_usage: 6651904 usage: 6537216 failcnt: 0 limit: 67108864 blkio_stats: {} cpu_stats: cpu_usage: percpu_usage: - 8646879 - 24472255 - 36438778 - 30657443 usage_in_usermode: 50000000 total_usage: 100215355 usage_in_kernelmode: 30000000 system_cpu_usage: 739306590000000 online_cpus: 4 throttling_data: periods: 0 throttled_periods: 0 throttled_time: 0 precpu_stats: cpu_usage: percpu_usage: - 8646879 - 24350896 - 36438778 - 30657443 usage_in_usermode: 50000000 total_usage: 100093996 usage_in_kernelmode: 30000000 system_cpu_usage: 9492140000000 online_cpus: 4 throttling_data: periods: 0 throttled_periods: 0 throttled_time: 0 404: description: "no such 
container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "stream" in: "query" description: | Stream the output. If false, the stats will be output once and then it will disconnect. type: "boolean" default: true - name: "one-shot" in: "query" description: | Only get a single stat instead of waiting for 2 cycles. Must be used with `stream=false`. type: "boolean" default: false tags: ["Container"] /containers/{id}/resize: post: summary: "Resize a container TTY" description: "Resize the TTY for a container." operationId: "ContainerResize" consumes: - "application/octet-stream" produces: - "text/plain" responses: 200: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "cannot resize container" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "h" in: "query" description: "Height of the TTY session in characters" type: "integer" - name: "w" in: "query" description: "Width of the TTY session in characters" type: "integer" tags: ["Container"] /containers/{id}/start: post: summary: "Start a container" operationId: "ContainerStart" responses: 204: description: "no error" 304: description: "container already started" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "detachKeys" in: "query" description: | Override the key sequence for detaching a container. Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,` or `_`. 
type: "string" tags: ["Container"] /containers/{id}/stop: post: summary: "Stop a container" operationId: "ContainerStop" responses: 204: description: "no error" 304: description: "container already stopped" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "t" in: "query" description: "Number of seconds to wait before killing the container" type: "integer" tags: ["Container"] /containers/{id}/restart: post: summary: "Restart a container" operationId: "ContainerRestart" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "t" in: "query" description: "Number of seconds to wait before killing the container" type: "integer" tags: ["Container"] /containers/{id}/kill: post: summary: "Kill a container" description: | Send a POSIX signal to a container, defaulting to killing to the container. operationId: "ContainerKill" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "container is not running" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "Container d37cde0fe4ad63c3a7252023b2f9800282894247d145cb5933ddf6e52cc03a28 is not running" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "signal" in: "query" description: "Signal to send to the container as an integer or string (e.g. `SIGINT`)" type: "string" default: "SIGKILL" tags: ["Container"] /containers/{id}/update: post: summary: "Update a container" description: | Change various configuration options of a container without having to recreate it. operationId: "ContainerUpdate" consumes: ["application/json"] produces: ["application/json"] responses: 200: description: "The container has been updated." 
schema: type: "object" title: "ContainerUpdateResponse" description: "OK response to ContainerUpdate operation" properties: Warnings: type: "array" items: type: "string" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "update" in: "body" required: true schema: allOf: - $ref: "#/definitions/Resources" - type: "object" properties: RestartPolicy: $ref: "#/definitions/RestartPolicy" example: BlkioWeight: 300 CpuShares: 512 CpuPeriod: 100000 CpuQuota: 50000 CpuRealtimePeriod: 1000000 CpuRealtimeRuntime: 10000 CpusetCpus: "0,1" CpusetMems: "0" Memory: 314572800 MemorySwap: 514288000 MemoryReservation: 209715200 KernelMemory: 52428800 RestartPolicy: MaximumRetryCount: 4 Name: "on-failure" tags: ["Container"] /containers/{id}/rename: post: summary: "Rename a container" operationId: "ContainerRename" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "name already in use" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "name" in: "query" required: true description: "New name for the container" type: "string" tags: ["Container"] /containers/{id}/pause: post: summary: "Pause a container" description: | Use the freezer cgroup to suspend all processes in a container. Traditionally, when suspending a process the `SIGSTOP` signal is used, which is observable by the process being suspended. With the freezer cgroup the process is unaware, and unable to capture, that it is being suspended, and subsequently resumed. operationId: "ContainerPause" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/unpause: post: summary: "Unpause a container" description: "Resume a container which has been paused." operationId: "ContainerUnpause" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/attach: post: summary: "Attach to a container" description: | Attach to a container to read its output or send it input. You can attach to the same container multiple times and you can reattach to containers that have been detached. Either the `stream` or `logs` parameter must be `true` for this endpoint to do anything. See the [documentation for the `docker attach` command](https://docs.docker.com/engine/reference/commandline/attach/) for more details. 
### Hijacking This endpoint hijacks the HTTP connection to transport `stdin`, `stdout`, and `stderr` on the same socket. This is the response from the daemon for an attach request: ``` HTTP/1.1 200 OK Content-Type: application/vnd.docker.raw-stream [STREAM] ``` After the headers and two new lines, the TCP connection can now be used for raw, bidirectional communication between the client and server. To hint potential proxies about connection hijacking, the Docker client can also optionally send connection upgrade headers. For example, the client sends this request to upgrade the connection: ``` POST /containers/16253994b7c4/attach?stream=1&stdout=1 HTTP/1.1 Upgrade: tcp Connection: Upgrade ``` The Docker daemon will respond with a `101 UPGRADED` response, and will similarly follow with the raw stream: ``` HTTP/1.1 101 UPGRADED Content-Type: application/vnd.docker.raw-stream Connection: Upgrade Upgrade: tcp [STREAM] ``` ### Stream format When the TTY setting is disabled in [`POST /containers/create`](#operation/ContainerCreate), the stream over the hijacked connection is multiplexed to separate out `stdout` and `stderr`. The stream consists of a series of frames, each containing a header and a payload. The header indicates which stream the frame was written to (`stdout` or `stderr`). It also contains the size of the associated frame encoded in the last four bytes (`uint32`). It is encoded on the first eight bytes like this: ```go header := [8]byte{STREAM_TYPE, 0, 0, 0, SIZE1, SIZE2, SIZE3, SIZE4} ``` `STREAM_TYPE` can be: - 0: `stdin` (is written on `stdout`) - 1: `stdout` - 2: `stderr` `SIZE1, SIZE2, SIZE3, SIZE4` are the four bytes of the `uint32` size encoded as big endian. Following the header is the payload, which is the specified number of bytes of `STREAM_TYPE`. The simplest way to implement this protocol is the following: 1. Read 8 bytes. 2. Choose `stdout` or `stderr` depending on the first byte. 3. Extract the frame size from the last four bytes. 4. Read the extracted size and output it on the correct output. 5. Goto 1. ### Stream format when using a TTY When the TTY setting is enabled in [`POST /containers/create`](#operation/ContainerCreate), the stream is not multiplexed. The data exchanged over the hijacked connection is simply the raw data from the process PTY and client's `stdin`. operationId: "ContainerAttach" produces: - "application/vnd.docker.raw-stream" responses: 101: description: "no error, hints proxy about hijacking" 200: description: "no error, no upgrade header found" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "detachKeys" in: "query" description: | Override the key sequence for detaching a container. Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,` or `_`. type: "string" - name: "logs" in: "query" description: | Replay previous logs from the container. This is useful for attaching to a container that has started and you want to output everything since the container started. If `stream` is also enabled, once all the previous output has been returned, it will seamlessly transition into streaming current output.
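A minimal Go sketch of the five-step demultiplexing loop described in the stream-format section above; it assumes the raw (non-TTY) stream has already been obtained from the hijacked connection and, for simplicity, is readable from stdin:

```go
package main

import (
	"encoding/binary"
	"fmt"
	"io"
	"os"
)

// demux reads multiplexed attach frames from r and writes each payload to
// stdout or stderr according to the STREAM_TYPE byte in the 8-byte header.
func demux(r io.Reader, stdout, stderr io.Writer) error {
	header := make([]byte, 8)
	for {
		// 1. Read 8 bytes.
		if _, err := io.ReadFull(r, header); err != nil {
			if err == io.EOF {
				return nil
			}
			return err
		}
		// 2. Choose stdout or stderr depending on the first byte.
		var dst io.Writer
		switch header[0] {
		case 0, 1: // stdin (echoed on stdout) and stdout
			dst = stdout
		case 2: // stderr
			dst = stderr
		default:
			return fmt.Errorf("unknown stream type %d", header[0])
		}
		// 3. Extract the frame size from the last four bytes (big endian).
		size := binary.BigEndian.Uint32(header[4:8])
		// 4. Read the extracted size and output it on the correct output.
		if _, err := io.CopyN(dst, r, int64(size)); err != nil {
			return err
		}
		// 5. Goto 1.
	}
}

func main() {
	// For demonstration, demultiplex a captured raw stream from stdin.
	if err := demux(os.Stdin, os.Stdout, os.Stderr); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```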
type: "boolean" default: false - name: "stream" in: "query" description: | Stream attached streams from the time the request was made onwards. type: "boolean" default: false - name: "stdin" in: "query" description: "Attach to `stdin`" type: "boolean" default: false - name: "stdout" in: "query" description: "Attach to `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Attach to `stderr`" type: "boolean" default: false tags: ["Container"] /containers/{id}/attach/ws: get: summary: "Attach to a container via a websocket" operationId: "ContainerAttachWebsocket" responses: 101: description: "no error, hints proxy about hijacking" 200: description: "no error, no upgrade header found" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "detachKeys" in: "query" description: | Override the key sequence for detaching a container.Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,`, or `_`. type: "string" - name: "logs" in: "query" description: "Return logs" type: "boolean" default: false - name: "stream" in: "query" description: "Return stream" type: "boolean" default: false - name: "stdin" in: "query" description: "Attach to `stdin`" type: "boolean" default: false - name: "stdout" in: "query" description: "Attach to `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Attach to `stderr`" type: "boolean" default: false tags: ["Container"] /containers/{id}/wait: post: summary: "Wait for a container" description: "Block until a container stops, then returns the exit code." operationId: "ContainerWait" produces: ["application/json"] responses: 200: description: "The container has exit." schema: type: "object" title: "ContainerWaitResponse" description: "OK response to ContainerWait operation" required: [StatusCode] properties: StatusCode: description: "Exit code of the container" type: "integer" x-nullable: false Error: description: "container waiting error, if any" type: "object" properties: Message: description: "Details of an error" type: "string" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "condition" in: "query" description: | Wait until a container state reaches the given condition, either 'not-running' (default), 'next-exit', or 'removed'. type: "string" default: "not-running" tags: ["Container"] /containers/{id}: delete: summary: "Remove a container" operationId: "ContainerDelete" responses: 204: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "conflict" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: | You cannot remove a running container: c2ada9df5af8. 
Stop the container before attempting removal or force remove 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "v" in: "query" description: "Remove anonymous volumes associated with the container." type: "boolean" default: false - name: "force" in: "query" description: "If the container is running, kill it before removing it." type: "boolean" default: false - name: "link" in: "query" description: "Remove the specified link associated with the container." type: "boolean" default: false tags: ["Container"] /containers/{id}/archive: head: summary: "Get information about files in a container" description: | A response header `X-Docker-Container-Path-Stat` is returned, containing a base64-encoded JSON object with some filesystem header information about the path. operationId: "ContainerArchiveInfo" responses: 200: description: "no error" headers: X-Docker-Container-Path-Stat: type: "string" description: | A base64-encoded JSON object with some filesystem header information about the path 400: description: "Bad parameter" schema: allOf: - $ref: "#/definitions/ErrorResponse" - type: "object" properties: message: description: | The error message. Either "must specify path parameter" (path cannot be empty) or "not a directory" (path was asserted to be a directory but exists as a file). type: "string" x-nullable: false 404: description: "Container or path does not exist" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "path" in: "query" required: true description: "Resource in the container’s filesystem to archive." type: "string" tags: ["Container"] get: summary: "Get an archive of a filesystem resource in a container" description: "Get a tar archive of a resource in the filesystem of container id." operationId: "ContainerArchive" produces: ["application/x-tar"] responses: 200: description: "no error" 400: description: "Bad parameter" schema: allOf: - $ref: "#/definitions/ErrorResponse" - type: "object" properties: message: description: | The error message. Either "must specify path parameter" (path cannot be empty) or "not a directory" (path was asserted to be a directory but exists as a file). type: "string" x-nullable: false 404: description: "Container or path does not exist" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "path" in: "query" required: true description: "Resource in the container’s filesystem to archive." type: "string" tags: ["Container"] put: summary: "Extract an archive of files or folders to a directory in a container" description: "Upload a tar archive to be extracted to a path in the filesystem of container id."
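To make the `X-Docker-Container-Path-Stat` header of the HEAD operation above concrete, a hedged Go sketch that issues the request and decodes the header. The daemon address, container name, and path are placeholders; the exact base64 alphabet is not specified above, so both are tried, and the JSON is decoded into a generic map rather than asserting field names:

```go
package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// Assumes a daemon reachable over TCP; adjust the host, container, and path.
	url := "http://localhost:2375/containers/my-container/archive?path=/etc/hostname"
	req, err := http.NewRequest(http.MethodHead, url, nil)
	if err != nil {
		panic(err)
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// The header value is a base64-encoded JSON object; try the standard
	// alphabet first, then the URL-safe one.
	encoded := resp.Header.Get("X-Docker-Container-Path-Stat")
	raw, err := base64.StdEncoding.DecodeString(encoded)
	if err != nil {
		raw, err = base64.URLEncoding.DecodeString(encoded)
	}
	if err != nil {
		panic(err)
	}
	var stat map[string]interface{}
	if err := json.Unmarshal(raw, &stat); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", stat)
}
```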
operationId: "PutContainerArchive" consumes: ["application/x-tar", "application/octet-stream"] responses: 200: description: "The content was extracted successfully" 400: description: "Bad parameter" schema: $ref: "#/definitions/ErrorResponse" 403: description: "Permission denied, the volume or container rootfs is marked as read-only." schema: $ref: "#/definitions/ErrorResponse" 404: description: "No such container or path does not exist inside the container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "path" in: "query" required: true description: "Path to a directory in the container to extract the archive’s contents into. " type: "string" - name: "noOverwriteDirNonDir" in: "query" description: | If `1`, `true`, or `True` then it will be an error if unpacking the given content would cause an existing directory to be replaced with a non-directory and vice versa. type: "string" - name: "copyUIDGID" in: "query" description: | If `1`, `true`, then it will copy UID/GID maps to the dest file or dir type: "string" - name: "inputStream" in: "body" required: true description: | The input stream must be a tar archive compressed with one of the following algorithms: `identity` (no compression), `gzip`, `bzip2`, or `xz`. schema: type: "string" format: "binary" tags: ["Container"] /containers/prune: post: summary: "Delete stopped containers" produces: - "application/json" operationId: "ContainerPrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `until=<timestamp>` Prune containers created before this timestamp. The `<timestamp>` can be Unix timestamps, date formatted timestamps, or Go duration strings (e.g. `10m`, `1h30m`) computed relative to the daemon machine’s time. - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune containers with (or without, in case `label!=...` is used) the specified labels. type: "string" responses: 200: description: "No error" schema: type: "object" title: "ContainerPruneResponse" properties: ContainersDeleted: description: "Container IDs that were deleted" type: "array" items: type: "string" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Container"] /images/json: get: summary: "List Images" description: "Returns a list of images on the server. Note that it uses a different, smaller representation of an image than inspecting a single image." 
operationId: "ImageList" produces: - "application/json" responses: 200: description: "Summary image data for the images matching the query" schema: type: "array" items: $ref: "#/definitions/ImageSummary" examples: application/json: - Id: "sha256:e216a057b1cb1efc11f8a268f37ef62083e70b1b38323ba252e25ac88904a7e8" ParentId: "" RepoTags: - "ubuntu:12.04" - "ubuntu:precise" RepoDigests: - "ubuntu@sha256:992069aee4016783df6345315302fa59681aae51a8eeb2f889dea59290f21787" Created: 1474925151 Size: 103579269 VirtualSize: 103579269 SharedSize: 0 Labels: {} Containers: 2 - Id: "sha256:3e314f95dcace0f5e4fd37b10862fe8398e3c60ed36600bc0ca5fda78b087175" ParentId: "" RepoTags: - "ubuntu:12.10" - "ubuntu:quantal" RepoDigests: - "ubuntu@sha256:002fba3e3255af10be97ea26e476692a7ebed0bb074a9ab960b2e7a1526b15d7" - "ubuntu@sha256:68ea0200f0b90df725d99d823905b04cf844f6039ef60c60bf3e019915017bd3" Created: 1403128455 Size: 172064416 VirtualSize: 172064416 SharedSize: 0 Labels: {} Containers: 5 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "all" in: "query" description: "Show all images. Only images from a final layer (no children) are shown by default." type: "boolean" default: false - name: "filters" in: "query" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the images list. Available filters: - `before`=(`<image-name>[:<tag>]`, `<image id>` or `<image@digest>`) - `dangling=true` - `label=key` or `label="key=value"` of an image label - `reference`=(`<image-name>[:<tag>]`) - `since`=(`<image-name>[:<tag>]`, `<image id>` or `<image@digest>`) type: "string" - name: "shared-size" in: "query" description: "Compute and show shared size as a `SharedSize` field on each image." type: "boolean" default: false - name: "digests" in: "query" description: "Show digest information as a `RepoDigests` field on each image." type: "boolean" default: false tags: ["Image"] /build: post: summary: "Build an image" description: | Build an image from a tar archive with a `Dockerfile` in it. The `Dockerfile` specifies how the image is built from the tar archive. It is typically in the archive's root, but can be at a different path or have a different name by specifying the `dockerfile` parameter. [See the `Dockerfile` reference for more information](https://docs.docker.com/engine/reference/builder/). The Docker daemon performs a preliminary validation of the `Dockerfile` before starting the build, and returns an error if the syntax is incorrect. After that, each instruction is run one-by-one until the ID of the new image is output. The build is canceled if the client drops the connection by quitting or being killed. operationId: "ImageBuild" consumes: - "application/octet-stream" produces: - "application/json" parameters: - name: "inputStream" in: "body" description: "A tar archive compressed with one of the following algorithms: identity (no compression), gzip, bzip2, xz." schema: type: "string" format: "binary" - name: "dockerfile" in: "query" description: "Path within the build context to the `Dockerfile`. This is ignored if `remote` is specified and points to an external `Dockerfile`." type: "string" default: "Dockerfile" - name: "t" in: "query" description: "A name and optional tag to apply to the image in the `name:tag` format. If you omit the tag the default `latest` value is assumed. You can provide several `t` parameters." 
type: "string" - name: "extrahosts" in: "query" description: "Extra hosts to add to /etc/hosts" type: "string" - name: "remote" in: "query" description: "A Git repository URI or HTTP/HTTPS context URI. If the URI points to a single text file, the file’s contents are placed into a file called `Dockerfile` and the image is built from that file. If the URI points to a tarball, the file is downloaded by the daemon and the contents therein used as the context for the build. If the URI points to a tarball and the `dockerfile` parameter is also specified, there must be a file with the corresponding path inside the tarball." type: "string" - name: "q" in: "query" description: "Suppress verbose build output." type: "boolean" default: false - name: "nocache" in: "query" description: "Do not use the cache when building the image." type: "boolean" default: false - name: "cachefrom" in: "query" description: "JSON array of images used for build cache resolution." type: "string" - name: "pull" in: "query" description: "Attempt to pull the image even if an older image exists locally." type: "string" - name: "rm" in: "query" description: "Remove intermediate containers after a successful build." type: "boolean" default: true - name: "forcerm" in: "query" description: "Always remove intermediate containers, even upon failure." type: "boolean" default: false - name: "memory" in: "query" description: "Set memory limit for build." type: "integer" - name: "memswap" in: "query" description: "Total memory (memory + swap). Set as `-1` to disable swap." type: "integer" - name: "cpushares" in: "query" description: "CPU shares (relative weight)." type: "integer" - name: "cpusetcpus" in: "query" description: "CPUs in which to allow execution (e.g., `0-3`, `0,1`)." type: "string" - name: "cpuperiod" in: "query" description: "The length of a CPU period in microseconds." type: "integer" - name: "cpuquota" in: "query" description: "Microseconds of CPU time that the container can get in a CPU period." type: "integer" - name: "buildargs" in: "query" description: > JSON map of string pairs for build-time variables. Users pass these values at build-time. Docker uses the buildargs as the environment context for commands run via the `Dockerfile` RUN instruction, or for variable expansion in other `Dockerfile` instructions. This is not meant for passing secret values. For example, the build arg `FOO=bar` would become `{"FOO":"bar"}` in JSON. This would result in the query parameter `buildargs={"FOO":"bar"}`. Note that `{"FOO":"bar"}` should be URI component encoded. [Read more about the buildargs instruction.](https://docs.docker.com/engine/reference/builder/#arg) type: "string" - name: "shmsize" in: "query" description: "Size of `/dev/shm` in bytes. The size must be greater than 0. If omitted the system uses 64MB." type: "integer" - name: "squash" in: "query" description: "Squash the resulting images layers into a single layer. *(Experimental release only.)*" type: "boolean" - name: "labels" in: "query" description: "Arbitrary key/value labels to set on the image, as a JSON map of string pairs." type: "string" - name: "networkmode" in: "query" description: | Sets the networking mode for the run commands during build. Supported standard values are: `bridge`, `host`, `none`, and `container:<name|id>`. Any other value is taken as a custom network's name or ID to which this container should connect to. 
type: "string" - name: "Content-type" in: "header" type: "string" enum: - "application/x-tar" default: "application/x-tar" - name: "X-Registry-Config" in: "header" description: | This is a base64-encoded JSON object with auth configurations for multiple registries that a build may refer to. The key is a registry URL, and the value is an auth configuration object, [as described in the authentication section](#section/Authentication). For example: ``` { "docker.example.com": { "username": "janedoe", "password": "hunter2" }, "https://index.docker.io/v1/": { "username": "mobydock", "password": "conta1n3rize14" } } ``` Only the registry domain name (and port if not the default 443) are required. However, for legacy reasons, the Docker Hub registry must be specified with both a `https://` prefix and a `/v1/` suffix even though Docker will prefer to use the v2 registry API. type: "string" - name: "platform" in: "query" description: "Platform in the format os[/arch[/variant]]" type: "string" default: "" - name: "target" in: "query" description: "Target build stage" type: "string" default: "" - name: "outputs" in: "query" description: "BuildKit output configuration" type: "string" default: "" responses: 200: description: "no error" 400: description: "Bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Image"] /build/prune: post: summary: "Delete builder cache" produces: - "application/json" operationId: "BuildPrune" parameters: - name: "keep-storage" in: "query" description: "Amount of disk space in bytes to keep for cache" type: "integer" format: "int64" - name: "all" in: "query" type: "boolean" description: "Remove all types of build cache" - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the list of build cache objects. Available filters: - `until=<duration>`: duration relative to daemon's time, during which build cache was not used, in Go's duration format (e.g., '24h') - `id=<id>` - `parent=<id>` - `type=<string>` - `description=<string>` - `inuse` - `shared` - `private` responses: 200: description: "No error" schema: type: "object" title: "BuildPruneResponse" properties: CachesDeleted: type: "array" items: description: "ID of build cache object" type: "string" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Image"] /images/create: post: summary: "Create an image" description: "Create an image by either pulling it from a registry or importing it." operationId: "ImageCreate" consumes: - "text/plain" - "application/octet-stream" produces: - "application/json" responses: 200: description: "no error" 404: description: "repository does not exist or no read access" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "fromImage" in: "query" description: "Name of the image to pull. The name may include a tag or digest. This parameter may only be used when pulling an image. The pull is cancelled if the HTTP connection is closed." type: "string" - name: "fromSrc" in: "query" description: "Source to import. The value may be a URL from which the image can be retrieved or `-` to read the image from the request body. This parameter may only be used when importing an image." 
type: "string" - name: "repo" in: "query" description: "Repository name given to an image when it is imported. The repo may include a tag. This parameter may only be used when importing an image." type: "string" - name: "tag" in: "query" description: "Tag or digest. If empty when pulling an image, this causes all tags for the given image to be pulled." type: "string" - name: "message" in: "query" description: "Set commit message for imported image." type: "string" - name: "inputImage" in: "body" description: "Image content if the value `-` has been specified in fromSrc query parameter" schema: type: "string" required: false - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration. Refer to the [authentication section](#section/Authentication) for details. type: "string" - name: "platform" in: "query" description: "Platform in the format os[/arch[/variant]]" type: "string" default: "" tags: ["Image"] /images/{name}/json: get: summary: "Inspect an image" description: "Return low-level information about an image." operationId: "ImageInspect" produces: - "application/json" responses: 200: description: "No error" schema: $ref: "#/definitions/Image" examples: application/json: Id: "sha256:85f05633ddc1c50679be2b16a0479ab6f7637f8884e0cfe0f4d20e1ebb3d6e7c" Container: "cb91e48a60d01f1e27028b4fc6819f4f290b3cf12496c8176ec714d0d390984a" Comment: "" Os: "linux" Architecture: "amd64" Parent: "sha256:91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c" ContainerConfig: Tty: false Hostname: "e611e15f9c9d" Domainname: "" AttachStdout: false PublishService: "" AttachStdin: false OpenStdin: false StdinOnce: false NetworkDisabled: false OnBuild: [] Image: "91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c" User: "" WorkingDir: "" MacAddress: "" AttachStderr: false Labels: com.example.license: "GPL" com.example.version: "1.0" com.example.vendor: "Acme" Env: - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" Cmd: - "/bin/sh" - "-c" - "#(nop) LABEL com.example.vendor=Acme com.example.license=GPL com.example.version=1.0" DockerVersion: "1.9.0-dev" VirtualSize: 188359297 Size: 0 Author: "" Created: "2015-09-10T08:30:53.26995814Z" GraphDriver: Name: "aufs" Data: {} RepoDigests: - "localhost:5000/test/busybox/example@sha256:cbbf2f9a99b47fc460d422812b6a5adff7dfee951d8fa2e4a98caa0382cfbdbf" RepoTags: - "example:1.0" - "example:latest" - "example:stable" Config: Image: "91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c" NetworkDisabled: false OnBuild: [] StdinOnce: false PublishService: "" AttachStdin: false OpenStdin: false Domainname: "" AttachStdout: false Tty: false Hostname: "e611e15f9c9d" Cmd: - "/bin/bash" Env: - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" Labels: com.example.vendor: "Acme" com.example.version: "1.0" com.example.license: "GPL" MacAddress: "" AttachStderr: false WorkingDir: "" User: "" RootFS: Type: "layers" Layers: - "sha256:1834950e52ce4d5a88a1bbd131c537f4d0e56d10ff0dd69e66be3b7dfa9df7e6" - "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such image: someimage (tag: latest)" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or id" type: "string" required: true tags: ["Image"] /images/{name}/history: get: summary: "Get the history of an image" 
description: "Return parent layers of an image." operationId: "ImageHistory" produces: ["application/json"] responses: 200: description: "List of image layers" schema: type: "array" items: type: "object" x-go-name: HistoryResponseItem title: "HistoryResponseItem" description: "individual image layer information in response to ImageHistory operation" required: [Id, Created, CreatedBy, Tags, Size, Comment] properties: Id: type: "string" x-nullable: false Created: type: "integer" format: "int64" x-nullable: false CreatedBy: type: "string" x-nullable: false Tags: type: "array" items: type: "string" Size: type: "integer" format: "int64" x-nullable: false Comment: type: "string" x-nullable: false examples: application/json: - Id: "3db9c44f45209632d6050b35958829c3a2aa256d81b9a7be45b362ff85c54710" Created: 1398108230 CreatedBy: "/bin/sh -c #(nop) ADD file:eb15dbd63394e063b805a3c32ca7bf0266ef64676d5a6fab4801f2e81e2a5148 in /" Tags: - "ubuntu:lucid" - "ubuntu:10.04" Size: 182964289 Comment: "" - Id: "6cfa4d1f33fb861d4d114f43b25abd0ac737509268065cdfd69d544a59c85ab8" Created: 1398108222 CreatedBy: "/bin/sh -c #(nop) MAINTAINER Tianon Gravi <[email protected]> - mkimage-debootstrap.sh -i iproute,iputils-ping,ubuntu-minimal -t lucid.tar.xz lucid http://archive.ubuntu.com/ubuntu/" Tags: [] Size: 0 Comment: "" - Id: "511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158" Created: 1371157430 CreatedBy: "" Tags: - "scratch12:latest" - "scratch:latest" Size: 0 Comment: "Imported from -" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID" type: "string" required: true tags: ["Image"] /images/{name}/push: post: summary: "Push an image" description: | Push an image to a registry. If you wish to push an image on to a private registry, that image must already have a tag which references the registry. For example, `registry.example.com/myimage:latest`. The push is cancelled if the HTTP connection is closed. operationId: "ImagePush" consumes: - "application/octet-stream" responses: 200: description: "No error" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID." type: "string" required: true - name: "tag" in: "query" description: "The tag to associate with the image on the registry." type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration. Refer to the [authentication section](#section/Authentication) for details. type: "string" required: true tags: ["Image"] /images/{name}/tag: post: summary: "Tag an image" description: "Tag an image so that it becomes part of a repository." operationId: "ImageTag" responses: 201: description: "No error" 400: description: "Bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Conflict" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID to tag." type: "string" required: true - name: "repo" in: "query" description: "The repository to tag in. For example, `someuser/someimage`." 
type: "string" - name: "tag" in: "query" description: "The name of the new tag." type: "string" tags: ["Image"] /images/{name}: delete: summary: "Remove an image" description: | Remove an image, along with any untagged parent images that were referenced by that image. Images can't be removed if they have descendant images, are being used by a running container or are being used by a build. operationId: "ImageDelete" produces: ["application/json"] responses: 200: description: "The image was deleted successfully" schema: type: "array" items: $ref: "#/definitions/ImageDeleteResponseItem" examples: application/json: - Untagged: "3e2f21a89f" - Deleted: "3e2f21a89f" - Deleted: "53b4f83ac9" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Conflict" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID" type: "string" required: true - name: "force" in: "query" description: "Remove the image even if it is being used by stopped containers or has other tags" type: "boolean" default: false - name: "noprune" in: "query" description: "Do not delete untagged parent images" type: "boolean" default: false tags: ["Image"] /images/search: get: summary: "Search images" description: "Search for an image on Docker Hub." operationId: "ImageSearch" produces: - "application/json" responses: 200: description: "No error" schema: type: "array" items: type: "object" title: "ImageSearchResponseItem" properties: description: type: "string" is_official: type: "boolean" is_automated: type: "boolean" name: type: "string" star_count: type: "integer" examples: application/json: - description: "" is_official: false is_automated: false name: "wma55/u1210sshd" star_count: 0 - description: "" is_official: false is_automated: false name: "jdswinbank/sshd" star_count: 0 - description: "" is_official: false is_automated: false name: "vgauthier/sshd" star_count: 0 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "term" in: "query" description: "Term to search" type: "string" required: true - name: "limit" in: "query" description: "Maximum number of results to return" type: "integer" - name: "filters" in: "query" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the images list. Available filters: - `is-automated=(true|false)` - `is-official=(true|false)` - `stars=<number>` Matches images that has at least 'number' stars. type: "string" tags: ["Image"] /images/prune: post: summary: "Delete unused images" produces: - "application/json" operationId: "ImagePrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `dangling=<boolean>` When set to `true` (or `1`), prune only unused *and* untagged images. When set to `false` (or `0`), all unused images are pruned. - `until=<string>` Prune images created before this timestamp. The `<timestamp>` can be Unix timestamps, date formatted timestamps, or Go duration strings (e.g. `10m`, `1h30m`) computed relative to the daemon machine’s time. - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune images with (or without, in case `label!=...` is used) the specified labels. 
type: "string" responses: 200: description: "No error" schema: type: "object" title: "ImagePruneResponse" properties: ImagesDeleted: description: "Images that were deleted" type: "array" items: $ref: "#/definitions/ImageDeleteResponseItem" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Image"] /auth: post: summary: "Check auth configuration" description: | Validate credentials for a registry and, if available, get an identity token for accessing the registry without password. operationId: "SystemAuth" consumes: ["application/json"] produces: ["application/json"] responses: 200: description: "An identity token was generated successfully." schema: type: "object" title: "SystemAuthResponse" required: [Status] properties: Status: description: "The status of the authentication" type: "string" x-nullable: false IdentityToken: description: "An opaque token used to authenticate a user after a successful login" type: "string" x-nullable: false examples: application/json: Status: "Login Succeeded" IdentityToken: "9cbaf023786cd7..." 204: description: "No error" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "authConfig" in: "body" description: "Authentication to check" schema: $ref: "#/definitions/AuthConfig" tags: ["System"] /info: get: summary: "Get system information" operationId: "SystemInfo" produces: - "application/json" responses: 200: description: "No error" schema: $ref: "#/definitions/SystemInfo" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /version: get: summary: "Get version" description: "Returns the version of Docker that is running and various information about the system that Docker is running on." operationId: "SystemVersion" produces: ["application/json"] responses: 200: description: "no error" schema: $ref: "#/definitions/SystemVersion" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /_ping: get: summary: "Ping" description: "This is a dummy endpoint you can use to test if the server is accessible." operationId: "SystemPing" produces: ["text/plain"] responses: 200: description: "no error" schema: type: "string" example: "OK" headers: API-Version: type: "string" description: "Max API Version the server supports" Builder-Version: type: "string" description: "Default version of docker image builder" Docker-Experimental: type: "boolean" description: "If the server is running with experimental mode enabled" Cache-Control: type: "string" default: "no-cache, no-store, must-revalidate" Pragma: type: "string" default: "no-cache" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" headers: Cache-Control: type: "string" default: "no-cache, no-store, must-revalidate" Pragma: type: "string" default: "no-cache" tags: ["System"] head: summary: "Ping" description: "This is a dummy endpoint you can use to test if the server is accessible." 
operationId: "SystemPingHead" produces: ["text/plain"] responses: 200: description: "no error" schema: type: "string" example: "(empty)" headers: API-Version: type: "string" description: "Max API Version the server supports" Builder-Version: type: "string" description: "Default version of docker image builder" Docker-Experimental: type: "boolean" description: "If the server is running with experimental mode enabled" Cache-Control: type: "string" default: "no-cache, no-store, must-revalidate" Pragma: type: "string" default: "no-cache" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /commit: post: summary: "Create a new image from a container" operationId: "ImageCommit" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "containerConfig" in: "body" description: "The container configuration" schema: $ref: "#/definitions/ContainerConfig" - name: "container" in: "query" description: "The ID or name of the container to commit" type: "string" - name: "repo" in: "query" description: "Repository name for the created image" type: "string" - name: "tag" in: "query" description: "Tag name for the create image" type: "string" - name: "comment" in: "query" description: "Commit message" type: "string" - name: "author" in: "query" description: "Author of the image (e.g., `John Hannibal Smith <[email protected]>`)" type: "string" - name: "pause" in: "query" description: "Whether to pause the container before committing" type: "boolean" default: true - name: "changes" in: "query" description: "`Dockerfile` instructions to apply while committing" type: "string" tags: ["Image"] /events: get: summary: "Monitor events" description: | Stream real-time events from the server. Various objects within Docker report events when something happens to them. 
Containers report these events: `attach`, `commit`, `copy`, `create`, `destroy`, `detach`, `die`, `exec_create`, `exec_detach`, `exec_start`, `exec_die`, `export`, `health_status`, `kill`, `oom`, `pause`, `rename`, `resize`, `restart`, `start`, `stop`, `top`, `unpause`, `update`, and `prune` Images report these events: `delete`, `import`, `load`, `pull`, `push`, `save`, `tag`, `untag`, and `prune` Volumes report these events: `create`, `mount`, `unmount`, `destroy`, and `prune` Networks report these events: `create`, `connect`, `disconnect`, `destroy`, `update`, `remove`, and `prune` The Docker daemon reports these events: `reload` Services report these events: `create`, `update`, and `remove` Nodes report these events: `create`, `update`, and `remove` Secrets report these events: `create`, `update`, and `remove` Configs report these events: `create`, `update`, and `remove` The Builder reports `prune` events operationId: "SystemEvents" produces: - "application/json" responses: 200: description: "no error" schema: type: "object" title: "SystemEventsResponse" properties: Type: description: "The type of object emitting the event" type: "string" Action: description: "The type of event" type: "string" Actor: type: "object" properties: ID: description: "The ID of the object emitting the event" type: "string" Attributes: description: "Various key/value attributes of the object, depending on its type" type: "object" additionalProperties: type: "string" time: description: "Timestamp of event" type: "integer" timeNano: description: "Timestamp of event, with nanosecond accuracy" type: "integer" format: "int64" examples: application/json: Type: "container" Action: "create" Actor: ID: "ede54ee1afda366ab42f824e8a5ffd195155d853ceaec74a927f249ea270c743" Attributes: com.example.some-label: "some-label-value" image: "alpine" name: "my-container" time: 1461943101 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "since" in: "query" description: "Show events created since this timestamp then stream new events." type: "string" - name: "until" in: "query" description: "Show events created until this timestamp then stop streaming." type: "string" - name: "filters" in: "query" description: | A JSON encoded value of filters (a `map[string][]string`) to process on the event list. 
Available filters: - `config=<string>` config name or ID - `container=<string>` container name or ID - `daemon=<string>` daemon name or ID - `event=<string>` event type - `image=<string>` image name or ID - `label=<string>` image or container label - `network=<string>` network name or ID - `node=<string>` node ID - `plugin`=<string> plugin name or ID - `scope`=<string> local or swarm - `secret=<string>` secret name or ID - `service=<string>` service name or ID - `type=<string>` object to filter by, one of `container`, `image`, `volume`, `network`, `daemon`, `plugin`, `node`, `service`, `secret` or `config` - `volume=<string>` volume name type: "string" tags: ["System"] /system/df: get: summary: "Get data usage information" operationId: "SystemDataUsage" responses: 200: description: "no error" schema: type: "object" title: "SystemDataUsageResponse" properties: LayersSize: type: "integer" format: "int64" Images: type: "array" items: $ref: "#/definitions/ImageSummary" Containers: type: "array" items: $ref: "#/definitions/ContainerSummary" Volumes: type: "array" items: $ref: "#/definitions/Volume" BuildCache: type: "array" items: $ref: "#/definitions/BuildCache" example: LayersSize: 1092588 Images: - Id: "sha256:2b8fd9751c4c0f5dd266fcae00707e67a2545ef34f9a29354585f93dac906749" ParentId: "" RepoTags: - "busybox:latest" RepoDigests: - "busybox@sha256:a59906e33509d14c036c8678d687bd4eec81ed7c4b8ce907b888c607f6a1e0e6" Created: 1466724217 Size: 1092588 SharedSize: 0 VirtualSize: 1092588 Labels: {} Containers: 1 Containers: - Id: "e575172ed11dc01bfce087fb27bee502db149e1a0fad7c296ad300bbff178148" Names: - "/top" Image: "busybox" ImageID: "sha256:2b8fd9751c4c0f5dd266fcae00707e67a2545ef34f9a29354585f93dac906749" Command: "top" Created: 1472592424 Ports: [] SizeRootFs: 1092588 Labels: {} State: "exited" Status: "Exited (0) 56 minutes ago" HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: IPAMConfig: null Links: null Aliases: null NetworkID: "d687bc59335f0e5c9ee8193e5612e8aee000c8c62ea170cfb99c098f95899d92" EndpointID: "8ed5115aeaad9abb174f68dcf135b49f11daf597678315231a32ca28441dec6a" Gateway: "172.18.0.1" IPAddress: "172.18.0.2" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:12:00:02" Mounts: [] Volumes: - Name: "my-volume" Driver: "local" Mountpoint: "/var/lib/docker/volumes/my-volume/_data" Labels: null Scope: "local" Options: null UsageData: Size: 10920104 RefCount: 2 BuildCache: - ID: "hw53o5aio51xtltp5xjp8v7fx" Parent: "" Type: "regular" Description: "pulled from docker.io/library/debian@sha256:234cb88d3020898631af0ccbbcca9a66ae7306ecd30c9720690858c1b007d2a0" InUse: false Shared: true Size: 0 CreatedAt: "2021-06-28T13:31:01.474619385Z" LastUsedAt: "2021-07-07T22:02:32.738075951Z" UsageCount: 26 - ID: "ndlpt0hhvkqcdfkputsk4cq9c" Parent: "hw53o5aio51xtltp5xjp8v7fx" Type: "regular" Description: "mount / from exec /bin/sh -c echo 'Binary::apt::APT::Keep-Downloaded-Packages \"true\";' > /etc/apt/apt.conf.d/keep-cache" InUse: false Shared: true Size: 51 CreatedAt: "2021-06-28T13:31:03.002625487Z" LastUsedAt: "2021-07-07T22:02:32.773909517Z" UsageCount: 26 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "type" in: "query" description: | Object types, for which to compute and return data. 
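# An illustrative sketch of querying data usage, limiting the potentially
# expensive calculation to a subset of object types by repeating the type
# parameter described here (socket path assumed):
#
#   package main
#
#   import (
#       "context"
#       "fmt"
#       "io"
#       "net"
#       "net/http"
#   )
#
#   func main() {
#       c := &http.Client{Transport: &http.Transport{
#           DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
#               return (&net.Dialer{}).DialContext(ctx, "unix", "/var/run/docker.sock")
#           },
#       }}
#
#       // The "type" parameter may be repeated (collectionFormat: multi) to
#       // restrict which object types are computed and returned.
#       resp, err := c.Get("http://docker/system/df?type=image&type=volume")
#       if err != nil {
#           panic(err)
#       }
#       defer resp.Body.Close()
#       out, _ := io.ReadAll(resp.Body)
#       fmt.Println(string(out)) // LayersSize, Images, Containers, Volumes, BuildCache
#   }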
type: "array" collectionFormat: multi items: type: "string" enum: ["container", "image", "volume", "build-cache"] tags: ["System"] /images/{name}/get: get: summary: "Export an image" description: | Get a tarball containing all images and metadata for a repository. If `name` is a specific name and tag (e.g. `ubuntu:latest`), then only that image (and its parents) are returned. If `name` is an image ID, similarly only that image (and its parents) are returned, but with the exclusion of the `repositories` file in the tarball, as there were no image names referenced. ### Image tarball format An image tarball contains one directory per image layer (named using its long ID), each containing these files: - `VERSION`: currently `1.0` - the file format version - `json`: detailed layer information, similar to `docker inspect layer_id` - `layer.tar`: A tarfile containing the filesystem changes in this layer The `layer.tar` file contains `aufs` style `.wh..wh.aufs` files and directories for storing attribute changes and deletions. If the tarball defines a repository, the tarball should also include a `repositories` file at the root that contains a list of repository and tag names mapped to layer IDs. ```json { "hello-world": { "latest": "565a9d68a73f6706862bfe8409a7f659776d4d60a8d096eb4a3cbce6999cc2a1" } } ``` operationId: "ImageGet" produces: - "application/x-tar" responses: 200: description: "no error" schema: type: "string" format: "binary" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID" type: "string" required: true tags: ["Image"] /images/get: get: summary: "Export several images" description: | Get a tarball containing all images and metadata for several image repositories. For each value of the `names` parameter: if it is a specific name and tag (e.g. `ubuntu:latest`), then only that image (and its parents) are returned; if it is an image ID, similarly only that image (and its parents) are returned and there would be no names referenced in the 'repositories' file for this image ID. For details on the format, see the [export image endpoint](#operation/ImageGet). operationId: "ImageGetAll" produces: - "application/x-tar" responses: 200: description: "no error" schema: type: "string" format: "binary" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "names" in: "query" description: "Image names to filter by" type: "array" items: type: "string" tags: ["Image"] /images/load: post: summary: "Import images" description: | Load a set of images and tags into a repository. For details on the format, see the [export image endpoint](#operation/ImageGet). operationId: "ImageLoad" consumes: - "application/x-tar" produces: - "application/json" responses: 200: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "imagesTarball" in: "body" description: "Tar archive containing images" schema: type: "string" format: "binary" - name: "quiet" in: "query" description: "Suppress progress details during load." type: "boolean" default: false tags: ["Image"] /containers/{id}/exec: post: summary: "Create an exec instance" description: "Run a command inside a running container." 
operationId: "ContainerExec" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "container is paused" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "execConfig" in: "body" description: "Exec configuration" schema: type: "object" properties: AttachStdin: type: "boolean" description: "Attach to `stdin` of the exec command." AttachStdout: type: "boolean" description: "Attach to `stdout` of the exec command." AttachStderr: type: "boolean" description: "Attach to `stderr` of the exec command." DetachKeys: type: "string" description: | Override the key sequence for detaching a container. Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,` or `_`. Tty: type: "boolean" description: "Allocate a pseudo-TTY." Env: description: | A list of environment variables in the form `["VAR=value", ...]`. type: "array" items: type: "string" Cmd: type: "array" description: "Command to run, as a string or array of strings." items: type: "string" Privileged: type: "boolean" description: "Runs the exec process with extended privileges." default: false User: type: "string" description: | The user, and optionally, group to run the exec process inside the container. Format is one of: `user`, `user:group`, `uid`, or `uid:gid`. WorkingDir: type: "string" description: | The working directory for the exec process inside the container. example: AttachStdin: false AttachStdout: true AttachStderr: true DetachKeys: "ctrl-p,ctrl-q" Tty: false Cmd: - "date" Env: - "FOO=bar" - "BAZ=quux" required: true - name: "id" in: "path" description: "ID or name of container" type: "string" required: true tags: ["Exec"] /exec/{id}/start: post: summary: "Start an exec instance" description: | Starts a previously set up exec instance. If detach is true, this endpoint returns immediately after starting the command. Otherwise, it sets up an interactive session with the command. operationId: "ExecStart" consumes: - "application/json" produces: - "application/vnd.docker.raw-stream" responses: 200: description: "No error" 404: description: "No such exec instance" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Container is stopped or paused" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "execStartConfig" in: "body" schema: type: "object" properties: Detach: type: "boolean" description: "Detach from the command." Tty: type: "boolean" description: "Allocate a pseudo-TTY." example: Detach: false Tty: false - name: "id" in: "path" description: "Exec instance ID" required: true type: "string" tags: ["Exec"] /exec/{id}/resize: post: summary: "Resize an exec instance" description: | Resize the TTY session used by an exec instance. This endpoint only works if `tty` was specified as part of creating and starting the exec instance. 
operationId: "ExecResize" responses: 201: description: "No error" 404: description: "No such exec instance" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Exec instance ID" required: true type: "string" - name: "h" in: "query" description: "Height of the TTY session in characters" type: "integer" - name: "w" in: "query" description: "Width of the TTY session in characters" type: "integer" tags: ["Exec"] /exec/{id}/json: get: summary: "Inspect an exec instance" description: "Return low-level information about an exec instance." operationId: "ExecInspect" produces: - "application/json" responses: 200: description: "No error" schema: type: "object" title: "ExecInspectResponse" properties: CanRemove: type: "boolean" DetachKeys: type: "string" ID: type: "string" Running: type: "boolean" ExitCode: type: "integer" ProcessConfig: $ref: "#/definitions/ProcessConfig" OpenStdin: type: "boolean" OpenStderr: type: "boolean" OpenStdout: type: "boolean" ContainerID: type: "string" Pid: type: "integer" description: "The system process ID for the exec process." examples: application/json: CanRemove: false ContainerID: "b53ee82b53a40c7dca428523e34f741f3abc51d9f297a14ff874bf761b995126" DetachKeys: "" ExitCode: 2 ID: "f33bbfb39f5b142420f4759b2348913bd4a8d1a6d7fd56499cb41a1bb91d7b3b" OpenStderr: true OpenStdin: true OpenStdout: true ProcessConfig: arguments: - "-c" - "exit 2" entrypoint: "sh" privileged: false tty: true user: "1000" Running: false Pid: 42000 404: description: "No such exec instance" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Exec instance ID" required: true type: "string" tags: ["Exec"] /volumes: get: summary: "List volumes" operationId: "VolumeList" produces: ["application/json"] responses: 200: description: "Summary volume data that matches the query" schema: type: "object" title: "VolumeListResponse" description: "Volume list response" required: [Volumes, Warnings] properties: Volumes: type: "array" x-nullable: false description: "List of volumes" items: $ref: "#/definitions/Volume" Warnings: type: "array" x-nullable: false description: | Warnings that occurred when fetching the list of volumes. items: type: "string" examples: application/json: Volumes: - CreatedAt: "2017-07-19T12:00:26Z" Name: "tardis" Driver: "local" Mountpoint: "/var/lib/docker/volumes/tardis" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Scope: "local" Options: device: "tmpfs" o: "size=100m,uid=1000" type: "tmpfs" Warnings: [] 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" description: | JSON encoded value of the filters (a `map[string][]string`) to process on the volumes list. Available filters: - `dangling=<boolean>` When set to `true` (or `1`), returns all volumes that are not in use by a container. When set to `false` (or `0`), only volumes that are in use by one or more containers are returned. - `driver=<volume-driver-name>` Matches volumes based on their driver. - `label=<key>` or `label=<key>:<value>` Matches volumes based on the presence of a `label` alone or a `label` and a value. - `name=<volume-name>` Matches all or part of a volume name. 
type: "string" format: "json" tags: ["Volume"] /volumes/create: post: summary: "Create a volume" operationId: "VolumeCreate" consumes: ["application/json"] produces: ["application/json"] responses: 201: description: "The volume was created successfully" schema: $ref: "#/definitions/Volume" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "volumeConfig" in: "body" required: true description: "Volume configuration" schema: type: "object" description: "Volume configuration" title: "VolumeConfig" properties: Name: description: | The new volume's name. If not specified, Docker generates a name. type: "string" x-nullable: false Driver: description: "Name of the volume driver to use." type: "string" default: "local" x-nullable: false DriverOpts: description: | A mapping of driver options and values. These options are passed directly to the driver and are driver specific. type: "object" additionalProperties: type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" example: Name: "tardis" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Driver: "custom" tags: ["Volume"] /volumes/{name}: get: summary: "Inspect a volume" operationId: "VolumeInspect" produces: ["application/json"] responses: 200: description: "No error" schema: $ref: "#/definitions/Volume" 404: description: "No such volume" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" required: true description: "Volume name or ID" type: "string" tags: ["Volume"] delete: summary: "Remove a volume" description: "Instruct the driver to remove the volume." operationId: "VolumeDelete" responses: 204: description: "The volume was removed" 404: description: "No such volume or volume driver" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Volume is in use and cannot be removed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" required: true description: "Volume name or ID" type: "string" - name: "force" in: "query" description: "Force the removal of the volume" type: "boolean" default: false tags: ["Volume"] /volumes/prune: post: summary: "Delete unused volumes" produces: - "application/json" operationId: "VolumePrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune volumes with (or without, in case `label!=...` is used) the specified labels. type: "string" responses: 200: description: "No error" schema: type: "object" title: "VolumePruneResponse" properties: VolumesDeleted: description: "Volumes that were deleted" type: "array" items: type: "string" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Volume"] /networks: get: summary: "List networks" description: | Returns a list of networks. For details on the format, see the [network inspect endpoint](#operation/NetworkInspect). Note that it uses a different, smaller representation of a network than inspecting a single network. 
For example, the list of containers attached to the network is not propagated in API versions 1.28 and up. operationId: "NetworkList" produces: - "application/json" responses: 200: description: "No error" schema: type: "array" items: $ref: "#/definitions/Network" examples: application/json: - Name: "bridge" Id: "f2de39df4171b0dc801e8002d1d999b77256983dfc63041c0f34030aa3977566" Created: "2016-10-19T06:21:00.416543526Z" Scope: "local" Driver: "bridge" EnableIPv6: false Internal: false Attachable: false Ingress: false IPAM: Driver: "default" Config: - Subnet: "172.17.0.0/16" Options: com.docker.network.bridge.default_bridge: "true" com.docker.network.bridge.enable_icc: "true" com.docker.network.bridge.enable_ip_masquerade: "true" com.docker.network.bridge.host_binding_ipv4: "0.0.0.0" com.docker.network.bridge.name: "docker0" com.docker.network.driver.mtu: "1500" - Name: "none" Id: "e086a3893b05ab69242d3c44e49483a3bbbd3a26b46baa8f61ab797c1088d794" Created: "0001-01-01T00:00:00Z" Scope: "local" Driver: "null" EnableIPv6: false Internal: false Attachable: false Ingress: false IPAM: Driver: "default" Config: [] Containers: {} Options: {} - Name: "host" Id: "13e871235c677f196c4e1ecebb9dc733b9b2d2ab589e30c539efeda84a24215e" Created: "0001-01-01T00:00:00Z" Scope: "local" Driver: "host" EnableIPv6: false Internal: false Attachable: false Ingress: false IPAM: Driver: "default" Config: [] Containers: {} Options: {} 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" description: | JSON encoded value of the filters (a `map[string][]string`) to process on the networks list. Available filters: - `dangling=<boolean>` When set to `true` (or `1`), returns all networks that are not in use by a container. When set to `false` (or `0`), only networks that are in use by one or more containers are returned. - `driver=<driver-name>` Matches a network's driver. - `id=<network-id>` Matches all or part of a network ID. - `label=<key>` or `label=<key>=<value>` of a network label. - `name=<network-name>` Matches all or part of a network name. - `scope=["swarm"|"global"|"local"]` Filters networks by scope (`swarm`, `global`, or `local`). - `type=["custom"|"builtin"]` Filters networks by type. The `custom` keyword returns all user-defined networks. 
type: "string" tags: ["Network"] /networks/{id}: get: summary: "Inspect a network" operationId: "NetworkInspect" produces: - "application/json" responses: 200: description: "No error" schema: $ref: "#/definitions/Network" 404: description: "Network not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" - name: "verbose" in: "query" description: "Detailed inspect output for troubleshooting" type: "boolean" default: false - name: "scope" in: "query" description: "Filter the network by scope (swarm, global, or local)" type: "string" tags: ["Network"] delete: summary: "Remove a network" operationId: "NetworkDelete" responses: 204: description: "No error" 403: description: "operation not supported for pre-defined networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such network" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" tags: ["Network"] /networks/create: post: summary: "Create a network" operationId: "NetworkCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "No error" schema: type: "object" title: "NetworkCreateResponse" properties: Id: description: "The ID of the created network." type: "string" Warning: type: "string" example: Id: "22be93d5babb089c5aab8dbc369042fad48ff791584ca2da2100db837a1c7c30" Warning: "" 403: description: "operation not supported for pre-defined networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "plugin not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "networkConfig" in: "body" description: "Network configuration" required: true schema: type: "object" required: ["Name"] properties: Name: description: "The network's name." type: "string" CheckDuplicate: description: | Check for networks with duplicate names. Since Network is primarily keyed based on a random ID and not on the name, and network name is strictly a user-friendly alias to the network which is uniquely identified using ID, there is no guaranteed way to check for duplicates. CheckDuplicate is there to provide a best effort checking of any networks which has the same name but it is not guaranteed to catch all name collisions. type: "boolean" Driver: description: "Name of the network driver plugin to use." type: "string" default: "bridge" Internal: description: "Restrict external access to the network." type: "boolean" Attachable: description: | Globally scoped network is manually attachable by regular containers from workers in swarm mode. type: "boolean" Ingress: description: | Ingress network is the network which provides the routing-mesh in swarm mode. type: "boolean" IPAM: description: "Optional custom IP scheme for the network." $ref: "#/definitions/IPAM" EnableIPv6: description: "Enable IPv6 on the network." type: "boolean" Options: description: "Network specific options to be used by the drivers." type: "object" additionalProperties: type: "string" Labels: description: "User-defined key/value metadata." 
type: "object" additionalProperties: type: "string" example: Name: "isolated_nw" CheckDuplicate: false Driver: "bridge" EnableIPv6: true IPAM: Driver: "default" Config: - Subnet: "172.20.0.0/16" IPRange: "172.20.10.0/24" Gateway: "172.20.10.11" - Subnet: "2001:db8:abcd::/64" Gateway: "2001:db8:abcd::1011" Options: foo: "bar" Internal: true Attachable: false Ingress: false Options: com.docker.network.bridge.default_bridge: "true" com.docker.network.bridge.enable_icc: "true" com.docker.network.bridge.enable_ip_masquerade: "true" com.docker.network.bridge.host_binding_ipv4: "0.0.0.0" com.docker.network.bridge.name: "docker0" com.docker.network.driver.mtu: "1500" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" tags: ["Network"] /networks/{id}/connect: post: summary: "Connect a container to a network" operationId: "NetworkConnect" consumes: - "application/json" responses: 200: description: "No error" 403: description: "Operation not supported for swarm scoped networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "Network or container not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" - name: "container" in: "body" required: true schema: type: "object" properties: Container: type: "string" description: "The ID or name of the container to connect to the network." EndpointConfig: $ref: "#/definitions/EndpointSettings" example: Container: "3613f73ba0e4" EndpointConfig: IPAMConfig: IPv4Address: "172.24.56.89" IPv6Address: "2001:db8::5689" tags: ["Network"] /networks/{id}/disconnect: post: summary: "Disconnect a container from a network" operationId: "NetworkDisconnect" consumes: - "application/json" responses: 200: description: "No error" 403: description: "Operation not supported for swarm scoped networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "Network or container not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" - name: "container" in: "body" required: true schema: type: "object" properties: Container: type: "string" description: | The ID or name of the container to disconnect from the network. Force: type: "boolean" description: | Force the container to disconnect from the network. tags: ["Network"] /networks/prune: post: summary: "Delete unused networks" produces: - "application/json" operationId: "NetworkPrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `until=<timestamp>` Prune networks created before this timestamp. The `<timestamp>` can be Unix timestamps, date formatted timestamps, or Go duration strings (e.g. `10m`, `1h30m`) computed relative to the daemon machine’s time. - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune networks with (or without, in case `label!=...` is used) the specified labels. 
type: "string" responses: 200: description: "No error" schema: type: "object" title: "NetworkPruneResponse" properties: NetworksDeleted: description: "Networks that were deleted" type: "array" items: type: "string" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Network"] /plugins: get: summary: "List plugins" operationId: "PluginList" description: "Returns information about installed plugins." produces: ["application/json"] responses: 200: description: "No error" schema: type: "array" items: $ref: "#/definitions/Plugin" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the plugin list. Available filters: - `capability=<capability name>` - `enable=<true>|<false>` tags: ["Plugin"] /plugins/privileges: get: summary: "Get plugin privileges" operationId: "GetPluginPrivileges" responses: 200: description: "no error" schema: type: "array" items: description: | Describes a permission the user has to accept upon installing the plugin. type: "object" title: "PluginPrivilegeItem" properties: Name: type: "string" Description: type: "string" Value: type: "array" items: type: "string" example: - Name: "network" Description: "" Value: - "host" - Name: "mount" Description: "" Value: - "/data" - Name: "device" Description: "" Value: - "/dev/cpu_dma_latency" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "remote" in: "query" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" tags: - "Plugin" /plugins/pull: post: summary: "Install a plugin" operationId: "PluginPull" description: | Pulls and installs a plugin. After the plugin is installed, it can be enabled using the [`POST /plugins/{name}/enable` endpoint](#operation/PostPluginsEnable). produces: - "application/json" responses: 204: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "remote" in: "query" description: | Remote reference for plugin to install. The `:latest` tag is optional, and is used as the default if omitted. required: true type: "string" - name: "name" in: "query" description: | Local name for the pulled plugin. The `:latest` tag is optional, and is used as the default if omitted. required: false type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration to use when pulling a plugin from a registry. Refer to the [authentication section](#section/Authentication) for details. type: "string" - name: "body" in: "body" schema: type: "array" items: description: | Describes a permission accepted by the user upon installing the plugin. 
type: "object" properties: Name: type: "string" Description: type: "string" Value: type: "array" items: type: "string" example: - Name: "network" Description: "" Value: - "host" - Name: "mount" Description: "" Value: - "/data" - Name: "device" Description: "" Value: - "/dev/cpu_dma_latency" tags: ["Plugin"] /plugins/{name}/json: get: summary: "Inspect a plugin" operationId: "PluginInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Plugin" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" tags: ["Plugin"] /plugins/{name}: delete: summary: "Remove a plugin" operationId: "PluginDelete" responses: 200: description: "no error" schema: $ref: "#/definitions/Plugin" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "force" in: "query" description: | Disable the plugin before removing. This may result in issues if the plugin is in use by a container. type: "boolean" default: false tags: ["Plugin"] /plugins/{name}/enable: post: summary: "Enable a plugin" operationId: "PluginEnable" responses: 200: description: "no error" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "timeout" in: "query" description: "Set the HTTP client timeout (in seconds)" type: "integer" default: 0 tags: ["Plugin"] /plugins/{name}/disable: post: summary: "Disable a plugin" operationId: "PluginDisable" responses: 200: description: "no error" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" tags: ["Plugin"] /plugins/{name}/upgrade: post: summary: "Upgrade a plugin" operationId: "PluginUpgrade" responses: 204: description: "no error" 404: description: "plugin not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "remote" in: "query" description: | Remote reference to upgrade to. The `:latest` tag is optional, and is used as the default if omitted. required: true type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration to use when pulling a plugin from a registry. Refer to the [authentication section](#section/Authentication) for details. 
type: "string" - name: "body" in: "body" schema: type: "array" items: description: | Describes a permission accepted by the user upon installing the plugin. type: "object" properties: Name: type: "string" Description: type: "string" Value: type: "array" items: type: "string" example: - Name: "network" Description: "" Value: - "host" - Name: "mount" Description: "" Value: - "/data" - Name: "device" Description: "" Value: - "/dev/cpu_dma_latency" tags: ["Plugin"] /plugins/create: post: summary: "Create a plugin" operationId: "PluginCreate" consumes: - "application/x-tar" responses: 204: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "query" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "tarContext" in: "body" description: "Path to tar containing plugin rootfs and manifest" schema: type: "string" format: "binary" tags: ["Plugin"] /plugins/{name}/push: post: summary: "Push a plugin" operationId: "PluginPush" description: | Push a plugin to the registry. parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" responses: 200: description: "no error" 404: description: "plugin not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Plugin"] /plugins/{name}/set: post: summary: "Configure a plugin" operationId: "PluginSet" consumes: - "application/json" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "body" in: "body" schema: type: "array" items: type: "string" example: ["DEBUG=1"] responses: 204: description: "No error" 404: description: "Plugin not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Plugin"] /nodes: get: summary: "List nodes" operationId: "NodeList" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Node" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" description: | Filters to process on the nodes list, encoded as JSON (a `map[string][]string`). 
Available filters: - `id=<node id>` - `label=<engine label>` - `membership=`(`accepted`|`pending`)` - `name=<node name>` - `node.label=<node label>` - `role=`(`manager`|`worker`)` type: "string" tags: ["Node"] /nodes/{id}: get: summary: "Inspect a node" operationId: "NodeInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Node" 404: description: "no such node" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the node" type: "string" required: true tags: ["Node"] delete: summary: "Delete a node" operationId: "NodeDelete" responses: 200: description: "no error" 404: description: "no such node" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the node" type: "string" required: true - name: "force" in: "query" description: "Force remove a node from the swarm" default: false type: "boolean" tags: ["Node"] /nodes/{id}/update: post: summary: "Update a node" operationId: "NodeUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such node" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID of the node" type: "string" required: true - name: "body" in: "body" schema: $ref: "#/definitions/NodeSpec" - name: "version" in: "query" description: | The version number of the node object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true tags: ["Node"] /swarm: get: summary: "Inspect swarm" operationId: "SwarmInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Swarm" 404: description: "no such swarm" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" tags: ["Swarm"] /swarm/init: post: summary: "Initialize a new swarm" operationId: "SwarmInit" produces: - "application/json" - "text/plain" responses: 200: description: "no error" schema: description: "The node ID" type: "string" example: "7v2t30z9blmxuhnyo6s4cpenp" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is already part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: type: "object" properties: ListenAddr: description: | Listen address used for inter-manager communication, as well as determining the networking interface used for the VXLAN Tunnel Endpoint (VTEP). This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the default swarm listening port is used. 
type: "string" AdvertiseAddr: description: | Externally reachable address advertised to other nodes. This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the port number from the listen address is used. If `AdvertiseAddr` is not specified, it will be automatically detected when possible. type: "string" DataPathAddr: description: | Address or interface to use for data path traffic (format: `<ip|interface>`), for example, `192.168.1.1`, or an interface, like `eth0`. If `DataPathAddr` is unspecified, the same address as `AdvertiseAddr` is used. The `DataPathAddr` specifies the address that global scope network drivers will publish towards other nodes in order to reach the containers running on this node. Using this parameter it is possible to separate the container data traffic from the management traffic of the cluster. type: "string" DataPathPort: description: | DataPathPort specifies the data path port number for data traffic. Acceptable port range is 1024 to 49151. if no port is set or is set to 0, default port 4789 will be used. type: "integer" format: "uint32" DefaultAddrPool: description: | Default Address Pool specifies default subnet pools for global scope networks. type: "array" items: type: "string" example: ["10.10.0.0/16", "20.20.0.0/16"] ForceNewCluster: description: "Force creation of a new swarm." type: "boolean" SubnetSize: description: | SubnetSize specifies the subnet size of the networks created from the default subnet pool. type: "integer" format: "uint32" Spec: $ref: "#/definitions/SwarmSpec" example: ListenAddr: "0.0.0.0:2377" AdvertiseAddr: "192.168.1.1:2377" DataPathPort: 4789 DefaultAddrPool: ["10.10.0.0/8", "20.20.0.0/8"] SubnetSize: 24 ForceNewCluster: false Spec: Orchestration: {} Raft: {} Dispatcher: {} CAConfig: {} EncryptionConfig: AutoLockManagers: false tags: ["Swarm"] /swarm/join: post: summary: "Join an existing swarm" operationId: "SwarmJoin" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is already part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: type: "object" properties: ListenAddr: description: | Listen address used for inter-manager communication if the node gets promoted to manager, as well as determining the networking interface used for the VXLAN Tunnel Endpoint (VTEP). type: "string" AdvertiseAddr: description: | Externally reachable address advertised to other nodes. This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the port number from the listen address is used. If `AdvertiseAddr` is not specified, it will be automatically detected when possible. type: "string" DataPathAddr: description: | Address or interface to use for data path traffic (format: `<ip|interface>`), for example, `192.168.1.1`, or an interface, like `eth0`. If `DataPathAddr` is unspecified, the same address as `AdvertiseAddr` is used. The `DataPathAddr` specifies the address that global scope network drivers will publish towards other nodes in order to reach the containers running on this node. 
Using this parameter it is possible to separate the container data traffic from the management traffic of the cluster. type: "string" RemoteAddrs: description: | Addresses of manager nodes already participating in the swarm. type: "array" items: type: "string" JoinToken: description: "Secret token for joining this swarm." type: "string" example: ListenAddr: "0.0.0.0:2377" AdvertiseAddr: "192.168.1.1:2377" RemoteAddrs: - "node1:2377" JoinToken: "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-7p73s1dx5in4tatdymyhg9hu2" tags: ["Swarm"] /swarm/leave: post: summary: "Leave a swarm" operationId: "SwarmLeave" responses: 200: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "force" description: | Force leave swarm, even if this is the last manager or that it will break the cluster. in: "query" type: "boolean" default: false tags: ["Swarm"] /swarm/update: post: summary: "Update a swarm" operationId: "SwarmUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: $ref: "#/definitions/SwarmSpec" - name: "version" in: "query" description: | The version number of the swarm object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true - name: "rotateWorkerToken" in: "query" description: "Rotate the worker join token." type: "boolean" default: false - name: "rotateManagerToken" in: "query" description: "Rotate the manager join token." type: "boolean" default: false - name: "rotateManagerUnlockKey" in: "query" description: "Rotate the manager unlock key." type: "boolean" default: false tags: ["Swarm"] /swarm/unlockkey: get: summary: "Get the unlock key" operationId: "SwarmUnlockkey" consumes: - "application/json" responses: 200: description: "no error" schema: type: "object" title: "UnlockKeyResponse" properties: UnlockKey: description: "The swarm's unlock key." type: "string" example: UnlockKey: "SWMKEY-1-7c37Cc8654o6p38HnroywCi19pllOnGtbdZEgtKxZu8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" tags: ["Swarm"] /swarm/unlock: post: summary: "Unlock a locked manager" operationId: "SwarmUnlock" consumes: - "application/json" produces: - "application/json" parameters: - name: "body" in: "body" required: true schema: type: "object" properties: UnlockKey: description: "The swarm's unlock key." 
type: "string" example: UnlockKey: "SWMKEY-1-7c37Cc8654o6p38HnroywCi19pllOnGtbdZEgtKxZu8" responses: 200: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" tags: ["Swarm"] /services: get: summary: "List services" operationId: "ServiceList" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Service" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the services list. Available filters: - `id=<service id>` - `label=<service label>` - `mode=["replicated"|"global"]` - `name=<service name>` - name: "status" in: "query" type: "boolean" description: | Include service status, with count of running and desired tasks. tags: ["Service"] /services/create: post: summary: "Create a service" operationId: "ServiceCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: type: "object" title: "ServiceCreateResponse" properties: ID: description: "The ID of the created service." type: "string" Warning: description: "Optional warning message" type: "string" example: ID: "ak7w3gjqoa3kuz8xcpnyy0pvl" Warning: "unable to pin image doesnotexist:latest to digest: image library/doesnotexist:latest not found" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 403: description: "network is not eligible for services" schema: $ref: "#/definitions/ErrorResponse" 409: description: "name conflicts with an existing service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: allOf: - $ref: "#/definitions/ServiceSpec" - type: "object" example: Name: "web" TaskTemplate: ContainerSpec: Image: "nginx:alpine" Mounts: - ReadOnly: true Source: "web-data" Target: "/usr/share/nginx/html" Type: "volume" VolumeOptions: DriverConfig: {} Labels: com.example.something: "something-value" Hosts: ["10.10.10.10 host1", "ABCD:EF01:2345:6789:ABCD:EF01:2345:6789 host2"] User: "33" DNSConfig: Nameservers: ["8.8.8.8"] Search: ["example.org"] Options: ["timeout:3"] Secrets: - File: Name: "www.example.org.key" UID: "33" GID: "33" Mode: 384 SecretID: "fpjqlhnwb19zds35k8wn80lq9" SecretName: "example_org_domain_key" LogDriver: Name: "json-file" Options: max-file: "3" max-size: "10M" Placement: {} Resources: Limits: MemoryBytes: 104857600 Reservations: {} RestartPolicy: Condition: "on-failure" Delay: 10000000000 MaxAttempts: 10 Mode: Replicated: Replicas: 4 UpdateConfig: Parallelism: 2 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 RollbackConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 EndpointSpec: Ports: - Protocol: "tcp" PublishedPort: 8080 TargetPort: 80 Labels: foo: "bar" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration for pulling from private registries. Refer to the [authentication section](#section/Authentication) for details. 
type: "string" tags: ["Service"] /services/{id}: get: summary: "Inspect a service" operationId: "ServiceInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Service" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID or name of service." required: true type: "string" - name: "insertDefaults" in: "query" description: "Fill empty fields with default values." type: "boolean" default: false tags: ["Service"] delete: summary: "Delete a service" operationId: "ServiceDelete" responses: 200: description: "no error" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID or name of service." required: true type: "string" tags: ["Service"] /services/{id}/update: post: summary: "Update a service" operationId: "ServiceUpdate" consumes: ["application/json"] produces: ["application/json"] responses: 200: description: "no error" schema: $ref: "#/definitions/ServiceUpdateResponse" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID or name of service." required: true type: "string" - name: "body" in: "body" required: true schema: allOf: - $ref: "#/definitions/ServiceSpec" - type: "object" example: Name: "top" TaskTemplate: ContainerSpec: Image: "busybox" Args: - "top" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ForceUpdate: 0 Mode: Replicated: Replicas: 1 UpdateConfig: Parallelism: 2 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 RollbackConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 EndpointSpec: Mode: "vip" - name: "version" in: "query" description: | The version number of the service object being updated. This is required to avoid conflicting writes. This version number should be the value as currently set on the service *before* the update. You can find the current version by calling `GET /services/{id}` required: true type: "integer" - name: "registryAuthFrom" in: "query" description: | If the `X-Registry-Auth` header is not specified, this parameter indicates where to find registry authorization credentials. type: "string" enum: ["spec", "previous-spec"] default: "spec" - name: "rollback" in: "query" description: | Set to this parameter to `previous` to cause a server-side rollback to the previous service spec. The supplied spec will be ignored in this case. type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration for pulling from private registries. Refer to the [authentication section](#section/Authentication) for details. 
type: "string" tags: ["Service"] /services/{id}/logs: get: summary: "Get service logs" description: | Get `stdout` and `stderr` logs from a service. See also [`/containers/{id}/logs`](#operation/ContainerLogs). **Note**: This endpoint works only for services with the `local`, `json-file` or `journald` logging drivers. operationId: "ServiceLogs" responses: 200: description: "logs returned as a stream in response body" schema: type: "string" format: "binary" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such service: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the service" type: "string" - name: "details" in: "query" description: "Show service context and extra details provided to logs." type: "boolean" default: false - name: "follow" in: "query" description: "Keep connection after returning logs." type: "boolean" default: false - name: "stdout" in: "query" description: "Return logs from `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Return logs from `stderr`" type: "boolean" default: false - name: "since" in: "query" description: "Only return logs since this time, as a UNIX timestamp" type: "integer" default: 0 - name: "timestamps" in: "query" description: "Add timestamps to every log line" type: "boolean" default: false - name: "tail" in: "query" description: | Only return this number of log lines from the end of the logs. Specify as an integer or `all` to output all log lines. type: "string" default: "all" tags: ["Service"] /tasks: get: summary: "List tasks" operationId: "TaskList" produces: - "application/json" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Task" example: - ID: "0kzzo1i0y4jz6027t0k7aezc7" Version: Index: 71 CreatedAt: "2016-06-07T21:07:31.171892745Z" UpdatedAt: "2016-06-07T21:07:31.376370513Z" Spec: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ServiceID: "9mnpnzenvg8p8tdbtq4wvbkcz" Slot: 1 NodeID: "60gvrl6tm78dmak4yl7srz94v" Status: Timestamp: "2016-06-07T21:07:31.290032978Z" State: "running" Message: "started" ContainerStatus: ContainerID: "e5d62702a1b48d01c3e02ca1e0212a250801fa8d67caca0b6f35919ebc12f035" PID: 677 DesiredState: "running" NetworksAttachments: - Network: ID: "4qvuz4ko70xaltuqbt8956gd1" Version: Index: 18 CreatedAt: "2016-06-07T20:31:11.912919752Z" UpdatedAt: "2016-06-07T21:07:29.955277358Z" Spec: Name: "ingress" Labels: com.docker.swarm.internal: "true" DriverConfiguration: {} IPAMOptions: Driver: {} Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" DriverState: Name: "overlay" Options: com.docker.network.driver.overlay.vxlanid_list: "256" IPAMOptions: Driver: Name: "default" Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" Addresses: - "10.255.0.10/16" - ID: "1yljwbmlr8er2waf8orvqpwms" Version: Index: 30 CreatedAt: "2016-06-07T21:07:30.019104782Z" UpdatedAt: "2016-06-07T21:07:30.231958098Z" Name: "hopeful_cori" Spec: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ServiceID: "9mnpnzenvg8p8tdbtq4wvbkcz" Slot: 1 NodeID: "60gvrl6tm78dmak4yl7srz94v" Status: Timestamp: "2016-06-07T21:07:30.202183143Z" State: 
"shutdown" Message: "shutdown" ContainerStatus: ContainerID: "1cf8d63d18e79668b0004a4be4c6ee58cddfad2dae29506d8781581d0688a213" DesiredState: "shutdown" NetworksAttachments: - Network: ID: "4qvuz4ko70xaltuqbt8956gd1" Version: Index: 18 CreatedAt: "2016-06-07T20:31:11.912919752Z" UpdatedAt: "2016-06-07T21:07:29.955277358Z" Spec: Name: "ingress" Labels: com.docker.swarm.internal: "true" DriverConfiguration: {} IPAMOptions: Driver: {} Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" DriverState: Name: "overlay" Options: com.docker.network.driver.overlay.vxlanid_list: "256" IPAMOptions: Driver: Name: "default" Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" Addresses: - "10.255.0.5/16" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the tasks list. Available filters: - `desired-state=(running | shutdown | accepted)` - `id=<task id>` - `label=key` or `label="key=value"` - `name=<task name>` - `node=<node id or name>` - `service=<service name>` tags: ["Task"] /tasks/{id}: get: summary: "Inspect a task" operationId: "TaskInspect" produces: - "application/json" responses: 200: description: "no error" schema: $ref: "#/definitions/Task" 404: description: "no such task" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID of the task" required: true type: "string" tags: ["Task"] /tasks/{id}/logs: get: summary: "Get task logs" description: | Get `stdout` and `stderr` logs from a task. See also [`/containers/{id}/logs`](#operation/ContainerLogs). **Note**: This endpoint works only for services with the `local`, `json-file` or `journald` logging drivers. operationId: "TaskLogs" responses: 200: description: "logs returned as a stream in response body" schema: type: "string" format: "binary" 404: description: "no such task" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such task: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID of the task" type: "string" - name: "details" in: "query" description: "Show task context and extra details provided to logs." type: "boolean" default: false - name: "follow" in: "query" description: "Keep connection after returning logs." type: "boolean" default: false - name: "stdout" in: "query" description: "Return logs from `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Return logs from `stderr`" type: "boolean" default: false - name: "since" in: "query" description: "Only return logs since this time, as a UNIX timestamp" type: "integer" default: 0 - name: "timestamps" in: "query" description: "Add timestamps to every log line" type: "boolean" default: false - name: "tail" in: "query" description: | Only return this number of log lines from the end of the logs. Specify as an integer or `all` to output all log lines. 
type: "string" default: "all" tags: ["Task"] /secrets: get: summary: "List secrets" operationId: "SecretList" produces: - "application/json" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Secret" example: - ID: "blt1owaxmitz71s9v5zh81zun" Version: Index: 85 CreatedAt: "2017-07-20T13:55:28.678958722Z" UpdatedAt: "2017-07-20T13:55:28.678958722Z" Spec: Name: "mysql-passwd" Labels: some.label: "some.value" Driver: Name: "secret-bucket" Options: OptionA: "value for driver option A" OptionB: "value for driver option B" - ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "app-dev.crt" Labels: foo: "bar" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the secrets list. Available filters: - `id=<secret id>` - `label=<key> or label=<key>=value` - `name=<secret name>` - `names=<secret name>` tags: ["Secret"] /secrets/create: post: summary: "Create a secret" operationId: "SecretCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 409: description: "name conflicts with an existing object" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" schema: allOf: - $ref: "#/definitions/SecretSpec" - type: "object" example: Name: "app-key.crt" Labels: foo: "bar" Data: "VEhJUyBJUyBOT1QgQSBSRUFMIENFUlRJRklDQVRFCg==" Driver: Name: "secret-bucket" Options: OptionA: "value for driver option A" OptionB: "value for driver option B" tags: ["Secret"] /secrets/{id}: get: summary: "Inspect a secret" operationId: "SecretInspect" produces: - "application/json" responses: 200: description: "no error" schema: $ref: "#/definitions/Secret" examples: application/json: ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "app-dev.crt" Labels: foo: "bar" Driver: Name: "secret-bucket" Options: OptionA: "value for driver option A" OptionB: "value for driver option B" 404: description: "secret not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the secret" tags: ["Secret"] delete: summary: "Delete a secret" operationId: "SecretDelete" produces: - "application/json" responses: 204: description: "no error" 404: description: "secret not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the secret" tags: ["Secret"] /secrets/{id}/update: post: summary: "Update a Secret" operationId: "SecretUpdate" responses: 200: description: "no error" 400: 
description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such secret" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the secret" type: "string" required: true - name: "body" in: "body" schema: $ref: "#/definitions/SecretSpec" description: | The spec of the secret to update. Currently, only the Labels field can be updated. All other fields must remain unchanged from the [SecretInspect endpoint](#operation/SecretInspect) response values. - name: "version" in: "query" description: | The version number of the secret object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true tags: ["Secret"] /configs: get: summary: "List configs" operationId: "ConfigList" produces: - "application/json" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Config" example: - ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "server.conf" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the configs list. Available filters: - `id=<config id>` - `label=<key> or label=<key>=value` - `name=<config name>` - `names=<config name>` tags: ["Config"] /configs/create: post: summary: "Create a config" operationId: "ConfigCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 409: description: "name conflicts with an existing object" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" schema: allOf: - $ref: "#/definitions/ConfigSpec" - type: "object" example: Name: "server.conf" Labels: foo: "bar" Data: "VEhJUyBJUyBOT1QgQSBSRUFMIENFUlRJRklDQVRFCg==" tags: ["Config"] /configs/{id}: get: summary: "Inspect a config" operationId: "ConfigInspect" produces: - "application/json" responses: 200: description: "no error" schema: $ref: "#/definitions/Config" examples: application/json: ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "app-dev.crt" 404: description: "config not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the config" tags: ["Config"] delete: summary: "Delete a config" operationId: "ConfigDelete" produces: - "application/json" responses: 204: description: "no error" 404: description: "config not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part 
of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the config" tags: ["Config"] /configs/{id}/update: post: summary: "Update a Config" operationId: "ConfigUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such config" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the config" type: "string" required: true - name: "body" in: "body" schema: $ref: "#/definitions/ConfigSpec" description: | The spec of the config to update. Currently, only the Labels field can be updated. All other fields must remain unchanged from the [ConfigInspect endpoint](#operation/ConfigInspect) response values. - name: "version" in: "query" description: | The version number of the config object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true tags: ["Config"] /distribution/{name}/json: get: summary: "Get image information from the registry" description: | Return image digest and platform information by contacting the registry. operationId: "DistributionInspect" produces: - "application/json" responses: 200: description: "descriptor and platform information" schema: type: "object" x-go-name: DistributionInspect title: "DistributionInspectResponse" required: [Descriptor, Platforms] properties: Descriptor: type: "object" description: | A descriptor struct containing digest, media type, and size. properties: MediaType: type: "string" Size: type: "integer" format: "int64" Digest: type: "string" URLs: type: "array" items: type: "string" Platforms: type: "array" description: | An array containing all platforms supported by the image. items: type: "object" properties: Architecture: type: "string" OS: type: "string" OSVersion: type: "string" OSFeatures: type: "array" items: type: "string" Variant: type: "string" Features: type: "array" items: type: "string" examples: application/json: Descriptor: MediaType: "application/vnd.docker.distribution.manifest.v2+json" Digest: "sha256:c0537ff6a5218ef531ece93d4984efc99bbf3f7497c0a7726c88e2bb7584dc96" Size: 3987495 URLs: - "" Platforms: - Architecture: "amd64" OS: "linux" OSVersion: "" OSFeatures: - "" Variant: "" Features: - "" 401: description: "Failed authentication or no image found" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such image: someimage (tag: latest)" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or id" type: "string" required: true tags: ["Distribution"] /session: post: summary: "Initialize interactive session" description: | Start a new interactive session with a server. Session allows server to call back to the client for advanced capabilities. ### Hijacking This endpoint hijacks the HTTP connection to HTTP2 transport that allows the client to expose gPRC services on that connection. 
        For example, the client sends this request to upgrade the connection:

        ```
        POST /session HTTP/1.1
        Upgrade: h2c
        Connection: Upgrade
        ```

        The Docker daemon responds with a `101 UPGRADED` response, followed by
        the raw stream:

        ```
        HTTP/1.1 101 UPGRADED
        Connection: Upgrade
        Upgrade: h2c
        ```
      operationId: "Session"
      produces:
        - "application/vnd.docker.raw-stream"
      responses:
        101:
          description: "no error, hijacking successful"
        400:
          description: "bad parameter"
          schema:
            $ref: "#/definitions/ErrorResponse"
        500:
          description: "server error"
          schema:
            $ref: "#/definitions/ErrorResponse"
      tags: ["Session"]
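The upgrade handshake documented above is plain HTTP/1.1, so it can be exercised directly against the daemon socket without the official client. The following Go sketch is illustrative only: the socket path `/var/run/docker.sock`, the unversioned `/session` path, and the panic-style error handling are assumptions made for brevity, not part of the API definition.

```go
// Minimal sketch of the /session upgrade handshake described above.
// This is an illustration, not the canonical client implementation.
package main

import (
	"bufio"
	"fmt"
	"net"
	"net/http"
)

func main() {
	// Dial the local daemon socket (adjust the path for your setup).
	conn, err := net.Dial("unix", "/var/run/docker.sock")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Build the upgrade request shown in the endpoint description.
	req, err := http.NewRequest(http.MethodPost, "http://localhost/session", nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Upgrade", "h2c")
	req.Header.Set("Connection", "Upgrade")

	// Write the request on the raw connection so we keep control of it
	// after the protocol switch.
	if err := req.Write(conn); err != nil {
		panic(err)
	}

	// A 101 response means the connection is now a raw h2c stream on
	// which the client can expose its gRPC services.
	br := bufio.NewReader(conn)
	resp, err := http.ReadResponse(br, req)
	if err != nil {
		panic(err)
	}
	if resp.StatusCode != http.StatusSwitchingProtocols {
		panic(fmt.Sprintf("expected 101 UPGRADED, got %s", resp.Status))
	}
	fmt.Println("session established; the connection now carries the raw stream")
}
```

After the `101` response the hijacked connection no longer speaks the JSON API; any bytes already buffered in the reader belong to the raw stream and should be drained before handing the connection to an HTTP/2 or gRPC transport.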
# A Swagger 2.0 (a.k.a. OpenAPI) definition of the Engine API. # # This is used for generating API documentation and the types used by the # client/server. See api/README.md for more information. # # Some style notes: # - This file is used by ReDoc, which allows GitHub Flavored Markdown in # descriptions. # - There is no maximum line length, for ease of editing and pretty diffs. # - operationIds are in the format "NounVerb", with a singular noun. swagger: "2.0" schemes: - "http" - "https" produces: - "application/json" - "text/plain" consumes: - "application/json" - "text/plain" basePath: "/v1.42" info: title: "Docker Engine API" version: "1.42" x-logo: url: "https://docs.docker.com/images/logo-docker-main.png" description: | The Engine API is an HTTP API served by Docker Engine. It is the API the Docker client uses to communicate with the Engine, so everything the Docker client can do can be done with the API. Most of the client's commands map directly to API endpoints (e.g. `docker ps` is `GET /containers/json`). The notable exception is running containers, which consists of several API calls. # Errors The API uses standard HTTP status codes to indicate the success or failure of the API call. The body of the response will be JSON in the following format: ``` { "message": "page not found" } ``` # Versioning The API is usually changed in each release, so API calls are versioned to ensure that clients don't break. To lock to a specific version of the API, you prefix the URL with its version, for example, call `/v1.30/info` to use the v1.30 version of the `/info` endpoint. If the API version specified in the URL is not supported by the daemon, a HTTP `400 Bad Request` error message is returned. If you omit the version-prefix, the current version of the API (v1.42) is used. For example, calling `/info` is the same as calling `/v1.42/info`. Using the API without a version-prefix is deprecated and will be removed in a future release. Engine releases in the near future should support this version of the API, so your client will continue to work even if it is talking to a newer Engine. The API uses an open schema model, which means server may add extra properties to responses. Likewise, the server will ignore any extra query parameters and request body properties. When you write clients, you need to ignore additional properties in responses to ensure they do not break when talking to newer daemons. # Authentication Authentication for registries is handled client side. The client has to send authentication details to various endpoints that need to communicate with registries, such as `POST /images/(name)/push`. These are sent as `X-Registry-Auth` header as a [base64url encoded](https://tools.ietf.org/html/rfc4648#section-5) (JSON) string with the following structure: ``` { "username": "string", "password": "string", "email": "string", "serveraddress": "string" } ``` The `serveraddress` is a domain/IP without a protocol. Throughout this structure, double quotes are required. If you have already got an identity token from the [`/auth` endpoint](#operation/SystemAuth), you can just pass this instead of credentials: ``` { "identitytoken": "9cbaf023786cd7..." } ``` # The tags on paths define the menu sections in the ReDoc documentation, so # the usage of tags must make sense for that: # - They should be singular, not plural. # - There should not be too many tags, or the menu becomes unwieldy. 
For # example, it is preferable to add a path to the "System" tag instead of # creating a tag with a single path in it. # - The order of tags in this list defines the order in the menu. tags: # Primary objects - name: "Container" x-displayName: "Containers" description: | Create and manage containers. - name: "Image" x-displayName: "Images" - name: "Network" x-displayName: "Networks" description: | Networks are user-defined networks that containers can be attached to. See the [networking documentation](https://docs.docker.com/network/) for more information. - name: "Volume" x-displayName: "Volumes" description: | Create and manage persistent storage that can be attached to containers. - name: "Exec" x-displayName: "Exec" description: | Run new commands inside running containers. Refer to the [command-line reference](https://docs.docker.com/engine/reference/commandline/exec/) for more information. To exec a command in a container, you first need to create an exec instance, then start it. These two API endpoints are wrapped up in a single command-line command, `docker exec`. # Swarm things - name: "Swarm" x-displayName: "Swarm" description: | Engines can be clustered together in a swarm. Refer to the [swarm mode documentation](https://docs.docker.com/engine/swarm/) for more information. - name: "Node" x-displayName: "Nodes" description: | Nodes are instances of the Engine participating in a swarm. Swarm mode must be enabled for these endpoints to work. - name: "Service" x-displayName: "Services" description: | Services are the definitions of tasks to run on a swarm. Swarm mode must be enabled for these endpoints to work. - name: "Task" x-displayName: "Tasks" description: | A task is a container running on a swarm. It is the atomic scheduling unit of swarm. Swarm mode must be enabled for these endpoints to work. - name: "Secret" x-displayName: "Secrets" description: | Secrets are sensitive data that can be used by services. Swarm mode must be enabled for these endpoints to work. - name: "Config" x-displayName: "Configs" description: | Configs are application configurations that can be used by services. Swarm mode must be enabled for these endpoints to work. 
# System things - name: "Plugin" x-displayName: "Plugins" - name: "System" x-displayName: "System" definitions: Port: type: "object" description: "An open port on a container" required: [PrivatePort, Type] properties: IP: type: "string" format: "ip-address" description: "Host IP address that the container's port is mapped to" PrivatePort: type: "integer" format: "uint16" x-nullable: false description: "Port on the container" PublicPort: type: "integer" format: "uint16" description: "Port exposed on the host" Type: type: "string" x-nullable: false enum: ["tcp", "udp", "sctp"] example: PrivatePort: 8080 PublicPort: 80 Type: "tcp" MountPoint: type: "object" description: "A mount point inside a container" properties: Type: type: "string" Name: type: "string" Source: type: "string" Destination: type: "string" Driver: type: "string" Mode: type: "string" RW: type: "boolean" Propagation: type: "string" DeviceMapping: type: "object" description: "A device mapping between the host and container" properties: PathOnHost: type: "string" PathInContainer: type: "string" CgroupPermissions: type: "string" example: PathOnHost: "/dev/deviceName" PathInContainer: "/dev/deviceName" CgroupPermissions: "mrw" DeviceRequest: type: "object" description: "A request for devices to be sent to device drivers" properties: Driver: type: "string" example: "nvidia" Count: type: "integer" example: -1 DeviceIDs: type: "array" items: type: "string" example: - "0" - "1" - "GPU-fef8089b-4820-abfc-e83e-94318197576e" Capabilities: description: | A list of capabilities; an OR list of AND lists of capabilities. type: "array" items: type: "array" items: type: "string" example: # gpu AND nvidia AND compute - ["gpu", "nvidia", "compute"] Options: description: | Driver-specific options, specified as a key/value pairs. These options are passed directly to the driver. type: "object" additionalProperties: type: "string" ThrottleDevice: type: "object" properties: Path: description: "Device path" type: "string" Rate: description: "Rate" type: "integer" format: "int64" minimum: 0 Mount: type: "object" properties: Target: description: "Container path." type: "string" Source: description: "Mount source (e.g. a volume name, a host path)." type: "string" Type: description: | The mount type. Available types: - `bind` Mounts a file or directory from the host into the container. Must exist prior to creating the container. - `volume` Creates a volume with the given name and options (or uses a pre-existing volume with the same name and options). These are **not** removed when the container is removed. - `tmpfs` Create a tmpfs with the given options. The mount source cannot be specified for tmpfs. - `npipe` Mounts a named pipe from the host into the container. Must exist prior to creating the container. type: "string" enum: - "bind" - "volume" - "tmpfs" - "npipe" ReadOnly: description: "Whether the mount should be read-only." type: "boolean" Consistency: description: "The consistency requirement for the mount: `default`, `consistent`, `cached`, or `delegated`." type: "string" BindOptions: description: "Optional configuration for the `bind` type." type: "object" properties: Propagation: description: "A propagation mode with the value `[r]private`, `[r]shared`, or `[r]slave`." type: "string" enum: - "private" - "rprivate" - "shared" - "rshared" - "slave" - "rslave" NonRecursive: description: "Disable recursive bind mount." type: "boolean" default: false VolumeOptions: description: "Optional configuration for the `volume` type." 
type: "object" properties: NoCopy: description: "Populate volume with data from the target." type: "boolean" default: false Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" DriverConfig: description: "Map of driver specific options" type: "object" properties: Name: description: "Name of the driver to use to create the volume." type: "string" Options: description: "key/value map of driver specific options." type: "object" additionalProperties: type: "string" TmpfsOptions: description: "Optional configuration for the `tmpfs` type." type: "object" properties: SizeBytes: description: "The size for the tmpfs mount in bytes." type: "integer" format: "int64" Mode: description: "The permission mode for the tmpfs mount in an integer." type: "integer" RestartPolicy: description: | The behavior to apply when the container exits. The default is not to restart. An ever increasing delay (double the previous delay, starting at 100ms) is added before each restart to prevent flooding the server. type: "object" properties: Name: type: "string" description: | - Empty string means not to restart - `no` Do not automatically restart - `always` Always restart - `unless-stopped` Restart always except when the user has manually stopped the container - `on-failure` Restart only when the container exit code is non-zero enum: - "" - "no" - "always" - "unless-stopped" - "on-failure" MaximumRetryCount: type: "integer" description: | If `on-failure` is used, the number of times to retry before giving up. Resources: description: "A container's resources (cgroups config, ulimits, etc)" type: "object" properties: # Applicable to all platforms CpuShares: description: | An integer value representing this container's relative CPU weight versus other containers. type: "integer" Memory: description: "Memory limit in bytes." type: "integer" format: "int64" default: 0 # Applicable to UNIX platforms CgroupParent: description: | Path to `cgroups` under which the container's `cgroup` is created. If the path is not absolute, the path is considered to be relative to the `cgroups` path of the init process. Cgroups are created if they do not already exist. type: "string" BlkioWeight: description: "Block IO weight (relative weight)." type: "integer" minimum: 0 maximum: 1000 BlkioWeightDevice: description: | Block IO weight (relative device weight) in the form: ``` [{"Path": "device_path", "Weight": weight}] ``` type: "array" items: type: "object" properties: Path: type: "string" Weight: type: "integer" minimum: 0 BlkioDeviceReadBps: description: | Limit read rate (bytes per second) from a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" BlkioDeviceWriteBps: description: | Limit write rate (bytes per second) to a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" BlkioDeviceReadIOps: description: | Limit read rate (IO per second) from a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" BlkioDeviceWriteIOps: description: | Limit write rate (IO per second) to a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" CpuPeriod: description: "The length of a CPU period in microseconds." 
type: "integer" format: "int64" CpuQuota: description: | Microseconds of CPU time that the container can get in a CPU period. type: "integer" format: "int64" CpuRealtimePeriod: description: | The length of a CPU real-time period in microseconds. Set to 0 to allocate no time allocated to real-time tasks. type: "integer" format: "int64" CpuRealtimeRuntime: description: | The length of a CPU real-time runtime in microseconds. Set to 0 to allocate no time allocated to real-time tasks. type: "integer" format: "int64" CpusetCpus: description: | CPUs in which to allow execution (e.g., `0-3`, `0,1`). type: "string" example: "0-3" CpusetMems: description: | Memory nodes (MEMs) in which to allow execution (0-3, 0,1). Only effective on NUMA systems. type: "string" Devices: description: "A list of devices to add to the container." type: "array" items: $ref: "#/definitions/DeviceMapping" DeviceCgroupRules: description: "a list of cgroup rules to apply to the container" type: "array" items: type: "string" example: "c 13:* rwm" DeviceRequests: description: | A list of requests for devices to be sent to device drivers. type: "array" items: $ref: "#/definitions/DeviceRequest" KernelMemory: description: | Kernel memory limit in bytes. <p><br /></p> > **Deprecated**: This field is deprecated as the kernel 5.4 deprecated > `kmem.limit_in_bytes`. type: "integer" format: "int64" example: 209715200 KernelMemoryTCP: description: "Hard limit for kernel TCP buffer memory (in bytes)." type: "integer" format: "int64" MemoryReservation: description: "Memory soft limit in bytes." type: "integer" format: "int64" MemorySwap: description: | Total memory limit (memory + swap). Set as `-1` to enable unlimited swap. type: "integer" format: "int64" MemorySwappiness: description: | Tune a container's memory swappiness behavior. Accepts an integer between 0 and 100. type: "integer" format: "int64" minimum: 0 maximum: 100 NanoCpus: description: "CPU quota in units of 10<sup>-9</sup> CPUs." type: "integer" format: "int64" OomKillDisable: description: "Disable OOM Killer for the container." type: "boolean" Init: description: | Run an init inside the container that forwards signals and reaps processes. This field is omitted if empty, and the default (as configured on the daemon) is used. type: "boolean" x-nullable: true PidsLimit: description: | Tune a container's PIDs limit. Set `0` or `-1` for unlimited, or `null` to not change. type: "integer" format: "int64" x-nullable: true Ulimits: description: | A list of resource limits to set in the container. For example: ``` {"Name": "nofile", "Soft": 1024, "Hard": 2048} ``` type: "array" items: type: "object" properties: Name: description: "Name of ulimit" type: "string" Soft: description: "Soft limit" type: "integer" Hard: description: "Hard limit" type: "integer" # Applicable to Windows CpuCount: description: | The number of usable CPUs (Windows only). On Windows Server containers, the processor resource controls are mutually exclusive. The order of precedence is `CPUCount` first, then `CPUShares`, and `CPUPercent` last. type: "integer" format: "int64" CpuPercent: description: | The usable percentage of the available CPUs (Windows only). On Windows Server containers, the processor resource controls are mutually exclusive. The order of precedence is `CPUCount` first, then `CPUShares`, and `CPUPercent` last. 
type: "integer" format: "int64" IOMaximumIOps: description: "Maximum IOps for the container system drive (Windows only)" type: "integer" format: "int64" IOMaximumBandwidth: description: | Maximum IO in bytes per second for the container system drive (Windows only). type: "integer" format: "int64" Limit: description: | An object describing a limit on resources which can be requested by a task. type: "object" properties: NanoCPUs: type: "integer" format: "int64" example: 4000000000 MemoryBytes: type: "integer" format: "int64" example: 8272408576 Pids: description: | Limits the maximum number of PIDs in the container. Set `0` for unlimited. type: "integer" format: "int64" default: 0 example: 100 ResourceObject: description: | An object describing the resources which can be advertised by a node and requested by a task. type: "object" properties: NanoCPUs: type: "integer" format: "int64" example: 4000000000 MemoryBytes: type: "integer" format: "int64" example: 8272408576 GenericResources: $ref: "#/definitions/GenericResources" GenericResources: description: | User-defined resources can be either Integer resources (e.g, `SSD=3`) or String resources (e.g, `GPU=UUID1`). type: "array" items: type: "object" properties: NamedResourceSpec: type: "object" properties: Kind: type: "string" Value: type: "string" DiscreteResourceSpec: type: "object" properties: Kind: type: "string" Value: type: "integer" format: "int64" example: - DiscreteResourceSpec: Kind: "SSD" Value: 3 - NamedResourceSpec: Kind: "GPU" Value: "UUID1" - NamedResourceSpec: Kind: "GPU" Value: "UUID2" HealthConfig: description: "A test to perform to check that the container is healthy." type: "object" properties: Test: description: | The test to perform. Possible values are: - `[]` inherit healthcheck from image or parent image - `["NONE"]` disable healthcheck - `["CMD", args...]` exec arguments directly - `["CMD-SHELL", command]` run command with system's default shell type: "array" items: type: "string" Interval: description: | The time to wait between checks in nanoseconds. It should be 0 or at least 1000000 (1 ms). 0 means inherit. type: "integer" Timeout: description: | The time to wait before considering the check to have hung. It should be 0 or at least 1000000 (1 ms). 0 means inherit. type: "integer" Retries: description: | The number of consecutive failures needed to consider a container as unhealthy. 0 means inherit. type: "integer" StartPeriod: description: | Start period for the container to initialize before starting health-retries countdown in nanoseconds. It should be 0 or at least 1000000 (1 ms). 0 means inherit. type: "integer" Health: description: | Health stores information about the container's healthcheck results. 
type: "object" properties: Status: description: | Status is one of `none`, `starting`, `healthy` or `unhealthy` - "none" Indicates there is no healthcheck - "starting" Starting indicates that the container is not yet ready - "healthy" Healthy indicates that the container is running correctly - "unhealthy" Unhealthy indicates that the container has a problem type: "string" enum: - "none" - "starting" - "healthy" - "unhealthy" example: "healthy" FailingStreak: description: "FailingStreak is the number of consecutive failures" type: "integer" example: 0 Log: type: "array" description: | Log contains the last few results (oldest first) items: x-nullable: true $ref: "#/definitions/HealthcheckResult" HealthcheckResult: description: | HealthcheckResult stores information about a single run of a healthcheck probe type: "object" properties: Start: description: | Date and time at which this check started in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "date-time" example: "2020-01-04T10:44:24.496525531Z" End: description: | Date and time at which this check ended in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2020-01-04T10:45:21.364524523Z" ExitCode: description: | ExitCode meanings: - `0` healthy - `1` unhealthy - `2` reserved (considered unhealthy) - other values: error running probe type: "integer" example: 0 Output: description: "Output from last check" type: "string" HostConfig: description: "Container configuration that depends on the host we are running on" allOf: - $ref: "#/definitions/Resources" - type: "object" properties: # Applicable to all platforms Binds: type: "array" description: | A list of volume bindings for this container. Each volume binding is a string in one of these forms: - `host-src:container-dest[:options]` to bind-mount a host path into the container. Both `host-src`, and `container-dest` must be an _absolute_ path. - `volume-name:container-dest[:options]` to bind-mount a volume managed by a volume driver into the container. `container-dest` must be an _absolute_ path. `options` is an optional, comma-delimited list of: - `nocopy` disables automatic copying of data from the container path to the volume. The `nocopy` flag only applies to named volumes. - `[ro|rw]` mounts a volume read-only or read-write, respectively. If omitted or set to `rw`, volumes are mounted read-write. - `[z|Z]` applies SELinux labels to allow or deny multiple containers to read and write to the same volume. - `z`: a _shared_ content label is applied to the content. This label indicates that multiple containers can share the volume content, for both reading and writing. - `Z`: a _private unshared_ label is applied to the content. This label indicates that only the current container can use a private volume. Labeling systems such as SELinux require proper labels to be placed on volume content that is mounted into a container. Without a label, the security system can prevent a container's processes from using the content. By default, the labels set by the host operating system are not modified. - `[[r]shared|[r]slave|[r]private]` specifies mount [propagation behavior](https://www.kernel.org/doc/Documentation/filesystems/sharedsubtree.txt). This only applies to bind-mounted volumes, not internal volumes or named volumes. 
Mount propagation requires the source mount point (the location where the source directory is mounted in the host operating system) to have the correct propagation properties. For shared volumes, the source mount point must be set to `shared`. For slave volumes, the mount must be set to either `shared` or `slave`. items: type: "string" ContainerIDFile: type: "string" description: "Path to a file where the container ID is written" LogConfig: type: "object" description: "The logging configuration for this container" properties: Type: type: "string" enum: - "json-file" - "syslog" - "journald" - "gelf" - "fluentd" - "awslogs" - "splunk" - "etwlogs" - "none" Config: type: "object" additionalProperties: type: "string" NetworkMode: type: "string" description: | Network mode to use for this container. Supported standard values are: `bridge`, `host`, `none`, and `container:<name|id>`. Any other value is taken as a custom network's name to which this container should connect to. PortBindings: $ref: "#/definitions/PortMap" RestartPolicy: $ref: "#/definitions/RestartPolicy" AutoRemove: type: "boolean" description: | Automatically remove the container when the container's process exits. This has no effect if `RestartPolicy` is set. VolumeDriver: type: "string" description: "Driver that this container uses to mount volumes." VolumesFrom: type: "array" description: | A list of volumes to inherit from another container, specified in the form `<container name>[:<ro|rw>]`. items: type: "string" Mounts: description: | Specification for mounts to be added to the container. type: "array" items: $ref: "#/definitions/Mount" # Applicable to UNIX platforms CapAdd: type: "array" description: | A list of kernel capabilities to add to the container. Conflicts with option 'Capabilities'. items: type: "string" CapDrop: type: "array" description: | A list of kernel capabilities to drop from the container. Conflicts with option 'Capabilities'. items: type: "string" CgroupnsMode: type: "string" enum: - "private" - "host" description: | cgroup namespace mode for the container. Possible values are: - `"private"`: the container runs in its own private cgroup namespace - `"host"`: use the host system's cgroup namespace If not specified, the daemon default is used, which can either be `"private"` or `"host"`, depending on daemon version, kernel support and configuration. Dns: type: "array" description: "A list of DNS servers for the container to use." items: type: "string" DnsOptions: type: "array" description: "A list of DNS options." items: type: "string" DnsSearch: type: "array" description: "A list of DNS search domains." items: type: "string" ExtraHosts: type: "array" description: | A list of hostnames/IP mappings to add to the container's `/etc/hosts` file. Specified in the form `["hostname:IP"]`. items: type: "string" GroupAdd: type: "array" description: | A list of additional groups that the container process will run as. items: type: "string" IpcMode: type: "string" description: | IPC sharing mode for the container. Possible values are: - `"none"`: own private IPC namespace, with /dev/shm not mounted - `"private"`: own private IPC namespace - `"shareable"`: own private IPC namespace, with a possibility to share it with other containers - `"container:<name|id>"`: join another (shareable) container's IPC namespace - `"host"`: use the host system's IPC namespace If not specified, daemon default is used, which can either be `"private"` or `"shareable"`, depending on daemon version and configuration. 
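  # Illustrative note (not part of the schema): a minimal, hypothetical `HostConfig`
  # fragment using the bind formats described above, as it might appear in a
  # `POST /containers/create` request; paths, volume names, and ports are examples only.
  #
  #   HostConfig:
  #     Binds:
  #       - "/srv/data:/data:ro"
  #       - "app-volume:/var/lib/app:rw,nocopy"
  #     PortBindings:
  #       "80/tcp":
  #         - HostIp: "0.0.0.0"
  #           HostPort: "8080"
  #     NetworkMode: "bridge"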
Cgroup: type: "string" description: "Cgroup to use for the container." Links: type: "array" description: | A list of links for the container in the form `container_name:alias`. items: type: "string" OomScoreAdj: type: "integer" description: | An integer value containing the score given to the container in order to tune OOM killer preferences. example: 500 PidMode: type: "string" description: | Set the PID (Process) Namespace mode for the container. It can be either: - `"container:<name|id>"`: joins another container's PID namespace - `"host"`: use the host's PID namespace inside the container Privileged: type: "boolean" description: "Gives the container full access to the host." PublishAllPorts: type: "boolean" description: | Allocates an ephemeral host port for all of a container's exposed ports. Ports are de-allocated when the container stops and allocated when the container starts. The allocated port might be changed when restarting the container. The port is selected from the ephemeral port range that depends on the kernel. For example, on Linux the range is defined by `/proc/sys/net/ipv4/ip_local_port_range`. ReadonlyRootfs: type: "boolean" description: "Mount the container's root filesystem as read only." SecurityOpt: type: "array" description: "A list of string values to customize labels for MLS systems, such as SELinux." items: type: "string" StorageOpt: type: "object" description: | Storage driver options for this container, in the form `{"size": "120G"}`. additionalProperties: type: "string" Tmpfs: type: "object" description: | A map of container directories which should be replaced by tmpfs mounts, and their corresponding mount options. For example: ``` { "/run": "rw,noexec,nosuid,size=65536k" } ``` additionalProperties: type: "string" UTSMode: type: "string" description: "UTS namespace to use for the container." UsernsMode: type: "string" description: | Sets the usernamespace mode for the container when usernamespace remapping option is enabled. ShmSize: type: "integer" description: | Size of `/dev/shm` in bytes. If omitted, the system uses 64MB. minimum: 0 Sysctls: type: "object" description: | A list of kernel parameters (sysctls) to set in the container. For example: ``` {"net.ipv4.ip_forward": "1"} ``` additionalProperties: type: "string" Runtime: type: "string" description: "Runtime to use with this container." # Applicable to Windows ConsoleSize: type: "array" description: | Initial console size, as an `[height, width]` array. (Windows only) minItems: 2 maxItems: 2 items: type: "integer" minimum: 0 Isolation: type: "string" description: | Isolation technology of the container. (Windows only) enum: - "default" - "process" - "hyperv" MaskedPaths: type: "array" description: | The list of paths to be masked inside the container (this overrides the default set of paths). items: type: "string" ReadonlyPaths: type: "array" description: | The list of paths to be set as read-only inside the container (this overrides the default set of paths). items: type: "string" ContainerConfig: description: "Configuration for a container that is portable between hosts" type: "object" properties: Hostname: description: "The hostname to use for the container, as a valid RFC 1123 hostname." type: "string" Domainname: description: "The domain name to use for the container." type: "string" User: description: "The user that commands are run as inside the container." type: "string" AttachStdin: description: "Whether to attach to `stdin`." 
type: "boolean" default: false AttachStdout: description: "Whether to attach to `stdout`." type: "boolean" default: true AttachStderr: description: "Whether to attach to `stderr`." type: "boolean" default: true ExposedPorts: description: | An object mapping ports to an empty object in the form: `{"<port>/<tcp|udp|sctp>": {}}` type: "object" additionalProperties: type: "object" enum: - {} default: {} Tty: description: | Attach standard streams to a TTY, including `stdin` if it is not closed. type: "boolean" default: false OpenStdin: description: "Open `stdin`" type: "boolean" default: false StdinOnce: description: "Close `stdin` after one attached client disconnects" type: "boolean" default: false Env: description: | A list of environment variables to set inside the container in the form `["VAR=value", ...]`. A variable without `=` is removed from the environment, rather than to have an empty value. type: "array" items: type: "string" Cmd: description: | Command to run specified as a string or an array of strings. type: "array" items: type: "string" Healthcheck: $ref: "#/definitions/HealthConfig" ArgsEscaped: description: "Command is already escaped (Windows only)" type: "boolean" Image: description: | The name of the image to use when creating the container/ type: "string" Volumes: description: | An object mapping mount point paths inside the container to empty objects. type: "object" additionalProperties: type: "object" enum: - {} default: {} WorkingDir: description: "The working directory for commands to run in." type: "string" Entrypoint: description: | The entry point for the container as a string or an array of strings. If the array consists of exactly one empty string (`[""]`) then the entry point is reset to system default (i.e., the entry point used by docker when there is no `ENTRYPOINT` instruction in the `Dockerfile`). type: "array" items: type: "string" NetworkDisabled: description: "Disable networking for the container." type: "boolean" MacAddress: description: "MAC address of the container." type: "string" OnBuild: description: | `ONBUILD` metadata that were defined in the image's `Dockerfile`. type: "array" items: type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" StopSignal: description: | Signal to stop a container as a string or unsigned integer. type: "string" default: "SIGTERM" StopTimeout: description: "Timeout to stop a container in seconds." type: "integer" default: 10 Shell: description: | Shell for when `RUN`, `CMD`, and `ENTRYPOINT` uses a shell. type: "array" items: type: "string" NetworkingConfig: description: | NetworkingConfig represents the container's networking configuration for each of its interfaces. It is used for the networking configs specified in the `docker create` and `docker network connect` commands. type: "object" properties: EndpointsConfig: description: | A mapping of network name to endpoint configuration for that network. type: "object" additionalProperties: $ref: "#/definitions/EndpointSettings" example: # putting an example here, instead of using the example values from # /definitions/EndpointSettings, because containers/create currently # does not support attaching to multiple networks, so the example request # would be confusing if it showed that multiple networks can be contained # in the EndpointsConfig. 
# TODO remove once we support multiple networks on container create (see https://github.com/moby/moby/blob/07e6b843594e061f82baa5fa23c2ff7d536c2a05/daemon/create.go#L323) EndpointsConfig: isolated_nw: IPAMConfig: IPv4Address: "172.20.30.33" IPv6Address: "2001:db8:abcd::3033" LinkLocalIPs: - "169.254.34.68" - "fe80::3468" Links: - "container_1" - "container_2" Aliases: - "server_x" - "server_y" NetworkSettings: description: "NetworkSettings exposes the network settings in the API" type: "object" properties: Bridge: description: Name of the network's bridge (for example, `docker0`). type: "string" example: "docker0" SandboxID: description: SandboxID uniquely represents a container's network stack. type: "string" example: "9d12daf2c33f5959c8bf90aa513e4f65b561738661003029ec84830cd503a0c3" HairpinMode: description: | Indicates if hairpin NAT should be enabled on the virtual interface. type: "boolean" example: false LinkLocalIPv6Address: description: IPv6 unicast address using the link-local prefix. type: "string" example: "fe80::42:acff:fe11:1" LinkLocalIPv6PrefixLen: description: Prefix length of the IPv6 unicast address. type: "integer" example: "64" Ports: $ref: "#/definitions/PortMap" SandboxKey: description: SandboxKey identifies the sandbox type: "string" example: "/var/run/docker/netns/8ab54b426c38" # TODO is SecondaryIPAddresses actually used? SecondaryIPAddresses: description: "" type: "array" items: $ref: "#/definitions/Address" x-nullable: true # TODO is SecondaryIPv6Addresses actually used? SecondaryIPv6Addresses: description: "" type: "array" items: $ref: "#/definitions/Address" x-nullable: true # TODO properties below are part of DefaultNetworkSettings, which is # marked as deprecated since Docker 1.9 and to be removed in Docker v17.12 EndpointID: description: | EndpointID uniquely represents a service endpoint in a Sandbox. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "b88f5b905aabf2893f3cbc4ee42d1ea7980bbc0a92e2c8922b1e1795298afb0b" Gateway: description: | Gateway address for the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "172.17.0.1" GlobalIPv6Address: description: | Global IPv6 address for the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "2001:db8::5689" GlobalIPv6PrefixLen: description: | Mask length of the global IPv6 address. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. 
This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "integer" example: 64 IPAddress: description: | IPv4 address for the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "172.17.0.4" IPPrefixLen: description: | Mask length of the IPv4 address. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "integer" example: 16 IPv6Gateway: description: | IPv6 gateway address for this network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "2001:db8:2::100" MacAddress: description: | MAC address for the container on the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "02:42:ac:11:00:04" Networks: description: | Information about all networks that the container is connected to. type: "object" additionalProperties: $ref: "#/definitions/EndpointSettings" Address: description: Address represents an IPv4 or IPv6 IP address. type: "object" properties: Addr: description: IP address. type: "string" PrefixLen: description: Mask length of the IP address. type: "integer" PortMap: description: | PortMap describes the mapping of container ports to host ports, using the container's port-number and protocol as key in the format `<port>/<protocol>`, for example, `80/udp`. If a container's port is mapped for multiple protocols, separate entries are added to the mapping table. type: "object" additionalProperties: type: "array" x-nullable: true items: $ref: "#/definitions/PortBinding" example: "443/tcp": - HostIp: "127.0.0.1" HostPort: "4443" "80/tcp": - HostIp: "0.0.0.0" HostPort: "80" - HostIp: "0.0.0.0" HostPort: "8080" "80/udp": - HostIp: "0.0.0.0" HostPort: "80" "53/udp": - HostIp: "0.0.0.0" HostPort: "53" "2377/tcp": null PortBinding: description: | PortBinding represents a binding between a host IP address and a host port. type: "object" properties: HostIp: description: "Host IP address that the container's port is mapped to." type: "string" example: "127.0.0.1" HostPort: description: "Host port number that the container's port is mapped to." type: "string" example: "4443" GraphDriverData: description: "Information about a container's graph driver." 
type: "object" required: [Name, Data] properties: Name: type: "string" x-nullable: false Data: type: "object" x-nullable: false additionalProperties: type: "string" Image: type: "object" required: - Id - Parent - Comment - Created - Container - DockerVersion - Author - Architecture - Os - Size - VirtualSize - GraphDriver - RootFS properties: Id: type: "string" x-nullable: false RepoTags: type: "array" items: type: "string" RepoDigests: type: "array" items: type: "string" Parent: type: "string" x-nullable: false Comment: type: "string" x-nullable: false Created: type: "string" x-nullable: false Container: type: "string" x-nullable: false ContainerConfig: $ref: "#/definitions/ContainerConfig" DockerVersion: type: "string" x-nullable: false Author: type: "string" x-nullable: false Config: $ref: "#/definitions/ContainerConfig" Architecture: type: "string" x-nullable: false Os: type: "string" x-nullable: false OsVersion: type: "string" Size: type: "integer" format: "int64" x-nullable: false VirtualSize: type: "integer" format: "int64" x-nullable: false GraphDriver: $ref: "#/definitions/GraphDriverData" RootFS: type: "object" required: [Type] properties: Type: type: "string" x-nullable: false Layers: type: "array" items: type: "string" BaseLayer: type: "string" Metadata: type: "object" properties: LastTagTime: type: "string" format: "dateTime" ImageSummary: type: "object" required: - Id - ParentId - RepoTags - RepoDigests - Created - Size - SharedSize - VirtualSize - Labels - Containers properties: Id: type: "string" x-nullable: false ParentId: type: "string" x-nullable: false RepoTags: type: "array" x-nullable: false items: type: "string" RepoDigests: type: "array" x-nullable: false items: type: "string" Created: type: "integer" x-nullable: false Size: type: "integer" x-nullable: false SharedSize: type: "integer" x-nullable: false VirtualSize: type: "integer" x-nullable: false Labels: type: "object" x-nullable: false additionalProperties: type: "string" Containers: x-nullable: false type: "integer" AuthConfig: type: "object" properties: username: type: "string" password: type: "string" email: type: "string" serveraddress: type: "string" example: username: "hannibal" password: "xxxx" serveraddress: "https://index.docker.io/v1/" ProcessConfig: type: "object" properties: privileged: type: "boolean" user: type: "string" tty: type: "boolean" entrypoint: type: "string" arguments: type: "array" items: type: "string" Volume: type: "object" required: [Name, Driver, Mountpoint, Labels, Scope, Options] properties: Name: type: "string" description: "Name of the volume." x-nullable: false Driver: type: "string" description: "Name of the volume driver used by the volume." x-nullable: false Mountpoint: type: "string" description: "Mount path of the volume on the host." x-nullable: false CreatedAt: type: "string" format: "dateTime" description: "Date/Time the volume was created." Status: type: "object" description: | Low-level details about the volume, provided by the volume driver. Details are returned as a map with key/value pairs: `{"key":"value","key2":"value2"}`. The `Status` field is optional, and is omitted if the volume driver does not support this feature. additionalProperties: type: "object" Labels: type: "object" description: "User-defined key/value metadata." x-nullable: false additionalProperties: type: "string" Scope: type: "string" description: | The level at which the volume exists. Either `global` for cluster-wide, or `local` for machine level. 
default: "local" x-nullable: false enum: ["local", "global"] Options: type: "object" description: | The driver specific options used when creating the volume. additionalProperties: type: "string" UsageData: type: "object" x-nullable: true required: [Size, RefCount] description: | Usage details about the volume. This information is used by the `GET /system/df` endpoint, and omitted in other endpoints. properties: Size: type: "integer" default: -1 description: | Amount of disk space used by the volume (in bytes). This information is only available for volumes created with the `"local"` volume driver. For volumes created with other volume drivers, this field is set to `-1` ("not available") x-nullable: false RefCount: type: "integer" default: -1 description: | The number of containers referencing this volume. This field is set to `-1` if the reference-count is not available. x-nullable: false example: Name: "tardis" Driver: "custom" Mountpoint: "/var/lib/docker/volumes/tardis" Status: hello: "world" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Scope: "local" CreatedAt: "2016-06-07T20:31:11.853781916Z" Network: type: "object" properties: Name: type: "string" Id: type: "string" Created: type: "string" format: "dateTime" Scope: type: "string" Driver: type: "string" EnableIPv6: type: "boolean" IPAM: $ref: "#/definitions/IPAM" Internal: type: "boolean" Attachable: type: "boolean" Ingress: type: "boolean" Containers: type: "object" additionalProperties: $ref: "#/definitions/NetworkContainer" Options: type: "object" additionalProperties: type: "string" Labels: type: "object" additionalProperties: type: "string" example: Name: "net01" Id: "7d86d31b1478e7cca9ebed7e73aa0fdeec46c5ca29497431d3007d2d9e15ed99" Created: "2016-10-19T04:33:30.360899459Z" Scope: "local" Driver: "bridge" EnableIPv6: false IPAM: Driver: "default" Config: - Subnet: "172.19.0.0/16" Gateway: "172.19.0.1" Options: foo: "bar" Internal: false Attachable: false Ingress: false Containers: 19a4d5d687db25203351ed79d478946f861258f018fe384f229f2efa4b23513c: Name: "test" EndpointID: "628cadb8bcb92de107b2a1e516cbffe463e321f548feb37697cce00ad694f21a" MacAddress: "02:42:ac:13:00:02" IPv4Address: "172.19.0.2/16" IPv6Address: "" Options: com.docker.network.bridge.default_bridge: "true" com.docker.network.bridge.enable_icc: "true" com.docker.network.bridge.enable_ip_masquerade: "true" com.docker.network.bridge.host_binding_ipv4: "0.0.0.0" com.docker.network.bridge.name: "docker0" com.docker.network.driver.mtu: "1500" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" IPAM: type: "object" properties: Driver: description: "Name of the IPAM driver to use." type: "string" default: "default" Config: description: | List of IPAM configuration options, specified as a map: ``` {"Subnet": <CIDR>, "IPRange": <CIDR>, "Gateway": <IP address>, "AuxAddress": <device_name:IP address>} ``` type: "array" items: type: "object" additionalProperties: type: "string" Options: description: "Driver-specific options, specified as a map." 
type: "object" additionalProperties: type: "string" NetworkContainer: type: "object" properties: Name: type: "string" EndpointID: type: "string" MacAddress: type: "string" IPv4Address: type: "string" IPv6Address: type: "string" BuildInfo: type: "object" properties: id: type: "string" stream: type: "string" error: type: "string" errorDetail: $ref: "#/definitions/ErrorDetail" status: type: "string" progress: type: "string" progressDetail: $ref: "#/definitions/ProgressDetail" aux: $ref: "#/definitions/ImageID" BuildCache: type: "object" properties: ID: type: "string" Parent: type: "string" Type: type: "string" Description: type: "string" InUse: type: "boolean" Shared: type: "boolean" Size: description: | Amount of disk space used by the build cache (in bytes). type: "integer" CreatedAt: description: | Date and time at which the build cache was created in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2016-08-18T10:44:24.496525531Z" LastUsedAt: description: | Date and time at which the build cache was last used in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" x-nullable: true example: "2017-08-09T07:09:37.632105588Z" UsageCount: type: "integer" ImageID: type: "object" description: "Image ID or Digest" properties: ID: type: "string" example: ID: "sha256:85f05633ddc1c50679be2b16a0479ab6f7637f8884e0cfe0f4d20e1ebb3d6e7c" CreateImageInfo: type: "object" properties: id: type: "string" error: type: "string" status: type: "string" progress: type: "string" progressDetail: $ref: "#/definitions/ProgressDetail" PushImageInfo: type: "object" properties: error: type: "string" status: type: "string" progress: type: "string" progressDetail: $ref: "#/definitions/ProgressDetail" ErrorDetail: type: "object" properties: code: type: "integer" message: type: "string" ProgressDetail: type: "object" properties: current: type: "integer" total: type: "integer" ErrorResponse: description: "Represents an error." type: "object" required: ["message"] properties: message: description: "The error message." type: "string" x-nullable: false example: message: "Something went wrong." IdResponse: description: "Response to an API call that returns just an Id" type: "object" required: ["Id"] properties: Id: description: "The id of the newly created object." type: "string" x-nullable: false EndpointSettings: description: "Configuration for a network endpoint." type: "object" properties: # Configurations IPAMConfig: $ref: "#/definitions/EndpointIPAMConfig" Links: type: "array" items: type: "string" example: - "container_1" - "container_2" Aliases: type: "array" items: type: "string" example: - "server_x" - "server_y" # Operational data NetworkID: description: | Unique ID of the network. type: "string" example: "08754567f1f40222263eab4102e1c733ae697e8e354aa9cd6e18d7402835292a" EndpointID: description: | Unique ID for the service endpoint in a Sandbox. type: "string" example: "b88f5b905aabf2893f3cbc4ee42d1ea7980bbc0a92e2c8922b1e1795298afb0b" Gateway: description: | Gateway address for this network. type: "string" example: "172.17.0.1" IPAddress: description: | IPv4 address. type: "string" example: "172.17.0.4" IPPrefixLen: description: | Mask length of the IPv4 address. type: "integer" example: 16 IPv6Gateway: description: | IPv6 gateway address. type: "string" example: "2001:db8:2::100" GlobalIPv6Address: description: | Global IPv6 address. 
type: "string" example: "2001:db8::5689" GlobalIPv6PrefixLen: description: | Mask length of the global IPv6 address. type: "integer" format: "int64" example: 64 MacAddress: description: | MAC address for the endpoint on this network. type: "string" example: "02:42:ac:11:00:04" DriverOpts: description: | DriverOpts is a mapping of driver options and values. These options are passed directly to the driver and are driver specific. type: "object" x-nullable: true additionalProperties: type: "string" example: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" EndpointIPAMConfig: description: | EndpointIPAMConfig represents an endpoint's IPAM configuration. type: "object" x-nullable: true properties: IPv4Address: type: "string" example: "172.20.30.33" IPv6Address: type: "string" example: "2001:db8:abcd::3033" LinkLocalIPs: type: "array" items: type: "string" example: - "169.254.34.68" - "fe80::3468" PluginMount: type: "object" x-nullable: false required: [Name, Description, Settable, Source, Destination, Type, Options] properties: Name: type: "string" x-nullable: false example: "some-mount" Description: type: "string" x-nullable: false example: "This is a mount that's used by the plugin." Settable: type: "array" items: type: "string" Source: type: "string" example: "/var/lib/docker/plugins/" Destination: type: "string" x-nullable: false example: "/mnt/state" Type: type: "string" x-nullable: false example: "bind" Options: type: "array" items: type: "string" example: - "rbind" - "rw" PluginDevice: type: "object" required: [Name, Description, Settable, Path] x-nullable: false properties: Name: type: "string" x-nullable: false Description: type: "string" x-nullable: false Settable: type: "array" items: type: "string" Path: type: "string" example: "/dev/fuse" PluginEnv: type: "object" x-nullable: false required: [Name, Description, Settable, Value] properties: Name: x-nullable: false type: "string" Description: x-nullable: false type: "string" Settable: type: "array" items: type: "string" Value: type: "string" PluginInterfaceType: type: "object" x-nullable: false required: [Prefix, Capability, Version] properties: Prefix: type: "string" x-nullable: false Capability: type: "string" x-nullable: false Version: type: "string" x-nullable: false PluginPrivilegeItem: description: | Describes a permission the user has to accept upon installing the plugin. type: "object" properties: Name: type: "string" example: "network" Description: type: "string" Value: type: "array" items: type: "string" example: - "host" Plugin: description: "A plugin for the Engine API" type: "object" required: [Settings, Enabled, Config, Name] properties: Id: type: "string" example: "5724e2c8652da337ab2eedd19fc6fc0ec908e4bd907c7421bf6a8dfc70c4c078" Name: type: "string" x-nullable: false example: "tiborvass/sample-volume-plugin" Enabled: description: True if the plugin is running. False if the plugin is not running, only installed. type: "boolean" x-nullable: false example: true Settings: description: "Settings that can be modified by users." 
type: "object" x-nullable: false required: [Args, Devices, Env, Mounts] properties: Mounts: type: "array" items: $ref: "#/definitions/PluginMount" Env: type: "array" items: type: "string" example: - "DEBUG=0" Args: type: "array" items: type: "string" Devices: type: "array" items: $ref: "#/definitions/PluginDevice" PluginReference: description: "plugin remote reference used to push/pull the plugin" type: "string" x-nullable: false example: "localhost:5000/tiborvass/sample-volume-plugin:latest" Config: description: "The config of a plugin." type: "object" x-nullable: false required: - Description - Documentation - Interface - Entrypoint - WorkDir - Network - Linux - PidHost - PropagatedMount - IpcHost - Mounts - Env - Args properties: DockerVersion: description: "Docker Version used to create the plugin" type: "string" x-nullable: false example: "17.06.0-ce" Description: type: "string" x-nullable: false example: "A sample volume plugin for Docker" Documentation: type: "string" x-nullable: false example: "https://docs.docker.com/engine/extend/plugins/" Interface: description: "The interface between Docker and the plugin" x-nullable: false type: "object" required: [Types, Socket] properties: Types: type: "array" items: $ref: "#/definitions/PluginInterfaceType" example: - "docker.volumedriver/1.0" Socket: type: "string" x-nullable: false example: "plugins.sock" ProtocolScheme: type: "string" example: "some.protocol/v1.0" description: "Protocol to use for clients connecting to the plugin." enum: - "" - "moby.plugins.http/v1" Entrypoint: type: "array" items: type: "string" example: - "/usr/bin/sample-volume-plugin" - "/data" WorkDir: type: "string" x-nullable: false example: "/bin/" User: type: "object" x-nullable: false properties: UID: type: "integer" format: "uint32" example: 1000 GID: type: "integer" format: "uint32" example: 1000 Network: type: "object" x-nullable: false required: [Type] properties: Type: x-nullable: false type: "string" example: "host" Linux: type: "object" x-nullable: false required: [Capabilities, AllowAllDevices, Devices] properties: Capabilities: type: "array" items: type: "string" example: - "CAP_SYS_ADMIN" - "CAP_SYSLOG" AllowAllDevices: type: "boolean" x-nullable: false example: false Devices: type: "array" items: $ref: "#/definitions/PluginDevice" PropagatedMount: type: "string" x-nullable: false example: "/mnt/volumes" IpcHost: type: "boolean" x-nullable: false example: false PidHost: type: "boolean" x-nullable: false example: false Mounts: type: "array" items: $ref: "#/definitions/PluginMount" Env: type: "array" items: $ref: "#/definitions/PluginEnv" example: - Name: "DEBUG" Description: "If set, prints debug messages" Settable: null Value: "0" Args: type: "object" x-nullable: false required: [Name, Description, Settable, Value] properties: Name: x-nullable: false type: "string" example: "args" Description: x-nullable: false type: "string" example: "command line arguments" Settable: type: "array" items: type: "string" Value: type: "array" items: type: "string" rootfs: type: "object" properties: type: type: "string" example: "layers" diff_ids: type: "array" items: type: "string" example: - "sha256:675532206fbf3030b8458f88d6e26d4eb1577688a25efec97154c94e8b6b4887" - "sha256:e216a057b1cb1efc11f8a268f37ef62083e70b1b38323ba252e25ac88904a7e8" ObjectVersion: description: | The version number of the object such as node, service, etc. This is needed to avoid conflicting writes. 
The client must send the version number along with the modified specification when updating these objects. This approach ensures safe concurrency and determinism in that the change on the object may not be applied if the version number has changed from the last read. In other words, if two update requests specify the same base version, only one of the requests can succeed. As a result, two separate update requests that happen at the same time will not unintentionally overwrite each other. type: "object" properties: Index: type: "integer" format: "uint64" example: 373531 NodeSpec: type: "object" properties: Name: description: "Name for the node." type: "string" example: "my-node" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" Role: description: "Role of the node." type: "string" enum: - "worker" - "manager" example: "manager" Availability: description: "Availability of the node." type: "string" enum: - "active" - "pause" - "drain" example: "active" example: Availability: "active" Name: "node-name" Role: "manager" Labels: foo: "bar" Node: type: "object" properties: ID: type: "string" example: "24ifsmvkjbyhk" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: description: | Date and time at which the node was added to the swarm in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2016-08-18T10:44:24.496525531Z" UpdatedAt: description: | Date and time at which the node was last updated in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2017-08-09T07:09:37.632105588Z" Spec: $ref: "#/definitions/NodeSpec" Description: $ref: "#/definitions/NodeDescription" Status: $ref: "#/definitions/NodeStatus" ManagerStatus: $ref: "#/definitions/ManagerStatus" NodeDescription: description: | NodeDescription encapsulates the properties of the Node as reported by the agent. type: "object" properties: Hostname: type: "string" example: "bf3067039e47" Platform: $ref: "#/definitions/Platform" Resources: $ref: "#/definitions/ResourceObject" Engine: $ref: "#/definitions/EngineDescription" TLSInfo: $ref: "#/definitions/TLSInfo" Platform: description: | Platform represents the platform (Arch/OS). type: "object" properties: Architecture: description: | Architecture represents the hardware architecture (for example, `x86_64`). type: "string" example: "x86_64" OS: description: | OS represents the Operating System (for example, `linux` or `windows`). type: "string" example: "linux" EngineDescription: description: "EngineDescription provides information about an engine." 
type: "object" properties: EngineVersion: type: "string" example: "17.06.0" Labels: type: "object" additionalProperties: type: "string" example: foo: "bar" Plugins: type: "array" items: type: "object" properties: Type: type: "string" Name: type: "string" example: - Type: "Log" Name: "awslogs" - Type: "Log" Name: "fluentd" - Type: "Log" Name: "gcplogs" - Type: "Log" Name: "gelf" - Type: "Log" Name: "journald" - Type: "Log" Name: "json-file" - Type: "Log" Name: "logentries" - Type: "Log" Name: "splunk" - Type: "Log" Name: "syslog" - Type: "Network" Name: "bridge" - Type: "Network" Name: "host" - Type: "Network" Name: "ipvlan" - Type: "Network" Name: "macvlan" - Type: "Network" Name: "null" - Type: "Network" Name: "overlay" - Type: "Volume" Name: "local" - Type: "Volume" Name: "localhost:5000/vieux/sshfs:latest" - Type: "Volume" Name: "vieux/sshfs:latest" TLSInfo: description: | Information about the issuer of leaf TLS certificates and the trusted root CA certificate. type: "object" properties: TrustRoot: description: | The root CA certificate(s) that are used to validate leaf TLS certificates. type: "string" CertIssuerSubject: description: The base64-url-safe-encoded raw subject bytes of the issuer. type: "string" CertIssuerPublicKey: description: | The base64-url-safe-encoded raw public key bytes of the issuer. type: "string" example: TrustRoot: | -----BEGIN CERTIFICATE----- MIIBajCCARCgAwIBAgIUbYqrLSOSQHoxD8CwG6Bi2PJi9c8wCgYIKoZIzj0EAwIw EzERMA8GA1UEAxMIc3dhcm0tY2EwHhcNMTcwNDI0MjE0MzAwWhcNMzcwNDE5MjE0 MzAwWjATMREwDwYDVQQDEwhzd2FybS1jYTBZMBMGByqGSM49AgEGCCqGSM49AwEH A0IABJk/VyMPYdaqDXJb/VXh5n/1Yuv7iNrxV3Qb3l06XD46seovcDWs3IZNV1lf 3Skyr0ofcchipoiHkXBODojJydSjQjBAMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMB Af8EBTADAQH/MB0GA1UdDgQWBBRUXxuRcnFjDfR/RIAUQab8ZV/n4jAKBggqhkjO PQQDAgNIADBFAiAy+JTe6Uc3KyLCMiqGl2GyWGQqQDEcO3/YG36x7om65AIhAJvz pxv6zFeVEkAEEkqIYi0omA9+CjanB/6Bz4n1uw8H -----END CERTIFICATE----- CertIssuerSubject: "MBMxETAPBgNVBAMTCHN3YXJtLWNh" CertIssuerPublicKey: "MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEmT9XIw9h1qoNclv9VeHmf/Vi6/uI2vFXdBveXTpcPjqx6i9wNazchk1XWV/dKTKvSh9xyGKmiIeRcE4OiMnJ1A==" NodeStatus: description: | NodeStatus represents the status of a node. It provides the current status of the node, as seen by the manager. type: "object" properties: State: $ref: "#/definitions/NodeState" Message: type: "string" example: "" Addr: description: "IP address of the node." type: "string" example: "172.17.0.2" NodeState: description: "NodeState represents the state of a node." type: "string" enum: - "unknown" - "down" - "ready" - "disconnected" example: "ready" ManagerStatus: description: | ManagerStatus represents the status of a manager. It provides the current status of a node's manager component, if the node is a manager. x-nullable: true type: "object" properties: Leader: type: "boolean" default: false example: true Reachability: $ref: "#/definitions/Reachability" Addr: description: | The IP address and port at which the manager is reachable. type: "string" example: "10.0.0.46:2377" Reachability: description: "Reachability represents the reachability of a node." type: "string" enum: - "unknown" - "unreachable" - "reachable" example: "reachable" SwarmSpec: description: "User modifiable swarm configuration." type: "object" properties: Name: description: "Name of the swarm." type: "string" example: "default" Labels: description: "User-defined key/value metadata." 
type: "object" additionalProperties: type: "string" example: com.example.corp.type: "production" com.example.corp.department: "engineering" Orchestration: description: "Orchestration configuration." type: "object" x-nullable: true properties: TaskHistoryRetentionLimit: description: | The number of historic tasks to keep per instance or node. If negative, never remove completed or failed tasks. type: "integer" format: "int64" example: 10 Raft: description: "Raft configuration." type: "object" properties: SnapshotInterval: description: "The number of log entries between snapshots." type: "integer" format: "uint64" example: 10000 KeepOldSnapshots: description: | The number of snapshots to keep beyond the current snapshot. type: "integer" format: "uint64" LogEntriesForSlowFollowers: description: | The number of log entries to keep around to sync up slow followers after a snapshot is created. type: "integer" format: "uint64" example: 500 ElectionTick: description: | The number of ticks that a follower will wait for a message from the leader before becoming a candidate and starting an election. `ElectionTick` must be greater than `HeartbeatTick`. A tick currently defaults to one second, so these translate directly to seconds currently, but this is NOT guaranteed. type: "integer" example: 3 HeartbeatTick: description: | The number of ticks between heartbeats. Every HeartbeatTick ticks, the leader will send a heartbeat to the followers. A tick currently defaults to one second, so these translate directly to seconds currently, but this is NOT guaranteed. type: "integer" example: 1 Dispatcher: description: "Dispatcher configuration." type: "object" x-nullable: true properties: HeartbeatPeriod: description: | The delay for an agent to send a heartbeat to the dispatcher. type: "integer" format: "int64" example: 5000000000 CAConfig: description: "CA configuration." type: "object" x-nullable: true properties: NodeCertExpiry: description: "The duration node certificates are issued for." type: "integer" format: "int64" example: 7776000000000000 ExternalCAs: description: | Configuration for forwarding signing requests to an external certificate authority. type: "array" items: type: "object" properties: Protocol: description: | Protocol for communication with the external CA (currently only `cfssl` is supported). type: "string" enum: - "cfssl" default: "cfssl" URL: description: | URL where certificate signing requests should be sent. type: "string" Options: description: | An object with key/value pairs that are interpreted as protocol-specific options for the external CA driver. type: "object" additionalProperties: type: "string" CACert: description: | The root CA certificate (in PEM format) this external CA uses to issue TLS certificates (assumed to be to the current swarm root CA certificate if not provided). type: "string" SigningCACert: description: | The desired signing CA certificate for all swarm node TLS leaf certificates, in PEM format. type: "string" SigningCAKey: description: | The desired signing CA key for all swarm node TLS leaf certificates, in PEM format. type: "string" ForceRotate: description: | An integer whose purpose is to force swarm to generate a new signing CA certificate and key, if none have been specified in `SigningCACert` and `SigningCAKey` format: "uint64" type: "integer" EncryptionConfig: description: "Parameters related to encryption-at-rest." type: "object" properties: AutoLockManagers: description: | If set, generate a key and use it to lock data stored on the managers. 
type: "boolean" example: false TaskDefaults: description: "Defaults for creating tasks in this cluster." type: "object" properties: LogDriver: description: | The log driver to use for tasks created in the orchestrator if unspecified by a service. Updating this value only affects new tasks. Existing tasks continue to use their previously configured log driver until recreated. type: "object" properties: Name: description: | The log driver to use as a default for new tasks. type: "string" example: "json-file" Options: description: | Driver-specific options for the selectd log driver, specified as key/value pairs. type: "object" additionalProperties: type: "string" example: "max-file": "10" "max-size": "100m" # The Swarm information for `GET /info`. It is the same as `GET /swarm`, but # without `JoinTokens`. ClusterInfo: description: | ClusterInfo represents information about the swarm as is returned by the "/info" endpoint. Join-tokens are not included. x-nullable: true type: "object" properties: ID: description: "The ID of the swarm." type: "string" example: "abajmipo7b4xz5ip2nrla6b11" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: description: | Date and time at which the swarm was initialised in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2016-08-18T10:44:24.496525531Z" UpdatedAt: description: | Date and time at which the swarm was last updated in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2017-08-09T07:09:37.632105588Z" Spec: $ref: "#/definitions/SwarmSpec" TLSInfo: $ref: "#/definitions/TLSInfo" RootRotationInProgress: description: | Whether there is currently a root CA rotation in progress for the swarm type: "boolean" example: false DataPathPort: description: | DataPathPort specifies the data path port number for data traffic. Acceptable port range is 1024 to 49151. If no port is set or is set to 0, the default port (4789) is used. type: "integer" format: "uint32" default: 4789 example: 4789 DefaultAddrPool: description: | Default Address Pool specifies default subnet pools for global scope networks. type: "array" items: type: "string" format: "CIDR" example: ["10.10.0.0/16", "20.20.0.0/16"] SubnetSize: description: | SubnetSize specifies the subnet size of the networks created from the default subnet pool. type: "integer" format: "uint32" maximum: 29 default: 24 example: 24 JoinTokens: description: | JoinTokens contains the tokens workers and managers need to join the swarm. type: "object" properties: Worker: description: | The token workers can use to join the swarm. type: "string" example: "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-1awxwuwd3z9j1z3puu7rcgdbx" Manager: description: | The token managers can use to join the swarm. type: "string" example: "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-7p73s1dx5in4tatdymyhg9hu2" Swarm: type: "object" allOf: - $ref: "#/definitions/ClusterInfo" - type: "object" properties: JoinTokens: $ref: "#/definitions/JoinTokens" TaskSpec: description: "User modifiable task configuration." type: "object" properties: PluginSpec: type: "object" description: | Plugin spec for the service. *(Experimental release only.)* <p><br /></p> > **Note**: ContainerSpec, NetworkAttachmentSpec, and PluginSpec are > mutually exclusive. PluginSpec is only used when the Runtime field > is set to `plugin`. NetworkAttachmentSpec is used when the Runtime > field is set to `attachment`. 
properties: Name: description: "The name or 'alias' to use for the plugin." type: "string" Remote: description: "The plugin image reference to use." type: "string" Disabled: description: "Disable the plugin once scheduled." type: "boolean" PluginPrivilege: type: "array" items: $ref: "#/definitions/PluginPrivilegeItem" ContainerSpec: type: "object" description: | Container spec for the service. <p><br /></p> > **Note**: ContainerSpec, NetworkAttachmentSpec, and PluginSpec are > mutually exclusive. PluginSpec is only used when the Runtime field > is set to `plugin`. NetworkAttachmentSpec is used when the Runtime > field is set to `attachment`. properties: Image: description: "The image name to use for the container" type: "string" Labels: description: "User-defined key/value data." type: "object" additionalProperties: type: "string" Command: description: "The command to be run in the image." type: "array" items: type: "string" Args: description: "Arguments to the command." type: "array" items: type: "string" Hostname: description: | The hostname to use for the container, as a valid [RFC 1123](https://tools.ietf.org/html/rfc1123) hostname. type: "string" Env: description: | A list of environment variables in the form `VAR=value`. type: "array" items: type: "string" Dir: description: "The working directory for commands to run in." type: "string" User: description: "The user inside the container." type: "string" Groups: type: "array" description: | A list of additional groups that the container process will run as. items: type: "string" Privileges: type: "object" description: "Security options for the container" properties: CredentialSpec: type: "object" description: "CredentialSpec for managed service account (Windows only)" properties: Config: type: "string" example: "0bt9dmxjvjiqermk6xrop3ekq" description: | Load credential spec from a Swarm Config with the given ID. The specified config must also be present in the Configs field with the Runtime property set. <p><br /></p> > **Note**: `CredentialSpec.File`, `CredentialSpec.Registry`, > and `CredentialSpec.Config` are mutually exclusive. File: type: "string" example: "spec.json" description: | Load credential spec from this file. The file is read by the daemon, and must be present in the `CredentialSpecs` subdirectory in the docker data directory, which defaults to `C:\ProgramData\Docker\` on Windows. For example, specifying `spec.json` loads `C:\ProgramData\Docker\CredentialSpecs\spec.json`. <p><br /></p> > **Note**: `CredentialSpec.File`, `CredentialSpec.Registry`, > and `CredentialSpec.Config` are mutually exclusive. Registry: type: "string" description: | Load credential spec from this value in the Windows registry. The specified registry value must be located in: `HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization\Containers\CredentialSpecs` <p><br /></p> > **Note**: `CredentialSpec.File`, `CredentialSpec.Registry`, > and `CredentialSpec.Config` are mutually exclusive. SELinuxContext: type: "object" description: "SELinux labels of the container" properties: Disable: type: "boolean" description: "Disable SELinux" User: type: "string" description: "SELinux user label" Role: type: "string" description: "SELinux role label" Type: type: "string" description: "SELinux type label" Level: type: "string" description: "SELinux level label" TTY: description: "Whether a pseudo-TTY should be allocated." 
type: "boolean" OpenStdin: description: "Open `stdin`" type: "boolean" ReadOnly: description: "Mount the container's root filesystem as read only." type: "boolean" Mounts: description: | Specification for mounts to be added to containers created as part of the service. type: "array" items: $ref: "#/definitions/Mount" StopSignal: description: "Signal to stop the container." type: "string" StopGracePeriod: description: | Amount of time to wait for the container to terminate before forcefully killing it. type: "integer" format: "int64" HealthCheck: $ref: "#/definitions/HealthConfig" Hosts: type: "array" description: | A list of hostname/IP mappings to add to the container's `hosts` file. The format of extra hosts is specified in the [hosts(5)](http://man7.org/linux/man-pages/man5/hosts.5.html) man page: IP_address canonical_hostname [aliases...] items: type: "string" DNSConfig: description: | Specification for DNS related configurations in resolver configuration file (`resolv.conf`). type: "object" properties: Nameservers: description: "The IP addresses of the name servers." type: "array" items: type: "string" Search: description: "A search list for host-name lookup." type: "array" items: type: "string" Options: description: | A list of internal resolver variables to be modified (e.g., `debug`, `ndots:3`, etc.). type: "array" items: type: "string" Secrets: description: | Secrets contains references to zero or more secrets that will be exposed to the service. type: "array" items: type: "object" properties: File: description: | File represents a specific target that is backed by a file. type: "object" properties: Name: description: | Name represents the final filename in the filesystem. type: "string" UID: description: "UID represents the file UID." type: "string" GID: description: "GID represents the file GID." type: "string" Mode: description: "Mode represents the FileMode of the file." type: "integer" format: "uint32" SecretID: description: | SecretID represents the ID of the specific secret that we're referencing. type: "string" SecretName: description: | SecretName is the name of the secret that this references, but this is just provided for lookup/display purposes. The secret in the reference will be identified by its ID. type: "string" Configs: description: | Configs contains references to zero or more configs that will be exposed to the service. type: "array" items: type: "object" properties: File: description: | File represents a specific target that is backed by a file. <p><br /><p> > **Note**: `Configs.File` and `Configs.Runtime` are mutually exclusive type: "object" properties: Name: description: | Name represents the final filename in the filesystem. type: "string" UID: description: "UID represents the file UID." type: "string" GID: description: "GID represents the file GID." type: "string" Mode: description: "Mode represents the FileMode of the file." type: "integer" format: "uint32" Runtime: description: | Runtime represents a target that is not mounted into the container but is used by the task <p><br /><p> > **Note**: `Configs.File` and `Configs.Runtime` are mutually > exclusive type: "object" ConfigID: description: | ConfigID represents the ID of the specific config that we're referencing. type: "string" ConfigName: description: | ConfigName is the name of the config that this references, but this is just provided for lookup/display purposes. The config in the reference will be identified by its ID. 
type: "string" Isolation: type: "string" description: | Isolation technology of the containers running the service. (Windows only) enum: - "default" - "process" - "hyperv" Init: description: | Run an init inside the container that forwards signals and reaps processes. This field is omitted if empty, and the default (as configured on the daemon) is used. type: "boolean" x-nullable: true Sysctls: description: | Set kernel namedspaced parameters (sysctls) in the container. The Sysctls option on services accepts the same sysctls as the are supported on containers. Note that while the same sysctls are supported, no guarantees or checks are made about their suitability for a clustered environment, and it's up to the user to determine whether a given sysctl will work properly in a Service. type: "object" additionalProperties: type: "string" # This option is not used by Windows containers CapabilityAdd: type: "array" description: | A list of kernel capabilities to add to the default set for the container. items: type: "string" example: - "CAP_NET_RAW" - "CAP_SYS_ADMIN" - "CAP_SYS_CHROOT" - "CAP_SYSLOG" CapabilityDrop: type: "array" description: | A list of kernel capabilities to drop from the default set for the container. items: type: "string" example: - "CAP_NET_RAW" Ulimits: description: | A list of resource limits to set in the container. For example: `{"Name": "nofile", "Soft": 1024, "Hard": 2048}`" type: "array" items: type: "object" properties: Name: description: "Name of ulimit" type: "string" Soft: description: "Soft limit" type: "integer" Hard: description: "Hard limit" type: "integer" NetworkAttachmentSpec: description: | Read-only spec type for non-swarm containers attached to swarm overlay networks. <p><br /></p> > **Note**: ContainerSpec, NetworkAttachmentSpec, and PluginSpec are > mutually exclusive. PluginSpec is only used when the Runtime field > is set to `plugin`. NetworkAttachmentSpec is used when the Runtime > field is set to `attachment`. type: "object" properties: ContainerID: description: "ID of the container represented by this task" type: "string" Resources: description: | Resource requirements which apply to each individual container created as part of the service. type: "object" properties: Limits: description: "Define resources limits." $ref: "#/definitions/Limit" Reservation: description: "Define resources reservation." $ref: "#/definitions/ResourceObject" RestartPolicy: description: | Specification for the restart policy which applies to containers created as part of this service. type: "object" properties: Condition: description: "Condition for restart." type: "string" enum: - "none" - "on-failure" - "any" Delay: description: "Delay between restart attempts." type: "integer" format: "int64" MaxAttempts: description: | Maximum attempts to restart a given container before giving up (default value is 0, which is ignored). type: "integer" format: "int64" default: 0 Window: description: | Windows is the time window used to evaluate the restart policy (default value is 0, which is unbounded). type: "integer" format: "int64" default: 0 Placement: type: "object" properties: Constraints: description: | An array of constraint expressions to limit the set of nodes where a task can be scheduled. Constraint expressions can either use a _match_ (`==`) or _exclude_ (`!=`) rule. Multiple constraints find nodes that satisfy every expression (AND match). 
Constraints can match node or Docker Engine labels as follows: node attribute | matches | example ---------------------|--------------------------------|----------------------------------------------- `node.id` | Node ID | `node.id==2ivku8v2gvtg4` `node.hostname` | Node hostname | `node.hostname!=node-2` `node.role` | Node role (`manager`/`worker`) | `node.role==manager` `node.platform.os` | Node operating system | `node.platform.os==windows` `node.platform.arch` | Node architecture | `node.platform.arch==x86_64` `node.labels` | User-defined node labels | `node.labels.security==high` `engine.labels` | Docker Engine's labels | `engine.labels.operatingsystem==ubuntu-14.04` `engine.labels` apply to Docker Engine labels like operating system, drivers, etc. Swarm administrators add `node.labels` for operational purposes by using the [`node update endpoint`](#operation/NodeUpdate). type: "array" items: type: "string" example: - "node.hostname!=node3.corp.example.com" - "node.role!=manager" - "node.labels.type==production" - "node.platform.os==linux" - "node.platform.arch==x86_64" Preferences: description: | Preferences provide a way to make the scheduler aware of factors such as topology. They are provided in order from highest to lowest precedence. type: "array" items: type: "object" properties: Spread: type: "object" properties: SpreadDescriptor: description: | label descriptor, such as `engine.labels.az`. type: "string" example: - Spread: SpreadDescriptor: "node.labels.datacenter" - Spread: SpreadDescriptor: "node.labels.rack" MaxReplicas: description: | Maximum number of replicas for per node (default value is 0, which is unlimited) type: "integer" format: "int64" default: 0 Platforms: description: | Platforms stores all the platforms that the service's image can run on. This field is used in the platform filter for scheduling. If empty, then the platform filter is off, meaning there are no scheduling restrictions. type: "array" items: $ref: "#/definitions/Platform" ForceUpdate: description: | A counter that triggers an update even if no relevant parameters have been changed. type: "integer" Runtime: description: | Runtime is the type of runtime specified for the task executor. type: "string" Networks: description: "Specifies which networks the service should attach to." type: "array" items: $ref: "#/definitions/NetworkAttachmentConfig" LogDriver: description: | Specifies the log driver to use for tasks created from this spec. If not present, the default one for the swarm will be used, finally falling back to the engine default if not specified. type: "object" properties: Name: type: "string" Options: type: "object" additionalProperties: type: "string" TaskState: type: "string" enum: - "new" - "allocated" - "pending" - "assigned" - "accepted" - "preparing" - "ready" - "starting" - "running" - "complete" - "shutdown" - "failed" - "rejected" - "remove" - "orphaned" Task: type: "object" properties: ID: description: "The ID of the task." type: "string" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" UpdatedAt: type: "string" format: "dateTime" Name: description: "Name of the task." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" Spec: $ref: "#/definitions/TaskSpec" ServiceID: description: "The ID of the service this task is part of." type: "string" Slot: type: "integer" NodeID: description: "The ID of the node that this task is on." 
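# For illustration, a minimal sketch of a task template exercising the fields
# documented above (sysctls, added capabilities, restart policy, and placement
# constraints/preferences), assuming the Go client's swarm types
# (github.com/docker/docker/api/types/swarm); names should be checked against
# the client version in use, and the image and label values are placeholders.
#
#   package examples
#
#   import (
#       "time"
#
#       "github.com/docker/docker/api/types/swarm"
#   )
#
#   func exampleTaskSpec() swarm.TaskSpec {
#       delay := 5 * time.Second
#       maxAttempts := uint64(3)
#       return swarm.TaskSpec{
#           ContainerSpec: &swarm.ContainerSpec{
#               Image:         "redis:alpine",
#               Sysctls:       map[string]string{"net.core.somaxconn": "1024"},
#               CapabilityAdd: []string{"CAP_NET_RAW"},
#           },
#           RestartPolicy: &swarm.RestartPolicy{
#               Condition:   swarm.RestartPolicyConditionOnFailure,
#               Delay:       &delay,
#               MaxAttempts: &maxAttempts,
#           },
#           Placement: &swarm.Placement{
#               Constraints: []string{"node.role==worker", "node.labels.type==production"},
#               Preferences: []swarm.PlacementPreference{
#                   {Spread: &swarm.SpreadOver{SpreadDescriptor: "node.labels.datacenter"}},
#               },
#               MaxReplicas: 2,
#           },
#       }
#   }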
type: "string" AssignedGenericResources: $ref: "#/definitions/GenericResources" Status: type: "object" properties: Timestamp: type: "string" format: "dateTime" State: $ref: "#/definitions/TaskState" Message: type: "string" Err: type: "string" ContainerStatus: type: "object" properties: ContainerID: type: "string" PID: type: "integer" ExitCode: type: "integer" DesiredState: $ref: "#/definitions/TaskState" JobIteration: description: | If the Service this Task belongs to is a job-mode service, contains the JobIteration of the Service this Task was created for. Absent if the Task was created for a Replicated or Global Service. $ref: "#/definitions/ObjectVersion" example: ID: "0kzzo1i0y4jz6027t0k7aezc7" Version: Index: 71 CreatedAt: "2016-06-07T21:07:31.171892745Z" UpdatedAt: "2016-06-07T21:07:31.376370513Z" Spec: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ServiceID: "9mnpnzenvg8p8tdbtq4wvbkcz" Slot: 1 NodeID: "60gvrl6tm78dmak4yl7srz94v" Status: Timestamp: "2016-06-07T21:07:31.290032978Z" State: "running" Message: "started" ContainerStatus: ContainerID: "e5d62702a1b48d01c3e02ca1e0212a250801fa8d67caca0b6f35919ebc12f035" PID: 677 DesiredState: "running" NetworksAttachments: - Network: ID: "4qvuz4ko70xaltuqbt8956gd1" Version: Index: 18 CreatedAt: "2016-06-07T20:31:11.912919752Z" UpdatedAt: "2016-06-07T21:07:29.955277358Z" Spec: Name: "ingress" Labels: com.docker.swarm.internal: "true" DriverConfiguration: {} IPAMOptions: Driver: {} Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" DriverState: Name: "overlay" Options: com.docker.network.driver.overlay.vxlanid_list: "256" IPAMOptions: Driver: Name: "default" Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" Addresses: - "10.255.0.10/16" AssignedGenericResources: - DiscreteResourceSpec: Kind: "SSD" Value: 3 - NamedResourceSpec: Kind: "GPU" Value: "UUID1" - NamedResourceSpec: Kind: "GPU" Value: "UUID2" ServiceSpec: description: "User modifiable configuration for a service." properties: Name: description: "Name of the service." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" TaskTemplate: $ref: "#/definitions/TaskSpec" Mode: description: "Scheduling mode for the service." type: "object" properties: Replicated: type: "object" properties: Replicas: type: "integer" format: "int64" Global: type: "object" ReplicatedJob: description: | The mode used for services with a finite number of tasks that run to a completed state. type: "object" properties: MaxConcurrent: description: | The maximum number of replicas to run simultaneously. type: "integer" format: "int64" default: 1 TotalCompletions: description: | The total number of replicas desired to reach the Completed state. If unset, will default to the value of `MaxConcurrent` type: "integer" format: "int64" GlobalJob: description: | The mode used for services which run a task to the completed state on each valid node. type: "object" UpdateConfig: description: "Specification for the update strategy of the service." type: "object" properties: Parallelism: description: | Maximum number of tasks to be updated in one iteration (0 means unlimited parallelism). type: "integer" format: "int64" Delay: description: "Amount of time between updates, in nanoseconds." type: "integer" format: "int64" FailureAction: description: | Action to take if an updated task fails to run, or stops running during the update. 
type: "string" enum: - "continue" - "pause" - "rollback" Monitor: description: | Amount of time to monitor each updated task for failures, in nanoseconds. type: "integer" format: "int64" MaxFailureRatio: description: | The fraction of tasks that may fail during an update before the failure action is invoked, specified as a floating point number between 0 and 1. type: "number" default: 0 Order: description: | The order of operations when rolling out an updated task. Either the old task is shut down before the new task is started, or the new task is started before the old task is shut down. type: "string" enum: - "stop-first" - "start-first" RollbackConfig: description: "Specification for the rollback strategy of the service." type: "object" properties: Parallelism: description: | Maximum number of tasks to be rolled back in one iteration (0 means unlimited parallelism). type: "integer" format: "int64" Delay: description: | Amount of time between rollback iterations, in nanoseconds. type: "integer" format: "int64" FailureAction: description: | Action to take if an rolled back task fails to run, or stops running during the rollback. type: "string" enum: - "continue" - "pause" Monitor: description: | Amount of time to monitor each rolled back task for failures, in nanoseconds. type: "integer" format: "int64" MaxFailureRatio: description: | The fraction of tasks that may fail during a rollback before the failure action is invoked, specified as a floating point number between 0 and 1. type: "number" default: 0 Order: description: | The order of operations when rolling back a task. Either the old task is shut down before the new task is started, or the new task is started before the old task is shut down. type: "string" enum: - "stop-first" - "start-first" Networks: description: "Specifies which networks the service should attach to." type: "array" items: $ref: "#/definitions/NetworkAttachmentConfig" EndpointSpec: $ref: "#/definitions/EndpointSpec" EndpointPortConfig: type: "object" properties: Name: type: "string" Protocol: type: "string" enum: - "tcp" - "udp" - "sctp" TargetPort: description: "The port inside the container." type: "integer" PublishedPort: description: "The port on the swarm hosts." type: "integer" PublishMode: description: | The mode in which port is published. <p><br /></p> - "ingress" makes the target port accessible on every node, regardless of whether there is a task for the service running on that node or not. - "host" bypasses the routing mesh and publish the port directly on the swarm node where that service is running. type: "string" enum: - "ingress" - "host" default: "ingress" example: "ingress" EndpointSpec: description: "Properties that can be configured to access and load balance a service." type: "object" properties: Mode: description: | The mode of resolution to use for internal load balancing between tasks. type: "string" enum: - "vip" - "dnsrr" default: "vip" Ports: description: | List of exposed ports that this service is accessible on from the outside. Ports can only be provided if `vip` resolution mode is used. 
type: "array" items: $ref: "#/definitions/EndpointPortConfig" Service: type: "object" properties: ID: type: "string" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" UpdatedAt: type: "string" format: "dateTime" Spec: $ref: "#/definitions/ServiceSpec" Endpoint: type: "object" properties: Spec: $ref: "#/definitions/EndpointSpec" Ports: type: "array" items: $ref: "#/definitions/EndpointPortConfig" VirtualIPs: type: "array" items: type: "object" properties: NetworkID: type: "string" Addr: type: "string" UpdateStatus: description: "The status of a service update." type: "object" properties: State: type: "string" enum: - "updating" - "paused" - "completed" StartedAt: type: "string" format: "dateTime" CompletedAt: type: "string" format: "dateTime" Message: type: "string" ServiceStatus: description: | The status of the service's tasks. Provided only when requested as part of a ServiceList operation. type: "object" properties: RunningTasks: description: | The number of tasks for the service currently in the Running state. type: "integer" format: "uint64" example: 7 DesiredTasks: description: | The number of tasks for the service desired to be running. For replicated services, this is the replica count from the service spec. For global services, this is computed by taking count of all tasks for the service with a Desired State other than Shutdown. type: "integer" format: "uint64" example: 10 CompletedTasks: description: | The number of tasks for a job that are in the Completed state. This field must be cross-referenced with the service type, as the value of 0 may mean the service is not in a job mode, or it may mean the job-mode service has no tasks yet Completed. type: "integer" format: "uint64" JobStatus: description: | The status of the service when it is in one of ReplicatedJob or GlobalJob modes. Absent on Replicated and Global mode services. The JobIteration is an ObjectVersion, but unlike the Service's version, does not need to be sent with an update request. type: "object" properties: JobIteration: description: | JobIteration is a value increased each time a Job is executed, successfully or otherwise. "Executed", in this case, means the job as a whole has been started, not that an individual Task has been launched. A job is "Executed" when its ServiceSpec is updated. JobIteration can be used to disambiguate Tasks belonging to different executions of a job. Though JobIteration will increase with each subsequent execution, it may not necessarily increase by 1, and so JobIteration should not be used to $ref: "#/definitions/ObjectVersion" LastExecution: description: | The last time, as observed by the server, that this job was started. 
type: "string" format: "dateTime" example: ID: "9mnpnzenvg8p8tdbtq4wvbkcz" Version: Index: 19 CreatedAt: "2016-06-07T21:05:51.880065305Z" UpdatedAt: "2016-06-07T21:07:29.962229872Z" Spec: Name: "hopeful_cori" TaskTemplate: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ForceUpdate: 0 Mode: Replicated: Replicas: 1 UpdateConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 RollbackConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 EndpointSpec: Mode: "vip" Ports: - Protocol: "tcp" TargetPort: 6379 PublishedPort: 30001 Endpoint: Spec: Mode: "vip" Ports: - Protocol: "tcp" TargetPort: 6379 PublishedPort: 30001 Ports: - Protocol: "tcp" TargetPort: 6379 PublishedPort: 30001 VirtualIPs: - NetworkID: "4qvuz4ko70xaltuqbt8956gd1" Addr: "10.255.0.2/16" - NetworkID: "4qvuz4ko70xaltuqbt8956gd1" Addr: "10.255.0.3/16" ImageDeleteResponseItem: type: "object" properties: Untagged: description: "The image ID of an image that was untagged" type: "string" Deleted: description: "The image ID of an image that was deleted" type: "string" ServiceUpdateResponse: type: "object" properties: Warnings: description: "Optional warning messages" type: "array" items: type: "string" example: Warning: "unable to pin image doesnotexist:latest to digest: image library/doesnotexist:latest not found" ContainerSummary: type: "object" properties: Id: description: "The ID of this container" type: "string" x-go-name: "ID" Names: description: "The names that this container has been given" type: "array" items: type: "string" Image: description: "The name of the image used when creating this container" type: "string" ImageID: description: "The ID of the image that this container was created from" type: "string" Command: description: "Command to run when starting the container" type: "string" Created: description: "When the container was created" type: "integer" format: "int64" Ports: description: "The ports exposed by this container" type: "array" items: $ref: "#/definitions/Port" SizeRw: description: "The size of files that have been created or changed by this container" type: "integer" format: "int64" SizeRootFs: description: "The total size of all the files in this container" type: "integer" format: "int64" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" State: description: "The state of this container (e.g. `Exited`)" type: "string" Status: description: "Additional human-readable status of this container (e.g. `Exit 0`)" type: "string" HostConfig: type: "object" properties: NetworkMode: type: "string" NetworkSettings: description: "A summary of the container's network settings" type: "object" properties: Networks: type: "object" additionalProperties: $ref: "#/definitions/EndpointSettings" Mounts: type: "array" items: $ref: "#/definitions/Mount" Driver: description: "Driver represents a driver (network, logging, secrets)." type: "object" required: [Name] properties: Name: description: "Name of the driver." type: "string" x-nullable: false example: "some-driver" Options: description: "Key/value map of driver-specific options." type: "object" x-nullable: false additionalProperties: type: "string" example: OptionA: "value for driver-specific option A" OptionB: "value for driver-specific option B" SecretSpec: type: "object" properties: Name: description: "User-defined name of the secret." 
type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" example: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Data: description: | Base64-url-safe-encoded ([RFC 4648](https://tools.ietf.org/html/rfc4648#section-5)) data to store as secret. This field is only used to _create_ a secret, and is not returned by other endpoints. type: "string" example: "" Driver: description: | Name of the secrets driver used to fetch the secret's value from an external secret store. $ref: "#/definitions/Driver" Templating: description: | Templating driver, if applicable Templating controls whether and how to evaluate the config payload as a template. If no driver is set, no templating is used. $ref: "#/definitions/Driver" Secret: type: "object" properties: ID: type: "string" example: "blt1owaxmitz71s9v5zh81zun" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" example: "2017-07-20T13:55:28.678958722Z" UpdatedAt: type: "string" format: "dateTime" example: "2017-07-20T13:55:28.678958722Z" Spec: $ref: "#/definitions/SecretSpec" ConfigSpec: type: "object" properties: Name: description: "User-defined name of the config." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" Data: description: | Base64-url-safe-encoded ([RFC 4648](https://tools.ietf.org/html/rfc4648#section-5)) config data. type: "string" Templating: description: | Templating driver, if applicable Templating controls whether and how to evaluate the config payload as a template. If no driver is set, no templating is used. $ref: "#/definitions/Driver" Config: type: "object" properties: ID: type: "string" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" UpdatedAt: type: "string" format: "dateTime" Spec: $ref: "#/definitions/ConfigSpec" ContainerState: description: | ContainerState stores container's running state. It's part of ContainerJSONBase and will be returned by the "inspect" command. type: "object" properties: Status: description: | String representation of the container state. Can be one of "created", "running", "paused", "restarting", "removing", "exited", or "dead". type: "string" enum: ["created", "running", "paused", "restarting", "removing", "exited", "dead"] example: "running" Running: description: | Whether this container is running. Note that a running container can be _paused_. The `Running` and `Paused` booleans are not mutually exclusive: When pausing a container (on Linux), the freezer cgroup is used to suspend all processes in the container. Freezing the process requires the process to be running. As a result, paused containers are both `Running` _and_ `Paused`. Use the `Status` field instead to determine if a container's state is "running". type: "boolean" example: true Paused: description: "Whether this container is paused." type: "boolean" example: false Restarting: description: "Whether this container is restarting." type: "boolean" example: false OOMKilled: description: | Whether this container has been killed because it ran out of memory. type: "boolean" example: false Dead: type: "boolean" example: false Pid: description: "The process ID of this container" type: "integer" example: 1234 ExitCode: description: "The last exit code of this container" type: "integer" example: 0 Error: type: "string" StartedAt: description: "The time when this container was last started." 
type: "string" example: "2020-01-06T09:06:59.461876391Z" FinishedAt: description: "The time when this container last exited." type: "string" example: "2020-01-06T09:07:59.461876391Z" Health: x-nullable: true $ref: "#/definitions/Health" SystemVersion: type: "object" description: | Response of Engine API: GET "/version" properties: Platform: type: "object" required: [Name] properties: Name: type: "string" Components: type: "array" description: | Information about system components items: type: "object" x-go-name: ComponentVersion required: [Name, Version] properties: Name: description: | Name of the component type: "string" example: "Engine" Version: description: | Version of the component type: "string" x-nullable: false example: "19.03.12" Details: description: | Key/value pairs of strings with additional information about the component. These values are intended for informational purposes only, and their content is not defined, and not part of the API specification. These messages can be printed by the client as information to the user. type: "object" x-nullable: true Version: description: "The version of the daemon" type: "string" example: "19.03.12" ApiVersion: description: | The default (and highest) API version that is supported by the daemon type: "string" example: "1.40" MinAPIVersion: description: | The minimum API version that is supported by the daemon type: "string" example: "1.12" GitCommit: description: | The Git commit of the source code that was used to build the daemon type: "string" example: "48a66213fe" GoVersion: description: | The version Go used to compile the daemon, and the version of the Go runtime in use. type: "string" example: "go1.13.14" Os: description: | The operating system that the daemon is running on ("linux" or "windows") type: "string" example: "linux" Arch: description: | The architecture that the daemon is running on type: "string" example: "amd64" KernelVersion: description: | The kernel version (`uname -r`) that the daemon is running on. This field is omitted when empty. type: "string" example: "4.19.76-linuxkit" Experimental: description: | Indicates if the daemon is started with experimental features enabled. This field is omitted when empty / false. type: "boolean" example: true BuildTime: description: | The date and time that the daemon was compiled. type: "string" example: "2020-06-22T15:49:27.000000000+00:00" SystemInfo: type: "object" properties: ID: description: | Unique identifier of the daemon. <p><br /></p> > **Note**: The format of the ID itself is not part of the API, and > should not be considered stable. type: "string" example: "7TRN:IPZB:QYBB:VPBQ:UMPP:KARE:6ZNR:XE6T:7EWV:PKF4:ZOJD:TPYS" Containers: description: "Total number of containers on the host." type: "integer" example: 14 ContainersRunning: description: | Number of containers with status `"running"`. type: "integer" example: 3 ContainersPaused: description: | Number of containers with status `"paused"`. type: "integer" example: 1 ContainersStopped: description: | Number of containers with status `"stopped"`. type: "integer" example: 10 Images: description: | Total number of images on the host. Both _tagged_ and _untagged_ (dangling) images are counted. type: "integer" example: 508 Driver: description: "Name of the storage driver in use." type: "string" example: "overlay2" DriverStatus: description: | Information specific to the storage driver, provided as "label" / "value" pairs. 
This information is provided by the storage driver, and formatted in a way consistent with the output of `docker info` on the command line. <p><br /></p> > **Note**: The information returned in this field, including the > formatting of values and labels, should not be considered stable, > and may change without notice. type: "array" items: type: "array" items: type: "string" example: - ["Backing Filesystem", "extfs"] - ["Supports d_type", "true"] - ["Native Overlay Diff", "true"] DockerRootDir: description: | Root directory of persistent Docker state. Defaults to `/var/lib/docker` on Linux, and `C:\ProgramData\docker` on Windows. type: "string" example: "/var/lib/docker" Plugins: $ref: "#/definitions/PluginsInfo" MemoryLimit: description: "Indicates if the host has memory limit support enabled." type: "boolean" example: true SwapLimit: description: "Indicates if the host has memory swap limit support enabled." type: "boolean" example: true KernelMemory: description: | Indicates if the host has kernel memory limit support enabled. <p><br /></p> > **Deprecated**: This field is deprecated as the kernel 5.4 deprecated > `kmem.limit_in_bytes`. type: "boolean" example: true CpuCfsPeriod: description: | Indicates if CPU CFS(Completely Fair Scheduler) period is supported by the host. type: "boolean" example: true CpuCfsQuota: description: | Indicates if CPU CFS(Completely Fair Scheduler) quota is supported by the host. type: "boolean" example: true CPUShares: description: | Indicates if CPU Shares limiting is supported by the host. type: "boolean" example: true CPUSet: description: | Indicates if CPUsets (cpuset.cpus, cpuset.mems) are supported by the host. See [cpuset(7)](https://www.kernel.org/doc/Documentation/cgroup-v1/cpusets.txt) type: "boolean" example: true PidsLimit: description: "Indicates if the host kernel has PID limit support enabled." type: "boolean" example: true OomKillDisable: description: "Indicates if OOM killer disable is supported on the host." type: "boolean" IPv4Forwarding: description: "Indicates IPv4 forwarding is enabled." type: "boolean" example: true BridgeNfIptables: description: "Indicates if `bridge-nf-call-iptables` is available on the host." type: "boolean" example: true BridgeNfIp6tables: description: "Indicates if `bridge-nf-call-ip6tables` is available on the host." type: "boolean" example: true Debug: description: | Indicates if the daemon is running in debug-mode / with debug-level logging enabled. type: "boolean" example: true NFd: description: | The total number of file Descriptors in use by the daemon process. This information is only returned if debug-mode is enabled. type: "integer" example: 64 NGoroutines: description: | The number of goroutines that currently exist. This information is only returned if debug-mode is enabled. type: "integer" example: 174 SystemTime: description: | Current system-time in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" example: "2017-08-08T20:28:29.06202363Z" LoggingDriver: description: | The logging driver to use as a default for new containers. type: "string" CgroupDriver: description: | The driver to use for managing cgroups. type: "string" enum: ["cgroupfs", "systemd", "none"] default: "cgroupfs" example: "cgroupfs" CgroupVersion: description: | The version of the cgroup. type: "string" enum: ["1", "2"] default: "1" example: "1" NEventsListener: description: "Number of event listeners subscribed." 
type: "integer" example: 30 KernelVersion: description: | Kernel version of the host. On Linux, this information obtained from `uname`. On Windows this information is queried from the <kbd>HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\</kbd> registry value, for example _"10.0 14393 (14393.1198.amd64fre.rs1_release_sec.170427-1353)"_. type: "string" example: "4.9.38-moby" OperatingSystem: description: | Name of the host's operating system, for example: "Ubuntu 16.04.2 LTS" or "Windows Server 2016 Datacenter" type: "string" example: "Alpine Linux v3.5" OSVersion: description: | Version of the host's operating system <p><br /></p> > **Note**: The information returned in this field, including its > very existence, and the formatting of values, should not be considered > stable, and may change without notice. type: "string" example: "16.04" OSType: description: | Generic type of the operating system of the host, as returned by the Go runtime (`GOOS`). Currently returned values are "linux" and "windows". A full list of possible values can be found in the [Go documentation](https://golang.org/doc/install/source#environment). type: "string" example: "linux" Architecture: description: | Hardware architecture of the host, as returned by the Go runtime (`GOARCH`). A full list of possible values can be found in the [Go documentation](https://golang.org/doc/install/source#environment). type: "string" example: "x86_64" NCPU: description: | The number of logical CPUs usable by the daemon. The number of available CPUs is checked by querying the operating system when the daemon starts. Changes to operating system CPU allocation after the daemon is started are not reflected. type: "integer" example: 4 MemTotal: description: | Total amount of physical memory available on the host, in bytes. type: "integer" format: "int64" example: 2095882240 IndexServerAddress: description: | Address / URL of the index server that is used for image search, and as a default for user authentication for Docker Hub and Docker Cloud. default: "https://index.docker.io/v1/" type: "string" example: "https://index.docker.io/v1/" RegistryConfig: $ref: "#/definitions/RegistryServiceConfig" GenericResources: $ref: "#/definitions/GenericResources" HttpProxy: description: | HTTP-proxy configured for the daemon. This value is obtained from the [`HTTP_PROXY`](https://www.gnu.org/software/wget/manual/html_node/Proxies.html) environment variable. Credentials ([user info component](https://tools.ietf.org/html/rfc3986#section-3.2.1)) in the proxy URL are masked in the API response. Containers do not automatically inherit this configuration. type: "string" example: "http://xxxxx:[email protected]:8080" HttpsProxy: description: | HTTPS-proxy configured for the daemon. This value is obtained from the [`HTTPS_PROXY`](https://www.gnu.org/software/wget/manual/html_node/Proxies.html) environment variable. Credentials ([user info component](https://tools.ietf.org/html/rfc3986#section-3.2.1)) in the proxy URL are masked in the API response. Containers do not automatically inherit this configuration. type: "string" example: "https://xxxxx:[email protected]:4443" NoProxy: description: | Comma-separated list of domain extensions for which no proxy should be used. This value is obtained from the [`NO_PROXY`](https://www.gnu.org/software/wget/manual/html_node/Proxies.html) environment variable. Containers do not automatically inherit this configuration. 
type: "string" example: "*.local, 169.254/16" Name: description: "Hostname of the host." type: "string" example: "node5.corp.example.com" Labels: description: | User-defined labels (key/value metadata) as set on the daemon. <p><br /></p> > **Note**: When part of a Swarm, nodes can both have _daemon_ labels, > set through the daemon configuration, and _node_ labels, set from a > manager node in the Swarm. Node labels are not included in this > field. Node labels can be retrieved using the `/nodes/(id)` endpoint > on a manager node in the Swarm. type: "array" items: type: "string" example: ["storage=ssd", "production"] ExperimentalBuild: description: | Indicates if experimental features are enabled on the daemon. type: "boolean" example: true ServerVersion: description: | Version string of the daemon. > **Note**: the [standalone Swarm API](https://docs.docker.com/swarm/swarm-api/) > returns the Swarm version instead of the daemon version, for example > `swarm/1.2.8`. type: "string" example: "17.06.0-ce" ClusterStore: description: | URL of the distributed storage backend. The storage backend is used for multihost networking (to store network and endpoint information) and by the node discovery mechanism. <p><br /></p> > **Deprecated**: This field is only propagated when using standalone Swarm > mode, and overlay networking using an external k/v store. Overlay > networks with Swarm mode enabled use the built-in raft store, and > this field will be empty. type: "string" example: "consul://consul.corp.example.com:8600/some/path" ClusterAdvertise: description: | The network endpoint that the Engine advertises for the purpose of node discovery. ClusterAdvertise is a `host:port` combination on which the daemon is reachable by other hosts. <p><br /></p> > **Deprecated**: This field is only propagated when using standalone Swarm > mode, and overlay networking using an external k/v store. Overlay > networks with Swarm mode enabled use the built-in raft store, and > this field will be empty. type: "string" example: "node5.corp.example.com:8000" Runtimes: description: | List of [OCI compliant](https://github.com/opencontainers/runtime-spec) runtimes configured on the daemon. Keys hold the "name" used to reference the runtime. The Docker daemon relies on an OCI compliant runtime (invoked via the `containerd` daemon) as its interface to the Linux kernel namespaces, cgroups, and SELinux. The default runtime is `runc`, and automatically configured. Additional runtimes can be configured by the user and will be listed here. type: "object" additionalProperties: $ref: "#/definitions/Runtime" default: runc: path: "runc" example: runc: path: "runc" runc-master: path: "/go/bin/runc" custom: path: "/usr/local/bin/my-oci-runtime" runtimeArgs: ["--debug", "--systemd-cgroup=false"] DefaultRuntime: description: | Name of the default OCI runtime that is used when starting containers. The default can be overridden per-container at create time. type: "string" default: "runc" example: "runc" Swarm: $ref: "#/definitions/SwarmInfo" LiveRestoreEnabled: description: | Indicates if live restore is enabled. If enabled, containers are kept running when the daemon is shutdown or upon daemon start if running containers are detected. type: "boolean" default: false example: false Isolation: description: | Represents the isolation technology to use as a default for containers. The supported values are platform-specific. 
If no isolation value is specified on daemon start, on Windows client, the default is `hyperv`, and on Windows server, the default is `process`. This option is currently not used on other platforms. default: "default" type: "string" enum: - "default" - "hyperv" - "process" InitBinary: description: | Name and, optional, path of the `docker-init` binary. If the path is omitted, the daemon searches the host's `$PATH` for the binary and uses the first result. type: "string" example: "docker-init" ContainerdCommit: $ref: "#/definitions/Commit" RuncCommit: $ref: "#/definitions/Commit" InitCommit: $ref: "#/definitions/Commit" SecurityOptions: description: | List of security features that are enabled on the daemon, such as apparmor, seccomp, SELinux, user-namespaces (userns), and rootless. Additional configuration options for each security feature may be present, and are included as a comma-separated list of key/value pairs. type: "array" items: type: "string" example: - "name=apparmor" - "name=seccomp,profile=default" - "name=selinux" - "name=userns" - "name=rootless" ProductLicense: description: | Reports a summary of the product license on the daemon. If a commercial license has been applied to the daemon, information such as number of nodes, and expiration are included. type: "string" example: "Community Engine" DefaultAddressPools: description: | List of custom default address pools for local networks, which can be specified in the daemon.json file or dockerd option. Example: a Base "10.10.0.0/16" with Size 24 will define the set of 256 10.10.[0-255].0/24 address pools. type: "array" items: type: "object" properties: Base: description: "The network address in CIDR format" type: "string" example: "10.10.0.0/16" Size: description: "The network pool size" type: "integer" example: "24" Warnings: description: | List of warnings / informational messages about missing features, or issues related to the daemon configuration. These messages can be printed by the client as information to the user. type: "array" items: type: "string" example: - "WARNING: No memory limit support" - "WARNING: bridge-nf-call-iptables is disabled" - "WARNING: bridge-nf-call-ip6tables is disabled" # PluginsInfo is a temp struct holding Plugins name # registered with docker daemon. It is used by Info struct PluginsInfo: description: | Available plugins per type. <p><br /></p> > **Note**: Only unmanaged (V1) plugins are included in this list. > V1 plugins are "lazily" loaded, and are not returned in this list > if there is no resource using the plugin. type: "object" properties: Volume: description: "Names of available volume-drivers, and network-driver plugins." type: "array" items: type: "string" example: ["local"] Network: description: "Names of available network-drivers, and network-driver plugins." type: "array" items: type: "string" example: ["bridge", "host", "ipvlan", "macvlan", "null", "overlay"] Authorization: description: "Names of available authorization plugins." type: "array" items: type: "string" example: ["img-authz-plugin", "hbm"] Log: description: "Names of available logging-drivers, and logging-driver plugins." type: "array" items: type: "string" example: ["awslogs", "fluentd", "gcplogs", "gelf", "journald", "json-file", "logentries", "splunk", "syslog"] RegistryServiceConfig: description: | RegistryServiceConfig stores daemon registry services configuration. 
type: "object" x-nullable: true properties: AllowNondistributableArtifactsCIDRs: description: | List of IP ranges to which nondistributable artifacts can be pushed, using the CIDR syntax [RFC 4632](https://tools.ietf.org/html/4632). Some images (for example, Windows base images) contain artifacts whose distribution is restricted by license. When these images are pushed to a registry, restricted artifacts are not included. This configuration override this behavior, and enables the daemon to push nondistributable artifacts to all registries whose resolved IP address is within the subnet described by the CIDR syntax. This option is useful when pushing images containing nondistributable artifacts to a registry on an air-gapped network so hosts on that network can pull the images without connecting to another server. > **Warning**: Nondistributable artifacts typically have restrictions > on how and where they can be distributed and shared. Only use this > feature to push artifacts to private registries and ensure that you > are in compliance with any terms that cover redistributing > nondistributable artifacts. type: "array" items: type: "string" example: ["::1/128", "127.0.0.0/8"] AllowNondistributableArtifactsHostnames: description: | List of registry hostnames to which nondistributable artifacts can be pushed, using the format `<hostname>[:<port>]` or `<IP address>[:<port>]`. Some images (for example, Windows base images) contain artifacts whose distribution is restricted by license. When these images are pushed to a registry, restricted artifacts are not included. This configuration override this behavior for the specified registries. This option is useful when pushing images containing nondistributable artifacts to a registry on an air-gapped network so hosts on that network can pull the images without connecting to another server. > **Warning**: Nondistributable artifacts typically have restrictions > on how and where they can be distributed and shared. Only use this > feature to push artifacts to private registries and ensure that you > are in compliance with any terms that cover redistributing > nondistributable artifacts. type: "array" items: type: "string" example: ["registry.internal.corp.example.com:3000", "[2001:db8:a0b:12f0::1]:443"] InsecureRegistryCIDRs: description: | List of IP ranges of insecure registries, using the CIDR syntax ([RFC 4632](https://tools.ietf.org/html/4632)). Insecure registries accept un-encrypted (HTTP) and/or untrusted (HTTPS with certificates from unknown CAs) communication. By default, local registries (`127.0.0.0/8`) are configured as insecure. All other registries are secure. Communicating with an insecure registry is not possible if the daemon assumes that registry is secure. This configuration override this behavior, insecure communication with registries whose resolved IP address is within the subnet described by the CIDR syntax. Registries can also be marked insecure by hostname. Those registries are listed under `IndexConfigs` and have their `Secure` field set to `false`. > **Warning**: Using this option can be useful when running a local > registry, but introduces security vulnerabilities. This option > should therefore ONLY be used for testing purposes. For increased > security, users should add their CA to their system's list of trusted > CAs instead of enabling this option. 
type: "array" items: type: "string" example: ["::1/128", "127.0.0.0/8"] IndexConfigs: type: "object" additionalProperties: $ref: "#/definitions/IndexInfo" example: "127.0.0.1:5000": "Name": "127.0.0.1:5000" "Mirrors": [] "Secure": false "Official": false "[2001:db8:a0b:12f0::1]:80": "Name": "[2001:db8:a0b:12f0::1]:80" "Mirrors": [] "Secure": false "Official": false "docker.io": Name: "docker.io" Mirrors: ["https://hub-mirror.corp.example.com:5000/"] Secure: true Official: true "registry.internal.corp.example.com:3000": Name: "registry.internal.corp.example.com:3000" Mirrors: [] Secure: false Official: false Mirrors: description: | List of registry URLs that act as a mirror for the official (`docker.io`) registry. type: "array" items: type: "string" example: - "https://hub-mirror.corp.example.com:5000/" - "https://[2001:db8:a0b:12f0::1]/" IndexInfo: description: IndexInfo contains information about a registry. type: "object" x-nullable: true properties: Name: description: | Name of the registry, such as "docker.io". type: "string" example: "docker.io" Mirrors: description: | List of mirrors, expressed as URIs. type: "array" items: type: "string" example: - "https://hub-mirror.corp.example.com:5000/" - "https://registry-2.docker.io/" - "https://registry-3.docker.io/" Secure: description: | Indicates if the registry is part of the list of insecure registries. If `false`, the registry is insecure. Insecure registries accept un-encrypted (HTTP) and/or untrusted (HTTPS with certificates from unknown CAs) communication. > **Warning**: Insecure registries can be useful when running a local > registry. However, because its use creates security vulnerabilities > it should ONLY be enabled for testing purposes. For increased > security, users should add their CA to their system's list of > trusted CAs instead of enabling this option. type: "boolean" example: true Official: description: | Indicates whether this is an official registry (i.e., Docker Hub / docker.io) type: "boolean" example: true Runtime: description: | Runtime describes an [OCI compliant](https://github.com/opencontainers/runtime-spec) runtime. The runtime is invoked by the daemon via the `containerd` daemon. OCI runtimes act as an interface to the Linux kernel namespaces, cgroups, and SELinux. type: "object" properties: path: description: | Name and, optional, path, of the OCI executable binary. If the path is omitted, the daemon searches the host's `$PATH` for the binary and uses the first result. type: "string" example: "/usr/local/bin/my-oci-runtime" runtimeArgs: description: | List of command-line arguments to pass to the runtime when invoked. type: "array" x-nullable: true items: type: "string" example: ["--debug", "--systemd-cgroup=false"] Commit: description: | Commit holds the Git-commit (SHA1) that a binary was built from, as reported in the version-string of external tools, such as `containerd`, or `runC`. type: "object" properties: ID: description: "Actual commit ID of external tool." type: "string" example: "cfb82a876ecc11b5ca0977d1733adbe58599088a" Expected: description: | Commit ID of external tool expected by dockerd as set at build time. type: "string" example: "2d41c047c83e09a6d61d464906feb2a2f3c52aa4" SwarmInfo: description: | Represents generic information about swarm. type: "object" properties: NodeID: description: "Unique identifier of for this node in the swarm." 
type: "string" default: "" example: "k67qz4598weg5unwwffg6z1m1" NodeAddr: description: | IP address at which this node can be reached by other nodes in the swarm. type: "string" default: "" example: "10.0.0.46" LocalNodeState: $ref: "#/definitions/LocalNodeState" ControlAvailable: type: "boolean" default: false example: true Error: type: "string" default: "" RemoteManagers: description: | List of ID's and addresses of other managers in the swarm. type: "array" default: null x-nullable: true items: $ref: "#/definitions/PeerNode" example: - NodeID: "71izy0goik036k48jg985xnds" Addr: "10.0.0.158:2377" - NodeID: "79y6h1o4gv8n120drcprv5nmc" Addr: "10.0.0.159:2377" - NodeID: "k67qz4598weg5unwwffg6z1m1" Addr: "10.0.0.46:2377" Nodes: description: "Total number of nodes in the swarm." type: "integer" x-nullable: true example: 4 Managers: description: "Total number of managers in the swarm." type: "integer" x-nullable: true example: 3 Cluster: $ref: "#/definitions/ClusterInfo" LocalNodeState: description: "Current local status of this node." type: "string" default: "" enum: - "" - "inactive" - "pending" - "active" - "error" - "locked" example: "active" PeerNode: description: "Represents a peer-node in the swarm" properties: NodeID: description: "Unique identifier of for this node in the swarm." type: "string" Addr: description: | IP address and ports at which this node can be reached. type: "string" NetworkAttachmentConfig: description: | Specifies how a service should be attached to a particular network. type: "object" properties: Target: description: | The target network for attachment. Must be a network name or ID. type: "string" Aliases: description: | Discoverable alternate names for the service on this network. type: "array" items: type: "string" DriverOpts: description: | Driver attachment options for the network target. type: "object" additionalProperties: type: "string" paths: /containers/json: get: summary: "List containers" description: | Returns a list of containers. For details on the format, see the [inspect endpoint](#operation/ContainerInspect). Note that it uses a different, smaller representation of a container than inspecting a single container. For example, the list of linked containers is not propagated . operationId: "ContainerList" produces: - "application/json" parameters: - name: "all" in: "query" description: | Return all containers. By default, only running containers are shown. type: "boolean" default: false - name: "limit" in: "query" description: | Return this number of most recently created containers, including non-running ones. type: "integer" - name: "size" in: "query" description: | Return the size of container as fields `SizeRw` and `SizeRootFs`. type: "boolean" default: false - name: "filters" in: "query" description: | Filters to process on the container list, encoded as JSON (a `map[string][]string`). For example, `{"status": ["paused"]}` will only return paused containers. 
Available filters: - `ancestor`=(`<image-name>[:<tag>]`, `<image id>`, or `<image@digest>`) - `before`=(`<container id>` or `<container name>`) - `expose`=(`<port>[/<proto>]`|`<startport-endport>/[<proto>]`) - `exited=<int>` containers with exit code of `<int>` - `health`=(`starting`|`healthy`|`unhealthy`|`none`) - `id=<ID>` a container's ID - `isolation=`(`default`|`process`|`hyperv`) (Windows daemon only) - `is-task=`(`true`|`false`) - `label=key` or `label="key=value"` of a container label - `name=<name>` a container's name - `network`=(`<network id>` or `<network name>`) - `publish`=(`<port>[/<proto>]`|`<startport-endport>/[<proto>]`) - `since`=(`<container id>` or `<container name>`) - `status=`(`created`|`restarting`|`running`|`removing`|`paused`|`exited`|`dead`) - `volume`=(`<volume name>` or `<mount point destination>`) type: "string" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/ContainerSummary" examples: application/json: - Id: "8dfafdbc3a40" Names: - "/boring_feynman" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 1" Created: 1367854155 State: "Exited" Status: "Exit 0" Ports: - PrivatePort: 2222 PublicPort: 3333 Type: "tcp" Labels: com.example.vendor: "Acme" com.example.license: "GPL" com.example.version: "1.0" SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "2cdc4edb1ded3631c81f57966563e5c8525b81121bb3706a9a9a3ae102711f3f" Gateway: "172.17.0.1" IPAddress: "172.17.0.2" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:02" Mounts: - Name: "fac362...80535" Source: "/data" Destination: "/data" Driver: "local" Mode: "ro,Z" RW: false Propagation: "" - Id: "9cd87474be90" Names: - "/coolName" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 222222" Created: 1367854155 State: "Exited" Status: "Exit 0" Ports: [] Labels: {} SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "88eaed7b37b38c2a3f0c4bc796494fdf51b270c2d22656412a2ca5d559a64d7a" Gateway: "172.17.0.1" IPAddress: "172.17.0.8" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:08" Mounts: [] - Id: "3176a2479c92" Names: - "/sleepy_dog" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 3333333333333333" Created: 1367854154 State: "Exited" Status: "Exit 0" Ports: [] Labels: {} SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "8b27c041c30326d59cd6e6f510d4f8d1d570a228466f956edf7815508f78e30d" Gateway: "172.17.0.1" IPAddress: "172.17.0.6" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:06" Mounts: [] - Id: "4cb07b47f9fb" Names: - "/running_cat" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 444444444444444444444444444444444" Created: 1367854152 State: "Exited" Status: "Exit 0" Ports: [] Labels: {} SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" 
NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "d91c7b2f0644403d7ef3095985ea0e2370325cd2332ff3a3225c4247328e66e9" Gateway: "172.17.0.1" IPAddress: "172.17.0.5" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:05" Mounts: [] 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Container"] /containers/create: post: summary: "Create a container" operationId: "ContainerCreate" consumes: - "application/json" - "application/octet-stream" produces: - "application/json" parameters: - name: "name" in: "query" description: | Assign the specified name to the container. Must match `/?[a-zA-Z0-9][a-zA-Z0-9_.-]+`. type: "string" pattern: "^/?[a-zA-Z0-9][a-zA-Z0-9_.-]+$" - name: "body" in: "body" description: "Container to create" schema: allOf: - $ref: "#/definitions/ContainerConfig" - type: "object" properties: HostConfig: $ref: "#/definitions/HostConfig" NetworkingConfig: $ref: "#/definitions/NetworkingConfig" example: Hostname: "" Domainname: "" User: "" AttachStdin: false AttachStdout: true AttachStderr: true Tty: false OpenStdin: false StdinOnce: false Env: - "FOO=bar" - "BAZ=quux" Cmd: - "date" Entrypoint: "" Image: "ubuntu" Labels: com.example.vendor: "Acme" com.example.license: "GPL" com.example.version: "1.0" Volumes: /volumes/data: {} WorkingDir: "" NetworkDisabled: false MacAddress: "12:34:56:78:9a:bc" ExposedPorts: 22/tcp: {} StopSignal: "SIGTERM" StopTimeout: 10 HostConfig: Binds: - "/tmp:/tmp" Links: - "redis3:redis" Memory: 0 MemorySwap: 0 MemoryReservation: 0 KernelMemory: 0 NanoCpus: 500000 CpuPercent: 80 CpuShares: 512 CpuPeriod: 100000 CpuRealtimePeriod: 1000000 CpuRealtimeRuntime: 10000 CpuQuota: 50000 CpusetCpus: "0,1" CpusetMems: "0,1" MaximumIOps: 0 MaximumIOBps: 0 BlkioWeight: 300 BlkioWeightDevice: - {} BlkioDeviceReadBps: - {} BlkioDeviceReadIOps: - {} BlkioDeviceWriteBps: - {} BlkioDeviceWriteIOps: - {} DeviceRequests: - Driver: "nvidia" Count: -1 DeviceIDs": ["0", "1", "GPU-fef8089b-4820-abfc-e83e-94318197576e"] Capabilities: [["gpu", "nvidia", "compute"]] Options: property1: "string" property2: "string" MemorySwappiness: 60 OomKillDisable: false OomScoreAdj: 500 PidMode: "" PidsLimit: 0 PortBindings: 22/tcp: - HostPort: "11022" PublishAllPorts: false Privileged: false ReadonlyRootfs: false Dns: - "8.8.8.8" DnsOptions: - "" DnsSearch: - "" VolumesFrom: - "parent" - "other:ro" CapAdd: - "NET_ADMIN" CapDrop: - "MKNOD" GroupAdd: - "newgroup" RestartPolicy: Name: "" MaximumRetryCount: 0 AutoRemove: true NetworkMode: "bridge" Devices: [] Ulimits: - {} LogConfig: Type: "json-file" Config: {} SecurityOpt: [] StorageOpt: {} CgroupParent: "" VolumeDriver: "" ShmSize: 67108864 NetworkingConfig: EndpointsConfig: isolated_nw: IPAMConfig: IPv4Address: "172.20.30.33" IPv6Address: "2001:db8:abcd::3033" LinkLocalIPs: - "169.254.34.68" - "fe80::3468" Links: - "container_1" - "container_2" Aliases: - "server_x" - "server_y" required: true responses: 201: description: "Container created successfully" schema: type: "object" title: "ContainerCreateResponse" description: "OK response to ContainerCreate operation" required: [Id, Warnings] properties: Id: description: "The ID of the created container" type: "string" x-nullable: false Warnings: description: "Warnings encountered when creating the container" type: "array" x-nullable: false items: type: "string" 
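# To complement the request/response examples above, a hedged Go sketch that
# lists exited containers using the documented `filters` parameter and then
# creates and starts a named container. The trailing nil arguments to
# ContainerCreate (networking config and platform) match a v20.10-era client;
# older clients take one argument fewer. Image, labels, and the container name
# are placeholders.
#
#   package examples
#
#   import (
#       "context"
#       "fmt"
#
#       "github.com/docker/docker/api/types"
#       "github.com/docker/docker/api/types/container"
#       "github.com/docker/docker/api/types/filters"
#       "github.com/docker/docker/client"
#   )
#
#   func listExited(ctx context.Context, cli *client.Client) error {
#       containers, err := cli.ContainerList(ctx, types.ContainerListOptions{
#           All:     true,
#           Filters: filters.NewArgs(filters.Arg("status", "exited")),
#       })
#       if err != nil {
#           return err
#       }
#       for _, c := range containers {
#           fmt.Println(c.ID[:12], c.Image, c.Status)
#       }
#       return nil
#   }
#
#   func runContainer(ctx context.Context, cli *client.Client) (string, error) {
#       resp, err := cli.ContainerCreate(ctx,
#           &container.Config{
#               Image:  "ubuntu",
#               Env:    []string{"FOO=bar"},
#               Labels: map[string]string{"com.example.vendor": "Acme"},
#           },
#           &container.HostConfig{AutoRemove: true},
#           nil, nil, "my-container")
#       if err != nil {
#           return "", err
#       }
#       for _, w := range resp.Warnings {
#           fmt.Println("warning:", w)
#       }
#       return resp.ID, cli.ContainerStart(ctx, resp.ID, types.ContainerStartOptions{})
#   }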
examples: application/json: Id: "e90e34656806" Warnings: [] 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such image" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such image: c2ada9df5af8" 409: description: "conflict" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Container"] /containers/{id}/json: get: summary: "Inspect a container" description: "Return low-level information about a container." operationId: "ContainerInspect" produces: - "application/json" responses: 200: description: "no error" schema: type: "object" title: "ContainerInspectResponse" properties: Id: description: "The ID of the container" type: "string" Created: description: "The time the container was created" type: "string" Path: description: "The path to the command being run" type: "string" Args: description: "The arguments to the command being run" type: "array" items: type: "string" State: x-nullable: true $ref: "#/definitions/ContainerState" Image: description: "The container's image ID" type: "string" ResolvConfPath: type: "string" HostnamePath: type: "string" HostsPath: type: "string" LogPath: type: "string" Name: type: "string" RestartCount: type: "integer" Driver: type: "string" Platform: type: "string" MountLabel: type: "string" ProcessLabel: type: "string" AppArmorProfile: type: "string" ExecIDs: description: "IDs of exec instances that are running in the container." type: "array" items: type: "string" x-nullable: true HostConfig: $ref: "#/definitions/HostConfig" GraphDriver: $ref: "#/definitions/GraphDriverData" SizeRw: description: | The size of files that have been created or changed by this container. type: "integer" format: "int64" SizeRootFs: description: "The total size of all the files in this container." 
type: "integer" format: "int64" Mounts: type: "array" items: $ref: "#/definitions/MountPoint" Config: $ref: "#/definitions/ContainerConfig" NetworkSettings: $ref: "#/definitions/NetworkSettings" examples: application/json: AppArmorProfile: "" Args: - "-c" - "exit 9" Config: AttachStderr: true AttachStdin: false AttachStdout: true Cmd: - "/bin/sh" - "-c" - "exit 9" Domainname: "" Env: - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" Healthcheck: Test: ["CMD-SHELL", "exit 0"] Hostname: "ba033ac44011" Image: "ubuntu" Labels: com.example.vendor: "Acme" com.example.license: "GPL" com.example.version: "1.0" MacAddress: "" NetworkDisabled: false OpenStdin: false StdinOnce: false Tty: false User: "" Volumes: /volumes/data: {} WorkingDir: "" StopSignal: "SIGTERM" StopTimeout: 10 Created: "2015-01-06T15:47:31.485331387Z" Driver: "devicemapper" ExecIDs: - "b35395de42bc8abd327f9dd65d913b9ba28c74d2f0734eeeae84fa1c616a0fca" - "3fc1232e5cd20c8de182ed81178503dc6437f4e7ef12b52cc5e8de020652f1c4" HostConfig: MaximumIOps: 0 MaximumIOBps: 0 BlkioWeight: 0 BlkioWeightDevice: - {} BlkioDeviceReadBps: - {} BlkioDeviceWriteBps: - {} BlkioDeviceReadIOps: - {} BlkioDeviceWriteIOps: - {} ContainerIDFile: "" CpusetCpus: "" CpusetMems: "" CpuPercent: 80 CpuShares: 0 CpuPeriod: 100000 CpuRealtimePeriod: 1000000 CpuRealtimeRuntime: 10000 Devices: [] DeviceRequests: - Driver: "nvidia" Count: -1 DeviceIDs": ["0", "1", "GPU-fef8089b-4820-abfc-e83e-94318197576e"] Capabilities: [["gpu", "nvidia", "compute"]] Options: property1: "string" property2: "string" IpcMode: "" LxcConf: [] Memory: 0 MemorySwap: 0 MemoryReservation: 0 KernelMemory: 0 OomKillDisable: false OomScoreAdj: 500 NetworkMode: "bridge" PidMode: "" PortBindings: {} Privileged: false ReadonlyRootfs: false PublishAllPorts: false RestartPolicy: MaximumRetryCount: 2 Name: "on-failure" LogConfig: Type: "json-file" Sysctls: net.ipv4.ip_forward: "1" Ulimits: - {} VolumeDriver: "" ShmSize: 67108864 HostnamePath: "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/hostname" HostsPath: "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/hosts" LogPath: "/var/lib/docker/containers/1eb5fabf5a03807136561b3c00adcd2992b535d624d5e18b6cdc6a6844d9767b/1eb5fabf5a03807136561b3c00adcd2992b535d624d5e18b6cdc6a6844d9767b-json.log" Id: "ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39" Image: "04c5d3b7b0656168630d3ba35d8889bd0e9caafcaeb3004d2bfbc47e7c5d35d2" MountLabel: "" Name: "/boring_euclid" NetworkSettings: Bridge: "" SandboxID: "" HairpinMode: false LinkLocalIPv6Address: "" LinkLocalIPv6PrefixLen: 0 SandboxKey: "" EndpointID: "" Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 IPAddress: "" IPPrefixLen: 0 IPv6Gateway: "" MacAddress: "" Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "7587b82f0dada3656fda26588aee72630c6fab1536d36e394b2bfbcf898c971d" Gateway: "172.17.0.1" IPAddress: "172.17.0.2" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:12:00:02" Path: "/bin/sh" ProcessLabel: "" ResolvConfPath: "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/resolv.conf" RestartCount: 1 State: Error: "" ExitCode: 9 FinishedAt: "2015-01-06T15:47:32.080254511Z" Health: Status: "healthy" FailingStreak: 0 Log: - Start: "2019-12-22T10:59:05.6385933Z" End: "2019-12-22T10:59:05.8078452Z" ExitCode: 0 Output: "" OOMKilled: 
false Dead: false Paused: false Pid: 0 Restarting: false Running: true StartedAt: "2015-01-06T15:47:32.072697474Z" Status: "running" Mounts: - Name: "fac362...80535" Source: "/data" Destination: "/data" Driver: "local" Mode: "ro,Z" RW: false Propagation: "" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "size" in: "query" type: "boolean" default: false description: "Return the size of the container as fields `SizeRw` and `SizeRootFs`" tags: ["Container"] /containers/{id}/top: get: summary: "List processes running inside a container" description: | On Unix systems, this is done by running the `ps` command. This endpoint is not supported on Windows. operationId: "ContainerTop" responses: 200: description: "no error" schema: type: "object" title: "ContainerTopResponse" description: "OK response to ContainerTop operation" properties: Titles: description: "The ps column titles" type: "array" items: type: "string" Processes: description: | Each process running in the container, where each process is an array of values corresponding to the titles. type: "array" items: type: "array" items: type: "string" examples: application/json: Titles: - "UID" - "PID" - "PPID" - "C" - "STIME" - "TTY" - "TIME" - "CMD" Processes: - - "root" - "13642" - "882" - "0" - "17:03" - "pts/0" - "00:00:00" - "/bin/bash" - - "root" - "13735" - "13642" - "0" - "17:06" - "pts/0" - "00:00:00" - "sleep 10" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "ps_args" in: "query" description: "The arguments to pass to `ps`. For example, `aux`" type: "string" default: "-ef" tags: ["Container"] /containers/{id}/logs: get: summary: "Get container logs" description: | Get `stdout` and `stderr` logs from a container. Note: This endpoint works only for containers with the `json-file` or `journald` logging driver. operationId: "ContainerLogs" responses: 200: description: | logs returned as a stream in response body. For the stream format, [see the documentation for the attach endpoint](#operation/ContainerAttach). Note that unlike the attach endpoint, the logs endpoint does not upgrade the connection and does not set Content-Type. schema: type: "string" format: "binary" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "follow" in: "query" description: "Keep connection after returning logs."
type: "boolean" default: false - name: "stdout" in: "query" description: "Return logs from `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Return logs from `stderr`" type: "boolean" default: false - name: "since" in: "query" description: "Only return logs since this time, as a UNIX timestamp" type: "integer" default: 0 - name: "until" in: "query" description: "Only return logs before this time, as a UNIX timestamp" type: "integer" default: 0 - name: "timestamps" in: "query" description: "Add timestamps to every log line" type: "boolean" default: false - name: "tail" in: "query" description: | Only return this number of log lines from the end of the logs. Specify as an integer or `all` to output all log lines. type: "string" default: "all" tags: ["Container"] /containers/{id}/changes: get: summary: "Get changes on a container’s filesystem" description: | Returns which files in a container's filesystem have been added, deleted, or modified. The `Kind` of modification can be one of: - `0`: Modified - `1`: Added - `2`: Deleted operationId: "ContainerChanges" produces: ["application/json"] responses: 200: description: "The list of changes" schema: type: "array" items: type: "object" x-go-name: "ContainerChangeResponseItem" title: "ContainerChangeResponseItem" description: "change item in response to ContainerChanges operation" required: [Path, Kind] properties: Path: description: "Path to file that has changed" type: "string" x-nullable: false Kind: description: "Kind of change" type: "integer" format: "uint8" enum: [0, 1, 2] x-nullable: false examples: application/json: - Path: "/dev" Kind: 0 - Path: "/dev/kmsg" Kind: 1 - Path: "/test" Kind: 1 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/export: get: summary: "Export a container" description: "Export the contents of a container as a tarball." operationId: "ContainerExport" produces: - "application/octet-stream" responses: 200: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/stats: get: summary: "Get container stats based on resource usage" description: | This endpoint returns a live stream of a container’s resource usage statistics. The `precpu_stats` is the CPU statistic of the *previous* read, and is used to calculate the CPU usage percentage. It is not an exact copy of the `cpu_stats` field. If either `precpu_stats.online_cpus` or `cpu_stats.online_cpus` is nil then for compatibility with older daemons the length of the corresponding `cpu_usage.percpu_usage` array should be used. On a cgroup v2 host, the following fields are not set * `blkio_stats`: all fields other than `io_service_bytes_recursive` * `cpu_stats`: `cpu_usage.percpu_usage` * `memory_stats`: `max_usage` and `failcnt` Also, `memory_stats.stats` fields are incompatible with cgroup v1. 
To calculate the values shown by the `stats` command of the docker cli tool, the following formulas can be used: * used_memory = `memory_stats.usage - memory_stats.stats.cache` * available_memory = `memory_stats.limit` * Memory usage % = `(used_memory / available_memory) * 100.0` * cpu_delta = `cpu_stats.cpu_usage.total_usage - precpu_stats.cpu_usage.total_usage` * system_cpu_delta = `cpu_stats.system_cpu_usage - precpu_stats.system_cpu_usage` * number_cpus = `length(cpu_stats.cpu_usage.percpu_usage)` or `cpu_stats.online_cpus` * CPU usage % = `(cpu_delta / system_cpu_delta) * number_cpus * 100.0` operationId: "ContainerStats" produces: ["application/json"] responses: 200: description: "no error" schema: type: "object" examples: application/json: read: "2015-01-08T22:57:31.547920715Z" pids_stats: current: 3 networks: eth0: rx_bytes: 5338 rx_dropped: 0 rx_errors: 0 rx_packets: 36 tx_bytes: 648 tx_dropped: 0 tx_errors: 0 tx_packets: 8 eth5: rx_bytes: 4641 rx_dropped: 0 rx_errors: 0 rx_packets: 26 tx_bytes: 690 tx_dropped: 0 tx_errors: 0 tx_packets: 9 memory_stats: stats: total_pgmajfault: 0 cache: 0 mapped_file: 0 total_inactive_file: 0 pgpgout: 414 rss: 6537216 total_mapped_file: 0 writeback: 0 unevictable: 0 pgpgin: 477 total_unevictable: 0 pgmajfault: 0 total_rss: 6537216 total_rss_huge: 6291456 total_writeback: 0 total_inactive_anon: 0 rss_huge: 6291456 hierarchical_memory_limit: 67108864 total_pgfault: 964 total_active_file: 0 active_anon: 6537216 total_active_anon: 6537216 total_pgpgout: 414 total_cache: 0 inactive_anon: 0 active_file: 0 pgfault: 964 inactive_file: 0 total_pgpgin: 477 max_usage: 6651904 usage: 6537216 failcnt: 0 limit: 67108864 blkio_stats: {} cpu_stats: cpu_usage: percpu_usage: - 8646879 - 24472255 - 36438778 - 30657443 usage_in_usermode: 50000000 total_usage: 100215355 usage_in_kernelmode: 30000000 system_cpu_usage: 739306590000000 online_cpus: 4 throttling_data: periods: 0 throttled_periods: 0 throttled_time: 0 precpu_stats: cpu_usage: percpu_usage: - 8646879 - 24350896 - 36438778 - 30657443 usage_in_usermode: 50000000 total_usage: 100093996 usage_in_kernelmode: 30000000 system_cpu_usage: 9492140000000 online_cpus: 4 throttling_data: periods: 0 throttled_periods: 0 throttled_time: 0 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "stream" in: "query" description: | Stream the output. If false, the stats will be output once and then it will disconnect. type: "boolean" default: true - name: "one-shot" in: "query" description: | Only get a single stat instead of waiting for 2 cycles. Must be used with `stream=false`. type: "boolean" default: false tags: ["Container"] /containers/{id}/resize: post: summary: "Resize a container TTY" description: "Resize the TTY for a container."
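The formulas above can be applied directly to a decoded stats response. The following is a minimal sketch in Go, assuming the JSON has been unmarshalled into illustrative structs that mirror the documented field names; these are not the official client types, and `statsResponse`, `cpuPercent`, and `memoryPercent` are hypothetical names used only for this example.

```go
package main

import "fmt"

// Illustrative types covering only the documented fields the formulas use.
type cpuUsage struct {
	TotalUsage  uint64   `json:"total_usage"`
	PercpuUsage []uint64 `json:"percpu_usage"`
}

type cpuStats struct {
	CPUUsage       cpuUsage `json:"cpu_usage"`
	SystemCPUUsage uint64   `json:"system_cpu_usage"`
	OnlineCPUs     uint32   `json:"online_cpus"`
}

type memoryStats struct {
	Usage uint64            `json:"usage"`
	Limit uint64            `json:"limit"`
	Stats map[string]uint64 `json:"stats"`
}

type statsResponse struct {
	CPUStats    cpuStats    `json:"cpu_stats"`
	PreCPUStats cpuStats    `json:"precpu_stats"`
	MemoryStats memoryStats `json:"memory_stats"`
}

// cpuPercent implements: (cpu_delta / system_cpu_delta) * number_cpus * 100.0
func cpuPercent(s statsResponse) float64 {
	cpuDelta := float64(s.CPUStats.CPUUsage.TotalUsage) - float64(s.PreCPUStats.CPUUsage.TotalUsage)
	systemDelta := float64(s.CPUStats.SystemCPUUsage) - float64(s.PreCPUStats.SystemCPUUsage)

	// Prefer online_cpus; fall back to len(percpu_usage) for older daemons.
	numCPUs := float64(s.CPUStats.OnlineCPUs)
	if numCPUs == 0 {
		numCPUs = float64(len(s.CPUStats.CPUUsage.PercpuUsage))
	}
	if cpuDelta <= 0 || systemDelta <= 0 {
		return 0
	}
	return (cpuDelta / systemDelta) * numCPUs * 100.0
}

// memoryPercent implements: (used_memory / available_memory) * 100.0,
// where used_memory = usage - stats.cache (cache is 0 if the key is absent,
// as on cgroup v2 hosts).
func memoryPercent(s statsResponse) float64 {
	if s.MemoryStats.Limit == 0 {
		return 0
	}
	used := float64(s.MemoryStats.Usage - s.MemoryStats.Stats["cache"])
	return used / float64(s.MemoryStats.Limit) * 100.0
}

func main() {
	// Hypothetical sample values; in practice decode the JSON body of
	// GET /containers/{id}/stats?stream=false into statsResponse.
	s := statsResponse{
		CPUStats:    cpuStats{CPUUsage: cpuUsage{TotalUsage: 100215355}, SystemCPUUsage: 739306590000000, OnlineCPUs: 4},
		PreCPUStats: cpuStats{CPUUsage: cpuUsage{TotalUsage: 100093996}, SystemCPUUsage: 739306580000000},
		MemoryStats: memoryStats{Usage: 6537216, Limit: 67108864, Stats: map[string]uint64{"cache": 0}},
	}
	fmt.Printf("cpu: %.2f%%  mem: %.2f%%\n", cpuPercent(s), memoryPercent(s))
}
```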
operationId: "ContainerResize" consumes: - "application/octet-stream" produces: - "text/plain" responses: 200: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "cannot resize container" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "h" in: "query" description: "Height of the TTY session in characters" type: "integer" - name: "w" in: "query" description: "Width of the TTY session in characters" type: "integer" tags: ["Container"] /containers/{id}/start: post: summary: "Start a container" operationId: "ContainerStart" responses: 204: description: "no error" 304: description: "container already started" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "detachKeys" in: "query" description: | Override the key sequence for detaching a container. Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,` or `_`. type: "string" tags: ["Container"] /containers/{id}/stop: post: summary: "Stop a container" operationId: "ContainerStop" responses: 204: description: "no error" 304: description: "container already stopped" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "t" in: "query" description: "Number of seconds to wait before killing the container" type: "integer" tags: ["Container"] /containers/{id}/restart: post: summary: "Restart a container" operationId: "ContainerRestart" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "t" in: "query" description: "Number of seconds to wait before killing the container" type: "integer" tags: ["Container"] /containers/{id}/kill: post: summary: "Kill a container" description: | Send a POSIX signal to a container, defaulting to killing the container.
operationId: "ContainerKill" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "container is not running" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "Container d37cde0fe4ad63c3a7252023b2f9800282894247d145cb5933ddf6e52cc03a28 is not running" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "signal" in: "query" description: "Signal to send to the container as an integer or string (e.g. `SIGINT`)" type: "string" default: "SIGKILL" tags: ["Container"] /containers/{id}/update: post: summary: "Update a container" description: | Change various configuration options of a container without having to recreate it. operationId: "ContainerUpdate" consumes: ["application/json"] produces: ["application/json"] responses: 200: description: "The container has been updated." schema: type: "object" title: "ContainerUpdateResponse" description: "OK response to ContainerUpdate operation" properties: Warnings: type: "array" items: type: "string" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "update" in: "body" required: true schema: allOf: - $ref: "#/definitions/Resources" - type: "object" properties: RestartPolicy: $ref: "#/definitions/RestartPolicy" example: BlkioWeight: 300 CpuShares: 512 CpuPeriod: 100000 CpuQuota: 50000 CpuRealtimePeriod: 1000000 CpuRealtimeRuntime: 10000 CpusetCpus: "0,1" CpusetMems: "0" Memory: 314572800 MemorySwap: 514288000 MemoryReservation: 209715200 KernelMemory: 52428800 RestartPolicy: MaximumRetryCount: 4 Name: "on-failure" tags: ["Container"] /containers/{id}/rename: post: summary: "Rename a container" operationId: "ContainerRename" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "name already in use" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "name" in: "query" required: true description: "New name for the container" type: "string" tags: ["Container"] /containers/{id}/pause: post: summary: "Pause a container" description: | Use the freezer cgroup to suspend all processes in a container. Traditionally, when suspending a process the `SIGSTOP` signal is used, which is observable by the process being suspended. With the freezer cgroup the process is unaware, and unable to capture, that it is being suspended, and subsequently resumed. 
operationId: "ContainerPause" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/unpause: post: summary: "Unpause a container" description: "Resume a container which has been paused." operationId: "ContainerUnpause" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/attach: post: summary: "Attach to a container" description: | Attach to a container to read its output or send it input. You can attach to the same container multiple times and you can reattach to containers that have been detached. Either the `stream` or `logs` parameter must be `true` for this endpoint to do anything. See the [documentation for the `docker attach` command](https://docs.docker.com/engine/reference/commandline/attach/) for more details. ### Hijacking This endpoint hijacks the HTTP connection to transport `stdin`, `stdout`, and `stderr` on the same socket. This is the response from the daemon for an attach request: ``` HTTP/1.1 200 OK Content-Type: application/vnd.docker.raw-stream [STREAM] ``` After the headers and two new lines, the TCP connection can now be used for raw, bidirectional communication between the client and server. To hint potential proxies about connection hijacking, the Docker client can also optionally send connection upgrade headers. For example, the client sends this request to upgrade the connection: ``` POST /containers/16253994b7c4/attach?stream=1&stdout=1 HTTP/1.1 Upgrade: tcp Connection: Upgrade ``` The Docker daemon will respond with a `101 UPGRADED` response, and will similarly follow with the raw stream: ``` HTTP/1.1 101 UPGRADED Content-Type: application/vnd.docker.raw-stream Connection: Upgrade Upgrade: tcp [STREAM] ``` ### Stream format When the TTY setting is disabled in [`POST /containers/create`](#operation/ContainerCreate), the stream over the hijacked connection is multiplexed to separate out `stdout` and `stderr`. The stream consists of a series of frames, each containing a header and a payload. The header contains the information about which stream the payload belongs to (`stdout` or `stderr`). It also contains the size of the associated frame encoded in the last four bytes (`uint32`). It is encoded on the first eight bytes like this: ```go header := [8]byte{STREAM_TYPE, 0, 0, 0, SIZE1, SIZE2, SIZE3, SIZE4} ``` `STREAM_TYPE` can be: - 0: `stdin` (is written on `stdout`) - 1: `stdout` - 2: `stderr` `SIZE1, SIZE2, SIZE3, SIZE4` are the four bytes of the `uint32` size encoded as big endian. Following the header is the payload, which is the specified number of bytes of `STREAM_TYPE`. The simplest way to implement this protocol is the following: 1. Read 8 bytes. 2. Choose `stdout` or `stderr` depending on the first byte. 3. Extract the frame size from the last four bytes. 4. Read the extracted size and output it on the correct output. 5. Goto 1.
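A literal Go translation of the five steps above might look like the sketch below. It is only an illustration; the `stdcopy` package in this repository provides the canonical implementation of this de-multiplexing, and `demux` is a hypothetical helper name used for this example.

```go
package main

import (
	"encoding/binary"
	"fmt"
	"io"
	"os"
)

// demux reads the multiplexed raw stream from r and writes each frame's
// payload to stdout or stderr according to the STREAM_TYPE byte.
func demux(r io.Reader, stdout, stderr io.Writer) error {
	header := make([]byte, 8)
	for {
		// 1. Read 8 bytes (the frame header).
		if _, err := io.ReadFull(r, header); err != nil {
			if err == io.EOF {
				return nil
			}
			return err
		}
		// 2. Choose stdout or stderr depending on the first byte.
		var dst io.Writer
		switch header[0] {
		case 0, 1: // stdin (written on stdout) or stdout
			dst = stdout
		case 2: // stderr
			dst = stderr
		default:
			return fmt.Errorf("unknown stream type %d", header[0])
		}
		// 3. Extract the frame size from the last four bytes (big endian).
		size := binary.BigEndian.Uint32(header[4:8])
		// 4. Read that many payload bytes and write them to the output.
		if _, err := io.CopyN(dst, r, int64(size)); err != nil {
			return err
		}
		// 5. Goto 1.
	}
}

func main() {
	// For demonstration, read a multiplexed stream from stdin, e.g. one
	// captured from this endpoint or from /containers/{id}/logs.
	if err := demux(os.Stdin, os.Stdout, os.Stderr); err != nil {
		panic(err)
	}
}
```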
### Stream format when using a TTY When the TTY setting is enabled in [`POST /containers/create`](#operation/ContainerCreate), the stream is not multiplexed. The data exchanged over the hijacked connection is simply the raw data from the process PTY and client's `stdin`. operationId: "ContainerAttach" produces: - "application/vnd.docker.raw-stream" responses: 101: description: "no error, hints proxy about hijacking" 200: description: "no error, no upgrade header found" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "detachKeys" in: "query" description: | Override the key sequence for detaching a container.Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,` or `_`. type: "string" - name: "logs" in: "query" description: | Replay previous logs from the container. This is useful for attaching to a container that has started and you want to output everything since the container started. If `stream` is also enabled, once all the previous output has been returned, it will seamlessly transition into streaming current output. type: "boolean" default: false - name: "stream" in: "query" description: | Stream attached streams from the time the request was made onwards. type: "boolean" default: false - name: "stdin" in: "query" description: "Attach to `stdin`" type: "boolean" default: false - name: "stdout" in: "query" description: "Attach to `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Attach to `stderr`" type: "boolean" default: false tags: ["Container"] /containers/{id}/attach/ws: get: summary: "Attach to a container via a websocket" operationId: "ContainerAttachWebsocket" responses: 101: description: "no error, hints proxy about hijacking" 200: description: "no error, no upgrade header found" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "detachKeys" in: "query" description: | Override the key sequence for detaching a container.Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,`, or `_`. type: "string" - name: "logs" in: "query" description: "Return logs" type: "boolean" default: false - name: "stream" in: "query" description: "Return stream" type: "boolean" default: false - name: "stdin" in: "query" description: "Attach to `stdin`" type: "boolean" default: false - name: "stdout" in: "query" description: "Attach to `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Attach to `stderr`" type: "boolean" default: false tags: ["Container"] /containers/{id}/wait: post: summary: "Wait for a container" description: "Block until a container stops, then returns the exit code." 
operationId: "ContainerWait" produces: ["application/json"] responses: 200: description: "The container has exited." schema: type: "object" title: "ContainerWaitResponse" description: "OK response to ContainerWait operation" required: [StatusCode] properties: StatusCode: description: "Exit code of the container" type: "integer" x-nullable: false Error: description: "container waiting error, if any" type: "object" properties: Message: description: "Details of an error" type: "string" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "condition" in: "query" description: | Wait until a container state reaches the given condition, either 'not-running' (default), 'next-exit', or 'removed'. type: "string" default: "not-running" tags: ["Container"] /containers/{id}: delete: summary: "Remove a container" operationId: "ContainerDelete" responses: 204: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "conflict" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: | You cannot remove a running container: c2ada9df5af8. Stop the container before attempting removal or force remove 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "v" in: "query" description: "Remove anonymous volumes associated with the container." type: "boolean" default: false - name: "force" in: "query" description: "If the container is running, kill it before removing it." type: "boolean" default: false - name: "link" in: "query" description: "Remove the specified link associated with the container." type: "boolean" default: false tags: ["Container"] /containers/{id}/archive: head: summary: "Get information about files in a container" description: | A response header `X-Docker-Container-Path-Stat` is returned, containing a base64-encoded JSON object with some filesystem header information about the path. operationId: "ContainerArchiveInfo" responses: 200: description: "no error" headers: X-Docker-Container-Path-Stat: type: "string" description: | A base64-encoded JSON object with some filesystem header information about the path 400: description: "Bad parameter" schema: allOf: - $ref: "#/definitions/ErrorResponse" - type: "object" properties: message: description: | The error message. Either "must specify path parameter" (path cannot be empty) or "not a directory" (path was asserted to be a directory but exists as a file). type: "string" x-nullable: false 404: description: "Container or path does not exist" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "path" in: "query" required: true description: "Resource in the container’s filesystem to archive."
type: "string" tags: ["Container"] get: summary: "Get an archive of a filesystem resource in a container" description: "Get a tar archive of a resource in the filesystem of container id." operationId: "ContainerArchive" produces: ["application/x-tar"] responses: 200: description: "no error" 400: description: "Bad parameter" schema: allOf: - $ref: "#/definitions/ErrorResponse" - type: "object" properties: message: description: | The error message. Either "must specify path parameter" (path cannot be empty) or "not a directory" (path was asserted to be a directory but exists as a file). type: "string" x-nullable: false 404: description: "Container or path does not exist" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "path" in: "query" required: true description: "Resource in the container’s filesystem to archive." type: "string" tags: ["Container"] put: summary: "Extract an archive of files or folders to a directory in a container" description: "Upload a tar archive to be extracted to a path in the filesystem of container id." operationId: "PutContainerArchive" consumes: ["application/x-tar", "application/octet-stream"] responses: 200: description: "The content was extracted successfully" 400: description: "Bad parameter" schema: $ref: "#/definitions/ErrorResponse" 403: description: "Permission denied, the volume or container rootfs is marked as read-only." schema: $ref: "#/definitions/ErrorResponse" 404: description: "No such container or path does not exist inside the container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "path" in: "query" required: true description: "Path to a directory in the container to extract the archive’s contents into. " type: "string" - name: "noOverwriteDirNonDir" in: "query" description: | If `1`, `true`, or `True` then it will be an error if unpacking the given content would cause an existing directory to be replaced with a non-directory and vice versa. type: "string" - name: "copyUIDGID" in: "query" description: | If `1`, `true`, then it will copy UID/GID maps to the dest file or dir type: "string" - name: "inputStream" in: "body" required: true description: | The input stream must be a tar archive compressed with one of the following algorithms: `identity` (no compression), `gzip`, `bzip2`, or `xz`. schema: type: "string" format: "binary" tags: ["Container"] /containers/prune: post: summary: "Delete stopped containers" produces: - "application/json" operationId: "ContainerPrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `until=<timestamp>` Prune containers created before this timestamp. The `<timestamp>` can be Unix timestamps, date formatted timestamps, or Go duration strings (e.g. `10m`, `1h30m`) computed relative to the daemon machine’s time. - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune containers with (or without, in case `label!=...` is used) the specified labels. 
type: "string" responses: 200: description: "No error" schema: type: "object" title: "ContainerPruneResponse" properties: ContainersDeleted: description: "Container IDs that were deleted" type: "array" items: type: "string" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Container"] /images/json: get: summary: "List Images" description: "Returns a list of images on the server. Note that it uses a different, smaller representation of an image than inspecting a single image." operationId: "ImageList" produces: - "application/json" responses: 200: description: "Summary image data for the images matching the query" schema: type: "array" items: $ref: "#/definitions/ImageSummary" examples: application/json: - Id: "sha256:e216a057b1cb1efc11f8a268f37ef62083e70b1b38323ba252e25ac88904a7e8" ParentId: "" RepoTags: - "ubuntu:12.04" - "ubuntu:precise" RepoDigests: - "ubuntu@sha256:992069aee4016783df6345315302fa59681aae51a8eeb2f889dea59290f21787" Created: 1474925151 Size: 103579269 VirtualSize: 103579269 SharedSize: 0 Labels: {} Containers: 2 - Id: "sha256:3e314f95dcace0f5e4fd37b10862fe8398e3c60ed36600bc0ca5fda78b087175" ParentId: "" RepoTags: - "ubuntu:12.10" - "ubuntu:quantal" RepoDigests: - "ubuntu@sha256:002fba3e3255af10be97ea26e476692a7ebed0bb074a9ab960b2e7a1526b15d7" - "ubuntu@sha256:68ea0200f0b90df725d99d823905b04cf844f6039ef60c60bf3e019915017bd3" Created: 1403128455 Size: 172064416 VirtualSize: 172064416 SharedSize: 0 Labels: {} Containers: 5 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "all" in: "query" description: "Show all images. Only images from a final layer (no children) are shown by default." type: "boolean" default: false - name: "filters" in: "query" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the images list. Available filters: - `before`=(`<image-name>[:<tag>]`, `<image id>` or `<image@digest>`) - `dangling=true` - `label=key` or `label="key=value"` of an image label - `reference`=(`<image-name>[:<tag>]`) - `since`=(`<image-name>[:<tag>]`, `<image id>` or `<image@digest>`) type: "string" - name: "shared-size" in: "query" description: "Compute and show shared size as a `SharedSize` field on each image." type: "boolean" default: false - name: "digests" in: "query" description: "Show digest information as a `RepoDigests` field on each image." type: "boolean" default: false tags: ["Image"] /build: post: summary: "Build an image" description: | Build an image from a tar archive with a `Dockerfile` in it. The `Dockerfile` specifies how the image is built from the tar archive. It is typically in the archive's root, but can be at a different path or have a different name by specifying the `dockerfile` parameter. [See the `Dockerfile` reference for more information](https://docs.docker.com/engine/reference/builder/). The Docker daemon performs a preliminary validation of the `Dockerfile` before starting the build, and returns an error if the syntax is incorrect. After that, each instruction is run one-by-one until the ID of the new image is output. The build is canceled if the client drops the connection by quitting or being killed. 
operationId: "ImageBuild" consumes: - "application/octet-stream" produces: - "application/json" parameters: - name: "inputStream" in: "body" description: "A tar archive compressed with one of the following algorithms: identity (no compression), gzip, bzip2, xz." schema: type: "string" format: "binary" - name: "dockerfile" in: "query" description: "Path within the build context to the `Dockerfile`. This is ignored if `remote` is specified and points to an external `Dockerfile`." type: "string" default: "Dockerfile" - name: "t" in: "query" description: "A name and optional tag to apply to the image in the `name:tag` format. If you omit the tag the default `latest` value is assumed. You can provide several `t` parameters." type: "string" - name: "extrahosts" in: "query" description: "Extra hosts to add to /etc/hosts" type: "string" - name: "remote" in: "query" description: "A Git repository URI or HTTP/HTTPS context URI. If the URI points to a single text file, the file’s contents are placed into a file called `Dockerfile` and the image is built from that file. If the URI points to a tarball, the file is downloaded by the daemon and the contents therein used as the context for the build. If the URI points to a tarball and the `dockerfile` parameter is also specified, there must be a file with the corresponding path inside the tarball." type: "string" - name: "q" in: "query" description: "Suppress verbose build output." type: "boolean" default: false - name: "nocache" in: "query" description: "Do not use the cache when building the image." type: "boolean" default: false - name: "cachefrom" in: "query" description: "JSON array of images used for build cache resolution." type: "string" - name: "pull" in: "query" description: "Attempt to pull the image even if an older image exists locally." type: "string" - name: "rm" in: "query" description: "Remove intermediate containers after a successful build." type: "boolean" default: true - name: "forcerm" in: "query" description: "Always remove intermediate containers, even upon failure." type: "boolean" default: false - name: "memory" in: "query" description: "Set memory limit for build." type: "integer" - name: "memswap" in: "query" description: "Total memory (memory + swap). Set as `-1` to disable swap." type: "integer" - name: "cpushares" in: "query" description: "CPU shares (relative weight)." type: "integer" - name: "cpusetcpus" in: "query" description: "CPUs in which to allow execution (e.g., `0-3`, `0,1`)." type: "string" - name: "cpuperiod" in: "query" description: "The length of a CPU period in microseconds." type: "integer" - name: "cpuquota" in: "query" description: "Microseconds of CPU time that the container can get in a CPU period." type: "integer" - name: "buildargs" in: "query" description: > JSON map of string pairs for build-time variables. Users pass these values at build-time. Docker uses the buildargs as the environment context for commands run via the `Dockerfile` RUN instruction, or for variable expansion in other `Dockerfile` instructions. This is not meant for passing secret values. For example, the build arg `FOO=bar` would become `{"FOO":"bar"}` in JSON. This would result in the query parameter `buildargs={"FOO":"bar"}`. Note that `{"FOO":"bar"}` should be URI component encoded. [Read more about the buildargs instruction.](https://docs.docker.com/engine/reference/builder/#arg) type: "string" - name: "shmsize" in: "query" description: "Size of `/dev/shm` in bytes. The size must be greater than 0. 
If omitted the system uses 64MB." type: "integer" - name: "squash" in: "query" description: "Squash the resulting images layers into a single layer. *(Experimental release only.)*" type: "boolean" - name: "labels" in: "query" description: "Arbitrary key/value labels to set on the image, as a JSON map of string pairs." type: "string" - name: "networkmode" in: "query" description: | Sets the networking mode for the run commands during build. Supported standard values are: `bridge`, `host`, `none`, and `container:<name|id>`. Any other value is taken as a custom network's name or ID to which this container should connect to. type: "string" - name: "Content-type" in: "header" type: "string" enum: - "application/x-tar" default: "application/x-tar" - name: "X-Registry-Config" in: "header" description: | This is a base64-encoded JSON object with auth configurations for multiple registries that a build may refer to. The key is a registry URL, and the value is an auth configuration object, [as described in the authentication section](#section/Authentication). For example: ``` { "docker.example.com": { "username": "janedoe", "password": "hunter2" }, "https://index.docker.io/v1/": { "username": "mobydock", "password": "conta1n3rize14" } } ``` Only the registry domain name (and port if not the default 443) are required. However, for legacy reasons, the Docker Hub registry must be specified with both a `https://` prefix and a `/v1/` suffix even though Docker will prefer to use the v2 registry API. type: "string" - name: "platform" in: "query" description: "Platform in the format os[/arch[/variant]]" type: "string" default: "" - name: "target" in: "query" description: "Target build stage" type: "string" default: "" - name: "outputs" in: "query" description: "BuildKit output configuration" type: "string" default: "" responses: 200: description: "no error" 400: description: "Bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Image"] /build/prune: post: summary: "Delete builder cache" produces: - "application/json" operationId: "BuildPrune" parameters: - name: "keep-storage" in: "query" description: "Amount of disk space in bytes to keep for cache" type: "integer" format: "int64" - name: "all" in: "query" type: "boolean" description: "Remove all types of build cache" - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the list of build cache objects. Available filters: - `until=<duration>`: duration relative to daemon's time, during which build cache was not used, in Go's duration format (e.g., '24h') - `id=<id>` - `parent=<id>` - `type=<string>` - `description=<string>` - `inuse` - `shared` - `private` responses: 200: description: "No error" schema: type: "object" title: "BuildPruneResponse" properties: CachesDeleted: type: "array" items: description: "ID of build cache object" type: "string" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Image"] /images/create: post: summary: "Create an image" description: "Create an image by either pulling it from a registry or importing it." 
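As a concrete illustration of two encodings described for the `/build` endpoint above, the sketch below URI-component-encodes a `buildargs` JSON map for use as a query parameter, and serializes a registry-to-credentials map into a base64-encoded `X-Registry-Config` header. The credentials and registry URLs are placeholders taken from the example above; the choice of URL-safe base64 matches what this spec describes for `X-Registry-Auth`, so treat the exact encoding variant as an assumption of this sketch rather than a statement about the daemon.

```go
package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
	"net/url"
)

// authConfig mirrors the per-registry auth object shown in the
// X-Registry-Config example above.
type authConfig struct {
	Username string `json:"username"`
	Password string `json:"password"`
}

func main() {
	// buildargs: the JSON map goes into the query string, URI-encoded,
	// e.g. ?buildargs=%7B%22FOO%22%3A%22bar%22%7D
	args, _ := json.Marshal(map[string]string{"FOO": "bar"})
	fmt.Println("buildargs query value:", url.QueryEscape(string(args)))

	// X-Registry-Config: a map of registry URL -> auth config, JSON-encoded
	// and then base64-encoded, sent as a request header.
	cfg := map[string]authConfig{
		"docker.example.com":          {Username: "janedoe", Password: "hunter2"},
		"https://index.docker.io/v1/": {Username: "mobydock", Password: "conta1n3rize14"},
	}
	raw, err := json.Marshal(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println("X-Registry-Config:", base64.URLEncoding.EncodeToString(raw))
}
```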
operationId: "ImageCreate" consumes: - "text/plain" - "application/octet-stream" produces: - "application/json" responses: 200: description: "no error" 404: description: "repository does not exist or no read access" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "fromImage" in: "query" description: "Name of the image to pull. The name may include a tag or digest. This parameter may only be used when pulling an image. The pull is cancelled if the HTTP connection is closed." type: "string" - name: "fromSrc" in: "query" description: "Source to import. The value may be a URL from which the image can be retrieved or `-` to read the image from the request body. This parameter may only be used when importing an image." type: "string" - name: "repo" in: "query" description: "Repository name given to an image when it is imported. The repo may include a tag. This parameter may only be used when importing an image." type: "string" - name: "tag" in: "query" description: "Tag or digest. If empty when pulling an image, this causes all tags for the given image to be pulled." type: "string" - name: "message" in: "query" description: "Set commit message for imported image." type: "string" - name: "inputImage" in: "body" description: "Image content if the value `-` has been specified in fromSrc query parameter" schema: type: "string" required: false - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration. Refer to the [authentication section](#section/Authentication) for details. type: "string" - name: "changes" in: "query" description: | Apply `Dockerfile` instructions to the image that is created, for example: `changes=ENV DEBUG=true`. Note that `ENV DEBUG=true` should be URI component encoded. Supported `Dockerfile` instructions: `CMD`|`ENTRYPOINT`|`ENV`|`EXPOSE`|`ONBUILD`|`USER`|`VOLUME`|`WORKDIR` type: "array" items: type: "string" - name: "platform" in: "query" description: "Platform in the format os[/arch[/variant]]" type: "string" default: "" tags: ["Image"] /images/{name}/json: get: summary: "Inspect an image" description: "Return low-level information about an image." 
operationId: "ImageInspect" produces: - "application/json" responses: 200: description: "No error" schema: $ref: "#/definitions/Image" examples: application/json: Id: "sha256:85f05633ddc1c50679be2b16a0479ab6f7637f8884e0cfe0f4d20e1ebb3d6e7c" Container: "cb91e48a60d01f1e27028b4fc6819f4f290b3cf12496c8176ec714d0d390984a" Comment: "" Os: "linux" Architecture: "amd64" Parent: "sha256:91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c" ContainerConfig: Tty: false Hostname: "e611e15f9c9d" Domainname: "" AttachStdout: false PublishService: "" AttachStdin: false OpenStdin: false StdinOnce: false NetworkDisabled: false OnBuild: [] Image: "91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c" User: "" WorkingDir: "" MacAddress: "" AttachStderr: false Labels: com.example.license: "GPL" com.example.version: "1.0" com.example.vendor: "Acme" Env: - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" Cmd: - "/bin/sh" - "-c" - "#(nop) LABEL com.example.vendor=Acme com.example.license=GPL com.example.version=1.0" DockerVersion: "1.9.0-dev" VirtualSize: 188359297 Size: 0 Author: "" Created: "2015-09-10T08:30:53.26995814Z" GraphDriver: Name: "aufs" Data: {} RepoDigests: - "localhost:5000/test/busybox/example@sha256:cbbf2f9a99b47fc460d422812b6a5adff7dfee951d8fa2e4a98caa0382cfbdbf" RepoTags: - "example:1.0" - "example:latest" - "example:stable" Config: Image: "91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c" NetworkDisabled: false OnBuild: [] StdinOnce: false PublishService: "" AttachStdin: false OpenStdin: false Domainname: "" AttachStdout: false Tty: false Hostname: "e611e15f9c9d" Cmd: - "/bin/bash" Env: - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" Labels: com.example.vendor: "Acme" com.example.version: "1.0" com.example.license: "GPL" MacAddress: "" AttachStderr: false WorkingDir: "" User: "" RootFS: Type: "layers" Layers: - "sha256:1834950e52ce4d5a88a1bbd131c537f4d0e56d10ff0dd69e66be3b7dfa9df7e6" - "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such image: someimage (tag: latest)" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or id" type: "string" required: true tags: ["Image"] /images/{name}/history: get: summary: "Get the history of an image" description: "Return parent layers of an image." 
operationId: "ImageHistory" produces: ["application/json"] responses: 200: description: "List of image layers" schema: type: "array" items: type: "object" x-go-name: HistoryResponseItem title: "HistoryResponseItem" description: "individual image layer information in response to ImageHistory operation" required: [Id, Created, CreatedBy, Tags, Size, Comment] properties: Id: type: "string" x-nullable: false Created: type: "integer" format: "int64" x-nullable: false CreatedBy: type: "string" x-nullable: false Tags: type: "array" items: type: "string" Size: type: "integer" format: "int64" x-nullable: false Comment: type: "string" x-nullable: false examples: application/json: - Id: "3db9c44f45209632d6050b35958829c3a2aa256d81b9a7be45b362ff85c54710" Created: 1398108230 CreatedBy: "/bin/sh -c #(nop) ADD file:eb15dbd63394e063b805a3c32ca7bf0266ef64676d5a6fab4801f2e81e2a5148 in /" Tags: - "ubuntu:lucid" - "ubuntu:10.04" Size: 182964289 Comment: "" - Id: "6cfa4d1f33fb861d4d114f43b25abd0ac737509268065cdfd69d544a59c85ab8" Created: 1398108222 CreatedBy: "/bin/sh -c #(nop) MAINTAINER Tianon Gravi <[email protected]> - mkimage-debootstrap.sh -i iproute,iputils-ping,ubuntu-minimal -t lucid.tar.xz lucid http://archive.ubuntu.com/ubuntu/" Tags: [] Size: 0 Comment: "" - Id: "511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158" Created: 1371157430 CreatedBy: "" Tags: - "scratch12:latest" - "scratch:latest" Size: 0 Comment: "Imported from -" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID" type: "string" required: true tags: ["Image"] /images/{name}/push: post: summary: "Push an image" description: | Push an image to a registry. If you wish to push an image on to a private registry, that image must already have a tag which references the registry. For example, `registry.example.com/myimage:latest`. The push is cancelled if the HTTP connection is closed. operationId: "ImagePush" consumes: - "application/octet-stream" responses: 200: description: "No error" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID." type: "string" required: true - name: "tag" in: "query" description: "The tag to associate with the image on the registry." type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration. Refer to the [authentication section](#section/Authentication) for details. type: "string" required: true tags: ["Image"] /images/{name}/tag: post: summary: "Tag an image" description: "Tag an image so that it becomes part of a repository." operationId: "ImageTag" responses: 201: description: "No error" 400: description: "Bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Conflict" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID to tag." type: "string" required: true - name: "repo" in: "query" description: "The repository to tag in. For example, `someuser/someimage`." type: "string" - name: "tag" in: "query" description: "The name of the new tag." 
type: "string" tags: ["Image"] /images/{name}: delete: summary: "Remove an image" description: | Remove an image, along with any untagged parent images that were referenced by that image. Images can't be removed if they have descendant images, are being used by a running container or are being used by a build. operationId: "ImageDelete" produces: ["application/json"] responses: 200: description: "The image was deleted successfully" schema: type: "array" items: $ref: "#/definitions/ImageDeleteResponseItem" examples: application/json: - Untagged: "3e2f21a89f" - Deleted: "3e2f21a89f" - Deleted: "53b4f83ac9" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Conflict" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID" type: "string" required: true - name: "force" in: "query" description: "Remove the image even if it is being used by stopped containers or has other tags" type: "boolean" default: false - name: "noprune" in: "query" description: "Do not delete untagged parent images" type: "boolean" default: false tags: ["Image"] /images/search: get: summary: "Search images" description: "Search for an image on Docker Hub." operationId: "ImageSearch" produces: - "application/json" responses: 200: description: "No error" schema: type: "array" items: type: "object" title: "ImageSearchResponseItem" properties: description: type: "string" is_official: type: "boolean" is_automated: type: "boolean" name: type: "string" star_count: type: "integer" examples: application/json: - description: "" is_official: false is_automated: false name: "wma55/u1210sshd" star_count: 0 - description: "" is_official: false is_automated: false name: "jdswinbank/sshd" star_count: 0 - description: "" is_official: false is_automated: false name: "vgauthier/sshd" star_count: 0 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "term" in: "query" description: "Term to search" type: "string" required: true - name: "limit" in: "query" description: "Maximum number of results to return" type: "integer" - name: "filters" in: "query" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the images list. Available filters: - `is-automated=(true|false)` - `is-official=(true|false)` - `stars=<number>` Matches images that has at least 'number' stars. type: "string" tags: ["Image"] /images/prune: post: summary: "Delete unused images" produces: - "application/json" operationId: "ImagePrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `dangling=<boolean>` When set to `true` (or `1`), prune only unused *and* untagged images. When set to `false` (or `0`), all unused images are pruned. - `until=<string>` Prune images created before this timestamp. The `<timestamp>` can be Unix timestamps, date formatted timestamps, or Go duration strings (e.g. `10m`, `1h30m`) computed relative to the daemon machine’s time. - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune images with (or without, in case `label!=...` is used) the specified labels. 
type: "string" responses: 200: description: "No error" schema: type: "object" title: "ImagePruneResponse" properties: ImagesDeleted: description: "Images that were deleted" type: "array" items: $ref: "#/definitions/ImageDeleteResponseItem" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Image"] /auth: post: summary: "Check auth configuration" description: | Validate credentials for a registry and, if available, get an identity token for accessing the registry without password. operationId: "SystemAuth" consumes: ["application/json"] produces: ["application/json"] responses: 200: description: "An identity token was generated successfully." schema: type: "object" title: "SystemAuthResponse" required: [Status] properties: Status: description: "The status of the authentication" type: "string" x-nullable: false IdentityToken: description: "An opaque token used to authenticate a user after a successful login" type: "string" x-nullable: false examples: application/json: Status: "Login Succeeded" IdentityToken: "9cbaf023786cd7..." 204: description: "No error" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "authConfig" in: "body" description: "Authentication to check" schema: $ref: "#/definitions/AuthConfig" tags: ["System"] /info: get: summary: "Get system information" operationId: "SystemInfo" produces: - "application/json" responses: 200: description: "No error" schema: $ref: "#/definitions/SystemInfo" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /version: get: summary: "Get version" description: "Returns the version of Docker that is running and various information about the system that Docker is running on." operationId: "SystemVersion" produces: ["application/json"] responses: 200: description: "no error" schema: $ref: "#/definitions/SystemVersion" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /_ping: get: summary: "Ping" description: "This is a dummy endpoint you can use to test if the server is accessible." operationId: "SystemPing" produces: ["text/plain"] responses: 200: description: "no error" schema: type: "string" example: "OK" headers: API-Version: type: "string" description: "Max API Version the server supports" Builder-Version: type: "string" description: "Default version of docker image builder" Docker-Experimental: type: "boolean" description: "If the server is running with experimental mode enabled" Cache-Control: type: "string" default: "no-cache, no-store, must-revalidate" Pragma: type: "string" default: "no-cache" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" headers: Cache-Control: type: "string" default: "no-cache, no-store, must-revalidate" Pragma: type: "string" default: "no-cache" tags: ["System"] head: summary: "Ping" description: "This is a dummy endpoint you can use to test if the server is accessible." 
operationId: "SystemPingHead" produces: ["text/plain"] responses: 200: description: "no error" schema: type: "string" example: "(empty)" headers: API-Version: type: "string" description: "Max API Version the server supports" Builder-Version: type: "string" description: "Default version of docker image builder" Docker-Experimental: type: "boolean" description: "If the server is running with experimental mode enabled" Cache-Control: type: "string" default: "no-cache, no-store, must-revalidate" Pragma: type: "string" default: "no-cache" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /commit: post: summary: "Create a new image from a container" operationId: "ImageCommit" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "containerConfig" in: "body" description: "The container configuration" schema: $ref: "#/definitions/ContainerConfig" - name: "container" in: "query" description: "The ID or name of the container to commit" type: "string" - name: "repo" in: "query" description: "Repository name for the created image" type: "string" - name: "tag" in: "query" description: "Tag name for the create image" type: "string" - name: "comment" in: "query" description: "Commit message" type: "string" - name: "author" in: "query" description: "Author of the image (e.g., `John Hannibal Smith <[email protected]>`)" type: "string" - name: "pause" in: "query" description: "Whether to pause the container before committing" type: "boolean" default: true - name: "changes" in: "query" description: "`Dockerfile` instructions to apply while committing" type: "string" tags: ["Image"] /events: get: summary: "Monitor events" description: | Stream real-time events from the server. Various objects within Docker report events when something happens to them. 
Containers report these events: `attach`, `commit`, `copy`, `create`, `destroy`, `detach`, `die`, `exec_create`, `exec_detach`, `exec_start`, `exec_die`, `export`, `health_status`, `kill`, `oom`, `pause`, `rename`, `resize`, `restart`, `start`, `stop`, `top`, `unpause`, `update`, and `prune` Images report these events: `delete`, `import`, `load`, `pull`, `push`, `save`, `tag`, `untag`, and `prune` Volumes report these events: `create`, `mount`, `unmount`, `destroy`, and `prune` Networks report these events: `create`, `connect`, `disconnect`, `destroy`, `update`, `remove`, and `prune` The Docker daemon reports these events: `reload` Services report these events: `create`, `update`, and `remove` Nodes report these events: `create`, `update`, and `remove` Secrets report these events: `create`, `update`, and `remove` Configs report these events: `create`, `update`, and `remove` The Builder reports `prune` events operationId: "SystemEvents" produces: - "application/json" responses: 200: description: "no error" schema: type: "object" title: "SystemEventsResponse" properties: Type: description: "The type of object emitting the event" type: "string" Action: description: "The type of event" type: "string" Actor: type: "object" properties: ID: description: "The ID of the object emitting the event" type: "string" Attributes: description: "Various key/value attributes of the object, depending on its type" type: "object" additionalProperties: type: "string" time: description: "Timestamp of event" type: "integer" timeNano: description: "Timestamp of event, with nanosecond accuracy" type: "integer" format: "int64" examples: application/json: Type: "container" Action: "create" Actor: ID: "ede54ee1afda366ab42f824e8a5ffd195155d853ceaec74a927f249ea270c743" Attributes: com.example.some-label: "some-label-value" image: "alpine" name: "my-container" time: 1461943101 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "since" in: "query" description: "Show events created since this timestamp then stream new events." type: "string" - name: "until" in: "query" description: "Show events created until this timestamp then stop streaming." type: "string" - name: "filters" in: "query" description: | A JSON encoded value of filters (a `map[string][]string`) to process on the event list. 
Available filters: - `config=<string>` config name or ID - `container=<string>` container name or ID - `daemon=<string>` daemon name or ID - `event=<string>` event type - `image=<string>` image name or ID - `label=<string>` image or container label - `network=<string>` network name or ID - `node=<string>` node ID - `plugin`=<string> plugin name or ID - `scope`=<string> local or swarm - `secret=<string>` secret name or ID - `service=<string>` service name or ID - `type=<string>` object to filter by, one of `container`, `image`, `volume`, `network`, `daemon`, `plugin`, `node`, `service`, `secret` or `config` - `volume=<string>` volume name type: "string" tags: ["System"] /system/df: get: summary: "Get data usage information" operationId: "SystemDataUsage" responses: 200: description: "no error" schema: type: "object" title: "SystemDataUsageResponse" properties: LayersSize: type: "integer" format: "int64" Images: type: "array" items: $ref: "#/definitions/ImageSummary" Containers: type: "array" items: $ref: "#/definitions/ContainerSummary" Volumes: type: "array" items: $ref: "#/definitions/Volume" BuildCache: type: "array" items: $ref: "#/definitions/BuildCache" example: LayersSize: 1092588 Images: - Id: "sha256:2b8fd9751c4c0f5dd266fcae00707e67a2545ef34f9a29354585f93dac906749" ParentId: "" RepoTags: - "busybox:latest" RepoDigests: - "busybox@sha256:a59906e33509d14c036c8678d687bd4eec81ed7c4b8ce907b888c607f6a1e0e6" Created: 1466724217 Size: 1092588 SharedSize: 0 VirtualSize: 1092588 Labels: {} Containers: 1 Containers: - Id: "e575172ed11dc01bfce087fb27bee502db149e1a0fad7c296ad300bbff178148" Names: - "/top" Image: "busybox" ImageID: "sha256:2b8fd9751c4c0f5dd266fcae00707e67a2545ef34f9a29354585f93dac906749" Command: "top" Created: 1472592424 Ports: [] SizeRootFs: 1092588 Labels: {} State: "exited" Status: "Exited (0) 56 minutes ago" HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: IPAMConfig: null Links: null Aliases: null NetworkID: "d687bc59335f0e5c9ee8193e5612e8aee000c8c62ea170cfb99c098f95899d92" EndpointID: "8ed5115aeaad9abb174f68dcf135b49f11daf597678315231a32ca28441dec6a" Gateway: "172.18.0.1" IPAddress: "172.18.0.2" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:12:00:02" Mounts: [] Volumes: - Name: "my-volume" Driver: "local" Mountpoint: "/var/lib/docker/volumes/my-volume/_data" Labels: null Scope: "local" Options: null UsageData: Size: 10920104 RefCount: 2 BuildCache: - ID: "hw53o5aio51xtltp5xjp8v7fx" Parent: "" Type: "regular" Description: "pulled from docker.io/library/debian@sha256:234cb88d3020898631af0ccbbcca9a66ae7306ecd30c9720690858c1b007d2a0" InUse: false Shared: true Size: 0 CreatedAt: "2021-06-28T13:31:01.474619385Z" LastUsedAt: "2021-07-07T22:02:32.738075951Z" UsageCount: 26 - ID: "ndlpt0hhvkqcdfkputsk4cq9c" Parent: "hw53o5aio51xtltp5xjp8v7fx" Type: "regular" Description: "mount / from exec /bin/sh -c echo 'Binary::apt::APT::Keep-Downloaded-Packages \"true\";' > /etc/apt/apt.conf.d/keep-cache" InUse: false Shared: true Size: 51 CreatedAt: "2021-06-28T13:31:03.002625487Z" LastUsedAt: "2021-07-07T22:02:32.773909517Z" UsageCount: 26 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "type" in: "query" description: | Object types, for which to compute and return data. 
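A minimal sketch of consuming the `/events` stream with the `filters` parameter listed above: the filter map is JSON-encoded into the query string and the response is read as a stream of JSON objects, one per event. The socket path and the chosen filters (`type=container`, `event=start|die`) are assumptions for illustration only.

```go
package main

import (
	"bufio"
	"context"
	"encoding/json"
	"fmt"
	"net"
	"net/http"
	"net/url"
)

func main() {
	cli := &http.Client{
		Transport: &http.Transport{
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				return (&net.Dialer{}).DialContext(ctx, "unix", "/var/run/docker.sock")
			},
		},
	}

	// filters is a map[string][]string, JSON-encoded into the query string.
	filters, _ := json.Marshal(map[string][]string{
		"type":  {"container"},
		"event": {"start", "die"},
	})
	q := url.Values{}
	q.Set("filters", string(filters))

	resp, err := cli.Get("http://localhost/events?" + q.Encode())
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// The daemon keeps the connection open and emits one JSON object per event.
	dec := json.NewDecoder(bufio.NewReader(resp.Body))
	for {
		var ev struct {
			Type   string `json:"Type"`
			Action string `json:"Action"`
			Actor  struct {
				ID         string            `json:"ID"`
				Attributes map[string]string `json:"Attributes"`
			} `json:"Actor"`
		}
		if err := dec.Decode(&ev); err != nil {
			break // connection closed or stream error
		}
		fmt.Printf("%s %s %s\n", ev.Type, ev.Action, ev.Actor.ID)
	}
}
```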
type: "array" collectionFormat: multi items: type: "string" enum: ["container", "image", "volume", "build-cache"] tags: ["System"] /images/{name}/get: get: summary: "Export an image" description: | Get a tarball containing all images and metadata for a repository. If `name` is a specific name and tag (e.g. `ubuntu:latest`), then only that image (and its parents) are returned. If `name` is an image ID, similarly only that image (and its parents) are returned, but with the exclusion of the `repositories` file in the tarball, as there were no image names referenced. ### Image tarball format An image tarball contains one directory per image layer (named using its long ID), each containing these files: - `VERSION`: currently `1.0` - the file format version - `json`: detailed layer information, similar to `docker inspect layer_id` - `layer.tar`: A tarfile containing the filesystem changes in this layer The `layer.tar` file contains `aufs` style `.wh..wh.aufs` files and directories for storing attribute changes and deletions. If the tarball defines a repository, the tarball should also include a `repositories` file at the root that contains a list of repository and tag names mapped to layer IDs. ```json { "hello-world": { "latest": "565a9d68a73f6706862bfe8409a7f659776d4d60a8d096eb4a3cbce6999cc2a1" } } ``` operationId: "ImageGet" produces: - "application/x-tar" responses: 200: description: "no error" schema: type: "string" format: "binary" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID" type: "string" required: true tags: ["Image"] /images/get: get: summary: "Export several images" description: | Get a tarball containing all images and metadata for several image repositories. For each value of the `names` parameter: if it is a specific name and tag (e.g. `ubuntu:latest`), then only that image (and its parents) are returned; if it is an image ID, similarly only that image (and its parents) are returned and there would be no names referenced in the 'repositories' file for this image ID. For details on the format, see the [export image endpoint](#operation/ImageGet). operationId: "ImageGetAll" produces: - "application/x-tar" responses: 200: description: "no error" schema: type: "string" format: "binary" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "names" in: "query" description: "Image names to filter by" type: "array" items: type: "string" tags: ["Image"] /images/load: post: summary: "Import images" description: | Load a set of images and tags into a repository. For details on the format, see the [export image endpoint](#operation/ImageGet). operationId: "ImageLoad" consumes: - "application/x-tar" produces: - "application/json" responses: 200: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "imagesTarball" in: "body" description: "Tar archive containing images" schema: type: "string" format: "binary" - name: "quiet" in: "query" description: "Suppress progress details during load." type: "boolean" default: false tags: ["Image"] /containers/{id}/exec: post: summary: "Create an exec instance" description: "Run a command inside a running container." 
operationId: "ContainerExec" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "container is paused" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "execConfig" in: "body" description: "Exec configuration" schema: type: "object" title: "ExecConfig" properties: AttachStdin: type: "boolean" description: "Attach to `stdin` of the exec command." AttachStdout: type: "boolean" description: "Attach to `stdout` of the exec command." AttachStderr: type: "boolean" description: "Attach to `stderr` of the exec command." DetachKeys: type: "string" description: | Override the key sequence for detaching a container. Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,` or `_`. Tty: type: "boolean" description: "Allocate a pseudo-TTY." Env: description: | A list of environment variables in the form `["VAR=value", ...]`. type: "array" items: type: "string" Cmd: type: "array" description: "Command to run, as a string or array of strings." items: type: "string" Privileged: type: "boolean" description: "Runs the exec process with extended privileges." default: false User: type: "string" description: | The user, and optionally, group to run the exec process inside the container. Format is one of: `user`, `user:group`, `uid`, or `uid:gid`. WorkingDir: type: "string" description: | The working directory for the exec process inside the container. example: AttachStdin: false AttachStdout: true AttachStderr: true DetachKeys: "ctrl-p,ctrl-q" Tty: false Cmd: - "date" Env: - "FOO=bar" - "BAZ=quux" required: true - name: "id" in: "path" description: "ID or name of container" type: "string" required: true tags: ["Exec"] /exec/{id}/start: post: summary: "Start an exec instance" description: | Starts a previously set up exec instance. If detach is true, this endpoint returns immediately after starting the command. Otherwise, it sets up an interactive session with the command. operationId: "ExecStart" consumes: - "application/json" produces: - "application/vnd.docker.raw-stream" responses: 200: description: "No error" 404: description: "No such exec instance" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Container is stopped or paused" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "execStartConfig" in: "body" schema: type: "object" title: "ExecStartConfig" properties: Detach: type: "boolean" description: "Detach from the command." Tty: type: "boolean" description: "Allocate a pseudo-TTY." example: Detach: false Tty: false - name: "id" in: "path" description: "Exec instance ID" required: true type: "string" tags: ["Exec"] /exec/{id}/resize: post: summary: "Resize an exec instance" description: | Resize the TTY session used by an exec instance. This endpoint only works if `tty` was specified as part of creating and starting the exec instance. 
operationId: "ExecResize" responses: 201: description: "No error" 404: description: "No such exec instance" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Exec instance ID" required: true type: "string" - name: "h" in: "query" description: "Height of the TTY session in characters" type: "integer" - name: "w" in: "query" description: "Width of the TTY session in characters" type: "integer" tags: ["Exec"] /exec/{id}/json: get: summary: "Inspect an exec instance" description: "Return low-level information about an exec instance." operationId: "ExecInspect" produces: - "application/json" responses: 200: description: "No error" schema: type: "object" title: "ExecInspectResponse" properties: CanRemove: type: "boolean" DetachKeys: type: "string" ID: type: "string" Running: type: "boolean" ExitCode: type: "integer" ProcessConfig: $ref: "#/definitions/ProcessConfig" OpenStdin: type: "boolean" OpenStderr: type: "boolean" OpenStdout: type: "boolean" ContainerID: type: "string" Pid: type: "integer" description: "The system process ID for the exec process." examples: application/json: CanRemove: false ContainerID: "b53ee82b53a40c7dca428523e34f741f3abc51d9f297a14ff874bf761b995126" DetachKeys: "" ExitCode: 2 ID: "f33bbfb39f5b142420f4759b2348913bd4a8d1a6d7fd56499cb41a1bb91d7b3b" OpenStderr: true OpenStdin: true OpenStdout: true ProcessConfig: arguments: - "-c" - "exit 2" entrypoint: "sh" privileged: false tty: true user: "1000" Running: false Pid: 42000 404: description: "No such exec instance" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Exec instance ID" required: true type: "string" tags: ["Exec"] /volumes: get: summary: "List volumes" operationId: "VolumeList" produces: ["application/json"] responses: 200: description: "Summary volume data that matches the query" schema: type: "object" title: "VolumeListResponse" description: "Volume list response" required: [Volumes, Warnings] properties: Volumes: type: "array" x-nullable: false description: "List of volumes" items: $ref: "#/definitions/Volume" Warnings: type: "array" x-nullable: false description: | Warnings that occurred when fetching the list of volumes. items: type: "string" examples: application/json: Volumes: - CreatedAt: "2017-07-19T12:00:26Z" Name: "tardis" Driver: "local" Mountpoint: "/var/lib/docker/volumes/tardis" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Scope: "local" Options: device: "tmpfs" o: "size=100m,uid=1000" type: "tmpfs" Warnings: [] 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" description: | JSON encoded value of the filters (a `map[string][]string`) to process on the volumes list. Available filters: - `dangling=<boolean>` When set to `true` (or `1`), returns all volumes that are not in use by a container. When set to `false` (or `0`), only volumes that are in use by one or more containers are returned. - `driver=<volume-driver-name>` Matches volumes based on their driver. - `label=<key>` or `label=<key>:<value>` Matches volumes based on the presence of a `label` alone or a `label` and a value. - `name=<volume-name>` Matches all or part of a volume name. 
type: "string" format: "json" tags: ["Volume"] /volumes/create: post: summary: "Create a volume" operationId: "VolumeCreate" consumes: ["application/json"] produces: ["application/json"] responses: 201: description: "The volume was created successfully" schema: $ref: "#/definitions/Volume" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "volumeConfig" in: "body" required: true description: "Volume configuration" schema: type: "object" description: "Volume configuration" title: "VolumeConfig" properties: Name: description: | The new volume's name. If not specified, Docker generates a name. type: "string" x-nullable: false Driver: description: "Name of the volume driver to use." type: "string" default: "local" x-nullable: false DriverOpts: description: | A mapping of driver options and values. These options are passed directly to the driver and are driver specific. type: "object" additionalProperties: type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" example: Name: "tardis" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Driver: "custom" tags: ["Volume"] /volumes/{name}: get: summary: "Inspect a volume" operationId: "VolumeInspect" produces: ["application/json"] responses: 200: description: "No error" schema: $ref: "#/definitions/Volume" 404: description: "No such volume" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" required: true description: "Volume name or ID" type: "string" tags: ["Volume"] delete: summary: "Remove a volume" description: "Instruct the driver to remove the volume." operationId: "VolumeDelete" responses: 204: description: "The volume was removed" 404: description: "No such volume or volume driver" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Volume is in use and cannot be removed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" required: true description: "Volume name or ID" type: "string" - name: "force" in: "query" description: "Force the removal of the volume" type: "boolean" default: false tags: ["Volume"] /volumes/prune: post: summary: "Delete unused volumes" produces: - "application/json" operationId: "VolumePrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune volumes with (or without, in case `label!=...` is used) the specified labels. type: "string" responses: 200: description: "No error" schema: type: "object" title: "VolumePruneResponse" properties: VolumesDeleted: description: "Volumes that were deleted" type: "array" items: type: "string" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Volume"] /networks: get: summary: "List networks" description: | Returns a list of networks. For details on the format, see the [network inspect endpoint](#operation/NetworkInspect). Note that it uses a different, smaller representation of a network than inspecting a single network. 
For example, the list of containers attached to the network is not propagated in API versions 1.28 and up. operationId: "NetworkList" produces: - "application/json" responses: 200: description: "No error" schema: type: "array" items: $ref: "#/definitions/Network" examples: application/json: - Name: "bridge" Id: "f2de39df4171b0dc801e8002d1d999b77256983dfc63041c0f34030aa3977566" Created: "2016-10-19T06:21:00.416543526Z" Scope: "local" Driver: "bridge" EnableIPv6: false Internal: false Attachable: false Ingress: false IPAM: Driver: "default" Config: - Subnet: "172.17.0.0/16" Options: com.docker.network.bridge.default_bridge: "true" com.docker.network.bridge.enable_icc: "true" com.docker.network.bridge.enable_ip_masquerade: "true" com.docker.network.bridge.host_binding_ipv4: "0.0.0.0" com.docker.network.bridge.name: "docker0" com.docker.network.driver.mtu: "1500" - Name: "none" Id: "e086a3893b05ab69242d3c44e49483a3bbbd3a26b46baa8f61ab797c1088d794" Created: "0001-01-01T00:00:00Z" Scope: "local" Driver: "null" EnableIPv6: false Internal: false Attachable: false Ingress: false IPAM: Driver: "default" Config: [] Containers: {} Options: {} - Name: "host" Id: "13e871235c677f196c4e1ecebb9dc733b9b2d2ab589e30c539efeda84a24215e" Created: "0001-01-01T00:00:00Z" Scope: "local" Driver: "host" EnableIPv6: false Internal: false Attachable: false Ingress: false IPAM: Driver: "default" Config: [] Containers: {} Options: {} 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" description: | JSON encoded value of the filters (a `map[string][]string`) to process on the networks list. Available filters: - `dangling=<boolean>` When set to `true` (or `1`), returns all networks that are not in use by a container. When set to `false` (or `0`), only networks that are in use by one or more containers are returned. - `driver=<driver-name>` Matches a network's driver. - `id=<network-id>` Matches all or part of a network ID. - `label=<key>` or `label=<key>=<value>` of a network label. - `name=<network-name>` Matches all or part of a network name. - `scope=["swarm"|"global"|"local"]` Filters networks by scope (`swarm`, `global`, or `local`). - `type=["custom"|"builtin"]` Filters networks by type. The `custom` keyword returns all user-defined networks. 
type: "string" tags: ["Network"] /networks/{id}: get: summary: "Inspect a network" operationId: "NetworkInspect" produces: - "application/json" responses: 200: description: "No error" schema: $ref: "#/definitions/Network" 404: description: "Network not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" - name: "verbose" in: "query" description: "Detailed inspect output for troubleshooting" type: "boolean" default: false - name: "scope" in: "query" description: "Filter the network by scope (swarm, global, or local)" type: "string" tags: ["Network"] delete: summary: "Remove a network" operationId: "NetworkDelete" responses: 204: description: "No error" 403: description: "operation not supported for pre-defined networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such network" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" tags: ["Network"] /networks/create: post: summary: "Create a network" operationId: "NetworkCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "No error" schema: type: "object" title: "NetworkCreateResponse" properties: Id: description: "The ID of the created network." type: "string" Warning: type: "string" example: Id: "22be93d5babb089c5aab8dbc369042fad48ff791584ca2da2100db837a1c7c30" Warning: "" 403: description: "operation not supported for pre-defined networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "plugin not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "networkConfig" in: "body" description: "Network configuration" required: true schema: type: "object" title: "NetworkCreateRequest" required: ["Name"] properties: Name: description: "The network's name." type: "string" CheckDuplicate: description: | Check for networks with duplicate names. Since Network is primarily keyed based on a random ID and not on the name, and network name is strictly a user-friendly alias to the network which is uniquely identified using ID, there is no guaranteed way to check for duplicates. CheckDuplicate is there to provide a best effort checking of any networks which has the same name but it is not guaranteed to catch all name collisions. type: "boolean" Driver: description: "Name of the network driver plugin to use." type: "string" default: "bridge" Internal: description: "Restrict external access to the network." type: "boolean" Attachable: description: | Globally scoped network is manually attachable by regular containers from workers in swarm mode. type: "boolean" Ingress: description: | Ingress network is the network which provides the routing-mesh in swarm mode. type: "boolean" IPAM: description: "Optional custom IP scheme for the network." $ref: "#/definitions/IPAM" EnableIPv6: description: "Enable IPv6 on the network." type: "boolean" Options: description: "Network specific options to be used by the drivers." type: "object" additionalProperties: type: "string" Labels: description: "User-defined key/value metadata." 
type: "object" additionalProperties: type: "string" example: Name: "isolated_nw" CheckDuplicate: false Driver: "bridge" EnableIPv6: true IPAM: Driver: "default" Config: - Subnet: "172.20.0.0/16" IPRange: "172.20.10.0/24" Gateway: "172.20.10.11" - Subnet: "2001:db8:abcd::/64" Gateway: "2001:db8:abcd::1011" Options: foo: "bar" Internal: true Attachable: false Ingress: false Options: com.docker.network.bridge.default_bridge: "true" com.docker.network.bridge.enable_icc: "true" com.docker.network.bridge.enable_ip_masquerade: "true" com.docker.network.bridge.host_binding_ipv4: "0.0.0.0" com.docker.network.bridge.name: "docker0" com.docker.network.driver.mtu: "1500" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" tags: ["Network"] /networks/{id}/connect: post: summary: "Connect a container to a network" operationId: "NetworkConnect" consumes: - "application/json" responses: 200: description: "No error" 403: description: "Operation not supported for swarm scoped networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "Network or container not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" - name: "container" in: "body" required: true schema: type: "object" title: "NetworkConnectRequest" properties: Container: type: "string" description: "The ID or name of the container to connect to the network." EndpointConfig: $ref: "#/definitions/EndpointSettings" example: Container: "3613f73ba0e4" EndpointConfig: IPAMConfig: IPv4Address: "172.24.56.89" IPv6Address: "2001:db8::5689" tags: ["Network"] /networks/{id}/disconnect: post: summary: "Disconnect a container from a network" operationId: "NetworkDisconnect" consumes: - "application/json" responses: 200: description: "No error" 403: description: "Operation not supported for swarm scoped networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "Network or container not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" - name: "container" in: "body" required: true schema: type: "object" title: "NetworkDisconnectRequest" properties: Container: type: "string" description: | The ID or name of the container to disconnect from the network. Force: type: "boolean" description: | Force the container to disconnect from the network. tags: ["Network"] /networks/prune: post: summary: "Delete unused networks" produces: - "application/json" operationId: "NetworkPrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `until=<timestamp>` Prune networks created before this timestamp. The `<timestamp>` can be Unix timestamps, date formatted timestamps, or Go duration strings (e.g. `10m`, `1h30m`) computed relative to the daemon machine’s time. - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune networks with (or without, in case `label!=...` is used) the specified labels. 
type: "string" responses: 200: description: "No error" schema: type: "object" title: "NetworkPruneResponse" properties: NetworksDeleted: description: "Networks that were deleted" type: "array" items: type: "string" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Network"] /plugins: get: summary: "List plugins" operationId: "PluginList" description: "Returns information about installed plugins." produces: ["application/json"] responses: 200: description: "No error" schema: type: "array" items: $ref: "#/definitions/Plugin" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the plugin list. Available filters: - `capability=<capability name>` - `enable=<true>|<false>` tags: ["Plugin"] /plugins/privileges: get: summary: "Get plugin privileges" operationId: "GetPluginPrivileges" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/PluginPrivilegeItem" example: - Name: "network" Description: "" Value: - "host" - Name: "mount" Description: "" Value: - "/data" - Name: "device" Description: "" Value: - "/dev/cpu_dma_latency" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "remote" in: "query" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" tags: - "Plugin" /plugins/pull: post: summary: "Install a plugin" operationId: "PluginPull" description: | Pulls and installs a plugin. After the plugin is installed, it can be enabled using the [`POST /plugins/{name}/enable` endpoint](#operation/PostPluginsEnable). produces: - "application/json" responses: 204: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "remote" in: "query" description: | Remote reference for plugin to install. The `:latest` tag is optional, and is used as the default if omitted. required: true type: "string" - name: "name" in: "query" description: | Local name for the pulled plugin. The `:latest` tag is optional, and is used as the default if omitted. required: false type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration to use when pulling a plugin from a registry. Refer to the [authentication section](#section/Authentication) for details. type: "string" - name: "body" in: "body" schema: type: "array" items: $ref: "#/definitions/PluginPrivilegeItem" example: - Name: "network" Description: "" Value: - "host" - Name: "mount" Description: "" Value: - "/data" - Name: "device" Description: "" Value: - "/dev/cpu_dma_latency" tags: ["Plugin"] /plugins/{name}/json: get: summary: "Inspect a plugin" operationId: "PluginInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Plugin" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. 
required: true type: "string" tags: ["Plugin"] /plugins/{name}: delete: summary: "Remove a plugin" operationId: "PluginDelete" responses: 200: description: "no error" schema: $ref: "#/definitions/Plugin" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "force" in: "query" description: | Disable the plugin before removing. This may result in issues if the plugin is in use by a container. type: "boolean" default: false tags: ["Plugin"] /plugins/{name}/enable: post: summary: "Enable a plugin" operationId: "PluginEnable" responses: 200: description: "no error" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "timeout" in: "query" description: "Set the HTTP client timeout (in seconds)" type: "integer" default: 0 tags: ["Plugin"] /plugins/{name}/disable: post: summary: "Disable a plugin" operationId: "PluginDisable" responses: 200: description: "no error" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" tags: ["Plugin"] /plugins/{name}/upgrade: post: summary: "Upgrade a plugin" operationId: "PluginUpgrade" responses: 204: description: "no error" 404: description: "plugin not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "remote" in: "query" description: | Remote reference to upgrade to. The `:latest` tag is optional, and is used as the default if omitted. required: true type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration to use when pulling a plugin from a registry. Refer to the [authentication section](#section/Authentication) for details. type: "string" - name: "body" in: "body" schema: type: "array" items: $ref: "#/definitions/PluginPrivilegeItem" example: - Name: "network" Description: "" Value: - "host" - Name: "mount" Description: "" Value: - "/data" - Name: "device" Description: "" Value: - "/dev/cpu_dma_latency" tags: ["Plugin"] /plugins/create: post: summary: "Create a plugin" operationId: "PluginCreate" consumes: - "application/x-tar" responses: 204: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "query" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. 
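The plugin endpoints above are normally used as a sequence: fetch the privileges the plugin requests, grant them when pulling, then enable the plugin. The sketch below strings those calls together; the plugin reference `vieux/sshfs:latest` is only an example of a published plugin name, and the socket path is assumed.

```go
package main

import (
	"bytes"
	"context"
	"fmt"
	"io"
	"net"
	"net/http"
	"net/url"
)

func main() {
	cli := &http.Client{
		Transport: &http.Transport{
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				return (&net.Dialer{}).DialContext(ctx, "unix", "/var/run/docker.sock")
			},
		},
	}
	ref := "vieux/sshfs:latest" // example plugin reference

	// 1. GET /plugins/privileges?remote=... lists what the plugin asks for.
	privResp, err := cli.Get("http://localhost/plugins/privileges?remote=" + url.QueryEscape(ref))
	if err != nil {
		panic(err)
	}
	privileges, _ := io.ReadAll(privResp.Body) // JSON array of PluginPrivilegeItem
	privResp.Body.Close()

	// 2. POST /plugins/pull, echoing the privileges back as the body to grant
	//    them (an interactive client would show them to the user first).
	q := url.Values{}
	q.Set("remote", ref)
	pull, err := cli.Post("http://localhost/plugins/pull?"+q.Encode(),
		"application/json", bytes.NewReader(privileges))
	if err != nil {
		panic(err)
	}
	io.Copy(io.Discard, pull.Body) // make sure the pull has fully completed
	pull.Body.Close()

	// 3. POST /plugins/{name}/enable activates the installed plugin.
	enable, err := cli.Post("http://localhost/plugins/"+ref+"/enable?timeout=30",
		"application/json", nil)
	if err != nil {
		panic(err)
	}
	enable.Body.Close()
	fmt.Println("enable:", enable.Status)
}
```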
required: true type: "string" - name: "tarContext" in: "body" description: "Path to tar containing plugin rootfs and manifest" schema: type: "string" format: "binary" tags: ["Plugin"] /plugins/{name}/push: post: summary: "Push a plugin" operationId: "PluginPush" description: | Push a plugin to the registry. parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" responses: 200: description: "no error" 404: description: "plugin not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Plugin"] /plugins/{name}/set: post: summary: "Configure a plugin" operationId: "PluginSet" consumes: - "application/json" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "body" in: "body" schema: type: "array" items: type: "string" example: ["DEBUG=1"] responses: 204: description: "No error" 404: description: "Plugin not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Plugin"] /nodes: get: summary: "List nodes" operationId: "NodeList" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Node" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" description: | Filters to process on the nodes list, encoded as JSON (a `map[string][]string`). Available filters: - `id=<node id>` - `label=<engine label>` - `membership=`(`accepted`|`pending`)` - `name=<node name>` - `node.label=<node label>` - `role=`(`manager`|`worker`)` type: "string" tags: ["Node"] /nodes/{id}: get: summary: "Inspect a node" operationId: "NodeInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Node" 404: description: "no such node" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the node" type: "string" required: true tags: ["Node"] delete: summary: "Delete a node" operationId: "NodeDelete" responses: 200: description: "no error" 404: description: "no such node" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the node" type: "string" required: true - name: "force" in: "query" description: "Force remove a node from the swarm" default: false type: "boolean" tags: ["Node"] /nodes/{id}/update: post: summary: "Update a node" operationId: "NodeUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such node" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID of 
the node" type: "string" required: true - name: "body" in: "body" schema: $ref: "#/definitions/NodeSpec" - name: "version" in: "query" description: | The version number of the node object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true tags: ["Node"] /swarm: get: summary: "Inspect swarm" operationId: "SwarmInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Swarm" 404: description: "no such swarm" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" tags: ["Swarm"] /swarm/init: post: summary: "Initialize a new swarm" operationId: "SwarmInit" produces: - "application/json" - "text/plain" responses: 200: description: "no error" schema: description: "The node ID" type: "string" example: "7v2t30z9blmxuhnyo6s4cpenp" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is already part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: type: "object" title: "SwarmInitRequest" properties: ListenAddr: description: | Listen address used for inter-manager communication, as well as determining the networking interface used for the VXLAN Tunnel Endpoint (VTEP). This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the default swarm listening port is used. type: "string" AdvertiseAddr: description: | Externally reachable address advertised to other nodes. This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the port number from the listen address is used. If `AdvertiseAddr` is not specified, it will be automatically detected when possible. type: "string" DataPathAddr: description: | Address or interface to use for data path traffic (format: `<ip|interface>`), for example, `192.168.1.1`, or an interface, like `eth0`. If `DataPathAddr` is unspecified, the same address as `AdvertiseAddr` is used. The `DataPathAddr` specifies the address that global scope network drivers will publish towards other nodes in order to reach the containers running on this node. Using this parameter it is possible to separate the container data traffic from the management traffic of the cluster. type: "string" DataPathPort: description: | DataPathPort specifies the data path port number for data traffic. Acceptable port range is 1024 to 49151. if no port is set or is set to 0, default port 4789 will be used. type: "integer" format: "uint32" DefaultAddrPool: description: | Default Address Pool specifies default subnet pools for global scope networks. type: "array" items: type: "string" example: ["10.10.0.0/16", "20.20.0.0/16"] ForceNewCluster: description: "Force creation of a new swarm." type: "boolean" SubnetSize: description: | SubnetSize specifies the subnet size of the networks created from the default subnet pool. 
type: "integer" format: "uint32" Spec: $ref: "#/definitions/SwarmSpec" example: ListenAddr: "0.0.0.0:2377" AdvertiseAddr: "192.168.1.1:2377" DataPathPort: 4789 DefaultAddrPool: ["10.10.0.0/8", "20.20.0.0/8"] SubnetSize: 24 ForceNewCluster: false Spec: Orchestration: {} Raft: {} Dispatcher: {} CAConfig: {} EncryptionConfig: AutoLockManagers: false tags: ["Swarm"] /swarm/join: post: summary: "Join an existing swarm" operationId: "SwarmJoin" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is already part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: type: "object" title: "SwarmJoinRequest" properties: ListenAddr: description: | Listen address used for inter-manager communication if the node gets promoted to manager, as well as determining the networking interface used for the VXLAN Tunnel Endpoint (VTEP). type: "string" AdvertiseAddr: description: | Externally reachable address advertised to other nodes. This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the port number from the listen address is used. If `AdvertiseAddr` is not specified, it will be automatically detected when possible. type: "string" DataPathAddr: description: | Address or interface to use for data path traffic (format: `<ip|interface>`), for example, `192.168.1.1`, or an interface, like `eth0`. If `DataPathAddr` is unspecified, the same address as `AdvertiseAddr` is used. The `DataPathAddr` specifies the address that global scope network drivers will publish towards other nodes in order to reach the containers running on this node. Using this parameter it is possible to separate the container data traffic from the management traffic of the cluster. type: "string" RemoteAddrs: description: | Addresses of manager nodes already participating in the swarm. type: "array" items: type: "string" JoinToken: description: "Secret token for joining this swarm." type: "string" example: ListenAddr: "0.0.0.0:2377" AdvertiseAddr: "192.168.1.1:2377" RemoteAddrs: - "node1:2377" JoinToken: "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-7p73s1dx5in4tatdymyhg9hu2" tags: ["Swarm"] /swarm/leave: post: summary: "Leave a swarm" operationId: "SwarmLeave" responses: 200: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "force" description: | Force leave swarm, even if this is the last manager or that it will break the cluster. in: "query" type: "boolean" default: false tags: ["Swarm"] /swarm/update: post: summary: "Update a swarm" operationId: "SwarmUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: $ref: "#/definitions/SwarmSpec" - name: "version" in: "query" description: | The version number of the swarm object being updated. This is required to avoid conflicting writes. 
type: "integer" format: "int64" required: true - name: "rotateWorkerToken" in: "query" description: "Rotate the worker join token." type: "boolean" default: false - name: "rotateManagerToken" in: "query" description: "Rotate the manager join token." type: "boolean" default: false - name: "rotateManagerUnlockKey" in: "query" description: "Rotate the manager unlock key." type: "boolean" default: false tags: ["Swarm"] /swarm/unlockkey: get: summary: "Get the unlock key" operationId: "SwarmUnlockkey" consumes: - "application/json" responses: 200: description: "no error" schema: type: "object" title: "UnlockKeyResponse" properties: UnlockKey: description: "The swarm's unlock key." type: "string" example: UnlockKey: "SWMKEY-1-7c37Cc8654o6p38HnroywCi19pllOnGtbdZEgtKxZu8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" tags: ["Swarm"] /swarm/unlock: post: summary: "Unlock a locked manager" operationId: "SwarmUnlock" consumes: - "application/json" produces: - "application/json" parameters: - name: "body" in: "body" required: true schema: type: "object" title: "SwarmUnlockRequest" properties: UnlockKey: description: "The swarm's unlock key." type: "string" example: UnlockKey: "SWMKEY-1-7c37Cc8654o6p38HnroywCi19pllOnGtbdZEgtKxZu8" responses: 200: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" tags: ["Swarm"] /services: get: summary: "List services" operationId: "ServiceList" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Service" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the services list. Available filters: - `id=<service id>` - `label=<service label>` - `mode=["replicated"|"global"]` - `name=<service name>` - name: "status" in: "query" type: "boolean" description: | Include service status, with count of running and desired tasks. tags: ["Service"] /services/create: post: summary: "Create a service" operationId: "ServiceCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: type: "object" title: "ServiceCreateResponse" properties: ID: description: "The ID of the created service." 
type: "string" Warning: description: "Optional warning message" type: "string" example: ID: "ak7w3gjqoa3kuz8xcpnyy0pvl" Warning: "unable to pin image doesnotexist:latest to digest: image library/doesnotexist:latest not found" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 403: description: "network is not eligible for services" schema: $ref: "#/definitions/ErrorResponse" 409: description: "name conflicts with an existing service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: allOf: - $ref: "#/definitions/ServiceSpec" - type: "object" example: Name: "web" TaskTemplate: ContainerSpec: Image: "nginx:alpine" Mounts: - ReadOnly: true Source: "web-data" Target: "/usr/share/nginx/html" Type: "volume" VolumeOptions: DriverConfig: {} Labels: com.example.something: "something-value" Hosts: ["10.10.10.10 host1", "ABCD:EF01:2345:6789:ABCD:EF01:2345:6789 host2"] User: "33" DNSConfig: Nameservers: ["8.8.8.8"] Search: ["example.org"] Options: ["timeout:3"] Secrets: - File: Name: "www.example.org.key" UID: "33" GID: "33" Mode: 384 SecretID: "fpjqlhnwb19zds35k8wn80lq9" SecretName: "example_org_domain_key" LogDriver: Name: "json-file" Options: max-file: "3" max-size: "10M" Placement: {} Resources: Limits: MemoryBytes: 104857600 Reservations: {} RestartPolicy: Condition: "on-failure" Delay: 10000000000 MaxAttempts: 10 Mode: Replicated: Replicas: 4 UpdateConfig: Parallelism: 2 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 RollbackConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 EndpointSpec: Ports: - Protocol: "tcp" PublishedPort: 8080 TargetPort: 80 Labels: foo: "bar" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration for pulling from private registries. Refer to the [authentication section](#section/Authentication) for details. type: "string" tags: ["Service"] /services/{id}: get: summary: "Inspect a service" operationId: "ServiceInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Service" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID or name of service." required: true type: "string" - name: "insertDefaults" in: "query" description: "Fill empty fields with default values." type: "boolean" default: false tags: ["Service"] delete: summary: "Delete a service" operationId: "ServiceDelete" responses: 200: description: "no error" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID or name of service." 
required: true type: "string" tags: ["Service"] /services/{id}/update: post: summary: "Update a service" operationId: "ServiceUpdate" consumes: ["application/json"] produces: ["application/json"] responses: 200: description: "no error" schema: $ref: "#/definitions/ServiceUpdateResponse" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID or name of service." required: true type: "string" - name: "body" in: "body" required: true schema: allOf: - $ref: "#/definitions/ServiceSpec" - type: "object" example: Name: "top" TaskTemplate: ContainerSpec: Image: "busybox" Args: - "top" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ForceUpdate: 0 Mode: Replicated: Replicas: 1 UpdateConfig: Parallelism: 2 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 RollbackConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 EndpointSpec: Mode: "vip" - name: "version" in: "query" description: | The version number of the service object being updated. This is required to avoid conflicting writes. This version number should be the value as currently set on the service *before* the update. You can find the current version by calling `GET /services/{id}` required: true type: "integer" - name: "registryAuthFrom" in: "query" description: | If the `X-Registry-Auth` header is not specified, this parameter indicates where to find registry authorization credentials. type: "string" enum: ["spec", "previous-spec"] default: "spec" - name: "rollback" in: "query" description: | Set to this parameter to `previous` to cause a server-side rollback to the previous service spec. The supplied spec will be ignored in this case. type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration for pulling from private registries. Refer to the [authentication section](#section/Authentication) for details. type: "string" tags: ["Service"] /services/{id}/logs: get: summary: "Get service logs" description: | Get `stdout` and `stderr` logs from a service. See also [`/containers/{id}/logs`](#operation/ContainerLogs). **Note**: This endpoint works only for services with the `local`, `json-file` or `journald` logging drivers. operationId: "ServiceLogs" responses: 200: description: "logs returned as a stream in response body" schema: type: "string" format: "binary" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such service: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the service" type: "string" - name: "details" in: "query" description: "Show service context and extra details provided to logs." type: "boolean" default: false - name: "follow" in: "query" description: "Keep connection after returning logs." 
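Because the `version` query parameter above guards against conflicting writes, updating a service is a read-modify-write cycle: inspect the service, reuse its current `Spec` and `Version.Index`, then post the changed spec. The sketch below scales an assumed service named `web` to three replicas; the socket path is an assumption as well.

```go
package main

import (
	"bytes"
	"context"
	"encoding/json"
	"fmt"
	"net"
	"net/http"
	"strconv"
)

func main() {
	cli := &http.Client{
		Transport: &http.Transport{
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				return (&net.Dialer{}).DialContext(ctx, "unix", "/var/run/docker.sock")
			},
		},
	}

	// 1. GET /services/{id} to learn the current spec and version.
	resp, err := cli.Get("http://localhost/services/web")
	if err != nil {
		panic(err)
	}
	var svc struct {
		ID      string
		Version struct{ Index uint64 }
		Spec    map[string]interface{}
	}
	json.NewDecoder(resp.Body).Decode(&svc)
	resp.Body.Close()

	// 2. Tweak the spec in place: scale to three replicas.
	svc.Spec["Mode"] = map[string]interface{}{
		"Replicated": map[string]interface{}{"Replicas": 3},
	}
	body, _ := json.Marshal(svc.Spec)

	// 3. POST /services/{id}/update?version=<Index> with the full spec.
	upd, err := cli.Post(
		"http://localhost/services/"+svc.ID+"/update?version="+strconv.FormatUint(svc.Version.Index, 10),
		"application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer upd.Body.Close()
	fmt.Println("update:", upd.Status) // 200, with any warnings in the response body
}
```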
type: "boolean" default: false - name: "stdout" in: "query" description: "Return logs from `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Return logs from `stderr`" type: "boolean" default: false - name: "since" in: "query" description: "Only return logs since this time, as a UNIX timestamp" type: "integer" default: 0 - name: "timestamps" in: "query" description: "Add timestamps to every log line" type: "boolean" default: false - name: "tail" in: "query" description: | Only return this number of log lines from the end of the logs. Specify as an integer or `all` to output all log lines. type: "string" default: "all" tags: ["Service"] /tasks: get: summary: "List tasks" operationId: "TaskList" produces: - "application/json" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Task" example: - ID: "0kzzo1i0y4jz6027t0k7aezc7" Version: Index: 71 CreatedAt: "2016-06-07T21:07:31.171892745Z" UpdatedAt: "2016-06-07T21:07:31.376370513Z" Spec: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ServiceID: "9mnpnzenvg8p8tdbtq4wvbkcz" Slot: 1 NodeID: "60gvrl6tm78dmak4yl7srz94v" Status: Timestamp: "2016-06-07T21:07:31.290032978Z" State: "running" Message: "started" ContainerStatus: ContainerID: "e5d62702a1b48d01c3e02ca1e0212a250801fa8d67caca0b6f35919ebc12f035" PID: 677 DesiredState: "running" NetworksAttachments: - Network: ID: "4qvuz4ko70xaltuqbt8956gd1" Version: Index: 18 CreatedAt: "2016-06-07T20:31:11.912919752Z" UpdatedAt: "2016-06-07T21:07:29.955277358Z" Spec: Name: "ingress" Labels: com.docker.swarm.internal: "true" DriverConfiguration: {} IPAMOptions: Driver: {} Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" DriverState: Name: "overlay" Options: com.docker.network.driver.overlay.vxlanid_list: "256" IPAMOptions: Driver: Name: "default" Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" Addresses: - "10.255.0.10/16" - ID: "1yljwbmlr8er2waf8orvqpwms" Version: Index: 30 CreatedAt: "2016-06-07T21:07:30.019104782Z" UpdatedAt: "2016-06-07T21:07:30.231958098Z" Name: "hopeful_cori" Spec: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ServiceID: "9mnpnzenvg8p8tdbtq4wvbkcz" Slot: 1 NodeID: "60gvrl6tm78dmak4yl7srz94v" Status: Timestamp: "2016-06-07T21:07:30.202183143Z" State: "shutdown" Message: "shutdown" ContainerStatus: ContainerID: "1cf8d63d18e79668b0004a4be4c6ee58cddfad2dae29506d8781581d0688a213" DesiredState: "shutdown" NetworksAttachments: - Network: ID: "4qvuz4ko70xaltuqbt8956gd1" Version: Index: 18 CreatedAt: "2016-06-07T20:31:11.912919752Z" UpdatedAt: "2016-06-07T21:07:29.955277358Z" Spec: Name: "ingress" Labels: com.docker.swarm.internal: "true" DriverConfiguration: {} IPAMOptions: Driver: {} Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" DriverState: Name: "overlay" Options: com.docker.network.driver.overlay.vxlanid_list: "256" IPAMOptions: Driver: Name: "default" Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" Addresses: - "10.255.0.5/16" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the tasks list. 
Available filters: - `desired-state=(running | shutdown | accepted)` - `id=<task id>` - `label=key` or `label="key=value"` - `name=<task name>` - `node=<node id or name>` - `service=<service name>` tags: ["Task"] /tasks/{id}: get: summary: "Inspect a task" operationId: "TaskInspect" produces: - "application/json" responses: 200: description: "no error" schema: $ref: "#/definitions/Task" 404: description: "no such task" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID of the task" required: true type: "string" tags: ["Task"] /tasks/{id}/logs: get: summary: "Get task logs" description: | Get `stdout` and `stderr` logs from a task. See also [`/containers/{id}/logs`](#operation/ContainerLogs). **Note**: This endpoint works only for services with the `local`, `json-file` or `journald` logging drivers. operationId: "TaskLogs" responses: 200: description: "logs returned as a stream in response body" schema: type: "string" format: "binary" 404: description: "no such task" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such task: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID of the task" type: "string" - name: "details" in: "query" description: "Show task context and extra details provided to logs." type: "boolean" default: false - name: "follow" in: "query" description: "Keep connection after returning logs." type: "boolean" default: false - name: "stdout" in: "query" description: "Return logs from `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Return logs from `stderr`" type: "boolean" default: false - name: "since" in: "query" description: "Only return logs since this time, as a UNIX timestamp" type: "integer" default: 0 - name: "timestamps" in: "query" description: "Add timestamps to every log line" type: "boolean" default: false - name: "tail" in: "query" description: | Only return this number of log lines from the end of the logs. Specify as an integer or `all` to output all log lines. type: "string" default: "all" tags: ["Task"] /secrets: get: summary: "List secrets" operationId: "SecretList" produces: - "application/json" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Secret" example: - ID: "blt1owaxmitz71s9v5zh81zun" Version: Index: 85 CreatedAt: "2017-07-20T13:55:28.678958722Z" UpdatedAt: "2017-07-20T13:55:28.678958722Z" Spec: Name: "mysql-passwd" Labels: some.label: "some.value" Driver: Name: "secret-bucket" Options: OptionA: "value for driver option A" OptionB: "value for driver option B" - ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "app-dev.crt" Labels: foo: "bar" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the secrets list. 
Available filters: - `id=<secret id>` - `label=<key> or label=<key>=value` - `name=<secret name>` - `names=<secret name>` tags: ["Secret"] /secrets/create: post: summary: "Create a secret" operationId: "SecretCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 409: description: "name conflicts with an existing object" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" schema: allOf: - $ref: "#/definitions/SecretSpec" - type: "object" example: Name: "app-key.crt" Labels: foo: "bar" Data: "VEhJUyBJUyBOT1QgQSBSRUFMIENFUlRJRklDQVRFCg==" Driver: Name: "secret-bucket" Options: OptionA: "value for driver option A" OptionB: "value for driver option B" tags: ["Secret"] /secrets/{id}: get: summary: "Inspect a secret" operationId: "SecretInspect" produces: - "application/json" responses: 200: description: "no error" schema: $ref: "#/definitions/Secret" examples: application/json: ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "app-dev.crt" Labels: foo: "bar" Driver: Name: "secret-bucket" Options: OptionA: "value for driver option A" OptionB: "value for driver option B" 404: description: "secret not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the secret" tags: ["Secret"] delete: summary: "Delete a secret" operationId: "SecretDelete" produces: - "application/json" responses: 204: description: "no error" 404: description: "secret not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the secret" tags: ["Secret"] /secrets/{id}/update: post: summary: "Update a Secret" operationId: "SecretUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such secret" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the secret" type: "string" required: true - name: "body" in: "body" schema: $ref: "#/definitions/SecretSpec" description: | The spec of the secret to update. Currently, only the Labels field can be updated. All other fields must remain unchanged from the [SecretInspect endpoint](#operation/SecretInspect) response values. - name: "version" in: "query" description: | The version number of the secret object being updated. This is required to avoid conflicting writes. 
type: "integer" format: "int64" required: true tags: ["Secret"] /configs: get: summary: "List configs" operationId: "ConfigList" produces: - "application/json" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Config" example: - ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "server.conf" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the configs list. Available filters: - `id=<config id>` - `label=<key> or label=<key>=value` - `name=<config name>` - `names=<config name>` tags: ["Config"] /configs/create: post: summary: "Create a config" operationId: "ConfigCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 409: description: "name conflicts with an existing object" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" schema: allOf: - $ref: "#/definitions/ConfigSpec" - type: "object" example: Name: "server.conf" Labels: foo: "bar" Data: "VEhJUyBJUyBOT1QgQSBSRUFMIENFUlRJRklDQVRFCg==" tags: ["Config"] /configs/{id}: get: summary: "Inspect a config" operationId: "ConfigInspect" produces: - "application/json" responses: 200: description: "no error" schema: $ref: "#/definitions/Config" examples: application/json: ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "app-dev.crt" 404: description: "config not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the config" tags: ["Config"] delete: summary: "Delete a config" operationId: "ConfigDelete" produces: - "application/json" responses: 204: description: "no error" 404: description: "config not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the config" tags: ["Config"] /configs/{id}/update: post: summary: "Update a Config" operationId: "ConfigUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such config" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the config" type: "string" required: true - name: "body" in: "body" schema: $ref: "#/definitions/ConfigSpec" description: | The spec of the config to update. 
Currently, only the Labels field can be updated. All other fields must remain unchanged from the [ConfigInspect endpoint](#operation/ConfigInspect) response values. - name: "version" in: "query" description: | The version number of the config object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true tags: ["Config"] /distribution/{name}/json: get: summary: "Get image information from the registry" description: | Return image digest and platform information by contacting the registry. operationId: "DistributionInspect" produces: - "application/json" responses: 200: description: "descriptor and platform information" schema: type: "object" x-go-name: DistributionInspect title: "DistributionInspectResponse" required: [Descriptor, Platforms] properties: Descriptor: type: "object" description: | A descriptor struct containing digest, media type, and size. properties: mediaType: type: "string" size: type: "integer" format: "int64" digest: type: "string" urls: type: "array" items: type: "string" Platforms: type: "array" description: | An array containing all platforms supported by the image. items: type: "object" properties: architecture: type: "string" os: type: "string" os.version: type: "string" os.features: type: "array" items: type: "string" variant: type: "string" Features: type: "array" items: type: "string" examples: application/json: Descriptor: MediaType: "application/vnd.docker.distribution.manifest.v2+json" Digest: "sha256:c0537ff6a5218ef531ece93d4984efc99bbf3f7497c0a7726c88e2bb7584dc96" Size: 3987495 URLs: - "" Platforms: - architecture: "amd64" os: "linux" os.version: "" os.features: - "" variant: "" Features: - "" 401: description: "Failed authentication or no image found" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such image: someimage (tag: latest)" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or id" type: "string" required: true tags: ["Distribution"] /session: post: summary: "Initialize interactive session" description: | Start a new interactive session with a server. Session allows server to call back to the client for advanced capabilities. ### Hijacking This endpoint hijacks the HTTP connection to HTTP2 transport that allows the client to expose gPRC services on that connection. For example, the client sends this request to upgrade the connection: ``` POST /session HTTP/1.1 Upgrade: h2c Connection: Upgrade ``` The Docker daemon responds with a `101 UPGRADED` response follow with the raw stream: ``` HTTP/1.1 101 UPGRADED Connection: Upgrade Upgrade: h2c ``` operationId: "Session" produces: - "application/vnd.docker.raw-stream" responses: 101: description: "no error, hijacking successful" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Session"]
gesellix
41568dfc665ad4033d7fc860a5dac8102915008e
9bc0c4903f7f02ce287b9918f64795368e507f9d
I _think_ `x-nullable: false` would be the default, or is it not? If so, I guess it can be removed 🤔
thaJeztah
4,556
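The review comment above asks whether `x-nullable: false` is already the generator default. Below is a minimal, purely illustrative YAML sketch of what dropping the explicit flag would look like, assuming the default really is non-nullable as the comment suggests; the object and property names are hypothetical and not taken from swagger.yaml.

```yaml
# Hypothetical definition, for illustration only.
ExampleSpec:
  type: "object"
  properties:
    Name:
      type: "string"
      x-nullable: false   # explicit flag, as used throughout swagger.yaml
    Comment:
      type: "string"      # flag omitted; equivalent only if the default is indeed false
```

If the default turned out to be nullable instead, removing the flag would change the generated client types, so checking the generator output (as was done for this PR) would settle the question.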
moby/moby
42,621
Improve swagger.yaml to match the 1.41 api version
I recently tried to rely on the swagger.yaml for code generation of an api client. Some parts required manual fixes of the swagger docs to match the current api version 1.41. Each change described below lives in a dedicated commit, because they aim at different parts of the api description and aren't related per se. If you'd prefer to merge them all into a single commit, please leave a note.

**- What I did**

1. Add RestartPolicy "no" to swagger docs (a sketch follows below)
2. Add "changes" query parameter for /image/create to swagger docs
3. Fix ContainerSummary swagger docs (flattened)
4. Use explicit object names for improved swagger-based code generation (otherwise generic names would have been generated)
5. Fix swagger docs to match the [opencontainers image-spec](https://github.com/opencontainers/image-spec/blob/5ced465cc63831baf25330431a428a9d9444e192/specs-go/v1/descriptor.go)

**- How I did it**

Used the 1.41 swagger.yaml with swagger's codegen to generate Java code. Used that code to perform requests to a 1.41 Docker engine. Applied the fixes and used `hack/validate/swagger` and `make swagger-docs` to validate the changes.

**- How to verify it**

Compare the actual 1.41 api with the swagger.yaml.

**- Description for the changelog**

Update the swagger.yaml to match the version 1.41 api.

**- A picture of a cute animal (not mandatory but encouraged)**

![062321_JC_mountain-cat_feat](https://user-images.githubusercontent.com/432791/125209896-77588e80-e29c-11eb-9a5f-439f5f04a69b.jpeg)
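For fix 1 in the list above, here is a hedged sketch of how the `RestartPolicy` definition might gain the missing `no` value. The existing enum values and structure are copied from the definition visible further down in this file's content; the wording of the added description line is an assumption, and other properties of the definition are omitted.

```yaml
# Sketch of fix 1: add the missing "no" restart policy. Existing values are
# copied from the current definition; the new description line is an assumption.
# (MaximumRetryCount and the top-level description are unchanged and omitted here.)
RestartPolicy:
  type: "object"
  properties:
    Name:
      type: "string"
      description: |
        - Empty string means not to restart
        - `no` Do not automatically restart
        - `always` Always restart
        - `unless-stopped` Restart always except when the user has manually stopped the container
        - `on-failure` Restart only when the container exit code is non-zero
      enum:
        - ""
        - "no"              # the value added by this change
        - "always"
        - "unless-stopped"
        - "on-failure"
```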
null
2021-07-11 21:09:21+00:00
2021-08-21 22:28:57+00:00
api/swagger.yaml
# A Swagger 2.0 (a.k.a. OpenAPI) definition of the Engine API. # # This is used for generating API documentation and the types used by the # client/server. See api/README.md for more information. # # Some style notes: # - This file is used by ReDoc, which allows GitHub Flavored Markdown in # descriptions. # - There is no maximum line length, for ease of editing and pretty diffs. # - operationIds are in the format "NounVerb", with a singular noun. swagger: "2.0" schemes: - "http" - "https" produces: - "application/json" - "text/plain" consumes: - "application/json" - "text/plain" basePath: "/v1.42" info: title: "Docker Engine API" version: "1.42" x-logo: url: "https://docs.docker.com/images/logo-docker-main.png" description: | The Engine API is an HTTP API served by Docker Engine. It is the API the Docker client uses to communicate with the Engine, so everything the Docker client can do can be done with the API. Most of the client's commands map directly to API endpoints (e.g. `docker ps` is `GET /containers/json`). The notable exception is running containers, which consists of several API calls. # Errors The API uses standard HTTP status codes to indicate the success or failure of the API call. The body of the response will be JSON in the following format: ``` { "message": "page not found" } ``` # Versioning The API is usually changed in each release, so API calls are versioned to ensure that clients don't break. To lock to a specific version of the API, you prefix the URL with its version, for example, call `/v1.30/info` to use the v1.30 version of the `/info` endpoint. If the API version specified in the URL is not supported by the daemon, a HTTP `400 Bad Request` error message is returned. If you omit the version-prefix, the current version of the API (v1.42) is used. For example, calling `/info` is the same as calling `/v1.42/info`. Using the API without a version-prefix is deprecated and will be removed in a future release. Engine releases in the near future should support this version of the API, so your client will continue to work even if it is talking to a newer Engine. The API uses an open schema model, which means server may add extra properties to responses. Likewise, the server will ignore any extra query parameters and request body properties. When you write clients, you need to ignore additional properties in responses to ensure they do not break when talking to newer daemons. # Authentication Authentication for registries is handled client side. The client has to send authentication details to various endpoints that need to communicate with registries, such as `POST /images/(name)/push`. These are sent as `X-Registry-Auth` header as a [base64url encoded](https://tools.ietf.org/html/rfc4648#section-5) (JSON) string with the following structure: ``` { "username": "string", "password": "string", "email": "string", "serveraddress": "string" } ``` The `serveraddress` is a domain/IP without a protocol. Throughout this structure, double quotes are required. If you have already got an identity token from the [`/auth` endpoint](#operation/SystemAuth), you can just pass this instead of credentials: ``` { "identitytoken": "9cbaf023786cd7..." } ``` # The tags on paths define the menu sections in the ReDoc documentation, so # the usage of tags must make sense for that: # - They should be singular, not plural. # - There should not be too many tags, or the menu becomes unwieldy. 
For # example, it is preferable to add a path to the "System" tag instead of # creating a tag with a single path in it. # - The order of tags in this list defines the order in the menu. tags: # Primary objects - name: "Container" x-displayName: "Containers" description: | Create and manage containers. - name: "Image" x-displayName: "Images" - name: "Network" x-displayName: "Networks" description: | Networks are user-defined networks that containers can be attached to. See the [networking documentation](https://docs.docker.com/network/) for more information. - name: "Volume" x-displayName: "Volumes" description: | Create and manage persistent storage that can be attached to containers. - name: "Exec" x-displayName: "Exec" description: | Run new commands inside running containers. Refer to the [command-line reference](https://docs.docker.com/engine/reference/commandline/exec/) for more information. To exec a command in a container, you first need to create an exec instance, then start it. These two API endpoints are wrapped up in a single command-line command, `docker exec`. # Swarm things - name: "Swarm" x-displayName: "Swarm" description: | Engines can be clustered together in a swarm. Refer to the [swarm mode documentation](https://docs.docker.com/engine/swarm/) for more information. - name: "Node" x-displayName: "Nodes" description: | Nodes are instances of the Engine participating in a swarm. Swarm mode must be enabled for these endpoints to work. - name: "Service" x-displayName: "Services" description: | Services are the definitions of tasks to run on a swarm. Swarm mode must be enabled for these endpoints to work. - name: "Task" x-displayName: "Tasks" description: | A task is a container running on a swarm. It is the atomic scheduling unit of swarm. Swarm mode must be enabled for these endpoints to work. - name: "Secret" x-displayName: "Secrets" description: | Secrets are sensitive data that can be used by services. Swarm mode must be enabled for these endpoints to work. - name: "Config" x-displayName: "Configs" description: | Configs are application configurations that can be used by services. Swarm mode must be enabled for these endpoints to work. 
# System things - name: "Plugin" x-displayName: "Plugins" - name: "System" x-displayName: "System" definitions: Port: type: "object" description: "An open port on a container" required: [PrivatePort, Type] properties: IP: type: "string" format: "ip-address" description: "Host IP address that the container's port is mapped to" PrivatePort: type: "integer" format: "uint16" x-nullable: false description: "Port on the container" PublicPort: type: "integer" format: "uint16" description: "Port exposed on the host" Type: type: "string" x-nullable: false enum: ["tcp", "udp", "sctp"] example: PrivatePort: 8080 PublicPort: 80 Type: "tcp" MountPoint: type: "object" description: "A mount point inside a container" properties: Type: type: "string" Name: type: "string" Source: type: "string" Destination: type: "string" Driver: type: "string" Mode: type: "string" RW: type: "boolean" Propagation: type: "string" DeviceMapping: type: "object" description: "A device mapping between the host and container" properties: PathOnHost: type: "string" PathInContainer: type: "string" CgroupPermissions: type: "string" example: PathOnHost: "/dev/deviceName" PathInContainer: "/dev/deviceName" CgroupPermissions: "mrw" DeviceRequest: type: "object" description: "A request for devices to be sent to device drivers" properties: Driver: type: "string" example: "nvidia" Count: type: "integer" example: -1 DeviceIDs: type: "array" items: type: "string" example: - "0" - "1" - "GPU-fef8089b-4820-abfc-e83e-94318197576e" Capabilities: description: | A list of capabilities; an OR list of AND lists of capabilities. type: "array" items: type: "array" items: type: "string" example: # gpu AND nvidia AND compute - ["gpu", "nvidia", "compute"] Options: description: | Driver-specific options, specified as a key/value pairs. These options are passed directly to the driver. type: "object" additionalProperties: type: "string" ThrottleDevice: type: "object" properties: Path: description: "Device path" type: "string" Rate: description: "Rate" type: "integer" format: "int64" minimum: 0 Mount: type: "object" properties: Target: description: "Container path." type: "string" Source: description: "Mount source (e.g. a volume name, a host path)." type: "string" Type: description: | The mount type. Available types: - `bind` Mounts a file or directory from the host into the container. Must exist prior to creating the container. - `volume` Creates a volume with the given name and options (or uses a pre-existing volume with the same name and options). These are **not** removed when the container is removed. - `tmpfs` Create a tmpfs with the given options. The mount source cannot be specified for tmpfs. - `npipe` Mounts a named pipe from the host into the container. Must exist prior to creating the container. type: "string" enum: - "bind" - "volume" - "tmpfs" - "npipe" ReadOnly: description: "Whether the mount should be read-only." type: "boolean" Consistency: description: "The consistency requirement for the mount: `default`, `consistent`, `cached`, or `delegated`." type: "string" BindOptions: description: "Optional configuration for the `bind` type." type: "object" properties: Propagation: description: "A propagation mode with the value `[r]private`, `[r]shared`, or `[r]slave`." type: "string" enum: - "private" - "rprivate" - "shared" - "rshared" - "slave" - "rslave" NonRecursive: description: "Disable recursive bind mount." type: "boolean" default: false VolumeOptions: description: "Optional configuration for the `volume` type." 
type: "object" properties: NoCopy: description: "Populate volume with data from the target." type: "boolean" default: false Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" DriverConfig: description: "Map of driver specific options" type: "object" properties: Name: description: "Name of the driver to use to create the volume." type: "string" Options: description: "key/value map of driver specific options." type: "object" additionalProperties: type: "string" TmpfsOptions: description: "Optional configuration for the `tmpfs` type." type: "object" properties: SizeBytes: description: "The size for the tmpfs mount in bytes." type: "integer" format: "int64" Mode: description: "The permission mode for the tmpfs mount in an integer." type: "integer" RestartPolicy: description: | The behavior to apply when the container exits. The default is not to restart. An ever increasing delay (double the previous delay, starting at 100ms) is added before each restart to prevent flooding the server. type: "object" properties: Name: type: "string" description: | - Empty string means not to restart - `always` Always restart - `unless-stopped` Restart always except when the user has manually stopped the container - `on-failure` Restart only when the container exit code is non-zero enum: - "" - "always" - "unless-stopped" - "on-failure" MaximumRetryCount: type: "integer" description: | If `on-failure` is used, the number of times to retry before giving up. Resources: description: "A container's resources (cgroups config, ulimits, etc)" type: "object" properties: # Applicable to all platforms CpuShares: description: | An integer value representing this container's relative CPU weight versus other containers. type: "integer" Memory: description: "Memory limit in bytes." type: "integer" format: "int64" default: 0 # Applicable to UNIX platforms CgroupParent: description: | Path to `cgroups` under which the container's `cgroup` is created. If the path is not absolute, the path is considered to be relative to the `cgroups` path of the init process. Cgroups are created if they do not already exist. type: "string" BlkioWeight: description: "Block IO weight (relative weight)." type: "integer" minimum: 0 maximum: 1000 BlkioWeightDevice: description: | Block IO weight (relative device weight) in the form: ``` [{"Path": "device_path", "Weight": weight}] ``` type: "array" items: type: "object" properties: Path: type: "string" Weight: type: "integer" minimum: 0 BlkioDeviceReadBps: description: | Limit read rate (bytes per second) from a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" BlkioDeviceWriteBps: description: | Limit write rate (bytes per second) to a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" BlkioDeviceReadIOps: description: | Limit read rate (IO per second) from a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" BlkioDeviceWriteIOps: description: | Limit write rate (IO per second) to a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" CpuPeriod: description: "The length of a CPU period in microseconds." type: "integer" format: "int64" CpuQuota: description: | Microseconds of CPU time that the container can get in a CPU period. 
type: "integer" format: "int64" CpuRealtimePeriod: description: | The length of a CPU real-time period in microseconds. Set to 0 to allocate no time allocated to real-time tasks. type: "integer" format: "int64" CpuRealtimeRuntime: description: | The length of a CPU real-time runtime in microseconds. Set to 0 to allocate no time allocated to real-time tasks. type: "integer" format: "int64" CpusetCpus: description: | CPUs in which to allow execution (e.g., `0-3`, `0,1`). type: "string" example: "0-3" CpusetMems: description: | Memory nodes (MEMs) in which to allow execution (0-3, 0,1). Only effective on NUMA systems. type: "string" Devices: description: "A list of devices to add to the container." type: "array" items: $ref: "#/definitions/DeviceMapping" DeviceCgroupRules: description: "a list of cgroup rules to apply to the container" type: "array" items: type: "string" example: "c 13:* rwm" DeviceRequests: description: | A list of requests for devices to be sent to device drivers. type: "array" items: $ref: "#/definitions/DeviceRequest" KernelMemory: description: | Kernel memory limit in bytes. <p><br /></p> > **Deprecated**: This field is deprecated as the kernel 5.4 deprecated > `kmem.limit_in_bytes`. type: "integer" format: "int64" example: 209715200 KernelMemoryTCP: description: "Hard limit for kernel TCP buffer memory (in bytes)." type: "integer" format: "int64" MemoryReservation: description: "Memory soft limit in bytes." type: "integer" format: "int64" MemorySwap: description: | Total memory limit (memory + swap). Set as `-1` to enable unlimited swap. type: "integer" format: "int64" MemorySwappiness: description: | Tune a container's memory swappiness behavior. Accepts an integer between 0 and 100. type: "integer" format: "int64" minimum: 0 maximum: 100 NanoCpus: description: "CPU quota in units of 10<sup>-9</sup> CPUs." type: "integer" format: "int64" OomKillDisable: description: "Disable OOM Killer for the container." type: "boolean" Init: description: | Run an init inside the container that forwards signals and reaps processes. This field is omitted if empty, and the default (as configured on the daemon) is used. type: "boolean" x-nullable: true PidsLimit: description: | Tune a container's PIDs limit. Set `0` or `-1` for unlimited, or `null` to not change. type: "integer" format: "int64" x-nullable: true Ulimits: description: | A list of resource limits to set in the container. For example: ``` {"Name": "nofile", "Soft": 1024, "Hard": 2048} ``` type: "array" items: type: "object" properties: Name: description: "Name of ulimit" type: "string" Soft: description: "Soft limit" type: "integer" Hard: description: "Hard limit" type: "integer" # Applicable to Windows CpuCount: description: | The number of usable CPUs (Windows only). On Windows Server containers, the processor resource controls are mutually exclusive. The order of precedence is `CPUCount` first, then `CPUShares`, and `CPUPercent` last. type: "integer" format: "int64" CpuPercent: description: | The usable percentage of the available CPUs (Windows only). On Windows Server containers, the processor resource controls are mutually exclusive. The order of precedence is `CPUCount` first, then `CPUShares`, and `CPUPercent` last. type: "integer" format: "int64" IOMaximumIOps: description: "Maximum IOps for the container system drive (Windows only)" type: "integer" format: "int64" IOMaximumBandwidth: description: | Maximum IO in bytes per second for the container system drive (Windows only). 
type: "integer" format: "int64" Limit: description: | An object describing a limit on resources which can be requested by a task. type: "object" properties: NanoCPUs: type: "integer" format: "int64" example: 4000000000 MemoryBytes: type: "integer" format: "int64" example: 8272408576 Pids: description: | Limits the maximum number of PIDs in the container. Set `0` for unlimited. type: "integer" format: "int64" default: 0 example: 100 ResourceObject: description: | An object describing the resources which can be advertised by a node and requested by a task. type: "object" properties: NanoCPUs: type: "integer" format: "int64" example: 4000000000 MemoryBytes: type: "integer" format: "int64" example: 8272408576 GenericResources: $ref: "#/definitions/GenericResources" GenericResources: description: | User-defined resources can be either Integer resources (e.g, `SSD=3`) or String resources (e.g, `GPU=UUID1`). type: "array" items: type: "object" properties: NamedResourceSpec: type: "object" properties: Kind: type: "string" Value: type: "string" DiscreteResourceSpec: type: "object" properties: Kind: type: "string" Value: type: "integer" format: "int64" example: - DiscreteResourceSpec: Kind: "SSD" Value: 3 - NamedResourceSpec: Kind: "GPU" Value: "UUID1" - NamedResourceSpec: Kind: "GPU" Value: "UUID2" HealthConfig: description: "A test to perform to check that the container is healthy." type: "object" properties: Test: description: | The test to perform. Possible values are: - `[]` inherit healthcheck from image or parent image - `["NONE"]` disable healthcheck - `["CMD", args...]` exec arguments directly - `["CMD-SHELL", command]` run command with system's default shell type: "array" items: type: "string" Interval: description: | The time to wait between checks in nanoseconds. It should be 0 or at least 1000000 (1 ms). 0 means inherit. type: "integer" Timeout: description: | The time to wait before considering the check to have hung. It should be 0 or at least 1000000 (1 ms). 0 means inherit. type: "integer" Retries: description: | The number of consecutive failures needed to consider a container as unhealthy. 0 means inherit. type: "integer" StartPeriod: description: | Start period for the container to initialize before starting health-retries countdown in nanoseconds. It should be 0 or at least 1000000 (1 ms). 0 means inherit. type: "integer" Health: description: | Health stores information about the container's healthcheck results. type: "object" properties: Status: description: | Status is one of `none`, `starting`, `healthy` or `unhealthy` - "none" Indicates there is no healthcheck - "starting" Starting indicates that the container is not yet ready - "healthy" Healthy indicates that the container is running correctly - "unhealthy" Unhealthy indicates that the container has a problem type: "string" enum: - "none" - "starting" - "healthy" - "unhealthy" example: "healthy" FailingStreak: description: "FailingStreak is the number of consecutive failures" type: "integer" example: 0 Log: type: "array" description: | Log contains the last few results (oldest first) items: x-nullable: true $ref: "#/definitions/HealthcheckResult" HealthcheckResult: description: | HealthcheckResult stores information about a single run of a healthcheck probe type: "object" properties: Start: description: | Date and time at which this check started in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. 
type: "string" format: "date-time" example: "2020-01-04T10:44:24.496525531Z" End: description: | Date and time at which this check ended in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2020-01-04T10:45:21.364524523Z" ExitCode: description: | ExitCode meanings: - `0` healthy - `1` unhealthy - `2` reserved (considered unhealthy) - other values: error running probe type: "integer" example: 0 Output: description: "Output from last check" type: "string" HostConfig: description: "Container configuration that depends on the host we are running on" allOf: - $ref: "#/definitions/Resources" - type: "object" properties: # Applicable to all platforms Binds: type: "array" description: | A list of volume bindings for this container. Each volume binding is a string in one of these forms: - `host-src:container-dest[:options]` to bind-mount a host path into the container. Both `host-src`, and `container-dest` must be an _absolute_ path. - `volume-name:container-dest[:options]` to bind-mount a volume managed by a volume driver into the container. `container-dest` must be an _absolute_ path. `options` is an optional, comma-delimited list of: - `nocopy` disables automatic copying of data from the container path to the volume. The `nocopy` flag only applies to named volumes. - `[ro|rw]` mounts a volume read-only or read-write, respectively. If omitted or set to `rw`, volumes are mounted read-write. - `[z|Z]` applies SELinux labels to allow or deny multiple containers to read and write to the same volume. - `z`: a _shared_ content label is applied to the content. This label indicates that multiple containers can share the volume content, for both reading and writing. - `Z`: a _private unshared_ label is applied to the content. This label indicates that only the current container can use a private volume. Labeling systems such as SELinux require proper labels to be placed on volume content that is mounted into a container. Without a label, the security system can prevent a container's processes from using the content. By default, the labels set by the host operating system are not modified. - `[[r]shared|[r]slave|[r]private]` specifies mount [propagation behavior](https://www.kernel.org/doc/Documentation/filesystems/sharedsubtree.txt). This only applies to bind-mounted volumes, not internal volumes or named volumes. Mount propagation requires the source mount point (the location where the source directory is mounted in the host operating system) to have the correct propagation properties. For shared volumes, the source mount point must be set to `shared`. For slave volumes, the mount must be set to either `shared` or `slave`. items: type: "string" ContainerIDFile: type: "string" description: "Path to a file where the container ID is written" LogConfig: type: "object" description: "The logging configuration for this container" properties: Type: type: "string" enum: - "json-file" - "syslog" - "journald" - "gelf" - "fluentd" - "awslogs" - "splunk" - "etwlogs" - "none" Config: type: "object" additionalProperties: type: "string" NetworkMode: type: "string" description: | Network mode to use for this container. Supported standard values are: `bridge`, `host`, `none`, and `container:<name|id>`. Any other value is taken as a custom network's name to which this container should connect to. 
PortBindings: $ref: "#/definitions/PortMap" RestartPolicy: $ref: "#/definitions/RestartPolicy" AutoRemove: type: "boolean" description: | Automatically remove the container when the container's process exits. This has no effect if `RestartPolicy` is set. VolumeDriver: type: "string" description: "Driver that this container uses to mount volumes." VolumesFrom: type: "array" description: | A list of volumes to inherit from another container, specified in the form `<container name>[:<ro|rw>]`. items: type: "string" Mounts: description: | Specification for mounts to be added to the container. type: "array" items: $ref: "#/definitions/Mount" # Applicable to UNIX platforms CapAdd: type: "array" description: | A list of kernel capabilities to add to the container. Conflicts with option 'Capabilities'. items: type: "string" CapDrop: type: "array" description: | A list of kernel capabilities to drop from the container. Conflicts with option 'Capabilities'. items: type: "string" CgroupnsMode: type: "string" enum: - "private" - "host" description: | cgroup namespace mode for the container. Possible values are: - `"private"`: the container runs in its own private cgroup namespace - `"host"`: use the host system's cgroup namespace If not specified, the daemon default is used, which can either be `"private"` or `"host"`, depending on daemon version, kernel support and configuration. Dns: type: "array" description: "A list of DNS servers for the container to use." items: type: "string" DnsOptions: type: "array" description: "A list of DNS options." items: type: "string" DnsSearch: type: "array" description: "A list of DNS search domains." items: type: "string" ExtraHosts: type: "array" description: | A list of hostnames/IP mappings to add to the container's `/etc/hosts` file. Specified in the form `["hostname:IP"]`. items: type: "string" GroupAdd: type: "array" description: | A list of additional groups that the container process will run as. items: type: "string" IpcMode: type: "string" description: | IPC sharing mode for the container. Possible values are: - `"none"`: own private IPC namespace, with /dev/shm not mounted - `"private"`: own private IPC namespace - `"shareable"`: own private IPC namespace, with a possibility to share it with other containers - `"container:<name|id>"`: join another (shareable) container's IPC namespace - `"host"`: use the host system's IPC namespace If not specified, daemon default is used, which can either be `"private"` or `"shareable"`, depending on daemon version and configuration. Cgroup: type: "string" description: "Cgroup to use for the container." Links: type: "array" description: | A list of links for the container in the form `container_name:alias`. items: type: "string" OomScoreAdj: type: "integer" description: | An integer value containing the score given to the container in order to tune OOM killer preferences. example: 500 PidMode: type: "string" description: | Set the PID (Process) Namespace mode for the container. It can be either: - `"container:<name|id>"`: joins another container's PID namespace - `"host"`: use the host's PID namespace inside the container Privileged: type: "boolean" description: "Gives the container full access to the host." PublishAllPorts: type: "boolean" description: | Allocates an ephemeral host port for all of a container's exposed ports. Ports are de-allocated when the container stops and allocated when the container starts. The allocated port might be changed when restarting the container. 
The port is selected from the ephemeral port range that depends on the kernel. For example, on Linux the range is defined by `/proc/sys/net/ipv4/ip_local_port_range`. ReadonlyRootfs: type: "boolean" description: "Mount the container's root filesystem as read only." SecurityOpt: type: "array" description: "A list of string values to customize labels for MLS systems, such as SELinux." items: type: "string" StorageOpt: type: "object" description: | Storage driver options for this container, in the form `{"size": "120G"}`. additionalProperties: type: "string" Tmpfs: type: "object" description: | A map of container directories which should be replaced by tmpfs mounts, and their corresponding mount options. For example: ``` { "/run": "rw,noexec,nosuid,size=65536k" } ``` additionalProperties: type: "string" UTSMode: type: "string" description: "UTS namespace to use for the container." UsernsMode: type: "string" description: | Sets the usernamespace mode for the container when usernamespace remapping option is enabled. ShmSize: type: "integer" description: | Size of `/dev/shm` in bytes. If omitted, the system uses 64MB. minimum: 0 Sysctls: type: "object" description: | A list of kernel parameters (sysctls) to set in the container. For example: ``` {"net.ipv4.ip_forward": "1"} ``` additionalProperties: type: "string" Runtime: type: "string" description: "Runtime to use with this container." # Applicable to Windows ConsoleSize: type: "array" description: | Initial console size, as an `[height, width]` array. (Windows only) minItems: 2 maxItems: 2 items: type: "integer" minimum: 0 Isolation: type: "string" description: | Isolation technology of the container. (Windows only) enum: - "default" - "process" - "hyperv" MaskedPaths: type: "array" description: | The list of paths to be masked inside the container (this overrides the default set of paths). items: type: "string" ReadonlyPaths: type: "array" description: | The list of paths to be set as read-only inside the container (this overrides the default set of paths). items: type: "string" ContainerConfig: description: "Configuration for a container that is portable between hosts" type: "object" properties: Hostname: description: "The hostname to use for the container, as a valid RFC 1123 hostname." type: "string" Domainname: description: "The domain name to use for the container." type: "string" User: description: "The user that commands are run as inside the container." type: "string" AttachStdin: description: "Whether to attach to `stdin`." type: "boolean" default: false AttachStdout: description: "Whether to attach to `stdout`." type: "boolean" default: true AttachStderr: description: "Whether to attach to `stderr`." type: "boolean" default: true ExposedPorts: description: | An object mapping ports to an empty object in the form: `{"<port>/<tcp|udp|sctp>": {}}` type: "object" additionalProperties: type: "object" enum: - {} default: {} Tty: description: | Attach standard streams to a TTY, including `stdin` if it is not closed. type: "boolean" default: false OpenStdin: description: "Open `stdin`" type: "boolean" default: false StdinOnce: description: "Close `stdin` after one attached client disconnects" type: "boolean" default: false Env: description: | A list of environment variables to set inside the container in the form `["VAR=value", ...]`. A variable without `=` is removed from the environment, rather than to have an empty value. type: "array" items: type: "string" Cmd: description: | Command to run specified as a string or an array of strings. 
type: "array" items: type: "string" Healthcheck: $ref: "#/definitions/HealthConfig" ArgsEscaped: description: "Command is already escaped (Windows only)" type: "boolean" Image: description: | The name of the image to use when creating the container/ type: "string" Volumes: description: | An object mapping mount point paths inside the container to empty objects. type: "object" additionalProperties: type: "object" enum: - {} default: {} WorkingDir: description: "The working directory for commands to run in." type: "string" Entrypoint: description: | The entry point for the container as a string or an array of strings. If the array consists of exactly one empty string (`[""]`) then the entry point is reset to system default (i.e., the entry point used by docker when there is no `ENTRYPOINT` instruction in the `Dockerfile`). type: "array" items: type: "string" NetworkDisabled: description: "Disable networking for the container." type: "boolean" MacAddress: description: "MAC address of the container." type: "string" OnBuild: description: | `ONBUILD` metadata that were defined in the image's `Dockerfile`. type: "array" items: type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" StopSignal: description: | Signal to stop a container as a string or unsigned integer. type: "string" default: "SIGTERM" StopTimeout: description: "Timeout to stop a container in seconds." type: "integer" default: 10 Shell: description: | Shell for when `RUN`, `CMD`, and `ENTRYPOINT` uses a shell. type: "array" items: type: "string" NetworkingConfig: description: | NetworkingConfig represents the container's networking configuration for each of its interfaces. It is used for the networking configs specified in the `docker create` and `docker network connect` commands. type: "object" properties: EndpointsConfig: description: | A mapping of network name to endpoint configuration for that network. type: "object" additionalProperties: $ref: "#/definitions/EndpointSettings" example: # putting an example here, instead of using the example values from # /definitions/EndpointSettings, because containers/create currently # does not support attaching to multiple networks, so the example request # would be confusing if it showed that multiple networks can be contained # in the EndpointsConfig. # TODO remove once we support multiple networks on container create (see https://github.com/moby/moby/blob/07e6b843594e061f82baa5fa23c2ff7d536c2a05/daemon/create.go#L323) EndpointsConfig: isolated_nw: IPAMConfig: IPv4Address: "172.20.30.33" IPv6Address: "2001:db8:abcd::3033" LinkLocalIPs: - "169.254.34.68" - "fe80::3468" Links: - "container_1" - "container_2" Aliases: - "server_x" - "server_y" NetworkSettings: description: "NetworkSettings exposes the network settings in the API" type: "object" properties: Bridge: description: Name of the network's bridge (for example, `docker0`). type: "string" example: "docker0" SandboxID: description: SandboxID uniquely represents a container's network stack. type: "string" example: "9d12daf2c33f5959c8bf90aa513e4f65b561738661003029ec84830cd503a0c3" HairpinMode: description: | Indicates if hairpin NAT should be enabled on the virtual interface. type: "boolean" example: false LinkLocalIPv6Address: description: IPv6 unicast address using the link-local prefix. type: "string" example: "fe80::42:acff:fe11:1" LinkLocalIPv6PrefixLen: description: Prefix length of the IPv6 unicast address. 
type: "integer" example: "64" Ports: $ref: "#/definitions/PortMap" SandboxKey: description: SandboxKey identifies the sandbox type: "string" example: "/var/run/docker/netns/8ab54b426c38" # TODO is SecondaryIPAddresses actually used? SecondaryIPAddresses: description: "" type: "array" items: $ref: "#/definitions/Address" x-nullable: true # TODO is SecondaryIPv6Addresses actually used? SecondaryIPv6Addresses: description: "" type: "array" items: $ref: "#/definitions/Address" x-nullable: true # TODO properties below are part of DefaultNetworkSettings, which is # marked as deprecated since Docker 1.9 and to be removed in Docker v17.12 EndpointID: description: | EndpointID uniquely represents a service endpoint in a Sandbox. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "b88f5b905aabf2893f3cbc4ee42d1ea7980bbc0a92e2c8922b1e1795298afb0b" Gateway: description: | Gateway address for the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "172.17.0.1" GlobalIPv6Address: description: | Global IPv6 address for the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "2001:db8::5689" GlobalIPv6PrefixLen: description: | Mask length of the global IPv6 address. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "integer" example: 64 IPAddress: description: | IPv4 address for the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "172.17.0.4" IPPrefixLen: description: | Mask length of the IPv4 address. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "integer" example: 16 IPv6Gateway: description: | IPv6 gateway address for this network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. 
Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "2001:db8:2::100" MacAddress: description: | MAC address for the container on the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "02:42:ac:11:00:04" Networks: description: | Information about all networks that the container is connected to. type: "object" additionalProperties: $ref: "#/definitions/EndpointSettings" Address: description: Address represents an IPv4 or IPv6 IP address. type: "object" properties: Addr: description: IP address. type: "string" PrefixLen: description: Mask length of the IP address. type: "integer" PortMap: description: | PortMap describes the mapping of container ports to host ports, using the container's port-number and protocol as key in the format `<port>/<protocol>`, for example, `80/udp`. If a container's port is mapped for multiple protocols, separate entries are added to the mapping table. type: "object" additionalProperties: type: "array" x-nullable: true items: $ref: "#/definitions/PortBinding" example: "443/tcp": - HostIp: "127.0.0.1" HostPort: "4443" "80/tcp": - HostIp: "0.0.0.0" HostPort: "80" - HostIp: "0.0.0.0" HostPort: "8080" "80/udp": - HostIp: "0.0.0.0" HostPort: "80" "53/udp": - HostIp: "0.0.0.0" HostPort: "53" "2377/tcp": null PortBinding: description: | PortBinding represents a binding between a host IP address and a host port. type: "object" properties: HostIp: description: "Host IP address that the container's port is mapped to." type: "string" example: "127.0.0.1" HostPort: description: "Host port number that the container's port is mapped to." type: "string" example: "4443" GraphDriverData: description: "Information about a container's graph driver." 
type: "object" required: [Name, Data] properties: Name: type: "string" x-nullable: false Data: type: "object" x-nullable: false additionalProperties: type: "string" Image: type: "object" required: - Id - Parent - Comment - Created - Container - DockerVersion - Author - Architecture - Os - Size - VirtualSize - GraphDriver - RootFS properties: Id: type: "string" x-nullable: false RepoTags: type: "array" items: type: "string" RepoDigests: type: "array" items: type: "string" Parent: type: "string" x-nullable: false Comment: type: "string" x-nullable: false Created: type: "string" x-nullable: false Container: type: "string" x-nullable: false ContainerConfig: $ref: "#/definitions/ContainerConfig" DockerVersion: type: "string" x-nullable: false Author: type: "string" x-nullable: false Config: $ref: "#/definitions/ContainerConfig" Architecture: type: "string" x-nullable: false Os: type: "string" x-nullable: false OsVersion: type: "string" Size: type: "integer" format: "int64" x-nullable: false VirtualSize: type: "integer" format: "int64" x-nullable: false GraphDriver: $ref: "#/definitions/GraphDriverData" RootFS: type: "object" required: [Type] properties: Type: type: "string" x-nullable: false Layers: type: "array" items: type: "string" BaseLayer: type: "string" Metadata: type: "object" properties: LastTagTime: type: "string" format: "dateTime" ImageSummary: type: "object" required: - Id - ParentId - RepoTags - RepoDigests - Created - Size - SharedSize - VirtualSize - Labels - Containers properties: Id: type: "string" x-nullable: false ParentId: type: "string" x-nullable: false RepoTags: type: "array" x-nullable: false items: type: "string" RepoDigests: type: "array" x-nullable: false items: type: "string" Created: type: "integer" x-nullable: false Size: type: "integer" x-nullable: false SharedSize: type: "integer" x-nullable: false VirtualSize: type: "integer" x-nullable: false Labels: type: "object" x-nullable: false additionalProperties: type: "string" Containers: x-nullable: false type: "integer" AuthConfig: type: "object" properties: username: type: "string" password: type: "string" email: type: "string" serveraddress: type: "string" example: username: "hannibal" password: "xxxx" serveraddress: "https://index.docker.io/v1/" ProcessConfig: type: "object" properties: privileged: type: "boolean" user: type: "string" tty: type: "boolean" entrypoint: type: "string" arguments: type: "array" items: type: "string" Volume: type: "object" required: [Name, Driver, Mountpoint, Labels, Scope, Options] properties: Name: type: "string" description: "Name of the volume." x-nullable: false Driver: type: "string" description: "Name of the volume driver used by the volume." x-nullable: false Mountpoint: type: "string" description: "Mount path of the volume on the host." x-nullable: false CreatedAt: type: "string" format: "dateTime" description: "Date/Time the volume was created." Status: type: "object" description: | Low-level details about the volume, provided by the volume driver. Details are returned as a map with key/value pairs: `{"key":"value","key2":"value2"}`. The `Status` field is optional, and is omitted if the volume driver does not support this feature. additionalProperties: type: "object" Labels: type: "object" description: "User-defined key/value metadata." x-nullable: false additionalProperties: type: "string" Scope: type: "string" description: | The level at which the volume exists. Either `global` for cluster-wide, or `local` for machine level. 
default: "local" x-nullable: false enum: ["local", "global"] Options: type: "object" description: | The driver specific options used when creating the volume. additionalProperties: type: "string" UsageData: type: "object" x-nullable: true required: [Size, RefCount] description: | Usage details about the volume. This information is used by the `GET /system/df` endpoint, and omitted in other endpoints. properties: Size: type: "integer" default: -1 description: | Amount of disk space used by the volume (in bytes). This information is only available for volumes created with the `"local"` volume driver. For volumes created with other volume drivers, this field is set to `-1` ("not available") x-nullable: false RefCount: type: "integer" default: -1 description: | The number of containers referencing this volume. This field is set to `-1` if the reference-count is not available. x-nullable: false example: Name: "tardis" Driver: "custom" Mountpoint: "/var/lib/docker/volumes/tardis" Status: hello: "world" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Scope: "local" CreatedAt: "2016-06-07T20:31:11.853781916Z" Network: type: "object" properties: Name: type: "string" Id: type: "string" Created: type: "string" format: "dateTime" Scope: type: "string" Driver: type: "string" EnableIPv6: type: "boolean" IPAM: $ref: "#/definitions/IPAM" Internal: type: "boolean" Attachable: type: "boolean" Ingress: type: "boolean" Containers: type: "object" additionalProperties: $ref: "#/definitions/NetworkContainer" Options: type: "object" additionalProperties: type: "string" Labels: type: "object" additionalProperties: type: "string" example: Name: "net01" Id: "7d86d31b1478e7cca9ebed7e73aa0fdeec46c5ca29497431d3007d2d9e15ed99" Created: "2016-10-19T04:33:30.360899459Z" Scope: "local" Driver: "bridge" EnableIPv6: false IPAM: Driver: "default" Config: - Subnet: "172.19.0.0/16" Gateway: "172.19.0.1" Options: foo: "bar" Internal: false Attachable: false Ingress: false Containers: 19a4d5d687db25203351ed79d478946f861258f018fe384f229f2efa4b23513c: Name: "test" EndpointID: "628cadb8bcb92de107b2a1e516cbffe463e321f548feb37697cce00ad694f21a" MacAddress: "02:42:ac:13:00:02" IPv4Address: "172.19.0.2/16" IPv6Address: "" Options: com.docker.network.bridge.default_bridge: "true" com.docker.network.bridge.enable_icc: "true" com.docker.network.bridge.enable_ip_masquerade: "true" com.docker.network.bridge.host_binding_ipv4: "0.0.0.0" com.docker.network.bridge.name: "docker0" com.docker.network.driver.mtu: "1500" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" IPAM: type: "object" properties: Driver: description: "Name of the IPAM driver to use." type: "string" default: "default" Config: description: | List of IPAM configuration options, specified as a map: ``` {"Subnet": <CIDR>, "IPRange": <CIDR>, "Gateway": <IP address>, "AuxAddress": <device_name:IP address>} ``` type: "array" items: type: "object" additionalProperties: type: "string" Options: description: "Driver-specific options, specified as a map." 
type: "object" additionalProperties: type: "string" NetworkContainer: type: "object" properties: Name: type: "string" EndpointID: type: "string" MacAddress: type: "string" IPv4Address: type: "string" IPv6Address: type: "string" BuildInfo: type: "object" properties: id: type: "string" stream: type: "string" error: type: "string" errorDetail: $ref: "#/definitions/ErrorDetail" status: type: "string" progress: type: "string" progressDetail: $ref: "#/definitions/ProgressDetail" aux: $ref: "#/definitions/ImageID" BuildCache: type: "object" properties: ID: type: "string" Parent: type: "string" Type: type: "string" Description: type: "string" InUse: type: "boolean" Shared: type: "boolean" Size: description: | Amount of disk space used by the build cache (in bytes). type: "integer" CreatedAt: description: | Date and time at which the build cache was created in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2016-08-18T10:44:24.496525531Z" LastUsedAt: description: | Date and time at which the build cache was last used in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" x-nullable: true example: "2017-08-09T07:09:37.632105588Z" UsageCount: type: "integer" ImageID: type: "object" description: "Image ID or Digest" properties: ID: type: "string" example: ID: "sha256:85f05633ddc1c50679be2b16a0479ab6f7637f8884e0cfe0f4d20e1ebb3d6e7c" CreateImageInfo: type: "object" properties: id: type: "string" error: type: "string" status: type: "string" progress: type: "string" progressDetail: $ref: "#/definitions/ProgressDetail" PushImageInfo: type: "object" properties: error: type: "string" status: type: "string" progress: type: "string" progressDetail: $ref: "#/definitions/ProgressDetail" ErrorDetail: type: "object" properties: code: type: "integer" message: type: "string" ProgressDetail: type: "object" properties: current: type: "integer" total: type: "integer" ErrorResponse: description: "Represents an error." type: "object" required: ["message"] properties: message: description: "The error message." type: "string" x-nullable: false example: message: "Something went wrong." IdResponse: description: "Response to an API call that returns just an Id" type: "object" required: ["Id"] properties: Id: description: "The id of the newly created object." type: "string" x-nullable: false EndpointSettings: description: "Configuration for a network endpoint." type: "object" properties: # Configurations IPAMConfig: $ref: "#/definitions/EndpointIPAMConfig" Links: type: "array" items: type: "string" example: - "container_1" - "container_2" Aliases: type: "array" items: type: "string" example: - "server_x" - "server_y" # Operational data NetworkID: description: | Unique ID of the network. type: "string" example: "08754567f1f40222263eab4102e1c733ae697e8e354aa9cd6e18d7402835292a" EndpointID: description: | Unique ID for the service endpoint in a Sandbox. type: "string" example: "b88f5b905aabf2893f3cbc4ee42d1ea7980bbc0a92e2c8922b1e1795298afb0b" Gateway: description: | Gateway address for this network. type: "string" example: "172.17.0.1" IPAddress: description: | IPv4 address. type: "string" example: "172.17.0.4" IPPrefixLen: description: | Mask length of the IPv4 address. type: "integer" example: 16 IPv6Gateway: description: | IPv6 gateway address. type: "string" example: "2001:db8:2::100" GlobalIPv6Address: description: | Global IPv6 address. 
type: "string" example: "2001:db8::5689" GlobalIPv6PrefixLen: description: | Mask length of the global IPv6 address. type: "integer" format: "int64" example: 64 MacAddress: description: | MAC address for the endpoint on this network. type: "string" example: "02:42:ac:11:00:04" DriverOpts: description: | DriverOpts is a mapping of driver options and values. These options are passed directly to the driver and are driver specific. type: "object" x-nullable: true additionalProperties: type: "string" example: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" EndpointIPAMConfig: description: | EndpointIPAMConfig represents an endpoint's IPAM configuration. type: "object" x-nullable: true properties: IPv4Address: type: "string" example: "172.20.30.33" IPv6Address: type: "string" example: "2001:db8:abcd::3033" LinkLocalIPs: type: "array" items: type: "string" example: - "169.254.34.68" - "fe80::3468" PluginMount: type: "object" x-nullable: false required: [Name, Description, Settable, Source, Destination, Type, Options] properties: Name: type: "string" x-nullable: false example: "some-mount" Description: type: "string" x-nullable: false example: "This is a mount that's used by the plugin." Settable: type: "array" items: type: "string" Source: type: "string" example: "/var/lib/docker/plugins/" Destination: type: "string" x-nullable: false example: "/mnt/state" Type: type: "string" x-nullable: false example: "bind" Options: type: "array" items: type: "string" example: - "rbind" - "rw" PluginDevice: type: "object" required: [Name, Description, Settable, Path] x-nullable: false properties: Name: type: "string" x-nullable: false Description: type: "string" x-nullable: false Settable: type: "array" items: type: "string" Path: type: "string" example: "/dev/fuse" PluginEnv: type: "object" x-nullable: false required: [Name, Description, Settable, Value] properties: Name: x-nullable: false type: "string" Description: x-nullable: false type: "string" Settable: type: "array" items: type: "string" Value: type: "string" PluginInterfaceType: type: "object" x-nullable: false required: [Prefix, Capability, Version] properties: Prefix: type: "string" x-nullable: false Capability: type: "string" x-nullable: false Version: type: "string" x-nullable: false Plugin: description: "A plugin for the Engine API" type: "object" required: [Settings, Enabled, Config, Name] properties: Id: type: "string" example: "5724e2c8652da337ab2eedd19fc6fc0ec908e4bd907c7421bf6a8dfc70c4c078" Name: type: "string" x-nullable: false example: "tiborvass/sample-volume-plugin" Enabled: description: True if the plugin is running. False if the plugin is not running, only installed. type: "boolean" x-nullable: false example: true Settings: description: "Settings that can be modified by users." type: "object" x-nullable: false required: [Args, Devices, Env, Mounts] properties: Mounts: type: "array" items: $ref: "#/definitions/PluginMount" Env: type: "array" items: type: "string" example: - "DEBUG=0" Args: type: "array" items: type: "string" Devices: type: "array" items: $ref: "#/definitions/PluginDevice" PluginReference: description: "plugin remote reference used to push/pull the plugin" type: "string" x-nullable: false example: "localhost:5000/tiborvass/sample-volume-plugin:latest" Config: description: "The config of a plugin." 
type: "object" x-nullable: false required: - Description - Documentation - Interface - Entrypoint - WorkDir - Network - Linux - PidHost - PropagatedMount - IpcHost - Mounts - Env - Args properties: DockerVersion: description: "Docker Version used to create the plugin" type: "string" x-nullable: false example: "17.06.0-ce" Description: type: "string" x-nullable: false example: "A sample volume plugin for Docker" Documentation: type: "string" x-nullable: false example: "https://docs.docker.com/engine/extend/plugins/" Interface: description: "The interface between Docker and the plugin" x-nullable: false type: "object" required: [Types, Socket] properties: Types: type: "array" items: $ref: "#/definitions/PluginInterfaceType" example: - "docker.volumedriver/1.0" Socket: type: "string" x-nullable: false example: "plugins.sock" ProtocolScheme: type: "string" example: "some.protocol/v1.0" description: "Protocol to use for clients connecting to the plugin." enum: - "" - "moby.plugins.http/v1" Entrypoint: type: "array" items: type: "string" example: - "/usr/bin/sample-volume-plugin" - "/data" WorkDir: type: "string" x-nullable: false example: "/bin/" User: type: "object" x-nullable: false properties: UID: type: "integer" format: "uint32" example: 1000 GID: type: "integer" format: "uint32" example: 1000 Network: type: "object" x-nullable: false required: [Type] properties: Type: x-nullable: false type: "string" example: "host" Linux: type: "object" x-nullable: false required: [Capabilities, AllowAllDevices, Devices] properties: Capabilities: type: "array" items: type: "string" example: - "CAP_SYS_ADMIN" - "CAP_SYSLOG" AllowAllDevices: type: "boolean" x-nullable: false example: false Devices: type: "array" items: $ref: "#/definitions/PluginDevice" PropagatedMount: type: "string" x-nullable: false example: "/mnt/volumes" IpcHost: type: "boolean" x-nullable: false example: false PidHost: type: "boolean" x-nullable: false example: false Mounts: type: "array" items: $ref: "#/definitions/PluginMount" Env: type: "array" items: $ref: "#/definitions/PluginEnv" example: - Name: "DEBUG" Description: "If set, prints debug messages" Settable: null Value: "0" Args: type: "object" x-nullable: false required: [Name, Description, Settable, Value] properties: Name: x-nullable: false type: "string" example: "args" Description: x-nullable: false type: "string" example: "command line arguments" Settable: type: "array" items: type: "string" Value: type: "array" items: type: "string" rootfs: type: "object" properties: type: type: "string" example: "layers" diff_ids: type: "array" items: type: "string" example: - "sha256:675532206fbf3030b8458f88d6e26d4eb1577688a25efec97154c94e8b6b4887" - "sha256:e216a057b1cb1efc11f8a268f37ef62083e70b1b38323ba252e25ac88904a7e8" ObjectVersion: description: | The version number of the object such as node, service, etc. This is needed to avoid conflicting writes. The client must send the version number along with the modified specification when updating these objects. This approach ensures safe concurrency and determinism in that the change on the object may not be applied if the version number has changed from the last read. In other words, if two update requests specify the same base version, only one of the requests can succeed. As a result, two separate update requests that happen at the same time will not unintentionally overwrite each other. 
type: "object" properties: Index: type: "integer" format: "uint64" example: 373531 NodeSpec: type: "object" properties: Name: description: "Name for the node." type: "string" example: "my-node" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" Role: description: "Role of the node." type: "string" enum: - "worker" - "manager" example: "manager" Availability: description: "Availability of the node." type: "string" enum: - "active" - "pause" - "drain" example: "active" example: Availability: "active" Name: "node-name" Role: "manager" Labels: foo: "bar" Node: type: "object" properties: ID: type: "string" example: "24ifsmvkjbyhk" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: description: | Date and time at which the node was added to the swarm in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2016-08-18T10:44:24.496525531Z" UpdatedAt: description: | Date and time at which the node was last updated in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2017-08-09T07:09:37.632105588Z" Spec: $ref: "#/definitions/NodeSpec" Description: $ref: "#/definitions/NodeDescription" Status: $ref: "#/definitions/NodeStatus" ManagerStatus: $ref: "#/definitions/ManagerStatus" NodeDescription: description: | NodeDescription encapsulates the properties of the Node as reported by the agent. type: "object" properties: Hostname: type: "string" example: "bf3067039e47" Platform: $ref: "#/definitions/Platform" Resources: $ref: "#/definitions/ResourceObject" Engine: $ref: "#/definitions/EngineDescription" TLSInfo: $ref: "#/definitions/TLSInfo" Platform: description: | Platform represents the platform (Arch/OS). type: "object" properties: Architecture: description: | Architecture represents the hardware architecture (for example, `x86_64`). type: "string" example: "x86_64" OS: description: | OS represents the Operating System (for example, `linux` or `windows`). type: "string" example: "linux" EngineDescription: description: "EngineDescription provides information about an engine." type: "object" properties: EngineVersion: type: "string" example: "17.06.0" Labels: type: "object" additionalProperties: type: "string" example: foo: "bar" Plugins: type: "array" items: type: "object" properties: Type: type: "string" Name: type: "string" example: - Type: "Log" Name: "awslogs" - Type: "Log" Name: "fluentd" - Type: "Log" Name: "gcplogs" - Type: "Log" Name: "gelf" - Type: "Log" Name: "journald" - Type: "Log" Name: "json-file" - Type: "Log" Name: "logentries" - Type: "Log" Name: "splunk" - Type: "Log" Name: "syslog" - Type: "Network" Name: "bridge" - Type: "Network" Name: "host" - Type: "Network" Name: "ipvlan" - Type: "Network" Name: "macvlan" - Type: "Network" Name: "null" - Type: "Network" Name: "overlay" - Type: "Volume" Name: "local" - Type: "Volume" Name: "localhost:5000/vieux/sshfs:latest" - Type: "Volume" Name: "vieux/sshfs:latest" TLSInfo: description: | Information about the issuer of leaf TLS certificates and the trusted root CA certificate. type: "object" properties: TrustRoot: description: | The root CA certificate(s) that are used to validate leaf TLS certificates. type: "string" CertIssuerSubject: description: The base64-url-safe-encoded raw subject bytes of the issuer. type: "string" CertIssuerPublicKey: description: | The base64-url-safe-encoded raw public key bytes of the issuer. 
type: "string" example: TrustRoot: | -----BEGIN CERTIFICATE----- MIIBajCCARCgAwIBAgIUbYqrLSOSQHoxD8CwG6Bi2PJi9c8wCgYIKoZIzj0EAwIw EzERMA8GA1UEAxMIc3dhcm0tY2EwHhcNMTcwNDI0MjE0MzAwWhcNMzcwNDE5MjE0 MzAwWjATMREwDwYDVQQDEwhzd2FybS1jYTBZMBMGByqGSM49AgEGCCqGSM49AwEH A0IABJk/VyMPYdaqDXJb/VXh5n/1Yuv7iNrxV3Qb3l06XD46seovcDWs3IZNV1lf 3Skyr0ofcchipoiHkXBODojJydSjQjBAMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMB Af8EBTADAQH/MB0GA1UdDgQWBBRUXxuRcnFjDfR/RIAUQab8ZV/n4jAKBggqhkjO PQQDAgNIADBFAiAy+JTe6Uc3KyLCMiqGl2GyWGQqQDEcO3/YG36x7om65AIhAJvz pxv6zFeVEkAEEkqIYi0omA9+CjanB/6Bz4n1uw8H -----END CERTIFICATE----- CertIssuerSubject: "MBMxETAPBgNVBAMTCHN3YXJtLWNh" CertIssuerPublicKey: "MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEmT9XIw9h1qoNclv9VeHmf/Vi6/uI2vFXdBveXTpcPjqx6i9wNazchk1XWV/dKTKvSh9xyGKmiIeRcE4OiMnJ1A==" NodeStatus: description: | NodeStatus represents the status of a node. It provides the current status of the node, as seen by the manager. type: "object" properties: State: $ref: "#/definitions/NodeState" Message: type: "string" example: "" Addr: description: "IP address of the node." type: "string" example: "172.17.0.2" NodeState: description: "NodeState represents the state of a node." type: "string" enum: - "unknown" - "down" - "ready" - "disconnected" example: "ready" ManagerStatus: description: | ManagerStatus represents the status of a manager. It provides the current status of a node's manager component, if the node is a manager. x-nullable: true type: "object" properties: Leader: type: "boolean" default: false example: true Reachability: $ref: "#/definitions/Reachability" Addr: description: | The IP address and port at which the manager is reachable. type: "string" example: "10.0.0.46:2377" Reachability: description: "Reachability represents the reachability of a node." type: "string" enum: - "unknown" - "unreachable" - "reachable" example: "reachable" SwarmSpec: description: "User modifiable swarm configuration." type: "object" properties: Name: description: "Name of the swarm." type: "string" example: "default" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" example: com.example.corp.type: "production" com.example.corp.department: "engineering" Orchestration: description: "Orchestration configuration." type: "object" x-nullable: true properties: TaskHistoryRetentionLimit: description: | The number of historic tasks to keep per instance or node. If negative, never remove completed or failed tasks. type: "integer" format: "int64" example: 10 Raft: description: "Raft configuration." type: "object" properties: SnapshotInterval: description: "The number of log entries between snapshots." type: "integer" format: "uint64" example: 10000 KeepOldSnapshots: description: | The number of snapshots to keep beyond the current snapshot. type: "integer" format: "uint64" LogEntriesForSlowFollowers: description: | The number of log entries to keep around to sync up slow followers after a snapshot is created. type: "integer" format: "uint64" example: 500 ElectionTick: description: | The number of ticks that a follower will wait for a message from the leader before becoming a candidate and starting an election. `ElectionTick` must be greater than `HeartbeatTick`. A tick currently defaults to one second, so these translate directly to seconds currently, but this is NOT guaranteed. type: "integer" example: 3 HeartbeatTick: description: | The number of ticks between heartbeats. Every HeartbeatTick ticks, the leader will send a heartbeat to the followers. 
A tick currently defaults to one second, so these translate directly to seconds currently, but this is NOT guaranteed. type: "integer" example: 1 Dispatcher: description: "Dispatcher configuration." type: "object" x-nullable: true properties: HeartbeatPeriod: description: | The delay for an agent to send a heartbeat to the dispatcher. type: "integer" format: "int64" example: 5000000000 CAConfig: description: "CA configuration." type: "object" x-nullable: true properties: NodeCertExpiry: description: "The duration node certificates are issued for." type: "integer" format: "int64" example: 7776000000000000 ExternalCAs: description: | Configuration for forwarding signing requests to an external certificate authority. type: "array" items: type: "object" properties: Protocol: description: | Protocol for communication with the external CA (currently only `cfssl` is supported). type: "string" enum: - "cfssl" default: "cfssl" URL: description: | URL where certificate signing requests should be sent. type: "string" Options: description: | An object with key/value pairs that are interpreted as protocol-specific options for the external CA driver. type: "object" additionalProperties: type: "string" CACert: description: | The root CA certificate (in PEM format) this external CA uses to issue TLS certificates (assumed to be to the current swarm root CA certificate if not provided). type: "string" SigningCACert: description: | The desired signing CA certificate for all swarm node TLS leaf certificates, in PEM format. type: "string" SigningCAKey: description: | The desired signing CA key for all swarm node TLS leaf certificates, in PEM format. type: "string" ForceRotate: description: | An integer whose purpose is to force swarm to generate a new signing CA certificate and key, if none have been specified in `SigningCACert` and `SigningCAKey` format: "uint64" type: "integer" EncryptionConfig: description: "Parameters related to encryption-at-rest." type: "object" properties: AutoLockManagers: description: | If set, generate a key and use it to lock data stored on the managers. type: "boolean" example: false TaskDefaults: description: "Defaults for creating tasks in this cluster." type: "object" properties: LogDriver: description: | The log driver to use for tasks created in the orchestrator if unspecified by a service. Updating this value only affects new tasks. Existing tasks continue to use their previously configured log driver until recreated. type: "object" properties: Name: description: | The log driver to use as a default for new tasks. type: "string" example: "json-file" Options: description: | Driver-specific options for the selected log driver, specified as key/value pairs. type: "object" additionalProperties: type: "string" example: "max-file": "10" "max-size": "100m" # The Swarm information for `GET /info`. It is the same as `GET /swarm`, but # without `JoinTokens`. ClusterInfo: description: | ClusterInfo represents information about the swarm as is returned by the "/info" endpoint. Join-tokens are not included. x-nullable: true type: "object" properties: ID: description: "The ID of the swarm." type: "string" example: "abajmipo7b4xz5ip2nrla6b11" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: description: | Date and time at which the swarm was initialised in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds.
type: "string" format: "dateTime" example: "2016-08-18T10:44:24.496525531Z" UpdatedAt: description: | Date and time at which the swarm was last updated in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2017-08-09T07:09:37.632105588Z" Spec: $ref: "#/definitions/SwarmSpec" TLSInfo: $ref: "#/definitions/TLSInfo" RootRotationInProgress: description: | Whether there is currently a root CA rotation in progress for the swarm type: "boolean" example: false DataPathPort: description: | DataPathPort specifies the data path port number for data traffic. Acceptable port range is 1024 to 49151. If no port is set or is set to 0, the default port (4789) is used. type: "integer" format: "uint32" default: 4789 example: 4789 DefaultAddrPool: description: | Default Address Pool specifies default subnet pools for global scope networks. type: "array" items: type: "string" format: "CIDR" example: ["10.10.0.0/16", "20.20.0.0/16"] SubnetSize: description: | SubnetSize specifies the subnet size of the networks created from the default subnet pool. type: "integer" format: "uint32" maximum: 29 default: 24 example: 24 JoinTokens: description: | JoinTokens contains the tokens workers and managers need to join the swarm. type: "object" properties: Worker: description: | The token workers can use to join the swarm. type: "string" example: "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-1awxwuwd3z9j1z3puu7rcgdbx" Manager: description: | The token managers can use to join the swarm. type: "string" example: "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-7p73s1dx5in4tatdymyhg9hu2" Swarm: type: "object" allOf: - $ref: "#/definitions/ClusterInfo" - type: "object" properties: JoinTokens: $ref: "#/definitions/JoinTokens" TaskSpec: description: "User modifiable task configuration." type: "object" properties: PluginSpec: type: "object" description: | Plugin spec for the service. *(Experimental release only.)* <p><br /></p> > **Note**: ContainerSpec, NetworkAttachmentSpec, and PluginSpec are > mutually exclusive. PluginSpec is only used when the Runtime field > is set to `plugin`. NetworkAttachmentSpec is used when the Runtime > field is set to `attachment`. properties: Name: description: "The name or 'alias' to use for the plugin." type: "string" Remote: description: "The plugin image reference to use." type: "string" Disabled: description: "Disable the plugin once scheduled." type: "boolean" PluginPrivilege: type: "array" items: description: | Describes a permission accepted by the user upon installing the plugin. type: "object" properties: Name: type: "string" Description: type: "string" Value: type: "array" items: type: "string" ContainerSpec: type: "object" description: | Container spec for the service. <p><br /></p> > **Note**: ContainerSpec, NetworkAttachmentSpec, and PluginSpec are > mutually exclusive. PluginSpec is only used when the Runtime field > is set to `plugin`. NetworkAttachmentSpec is used when the Runtime > field is set to `attachment`. properties: Image: description: "The image name to use for the container" type: "string" Labels: description: "User-defined key/value data." type: "object" additionalProperties: type: "string" Command: description: "The command to be run in the image." type: "array" items: type: "string" Args: description: "Arguments to the command." 
type: "array" items: type: "string" Hostname: description: | The hostname to use for the container, as a valid [RFC 1123](https://tools.ietf.org/html/rfc1123) hostname. type: "string" Env: description: | A list of environment variables in the form `VAR=value`. type: "array" items: type: "string" Dir: description: "The working directory for commands to run in." type: "string" User: description: "The user inside the container." type: "string" Groups: type: "array" description: | A list of additional groups that the container process will run as. items: type: "string" Privileges: type: "object" description: "Security options for the container" properties: CredentialSpec: type: "object" description: "CredentialSpec for managed service account (Windows only)" properties: Config: type: "string" example: "0bt9dmxjvjiqermk6xrop3ekq" description: | Load credential spec from a Swarm Config with the given ID. The specified config must also be present in the Configs field with the Runtime property set. <p><br /></p> > **Note**: `CredentialSpec.File`, `CredentialSpec.Registry`, > and `CredentialSpec.Config` are mutually exclusive. File: type: "string" example: "spec.json" description: | Load credential spec from this file. The file is read by the daemon, and must be present in the `CredentialSpecs` subdirectory in the docker data directory, which defaults to `C:\ProgramData\Docker\` on Windows. For example, specifying `spec.json` loads `C:\ProgramData\Docker\CredentialSpecs\spec.json`. <p><br /></p> > **Note**: `CredentialSpec.File`, `CredentialSpec.Registry`, > and `CredentialSpec.Config` are mutually exclusive. Registry: type: "string" description: | Load credential spec from this value in the Windows registry. The specified registry value must be located in: `HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization\Containers\CredentialSpecs` <p><br /></p> > **Note**: `CredentialSpec.File`, `CredentialSpec.Registry`, > and `CredentialSpec.Config` are mutually exclusive. SELinuxContext: type: "object" description: "SELinux labels of the container" properties: Disable: type: "boolean" description: "Disable SELinux" User: type: "string" description: "SELinux user label" Role: type: "string" description: "SELinux role label" Type: type: "string" description: "SELinux type label" Level: type: "string" description: "SELinux level label" TTY: description: "Whether a pseudo-TTY should be allocated." type: "boolean" OpenStdin: description: "Open `stdin`" type: "boolean" ReadOnly: description: "Mount the container's root filesystem as read only." type: "boolean" Mounts: description: | Specification for mounts to be added to containers created as part of the service. type: "array" items: $ref: "#/definitions/Mount" StopSignal: description: "Signal to stop the container." type: "string" StopGracePeriod: description: | Amount of time to wait for the container to terminate before forcefully killing it. type: "integer" format: "int64" HealthCheck: $ref: "#/definitions/HealthConfig" Hosts: type: "array" description: | A list of hostname/IP mappings to add to the container's `hosts` file. The format of extra hosts is specified in the [hosts(5)](http://man7.org/linux/man-pages/man5/hosts.5.html) man page: IP_address canonical_hostname [aliases...] items: type: "string" DNSConfig: description: | Specification for DNS related configurations in resolver configuration file (`resolv.conf`). type: "object" properties: Nameservers: description: "The IP addresses of the name servers." 
type: "array" items: type: "string" Search: description: "A search list for host-name lookup." type: "array" items: type: "string" Options: description: | A list of internal resolver variables to be modified (e.g., `debug`, `ndots:3`, etc.). type: "array" items: type: "string" Secrets: description: | Secrets contains references to zero or more secrets that will be exposed to the service. type: "array" items: type: "object" properties: File: description: | File represents a specific target that is backed by a file. type: "object" properties: Name: description: | Name represents the final filename in the filesystem. type: "string" UID: description: "UID represents the file UID." type: "string" GID: description: "GID represents the file GID." type: "string" Mode: description: "Mode represents the FileMode of the file." type: "integer" format: "uint32" SecretID: description: | SecretID represents the ID of the specific secret that we're referencing. type: "string" SecretName: description: | SecretName is the name of the secret that this references, but this is just provided for lookup/display purposes. The secret in the reference will be identified by its ID. type: "string" Configs: description: | Configs contains references to zero or more configs that will be exposed to the service. type: "array" items: type: "object" properties: File: description: | File represents a specific target that is backed by a file. <p><br /><p> > **Note**: `Configs.File` and `Configs.Runtime` are mutually exclusive type: "object" properties: Name: description: | Name represents the final filename in the filesystem. type: "string" UID: description: "UID represents the file UID." type: "string" GID: description: "GID represents the file GID." type: "string" Mode: description: "Mode represents the FileMode of the file." type: "integer" format: "uint32" Runtime: description: | Runtime represents a target that is not mounted into the container but is used by the task <p><br /><p> > **Note**: `Configs.File` and `Configs.Runtime` are mutually > exclusive type: "object" ConfigID: description: | ConfigID represents the ID of the specific config that we're referencing. type: "string" ConfigName: description: | ConfigName is the name of the config that this references, but this is just provided for lookup/display purposes. The config in the reference will be identified by its ID. type: "string" Isolation: type: "string" description: | Isolation technology of the containers running the service. (Windows only) enum: - "default" - "process" - "hyperv" Init: description: | Run an init inside the container that forwards signals and reaps processes. This field is omitted if empty, and the default (as configured on the daemon) is used. type: "boolean" x-nullable: true Sysctls: description: | Set kernel namespaced parameters (sysctls) in the container. The Sysctls option on services accepts the same sysctls as are supported on containers. Note that while the same sysctls are supported, no guarantees or checks are made about their suitability for a clustered environment, and it's up to the user to determine whether a given sysctl will work properly in a Service. type: "object" additionalProperties: type: "string" # This option is not used by Windows containers CapabilityAdd: type: "array" description: | A list of kernel capabilities to add to the default set for the container.
items: type: "string" example: - "CAP_NET_RAW" - "CAP_SYS_ADMIN" - "CAP_SYS_CHROOT" - "CAP_SYSLOG" CapabilityDrop: type: "array" description: | A list of kernel capabilities to drop from the default set for the container. items: type: "string" example: - "CAP_NET_RAW" Ulimits: description: | A list of resource limits to set in the container. For example: `{"Name": "nofile", "Soft": 1024, "Hard": 2048}` type: "array" items: type: "object" properties: Name: description: "Name of ulimit" type: "string" Soft: description: "Soft limit" type: "integer" Hard: description: "Hard limit" type: "integer" NetworkAttachmentSpec: description: | Read-only spec type for non-swarm containers attached to swarm overlay networks. <p><br /></p> > **Note**: ContainerSpec, NetworkAttachmentSpec, and PluginSpec are > mutually exclusive. PluginSpec is only used when the Runtime field > is set to `plugin`. NetworkAttachmentSpec is used when the Runtime > field is set to `attachment`. type: "object" properties: ContainerID: description: "ID of the container represented by this task" type: "string" Resources: description: | Resource requirements which apply to each individual container created as part of the service. type: "object" properties: Limits: description: "Define resource limits." $ref: "#/definitions/Limit" Reservation: description: "Define resource reservations." $ref: "#/definitions/ResourceObject" RestartPolicy: description: | Specification for the restart policy which applies to containers created as part of this service. type: "object" properties: Condition: description: "Condition for restart." type: "string" enum: - "none" - "on-failure" - "any" Delay: description: "Delay between restart attempts." type: "integer" format: "int64" MaxAttempts: description: | Maximum attempts to restart a given container before giving up (default value is 0, which is ignored). type: "integer" format: "int64" default: 0 Window: description: | Window is the time window used to evaluate the restart policy (default value is 0, which is unbounded). type: "integer" format: "int64" default: 0 Placement: type: "object" properties: Constraints: description: | An array of constraint expressions to limit the set of nodes where a task can be scheduled. Constraint expressions can either use a _match_ (`==`) or _exclude_ (`!=`) rule. Multiple constraints find nodes that satisfy every expression (AND match). Constraints can match node or Docker Engine labels as follows: node attribute | matches | example ---------------------|--------------------------------|----------------------------------------------- `node.id` | Node ID | `node.id==2ivku8v2gvtg4` `node.hostname` | Node hostname | `node.hostname!=node-2` `node.role` | Node role (`manager`/`worker`) | `node.role==manager` `node.platform.os` | Node operating system | `node.platform.os==windows` `node.platform.arch` | Node architecture | `node.platform.arch==x86_64` `node.labels` | User-defined node labels | `node.labels.security==high` `engine.labels` | Docker Engine's labels | `engine.labels.operatingsystem==ubuntu-14.04` `engine.labels` apply to Docker Engine labels like operating system, drivers, etc. Swarm administrators add `node.labels` for operational purposes by using the [`node update endpoint`](#operation/NodeUpdate).
type: "array" items: type: "string" example: - "node.hostname!=node3.corp.example.com" - "node.role!=manager" - "node.labels.type==production" - "node.platform.os==linux" - "node.platform.arch==x86_64" Preferences: description: | Preferences provide a way to make the scheduler aware of factors such as topology. They are provided in order from highest to lowest precedence. type: "array" items: type: "object" properties: Spread: type: "object" properties: SpreadDescriptor: description: | label descriptor, such as `engine.labels.az`. type: "string" example: - Spread: SpreadDescriptor: "node.labels.datacenter" - Spread: SpreadDescriptor: "node.labels.rack" MaxReplicas: description: | Maximum number of replicas per node (default value is 0, which is unlimited) type: "integer" format: "int64" default: 0 Platforms: description: | Platforms stores all the platforms that the service's image can run on. This field is used in the platform filter for scheduling. If empty, then the platform filter is off, meaning there are no scheduling restrictions. type: "array" items: $ref: "#/definitions/Platform" ForceUpdate: description: | A counter that triggers an update even if no relevant parameters have been changed. type: "integer" Runtime: description: | Runtime is the type of runtime specified for the task executor. type: "string" Networks: description: "Specifies which networks the service should attach to." type: "array" items: $ref: "#/definitions/NetworkAttachmentConfig" LogDriver: description: | Specifies the log driver to use for tasks created from this spec. If not present, the default one for the swarm will be used, finally falling back to the engine default if not specified. type: "object" properties: Name: type: "string" Options: type: "object" additionalProperties: type: "string" TaskState: type: "string" enum: - "new" - "allocated" - "pending" - "assigned" - "accepted" - "preparing" - "ready" - "starting" - "running" - "complete" - "shutdown" - "failed" - "rejected" - "remove" - "orphaned" Task: type: "object" properties: ID: description: "The ID of the task." type: "string" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" UpdatedAt: type: "string" format: "dateTime" Name: description: "Name of the task." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" Spec: $ref: "#/definitions/TaskSpec" ServiceID: description: "The ID of the service this task is part of." type: "string" Slot: type: "integer" NodeID: description: "The ID of the node that this task is on." type: "string" AssignedGenericResources: $ref: "#/definitions/GenericResources" Status: type: "object" properties: Timestamp: type: "string" format: "dateTime" State: $ref: "#/definitions/TaskState" Message: type: "string" Err: type: "string" ContainerStatus: type: "object" properties: ContainerID: type: "string" PID: type: "integer" ExitCode: type: "integer" DesiredState: $ref: "#/definitions/TaskState" JobIteration: description: | If the Service this Task belongs to is a job-mode service, contains the JobIteration of the Service this Task was created for. Absent if the Task was created for a Replicated or Global Service.
$ref: "#/definitions/ObjectVersion" example: ID: "0kzzo1i0y4jz6027t0k7aezc7" Version: Index: 71 CreatedAt: "2016-06-07T21:07:31.171892745Z" UpdatedAt: "2016-06-07T21:07:31.376370513Z" Spec: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ServiceID: "9mnpnzenvg8p8tdbtq4wvbkcz" Slot: 1 NodeID: "60gvrl6tm78dmak4yl7srz94v" Status: Timestamp: "2016-06-07T21:07:31.290032978Z" State: "running" Message: "started" ContainerStatus: ContainerID: "e5d62702a1b48d01c3e02ca1e0212a250801fa8d67caca0b6f35919ebc12f035" PID: 677 DesiredState: "running" NetworksAttachments: - Network: ID: "4qvuz4ko70xaltuqbt8956gd1" Version: Index: 18 CreatedAt: "2016-06-07T20:31:11.912919752Z" UpdatedAt: "2016-06-07T21:07:29.955277358Z" Spec: Name: "ingress" Labels: com.docker.swarm.internal: "true" DriverConfiguration: {} IPAMOptions: Driver: {} Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" DriverState: Name: "overlay" Options: com.docker.network.driver.overlay.vxlanid_list: "256" IPAMOptions: Driver: Name: "default" Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" Addresses: - "10.255.0.10/16" AssignedGenericResources: - DiscreteResourceSpec: Kind: "SSD" Value: 3 - NamedResourceSpec: Kind: "GPU" Value: "UUID1" - NamedResourceSpec: Kind: "GPU" Value: "UUID2" ServiceSpec: description: "User modifiable configuration for a service." properties: Name: description: "Name of the service." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" TaskTemplate: $ref: "#/definitions/TaskSpec" Mode: description: "Scheduling mode for the service." type: "object" properties: Replicated: type: "object" properties: Replicas: type: "integer" format: "int64" Global: type: "object" ReplicatedJob: description: | The mode used for services with a finite number of tasks that run to a completed state. type: "object" properties: MaxConcurrent: description: | The maximum number of replicas to run simultaneously. type: "integer" format: "int64" default: 1 TotalCompletions: description: | The total number of replicas desired to reach the Completed state. If unset, will default to the value of `MaxConcurrent` type: "integer" format: "int64" GlobalJob: description: | The mode used for services which run a task to the completed state on each valid node. type: "object" UpdateConfig: description: "Specification for the update strategy of the service." type: "object" properties: Parallelism: description: | Maximum number of tasks to be updated in one iteration (0 means unlimited parallelism). type: "integer" format: "int64" Delay: description: "Amount of time between updates, in nanoseconds." type: "integer" format: "int64" FailureAction: description: | Action to take if an updated task fails to run, or stops running during the update. type: "string" enum: - "continue" - "pause" - "rollback" Monitor: description: | Amount of time to monitor each updated task for failures, in nanoseconds. type: "integer" format: "int64" MaxFailureRatio: description: | The fraction of tasks that may fail during an update before the failure action is invoked, specified as a floating point number between 0 and 1. type: "number" default: 0 Order: description: | The order of operations when rolling out an updated task. Either the old task is shut down before the new task is started, or the new task is started before the old task is shut down. 
type: "string" enum: - "stop-first" - "start-first" RollbackConfig: description: "Specification for the rollback strategy of the service." type: "object" properties: Parallelism: description: | Maximum number of tasks to be rolled back in one iteration (0 means unlimited parallelism). type: "integer" format: "int64" Delay: description: | Amount of time between rollback iterations, in nanoseconds. type: "integer" format: "int64" FailureAction: description: | Action to take if a rolled back task fails to run, or stops running during the rollback. type: "string" enum: - "continue" - "pause" Monitor: description: | Amount of time to monitor each rolled back task for failures, in nanoseconds. type: "integer" format: "int64" MaxFailureRatio: description: | The fraction of tasks that may fail during a rollback before the failure action is invoked, specified as a floating point number between 0 and 1. type: "number" default: 0 Order: description: | The order of operations when rolling back a task. Either the old task is shut down before the new task is started, or the new task is started before the old task is shut down. type: "string" enum: - "stop-first" - "start-first" Networks: description: "Specifies which networks the service should attach to." type: "array" items: $ref: "#/definitions/NetworkAttachmentConfig" EndpointSpec: $ref: "#/definitions/EndpointSpec" EndpointPortConfig: type: "object" properties: Name: type: "string" Protocol: type: "string" enum: - "tcp" - "udp" - "sctp" TargetPort: description: "The port inside the container." type: "integer" PublishedPort: description: "The port on the swarm hosts." type: "integer" PublishMode: description: | The mode in which the port is published. <p><br /></p> - "ingress" makes the target port accessible on every node, regardless of whether there is a task for the service running on that node or not. - "host" bypasses the routing mesh and publishes the port directly on the swarm node where that service is running. type: "string" enum: - "ingress" - "host" default: "ingress" example: "ingress" EndpointSpec: description: "Properties that can be configured to access and load balance a service." type: "object" properties: Mode: description: | The mode of resolution to use for internal load balancing between tasks. type: "string" enum: - "vip" - "dnsrr" default: "vip" Ports: description: | List of exposed ports that this service is accessible on from the outside. Ports can only be provided if `vip` resolution mode is used. type: "array" items: $ref: "#/definitions/EndpointPortConfig" Service: type: "object" properties: ID: type: "string" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" UpdatedAt: type: "string" format: "dateTime" Spec: $ref: "#/definitions/ServiceSpec" Endpoint: type: "object" properties: Spec: $ref: "#/definitions/EndpointSpec" Ports: type: "array" items: $ref: "#/definitions/EndpointPortConfig" VirtualIPs: type: "array" items: type: "object" properties: NetworkID: type: "string" Addr: type: "string" UpdateStatus: description: "The status of a service update." type: "object" properties: State: type: "string" enum: - "updating" - "paused" - "completed" StartedAt: type: "string" format: "dateTime" CompletedAt: type: "string" format: "dateTime" Message: type: "string" ServiceStatus: description: | The status of the service's tasks. Provided only when requested as part of a ServiceList operation.
type: "object" properties: RunningTasks: description: | The number of tasks for the service currently in the Running state. type: "integer" format: "uint64" example: 7 DesiredTasks: description: | The number of tasks for the service desired to be running. For replicated services, this is the replica count from the service spec. For global services, this is computed by taking count of all tasks for the service with a Desired State other than Shutdown. type: "integer" format: "uint64" example: 10 CompletedTasks: description: | The number of tasks for a job that are in the Completed state. This field must be cross-referenced with the service type, as the value of 0 may mean the service is not in a job mode, or it may mean the job-mode service has no tasks yet Completed. type: "integer" format: "uint64" JobStatus: description: | The status of the service when it is in one of ReplicatedJob or GlobalJob modes. Absent on Replicated and Global mode services. The JobIteration is an ObjectVersion, but unlike the Service's version, does not need to be sent with an update request. type: "object" properties: JobIteration: description: | JobIteration is a value increased each time a Job is executed, successfully or otherwise. "Executed", in this case, means the job as a whole has been started, not that an individual Task has been launched. A job is "Executed" when its ServiceSpec is updated. JobIteration can be used to disambiguate Tasks belonging to different executions of a job. Though JobIteration will increase with each subsequent execution, it may not necessarily increase by 1, and so JobIteration should not be used to $ref: "#/definitions/ObjectVersion" LastExecution: description: | The last time, as observed by the server, that this job was started. type: "string" format: "dateTime" example: ID: "9mnpnzenvg8p8tdbtq4wvbkcz" Version: Index: 19 CreatedAt: "2016-06-07T21:05:51.880065305Z" UpdatedAt: "2016-06-07T21:07:29.962229872Z" Spec: Name: "hopeful_cori" TaskTemplate: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ForceUpdate: 0 Mode: Replicated: Replicas: 1 UpdateConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 RollbackConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 EndpointSpec: Mode: "vip" Ports: - Protocol: "tcp" TargetPort: 6379 PublishedPort: 30001 Endpoint: Spec: Mode: "vip" Ports: - Protocol: "tcp" TargetPort: 6379 PublishedPort: 30001 Ports: - Protocol: "tcp" TargetPort: 6379 PublishedPort: 30001 VirtualIPs: - NetworkID: "4qvuz4ko70xaltuqbt8956gd1" Addr: "10.255.0.2/16" - NetworkID: "4qvuz4ko70xaltuqbt8956gd1" Addr: "10.255.0.3/16" ImageDeleteResponseItem: type: "object" properties: Untagged: description: "The image ID of an image that was untagged" type: "string" Deleted: description: "The image ID of an image that was deleted" type: "string" ServiceUpdateResponse: type: "object" properties: Warnings: description: "Optional warning messages" type: "array" items: type: "string" example: Warning: "unable to pin image doesnotexist:latest to digest: image library/doesnotexist:latest not found" ContainerSummary: type: "array" items: type: "object" properties: Id: description: "The ID of this container" type: "string" x-go-name: "ID" Names: description: "The names that this container has been given" type: "array" items: type: "string" Image: description: "The name of the image used when creating 
this container" type: "string" ImageID: description: "The ID of the image that this container was created from" type: "string" Command: description: "Command to run when starting the container" type: "string" Created: description: "When the container was created" type: "integer" format: "int64" Ports: description: "The ports exposed by this container" type: "array" items: $ref: "#/definitions/Port" SizeRw: description: "The size of files that have been created or changed by this container" type: "integer" format: "int64" SizeRootFs: description: "The total size of all the files in this container" type: "integer" format: "int64" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" State: description: "The state of this container (e.g. `Exited`)" type: "string" Status: description: "Additional human-readable status of this container (e.g. `Exit 0`)" type: "string" HostConfig: type: "object" properties: NetworkMode: type: "string" NetworkSettings: description: "A summary of the container's network settings" type: "object" properties: Networks: type: "object" additionalProperties: $ref: "#/definitions/EndpointSettings" Mounts: type: "array" items: $ref: "#/definitions/Mount" Driver: description: "Driver represents a driver (network, logging, secrets)." type: "object" required: [Name] properties: Name: description: "Name of the driver." type: "string" x-nullable: false example: "some-driver" Options: description: "Key/value map of driver-specific options." type: "object" x-nullable: false additionalProperties: type: "string" example: OptionA: "value for driver-specific option A" OptionB: "value for driver-specific option B" SecretSpec: type: "object" properties: Name: description: "User-defined name of the secret." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" example: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Data: description: | Base64-url-safe-encoded ([RFC 4648](https://tools.ietf.org/html/rfc4648#section-5)) data to store as secret. This field is only used to _create_ a secret, and is not returned by other endpoints. type: "string" example: "" Driver: description: | Name of the secrets driver used to fetch the secret's value from an external secret store. $ref: "#/definitions/Driver" Templating: description: | Templating driver, if applicable Templating controls whether and how to evaluate the config payload as a template. If no driver is set, no templating is used. $ref: "#/definitions/Driver" Secret: type: "object" properties: ID: type: "string" example: "blt1owaxmitz71s9v5zh81zun" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" example: "2017-07-20T13:55:28.678958722Z" UpdatedAt: type: "string" format: "dateTime" example: "2017-07-20T13:55:28.678958722Z" Spec: $ref: "#/definitions/SecretSpec" ConfigSpec: type: "object" properties: Name: description: "User-defined name of the config." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" Data: description: | Base64-url-safe-encoded ([RFC 4648](https://tools.ietf.org/html/rfc4648#section-5)) config data. type: "string" Templating: description: | Templating driver, if applicable Templating controls whether and how to evaluate the config payload as a template. If no driver is set, no templating is used. 
$ref: "#/definitions/Driver" Config: type: "object" properties: ID: type: "string" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" UpdatedAt: type: "string" format: "dateTime" Spec: $ref: "#/definitions/ConfigSpec" ContainerState: description: | ContainerState stores container's running state. It's part of ContainerJSONBase and will be returned by the "inspect" command. type: "object" properties: Status: description: | String representation of the container state. Can be one of "created", "running", "paused", "restarting", "removing", "exited", or "dead". type: "string" enum: ["created", "running", "paused", "restarting", "removing", "exited", "dead"] example: "running" Running: description: | Whether this container is running. Note that a running container can be _paused_. The `Running` and `Paused` booleans are not mutually exclusive: When pausing a container (on Linux), the freezer cgroup is used to suspend all processes in the container. Freezing the process requires the process to be running. As a result, paused containers are both `Running` _and_ `Paused`. Use the `Status` field instead to determine if a container's state is "running". type: "boolean" example: true Paused: description: "Whether this container is paused." type: "boolean" example: false Restarting: description: "Whether this container is restarting." type: "boolean" example: false OOMKilled: description: | Whether this container has been killed because it ran out of memory. type: "boolean" example: false Dead: type: "boolean" example: false Pid: description: "The process ID of this container" type: "integer" example: 1234 ExitCode: description: "The last exit code of this container" type: "integer" example: 0 Error: type: "string" StartedAt: description: "The time when this container was last started." type: "string" example: "2020-01-06T09:06:59.461876391Z" FinishedAt: description: "The time when this container last exited." type: "string" example: "2020-01-06T09:07:59.461876391Z" Health: x-nullable: true $ref: "#/definitions/Health" SystemVersion: type: "object" description: | Response of Engine API: GET "/version" properties: Platform: type: "object" required: [Name] properties: Name: type: "string" Components: type: "array" description: | Information about system components items: type: "object" x-go-name: ComponentVersion required: [Name, Version] properties: Name: description: | Name of the component type: "string" example: "Engine" Version: description: | Version of the component type: "string" x-nullable: false example: "19.03.12" Details: description: | Key/value pairs of strings with additional information about the component. These values are intended for informational purposes only, and their content is not defined, and not part of the API specification. These messages can be printed by the client as information to the user. type: "object" x-nullable: true Version: description: "The version of the daemon" type: "string" example: "19.03.12" ApiVersion: description: | The default (and highest) API version that is supported by the daemon type: "string" example: "1.40" MinAPIVersion: description: | The minimum API version that is supported by the daemon type: "string" example: "1.12" GitCommit: description: | The Git commit of the source code that was used to build the daemon type: "string" example: "48a66213fe" GoVersion: description: | The version Go used to compile the daemon, and the version of the Go runtime in use. 
type: "string" example: "go1.13.14" Os: description: | The operating system that the daemon is running on ("linux" or "windows") type: "string" example: "linux" Arch: description: | The architecture that the daemon is running on type: "string" example: "amd64" KernelVersion: description: | The kernel version (`uname -r`) that the daemon is running on. This field is omitted when empty. type: "string" example: "4.19.76-linuxkit" Experimental: description: | Indicates if the daemon is started with experimental features enabled. This field is omitted when empty / false. type: "boolean" example: true BuildTime: description: | The date and time that the daemon was compiled. type: "string" example: "2020-06-22T15:49:27.000000000+00:00" SystemInfo: type: "object" properties: ID: description: | Unique identifier of the daemon. <p><br /></p> > **Note**: The format of the ID itself is not part of the API, and > should not be considered stable. type: "string" example: "7TRN:IPZB:QYBB:VPBQ:UMPP:KARE:6ZNR:XE6T:7EWV:PKF4:ZOJD:TPYS" Containers: description: "Total number of containers on the host." type: "integer" example: 14 ContainersRunning: description: | Number of containers with status `"running"`. type: "integer" example: 3 ContainersPaused: description: | Number of containers with status `"paused"`. type: "integer" example: 1 ContainersStopped: description: | Number of containers with status `"stopped"`. type: "integer" example: 10 Images: description: | Total number of images on the host. Both _tagged_ and _untagged_ (dangling) images are counted. type: "integer" example: 508 Driver: description: "Name of the storage driver in use." type: "string" example: "overlay2" DriverStatus: description: | Information specific to the storage driver, provided as "label" / "value" pairs. This information is provided by the storage driver, and formatted in a way consistent with the output of `docker info` on the command line. <p><br /></p> > **Note**: The information returned in this field, including the > formatting of values and labels, should not be considered stable, > and may change without notice. type: "array" items: type: "array" items: type: "string" example: - ["Backing Filesystem", "extfs"] - ["Supports d_type", "true"] - ["Native Overlay Diff", "true"] DockerRootDir: description: | Root directory of persistent Docker state. Defaults to `/var/lib/docker` on Linux, and `C:\ProgramData\docker` on Windows. type: "string" example: "/var/lib/docker" Plugins: $ref: "#/definitions/PluginsInfo" MemoryLimit: description: "Indicates if the host has memory limit support enabled." type: "boolean" example: true SwapLimit: description: "Indicates if the host has memory swap limit support enabled." type: "boolean" example: true KernelMemory: description: | Indicates if the host has kernel memory limit support enabled. <p><br /></p> > **Deprecated**: This field is deprecated as the kernel 5.4 deprecated > `kmem.limit_in_bytes`. type: "boolean" example: true CpuCfsPeriod: description: | Indicates if CPU CFS(Completely Fair Scheduler) period is supported by the host. type: "boolean" example: true CpuCfsQuota: description: | Indicates if CPU CFS(Completely Fair Scheduler) quota is supported by the host. type: "boolean" example: true CPUShares: description: | Indicates if CPU Shares limiting is supported by the host. type: "boolean" example: true CPUSet: description: | Indicates if CPUsets (cpuset.cpus, cpuset.mems) are supported by the host. 
See [cpuset(7)](https://www.kernel.org/doc/Documentation/cgroup-v1/cpusets.txt) type: "boolean" example: true PidsLimit: description: "Indicates if the host kernel has PID limit support enabled." type: "boolean" example: true OomKillDisable: description: "Indicates if OOM killer disable is supported on the host." type: "boolean" IPv4Forwarding: description: "Indicates IPv4 forwarding is enabled." type: "boolean" example: true BridgeNfIptables: description: "Indicates if `bridge-nf-call-iptables` is available on the host." type: "boolean" example: true BridgeNfIp6tables: description: "Indicates if `bridge-nf-call-ip6tables` is available on the host." type: "boolean" example: true Debug: description: | Indicates if the daemon is running in debug-mode / with debug-level logging enabled. type: "boolean" example: true NFd: description: | The total number of file Descriptors in use by the daemon process. This information is only returned if debug-mode is enabled. type: "integer" example: 64 NGoroutines: description: | The number of goroutines that currently exist. This information is only returned if debug-mode is enabled. type: "integer" example: 174 SystemTime: description: | Current system-time in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" example: "2017-08-08T20:28:29.06202363Z" LoggingDriver: description: | The logging driver to use as a default for new containers. type: "string" CgroupDriver: description: | The driver to use for managing cgroups. type: "string" enum: ["cgroupfs", "systemd", "none"] default: "cgroupfs" example: "cgroupfs" CgroupVersion: description: | The version of the cgroup. type: "string" enum: ["1", "2"] default: "1" example: "1" NEventsListener: description: "Number of event listeners subscribed." type: "integer" example: 30 KernelVersion: description: | Kernel version of the host. On Linux, this information obtained from `uname`. On Windows this information is queried from the <kbd>HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\</kbd> registry value, for example _"10.0 14393 (14393.1198.amd64fre.rs1_release_sec.170427-1353)"_. type: "string" example: "4.9.38-moby" OperatingSystem: description: | Name of the host's operating system, for example: "Ubuntu 16.04.2 LTS" or "Windows Server 2016 Datacenter" type: "string" example: "Alpine Linux v3.5" OSVersion: description: | Version of the host's operating system <p><br /></p> > **Note**: The information returned in this field, including its > very existence, and the formatting of values, should not be considered > stable, and may change without notice. type: "string" example: "16.04" OSType: description: | Generic type of the operating system of the host, as returned by the Go runtime (`GOOS`). Currently returned values are "linux" and "windows". A full list of possible values can be found in the [Go documentation](https://golang.org/doc/install/source#environment). type: "string" example: "linux" Architecture: description: | Hardware architecture of the host, as returned by the Go runtime (`GOARCH`). A full list of possible values can be found in the [Go documentation](https://golang.org/doc/install/source#environment). type: "string" example: "x86_64" NCPU: description: | The number of logical CPUs usable by the daemon. The number of available CPUs is checked by querying the operating system when the daemon starts. Changes to operating system CPU allocation after the daemon is started are not reflected. 
type: "integer" example: 4 MemTotal: description: | Total amount of physical memory available on the host, in bytes. type: "integer" format: "int64" example: 2095882240 IndexServerAddress: description: | Address / URL of the index server that is used for image search, and as a default for user authentication for Docker Hub and Docker Cloud. default: "https://index.docker.io/v1/" type: "string" example: "https://index.docker.io/v1/" RegistryConfig: $ref: "#/definitions/RegistryServiceConfig" GenericResources: $ref: "#/definitions/GenericResources" HttpProxy: description: | HTTP-proxy configured for the daemon. This value is obtained from the [`HTTP_PROXY`](https://www.gnu.org/software/wget/manual/html_node/Proxies.html) environment variable. Credentials ([user info component](https://tools.ietf.org/html/rfc3986#section-3.2.1)) in the proxy URL are masked in the API response. Containers do not automatically inherit this configuration. type: "string" example: "http://xxxxx:[email protected]:8080" HttpsProxy: description: | HTTPS-proxy configured for the daemon. This value is obtained from the [`HTTPS_PROXY`](https://www.gnu.org/software/wget/manual/html_node/Proxies.html) environment variable. Credentials ([user info component](https://tools.ietf.org/html/rfc3986#section-3.2.1)) in the proxy URL are masked in the API response. Containers do not automatically inherit this configuration. type: "string" example: "https://xxxxx:[email protected]:4443" NoProxy: description: | Comma-separated list of domain extensions for which no proxy should be used. This value is obtained from the [`NO_PROXY`](https://www.gnu.org/software/wget/manual/html_node/Proxies.html) environment variable. Containers do not automatically inherit this configuration. type: "string" example: "*.local, 169.254/16" Name: description: "Hostname of the host." type: "string" example: "node5.corp.example.com" Labels: description: | User-defined labels (key/value metadata) as set on the daemon. <p><br /></p> > **Note**: When part of a Swarm, nodes can both have _daemon_ labels, > set through the daemon configuration, and _node_ labels, set from a > manager node in the Swarm. Node labels are not included in this > field. Node labels can be retrieved using the `/nodes/(id)` endpoint > on a manager node in the Swarm. type: "array" items: type: "string" example: ["storage=ssd", "production"] ExperimentalBuild: description: | Indicates if experimental features are enabled on the daemon. type: "boolean" example: true ServerVersion: description: | Version string of the daemon. > **Note**: the [standalone Swarm API](https://docs.docker.com/swarm/swarm-api/) > returns the Swarm version instead of the daemon version, for example > `swarm/1.2.8`. type: "string" example: "17.06.0-ce" ClusterStore: description: | URL of the distributed storage backend. The storage backend is used for multihost networking (to store network and endpoint information) and by the node discovery mechanism. <p><br /></p> > **Deprecated**: This field is only propagated when using standalone Swarm > mode, and overlay networking using an external k/v store. Overlay > networks with Swarm mode enabled use the built-in raft store, and > this field will be empty. type: "string" example: "consul://consul.corp.example.com:8600/some/path" ClusterAdvertise: description: | The network endpoint that the Engine advertises for the purpose of node discovery. ClusterAdvertise is a `host:port` combination on which the daemon is reachable by other hosts. 
<p><br /></p> > **Deprecated**: This field is only propagated when using standalone Swarm > mode, and overlay networking using an external k/v store. Overlay > networks with Swarm mode enabled use the built-in raft store, and > this field will be empty. type: "string" example: "node5.corp.example.com:8000" Runtimes: description: | List of [OCI compliant](https://github.com/opencontainers/runtime-spec) runtimes configured on the daemon. Keys hold the "name" used to reference the runtime. The Docker daemon relies on an OCI compliant runtime (invoked via the `containerd` daemon) as its interface to the Linux kernel namespaces, cgroups, and SELinux. The default runtime is `runc`, and automatically configured. Additional runtimes can be configured by the user and will be listed here. type: "object" additionalProperties: $ref: "#/definitions/Runtime" default: runc: path: "runc" example: runc: path: "runc" runc-master: path: "/go/bin/runc" custom: path: "/usr/local/bin/my-oci-runtime" runtimeArgs: ["--debug", "--systemd-cgroup=false"] DefaultRuntime: description: | Name of the default OCI runtime that is used when starting containers. The default can be overridden per-container at create time. type: "string" default: "runc" example: "runc" Swarm: $ref: "#/definitions/SwarmInfo" LiveRestoreEnabled: description: | Indicates if live restore is enabled. If enabled, containers are kept running when the daemon is shutdown or upon daemon start if running containers are detected. type: "boolean" default: false example: false Isolation: description: | Represents the isolation technology to use as a default for containers. The supported values are platform-specific. If no isolation value is specified on daemon start, on Windows client, the default is `hyperv`, and on Windows server, the default is `process`. This option is currently not used on other platforms. default: "default" type: "string" enum: - "default" - "hyperv" - "process" InitBinary: description: | Name and, optional, path of the `docker-init` binary. If the path is omitted, the daemon searches the host's `$PATH` for the binary and uses the first result. type: "string" example: "docker-init" ContainerdCommit: $ref: "#/definitions/Commit" RuncCommit: $ref: "#/definitions/Commit" InitCommit: $ref: "#/definitions/Commit" SecurityOptions: description: | List of security features that are enabled on the daemon, such as apparmor, seccomp, SELinux, user-namespaces (userns), and rootless. Additional configuration options for each security feature may be present, and are included as a comma-separated list of key/value pairs. type: "array" items: type: "string" example: - "name=apparmor" - "name=seccomp,profile=default" - "name=selinux" - "name=userns" - "name=rootless" ProductLicense: description: | Reports a summary of the product license on the daemon. If a commercial license has been applied to the daemon, information such as number of nodes, and expiration are included. type: "string" example: "Community Engine" DefaultAddressPools: description: | List of custom default address pools for local networks, which can be specified in the daemon.json file or dockerd option. Example: a Base "10.10.0.0/16" with Size 24 will define the set of 256 10.10.[0-255].0/24 address pools. 
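The `SystemVersion` and `SystemInfo` schemas above describe the payloads of `GET /version` and `GET /info`. As a rough client-side sketch (not part of this specification), the snippet below decodes a handful of `SystemVersion` fields over the daemon's Unix socket; the default socket path and the unversioned request path are assumptions that may need adjusting for a given daemon.

```go
package main

import (
	"context"
	"encoding/json"
	"fmt"
	"net"
	"net/http"
)

// Partial mapping of the SystemVersion object; only a few fields are decoded.
type systemVersion struct {
	Version       string `json:"Version"`
	ApiVersion    string `json:"ApiVersion"`
	MinAPIVersion string `json:"MinAPIVersion"`
	Os            string `json:"Os"`
	Arch          string `json:"Arch"`
}

func main() {
	// Assumed: the daemon listens on the default Unix socket.
	client := &http.Client{
		Transport: &http.Transport{
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				return (&net.Dialer{}).DialContext(ctx, "unix", "/var/run/docker.sock")
			},
		},
	}

	resp, err := client.Get("http://localhost/version")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var v systemVersion
	if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
		panic(err)
	}
	fmt.Printf("engine %s (API %s, min %s) on %s/%s\n",
		v.Version, v.ApiVersion, v.MinAPIVersion, v.Os, v.Arch)
}
```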
type: "array" items: type: "object" properties: Base: description: "The network address in CIDR format" type: "string" example: "10.10.0.0/16" Size: description: "The network pool size" type: "integer" example: "24" Warnings: description: | List of warnings / informational messages about missing features, or issues related to the daemon configuration. These messages can be printed by the client as information to the user. type: "array" items: type: "string" example: - "WARNING: No memory limit support" - "WARNING: bridge-nf-call-iptables is disabled" - "WARNING: bridge-nf-call-ip6tables is disabled" # PluginsInfo is a temp struct holding Plugins name # registered with docker daemon. It is used by Info struct PluginsInfo: description: | Available plugins per type. <p><br /></p> > **Note**: Only unmanaged (V1) plugins are included in this list. > V1 plugins are "lazily" loaded, and are not returned in this list > if there is no resource using the plugin. type: "object" properties: Volume: description: "Names of available volume-drivers, and network-driver plugins." type: "array" items: type: "string" example: ["local"] Network: description: "Names of available network-drivers, and network-driver plugins." type: "array" items: type: "string" example: ["bridge", "host", "ipvlan", "macvlan", "null", "overlay"] Authorization: description: "Names of available authorization plugins." type: "array" items: type: "string" example: ["img-authz-plugin", "hbm"] Log: description: "Names of available logging-drivers, and logging-driver plugins." type: "array" items: type: "string" example: ["awslogs", "fluentd", "gcplogs", "gelf", "journald", "json-file", "logentries", "splunk", "syslog"] RegistryServiceConfig: description: | RegistryServiceConfig stores daemon registry services configuration. type: "object" x-nullable: true properties: AllowNondistributableArtifactsCIDRs: description: | List of IP ranges to which nondistributable artifacts can be pushed, using the CIDR syntax [RFC 4632](https://tools.ietf.org/html/4632). Some images (for example, Windows base images) contain artifacts whose distribution is restricted by license. When these images are pushed to a registry, restricted artifacts are not included. This configuration override this behavior, and enables the daemon to push nondistributable artifacts to all registries whose resolved IP address is within the subnet described by the CIDR syntax. This option is useful when pushing images containing nondistributable artifacts to a registry on an air-gapped network so hosts on that network can pull the images without connecting to another server. > **Warning**: Nondistributable artifacts typically have restrictions > on how and where they can be distributed and shared. Only use this > feature to push artifacts to private registries and ensure that you > are in compliance with any terms that cover redistributing > nondistributable artifacts. type: "array" items: type: "string" example: ["::1/128", "127.0.0.0/8"] AllowNondistributableArtifactsHostnames: description: | List of registry hostnames to which nondistributable artifacts can be pushed, using the format `<hostname>[:<port>]` or `<IP address>[:<port>]`. Some images (for example, Windows base images) contain artifacts whose distribution is restricted by license. When these images are pushed to a registry, restricted artifacts are not included. This configuration override this behavior for the specified registries. 
This option is useful when pushing images containing nondistributable artifacts to a registry on an air-gapped network so hosts on that network can pull the images without connecting to another server. > **Warning**: Nondistributable artifacts typically have restrictions > on how and where they can be distributed and shared. Only use this > feature to push artifacts to private registries and ensure that you > are in compliance with any terms that cover redistributing > nondistributable artifacts. type: "array" items: type: "string" example: ["registry.internal.corp.example.com:3000", "[2001:db8:a0b:12f0::1]:443"] InsecureRegistryCIDRs: description: | List of IP ranges of insecure registries, using the CIDR syntax ([RFC 4632](https://tools.ietf.org/html/4632)). Insecure registries accept un-encrypted (HTTP) and/or untrusted (HTTPS with certificates from unknown CAs) communication. By default, local registries (`127.0.0.0/8`) are configured as insecure. All other registries are secure. Communicating with an insecure registry is not possible if the daemon assumes that registry is secure. This configuration override this behavior, insecure communication with registries whose resolved IP address is within the subnet described by the CIDR syntax. Registries can also be marked insecure by hostname. Those registries are listed under `IndexConfigs` and have their `Secure` field set to `false`. > **Warning**: Using this option can be useful when running a local > registry, but introduces security vulnerabilities. This option > should therefore ONLY be used for testing purposes. For increased > security, users should add their CA to their system's list of trusted > CAs instead of enabling this option. type: "array" items: type: "string" example: ["::1/128", "127.0.0.0/8"] IndexConfigs: type: "object" additionalProperties: $ref: "#/definitions/IndexInfo" example: "127.0.0.1:5000": "Name": "127.0.0.1:5000" "Mirrors": [] "Secure": false "Official": false "[2001:db8:a0b:12f0::1]:80": "Name": "[2001:db8:a0b:12f0::1]:80" "Mirrors": [] "Secure": false "Official": false "docker.io": Name: "docker.io" Mirrors: ["https://hub-mirror.corp.example.com:5000/"] Secure: true Official: true "registry.internal.corp.example.com:3000": Name: "registry.internal.corp.example.com:3000" Mirrors: [] Secure: false Official: false Mirrors: description: | List of registry URLs that act as a mirror for the official (`docker.io`) registry. type: "array" items: type: "string" example: - "https://hub-mirror.corp.example.com:5000/" - "https://[2001:db8:a0b:12f0::1]/" IndexInfo: description: IndexInfo contains information about a registry. type: "object" x-nullable: true properties: Name: description: | Name of the registry, such as "docker.io". type: "string" example: "docker.io" Mirrors: description: | List of mirrors, expressed as URIs. type: "array" items: type: "string" example: - "https://hub-mirror.corp.example.com:5000/" - "https://registry-2.docker.io/" - "https://registry-3.docker.io/" Secure: description: | Indicates if the registry is part of the list of insecure registries. If `false`, the registry is insecure. Insecure registries accept un-encrypted (HTTP) and/or untrusted (HTTPS with certificates from unknown CAs) communication. > **Warning**: Insecure registries can be useful when running a local > registry. However, because its use creates security vulnerabilities > it should ONLY be enabled for testing purposes. 
For increased > security, users should add their CA to their system's list of > trusted CAs instead of enabling this option. type: "boolean" example: true Official: description: | Indicates whether this is an official registry (i.e., Docker Hub / docker.io) type: "boolean" example: true Runtime: description: | Runtime describes an [OCI compliant](https://github.com/opencontainers/runtime-spec) runtime. The runtime is invoked by the daemon via the `containerd` daemon. OCI runtimes act as an interface to the Linux kernel namespaces, cgroups, and SELinux. type: "object" properties: path: description: | Name and, optional, path, of the OCI executable binary. If the path is omitted, the daemon searches the host's `$PATH` for the binary and uses the first result. type: "string" example: "/usr/local/bin/my-oci-runtime" runtimeArgs: description: | List of command-line arguments to pass to the runtime when invoked. type: "array" x-nullable: true items: type: "string" example: ["--debug", "--systemd-cgroup=false"] Commit: description: | Commit holds the Git-commit (SHA1) that a binary was built from, as reported in the version-string of external tools, such as `containerd`, or `runC`. type: "object" properties: ID: description: "Actual commit ID of external tool." type: "string" example: "cfb82a876ecc11b5ca0977d1733adbe58599088a" Expected: description: | Commit ID of external tool expected by dockerd as set at build time. type: "string" example: "2d41c047c83e09a6d61d464906feb2a2f3c52aa4" SwarmInfo: description: | Represents generic information about swarm. type: "object" properties: NodeID: description: "Unique identifier of for this node in the swarm." type: "string" default: "" example: "k67qz4598weg5unwwffg6z1m1" NodeAddr: description: | IP address at which this node can be reached by other nodes in the swarm. type: "string" default: "" example: "10.0.0.46" LocalNodeState: $ref: "#/definitions/LocalNodeState" ControlAvailable: type: "boolean" default: false example: true Error: type: "string" default: "" RemoteManagers: description: | List of ID's and addresses of other managers in the swarm. type: "array" default: null x-nullable: true items: $ref: "#/definitions/PeerNode" example: - NodeID: "71izy0goik036k48jg985xnds" Addr: "10.0.0.158:2377" - NodeID: "79y6h1o4gv8n120drcprv5nmc" Addr: "10.0.0.159:2377" - NodeID: "k67qz4598weg5unwwffg6z1m1" Addr: "10.0.0.46:2377" Nodes: description: "Total number of nodes in the swarm." type: "integer" x-nullable: true example: 4 Managers: description: "Total number of managers in the swarm." type: "integer" x-nullable: true example: 3 Cluster: $ref: "#/definitions/ClusterInfo" LocalNodeState: description: "Current local status of this node." type: "string" default: "" enum: - "" - "inactive" - "pending" - "active" - "error" - "locked" example: "active" PeerNode: description: "Represents a peer-node in the swarm" properties: NodeID: description: "Unique identifier of for this node in the swarm." type: "string" Addr: description: | IP address and ports at which this node can be reached. type: "string" NetworkAttachmentConfig: description: | Specifies how a service should be attached to a particular network. type: "object" properties: Target: description: | The target network for attachment. Must be a network name or ID. type: "string" Aliases: description: | Discoverable alternate names for the service on this network. type: "array" items: type: "string" DriverOpts: description: | Driver attachment options for the network target. 
type: "object" additionalProperties: type: "string" paths: /containers/json: get: summary: "List containers" description: | Returns a list of containers. For details on the format, see the [inspect endpoint](#operation/ContainerInspect). Note that it uses a different, smaller representation of a container than inspecting a single container. For example, the list of linked containers is not propagated . operationId: "ContainerList" produces: - "application/json" parameters: - name: "all" in: "query" description: | Return all containers. By default, only running containers are shown. type: "boolean" default: false - name: "limit" in: "query" description: | Return this number of most recently created containers, including non-running ones. type: "integer" - name: "size" in: "query" description: | Return the size of container as fields `SizeRw` and `SizeRootFs`. type: "boolean" default: false - name: "filters" in: "query" description: | Filters to process on the container list, encoded as JSON (a `map[string][]string`). For example, `{"status": ["paused"]}` will only return paused containers. Available filters: - `ancestor`=(`<image-name>[:<tag>]`, `<image id>`, or `<image@digest>`) - `before`=(`<container id>` or `<container name>`) - `expose`=(`<port>[/<proto>]`|`<startport-endport>/[<proto>]`) - `exited=<int>` containers with exit code of `<int>` - `health`=(`starting`|`healthy`|`unhealthy`|`none`) - `id=<ID>` a container's ID - `isolation=`(`default`|`process`|`hyperv`) (Windows daemon only) - `is-task=`(`true`|`false`) - `label=key` or `label="key=value"` of a container label - `name=<name>` a container's name - `network`=(`<network id>` or `<network name>`) - `publish`=(`<port>[/<proto>]`|`<startport-endport>/[<proto>]`) - `since`=(`<container id>` or `<container name>`) - `status=`(`created`|`restarting`|`running`|`removing`|`paused`|`exited`|`dead`) - `volume`=(`<volume name>` or `<mount point destination>`) type: "string" responses: 200: description: "no error" schema: $ref: "#/definitions/ContainerSummary" examples: application/json: - Id: "8dfafdbc3a40" Names: - "/boring_feynman" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 1" Created: 1367854155 State: "Exited" Status: "Exit 0" Ports: - PrivatePort: 2222 PublicPort: 3333 Type: "tcp" Labels: com.example.vendor: "Acme" com.example.license: "GPL" com.example.version: "1.0" SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "2cdc4edb1ded3631c81f57966563e5c8525b81121bb3706a9a9a3ae102711f3f" Gateway: "172.17.0.1" IPAddress: "172.17.0.2" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:02" Mounts: - Name: "fac362...80535" Source: "/data" Destination: "/data" Driver: "local" Mode: "ro,Z" RW: false Propagation: "" - Id: "9cd87474be90" Names: - "/coolName" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 222222" Created: 1367854155 State: "Exited" Status: "Exit 0" Ports: [] Labels: {} SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "88eaed7b37b38c2a3f0c4bc796494fdf51b270c2d22656412a2ca5d559a64d7a" Gateway: "172.17.0.1" IPAddress: "172.17.0.8" IPPrefixLen: 16 IPv6Gateway: "" 
GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:08" Mounts: [] - Id: "3176a2479c92" Names: - "/sleepy_dog" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 3333333333333333" Created: 1367854154 State: "Exited" Status: "Exit 0" Ports: [] Labels: {} SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "8b27c041c30326d59cd6e6f510d4f8d1d570a228466f956edf7815508f78e30d" Gateway: "172.17.0.1" IPAddress: "172.17.0.6" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:06" Mounts: [] - Id: "4cb07b47f9fb" Names: - "/running_cat" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 444444444444444444444444444444444" Created: 1367854152 State: "Exited" Status: "Exit 0" Ports: [] Labels: {} SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "d91c7b2f0644403d7ef3095985ea0e2370325cd2332ff3a3225c4247328e66e9" Gateway: "172.17.0.1" IPAddress: "172.17.0.5" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:05" Mounts: [] 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Container"] /containers/create: post: summary: "Create a container" operationId: "ContainerCreate" consumes: - "application/json" - "application/octet-stream" produces: - "application/json" parameters: - name: "name" in: "query" description: | Assign the specified name to the container. Must match `/?[a-zA-Z0-9][a-zA-Z0-9_.-]+`. 
type: "string" pattern: "^/?[a-zA-Z0-9][a-zA-Z0-9_.-]+$" - name: "body" in: "body" description: "Container to create" schema: allOf: - $ref: "#/definitions/ContainerConfig" - type: "object" properties: HostConfig: $ref: "#/definitions/HostConfig" NetworkingConfig: $ref: "#/definitions/NetworkingConfig" example: Hostname: "" Domainname: "" User: "" AttachStdin: false AttachStdout: true AttachStderr: true Tty: false OpenStdin: false StdinOnce: false Env: - "FOO=bar" - "BAZ=quux" Cmd: - "date" Entrypoint: "" Image: "ubuntu" Labels: com.example.vendor: "Acme" com.example.license: "GPL" com.example.version: "1.0" Volumes: /volumes/data: {} WorkingDir: "" NetworkDisabled: false MacAddress: "12:34:56:78:9a:bc" ExposedPorts: 22/tcp: {} StopSignal: "SIGTERM" StopTimeout: 10 HostConfig: Binds: - "/tmp:/tmp" Links: - "redis3:redis" Memory: 0 MemorySwap: 0 MemoryReservation: 0 KernelMemory: 0 NanoCpus: 500000 CpuPercent: 80 CpuShares: 512 CpuPeriod: 100000 CpuRealtimePeriod: 1000000 CpuRealtimeRuntime: 10000 CpuQuota: 50000 CpusetCpus: "0,1" CpusetMems: "0,1" MaximumIOps: 0 MaximumIOBps: 0 BlkioWeight: 300 BlkioWeightDevice: - {} BlkioDeviceReadBps: - {} BlkioDeviceReadIOps: - {} BlkioDeviceWriteBps: - {} BlkioDeviceWriteIOps: - {} DeviceRequests: - Driver: "nvidia" Count: -1 DeviceIDs": ["0", "1", "GPU-fef8089b-4820-abfc-e83e-94318197576e"] Capabilities: [["gpu", "nvidia", "compute"]] Options: property1: "string" property2: "string" MemorySwappiness: 60 OomKillDisable: false OomScoreAdj: 500 PidMode: "" PidsLimit: 0 PortBindings: 22/tcp: - HostPort: "11022" PublishAllPorts: false Privileged: false ReadonlyRootfs: false Dns: - "8.8.8.8" DnsOptions: - "" DnsSearch: - "" VolumesFrom: - "parent" - "other:ro" CapAdd: - "NET_ADMIN" CapDrop: - "MKNOD" GroupAdd: - "newgroup" RestartPolicy: Name: "" MaximumRetryCount: 0 AutoRemove: true NetworkMode: "bridge" Devices: [] Ulimits: - {} LogConfig: Type: "json-file" Config: {} SecurityOpt: [] StorageOpt: {} CgroupParent: "" VolumeDriver: "" ShmSize: 67108864 NetworkingConfig: EndpointsConfig: isolated_nw: IPAMConfig: IPv4Address: "172.20.30.33" IPv6Address: "2001:db8:abcd::3033" LinkLocalIPs: - "169.254.34.68" - "fe80::3468" Links: - "container_1" - "container_2" Aliases: - "server_x" - "server_y" required: true responses: 201: description: "Container created successfully" schema: type: "object" title: "ContainerCreateResponse" description: "OK response to ContainerCreate operation" required: [Id, Warnings] properties: Id: description: "The ID of the created container" type: "string" x-nullable: false Warnings: description: "Warnings encountered when creating the container" type: "array" x-nullable: false items: type: "string" examples: application/json: Id: "e90e34656806" Warnings: [] 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such image" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such image: c2ada9df5af8" 409: description: "conflict" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Container"] /containers/{id}/json: get: summary: "Inspect a container" description: "Return low-level information about a container." 
operationId: "ContainerInspect" produces: - "application/json" responses: 200: description: "no error" schema: type: "object" title: "ContainerInspectResponse" properties: Id: description: "The ID of the container" type: "string" Created: description: "The time the container was created" type: "string" Path: description: "The path to the command being run" type: "string" Args: description: "The arguments to the command being run" type: "array" items: type: "string" State: x-nullable: true $ref: "#/definitions/ContainerState" Image: description: "The container's image ID" type: "string" ResolvConfPath: type: "string" HostnamePath: type: "string" HostsPath: type: "string" LogPath: type: "string" Name: type: "string" RestartCount: type: "integer" Driver: type: "string" Platform: type: "string" MountLabel: type: "string" ProcessLabel: type: "string" AppArmorProfile: type: "string" ExecIDs: description: "IDs of exec instances that are running in the container." type: "array" items: type: "string" x-nullable: true HostConfig: $ref: "#/definitions/HostConfig" GraphDriver: $ref: "#/definitions/GraphDriverData" SizeRw: description: | The size of files that have been created or changed by this container. type: "integer" format: "int64" SizeRootFs: description: "The total size of all the files in this container." type: "integer" format: "int64" Mounts: type: "array" items: $ref: "#/definitions/MountPoint" Config: $ref: "#/definitions/ContainerConfig" NetworkSettings: $ref: "#/definitions/NetworkSettings" examples: application/json: AppArmorProfile: "" Args: - "-c" - "exit 9" Config: AttachStderr: true AttachStdin: false AttachStdout: true Cmd: - "/bin/sh" - "-c" - "exit 9" Domainname: "" Env: - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" Healthcheck: Test: ["CMD-SHELL", "exit 0"] Hostname: "ba033ac44011" Image: "ubuntu" Labels: com.example.vendor: "Acme" com.example.license: "GPL" com.example.version: "1.0" MacAddress: "" NetworkDisabled: false OpenStdin: false StdinOnce: false Tty: false User: "" Volumes: /volumes/data: {} WorkingDir: "" StopSignal: "SIGTERM" StopTimeout: 10 Created: "2015-01-06T15:47:31.485331387Z" Driver: "devicemapper" ExecIDs: - "b35395de42bc8abd327f9dd65d913b9ba28c74d2f0734eeeae84fa1c616a0fca" - "3fc1232e5cd20c8de182ed81178503dc6437f4e7ef12b52cc5e8de020652f1c4" HostConfig: MaximumIOps: 0 MaximumIOBps: 0 BlkioWeight: 0 BlkioWeightDevice: - {} BlkioDeviceReadBps: - {} BlkioDeviceWriteBps: - {} BlkioDeviceReadIOps: - {} BlkioDeviceWriteIOps: - {} ContainerIDFile: "" CpusetCpus: "" CpusetMems: "" CpuPercent: 80 CpuShares: 0 CpuPeriod: 100000 CpuRealtimePeriod: 1000000 CpuRealtimeRuntime: 10000 Devices: [] DeviceRequests: - Driver: "nvidia" Count: -1 DeviceIDs": ["0", "1", "GPU-fef8089b-4820-abfc-e83e-94318197576e"] Capabilities: [["gpu", "nvidia", "compute"]] Options: property1: "string" property2: "string" IpcMode: "" LxcConf: [] Memory: 0 MemorySwap: 0 MemoryReservation: 0 KernelMemory: 0 OomKillDisable: false OomScoreAdj: 500 NetworkMode: "bridge" PidMode: "" PortBindings: {} Privileged: false ReadonlyRootfs: false PublishAllPorts: false RestartPolicy: MaximumRetryCount: 2 Name: "on-failure" LogConfig: Type: "json-file" Sysctls: net.ipv4.ip_forward: "1" Ulimits: - {} VolumeDriver: "" ShmSize: 67108864 HostnamePath: "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/hostname" HostsPath: "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/hosts" LogPath: 
"/var/lib/docker/containers/1eb5fabf5a03807136561b3c00adcd2992b535d624d5e18b6cdc6a6844d9767b/1eb5fabf5a03807136561b3c00adcd2992b535d624d5e18b6cdc6a6844d9767b-json.log" Id: "ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39" Image: "04c5d3b7b0656168630d3ba35d8889bd0e9caafcaeb3004d2bfbc47e7c5d35d2" MountLabel: "" Name: "/boring_euclid" NetworkSettings: Bridge: "" SandboxID: "" HairpinMode: false LinkLocalIPv6Address: "" LinkLocalIPv6PrefixLen: 0 SandboxKey: "" EndpointID: "" Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 IPAddress: "" IPPrefixLen: 0 IPv6Gateway: "" MacAddress: "" Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "7587b82f0dada3656fda26588aee72630c6fab1536d36e394b2bfbcf898c971d" Gateway: "172.17.0.1" IPAddress: "172.17.0.2" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:12:00:02" Path: "/bin/sh" ProcessLabel: "" ResolvConfPath: "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/resolv.conf" RestartCount: 1 State: Error: "" ExitCode: 9 FinishedAt: "2015-01-06T15:47:32.080254511Z" Health: Status: "healthy" FailingStreak: 0 Log: - Start: "2019-12-22T10:59:05.6385933Z" End: "2019-12-22T10:59:05.8078452Z" ExitCode: 0 Output: "" OOMKilled: false Dead: false Paused: false Pid: 0 Restarting: false Running: true StartedAt: "2015-01-06T15:47:32.072697474Z" Status: "running" Mounts: - Name: "fac362...80535" Source: "/data" Destination: "/data" Driver: "local" Mode: "ro,Z" RW: false Propagation: "" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "size" in: "query" type: "boolean" default: false description: "Return the size of container as fields `SizeRw` and `SizeRootFs`" tags: ["Container"] /containers/{id}/top: get: summary: "List processes running inside a container" description: | On Unix systems, this is done by running the `ps` command. This endpoint is not supported on Windows. operationId: "ContainerTop" responses: 200: description: "no error" schema: type: "object" title: "ContainerTopResponse" description: "OK response to ContainerTop operation" properties: Titles: description: "The ps column titles" type: "array" items: type: "string" Processes: description: | Each process running in the container, where each is process is an array of values corresponding to the titles. type: "array" items: type: "array" items: type: "string" examples: application/json: Titles: - "UID" - "PID" - "PPID" - "C" - "STIME" - "TTY" - "TIME" - "CMD" Processes: - - "root" - "13642" - "882" - "0" - "17:03" - "pts/0" - "00:00:00" - "/bin/bash" - - "root" - "13735" - "13642" - "0" - "17:06" - "pts/0" - "00:00:00" - "sleep 10" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "ps_args" in: "query" description: "The arguments to pass to `ps`. 
For example, `aux`" type: "string" default: "-ef" tags: ["Container"] /containers/{id}/logs: get: summary: "Get container logs" description: | Get `stdout` and `stderr` logs from a container. Note: This endpoint works only for containers with the `json-file` or `journald` logging driver. operationId: "ContainerLogs" responses: 200: description: | logs returned as a stream in response body. For the stream format, [see the documentation for the attach endpoint](#operation/ContainerAttach). Note that unlike the attach endpoint, the logs endpoint does not upgrade the connection and does not set Content-Type. schema: type: "string" format: "binary" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "follow" in: "query" description: "Keep connection after returning logs." type: "boolean" default: false - name: "stdout" in: "query" description: "Return logs from `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Return logs from `stderr`" type: "boolean" default: false - name: "since" in: "query" description: "Only return logs since this time, as a UNIX timestamp" type: "integer" default: 0 - name: "until" in: "query" description: "Only return logs before this time, as a UNIX timestamp" type: "integer" default: 0 - name: "timestamps" in: "query" description: "Add timestamps to every log line" type: "boolean" default: false - name: "tail" in: "query" description: | Only return this number of log lines from the end of the logs. Specify as an integer or `all` to output all log lines. type: "string" default: "all" tags: ["Container"] /containers/{id}/changes: get: summary: "Get changes on a container’s filesystem" description: | Returns which files in a container's filesystem have been added, deleted, or modified. The `Kind` of modification can be one of: - `0`: Modified - `1`: Added - `2`: Deleted operationId: "ContainerChanges" produces: ["application/json"] responses: 200: description: "The list of changes" schema: type: "array" items: type: "object" x-go-name: "ContainerChangeResponseItem" title: "ContainerChangeResponseItem" description: "change item in response to ContainerChanges operation" required: [Path, Kind] properties: Path: description: "Path to file that has changed" type: "string" x-nullable: false Kind: description: "Kind of change" type: "integer" format: "uint8" enum: [0, 1, 2] x-nullable: false examples: application/json: - Path: "/dev" Kind: 0 - Path: "/dev/kmsg" Kind: 1 - Path: "/test" Kind: 1 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/export: get: summary: "Export a container" description: "Export the contents of a container as a tarball." 
operationId: "ContainerExport" produces: - "application/octet-stream" responses: 200: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/stats: get: summary: "Get container stats based on resource usage" description: | This endpoint returns a live stream of a container’s resource usage statistics. The `precpu_stats` is the CPU statistic of the *previous* read, and is used to calculate the CPU usage percentage. It is not an exact copy of the `cpu_stats` field. If either `precpu_stats.online_cpus` or `cpu_stats.online_cpus` is nil then for compatibility with older daemons the length of the corresponding `cpu_usage.percpu_usage` array should be used. On a cgroup v2 host, the following fields are not set * `blkio_stats`: all fields other than `io_service_bytes_recursive` * `cpu_stats`: `cpu_usage.percpu_usage` * `memory_stats`: `max_usage` and `failcnt` Also, `memory_stats.stats` fields are incompatible with cgroup v1. To calculate the values shown by the `stats` command of the docker cli tool the following formulas can be used: * used_memory = `memory_stats.usage - memory_stats.stats.cache` * available_memory = `memory_stats.limit` * Memory usage % = `(used_memory / available_memory) * 100.0` * cpu_delta = `cpu_stats.cpu_usage.total_usage - precpu_stats.cpu_usage.total_usage` * system_cpu_delta = `cpu_stats.system_cpu_usage - precpu_stats.system_cpu_usage` * number_cpus = `lenght(cpu_stats.cpu_usage.percpu_usage)` or `cpu_stats.online_cpus` * CPU usage % = `(cpu_delta / system_cpu_delta) * number_cpus * 100.0` operationId: "ContainerStats" produces: ["application/json"] responses: 200: description: "no error" schema: type: "object" examples: application/json: read: "2015-01-08T22:57:31.547920715Z" pids_stats: current: 3 networks: eth0: rx_bytes: 5338 rx_dropped: 0 rx_errors: 0 rx_packets: 36 tx_bytes: 648 tx_dropped: 0 tx_errors: 0 tx_packets: 8 eth5: rx_bytes: 4641 rx_dropped: 0 rx_errors: 0 rx_packets: 26 tx_bytes: 690 tx_dropped: 0 tx_errors: 0 tx_packets: 9 memory_stats: stats: total_pgmajfault: 0 cache: 0 mapped_file: 0 total_inactive_file: 0 pgpgout: 414 rss: 6537216 total_mapped_file: 0 writeback: 0 unevictable: 0 pgpgin: 477 total_unevictable: 0 pgmajfault: 0 total_rss: 6537216 total_rss_huge: 6291456 total_writeback: 0 total_inactive_anon: 0 rss_huge: 6291456 hierarchical_memory_limit: 67108864 total_pgfault: 964 total_active_file: 0 active_anon: 6537216 total_active_anon: 6537216 total_pgpgout: 414 total_cache: 0 inactive_anon: 0 active_file: 0 pgfault: 964 inactive_file: 0 total_pgpgin: 477 max_usage: 6651904 usage: 6537216 failcnt: 0 limit: 67108864 blkio_stats: {} cpu_stats: cpu_usage: percpu_usage: - 8646879 - 24472255 - 36438778 - 30657443 usage_in_usermode: 50000000 total_usage: 100215355 usage_in_kernelmode: 30000000 system_cpu_usage: 739306590000000 online_cpus: 4 throttling_data: periods: 0 throttled_periods: 0 throttled_time: 0 precpu_stats: cpu_usage: percpu_usage: - 8646879 - 24350896 - 36438778 - 30657443 usage_in_usermode: 50000000 total_usage: 100093996 usage_in_kernelmode: 30000000 system_cpu_usage: 9492140000000 online_cpus: 4 throttling_data: periods: 0 throttled_periods: 0 throttled_time: 0 404: description: "no such 
container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "stream" in: "query" description: | Stream the output. If false, the stats will be output once and then it will disconnect. type: "boolean" default: true - name: "one-shot" in: "query" description: | Only get a single stat instead of waiting for 2 cycles. Must be used with `stream=false`. type: "boolean" default: false tags: ["Container"] /containers/{id}/resize: post: summary: "Resize a container TTY" description: "Resize the TTY for a container." operationId: "ContainerResize" consumes: - "application/octet-stream" produces: - "text/plain" responses: 200: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "cannot resize container" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "h" in: "query" description: "Height of the TTY session in characters" type: "integer" - name: "w" in: "query" description: "Width of the TTY session in characters" type: "integer" tags: ["Container"] /containers/{id}/start: post: summary: "Start a container" operationId: "ContainerStart" responses: 204: description: "no error" 304: description: "container already started" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "detachKeys" in: "query" description: | Override the key sequence for detaching a container. Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,` or `_`. 
type: "string" tags: ["Container"] /containers/{id}/stop: post: summary: "Stop a container" operationId: "ContainerStop" responses: 204: description: "no error" 304: description: "container already stopped" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "t" in: "query" description: "Number of seconds to wait before killing the container" type: "integer" tags: ["Container"] /containers/{id}/restart: post: summary: "Restart a container" operationId: "ContainerRestart" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "t" in: "query" description: "Number of seconds to wait before killing the container" type: "integer" tags: ["Container"] /containers/{id}/kill: post: summary: "Kill a container" description: | Send a POSIX signal to a container, defaulting to killing to the container. operationId: "ContainerKill" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "container is not running" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "Container d37cde0fe4ad63c3a7252023b2f9800282894247d145cb5933ddf6e52cc03a28 is not running" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "signal" in: "query" description: "Signal to send to the container as an integer or string (e.g. `SIGINT`)" type: "string" default: "SIGKILL" tags: ["Container"] /containers/{id}/update: post: summary: "Update a container" description: | Change various configuration options of a container without having to recreate it. operationId: "ContainerUpdate" consumes: ["application/json"] produces: ["application/json"] responses: 200: description: "The container has been updated." 
schema: type: "object" title: "ContainerUpdateResponse" description: "OK response to ContainerUpdate operation" properties: Warnings: type: "array" items: type: "string" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "update" in: "body" required: true schema: allOf: - $ref: "#/definitions/Resources" - type: "object" properties: RestartPolicy: $ref: "#/definitions/RestartPolicy" example: BlkioWeight: 300 CpuShares: 512 CpuPeriod: 100000 CpuQuota: 50000 CpuRealtimePeriod: 1000000 CpuRealtimeRuntime: 10000 CpusetCpus: "0,1" CpusetMems: "0" Memory: 314572800 MemorySwap: 514288000 MemoryReservation: 209715200 KernelMemory: 52428800 RestartPolicy: MaximumRetryCount: 4 Name: "on-failure" tags: ["Container"] /containers/{id}/rename: post: summary: "Rename a container" operationId: "ContainerRename" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "name already in use" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "name" in: "query" required: true description: "New name for the container" type: "string" tags: ["Container"] /containers/{id}/pause: post: summary: "Pause a container" description: | Use the freezer cgroup to suspend all processes in a container. Traditionally, when suspending a process the `SIGSTOP` signal is used, which is observable by the process being suspended. With the freezer cgroup the process is unaware, and unable to capture, that it is being suspended, and subsequently resumed. operationId: "ContainerPause" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/unpause: post: summary: "Unpause a container" description: "Resume a container which has been paused." operationId: "ContainerUnpause" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/attach: post: summary: "Attach to a container" description: | Attach to a container to read its output or send it input. You can attach to the same container multiple times and you can reattach to containers that have been detached. Either the `stream` or `logs` parameter must be `true` for this endpoint to do anything. See the [documentation for the `docker attach` command](https://docs.docker.com/engine/reference/commandline/attach/) for more details. 
### Hijacking This endpoint hijacks the HTTP connection to transport `stdin`, `stdout`, and `stderr` on the same socket. This is the response from the daemon for an attach request: ``` HTTP/1.1 200 OK Content-Type: application/vnd.docker.raw-stream [STREAM] ``` After the headers and two new lines, the TCP connection can now be used for raw, bidirectional communication between the client and server. To hint potential proxies about connection hijacking, the Docker client can also optionally send connection upgrade headers. For example, the client sends this request to upgrade the connection: ``` POST /containers/16253994b7c4/attach?stream=1&stdout=1 HTTP/1.1 Upgrade: tcp Connection: Upgrade ``` The Docker daemon will respond with a `101 UPGRADED` response, and will similarly follow with the raw stream: ``` HTTP/1.1 101 UPGRADED Content-Type: application/vnd.docker.raw-stream Connection: Upgrade Upgrade: tcp [STREAM] ``` ### Stream format When the TTY setting is disabled in [`POST /containers/create`](#operation/ContainerCreate), the stream over the hijacked connection is multiplexed to separate out `stdout` and `stderr`. The stream consists of a series of frames, each containing a header and a payload. The header identifies which stream the frame belongs to (`stdout` or `stderr`). It also contains the size of the associated frame encoded in the last four bytes (`uint32`). It is encoded on the first eight bytes like this: ```go header := [8]byte{STREAM_TYPE, 0, 0, 0, SIZE1, SIZE2, SIZE3, SIZE4} ``` `STREAM_TYPE` can be: - 0: `stdin` (is written on `stdout`) - 1: `stdout` - 2: `stderr` `SIZE1, SIZE2, SIZE3, SIZE4` are the four bytes of the `uint32` size encoded as big endian. Following the header is the payload, which is the specified number of bytes of `STREAM_TYPE`. The simplest way to implement this protocol is the following: 1. Read 8 bytes. 2. Choose `stdout` or `stderr` depending on the first byte. 3. Extract the frame size from the last four bytes. 4. Read the extracted size and output it on the correct output. 5. Goto 1. ### Stream format when using a TTY When the TTY setting is enabled in [`POST /containers/create`](#operation/ContainerCreate), the stream is not multiplexed. The data exchanged over the hijacked connection is simply the raw data from the process PTY and client's `stdin`. operationId: "ContainerAttach" produces: - "application/vnd.docker.raw-stream" responses: 101: description: "no error, hints proxy about hijacking" 200: description: "no error, no upgrade header found" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "detachKeys" in: "query" description: | Override the key sequence for detaching a container. Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,` or `_`. type: "string" - name: "logs" in: "query" description: | Replay previous logs from the container. This is useful for attaching to a container that has started and you want to output everything since the container started. If `stream` is also enabled, once all the previous output has been returned, it will seamlessly transition into streaming current output.
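A minimal Go sketch of the five-step demultiplexing loop described above: read the 8-byte header, pick the output from the first byte, read the big-endian size from the last four bytes, copy that many payload bytes, and repeat. It assumes a non-TTY stream, as produced by this attach endpoint and by the logs endpoint, and is illustrative only.

```go
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
	"io"
	"os"
)

// demux copies a multiplexed raw stream onto separate stdout/stderr writers.
func demux(src io.Reader, stdout, stderr io.Writer) error {
	header := make([]byte, 8)
	for {
		if _, err := io.ReadFull(src, header); err != nil {
			if err == io.EOF {
				return nil // clean end of stream
			}
			return err
		}
		var dst io.Writer
		switch header[0] {
		case 0, 1: // 0: stdin (written on stdout), 1: stdout
			dst = stdout
		case 2: // stderr
			dst = stderr
		default:
			return fmt.Errorf("unknown stream type %d", header[0])
		}
		size := binary.BigEndian.Uint32(header[4:8]) // SIZE1..SIZE4, big endian
		if _, err := io.CopyN(dst, src, int64(size)); err != nil {
			return err
		}
	}
}

func main() {
	// A hand-built frame for demonstration: stdout, 6-byte payload "hello\n".
	frame := append([]byte{1, 0, 0, 0, 0, 0, 0, 6}, []byte("hello\n")...)
	if err := demux(bytes.NewReader(frame), os.Stdout, os.Stderr); err != nil {
		panic(err)
	}
}
```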
type: "boolean" default: false - name: "stream" in: "query" description: | Stream attached streams from the time the request was made onwards. type: "boolean" default: false - name: "stdin" in: "query" description: "Attach to `stdin`" type: "boolean" default: false - name: "stdout" in: "query" description: "Attach to `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Attach to `stderr`" type: "boolean" default: false tags: ["Container"] /containers/{id}/attach/ws: get: summary: "Attach to a container via a websocket" operationId: "ContainerAttachWebsocket" responses: 101: description: "no error, hints proxy about hijacking" 200: description: "no error, no upgrade header found" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "detachKeys" in: "query" description: | Override the key sequence for detaching a container.Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,`, or `_`. type: "string" - name: "logs" in: "query" description: "Return logs" type: "boolean" default: false - name: "stream" in: "query" description: "Return stream" type: "boolean" default: false - name: "stdin" in: "query" description: "Attach to `stdin`" type: "boolean" default: false - name: "stdout" in: "query" description: "Attach to `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Attach to `stderr`" type: "boolean" default: false tags: ["Container"] /containers/{id}/wait: post: summary: "Wait for a container" description: "Block until a container stops, then returns the exit code." operationId: "ContainerWait" produces: ["application/json"] responses: 200: description: "The container has exit." schema: type: "object" title: "ContainerWaitResponse" description: "OK response to ContainerWait operation" required: [StatusCode] properties: StatusCode: description: "Exit code of the container" type: "integer" x-nullable: false Error: description: "container waiting error, if any" type: "object" properties: Message: description: "Details of an error" type: "string" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "condition" in: "query" description: | Wait until a container state reaches the given condition, either 'not-running' (default), 'next-exit', or 'removed'. type: "string" default: "not-running" tags: ["Container"] /containers/{id}: delete: summary: "Remove a container" operationId: "ContainerDelete" responses: 204: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "conflict" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: | You cannot remove a running container: c2ada9df5af8. 
Stop the container before attempting removal or force remove 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "v" in: "query" description: "Remove anonymous volumes associated with the container." type: "boolean" default: false - name: "force" in: "query" description: "If the container is running, kill it before removing it." type: "boolean" default: false - name: "link" in: "query" description: "Remove the specified link associated with the container." type: "boolean" default: false tags: ["Container"] /containers/{id}/archive: head: summary: "Get information about files in a container" description: | A response header `X-Docker-Container-Path-Stat` is returned, containing a base64 - encoded JSON object with some filesystem header information about the path. operationId: "ContainerArchiveInfo" responses: 200: description: "no error" headers: X-Docker-Container-Path-Stat: type: "string" description: | A base64 - encoded JSON object with some filesystem header information about the path 400: description: "Bad parameter" schema: allOf: - $ref: "#/definitions/ErrorResponse" - type: "object" properties: message: description: | The error message. Either "must specify path parameter" (path cannot be empty) or "not a directory" (path was asserted to be a directory but exists as a file). type: "string" x-nullable: false 404: description: "Container or path does not exist" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "path" in: "query" required: true description: "Resource in the container’s filesystem to archive." type: "string" tags: ["Container"] get: summary: "Get an archive of a filesystem resource in a container" description: "Get a tar archive of a resource in the filesystem of container id." operationId: "ContainerArchive" produces: ["application/x-tar"] responses: 200: description: "no error" 400: description: "Bad parameter" schema: allOf: - $ref: "#/definitions/ErrorResponse" - type: "object" properties: message: description: | The error message. Either "must specify path parameter" (path cannot be empty) or "not a directory" (path was asserted to be a directory but exists as a file). type: "string" x-nullable: false 404: description: "Container or path does not exist" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "path" in: "query" required: true description: "Resource in the container’s filesystem to archive." type: "string" tags: ["Container"] put: summary: "Extract an archive of files or folders to a directory in a container" description: "Upload a tar archive to be extracted to a path in the filesystem of container id." 
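A minimal sketch of reading the `X-Docker-Container-Path-Stat` header described above for `HEAD /containers/{id}/archive`. It assumes the daemon listens on the default Unix socket `/var/run/docker.sock`, that the header uses standard base64, and that `mycontainer` and `/etc/hostname` are placeholder names; the JSON is decoded into a generic map rather than asserting specific field names.

```go
package main

import (
	"context"
	"encoding/base64"
	"encoding/json"
	"fmt"
	"log"
	"net"
	"net/http"
	"net/url"
)

func main() {
	// Talk to the daemon over the default Unix socket; adjust for your setup.
	client := &http.Client{Transport: &http.Transport{
		DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
			var d net.Dialer
			return d.DialContext(ctx, "unix", "/var/run/docker.sock")
		},
	}}

	endpoint := "http://localhost/containers/mycontainer/archive?path=" + url.QueryEscape("/etc/hostname")
	req, err := http.NewRequest(http.MethodHead, endpoint, nil)
	if err != nil {
		log.Fatal(err)
	}
	resp, err := client.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// The header value is a base64-encoded JSON object describing the path.
	encoded := resp.Header.Get("X-Docker-Container-Path-Stat")
	raw, err := base64.StdEncoding.DecodeString(encoded)
	if err != nil {
		log.Fatal(err)
	}
	var stat map[string]interface{}
	if err := json.Unmarshal(raw, &stat); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s -> %+v\n", resp.Status, stat)
}
```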
operationId: "PutContainerArchive" consumes: ["application/x-tar", "application/octet-stream"] responses: 200: description: "The content was extracted successfully" 400: description: "Bad parameter" schema: $ref: "#/definitions/ErrorResponse" 403: description: "Permission denied, the volume or container rootfs is marked as read-only." schema: $ref: "#/definitions/ErrorResponse" 404: description: "No such container or path does not exist inside the container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "path" in: "query" required: true description: "Path to a directory in the container to extract the archive’s contents into. " type: "string" - name: "noOverwriteDirNonDir" in: "query" description: | If `1`, `true`, or `True` then it will be an error if unpacking the given content would cause an existing directory to be replaced with a non-directory and vice versa. type: "string" - name: "copyUIDGID" in: "query" description: | If `1`, `true`, then it will copy UID/GID maps to the dest file or dir type: "string" - name: "inputStream" in: "body" required: true description: | The input stream must be a tar archive compressed with one of the following algorithms: `identity` (no compression), `gzip`, `bzip2`, or `xz`. schema: type: "string" format: "binary" tags: ["Container"] /containers/prune: post: summary: "Delete stopped containers" produces: - "application/json" operationId: "ContainerPrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `until=<timestamp>` Prune containers created before this timestamp. The `<timestamp>` can be Unix timestamps, date formatted timestamps, or Go duration strings (e.g. `10m`, `1h30m`) computed relative to the daemon machine’s time. - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune containers with (or without, in case `label!=...` is used) the specified labels. type: "string" responses: 200: description: "No error" schema: type: "object" title: "ContainerPruneResponse" properties: ContainersDeleted: description: "Container IDs that were deleted" type: "array" items: type: "string" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Container"] /images/json: get: summary: "List Images" description: "Returns a list of images on the server. Note that it uses a different, smaller representation of an image than inspecting a single image." 
operationId: "ImageList" produces: - "application/json" responses: 200: description: "Summary image data for the images matching the query" schema: type: "array" items: $ref: "#/definitions/ImageSummary" examples: application/json: - Id: "sha256:e216a057b1cb1efc11f8a268f37ef62083e70b1b38323ba252e25ac88904a7e8" ParentId: "" RepoTags: - "ubuntu:12.04" - "ubuntu:precise" RepoDigests: - "ubuntu@sha256:992069aee4016783df6345315302fa59681aae51a8eeb2f889dea59290f21787" Created: 1474925151 Size: 103579269 VirtualSize: 103579269 SharedSize: 0 Labels: {} Containers: 2 - Id: "sha256:3e314f95dcace0f5e4fd37b10862fe8398e3c60ed36600bc0ca5fda78b087175" ParentId: "" RepoTags: - "ubuntu:12.10" - "ubuntu:quantal" RepoDigests: - "ubuntu@sha256:002fba3e3255af10be97ea26e476692a7ebed0bb074a9ab960b2e7a1526b15d7" - "ubuntu@sha256:68ea0200f0b90df725d99d823905b04cf844f6039ef60c60bf3e019915017bd3" Created: 1403128455 Size: 172064416 VirtualSize: 172064416 SharedSize: 0 Labels: {} Containers: 5 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "all" in: "query" description: "Show all images. Only images from a final layer (no children) are shown by default." type: "boolean" default: false - name: "filters" in: "query" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the images list. Available filters: - `before`=(`<image-name>[:<tag>]`, `<image id>` or `<image@digest>`) - `dangling=true` - `label=key` or `label="key=value"` of an image label - `reference`=(`<image-name>[:<tag>]`) - `since`=(`<image-name>[:<tag>]`, `<image id>` or `<image@digest>`) type: "string" - name: "shared-size" in: "query" description: "Compute and show shared size as a `SharedSize` field on each image." type: "boolean" default: false - name: "digests" in: "query" description: "Show digest information as a `RepoDigests` field on each image." type: "boolean" default: false tags: ["Image"] /build: post: summary: "Build an image" description: | Build an image from a tar archive with a `Dockerfile` in it. The `Dockerfile` specifies how the image is built from the tar archive. It is typically in the archive's root, but can be at a different path or have a different name by specifying the `dockerfile` parameter. [See the `Dockerfile` reference for more information](https://docs.docker.com/engine/reference/builder/). The Docker daemon performs a preliminary validation of the `Dockerfile` before starting the build, and returns an error if the syntax is incorrect. After that, each instruction is run one-by-one until the ID of the new image is output. The build is canceled if the client drops the connection by quitting or being killed. operationId: "ImageBuild" consumes: - "application/octet-stream" produces: - "application/json" parameters: - name: "inputStream" in: "body" description: "A tar archive compressed with one of the following algorithms: identity (no compression), gzip, bzip2, xz." schema: type: "string" format: "binary" - name: "dockerfile" in: "query" description: "Path within the build context to the `Dockerfile`. This is ignored if `remote` is specified and points to an external `Dockerfile`." type: "string" default: "Dockerfile" - name: "t" in: "query" description: "A name and optional tag to apply to the image in the `name:tag` format. If you omit the tag the default `latest` value is assumed. You can provide several `t` parameters." 
type: "string" - name: "extrahosts" in: "query" description: "Extra hosts to add to /etc/hosts" type: "string" - name: "remote" in: "query" description: "A Git repository URI or HTTP/HTTPS context URI. If the URI points to a single text file, the file’s contents are placed into a file called `Dockerfile` and the image is built from that file. If the URI points to a tarball, the file is downloaded by the daemon and the contents therein used as the context for the build. If the URI points to a tarball and the `dockerfile` parameter is also specified, there must be a file with the corresponding path inside the tarball." type: "string" - name: "q" in: "query" description: "Suppress verbose build output." type: "boolean" default: false - name: "nocache" in: "query" description: "Do not use the cache when building the image." type: "boolean" default: false - name: "cachefrom" in: "query" description: "JSON array of images used for build cache resolution." type: "string" - name: "pull" in: "query" description: "Attempt to pull the image even if an older image exists locally." type: "string" - name: "rm" in: "query" description: "Remove intermediate containers after a successful build." type: "boolean" default: true - name: "forcerm" in: "query" description: "Always remove intermediate containers, even upon failure." type: "boolean" default: false - name: "memory" in: "query" description: "Set memory limit for build." type: "integer" - name: "memswap" in: "query" description: "Total memory (memory + swap). Set as `-1` to disable swap." type: "integer" - name: "cpushares" in: "query" description: "CPU shares (relative weight)." type: "integer" - name: "cpusetcpus" in: "query" description: "CPUs in which to allow execution (e.g., `0-3`, `0,1`)." type: "string" - name: "cpuperiod" in: "query" description: "The length of a CPU period in microseconds." type: "integer" - name: "cpuquota" in: "query" description: "Microseconds of CPU time that the container can get in a CPU period." type: "integer" - name: "buildargs" in: "query" description: > JSON map of string pairs for build-time variables. Users pass these values at build-time. Docker uses the buildargs as the environment context for commands run via the `Dockerfile` RUN instruction, or for variable expansion in other `Dockerfile` instructions. This is not meant for passing secret values. For example, the build arg `FOO=bar` would become `{"FOO":"bar"}` in JSON. This would result in the query parameter `buildargs={"FOO":"bar"}`. Note that `{"FOO":"bar"}` should be URI component encoded. [Read more about the buildargs instruction.](https://docs.docker.com/engine/reference/builder/#arg) type: "string" - name: "shmsize" in: "query" description: "Size of `/dev/shm` in bytes. The size must be greater than 0. If omitted the system uses 64MB." type: "integer" - name: "squash" in: "query" description: "Squash the resulting images layers into a single layer. *(Experimental release only.)*" type: "boolean" - name: "labels" in: "query" description: "Arbitrary key/value labels to set on the image, as a JSON map of string pairs." type: "string" - name: "networkmode" in: "query" description: | Sets the networking mode for the run commands during build. Supported standard values are: `bridge`, `host`, `none`, and `container:<name|id>`. Any other value is taken as a custom network's name or ID to which this container should connect to. 
type: "string" - name: "Content-type" in: "header" type: "string" enum: - "application/x-tar" default: "application/x-tar" - name: "X-Registry-Config" in: "header" description: | This is a base64-encoded JSON object with auth configurations for multiple registries that a build may refer to. The key is a registry URL, and the value is an auth configuration object, [as described in the authentication section](#section/Authentication). For example: ``` { "docker.example.com": { "username": "janedoe", "password": "hunter2" }, "https://index.docker.io/v1/": { "username": "mobydock", "password": "conta1n3rize14" } } ``` Only the registry domain name (and port if not the default 443) are required. However, for legacy reasons, the Docker Hub registry must be specified with both a `https://` prefix and a `/v1/` suffix even though Docker will prefer to use the v2 registry API. type: "string" - name: "platform" in: "query" description: "Platform in the format os[/arch[/variant]]" type: "string" default: "" - name: "target" in: "query" description: "Target build stage" type: "string" default: "" - name: "outputs" in: "query" description: "BuildKit output configuration" type: "string" default: "" responses: 200: description: "no error" 400: description: "Bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Image"] /build/prune: post: summary: "Delete builder cache" produces: - "application/json" operationId: "BuildPrune" parameters: - name: "keep-storage" in: "query" description: "Amount of disk space in bytes to keep for cache" type: "integer" format: "int64" - name: "all" in: "query" type: "boolean" description: "Remove all types of build cache" - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the list of build cache objects. Available filters: - `until=<duration>`: duration relative to daemon's time, during which build cache was not used, in Go's duration format (e.g., '24h') - `id=<id>` - `parent=<id>` - `type=<string>` - `description=<string>` - `inuse` - `shared` - `private` responses: 200: description: "No error" schema: type: "object" title: "BuildPruneResponse" properties: CachesDeleted: type: "array" items: description: "ID of build cache object" type: "string" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Image"] /images/create: post: summary: "Create an image" description: "Create an image by either pulling it from a registry or importing it." operationId: "ImageCreate" consumes: - "text/plain" - "application/octet-stream" produces: - "application/json" responses: 200: description: "no error" 404: description: "repository does not exist or no read access" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "fromImage" in: "query" description: "Name of the image to pull. The name may include a tag or digest. This parameter may only be used when pulling an image. The pull is cancelled if the HTTP connection is closed." type: "string" - name: "fromSrc" in: "query" description: "Source to import. The value may be a URL from which the image can be retrieved or `-` to read the image from the request body. This parameter may only be used when importing an image." 
type: "string" - name: "repo" in: "query" description: "Repository name given to an image when it is imported. The repo may include a tag. This parameter may only be used when importing an image." type: "string" - name: "tag" in: "query" description: "Tag or digest. If empty when pulling an image, this causes all tags for the given image to be pulled." type: "string" - name: "message" in: "query" description: "Set commit message for imported image." type: "string" - name: "inputImage" in: "body" description: "Image content if the value `-` has been specified in fromSrc query parameter" schema: type: "string" required: false - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration. Refer to the [authentication section](#section/Authentication) for details. type: "string" - name: "platform" in: "query" description: "Platform in the format os[/arch[/variant]]" type: "string" default: "" tags: ["Image"] /images/{name}/json: get: summary: "Inspect an image" description: "Return low-level information about an image." operationId: "ImageInspect" produces: - "application/json" responses: 200: description: "No error" schema: $ref: "#/definitions/Image" examples: application/json: Id: "sha256:85f05633ddc1c50679be2b16a0479ab6f7637f8884e0cfe0f4d20e1ebb3d6e7c" Container: "cb91e48a60d01f1e27028b4fc6819f4f290b3cf12496c8176ec714d0d390984a" Comment: "" Os: "linux" Architecture: "amd64" Parent: "sha256:91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c" ContainerConfig: Tty: false Hostname: "e611e15f9c9d" Domainname: "" AttachStdout: false PublishService: "" AttachStdin: false OpenStdin: false StdinOnce: false NetworkDisabled: false OnBuild: [] Image: "91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c" User: "" WorkingDir: "" MacAddress: "" AttachStderr: false Labels: com.example.license: "GPL" com.example.version: "1.0" com.example.vendor: "Acme" Env: - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" Cmd: - "/bin/sh" - "-c" - "#(nop) LABEL com.example.vendor=Acme com.example.license=GPL com.example.version=1.0" DockerVersion: "1.9.0-dev" VirtualSize: 188359297 Size: 0 Author: "" Created: "2015-09-10T08:30:53.26995814Z" GraphDriver: Name: "aufs" Data: {} RepoDigests: - "localhost:5000/test/busybox/example@sha256:cbbf2f9a99b47fc460d422812b6a5adff7dfee951d8fa2e4a98caa0382cfbdbf" RepoTags: - "example:1.0" - "example:latest" - "example:stable" Config: Image: "91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c" NetworkDisabled: false OnBuild: [] StdinOnce: false PublishService: "" AttachStdin: false OpenStdin: false Domainname: "" AttachStdout: false Tty: false Hostname: "e611e15f9c9d" Cmd: - "/bin/bash" Env: - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" Labels: com.example.vendor: "Acme" com.example.version: "1.0" com.example.license: "GPL" MacAddress: "" AttachStderr: false WorkingDir: "" User: "" RootFS: Type: "layers" Layers: - "sha256:1834950e52ce4d5a88a1bbd131c537f4d0e56d10ff0dd69e66be3b7dfa9df7e6" - "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such image: someimage (tag: latest)" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or id" type: "string" required: true tags: ["Image"] /images/{name}/history: get: summary: "Get the history of an image" 
description: "Return parent layers of an image." operationId: "ImageHistory" produces: ["application/json"] responses: 200: description: "List of image layers" schema: type: "array" items: type: "object" x-go-name: HistoryResponseItem title: "HistoryResponseItem" description: "individual image layer information in response to ImageHistory operation" required: [Id, Created, CreatedBy, Tags, Size, Comment] properties: Id: type: "string" x-nullable: false Created: type: "integer" format: "int64" x-nullable: false CreatedBy: type: "string" x-nullable: false Tags: type: "array" items: type: "string" Size: type: "integer" format: "int64" x-nullable: false Comment: type: "string" x-nullable: false examples: application/json: - Id: "3db9c44f45209632d6050b35958829c3a2aa256d81b9a7be45b362ff85c54710" Created: 1398108230 CreatedBy: "/bin/sh -c #(nop) ADD file:eb15dbd63394e063b805a3c32ca7bf0266ef64676d5a6fab4801f2e81e2a5148 in /" Tags: - "ubuntu:lucid" - "ubuntu:10.04" Size: 182964289 Comment: "" - Id: "6cfa4d1f33fb861d4d114f43b25abd0ac737509268065cdfd69d544a59c85ab8" Created: 1398108222 CreatedBy: "/bin/sh -c #(nop) MAINTAINER Tianon Gravi <[email protected]> - mkimage-debootstrap.sh -i iproute,iputils-ping,ubuntu-minimal -t lucid.tar.xz lucid http://archive.ubuntu.com/ubuntu/" Tags: [] Size: 0 Comment: "" - Id: "511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158" Created: 1371157430 CreatedBy: "" Tags: - "scratch12:latest" - "scratch:latest" Size: 0 Comment: "Imported from -" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID" type: "string" required: true tags: ["Image"] /images/{name}/push: post: summary: "Push an image" description: | Push an image to a registry. If you wish to push an image on to a private registry, that image must already have a tag which references the registry. For example, `registry.example.com/myimage:latest`. The push is cancelled if the HTTP connection is closed. operationId: "ImagePush" consumes: - "application/octet-stream" responses: 200: description: "No error" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID." type: "string" required: true - name: "tag" in: "query" description: "The tag to associate with the image on the registry." type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration. Refer to the [authentication section](#section/Authentication) for details. type: "string" required: true tags: ["Image"] /images/{name}/tag: post: summary: "Tag an image" description: "Tag an image so that it becomes part of a repository." operationId: "ImageTag" responses: 201: description: "No error" 400: description: "Bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Conflict" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID to tag." type: "string" required: true - name: "repo" in: "query" description: "The repository to tag in. For example, `someuser/someimage`." 
type: "string" - name: "tag" in: "query" description: "The name of the new tag." type: "string" tags: ["Image"] /images/{name}: delete: summary: "Remove an image" description: | Remove an image, along with any untagged parent images that were referenced by that image. Images can't be removed if they have descendant images, are being used by a running container or are being used by a build. operationId: "ImageDelete" produces: ["application/json"] responses: 200: description: "The image was deleted successfully" schema: type: "array" items: $ref: "#/definitions/ImageDeleteResponseItem" examples: application/json: - Untagged: "3e2f21a89f" - Deleted: "3e2f21a89f" - Deleted: "53b4f83ac9" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Conflict" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID" type: "string" required: true - name: "force" in: "query" description: "Remove the image even if it is being used by stopped containers or has other tags" type: "boolean" default: false - name: "noprune" in: "query" description: "Do not delete untagged parent images" type: "boolean" default: false tags: ["Image"] /images/search: get: summary: "Search images" description: "Search for an image on Docker Hub." operationId: "ImageSearch" produces: - "application/json" responses: 200: description: "No error" schema: type: "array" items: type: "object" title: "ImageSearchResponseItem" properties: description: type: "string" is_official: type: "boolean" is_automated: type: "boolean" name: type: "string" star_count: type: "integer" examples: application/json: - description: "" is_official: false is_automated: false name: "wma55/u1210sshd" star_count: 0 - description: "" is_official: false is_automated: false name: "jdswinbank/sshd" star_count: 0 - description: "" is_official: false is_automated: false name: "vgauthier/sshd" star_count: 0 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "term" in: "query" description: "Term to search" type: "string" required: true - name: "limit" in: "query" description: "Maximum number of results to return" type: "integer" - name: "filters" in: "query" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the images list. Available filters: - `is-automated=(true|false)` - `is-official=(true|false)` - `stars=<number>` Matches images that has at least 'number' stars. type: "string" tags: ["Image"] /images/prune: post: summary: "Delete unused images" produces: - "application/json" operationId: "ImagePrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `dangling=<boolean>` When set to `true` (or `1`), prune only unused *and* untagged images. When set to `false` (or `0`), all unused images are pruned. - `until=<string>` Prune images created before this timestamp. The `<timestamp>` can be Unix timestamps, date formatted timestamps, or Go duration strings (e.g. `10m`, `1h30m`) computed relative to the daemon machine’s time. - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune images with (or without, in case `label!=...` is used) the specified labels. 
type: "string" responses: 200: description: "No error" schema: type: "object" title: "ImagePruneResponse" properties: ImagesDeleted: description: "Images that were deleted" type: "array" items: $ref: "#/definitions/ImageDeleteResponseItem" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Image"] /auth: post: summary: "Check auth configuration" description: | Validate credentials for a registry and, if available, get an identity token for accessing the registry without password. operationId: "SystemAuth" consumes: ["application/json"] produces: ["application/json"] responses: 200: description: "An identity token was generated successfully." schema: type: "object" title: "SystemAuthResponse" required: [Status] properties: Status: description: "The status of the authentication" type: "string" x-nullable: false IdentityToken: description: "An opaque token used to authenticate a user after a successful login" type: "string" x-nullable: false examples: application/json: Status: "Login Succeeded" IdentityToken: "9cbaf023786cd7..." 204: description: "No error" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "authConfig" in: "body" description: "Authentication to check" schema: $ref: "#/definitions/AuthConfig" tags: ["System"] /info: get: summary: "Get system information" operationId: "SystemInfo" produces: - "application/json" responses: 200: description: "No error" schema: $ref: "#/definitions/SystemInfo" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /version: get: summary: "Get version" description: "Returns the version of Docker that is running and various information about the system that Docker is running on." operationId: "SystemVersion" produces: ["application/json"] responses: 200: description: "no error" schema: $ref: "#/definitions/SystemVersion" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /_ping: get: summary: "Ping" description: "This is a dummy endpoint you can use to test if the server is accessible." operationId: "SystemPing" produces: ["text/plain"] responses: 200: description: "no error" schema: type: "string" example: "OK" headers: API-Version: type: "string" description: "Max API Version the server supports" Builder-Version: type: "string" description: "Default version of docker image builder" Docker-Experimental: type: "boolean" description: "If the server is running with experimental mode enabled" Cache-Control: type: "string" default: "no-cache, no-store, must-revalidate" Pragma: type: "string" default: "no-cache" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" headers: Cache-Control: type: "string" default: "no-cache, no-store, must-revalidate" Pragma: type: "string" default: "no-cache" tags: ["System"] head: summary: "Ping" description: "This is a dummy endpoint you can use to test if the server is accessible." 
operationId: "SystemPingHead" produces: ["text/plain"] responses: 200: description: "no error" schema: type: "string" example: "(empty)" headers: API-Version: type: "string" description: "Max API Version the server supports" Builder-Version: type: "string" description: "Default version of docker image builder" Docker-Experimental: type: "boolean" description: "If the server is running with experimental mode enabled" Cache-Control: type: "string" default: "no-cache, no-store, must-revalidate" Pragma: type: "string" default: "no-cache" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /commit: post: summary: "Create a new image from a container" operationId: "ImageCommit" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "containerConfig" in: "body" description: "The container configuration" schema: $ref: "#/definitions/ContainerConfig" - name: "container" in: "query" description: "The ID or name of the container to commit" type: "string" - name: "repo" in: "query" description: "Repository name for the created image" type: "string" - name: "tag" in: "query" description: "Tag name for the create image" type: "string" - name: "comment" in: "query" description: "Commit message" type: "string" - name: "author" in: "query" description: "Author of the image (e.g., `John Hannibal Smith <[email protected]>`)" type: "string" - name: "pause" in: "query" description: "Whether to pause the container before committing" type: "boolean" default: true - name: "changes" in: "query" description: "`Dockerfile` instructions to apply while committing" type: "string" tags: ["Image"] /events: get: summary: "Monitor events" description: | Stream real-time events from the server. Various objects within Docker report events when something happens to them. 
Containers report these events: `attach`, `commit`, `copy`, `create`, `destroy`, `detach`, `die`, `exec_create`, `exec_detach`, `exec_start`, `exec_die`, `export`, `health_status`, `kill`, `oom`, `pause`, `rename`, `resize`, `restart`, `start`, `stop`, `top`, `unpause`, `update`, and `prune` Images report these events: `delete`, `import`, `load`, `pull`, `push`, `save`, `tag`, `untag`, and `prune` Volumes report these events: `create`, `mount`, `unmount`, `destroy`, and `prune` Networks report these events: `create`, `connect`, `disconnect`, `destroy`, `update`, `remove`, and `prune` The Docker daemon reports these events: `reload` Services report these events: `create`, `update`, and `remove` Nodes report these events: `create`, `update`, and `remove` Secrets report these events: `create`, `update`, and `remove` Configs report these events: `create`, `update`, and `remove` The Builder reports `prune` events operationId: "SystemEvents" produces: - "application/json" responses: 200: description: "no error" schema: type: "object" title: "SystemEventsResponse" properties: Type: description: "The type of object emitting the event" type: "string" Action: description: "The type of event" type: "string" Actor: type: "object" properties: ID: description: "The ID of the object emitting the event" type: "string" Attributes: description: "Various key/value attributes of the object, depending on its type" type: "object" additionalProperties: type: "string" time: description: "Timestamp of event" type: "integer" timeNano: description: "Timestamp of event, with nanosecond accuracy" type: "integer" format: "int64" examples: application/json: Type: "container" Action: "create" Actor: ID: "ede54ee1afda366ab42f824e8a5ffd195155d853ceaec74a927f249ea270c743" Attributes: com.example.some-label: "some-label-value" image: "alpine" name: "my-container" time: 1461943101 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "since" in: "query" description: "Show events created since this timestamp then stream new events." type: "string" - name: "until" in: "query" description: "Show events created until this timestamp then stop streaming." type: "string" - name: "filters" in: "query" description: | A JSON encoded value of filters (a `map[string][]string`) to process on the event list. 
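A sketch of consuming the `/events` stream described above. The response body stays open and delivers one JSON object per event, so a streaming decoder is used; the struct mirrors only the fields shown in the schema (`Type`, `Action`, `Actor`, `timeNano`), and the filter restricts the stream to container events.

```go
package main

import (
	"context"
	"encoding/json"
	"fmt"
	"io"
	"log"
	"net"
	"net/http"
	"net/url"
)

type event struct {
	Type   string `json:"Type"`
	Action string `json:"Action"`
	Actor  struct {
		ID         string            `json:"ID"`
		Attributes map[string]string `json:"Attributes"`
	} `json:"Actor"`
	TimeNano int64 `json:"timeNano"`
}

func main() {
	// Only watch container events; the filter format is a JSON map[string][]string.
	filters, _ := json.Marshal(map[string][]string{"type": {"container"}})
	q := url.Values{}
	q.Set("filters", string(filters))

	client := &http.Client{Transport: &http.Transport{
		DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
			var d net.Dialer
			return d.DialContext(ctx, "unix", "/var/run/docker.sock")
		},
	}}
	resp, err := client.Get("http://localhost/events?" + q.Encode())
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// Decode events one by one as they arrive.
	dec := json.NewDecoder(resp.Body)
	for {
		var ev event
		if err := dec.Decode(&ev); err == io.EOF {
			return
		} else if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s %s %s\n", ev.Type, ev.Action, ev.Actor.ID)
	}
}
```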
Available filters: - `config=<string>` config name or ID - `container=<string>` container name or ID - `daemon=<string>` daemon name or ID - `event=<string>` event type - `image=<string>` image name or ID - `label=<string>` image or container label - `network=<string>` network name or ID - `node=<string>` node ID - `plugin`=<string> plugin name or ID - `scope`=<string> local or swarm - `secret=<string>` secret name or ID - `service=<string>` service name or ID - `type=<string>` object to filter by, one of `container`, `image`, `volume`, `network`, `daemon`, `plugin`, `node`, `service`, `secret` or `config` - `volume=<string>` volume name type: "string" tags: ["System"] /system/df: get: summary: "Get data usage information" operationId: "SystemDataUsage" responses: 200: description: "no error" schema: type: "object" title: "SystemDataUsageResponse" properties: LayersSize: type: "integer" format: "int64" Images: type: "array" items: $ref: "#/definitions/ImageSummary" Containers: type: "array" items: $ref: "#/definitions/ContainerSummary" Volumes: type: "array" items: $ref: "#/definitions/Volume" BuildCache: type: "array" items: $ref: "#/definitions/BuildCache" example: LayersSize: 1092588 Images: - Id: "sha256:2b8fd9751c4c0f5dd266fcae00707e67a2545ef34f9a29354585f93dac906749" ParentId: "" RepoTags: - "busybox:latest" RepoDigests: - "busybox@sha256:a59906e33509d14c036c8678d687bd4eec81ed7c4b8ce907b888c607f6a1e0e6" Created: 1466724217 Size: 1092588 SharedSize: 0 VirtualSize: 1092588 Labels: {} Containers: 1 Containers: - Id: "e575172ed11dc01bfce087fb27bee502db149e1a0fad7c296ad300bbff178148" Names: - "/top" Image: "busybox" ImageID: "sha256:2b8fd9751c4c0f5dd266fcae00707e67a2545ef34f9a29354585f93dac906749" Command: "top" Created: 1472592424 Ports: [] SizeRootFs: 1092588 Labels: {} State: "exited" Status: "Exited (0) 56 minutes ago" HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: IPAMConfig: null Links: null Aliases: null NetworkID: "d687bc59335f0e5c9ee8193e5612e8aee000c8c62ea170cfb99c098f95899d92" EndpointID: "8ed5115aeaad9abb174f68dcf135b49f11daf597678315231a32ca28441dec6a" Gateway: "172.18.0.1" IPAddress: "172.18.0.2" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:12:00:02" Mounts: [] Volumes: - Name: "my-volume" Driver: "local" Mountpoint: "/var/lib/docker/volumes/my-volume/_data" Labels: null Scope: "local" Options: null UsageData: Size: 10920104 RefCount: 2 BuildCache: - ID: "hw53o5aio51xtltp5xjp8v7fx" Parent: "" Type: "regular" Description: "pulled from docker.io/library/debian@sha256:234cb88d3020898631af0ccbbcca9a66ae7306ecd30c9720690858c1b007d2a0" InUse: false Shared: true Size: 0 CreatedAt: "2021-06-28T13:31:01.474619385Z" LastUsedAt: "2021-07-07T22:02:32.738075951Z" UsageCount: 26 - ID: "ndlpt0hhvkqcdfkputsk4cq9c" Parent: "hw53o5aio51xtltp5xjp8v7fx" Type: "regular" Description: "mount / from exec /bin/sh -c echo 'Binary::apt::APT::Keep-Downloaded-Packages \"true\";' > /etc/apt/apt.conf.d/keep-cache" InUse: false Shared: true Size: 51 CreatedAt: "2021-06-28T13:31:03.002625487Z" LastUsedAt: "2021-07-07T22:02:32.773909517Z" UsageCount: 26 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "type" in: "query" description: | Object types, for which to compute and return data. 
type: "array" collectionFormat: multi items: type: "string" enum: ["container", "image", "volume", "build-cache"] tags: ["System"] /images/{name}/get: get: summary: "Export an image" description: | Get a tarball containing all images and metadata for a repository. If `name` is a specific name and tag (e.g. `ubuntu:latest`), then only that image (and its parents) are returned. If `name` is an image ID, similarly only that image (and its parents) are returned, but with the exclusion of the `repositories` file in the tarball, as there were no image names referenced. ### Image tarball format An image tarball contains one directory per image layer (named using its long ID), each containing these files: - `VERSION`: currently `1.0` - the file format version - `json`: detailed layer information, similar to `docker inspect layer_id` - `layer.tar`: A tarfile containing the filesystem changes in this layer The `layer.tar` file contains `aufs` style `.wh..wh.aufs` files and directories for storing attribute changes and deletions. If the tarball defines a repository, the tarball should also include a `repositories` file at the root that contains a list of repository and tag names mapped to layer IDs. ```json { "hello-world": { "latest": "565a9d68a73f6706862bfe8409a7f659776d4d60a8d096eb4a3cbce6999cc2a1" } } ``` operationId: "ImageGet" produces: - "application/x-tar" responses: 200: description: "no error" schema: type: "string" format: "binary" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID" type: "string" required: true tags: ["Image"] /images/get: get: summary: "Export several images" description: | Get a tarball containing all images and metadata for several image repositories. For each value of the `names` parameter: if it is a specific name and tag (e.g. `ubuntu:latest`), then only that image (and its parents) are returned; if it is an image ID, similarly only that image (and its parents) are returned and there would be no names referenced in the 'repositories' file for this image ID. For details on the format, see the [export image endpoint](#operation/ImageGet). operationId: "ImageGetAll" produces: - "application/x-tar" responses: 200: description: "no error" schema: type: "string" format: "binary" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "names" in: "query" description: "Image names to filter by" type: "array" items: type: "string" tags: ["Image"] /images/load: post: summary: "Import images" description: | Load a set of images and tags into a repository. For details on the format, see the [export image endpoint](#operation/ImageGet). operationId: "ImageLoad" consumes: - "application/x-tar" produces: - "application/json" responses: 200: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "imagesTarball" in: "body" description: "Tar archive containing images" schema: type: "string" format: "binary" - name: "quiet" in: "query" description: "Suppress progress details during load." type: "boolean" default: false tags: ["Image"] /containers/{id}/exec: post: summary: "Create an exec instance" description: "Run a command inside a running container." 
operationId: "ContainerExec" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "container is paused" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "execConfig" in: "body" description: "Exec configuration" schema: type: "object" properties: AttachStdin: type: "boolean" description: "Attach to `stdin` of the exec command." AttachStdout: type: "boolean" description: "Attach to `stdout` of the exec command." AttachStderr: type: "boolean" description: "Attach to `stderr` of the exec command." DetachKeys: type: "string" description: | Override the key sequence for detaching a container. Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,` or `_`. Tty: type: "boolean" description: "Allocate a pseudo-TTY." Env: description: | A list of environment variables in the form `["VAR=value", ...]`. type: "array" items: type: "string" Cmd: type: "array" description: "Command to run, as a string or array of strings." items: type: "string" Privileged: type: "boolean" description: "Runs the exec process with extended privileges." default: false User: type: "string" description: | The user, and optionally, group to run the exec process inside the container. Format is one of: `user`, `user:group`, `uid`, or `uid:gid`. WorkingDir: type: "string" description: | The working directory for the exec process inside the container. example: AttachStdin: false AttachStdout: true AttachStderr: true DetachKeys: "ctrl-p,ctrl-q" Tty: false Cmd: - "date" Env: - "FOO=bar" - "BAZ=quux" required: true - name: "id" in: "path" description: "ID or name of container" type: "string" required: true tags: ["Exec"] /exec/{id}/start: post: summary: "Start an exec instance" description: | Starts a previously set up exec instance. If detach is true, this endpoint returns immediately after starting the command. Otherwise, it sets up an interactive session with the command. operationId: "ExecStart" consumes: - "application/json" produces: - "application/vnd.docker.raw-stream" responses: 200: description: "No error" 404: description: "No such exec instance" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Container is stopped or paused" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "execStartConfig" in: "body" schema: type: "object" properties: Detach: type: "boolean" description: "Detach from the command." Tty: type: "boolean" description: "Allocate a pseudo-TTY." example: Detach: false Tty: false - name: "id" in: "path" description: "Exec instance ID" required: true type: "string" tags: ["Exec"] /exec/{id}/resize: post: summary: "Resize an exec instance" description: | Resize the TTY session used by an exec instance. This endpoint only works if `tty` was specified as part of creating and starting the exec instance. 
operationId: "ExecResize" responses: 201: description: "No error" 404: description: "No such exec instance" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Exec instance ID" required: true type: "string" - name: "h" in: "query" description: "Height of the TTY session in characters" type: "integer" - name: "w" in: "query" description: "Width of the TTY session in characters" type: "integer" tags: ["Exec"] /exec/{id}/json: get: summary: "Inspect an exec instance" description: "Return low-level information about an exec instance." operationId: "ExecInspect" produces: - "application/json" responses: 200: description: "No error" schema: type: "object" title: "ExecInspectResponse" properties: CanRemove: type: "boolean" DetachKeys: type: "string" ID: type: "string" Running: type: "boolean" ExitCode: type: "integer" ProcessConfig: $ref: "#/definitions/ProcessConfig" OpenStdin: type: "boolean" OpenStderr: type: "boolean" OpenStdout: type: "boolean" ContainerID: type: "string" Pid: type: "integer" description: "The system process ID for the exec process." examples: application/json: CanRemove: false ContainerID: "b53ee82b53a40c7dca428523e34f741f3abc51d9f297a14ff874bf761b995126" DetachKeys: "" ExitCode: 2 ID: "f33bbfb39f5b142420f4759b2348913bd4a8d1a6d7fd56499cb41a1bb91d7b3b" OpenStderr: true OpenStdin: true OpenStdout: true ProcessConfig: arguments: - "-c" - "exit 2" entrypoint: "sh" privileged: false tty: true user: "1000" Running: false Pid: 42000 404: description: "No such exec instance" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Exec instance ID" required: true type: "string" tags: ["Exec"] /volumes: get: summary: "List volumes" operationId: "VolumeList" produces: ["application/json"] responses: 200: description: "Summary volume data that matches the query" schema: type: "object" title: "VolumeListResponse" description: "Volume list response" required: [Volumes, Warnings] properties: Volumes: type: "array" x-nullable: false description: "List of volumes" items: $ref: "#/definitions/Volume" Warnings: type: "array" x-nullable: false description: | Warnings that occurred when fetching the list of volumes. items: type: "string" examples: application/json: Volumes: - CreatedAt: "2017-07-19T12:00:26Z" Name: "tardis" Driver: "local" Mountpoint: "/var/lib/docker/volumes/tardis" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Scope: "local" Options: device: "tmpfs" o: "size=100m,uid=1000" type: "tmpfs" Warnings: [] 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" description: | JSON encoded value of the filters (a `map[string][]string`) to process on the volumes list. Available filters: - `dangling=<boolean>` When set to `true` (or `1`), returns all volumes that are not in use by a container. When set to `false` (or `0`), only volumes that are in use by one or more containers are returned. - `driver=<volume-driver-name>` Matches volumes based on their driver. - `label=<key>` or `label=<key>:<value>` Matches volumes based on the presence of a `label` alone or a `label` and a value. - `name=<volume-name>` Matches all or part of a volume name. 
type: "string" format: "json" tags: ["Volume"] /volumes/create: post: summary: "Create a volume" operationId: "VolumeCreate" consumes: ["application/json"] produces: ["application/json"] responses: 201: description: "The volume was created successfully" schema: $ref: "#/definitions/Volume" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "volumeConfig" in: "body" required: true description: "Volume configuration" schema: type: "object" description: "Volume configuration" title: "VolumeConfig" properties: Name: description: | The new volume's name. If not specified, Docker generates a name. type: "string" x-nullable: false Driver: description: "Name of the volume driver to use." type: "string" default: "local" x-nullable: false DriverOpts: description: | A mapping of driver options and values. These options are passed directly to the driver and are driver specific. type: "object" additionalProperties: type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" example: Name: "tardis" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Driver: "custom" tags: ["Volume"] /volumes/{name}: get: summary: "Inspect a volume" operationId: "VolumeInspect" produces: ["application/json"] responses: 200: description: "No error" schema: $ref: "#/definitions/Volume" 404: description: "No such volume" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" required: true description: "Volume name or ID" type: "string" tags: ["Volume"] delete: summary: "Remove a volume" description: "Instruct the driver to remove the volume." operationId: "VolumeDelete" responses: 204: description: "The volume was removed" 404: description: "No such volume or volume driver" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Volume is in use and cannot be removed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" required: true description: "Volume name or ID" type: "string" - name: "force" in: "query" description: "Force the removal of the volume" type: "boolean" default: false tags: ["Volume"] /volumes/prune: post: summary: "Delete unused volumes" produces: - "application/json" operationId: "VolumePrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune volumes with (or without, in case `label!=...` is used) the specified labels. type: "string" responses: 200: description: "No error" schema: type: "object" title: "VolumePruneResponse" properties: VolumesDeleted: description: "Volumes that were deleted" type: "array" items: type: "string" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Volume"] /networks: get: summary: "List networks" description: | Returns a list of networks. For details on the format, see the [network inspect endpoint](#operation/NetworkInspect). Note that it uses a different, smaller representation of a network than inspecting a single network. 
For example, the list of containers attached to the network is not propagated in API versions 1.28 and up. operationId: "NetworkList" produces: - "application/json" responses: 200: description: "No error" schema: type: "array" items: $ref: "#/definitions/Network" examples: application/json: - Name: "bridge" Id: "f2de39df4171b0dc801e8002d1d999b77256983dfc63041c0f34030aa3977566" Created: "2016-10-19T06:21:00.416543526Z" Scope: "local" Driver: "bridge" EnableIPv6: false Internal: false Attachable: false Ingress: false IPAM: Driver: "default" Config: - Subnet: "172.17.0.0/16" Options: com.docker.network.bridge.default_bridge: "true" com.docker.network.bridge.enable_icc: "true" com.docker.network.bridge.enable_ip_masquerade: "true" com.docker.network.bridge.host_binding_ipv4: "0.0.0.0" com.docker.network.bridge.name: "docker0" com.docker.network.driver.mtu: "1500" - Name: "none" Id: "e086a3893b05ab69242d3c44e49483a3bbbd3a26b46baa8f61ab797c1088d794" Created: "0001-01-01T00:00:00Z" Scope: "local" Driver: "null" EnableIPv6: false Internal: false Attachable: false Ingress: false IPAM: Driver: "default" Config: [] Containers: {} Options: {} - Name: "host" Id: "13e871235c677f196c4e1ecebb9dc733b9b2d2ab589e30c539efeda84a24215e" Created: "0001-01-01T00:00:00Z" Scope: "local" Driver: "host" EnableIPv6: false Internal: false Attachable: false Ingress: false IPAM: Driver: "default" Config: [] Containers: {} Options: {} 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" description: | JSON encoded value of the filters (a `map[string][]string`) to process on the networks list. Available filters: - `dangling=<boolean>` When set to `true` (or `1`), returns all networks that are not in use by a container. When set to `false` (or `0`), only networks that are in use by one or more containers are returned. - `driver=<driver-name>` Matches a network's driver. - `id=<network-id>` Matches all or part of a network ID. - `label=<key>` or `label=<key>=<value>` of a network label. - `name=<network-name>` Matches all or part of a network name. - `scope=["swarm"|"global"|"local"]` Filters networks by scope (`swarm`, `global`, or `local`). - `type=["custom"|"builtin"]` Filters networks by type. The `custom` keyword returns all user-defined networks. 
type: "string" tags: ["Network"] /networks/{id}: get: summary: "Inspect a network" operationId: "NetworkInspect" produces: - "application/json" responses: 200: description: "No error" schema: $ref: "#/definitions/Network" 404: description: "Network not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" - name: "verbose" in: "query" description: "Detailed inspect output for troubleshooting" type: "boolean" default: false - name: "scope" in: "query" description: "Filter the network by scope (swarm, global, or local)" type: "string" tags: ["Network"] delete: summary: "Remove a network" operationId: "NetworkDelete" responses: 204: description: "No error" 403: description: "operation not supported for pre-defined networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such network" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" tags: ["Network"] /networks/create: post: summary: "Create a network" operationId: "NetworkCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "No error" schema: type: "object" title: "NetworkCreateResponse" properties: Id: description: "The ID of the created network." type: "string" Warning: type: "string" example: Id: "22be93d5babb089c5aab8dbc369042fad48ff791584ca2da2100db837a1c7c30" Warning: "" 403: description: "operation not supported for pre-defined networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "plugin not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "networkConfig" in: "body" description: "Network configuration" required: true schema: type: "object" required: ["Name"] properties: Name: description: "The network's name." type: "string" CheckDuplicate: description: | Check for networks with duplicate names. Since Network is primarily keyed based on a random ID and not on the name, and network name is strictly a user-friendly alias to the network which is uniquely identified using ID, there is no guaranteed way to check for duplicates. CheckDuplicate is there to provide a best effort checking of any networks which has the same name but it is not guaranteed to catch all name collisions. type: "boolean" Driver: description: "Name of the network driver plugin to use." type: "string" default: "bridge" Internal: description: "Restrict external access to the network." type: "boolean" Attachable: description: | Globally scoped network is manually attachable by regular containers from workers in swarm mode. type: "boolean" Ingress: description: | Ingress network is the network which provides the routing-mesh in swarm mode. type: "boolean" IPAM: description: "Optional custom IP scheme for the network." $ref: "#/definitions/IPAM" EnableIPv6: description: "Enable IPv6 on the network." type: "boolean" Options: description: "Network specific options to be used by the drivers." type: "object" additionalProperties: type: "string" Labels: description: "User-defined key/value metadata." 
type: "object" additionalProperties: type: "string" example: Name: "isolated_nw" CheckDuplicate: false Driver: "bridge" EnableIPv6: true IPAM: Driver: "default" Config: - Subnet: "172.20.0.0/16" IPRange: "172.20.10.0/24" Gateway: "172.20.10.11" - Subnet: "2001:db8:abcd::/64" Gateway: "2001:db8:abcd::1011" Options: foo: "bar" Internal: true Attachable: false Ingress: false Options: com.docker.network.bridge.default_bridge: "true" com.docker.network.bridge.enable_icc: "true" com.docker.network.bridge.enable_ip_masquerade: "true" com.docker.network.bridge.host_binding_ipv4: "0.0.0.0" com.docker.network.bridge.name: "docker0" com.docker.network.driver.mtu: "1500" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" tags: ["Network"] /networks/{id}/connect: post: summary: "Connect a container to a network" operationId: "NetworkConnect" consumes: - "application/json" responses: 200: description: "No error" 403: description: "Operation not supported for swarm scoped networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "Network or container not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" - name: "container" in: "body" required: true schema: type: "object" properties: Container: type: "string" description: "The ID or name of the container to connect to the network." EndpointConfig: $ref: "#/definitions/EndpointSettings" example: Container: "3613f73ba0e4" EndpointConfig: IPAMConfig: IPv4Address: "172.24.56.89" IPv6Address: "2001:db8::5689" tags: ["Network"] /networks/{id}/disconnect: post: summary: "Disconnect a container from a network" operationId: "NetworkDisconnect" consumes: - "application/json" responses: 200: description: "No error" 403: description: "Operation not supported for swarm scoped networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "Network or container not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" - name: "container" in: "body" required: true schema: type: "object" properties: Container: type: "string" description: | The ID or name of the container to disconnect from the network. Force: type: "boolean" description: | Force the container to disconnect from the network. tags: ["Network"] /networks/prune: post: summary: "Delete unused networks" produces: - "application/json" operationId: "NetworkPrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `until=<timestamp>` Prune networks created before this timestamp. The `<timestamp>` can be Unix timestamps, date formatted timestamps, or Go duration strings (e.g. `10m`, `1h30m`) computed relative to the daemon machine’s time. - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune networks with (or without, in case `label!=...` is used) the specified labels. 
type: "string" responses: 200: description: "No error" schema: type: "object" title: "NetworkPruneResponse" properties: NetworksDeleted: description: "Networks that were deleted" type: "array" items: type: "string" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Network"] /plugins: get: summary: "List plugins" operationId: "PluginList" description: "Returns information about installed plugins." produces: ["application/json"] responses: 200: description: "No error" schema: type: "array" items: $ref: "#/definitions/Plugin" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the plugin list. Available filters: - `capability=<capability name>` - `enable=<true>|<false>` tags: ["Plugin"] /plugins/privileges: get: summary: "Get plugin privileges" operationId: "GetPluginPrivileges" responses: 200: description: "no error" schema: type: "array" items: description: | Describes a permission the user has to accept upon installing the plugin. type: "object" title: "PluginPrivilegeItem" properties: Name: type: "string" Description: type: "string" Value: type: "array" items: type: "string" example: - Name: "network" Description: "" Value: - "host" - Name: "mount" Description: "" Value: - "/data" - Name: "device" Description: "" Value: - "/dev/cpu_dma_latency" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "remote" in: "query" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" tags: - "Plugin" /plugins/pull: post: summary: "Install a plugin" operationId: "PluginPull" description: | Pulls and installs a plugin. After the plugin is installed, it can be enabled using the [`POST /plugins/{name}/enable` endpoint](#operation/PostPluginsEnable). produces: - "application/json" responses: 204: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "remote" in: "query" description: | Remote reference for plugin to install. The `:latest` tag is optional, and is used as the default if omitted. required: true type: "string" - name: "name" in: "query" description: | Local name for the pulled plugin. The `:latest` tag is optional, and is used as the default if omitted. required: false type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration to use when pulling a plugin from a registry. Refer to the [authentication section](#section/Authentication) for details. type: "string" - name: "body" in: "body" schema: type: "array" items: description: | Describes a permission accepted by the user upon installing the plugin. 
type: "object" properties: Name: type: "string" Description: type: "string" Value: type: "array" items: type: "string" example: - Name: "network" Description: "" Value: - "host" - Name: "mount" Description: "" Value: - "/data" - Name: "device" Description: "" Value: - "/dev/cpu_dma_latency" tags: ["Plugin"] /plugins/{name}/json: get: summary: "Inspect a plugin" operationId: "PluginInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Plugin" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" tags: ["Plugin"] /plugins/{name}: delete: summary: "Remove a plugin" operationId: "PluginDelete" responses: 200: description: "no error" schema: $ref: "#/definitions/Plugin" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "force" in: "query" description: | Disable the plugin before removing. This may result in issues if the plugin is in use by a container. type: "boolean" default: false tags: ["Plugin"] /plugins/{name}/enable: post: summary: "Enable a plugin" operationId: "PluginEnable" responses: 200: description: "no error" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "timeout" in: "query" description: "Set the HTTP client timeout (in seconds)" type: "integer" default: 0 tags: ["Plugin"] /plugins/{name}/disable: post: summary: "Disable a plugin" operationId: "PluginDisable" responses: 200: description: "no error" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" tags: ["Plugin"] /plugins/{name}/upgrade: post: summary: "Upgrade a plugin" operationId: "PluginUpgrade" responses: 204: description: "no error" 404: description: "plugin not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "remote" in: "query" description: | Remote reference to upgrade to. The `:latest` tag is optional, and is used as the default if omitted. required: true type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration to use when pulling a plugin from a registry. Refer to the [authentication section](#section/Authentication) for details. 
type: "string" - name: "body" in: "body" schema: type: "array" items: description: | Describes a permission accepted by the user upon installing the plugin. type: "object" properties: Name: type: "string" Description: type: "string" Value: type: "array" items: type: "string" example: - Name: "network" Description: "" Value: - "host" - Name: "mount" Description: "" Value: - "/data" - Name: "device" Description: "" Value: - "/dev/cpu_dma_latency" tags: ["Plugin"] /plugins/create: post: summary: "Create a plugin" operationId: "PluginCreate" consumes: - "application/x-tar" responses: 204: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "query" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "tarContext" in: "body" description: "Path to tar containing plugin rootfs and manifest" schema: type: "string" format: "binary" tags: ["Plugin"] /plugins/{name}/push: post: summary: "Push a plugin" operationId: "PluginPush" description: | Push a plugin to the registry. parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" responses: 200: description: "no error" 404: description: "plugin not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Plugin"] /plugins/{name}/set: post: summary: "Configure a plugin" operationId: "PluginSet" consumes: - "application/json" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "body" in: "body" schema: type: "array" items: type: "string" example: ["DEBUG=1"] responses: 204: description: "No error" 404: description: "Plugin not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Plugin"] /nodes: get: summary: "List nodes" operationId: "NodeList" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Node" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" description: | Filters to process on the nodes list, encoded as JSON (a `map[string][]string`). 
Available filters: - `id=<node id>` - `label=<engine label>` - `membership=`(`accepted`|`pending`)` - `name=<node name>` - `node.label=<node label>` - `role=`(`manager`|`worker`)` type: "string" tags: ["Node"] /nodes/{id}: get: summary: "Inspect a node" operationId: "NodeInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Node" 404: description: "no such node" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the node" type: "string" required: true tags: ["Node"] delete: summary: "Delete a node" operationId: "NodeDelete" responses: 200: description: "no error" 404: description: "no such node" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the node" type: "string" required: true - name: "force" in: "query" description: "Force remove a node from the swarm" default: false type: "boolean" tags: ["Node"] /nodes/{id}/update: post: summary: "Update a node" operationId: "NodeUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such node" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID of the node" type: "string" required: true - name: "body" in: "body" schema: $ref: "#/definitions/NodeSpec" - name: "version" in: "query" description: | The version number of the node object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true tags: ["Node"] /swarm: get: summary: "Inspect swarm" operationId: "SwarmInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Swarm" 404: description: "no such swarm" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" tags: ["Swarm"] /swarm/init: post: summary: "Initialize a new swarm" operationId: "SwarmInit" produces: - "application/json" - "text/plain" responses: 200: description: "no error" schema: description: "The node ID" type: "string" example: "7v2t30z9blmxuhnyo6s4cpenp" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is already part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: type: "object" properties: ListenAddr: description: | Listen address used for inter-manager communication, as well as determining the networking interface used for the VXLAN Tunnel Endpoint (VTEP). This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the default swarm listening port is used. 
type: "string" AdvertiseAddr: description: | Externally reachable address advertised to other nodes. This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the port number from the listen address is used. If `AdvertiseAddr` is not specified, it will be automatically detected when possible. type: "string" DataPathAddr: description: | Address or interface to use for data path traffic (format: `<ip|interface>`), for example, `192.168.1.1`, or an interface, like `eth0`. If `DataPathAddr` is unspecified, the same address as `AdvertiseAddr` is used. The `DataPathAddr` specifies the address that global scope network drivers will publish towards other nodes in order to reach the containers running on this node. Using this parameter it is possible to separate the container data traffic from the management traffic of the cluster. type: "string" DataPathPort: description: | DataPathPort specifies the data path port number for data traffic. Acceptable port range is 1024 to 49151. if no port is set or is set to 0, default port 4789 will be used. type: "integer" format: "uint32" DefaultAddrPool: description: | Default Address Pool specifies default subnet pools for global scope networks. type: "array" items: type: "string" example: ["10.10.0.0/16", "20.20.0.0/16"] ForceNewCluster: description: "Force creation of a new swarm." type: "boolean" SubnetSize: description: | SubnetSize specifies the subnet size of the networks created from the default subnet pool. type: "integer" format: "uint32" Spec: $ref: "#/definitions/SwarmSpec" example: ListenAddr: "0.0.0.0:2377" AdvertiseAddr: "192.168.1.1:2377" DataPathPort: 4789 DefaultAddrPool: ["10.10.0.0/8", "20.20.0.0/8"] SubnetSize: 24 ForceNewCluster: false Spec: Orchestration: {} Raft: {} Dispatcher: {} CAConfig: {} EncryptionConfig: AutoLockManagers: false tags: ["Swarm"] /swarm/join: post: summary: "Join an existing swarm" operationId: "SwarmJoin" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is already part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: type: "object" properties: ListenAddr: description: | Listen address used for inter-manager communication if the node gets promoted to manager, as well as determining the networking interface used for the VXLAN Tunnel Endpoint (VTEP). type: "string" AdvertiseAddr: description: | Externally reachable address advertised to other nodes. This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the port number from the listen address is used. If `AdvertiseAddr` is not specified, it will be automatically detected when possible. type: "string" DataPathAddr: description: | Address or interface to use for data path traffic (format: `<ip|interface>`), for example, `192.168.1.1`, or an interface, like `eth0`. If `DataPathAddr` is unspecified, the same address as `AdvertiseAddr` is used. The `DataPathAddr` specifies the address that global scope network drivers will publish towards other nodes in order to reach the containers running on this node. 
Using this parameter it is possible to separate the container data traffic from the management traffic of the cluster. type: "string" RemoteAddrs: description: | Addresses of manager nodes already participating in the swarm. type: "array" items: type: "string" JoinToken: description: "Secret token for joining this swarm." type: "string" example: ListenAddr: "0.0.0.0:2377" AdvertiseAddr: "192.168.1.1:2377" RemoteAddrs: - "node1:2377" JoinToken: "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-7p73s1dx5in4tatdymyhg9hu2" tags: ["Swarm"] /swarm/leave: post: summary: "Leave a swarm" operationId: "SwarmLeave" responses: 200: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "force" description: | Force leave swarm, even if this is the last manager or that it will break the cluster. in: "query" type: "boolean" default: false tags: ["Swarm"] /swarm/update: post: summary: "Update a swarm" operationId: "SwarmUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: $ref: "#/definitions/SwarmSpec" - name: "version" in: "query" description: | The version number of the swarm object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true - name: "rotateWorkerToken" in: "query" description: "Rotate the worker join token." type: "boolean" default: false - name: "rotateManagerToken" in: "query" description: "Rotate the manager join token." type: "boolean" default: false - name: "rotateManagerUnlockKey" in: "query" description: "Rotate the manager unlock key." type: "boolean" default: false tags: ["Swarm"] /swarm/unlockkey: get: summary: "Get the unlock key" operationId: "SwarmUnlockkey" consumes: - "application/json" responses: 200: description: "no error" schema: type: "object" title: "UnlockKeyResponse" properties: UnlockKey: description: "The swarm's unlock key." type: "string" example: UnlockKey: "SWMKEY-1-7c37Cc8654o6p38HnroywCi19pllOnGtbdZEgtKxZu8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" tags: ["Swarm"] /swarm/unlock: post: summary: "Unlock a locked manager" operationId: "SwarmUnlock" consumes: - "application/json" produces: - "application/json" parameters: - name: "body" in: "body" required: true schema: type: "object" properties: UnlockKey: description: "The swarm's unlock key." 
type: "string" example: UnlockKey: "SWMKEY-1-7c37Cc8654o6p38HnroywCi19pllOnGtbdZEgtKxZu8" responses: 200: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" tags: ["Swarm"] /services: get: summary: "List services" operationId: "ServiceList" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Service" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the services list. Available filters: - `id=<service id>` - `label=<service label>` - `mode=["replicated"|"global"]` - `name=<service name>` - name: "status" in: "query" type: "boolean" description: | Include service status, with count of running and desired tasks. tags: ["Service"] /services/create: post: summary: "Create a service" operationId: "ServiceCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: type: "object" title: "ServiceCreateResponse" properties: ID: description: "The ID of the created service." type: "string" Warning: description: "Optional warning message" type: "string" example: ID: "ak7w3gjqoa3kuz8xcpnyy0pvl" Warning: "unable to pin image doesnotexist:latest to digest: image library/doesnotexist:latest not found" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 403: description: "network is not eligible for services" schema: $ref: "#/definitions/ErrorResponse" 409: description: "name conflicts with an existing service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: allOf: - $ref: "#/definitions/ServiceSpec" - type: "object" example: Name: "web" TaskTemplate: ContainerSpec: Image: "nginx:alpine" Mounts: - ReadOnly: true Source: "web-data" Target: "/usr/share/nginx/html" Type: "volume" VolumeOptions: DriverConfig: {} Labels: com.example.something: "something-value" Hosts: ["10.10.10.10 host1", "ABCD:EF01:2345:6789:ABCD:EF01:2345:6789 host2"] User: "33" DNSConfig: Nameservers: ["8.8.8.8"] Search: ["example.org"] Options: ["timeout:3"] Secrets: - File: Name: "www.example.org.key" UID: "33" GID: "33" Mode: 384 SecretID: "fpjqlhnwb19zds35k8wn80lq9" SecretName: "example_org_domain_key" LogDriver: Name: "json-file" Options: max-file: "3" max-size: "10M" Placement: {} Resources: Limits: MemoryBytes: 104857600 Reservations: {} RestartPolicy: Condition: "on-failure" Delay: 10000000000 MaxAttempts: 10 Mode: Replicated: Replicas: 4 UpdateConfig: Parallelism: 2 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 RollbackConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 EndpointSpec: Ports: - Protocol: "tcp" PublishedPort: 8080 TargetPort: 80 Labels: foo: "bar" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration for pulling from private registries. Refer to the [authentication section](#section/Authentication) for details. 
type: "string" tags: ["Service"] /services/{id}: get: summary: "Inspect a service" operationId: "ServiceInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Service" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID or name of service." required: true type: "string" - name: "insertDefaults" in: "query" description: "Fill empty fields with default values." type: "boolean" default: false tags: ["Service"] delete: summary: "Delete a service" operationId: "ServiceDelete" responses: 200: description: "no error" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID or name of service." required: true type: "string" tags: ["Service"] /services/{id}/update: post: summary: "Update a service" operationId: "ServiceUpdate" consumes: ["application/json"] produces: ["application/json"] responses: 200: description: "no error" schema: $ref: "#/definitions/ServiceUpdateResponse" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID or name of service." required: true type: "string" - name: "body" in: "body" required: true schema: allOf: - $ref: "#/definitions/ServiceSpec" - type: "object" example: Name: "top" TaskTemplate: ContainerSpec: Image: "busybox" Args: - "top" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ForceUpdate: 0 Mode: Replicated: Replicas: 1 UpdateConfig: Parallelism: 2 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 RollbackConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 EndpointSpec: Mode: "vip" - name: "version" in: "query" description: | The version number of the service object being updated. This is required to avoid conflicting writes. This version number should be the value as currently set on the service *before* the update. You can find the current version by calling `GET /services/{id}` required: true type: "integer" - name: "registryAuthFrom" in: "query" description: | If the `X-Registry-Auth` header is not specified, this parameter indicates where to find registry authorization credentials. type: "string" enum: ["spec", "previous-spec"] default: "spec" - name: "rollback" in: "query" description: | Set to this parameter to `previous` to cause a server-side rollback to the previous service spec. The supplied spec will be ignored in this case. type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration for pulling from private registries. Refer to the [authentication section](#section/Authentication) for details. 
type: "string" tags: ["Service"] /services/{id}/logs: get: summary: "Get service logs" description: | Get `stdout` and `stderr` logs from a service. See also [`/containers/{id}/logs`](#operation/ContainerLogs). **Note**: This endpoint works only for services with the `local`, `json-file` or `journald` logging drivers. operationId: "ServiceLogs" responses: 200: description: "logs returned as a stream in response body" schema: type: "string" format: "binary" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such service: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the service" type: "string" - name: "details" in: "query" description: "Show service context and extra details provided to logs." type: "boolean" default: false - name: "follow" in: "query" description: "Keep connection after returning logs." type: "boolean" default: false - name: "stdout" in: "query" description: "Return logs from `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Return logs from `stderr`" type: "boolean" default: false - name: "since" in: "query" description: "Only return logs since this time, as a UNIX timestamp" type: "integer" default: 0 - name: "timestamps" in: "query" description: "Add timestamps to every log line" type: "boolean" default: false - name: "tail" in: "query" description: | Only return this number of log lines from the end of the logs. Specify as an integer or `all` to output all log lines. type: "string" default: "all" tags: ["Service"] /tasks: get: summary: "List tasks" operationId: "TaskList" produces: - "application/json" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Task" example: - ID: "0kzzo1i0y4jz6027t0k7aezc7" Version: Index: 71 CreatedAt: "2016-06-07T21:07:31.171892745Z" UpdatedAt: "2016-06-07T21:07:31.376370513Z" Spec: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ServiceID: "9mnpnzenvg8p8tdbtq4wvbkcz" Slot: 1 NodeID: "60gvrl6tm78dmak4yl7srz94v" Status: Timestamp: "2016-06-07T21:07:31.290032978Z" State: "running" Message: "started" ContainerStatus: ContainerID: "e5d62702a1b48d01c3e02ca1e0212a250801fa8d67caca0b6f35919ebc12f035" PID: 677 DesiredState: "running" NetworksAttachments: - Network: ID: "4qvuz4ko70xaltuqbt8956gd1" Version: Index: 18 CreatedAt: "2016-06-07T20:31:11.912919752Z" UpdatedAt: "2016-06-07T21:07:29.955277358Z" Spec: Name: "ingress" Labels: com.docker.swarm.internal: "true" DriverConfiguration: {} IPAMOptions: Driver: {} Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" DriverState: Name: "overlay" Options: com.docker.network.driver.overlay.vxlanid_list: "256" IPAMOptions: Driver: Name: "default" Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" Addresses: - "10.255.0.10/16" - ID: "1yljwbmlr8er2waf8orvqpwms" Version: Index: 30 CreatedAt: "2016-06-07T21:07:30.019104782Z" UpdatedAt: "2016-06-07T21:07:30.231958098Z" Name: "hopeful_cori" Spec: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ServiceID: "9mnpnzenvg8p8tdbtq4wvbkcz" Slot: 1 NodeID: "60gvrl6tm78dmak4yl7srz94v" Status: Timestamp: "2016-06-07T21:07:30.202183143Z" State: 
"shutdown" Message: "shutdown" ContainerStatus: ContainerID: "1cf8d63d18e79668b0004a4be4c6ee58cddfad2dae29506d8781581d0688a213" DesiredState: "shutdown" NetworksAttachments: - Network: ID: "4qvuz4ko70xaltuqbt8956gd1" Version: Index: 18 CreatedAt: "2016-06-07T20:31:11.912919752Z" UpdatedAt: "2016-06-07T21:07:29.955277358Z" Spec: Name: "ingress" Labels: com.docker.swarm.internal: "true" DriverConfiguration: {} IPAMOptions: Driver: {} Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" DriverState: Name: "overlay" Options: com.docker.network.driver.overlay.vxlanid_list: "256" IPAMOptions: Driver: Name: "default" Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" Addresses: - "10.255.0.5/16" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the tasks list. Available filters: - `desired-state=(running | shutdown | accepted)` - `id=<task id>` - `label=key` or `label="key=value"` - `name=<task name>` - `node=<node id or name>` - `service=<service name>` tags: ["Task"] /tasks/{id}: get: summary: "Inspect a task" operationId: "TaskInspect" produces: - "application/json" responses: 200: description: "no error" schema: $ref: "#/definitions/Task" 404: description: "no such task" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID of the task" required: true type: "string" tags: ["Task"] /tasks/{id}/logs: get: summary: "Get task logs" description: | Get `stdout` and `stderr` logs from a task. See also [`/containers/{id}/logs`](#operation/ContainerLogs). **Note**: This endpoint works only for services with the `local`, `json-file` or `journald` logging drivers. operationId: "TaskLogs" responses: 200: description: "logs returned as a stream in response body" schema: type: "string" format: "binary" 404: description: "no such task" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such task: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID of the task" type: "string" - name: "details" in: "query" description: "Show task context and extra details provided to logs." type: "boolean" default: false - name: "follow" in: "query" description: "Keep connection after returning logs." type: "boolean" default: false - name: "stdout" in: "query" description: "Return logs from `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Return logs from `stderr`" type: "boolean" default: false - name: "since" in: "query" description: "Only return logs since this time, as a UNIX timestamp" type: "integer" default: 0 - name: "timestamps" in: "query" description: "Add timestamps to every log line" type: "boolean" default: false - name: "tail" in: "query" description: | Only return this number of log lines from the end of the logs. Specify as an integer or `all` to output all log lines. 
type: "string" default: "all" tags: ["Task"] /secrets: get: summary: "List secrets" operationId: "SecretList" produces: - "application/json" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Secret" example: - ID: "blt1owaxmitz71s9v5zh81zun" Version: Index: 85 CreatedAt: "2017-07-20T13:55:28.678958722Z" UpdatedAt: "2017-07-20T13:55:28.678958722Z" Spec: Name: "mysql-passwd" Labels: some.label: "some.value" Driver: Name: "secret-bucket" Options: OptionA: "value for driver option A" OptionB: "value for driver option B" - ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "app-dev.crt" Labels: foo: "bar" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the secrets list. Available filters: - `id=<secret id>` - `label=<key> or label=<key>=value` - `name=<secret name>` - `names=<secret name>` tags: ["Secret"] /secrets/create: post: summary: "Create a secret" operationId: "SecretCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 409: description: "name conflicts with an existing object" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" schema: allOf: - $ref: "#/definitions/SecretSpec" - type: "object" example: Name: "app-key.crt" Labels: foo: "bar" Data: "VEhJUyBJUyBOT1QgQSBSRUFMIENFUlRJRklDQVRFCg==" Driver: Name: "secret-bucket" Options: OptionA: "value for driver option A" OptionB: "value for driver option B" tags: ["Secret"] /secrets/{id}: get: summary: "Inspect a secret" operationId: "SecretInspect" produces: - "application/json" responses: 200: description: "no error" schema: $ref: "#/definitions/Secret" examples: application/json: ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "app-dev.crt" Labels: foo: "bar" Driver: Name: "secret-bucket" Options: OptionA: "value for driver option A" OptionB: "value for driver option B" 404: description: "secret not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the secret" tags: ["Secret"] delete: summary: "Delete a secret" operationId: "SecretDelete" produces: - "application/json" responses: 204: description: "no error" 404: description: "secret not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the secret" tags: ["Secret"] /secrets/{id}/update: post: summary: "Update a Secret" operationId: "SecretUpdate" responses: 200: description: "no error" 400: 
description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such secret" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the secret" type: "string" required: true - name: "body" in: "body" schema: $ref: "#/definitions/SecretSpec" description: | The spec of the secret to update. Currently, only the Labels field can be updated. All other fields must remain unchanged from the [SecretInspect endpoint](#operation/SecretInspect) response values. - name: "version" in: "query" description: | The version number of the secret object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true tags: ["Secret"] /configs: get: summary: "List configs" operationId: "ConfigList" produces: - "application/json" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Config" example: - ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "server.conf" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the configs list. Available filters: - `id=<config id>` - `label=<key> or label=<key>=value` - `name=<config name>` - `names=<config name>` tags: ["Config"] /configs/create: post: summary: "Create a config" operationId: "ConfigCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 409: description: "name conflicts with an existing object" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" schema: allOf: - $ref: "#/definitions/ConfigSpec" - type: "object" example: Name: "server.conf" Labels: foo: "bar" Data: "VEhJUyBJUyBOT1QgQSBSRUFMIENFUlRJRklDQVRFCg==" tags: ["Config"] /configs/{id}: get: summary: "Inspect a config" operationId: "ConfigInspect" produces: - "application/json" responses: 200: description: "no error" schema: $ref: "#/definitions/Config" examples: application/json: ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "app-dev.crt" 404: description: "config not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the config" tags: ["Config"] delete: summary: "Delete a config" operationId: "ConfigDelete" produces: - "application/json" responses: 204: description: "no error" 404: description: "config not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part 
of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the config" tags: ["Config"] /configs/{id}/update: post: summary: "Update a Config" operationId: "ConfigUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such config" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the config" type: "string" required: true - name: "body" in: "body" schema: $ref: "#/definitions/ConfigSpec" description: | The spec of the config to update. Currently, only the Labels field can be updated. All other fields must remain unchanged from the [ConfigInspect endpoint](#operation/ConfigInspect) response values. - name: "version" in: "query" description: | The version number of the config object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true tags: ["Config"] /distribution/{name}/json: get: summary: "Get image information from the registry" description: | Return image digest and platform information by contacting the registry. operationId: "DistributionInspect" produces: - "application/json" responses: 200: description: "descriptor and platform information" schema: type: "object" x-go-name: DistributionInspect title: "DistributionInspectResponse" required: [Descriptor, Platforms] properties: Descriptor: type: "object" description: | A descriptor struct containing digest, media type, and size. properties: MediaType: type: "string" Size: type: "integer" format: "int64" Digest: type: "string" URLs: type: "array" items: type: "string" Platforms: type: "array" description: | An array containing all platforms supported by the image. items: type: "object" properties: Architecture: type: "string" OS: type: "string" OSVersion: type: "string" OSFeatures: type: "array" items: type: "string" Variant: type: "string" Features: type: "array" items: type: "string" examples: application/json: Descriptor: MediaType: "application/vnd.docker.distribution.manifest.v2+json" Digest: "sha256:c0537ff6a5218ef531ece93d4984efc99bbf3f7497c0a7726c88e2bb7584dc96" Size: 3987495 URLs: - "" Platforms: - Architecture: "amd64" OS: "linux" OSVersion: "" OSFeatures: - "" Variant: "" Features: - "" 401: description: "Failed authentication or no image found" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such image: someimage (tag: latest)" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or id" type: "string" required: true tags: ["Distribution"] /session: post: summary: "Initialize interactive session" description: | Start a new interactive session with a server. Session allows server to call back to the client for advanced capabilities. ### Hijacking This endpoint hijacks the HTTP connection to HTTP2 transport that allows the client to expose gPRC services on that connection. 
For example, the client sends this request to upgrade the connection: ``` POST /session HTTP/1.1 Upgrade: h2c Connection: Upgrade ``` The Docker daemon responds with a `101 UPGRADED` response, followed by the raw stream: ``` HTTP/1.1 101 UPGRADED Connection: Upgrade Upgrade: h2c ``` operationId: "Session" produces: - "application/vnd.docker.raw-stream" responses: 101: description: "no error, hijacking successful" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Session"]
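The upgrade handshake described for `/session` can be exercised directly against the daemon socket. Below is a minimal Go sketch of that handshake, not a definitive client: it assumes the daemon listens on the default unix socket (`/var/run/docker.sock`), omits an API-version prefix, uses an arbitrary `Host` value, and keeps error handling to a bare minimum.

```go
package main

import (
	"bufio"
	"fmt"
	"net"
	"net/http"
)

func main() {
	// Assumption: default unix socket path; adjust for your daemon configuration.
	conn, err := net.Dial("unix", "/var/run/docker.sock")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Ask the daemon to switch this connection to the h2c transport,
	// mirroring the example request in the endpoint description above.
	req, err := http.NewRequest("POST", "/session", nil)
	if err != nil {
		panic(err)
	}
	req.Host = "docker" // arbitrary host value for the wire-format request
	req.Header.Set("Upgrade", "h2c")
	req.Header.Set("Connection", "Upgrade")
	if err := req.Write(conn); err != nil {
		panic(err)
	}

	// On success the daemon answers with a 101-style response; the raw
	// stream that follows on conn can then carry the hijacked transport.
	resp, err := http.ReadResponse(bufio.NewReader(conn), req)
	if err != nil {
		panic(err)
	}
	fmt.Println("status:", resp.Status)
}
```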
# A Swagger 2.0 (a.k.a. OpenAPI) definition of the Engine API. # # This is used for generating API documentation and the types used by the # client/server. See api/README.md for more information. # # Some style notes: # - This file is used by ReDoc, which allows GitHub Flavored Markdown in # descriptions. # - There is no maximum line length, for ease of editing and pretty diffs. # - operationIds are in the format "NounVerb", with a singular noun. swagger: "2.0" schemes: - "http" - "https" produces: - "application/json" - "text/plain" consumes: - "application/json" - "text/plain" basePath: "/v1.42" info: title: "Docker Engine API" version: "1.42" x-logo: url: "https://docs.docker.com/images/logo-docker-main.png" description: | The Engine API is an HTTP API served by Docker Engine. It is the API the Docker client uses to communicate with the Engine, so everything the Docker client can do can be done with the API. Most of the client's commands map directly to API endpoints (e.g. `docker ps` is `GET /containers/json`). The notable exception is running containers, which consists of several API calls. # Errors The API uses standard HTTP status codes to indicate the success or failure of the API call. The body of the response will be JSON in the following format: ``` { "message": "page not found" } ``` # Versioning The API is usually changed in each release, so API calls are versioned to ensure that clients don't break. To lock to a specific version of the API, you prefix the URL with its version, for example, call `/v1.30/info` to use the v1.30 version of the `/info` endpoint. If the API version specified in the URL is not supported by the daemon, a HTTP `400 Bad Request` error message is returned. If you omit the version-prefix, the current version of the API (v1.42) is used. For example, calling `/info` is the same as calling `/v1.42/info`. Using the API without a version-prefix is deprecated and will be removed in a future release. Engine releases in the near future should support this version of the API, so your client will continue to work even if it is talking to a newer Engine. The API uses an open schema model, which means server may add extra properties to responses. Likewise, the server will ignore any extra query parameters and request body properties. When you write clients, you need to ignore additional properties in responses to ensure they do not break when talking to newer daemons. # Authentication Authentication for registries is handled client side. The client has to send authentication details to various endpoints that need to communicate with registries, such as `POST /images/(name)/push`. These are sent as `X-Registry-Auth` header as a [base64url encoded](https://tools.ietf.org/html/rfc4648#section-5) (JSON) string with the following structure: ``` { "username": "string", "password": "string", "email": "string", "serveraddress": "string" } ``` The `serveraddress` is a domain/IP without a protocol. Throughout this structure, double quotes are required. If you have already got an identity token from the [`/auth` endpoint](#operation/SystemAuth), you can just pass this instead of credentials: ``` { "identitytoken": "9cbaf023786cd7..." } ``` # The tags on paths define the menu sections in the ReDoc documentation, so # the usage of tags must make sense for that: # - They should be singular, not plural. # - There should not be too many tags, or the menu becomes unwieldy. 
For # example, it is preferable to add a path to the "System" tag instead of # creating a tag with a single path in it. # - The order of tags in this list defines the order in the menu. tags: # Primary objects - name: "Container" x-displayName: "Containers" description: | Create and manage containers. - name: "Image" x-displayName: "Images" - name: "Network" x-displayName: "Networks" description: | Networks are user-defined networks that containers can be attached to. See the [networking documentation](https://docs.docker.com/network/) for more information. - name: "Volume" x-displayName: "Volumes" description: | Create and manage persistent storage that can be attached to containers. - name: "Exec" x-displayName: "Exec" description: | Run new commands inside running containers. Refer to the [command-line reference](https://docs.docker.com/engine/reference/commandline/exec/) for more information. To exec a command in a container, you first need to create an exec instance, then start it. These two API endpoints are wrapped up in a single command-line command, `docker exec`. # Swarm things - name: "Swarm" x-displayName: "Swarm" description: | Engines can be clustered together in a swarm. Refer to the [swarm mode documentation](https://docs.docker.com/engine/swarm/) for more information. - name: "Node" x-displayName: "Nodes" description: | Nodes are instances of the Engine participating in a swarm. Swarm mode must be enabled for these endpoints to work. - name: "Service" x-displayName: "Services" description: | Services are the definitions of tasks to run on a swarm. Swarm mode must be enabled for these endpoints to work. - name: "Task" x-displayName: "Tasks" description: | A task is a container running on a swarm. It is the atomic scheduling unit of swarm. Swarm mode must be enabled for these endpoints to work. - name: "Secret" x-displayName: "Secrets" description: | Secrets are sensitive data that can be used by services. Swarm mode must be enabled for these endpoints to work. - name: "Config" x-displayName: "Configs" description: | Configs are application configurations that can be used by services. Swarm mode must be enabled for these endpoints to work. 
# System things - name: "Plugin" x-displayName: "Plugins" - name: "System" x-displayName: "System" definitions: Port: type: "object" description: "An open port on a container" required: [PrivatePort, Type] properties: IP: type: "string" format: "ip-address" description: "Host IP address that the container's port is mapped to" PrivatePort: type: "integer" format: "uint16" x-nullable: false description: "Port on the container" PublicPort: type: "integer" format: "uint16" description: "Port exposed on the host" Type: type: "string" x-nullable: false enum: ["tcp", "udp", "sctp"] example: PrivatePort: 8080 PublicPort: 80 Type: "tcp" MountPoint: type: "object" description: "A mount point inside a container" properties: Type: type: "string" Name: type: "string" Source: type: "string" Destination: type: "string" Driver: type: "string" Mode: type: "string" RW: type: "boolean" Propagation: type: "string" DeviceMapping: type: "object" description: "A device mapping between the host and container" properties: PathOnHost: type: "string" PathInContainer: type: "string" CgroupPermissions: type: "string" example: PathOnHost: "/dev/deviceName" PathInContainer: "/dev/deviceName" CgroupPermissions: "mrw" DeviceRequest: type: "object" description: "A request for devices to be sent to device drivers" properties: Driver: type: "string" example: "nvidia" Count: type: "integer" example: -1 DeviceIDs: type: "array" items: type: "string" example: - "0" - "1" - "GPU-fef8089b-4820-abfc-e83e-94318197576e" Capabilities: description: | A list of capabilities; an OR list of AND lists of capabilities. type: "array" items: type: "array" items: type: "string" example: # gpu AND nvidia AND compute - ["gpu", "nvidia", "compute"] Options: description: | Driver-specific options, specified as a key/value pairs. These options are passed directly to the driver. type: "object" additionalProperties: type: "string" ThrottleDevice: type: "object" properties: Path: description: "Device path" type: "string" Rate: description: "Rate" type: "integer" format: "int64" minimum: 0 Mount: type: "object" properties: Target: description: "Container path." type: "string" Source: description: "Mount source (e.g. a volume name, a host path)." type: "string" Type: description: | The mount type. Available types: - `bind` Mounts a file or directory from the host into the container. Must exist prior to creating the container. - `volume` Creates a volume with the given name and options (or uses a pre-existing volume with the same name and options). These are **not** removed when the container is removed. - `tmpfs` Create a tmpfs with the given options. The mount source cannot be specified for tmpfs. - `npipe` Mounts a named pipe from the host into the container. Must exist prior to creating the container. type: "string" enum: - "bind" - "volume" - "tmpfs" - "npipe" ReadOnly: description: "Whether the mount should be read-only." type: "boolean" Consistency: description: "The consistency requirement for the mount: `default`, `consistent`, `cached`, or `delegated`." type: "string" BindOptions: description: "Optional configuration for the `bind` type." type: "object" properties: Propagation: description: "A propagation mode with the value `[r]private`, `[r]shared`, or `[r]slave`." type: "string" enum: - "private" - "rprivate" - "shared" - "rshared" - "slave" - "rslave" NonRecursive: description: "Disable recursive bind mount." type: "boolean" default: false VolumeOptions: description: "Optional configuration for the `volume` type." 
type: "object" properties: NoCopy: description: "Populate volume with data from the target." type: "boolean" default: false Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" DriverConfig: description: "Map of driver specific options" type: "object" properties: Name: description: "Name of the driver to use to create the volume." type: "string" Options: description: "key/value map of driver specific options." type: "object" additionalProperties: type: "string" TmpfsOptions: description: "Optional configuration for the `tmpfs` type." type: "object" properties: SizeBytes: description: "The size for the tmpfs mount in bytes." type: "integer" format: "int64" Mode: description: "The permission mode for the tmpfs mount in an integer." type: "integer" RestartPolicy: description: | The behavior to apply when the container exits. The default is not to restart. An ever increasing delay (double the previous delay, starting at 100ms) is added before each restart to prevent flooding the server. type: "object" properties: Name: type: "string" description: | - Empty string means not to restart - `no` Do not automatically restart - `always` Always restart - `unless-stopped` Restart always except when the user has manually stopped the container - `on-failure` Restart only when the container exit code is non-zero enum: - "" - "no" - "always" - "unless-stopped" - "on-failure" MaximumRetryCount: type: "integer" description: | If `on-failure` is used, the number of times to retry before giving up. Resources: description: "A container's resources (cgroups config, ulimits, etc)" type: "object" properties: # Applicable to all platforms CpuShares: description: | An integer value representing this container's relative CPU weight versus other containers. type: "integer" Memory: description: "Memory limit in bytes." type: "integer" format: "int64" default: 0 # Applicable to UNIX platforms CgroupParent: description: | Path to `cgroups` under which the container's `cgroup` is created. If the path is not absolute, the path is considered to be relative to the `cgroups` path of the init process. Cgroups are created if they do not already exist. type: "string" BlkioWeight: description: "Block IO weight (relative weight)." type: "integer" minimum: 0 maximum: 1000 BlkioWeightDevice: description: | Block IO weight (relative device weight) in the form: ``` [{"Path": "device_path", "Weight": weight}] ``` type: "array" items: type: "object" properties: Path: type: "string" Weight: type: "integer" minimum: 0 BlkioDeviceReadBps: description: | Limit read rate (bytes per second) from a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" BlkioDeviceWriteBps: description: | Limit write rate (bytes per second) to a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" BlkioDeviceReadIOps: description: | Limit read rate (IO per second) from a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" BlkioDeviceWriteIOps: description: | Limit write rate (IO per second) to a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" CpuPeriod: description: "The length of a CPU period in microseconds." 
type: "integer" format: "int64" CpuQuota: description: | Microseconds of CPU time that the container can get in a CPU period. type: "integer" format: "int64" CpuRealtimePeriod: description: | The length of a CPU real-time period in microseconds. Set to 0 to allocate no time allocated to real-time tasks. type: "integer" format: "int64" CpuRealtimeRuntime: description: | The length of a CPU real-time runtime in microseconds. Set to 0 to allocate no time allocated to real-time tasks. type: "integer" format: "int64" CpusetCpus: description: | CPUs in which to allow execution (e.g., `0-3`, `0,1`). type: "string" example: "0-3" CpusetMems: description: | Memory nodes (MEMs) in which to allow execution (0-3, 0,1). Only effective on NUMA systems. type: "string" Devices: description: "A list of devices to add to the container." type: "array" items: $ref: "#/definitions/DeviceMapping" DeviceCgroupRules: description: "a list of cgroup rules to apply to the container" type: "array" items: type: "string" example: "c 13:* rwm" DeviceRequests: description: | A list of requests for devices to be sent to device drivers. type: "array" items: $ref: "#/definitions/DeviceRequest" KernelMemory: description: | Kernel memory limit in bytes. <p><br /></p> > **Deprecated**: This field is deprecated as the kernel 5.4 deprecated > `kmem.limit_in_bytes`. type: "integer" format: "int64" example: 209715200 KernelMemoryTCP: description: "Hard limit for kernel TCP buffer memory (in bytes)." type: "integer" format: "int64" MemoryReservation: description: "Memory soft limit in bytes." type: "integer" format: "int64" MemorySwap: description: | Total memory limit (memory + swap). Set as `-1` to enable unlimited swap. type: "integer" format: "int64" MemorySwappiness: description: | Tune a container's memory swappiness behavior. Accepts an integer between 0 and 100. type: "integer" format: "int64" minimum: 0 maximum: 100 NanoCpus: description: "CPU quota in units of 10<sup>-9</sup> CPUs." type: "integer" format: "int64" OomKillDisable: description: "Disable OOM Killer for the container." type: "boolean" Init: description: | Run an init inside the container that forwards signals and reaps processes. This field is omitted if empty, and the default (as configured on the daemon) is used. type: "boolean" x-nullable: true PidsLimit: description: | Tune a container's PIDs limit. Set `0` or `-1` for unlimited, or `null` to not change. type: "integer" format: "int64" x-nullable: true Ulimits: description: | A list of resource limits to set in the container. For example: ``` {"Name": "nofile", "Soft": 1024, "Hard": 2048} ``` type: "array" items: type: "object" properties: Name: description: "Name of ulimit" type: "string" Soft: description: "Soft limit" type: "integer" Hard: description: "Hard limit" type: "integer" # Applicable to Windows CpuCount: description: | The number of usable CPUs (Windows only). On Windows Server containers, the processor resource controls are mutually exclusive. The order of precedence is `CPUCount` first, then `CPUShares`, and `CPUPercent` last. type: "integer" format: "int64" CpuPercent: description: | The usable percentage of the available CPUs (Windows only). On Windows Server containers, the processor resource controls are mutually exclusive. The order of precedence is `CPUCount` first, then `CPUShares`, and `CPUPercent` last. 
type: "integer" format: "int64" IOMaximumIOps: description: "Maximum IOps for the container system drive (Windows only)" type: "integer" format: "int64" IOMaximumBandwidth: description: | Maximum IO in bytes per second for the container system drive (Windows only). type: "integer" format: "int64" Limit: description: | An object describing a limit on resources which can be requested by a task. type: "object" properties: NanoCPUs: type: "integer" format: "int64" example: 4000000000 MemoryBytes: type: "integer" format: "int64" example: 8272408576 Pids: description: | Limits the maximum number of PIDs in the container. Set `0` for unlimited. type: "integer" format: "int64" default: 0 example: 100 ResourceObject: description: | An object describing the resources which can be advertised by a node and requested by a task. type: "object" properties: NanoCPUs: type: "integer" format: "int64" example: 4000000000 MemoryBytes: type: "integer" format: "int64" example: 8272408576 GenericResources: $ref: "#/definitions/GenericResources" GenericResources: description: | User-defined resources can be either Integer resources (e.g, `SSD=3`) or String resources (e.g, `GPU=UUID1`). type: "array" items: type: "object" properties: NamedResourceSpec: type: "object" properties: Kind: type: "string" Value: type: "string" DiscreteResourceSpec: type: "object" properties: Kind: type: "string" Value: type: "integer" format: "int64" example: - DiscreteResourceSpec: Kind: "SSD" Value: 3 - NamedResourceSpec: Kind: "GPU" Value: "UUID1" - NamedResourceSpec: Kind: "GPU" Value: "UUID2" HealthConfig: description: "A test to perform to check that the container is healthy." type: "object" properties: Test: description: | The test to perform. Possible values are: - `[]` inherit healthcheck from image or parent image - `["NONE"]` disable healthcheck - `["CMD", args...]` exec arguments directly - `["CMD-SHELL", command]` run command with system's default shell type: "array" items: type: "string" Interval: description: | The time to wait between checks in nanoseconds. It should be 0 or at least 1000000 (1 ms). 0 means inherit. type: "integer" Timeout: description: | The time to wait before considering the check to have hung. It should be 0 or at least 1000000 (1 ms). 0 means inherit. type: "integer" Retries: description: | The number of consecutive failures needed to consider a container as unhealthy. 0 means inherit. type: "integer" StartPeriod: description: | Start period for the container to initialize before starting health-retries countdown in nanoseconds. It should be 0 or at least 1000000 (1 ms). 0 means inherit. type: "integer" Health: description: | Health stores information about the container's healthcheck results. 
type: "object" properties: Status: description: | Status is one of `none`, `starting`, `healthy` or `unhealthy` - "none" Indicates there is no healthcheck - "starting" Starting indicates that the container is not yet ready - "healthy" Healthy indicates that the container is running correctly - "unhealthy" Unhealthy indicates that the container has a problem type: "string" enum: - "none" - "starting" - "healthy" - "unhealthy" example: "healthy" FailingStreak: description: "FailingStreak is the number of consecutive failures" type: "integer" example: 0 Log: type: "array" description: | Log contains the last few results (oldest first) items: x-nullable: true $ref: "#/definitions/HealthcheckResult" HealthcheckResult: description: | HealthcheckResult stores information about a single run of a healthcheck probe type: "object" properties: Start: description: | Date and time at which this check started in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "date-time" example: "2020-01-04T10:44:24.496525531Z" End: description: | Date and time at which this check ended in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2020-01-04T10:45:21.364524523Z" ExitCode: description: | ExitCode meanings: - `0` healthy - `1` unhealthy - `2` reserved (considered unhealthy) - other values: error running probe type: "integer" example: 0 Output: description: "Output from last check" type: "string" HostConfig: description: "Container configuration that depends on the host we are running on" allOf: - $ref: "#/definitions/Resources" - type: "object" properties: # Applicable to all platforms Binds: type: "array" description: | A list of volume bindings for this container. Each volume binding is a string in one of these forms: - `host-src:container-dest[:options]` to bind-mount a host path into the container. Both `host-src`, and `container-dest` must be an _absolute_ path. - `volume-name:container-dest[:options]` to bind-mount a volume managed by a volume driver into the container. `container-dest` must be an _absolute_ path. `options` is an optional, comma-delimited list of: - `nocopy` disables automatic copying of data from the container path to the volume. The `nocopy` flag only applies to named volumes. - `[ro|rw]` mounts a volume read-only or read-write, respectively. If omitted or set to `rw`, volumes are mounted read-write. - `[z|Z]` applies SELinux labels to allow or deny multiple containers to read and write to the same volume. - `z`: a _shared_ content label is applied to the content. This label indicates that multiple containers can share the volume content, for both reading and writing. - `Z`: a _private unshared_ label is applied to the content. This label indicates that only the current container can use a private volume. Labeling systems such as SELinux require proper labels to be placed on volume content that is mounted into a container. Without a label, the security system can prevent a container's processes from using the content. By default, the labels set by the host operating system are not modified. - `[[r]shared|[r]slave|[r]private]` specifies mount [propagation behavior](https://www.kernel.org/doc/Documentation/filesystems/sharedsubtree.txt). This only applies to bind-mounted volumes, not internal volumes or named volumes. 
Mount propagation requires the source mount point (the location where the source directory is mounted in the host operating system) to have the correct propagation properties. For shared volumes, the source mount point must be set to `shared`. For slave volumes, the mount must be set to either `shared` or `slave`. items: type: "string" ContainerIDFile: type: "string" description: "Path to a file where the container ID is written" LogConfig: type: "object" description: "The logging configuration for this container" properties: Type: type: "string" enum: - "json-file" - "syslog" - "journald" - "gelf" - "fluentd" - "awslogs" - "splunk" - "etwlogs" - "none" Config: type: "object" additionalProperties: type: "string" NetworkMode: type: "string" description: | Network mode to use for this container. Supported standard values are: `bridge`, `host`, `none`, and `container:<name|id>`. Any other value is taken as a custom network's name to which this container should connect to. PortBindings: $ref: "#/definitions/PortMap" RestartPolicy: $ref: "#/definitions/RestartPolicy" AutoRemove: type: "boolean" description: | Automatically remove the container when the container's process exits. This has no effect if `RestartPolicy` is set. VolumeDriver: type: "string" description: "Driver that this container uses to mount volumes." VolumesFrom: type: "array" description: | A list of volumes to inherit from another container, specified in the form `<container name>[:<ro|rw>]`. items: type: "string" Mounts: description: | Specification for mounts to be added to the container. type: "array" items: $ref: "#/definitions/Mount" # Applicable to UNIX platforms CapAdd: type: "array" description: | A list of kernel capabilities to add to the container. Conflicts with option 'Capabilities'. items: type: "string" CapDrop: type: "array" description: | A list of kernel capabilities to drop from the container. Conflicts with option 'Capabilities'. items: type: "string" CgroupnsMode: type: "string" enum: - "private" - "host" description: | cgroup namespace mode for the container. Possible values are: - `"private"`: the container runs in its own private cgroup namespace - `"host"`: use the host system's cgroup namespace If not specified, the daemon default is used, which can either be `"private"` or `"host"`, depending on daemon version, kernel support and configuration. Dns: type: "array" description: "A list of DNS servers for the container to use." items: type: "string" DnsOptions: type: "array" description: "A list of DNS options." items: type: "string" DnsSearch: type: "array" description: "A list of DNS search domains." items: type: "string" ExtraHosts: type: "array" description: | A list of hostnames/IP mappings to add to the container's `/etc/hosts` file. Specified in the form `["hostname:IP"]`. items: type: "string" GroupAdd: type: "array" description: | A list of additional groups that the container process will run as. items: type: "string" IpcMode: type: "string" description: | IPC sharing mode for the container. Possible values are: - `"none"`: own private IPC namespace, with /dev/shm not mounted - `"private"`: own private IPC namespace - `"shareable"`: own private IPC namespace, with a possibility to share it with other containers - `"container:<name|id>"`: join another (shareable) container's IPC namespace - `"host"`: use the host system's IPC namespace If not specified, daemon default is used, which can either be `"private"` or `"shareable"`, depending on daemon version and configuration. 
Cgroup: type: "string" description: "Cgroup to use for the container." Links: type: "array" description: | A list of links for the container in the form `container_name:alias`. items: type: "string" OomScoreAdj: type: "integer" description: | An integer value containing the score given to the container in order to tune OOM killer preferences. example: 500 PidMode: type: "string" description: | Set the PID (Process) Namespace mode for the container. It can be either: - `"container:<name|id>"`: joins another container's PID namespace - `"host"`: use the host's PID namespace inside the container Privileged: type: "boolean" description: "Gives the container full access to the host." PublishAllPorts: type: "boolean" description: | Allocates an ephemeral host port for all of a container's exposed ports. Ports are de-allocated when the container stops and allocated when the container starts. The allocated port might be changed when restarting the container. The port is selected from the ephemeral port range that depends on the kernel. For example, on Linux the range is defined by `/proc/sys/net/ipv4/ip_local_port_range`. ReadonlyRootfs: type: "boolean" description: "Mount the container's root filesystem as read only." SecurityOpt: type: "array" description: "A list of string values to customize labels for MLS systems, such as SELinux." items: type: "string" StorageOpt: type: "object" description: | Storage driver options for this container, in the form `{"size": "120G"}`. additionalProperties: type: "string" Tmpfs: type: "object" description: | A map of container directories which should be replaced by tmpfs mounts, and their corresponding mount options. For example: ``` { "/run": "rw,noexec,nosuid,size=65536k" } ``` additionalProperties: type: "string" UTSMode: type: "string" description: "UTS namespace to use for the container." UsernsMode: type: "string" description: | Sets the usernamespace mode for the container when usernamespace remapping option is enabled. ShmSize: type: "integer" description: | Size of `/dev/shm` in bytes. If omitted, the system uses 64MB. minimum: 0 Sysctls: type: "object" description: | A list of kernel parameters (sysctls) to set in the container. For example: ``` {"net.ipv4.ip_forward": "1"} ``` additionalProperties: type: "string" Runtime: type: "string" description: "Runtime to use with this container." # Applicable to Windows ConsoleSize: type: "array" description: | Initial console size, as an `[height, width]` array. (Windows only) minItems: 2 maxItems: 2 items: type: "integer" minimum: 0 Isolation: type: "string" description: | Isolation technology of the container. (Windows only) enum: - "default" - "process" - "hyperv" MaskedPaths: type: "array" description: | The list of paths to be masked inside the container (this overrides the default set of paths). items: type: "string" ReadonlyPaths: type: "array" description: | The list of paths to be set as read-only inside the container (this overrides the default set of paths). items: type: "string" ContainerConfig: description: "Configuration for a container that is portable between hosts" type: "object" properties: Hostname: description: "The hostname to use for the container, as a valid RFC 1123 hostname." type: "string" Domainname: description: "The domain name to use for the container." type: "string" User: description: "The user that commands are run as inside the container." type: "string" AttachStdin: description: "Whether to attach to `stdin`." 
type: "boolean" default: false AttachStdout: description: "Whether to attach to `stdout`." type: "boolean" default: true AttachStderr: description: "Whether to attach to `stderr`." type: "boolean" default: true ExposedPorts: description: | An object mapping ports to an empty object in the form: `{"<port>/<tcp|udp|sctp>": {}}` type: "object" additionalProperties: type: "object" enum: - {} default: {} Tty: description: | Attach standard streams to a TTY, including `stdin` if it is not closed. type: "boolean" default: false OpenStdin: description: "Open `stdin`" type: "boolean" default: false StdinOnce: description: "Close `stdin` after one attached client disconnects" type: "boolean" default: false Env: description: | A list of environment variables to set inside the container in the form `["VAR=value", ...]`. A variable without `=` is removed from the environment, rather than to have an empty value. type: "array" items: type: "string" Cmd: description: | Command to run specified as a string or an array of strings. type: "array" items: type: "string" Healthcheck: $ref: "#/definitions/HealthConfig" ArgsEscaped: description: "Command is already escaped (Windows only)" type: "boolean" Image: description: | The name of the image to use when creating the container/ type: "string" Volumes: description: | An object mapping mount point paths inside the container to empty objects. type: "object" additionalProperties: type: "object" enum: - {} default: {} WorkingDir: description: "The working directory for commands to run in." type: "string" Entrypoint: description: | The entry point for the container as a string or an array of strings. If the array consists of exactly one empty string (`[""]`) then the entry point is reset to system default (i.e., the entry point used by docker when there is no `ENTRYPOINT` instruction in the `Dockerfile`). type: "array" items: type: "string" NetworkDisabled: description: "Disable networking for the container." type: "boolean" MacAddress: description: "MAC address of the container." type: "string" OnBuild: description: | `ONBUILD` metadata that were defined in the image's `Dockerfile`. type: "array" items: type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" StopSignal: description: | Signal to stop a container as a string or unsigned integer. type: "string" default: "SIGTERM" StopTimeout: description: "Timeout to stop a container in seconds." type: "integer" default: 10 Shell: description: | Shell for when `RUN`, `CMD`, and `ENTRYPOINT` uses a shell. type: "array" items: type: "string" NetworkingConfig: description: | NetworkingConfig represents the container's networking configuration for each of its interfaces. It is used for the networking configs specified in the `docker create` and `docker network connect` commands. type: "object" properties: EndpointsConfig: description: | A mapping of network name to endpoint configuration for that network. type: "object" additionalProperties: $ref: "#/definitions/EndpointSettings" example: # putting an example here, instead of using the example values from # /definitions/EndpointSettings, because containers/create currently # does not support attaching to multiple networks, so the example request # would be confusing if it showed that multiple networks can be contained # in the EndpointsConfig. 
# TODO remove once we support multiple networks on container create (see https://github.com/moby/moby/blob/07e6b843594e061f82baa5fa23c2ff7d536c2a05/daemon/create.go#L323) EndpointsConfig: isolated_nw: IPAMConfig: IPv4Address: "172.20.30.33" IPv6Address: "2001:db8:abcd::3033" LinkLocalIPs: - "169.254.34.68" - "fe80::3468" Links: - "container_1" - "container_2" Aliases: - "server_x" - "server_y" NetworkSettings: description: "NetworkSettings exposes the network settings in the API" type: "object" properties: Bridge: description: Name of the network's bridge (for example, `docker0`). type: "string" example: "docker0" SandboxID: description: SandboxID uniquely represents a container's network stack. type: "string" example: "9d12daf2c33f5959c8bf90aa513e4f65b561738661003029ec84830cd503a0c3" HairpinMode: description: | Indicates if hairpin NAT should be enabled on the virtual interface. type: "boolean" example: false LinkLocalIPv6Address: description: IPv6 unicast address using the link-local prefix. type: "string" example: "fe80::42:acff:fe11:1" LinkLocalIPv6PrefixLen: description: Prefix length of the IPv6 unicast address. type: "integer" example: "64" Ports: $ref: "#/definitions/PortMap" SandboxKey: description: SandboxKey identifies the sandbox type: "string" example: "/var/run/docker/netns/8ab54b426c38" # TODO is SecondaryIPAddresses actually used? SecondaryIPAddresses: description: "" type: "array" items: $ref: "#/definitions/Address" x-nullable: true # TODO is SecondaryIPv6Addresses actually used? SecondaryIPv6Addresses: description: "" type: "array" items: $ref: "#/definitions/Address" x-nullable: true # TODO properties below are part of DefaultNetworkSettings, which is # marked as deprecated since Docker 1.9 and to be removed in Docker v17.12 EndpointID: description: | EndpointID uniquely represents a service endpoint in a Sandbox. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "b88f5b905aabf2893f3cbc4ee42d1ea7980bbc0a92e2c8922b1e1795298afb0b" Gateway: description: | Gateway address for the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "172.17.0.1" GlobalIPv6Address: description: | Global IPv6 address for the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "2001:db8::5689" GlobalIPv6PrefixLen: description: | Mask length of the global IPv6 address. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. 
This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "integer" example: 64 IPAddress: description: | IPv4 address for the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "172.17.0.4" IPPrefixLen: description: | Mask length of the IPv4 address. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "integer" example: 16 IPv6Gateway: description: | IPv6 gateway address for this network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "2001:db8:2::100" MacAddress: description: | MAC address for the container on the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "02:42:ac:11:00:04" Networks: description: | Information about all networks that the container is connected to. type: "object" additionalProperties: $ref: "#/definitions/EndpointSettings" Address: description: Address represents an IPv4 or IPv6 IP address. type: "object" properties: Addr: description: IP address. type: "string" PrefixLen: description: Mask length of the IP address. type: "integer" PortMap: description: | PortMap describes the mapping of container ports to host ports, using the container's port-number and protocol as key in the format `<port>/<protocol>`, for example, `80/udp`. If a container's port is mapped for multiple protocols, separate entries are added to the mapping table. type: "object" additionalProperties: type: "array" x-nullable: true items: $ref: "#/definitions/PortBinding" example: "443/tcp": - HostIp: "127.0.0.1" HostPort: "4443" "80/tcp": - HostIp: "0.0.0.0" HostPort: "80" - HostIp: "0.0.0.0" HostPort: "8080" "80/udp": - HostIp: "0.0.0.0" HostPort: "80" "53/udp": - HostIp: "0.0.0.0" HostPort: "53" "2377/tcp": null PortBinding: description: | PortBinding represents a binding between a host IP address and a host port. type: "object" properties: HostIp: description: "Host IP address that the container's port is mapped to." type: "string" example: "127.0.0.1" HostPort: description: "Host port number that the container's port is mapped to." type: "string" example: "4443" GraphDriverData: description: "Information about a container's graph driver." 
type: "object" required: [Name, Data] properties: Name: type: "string" x-nullable: false Data: type: "object" x-nullable: false additionalProperties: type: "string" Image: type: "object" required: - Id - Parent - Comment - Created - Container - DockerVersion - Author - Architecture - Os - Size - VirtualSize - GraphDriver - RootFS properties: Id: type: "string" x-nullable: false RepoTags: type: "array" items: type: "string" RepoDigests: type: "array" items: type: "string" Parent: type: "string" x-nullable: false Comment: type: "string" x-nullable: false Created: type: "string" x-nullable: false Container: type: "string" x-nullable: false ContainerConfig: $ref: "#/definitions/ContainerConfig" DockerVersion: type: "string" x-nullable: false Author: type: "string" x-nullable: false Config: $ref: "#/definitions/ContainerConfig" Architecture: type: "string" x-nullable: false Os: type: "string" x-nullable: false OsVersion: type: "string" Size: type: "integer" format: "int64" x-nullable: false VirtualSize: type: "integer" format: "int64" x-nullable: false GraphDriver: $ref: "#/definitions/GraphDriverData" RootFS: type: "object" required: [Type] properties: Type: type: "string" x-nullable: false Layers: type: "array" items: type: "string" BaseLayer: type: "string" Metadata: type: "object" properties: LastTagTime: type: "string" format: "dateTime" ImageSummary: type: "object" required: - Id - ParentId - RepoTags - RepoDigests - Created - Size - SharedSize - VirtualSize - Labels - Containers properties: Id: type: "string" x-nullable: false ParentId: type: "string" x-nullable: false RepoTags: type: "array" x-nullable: false items: type: "string" RepoDigests: type: "array" x-nullable: false items: type: "string" Created: type: "integer" x-nullable: false Size: type: "integer" x-nullable: false SharedSize: type: "integer" x-nullable: false VirtualSize: type: "integer" x-nullable: false Labels: type: "object" x-nullable: false additionalProperties: type: "string" Containers: x-nullable: false type: "integer" AuthConfig: type: "object" properties: username: type: "string" password: type: "string" email: type: "string" serveraddress: type: "string" example: username: "hannibal" password: "xxxx" serveraddress: "https://index.docker.io/v1/" ProcessConfig: type: "object" properties: privileged: type: "boolean" user: type: "string" tty: type: "boolean" entrypoint: type: "string" arguments: type: "array" items: type: "string" Volume: type: "object" required: [Name, Driver, Mountpoint, Labels, Scope, Options] properties: Name: type: "string" description: "Name of the volume." x-nullable: false Driver: type: "string" description: "Name of the volume driver used by the volume." x-nullable: false Mountpoint: type: "string" description: "Mount path of the volume on the host." x-nullable: false CreatedAt: type: "string" format: "dateTime" description: "Date/Time the volume was created." Status: type: "object" description: | Low-level details about the volume, provided by the volume driver. Details are returned as a map with key/value pairs: `{"key":"value","key2":"value2"}`. The `Status` field is optional, and is omitted if the volume driver does not support this feature. additionalProperties: type: "object" Labels: type: "object" description: "User-defined key/value metadata." x-nullable: false additionalProperties: type: "string" Scope: type: "string" description: | The level at which the volume exists. Either `global` for cluster-wide, or `local` for machine level. 
default: "local" x-nullable: false enum: ["local", "global"] Options: type: "object" description: | The driver specific options used when creating the volume. additionalProperties: type: "string" UsageData: type: "object" x-nullable: true required: [Size, RefCount] description: | Usage details about the volume. This information is used by the `GET /system/df` endpoint, and omitted in other endpoints. properties: Size: type: "integer" default: -1 description: | Amount of disk space used by the volume (in bytes). This information is only available for volumes created with the `"local"` volume driver. For volumes created with other volume drivers, this field is set to `-1` ("not available") x-nullable: false RefCount: type: "integer" default: -1 description: | The number of containers referencing this volume. This field is set to `-1` if the reference-count is not available. x-nullable: false example: Name: "tardis" Driver: "custom" Mountpoint: "/var/lib/docker/volumes/tardis" Status: hello: "world" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Scope: "local" CreatedAt: "2016-06-07T20:31:11.853781916Z" Network: type: "object" properties: Name: type: "string" Id: type: "string" Created: type: "string" format: "dateTime" Scope: type: "string" Driver: type: "string" EnableIPv6: type: "boolean" IPAM: $ref: "#/definitions/IPAM" Internal: type: "boolean" Attachable: type: "boolean" Ingress: type: "boolean" Containers: type: "object" additionalProperties: $ref: "#/definitions/NetworkContainer" Options: type: "object" additionalProperties: type: "string" Labels: type: "object" additionalProperties: type: "string" example: Name: "net01" Id: "7d86d31b1478e7cca9ebed7e73aa0fdeec46c5ca29497431d3007d2d9e15ed99" Created: "2016-10-19T04:33:30.360899459Z" Scope: "local" Driver: "bridge" EnableIPv6: false IPAM: Driver: "default" Config: - Subnet: "172.19.0.0/16" Gateway: "172.19.0.1" Options: foo: "bar" Internal: false Attachable: false Ingress: false Containers: 19a4d5d687db25203351ed79d478946f861258f018fe384f229f2efa4b23513c: Name: "test" EndpointID: "628cadb8bcb92de107b2a1e516cbffe463e321f548feb37697cce00ad694f21a" MacAddress: "02:42:ac:13:00:02" IPv4Address: "172.19.0.2/16" IPv6Address: "" Options: com.docker.network.bridge.default_bridge: "true" com.docker.network.bridge.enable_icc: "true" com.docker.network.bridge.enable_ip_masquerade: "true" com.docker.network.bridge.host_binding_ipv4: "0.0.0.0" com.docker.network.bridge.name: "docker0" com.docker.network.driver.mtu: "1500" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" IPAM: type: "object" properties: Driver: description: "Name of the IPAM driver to use." type: "string" default: "default" Config: description: | List of IPAM configuration options, specified as a map: ``` {"Subnet": <CIDR>, "IPRange": <CIDR>, "Gateway": <IP address>, "AuxAddress": <device_name:IP address>} ``` type: "array" items: type: "object" additionalProperties: type: "string" Options: description: "Driver-specific options, specified as a map." 
type: "object" additionalProperties: type: "string" NetworkContainer: type: "object" properties: Name: type: "string" EndpointID: type: "string" MacAddress: type: "string" IPv4Address: type: "string" IPv6Address: type: "string" BuildInfo: type: "object" properties: id: type: "string" stream: type: "string" error: type: "string" errorDetail: $ref: "#/definitions/ErrorDetail" status: type: "string" progress: type: "string" progressDetail: $ref: "#/definitions/ProgressDetail" aux: $ref: "#/definitions/ImageID" BuildCache: type: "object" properties: ID: type: "string" Parent: type: "string" Type: type: "string" Description: type: "string" InUse: type: "boolean" Shared: type: "boolean" Size: description: | Amount of disk space used by the build cache (in bytes). type: "integer" CreatedAt: description: | Date and time at which the build cache was created in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2016-08-18T10:44:24.496525531Z" LastUsedAt: description: | Date and time at which the build cache was last used in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" x-nullable: true example: "2017-08-09T07:09:37.632105588Z" UsageCount: type: "integer" ImageID: type: "object" description: "Image ID or Digest" properties: ID: type: "string" example: ID: "sha256:85f05633ddc1c50679be2b16a0479ab6f7637f8884e0cfe0f4d20e1ebb3d6e7c" CreateImageInfo: type: "object" properties: id: type: "string" error: type: "string" status: type: "string" progress: type: "string" progressDetail: $ref: "#/definitions/ProgressDetail" PushImageInfo: type: "object" properties: error: type: "string" status: type: "string" progress: type: "string" progressDetail: $ref: "#/definitions/ProgressDetail" ErrorDetail: type: "object" properties: code: type: "integer" message: type: "string" ProgressDetail: type: "object" properties: current: type: "integer" total: type: "integer" ErrorResponse: description: "Represents an error." type: "object" required: ["message"] properties: message: description: "The error message." type: "string" x-nullable: false example: message: "Something went wrong." IdResponse: description: "Response to an API call that returns just an Id" type: "object" required: ["Id"] properties: Id: description: "The id of the newly created object." type: "string" x-nullable: false EndpointSettings: description: "Configuration for a network endpoint." type: "object" properties: # Configurations IPAMConfig: $ref: "#/definitions/EndpointIPAMConfig" Links: type: "array" items: type: "string" example: - "container_1" - "container_2" Aliases: type: "array" items: type: "string" example: - "server_x" - "server_y" # Operational data NetworkID: description: | Unique ID of the network. type: "string" example: "08754567f1f40222263eab4102e1c733ae697e8e354aa9cd6e18d7402835292a" EndpointID: description: | Unique ID for the service endpoint in a Sandbox. type: "string" example: "b88f5b905aabf2893f3cbc4ee42d1ea7980bbc0a92e2c8922b1e1795298afb0b" Gateway: description: | Gateway address for this network. type: "string" example: "172.17.0.1" IPAddress: description: | IPv4 address. type: "string" example: "172.17.0.4" IPPrefixLen: description: | Mask length of the IPv4 address. type: "integer" example: 16 IPv6Gateway: description: | IPv6 gateway address. type: "string" example: "2001:db8:2::100" GlobalIPv6Address: description: | Global IPv6 address. 
type: "string" example: "2001:db8::5689" GlobalIPv6PrefixLen: description: | Mask length of the global IPv6 address. type: "integer" format: "int64" example: 64 MacAddress: description: | MAC address for the endpoint on this network. type: "string" example: "02:42:ac:11:00:04" DriverOpts: description: | DriverOpts is a mapping of driver options and values. These options are passed directly to the driver and are driver specific. type: "object" x-nullable: true additionalProperties: type: "string" example: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" EndpointIPAMConfig: description: | EndpointIPAMConfig represents an endpoint's IPAM configuration. type: "object" x-nullable: true properties: IPv4Address: type: "string" example: "172.20.30.33" IPv6Address: type: "string" example: "2001:db8:abcd::3033" LinkLocalIPs: type: "array" items: type: "string" example: - "169.254.34.68" - "fe80::3468" PluginMount: type: "object" x-nullable: false required: [Name, Description, Settable, Source, Destination, Type, Options] properties: Name: type: "string" x-nullable: false example: "some-mount" Description: type: "string" x-nullable: false example: "This is a mount that's used by the plugin." Settable: type: "array" items: type: "string" Source: type: "string" example: "/var/lib/docker/plugins/" Destination: type: "string" x-nullable: false example: "/mnt/state" Type: type: "string" x-nullable: false example: "bind" Options: type: "array" items: type: "string" example: - "rbind" - "rw" PluginDevice: type: "object" required: [Name, Description, Settable, Path] x-nullable: false properties: Name: type: "string" x-nullable: false Description: type: "string" x-nullable: false Settable: type: "array" items: type: "string" Path: type: "string" example: "/dev/fuse" PluginEnv: type: "object" x-nullable: false required: [Name, Description, Settable, Value] properties: Name: x-nullable: false type: "string" Description: x-nullable: false type: "string" Settable: type: "array" items: type: "string" Value: type: "string" PluginInterfaceType: type: "object" x-nullable: false required: [Prefix, Capability, Version] properties: Prefix: type: "string" x-nullable: false Capability: type: "string" x-nullable: false Version: type: "string" x-nullable: false PluginPrivilegeItem: description: | Describes a permission the user has to accept upon installing the plugin. type: "object" properties: Name: type: "string" example: "network" Description: type: "string" Value: type: "array" items: type: "string" example: - "host" Plugin: description: "A plugin for the Engine API" type: "object" required: [Settings, Enabled, Config, Name] properties: Id: type: "string" example: "5724e2c8652da337ab2eedd19fc6fc0ec908e4bd907c7421bf6a8dfc70c4c078" Name: type: "string" x-nullable: false example: "tiborvass/sample-volume-plugin" Enabled: description: True if the plugin is running. False if the plugin is not running, only installed. type: "boolean" x-nullable: false example: true Settings: description: "Settings that can be modified by users." 
type: "object" x-nullable: false required: [Args, Devices, Env, Mounts] properties: Mounts: type: "array" items: $ref: "#/definitions/PluginMount" Env: type: "array" items: type: "string" example: - "DEBUG=0" Args: type: "array" items: type: "string" Devices: type: "array" items: $ref: "#/definitions/PluginDevice" PluginReference: description: "plugin remote reference used to push/pull the plugin" type: "string" x-nullable: false example: "localhost:5000/tiborvass/sample-volume-plugin:latest" Config: description: "The config of a plugin." type: "object" x-nullable: false required: - Description - Documentation - Interface - Entrypoint - WorkDir - Network - Linux - PidHost - PropagatedMount - IpcHost - Mounts - Env - Args properties: DockerVersion: description: "Docker Version used to create the plugin" type: "string" x-nullable: false example: "17.06.0-ce" Description: type: "string" x-nullable: false example: "A sample volume plugin for Docker" Documentation: type: "string" x-nullable: false example: "https://docs.docker.com/engine/extend/plugins/" Interface: description: "The interface between Docker and the plugin" x-nullable: false type: "object" required: [Types, Socket] properties: Types: type: "array" items: $ref: "#/definitions/PluginInterfaceType" example: - "docker.volumedriver/1.0" Socket: type: "string" x-nullable: false example: "plugins.sock" ProtocolScheme: type: "string" example: "some.protocol/v1.0" description: "Protocol to use for clients connecting to the plugin." enum: - "" - "moby.plugins.http/v1" Entrypoint: type: "array" items: type: "string" example: - "/usr/bin/sample-volume-plugin" - "/data" WorkDir: type: "string" x-nullable: false example: "/bin/" User: type: "object" x-nullable: false properties: UID: type: "integer" format: "uint32" example: 1000 GID: type: "integer" format: "uint32" example: 1000 Network: type: "object" x-nullable: false required: [Type] properties: Type: x-nullable: false type: "string" example: "host" Linux: type: "object" x-nullable: false required: [Capabilities, AllowAllDevices, Devices] properties: Capabilities: type: "array" items: type: "string" example: - "CAP_SYS_ADMIN" - "CAP_SYSLOG" AllowAllDevices: type: "boolean" x-nullable: false example: false Devices: type: "array" items: $ref: "#/definitions/PluginDevice" PropagatedMount: type: "string" x-nullable: false example: "/mnt/volumes" IpcHost: type: "boolean" x-nullable: false example: false PidHost: type: "boolean" x-nullable: false example: false Mounts: type: "array" items: $ref: "#/definitions/PluginMount" Env: type: "array" items: $ref: "#/definitions/PluginEnv" example: - Name: "DEBUG" Description: "If set, prints debug messages" Settable: null Value: "0" Args: type: "object" x-nullable: false required: [Name, Description, Settable, Value] properties: Name: x-nullable: false type: "string" example: "args" Description: x-nullable: false type: "string" example: "command line arguments" Settable: type: "array" items: type: "string" Value: type: "array" items: type: "string" rootfs: type: "object" properties: type: type: "string" example: "layers" diff_ids: type: "array" items: type: "string" example: - "sha256:675532206fbf3030b8458f88d6e26d4eb1577688a25efec97154c94e8b6b4887" - "sha256:e216a057b1cb1efc11f8a268f37ef62083e70b1b38323ba252e25ac88904a7e8" ObjectVersion: description: | The version number of the object such as node, service, etc. This is needed to avoid conflicting writes. 
The client must send the version number along with the modified specification when updating these objects. This approach ensures safe concurrency and determinism in that the change on the object may not be applied if the version number has changed from the last read. In other words, if two update requests specify the same base version, only one of the requests can succeed. As a result, two separate update requests that happen at the same time will not unintentionally overwrite each other. type: "object" properties: Index: type: "integer" format: "uint64" example: 373531 NodeSpec: type: "object" properties: Name: description: "Name for the node." type: "string" example: "my-node" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" Role: description: "Role of the node." type: "string" enum: - "worker" - "manager" example: "manager" Availability: description: "Availability of the node." type: "string" enum: - "active" - "pause" - "drain" example: "active" example: Availability: "active" Name: "node-name" Role: "manager" Labels: foo: "bar" Node: type: "object" properties: ID: type: "string" example: "24ifsmvkjbyhk" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: description: | Date and time at which the node was added to the swarm in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2016-08-18T10:44:24.496525531Z" UpdatedAt: description: | Date and time at which the node was last updated in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2017-08-09T07:09:37.632105588Z" Spec: $ref: "#/definitions/NodeSpec" Description: $ref: "#/definitions/NodeDescription" Status: $ref: "#/definitions/NodeStatus" ManagerStatus: $ref: "#/definitions/ManagerStatus" NodeDescription: description: | NodeDescription encapsulates the properties of the Node as reported by the agent. type: "object" properties: Hostname: type: "string" example: "bf3067039e47" Platform: $ref: "#/definitions/Platform" Resources: $ref: "#/definitions/ResourceObject" Engine: $ref: "#/definitions/EngineDescription" TLSInfo: $ref: "#/definitions/TLSInfo" Platform: description: | Platform represents the platform (Arch/OS). type: "object" properties: Architecture: description: | Architecture represents the hardware architecture (for example, `x86_64`). type: "string" example: "x86_64" OS: description: | OS represents the Operating System (for example, `linux` or `windows`). type: "string" example: "linux" EngineDescription: description: "EngineDescription provides information about an engine." 
type: "object" properties: EngineVersion: type: "string" example: "17.06.0" Labels: type: "object" additionalProperties: type: "string" example: foo: "bar" Plugins: type: "array" items: type: "object" properties: Type: type: "string" Name: type: "string" example: - Type: "Log" Name: "awslogs" - Type: "Log" Name: "fluentd" - Type: "Log" Name: "gcplogs" - Type: "Log" Name: "gelf" - Type: "Log" Name: "journald" - Type: "Log" Name: "json-file" - Type: "Log" Name: "logentries" - Type: "Log" Name: "splunk" - Type: "Log" Name: "syslog" - Type: "Network" Name: "bridge" - Type: "Network" Name: "host" - Type: "Network" Name: "ipvlan" - Type: "Network" Name: "macvlan" - Type: "Network" Name: "null" - Type: "Network" Name: "overlay" - Type: "Volume" Name: "local" - Type: "Volume" Name: "localhost:5000/vieux/sshfs:latest" - Type: "Volume" Name: "vieux/sshfs:latest" TLSInfo: description: | Information about the issuer of leaf TLS certificates and the trusted root CA certificate. type: "object" properties: TrustRoot: description: | The root CA certificate(s) that are used to validate leaf TLS certificates. type: "string" CertIssuerSubject: description: The base64-url-safe-encoded raw subject bytes of the issuer. type: "string" CertIssuerPublicKey: description: | The base64-url-safe-encoded raw public key bytes of the issuer. type: "string" example: TrustRoot: | -----BEGIN CERTIFICATE----- MIIBajCCARCgAwIBAgIUbYqrLSOSQHoxD8CwG6Bi2PJi9c8wCgYIKoZIzj0EAwIw EzERMA8GA1UEAxMIc3dhcm0tY2EwHhcNMTcwNDI0MjE0MzAwWhcNMzcwNDE5MjE0 MzAwWjATMREwDwYDVQQDEwhzd2FybS1jYTBZMBMGByqGSM49AgEGCCqGSM49AwEH A0IABJk/VyMPYdaqDXJb/VXh5n/1Yuv7iNrxV3Qb3l06XD46seovcDWs3IZNV1lf 3Skyr0ofcchipoiHkXBODojJydSjQjBAMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMB Af8EBTADAQH/MB0GA1UdDgQWBBRUXxuRcnFjDfR/RIAUQab8ZV/n4jAKBggqhkjO PQQDAgNIADBFAiAy+JTe6Uc3KyLCMiqGl2GyWGQqQDEcO3/YG36x7om65AIhAJvz pxv6zFeVEkAEEkqIYi0omA9+CjanB/6Bz4n1uw8H -----END CERTIFICATE----- CertIssuerSubject: "MBMxETAPBgNVBAMTCHN3YXJtLWNh" CertIssuerPublicKey: "MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEmT9XIw9h1qoNclv9VeHmf/Vi6/uI2vFXdBveXTpcPjqx6i9wNazchk1XWV/dKTKvSh9xyGKmiIeRcE4OiMnJ1A==" NodeStatus: description: | NodeStatus represents the status of a node. It provides the current status of the node, as seen by the manager. type: "object" properties: State: $ref: "#/definitions/NodeState" Message: type: "string" example: "" Addr: description: "IP address of the node." type: "string" example: "172.17.0.2" NodeState: description: "NodeState represents the state of a node." type: "string" enum: - "unknown" - "down" - "ready" - "disconnected" example: "ready" ManagerStatus: description: | ManagerStatus represents the status of a manager. It provides the current status of a node's manager component, if the node is a manager. x-nullable: true type: "object" properties: Leader: type: "boolean" default: false example: true Reachability: $ref: "#/definitions/Reachability" Addr: description: | The IP address and port at which the manager is reachable. type: "string" example: "10.0.0.46:2377" Reachability: description: "Reachability represents the reachability of a node." type: "string" enum: - "unknown" - "unreachable" - "reachable" example: "reachable" SwarmSpec: description: "User modifiable swarm configuration." type: "object" properties: Name: description: "Name of the swarm." type: "string" example: "default" Labels: description: "User-defined key/value metadata." 
type: "object" additionalProperties: type: "string" example: com.example.corp.type: "production" com.example.corp.department: "engineering" Orchestration: description: "Orchestration configuration." type: "object" x-nullable: true properties: TaskHistoryRetentionLimit: description: | The number of historic tasks to keep per instance or node. If negative, never remove completed or failed tasks. type: "integer" format: "int64" example: 10 Raft: description: "Raft configuration." type: "object" properties: SnapshotInterval: description: "The number of log entries between snapshots." type: "integer" format: "uint64" example: 10000 KeepOldSnapshots: description: | The number of snapshots to keep beyond the current snapshot. type: "integer" format: "uint64" LogEntriesForSlowFollowers: description: | The number of log entries to keep around to sync up slow followers after a snapshot is created. type: "integer" format: "uint64" example: 500 ElectionTick: description: | The number of ticks that a follower will wait for a message from the leader before becoming a candidate and starting an election. `ElectionTick` must be greater than `HeartbeatTick`. A tick currently defaults to one second, so these translate directly to seconds currently, but this is NOT guaranteed. type: "integer" example: 3 HeartbeatTick: description: | The number of ticks between heartbeats. Every HeartbeatTick ticks, the leader will send a heartbeat to the followers. A tick currently defaults to one second, so these translate directly to seconds currently, but this is NOT guaranteed. type: "integer" example: 1 Dispatcher: description: "Dispatcher configuration." type: "object" x-nullable: true properties: HeartbeatPeriod: description: | The delay for an agent to send a heartbeat to the dispatcher. type: "integer" format: "int64" example: 5000000000 CAConfig: description: "CA configuration." type: "object" x-nullable: true properties: NodeCertExpiry: description: "The duration node certificates are issued for." type: "integer" format: "int64" example: 7776000000000000 ExternalCAs: description: | Configuration for forwarding signing requests to an external certificate authority. type: "array" items: type: "object" properties: Protocol: description: | Protocol for communication with the external CA (currently only `cfssl` is supported). type: "string" enum: - "cfssl" default: "cfssl" URL: description: | URL where certificate signing requests should be sent. type: "string" Options: description: | An object with key/value pairs that are interpreted as protocol-specific options for the external CA driver. type: "object" additionalProperties: type: "string" CACert: description: | The root CA certificate (in PEM format) this external CA uses to issue TLS certificates (assumed to be to the current swarm root CA certificate if not provided). type: "string" SigningCACert: description: | The desired signing CA certificate for all swarm node TLS leaf certificates, in PEM format. type: "string" SigningCAKey: description: | The desired signing CA key for all swarm node TLS leaf certificates, in PEM format. type: "string" ForceRotate: description: | An integer whose purpose is to force swarm to generate a new signing CA certificate and key, if none have been specified in `SigningCACert` and `SigningCAKey` format: "uint64" type: "integer" EncryptionConfig: description: "Parameters related to encryption-at-rest." type: "object" properties: AutoLockManagers: description: | If set, generate a key and use it to lock data stored on the managers. 
type: "boolean" example: false TaskDefaults: description: "Defaults for creating tasks in this cluster." type: "object" properties: LogDriver: description: | The log driver to use for tasks created in the orchestrator if unspecified by a service. Updating this value only affects new tasks. Existing tasks continue to use their previously configured log driver until recreated. type: "object" properties: Name: description: | The log driver to use as a default for new tasks. type: "string" example: "json-file" Options: description: | Driver-specific options for the selectd log driver, specified as key/value pairs. type: "object" additionalProperties: type: "string" example: "max-file": "10" "max-size": "100m" # The Swarm information for `GET /info`. It is the same as `GET /swarm`, but # without `JoinTokens`. ClusterInfo: description: | ClusterInfo represents information about the swarm as is returned by the "/info" endpoint. Join-tokens are not included. x-nullable: true type: "object" properties: ID: description: "The ID of the swarm." type: "string" example: "abajmipo7b4xz5ip2nrla6b11" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: description: | Date and time at which the swarm was initialised in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2016-08-18T10:44:24.496525531Z" UpdatedAt: description: | Date and time at which the swarm was last updated in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2017-08-09T07:09:37.632105588Z" Spec: $ref: "#/definitions/SwarmSpec" TLSInfo: $ref: "#/definitions/TLSInfo" RootRotationInProgress: description: | Whether there is currently a root CA rotation in progress for the swarm type: "boolean" example: false DataPathPort: description: | DataPathPort specifies the data path port number for data traffic. Acceptable port range is 1024 to 49151. If no port is set or is set to 0, the default port (4789) is used. type: "integer" format: "uint32" default: 4789 example: 4789 DefaultAddrPool: description: | Default Address Pool specifies default subnet pools for global scope networks. type: "array" items: type: "string" format: "CIDR" example: ["10.10.0.0/16", "20.20.0.0/16"] SubnetSize: description: | SubnetSize specifies the subnet size of the networks created from the default subnet pool. type: "integer" format: "uint32" maximum: 29 default: 24 example: 24 JoinTokens: description: | JoinTokens contains the tokens workers and managers need to join the swarm. type: "object" properties: Worker: description: | The token workers can use to join the swarm. type: "string" example: "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-1awxwuwd3z9j1z3puu7rcgdbx" Manager: description: | The token managers can use to join the swarm. type: "string" example: "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-7p73s1dx5in4tatdymyhg9hu2" Swarm: type: "object" allOf: - $ref: "#/definitions/ClusterInfo" - type: "object" properties: JoinTokens: $ref: "#/definitions/JoinTokens" TaskSpec: description: "User modifiable task configuration." type: "object" properties: PluginSpec: type: "object" description: | Plugin spec for the service. *(Experimental release only.)* <p><br /></p> > **Note**: ContainerSpec, NetworkAttachmentSpec, and PluginSpec are > mutually exclusive. PluginSpec is only used when the Runtime field > is set to `plugin`. NetworkAttachmentSpec is used when the Runtime > field is set to `attachment`. 
properties: Name: description: "The name or 'alias' to use for the plugin." type: "string" Remote: description: "The plugin image reference to use." type: "string" Disabled: description: "Disable the plugin once scheduled." type: "boolean" PluginPrivilege: type: "array" items: $ref: "#/definitions/PluginPrivilegeItem" ContainerSpec: type: "object" description: | Container spec for the service. <p><br /></p> > **Note**: ContainerSpec, NetworkAttachmentSpec, and PluginSpec are > mutually exclusive. PluginSpec is only used when the Runtime field > is set to `plugin`. NetworkAttachmentSpec is used when the Runtime > field is set to `attachment`. properties: Image: description: "The image name to use for the container" type: "string" Labels: description: "User-defined key/value data." type: "object" additionalProperties: type: "string" Command: description: "The command to be run in the image." type: "array" items: type: "string" Args: description: "Arguments to the command." type: "array" items: type: "string" Hostname: description: | The hostname to use for the container, as a valid [RFC 1123](https://tools.ietf.org/html/rfc1123) hostname. type: "string" Env: description: | A list of environment variables in the form `VAR=value`. type: "array" items: type: "string" Dir: description: "The working directory for commands to run in." type: "string" User: description: "The user inside the container." type: "string" Groups: type: "array" description: | A list of additional groups that the container process will run as. items: type: "string" Privileges: type: "object" description: "Security options for the container" properties: CredentialSpec: type: "object" description: "CredentialSpec for managed service account (Windows only)" properties: Config: type: "string" example: "0bt9dmxjvjiqermk6xrop3ekq" description: | Load credential spec from a Swarm Config with the given ID. The specified config must also be present in the Configs field with the Runtime property set. <p><br /></p> > **Note**: `CredentialSpec.File`, `CredentialSpec.Registry`, > and `CredentialSpec.Config` are mutually exclusive. File: type: "string" example: "spec.json" description: | Load credential spec from this file. The file is read by the daemon, and must be present in the `CredentialSpecs` subdirectory in the docker data directory, which defaults to `C:\ProgramData\Docker\` on Windows. For example, specifying `spec.json` loads `C:\ProgramData\Docker\CredentialSpecs\spec.json`. <p><br /></p> > **Note**: `CredentialSpec.File`, `CredentialSpec.Registry`, > and `CredentialSpec.Config` are mutually exclusive. Registry: type: "string" description: | Load credential spec from this value in the Windows registry. The specified registry value must be located in: `HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization\Containers\CredentialSpecs` <p><br /></p> > **Note**: `CredentialSpec.File`, `CredentialSpec.Registry`, > and `CredentialSpec.Config` are mutually exclusive. SELinuxContext: type: "object" description: "SELinux labels of the container" properties: Disable: type: "boolean" description: "Disable SELinux" User: type: "string" description: "SELinux user label" Role: type: "string" description: "SELinux role label" Type: type: "string" description: "SELinux type label" Level: type: "string" description: "SELinux level label" TTY: description: "Whether a pseudo-TTY should be allocated." 
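            # Illustrative only (a YAML comment, ignored by the schema): a
            # `Privileges` fragment loading a Windows credential spec from a
            # file, as described above; the filename is an invented value and
            # only one of File, Registry, or Config may be set.
            #
            #   {
            #     "CredentialSpec": {"File": "spec.json"}
            #   }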
type: "boolean" OpenStdin: description: "Open `stdin`" type: "boolean" ReadOnly: description: "Mount the container's root filesystem as read only." type: "boolean" Mounts: description: | Specification for mounts to be added to containers created as part of the service. type: "array" items: $ref: "#/definitions/Mount" StopSignal: description: "Signal to stop the container." type: "string" StopGracePeriod: description: | Amount of time to wait for the container to terminate before forcefully killing it. type: "integer" format: "int64" HealthCheck: $ref: "#/definitions/HealthConfig" Hosts: type: "array" description: | A list of hostname/IP mappings to add to the container's `hosts` file. The format of extra hosts is specified in the [hosts(5)](http://man7.org/linux/man-pages/man5/hosts.5.html) man page: IP_address canonical_hostname [aliases...] items: type: "string" DNSConfig: description: | Specification for DNS related configurations in resolver configuration file (`resolv.conf`). type: "object" properties: Nameservers: description: "The IP addresses of the name servers." type: "array" items: type: "string" Search: description: "A search list for host-name lookup." type: "array" items: type: "string" Options: description: | A list of internal resolver variables to be modified (e.g., `debug`, `ndots:3`, etc.). type: "array" items: type: "string" Secrets: description: | Secrets contains references to zero or more secrets that will be exposed to the service. type: "array" items: type: "object" properties: File: description: | File represents a specific target that is backed by a file. type: "object" properties: Name: description: | Name represents the final filename in the filesystem. type: "string" UID: description: "UID represents the file UID." type: "string" GID: description: "GID represents the file GID." type: "string" Mode: description: "Mode represents the FileMode of the file." type: "integer" format: "uint32" SecretID: description: | SecretID represents the ID of the specific secret that we're referencing. type: "string" SecretName: description: | SecretName is the name of the secret that this references, but this is just provided for lookup/display purposes. The secret in the reference will be identified by its ID. type: "string" Configs: description: | Configs contains references to zero or more configs that will be exposed to the service. type: "array" items: type: "object" properties: File: description: | File represents a specific target that is backed by a file. <p><br /><p> > **Note**: `Configs.File` and `Configs.Runtime` are mutually exclusive type: "object" properties: Name: description: | Name represents the final filename in the filesystem. type: "string" UID: description: "UID represents the file UID." type: "string" GID: description: "GID represents the file GID." type: "string" Mode: description: "Mode represents the FileMode of the file." type: "integer" format: "uint32" Runtime: description: | Runtime represents a target that is not mounted into the container but is used by the task <p><br /><p> > **Note**: `Configs.File` and `Configs.Runtime` are mutually > exclusive type: "object" ConfigID: description: | ConfigID represents the ID of the specific config that we're referencing. type: "string" ConfigName: description: | ConfigName is the name of the config that this references, but this is just provided for lookup/display purposes. The config in the reference will be identified by its ID. 
type: "string" Isolation: type: "string" description: | Isolation technology of the containers running the service. (Windows only) enum: - "default" - "process" - "hyperv" Init: description: | Run an init inside the container that forwards signals and reaps processes. This field is omitted if empty, and the default (as configured on the daemon) is used. type: "boolean" x-nullable: true Sysctls: description: | Set kernel namedspaced parameters (sysctls) in the container. The Sysctls option on services accepts the same sysctls as the are supported on containers. Note that while the same sysctls are supported, no guarantees or checks are made about their suitability for a clustered environment, and it's up to the user to determine whether a given sysctl will work properly in a Service. type: "object" additionalProperties: type: "string" # This option is not used by Windows containers CapabilityAdd: type: "array" description: | A list of kernel capabilities to add to the default set for the container. items: type: "string" example: - "CAP_NET_RAW" - "CAP_SYS_ADMIN" - "CAP_SYS_CHROOT" - "CAP_SYSLOG" CapabilityDrop: type: "array" description: | A list of kernel capabilities to drop from the default set for the container. items: type: "string" example: - "CAP_NET_RAW" Ulimits: description: | A list of resource limits to set in the container. For example: `{"Name": "nofile", "Soft": 1024, "Hard": 2048}`" type: "array" items: type: "object" properties: Name: description: "Name of ulimit" type: "string" Soft: description: "Soft limit" type: "integer" Hard: description: "Hard limit" type: "integer" NetworkAttachmentSpec: description: | Read-only spec type for non-swarm containers attached to swarm overlay networks. <p><br /></p> > **Note**: ContainerSpec, NetworkAttachmentSpec, and PluginSpec are > mutually exclusive. PluginSpec is only used when the Runtime field > is set to `plugin`. NetworkAttachmentSpec is used when the Runtime > field is set to `attachment`. type: "object" properties: ContainerID: description: "ID of the container represented by this task" type: "string" Resources: description: | Resource requirements which apply to each individual container created as part of the service. type: "object" properties: Limits: description: "Define resources limits." $ref: "#/definitions/Limit" Reservation: description: "Define resources reservation." $ref: "#/definitions/ResourceObject" RestartPolicy: description: | Specification for the restart policy which applies to containers created as part of this service. type: "object" properties: Condition: description: "Condition for restart." type: "string" enum: - "none" - "on-failure" - "any" Delay: description: "Delay between restart attempts." type: "integer" format: "int64" MaxAttempts: description: | Maximum attempts to restart a given container before giving up (default value is 0, which is ignored). type: "integer" format: "int64" default: 0 Window: description: | Windows is the time window used to evaluate the restart policy (default value is 0, which is unbounded). type: "integer" format: "int64" default: 0 Placement: type: "object" properties: Constraints: description: | An array of constraint expressions to limit the set of nodes where a task can be scheduled. Constraint expressions can either use a _match_ (`==`) or _exclude_ (`!=`) rule. Multiple constraints find nodes that satisfy every expression (AND match). 
Constraints can match node or Docker Engine labels as follows: node attribute | matches | example ---------------------|--------------------------------|----------------------------------------------- `node.id` | Node ID | `node.id==2ivku8v2gvtg4` `node.hostname` | Node hostname | `node.hostname!=node-2` `node.role` | Node role (`manager`/`worker`) | `node.role==manager` `node.platform.os` | Node operating system | `node.platform.os==windows` `node.platform.arch` | Node architecture | `node.platform.arch==x86_64` `node.labels` | User-defined node labels | `node.labels.security==high` `engine.labels` | Docker Engine's labels | `engine.labels.operatingsystem==ubuntu-14.04` `engine.labels` apply to Docker Engine labels like operating system, drivers, etc. Swarm administrators add `node.labels` for operational purposes by using the [`node update endpoint`](#operation/NodeUpdate). type: "array" items: type: "string" example: - "node.hostname!=node3.corp.example.com" - "node.role!=manager" - "node.labels.type==production" - "node.platform.os==linux" - "node.platform.arch==x86_64" Preferences: description: | Preferences provide a way to make the scheduler aware of factors such as topology. They are provided in order from highest to lowest precedence. type: "array" items: type: "object" properties: Spread: type: "object" properties: SpreadDescriptor: description: | label descriptor, such as `engine.labels.az`. type: "string" example: - Spread: SpreadDescriptor: "node.labels.datacenter" - Spread: SpreadDescriptor: "node.labels.rack" MaxReplicas: description: | Maximum number of replicas for per node (default value is 0, which is unlimited) type: "integer" format: "int64" default: 0 Platforms: description: | Platforms stores all the platforms that the service's image can run on. This field is used in the platform filter for scheduling. If empty, then the platform filter is off, meaning there are no scheduling restrictions. type: "array" items: $ref: "#/definitions/Platform" ForceUpdate: description: | A counter that triggers an update even if no relevant parameters have been changed. type: "integer" Runtime: description: | Runtime is the type of runtime specified for the task executor. type: "string" Networks: description: "Specifies which networks the service should attach to." type: "array" items: $ref: "#/definitions/NetworkAttachmentConfig" LogDriver: description: | Specifies the log driver to use for tasks created from this spec. If not present, the default one for the swarm will be used, finally falling back to the engine default if not specified. type: "object" properties: Name: type: "string" Options: type: "object" additionalProperties: type: "string" TaskState: type: "string" enum: - "new" - "allocated" - "pending" - "assigned" - "accepted" - "preparing" - "ready" - "starting" - "running" - "complete" - "shutdown" - "failed" - "rejected" - "remove" - "orphaned" Task: type: "object" properties: ID: description: "The ID of the task." type: "string" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" UpdatedAt: type: "string" format: "dateTime" Name: description: "Name of the task." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" Spec: $ref: "#/definitions/TaskSpec" ServiceID: description: "The ID of the service this task is part of." type: "string" Slot: type: "integer" NodeID: description: "The ID of the node that this task is on." 
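  # Illustrative example (comment only, not part of the schema): a Placement
  # section combining constraints, a spread preference, and a per-node replica
  # limit, as it might appear in a TaskSpec. Values are hypothetical.
  #
  #   "Placement": {
  #     "Constraints": ["node.role==worker", "node.labels.region==us-east"],
  #     "Preferences": [
  #       { "Spread": { "SpreadDescriptor": "node.labels.datacenter" } }
  #     ],
  #     "MaxReplicas": 2
  #   }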
type: "string" AssignedGenericResources: $ref: "#/definitions/GenericResources" Status: type: "object" properties: Timestamp: type: "string" format: "dateTime" State: $ref: "#/definitions/TaskState" Message: type: "string" Err: type: "string" ContainerStatus: type: "object" properties: ContainerID: type: "string" PID: type: "integer" ExitCode: type: "integer" DesiredState: $ref: "#/definitions/TaskState" JobIteration: description: | If the Service this Task belongs to is a job-mode service, contains the JobIteration of the Service this Task was created for. Absent if the Task was created for a Replicated or Global Service. $ref: "#/definitions/ObjectVersion" example: ID: "0kzzo1i0y4jz6027t0k7aezc7" Version: Index: 71 CreatedAt: "2016-06-07T21:07:31.171892745Z" UpdatedAt: "2016-06-07T21:07:31.376370513Z" Spec: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ServiceID: "9mnpnzenvg8p8tdbtq4wvbkcz" Slot: 1 NodeID: "60gvrl6tm78dmak4yl7srz94v" Status: Timestamp: "2016-06-07T21:07:31.290032978Z" State: "running" Message: "started" ContainerStatus: ContainerID: "e5d62702a1b48d01c3e02ca1e0212a250801fa8d67caca0b6f35919ebc12f035" PID: 677 DesiredState: "running" NetworksAttachments: - Network: ID: "4qvuz4ko70xaltuqbt8956gd1" Version: Index: 18 CreatedAt: "2016-06-07T20:31:11.912919752Z" UpdatedAt: "2016-06-07T21:07:29.955277358Z" Spec: Name: "ingress" Labels: com.docker.swarm.internal: "true" DriverConfiguration: {} IPAMOptions: Driver: {} Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" DriverState: Name: "overlay" Options: com.docker.network.driver.overlay.vxlanid_list: "256" IPAMOptions: Driver: Name: "default" Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" Addresses: - "10.255.0.10/16" AssignedGenericResources: - DiscreteResourceSpec: Kind: "SSD" Value: 3 - NamedResourceSpec: Kind: "GPU" Value: "UUID1" - NamedResourceSpec: Kind: "GPU" Value: "UUID2" ServiceSpec: description: "User modifiable configuration for a service." properties: Name: description: "Name of the service." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" TaskTemplate: $ref: "#/definitions/TaskSpec" Mode: description: "Scheduling mode for the service." type: "object" properties: Replicated: type: "object" properties: Replicas: type: "integer" format: "int64" Global: type: "object" ReplicatedJob: description: | The mode used for services with a finite number of tasks that run to a completed state. type: "object" properties: MaxConcurrent: description: | The maximum number of replicas to run simultaneously. type: "integer" format: "int64" default: 1 TotalCompletions: description: | The total number of replicas desired to reach the Completed state. If unset, will default to the value of `MaxConcurrent` type: "integer" format: "int64" GlobalJob: description: | The mode used for services which run a task to the completed state on each valid node. type: "object" UpdateConfig: description: "Specification for the update strategy of the service." type: "object" properties: Parallelism: description: | Maximum number of tasks to be updated in one iteration (0 means unlimited parallelism). type: "integer" format: "int64" Delay: description: "Amount of time between updates, in nanoseconds." type: "integer" format: "int64" FailureAction: description: | Action to take if an updated task fails to run, or stops running during the update. 
type: "string" enum: - "continue" - "pause" - "rollback" Monitor: description: | Amount of time to monitor each updated task for failures, in nanoseconds. type: "integer" format: "int64" MaxFailureRatio: description: | The fraction of tasks that may fail during an update before the failure action is invoked, specified as a floating point number between 0 and 1. type: "number" default: 0 Order: description: | The order of operations when rolling out an updated task. Either the old task is shut down before the new task is started, or the new task is started before the old task is shut down. type: "string" enum: - "stop-first" - "start-first" RollbackConfig: description: "Specification for the rollback strategy of the service." type: "object" properties: Parallelism: description: | Maximum number of tasks to be rolled back in one iteration (0 means unlimited parallelism). type: "integer" format: "int64" Delay: description: | Amount of time between rollback iterations, in nanoseconds. type: "integer" format: "int64" FailureAction: description: | Action to take if an rolled back task fails to run, or stops running during the rollback. type: "string" enum: - "continue" - "pause" Monitor: description: | Amount of time to monitor each rolled back task for failures, in nanoseconds. type: "integer" format: "int64" MaxFailureRatio: description: | The fraction of tasks that may fail during a rollback before the failure action is invoked, specified as a floating point number between 0 and 1. type: "number" default: 0 Order: description: | The order of operations when rolling back a task. Either the old task is shut down before the new task is started, or the new task is started before the old task is shut down. type: "string" enum: - "stop-first" - "start-first" Networks: description: "Specifies which networks the service should attach to." type: "array" items: $ref: "#/definitions/NetworkAttachmentConfig" EndpointSpec: $ref: "#/definitions/EndpointSpec" EndpointPortConfig: type: "object" properties: Name: type: "string" Protocol: type: "string" enum: - "tcp" - "udp" - "sctp" TargetPort: description: "The port inside the container." type: "integer" PublishedPort: description: "The port on the swarm hosts." type: "integer" PublishMode: description: | The mode in which port is published. <p><br /></p> - "ingress" makes the target port accessible on every node, regardless of whether there is a task for the service running on that node or not. - "host" bypasses the routing mesh and publish the port directly on the swarm node where that service is running. type: "string" enum: - "ingress" - "host" default: "ingress" example: "ingress" EndpointSpec: description: "Properties that can be configured to access and load balance a service." type: "object" properties: Mode: description: | The mode of resolution to use for internal load balancing between tasks. type: "string" enum: - "vip" - "dnsrr" default: "vip" Ports: description: | List of exposed ports that this service is accessible on from the outside. Ports can only be provided if `vip` resolution mode is used. 
type: "array" items: $ref: "#/definitions/EndpointPortConfig" Service: type: "object" properties: ID: type: "string" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" UpdatedAt: type: "string" format: "dateTime" Spec: $ref: "#/definitions/ServiceSpec" Endpoint: type: "object" properties: Spec: $ref: "#/definitions/EndpointSpec" Ports: type: "array" items: $ref: "#/definitions/EndpointPortConfig" VirtualIPs: type: "array" items: type: "object" properties: NetworkID: type: "string" Addr: type: "string" UpdateStatus: description: "The status of a service update." type: "object" properties: State: type: "string" enum: - "updating" - "paused" - "completed" StartedAt: type: "string" format: "dateTime" CompletedAt: type: "string" format: "dateTime" Message: type: "string" ServiceStatus: description: | The status of the service's tasks. Provided only when requested as part of a ServiceList operation. type: "object" properties: RunningTasks: description: | The number of tasks for the service currently in the Running state. type: "integer" format: "uint64" example: 7 DesiredTasks: description: | The number of tasks for the service desired to be running. For replicated services, this is the replica count from the service spec. For global services, this is computed by taking count of all tasks for the service with a Desired State other than Shutdown. type: "integer" format: "uint64" example: 10 CompletedTasks: description: | The number of tasks for a job that are in the Completed state. This field must be cross-referenced with the service type, as the value of 0 may mean the service is not in a job mode, or it may mean the job-mode service has no tasks yet Completed. type: "integer" format: "uint64" JobStatus: description: | The status of the service when it is in one of ReplicatedJob or GlobalJob modes. Absent on Replicated and Global mode services. The JobIteration is an ObjectVersion, but unlike the Service's version, does not need to be sent with an update request. type: "object" properties: JobIteration: description: | JobIteration is a value increased each time a Job is executed, successfully or otherwise. "Executed", in this case, means the job as a whole has been started, not that an individual Task has been launched. A job is "Executed" when its ServiceSpec is updated. JobIteration can be used to disambiguate Tasks belonging to different executions of a job. Though JobIteration will increase with each subsequent execution, it may not necessarily increase by 1, and so JobIteration should not be used to $ref: "#/definitions/ObjectVersion" LastExecution: description: | The last time, as observed by the server, that this job was started. 
type: "string" format: "dateTime" example: ID: "9mnpnzenvg8p8tdbtq4wvbkcz" Version: Index: 19 CreatedAt: "2016-06-07T21:05:51.880065305Z" UpdatedAt: "2016-06-07T21:07:29.962229872Z" Spec: Name: "hopeful_cori" TaskTemplate: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ForceUpdate: 0 Mode: Replicated: Replicas: 1 UpdateConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 RollbackConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 EndpointSpec: Mode: "vip" Ports: - Protocol: "tcp" TargetPort: 6379 PublishedPort: 30001 Endpoint: Spec: Mode: "vip" Ports: - Protocol: "tcp" TargetPort: 6379 PublishedPort: 30001 Ports: - Protocol: "tcp" TargetPort: 6379 PublishedPort: 30001 VirtualIPs: - NetworkID: "4qvuz4ko70xaltuqbt8956gd1" Addr: "10.255.0.2/16" - NetworkID: "4qvuz4ko70xaltuqbt8956gd1" Addr: "10.255.0.3/16" ImageDeleteResponseItem: type: "object" properties: Untagged: description: "The image ID of an image that was untagged" type: "string" Deleted: description: "The image ID of an image that was deleted" type: "string" ServiceUpdateResponse: type: "object" properties: Warnings: description: "Optional warning messages" type: "array" items: type: "string" example: Warning: "unable to pin image doesnotexist:latest to digest: image library/doesnotexist:latest not found" ContainerSummary: type: "object" properties: Id: description: "The ID of this container" type: "string" x-go-name: "ID" Names: description: "The names that this container has been given" type: "array" items: type: "string" Image: description: "The name of the image used when creating this container" type: "string" ImageID: description: "The ID of the image that this container was created from" type: "string" Command: description: "Command to run when starting the container" type: "string" Created: description: "When the container was created" type: "integer" format: "int64" Ports: description: "The ports exposed by this container" type: "array" items: $ref: "#/definitions/Port" SizeRw: description: "The size of files that have been created or changed by this container" type: "integer" format: "int64" SizeRootFs: description: "The total size of all the files in this container" type: "integer" format: "int64" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" State: description: "The state of this container (e.g. `Exited`)" type: "string" Status: description: "Additional human-readable status of this container (e.g. `Exit 0`)" type: "string" HostConfig: type: "object" properties: NetworkMode: type: "string" NetworkSettings: description: "A summary of the container's network settings" type: "object" properties: Networks: type: "object" additionalProperties: $ref: "#/definitions/EndpointSettings" Mounts: type: "array" items: $ref: "#/definitions/Mount" Driver: description: "Driver represents a driver (network, logging, secrets)." type: "object" required: [Name] properties: Name: description: "Name of the driver." type: "string" x-nullable: false example: "some-driver" Options: description: "Key/value map of driver-specific options." type: "object" x-nullable: false additionalProperties: type: "string" example: OptionA: "value for driver-specific option A" OptionB: "value for driver-specific option B" SecretSpec: type: "object" properties: Name: description: "User-defined name of the secret." 
type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" example: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Data: description: | Base64-url-safe-encoded ([RFC 4648](https://tools.ietf.org/html/rfc4648#section-5)) data to store as secret. This field is only used to _create_ a secret, and is not returned by other endpoints. type: "string" example: "" Driver: description: | Name of the secrets driver used to fetch the secret's value from an external secret store. $ref: "#/definitions/Driver" Templating: description: | Templating driver, if applicable Templating controls whether and how to evaluate the config payload as a template. If no driver is set, no templating is used. $ref: "#/definitions/Driver" Secret: type: "object" properties: ID: type: "string" example: "blt1owaxmitz71s9v5zh81zun" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" example: "2017-07-20T13:55:28.678958722Z" UpdatedAt: type: "string" format: "dateTime" example: "2017-07-20T13:55:28.678958722Z" Spec: $ref: "#/definitions/SecretSpec" ConfigSpec: type: "object" properties: Name: description: "User-defined name of the config." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" Data: description: | Base64-url-safe-encoded ([RFC 4648](https://tools.ietf.org/html/rfc4648#section-5)) config data. type: "string" Templating: description: | Templating driver, if applicable Templating controls whether and how to evaluate the config payload as a template. If no driver is set, no templating is used. $ref: "#/definitions/Driver" Config: type: "object" properties: ID: type: "string" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" UpdatedAt: type: "string" format: "dateTime" Spec: $ref: "#/definitions/ConfigSpec" ContainerState: description: | ContainerState stores container's running state. It's part of ContainerJSONBase and will be returned by the "inspect" command. type: "object" properties: Status: description: | String representation of the container state. Can be one of "created", "running", "paused", "restarting", "removing", "exited", or "dead". type: "string" enum: ["created", "running", "paused", "restarting", "removing", "exited", "dead"] example: "running" Running: description: | Whether this container is running. Note that a running container can be _paused_. The `Running` and `Paused` booleans are not mutually exclusive: When pausing a container (on Linux), the freezer cgroup is used to suspend all processes in the container. Freezing the process requires the process to be running. As a result, paused containers are both `Running` _and_ `Paused`. Use the `Status` field instead to determine if a container's state is "running". type: "boolean" example: true Paused: description: "Whether this container is paused." type: "boolean" example: false Restarting: description: "Whether this container is restarting." type: "boolean" example: false OOMKilled: description: | Whether this container has been killed because it ran out of memory. type: "boolean" example: false Dead: type: "boolean" example: false Pid: description: "The process ID of this container" type: "integer" example: 1234 ExitCode: description: "The last exit code of this container" type: "integer" example: 0 Error: type: "string" StartedAt: description: "The time when this container was last started." 
type: "string" example: "2020-01-06T09:06:59.461876391Z" FinishedAt: description: "The time when this container last exited." type: "string" example: "2020-01-06T09:07:59.461876391Z" Health: x-nullable: true $ref: "#/definitions/Health" SystemVersion: type: "object" description: | Response of Engine API: GET "/version" properties: Platform: type: "object" required: [Name] properties: Name: type: "string" Components: type: "array" description: | Information about system components items: type: "object" x-go-name: ComponentVersion required: [Name, Version] properties: Name: description: | Name of the component type: "string" example: "Engine" Version: description: | Version of the component type: "string" x-nullable: false example: "19.03.12" Details: description: | Key/value pairs of strings with additional information about the component. These values are intended for informational purposes only, and their content is not defined, and not part of the API specification. These messages can be printed by the client as information to the user. type: "object" x-nullable: true Version: description: "The version of the daemon" type: "string" example: "19.03.12" ApiVersion: description: | The default (and highest) API version that is supported by the daemon type: "string" example: "1.40" MinAPIVersion: description: | The minimum API version that is supported by the daemon type: "string" example: "1.12" GitCommit: description: | The Git commit of the source code that was used to build the daemon type: "string" example: "48a66213fe" GoVersion: description: | The version Go used to compile the daemon, and the version of the Go runtime in use. type: "string" example: "go1.13.14" Os: description: | The operating system that the daemon is running on ("linux" or "windows") type: "string" example: "linux" Arch: description: | The architecture that the daemon is running on type: "string" example: "amd64" KernelVersion: description: | The kernel version (`uname -r`) that the daemon is running on. This field is omitted when empty. type: "string" example: "4.19.76-linuxkit" Experimental: description: | Indicates if the daemon is started with experimental features enabled. This field is omitted when empty / false. type: "boolean" example: true BuildTime: description: | The date and time that the daemon was compiled. type: "string" example: "2020-06-22T15:49:27.000000000+00:00" SystemInfo: type: "object" properties: ID: description: | Unique identifier of the daemon. <p><br /></p> > **Note**: The format of the ID itself is not part of the API, and > should not be considered stable. type: "string" example: "7TRN:IPZB:QYBB:VPBQ:UMPP:KARE:6ZNR:XE6T:7EWV:PKF4:ZOJD:TPYS" Containers: description: "Total number of containers on the host." type: "integer" example: 14 ContainersRunning: description: | Number of containers with status `"running"`. type: "integer" example: 3 ContainersPaused: description: | Number of containers with status `"paused"`. type: "integer" example: 1 ContainersStopped: description: | Number of containers with status `"stopped"`. type: "integer" example: 10 Images: description: | Total number of images on the host. Both _tagged_ and _untagged_ (dangling) images are counted. type: "integer" example: 508 Driver: description: "Name of the storage driver in use." type: "string" example: "overlay2" DriverStatus: description: | Information specific to the storage driver, provided as "label" / "value" pairs. 
This information is provided by the storage driver, and formatted in a way consistent with the output of `docker info` on the command line. <p><br /></p> > **Note**: The information returned in this field, including the > formatting of values and labels, should not be considered stable, > and may change without notice. type: "array" items: type: "array" items: type: "string" example: - ["Backing Filesystem", "extfs"] - ["Supports d_type", "true"] - ["Native Overlay Diff", "true"] DockerRootDir: description: | Root directory of persistent Docker state. Defaults to `/var/lib/docker` on Linux, and `C:\ProgramData\docker` on Windows. type: "string" example: "/var/lib/docker" Plugins: $ref: "#/definitions/PluginsInfo" MemoryLimit: description: "Indicates if the host has memory limit support enabled." type: "boolean" example: true SwapLimit: description: "Indicates if the host has memory swap limit support enabled." type: "boolean" example: true KernelMemory: description: | Indicates if the host has kernel memory limit support enabled. <p><br /></p> > **Deprecated**: This field is deprecated as the kernel 5.4 deprecated > `kmem.limit_in_bytes`. type: "boolean" example: true CpuCfsPeriod: description: | Indicates if CPU CFS(Completely Fair Scheduler) period is supported by the host. type: "boolean" example: true CpuCfsQuota: description: | Indicates if CPU CFS(Completely Fair Scheduler) quota is supported by the host. type: "boolean" example: true CPUShares: description: | Indicates if CPU Shares limiting is supported by the host. type: "boolean" example: true CPUSet: description: | Indicates if CPUsets (cpuset.cpus, cpuset.mems) are supported by the host. See [cpuset(7)](https://www.kernel.org/doc/Documentation/cgroup-v1/cpusets.txt) type: "boolean" example: true PidsLimit: description: "Indicates if the host kernel has PID limit support enabled." type: "boolean" example: true OomKillDisable: description: "Indicates if OOM killer disable is supported on the host." type: "boolean" IPv4Forwarding: description: "Indicates IPv4 forwarding is enabled." type: "boolean" example: true BridgeNfIptables: description: "Indicates if `bridge-nf-call-iptables` is available on the host." type: "boolean" example: true BridgeNfIp6tables: description: "Indicates if `bridge-nf-call-ip6tables` is available on the host." type: "boolean" example: true Debug: description: | Indicates if the daemon is running in debug-mode / with debug-level logging enabled. type: "boolean" example: true NFd: description: | The total number of file Descriptors in use by the daemon process. This information is only returned if debug-mode is enabled. type: "integer" example: 64 NGoroutines: description: | The number of goroutines that currently exist. This information is only returned if debug-mode is enabled. type: "integer" example: 174 SystemTime: description: | Current system-time in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" example: "2017-08-08T20:28:29.06202363Z" LoggingDriver: description: | The logging driver to use as a default for new containers. type: "string" CgroupDriver: description: | The driver to use for managing cgroups. type: "string" enum: ["cgroupfs", "systemd", "none"] default: "cgroupfs" example: "cgroupfs" CgroupVersion: description: | The version of the cgroup. type: "string" enum: ["1", "2"] default: "1" example: "1" NEventsListener: description: "Number of event listeners subscribed." 
type: "integer" example: 30 KernelVersion: description: | Kernel version of the host. On Linux, this information obtained from `uname`. On Windows this information is queried from the <kbd>HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\</kbd> registry value, for example _"10.0 14393 (14393.1198.amd64fre.rs1_release_sec.170427-1353)"_. type: "string" example: "4.9.38-moby" OperatingSystem: description: | Name of the host's operating system, for example: "Ubuntu 16.04.2 LTS" or "Windows Server 2016 Datacenter" type: "string" example: "Alpine Linux v3.5" OSVersion: description: | Version of the host's operating system <p><br /></p> > **Note**: The information returned in this field, including its > very existence, and the formatting of values, should not be considered > stable, and may change without notice. type: "string" example: "16.04" OSType: description: | Generic type of the operating system of the host, as returned by the Go runtime (`GOOS`). Currently returned values are "linux" and "windows". A full list of possible values can be found in the [Go documentation](https://golang.org/doc/install/source#environment). type: "string" example: "linux" Architecture: description: | Hardware architecture of the host, as returned by the Go runtime (`GOARCH`). A full list of possible values can be found in the [Go documentation](https://golang.org/doc/install/source#environment). type: "string" example: "x86_64" NCPU: description: | The number of logical CPUs usable by the daemon. The number of available CPUs is checked by querying the operating system when the daemon starts. Changes to operating system CPU allocation after the daemon is started are not reflected. type: "integer" example: 4 MemTotal: description: | Total amount of physical memory available on the host, in bytes. type: "integer" format: "int64" example: 2095882240 IndexServerAddress: description: | Address / URL of the index server that is used for image search, and as a default for user authentication for Docker Hub and Docker Cloud. default: "https://index.docker.io/v1/" type: "string" example: "https://index.docker.io/v1/" RegistryConfig: $ref: "#/definitions/RegistryServiceConfig" GenericResources: $ref: "#/definitions/GenericResources" HttpProxy: description: | HTTP-proxy configured for the daemon. This value is obtained from the [`HTTP_PROXY`](https://www.gnu.org/software/wget/manual/html_node/Proxies.html) environment variable. Credentials ([user info component](https://tools.ietf.org/html/rfc3986#section-3.2.1)) in the proxy URL are masked in the API response. Containers do not automatically inherit this configuration. type: "string" example: "http://xxxxx:[email protected]:8080" HttpsProxy: description: | HTTPS-proxy configured for the daemon. This value is obtained from the [`HTTPS_PROXY`](https://www.gnu.org/software/wget/manual/html_node/Proxies.html) environment variable. Credentials ([user info component](https://tools.ietf.org/html/rfc3986#section-3.2.1)) in the proxy URL are masked in the API response. Containers do not automatically inherit this configuration. type: "string" example: "https://xxxxx:[email protected]:4443" NoProxy: description: | Comma-separated list of domain extensions for which no proxy should be used. This value is obtained from the [`NO_PROXY`](https://www.gnu.org/software/wget/manual/html_node/Proxies.html) environment variable. Containers do not automatically inherit this configuration. 
type: "string" example: "*.local, 169.254/16" Name: description: "Hostname of the host." type: "string" example: "node5.corp.example.com" Labels: description: | User-defined labels (key/value metadata) as set on the daemon. <p><br /></p> > **Note**: When part of a Swarm, nodes can both have _daemon_ labels, > set through the daemon configuration, and _node_ labels, set from a > manager node in the Swarm. Node labels are not included in this > field. Node labels can be retrieved using the `/nodes/(id)` endpoint > on a manager node in the Swarm. type: "array" items: type: "string" example: ["storage=ssd", "production"] ExperimentalBuild: description: | Indicates if experimental features are enabled on the daemon. type: "boolean" example: true ServerVersion: description: | Version string of the daemon. > **Note**: the [standalone Swarm API](https://docs.docker.com/swarm/swarm-api/) > returns the Swarm version instead of the daemon version, for example > `swarm/1.2.8`. type: "string" example: "17.06.0-ce" ClusterStore: description: | URL of the distributed storage backend. The storage backend is used for multihost networking (to store network and endpoint information) and by the node discovery mechanism. <p><br /></p> > **Deprecated**: This field is only propagated when using standalone Swarm > mode, and overlay networking using an external k/v store. Overlay > networks with Swarm mode enabled use the built-in raft store, and > this field will be empty. type: "string" example: "consul://consul.corp.example.com:8600/some/path" ClusterAdvertise: description: | The network endpoint that the Engine advertises for the purpose of node discovery. ClusterAdvertise is a `host:port` combination on which the daemon is reachable by other hosts. <p><br /></p> > **Deprecated**: This field is only propagated when using standalone Swarm > mode, and overlay networking using an external k/v store. Overlay > networks with Swarm mode enabled use the built-in raft store, and > this field will be empty. type: "string" example: "node5.corp.example.com:8000" Runtimes: description: | List of [OCI compliant](https://github.com/opencontainers/runtime-spec) runtimes configured on the daemon. Keys hold the "name" used to reference the runtime. The Docker daemon relies on an OCI compliant runtime (invoked via the `containerd` daemon) as its interface to the Linux kernel namespaces, cgroups, and SELinux. The default runtime is `runc`, and automatically configured. Additional runtimes can be configured by the user and will be listed here. type: "object" additionalProperties: $ref: "#/definitions/Runtime" default: runc: path: "runc" example: runc: path: "runc" runc-master: path: "/go/bin/runc" custom: path: "/usr/local/bin/my-oci-runtime" runtimeArgs: ["--debug", "--systemd-cgroup=false"] DefaultRuntime: description: | Name of the default OCI runtime that is used when starting containers. The default can be overridden per-container at create time. type: "string" default: "runc" example: "runc" Swarm: $ref: "#/definitions/SwarmInfo" LiveRestoreEnabled: description: | Indicates if live restore is enabled. If enabled, containers are kept running when the daemon is shutdown or upon daemon start if running containers are detected. type: "boolean" default: false example: false Isolation: description: | Represents the isolation technology to use as a default for containers. The supported values are platform-specific. 
If no isolation value is specified on daemon start, on Windows client, the default is `hyperv`, and on Windows server, the default is `process`. This option is currently not used on other platforms. default: "default" type: "string" enum: - "default" - "hyperv" - "process" InitBinary: description: | Name and, optional, path of the `docker-init` binary. If the path is omitted, the daemon searches the host's `$PATH` for the binary and uses the first result. type: "string" example: "docker-init" ContainerdCommit: $ref: "#/definitions/Commit" RuncCommit: $ref: "#/definitions/Commit" InitCommit: $ref: "#/definitions/Commit" SecurityOptions: description: | List of security features that are enabled on the daemon, such as apparmor, seccomp, SELinux, user-namespaces (userns), and rootless. Additional configuration options for each security feature may be present, and are included as a comma-separated list of key/value pairs. type: "array" items: type: "string" example: - "name=apparmor" - "name=seccomp,profile=default" - "name=selinux" - "name=userns" - "name=rootless" ProductLicense: description: | Reports a summary of the product license on the daemon. If a commercial license has been applied to the daemon, information such as number of nodes, and expiration are included. type: "string" example: "Community Engine" DefaultAddressPools: description: | List of custom default address pools for local networks, which can be specified in the daemon.json file or dockerd option. Example: a Base "10.10.0.0/16" with Size 24 will define the set of 256 10.10.[0-255].0/24 address pools. type: "array" items: type: "object" properties: Base: description: "The network address in CIDR format" type: "string" example: "10.10.0.0/16" Size: description: "The network pool size" type: "integer" example: "24" Warnings: description: | List of warnings / informational messages about missing features, or issues related to the daemon configuration. These messages can be printed by the client as information to the user. type: "array" items: type: "string" example: - "WARNING: No memory limit support" - "WARNING: bridge-nf-call-iptables is disabled" - "WARNING: bridge-nf-call-ip6tables is disabled" # PluginsInfo is a temp struct holding Plugins name # registered with docker daemon. It is used by Info struct PluginsInfo: description: | Available plugins per type. <p><br /></p> > **Note**: Only unmanaged (V1) plugins are included in this list. > V1 plugins are "lazily" loaded, and are not returned in this list > if there is no resource using the plugin. type: "object" properties: Volume: description: "Names of available volume-drivers, and network-driver plugins." type: "array" items: type: "string" example: ["local"] Network: description: "Names of available network-drivers, and network-driver plugins." type: "array" items: type: "string" example: ["bridge", "host", "ipvlan", "macvlan", "null", "overlay"] Authorization: description: "Names of available authorization plugins." type: "array" items: type: "string" example: ["img-authz-plugin", "hbm"] Log: description: "Names of available logging-drivers, and logging-driver plugins." type: "array" items: type: "string" example: ["awslogs", "fluentd", "gcplogs", "gelf", "journald", "json-file", "logentries", "splunk", "syslog"] RegistryServiceConfig: description: | RegistryServiceConfig stores daemon registry services configuration. 
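  # Illustrative example (comment only, not part of the schema): a daemon.json
  # fragment that would surface in the `Runtimes` and `DefaultAddressPools`
  # fields described earlier in `SystemInfo`. Paths and pool values are shown
  # for illustration only.
  #
  #   {
  #     "runtimes": {
  #       "custom": { "path": "/usr/local/bin/my-oci-runtime", "runtimeArgs": ["--debug"] }
  #     },
  #     "default-address-pools": [
  #       { "base": "10.10.0.0/16", "size": 24 }
  #     ]
  #   }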
type: "object" x-nullable: true properties: AllowNondistributableArtifactsCIDRs: description: | List of IP ranges to which nondistributable artifacts can be pushed, using the CIDR syntax [RFC 4632](https://tools.ietf.org/html/4632). Some images (for example, Windows base images) contain artifacts whose distribution is restricted by license. When these images are pushed to a registry, restricted artifacts are not included. This configuration override this behavior, and enables the daemon to push nondistributable artifacts to all registries whose resolved IP address is within the subnet described by the CIDR syntax. This option is useful when pushing images containing nondistributable artifacts to a registry on an air-gapped network so hosts on that network can pull the images without connecting to another server. > **Warning**: Nondistributable artifacts typically have restrictions > on how and where they can be distributed and shared. Only use this > feature to push artifacts to private registries and ensure that you > are in compliance with any terms that cover redistributing > nondistributable artifacts. type: "array" items: type: "string" example: ["::1/128", "127.0.0.0/8"] AllowNondistributableArtifactsHostnames: description: | List of registry hostnames to which nondistributable artifacts can be pushed, using the format `<hostname>[:<port>]` or `<IP address>[:<port>]`. Some images (for example, Windows base images) contain artifacts whose distribution is restricted by license. When these images are pushed to a registry, restricted artifacts are not included. This configuration override this behavior for the specified registries. This option is useful when pushing images containing nondistributable artifacts to a registry on an air-gapped network so hosts on that network can pull the images without connecting to another server. > **Warning**: Nondistributable artifacts typically have restrictions > on how and where they can be distributed and shared. Only use this > feature to push artifacts to private registries and ensure that you > are in compliance with any terms that cover redistributing > nondistributable artifacts. type: "array" items: type: "string" example: ["registry.internal.corp.example.com:3000", "[2001:db8:a0b:12f0::1]:443"] InsecureRegistryCIDRs: description: | List of IP ranges of insecure registries, using the CIDR syntax ([RFC 4632](https://tools.ietf.org/html/4632)). Insecure registries accept un-encrypted (HTTP) and/or untrusted (HTTPS with certificates from unknown CAs) communication. By default, local registries (`127.0.0.0/8`) are configured as insecure. All other registries are secure. Communicating with an insecure registry is not possible if the daemon assumes that registry is secure. This configuration override this behavior, insecure communication with registries whose resolved IP address is within the subnet described by the CIDR syntax. Registries can also be marked insecure by hostname. Those registries are listed under `IndexConfigs` and have their `Secure` field set to `false`. > **Warning**: Using this option can be useful when running a local > registry, but introduces security vulnerabilities. This option > should therefore ONLY be used for testing purposes. For increased > security, users should add their CA to their system's list of trusted > CAs instead of enabling this option. 
type: "array" items: type: "string" example: ["::1/128", "127.0.0.0/8"] IndexConfigs: type: "object" additionalProperties: $ref: "#/definitions/IndexInfo" example: "127.0.0.1:5000": "Name": "127.0.0.1:5000" "Mirrors": [] "Secure": false "Official": false "[2001:db8:a0b:12f0::1]:80": "Name": "[2001:db8:a0b:12f0::1]:80" "Mirrors": [] "Secure": false "Official": false "docker.io": Name: "docker.io" Mirrors: ["https://hub-mirror.corp.example.com:5000/"] Secure: true Official: true "registry.internal.corp.example.com:3000": Name: "registry.internal.corp.example.com:3000" Mirrors: [] Secure: false Official: false Mirrors: description: | List of registry URLs that act as a mirror for the official (`docker.io`) registry. type: "array" items: type: "string" example: - "https://hub-mirror.corp.example.com:5000/" - "https://[2001:db8:a0b:12f0::1]/" IndexInfo: description: IndexInfo contains information about a registry. type: "object" x-nullable: true properties: Name: description: | Name of the registry, such as "docker.io". type: "string" example: "docker.io" Mirrors: description: | List of mirrors, expressed as URIs. type: "array" items: type: "string" example: - "https://hub-mirror.corp.example.com:5000/" - "https://registry-2.docker.io/" - "https://registry-3.docker.io/" Secure: description: | Indicates if the registry is part of the list of insecure registries. If `false`, the registry is insecure. Insecure registries accept un-encrypted (HTTP) and/or untrusted (HTTPS with certificates from unknown CAs) communication. > **Warning**: Insecure registries can be useful when running a local > registry. However, because its use creates security vulnerabilities > it should ONLY be enabled for testing purposes. For increased > security, users should add their CA to their system's list of > trusted CAs instead of enabling this option. type: "boolean" example: true Official: description: | Indicates whether this is an official registry (i.e., Docker Hub / docker.io) type: "boolean" example: true Runtime: description: | Runtime describes an [OCI compliant](https://github.com/opencontainers/runtime-spec) runtime. The runtime is invoked by the daemon via the `containerd` daemon. OCI runtimes act as an interface to the Linux kernel namespaces, cgroups, and SELinux. type: "object" properties: path: description: | Name and, optional, path, of the OCI executable binary. If the path is omitted, the daemon searches the host's `$PATH` for the binary and uses the first result. type: "string" example: "/usr/local/bin/my-oci-runtime" runtimeArgs: description: | List of command-line arguments to pass to the runtime when invoked. type: "array" x-nullable: true items: type: "string" example: ["--debug", "--systemd-cgroup=false"] Commit: description: | Commit holds the Git-commit (SHA1) that a binary was built from, as reported in the version-string of external tools, such as `containerd`, or `runC`. type: "object" properties: ID: description: "Actual commit ID of external tool." type: "string" example: "cfb82a876ecc11b5ca0977d1733adbe58599088a" Expected: description: | Commit ID of external tool expected by dockerd as set at build time. type: "string" example: "2d41c047c83e09a6d61d464906feb2a2f3c52aa4" SwarmInfo: description: | Represents generic information about swarm. type: "object" properties: NodeID: description: "Unique identifier of for this node in the swarm." 
type: "string" default: "" example: "k67qz4598weg5unwwffg6z1m1" NodeAddr: description: | IP address at which this node can be reached by other nodes in the swarm. type: "string" default: "" example: "10.0.0.46" LocalNodeState: $ref: "#/definitions/LocalNodeState" ControlAvailable: type: "boolean" default: false example: true Error: type: "string" default: "" RemoteManagers: description: | List of ID's and addresses of other managers in the swarm. type: "array" default: null x-nullable: true items: $ref: "#/definitions/PeerNode" example: - NodeID: "71izy0goik036k48jg985xnds" Addr: "10.0.0.158:2377" - NodeID: "79y6h1o4gv8n120drcprv5nmc" Addr: "10.0.0.159:2377" - NodeID: "k67qz4598weg5unwwffg6z1m1" Addr: "10.0.0.46:2377" Nodes: description: "Total number of nodes in the swarm." type: "integer" x-nullable: true example: 4 Managers: description: "Total number of managers in the swarm." type: "integer" x-nullable: true example: 3 Cluster: $ref: "#/definitions/ClusterInfo" LocalNodeState: description: "Current local status of this node." type: "string" default: "" enum: - "" - "inactive" - "pending" - "active" - "error" - "locked" example: "active" PeerNode: description: "Represents a peer-node in the swarm" properties: NodeID: description: "Unique identifier of for this node in the swarm." type: "string" Addr: description: | IP address and ports at which this node can be reached. type: "string" NetworkAttachmentConfig: description: | Specifies how a service should be attached to a particular network. type: "object" properties: Target: description: | The target network for attachment. Must be a network name or ID. type: "string" Aliases: description: | Discoverable alternate names for the service on this network. type: "array" items: type: "string" DriverOpts: description: | Driver attachment options for the network target. type: "object" additionalProperties: type: "string" paths: /containers/json: get: summary: "List containers" description: | Returns a list of containers. For details on the format, see the [inspect endpoint](#operation/ContainerInspect). Note that it uses a different, smaller representation of a container than inspecting a single container. For example, the list of linked containers is not propagated . operationId: "ContainerList" produces: - "application/json" parameters: - name: "all" in: "query" description: | Return all containers. By default, only running containers are shown. type: "boolean" default: false - name: "limit" in: "query" description: | Return this number of most recently created containers, including non-running ones. type: "integer" - name: "size" in: "query" description: | Return the size of container as fields `SizeRw` and `SizeRootFs`. type: "boolean" default: false - name: "filters" in: "query" description: | Filters to process on the container list, encoded as JSON (a `map[string][]string`). For example, `{"status": ["paused"]}` will only return paused containers. 
Available filters: - `ancestor`=(`<image-name>[:<tag>]`, `<image id>`, or `<image@digest>`) - `before`=(`<container id>` or `<container name>`) - `expose`=(`<port>[/<proto>]`|`<startport-endport>/[<proto>]`) - `exited=<int>` containers with exit code of `<int>` - `health`=(`starting`|`healthy`|`unhealthy`|`none`) - `id=<ID>` a container's ID - `isolation=`(`default`|`process`|`hyperv`) (Windows daemon only) - `is-task=`(`true`|`false`) - `label=key` or `label="key=value"` of a container label - `name=<name>` a container's name - `network`=(`<network id>` or `<network name>`) - `publish`=(`<port>[/<proto>]`|`<startport-endport>/[<proto>]`) - `since`=(`<container id>` or `<container name>`) - `status=`(`created`|`restarting`|`running`|`removing`|`paused`|`exited`|`dead`) - `volume`=(`<volume name>` or `<mount point destination>`) type: "string" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/ContainerSummary" examples: application/json: - Id: "8dfafdbc3a40" Names: - "/boring_feynman" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 1" Created: 1367854155 State: "Exited" Status: "Exit 0" Ports: - PrivatePort: 2222 PublicPort: 3333 Type: "tcp" Labels: com.example.vendor: "Acme" com.example.license: "GPL" com.example.version: "1.0" SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "2cdc4edb1ded3631c81f57966563e5c8525b81121bb3706a9a9a3ae102711f3f" Gateway: "172.17.0.1" IPAddress: "172.17.0.2" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:02" Mounts: - Name: "fac362...80535" Source: "/data" Destination: "/data" Driver: "local" Mode: "ro,Z" RW: false Propagation: "" - Id: "9cd87474be90" Names: - "/coolName" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 222222" Created: 1367854155 State: "Exited" Status: "Exit 0" Ports: [] Labels: {} SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "88eaed7b37b38c2a3f0c4bc796494fdf51b270c2d22656412a2ca5d559a64d7a" Gateway: "172.17.0.1" IPAddress: "172.17.0.8" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:08" Mounts: [] - Id: "3176a2479c92" Names: - "/sleepy_dog" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 3333333333333333" Created: 1367854154 State: "Exited" Status: "Exit 0" Ports: [] Labels: {} SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "8b27c041c30326d59cd6e6f510d4f8d1d570a228466f956edf7815508f78e30d" Gateway: "172.17.0.1" IPAddress: "172.17.0.6" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:06" Mounts: [] - Id: "4cb07b47f9fb" Names: - "/running_cat" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 444444444444444444444444444444444" Created: 1367854152 State: "Exited" Status: "Exit 0" Ports: [] Labels: {} SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" 
NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "d91c7b2f0644403d7ef3095985ea0e2370325cd2332ff3a3225c4247328e66e9" Gateway: "172.17.0.1" IPAddress: "172.17.0.5" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:05" Mounts: [] 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Container"] /containers/create: post: summary: "Create a container" operationId: "ContainerCreate" consumes: - "application/json" - "application/octet-stream" produces: - "application/json" parameters: - name: "name" in: "query" description: | Assign the specified name to the container. Must match `/?[a-zA-Z0-9][a-zA-Z0-9_.-]+`. type: "string" pattern: "^/?[a-zA-Z0-9][a-zA-Z0-9_.-]+$" - name: "body" in: "body" description: "Container to create" schema: allOf: - $ref: "#/definitions/ContainerConfig" - type: "object" properties: HostConfig: $ref: "#/definitions/HostConfig" NetworkingConfig: $ref: "#/definitions/NetworkingConfig" example: Hostname: "" Domainname: "" User: "" AttachStdin: false AttachStdout: true AttachStderr: true Tty: false OpenStdin: false StdinOnce: false Env: - "FOO=bar" - "BAZ=quux" Cmd: - "date" Entrypoint: "" Image: "ubuntu" Labels: com.example.vendor: "Acme" com.example.license: "GPL" com.example.version: "1.0" Volumes: /volumes/data: {} WorkingDir: "" NetworkDisabled: false MacAddress: "12:34:56:78:9a:bc" ExposedPorts: 22/tcp: {} StopSignal: "SIGTERM" StopTimeout: 10 HostConfig: Binds: - "/tmp:/tmp" Links: - "redis3:redis" Memory: 0 MemorySwap: 0 MemoryReservation: 0 KernelMemory: 0 NanoCpus: 500000 CpuPercent: 80 CpuShares: 512 CpuPeriod: 100000 CpuRealtimePeriod: 1000000 CpuRealtimeRuntime: 10000 CpuQuota: 50000 CpusetCpus: "0,1" CpusetMems: "0,1" MaximumIOps: 0 MaximumIOBps: 0 BlkioWeight: 300 BlkioWeightDevice: - {} BlkioDeviceReadBps: - {} BlkioDeviceReadIOps: - {} BlkioDeviceWriteBps: - {} BlkioDeviceWriteIOps: - {} DeviceRequests: - Driver: "nvidia" Count: -1 DeviceIDs": ["0", "1", "GPU-fef8089b-4820-abfc-e83e-94318197576e"] Capabilities: [["gpu", "nvidia", "compute"]] Options: property1: "string" property2: "string" MemorySwappiness: 60 OomKillDisable: false OomScoreAdj: 500 PidMode: "" PidsLimit: 0 PortBindings: 22/tcp: - HostPort: "11022" PublishAllPorts: false Privileged: false ReadonlyRootfs: false Dns: - "8.8.8.8" DnsOptions: - "" DnsSearch: - "" VolumesFrom: - "parent" - "other:ro" CapAdd: - "NET_ADMIN" CapDrop: - "MKNOD" GroupAdd: - "newgroup" RestartPolicy: Name: "" MaximumRetryCount: 0 AutoRemove: true NetworkMode: "bridge" Devices: [] Ulimits: - {} LogConfig: Type: "json-file" Config: {} SecurityOpt: [] StorageOpt: {} CgroupParent: "" VolumeDriver: "" ShmSize: 67108864 NetworkingConfig: EndpointsConfig: isolated_nw: IPAMConfig: IPv4Address: "172.20.30.33" IPv6Address: "2001:db8:abcd::3033" LinkLocalIPs: - "169.254.34.68" - "fe80::3468" Links: - "container_1" - "container_2" Aliases: - "server_x" - "server_y" required: true responses: 201: description: "Container created successfully" schema: type: "object" title: "ContainerCreateResponse" description: "OK response to ContainerCreate operation" required: [Id, Warnings] properties: Id: description: "The ID of the created container" type: "string" x-nullable: false Warnings: description: "Warnings encountered when creating the container" type: "array" x-nullable: false items: type: "string" 
examples: application/json: Id: "e90e34656806" Warnings: [] 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such image" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such image: c2ada9df5af8" 409: description: "conflict" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Container"] /containers/{id}/json: get: summary: "Inspect a container" description: "Return low-level information about a container." operationId: "ContainerInspect" produces: - "application/json" responses: 200: description: "no error" schema: type: "object" title: "ContainerInspectResponse" properties: Id: description: "The ID of the container" type: "string" Created: description: "The time the container was created" type: "string" Path: description: "The path to the command being run" type: "string" Args: description: "The arguments to the command being run" type: "array" items: type: "string" State: x-nullable: true $ref: "#/definitions/ContainerState" Image: description: "The container's image ID" type: "string" ResolvConfPath: type: "string" HostnamePath: type: "string" HostsPath: type: "string" LogPath: type: "string" Name: type: "string" RestartCount: type: "integer" Driver: type: "string" Platform: type: "string" MountLabel: type: "string" ProcessLabel: type: "string" AppArmorProfile: type: "string" ExecIDs: description: "IDs of exec instances that are running in the container." type: "array" items: type: "string" x-nullable: true HostConfig: $ref: "#/definitions/HostConfig" GraphDriver: $ref: "#/definitions/GraphDriverData" SizeRw: description: | The size of files that have been created or changed by this container. type: "integer" format: "int64" SizeRootFs: description: "The total size of all the files in this container." 
type: "integer" format: "int64" Mounts: type: "array" items: $ref: "#/definitions/MountPoint" Config: $ref: "#/definitions/ContainerConfig" NetworkSettings: $ref: "#/definitions/NetworkSettings" examples: application/json: AppArmorProfile: "" Args: - "-c" - "exit 9" Config: AttachStderr: true AttachStdin: false AttachStdout: true Cmd: - "/bin/sh" - "-c" - "exit 9" Domainname: "" Env: - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" Healthcheck: Test: ["CMD-SHELL", "exit 0"] Hostname: "ba033ac44011" Image: "ubuntu" Labels: com.example.vendor: "Acme" com.example.license: "GPL" com.example.version: "1.0" MacAddress: "" NetworkDisabled: false OpenStdin: false StdinOnce: false Tty: false User: "" Volumes: /volumes/data: {} WorkingDir: "" StopSignal: "SIGTERM" StopTimeout: 10 Created: "2015-01-06T15:47:31.485331387Z" Driver: "devicemapper" ExecIDs: - "b35395de42bc8abd327f9dd65d913b9ba28c74d2f0734eeeae84fa1c616a0fca" - "3fc1232e5cd20c8de182ed81178503dc6437f4e7ef12b52cc5e8de020652f1c4" HostConfig: MaximumIOps: 0 MaximumIOBps: 0 BlkioWeight: 0 BlkioWeightDevice: - {} BlkioDeviceReadBps: - {} BlkioDeviceWriteBps: - {} BlkioDeviceReadIOps: - {} BlkioDeviceWriteIOps: - {} ContainerIDFile: "" CpusetCpus: "" CpusetMems: "" CpuPercent: 80 CpuShares: 0 CpuPeriod: 100000 CpuRealtimePeriod: 1000000 CpuRealtimeRuntime: 10000 Devices: [] DeviceRequests: - Driver: "nvidia" Count: -1 DeviceIDs": ["0", "1", "GPU-fef8089b-4820-abfc-e83e-94318197576e"] Capabilities: [["gpu", "nvidia", "compute"]] Options: property1: "string" property2: "string" IpcMode: "" LxcConf: [] Memory: 0 MemorySwap: 0 MemoryReservation: 0 KernelMemory: 0 OomKillDisable: false OomScoreAdj: 500 NetworkMode: "bridge" PidMode: "" PortBindings: {} Privileged: false ReadonlyRootfs: false PublishAllPorts: false RestartPolicy: MaximumRetryCount: 2 Name: "on-failure" LogConfig: Type: "json-file" Sysctls: net.ipv4.ip_forward: "1" Ulimits: - {} VolumeDriver: "" ShmSize: 67108864 HostnamePath: "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/hostname" HostsPath: "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/hosts" LogPath: "/var/lib/docker/containers/1eb5fabf5a03807136561b3c00adcd2992b535d624d5e18b6cdc6a6844d9767b/1eb5fabf5a03807136561b3c00adcd2992b535d624d5e18b6cdc6a6844d9767b-json.log" Id: "ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39" Image: "04c5d3b7b0656168630d3ba35d8889bd0e9caafcaeb3004d2bfbc47e7c5d35d2" MountLabel: "" Name: "/boring_euclid" NetworkSettings: Bridge: "" SandboxID: "" HairpinMode: false LinkLocalIPv6Address: "" LinkLocalIPv6PrefixLen: 0 SandboxKey: "" EndpointID: "" Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 IPAddress: "" IPPrefixLen: 0 IPv6Gateway: "" MacAddress: "" Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "7587b82f0dada3656fda26588aee72630c6fab1536d36e394b2bfbcf898c971d" Gateway: "172.17.0.1" IPAddress: "172.17.0.2" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:12:00:02" Path: "/bin/sh" ProcessLabel: "" ResolvConfPath: "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/resolv.conf" RestartCount: 1 State: Error: "" ExitCode: 9 FinishedAt: "2015-01-06T15:47:32.080254511Z" Health: Status: "healthy" FailingStreak: 0 Log: - Start: "2019-12-22T10:59:05.6385933Z" End: "2019-12-22T10:59:05.8078452Z" ExitCode: 0 Output: "" OOMKilled: 
false Dead: false Paused: false Pid: 0 Restarting: false Running: true StartedAt: "2015-01-06T15:47:32.072697474Z" Status: "running" Mounts: - Name: "fac362...80535" Source: "/data" Destination: "/data" Driver: "local" Mode: "ro,Z" RW: false Propagation: "" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "size" in: "query" type: "boolean" default: false description: "Return the size of container as fields `SizeRw` and `SizeRootFs`" tags: ["Container"] /containers/{id}/top: get: summary: "List processes running inside a container" description: | On Unix systems, this is done by running the `ps` command. This endpoint is not supported on Windows. operationId: "ContainerTop" responses: 200: description: "no error" schema: type: "object" title: "ContainerTopResponse" description: "OK response to ContainerTop operation" properties: Titles: description: "The ps column titles" type: "array" items: type: "string" Processes: description: | Each process running in the container, where each process is an array of values corresponding to the titles. type: "array" items: type: "array" items: type: "string" examples: application/json: Titles: - "UID" - "PID" - "PPID" - "C" - "STIME" - "TTY" - "TIME" - "CMD" Processes: - - "root" - "13642" - "882" - "0" - "17:03" - "pts/0" - "00:00:00" - "/bin/bash" - - "root" - "13735" - "13642" - "0" - "17:06" - "pts/0" - "00:00:00" - "sleep 10" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "ps_args" in: "query" description: "The arguments to pass to `ps`. For example, `aux`" type: "string" default: "-ef" tags: ["Container"] /containers/{id}/logs: get: summary: "Get container logs" description: | Get `stdout` and `stderr` logs from a container. Note: This endpoint works only for containers with the `json-file` or `journald` logging driver. operationId: "ContainerLogs" responses: 200: description: | logs returned as a stream in response body. For the stream format, [see the documentation for the attach endpoint](#operation/ContainerAttach). Note that unlike the attach endpoint, the logs endpoint does not upgrade the connection and does not set Content-Type. schema: type: "string" format: "binary" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "follow" in: "query" description: "Keep connection after returning logs."
type: "boolean" default: false - name: "stdout" in: "query" description: "Return logs from `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Return logs from `stderr`" type: "boolean" default: false - name: "since" in: "query" description: "Only return logs since this time, as a UNIX timestamp" type: "integer" default: 0 - name: "until" in: "query" description: "Only return logs before this time, as a UNIX timestamp" type: "integer" default: 0 - name: "timestamps" in: "query" description: "Add timestamps to every log line" type: "boolean" default: false - name: "tail" in: "query" description: | Only return this number of log lines from the end of the logs. Specify as an integer or `all` to output all log lines. type: "string" default: "all" tags: ["Container"] /containers/{id}/changes: get: summary: "Get changes on a container’s filesystem" description: | Returns which files in a container's filesystem have been added, deleted, or modified. The `Kind` of modification can be one of: - `0`: Modified - `1`: Added - `2`: Deleted operationId: "ContainerChanges" produces: ["application/json"] responses: 200: description: "The list of changes" schema: type: "array" items: type: "object" x-go-name: "ContainerChangeResponseItem" title: "ContainerChangeResponseItem" description: "change item in response to ContainerChanges operation" required: [Path, Kind] properties: Path: description: "Path to file that has changed" type: "string" x-nullable: false Kind: description: "Kind of change" type: "integer" format: "uint8" enum: [0, 1, 2] x-nullable: false examples: application/json: - Path: "/dev" Kind: 0 - Path: "/dev/kmsg" Kind: 1 - Path: "/test" Kind: 1 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/export: get: summary: "Export a container" description: "Export the contents of a container as a tarball." operationId: "ContainerExport" produces: - "application/octet-stream" responses: 200: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/stats: get: summary: "Get container stats based on resource usage" description: | This endpoint returns a live stream of a container’s resource usage statistics. The `precpu_stats` is the CPU statistic of the *previous* read, and is used to calculate the CPU usage percentage. It is not an exact copy of the `cpu_stats` field. If either `precpu_stats.online_cpus` or `cpu_stats.online_cpus` is nil then for compatibility with older daemons the length of the corresponding `cpu_usage.percpu_usage` array should be used. On a cgroup v2 host, the following fields are not set * `blkio_stats`: all fields other than `io_service_bytes_recursive` * `cpu_stats`: `cpu_usage.percpu_usage` * `memory_stats`: `max_usage` and `failcnt` Also, `memory_stats.stats` fields are incompatible with cgroup v1. 
To calculate the values shown by the `stats` command of the docker cli tool, the following formulas can be used: * used_memory = `memory_stats.usage - memory_stats.stats.cache` * available_memory = `memory_stats.limit` * Memory usage % = `(used_memory / available_memory) * 100.0` * cpu_delta = `cpu_stats.cpu_usage.total_usage - precpu_stats.cpu_usage.total_usage` * system_cpu_delta = `cpu_stats.system_cpu_usage - precpu_stats.system_cpu_usage` * number_cpus = `length(cpu_stats.cpu_usage.percpu_usage)` or `cpu_stats.online_cpus` * CPU usage % = `(cpu_delta / system_cpu_delta) * number_cpus * 100.0` operationId: "ContainerStats" produces: ["application/json"] responses: 200: description: "no error" schema: type: "object" examples: application/json: read: "2015-01-08T22:57:31.547920715Z" pids_stats: current: 3 networks: eth0: rx_bytes: 5338 rx_dropped: 0 rx_errors: 0 rx_packets: 36 tx_bytes: 648 tx_dropped: 0 tx_errors: 0 tx_packets: 8 eth5: rx_bytes: 4641 rx_dropped: 0 rx_errors: 0 rx_packets: 26 tx_bytes: 690 tx_dropped: 0 tx_errors: 0 tx_packets: 9 memory_stats: stats: total_pgmajfault: 0 cache: 0 mapped_file: 0 total_inactive_file: 0 pgpgout: 414 rss: 6537216 total_mapped_file: 0 writeback: 0 unevictable: 0 pgpgin: 477 total_unevictable: 0 pgmajfault: 0 total_rss: 6537216 total_rss_huge: 6291456 total_writeback: 0 total_inactive_anon: 0 rss_huge: 6291456 hierarchical_memory_limit: 67108864 total_pgfault: 964 total_active_file: 0 active_anon: 6537216 total_active_anon: 6537216 total_pgpgout: 414 total_cache: 0 inactive_anon: 0 active_file: 0 pgfault: 964 inactive_file: 0 total_pgpgin: 477 max_usage: 6651904 usage: 6537216 failcnt: 0 limit: 67108864 blkio_stats: {} cpu_stats: cpu_usage: percpu_usage: - 8646879 - 24472255 - 36438778 - 30657443 usage_in_usermode: 50000000 total_usage: 100215355 usage_in_kernelmode: 30000000 system_cpu_usage: 739306590000000 online_cpus: 4 throttling_data: periods: 0 throttled_periods: 0 throttled_time: 0 precpu_stats: cpu_usage: percpu_usage: - 8646879 - 24350896 - 36438778 - 30657443 usage_in_usermode: 50000000 total_usage: 100093996 usage_in_kernelmode: 30000000 system_cpu_usage: 9492140000000 online_cpus: 4 throttling_data: periods: 0 throttled_periods: 0 throttled_time: 0 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "stream" in: "query" description: | Stream the output. If false, the stats will be output once and then it will disconnect. type: "boolean" default: true - name: "one-shot" in: "query" description: | Only get a single stat instead of waiting for 2 cycles. Must be used with `stream=false`. type: "boolean" default: false tags: ["Container"] /containers/{id}/resize: post: summary: "Resize a container TTY" description: "Resize the TTY for a container."
operationId: "ContainerResize" consumes: - "application/octet-stream" produces: - "text/plain" responses: 200: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "cannot resize container" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "h" in: "query" description: "Height of the TTY session in characters" type: "integer" - name: "w" in: "query" description: "Width of the TTY session in characters" type: "integer" tags: ["Container"] /containers/{id}/start: post: summary: "Start a container" operationId: "ContainerStart" responses: 204: description: "no error" 304: description: "container already started" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "detachKeys" in: "query" description: | Override the key sequence for detaching a container. Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,` or `_`. type: "string" tags: ["Container"] /containers/{id}/stop: post: summary: "Stop a container" operationId: "ContainerStop" responses: 204: description: "no error" 304: description: "container already stopped" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "t" in: "query" description: "Number of seconds to wait before killing the container" type: "integer" tags: ["Container"] /containers/{id}/restart: post: summary: "Restart a container" operationId: "ContainerRestart" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "t" in: "query" description: "Number of seconds to wait before killing the container" type: "integer" tags: ["Container"] /containers/{id}/kill: post: summary: "Kill a container" description: | Send a POSIX signal to a container, defaulting to killing to the container. 
operationId: "ContainerKill" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "container is not running" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "Container d37cde0fe4ad63c3a7252023b2f9800282894247d145cb5933ddf6e52cc03a28 is not running" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "signal" in: "query" description: "Signal to send to the container as an integer or string (e.g. `SIGINT`)" type: "string" default: "SIGKILL" tags: ["Container"] /containers/{id}/update: post: summary: "Update a container" description: | Change various configuration options of a container without having to recreate it. operationId: "ContainerUpdate" consumes: ["application/json"] produces: ["application/json"] responses: 200: description: "The container has been updated." schema: type: "object" title: "ContainerUpdateResponse" description: "OK response to ContainerUpdate operation" properties: Warnings: type: "array" items: type: "string" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "update" in: "body" required: true schema: allOf: - $ref: "#/definitions/Resources" - type: "object" properties: RestartPolicy: $ref: "#/definitions/RestartPolicy" example: BlkioWeight: 300 CpuShares: 512 CpuPeriod: 100000 CpuQuota: 50000 CpuRealtimePeriod: 1000000 CpuRealtimeRuntime: 10000 CpusetCpus: "0,1" CpusetMems: "0" Memory: 314572800 MemorySwap: 514288000 MemoryReservation: 209715200 KernelMemory: 52428800 RestartPolicy: MaximumRetryCount: 4 Name: "on-failure" tags: ["Container"] /containers/{id}/rename: post: summary: "Rename a container" operationId: "ContainerRename" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "name already in use" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "name" in: "query" required: true description: "New name for the container" type: "string" tags: ["Container"] /containers/{id}/pause: post: summary: "Pause a container" description: | Use the freezer cgroup to suspend all processes in a container. Traditionally, when suspending a process the `SIGSTOP` signal is used, which is observable by the process being suspended. With the freezer cgroup the process is unaware, and unable to capture, that it is being suspended, and subsequently resumed. 
operationId: "ContainerPause" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/unpause: post: summary: "Unpause a container" description: "Resume a container which has been paused." operationId: "ContainerUnpause" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/attach: post: summary: "Attach to a container" description: | Attach to a container to read its output or send it input. You can attach to the same container multiple times and you can reattach to containers that have been detached. Either the `stream` or `logs` parameter must be `true` for this endpoint to do anything. See the [documentation for the `docker attach` command](https://docs.docker.com/engine/reference/commandline/attach/) for more details. ### Hijacking This endpoint hijacks the HTTP connection to transport `stdin`, `stdout`, and `stderr` on the same socket. This is the response from the daemon for an attach request: ``` HTTP/1.1 200 OK Content-Type: application/vnd.docker.raw-stream [STREAM] ``` After the headers and two new lines, the TCP connection can now be used for raw, bidirectional communication between the client and server. To hint potential proxies about connection hijacking, the Docker client can also optionally send connection upgrade headers. For example, the client sends this request to upgrade the connection: ``` POST /containers/16253994b7c4/attach?stream=1&stdout=1 HTTP/1.1 Upgrade: tcp Connection: Upgrade ``` The Docker daemon will respond with a `101 UPGRADED` response, and will similarly follow with the raw stream: ``` HTTP/1.1 101 UPGRADED Content-Type: application/vnd.docker.raw-stream Connection: Upgrade Upgrade: tcp [STREAM] ``` ### Stream format When the TTY setting is disabled in [`POST /containers/create`](#operation/ContainerCreate), the stream over the hijacked connected is multiplexed to separate out `stdout` and `stderr`. The stream consists of a series of frames, each containing a header and a payload. The header contains the information which the stream writes (`stdout` or `stderr`). It also contains the size of the associated frame encoded in the last four bytes (`uint32`). It is encoded on the first eight bytes like this: ```go header := [8]byte{STREAM_TYPE, 0, 0, 0, SIZE1, SIZE2, SIZE3, SIZE4} ``` `STREAM_TYPE` can be: - 0: `stdin` (is written on `stdout`) - 1: `stdout` - 2: `stderr` `SIZE1, SIZE2, SIZE3, SIZE4` are the four bytes of the `uint32` size encoded as big endian. Following the header is the payload, which is the specified number of bytes of `STREAM_TYPE`. The simplest way to implement this protocol is the following: 1. Read 8 bytes. 2. Choose `stdout` or `stderr` depending on the first byte. 3. Extract the frame size from the last four bytes. 4. Read the extracted size and output it on the correct output. 5. Goto 1. 
### Stream format when using a TTY When the TTY setting is enabled in [`POST /containers/create`](#operation/ContainerCreate), the stream is not multiplexed. The data exchanged over the hijacked connection is simply the raw data from the process PTY and client's `stdin`. operationId: "ContainerAttach" produces: - "application/vnd.docker.raw-stream" responses: 101: description: "no error, hints proxy about hijacking" 200: description: "no error, no upgrade header found" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "detachKeys" in: "query" description: | Override the key sequence for detaching a container. Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,` or `_`. type: "string" - name: "logs" in: "query" description: | Replay previous logs from the container. This is useful for attaching to a container that has started and you want to output everything since the container started. If `stream` is also enabled, once all the previous output has been returned, it will seamlessly transition into streaming current output. type: "boolean" default: false - name: "stream" in: "query" description: | Stream attached streams from the time the request was made onwards. type: "boolean" default: false - name: "stdin" in: "query" description: "Attach to `stdin`" type: "boolean" default: false - name: "stdout" in: "query" description: "Attach to `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Attach to `stderr`" type: "boolean" default: false tags: ["Container"] /containers/{id}/attach/ws: get: summary: "Attach to a container via a websocket" operationId: "ContainerAttachWebsocket" responses: 101: description: "no error, hints proxy about hijacking" 200: description: "no error, no upgrade header found" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "detachKeys" in: "query" description: | Override the key sequence for detaching a container. Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,`, or `_`. type: "string" - name: "logs" in: "query" description: "Return logs" type: "boolean" default: false - name: "stream" in: "query" description: "Return stream" type: "boolean" default: false - name: "stdin" in: "query" description: "Attach to `stdin`" type: "boolean" default: false - name: "stdout" in: "query" description: "Attach to `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Attach to `stderr`" type: "boolean" default: false tags: ["Container"] /containers/{id}/wait: post: summary: "Wait for a container" description: "Block until a container stops, then returns the exit code."
operationId: "ContainerWait" produces: ["application/json"] responses: 200: description: "The container has exit." schema: type: "object" title: "ContainerWaitResponse" description: "OK response to ContainerWait operation" required: [StatusCode] properties: StatusCode: description: "Exit code of the container" type: "integer" x-nullable: false Error: description: "container waiting error, if any" type: "object" properties: Message: description: "Details of an error" type: "string" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "condition" in: "query" description: | Wait until a container state reaches the given condition, either 'not-running' (default), 'next-exit', or 'removed'. type: "string" default: "not-running" tags: ["Container"] /containers/{id}: delete: summary: "Remove a container" operationId: "ContainerDelete" responses: 204: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "conflict" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: | You cannot remove a running container: c2ada9df5af8. Stop the container before attempting removal or force remove 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "v" in: "query" description: "Remove anonymous volumes associated with the container." type: "boolean" default: false - name: "force" in: "query" description: "If the container is running, kill it before removing it." type: "boolean" default: false - name: "link" in: "query" description: "Remove the specified link associated with the container." type: "boolean" default: false tags: ["Container"] /containers/{id}/archive: head: summary: "Get information about files in a container" description: | A response header `X-Docker-Container-Path-Stat` is returned, containing a base64 - encoded JSON object with some filesystem header information about the path. operationId: "ContainerArchiveInfo" responses: 200: description: "no error" headers: X-Docker-Container-Path-Stat: type: "string" description: | A base64 - encoded JSON object with some filesystem header information about the path 400: description: "Bad parameter" schema: allOf: - $ref: "#/definitions/ErrorResponse" - type: "object" properties: message: description: | The error message. Either "must specify path parameter" (path cannot be empty) or "not a directory" (path was asserted to be a directory but exists as a file). type: "string" x-nullable: false 404: description: "Container or path does not exist" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "path" in: "query" required: true description: "Resource in the container’s filesystem to archive." 
type: "string" tags: ["Container"] get: summary: "Get an archive of a filesystem resource in a container" description: "Get a tar archive of a resource in the filesystem of container id." operationId: "ContainerArchive" produces: ["application/x-tar"] responses: 200: description: "no error" 400: description: "Bad parameter" schema: allOf: - $ref: "#/definitions/ErrorResponse" - type: "object" properties: message: description: | The error message. Either "must specify path parameter" (path cannot be empty) or "not a directory" (path was asserted to be a directory but exists as a file). type: "string" x-nullable: false 404: description: "Container or path does not exist" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "path" in: "query" required: true description: "Resource in the container’s filesystem to archive." type: "string" tags: ["Container"] put: summary: "Extract an archive of files or folders to a directory in a container" description: "Upload a tar archive to be extracted to a path in the filesystem of container id." operationId: "PutContainerArchive" consumes: ["application/x-tar", "application/octet-stream"] responses: 200: description: "The content was extracted successfully" 400: description: "Bad parameter" schema: $ref: "#/definitions/ErrorResponse" 403: description: "Permission denied, the volume or container rootfs is marked as read-only." schema: $ref: "#/definitions/ErrorResponse" 404: description: "No such container or path does not exist inside the container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "path" in: "query" required: true description: "Path to a directory in the container to extract the archive’s contents into. " type: "string" - name: "noOverwriteDirNonDir" in: "query" description: | If `1`, `true`, or `True` then it will be an error if unpacking the given content would cause an existing directory to be replaced with a non-directory and vice versa. type: "string" - name: "copyUIDGID" in: "query" description: | If `1`, `true`, then it will copy UID/GID maps to the dest file or dir type: "string" - name: "inputStream" in: "body" required: true description: | The input stream must be a tar archive compressed with one of the following algorithms: `identity` (no compression), `gzip`, `bzip2`, or `xz`. schema: type: "string" format: "binary" tags: ["Container"] /containers/prune: post: summary: "Delete stopped containers" produces: - "application/json" operationId: "ContainerPrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `until=<timestamp>` Prune containers created before this timestamp. The `<timestamp>` can be Unix timestamps, date formatted timestamps, or Go duration strings (e.g. `10m`, `1h30m`) computed relative to the daemon machine’s time. - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune containers with (or without, in case `label!=...` is used) the specified labels. 
type: "string" responses: 200: description: "No error" schema: type: "object" title: "ContainerPruneResponse" properties: ContainersDeleted: description: "Container IDs that were deleted" type: "array" items: type: "string" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Container"] /images/json: get: summary: "List Images" description: "Returns a list of images on the server. Note that it uses a different, smaller representation of an image than inspecting a single image." operationId: "ImageList" produces: - "application/json" responses: 200: description: "Summary image data for the images matching the query" schema: type: "array" items: $ref: "#/definitions/ImageSummary" examples: application/json: - Id: "sha256:e216a057b1cb1efc11f8a268f37ef62083e70b1b38323ba252e25ac88904a7e8" ParentId: "" RepoTags: - "ubuntu:12.04" - "ubuntu:precise" RepoDigests: - "ubuntu@sha256:992069aee4016783df6345315302fa59681aae51a8eeb2f889dea59290f21787" Created: 1474925151 Size: 103579269 VirtualSize: 103579269 SharedSize: 0 Labels: {} Containers: 2 - Id: "sha256:3e314f95dcace0f5e4fd37b10862fe8398e3c60ed36600bc0ca5fda78b087175" ParentId: "" RepoTags: - "ubuntu:12.10" - "ubuntu:quantal" RepoDigests: - "ubuntu@sha256:002fba3e3255af10be97ea26e476692a7ebed0bb074a9ab960b2e7a1526b15d7" - "ubuntu@sha256:68ea0200f0b90df725d99d823905b04cf844f6039ef60c60bf3e019915017bd3" Created: 1403128455 Size: 172064416 VirtualSize: 172064416 SharedSize: 0 Labels: {} Containers: 5 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "all" in: "query" description: "Show all images. Only images from a final layer (no children) are shown by default." type: "boolean" default: false - name: "filters" in: "query" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the images list. Available filters: - `before`=(`<image-name>[:<tag>]`, `<image id>` or `<image@digest>`) - `dangling=true` - `label=key` or `label="key=value"` of an image label - `reference`=(`<image-name>[:<tag>]`) - `since`=(`<image-name>[:<tag>]`, `<image id>` or `<image@digest>`) type: "string" - name: "shared-size" in: "query" description: "Compute and show shared size as a `SharedSize` field on each image." type: "boolean" default: false - name: "digests" in: "query" description: "Show digest information as a `RepoDigests` field on each image." type: "boolean" default: false tags: ["Image"] /build: post: summary: "Build an image" description: | Build an image from a tar archive with a `Dockerfile` in it. The `Dockerfile` specifies how the image is built from the tar archive. It is typically in the archive's root, but can be at a different path or have a different name by specifying the `dockerfile` parameter. [See the `Dockerfile` reference for more information](https://docs.docker.com/engine/reference/builder/). The Docker daemon performs a preliminary validation of the `Dockerfile` before starting the build, and returns an error if the syntax is incorrect. After that, each instruction is run one-by-one until the ID of the new image is output. The build is canceled if the client drops the connection by quitting or being killed. 
operationId: "ImageBuild" consumes: - "application/octet-stream" produces: - "application/json" parameters: - name: "inputStream" in: "body" description: "A tar archive compressed with one of the following algorithms: identity (no compression), gzip, bzip2, xz." schema: type: "string" format: "binary" - name: "dockerfile" in: "query" description: "Path within the build context to the `Dockerfile`. This is ignored if `remote` is specified and points to an external `Dockerfile`." type: "string" default: "Dockerfile" - name: "t" in: "query" description: "A name and optional tag to apply to the image in the `name:tag` format. If you omit the tag the default `latest` value is assumed. You can provide several `t` parameters." type: "string" - name: "extrahosts" in: "query" description: "Extra hosts to add to /etc/hosts" type: "string" - name: "remote" in: "query" description: "A Git repository URI or HTTP/HTTPS context URI. If the URI points to a single text file, the file’s contents are placed into a file called `Dockerfile` and the image is built from that file. If the URI points to a tarball, the file is downloaded by the daemon and the contents therein used as the context for the build. If the URI points to a tarball and the `dockerfile` parameter is also specified, there must be a file with the corresponding path inside the tarball." type: "string" - name: "q" in: "query" description: "Suppress verbose build output." type: "boolean" default: false - name: "nocache" in: "query" description: "Do not use the cache when building the image." type: "boolean" default: false - name: "cachefrom" in: "query" description: "JSON array of images used for build cache resolution." type: "string" - name: "pull" in: "query" description: "Attempt to pull the image even if an older image exists locally." type: "string" - name: "rm" in: "query" description: "Remove intermediate containers after a successful build." type: "boolean" default: true - name: "forcerm" in: "query" description: "Always remove intermediate containers, even upon failure." type: "boolean" default: false - name: "memory" in: "query" description: "Set memory limit for build." type: "integer" - name: "memswap" in: "query" description: "Total memory (memory + swap). Set as `-1` to disable swap." type: "integer" - name: "cpushares" in: "query" description: "CPU shares (relative weight)." type: "integer" - name: "cpusetcpus" in: "query" description: "CPUs in which to allow execution (e.g., `0-3`, `0,1`)." type: "string" - name: "cpuperiod" in: "query" description: "The length of a CPU period in microseconds." type: "integer" - name: "cpuquota" in: "query" description: "Microseconds of CPU time that the container can get in a CPU period." type: "integer" - name: "buildargs" in: "query" description: > JSON map of string pairs for build-time variables. Users pass these values at build-time. Docker uses the buildargs as the environment context for commands run via the `Dockerfile` RUN instruction, or for variable expansion in other `Dockerfile` instructions. This is not meant for passing secret values. For example, the build arg `FOO=bar` would become `{"FOO":"bar"}` in JSON. This would result in the query parameter `buildargs={"FOO":"bar"}`. Note that `{"FOO":"bar"}` should be URI component encoded. [Read more about the buildargs instruction.](https://docs.docker.com/engine/reference/builder/#arg) type: "string" - name: "shmsize" in: "query" description: "Size of `/dev/shm` in bytes. The size must be greater than 0. 
If omitted the system uses 64MB." type: "integer" - name: "squash" in: "query" description: "Squash the resulting images layers into a single layer. *(Experimental release only.)*" type: "boolean" - name: "labels" in: "query" description: "Arbitrary key/value labels to set on the image, as a JSON map of string pairs." type: "string" - name: "networkmode" in: "query" description: | Sets the networking mode for the run commands during build. Supported standard values are: `bridge`, `host`, `none`, and `container:<name|id>`. Any other value is taken as a custom network's name or ID to which this container should connect to. type: "string" - name: "Content-type" in: "header" type: "string" enum: - "application/x-tar" default: "application/x-tar" - name: "X-Registry-Config" in: "header" description: | This is a base64-encoded JSON object with auth configurations for multiple registries that a build may refer to. The key is a registry URL, and the value is an auth configuration object, [as described in the authentication section](#section/Authentication). For example: ``` { "docker.example.com": { "username": "janedoe", "password": "hunter2" }, "https://index.docker.io/v1/": { "username": "mobydock", "password": "conta1n3rize14" } } ``` Only the registry domain name (and port if not the default 443) are required. However, for legacy reasons, the Docker Hub registry must be specified with both a `https://` prefix and a `/v1/` suffix even though Docker will prefer to use the v2 registry API. type: "string" - name: "platform" in: "query" description: "Platform in the format os[/arch[/variant]]" type: "string" default: "" - name: "target" in: "query" description: "Target build stage" type: "string" default: "" - name: "outputs" in: "query" description: "BuildKit output configuration" type: "string" default: "" responses: 200: description: "no error" 400: description: "Bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Image"] /build/prune: post: summary: "Delete builder cache" produces: - "application/json" operationId: "BuildPrune" parameters: - name: "keep-storage" in: "query" description: "Amount of disk space in bytes to keep for cache" type: "integer" format: "int64" - name: "all" in: "query" type: "boolean" description: "Remove all types of build cache" - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the list of build cache objects. Available filters: - `until=<duration>`: duration relative to daemon's time, during which build cache was not used, in Go's duration format (e.g., '24h') - `id=<id>` - `parent=<id>` - `type=<string>` - `description=<string>` - `inuse` - `shared` - `private` responses: 200: description: "No error" schema: type: "object" title: "BuildPruneResponse" properties: CachesDeleted: type: "array" items: description: "ID of build cache object" type: "string" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Image"] /images/create: post: summary: "Create an image" description: "Create an image by either pulling it from a registry or importing it." 
operationId: "ImageCreate" consumes: - "text/plain" - "application/octet-stream" produces: - "application/json" responses: 200: description: "no error" 404: description: "repository does not exist or no read access" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "fromImage" in: "query" description: "Name of the image to pull. The name may include a tag or digest. This parameter may only be used when pulling an image. The pull is cancelled if the HTTP connection is closed." type: "string" - name: "fromSrc" in: "query" description: "Source to import. The value may be a URL from which the image can be retrieved or `-` to read the image from the request body. This parameter may only be used when importing an image." type: "string" - name: "repo" in: "query" description: "Repository name given to an image when it is imported. The repo may include a tag. This parameter may only be used when importing an image." type: "string" - name: "tag" in: "query" description: "Tag or digest. If empty when pulling an image, this causes all tags for the given image to be pulled." type: "string" - name: "message" in: "query" description: "Set commit message for imported image." type: "string" - name: "inputImage" in: "body" description: "Image content if the value `-` has been specified in fromSrc query parameter" schema: type: "string" required: false - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration. Refer to the [authentication section](#section/Authentication) for details. type: "string" - name: "changes" in: "query" description: | Apply `Dockerfile` instructions to the image that is created, for example: `changes=ENV DEBUG=true`. Note that `ENV DEBUG=true` should be URI component encoded. Supported `Dockerfile` instructions: `CMD`|`ENTRYPOINT`|`ENV`|`EXPOSE`|`ONBUILD`|`USER`|`VOLUME`|`WORKDIR` type: "array" items: type: "string" - name: "platform" in: "query" description: "Platform in the format os[/arch[/variant]]" type: "string" default: "" tags: ["Image"] /images/{name}/json: get: summary: "Inspect an image" description: "Return low-level information about an image." 
operationId: "ImageInspect" produces: - "application/json" responses: 200: description: "No error" schema: $ref: "#/definitions/Image" examples: application/json: Id: "sha256:85f05633ddc1c50679be2b16a0479ab6f7637f8884e0cfe0f4d20e1ebb3d6e7c" Container: "cb91e48a60d01f1e27028b4fc6819f4f290b3cf12496c8176ec714d0d390984a" Comment: "" Os: "linux" Architecture: "amd64" Parent: "sha256:91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c" ContainerConfig: Tty: false Hostname: "e611e15f9c9d" Domainname: "" AttachStdout: false PublishService: "" AttachStdin: false OpenStdin: false StdinOnce: false NetworkDisabled: false OnBuild: [] Image: "91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c" User: "" WorkingDir: "" MacAddress: "" AttachStderr: false Labels: com.example.license: "GPL" com.example.version: "1.0" com.example.vendor: "Acme" Env: - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" Cmd: - "/bin/sh" - "-c" - "#(nop) LABEL com.example.vendor=Acme com.example.license=GPL com.example.version=1.0" DockerVersion: "1.9.0-dev" VirtualSize: 188359297 Size: 0 Author: "" Created: "2015-09-10T08:30:53.26995814Z" GraphDriver: Name: "aufs" Data: {} RepoDigests: - "localhost:5000/test/busybox/example@sha256:cbbf2f9a99b47fc460d422812b6a5adff7dfee951d8fa2e4a98caa0382cfbdbf" RepoTags: - "example:1.0" - "example:latest" - "example:stable" Config: Image: "91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c" NetworkDisabled: false OnBuild: [] StdinOnce: false PublishService: "" AttachStdin: false OpenStdin: false Domainname: "" AttachStdout: false Tty: false Hostname: "e611e15f9c9d" Cmd: - "/bin/bash" Env: - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" Labels: com.example.vendor: "Acme" com.example.version: "1.0" com.example.license: "GPL" MacAddress: "" AttachStderr: false WorkingDir: "" User: "" RootFS: Type: "layers" Layers: - "sha256:1834950e52ce4d5a88a1bbd131c537f4d0e56d10ff0dd69e66be3b7dfa9df7e6" - "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such image: someimage (tag: latest)" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or id" type: "string" required: true tags: ["Image"] /images/{name}/history: get: summary: "Get the history of an image" description: "Return parent layers of an image." 
operationId: "ImageHistory" produces: ["application/json"] responses: 200: description: "List of image layers" schema: type: "array" items: type: "object" x-go-name: HistoryResponseItem title: "HistoryResponseItem" description: "individual image layer information in response to ImageHistory operation" required: [Id, Created, CreatedBy, Tags, Size, Comment] properties: Id: type: "string" x-nullable: false Created: type: "integer" format: "int64" x-nullable: false CreatedBy: type: "string" x-nullable: false Tags: type: "array" items: type: "string" Size: type: "integer" format: "int64" x-nullable: false Comment: type: "string" x-nullable: false examples: application/json: - Id: "3db9c44f45209632d6050b35958829c3a2aa256d81b9a7be45b362ff85c54710" Created: 1398108230 CreatedBy: "/bin/sh -c #(nop) ADD file:eb15dbd63394e063b805a3c32ca7bf0266ef64676d5a6fab4801f2e81e2a5148 in /" Tags: - "ubuntu:lucid" - "ubuntu:10.04" Size: 182964289 Comment: "" - Id: "6cfa4d1f33fb861d4d114f43b25abd0ac737509268065cdfd69d544a59c85ab8" Created: 1398108222 CreatedBy: "/bin/sh -c #(nop) MAINTAINER Tianon Gravi <[email protected]> - mkimage-debootstrap.sh -i iproute,iputils-ping,ubuntu-minimal -t lucid.tar.xz lucid http://archive.ubuntu.com/ubuntu/" Tags: [] Size: 0 Comment: "" - Id: "511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158" Created: 1371157430 CreatedBy: "" Tags: - "scratch12:latest" - "scratch:latest" Size: 0 Comment: "Imported from -" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID" type: "string" required: true tags: ["Image"] /images/{name}/push: post: summary: "Push an image" description: | Push an image to a registry. If you wish to push an image on to a private registry, that image must already have a tag which references the registry. For example, `registry.example.com/myimage:latest`. The push is cancelled if the HTTP connection is closed. operationId: "ImagePush" consumes: - "application/octet-stream" responses: 200: description: "No error" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID." type: "string" required: true - name: "tag" in: "query" description: "The tag to associate with the image on the registry." type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration. Refer to the [authentication section](#section/Authentication) for details. type: "string" required: true tags: ["Image"] /images/{name}/tag: post: summary: "Tag an image" description: "Tag an image so that it becomes part of a repository." operationId: "ImageTag" responses: 201: description: "No error" 400: description: "Bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Conflict" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID to tag." type: "string" required: true - name: "repo" in: "query" description: "The repository to tag in. For example, `someuser/someimage`." type: "string" - name: "tag" in: "query" description: "The name of the new tag." 
type: "string" tags: ["Image"] /images/{name}: delete: summary: "Remove an image" description: | Remove an image, along with any untagged parent images that were referenced by that image. Images can't be removed if they have descendant images, are being used by a running container or are being used by a build. operationId: "ImageDelete" produces: ["application/json"] responses: 200: description: "The image was deleted successfully" schema: type: "array" items: $ref: "#/definitions/ImageDeleteResponseItem" examples: application/json: - Untagged: "3e2f21a89f" - Deleted: "3e2f21a89f" - Deleted: "53b4f83ac9" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Conflict" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID" type: "string" required: true - name: "force" in: "query" description: "Remove the image even if it is being used by stopped containers or has other tags" type: "boolean" default: false - name: "noprune" in: "query" description: "Do not delete untagged parent images" type: "boolean" default: false tags: ["Image"] /images/search: get: summary: "Search images" description: "Search for an image on Docker Hub." operationId: "ImageSearch" produces: - "application/json" responses: 200: description: "No error" schema: type: "array" items: type: "object" title: "ImageSearchResponseItem" properties: description: type: "string" is_official: type: "boolean" is_automated: type: "boolean" name: type: "string" star_count: type: "integer" examples: application/json: - description: "" is_official: false is_automated: false name: "wma55/u1210sshd" star_count: 0 - description: "" is_official: false is_automated: false name: "jdswinbank/sshd" star_count: 0 - description: "" is_official: false is_automated: false name: "vgauthier/sshd" star_count: 0 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "term" in: "query" description: "Term to search" type: "string" required: true - name: "limit" in: "query" description: "Maximum number of results to return" type: "integer" - name: "filters" in: "query" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the images list. Available filters: - `is-automated=(true|false)` - `is-official=(true|false)` - `stars=<number>` Matches images that has at least 'number' stars. type: "string" tags: ["Image"] /images/prune: post: summary: "Delete unused images" produces: - "application/json" operationId: "ImagePrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `dangling=<boolean>` When set to `true` (or `1`), prune only unused *and* untagged images. When set to `false` (or `0`), all unused images are pruned. - `until=<string>` Prune images created before this timestamp. The `<timestamp>` can be Unix timestamps, date formatted timestamps, or Go duration strings (e.g. `10m`, `1h30m`) computed relative to the daemon machine’s time. - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune images with (or without, in case `label!=...` is used) the specified labels. 
type: "string" responses: 200: description: "No error" schema: type: "object" title: "ImagePruneResponse" properties: ImagesDeleted: description: "Images that were deleted" type: "array" items: $ref: "#/definitions/ImageDeleteResponseItem" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Image"] /auth: post: summary: "Check auth configuration" description: | Validate credentials for a registry and, if available, get an identity token for accessing the registry without password. operationId: "SystemAuth" consumes: ["application/json"] produces: ["application/json"] responses: 200: description: "An identity token was generated successfully." schema: type: "object" title: "SystemAuthResponse" required: [Status] properties: Status: description: "The status of the authentication" type: "string" x-nullable: false IdentityToken: description: "An opaque token used to authenticate a user after a successful login" type: "string" x-nullable: false examples: application/json: Status: "Login Succeeded" IdentityToken: "9cbaf023786cd7..." 204: description: "No error" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "authConfig" in: "body" description: "Authentication to check" schema: $ref: "#/definitions/AuthConfig" tags: ["System"] /info: get: summary: "Get system information" operationId: "SystemInfo" produces: - "application/json" responses: 200: description: "No error" schema: $ref: "#/definitions/SystemInfo" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /version: get: summary: "Get version" description: "Returns the version of Docker that is running and various information about the system that Docker is running on." operationId: "SystemVersion" produces: ["application/json"] responses: 200: description: "no error" schema: $ref: "#/definitions/SystemVersion" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /_ping: get: summary: "Ping" description: "This is a dummy endpoint you can use to test if the server is accessible." operationId: "SystemPing" produces: ["text/plain"] responses: 200: description: "no error" schema: type: "string" example: "OK" headers: API-Version: type: "string" description: "Max API Version the server supports" Builder-Version: type: "string" description: "Default version of docker image builder" Docker-Experimental: type: "boolean" description: "If the server is running with experimental mode enabled" Cache-Control: type: "string" default: "no-cache, no-store, must-revalidate" Pragma: type: "string" default: "no-cache" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" headers: Cache-Control: type: "string" default: "no-cache, no-store, must-revalidate" Pragma: type: "string" default: "no-cache" tags: ["System"] head: summary: "Ping" description: "This is a dummy endpoint you can use to test if the server is accessible." 
operationId: "SystemPingHead" produces: ["text/plain"] responses: 200: description: "no error" schema: type: "string" example: "(empty)" headers: API-Version: type: "string" description: "Max API Version the server supports" Builder-Version: type: "string" description: "Default version of docker image builder" Docker-Experimental: type: "boolean" description: "If the server is running with experimental mode enabled" Cache-Control: type: "string" default: "no-cache, no-store, must-revalidate" Pragma: type: "string" default: "no-cache" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /commit: post: summary: "Create a new image from a container" operationId: "ImageCommit" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "containerConfig" in: "body" description: "The container configuration" schema: $ref: "#/definitions/ContainerConfig" - name: "container" in: "query" description: "The ID or name of the container to commit" type: "string" - name: "repo" in: "query" description: "Repository name for the created image" type: "string" - name: "tag" in: "query" description: "Tag name for the create image" type: "string" - name: "comment" in: "query" description: "Commit message" type: "string" - name: "author" in: "query" description: "Author of the image (e.g., `John Hannibal Smith <[email protected]>`)" type: "string" - name: "pause" in: "query" description: "Whether to pause the container before committing" type: "boolean" default: true - name: "changes" in: "query" description: "`Dockerfile` instructions to apply while committing" type: "string" tags: ["Image"] /events: get: summary: "Monitor events" description: | Stream real-time events from the server. Various objects within Docker report events when something happens to them. 
Containers report these events: `attach`, `commit`, `copy`, `create`, `destroy`, `detach`, `die`, `exec_create`, `exec_detach`, `exec_start`, `exec_die`, `export`, `health_status`, `kill`, `oom`, `pause`, `rename`, `resize`, `restart`, `start`, `stop`, `top`, `unpause`, `update`, and `prune` Images report these events: `delete`, `import`, `load`, `pull`, `push`, `save`, `tag`, `untag`, and `prune` Volumes report these events: `create`, `mount`, `unmount`, `destroy`, and `prune` Networks report these events: `create`, `connect`, `disconnect`, `destroy`, `update`, `remove`, and `prune` The Docker daemon reports these events: `reload` Services report these events: `create`, `update`, and `remove` Nodes report these events: `create`, `update`, and `remove` Secrets report these events: `create`, `update`, and `remove` Configs report these events: `create`, `update`, and `remove` The Builder reports `prune` events operationId: "SystemEvents" produces: - "application/json" responses: 200: description: "no error" schema: type: "object" title: "SystemEventsResponse" properties: Type: description: "The type of object emitting the event" type: "string" Action: description: "The type of event" type: "string" Actor: type: "object" properties: ID: description: "The ID of the object emitting the event" type: "string" Attributes: description: "Various key/value attributes of the object, depending on its type" type: "object" additionalProperties: type: "string" time: description: "Timestamp of event" type: "integer" timeNano: description: "Timestamp of event, with nanosecond accuracy" type: "integer" format: "int64" examples: application/json: Type: "container" Action: "create" Actor: ID: "ede54ee1afda366ab42f824e8a5ffd195155d853ceaec74a927f249ea270c743" Attributes: com.example.some-label: "some-label-value" image: "alpine" name: "my-container" time: 1461943101 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "since" in: "query" description: "Show events created since this timestamp then stream new events." type: "string" - name: "until" in: "query" description: "Show events created until this timestamp then stop streaming." type: "string" - name: "filters" in: "query" description: | A JSON encoded value of filters (a `map[string][]string`) to process on the event list. 
Available filters: - `config=<string>` config name or ID - `container=<string>` container name or ID - `daemon=<string>` daemon name or ID - `event=<string>` event type - `image=<string>` image name or ID - `label=<string>` image or container label - `network=<string>` network name or ID - `node=<string>` node ID - `plugin`=<string> plugin name or ID - `scope`=<string> local or swarm - `secret=<string>` secret name or ID - `service=<string>` service name or ID - `type=<string>` object to filter by, one of `container`, `image`, `volume`, `network`, `daemon`, `plugin`, `node`, `service`, `secret` or `config` - `volume=<string>` volume name type: "string" tags: ["System"] /system/df: get: summary: "Get data usage information" operationId: "SystemDataUsage" responses: 200: description: "no error" schema: type: "object" title: "SystemDataUsageResponse" properties: LayersSize: type: "integer" format: "int64" Images: type: "array" items: $ref: "#/definitions/ImageSummary" Containers: type: "array" items: $ref: "#/definitions/ContainerSummary" Volumes: type: "array" items: $ref: "#/definitions/Volume" BuildCache: type: "array" items: $ref: "#/definitions/BuildCache" example: LayersSize: 1092588 Images: - Id: "sha256:2b8fd9751c4c0f5dd266fcae00707e67a2545ef34f9a29354585f93dac906749" ParentId: "" RepoTags: - "busybox:latest" RepoDigests: - "busybox@sha256:a59906e33509d14c036c8678d687bd4eec81ed7c4b8ce907b888c607f6a1e0e6" Created: 1466724217 Size: 1092588 SharedSize: 0 VirtualSize: 1092588 Labels: {} Containers: 1 Containers: - Id: "e575172ed11dc01bfce087fb27bee502db149e1a0fad7c296ad300bbff178148" Names: - "/top" Image: "busybox" ImageID: "sha256:2b8fd9751c4c0f5dd266fcae00707e67a2545ef34f9a29354585f93dac906749" Command: "top" Created: 1472592424 Ports: [] SizeRootFs: 1092588 Labels: {} State: "exited" Status: "Exited (0) 56 minutes ago" HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: IPAMConfig: null Links: null Aliases: null NetworkID: "d687bc59335f0e5c9ee8193e5612e8aee000c8c62ea170cfb99c098f95899d92" EndpointID: "8ed5115aeaad9abb174f68dcf135b49f11daf597678315231a32ca28441dec6a" Gateway: "172.18.0.1" IPAddress: "172.18.0.2" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:12:00:02" Mounts: [] Volumes: - Name: "my-volume" Driver: "local" Mountpoint: "/var/lib/docker/volumes/my-volume/_data" Labels: null Scope: "local" Options: null UsageData: Size: 10920104 RefCount: 2 BuildCache: - ID: "hw53o5aio51xtltp5xjp8v7fx" Parent: "" Type: "regular" Description: "pulled from docker.io/library/debian@sha256:234cb88d3020898631af0ccbbcca9a66ae7306ecd30c9720690858c1b007d2a0" InUse: false Shared: true Size: 0 CreatedAt: "2021-06-28T13:31:01.474619385Z" LastUsedAt: "2021-07-07T22:02:32.738075951Z" UsageCount: 26 - ID: "ndlpt0hhvkqcdfkputsk4cq9c" Parent: "hw53o5aio51xtltp5xjp8v7fx" Type: "regular" Description: "mount / from exec /bin/sh -c echo 'Binary::apt::APT::Keep-Downloaded-Packages \"true\";' > /etc/apt/apt.conf.d/keep-cache" InUse: false Shared: true Size: 51 CreatedAt: "2021-06-28T13:31:03.002625487Z" LastUsedAt: "2021-07-07T22:02:32.773909517Z" UsageCount: 26 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "type" in: "query" description: | Object types, for which to compute and return data. 
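# Sketch: restrict the data-usage report to containers and volumes by repeating
# the "type" parameter (collectionFormat "multi", as declared just below):
#
#   curl --unix-socket /var/run/docker.sock \
#     "http://localhost/system/df?type=container&type=volume"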
type: "array" collectionFormat: multi items: type: "string" enum: ["container", "image", "volume", "build-cache"] tags: ["System"] /images/{name}/get: get: summary: "Export an image" description: | Get a tarball containing all images and metadata for a repository. If `name` is a specific name and tag (e.g. `ubuntu:latest`), then only that image (and its parents) are returned. If `name` is an image ID, similarly only that image (and its parents) are returned, but with the exclusion of the `repositories` file in the tarball, as there were no image names referenced. ### Image tarball format An image tarball contains one directory per image layer (named using its long ID), each containing these files: - `VERSION`: currently `1.0` - the file format version - `json`: detailed layer information, similar to `docker inspect layer_id` - `layer.tar`: A tarfile containing the filesystem changes in this layer The `layer.tar` file contains `aufs` style `.wh..wh.aufs` files and directories for storing attribute changes and deletions. If the tarball defines a repository, the tarball should also include a `repositories` file at the root that contains a list of repository and tag names mapped to layer IDs. ```json { "hello-world": { "latest": "565a9d68a73f6706862bfe8409a7f659776d4d60a8d096eb4a3cbce6999cc2a1" } } ``` operationId: "ImageGet" produces: - "application/x-tar" responses: 200: description: "no error" schema: type: "string" format: "binary" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID" type: "string" required: true tags: ["Image"] /images/get: get: summary: "Export several images" description: | Get a tarball containing all images and metadata for several image repositories. For each value of the `names` parameter: if it is a specific name and tag (e.g. `ubuntu:latest`), then only that image (and its parents) are returned; if it is an image ID, similarly only that image (and its parents) are returned and there would be no names referenced in the 'repositories' file for this image ID. For details on the format, see the [export image endpoint](#operation/ImageGet). operationId: "ImageGetAll" produces: - "application/x-tar" responses: 200: description: "no error" schema: type: "string" format: "binary" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "names" in: "query" description: "Image names to filter by" type: "array" items: type: "string" tags: ["Image"] /images/load: post: summary: "Import images" description: | Load a set of images and tags into a repository. For details on the format, see the [export image endpoint](#operation/ImageGet). operationId: "ImageLoad" consumes: - "application/x-tar" produces: - "application/json" responses: 200: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "imagesTarball" in: "body" description: "Tar archive containing images" schema: type: "string" format: "binary" - name: "quiet" in: "query" description: "Suppress progress details during load." type: "boolean" default: false tags: ["Image"] /containers/{id}/exec: post: summary: "Create an exec instance" description: "Run a command inside a running container." 
operationId: "ContainerExec" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "container is paused" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "execConfig" in: "body" description: "Exec configuration" schema: type: "object" title: "ExecConfig" properties: AttachStdin: type: "boolean" description: "Attach to `stdin` of the exec command." AttachStdout: type: "boolean" description: "Attach to `stdout` of the exec command." AttachStderr: type: "boolean" description: "Attach to `stderr` of the exec command." DetachKeys: type: "string" description: | Override the key sequence for detaching a container. Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,` or `_`. Tty: type: "boolean" description: "Allocate a pseudo-TTY." Env: description: | A list of environment variables in the form `["VAR=value", ...]`. type: "array" items: type: "string" Cmd: type: "array" description: "Command to run, as a string or array of strings." items: type: "string" Privileged: type: "boolean" description: "Runs the exec process with extended privileges." default: false User: type: "string" description: | The user, and optionally, group to run the exec process inside the container. Format is one of: `user`, `user:group`, `uid`, or `uid:gid`. WorkingDir: type: "string" description: | The working directory for the exec process inside the container. example: AttachStdin: false AttachStdout: true AttachStderr: true DetachKeys: "ctrl-p,ctrl-q" Tty: false Cmd: - "date" Env: - "FOO=bar" - "BAZ=quux" required: true - name: "id" in: "path" description: "ID or name of container" type: "string" required: true tags: ["Exec"] /exec/{id}/start: post: summary: "Start an exec instance" description: | Starts a previously set up exec instance. If detach is true, this endpoint returns immediately after starting the command. Otherwise, it sets up an interactive session with the command. operationId: "ExecStart" consumes: - "application/json" produces: - "application/vnd.docker.raw-stream" responses: 200: description: "No error" 404: description: "No such exec instance" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Container is stopped or paused" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "execStartConfig" in: "body" schema: type: "object" title: "ExecStartConfig" properties: Detach: type: "boolean" description: "Detach from the command." Tty: type: "boolean" description: "Allocate a pseudo-TTY." example: Detach: false Tty: false - name: "id" in: "path" description: "Exec instance ID" required: true type: "string" tags: ["Exec"] /exec/{id}/resize: post: summary: "Resize an exec instance" description: | Resize the TTY session used by an exec instance. This endpoint only works if `tty` was specified as part of creating and starting the exec instance. 
operationId: "ExecResize" responses: 201: description: "No error" 404: description: "No such exec instance" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Exec instance ID" required: true type: "string" - name: "h" in: "query" description: "Height of the TTY session in characters" type: "integer" - name: "w" in: "query" description: "Width of the TTY session in characters" type: "integer" tags: ["Exec"] /exec/{id}/json: get: summary: "Inspect an exec instance" description: "Return low-level information about an exec instance." operationId: "ExecInspect" produces: - "application/json" responses: 200: description: "No error" schema: type: "object" title: "ExecInspectResponse" properties: CanRemove: type: "boolean" DetachKeys: type: "string" ID: type: "string" Running: type: "boolean" ExitCode: type: "integer" ProcessConfig: $ref: "#/definitions/ProcessConfig" OpenStdin: type: "boolean" OpenStderr: type: "boolean" OpenStdout: type: "boolean" ContainerID: type: "string" Pid: type: "integer" description: "The system process ID for the exec process." examples: application/json: CanRemove: false ContainerID: "b53ee82b53a40c7dca428523e34f741f3abc51d9f297a14ff874bf761b995126" DetachKeys: "" ExitCode: 2 ID: "f33bbfb39f5b142420f4759b2348913bd4a8d1a6d7fd56499cb41a1bb91d7b3b" OpenStderr: true OpenStdin: true OpenStdout: true ProcessConfig: arguments: - "-c" - "exit 2" entrypoint: "sh" privileged: false tty: true user: "1000" Running: false Pid: 42000 404: description: "No such exec instance" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Exec instance ID" required: true type: "string" tags: ["Exec"] /volumes: get: summary: "List volumes" operationId: "VolumeList" produces: ["application/json"] responses: 200: description: "Summary volume data that matches the query" schema: type: "object" title: "VolumeListResponse" description: "Volume list response" required: [Volumes, Warnings] properties: Volumes: type: "array" x-nullable: false description: "List of volumes" items: $ref: "#/definitions/Volume" Warnings: type: "array" x-nullable: false description: | Warnings that occurred when fetching the list of volumes. items: type: "string" examples: application/json: Volumes: - CreatedAt: "2017-07-19T12:00:26Z" Name: "tardis" Driver: "local" Mountpoint: "/var/lib/docker/volumes/tardis" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Scope: "local" Options: device: "tmpfs" o: "size=100m,uid=1000" type: "tmpfs" Warnings: [] 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" description: | JSON encoded value of the filters (a `map[string][]string`) to process on the volumes list. Available filters: - `dangling=<boolean>` When set to `true` (or `1`), returns all volumes that are not in use by a container. When set to `false` (or `0`), only volumes that are in use by one or more containers are returned. - `driver=<volume-driver-name>` Matches volumes based on their driver. - `label=<key>` or `label=<key>:<value>` Matches volumes based on the presence of a `label` alone or a `label` and a value. - `name=<volume-name>` Matches all or part of a volume name. 
type: "string" format: "json" tags: ["Volume"] /volumes/create: post: summary: "Create a volume" operationId: "VolumeCreate" consumes: ["application/json"] produces: ["application/json"] responses: 201: description: "The volume was created successfully" schema: $ref: "#/definitions/Volume" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "volumeConfig" in: "body" required: true description: "Volume configuration" schema: type: "object" description: "Volume configuration" title: "VolumeConfig" properties: Name: description: | The new volume's name. If not specified, Docker generates a name. type: "string" x-nullable: false Driver: description: "Name of the volume driver to use." type: "string" default: "local" x-nullable: false DriverOpts: description: | A mapping of driver options and values. These options are passed directly to the driver and are driver specific. type: "object" additionalProperties: type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" example: Name: "tardis" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Driver: "custom" tags: ["Volume"] /volumes/{name}: get: summary: "Inspect a volume" operationId: "VolumeInspect" produces: ["application/json"] responses: 200: description: "No error" schema: $ref: "#/definitions/Volume" 404: description: "No such volume" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" required: true description: "Volume name or ID" type: "string" tags: ["Volume"] delete: summary: "Remove a volume" description: "Instruct the driver to remove the volume." operationId: "VolumeDelete" responses: 204: description: "The volume was removed" 404: description: "No such volume or volume driver" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Volume is in use and cannot be removed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" required: true description: "Volume name or ID" type: "string" - name: "force" in: "query" description: "Force the removal of the volume" type: "boolean" default: false tags: ["Volume"] /volumes/prune: post: summary: "Delete unused volumes" produces: - "application/json" operationId: "VolumePrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune volumes with (or without, in case `label!=...` is used) the specified labels. type: "string" responses: 200: description: "No error" schema: type: "object" title: "VolumePruneResponse" properties: VolumesDeleted: description: "Volumes that were deleted" type: "array" items: type: "string" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Volume"] /networks: get: summary: "List networks" description: | Returns a list of networks. For details on the format, see the [network inspect endpoint](#operation/NetworkInspect). Note that it uses a different, smaller representation of a network than inspecting a single network. 
For example, the list of containers attached to the network is not propagated in API versions 1.28 and up. operationId: "NetworkList" produces: - "application/json" responses: 200: description: "No error" schema: type: "array" items: $ref: "#/definitions/Network" examples: application/json: - Name: "bridge" Id: "f2de39df4171b0dc801e8002d1d999b77256983dfc63041c0f34030aa3977566" Created: "2016-10-19T06:21:00.416543526Z" Scope: "local" Driver: "bridge" EnableIPv6: false Internal: false Attachable: false Ingress: false IPAM: Driver: "default" Config: - Subnet: "172.17.0.0/16" Options: com.docker.network.bridge.default_bridge: "true" com.docker.network.bridge.enable_icc: "true" com.docker.network.bridge.enable_ip_masquerade: "true" com.docker.network.bridge.host_binding_ipv4: "0.0.0.0" com.docker.network.bridge.name: "docker0" com.docker.network.driver.mtu: "1500" - Name: "none" Id: "e086a3893b05ab69242d3c44e49483a3bbbd3a26b46baa8f61ab797c1088d794" Created: "0001-01-01T00:00:00Z" Scope: "local" Driver: "null" EnableIPv6: false Internal: false Attachable: false Ingress: false IPAM: Driver: "default" Config: [] Containers: {} Options: {} - Name: "host" Id: "13e871235c677f196c4e1ecebb9dc733b9b2d2ab589e30c539efeda84a24215e" Created: "0001-01-01T00:00:00Z" Scope: "local" Driver: "host" EnableIPv6: false Internal: false Attachable: false Ingress: false IPAM: Driver: "default" Config: [] Containers: {} Options: {} 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" description: | JSON encoded value of the filters (a `map[string][]string`) to process on the networks list. Available filters: - `dangling=<boolean>` When set to `true` (or `1`), returns all networks that are not in use by a container. When set to `false` (or `0`), only networks that are in use by one or more containers are returned. - `driver=<driver-name>` Matches a network's driver. - `id=<network-id>` Matches all or part of a network ID. - `label=<key>` or `label=<key>=<value>` of a network label. - `name=<network-name>` Matches all or part of a network name. - `scope=["swarm"|"global"|"local"]` Filters networks by scope (`swarm`, `global`, or `local`). - `type=["custom"|"builtin"]` Filters networks by type. The `custom` keyword returns all user-defined networks. 
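# Sketch: list only user-defined networks via the "type" filter, i.e. the JSON map
# {"type":["custom"]} percent-encoded into the query string:
#
#   curl --unix-socket /var/run/docker.sock \
#     "http://localhost/networks?filters=%7B%22type%22%3A%5B%22custom%22%5D%7D"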
type: "string" tags: ["Network"] /networks/{id}: get: summary: "Inspect a network" operationId: "NetworkInspect" produces: - "application/json" responses: 200: description: "No error" schema: $ref: "#/definitions/Network" 404: description: "Network not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" - name: "verbose" in: "query" description: "Detailed inspect output for troubleshooting" type: "boolean" default: false - name: "scope" in: "query" description: "Filter the network by scope (swarm, global, or local)" type: "string" tags: ["Network"] delete: summary: "Remove a network" operationId: "NetworkDelete" responses: 204: description: "No error" 403: description: "operation not supported for pre-defined networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such network" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" tags: ["Network"] /networks/create: post: summary: "Create a network" operationId: "NetworkCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "No error" schema: type: "object" title: "NetworkCreateResponse" properties: Id: description: "The ID of the created network." type: "string" Warning: type: "string" example: Id: "22be93d5babb089c5aab8dbc369042fad48ff791584ca2da2100db837a1c7c30" Warning: "" 403: description: "operation not supported for pre-defined networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "plugin not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "networkConfig" in: "body" description: "Network configuration" required: true schema: type: "object" title: "NetworkCreateRequest" required: ["Name"] properties: Name: description: "The network's name." type: "string" CheckDuplicate: description: | Check for networks with duplicate names. Since Network is primarily keyed based on a random ID and not on the name, and network name is strictly a user-friendly alias to the network which is uniquely identified using ID, there is no guaranteed way to check for duplicates. CheckDuplicate is there to provide a best effort checking of any networks which has the same name but it is not guaranteed to catch all name collisions. type: "boolean" Driver: description: "Name of the network driver plugin to use." type: "string" default: "bridge" Internal: description: "Restrict external access to the network." type: "boolean" Attachable: description: | Globally scoped network is manually attachable by regular containers from workers in swarm mode. type: "boolean" Ingress: description: | Ingress network is the network which provides the routing-mesh in swarm mode. type: "boolean" IPAM: description: "Optional custom IP scheme for the network." $ref: "#/definitions/IPAM" EnableIPv6: description: "Enable IPv6 on the network." type: "boolean" Options: description: "Network specific options to be used by the drivers." type: "object" additionalProperties: type: "string" Labels: description: "User-defined key/value metadata." 
type: "object" additionalProperties: type: "string" example: Name: "isolated_nw" CheckDuplicate: false Driver: "bridge" EnableIPv6: true IPAM: Driver: "default" Config: - Subnet: "172.20.0.0/16" IPRange: "172.20.10.0/24" Gateway: "172.20.10.11" - Subnet: "2001:db8:abcd::/64" Gateway: "2001:db8:abcd::1011" Options: foo: "bar" Internal: true Attachable: false Ingress: false Options: com.docker.network.bridge.default_bridge: "true" com.docker.network.bridge.enable_icc: "true" com.docker.network.bridge.enable_ip_masquerade: "true" com.docker.network.bridge.host_binding_ipv4: "0.0.0.0" com.docker.network.bridge.name: "docker0" com.docker.network.driver.mtu: "1500" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" tags: ["Network"] /networks/{id}/connect: post: summary: "Connect a container to a network" operationId: "NetworkConnect" consumes: - "application/json" responses: 200: description: "No error" 403: description: "Operation not supported for swarm scoped networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "Network or container not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" - name: "container" in: "body" required: true schema: type: "object" title: "NetworkConnectRequest" properties: Container: type: "string" description: "The ID or name of the container to connect to the network." EndpointConfig: $ref: "#/definitions/EndpointSettings" example: Container: "3613f73ba0e4" EndpointConfig: IPAMConfig: IPv4Address: "172.24.56.89" IPv6Address: "2001:db8::5689" tags: ["Network"] /networks/{id}/disconnect: post: summary: "Disconnect a container from a network" operationId: "NetworkDisconnect" consumes: - "application/json" responses: 200: description: "No error" 403: description: "Operation not supported for swarm scoped networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "Network or container not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" - name: "container" in: "body" required: true schema: type: "object" title: "NetworkDisconnectRequest" properties: Container: type: "string" description: | The ID or name of the container to disconnect from the network. Force: type: "boolean" description: | Force the container to disconnect from the network. tags: ["Network"] /networks/prune: post: summary: "Delete unused networks" produces: - "application/json" operationId: "NetworkPrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `until=<timestamp>` Prune networks created before this timestamp. The `<timestamp>` can be Unix timestamps, date formatted timestamps, or Go duration strings (e.g. `10m`, `1h30m`) computed relative to the daemon machine’s time. - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune networks with (or without, in case `label!=...` is used) the specified labels. 
type: "string" responses: 200: description: "No error" schema: type: "object" title: "NetworkPruneResponse" properties: NetworksDeleted: description: "Networks that were deleted" type: "array" items: type: "string" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Network"] /plugins: get: summary: "List plugins" operationId: "PluginList" description: "Returns information about installed plugins." produces: ["application/json"] responses: 200: description: "No error" schema: type: "array" items: $ref: "#/definitions/Plugin" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the plugin list. Available filters: - `capability=<capability name>` - `enable=<true>|<false>` tags: ["Plugin"] /plugins/privileges: get: summary: "Get plugin privileges" operationId: "GetPluginPrivileges" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/PluginPrivilegeItem" example: - Name: "network" Description: "" Value: - "host" - Name: "mount" Description: "" Value: - "/data" - Name: "device" Description: "" Value: - "/dev/cpu_dma_latency" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "remote" in: "query" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" tags: - "Plugin" /plugins/pull: post: summary: "Install a plugin" operationId: "PluginPull" description: | Pulls and installs a plugin. After the plugin is installed, it can be enabled using the [`POST /plugins/{name}/enable` endpoint](#operation/PostPluginsEnable). produces: - "application/json" responses: 204: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "remote" in: "query" description: | Remote reference for plugin to install. The `:latest` tag is optional, and is used as the default if omitted. required: true type: "string" - name: "name" in: "query" description: | Local name for the pulled plugin. The `:latest` tag is optional, and is used as the default if omitted. required: false type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration to use when pulling a plugin from a registry. Refer to the [authentication section](#section/Authentication) for details. type: "string" - name: "body" in: "body" schema: type: "array" items: $ref: "#/definitions/PluginPrivilegeItem" example: - Name: "network" Description: "" Value: - "host" - Name: "mount" Description: "" Value: - "/data" - Name: "device" Description: "" Value: - "/dev/cpu_dma_latency" tags: ["Plugin"] /plugins/{name}/json: get: summary: "Inspect a plugin" operationId: "PluginInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Plugin" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. 
required: true type: "string" tags: ["Plugin"] /plugins/{name}: delete: summary: "Remove a plugin" operationId: "PluginDelete" responses: 200: description: "no error" schema: $ref: "#/definitions/Plugin" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "force" in: "query" description: | Disable the plugin before removing. This may result in issues if the plugin is in use by a container. type: "boolean" default: false tags: ["Plugin"] /plugins/{name}/enable: post: summary: "Enable a plugin" operationId: "PluginEnable" responses: 200: description: "no error" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "timeout" in: "query" description: "Set the HTTP client timeout (in seconds)" type: "integer" default: 0 tags: ["Plugin"] /plugins/{name}/disable: post: summary: "Disable a plugin" operationId: "PluginDisable" responses: 200: description: "no error" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" tags: ["Plugin"] /plugins/{name}/upgrade: post: summary: "Upgrade a plugin" operationId: "PluginUpgrade" responses: 204: description: "no error" 404: description: "plugin not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "remote" in: "query" description: | Remote reference to upgrade to. The `:latest` tag is optional, and is used as the default if omitted. required: true type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration to use when pulling a plugin from a registry. Refer to the [authentication section](#section/Authentication) for details. type: "string" - name: "body" in: "body" schema: type: "array" items: $ref: "#/definitions/PluginPrivilegeItem" example: - Name: "network" Description: "" Value: - "host" - Name: "mount" Description: "" Value: - "/data" - Name: "device" Description: "" Value: - "/dev/cpu_dma_latency" tags: ["Plugin"] /plugins/create: post: summary: "Create a plugin" operationId: "PluginCreate" consumes: - "application/x-tar" responses: 204: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "query" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. 
required: true type: "string" - name: "tarContext" in: "body" description: "Path to tar containing plugin rootfs and manifest" schema: type: "string" format: "binary" tags: ["Plugin"] /plugins/{name}/push: post: summary: "Push a plugin" operationId: "PluginPush" description: | Push a plugin to the registry. parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" responses: 200: description: "no error" 404: description: "plugin not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Plugin"] /plugins/{name}/set: post: summary: "Configure a plugin" operationId: "PluginSet" consumes: - "application/json" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "body" in: "body" schema: type: "array" items: type: "string" example: ["DEBUG=1"] responses: 204: description: "No error" 404: description: "Plugin not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Plugin"] /nodes: get: summary: "List nodes" operationId: "NodeList" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Node" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" description: | Filters to process on the nodes list, encoded as JSON (a `map[string][]string`). Available filters: - `id=<node id>` - `label=<engine label>` - `membership=`(`accepted`|`pending`)` - `name=<node name>` - `node.label=<node label>` - `role=`(`manager`|`worker`)` type: "string" tags: ["Node"] /nodes/{id}: get: summary: "Inspect a node" operationId: "NodeInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Node" 404: description: "no such node" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the node" type: "string" required: true tags: ["Node"] delete: summary: "Delete a node" operationId: "NodeDelete" responses: 200: description: "no error" 404: description: "no such node" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the node" type: "string" required: true - name: "force" in: "query" description: "Force remove a node from the swarm" default: false type: "boolean" tags: ["Node"] /nodes/{id}/update: post: summary: "Update a node" operationId: "NodeUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such node" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID of 
the node" type: "string" required: true - name: "body" in: "body" schema: $ref: "#/definitions/NodeSpec" - name: "version" in: "query" description: | The version number of the node object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true tags: ["Node"] /swarm: get: summary: "Inspect swarm" operationId: "SwarmInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Swarm" 404: description: "no such swarm" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" tags: ["Swarm"] /swarm/init: post: summary: "Initialize a new swarm" operationId: "SwarmInit" produces: - "application/json" - "text/plain" responses: 200: description: "no error" schema: description: "The node ID" type: "string" example: "7v2t30z9blmxuhnyo6s4cpenp" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is already part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: type: "object" title: "SwarmInitRequest" properties: ListenAddr: description: | Listen address used for inter-manager communication, as well as determining the networking interface used for the VXLAN Tunnel Endpoint (VTEP). This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the default swarm listening port is used. type: "string" AdvertiseAddr: description: | Externally reachable address advertised to other nodes. This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the port number from the listen address is used. If `AdvertiseAddr` is not specified, it will be automatically detected when possible. type: "string" DataPathAddr: description: | Address or interface to use for data path traffic (format: `<ip|interface>`), for example, `192.168.1.1`, or an interface, like `eth0`. If `DataPathAddr` is unspecified, the same address as `AdvertiseAddr` is used. The `DataPathAddr` specifies the address that global scope network drivers will publish towards other nodes in order to reach the containers running on this node. Using this parameter it is possible to separate the container data traffic from the management traffic of the cluster. type: "string" DataPathPort: description: | DataPathPort specifies the data path port number for data traffic. Acceptable port range is 1024 to 49151. if no port is set or is set to 0, default port 4789 will be used. type: "integer" format: "uint32" DefaultAddrPool: description: | Default Address Pool specifies default subnet pools for global scope networks. type: "array" items: type: "string" example: ["10.10.0.0/16", "20.20.0.0/16"] ForceNewCluster: description: "Force creation of a new swarm." type: "boolean" SubnetSize: description: | SubnetSize specifies the subnet size of the networks created from the default subnet pool. 
type: "integer" format: "uint32" Spec: $ref: "#/definitions/SwarmSpec" example: ListenAddr: "0.0.0.0:2377" AdvertiseAddr: "192.168.1.1:2377" DataPathPort: 4789 DefaultAddrPool: ["10.10.0.0/8", "20.20.0.0/8"] SubnetSize: 24 ForceNewCluster: false Spec: Orchestration: {} Raft: {} Dispatcher: {} CAConfig: {} EncryptionConfig: AutoLockManagers: false tags: ["Swarm"] /swarm/join: post: summary: "Join an existing swarm" operationId: "SwarmJoin" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is already part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: type: "object" title: "SwarmJoinRequest" properties: ListenAddr: description: | Listen address used for inter-manager communication if the node gets promoted to manager, as well as determining the networking interface used for the VXLAN Tunnel Endpoint (VTEP). type: "string" AdvertiseAddr: description: | Externally reachable address advertised to other nodes. This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the port number from the listen address is used. If `AdvertiseAddr` is not specified, it will be automatically detected when possible. type: "string" DataPathAddr: description: | Address or interface to use for data path traffic (format: `<ip|interface>`), for example, `192.168.1.1`, or an interface, like `eth0`. If `DataPathAddr` is unspecified, the same address as `AdvertiseAddr` is used. The `DataPathAddr` specifies the address that global scope network drivers will publish towards other nodes in order to reach the containers running on this node. Using this parameter it is possible to separate the container data traffic from the management traffic of the cluster. type: "string" RemoteAddrs: description: | Addresses of manager nodes already participating in the swarm. type: "array" items: type: "string" JoinToken: description: "Secret token for joining this swarm." type: "string" example: ListenAddr: "0.0.0.0:2377" AdvertiseAddr: "192.168.1.1:2377" RemoteAddrs: - "node1:2377" JoinToken: "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-7p73s1dx5in4tatdymyhg9hu2" tags: ["Swarm"] /swarm/leave: post: summary: "Leave a swarm" operationId: "SwarmLeave" responses: 200: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "force" description: | Force leave swarm, even if this is the last manager or that it will break the cluster. in: "query" type: "boolean" default: false tags: ["Swarm"] /swarm/update: post: summary: "Update a swarm" operationId: "SwarmUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: $ref: "#/definitions/SwarmSpec" - name: "version" in: "query" description: | The version number of the swarm object being updated. This is required to avoid conflicting writes. 
type: "integer" format: "int64" required: true - name: "rotateWorkerToken" in: "query" description: "Rotate the worker join token." type: "boolean" default: false - name: "rotateManagerToken" in: "query" description: "Rotate the manager join token." type: "boolean" default: false - name: "rotateManagerUnlockKey" in: "query" description: "Rotate the manager unlock key." type: "boolean" default: false tags: ["Swarm"] /swarm/unlockkey: get: summary: "Get the unlock key" operationId: "SwarmUnlockkey" consumes: - "application/json" responses: 200: description: "no error" schema: type: "object" title: "UnlockKeyResponse" properties: UnlockKey: description: "The swarm's unlock key." type: "string" example: UnlockKey: "SWMKEY-1-7c37Cc8654o6p38HnroywCi19pllOnGtbdZEgtKxZu8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" tags: ["Swarm"] /swarm/unlock: post: summary: "Unlock a locked manager" operationId: "SwarmUnlock" consumes: - "application/json" produces: - "application/json" parameters: - name: "body" in: "body" required: true schema: type: "object" title: "SwarmUnlockRequest" properties: UnlockKey: description: "The swarm's unlock key." type: "string" example: UnlockKey: "SWMKEY-1-7c37Cc8654o6p38HnroywCi19pllOnGtbdZEgtKxZu8" responses: 200: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" tags: ["Swarm"] /services: get: summary: "List services" operationId: "ServiceList" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Service" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the services list. Available filters: - `id=<service id>` - `label=<service label>` - `mode=["replicated"|"global"]` - `name=<service name>` - name: "status" in: "query" type: "boolean" description: | Include service status, with count of running and desired tasks. tags: ["Service"] /services/create: post: summary: "Create a service" operationId: "ServiceCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: type: "object" title: "ServiceCreateResponse" properties: ID: description: "The ID of the created service." 
type: "string" Warning: description: "Optional warning message" type: "string" example: ID: "ak7w3gjqoa3kuz8xcpnyy0pvl" Warning: "unable to pin image doesnotexist:latest to digest: image library/doesnotexist:latest not found" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 403: description: "network is not eligible for services" schema: $ref: "#/definitions/ErrorResponse" 409: description: "name conflicts with an existing service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: allOf: - $ref: "#/definitions/ServiceSpec" - type: "object" example: Name: "web" TaskTemplate: ContainerSpec: Image: "nginx:alpine" Mounts: - ReadOnly: true Source: "web-data" Target: "/usr/share/nginx/html" Type: "volume" VolumeOptions: DriverConfig: {} Labels: com.example.something: "something-value" Hosts: ["10.10.10.10 host1", "ABCD:EF01:2345:6789:ABCD:EF01:2345:6789 host2"] User: "33" DNSConfig: Nameservers: ["8.8.8.8"] Search: ["example.org"] Options: ["timeout:3"] Secrets: - File: Name: "www.example.org.key" UID: "33" GID: "33" Mode: 384 SecretID: "fpjqlhnwb19zds35k8wn80lq9" SecretName: "example_org_domain_key" LogDriver: Name: "json-file" Options: max-file: "3" max-size: "10M" Placement: {} Resources: Limits: MemoryBytes: 104857600 Reservations: {} RestartPolicy: Condition: "on-failure" Delay: 10000000000 MaxAttempts: 10 Mode: Replicated: Replicas: 4 UpdateConfig: Parallelism: 2 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 RollbackConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 EndpointSpec: Ports: - Protocol: "tcp" PublishedPort: 8080 TargetPort: 80 Labels: foo: "bar" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration for pulling from private registries. Refer to the [authentication section](#section/Authentication) for details. type: "string" tags: ["Service"] /services/{id}: get: summary: "Inspect a service" operationId: "ServiceInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Service" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID or name of service." required: true type: "string" - name: "insertDefaults" in: "query" description: "Fill empty fields with default values." type: "boolean" default: false tags: ["Service"] delete: summary: "Delete a service" operationId: "ServiceDelete" responses: 200: description: "no error" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID or name of service." 
required: true type: "string" tags: ["Service"] /services/{id}/update: post: summary: "Update a service" operationId: "ServiceUpdate" consumes: ["application/json"] produces: ["application/json"] responses: 200: description: "no error" schema: $ref: "#/definitions/ServiceUpdateResponse" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID or name of service." required: true type: "string" - name: "body" in: "body" required: true schema: allOf: - $ref: "#/definitions/ServiceSpec" - type: "object" example: Name: "top" TaskTemplate: ContainerSpec: Image: "busybox" Args: - "top" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ForceUpdate: 0 Mode: Replicated: Replicas: 1 UpdateConfig: Parallelism: 2 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 RollbackConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 EndpointSpec: Mode: "vip" - name: "version" in: "query" description: | The version number of the service object being updated. This is required to avoid conflicting writes. This version number should be the value as currently set on the service *before* the update. You can find the current version by calling `GET /services/{id}` required: true type: "integer" - name: "registryAuthFrom" in: "query" description: | If the `X-Registry-Auth` header is not specified, this parameter indicates where to find registry authorization credentials. type: "string" enum: ["spec", "previous-spec"] default: "spec" - name: "rollback" in: "query" description: | Set to this parameter to `previous` to cause a server-side rollback to the previous service spec. The supplied spec will be ignored in this case. type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration for pulling from private registries. Refer to the [authentication section](#section/Authentication) for details. type: "string" tags: ["Service"] /services/{id}/logs: get: summary: "Get service logs" description: | Get `stdout` and `stderr` logs from a service. See also [`/containers/{id}/logs`](#operation/ContainerLogs). **Note**: This endpoint works only for services with the `local`, `json-file` or `journald` logging drivers. operationId: "ServiceLogs" responses: 200: description: "logs returned as a stream in response body" schema: type: "string" format: "binary" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such service: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the service" type: "string" - name: "details" in: "query" description: "Show service context and extra details provided to logs." type: "boolean" default: false - name: "follow" in: "query" description: "Keep connection after returning logs." 
type: "boolean" default: false - name: "stdout" in: "query" description: "Return logs from `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Return logs from `stderr`" type: "boolean" default: false - name: "since" in: "query" description: "Only return logs since this time, as a UNIX timestamp" type: "integer" default: 0 - name: "timestamps" in: "query" description: "Add timestamps to every log line" type: "boolean" default: false - name: "tail" in: "query" description: | Only return this number of log lines from the end of the logs. Specify as an integer or `all` to output all log lines. type: "string" default: "all" tags: ["Service"] /tasks: get: summary: "List tasks" operationId: "TaskList" produces: - "application/json" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Task" example: - ID: "0kzzo1i0y4jz6027t0k7aezc7" Version: Index: 71 CreatedAt: "2016-06-07T21:07:31.171892745Z" UpdatedAt: "2016-06-07T21:07:31.376370513Z" Spec: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ServiceID: "9mnpnzenvg8p8tdbtq4wvbkcz" Slot: 1 NodeID: "60gvrl6tm78dmak4yl7srz94v" Status: Timestamp: "2016-06-07T21:07:31.290032978Z" State: "running" Message: "started" ContainerStatus: ContainerID: "e5d62702a1b48d01c3e02ca1e0212a250801fa8d67caca0b6f35919ebc12f035" PID: 677 DesiredState: "running" NetworksAttachments: - Network: ID: "4qvuz4ko70xaltuqbt8956gd1" Version: Index: 18 CreatedAt: "2016-06-07T20:31:11.912919752Z" UpdatedAt: "2016-06-07T21:07:29.955277358Z" Spec: Name: "ingress" Labels: com.docker.swarm.internal: "true" DriverConfiguration: {} IPAMOptions: Driver: {} Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" DriverState: Name: "overlay" Options: com.docker.network.driver.overlay.vxlanid_list: "256" IPAMOptions: Driver: Name: "default" Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" Addresses: - "10.255.0.10/16" - ID: "1yljwbmlr8er2waf8orvqpwms" Version: Index: 30 CreatedAt: "2016-06-07T21:07:30.019104782Z" UpdatedAt: "2016-06-07T21:07:30.231958098Z" Name: "hopeful_cori" Spec: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ServiceID: "9mnpnzenvg8p8tdbtq4wvbkcz" Slot: 1 NodeID: "60gvrl6tm78dmak4yl7srz94v" Status: Timestamp: "2016-06-07T21:07:30.202183143Z" State: "shutdown" Message: "shutdown" ContainerStatus: ContainerID: "1cf8d63d18e79668b0004a4be4c6ee58cddfad2dae29506d8781581d0688a213" DesiredState: "shutdown" NetworksAttachments: - Network: ID: "4qvuz4ko70xaltuqbt8956gd1" Version: Index: 18 CreatedAt: "2016-06-07T20:31:11.912919752Z" UpdatedAt: "2016-06-07T21:07:29.955277358Z" Spec: Name: "ingress" Labels: com.docker.swarm.internal: "true" DriverConfiguration: {} IPAMOptions: Driver: {} Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" DriverState: Name: "overlay" Options: com.docker.network.driver.overlay.vxlanid_list: "256" IPAMOptions: Driver: Name: "default" Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" Addresses: - "10.255.0.5/16" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the tasks list. 
Available filters: - `desired-state=(running | shutdown | accepted)` - `id=<task id>` - `label=key` or `label="key=value"` - `name=<task name>` - `node=<node id or name>` - `service=<service name>` tags: ["Task"] /tasks/{id}: get: summary: "Inspect a task" operationId: "TaskInspect" produces: - "application/json" responses: 200: description: "no error" schema: $ref: "#/definitions/Task" 404: description: "no such task" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID of the task" required: true type: "string" tags: ["Task"] /tasks/{id}/logs: get: summary: "Get task logs" description: | Get `stdout` and `stderr` logs from a task. See also [`/containers/{id}/logs`](#operation/ContainerLogs). **Note**: This endpoint works only for services with the `local`, `json-file` or `journald` logging drivers. operationId: "TaskLogs" responses: 200: description: "logs returned as a stream in response body" schema: type: "string" format: "binary" 404: description: "no such task" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such task: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID of the task" type: "string" - name: "details" in: "query" description: "Show task context and extra details provided to logs." type: "boolean" default: false - name: "follow" in: "query" description: "Keep connection after returning logs." type: "boolean" default: false - name: "stdout" in: "query" description: "Return logs from `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Return logs from `stderr`" type: "boolean" default: false - name: "since" in: "query" description: "Only return logs since this time, as a UNIX timestamp" type: "integer" default: 0 - name: "timestamps" in: "query" description: "Add timestamps to every log line" type: "boolean" default: false - name: "tail" in: "query" description: | Only return this number of log lines from the end of the logs. Specify as an integer or `all` to output all log lines. type: "string" default: "all" tags: ["Task"] /secrets: get: summary: "List secrets" operationId: "SecretList" produces: - "application/json" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Secret" example: - ID: "blt1owaxmitz71s9v5zh81zun" Version: Index: 85 CreatedAt: "2017-07-20T13:55:28.678958722Z" UpdatedAt: "2017-07-20T13:55:28.678958722Z" Spec: Name: "mysql-passwd" Labels: some.label: "some.value" Driver: Name: "secret-bucket" Options: OptionA: "value for driver option A" OptionB: "value for driver option B" - ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "app-dev.crt" Labels: foo: "bar" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the secrets list. 
Available filters: - `id=<secret id>` - `label=<key> or label=<key>=value` - `name=<secret name>` - `names=<secret name>` tags: ["Secret"] /secrets/create: post: summary: "Create a secret" operationId: "SecretCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 409: description: "name conflicts with an existing object" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" schema: allOf: - $ref: "#/definitions/SecretSpec" - type: "object" example: Name: "app-key.crt" Labels: foo: "bar" Data: "VEhJUyBJUyBOT1QgQSBSRUFMIENFUlRJRklDQVRFCg==" Driver: Name: "secret-bucket" Options: OptionA: "value for driver option A" OptionB: "value for driver option B" tags: ["Secret"] /secrets/{id}: get: summary: "Inspect a secret" operationId: "SecretInspect" produces: - "application/json" responses: 200: description: "no error" schema: $ref: "#/definitions/Secret" examples: application/json: ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "app-dev.crt" Labels: foo: "bar" Driver: Name: "secret-bucket" Options: OptionA: "value for driver option A" OptionB: "value for driver option B" 404: description: "secret not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the secret" tags: ["Secret"] delete: summary: "Delete a secret" operationId: "SecretDelete" produces: - "application/json" responses: 204: description: "no error" 404: description: "secret not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the secret" tags: ["Secret"] /secrets/{id}/update: post: summary: "Update a Secret" operationId: "SecretUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such secret" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the secret" type: "string" required: true - name: "body" in: "body" schema: $ref: "#/definitions/SecretSpec" description: | The spec of the secret to update. Currently, only the Labels field can be updated. All other fields must remain unchanged from the [SecretInspect endpoint](#operation/SecretInspect) response values. - name: "version" in: "query" description: | The version number of the secret object being updated. This is required to avoid conflicting writes. 
type: "integer" format: "int64" required: true tags: ["Secret"] /configs: get: summary: "List configs" operationId: "ConfigList" produces: - "application/json" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Config" example: - ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "server.conf" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the configs list. Available filters: - `id=<config id>` - `label=<key> or label=<key>=value` - `name=<config name>` - `names=<config name>` tags: ["Config"] /configs/create: post: summary: "Create a config" operationId: "ConfigCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 409: description: "name conflicts with an existing object" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" schema: allOf: - $ref: "#/definitions/ConfigSpec" - type: "object" example: Name: "server.conf" Labels: foo: "bar" Data: "VEhJUyBJUyBOT1QgQSBSRUFMIENFUlRJRklDQVRFCg==" tags: ["Config"] /configs/{id}: get: summary: "Inspect a config" operationId: "ConfigInspect" produces: - "application/json" responses: 200: description: "no error" schema: $ref: "#/definitions/Config" examples: application/json: ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "app-dev.crt" 404: description: "config not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the config" tags: ["Config"] delete: summary: "Delete a config" operationId: "ConfigDelete" produces: - "application/json" responses: 204: description: "no error" 404: description: "config not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the config" tags: ["Config"] /configs/{id}/update: post: summary: "Update a Config" operationId: "ConfigUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such config" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the config" type: "string" required: true - name: "body" in: "body" schema: $ref: "#/definitions/ConfigSpec" description: | The spec of the config to update. 
Currently, only the Labels field can be updated. All other fields must remain unchanged from the [ConfigInspect endpoint](#operation/ConfigInspect) response values. - name: "version" in: "query" description: | The version number of the config object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true tags: ["Config"] /distribution/{name}/json: get: summary: "Get image information from the registry" description: | Return image digest and platform information by contacting the registry. operationId: "DistributionInspect" produces: - "application/json" responses: 200: description: "descriptor and platform information" schema: type: "object" x-go-name: DistributionInspect title: "DistributionInspectResponse" required: [Descriptor, Platforms] properties: Descriptor: type: "object" description: | A descriptor struct containing digest, media type, and size. properties: mediaType: type: "string" size: type: "integer" format: "int64" digest: type: "string" urls: type: "array" items: type: "string" Platforms: type: "array" description: | An array containing all platforms supported by the image. items: type: "object" properties: architecture: type: "string" os: type: "string" os.version: type: "string" os.features: type: "array" items: type: "string" variant: type: "string" Features: type: "array" items: type: "string" examples: application/json: Descriptor: MediaType: "application/vnd.docker.distribution.manifest.v2+json" Digest: "sha256:c0537ff6a5218ef531ece93d4984efc99bbf3f7497c0a7726c88e2bb7584dc96" Size: 3987495 URLs: - "" Platforms: - architecture: "amd64" os: "linux" os.version: "" os.features: - "" variant: "" Features: - "" 401: description: "Failed authentication or no image found" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such image: someimage (tag: latest)" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or id" type: "string" required: true tags: ["Distribution"] /session: post: summary: "Initialize interactive session" description: | Start a new interactive session with a server. Session allows server to call back to the client for advanced capabilities. ### Hijacking This endpoint hijacks the HTTP connection to HTTP2 transport that allows the client to expose gPRC services on that connection. For example, the client sends this request to upgrade the connection: ``` POST /session HTTP/1.1 Upgrade: h2c Connection: Upgrade ``` The Docker daemon responds with a `101 UPGRADED` response follow with the raw stream: ``` HTTP/1.1 101 UPGRADED Connection: Upgrade Upgrade: h2c ``` operationId: "Session" produces: - "application/vnd.docker.raw-stream" responses: 101: description: "no error, hijacking successful" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Session"]
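For orientation, the sketch below shows how the `/services/{id}/update` flow documented above is typically driven from Go with the official `github.com/docker/docker/client` package: the current spec and its version index are read first, then echoed back with the update, mirroring the required `version` query parameter. This is a minimal sketch and not part of the PR; the service name `web` and the replica count are placeholder values, not taken from this document.

```go
// Minimal sketch (assumed usage of the official Go client, not part of the PR):
// drive POST /services/{id}/update. "web" and the replica count are placeholders.
package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}
	defer cli.Close()
	ctx := context.Background()

	// GET /services/{id} first: the update must send back the version index
	// that was read, otherwise the daemon rejects it as a conflicting write.
	svc, _, err := cli.ServiceInspectWithRaw(ctx, "web", types.ServiceInspectOptions{})
	if err != nil {
		panic(err)
	}

	// Adjust the spec; here we bump the replica count of a replicated service.
	if svc.Spec.Mode.Replicated != nil {
		replicas := uint64(4)
		svc.Spec.Mode.Replicated.Replicas = &replicas
	}

	resp, err := cli.ServiceUpdate(ctx, svc.ID, svc.Version, svc.Spec, types.ServiceUpdateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("update warnings:", resp.Warnings)
}
```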
gesellix
41568dfc665ad4033d7fc860a5dac8102915008e
9bc0c4903f7f02ce287b9918f64795368e507f9d
Perhaps use the same name as the Go type (`PluginPrivilege`)? https://github.com/moby/moby/blob/7b9275c0da707b030e62c96b679a976f31f929d3/api/types/plugin_responses.go#L48-L57
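For context on that suggestion: the generated model otherwise gets a generic name, while the Go type linked above groups the same fields under `PluginPrivilege`. Roughly, and only as an illustrative sketch from memory rather than a verbatim copy of the linked file, the shape being referred to is:

```go
package types

// PluginPrivilege describes a permission the user has to accept when
// installing a plugin. Fields as recalled; see the linked file for the
// authoritative definition.
type PluginPrivilege struct {
	Name        string
	Description string
	Value       []string
}
```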
thaJeztah
4,557
moby/moby
42,621
Improve swagger.yaml to match the 1.41 api version
I recently tried to rely on the swagger.yaml for code generation of an api client. Some parts required manual fixes of the swagger docs to match the current api version 1.41. Each change described below lives in a dedicated commit, because they aim at different parts of the api description and aren't related per se. If you'd prefer to merge them all into a single commit, please leave a note. **- What I did** 1. Add RestartPolicy "no" to swagger docs 2. Add "changes" query parameter for /image/create to swagger docs 3. Fix ContainerSummary swagger docs (flattened) 4. Use explicit object names for improved swagger-based code generation (otherwise generic names would have been generated) 5. Fix swagger docs to match the [opencontainers image-spec](https://github.com/opencontainers/image-spec/blob/5ced465cc63831baf25330431a428a9d9444e192/specs-go/v1/descriptor.go) **- How I did it** Used the 1.41 swagger.yaml with swagger's codegen to generate Java code. Used that code to perform requests to a 1.41 Docker engine. Applied some fixes, then ran `hack/validate/swagger` and `make swagger-docs` to validate the changes. **- How to verify it** Compare the actual 1.41 api with the swagger.yaml. **- Description for the changelog** Update the swagger.yaml to match the version 1.41 api. **- A picture of a cute animal (not mandatory but encouraged)** ![062321_JC_mountain-cat_feat](https://user-images.githubusercontent.com/432791/125209896-77588e80-e29c-11eb-9a5f-439f5f04a69b.jpeg)
null
2021-07-11 21:09:21+00:00
2021-08-21 22:28:57+00:00
api/swagger.yaml
# A Swagger 2.0 (a.k.a. OpenAPI) definition of the Engine API. # # This is used for generating API documentation and the types used by the # client/server. See api/README.md for more information. # # Some style notes: # - This file is used by ReDoc, which allows GitHub Flavored Markdown in # descriptions. # - There is no maximum line length, for ease of editing and pretty diffs. # - operationIds are in the format "NounVerb", with a singular noun. swagger: "2.0" schemes: - "http" - "https" produces: - "application/json" - "text/plain" consumes: - "application/json" - "text/plain" basePath: "/v1.42" info: title: "Docker Engine API" version: "1.42" x-logo: url: "https://docs.docker.com/images/logo-docker-main.png" description: | The Engine API is an HTTP API served by Docker Engine. It is the API the Docker client uses to communicate with the Engine, so everything the Docker client can do can be done with the API. Most of the client's commands map directly to API endpoints (e.g. `docker ps` is `GET /containers/json`). The notable exception is running containers, which consists of several API calls. # Errors The API uses standard HTTP status codes to indicate the success or failure of the API call. The body of the response will be JSON in the following format: ``` { "message": "page not found" } ``` # Versioning The API is usually changed in each release, so API calls are versioned to ensure that clients don't break. To lock to a specific version of the API, you prefix the URL with its version, for example, call `/v1.30/info` to use the v1.30 version of the `/info` endpoint. If the API version specified in the URL is not supported by the daemon, a HTTP `400 Bad Request` error message is returned. If you omit the version-prefix, the current version of the API (v1.42) is used. For example, calling `/info` is the same as calling `/v1.42/info`. Using the API without a version-prefix is deprecated and will be removed in a future release. Engine releases in the near future should support this version of the API, so your client will continue to work even if it is talking to a newer Engine. The API uses an open schema model, which means server may add extra properties to responses. Likewise, the server will ignore any extra query parameters and request body properties. When you write clients, you need to ignore additional properties in responses to ensure they do not break when talking to newer daemons. # Authentication Authentication for registries is handled client side. The client has to send authentication details to various endpoints that need to communicate with registries, such as `POST /images/(name)/push`. These are sent as `X-Registry-Auth` header as a [base64url encoded](https://tools.ietf.org/html/rfc4648#section-5) (JSON) string with the following structure: ``` { "username": "string", "password": "string", "email": "string", "serveraddress": "string" } ``` The `serveraddress` is a domain/IP without a protocol. Throughout this structure, double quotes are required. If you have already got an identity token from the [`/auth` endpoint](#operation/SystemAuth), you can just pass this instead of credentials: ``` { "identitytoken": "9cbaf023786cd7..." } ``` # The tags on paths define the menu sections in the ReDoc documentation, so # the usage of tags must make sense for that: # - They should be singular, not plural. # - There should not be too many tags, or the menu becomes unwieldy. 
For # example, it is preferable to add a path to the "System" tag instead of # creating a tag with a single path in it. # - The order of tags in this list defines the order in the menu. tags: # Primary objects - name: "Container" x-displayName: "Containers" description: | Create and manage containers. - name: "Image" x-displayName: "Images" - name: "Network" x-displayName: "Networks" description: | Networks are user-defined networks that containers can be attached to. See the [networking documentation](https://docs.docker.com/network/) for more information. - name: "Volume" x-displayName: "Volumes" description: | Create and manage persistent storage that can be attached to containers. - name: "Exec" x-displayName: "Exec" description: | Run new commands inside running containers. Refer to the [command-line reference](https://docs.docker.com/engine/reference/commandline/exec/) for more information. To exec a command in a container, you first need to create an exec instance, then start it. These two API endpoints are wrapped up in a single command-line command, `docker exec`. # Swarm things - name: "Swarm" x-displayName: "Swarm" description: | Engines can be clustered together in a swarm. Refer to the [swarm mode documentation](https://docs.docker.com/engine/swarm/) for more information. - name: "Node" x-displayName: "Nodes" description: | Nodes are instances of the Engine participating in a swarm. Swarm mode must be enabled for these endpoints to work. - name: "Service" x-displayName: "Services" description: | Services are the definitions of tasks to run on a swarm. Swarm mode must be enabled for these endpoints to work. - name: "Task" x-displayName: "Tasks" description: | A task is a container running on a swarm. It is the atomic scheduling unit of swarm. Swarm mode must be enabled for these endpoints to work. - name: "Secret" x-displayName: "Secrets" description: | Secrets are sensitive data that can be used by services. Swarm mode must be enabled for these endpoints to work. - name: "Config" x-displayName: "Configs" description: | Configs are application configurations that can be used by services. Swarm mode must be enabled for these endpoints to work. 
# System things - name: "Plugin" x-displayName: "Plugins" - name: "System" x-displayName: "System" definitions: Port: type: "object" description: "An open port on a container" required: [PrivatePort, Type] properties: IP: type: "string" format: "ip-address" description: "Host IP address that the container's port is mapped to" PrivatePort: type: "integer" format: "uint16" x-nullable: false description: "Port on the container" PublicPort: type: "integer" format: "uint16" description: "Port exposed on the host" Type: type: "string" x-nullable: false enum: ["tcp", "udp", "sctp"] example: PrivatePort: 8080 PublicPort: 80 Type: "tcp" MountPoint: type: "object" description: "A mount point inside a container" properties: Type: type: "string" Name: type: "string" Source: type: "string" Destination: type: "string" Driver: type: "string" Mode: type: "string" RW: type: "boolean" Propagation: type: "string" DeviceMapping: type: "object" description: "A device mapping between the host and container" properties: PathOnHost: type: "string" PathInContainer: type: "string" CgroupPermissions: type: "string" example: PathOnHost: "/dev/deviceName" PathInContainer: "/dev/deviceName" CgroupPermissions: "mrw" DeviceRequest: type: "object" description: "A request for devices to be sent to device drivers" properties: Driver: type: "string" example: "nvidia" Count: type: "integer" example: -1 DeviceIDs: type: "array" items: type: "string" example: - "0" - "1" - "GPU-fef8089b-4820-abfc-e83e-94318197576e" Capabilities: description: | A list of capabilities; an OR list of AND lists of capabilities. type: "array" items: type: "array" items: type: "string" example: # gpu AND nvidia AND compute - ["gpu", "nvidia", "compute"] Options: description: | Driver-specific options, specified as a key/value pairs. These options are passed directly to the driver. type: "object" additionalProperties: type: "string" ThrottleDevice: type: "object" properties: Path: description: "Device path" type: "string" Rate: description: "Rate" type: "integer" format: "int64" minimum: 0 Mount: type: "object" properties: Target: description: "Container path." type: "string" Source: description: "Mount source (e.g. a volume name, a host path)." type: "string" Type: description: | The mount type. Available types: - `bind` Mounts a file or directory from the host into the container. Must exist prior to creating the container. - `volume` Creates a volume with the given name and options (or uses a pre-existing volume with the same name and options). These are **not** removed when the container is removed. - `tmpfs` Create a tmpfs with the given options. The mount source cannot be specified for tmpfs. - `npipe` Mounts a named pipe from the host into the container. Must exist prior to creating the container. type: "string" enum: - "bind" - "volume" - "tmpfs" - "npipe" ReadOnly: description: "Whether the mount should be read-only." type: "boolean" Consistency: description: "The consistency requirement for the mount: `default`, `consistent`, `cached`, or `delegated`." type: "string" BindOptions: description: "Optional configuration for the `bind` type." type: "object" properties: Propagation: description: "A propagation mode with the value `[r]private`, `[r]shared`, or `[r]slave`." type: "string" enum: - "private" - "rprivate" - "shared" - "rshared" - "slave" - "rslave" NonRecursive: description: "Disable recursive bind mount." type: "boolean" default: false VolumeOptions: description: "Optional configuration for the `volume` type." 
type: "object" properties: NoCopy: description: "Populate volume with data from the target." type: "boolean" default: false Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" DriverConfig: description: "Map of driver specific options" type: "object" properties: Name: description: "Name of the driver to use to create the volume." type: "string" Options: description: "key/value map of driver specific options." type: "object" additionalProperties: type: "string" TmpfsOptions: description: "Optional configuration for the `tmpfs` type." type: "object" properties: SizeBytes: description: "The size for the tmpfs mount in bytes." type: "integer" format: "int64" Mode: description: "The permission mode for the tmpfs mount in an integer." type: "integer" RestartPolicy: description: | The behavior to apply when the container exits. The default is not to restart. An ever increasing delay (double the previous delay, starting at 100ms) is added before each restart to prevent flooding the server. type: "object" properties: Name: type: "string" description: | - Empty string means not to restart - `always` Always restart - `unless-stopped` Restart always except when the user has manually stopped the container - `on-failure` Restart only when the container exit code is non-zero enum: - "" - "always" - "unless-stopped" - "on-failure" MaximumRetryCount: type: "integer" description: | If `on-failure` is used, the number of times to retry before giving up. Resources: description: "A container's resources (cgroups config, ulimits, etc)" type: "object" properties: # Applicable to all platforms CpuShares: description: | An integer value representing this container's relative CPU weight versus other containers. type: "integer" Memory: description: "Memory limit in bytes." type: "integer" format: "int64" default: 0 # Applicable to UNIX platforms CgroupParent: description: | Path to `cgroups` under which the container's `cgroup` is created. If the path is not absolute, the path is considered to be relative to the `cgroups` path of the init process. Cgroups are created if they do not already exist. type: "string" BlkioWeight: description: "Block IO weight (relative weight)." type: "integer" minimum: 0 maximum: 1000 BlkioWeightDevice: description: | Block IO weight (relative device weight) in the form: ``` [{"Path": "device_path", "Weight": weight}] ``` type: "array" items: type: "object" properties: Path: type: "string" Weight: type: "integer" minimum: 0 BlkioDeviceReadBps: description: | Limit read rate (bytes per second) from a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" BlkioDeviceWriteBps: description: | Limit write rate (bytes per second) to a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" BlkioDeviceReadIOps: description: | Limit read rate (IO per second) from a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" BlkioDeviceWriteIOps: description: | Limit write rate (IO per second) to a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" CpuPeriod: description: "The length of a CPU period in microseconds." type: "integer" format: "int64" CpuQuota: description: | Microseconds of CPU time that the container can get in a CPU period. 
type: "integer" format: "int64" CpuRealtimePeriod: description: | The length of a CPU real-time period in microseconds. Set to 0 to allocate no time allocated to real-time tasks. type: "integer" format: "int64" CpuRealtimeRuntime: description: | The length of a CPU real-time runtime in microseconds. Set to 0 to allocate no time allocated to real-time tasks. type: "integer" format: "int64" CpusetCpus: description: | CPUs in which to allow execution (e.g., `0-3`, `0,1`). type: "string" example: "0-3" CpusetMems: description: | Memory nodes (MEMs) in which to allow execution (0-3, 0,1). Only effective on NUMA systems. type: "string" Devices: description: "A list of devices to add to the container." type: "array" items: $ref: "#/definitions/DeviceMapping" DeviceCgroupRules: description: "a list of cgroup rules to apply to the container" type: "array" items: type: "string" example: "c 13:* rwm" DeviceRequests: description: | A list of requests for devices to be sent to device drivers. type: "array" items: $ref: "#/definitions/DeviceRequest" KernelMemory: description: | Kernel memory limit in bytes. <p><br /></p> > **Deprecated**: This field is deprecated as the kernel 5.4 deprecated > `kmem.limit_in_bytes`. type: "integer" format: "int64" example: 209715200 KernelMemoryTCP: description: "Hard limit for kernel TCP buffer memory (in bytes)." type: "integer" format: "int64" MemoryReservation: description: "Memory soft limit in bytes." type: "integer" format: "int64" MemorySwap: description: | Total memory limit (memory + swap). Set as `-1` to enable unlimited swap. type: "integer" format: "int64" MemorySwappiness: description: | Tune a container's memory swappiness behavior. Accepts an integer between 0 and 100. type: "integer" format: "int64" minimum: 0 maximum: 100 NanoCpus: description: "CPU quota in units of 10<sup>-9</sup> CPUs." type: "integer" format: "int64" OomKillDisable: description: "Disable OOM Killer for the container." type: "boolean" Init: description: | Run an init inside the container that forwards signals and reaps processes. This field is omitted if empty, and the default (as configured on the daemon) is used. type: "boolean" x-nullable: true PidsLimit: description: | Tune a container's PIDs limit. Set `0` or `-1` for unlimited, or `null` to not change. type: "integer" format: "int64" x-nullable: true Ulimits: description: | A list of resource limits to set in the container. For example: ``` {"Name": "nofile", "Soft": 1024, "Hard": 2048} ``` type: "array" items: type: "object" properties: Name: description: "Name of ulimit" type: "string" Soft: description: "Soft limit" type: "integer" Hard: description: "Hard limit" type: "integer" # Applicable to Windows CpuCount: description: | The number of usable CPUs (Windows only). On Windows Server containers, the processor resource controls are mutually exclusive. The order of precedence is `CPUCount` first, then `CPUShares`, and `CPUPercent` last. type: "integer" format: "int64" CpuPercent: description: | The usable percentage of the available CPUs (Windows only). On Windows Server containers, the processor resource controls are mutually exclusive. The order of precedence is `CPUCount` first, then `CPUShares`, and `CPUPercent` last. type: "integer" format: "int64" IOMaximumIOps: description: "Maximum IOps for the container system drive (Windows only)" type: "integer" format: "int64" IOMaximumBandwidth: description: | Maximum IO in bytes per second for the container system drive (Windows only). 
type: "integer" format: "int64" Limit: description: | An object describing a limit on resources which can be requested by a task. type: "object" properties: NanoCPUs: type: "integer" format: "int64" example: 4000000000 MemoryBytes: type: "integer" format: "int64" example: 8272408576 Pids: description: | Limits the maximum number of PIDs in the container. Set `0` for unlimited. type: "integer" format: "int64" default: 0 example: 100 ResourceObject: description: | An object describing the resources which can be advertised by a node and requested by a task. type: "object" properties: NanoCPUs: type: "integer" format: "int64" example: 4000000000 MemoryBytes: type: "integer" format: "int64" example: 8272408576 GenericResources: $ref: "#/definitions/GenericResources" GenericResources: description: | User-defined resources can be either Integer resources (e.g, `SSD=3`) or String resources (e.g, `GPU=UUID1`). type: "array" items: type: "object" properties: NamedResourceSpec: type: "object" properties: Kind: type: "string" Value: type: "string" DiscreteResourceSpec: type: "object" properties: Kind: type: "string" Value: type: "integer" format: "int64" example: - DiscreteResourceSpec: Kind: "SSD" Value: 3 - NamedResourceSpec: Kind: "GPU" Value: "UUID1" - NamedResourceSpec: Kind: "GPU" Value: "UUID2" HealthConfig: description: "A test to perform to check that the container is healthy." type: "object" properties: Test: description: | The test to perform. Possible values are: - `[]` inherit healthcheck from image or parent image - `["NONE"]` disable healthcheck - `["CMD", args...]` exec arguments directly - `["CMD-SHELL", command]` run command with system's default shell type: "array" items: type: "string" Interval: description: | The time to wait between checks in nanoseconds. It should be 0 or at least 1000000 (1 ms). 0 means inherit. type: "integer" Timeout: description: | The time to wait before considering the check to have hung. It should be 0 or at least 1000000 (1 ms). 0 means inherit. type: "integer" Retries: description: | The number of consecutive failures needed to consider a container as unhealthy. 0 means inherit. type: "integer" StartPeriod: description: | Start period for the container to initialize before starting health-retries countdown in nanoseconds. It should be 0 or at least 1000000 (1 ms). 0 means inherit. type: "integer" Health: description: | Health stores information about the container's healthcheck results. type: "object" properties: Status: description: | Status is one of `none`, `starting`, `healthy` or `unhealthy` - "none" Indicates there is no healthcheck - "starting" Starting indicates that the container is not yet ready - "healthy" Healthy indicates that the container is running correctly - "unhealthy" Unhealthy indicates that the container has a problem type: "string" enum: - "none" - "starting" - "healthy" - "unhealthy" example: "healthy" FailingStreak: description: "FailingStreak is the number of consecutive failures" type: "integer" example: 0 Log: type: "array" description: | Log contains the last few results (oldest first) items: x-nullable: true $ref: "#/definitions/HealthcheckResult" HealthcheckResult: description: | HealthcheckResult stores information about a single run of a healthcheck probe type: "object" properties: Start: description: | Date and time at which this check started in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. 
type: "string" format: "date-time" example: "2020-01-04T10:44:24.496525531Z" End: description: | Date and time at which this check ended in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2020-01-04T10:45:21.364524523Z" ExitCode: description: | ExitCode meanings: - `0` healthy - `1` unhealthy - `2` reserved (considered unhealthy) - other values: error running probe type: "integer" example: 0 Output: description: "Output from last check" type: "string" HostConfig: description: "Container configuration that depends on the host we are running on" allOf: - $ref: "#/definitions/Resources" - type: "object" properties: # Applicable to all platforms Binds: type: "array" description: | A list of volume bindings for this container. Each volume binding is a string in one of these forms: - `host-src:container-dest[:options]` to bind-mount a host path into the container. Both `host-src`, and `container-dest` must be an _absolute_ path. - `volume-name:container-dest[:options]` to bind-mount a volume managed by a volume driver into the container. `container-dest` must be an _absolute_ path. `options` is an optional, comma-delimited list of: - `nocopy` disables automatic copying of data from the container path to the volume. The `nocopy` flag only applies to named volumes. - `[ro|rw]` mounts a volume read-only or read-write, respectively. If omitted or set to `rw`, volumes are mounted read-write. - `[z|Z]` applies SELinux labels to allow or deny multiple containers to read and write to the same volume. - `z`: a _shared_ content label is applied to the content. This label indicates that multiple containers can share the volume content, for both reading and writing. - `Z`: a _private unshared_ label is applied to the content. This label indicates that only the current container can use a private volume. Labeling systems such as SELinux require proper labels to be placed on volume content that is mounted into a container. Without a label, the security system can prevent a container's processes from using the content. By default, the labels set by the host operating system are not modified. - `[[r]shared|[r]slave|[r]private]` specifies mount [propagation behavior](https://www.kernel.org/doc/Documentation/filesystems/sharedsubtree.txt). This only applies to bind-mounted volumes, not internal volumes or named volumes. Mount propagation requires the source mount point (the location where the source directory is mounted in the host operating system) to have the correct propagation properties. For shared volumes, the source mount point must be set to `shared`. For slave volumes, the mount must be set to either `shared` or `slave`. items: type: "string" ContainerIDFile: type: "string" description: "Path to a file where the container ID is written" LogConfig: type: "object" description: "The logging configuration for this container" properties: Type: type: "string" enum: - "json-file" - "syslog" - "journald" - "gelf" - "fluentd" - "awslogs" - "splunk" - "etwlogs" - "none" Config: type: "object" additionalProperties: type: "string" NetworkMode: type: "string" description: | Network mode to use for this container. Supported standard values are: `bridge`, `host`, `none`, and `container:<name|id>`. Any other value is taken as a custom network's name to which this container should connect to. 
PortBindings: $ref: "#/definitions/PortMap" RestartPolicy: $ref: "#/definitions/RestartPolicy" AutoRemove: type: "boolean" description: | Automatically remove the container when the container's process exits. This has no effect if `RestartPolicy` is set. VolumeDriver: type: "string" description: "Driver that this container uses to mount volumes." VolumesFrom: type: "array" description: | A list of volumes to inherit from another container, specified in the form `<container name>[:<ro|rw>]`. items: type: "string" Mounts: description: | Specification for mounts to be added to the container. type: "array" items: $ref: "#/definitions/Mount" # Applicable to UNIX platforms CapAdd: type: "array" description: | A list of kernel capabilities to add to the container. Conflicts with option 'Capabilities'. items: type: "string" CapDrop: type: "array" description: | A list of kernel capabilities to drop from the container. Conflicts with option 'Capabilities'. items: type: "string" CgroupnsMode: type: "string" enum: - "private" - "host" description: | cgroup namespace mode for the container. Possible values are: - `"private"`: the container runs in its own private cgroup namespace - `"host"`: use the host system's cgroup namespace If not specified, the daemon default is used, which can either be `"private"` or `"host"`, depending on daemon version, kernel support and configuration. Dns: type: "array" description: "A list of DNS servers for the container to use." items: type: "string" DnsOptions: type: "array" description: "A list of DNS options." items: type: "string" DnsSearch: type: "array" description: "A list of DNS search domains." items: type: "string" ExtraHosts: type: "array" description: | A list of hostnames/IP mappings to add to the container's `/etc/hosts` file. Specified in the form `["hostname:IP"]`. items: type: "string" GroupAdd: type: "array" description: | A list of additional groups that the container process will run as. items: type: "string" IpcMode: type: "string" description: | IPC sharing mode for the container. Possible values are: - `"none"`: own private IPC namespace, with /dev/shm not mounted - `"private"`: own private IPC namespace - `"shareable"`: own private IPC namespace, with a possibility to share it with other containers - `"container:<name|id>"`: join another (shareable) container's IPC namespace - `"host"`: use the host system's IPC namespace If not specified, daemon default is used, which can either be `"private"` or `"shareable"`, depending on daemon version and configuration. Cgroup: type: "string" description: "Cgroup to use for the container." Links: type: "array" description: | A list of links for the container in the form `container_name:alias`. items: type: "string" OomScoreAdj: type: "integer" description: | An integer value containing the score given to the container in order to tune OOM killer preferences. example: 500 PidMode: type: "string" description: | Set the PID (Process) Namespace mode for the container. It can be either: - `"container:<name|id>"`: joins another container's PID namespace - `"host"`: use the host's PID namespace inside the container Privileged: type: "boolean" description: "Gives the container full access to the host." PublishAllPorts: type: "boolean" description: | Allocates an ephemeral host port for all of a container's exposed ports. Ports are de-allocated when the container stops and allocated when the container starts. The allocated port might be changed when restarting the container. 
The port is selected from the ephemeral port range that depends on the kernel. For example, on Linux the range is defined by `/proc/sys/net/ipv4/ip_local_port_range`. ReadonlyRootfs: type: "boolean" description: "Mount the container's root filesystem as read only." SecurityOpt: type: "array" description: "A list of string values to customize labels for MLS systems, such as SELinux." items: type: "string" StorageOpt: type: "object" description: | Storage driver options for this container, in the form `{"size": "120G"}`. additionalProperties: type: "string" Tmpfs: type: "object" description: | A map of container directories which should be replaced by tmpfs mounts, and their corresponding mount options. For example: ``` { "/run": "rw,noexec,nosuid,size=65536k" } ``` additionalProperties: type: "string" UTSMode: type: "string" description: "UTS namespace to use for the container." UsernsMode: type: "string" description: | Sets the usernamespace mode for the container when usernamespace remapping option is enabled. ShmSize: type: "integer" description: | Size of `/dev/shm` in bytes. If omitted, the system uses 64MB. minimum: 0 Sysctls: type: "object" description: | A list of kernel parameters (sysctls) to set in the container. For example: ``` {"net.ipv4.ip_forward": "1"} ``` additionalProperties: type: "string" Runtime: type: "string" description: "Runtime to use with this container." # Applicable to Windows ConsoleSize: type: "array" description: | Initial console size, as an `[height, width]` array. (Windows only) minItems: 2 maxItems: 2 items: type: "integer" minimum: 0 Isolation: type: "string" description: | Isolation technology of the container. (Windows only) enum: - "default" - "process" - "hyperv" MaskedPaths: type: "array" description: | The list of paths to be masked inside the container (this overrides the default set of paths). items: type: "string" ReadonlyPaths: type: "array" description: | The list of paths to be set as read-only inside the container (this overrides the default set of paths). items: type: "string" ContainerConfig: description: "Configuration for a container that is portable between hosts" type: "object" properties: Hostname: description: "The hostname to use for the container, as a valid RFC 1123 hostname." type: "string" Domainname: description: "The domain name to use for the container." type: "string" User: description: "The user that commands are run as inside the container." type: "string" AttachStdin: description: "Whether to attach to `stdin`." type: "boolean" default: false AttachStdout: description: "Whether to attach to `stdout`." type: "boolean" default: true AttachStderr: description: "Whether to attach to `stderr`." type: "boolean" default: true ExposedPorts: description: | An object mapping ports to an empty object in the form: `{"<port>/<tcp|udp|sctp>": {}}` type: "object" additionalProperties: type: "object" enum: - {} default: {} Tty: description: | Attach standard streams to a TTY, including `stdin` if it is not closed. type: "boolean" default: false OpenStdin: description: "Open `stdin`" type: "boolean" default: false StdinOnce: description: "Close `stdin` after one attached client disconnects" type: "boolean" default: false Env: description: | A list of environment variables to set inside the container in the form `["VAR=value", ...]`. A variable without `=` is removed from the environment, rather than to have an empty value. type: "array" items: type: "string" Cmd: description: | Command to run specified as a string or an array of strings. 
type: "array" items: type: "string" Healthcheck: $ref: "#/definitions/HealthConfig" ArgsEscaped: description: "Command is already escaped (Windows only)" type: "boolean" Image: description: | The name of the image to use when creating the container/ type: "string" Volumes: description: | An object mapping mount point paths inside the container to empty objects. type: "object" additionalProperties: type: "object" enum: - {} default: {} WorkingDir: description: "The working directory for commands to run in." type: "string" Entrypoint: description: | The entry point for the container as a string or an array of strings. If the array consists of exactly one empty string (`[""]`) then the entry point is reset to system default (i.e., the entry point used by docker when there is no `ENTRYPOINT` instruction in the `Dockerfile`). type: "array" items: type: "string" NetworkDisabled: description: "Disable networking for the container." type: "boolean" MacAddress: description: "MAC address of the container." type: "string" OnBuild: description: | `ONBUILD` metadata that were defined in the image's `Dockerfile`. type: "array" items: type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" StopSignal: description: | Signal to stop a container as a string or unsigned integer. type: "string" default: "SIGTERM" StopTimeout: description: "Timeout to stop a container in seconds." type: "integer" default: 10 Shell: description: | Shell for when `RUN`, `CMD`, and `ENTRYPOINT` uses a shell. type: "array" items: type: "string" NetworkingConfig: description: | NetworkingConfig represents the container's networking configuration for each of its interfaces. It is used for the networking configs specified in the `docker create` and `docker network connect` commands. type: "object" properties: EndpointsConfig: description: | A mapping of network name to endpoint configuration for that network. type: "object" additionalProperties: $ref: "#/definitions/EndpointSettings" example: # putting an example here, instead of using the example values from # /definitions/EndpointSettings, because containers/create currently # does not support attaching to multiple networks, so the example request # would be confusing if it showed that multiple networks can be contained # in the EndpointsConfig. # TODO remove once we support multiple networks on container create (see https://github.com/moby/moby/blob/07e6b843594e061f82baa5fa23c2ff7d536c2a05/daemon/create.go#L323) EndpointsConfig: isolated_nw: IPAMConfig: IPv4Address: "172.20.30.33" IPv6Address: "2001:db8:abcd::3033" LinkLocalIPs: - "169.254.34.68" - "fe80::3468" Links: - "container_1" - "container_2" Aliases: - "server_x" - "server_y" NetworkSettings: description: "NetworkSettings exposes the network settings in the API" type: "object" properties: Bridge: description: Name of the network's bridge (for example, `docker0`). type: "string" example: "docker0" SandboxID: description: SandboxID uniquely represents a container's network stack. type: "string" example: "9d12daf2c33f5959c8bf90aa513e4f65b561738661003029ec84830cd503a0c3" HairpinMode: description: | Indicates if hairpin NAT should be enabled on the virtual interface. type: "boolean" example: false LinkLocalIPv6Address: description: IPv6 unicast address using the link-local prefix. type: "string" example: "fe80::42:acff:fe11:1" LinkLocalIPv6PrefixLen: description: Prefix length of the IPv6 unicast address. 
type: "integer" example: "64" Ports: $ref: "#/definitions/PortMap" SandboxKey: description: SandboxKey identifies the sandbox type: "string" example: "/var/run/docker/netns/8ab54b426c38" # TODO is SecondaryIPAddresses actually used? SecondaryIPAddresses: description: "" type: "array" items: $ref: "#/definitions/Address" x-nullable: true # TODO is SecondaryIPv6Addresses actually used? SecondaryIPv6Addresses: description: "" type: "array" items: $ref: "#/definitions/Address" x-nullable: true # TODO properties below are part of DefaultNetworkSettings, which is # marked as deprecated since Docker 1.9 and to be removed in Docker v17.12 EndpointID: description: | EndpointID uniquely represents a service endpoint in a Sandbox. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "b88f5b905aabf2893f3cbc4ee42d1ea7980bbc0a92e2c8922b1e1795298afb0b" Gateway: description: | Gateway address for the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "172.17.0.1" GlobalIPv6Address: description: | Global IPv6 address for the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "2001:db8::5689" GlobalIPv6PrefixLen: description: | Mask length of the global IPv6 address. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "integer" example: 64 IPAddress: description: | IPv4 address for the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "172.17.0.4" IPPrefixLen: description: | Mask length of the IPv4 address. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "integer" example: 16 IPv6Gateway: description: | IPv6 gateway address for this network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. 
Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "2001:db8:2::100" MacAddress: description: | MAC address for the container on the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "02:42:ac:11:00:04" Networks: description: | Information about all networks that the container is connected to. type: "object" additionalProperties: $ref: "#/definitions/EndpointSettings" Address: description: Address represents an IPv4 or IPv6 IP address. type: "object" properties: Addr: description: IP address. type: "string" PrefixLen: description: Mask length of the IP address. type: "integer" PortMap: description: | PortMap describes the mapping of container ports to host ports, using the container's port-number and protocol as key in the format `<port>/<protocol>`, for example, `80/udp`. If a container's port is mapped for multiple protocols, separate entries are added to the mapping table. type: "object" additionalProperties: type: "array" x-nullable: true items: $ref: "#/definitions/PortBinding" example: "443/tcp": - HostIp: "127.0.0.1" HostPort: "4443" "80/tcp": - HostIp: "0.0.0.0" HostPort: "80" - HostIp: "0.0.0.0" HostPort: "8080" "80/udp": - HostIp: "0.0.0.0" HostPort: "80" "53/udp": - HostIp: "0.0.0.0" HostPort: "53" "2377/tcp": null PortBinding: description: | PortBinding represents a binding between a host IP address and a host port. type: "object" properties: HostIp: description: "Host IP address that the container's port is mapped to." type: "string" example: "127.0.0.1" HostPort: description: "Host port number that the container's port is mapped to." type: "string" example: "4443" GraphDriverData: description: "Information about a container's graph driver." 
type: "object" required: [Name, Data] properties: Name: type: "string" x-nullable: false Data: type: "object" x-nullable: false additionalProperties: type: "string" Image: type: "object" required: - Id - Parent - Comment - Created - Container - DockerVersion - Author - Architecture - Os - Size - VirtualSize - GraphDriver - RootFS properties: Id: type: "string" x-nullable: false RepoTags: type: "array" items: type: "string" RepoDigests: type: "array" items: type: "string" Parent: type: "string" x-nullable: false Comment: type: "string" x-nullable: false Created: type: "string" x-nullable: false Container: type: "string" x-nullable: false ContainerConfig: $ref: "#/definitions/ContainerConfig" DockerVersion: type: "string" x-nullable: false Author: type: "string" x-nullable: false Config: $ref: "#/definitions/ContainerConfig" Architecture: type: "string" x-nullable: false Os: type: "string" x-nullable: false OsVersion: type: "string" Size: type: "integer" format: "int64" x-nullable: false VirtualSize: type: "integer" format: "int64" x-nullable: false GraphDriver: $ref: "#/definitions/GraphDriverData" RootFS: type: "object" required: [Type] properties: Type: type: "string" x-nullable: false Layers: type: "array" items: type: "string" BaseLayer: type: "string" Metadata: type: "object" properties: LastTagTime: type: "string" format: "dateTime" ImageSummary: type: "object" required: - Id - ParentId - RepoTags - RepoDigests - Created - Size - SharedSize - VirtualSize - Labels - Containers properties: Id: type: "string" x-nullable: false ParentId: type: "string" x-nullable: false RepoTags: type: "array" x-nullable: false items: type: "string" RepoDigests: type: "array" x-nullable: false items: type: "string" Created: type: "integer" x-nullable: false Size: type: "integer" x-nullable: false SharedSize: type: "integer" x-nullable: false VirtualSize: type: "integer" x-nullable: false Labels: type: "object" x-nullable: false additionalProperties: type: "string" Containers: x-nullable: false type: "integer" AuthConfig: type: "object" properties: username: type: "string" password: type: "string" email: type: "string" serveraddress: type: "string" example: username: "hannibal" password: "xxxx" serveraddress: "https://index.docker.io/v1/" ProcessConfig: type: "object" properties: privileged: type: "boolean" user: type: "string" tty: type: "boolean" entrypoint: type: "string" arguments: type: "array" items: type: "string" Volume: type: "object" required: [Name, Driver, Mountpoint, Labels, Scope, Options] properties: Name: type: "string" description: "Name of the volume." x-nullable: false Driver: type: "string" description: "Name of the volume driver used by the volume." x-nullable: false Mountpoint: type: "string" description: "Mount path of the volume on the host." x-nullable: false CreatedAt: type: "string" format: "dateTime" description: "Date/Time the volume was created." Status: type: "object" description: | Low-level details about the volume, provided by the volume driver. Details are returned as a map with key/value pairs: `{"key":"value","key2":"value2"}`. The `Status` field is optional, and is omitted if the volume driver does not support this feature. additionalProperties: type: "object" Labels: type: "object" description: "User-defined key/value metadata." x-nullable: false additionalProperties: type: "string" Scope: type: "string" description: | The level at which the volume exists. Either `global` for cluster-wide, or `local` for machine level. 
default: "local" x-nullable: false enum: ["local", "global"] Options: type: "object" description: | The driver specific options used when creating the volume. additionalProperties: type: "string" UsageData: type: "object" x-nullable: true required: [Size, RefCount] description: | Usage details about the volume. This information is used by the `GET /system/df` endpoint, and omitted in other endpoints. properties: Size: type: "integer" default: -1 description: | Amount of disk space used by the volume (in bytes). This information is only available for volumes created with the `"local"` volume driver. For volumes created with other volume drivers, this field is set to `-1` ("not available") x-nullable: false RefCount: type: "integer" default: -1 description: | The number of containers referencing this volume. This field is set to `-1` if the reference-count is not available. x-nullable: false example: Name: "tardis" Driver: "custom" Mountpoint: "/var/lib/docker/volumes/tardis" Status: hello: "world" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Scope: "local" CreatedAt: "2016-06-07T20:31:11.853781916Z" Network: type: "object" properties: Name: type: "string" Id: type: "string" Created: type: "string" format: "dateTime" Scope: type: "string" Driver: type: "string" EnableIPv6: type: "boolean" IPAM: $ref: "#/definitions/IPAM" Internal: type: "boolean" Attachable: type: "boolean" Ingress: type: "boolean" Containers: type: "object" additionalProperties: $ref: "#/definitions/NetworkContainer" Options: type: "object" additionalProperties: type: "string" Labels: type: "object" additionalProperties: type: "string" example: Name: "net01" Id: "7d86d31b1478e7cca9ebed7e73aa0fdeec46c5ca29497431d3007d2d9e15ed99" Created: "2016-10-19T04:33:30.360899459Z" Scope: "local" Driver: "bridge" EnableIPv6: false IPAM: Driver: "default" Config: - Subnet: "172.19.0.0/16" Gateway: "172.19.0.1" Options: foo: "bar" Internal: false Attachable: false Ingress: false Containers: 19a4d5d687db25203351ed79d478946f861258f018fe384f229f2efa4b23513c: Name: "test" EndpointID: "628cadb8bcb92de107b2a1e516cbffe463e321f548feb37697cce00ad694f21a" MacAddress: "02:42:ac:13:00:02" IPv4Address: "172.19.0.2/16" IPv6Address: "" Options: com.docker.network.bridge.default_bridge: "true" com.docker.network.bridge.enable_icc: "true" com.docker.network.bridge.enable_ip_masquerade: "true" com.docker.network.bridge.host_binding_ipv4: "0.0.0.0" com.docker.network.bridge.name: "docker0" com.docker.network.driver.mtu: "1500" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" IPAM: type: "object" properties: Driver: description: "Name of the IPAM driver to use." type: "string" default: "default" Config: description: | List of IPAM configuration options, specified as a map: ``` {"Subnet": <CIDR>, "IPRange": <CIDR>, "Gateway": <IP address>, "AuxAddress": <device_name:IP address>} ``` type: "array" items: type: "object" additionalProperties: type: "string" Options: description: "Driver-specific options, specified as a map." 
type: "object" additionalProperties: type: "string" NetworkContainer: type: "object" properties: Name: type: "string" EndpointID: type: "string" MacAddress: type: "string" IPv4Address: type: "string" IPv6Address: type: "string" BuildInfo: type: "object" properties: id: type: "string" stream: type: "string" error: type: "string" errorDetail: $ref: "#/definitions/ErrorDetail" status: type: "string" progress: type: "string" progressDetail: $ref: "#/definitions/ProgressDetail" aux: $ref: "#/definitions/ImageID" BuildCache: type: "object" properties: ID: type: "string" Parent: type: "string" Type: type: "string" Description: type: "string" InUse: type: "boolean" Shared: type: "boolean" Size: description: | Amount of disk space used by the build cache (in bytes). type: "integer" CreatedAt: description: | Date and time at which the build cache was created in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2016-08-18T10:44:24.496525531Z" LastUsedAt: description: | Date and time at which the build cache was last used in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" x-nullable: true example: "2017-08-09T07:09:37.632105588Z" UsageCount: type: "integer" ImageID: type: "object" description: "Image ID or Digest" properties: ID: type: "string" example: ID: "sha256:85f05633ddc1c50679be2b16a0479ab6f7637f8884e0cfe0f4d20e1ebb3d6e7c" CreateImageInfo: type: "object" properties: id: type: "string" error: type: "string" status: type: "string" progress: type: "string" progressDetail: $ref: "#/definitions/ProgressDetail" PushImageInfo: type: "object" properties: error: type: "string" status: type: "string" progress: type: "string" progressDetail: $ref: "#/definitions/ProgressDetail" ErrorDetail: type: "object" properties: code: type: "integer" message: type: "string" ProgressDetail: type: "object" properties: current: type: "integer" total: type: "integer" ErrorResponse: description: "Represents an error." type: "object" required: ["message"] properties: message: description: "The error message." type: "string" x-nullable: false example: message: "Something went wrong." IdResponse: description: "Response to an API call that returns just an Id" type: "object" required: ["Id"] properties: Id: description: "The id of the newly created object." type: "string" x-nullable: false EndpointSettings: description: "Configuration for a network endpoint." type: "object" properties: # Configurations IPAMConfig: $ref: "#/definitions/EndpointIPAMConfig" Links: type: "array" items: type: "string" example: - "container_1" - "container_2" Aliases: type: "array" items: type: "string" example: - "server_x" - "server_y" # Operational data NetworkID: description: | Unique ID of the network. type: "string" example: "08754567f1f40222263eab4102e1c733ae697e8e354aa9cd6e18d7402835292a" EndpointID: description: | Unique ID for the service endpoint in a Sandbox. type: "string" example: "b88f5b905aabf2893f3cbc4ee42d1ea7980bbc0a92e2c8922b1e1795298afb0b" Gateway: description: | Gateway address for this network. type: "string" example: "172.17.0.1" IPAddress: description: | IPv4 address. type: "string" example: "172.17.0.4" IPPrefixLen: description: | Mask length of the IPv4 address. type: "integer" example: 16 IPv6Gateway: description: | IPv6 gateway address. type: "string" example: "2001:db8:2::100" GlobalIPv6Address: description: | Global IPv6 address. 
type: "string" example: "2001:db8::5689" GlobalIPv6PrefixLen: description: | Mask length of the global IPv6 address. type: "integer" format: "int64" example: 64 MacAddress: description: | MAC address for the endpoint on this network. type: "string" example: "02:42:ac:11:00:04" DriverOpts: description: | DriverOpts is a mapping of driver options and values. These options are passed directly to the driver and are driver specific. type: "object" x-nullable: true additionalProperties: type: "string" example: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" EndpointIPAMConfig: description: | EndpointIPAMConfig represents an endpoint's IPAM configuration. type: "object" x-nullable: true properties: IPv4Address: type: "string" example: "172.20.30.33" IPv6Address: type: "string" example: "2001:db8:abcd::3033" LinkLocalIPs: type: "array" items: type: "string" example: - "169.254.34.68" - "fe80::3468" PluginMount: type: "object" x-nullable: false required: [Name, Description, Settable, Source, Destination, Type, Options] properties: Name: type: "string" x-nullable: false example: "some-mount" Description: type: "string" x-nullable: false example: "This is a mount that's used by the plugin." Settable: type: "array" items: type: "string" Source: type: "string" example: "/var/lib/docker/plugins/" Destination: type: "string" x-nullable: false example: "/mnt/state" Type: type: "string" x-nullable: false example: "bind" Options: type: "array" items: type: "string" example: - "rbind" - "rw" PluginDevice: type: "object" required: [Name, Description, Settable, Path] x-nullable: false properties: Name: type: "string" x-nullable: false Description: type: "string" x-nullable: false Settable: type: "array" items: type: "string" Path: type: "string" example: "/dev/fuse" PluginEnv: type: "object" x-nullable: false required: [Name, Description, Settable, Value] properties: Name: x-nullable: false type: "string" Description: x-nullable: false type: "string" Settable: type: "array" items: type: "string" Value: type: "string" PluginInterfaceType: type: "object" x-nullable: false required: [Prefix, Capability, Version] properties: Prefix: type: "string" x-nullable: false Capability: type: "string" x-nullable: false Version: type: "string" x-nullable: false Plugin: description: "A plugin for the Engine API" type: "object" required: [Settings, Enabled, Config, Name] properties: Id: type: "string" example: "5724e2c8652da337ab2eedd19fc6fc0ec908e4bd907c7421bf6a8dfc70c4c078" Name: type: "string" x-nullable: false example: "tiborvass/sample-volume-plugin" Enabled: description: True if the plugin is running. False if the plugin is not running, only installed. type: "boolean" x-nullable: false example: true Settings: description: "Settings that can be modified by users." type: "object" x-nullable: false required: [Args, Devices, Env, Mounts] properties: Mounts: type: "array" items: $ref: "#/definitions/PluginMount" Env: type: "array" items: type: "string" example: - "DEBUG=0" Args: type: "array" items: type: "string" Devices: type: "array" items: $ref: "#/definitions/PluginDevice" PluginReference: description: "plugin remote reference used to push/pull the plugin" type: "string" x-nullable: false example: "localhost:5000/tiborvass/sample-volume-plugin:latest" Config: description: "The config of a plugin." 
type: "object" x-nullable: false required: - Description - Documentation - Interface - Entrypoint - WorkDir - Network - Linux - PidHost - PropagatedMount - IpcHost - Mounts - Env - Args properties: DockerVersion: description: "Docker Version used to create the plugin" type: "string" x-nullable: false example: "17.06.0-ce" Description: type: "string" x-nullable: false example: "A sample volume plugin for Docker" Documentation: type: "string" x-nullable: false example: "https://docs.docker.com/engine/extend/plugins/" Interface: description: "The interface between Docker and the plugin" x-nullable: false type: "object" required: [Types, Socket] properties: Types: type: "array" items: $ref: "#/definitions/PluginInterfaceType" example: - "docker.volumedriver/1.0" Socket: type: "string" x-nullable: false example: "plugins.sock" ProtocolScheme: type: "string" example: "some.protocol/v1.0" description: "Protocol to use for clients connecting to the plugin." enum: - "" - "moby.plugins.http/v1" Entrypoint: type: "array" items: type: "string" example: - "/usr/bin/sample-volume-plugin" - "/data" WorkDir: type: "string" x-nullable: false example: "/bin/" User: type: "object" x-nullable: false properties: UID: type: "integer" format: "uint32" example: 1000 GID: type: "integer" format: "uint32" example: 1000 Network: type: "object" x-nullable: false required: [Type] properties: Type: x-nullable: false type: "string" example: "host" Linux: type: "object" x-nullable: false required: [Capabilities, AllowAllDevices, Devices] properties: Capabilities: type: "array" items: type: "string" example: - "CAP_SYS_ADMIN" - "CAP_SYSLOG" AllowAllDevices: type: "boolean" x-nullable: false example: false Devices: type: "array" items: $ref: "#/definitions/PluginDevice" PropagatedMount: type: "string" x-nullable: false example: "/mnt/volumes" IpcHost: type: "boolean" x-nullable: false example: false PidHost: type: "boolean" x-nullable: false example: false Mounts: type: "array" items: $ref: "#/definitions/PluginMount" Env: type: "array" items: $ref: "#/definitions/PluginEnv" example: - Name: "DEBUG" Description: "If set, prints debug messages" Settable: null Value: "0" Args: type: "object" x-nullable: false required: [Name, Description, Settable, Value] properties: Name: x-nullable: false type: "string" example: "args" Description: x-nullable: false type: "string" example: "command line arguments" Settable: type: "array" items: type: "string" Value: type: "array" items: type: "string" rootfs: type: "object" properties: type: type: "string" example: "layers" diff_ids: type: "array" items: type: "string" example: - "sha256:675532206fbf3030b8458f88d6e26d4eb1577688a25efec97154c94e8b6b4887" - "sha256:e216a057b1cb1efc11f8a268f37ef62083e70b1b38323ba252e25ac88904a7e8" ObjectVersion: description: | The version number of the object such as node, service, etc. This is needed to avoid conflicting writes. The client must send the version number along with the modified specification when updating these objects. This approach ensures safe concurrency and determinism in that the change on the object may not be applied if the version number has changed from the last read. In other words, if two update requests specify the same base version, only one of the requests can succeed. As a result, two separate update requests that happen at the same time will not unintentionally overwrite each other. 
type: "object" properties: Index: type: "integer" format: "uint64" example: 373531 NodeSpec: type: "object" properties: Name: description: "Name for the node." type: "string" example: "my-node" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" Role: description: "Role of the node." type: "string" enum: - "worker" - "manager" example: "manager" Availability: description: "Availability of the node." type: "string" enum: - "active" - "pause" - "drain" example: "active" example: Availability: "active" Name: "node-name" Role: "manager" Labels: foo: "bar" Node: type: "object" properties: ID: type: "string" example: "24ifsmvkjbyhk" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: description: | Date and time at which the node was added to the swarm in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2016-08-18T10:44:24.496525531Z" UpdatedAt: description: | Date and time at which the node was last updated in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2017-08-09T07:09:37.632105588Z" Spec: $ref: "#/definitions/NodeSpec" Description: $ref: "#/definitions/NodeDescription" Status: $ref: "#/definitions/NodeStatus" ManagerStatus: $ref: "#/definitions/ManagerStatus" NodeDescription: description: | NodeDescription encapsulates the properties of the Node as reported by the agent. type: "object" properties: Hostname: type: "string" example: "bf3067039e47" Platform: $ref: "#/definitions/Platform" Resources: $ref: "#/definitions/ResourceObject" Engine: $ref: "#/definitions/EngineDescription" TLSInfo: $ref: "#/definitions/TLSInfo" Platform: description: | Platform represents the platform (Arch/OS). type: "object" properties: Architecture: description: | Architecture represents the hardware architecture (for example, `x86_64`). type: "string" example: "x86_64" OS: description: | OS represents the Operating System (for example, `linux` or `windows`). type: "string" example: "linux" EngineDescription: description: "EngineDescription provides information about an engine." type: "object" properties: EngineVersion: type: "string" example: "17.06.0" Labels: type: "object" additionalProperties: type: "string" example: foo: "bar" Plugins: type: "array" items: type: "object" properties: Type: type: "string" Name: type: "string" example: - Type: "Log" Name: "awslogs" - Type: "Log" Name: "fluentd" - Type: "Log" Name: "gcplogs" - Type: "Log" Name: "gelf" - Type: "Log" Name: "journald" - Type: "Log" Name: "json-file" - Type: "Log" Name: "logentries" - Type: "Log" Name: "splunk" - Type: "Log" Name: "syslog" - Type: "Network" Name: "bridge" - Type: "Network" Name: "host" - Type: "Network" Name: "ipvlan" - Type: "Network" Name: "macvlan" - Type: "Network" Name: "null" - Type: "Network" Name: "overlay" - Type: "Volume" Name: "local" - Type: "Volume" Name: "localhost:5000/vieux/sshfs:latest" - Type: "Volume" Name: "vieux/sshfs:latest" TLSInfo: description: | Information about the issuer of leaf TLS certificates and the trusted root CA certificate. type: "object" properties: TrustRoot: description: | The root CA certificate(s) that are used to validate leaf TLS certificates. type: "string" CertIssuerSubject: description: The base64-url-safe-encoded raw subject bytes of the issuer. type: "string" CertIssuerPublicKey: description: | The base64-url-safe-encoded raw public key bytes of the issuer. 
type: "string" example: TrustRoot: | -----BEGIN CERTIFICATE----- MIIBajCCARCgAwIBAgIUbYqrLSOSQHoxD8CwG6Bi2PJi9c8wCgYIKoZIzj0EAwIw EzERMA8GA1UEAxMIc3dhcm0tY2EwHhcNMTcwNDI0MjE0MzAwWhcNMzcwNDE5MjE0 MzAwWjATMREwDwYDVQQDEwhzd2FybS1jYTBZMBMGByqGSM49AgEGCCqGSM49AwEH A0IABJk/VyMPYdaqDXJb/VXh5n/1Yuv7iNrxV3Qb3l06XD46seovcDWs3IZNV1lf 3Skyr0ofcchipoiHkXBODojJydSjQjBAMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMB Af8EBTADAQH/MB0GA1UdDgQWBBRUXxuRcnFjDfR/RIAUQab8ZV/n4jAKBggqhkjO PQQDAgNIADBFAiAy+JTe6Uc3KyLCMiqGl2GyWGQqQDEcO3/YG36x7om65AIhAJvz pxv6zFeVEkAEEkqIYi0omA9+CjanB/6Bz4n1uw8H -----END CERTIFICATE----- CertIssuerSubject: "MBMxETAPBgNVBAMTCHN3YXJtLWNh" CertIssuerPublicKey: "MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEmT9XIw9h1qoNclv9VeHmf/Vi6/uI2vFXdBveXTpcPjqx6i9wNazchk1XWV/dKTKvSh9xyGKmiIeRcE4OiMnJ1A==" NodeStatus: description: | NodeStatus represents the status of a node. It provides the current status of the node, as seen by the manager. type: "object" properties: State: $ref: "#/definitions/NodeState" Message: type: "string" example: "" Addr: description: "IP address of the node." type: "string" example: "172.17.0.2" NodeState: description: "NodeState represents the state of a node." type: "string" enum: - "unknown" - "down" - "ready" - "disconnected" example: "ready" ManagerStatus: description: | ManagerStatus represents the status of a manager. It provides the current status of a node's manager component, if the node is a manager. x-nullable: true type: "object" properties: Leader: type: "boolean" default: false example: true Reachability: $ref: "#/definitions/Reachability" Addr: description: | The IP address and port at which the manager is reachable. type: "string" example: "10.0.0.46:2377" Reachability: description: "Reachability represents the reachability of a node." type: "string" enum: - "unknown" - "unreachable" - "reachable" example: "reachable" SwarmSpec: description: "User modifiable swarm configuration." type: "object" properties: Name: description: "Name of the swarm." type: "string" example: "default" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" example: com.example.corp.type: "production" com.example.corp.department: "engineering" Orchestration: description: "Orchestration configuration." type: "object" x-nullable: true properties: TaskHistoryRetentionLimit: description: | The number of historic tasks to keep per instance or node. If negative, never remove completed or failed tasks. type: "integer" format: "int64" example: 10 Raft: description: "Raft configuration." type: "object" properties: SnapshotInterval: description: "The number of log entries between snapshots." type: "integer" format: "uint64" example: 10000 KeepOldSnapshots: description: | The number of snapshots to keep beyond the current snapshot. type: "integer" format: "uint64" LogEntriesForSlowFollowers: description: | The number of log entries to keep around to sync up slow followers after a snapshot is created. type: "integer" format: "uint64" example: 500 ElectionTick: description: | The number of ticks that a follower will wait for a message from the leader before becoming a candidate and starting an election. `ElectionTick` must be greater than `HeartbeatTick`. A tick currently defaults to one second, so these translate directly to seconds currently, but this is NOT guaranteed. type: "integer" example: 3 HeartbeatTick: description: | The number of ticks between heartbeats. Every HeartbeatTick ticks, the leader will send a heartbeat to the followers. 
A tick currently defaults to one second, so these translate directly to seconds currently, but this is NOT guaranteed. type: "integer" example: 1 Dispatcher: description: "Dispatcher configuration." type: "object" x-nullable: true properties: HeartbeatPeriod: description: | The delay for an agent to send a heartbeat to the dispatcher. type: "integer" format: "int64" example: 5000000000 CAConfig: description: "CA configuration." type: "object" x-nullable: true properties: NodeCertExpiry: description: "The duration node certificates are issued for." type: "integer" format: "int64" example: 7776000000000000 ExternalCAs: description: | Configuration for forwarding signing requests to an external certificate authority. type: "array" items: type: "object" properties: Protocol: description: | Protocol for communication with the external CA (currently only `cfssl` is supported). type: "string" enum: - "cfssl" default: "cfssl" URL: description: | URL where certificate signing requests should be sent. type: "string" Options: description: | An object with key/value pairs that are interpreted as protocol-specific options for the external CA driver. type: "object" additionalProperties: type: "string" CACert: description: | The root CA certificate (in PEM format) this external CA uses to issue TLS certificates (assumed to be to the current swarm root CA certificate if not provided). type: "string" SigningCACert: description: | The desired signing CA certificate for all swarm node TLS leaf certificates, in PEM format. type: "string" SigningCAKey: description: | The desired signing CA key for all swarm node TLS leaf certificates, in PEM format. type: "string" ForceRotate: description: | An integer whose purpose is to force swarm to generate a new signing CA certificate and key, if none have been specified in `SigningCACert` and `SigningCAKey` format: "uint64" type: "integer" EncryptionConfig: description: "Parameters related to encryption-at-rest." type: "object" properties: AutoLockManagers: description: | If set, generate a key and use it to lock data stored on the managers. type: "boolean" example: false TaskDefaults: description: "Defaults for creating tasks in this cluster." type: "object" properties: LogDriver: description: | The log driver to use for tasks created in the orchestrator if unspecified by a service. Updating this value only affects new tasks. Existing tasks continue to use their previously configured log driver until recreated. type: "object" properties: Name: description: | The log driver to use as a default for new tasks. type: "string" example: "json-file" Options: description: | Driver-specific options for the selected log driver, specified as key/value pairs. type: "object" additionalProperties: type: "string" example: "max-file": "10" "max-size": "100m" # The Swarm information for `GET /info`. It is the same as `GET /swarm`, but # without `JoinTokens`. ClusterInfo: description: | ClusterInfo represents information about the swarm as is returned by the "/info" endpoint. Join-tokens are not included. x-nullable: true type: "object" properties: ID: description: "The ID of the swarm." type: "string" example: "abajmipo7b4xz5ip2nrla6b11" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: description: | Date and time at which the swarm was initialised in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds.
type: "string" format: "dateTime" example: "2016-08-18T10:44:24.496525531Z" UpdatedAt: description: | Date and time at which the swarm was last updated in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2017-08-09T07:09:37.632105588Z" Spec: $ref: "#/definitions/SwarmSpec" TLSInfo: $ref: "#/definitions/TLSInfo" RootRotationInProgress: description: | Whether there is currently a root CA rotation in progress for the swarm type: "boolean" example: false DataPathPort: description: | DataPathPort specifies the data path port number for data traffic. Acceptable port range is 1024 to 49151. If no port is set or is set to 0, the default port (4789) is used. type: "integer" format: "uint32" default: 4789 example: 4789 DefaultAddrPool: description: | Default Address Pool specifies default subnet pools for global scope networks. type: "array" items: type: "string" format: "CIDR" example: ["10.10.0.0/16", "20.20.0.0/16"] SubnetSize: description: | SubnetSize specifies the subnet size of the networks created from the default subnet pool. type: "integer" format: "uint32" maximum: 29 default: 24 example: 24 JoinTokens: description: | JoinTokens contains the tokens workers and managers need to join the swarm. type: "object" properties: Worker: description: | The token workers can use to join the swarm. type: "string" example: "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-1awxwuwd3z9j1z3puu7rcgdbx" Manager: description: | The token managers can use to join the swarm. type: "string" example: "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-7p73s1dx5in4tatdymyhg9hu2" Swarm: type: "object" allOf: - $ref: "#/definitions/ClusterInfo" - type: "object" properties: JoinTokens: $ref: "#/definitions/JoinTokens" TaskSpec: description: "User modifiable task configuration." type: "object" properties: PluginSpec: type: "object" description: | Plugin spec for the service. *(Experimental release only.)* <p><br /></p> > **Note**: ContainerSpec, NetworkAttachmentSpec, and PluginSpec are > mutually exclusive. PluginSpec is only used when the Runtime field > is set to `plugin`. NetworkAttachmentSpec is used when the Runtime > field is set to `attachment`. properties: Name: description: "The name or 'alias' to use for the plugin." type: "string" Remote: description: "The plugin image reference to use." type: "string" Disabled: description: "Disable the plugin once scheduled." type: "boolean" PluginPrivilege: type: "array" items: description: | Describes a permission accepted by the user upon installing the plugin. type: "object" properties: Name: type: "string" Description: type: "string" Value: type: "array" items: type: "string" ContainerSpec: type: "object" description: | Container spec for the service. <p><br /></p> > **Note**: ContainerSpec, NetworkAttachmentSpec, and PluginSpec are > mutually exclusive. PluginSpec is only used when the Runtime field > is set to `plugin`. NetworkAttachmentSpec is used when the Runtime > field is set to `attachment`. properties: Image: description: "The image name to use for the container" type: "string" Labels: description: "User-defined key/value data." type: "object" additionalProperties: type: "string" Command: description: "The command to be run in the image." type: "array" items: type: "string" Args: description: "Arguments to the command." 
type: "array" items: type: "string" Hostname: description: | The hostname to use for the container, as a valid [RFC 1123](https://tools.ietf.org/html/rfc1123) hostname. type: "string" Env: description: | A list of environment variables in the form `VAR=value`. type: "array" items: type: "string" Dir: description: "The working directory for commands to run in." type: "string" User: description: "The user inside the container." type: "string" Groups: type: "array" description: | A list of additional groups that the container process will run as. items: type: "string" Privileges: type: "object" description: "Security options for the container" properties: CredentialSpec: type: "object" description: "CredentialSpec for managed service account (Windows only)" properties: Config: type: "string" example: "0bt9dmxjvjiqermk6xrop3ekq" description: | Load credential spec from a Swarm Config with the given ID. The specified config must also be present in the Configs field with the Runtime property set. <p><br /></p> > **Note**: `CredentialSpec.File`, `CredentialSpec.Registry`, > and `CredentialSpec.Config` are mutually exclusive. File: type: "string" example: "spec.json" description: | Load credential spec from this file. The file is read by the daemon, and must be present in the `CredentialSpecs` subdirectory in the docker data directory, which defaults to `C:\ProgramData\Docker\` on Windows. For example, specifying `spec.json` loads `C:\ProgramData\Docker\CredentialSpecs\spec.json`. <p><br /></p> > **Note**: `CredentialSpec.File`, `CredentialSpec.Registry`, > and `CredentialSpec.Config` are mutually exclusive. Registry: type: "string" description: | Load credential spec from this value in the Windows registry. The specified registry value must be located in: `HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization\Containers\CredentialSpecs` <p><br /></p> > **Note**: `CredentialSpec.File`, `CredentialSpec.Registry`, > and `CredentialSpec.Config` are mutually exclusive. SELinuxContext: type: "object" description: "SELinux labels of the container" properties: Disable: type: "boolean" description: "Disable SELinux" User: type: "string" description: "SELinux user label" Role: type: "string" description: "SELinux role label" Type: type: "string" description: "SELinux type label" Level: type: "string" description: "SELinux level label" TTY: description: "Whether a pseudo-TTY should be allocated." type: "boolean" OpenStdin: description: "Open `stdin`" type: "boolean" ReadOnly: description: "Mount the container's root filesystem as read only." type: "boolean" Mounts: description: | Specification for mounts to be added to containers created as part of the service. type: "array" items: $ref: "#/definitions/Mount" StopSignal: description: "Signal to stop the container." type: "string" StopGracePeriod: description: | Amount of time to wait for the container to terminate before forcefully killing it. type: "integer" format: "int64" HealthCheck: $ref: "#/definitions/HealthConfig" Hosts: type: "array" description: | A list of hostname/IP mappings to add to the container's `hosts` file. The format of extra hosts is specified in the [hosts(5)](http://man7.org/linux/man-pages/man5/hosts.5.html) man page: IP_address canonical_hostname [aliases...] items: type: "string" DNSConfig: description: | Specification for DNS related configurations in resolver configuration file (`resolv.conf`). type: "object" properties: Nameservers: description: "The IP addresses of the name servers." 
type: "array" items: type: "string" Search: description: "A search list for host-name lookup." type: "array" items: type: "string" Options: description: | A list of internal resolver variables to be modified (e.g., `debug`, `ndots:3`, etc.). type: "array" items: type: "string" Secrets: description: | Secrets contains references to zero or more secrets that will be exposed to the service. type: "array" items: type: "object" properties: File: description: | File represents a specific target that is backed by a file. type: "object" properties: Name: description: | Name represents the final filename in the filesystem. type: "string" UID: description: "UID represents the file UID." type: "string" GID: description: "GID represents the file GID." type: "string" Mode: description: "Mode represents the FileMode of the file." type: "integer" format: "uint32" SecretID: description: | SecretID represents the ID of the specific secret that we're referencing. type: "string" SecretName: description: | SecretName is the name of the secret that this references, but this is just provided for lookup/display purposes. The secret in the reference will be identified by its ID. type: "string" Configs: description: | Configs contains references to zero or more configs that will be exposed to the service. type: "array" items: type: "object" properties: File: description: | File represents a specific target that is backed by a file. <p><br /><p> > **Note**: `Configs.File` and `Configs.Runtime` are mutually exclusive type: "object" properties: Name: description: | Name represents the final filename in the filesystem. type: "string" UID: description: "UID represents the file UID." type: "string" GID: description: "GID represents the file GID." type: "string" Mode: description: "Mode represents the FileMode of the file." type: "integer" format: "uint32" Runtime: description: | Runtime represents a target that is not mounted into the container but is used by the task <p><br /><p> > **Note**: `Configs.File` and `Configs.Runtime` are mutually > exclusive type: "object" ConfigID: description: | ConfigID represents the ID of the specific config that we're referencing. type: "string" ConfigName: description: | ConfigName is the name of the config that this references, but this is just provided for lookup/display purposes. The config in the reference will be identified by its ID. type: "string" Isolation: type: "string" description: | Isolation technology of the containers running the service. (Windows only) enum: - "default" - "process" - "hyperv" Init: description: | Run an init inside the container that forwards signals and reaps processes. This field is omitted if empty, and the default (as configured on the daemon) is used. type: "boolean" x-nullable: true Sysctls: description: | Set kernel namedspaced parameters (sysctls) in the container. The Sysctls option on services accepts the same sysctls as the are supported on containers. Note that while the same sysctls are supported, no guarantees or checks are made about their suitability for a clustered environment, and it's up to the user to determine whether a given sysctl will work properly in a Service. type: "object" additionalProperties: type: "string" # This option is not used by Windows containers CapabilityAdd: type: "array" description: | A list of kernel capabilities to add to the default set for the container. 
items: type: "string" example: - "CAP_NET_RAW" - "CAP_SYS_ADMIN" - "CAP_SYS_CHROOT" - "CAP_SYSLOG" CapabilityDrop: type: "array" description: | A list of kernel capabilities to drop from the default set for the container. items: type: "string" example: - "CAP_NET_RAW" Ulimits: description: | A list of resource limits to set in the container. For example: `{"Name": "nofile", "Soft": 1024, "Hard": 2048}`" type: "array" items: type: "object" properties: Name: description: "Name of ulimit" type: "string" Soft: description: "Soft limit" type: "integer" Hard: description: "Hard limit" type: "integer" NetworkAttachmentSpec: description: | Read-only spec type for non-swarm containers attached to swarm overlay networks. <p><br /></p> > **Note**: ContainerSpec, NetworkAttachmentSpec, and PluginSpec are > mutually exclusive. PluginSpec is only used when the Runtime field > is set to `plugin`. NetworkAttachmentSpec is used when the Runtime > field is set to `attachment`. type: "object" properties: ContainerID: description: "ID of the container represented by this task" type: "string" Resources: description: | Resource requirements which apply to each individual container created as part of the service. type: "object" properties: Limits: description: "Define resources limits." $ref: "#/definitions/Limit" Reservation: description: "Define resources reservation." $ref: "#/definitions/ResourceObject" RestartPolicy: description: | Specification for the restart policy which applies to containers created as part of this service. type: "object" properties: Condition: description: "Condition for restart." type: "string" enum: - "none" - "on-failure" - "any" Delay: description: "Delay between restart attempts." type: "integer" format: "int64" MaxAttempts: description: | Maximum attempts to restart a given container before giving up (default value is 0, which is ignored). type: "integer" format: "int64" default: 0 Window: description: | Windows is the time window used to evaluate the restart policy (default value is 0, which is unbounded). type: "integer" format: "int64" default: 0 Placement: type: "object" properties: Constraints: description: | An array of constraint expressions to limit the set of nodes where a task can be scheduled. Constraint expressions can either use a _match_ (`==`) or _exclude_ (`!=`) rule. Multiple constraints find nodes that satisfy every expression (AND match). Constraints can match node or Docker Engine labels as follows: node attribute | matches | example ---------------------|--------------------------------|----------------------------------------------- `node.id` | Node ID | `node.id==2ivku8v2gvtg4` `node.hostname` | Node hostname | `node.hostname!=node-2` `node.role` | Node role (`manager`/`worker`) | `node.role==manager` `node.platform.os` | Node operating system | `node.platform.os==windows` `node.platform.arch` | Node architecture | `node.platform.arch==x86_64` `node.labels` | User-defined node labels | `node.labels.security==high` `engine.labels` | Docker Engine's labels | `engine.labels.operatingsystem==ubuntu-14.04` `engine.labels` apply to Docker Engine labels like operating system, drivers, etc. Swarm administrators add `node.labels` for operational purposes by using the [`node update endpoint`](#operation/NodeUpdate). 
type: "array" items: type: "string" example: - "node.hostname!=node3.corp.example.com" - "node.role!=manager" - "node.labels.type==production" - "node.platform.os==linux" - "node.platform.arch==x86_64" Preferences: description: | Preferences provide a way to make the scheduler aware of factors such as topology. They are provided in order from highest to lowest precedence. type: "array" items: type: "object" properties: Spread: type: "object" properties: SpreadDescriptor: description: | label descriptor, such as `engine.labels.az`. type: "string" example: - Spread: SpreadDescriptor: "node.labels.datacenter" - Spread: SpreadDescriptor: "node.labels.rack" MaxReplicas: description: | Maximum number of replicas for per node (default value is 0, which is unlimited) type: "integer" format: "int64" default: 0 Platforms: description: | Platforms stores all the platforms that the service's image can run on. This field is used in the platform filter for scheduling. If empty, then the platform filter is off, meaning there are no scheduling restrictions. type: "array" items: $ref: "#/definitions/Platform" ForceUpdate: description: | A counter that triggers an update even if no relevant parameters have been changed. type: "integer" Runtime: description: | Runtime is the type of runtime specified for the task executor. type: "string" Networks: description: "Specifies which networks the service should attach to." type: "array" items: $ref: "#/definitions/NetworkAttachmentConfig" LogDriver: description: | Specifies the log driver to use for tasks created from this spec. If not present, the default one for the swarm will be used, finally falling back to the engine default if not specified. type: "object" properties: Name: type: "string" Options: type: "object" additionalProperties: type: "string" TaskState: type: "string" enum: - "new" - "allocated" - "pending" - "assigned" - "accepted" - "preparing" - "ready" - "starting" - "running" - "complete" - "shutdown" - "failed" - "rejected" - "remove" - "orphaned" Task: type: "object" properties: ID: description: "The ID of the task." type: "string" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" UpdatedAt: type: "string" format: "dateTime" Name: description: "Name of the task." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" Spec: $ref: "#/definitions/TaskSpec" ServiceID: description: "The ID of the service this task is part of." type: "string" Slot: type: "integer" NodeID: description: "The ID of the node that this task is on." type: "string" AssignedGenericResources: $ref: "#/definitions/GenericResources" Status: type: "object" properties: Timestamp: type: "string" format: "dateTime" State: $ref: "#/definitions/TaskState" Message: type: "string" Err: type: "string" ContainerStatus: type: "object" properties: ContainerID: type: "string" PID: type: "integer" ExitCode: type: "integer" DesiredState: $ref: "#/definitions/TaskState" JobIteration: description: | If the Service this Task belongs to is a job-mode service, contains the JobIteration of the Service this Task was created for. Absent if the Task was created for a Replicated or Global Service. 
$ref: "#/definitions/ObjectVersion" example: ID: "0kzzo1i0y4jz6027t0k7aezc7" Version: Index: 71 CreatedAt: "2016-06-07T21:07:31.171892745Z" UpdatedAt: "2016-06-07T21:07:31.376370513Z" Spec: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ServiceID: "9mnpnzenvg8p8tdbtq4wvbkcz" Slot: 1 NodeID: "60gvrl6tm78dmak4yl7srz94v" Status: Timestamp: "2016-06-07T21:07:31.290032978Z" State: "running" Message: "started" ContainerStatus: ContainerID: "e5d62702a1b48d01c3e02ca1e0212a250801fa8d67caca0b6f35919ebc12f035" PID: 677 DesiredState: "running" NetworksAttachments: - Network: ID: "4qvuz4ko70xaltuqbt8956gd1" Version: Index: 18 CreatedAt: "2016-06-07T20:31:11.912919752Z" UpdatedAt: "2016-06-07T21:07:29.955277358Z" Spec: Name: "ingress" Labels: com.docker.swarm.internal: "true" DriverConfiguration: {} IPAMOptions: Driver: {} Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" DriverState: Name: "overlay" Options: com.docker.network.driver.overlay.vxlanid_list: "256" IPAMOptions: Driver: Name: "default" Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" Addresses: - "10.255.0.10/16" AssignedGenericResources: - DiscreteResourceSpec: Kind: "SSD" Value: 3 - NamedResourceSpec: Kind: "GPU" Value: "UUID1" - NamedResourceSpec: Kind: "GPU" Value: "UUID2" ServiceSpec: description: "User modifiable configuration for a service." properties: Name: description: "Name of the service." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" TaskTemplate: $ref: "#/definitions/TaskSpec" Mode: description: "Scheduling mode for the service." type: "object" properties: Replicated: type: "object" properties: Replicas: type: "integer" format: "int64" Global: type: "object" ReplicatedJob: description: | The mode used for services with a finite number of tasks that run to a completed state. type: "object" properties: MaxConcurrent: description: | The maximum number of replicas to run simultaneously. type: "integer" format: "int64" default: 1 TotalCompletions: description: | The total number of replicas desired to reach the Completed state. If unset, will default to the value of `MaxConcurrent` type: "integer" format: "int64" GlobalJob: description: | The mode used for services which run a task to the completed state on each valid node. type: "object" UpdateConfig: description: "Specification for the update strategy of the service." type: "object" properties: Parallelism: description: | Maximum number of tasks to be updated in one iteration (0 means unlimited parallelism). type: "integer" format: "int64" Delay: description: "Amount of time between updates, in nanoseconds." type: "integer" format: "int64" FailureAction: description: | Action to take if an updated task fails to run, or stops running during the update. type: "string" enum: - "continue" - "pause" - "rollback" Monitor: description: | Amount of time to monitor each updated task for failures, in nanoseconds. type: "integer" format: "int64" MaxFailureRatio: description: | The fraction of tasks that may fail during an update before the failure action is invoked, specified as a floating point number between 0 and 1. type: "number" default: 0 Order: description: | The order of operations when rolling out an updated task. Either the old task is shut down before the new task is started, or the new task is started before the old task is shut down. 
type: "string" enum: - "stop-first" - "start-first" RollbackConfig: description: "Specification for the rollback strategy of the service." type: "object" properties: Parallelism: description: | Maximum number of tasks to be rolled back in one iteration (0 means unlimited parallelism). type: "integer" format: "int64" Delay: description: | Amount of time between rollback iterations, in nanoseconds. type: "integer" format: "int64" FailureAction: description: | Action to take if an rolled back task fails to run, or stops running during the rollback. type: "string" enum: - "continue" - "pause" Monitor: description: | Amount of time to monitor each rolled back task for failures, in nanoseconds. type: "integer" format: "int64" MaxFailureRatio: description: | The fraction of tasks that may fail during a rollback before the failure action is invoked, specified as a floating point number between 0 and 1. type: "number" default: 0 Order: description: | The order of operations when rolling back a task. Either the old task is shut down before the new task is started, or the new task is started before the old task is shut down. type: "string" enum: - "stop-first" - "start-first" Networks: description: "Specifies which networks the service should attach to." type: "array" items: $ref: "#/definitions/NetworkAttachmentConfig" EndpointSpec: $ref: "#/definitions/EndpointSpec" EndpointPortConfig: type: "object" properties: Name: type: "string" Protocol: type: "string" enum: - "tcp" - "udp" - "sctp" TargetPort: description: "The port inside the container." type: "integer" PublishedPort: description: "The port on the swarm hosts." type: "integer" PublishMode: description: | The mode in which port is published. <p><br /></p> - "ingress" makes the target port accessible on every node, regardless of whether there is a task for the service running on that node or not. - "host" bypasses the routing mesh and publish the port directly on the swarm node where that service is running. type: "string" enum: - "ingress" - "host" default: "ingress" example: "ingress" EndpointSpec: description: "Properties that can be configured to access and load balance a service." type: "object" properties: Mode: description: | The mode of resolution to use for internal load balancing between tasks. type: "string" enum: - "vip" - "dnsrr" default: "vip" Ports: description: | List of exposed ports that this service is accessible on from the outside. Ports can only be provided if `vip` resolution mode is used. type: "array" items: $ref: "#/definitions/EndpointPortConfig" Service: type: "object" properties: ID: type: "string" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" UpdatedAt: type: "string" format: "dateTime" Spec: $ref: "#/definitions/ServiceSpec" Endpoint: type: "object" properties: Spec: $ref: "#/definitions/EndpointSpec" Ports: type: "array" items: $ref: "#/definitions/EndpointPortConfig" VirtualIPs: type: "array" items: type: "object" properties: NetworkID: type: "string" Addr: type: "string" UpdateStatus: description: "The status of a service update." type: "object" properties: State: type: "string" enum: - "updating" - "paused" - "completed" StartedAt: type: "string" format: "dateTime" CompletedAt: type: "string" format: "dateTime" Message: type: "string" ServiceStatus: description: | The status of the service's tasks. Provided only when requested as part of a ServiceList operation. 
type: "object" properties: RunningTasks: description: | The number of tasks for the service currently in the Running state. type: "integer" format: "uint64" example: 7 DesiredTasks: description: | The number of tasks for the service desired to be running. For replicated services, this is the replica count from the service spec. For global services, this is computed by taking count of all tasks for the service with a Desired State other than Shutdown. type: "integer" format: "uint64" example: 10 CompletedTasks: description: | The number of tasks for a job that are in the Completed state. This field must be cross-referenced with the service type, as the value of 0 may mean the service is not in a job mode, or it may mean the job-mode service has no tasks yet Completed. type: "integer" format: "uint64" JobStatus: description: | The status of the service when it is in one of ReplicatedJob or GlobalJob modes. Absent on Replicated and Global mode services. The JobIteration is an ObjectVersion, but unlike the Service's version, does not need to be sent with an update request. type: "object" properties: JobIteration: description: | JobIteration is a value increased each time a Job is executed, successfully or otherwise. "Executed", in this case, means the job as a whole has been started, not that an individual Task has been launched. A job is "Executed" when its ServiceSpec is updated. JobIteration can be used to disambiguate Tasks belonging to different executions of a job. Though JobIteration will increase with each subsequent execution, it may not necessarily increase by 1, and so JobIteration should not be used to $ref: "#/definitions/ObjectVersion" LastExecution: description: | The last time, as observed by the server, that this job was started. type: "string" format: "dateTime" example: ID: "9mnpnzenvg8p8tdbtq4wvbkcz" Version: Index: 19 CreatedAt: "2016-06-07T21:05:51.880065305Z" UpdatedAt: "2016-06-07T21:07:29.962229872Z" Spec: Name: "hopeful_cori" TaskTemplate: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ForceUpdate: 0 Mode: Replicated: Replicas: 1 UpdateConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 RollbackConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 EndpointSpec: Mode: "vip" Ports: - Protocol: "tcp" TargetPort: 6379 PublishedPort: 30001 Endpoint: Spec: Mode: "vip" Ports: - Protocol: "tcp" TargetPort: 6379 PublishedPort: 30001 Ports: - Protocol: "tcp" TargetPort: 6379 PublishedPort: 30001 VirtualIPs: - NetworkID: "4qvuz4ko70xaltuqbt8956gd1" Addr: "10.255.0.2/16" - NetworkID: "4qvuz4ko70xaltuqbt8956gd1" Addr: "10.255.0.3/16" ImageDeleteResponseItem: type: "object" properties: Untagged: description: "The image ID of an image that was untagged" type: "string" Deleted: description: "The image ID of an image that was deleted" type: "string" ServiceUpdateResponse: type: "object" properties: Warnings: description: "Optional warning messages" type: "array" items: type: "string" example: Warning: "unable to pin image doesnotexist:latest to digest: image library/doesnotexist:latest not found" ContainerSummary: type: "array" items: type: "object" properties: Id: description: "The ID of this container" type: "string" x-go-name: "ID" Names: description: "The names that this container has been given" type: "array" items: type: "string" Image: description: "The name of the image used when creating 
this container" type: "string" ImageID: description: "The ID of the image that this container was created from" type: "string" Command: description: "Command to run when starting the container" type: "string" Created: description: "When the container was created" type: "integer" format: "int64" Ports: description: "The ports exposed by this container" type: "array" items: $ref: "#/definitions/Port" SizeRw: description: "The size of files that have been created or changed by this container" type: "integer" format: "int64" SizeRootFs: description: "The total size of all the files in this container" type: "integer" format: "int64" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" State: description: "The state of this container (e.g. `Exited`)" type: "string" Status: description: "Additional human-readable status of this container (e.g. `Exit 0`)" type: "string" HostConfig: type: "object" properties: NetworkMode: type: "string" NetworkSettings: description: "A summary of the container's network settings" type: "object" properties: Networks: type: "object" additionalProperties: $ref: "#/definitions/EndpointSettings" Mounts: type: "array" items: $ref: "#/definitions/Mount" Driver: description: "Driver represents a driver (network, logging, secrets)." type: "object" required: [Name] properties: Name: description: "Name of the driver." type: "string" x-nullable: false example: "some-driver" Options: description: "Key/value map of driver-specific options." type: "object" x-nullable: false additionalProperties: type: "string" example: OptionA: "value for driver-specific option A" OptionB: "value for driver-specific option B" SecretSpec: type: "object" properties: Name: description: "User-defined name of the secret." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" example: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Data: description: | Base64-url-safe-encoded ([RFC 4648](https://tools.ietf.org/html/rfc4648#section-5)) data to store as secret. This field is only used to _create_ a secret, and is not returned by other endpoints. type: "string" example: "" Driver: description: | Name of the secrets driver used to fetch the secret's value from an external secret store. $ref: "#/definitions/Driver" Templating: description: | Templating driver, if applicable Templating controls whether and how to evaluate the config payload as a template. If no driver is set, no templating is used. $ref: "#/definitions/Driver" Secret: type: "object" properties: ID: type: "string" example: "blt1owaxmitz71s9v5zh81zun" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" example: "2017-07-20T13:55:28.678958722Z" UpdatedAt: type: "string" format: "dateTime" example: "2017-07-20T13:55:28.678958722Z" Spec: $ref: "#/definitions/SecretSpec" ConfigSpec: type: "object" properties: Name: description: "User-defined name of the config." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" Data: description: | Base64-url-safe-encoded ([RFC 4648](https://tools.ietf.org/html/rfc4648#section-5)) config data. type: "string" Templating: description: | Templating driver, if applicable Templating controls whether and how to evaluate the config payload as a template. If no driver is set, no templating is used. 
$ref: "#/definitions/Driver" Config: type: "object" properties: ID: type: "string" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" UpdatedAt: type: "string" format: "dateTime" Spec: $ref: "#/definitions/ConfigSpec" ContainerState: description: | ContainerState stores container's running state. It's part of ContainerJSONBase and will be returned by the "inspect" command. type: "object" properties: Status: description: | String representation of the container state. Can be one of "created", "running", "paused", "restarting", "removing", "exited", or "dead". type: "string" enum: ["created", "running", "paused", "restarting", "removing", "exited", "dead"] example: "running" Running: description: | Whether this container is running. Note that a running container can be _paused_. The `Running` and `Paused` booleans are not mutually exclusive: When pausing a container (on Linux), the freezer cgroup is used to suspend all processes in the container. Freezing the process requires the process to be running. As a result, paused containers are both `Running` _and_ `Paused`. Use the `Status` field instead to determine if a container's state is "running". type: "boolean" example: true Paused: description: "Whether this container is paused." type: "boolean" example: false Restarting: description: "Whether this container is restarting." type: "boolean" example: false OOMKilled: description: | Whether this container has been killed because it ran out of memory. type: "boolean" example: false Dead: type: "boolean" example: false Pid: description: "The process ID of this container" type: "integer" example: 1234 ExitCode: description: "The last exit code of this container" type: "integer" example: 0 Error: type: "string" StartedAt: description: "The time when this container was last started." type: "string" example: "2020-01-06T09:06:59.461876391Z" FinishedAt: description: "The time when this container last exited." type: "string" example: "2020-01-06T09:07:59.461876391Z" Health: x-nullable: true $ref: "#/definitions/Health" SystemVersion: type: "object" description: | Response of Engine API: GET "/version" properties: Platform: type: "object" required: [Name] properties: Name: type: "string" Components: type: "array" description: | Information about system components items: type: "object" x-go-name: ComponentVersion required: [Name, Version] properties: Name: description: | Name of the component type: "string" example: "Engine" Version: description: | Version of the component type: "string" x-nullable: false example: "19.03.12" Details: description: | Key/value pairs of strings with additional information about the component. These values are intended for informational purposes only, and their content is not defined, and not part of the API specification. These messages can be printed by the client as information to the user. type: "object" x-nullable: true Version: description: "The version of the daemon" type: "string" example: "19.03.12" ApiVersion: description: | The default (and highest) API version that is supported by the daemon type: "string" example: "1.40" MinAPIVersion: description: | The minimum API version that is supported by the daemon type: "string" example: "1.12" GitCommit: description: | The Git commit of the source code that was used to build the daemon type: "string" example: "48a66213fe" GoVersion: description: | The version Go used to compile the daemon, and the version of the Go runtime in use. 
type: "string" example: "go1.13.14" Os: description: | The operating system that the daemon is running on ("linux" or "windows") type: "string" example: "linux" Arch: description: | The architecture that the daemon is running on type: "string" example: "amd64" KernelVersion: description: | The kernel version (`uname -r`) that the daemon is running on. This field is omitted when empty. type: "string" example: "4.19.76-linuxkit" Experimental: description: | Indicates if the daemon is started with experimental features enabled. This field is omitted when empty / false. type: "boolean" example: true BuildTime: description: | The date and time that the daemon was compiled. type: "string" example: "2020-06-22T15:49:27.000000000+00:00" SystemInfo: type: "object" properties: ID: description: | Unique identifier of the daemon. <p><br /></p> > **Note**: The format of the ID itself is not part of the API, and > should not be considered stable. type: "string" example: "7TRN:IPZB:QYBB:VPBQ:UMPP:KARE:6ZNR:XE6T:7EWV:PKF4:ZOJD:TPYS" Containers: description: "Total number of containers on the host." type: "integer" example: 14 ContainersRunning: description: | Number of containers with status `"running"`. type: "integer" example: 3 ContainersPaused: description: | Number of containers with status `"paused"`. type: "integer" example: 1 ContainersStopped: description: | Number of containers with status `"stopped"`. type: "integer" example: 10 Images: description: | Total number of images on the host. Both _tagged_ and _untagged_ (dangling) images are counted. type: "integer" example: 508 Driver: description: "Name of the storage driver in use." type: "string" example: "overlay2" DriverStatus: description: | Information specific to the storage driver, provided as "label" / "value" pairs. This information is provided by the storage driver, and formatted in a way consistent with the output of `docker info` on the command line. <p><br /></p> > **Note**: The information returned in this field, including the > formatting of values and labels, should not be considered stable, > and may change without notice. type: "array" items: type: "array" items: type: "string" example: - ["Backing Filesystem", "extfs"] - ["Supports d_type", "true"] - ["Native Overlay Diff", "true"] DockerRootDir: description: | Root directory of persistent Docker state. Defaults to `/var/lib/docker` on Linux, and `C:\ProgramData\docker` on Windows. type: "string" example: "/var/lib/docker" Plugins: $ref: "#/definitions/PluginsInfo" MemoryLimit: description: "Indicates if the host has memory limit support enabled." type: "boolean" example: true SwapLimit: description: "Indicates if the host has memory swap limit support enabled." type: "boolean" example: true KernelMemory: description: | Indicates if the host has kernel memory limit support enabled. <p><br /></p> > **Deprecated**: This field is deprecated as the kernel 5.4 deprecated > `kmem.limit_in_bytes`. type: "boolean" example: true CpuCfsPeriod: description: | Indicates if CPU CFS(Completely Fair Scheduler) period is supported by the host. type: "boolean" example: true CpuCfsQuota: description: | Indicates if CPU CFS(Completely Fair Scheduler) quota is supported by the host. type: "boolean" example: true CPUShares: description: | Indicates if CPU Shares limiting is supported by the host. type: "boolean" example: true CPUSet: description: | Indicates if CPUsets (cpuset.cpus, cpuset.mems) are supported by the host. 
See [cpuset(7)](https://www.kernel.org/doc/Documentation/cgroup-v1/cpusets.txt) type: "boolean" example: true PidsLimit: description: "Indicates if the host kernel has PID limit support enabled." type: "boolean" example: true OomKillDisable: description: "Indicates if OOM killer disable is supported on the host." type: "boolean" IPv4Forwarding: description: "Indicates IPv4 forwarding is enabled." type: "boolean" example: true BridgeNfIptables: description: "Indicates if `bridge-nf-call-iptables` is available on the host." type: "boolean" example: true BridgeNfIp6tables: description: "Indicates if `bridge-nf-call-ip6tables` is available on the host." type: "boolean" example: true Debug: description: | Indicates if the daemon is running in debug-mode / with debug-level logging enabled. type: "boolean" example: true NFd: description: | The total number of file Descriptors in use by the daemon process. This information is only returned if debug-mode is enabled. type: "integer" example: 64 NGoroutines: description: | The number of goroutines that currently exist. This information is only returned if debug-mode is enabled. type: "integer" example: 174 SystemTime: description: | Current system-time in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" example: "2017-08-08T20:28:29.06202363Z" LoggingDriver: description: | The logging driver to use as a default for new containers. type: "string" CgroupDriver: description: | The driver to use for managing cgroups. type: "string" enum: ["cgroupfs", "systemd", "none"] default: "cgroupfs" example: "cgroupfs" CgroupVersion: description: | The version of the cgroup. type: "string" enum: ["1", "2"] default: "1" example: "1" NEventsListener: description: "Number of event listeners subscribed." type: "integer" example: 30 KernelVersion: description: | Kernel version of the host. On Linux, this information obtained from `uname`. On Windows this information is queried from the <kbd>HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\</kbd> registry value, for example _"10.0 14393 (14393.1198.amd64fre.rs1_release_sec.170427-1353)"_. type: "string" example: "4.9.38-moby" OperatingSystem: description: | Name of the host's operating system, for example: "Ubuntu 16.04.2 LTS" or "Windows Server 2016 Datacenter" type: "string" example: "Alpine Linux v3.5" OSVersion: description: | Version of the host's operating system <p><br /></p> > **Note**: The information returned in this field, including its > very existence, and the formatting of values, should not be considered > stable, and may change without notice. type: "string" example: "16.04" OSType: description: | Generic type of the operating system of the host, as returned by the Go runtime (`GOOS`). Currently returned values are "linux" and "windows". A full list of possible values can be found in the [Go documentation](https://golang.org/doc/install/source#environment). type: "string" example: "linux" Architecture: description: | Hardware architecture of the host, as returned by the Go runtime (`GOARCH`). A full list of possible values can be found in the [Go documentation](https://golang.org/doc/install/source#environment). type: "string" example: "x86_64" NCPU: description: | The number of logical CPUs usable by the daemon. The number of available CPUs is checked by querying the operating system when the daemon starts. Changes to operating system CPU allocation after the daemon is started are not reflected. 
type: "integer" example: 4 MemTotal: description: | Total amount of physical memory available on the host, in bytes. type: "integer" format: "int64" example: 2095882240 IndexServerAddress: description: | Address / URL of the index server that is used for image search, and as a default for user authentication for Docker Hub and Docker Cloud. default: "https://index.docker.io/v1/" type: "string" example: "https://index.docker.io/v1/" RegistryConfig: $ref: "#/definitions/RegistryServiceConfig" GenericResources: $ref: "#/definitions/GenericResources" HttpProxy: description: | HTTP-proxy configured for the daemon. This value is obtained from the [`HTTP_PROXY`](https://www.gnu.org/software/wget/manual/html_node/Proxies.html) environment variable. Credentials ([user info component](https://tools.ietf.org/html/rfc3986#section-3.2.1)) in the proxy URL are masked in the API response. Containers do not automatically inherit this configuration. type: "string" example: "http://xxxxx:[email protected]:8080" HttpsProxy: description: | HTTPS-proxy configured for the daemon. This value is obtained from the [`HTTPS_PROXY`](https://www.gnu.org/software/wget/manual/html_node/Proxies.html) environment variable. Credentials ([user info component](https://tools.ietf.org/html/rfc3986#section-3.2.1)) in the proxy URL are masked in the API response. Containers do not automatically inherit this configuration. type: "string" example: "https://xxxxx:[email protected]:4443" NoProxy: description: | Comma-separated list of domain extensions for which no proxy should be used. This value is obtained from the [`NO_PROXY`](https://www.gnu.org/software/wget/manual/html_node/Proxies.html) environment variable. Containers do not automatically inherit this configuration. type: "string" example: "*.local, 169.254/16" Name: description: "Hostname of the host." type: "string" example: "node5.corp.example.com" Labels: description: | User-defined labels (key/value metadata) as set on the daemon. <p><br /></p> > **Note**: When part of a Swarm, nodes can both have _daemon_ labels, > set through the daemon configuration, and _node_ labels, set from a > manager node in the Swarm. Node labels are not included in this > field. Node labels can be retrieved using the `/nodes/(id)` endpoint > on a manager node in the Swarm. type: "array" items: type: "string" example: ["storage=ssd", "production"] ExperimentalBuild: description: | Indicates if experimental features are enabled on the daemon. type: "boolean" example: true ServerVersion: description: | Version string of the daemon. > **Note**: the [standalone Swarm API](https://docs.docker.com/swarm/swarm-api/) > returns the Swarm version instead of the daemon version, for example > `swarm/1.2.8`. type: "string" example: "17.06.0-ce" ClusterStore: description: | URL of the distributed storage backend. The storage backend is used for multihost networking (to store network and endpoint information) and by the node discovery mechanism. <p><br /></p> > **Deprecated**: This field is only propagated when using standalone Swarm > mode, and overlay networking using an external k/v store. Overlay > networks with Swarm mode enabled use the built-in raft store, and > this field will be empty. type: "string" example: "consul://consul.corp.example.com:8600/some/path" ClusterAdvertise: description: | The network endpoint that the Engine advertises for the purpose of node discovery. ClusterAdvertise is a `host:port` combination on which the daemon is reachable by other hosts. 
<p><br /></p> > **Deprecated**: This field is only propagated when using standalone Swarm > mode, and overlay networking using an external k/v store. Overlay > networks with Swarm mode enabled use the built-in raft store, and > this field will be empty. type: "string" example: "node5.corp.example.com:8000" Runtimes: description: | List of [OCI compliant](https://github.com/opencontainers/runtime-spec) runtimes configured on the daemon. Keys hold the "name" used to reference the runtime. The Docker daemon relies on an OCI compliant runtime (invoked via the `containerd` daemon) as its interface to the Linux kernel namespaces, cgroups, and SELinux. The default runtime is `runc`, and automatically configured. Additional runtimes can be configured by the user and will be listed here. type: "object" additionalProperties: $ref: "#/definitions/Runtime" default: runc: path: "runc" example: runc: path: "runc" runc-master: path: "/go/bin/runc" custom: path: "/usr/local/bin/my-oci-runtime" runtimeArgs: ["--debug", "--systemd-cgroup=false"] DefaultRuntime: description: | Name of the default OCI runtime that is used when starting containers. The default can be overridden per-container at create time. type: "string" default: "runc" example: "runc" Swarm: $ref: "#/definitions/SwarmInfo" LiveRestoreEnabled: description: | Indicates if live restore is enabled. If enabled, containers are kept running when the daemon is shutdown or upon daemon start if running containers are detected. type: "boolean" default: false example: false Isolation: description: | Represents the isolation technology to use as a default for containers. The supported values are platform-specific. If no isolation value is specified on daemon start, on Windows client, the default is `hyperv`, and on Windows server, the default is `process`. This option is currently not used on other platforms. default: "default" type: "string" enum: - "default" - "hyperv" - "process" InitBinary: description: | Name and, optional, path of the `docker-init` binary. If the path is omitted, the daemon searches the host's `$PATH` for the binary and uses the first result. type: "string" example: "docker-init" ContainerdCommit: $ref: "#/definitions/Commit" RuncCommit: $ref: "#/definitions/Commit" InitCommit: $ref: "#/definitions/Commit" SecurityOptions: description: | List of security features that are enabled on the daemon, such as apparmor, seccomp, SELinux, user-namespaces (userns), and rootless. Additional configuration options for each security feature may be present, and are included as a comma-separated list of key/value pairs. type: "array" items: type: "string" example: - "name=apparmor" - "name=seccomp,profile=default" - "name=selinux" - "name=userns" - "name=rootless" ProductLicense: description: | Reports a summary of the product license on the daemon. If a commercial license has been applied to the daemon, information such as number of nodes, and expiration are included. type: "string" example: "Community Engine" DefaultAddressPools: description: | List of custom default address pools for local networks, which can be specified in the daemon.json file or dockerd option. Example: a Base "10.10.0.0/16" with Size 24 will define the set of 256 10.10.[0-255].0/24 address pools. 
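# The Base/Size arithmetic from the address-pool example above, as a self-contained, IPv4-only
# Go sketch (standard library only; the pool boundaries are computed, not queried from a daemon):
#
# ```go
# package main
#
# import (
#     "fmt"
#     "net"
# )
#
# // subnets carves base into consecutive networks of the given prefix length,
# // which is how a default address pool's Base/Size pair is interpreted.
# func subnets(base string, size int) ([]string, error) {
#     _, ipnet, err := net.ParseCIDR(base)
#     if err != nil {
#         return nil, err
#     }
#     ones, bits := ipnet.Mask.Size()
#     if bits != 32 || size < ones || size > 32 {
#         return nil, fmt.Errorf("unsupported base/size: %s/%d", base, size)
#     }
#     count := 1 << uint(size-ones)      // e.g. a /16 split into /24s -> 2^8 = 256 pools
#     step := uint32(1) << uint(32-size) // addresses per pool
#     ip := ipnet.IP.To4()
#     start := uint32(ip[0])<<24 | uint32(ip[1])<<16 | uint32(ip[2])<<8 | uint32(ip[3])
#     out := make([]string, 0, count)
#     for i := 0; i < count; i++ {
#         a := start + uint32(i)*step
#         out = append(out, fmt.Sprintf("%d.%d.%d.%d/%d", byte(a>>24), byte(a>>16), byte(a>>8), byte(a), size))
#     }
#     return out, nil
# }
#
# func main() {
#     pools, err := subnets("10.10.0.0/16", 24)
#     if err != nil {
#         panic(err)
#     }
#     fmt.Println(len(pools), pools[0], pools[len(pools)-1]) // 256 10.10.0.0/24 10.10.255.0/24
# }
# ```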
type: "array" items: type: "object" properties: Base: description: "The network address in CIDR format" type: "string" example: "10.10.0.0/16" Size: description: "The network pool size" type: "integer" example: "24" Warnings: description: | List of warnings / informational messages about missing features, or issues related to the daemon configuration. These messages can be printed by the client as information to the user. type: "array" items: type: "string" example: - "WARNING: No memory limit support" - "WARNING: bridge-nf-call-iptables is disabled" - "WARNING: bridge-nf-call-ip6tables is disabled" # PluginsInfo is a temp struct holding Plugins name # registered with docker daemon. It is used by Info struct PluginsInfo: description: | Available plugins per type. <p><br /></p> > **Note**: Only unmanaged (V1) plugins are included in this list. > V1 plugins are "lazily" loaded, and are not returned in this list > if there is no resource using the plugin. type: "object" properties: Volume: description: "Names of available volume-drivers, and network-driver plugins." type: "array" items: type: "string" example: ["local"] Network: description: "Names of available network-drivers, and network-driver plugins." type: "array" items: type: "string" example: ["bridge", "host", "ipvlan", "macvlan", "null", "overlay"] Authorization: description: "Names of available authorization plugins." type: "array" items: type: "string" example: ["img-authz-plugin", "hbm"] Log: description: "Names of available logging-drivers, and logging-driver plugins." type: "array" items: type: "string" example: ["awslogs", "fluentd", "gcplogs", "gelf", "journald", "json-file", "logentries", "splunk", "syslog"] RegistryServiceConfig: description: | RegistryServiceConfig stores daemon registry services configuration. type: "object" x-nullable: true properties: AllowNondistributableArtifactsCIDRs: description: | List of IP ranges to which nondistributable artifacts can be pushed, using the CIDR syntax [RFC 4632](https://tools.ietf.org/html/4632). Some images (for example, Windows base images) contain artifacts whose distribution is restricted by license. When these images are pushed to a registry, restricted artifacts are not included. This configuration override this behavior, and enables the daemon to push nondistributable artifacts to all registries whose resolved IP address is within the subnet described by the CIDR syntax. This option is useful when pushing images containing nondistributable artifacts to a registry on an air-gapped network so hosts on that network can pull the images without connecting to another server. > **Warning**: Nondistributable artifacts typically have restrictions > on how and where they can be distributed and shared. Only use this > feature to push artifacts to private registries and ensure that you > are in compliance with any terms that cover redistributing > nondistributable artifacts. type: "array" items: type: "string" example: ["::1/128", "127.0.0.0/8"] AllowNondistributableArtifactsHostnames: description: | List of registry hostnames to which nondistributable artifacts can be pushed, using the format `<hostname>[:<port>]` or `<IP address>[:<port>]`. Some images (for example, Windows base images) contain artifacts whose distribution is restricted by license. When these images are pushed to a registry, restricted artifacts are not included. This configuration override this behavior for the specified registries. 
This option is useful when pushing images containing nondistributable artifacts to a registry on an air-gapped network so hosts on that network can pull the images without connecting to another server. > **Warning**: Nondistributable artifacts typically have restrictions > on how and where they can be distributed and shared. Only use this > feature to push artifacts to private registries and ensure that you > are in compliance with any terms that cover redistributing > nondistributable artifacts. type: "array" items: type: "string" example: ["registry.internal.corp.example.com:3000", "[2001:db8:a0b:12f0::1]:443"] InsecureRegistryCIDRs: description: | List of IP ranges of insecure registries, using the CIDR syntax ([RFC 4632](https://tools.ietf.org/html/4632)). Insecure registries accept un-encrypted (HTTP) and/or untrusted (HTTPS with certificates from unknown CAs) communication. By default, local registries (`127.0.0.0/8`) are configured as insecure. All other registries are secure. Communicating with an insecure registry is not possible if the daemon assumes that registry is secure. This configuration override this behavior, insecure communication with registries whose resolved IP address is within the subnet described by the CIDR syntax. Registries can also be marked insecure by hostname. Those registries are listed under `IndexConfigs` and have their `Secure` field set to `false`. > **Warning**: Using this option can be useful when running a local > registry, but introduces security vulnerabilities. This option > should therefore ONLY be used for testing purposes. For increased > security, users should add their CA to their system's list of trusted > CAs instead of enabling this option. type: "array" items: type: "string" example: ["::1/128", "127.0.0.0/8"] IndexConfigs: type: "object" additionalProperties: $ref: "#/definitions/IndexInfo" example: "127.0.0.1:5000": "Name": "127.0.0.1:5000" "Mirrors": [] "Secure": false "Official": false "[2001:db8:a0b:12f0::1]:80": "Name": "[2001:db8:a0b:12f0::1]:80" "Mirrors": [] "Secure": false "Official": false "docker.io": Name: "docker.io" Mirrors: ["https://hub-mirror.corp.example.com:5000/"] Secure: true Official: true "registry.internal.corp.example.com:3000": Name: "registry.internal.corp.example.com:3000" Mirrors: [] Secure: false Official: false Mirrors: description: | List of registry URLs that act as a mirror for the official (`docker.io`) registry. type: "array" items: type: "string" example: - "https://hub-mirror.corp.example.com:5000/" - "https://[2001:db8:a0b:12f0::1]/" IndexInfo: description: IndexInfo contains information about a registry. type: "object" x-nullable: true properties: Name: description: | Name of the registry, such as "docker.io". type: "string" example: "docker.io" Mirrors: description: | List of mirrors, expressed as URIs. type: "array" items: type: "string" example: - "https://hub-mirror.corp.example.com:5000/" - "https://registry-2.docker.io/" - "https://registry-3.docker.io/" Secure: description: | Indicates if the registry is part of the list of insecure registries. If `false`, the registry is insecure. Insecure registries accept un-encrypted (HTTP) and/or untrusted (HTTPS with certificates from unknown CAs) communication. > **Warning**: Insecure registries can be useful when running a local > registry. However, because its use creates security vulnerabilities > it should ONLY be enabled for testing purposes. 
For increased > security, users should add their CA to their system's list of > trusted CAs instead of enabling this option. type: "boolean" example: true Official: description: | Indicates whether this is an official registry (i.e., Docker Hub / docker.io) type: "boolean" example: true Runtime: description: | Runtime describes an [OCI compliant](https://github.com/opencontainers/runtime-spec) runtime. The runtime is invoked by the daemon via the `containerd` daemon. OCI runtimes act as an interface to the Linux kernel namespaces, cgroups, and SELinux. type: "object" properties: path: description: | Name and, optional, path, of the OCI executable binary. If the path is omitted, the daemon searches the host's `$PATH` for the binary and uses the first result. type: "string" example: "/usr/local/bin/my-oci-runtime" runtimeArgs: description: | List of command-line arguments to pass to the runtime when invoked. type: "array" x-nullable: true items: type: "string" example: ["--debug", "--systemd-cgroup=false"] Commit: description: | Commit holds the Git-commit (SHA1) that a binary was built from, as reported in the version-string of external tools, such as `containerd`, or `runC`. type: "object" properties: ID: description: "Actual commit ID of external tool." type: "string" example: "cfb82a876ecc11b5ca0977d1733adbe58599088a" Expected: description: | Commit ID of external tool expected by dockerd as set at build time. type: "string" example: "2d41c047c83e09a6d61d464906feb2a2f3c52aa4" SwarmInfo: description: | Represents generic information about swarm. type: "object" properties: NodeID: description: "Unique identifier of for this node in the swarm." type: "string" default: "" example: "k67qz4598weg5unwwffg6z1m1" NodeAddr: description: | IP address at which this node can be reached by other nodes in the swarm. type: "string" default: "" example: "10.0.0.46" LocalNodeState: $ref: "#/definitions/LocalNodeState" ControlAvailable: type: "boolean" default: false example: true Error: type: "string" default: "" RemoteManagers: description: | List of ID's and addresses of other managers in the swarm. type: "array" default: null x-nullable: true items: $ref: "#/definitions/PeerNode" example: - NodeID: "71izy0goik036k48jg985xnds" Addr: "10.0.0.158:2377" - NodeID: "79y6h1o4gv8n120drcprv5nmc" Addr: "10.0.0.159:2377" - NodeID: "k67qz4598weg5unwwffg6z1m1" Addr: "10.0.0.46:2377" Nodes: description: "Total number of nodes in the swarm." type: "integer" x-nullable: true example: 4 Managers: description: "Total number of managers in the swarm." type: "integer" x-nullable: true example: 3 Cluster: $ref: "#/definitions/ClusterInfo" LocalNodeState: description: "Current local status of this node." type: "string" default: "" enum: - "" - "inactive" - "pending" - "active" - "error" - "locked" example: "active" PeerNode: description: "Represents a peer-node in the swarm" properties: NodeID: description: "Unique identifier of for this node in the swarm." type: "string" Addr: description: | IP address and ports at which this node can be reached. type: "string" NetworkAttachmentConfig: description: | Specifies how a service should be attached to a particular network. type: "object" properties: Target: description: | The target network for attachment. Must be a network name or ID. type: "string" Aliases: description: | Discoverable alternate names for the service on this network. type: "array" items: type: "string" DriverOpts: description: | Driver attachment options for the network target. 
type: "object" additionalProperties: type: "string" paths: /containers/json: get: summary: "List containers" description: | Returns a list of containers. For details on the format, see the [inspect endpoint](#operation/ContainerInspect). Note that it uses a different, smaller representation of a container than inspecting a single container. For example, the list of linked containers is not propagated . operationId: "ContainerList" produces: - "application/json" parameters: - name: "all" in: "query" description: | Return all containers. By default, only running containers are shown. type: "boolean" default: false - name: "limit" in: "query" description: | Return this number of most recently created containers, including non-running ones. type: "integer" - name: "size" in: "query" description: | Return the size of container as fields `SizeRw` and `SizeRootFs`. type: "boolean" default: false - name: "filters" in: "query" description: | Filters to process on the container list, encoded as JSON (a `map[string][]string`). For example, `{"status": ["paused"]}` will only return paused containers. Available filters: - `ancestor`=(`<image-name>[:<tag>]`, `<image id>`, or `<image@digest>`) - `before`=(`<container id>` or `<container name>`) - `expose`=(`<port>[/<proto>]`|`<startport-endport>/[<proto>]`) - `exited=<int>` containers with exit code of `<int>` - `health`=(`starting`|`healthy`|`unhealthy`|`none`) - `id=<ID>` a container's ID - `isolation=`(`default`|`process`|`hyperv`) (Windows daemon only) - `is-task=`(`true`|`false`) - `label=key` or `label="key=value"` of a container label - `name=<name>` a container's name - `network`=(`<network id>` or `<network name>`) - `publish`=(`<port>[/<proto>]`|`<startport-endport>/[<proto>]`) - `since`=(`<container id>` or `<container name>`) - `status=`(`created`|`restarting`|`running`|`removing`|`paused`|`exited`|`dead`) - `volume`=(`<volume name>` or `<mount point destination>`) type: "string" responses: 200: description: "no error" schema: $ref: "#/definitions/ContainerSummary" examples: application/json: - Id: "8dfafdbc3a40" Names: - "/boring_feynman" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 1" Created: 1367854155 State: "Exited" Status: "Exit 0" Ports: - PrivatePort: 2222 PublicPort: 3333 Type: "tcp" Labels: com.example.vendor: "Acme" com.example.license: "GPL" com.example.version: "1.0" SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "2cdc4edb1ded3631c81f57966563e5c8525b81121bb3706a9a9a3ae102711f3f" Gateway: "172.17.0.1" IPAddress: "172.17.0.2" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:02" Mounts: - Name: "fac362...80535" Source: "/data" Destination: "/data" Driver: "local" Mode: "ro,Z" RW: false Propagation: "" - Id: "9cd87474be90" Names: - "/coolName" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 222222" Created: 1367854155 State: "Exited" Status: "Exit 0" Ports: [] Labels: {} SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "88eaed7b37b38c2a3f0c4bc796494fdf51b270c2d22656412a2ca5d559a64d7a" Gateway: "172.17.0.1" IPAddress: "172.17.0.8" IPPrefixLen: 16 IPv6Gateway: "" 
GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:08" Mounts: [] - Id: "3176a2479c92" Names: - "/sleepy_dog" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 3333333333333333" Created: 1367854154 State: "Exited" Status: "Exit 0" Ports: [] Labels: {} SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "8b27c041c30326d59cd6e6f510d4f8d1d570a228466f956edf7815508f78e30d" Gateway: "172.17.0.1" IPAddress: "172.17.0.6" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:06" Mounts: [] - Id: "4cb07b47f9fb" Names: - "/running_cat" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 444444444444444444444444444444444" Created: 1367854152 State: "Exited" Status: "Exit 0" Ports: [] Labels: {} SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "d91c7b2f0644403d7ef3095985ea0e2370325cd2332ff3a3225c4247328e66e9" Gateway: "172.17.0.1" IPAddress: "172.17.0.5" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:05" Mounts: [] 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Container"] /containers/create: post: summary: "Create a container" operationId: "ContainerCreate" consumes: - "application/json" - "application/octet-stream" produces: - "application/json" parameters: - name: "name" in: "query" description: | Assign the specified name to the container. Must match `/?[a-zA-Z0-9][a-zA-Z0-9_.-]+`. 
type: "string" pattern: "^/?[a-zA-Z0-9][a-zA-Z0-9_.-]+$" - name: "body" in: "body" description: "Container to create" schema: allOf: - $ref: "#/definitions/ContainerConfig" - type: "object" properties: HostConfig: $ref: "#/definitions/HostConfig" NetworkingConfig: $ref: "#/definitions/NetworkingConfig" example: Hostname: "" Domainname: "" User: "" AttachStdin: false AttachStdout: true AttachStderr: true Tty: false OpenStdin: false StdinOnce: false Env: - "FOO=bar" - "BAZ=quux" Cmd: - "date" Entrypoint: "" Image: "ubuntu" Labels: com.example.vendor: "Acme" com.example.license: "GPL" com.example.version: "1.0" Volumes: /volumes/data: {} WorkingDir: "" NetworkDisabled: false MacAddress: "12:34:56:78:9a:bc" ExposedPorts: 22/tcp: {} StopSignal: "SIGTERM" StopTimeout: 10 HostConfig: Binds: - "/tmp:/tmp" Links: - "redis3:redis" Memory: 0 MemorySwap: 0 MemoryReservation: 0 KernelMemory: 0 NanoCpus: 500000 CpuPercent: 80 CpuShares: 512 CpuPeriod: 100000 CpuRealtimePeriod: 1000000 CpuRealtimeRuntime: 10000 CpuQuota: 50000 CpusetCpus: "0,1" CpusetMems: "0,1" MaximumIOps: 0 MaximumIOBps: 0 BlkioWeight: 300 BlkioWeightDevice: - {} BlkioDeviceReadBps: - {} BlkioDeviceReadIOps: - {} BlkioDeviceWriteBps: - {} BlkioDeviceWriteIOps: - {} DeviceRequests: - Driver: "nvidia" Count: -1 DeviceIDs": ["0", "1", "GPU-fef8089b-4820-abfc-e83e-94318197576e"] Capabilities: [["gpu", "nvidia", "compute"]] Options: property1: "string" property2: "string" MemorySwappiness: 60 OomKillDisable: false OomScoreAdj: 500 PidMode: "" PidsLimit: 0 PortBindings: 22/tcp: - HostPort: "11022" PublishAllPorts: false Privileged: false ReadonlyRootfs: false Dns: - "8.8.8.8" DnsOptions: - "" DnsSearch: - "" VolumesFrom: - "parent" - "other:ro" CapAdd: - "NET_ADMIN" CapDrop: - "MKNOD" GroupAdd: - "newgroup" RestartPolicy: Name: "" MaximumRetryCount: 0 AutoRemove: true NetworkMode: "bridge" Devices: [] Ulimits: - {} LogConfig: Type: "json-file" Config: {} SecurityOpt: [] StorageOpt: {} CgroupParent: "" VolumeDriver: "" ShmSize: 67108864 NetworkingConfig: EndpointsConfig: isolated_nw: IPAMConfig: IPv4Address: "172.20.30.33" IPv6Address: "2001:db8:abcd::3033" LinkLocalIPs: - "169.254.34.68" - "fe80::3468" Links: - "container_1" - "container_2" Aliases: - "server_x" - "server_y" required: true responses: 201: description: "Container created successfully" schema: type: "object" title: "ContainerCreateResponse" description: "OK response to ContainerCreate operation" required: [Id, Warnings] properties: Id: description: "The ID of the created container" type: "string" x-nullable: false Warnings: description: "Warnings encountered when creating the container" type: "array" x-nullable: false items: type: "string" examples: application/json: Id: "e90e34656806" Warnings: [] 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such image" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such image: c2ada9df5af8" 409: description: "conflict" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Container"] /containers/{id}/json: get: summary: "Inspect a container" description: "Return low-level information about a container." 
operationId: "ContainerInspect" produces: - "application/json" responses: 200: description: "no error" schema: type: "object" title: "ContainerInspectResponse" properties: Id: description: "The ID of the container" type: "string" Created: description: "The time the container was created" type: "string" Path: description: "The path to the command being run" type: "string" Args: description: "The arguments to the command being run" type: "array" items: type: "string" State: x-nullable: true $ref: "#/definitions/ContainerState" Image: description: "The container's image ID" type: "string" ResolvConfPath: type: "string" HostnamePath: type: "string" HostsPath: type: "string" LogPath: type: "string" Name: type: "string" RestartCount: type: "integer" Driver: type: "string" Platform: type: "string" MountLabel: type: "string" ProcessLabel: type: "string" AppArmorProfile: type: "string" ExecIDs: description: "IDs of exec instances that are running in the container." type: "array" items: type: "string" x-nullable: true HostConfig: $ref: "#/definitions/HostConfig" GraphDriver: $ref: "#/definitions/GraphDriverData" SizeRw: description: | The size of files that have been created or changed by this container. type: "integer" format: "int64" SizeRootFs: description: "The total size of all the files in this container." type: "integer" format: "int64" Mounts: type: "array" items: $ref: "#/definitions/MountPoint" Config: $ref: "#/definitions/ContainerConfig" NetworkSettings: $ref: "#/definitions/NetworkSettings" examples: application/json: AppArmorProfile: "" Args: - "-c" - "exit 9" Config: AttachStderr: true AttachStdin: false AttachStdout: true Cmd: - "/bin/sh" - "-c" - "exit 9" Domainname: "" Env: - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" Healthcheck: Test: ["CMD-SHELL", "exit 0"] Hostname: "ba033ac44011" Image: "ubuntu" Labels: com.example.vendor: "Acme" com.example.license: "GPL" com.example.version: "1.0" MacAddress: "" NetworkDisabled: false OpenStdin: false StdinOnce: false Tty: false User: "" Volumes: /volumes/data: {} WorkingDir: "" StopSignal: "SIGTERM" StopTimeout: 10 Created: "2015-01-06T15:47:31.485331387Z" Driver: "devicemapper" ExecIDs: - "b35395de42bc8abd327f9dd65d913b9ba28c74d2f0734eeeae84fa1c616a0fca" - "3fc1232e5cd20c8de182ed81178503dc6437f4e7ef12b52cc5e8de020652f1c4" HostConfig: MaximumIOps: 0 MaximumIOBps: 0 BlkioWeight: 0 BlkioWeightDevice: - {} BlkioDeviceReadBps: - {} BlkioDeviceWriteBps: - {} BlkioDeviceReadIOps: - {} BlkioDeviceWriteIOps: - {} ContainerIDFile: "" CpusetCpus: "" CpusetMems: "" CpuPercent: 80 CpuShares: 0 CpuPeriod: 100000 CpuRealtimePeriod: 1000000 CpuRealtimeRuntime: 10000 Devices: [] DeviceRequests: - Driver: "nvidia" Count: -1 DeviceIDs": ["0", "1", "GPU-fef8089b-4820-abfc-e83e-94318197576e"] Capabilities: [["gpu", "nvidia", "compute"]] Options: property1: "string" property2: "string" IpcMode: "" LxcConf: [] Memory: 0 MemorySwap: 0 MemoryReservation: 0 KernelMemory: 0 OomKillDisable: false OomScoreAdj: 500 NetworkMode: "bridge" PidMode: "" PortBindings: {} Privileged: false ReadonlyRootfs: false PublishAllPorts: false RestartPolicy: MaximumRetryCount: 2 Name: "on-failure" LogConfig: Type: "json-file" Sysctls: net.ipv4.ip_forward: "1" Ulimits: - {} VolumeDriver: "" ShmSize: 67108864 HostnamePath: "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/hostname" HostsPath: "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/hosts" LogPath: 
"/var/lib/docker/containers/1eb5fabf5a03807136561b3c00adcd2992b535d624d5e18b6cdc6a6844d9767b/1eb5fabf5a03807136561b3c00adcd2992b535d624d5e18b6cdc6a6844d9767b-json.log" Id: "ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39" Image: "04c5d3b7b0656168630d3ba35d8889bd0e9caafcaeb3004d2bfbc47e7c5d35d2" MountLabel: "" Name: "/boring_euclid" NetworkSettings: Bridge: "" SandboxID: "" HairpinMode: false LinkLocalIPv6Address: "" LinkLocalIPv6PrefixLen: 0 SandboxKey: "" EndpointID: "" Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 IPAddress: "" IPPrefixLen: 0 IPv6Gateway: "" MacAddress: "" Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "7587b82f0dada3656fda26588aee72630c6fab1536d36e394b2bfbcf898c971d" Gateway: "172.17.0.1" IPAddress: "172.17.0.2" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:12:00:02" Path: "/bin/sh" ProcessLabel: "" ResolvConfPath: "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/resolv.conf" RestartCount: 1 State: Error: "" ExitCode: 9 FinishedAt: "2015-01-06T15:47:32.080254511Z" Health: Status: "healthy" FailingStreak: 0 Log: - Start: "2019-12-22T10:59:05.6385933Z" End: "2019-12-22T10:59:05.8078452Z" ExitCode: 0 Output: "" OOMKilled: false Dead: false Paused: false Pid: 0 Restarting: false Running: true StartedAt: "2015-01-06T15:47:32.072697474Z" Status: "running" Mounts: - Name: "fac362...80535" Source: "/data" Destination: "/data" Driver: "local" Mode: "ro,Z" RW: false Propagation: "" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "size" in: "query" type: "boolean" default: false description: "Return the size of container as fields `SizeRw` and `SizeRootFs`" tags: ["Container"] /containers/{id}/top: get: summary: "List processes running inside a container" description: | On Unix systems, this is done by running the `ps` command. This endpoint is not supported on Windows. operationId: "ContainerTop" responses: 200: description: "no error" schema: type: "object" title: "ContainerTopResponse" description: "OK response to ContainerTop operation" properties: Titles: description: "The ps column titles" type: "array" items: type: "string" Processes: description: | Each process running in the container, where each is process is an array of values corresponding to the titles. type: "array" items: type: "array" items: type: "string" examples: application/json: Titles: - "UID" - "PID" - "PPID" - "C" - "STIME" - "TTY" - "TIME" - "CMD" Processes: - - "root" - "13642" - "882" - "0" - "17:03" - "pts/0" - "00:00:00" - "/bin/bash" - - "root" - "13735" - "13642" - "0" - "17:06" - "pts/0" - "00:00:00" - "sleep 10" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "ps_args" in: "query" description: "The arguments to pass to `ps`. 
For example, `aux`" type: "string" default: "-ef" tags: ["Container"] /containers/{id}/logs: get: summary: "Get container logs" description: | Get `stdout` and `stderr` logs from a container. Note: This endpoint works only for containers with the `json-file` or `journald` logging driver. operationId: "ContainerLogs" responses: 200: description: | logs returned as a stream in response body. For the stream format, [see the documentation for the attach endpoint](#operation/ContainerAttach). Note that unlike the attach endpoint, the logs endpoint does not upgrade the connection and does not set Content-Type. schema: type: "string" format: "binary" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "follow" in: "query" description: "Keep connection after returning logs." type: "boolean" default: false - name: "stdout" in: "query" description: "Return logs from `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Return logs from `stderr`" type: "boolean" default: false - name: "since" in: "query" description: "Only return logs since this time, as a UNIX timestamp" type: "integer" default: 0 - name: "until" in: "query" description: "Only return logs before this time, as a UNIX timestamp" type: "integer" default: 0 - name: "timestamps" in: "query" description: "Add timestamps to every log line" type: "boolean" default: false - name: "tail" in: "query" description: | Only return this number of log lines from the end of the logs. Specify as an integer or `all` to output all log lines. type: "string" default: "all" tags: ["Container"] /containers/{id}/changes: get: summary: "Get changes on a container’s filesystem" description: | Returns which files in a container's filesystem have been added, deleted, or modified. The `Kind` of modification can be one of: - `0`: Modified - `1`: Added - `2`: Deleted operationId: "ContainerChanges" produces: ["application/json"] responses: 200: description: "The list of changes" schema: type: "array" items: type: "object" x-go-name: "ContainerChangeResponseItem" title: "ContainerChangeResponseItem" description: "change item in response to ContainerChanges operation" required: [Path, Kind] properties: Path: description: "Path to file that has changed" type: "string" x-nullable: false Kind: description: "Kind of change" type: "integer" format: "uint8" enum: [0, 1, 2] x-nullable: false examples: application/json: - Path: "/dev" Kind: 0 - Path: "/dev/kmsg" Kind: 1 - Path: "/test" Kind: 1 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/export: get: summary: "Export a container" description: "Export the contents of a container as a tarball." 
operationId: "ContainerExport" produces: - "application/octet-stream" responses: 200: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/stats: get: summary: "Get container stats based on resource usage" description: | This endpoint returns a live stream of a container’s resource usage statistics. The `precpu_stats` is the CPU statistic of the *previous* read, and is used to calculate the CPU usage percentage. It is not an exact copy of the `cpu_stats` field. If either `precpu_stats.online_cpus` or `cpu_stats.online_cpus` is nil then for compatibility with older daemons the length of the corresponding `cpu_usage.percpu_usage` array should be used. On a cgroup v2 host, the following fields are not set * `blkio_stats`: all fields other than `io_service_bytes_recursive` * `cpu_stats`: `cpu_usage.percpu_usage` * `memory_stats`: `max_usage` and `failcnt` Also, `memory_stats.stats` fields are incompatible with cgroup v1. To calculate the values shown by the `stats` command of the docker cli tool the following formulas can be used: * used_memory = `memory_stats.usage - memory_stats.stats.cache` * available_memory = `memory_stats.limit` * Memory usage % = `(used_memory / available_memory) * 100.0` * cpu_delta = `cpu_stats.cpu_usage.total_usage - precpu_stats.cpu_usage.total_usage` * system_cpu_delta = `cpu_stats.system_cpu_usage - precpu_stats.system_cpu_usage` * number_cpus = `lenght(cpu_stats.cpu_usage.percpu_usage)` or `cpu_stats.online_cpus` * CPU usage % = `(cpu_delta / system_cpu_delta) * number_cpus * 100.0` operationId: "ContainerStats" produces: ["application/json"] responses: 200: description: "no error" schema: type: "object" examples: application/json: read: "2015-01-08T22:57:31.547920715Z" pids_stats: current: 3 networks: eth0: rx_bytes: 5338 rx_dropped: 0 rx_errors: 0 rx_packets: 36 tx_bytes: 648 tx_dropped: 0 tx_errors: 0 tx_packets: 8 eth5: rx_bytes: 4641 rx_dropped: 0 rx_errors: 0 rx_packets: 26 tx_bytes: 690 tx_dropped: 0 tx_errors: 0 tx_packets: 9 memory_stats: stats: total_pgmajfault: 0 cache: 0 mapped_file: 0 total_inactive_file: 0 pgpgout: 414 rss: 6537216 total_mapped_file: 0 writeback: 0 unevictable: 0 pgpgin: 477 total_unevictable: 0 pgmajfault: 0 total_rss: 6537216 total_rss_huge: 6291456 total_writeback: 0 total_inactive_anon: 0 rss_huge: 6291456 hierarchical_memory_limit: 67108864 total_pgfault: 964 total_active_file: 0 active_anon: 6537216 total_active_anon: 6537216 total_pgpgout: 414 total_cache: 0 inactive_anon: 0 active_file: 0 pgfault: 964 inactive_file: 0 total_pgpgin: 477 max_usage: 6651904 usage: 6537216 failcnt: 0 limit: 67108864 blkio_stats: {} cpu_stats: cpu_usage: percpu_usage: - 8646879 - 24472255 - 36438778 - 30657443 usage_in_usermode: 50000000 total_usage: 100215355 usage_in_kernelmode: 30000000 system_cpu_usage: 739306590000000 online_cpus: 4 throttling_data: periods: 0 throttled_periods: 0 throttled_time: 0 precpu_stats: cpu_usage: percpu_usage: - 8646879 - 24350896 - 36438778 - 30657443 usage_in_usermode: 50000000 total_usage: 100093996 usage_in_kernelmode: 30000000 system_cpu_usage: 9492140000000 online_cpus: 4 throttling_data: periods: 0 throttled_periods: 0 throttled_time: 0 404: description: "no such 
container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "stream" in: "query" description: | Stream the output. If false, the stats will be output once and then it will disconnect. type: "boolean" default: true - name: "one-shot" in: "query" description: | Only get a single stat instead of waiting for 2 cycles. Must be used with `stream=false`. type: "boolean" default: false tags: ["Container"] /containers/{id}/resize: post: summary: "Resize a container TTY" description: "Resize the TTY for a container." operationId: "ContainerResize" consumes: - "application/octet-stream" produces: - "text/plain" responses: 200: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "cannot resize container" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "h" in: "query" description: "Height of the TTY session in characters" type: "integer" - name: "w" in: "query" description: "Width of the TTY session in characters" type: "integer" tags: ["Container"] /containers/{id}/start: post: summary: "Start a container" operationId: "ContainerStart" responses: 204: description: "no error" 304: description: "container already started" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "detachKeys" in: "query" description: | Override the key sequence for detaching a container. Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,` or `_`. 
type: "string" tags: ["Container"] /containers/{id}/stop: post: summary: "Stop a container" operationId: "ContainerStop" responses: 204: description: "no error" 304: description: "container already stopped" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "t" in: "query" description: "Number of seconds to wait before killing the container" type: "integer" tags: ["Container"] /containers/{id}/restart: post: summary: "Restart a container" operationId: "ContainerRestart" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "t" in: "query" description: "Number of seconds to wait before killing the container" type: "integer" tags: ["Container"] /containers/{id}/kill: post: summary: "Kill a container" description: | Send a POSIX signal to a container, defaulting to killing to the container. operationId: "ContainerKill" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "container is not running" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "Container d37cde0fe4ad63c3a7252023b2f9800282894247d145cb5933ddf6e52cc03a28 is not running" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "signal" in: "query" description: "Signal to send to the container as an integer or string (e.g. `SIGINT`)" type: "string" default: "SIGKILL" tags: ["Container"] /containers/{id}/update: post: summary: "Update a container" description: | Change various configuration options of a container without having to recreate it. operationId: "ContainerUpdate" consumes: ["application/json"] produces: ["application/json"] responses: 200: description: "The container has been updated." 
schema: type: "object" title: "ContainerUpdateResponse" description: "OK response to ContainerUpdate operation" properties: Warnings: type: "array" items: type: "string" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "update" in: "body" required: true schema: allOf: - $ref: "#/definitions/Resources" - type: "object" properties: RestartPolicy: $ref: "#/definitions/RestartPolicy" example: BlkioWeight: 300 CpuShares: 512 CpuPeriod: 100000 CpuQuota: 50000 CpuRealtimePeriod: 1000000 CpuRealtimeRuntime: 10000 CpusetCpus: "0,1" CpusetMems: "0" Memory: 314572800 MemorySwap: 514288000 MemoryReservation: 209715200 KernelMemory: 52428800 RestartPolicy: MaximumRetryCount: 4 Name: "on-failure" tags: ["Container"] /containers/{id}/rename: post: summary: "Rename a container" operationId: "ContainerRename" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "name already in use" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "name" in: "query" required: true description: "New name for the container" type: "string" tags: ["Container"] /containers/{id}/pause: post: summary: "Pause a container" description: | Use the freezer cgroup to suspend all processes in a container. Traditionally, when suspending a process the `SIGSTOP` signal is used, which is observable by the process being suspended. With the freezer cgroup the process is unaware, and unable to capture, that it is being suspended, and subsequently resumed. operationId: "ContainerPause" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/unpause: post: summary: "Unpause a container" description: "Resume a container which has been paused." operationId: "ContainerUnpause" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/attach: post: summary: "Attach to a container" description: | Attach to a container to read its output or send it input. You can attach to the same container multiple times and you can reattach to containers that have been detached. Either the `stream` or `logs` parameter must be `true` for this endpoint to do anything. See the [documentation for the `docker attach` command](https://docs.docker.com/engine/reference/commandline/attach/) for more details. 
### Hijacking This endpoint hijacks the HTTP connection to transport `stdin`, `stdout`, and `stderr` on the same socket. This is the response from the daemon for an attach request: ``` HTTP/1.1 200 OK Content-Type: application/vnd.docker.raw-stream [STREAM] ``` After the headers and two new lines, the TCP connection can now be used for raw, bidirectional communication between the client and server. To hint potential proxies about connection hijacking, the Docker client can also optionally send connection upgrade headers. For example, the client sends this request to upgrade the connection: ``` POST /containers/16253994b7c4/attach?stream=1&stdout=1 HTTP/1.1 Upgrade: tcp Connection: Upgrade ``` The Docker daemon will respond with a `101 UPGRADED` response, and will similarly follow with the raw stream: ``` HTTP/1.1 101 UPGRADED Content-Type: application/vnd.docker.raw-stream Connection: Upgrade Upgrade: tcp [STREAM] ``` ### Stream format When the TTY setting is disabled in [`POST /containers/create`](#operation/ContainerCreate), the stream over the hijacked connected is multiplexed to separate out `stdout` and `stderr`. The stream consists of a series of frames, each containing a header and a payload. The header contains the information which the stream writes (`stdout` or `stderr`). It also contains the size of the associated frame encoded in the last four bytes (`uint32`). It is encoded on the first eight bytes like this: ```go header := [8]byte{STREAM_TYPE, 0, 0, 0, SIZE1, SIZE2, SIZE3, SIZE4} ``` `STREAM_TYPE` can be: - 0: `stdin` (is written on `stdout`) - 1: `stdout` - 2: `stderr` `SIZE1, SIZE2, SIZE3, SIZE4` are the four bytes of the `uint32` size encoded as big endian. Following the header is the payload, which is the specified number of bytes of `STREAM_TYPE`. The simplest way to implement this protocol is the following: 1. Read 8 bytes. 2. Choose `stdout` or `stderr` depending on the first byte. 3. Extract the frame size from the last four bytes. 4. Read the extracted size and output it on the correct output. 5. Goto 1. ### Stream format when using a TTY When the TTY setting is enabled in [`POST /containers/create`](#operation/ContainerCreate), the stream is not multiplexed. The data exchanged over the hijacked connection is simply the raw data from the process PTY and client's `stdin`. operationId: "ContainerAttach" produces: - "application/vnd.docker.raw-stream" responses: 101: description: "no error, hints proxy about hijacking" 200: description: "no error, no upgrade header found" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "detachKeys" in: "query" description: | Override the key sequence for detaching a container.Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,` or `_`. type: "string" - name: "logs" in: "query" description: | Replay previous logs from the container. This is useful for attaching to a container that has started and you want to output everything since the container started. If `stream` is also enabled, once all the previous output has been returned, it will seamlessly transition into streaming current output. 
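# NOTE (illustrative sketch, not part of the API definition): the stream format
# described above for `/containers/{id}/attach` (also used for logs and exec
# output when no TTY is allocated) can be demultiplexed with a few lines of Go.
# This follows the algorithm spelled out in the description; the `pkg/stdcopy`
# package in this repository provides the maintained implementation of the
# same framing.
#
#   package main
#
#   import (
#       "encoding/binary"
#       "fmt"
#       "io"
#       "os"
#   )
#
#   // demux copies a multiplexed stream onto separate stdout/stderr writers,
#   // following the 8-byte header [STREAM_TYPE, 0, 0, 0, SIZE1, SIZE2, SIZE3, SIZE4].
#   func demux(src io.Reader, stdout, stderr io.Writer) error {
#       hdr := make([]byte, 8)
#       for {
#           if _, err := io.ReadFull(src, hdr); err != nil {
#               if err == io.EOF {
#                   return nil // clean end of stream
#               }
#               return err
#           }
#           var dst io.Writer
#           switch hdr[0] {
#           case 0, 1: // 0 = stdin written back on stdout, 1 = stdout
#               dst = stdout
#           case 2: // stderr
#               dst = stderr
#           default:
#               return fmt.Errorf("unknown stream type %d", hdr[0])
#           }
#           size := int64(binary.BigEndian.Uint32(hdr[4:8])) // frame size, big endian
#           if _, err := io.CopyN(dst, src, size); err != nil {
#               return err
#           }
#       }
#   }
#
#   func main() {
#       // In real use src would be the hijacked connection from the attach
#       // request; os.Stdin stands in here so the sketch is runnable on its own.
#       if err := demux(os.Stdin, os.Stdout, os.Stderr); err != nil {
#           fmt.Fprintln(os.Stderr, err)
#       }
#   }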
type: "boolean" default: false - name: "stream" in: "query" description: | Stream attached streams from the time the request was made onwards. type: "boolean" default: false - name: "stdin" in: "query" description: "Attach to `stdin`" type: "boolean" default: false - name: "stdout" in: "query" description: "Attach to `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Attach to `stderr`" type: "boolean" default: false tags: ["Container"] /containers/{id}/attach/ws: get: summary: "Attach to a container via a websocket" operationId: "ContainerAttachWebsocket" responses: 101: description: "no error, hints proxy about hijacking" 200: description: "no error, no upgrade header found" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "detachKeys" in: "query" description: | Override the key sequence for detaching a container.Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,`, or `_`. type: "string" - name: "logs" in: "query" description: "Return logs" type: "boolean" default: false - name: "stream" in: "query" description: "Return stream" type: "boolean" default: false - name: "stdin" in: "query" description: "Attach to `stdin`" type: "boolean" default: false - name: "stdout" in: "query" description: "Attach to `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Attach to `stderr`" type: "boolean" default: false tags: ["Container"] /containers/{id}/wait: post: summary: "Wait for a container" description: "Block until a container stops, then returns the exit code." operationId: "ContainerWait" produces: ["application/json"] responses: 200: description: "The container has exit." schema: type: "object" title: "ContainerWaitResponse" description: "OK response to ContainerWait operation" required: [StatusCode] properties: StatusCode: description: "Exit code of the container" type: "integer" x-nullable: false Error: description: "container waiting error, if any" type: "object" properties: Message: description: "Details of an error" type: "string" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "condition" in: "query" description: | Wait until a container state reaches the given condition, either 'not-running' (default), 'next-exit', or 'removed'. type: "string" default: "not-running" tags: ["Container"] /containers/{id}: delete: summary: "Remove a container" operationId: "ContainerDelete" responses: 204: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "conflict" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: | You cannot remove a running container: c2ada9df5af8. 
Stop the container before attempting removal or force remove 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "v" in: "query" description: "Remove anonymous volumes associated with the container." type: "boolean" default: false - name: "force" in: "query" description: "If the container is running, kill it before removing it." type: "boolean" default: false - name: "link" in: "query" description: "Remove the specified link associated with the container." type: "boolean" default: false tags: ["Container"] /containers/{id}/archive: head: summary: "Get information about files in a container" description: | A response header `X-Docker-Container-Path-Stat` is returned, containing a base64 - encoded JSON object with some filesystem header information about the path. operationId: "ContainerArchiveInfo" responses: 200: description: "no error" headers: X-Docker-Container-Path-Stat: type: "string" description: | A base64 - encoded JSON object with some filesystem header information about the path 400: description: "Bad parameter" schema: allOf: - $ref: "#/definitions/ErrorResponse" - type: "object" properties: message: description: | The error message. Either "must specify path parameter" (path cannot be empty) or "not a directory" (path was asserted to be a directory but exists as a file). type: "string" x-nullable: false 404: description: "Container or path does not exist" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "path" in: "query" required: true description: "Resource in the container’s filesystem to archive." type: "string" tags: ["Container"] get: summary: "Get an archive of a filesystem resource in a container" description: "Get a tar archive of a resource in the filesystem of container id." operationId: "ContainerArchive" produces: ["application/x-tar"] responses: 200: description: "no error" 400: description: "Bad parameter" schema: allOf: - $ref: "#/definitions/ErrorResponse" - type: "object" properties: message: description: | The error message. Either "must specify path parameter" (path cannot be empty) or "not a directory" (path was asserted to be a directory but exists as a file). type: "string" x-nullable: false 404: description: "Container or path does not exist" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "path" in: "query" required: true description: "Resource in the container’s filesystem to archive." type: "string" tags: ["Container"] put: summary: "Extract an archive of files or folders to a directory in a container" description: "Upload a tar archive to be extracted to a path in the filesystem of container id." 
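# NOTE (illustrative sketch, not part of the API definition): the
# `X-Docker-Container-Path-Stat` header returned by the HEAD
# `/containers/{id}/archive` request above is a base64-encoded JSON object.
# The sketch below shows only the decoding step; the sample value is fabricated
# inline (in real use it would come from
# resp.Header.Get("X-Docker-Container-Path-Stat")).
#
#   package main
#
#   import (
#       "encoding/base64"
#       "encoding/json"
#       "fmt"
#   )
#
#   func main() {
#       // Fabricated header value so the sketch runs stand-alone.
#       hdr := base64.StdEncoding.EncodeToString([]byte(`{"name":"etc","size":4096}`))
#
#       raw, err := base64.StdEncoding.DecodeString(hdr)
#       if err != nil {
#           panic(err)
#       }
#       var stat map[string]interface{}
#       if err := json.Unmarshal(raw, &stat); err != nil {
#           panic(err)
#       }
#       fmt.Println(stat) // filesystem header information about the path
#   }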
operationId: "PutContainerArchive" consumes: ["application/x-tar", "application/octet-stream"] responses: 200: description: "The content was extracted successfully" 400: description: "Bad parameter" schema: $ref: "#/definitions/ErrorResponse" 403: description: "Permission denied, the volume or container rootfs is marked as read-only." schema: $ref: "#/definitions/ErrorResponse" 404: description: "No such container or path does not exist inside the container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "path" in: "query" required: true description: "Path to a directory in the container to extract the archive’s contents into. " type: "string" - name: "noOverwriteDirNonDir" in: "query" description: | If `1`, `true`, or `True` then it will be an error if unpacking the given content would cause an existing directory to be replaced with a non-directory and vice versa. type: "string" - name: "copyUIDGID" in: "query" description: | If `1`, `true`, then it will copy UID/GID maps to the dest file or dir type: "string" - name: "inputStream" in: "body" required: true description: | The input stream must be a tar archive compressed with one of the following algorithms: `identity` (no compression), `gzip`, `bzip2`, or `xz`. schema: type: "string" format: "binary" tags: ["Container"] /containers/prune: post: summary: "Delete stopped containers" produces: - "application/json" operationId: "ContainerPrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `until=<timestamp>` Prune containers created before this timestamp. The `<timestamp>` can be Unix timestamps, date formatted timestamps, or Go duration strings (e.g. `10m`, `1h30m`) computed relative to the daemon machine’s time. - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune containers with (or without, in case `label!=...` is used) the specified labels. type: "string" responses: 200: description: "No error" schema: type: "object" title: "ContainerPruneResponse" properties: ContainersDeleted: description: "Container IDs that were deleted" type: "array" items: type: "string" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Container"] /images/json: get: summary: "List Images" description: "Returns a list of images on the server. Note that it uses a different, smaller representation of an image than inspecting a single image." 
operationId: "ImageList" produces: - "application/json" responses: 200: description: "Summary image data for the images matching the query" schema: type: "array" items: $ref: "#/definitions/ImageSummary" examples: application/json: - Id: "sha256:e216a057b1cb1efc11f8a268f37ef62083e70b1b38323ba252e25ac88904a7e8" ParentId: "" RepoTags: - "ubuntu:12.04" - "ubuntu:precise" RepoDigests: - "ubuntu@sha256:992069aee4016783df6345315302fa59681aae51a8eeb2f889dea59290f21787" Created: 1474925151 Size: 103579269 VirtualSize: 103579269 SharedSize: 0 Labels: {} Containers: 2 - Id: "sha256:3e314f95dcace0f5e4fd37b10862fe8398e3c60ed36600bc0ca5fda78b087175" ParentId: "" RepoTags: - "ubuntu:12.10" - "ubuntu:quantal" RepoDigests: - "ubuntu@sha256:002fba3e3255af10be97ea26e476692a7ebed0bb074a9ab960b2e7a1526b15d7" - "ubuntu@sha256:68ea0200f0b90df725d99d823905b04cf844f6039ef60c60bf3e019915017bd3" Created: 1403128455 Size: 172064416 VirtualSize: 172064416 SharedSize: 0 Labels: {} Containers: 5 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "all" in: "query" description: "Show all images. Only images from a final layer (no children) are shown by default." type: "boolean" default: false - name: "filters" in: "query" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the images list. Available filters: - `before`=(`<image-name>[:<tag>]`, `<image id>` or `<image@digest>`) - `dangling=true` - `label=key` or `label="key=value"` of an image label - `reference`=(`<image-name>[:<tag>]`) - `since`=(`<image-name>[:<tag>]`, `<image id>` or `<image@digest>`) type: "string" - name: "shared-size" in: "query" description: "Compute and show shared size as a `SharedSize` field on each image." type: "boolean" default: false - name: "digests" in: "query" description: "Show digest information as a `RepoDigests` field on each image." type: "boolean" default: false tags: ["Image"] /build: post: summary: "Build an image" description: | Build an image from a tar archive with a `Dockerfile` in it. The `Dockerfile` specifies how the image is built from the tar archive. It is typically in the archive's root, but can be at a different path or have a different name by specifying the `dockerfile` parameter. [See the `Dockerfile` reference for more information](https://docs.docker.com/engine/reference/builder/). The Docker daemon performs a preliminary validation of the `Dockerfile` before starting the build, and returns an error if the syntax is incorrect. After that, each instruction is run one-by-one until the ID of the new image is output. The build is canceled if the client drops the connection by quitting or being killed. operationId: "ImageBuild" consumes: - "application/octet-stream" produces: - "application/json" parameters: - name: "inputStream" in: "body" description: "A tar archive compressed with one of the following algorithms: identity (no compression), gzip, bzip2, xz." schema: type: "string" format: "binary" - name: "dockerfile" in: "query" description: "Path within the build context to the `Dockerfile`. This is ignored if `remote` is specified and points to an external `Dockerfile`." type: "string" default: "Dockerfile" - name: "t" in: "query" description: "A name and optional tag to apply to the image in the `name:tag` format. If you omit the tag the default `latest` value is assumed. You can provide several `t` parameters." 
type: "string" - name: "extrahosts" in: "query" description: "Extra hosts to add to /etc/hosts" type: "string" - name: "remote" in: "query" description: "A Git repository URI or HTTP/HTTPS context URI. If the URI points to a single text file, the file’s contents are placed into a file called `Dockerfile` and the image is built from that file. If the URI points to a tarball, the file is downloaded by the daemon and the contents therein used as the context for the build. If the URI points to a tarball and the `dockerfile` parameter is also specified, there must be a file with the corresponding path inside the tarball." type: "string" - name: "q" in: "query" description: "Suppress verbose build output." type: "boolean" default: false - name: "nocache" in: "query" description: "Do not use the cache when building the image." type: "boolean" default: false - name: "cachefrom" in: "query" description: "JSON array of images used for build cache resolution." type: "string" - name: "pull" in: "query" description: "Attempt to pull the image even if an older image exists locally." type: "string" - name: "rm" in: "query" description: "Remove intermediate containers after a successful build." type: "boolean" default: true - name: "forcerm" in: "query" description: "Always remove intermediate containers, even upon failure." type: "boolean" default: false - name: "memory" in: "query" description: "Set memory limit for build." type: "integer" - name: "memswap" in: "query" description: "Total memory (memory + swap). Set as `-1` to disable swap." type: "integer" - name: "cpushares" in: "query" description: "CPU shares (relative weight)." type: "integer" - name: "cpusetcpus" in: "query" description: "CPUs in which to allow execution (e.g., `0-3`, `0,1`)." type: "string" - name: "cpuperiod" in: "query" description: "The length of a CPU period in microseconds." type: "integer" - name: "cpuquota" in: "query" description: "Microseconds of CPU time that the container can get in a CPU period." type: "integer" - name: "buildargs" in: "query" description: > JSON map of string pairs for build-time variables. Users pass these values at build-time. Docker uses the buildargs as the environment context for commands run via the `Dockerfile` RUN instruction, or for variable expansion in other `Dockerfile` instructions. This is not meant for passing secret values. For example, the build arg `FOO=bar` would become `{"FOO":"bar"}` in JSON. This would result in the query parameter `buildargs={"FOO":"bar"}`. Note that `{"FOO":"bar"}` should be URI component encoded. [Read more about the buildargs instruction.](https://docs.docker.com/engine/reference/builder/#arg) type: "string" - name: "shmsize" in: "query" description: "Size of `/dev/shm` in bytes. The size must be greater than 0. If omitted the system uses 64MB." type: "integer" - name: "squash" in: "query" description: "Squash the resulting images layers into a single layer. *(Experimental release only.)*" type: "boolean" - name: "labels" in: "query" description: "Arbitrary key/value labels to set on the image, as a JSON map of string pairs." type: "string" - name: "networkmode" in: "query" description: | Sets the networking mode for the run commands during build. Supported standard values are: `bridge`, `host`, `none`, and `container:<name|id>`. Any other value is taken as a custom network's name or ID to which this container should connect to. 
type: "string" - name: "Content-type" in: "header" type: "string" enum: - "application/x-tar" default: "application/x-tar" - name: "X-Registry-Config" in: "header" description: | This is a base64-encoded JSON object with auth configurations for multiple registries that a build may refer to. The key is a registry URL, and the value is an auth configuration object, [as described in the authentication section](#section/Authentication). For example: ``` { "docker.example.com": { "username": "janedoe", "password": "hunter2" }, "https://index.docker.io/v1/": { "username": "mobydock", "password": "conta1n3rize14" } } ``` Only the registry domain name (and port if not the default 443) are required. However, for legacy reasons, the Docker Hub registry must be specified with both a `https://` prefix and a `/v1/` suffix even though Docker will prefer to use the v2 registry API. type: "string" - name: "platform" in: "query" description: "Platform in the format os[/arch[/variant]]" type: "string" default: "" - name: "target" in: "query" description: "Target build stage" type: "string" default: "" - name: "outputs" in: "query" description: "BuildKit output configuration" type: "string" default: "" responses: 200: description: "no error" 400: description: "Bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Image"] /build/prune: post: summary: "Delete builder cache" produces: - "application/json" operationId: "BuildPrune" parameters: - name: "keep-storage" in: "query" description: "Amount of disk space in bytes to keep for cache" type: "integer" format: "int64" - name: "all" in: "query" type: "boolean" description: "Remove all types of build cache" - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the list of build cache objects. Available filters: - `until=<duration>`: duration relative to daemon's time, during which build cache was not used, in Go's duration format (e.g., '24h') - `id=<id>` - `parent=<id>` - `type=<string>` - `description=<string>` - `inuse` - `shared` - `private` responses: 200: description: "No error" schema: type: "object" title: "BuildPruneResponse" properties: CachesDeleted: type: "array" items: description: "ID of build cache object" type: "string" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Image"] /images/create: post: summary: "Create an image" description: "Create an image by either pulling it from a registry or importing it." operationId: "ImageCreate" consumes: - "text/plain" - "application/octet-stream" produces: - "application/json" responses: 200: description: "no error" 404: description: "repository does not exist or no read access" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "fromImage" in: "query" description: "Name of the image to pull. The name may include a tag or digest. This parameter may only be used when pulling an image. The pull is cancelled if the HTTP connection is closed." type: "string" - name: "fromSrc" in: "query" description: "Source to import. The value may be a URL from which the image can be retrieved or `-` to read the image from the request body. This parameter may only be used when importing an image." 
type: "string" - name: "repo" in: "query" description: "Repository name given to an image when it is imported. The repo may include a tag. This parameter may only be used when importing an image." type: "string" - name: "tag" in: "query" description: "Tag or digest. If empty when pulling an image, this causes all tags for the given image to be pulled." type: "string" - name: "message" in: "query" description: "Set commit message for imported image." type: "string" - name: "inputImage" in: "body" description: "Image content if the value `-` has been specified in fromSrc query parameter" schema: type: "string" required: false - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration. Refer to the [authentication section](#section/Authentication) for details. type: "string" - name: "platform" in: "query" description: "Platform in the format os[/arch[/variant]]" type: "string" default: "" tags: ["Image"] /images/{name}/json: get: summary: "Inspect an image" description: "Return low-level information about an image." operationId: "ImageInspect" produces: - "application/json" responses: 200: description: "No error" schema: $ref: "#/definitions/Image" examples: application/json: Id: "sha256:85f05633ddc1c50679be2b16a0479ab6f7637f8884e0cfe0f4d20e1ebb3d6e7c" Container: "cb91e48a60d01f1e27028b4fc6819f4f290b3cf12496c8176ec714d0d390984a" Comment: "" Os: "linux" Architecture: "amd64" Parent: "sha256:91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c" ContainerConfig: Tty: false Hostname: "e611e15f9c9d" Domainname: "" AttachStdout: false PublishService: "" AttachStdin: false OpenStdin: false StdinOnce: false NetworkDisabled: false OnBuild: [] Image: "91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c" User: "" WorkingDir: "" MacAddress: "" AttachStderr: false Labels: com.example.license: "GPL" com.example.version: "1.0" com.example.vendor: "Acme" Env: - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" Cmd: - "/bin/sh" - "-c" - "#(nop) LABEL com.example.vendor=Acme com.example.license=GPL com.example.version=1.0" DockerVersion: "1.9.0-dev" VirtualSize: 188359297 Size: 0 Author: "" Created: "2015-09-10T08:30:53.26995814Z" GraphDriver: Name: "aufs" Data: {} RepoDigests: - "localhost:5000/test/busybox/example@sha256:cbbf2f9a99b47fc460d422812b6a5adff7dfee951d8fa2e4a98caa0382cfbdbf" RepoTags: - "example:1.0" - "example:latest" - "example:stable" Config: Image: "91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c" NetworkDisabled: false OnBuild: [] StdinOnce: false PublishService: "" AttachStdin: false OpenStdin: false Domainname: "" AttachStdout: false Tty: false Hostname: "e611e15f9c9d" Cmd: - "/bin/bash" Env: - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" Labels: com.example.vendor: "Acme" com.example.version: "1.0" com.example.license: "GPL" MacAddress: "" AttachStderr: false WorkingDir: "" User: "" RootFS: Type: "layers" Layers: - "sha256:1834950e52ce4d5a88a1bbd131c537f4d0e56d10ff0dd69e66be3b7dfa9df7e6" - "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such image: someimage (tag: latest)" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or id" type: "string" required: true tags: ["Image"] /images/{name}/history: get: summary: "Get the history of an image" 
description: "Return parent layers of an image." operationId: "ImageHistory" produces: ["application/json"] responses: 200: description: "List of image layers" schema: type: "array" items: type: "object" x-go-name: HistoryResponseItem title: "HistoryResponseItem" description: "individual image layer information in response to ImageHistory operation" required: [Id, Created, CreatedBy, Tags, Size, Comment] properties: Id: type: "string" x-nullable: false Created: type: "integer" format: "int64" x-nullable: false CreatedBy: type: "string" x-nullable: false Tags: type: "array" items: type: "string" Size: type: "integer" format: "int64" x-nullable: false Comment: type: "string" x-nullable: false examples: application/json: - Id: "3db9c44f45209632d6050b35958829c3a2aa256d81b9a7be45b362ff85c54710" Created: 1398108230 CreatedBy: "/bin/sh -c #(nop) ADD file:eb15dbd63394e063b805a3c32ca7bf0266ef64676d5a6fab4801f2e81e2a5148 in /" Tags: - "ubuntu:lucid" - "ubuntu:10.04" Size: 182964289 Comment: "" - Id: "6cfa4d1f33fb861d4d114f43b25abd0ac737509268065cdfd69d544a59c85ab8" Created: 1398108222 CreatedBy: "/bin/sh -c #(nop) MAINTAINER Tianon Gravi <[email protected]> - mkimage-debootstrap.sh -i iproute,iputils-ping,ubuntu-minimal -t lucid.tar.xz lucid http://archive.ubuntu.com/ubuntu/" Tags: [] Size: 0 Comment: "" - Id: "511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158" Created: 1371157430 CreatedBy: "" Tags: - "scratch12:latest" - "scratch:latest" Size: 0 Comment: "Imported from -" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID" type: "string" required: true tags: ["Image"] /images/{name}/push: post: summary: "Push an image" description: | Push an image to a registry. If you wish to push an image on to a private registry, that image must already have a tag which references the registry. For example, `registry.example.com/myimage:latest`. The push is cancelled if the HTTP connection is closed. operationId: "ImagePush" consumes: - "application/octet-stream" responses: 200: description: "No error" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID." type: "string" required: true - name: "tag" in: "query" description: "The tag to associate with the image on the registry." type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration. Refer to the [authentication section](#section/Authentication) for details. type: "string" required: true tags: ["Image"] /images/{name}/tag: post: summary: "Tag an image" description: "Tag an image so that it becomes part of a repository." operationId: "ImageTag" responses: 201: description: "No error" 400: description: "Bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Conflict" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID to tag." type: "string" required: true - name: "repo" in: "query" description: "The repository to tag in. For example, `someuser/someimage`." 
type: "string" - name: "tag" in: "query" description: "The name of the new tag." type: "string" tags: ["Image"] /images/{name}: delete: summary: "Remove an image" description: | Remove an image, along with any untagged parent images that were referenced by that image. Images can't be removed if they have descendant images, are being used by a running container or are being used by a build. operationId: "ImageDelete" produces: ["application/json"] responses: 200: description: "The image was deleted successfully" schema: type: "array" items: $ref: "#/definitions/ImageDeleteResponseItem" examples: application/json: - Untagged: "3e2f21a89f" - Deleted: "3e2f21a89f" - Deleted: "53b4f83ac9" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Conflict" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID" type: "string" required: true - name: "force" in: "query" description: "Remove the image even if it is being used by stopped containers or has other tags" type: "boolean" default: false - name: "noprune" in: "query" description: "Do not delete untagged parent images" type: "boolean" default: false tags: ["Image"] /images/search: get: summary: "Search images" description: "Search for an image on Docker Hub." operationId: "ImageSearch" produces: - "application/json" responses: 200: description: "No error" schema: type: "array" items: type: "object" title: "ImageSearchResponseItem" properties: description: type: "string" is_official: type: "boolean" is_automated: type: "boolean" name: type: "string" star_count: type: "integer" examples: application/json: - description: "" is_official: false is_automated: false name: "wma55/u1210sshd" star_count: 0 - description: "" is_official: false is_automated: false name: "jdswinbank/sshd" star_count: 0 - description: "" is_official: false is_automated: false name: "vgauthier/sshd" star_count: 0 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "term" in: "query" description: "Term to search" type: "string" required: true - name: "limit" in: "query" description: "Maximum number of results to return" type: "integer" - name: "filters" in: "query" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the images list. Available filters: - `is-automated=(true|false)` - `is-official=(true|false)` - `stars=<number>` Matches images that has at least 'number' stars. type: "string" tags: ["Image"] /images/prune: post: summary: "Delete unused images" produces: - "application/json" operationId: "ImagePrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `dangling=<boolean>` When set to `true` (or `1`), prune only unused *and* untagged images. When set to `false` (or `0`), all unused images are pruned. - `until=<string>` Prune images created before this timestamp. The `<timestamp>` can be Unix timestamps, date formatted timestamps, or Go duration strings (e.g. `10m`, `1h30m`) computed relative to the daemon machine’s time. - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune images with (or without, in case `label!=...` is used) the specified labels. 
type: "string" responses: 200: description: "No error" schema: type: "object" title: "ImagePruneResponse" properties: ImagesDeleted: description: "Images that were deleted" type: "array" items: $ref: "#/definitions/ImageDeleteResponseItem" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Image"] /auth: post: summary: "Check auth configuration" description: | Validate credentials for a registry and, if available, get an identity token for accessing the registry without password. operationId: "SystemAuth" consumes: ["application/json"] produces: ["application/json"] responses: 200: description: "An identity token was generated successfully." schema: type: "object" title: "SystemAuthResponse" required: [Status] properties: Status: description: "The status of the authentication" type: "string" x-nullable: false IdentityToken: description: "An opaque token used to authenticate a user after a successful login" type: "string" x-nullable: false examples: application/json: Status: "Login Succeeded" IdentityToken: "9cbaf023786cd7..." 204: description: "No error" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "authConfig" in: "body" description: "Authentication to check" schema: $ref: "#/definitions/AuthConfig" tags: ["System"] /info: get: summary: "Get system information" operationId: "SystemInfo" produces: - "application/json" responses: 200: description: "No error" schema: $ref: "#/definitions/SystemInfo" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /version: get: summary: "Get version" description: "Returns the version of Docker that is running and various information about the system that Docker is running on." operationId: "SystemVersion" produces: ["application/json"] responses: 200: description: "no error" schema: $ref: "#/definitions/SystemVersion" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /_ping: get: summary: "Ping" description: "This is a dummy endpoint you can use to test if the server is accessible." operationId: "SystemPing" produces: ["text/plain"] responses: 200: description: "no error" schema: type: "string" example: "OK" headers: API-Version: type: "string" description: "Max API Version the server supports" Builder-Version: type: "string" description: "Default version of docker image builder" Docker-Experimental: type: "boolean" description: "If the server is running with experimental mode enabled" Cache-Control: type: "string" default: "no-cache, no-store, must-revalidate" Pragma: type: "string" default: "no-cache" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" headers: Cache-Control: type: "string" default: "no-cache, no-store, must-revalidate" Pragma: type: "string" default: "no-cache" tags: ["System"] head: summary: "Ping" description: "This is a dummy endpoint you can use to test if the server is accessible." 
operationId: "SystemPingHead" produces: ["text/plain"] responses: 200: description: "no error" schema: type: "string" example: "(empty)" headers: API-Version: type: "string" description: "Max API Version the server supports" Builder-Version: type: "string" description: "Default version of docker image builder" Docker-Experimental: type: "boolean" description: "If the server is running with experimental mode enabled" Cache-Control: type: "string" default: "no-cache, no-store, must-revalidate" Pragma: type: "string" default: "no-cache" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /commit: post: summary: "Create a new image from a container" operationId: "ImageCommit" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "containerConfig" in: "body" description: "The container configuration" schema: $ref: "#/definitions/ContainerConfig" - name: "container" in: "query" description: "The ID or name of the container to commit" type: "string" - name: "repo" in: "query" description: "Repository name for the created image" type: "string" - name: "tag" in: "query" description: "Tag name for the create image" type: "string" - name: "comment" in: "query" description: "Commit message" type: "string" - name: "author" in: "query" description: "Author of the image (e.g., `John Hannibal Smith <[email protected]>`)" type: "string" - name: "pause" in: "query" description: "Whether to pause the container before committing" type: "boolean" default: true - name: "changes" in: "query" description: "`Dockerfile` instructions to apply while committing" type: "string" tags: ["Image"] /events: get: summary: "Monitor events" description: | Stream real-time events from the server. Various objects within Docker report events when something happens to them. 
Containers report these events: `attach`, `commit`, `copy`, `create`, `destroy`, `detach`, `die`, `exec_create`, `exec_detach`, `exec_start`, `exec_die`, `export`, `health_status`, `kill`, `oom`, `pause`, `rename`, `resize`, `restart`, `start`, `stop`, `top`, `unpause`, `update`, and `prune` Images report these events: `delete`, `import`, `load`, `pull`, `push`, `save`, `tag`, `untag`, and `prune` Volumes report these events: `create`, `mount`, `unmount`, `destroy`, and `prune` Networks report these events: `create`, `connect`, `disconnect`, `destroy`, `update`, `remove`, and `prune` The Docker daemon reports these events: `reload` Services report these events: `create`, `update`, and `remove` Nodes report these events: `create`, `update`, and `remove` Secrets report these events: `create`, `update`, and `remove` Configs report these events: `create`, `update`, and `remove` The Builder reports `prune` events operationId: "SystemEvents" produces: - "application/json" responses: 200: description: "no error" schema: type: "object" title: "SystemEventsResponse" properties: Type: description: "The type of object emitting the event" type: "string" Action: description: "The type of event" type: "string" Actor: type: "object" properties: ID: description: "The ID of the object emitting the event" type: "string" Attributes: description: "Various key/value attributes of the object, depending on its type" type: "object" additionalProperties: type: "string" time: description: "Timestamp of event" type: "integer" timeNano: description: "Timestamp of event, with nanosecond accuracy" type: "integer" format: "int64" examples: application/json: Type: "container" Action: "create" Actor: ID: "ede54ee1afda366ab42f824e8a5ffd195155d853ceaec74a927f249ea270c743" Attributes: com.example.some-label: "some-label-value" image: "alpine" name: "my-container" time: 1461943101 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "since" in: "query" description: "Show events created since this timestamp then stream new events." type: "string" - name: "until" in: "query" description: "Show events created until this timestamp then stop streaming." type: "string" - name: "filters" in: "query" description: | A JSON encoded value of filters (a `map[string][]string`) to process on the event list. 
Available filters: - `config=<string>` config name or ID - `container=<string>` container name or ID - `daemon=<string>` daemon name or ID - `event=<string>` event type - `image=<string>` image name or ID - `label=<string>` image or container label - `network=<string>` network name or ID - `node=<string>` node ID - `plugin`=<string> plugin name or ID - `scope`=<string> local or swarm - `secret=<string>` secret name or ID - `service=<string>` service name or ID - `type=<string>` object to filter by, one of `container`, `image`, `volume`, `network`, `daemon`, `plugin`, `node`, `service`, `secret` or `config` - `volume=<string>` volume name type: "string" tags: ["System"] /system/df: get: summary: "Get data usage information" operationId: "SystemDataUsage" responses: 200: description: "no error" schema: type: "object" title: "SystemDataUsageResponse" properties: LayersSize: type: "integer" format: "int64" Images: type: "array" items: $ref: "#/definitions/ImageSummary" Containers: type: "array" items: $ref: "#/definitions/ContainerSummary" Volumes: type: "array" items: $ref: "#/definitions/Volume" BuildCache: type: "array" items: $ref: "#/definitions/BuildCache" example: LayersSize: 1092588 Images: - Id: "sha256:2b8fd9751c4c0f5dd266fcae00707e67a2545ef34f9a29354585f93dac906749" ParentId: "" RepoTags: - "busybox:latest" RepoDigests: - "busybox@sha256:a59906e33509d14c036c8678d687bd4eec81ed7c4b8ce907b888c607f6a1e0e6" Created: 1466724217 Size: 1092588 SharedSize: 0 VirtualSize: 1092588 Labels: {} Containers: 1 Containers: - Id: "e575172ed11dc01bfce087fb27bee502db149e1a0fad7c296ad300bbff178148" Names: - "/top" Image: "busybox" ImageID: "sha256:2b8fd9751c4c0f5dd266fcae00707e67a2545ef34f9a29354585f93dac906749" Command: "top" Created: 1472592424 Ports: [] SizeRootFs: 1092588 Labels: {} State: "exited" Status: "Exited (0) 56 minutes ago" HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: IPAMConfig: null Links: null Aliases: null NetworkID: "d687bc59335f0e5c9ee8193e5612e8aee000c8c62ea170cfb99c098f95899d92" EndpointID: "8ed5115aeaad9abb174f68dcf135b49f11daf597678315231a32ca28441dec6a" Gateway: "172.18.0.1" IPAddress: "172.18.0.2" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:12:00:02" Mounts: [] Volumes: - Name: "my-volume" Driver: "local" Mountpoint: "/var/lib/docker/volumes/my-volume/_data" Labels: null Scope: "local" Options: null UsageData: Size: 10920104 RefCount: 2 BuildCache: - ID: "hw53o5aio51xtltp5xjp8v7fx" Parent: "" Type: "regular" Description: "pulled from docker.io/library/debian@sha256:234cb88d3020898631af0ccbbcca9a66ae7306ecd30c9720690858c1b007d2a0" InUse: false Shared: true Size: 0 CreatedAt: "2021-06-28T13:31:01.474619385Z" LastUsedAt: "2021-07-07T22:02:32.738075951Z" UsageCount: 26 - ID: "ndlpt0hhvkqcdfkputsk4cq9c" Parent: "hw53o5aio51xtltp5xjp8v7fx" Type: "regular" Description: "mount / from exec /bin/sh -c echo 'Binary::apt::APT::Keep-Downloaded-Packages \"true\";' > /etc/apt/apt.conf.d/keep-cache" InUse: false Shared: true Size: 51 CreatedAt: "2021-06-28T13:31:03.002625487Z" LastUsedAt: "2021-07-07T22:02:32.773909517Z" UsageCount: 26 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "type" in: "query" description: | Object types, for which to compute and return data. 
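# NOTE (illustrative sketch, not part of the API definition): the `/events`
# endpoint above returns an unbounded stream of JSON objects, one per event,
# so a client keeps decoding until the connection closes. The sketch below
# watches container start/stop events using the filter syntax listed above;
# the Unix socket path and the `v1.41` prefix are the usual placeholders.
#
#   package main
#
#   import (
#       "context"
#       "encoding/json"
#       "fmt"
#       "net"
#       "net/http"
#       "net/url"
#   )
#
#   func main() {
#       cli := &http.Client{Transport: &http.Transport{
#           DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
#               return (&net.Dialer{}).DialContext(ctx, "unix", "/var/run/docker.sock")
#           },
#       }}
#       filters, _ := json.Marshal(map[string][]string{
#           "type":  {"container"},
#           "event": {"start", "stop"},
#       })
#       q := url.Values{}
#       q.Set("filters", string(filters))
#
#       resp, err := cli.Get("http://localhost/v1.41/events?" + q.Encode())
#       if err != nil {
#           panic(err)
#       }
#       defer resp.Body.Close()
#
#       dec := json.NewDecoder(resp.Body)
#       for {
#           var ev struct {
#               Type   string
#               Action string
#               Actor  struct {
#                   ID         string
#                   Attributes map[string]string
#               }
#           }
#           if err := dec.Decode(&ev); err != nil {
#               break // stream closed
#           }
#           fmt.Println(ev.Type, ev.Action, ev.Actor.Attributes["name"])
#       }
#   }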
type: "array" collectionFormat: multi items: type: "string" enum: ["container", "image", "volume", "build-cache"] tags: ["System"] /images/{name}/get: get: summary: "Export an image" description: | Get a tarball containing all images and metadata for a repository. If `name` is a specific name and tag (e.g. `ubuntu:latest`), then only that image (and its parents) are returned. If `name` is an image ID, similarly only that image (and its parents) are returned, but with the exclusion of the `repositories` file in the tarball, as there were no image names referenced. ### Image tarball format An image tarball contains one directory per image layer (named using its long ID), each containing these files: - `VERSION`: currently `1.0` - the file format version - `json`: detailed layer information, similar to `docker inspect layer_id` - `layer.tar`: A tarfile containing the filesystem changes in this layer The `layer.tar` file contains `aufs` style `.wh..wh.aufs` files and directories for storing attribute changes and deletions. If the tarball defines a repository, the tarball should also include a `repositories` file at the root that contains a list of repository and tag names mapped to layer IDs. ```json { "hello-world": { "latest": "565a9d68a73f6706862bfe8409a7f659776d4d60a8d096eb4a3cbce6999cc2a1" } } ``` operationId: "ImageGet" produces: - "application/x-tar" responses: 200: description: "no error" schema: type: "string" format: "binary" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID" type: "string" required: true tags: ["Image"] /images/get: get: summary: "Export several images" description: | Get a tarball containing all images and metadata for several image repositories. For each value of the `names` parameter: if it is a specific name and tag (e.g. `ubuntu:latest`), then only that image (and its parents) are returned; if it is an image ID, similarly only that image (and its parents) are returned and there would be no names referenced in the 'repositories' file for this image ID. For details on the format, see the [export image endpoint](#operation/ImageGet). operationId: "ImageGetAll" produces: - "application/x-tar" responses: 200: description: "no error" schema: type: "string" format: "binary" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "names" in: "query" description: "Image names to filter by" type: "array" items: type: "string" tags: ["Image"] /images/load: post: summary: "Import images" description: | Load a set of images and tags into a repository. For details on the format, see the [export image endpoint](#operation/ImageGet). operationId: "ImageLoad" consumes: - "application/x-tar" produces: - "application/json" responses: 200: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "imagesTarball" in: "body" description: "Tar archive containing images" schema: type: "string" format: "binary" - name: "quiet" in: "query" description: "Suppress progress details during load." type: "boolean" default: false tags: ["Image"] /containers/{id}/exec: post: summary: "Create an exec instance" description: "Run a command inside a running container." 
operationId: "ContainerExec" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "container is paused" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "execConfig" in: "body" description: "Exec configuration" schema: type: "object" properties: AttachStdin: type: "boolean" description: "Attach to `stdin` of the exec command." AttachStdout: type: "boolean" description: "Attach to `stdout` of the exec command." AttachStderr: type: "boolean" description: "Attach to `stderr` of the exec command." DetachKeys: type: "string" description: | Override the key sequence for detaching a container. Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,` or `_`. Tty: type: "boolean" description: "Allocate a pseudo-TTY." Env: description: | A list of environment variables in the form `["VAR=value", ...]`. type: "array" items: type: "string" Cmd: type: "array" description: "Command to run, as a string or array of strings." items: type: "string" Privileged: type: "boolean" description: "Runs the exec process with extended privileges." default: false User: type: "string" description: | The user, and optionally, group to run the exec process inside the container. Format is one of: `user`, `user:group`, `uid`, or `uid:gid`. WorkingDir: type: "string" description: | The working directory for the exec process inside the container. example: AttachStdin: false AttachStdout: true AttachStderr: true DetachKeys: "ctrl-p,ctrl-q" Tty: false Cmd: - "date" Env: - "FOO=bar" - "BAZ=quux" required: true - name: "id" in: "path" description: "ID or name of container" type: "string" required: true tags: ["Exec"] /exec/{id}/start: post: summary: "Start an exec instance" description: | Starts a previously set up exec instance. If detach is true, this endpoint returns immediately after starting the command. Otherwise, it sets up an interactive session with the command. operationId: "ExecStart" consumes: - "application/json" produces: - "application/vnd.docker.raw-stream" responses: 200: description: "No error" 404: description: "No such exec instance" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Container is stopped or paused" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "execStartConfig" in: "body" schema: type: "object" properties: Detach: type: "boolean" description: "Detach from the command." Tty: type: "boolean" description: "Allocate a pseudo-TTY." example: Detach: false Tty: false - name: "id" in: "path" description: "Exec instance ID" required: true type: "string" tags: ["Exec"] /exec/{id}/resize: post: summary: "Resize an exec instance" description: | Resize the TTY session used by an exec instance. This endpoint only works if `tty` was specified as part of creating and starting the exec instance. 
operationId: "ExecResize" responses: 201: description: "No error" 404: description: "No such exec instance" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Exec instance ID" required: true type: "string" - name: "h" in: "query" description: "Height of the TTY session in characters" type: "integer" - name: "w" in: "query" description: "Width of the TTY session in characters" type: "integer" tags: ["Exec"] /exec/{id}/json: get: summary: "Inspect an exec instance" description: "Return low-level information about an exec instance." operationId: "ExecInspect" produces: - "application/json" responses: 200: description: "No error" schema: type: "object" title: "ExecInspectResponse" properties: CanRemove: type: "boolean" DetachKeys: type: "string" ID: type: "string" Running: type: "boolean" ExitCode: type: "integer" ProcessConfig: $ref: "#/definitions/ProcessConfig" OpenStdin: type: "boolean" OpenStderr: type: "boolean" OpenStdout: type: "boolean" ContainerID: type: "string" Pid: type: "integer" description: "The system process ID for the exec process." examples: application/json: CanRemove: false ContainerID: "b53ee82b53a40c7dca428523e34f741f3abc51d9f297a14ff874bf761b995126" DetachKeys: "" ExitCode: 2 ID: "f33bbfb39f5b142420f4759b2348913bd4a8d1a6d7fd56499cb41a1bb91d7b3b" OpenStderr: true OpenStdin: true OpenStdout: true ProcessConfig: arguments: - "-c" - "exit 2" entrypoint: "sh" privileged: false tty: true user: "1000" Running: false Pid: 42000 404: description: "No such exec instance" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Exec instance ID" required: true type: "string" tags: ["Exec"] /volumes: get: summary: "List volumes" operationId: "VolumeList" produces: ["application/json"] responses: 200: description: "Summary volume data that matches the query" schema: type: "object" title: "VolumeListResponse" description: "Volume list response" required: [Volumes, Warnings] properties: Volumes: type: "array" x-nullable: false description: "List of volumes" items: $ref: "#/definitions/Volume" Warnings: type: "array" x-nullable: false description: | Warnings that occurred when fetching the list of volumes. items: type: "string" examples: application/json: Volumes: - CreatedAt: "2017-07-19T12:00:26Z" Name: "tardis" Driver: "local" Mountpoint: "/var/lib/docker/volumes/tardis" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Scope: "local" Options: device: "tmpfs" o: "size=100m,uid=1000" type: "tmpfs" Warnings: [] 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" description: | JSON encoded value of the filters (a `map[string][]string`) to process on the volumes list. Available filters: - `dangling=<boolean>` When set to `true` (or `1`), returns all volumes that are not in use by a container. When set to `false` (or `0`), only volumes that are in use by one or more containers are returned. - `driver=<volume-driver-name>` Matches volumes based on their driver. - `label=<key>` or `label=<key>:<value>` Matches volumes based on the presence of a `label` alone or a `label` and a value. - `name=<volume-name>` Matches all or part of a volume name. 
type: "string" format: "json" tags: ["Volume"] /volumes/create: post: summary: "Create a volume" operationId: "VolumeCreate" consumes: ["application/json"] produces: ["application/json"] responses: 201: description: "The volume was created successfully" schema: $ref: "#/definitions/Volume" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "volumeConfig" in: "body" required: true description: "Volume configuration" schema: type: "object" description: "Volume configuration" title: "VolumeConfig" properties: Name: description: | The new volume's name. If not specified, Docker generates a name. type: "string" x-nullable: false Driver: description: "Name of the volume driver to use." type: "string" default: "local" x-nullable: false DriverOpts: description: | A mapping of driver options and values. These options are passed directly to the driver and are driver specific. type: "object" additionalProperties: type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" example: Name: "tardis" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Driver: "custom" tags: ["Volume"] /volumes/{name}: get: summary: "Inspect a volume" operationId: "VolumeInspect" produces: ["application/json"] responses: 200: description: "No error" schema: $ref: "#/definitions/Volume" 404: description: "No such volume" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" required: true description: "Volume name or ID" type: "string" tags: ["Volume"] delete: summary: "Remove a volume" description: "Instruct the driver to remove the volume." operationId: "VolumeDelete" responses: 204: description: "The volume was removed" 404: description: "No such volume or volume driver" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Volume is in use and cannot be removed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" required: true description: "Volume name or ID" type: "string" - name: "force" in: "query" description: "Force the removal of the volume" type: "boolean" default: false tags: ["Volume"] /volumes/prune: post: summary: "Delete unused volumes" produces: - "application/json" operationId: "VolumePrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune volumes with (or without, in case `label!=...` is used) the specified labels. type: "string" responses: 200: description: "No error" schema: type: "object" title: "VolumePruneResponse" properties: VolumesDeleted: description: "Volumes that were deleted" type: "array" items: type: "string" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Volume"] /networks: get: summary: "List networks" description: | Returns a list of networks. For details on the format, see the [network inspect endpoint](#operation/NetworkInspect). Note that it uses a different, smaller representation of a network than inspecting a single network. 
For example, the list of containers attached to the network is not propagated in API versions 1.28 and up. operationId: "NetworkList" produces: - "application/json" responses: 200: description: "No error" schema: type: "array" items: $ref: "#/definitions/Network" examples: application/json: - Name: "bridge" Id: "f2de39df4171b0dc801e8002d1d999b77256983dfc63041c0f34030aa3977566" Created: "2016-10-19T06:21:00.416543526Z" Scope: "local" Driver: "bridge" EnableIPv6: false Internal: false Attachable: false Ingress: false IPAM: Driver: "default" Config: - Subnet: "172.17.0.0/16" Options: com.docker.network.bridge.default_bridge: "true" com.docker.network.bridge.enable_icc: "true" com.docker.network.bridge.enable_ip_masquerade: "true" com.docker.network.bridge.host_binding_ipv4: "0.0.0.0" com.docker.network.bridge.name: "docker0" com.docker.network.driver.mtu: "1500" - Name: "none" Id: "e086a3893b05ab69242d3c44e49483a3bbbd3a26b46baa8f61ab797c1088d794" Created: "0001-01-01T00:00:00Z" Scope: "local" Driver: "null" EnableIPv6: false Internal: false Attachable: false Ingress: false IPAM: Driver: "default" Config: [] Containers: {} Options: {} - Name: "host" Id: "13e871235c677f196c4e1ecebb9dc733b9b2d2ab589e30c539efeda84a24215e" Created: "0001-01-01T00:00:00Z" Scope: "local" Driver: "host" EnableIPv6: false Internal: false Attachable: false Ingress: false IPAM: Driver: "default" Config: [] Containers: {} Options: {} 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" description: | JSON encoded value of the filters (a `map[string][]string`) to process on the networks list. Available filters: - `dangling=<boolean>` When set to `true` (or `1`), returns all networks that are not in use by a container. When set to `false` (or `0`), only networks that are in use by one or more containers are returned. - `driver=<driver-name>` Matches a network's driver. - `id=<network-id>` Matches all or part of a network ID. - `label=<key>` or `label=<key>=<value>` of a network label. - `name=<network-name>` Matches all or part of a network name. - `scope=["swarm"|"global"|"local"]` Filters networks by scope (`swarm`, `global`, or `local`). - `type=["custom"|"builtin"]` Filters networks by type. The `custom` keyword returns all user-defined networks. 
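A hedged example of combining the list filters above from the Go client, assuming an initialized `cli`; the option and response type names vary slightly between client versions.

```go
package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/api/types/filters"
	"github.com/docker/docker/client"
)

// listCustomBridges lists user-defined bridge networks, i.e.
// GET /networks?filters={"driver":["bridge"],"type":["custom"]}.
func listCustomBridges(ctx context.Context, cli *client.Client) error {
	nets, err := cli.NetworkList(ctx, types.NetworkListOptions{
		Filters: filters.NewArgs(
			filters.Arg("driver", "bridge"),
			filters.Arg("type", "custom"),
		),
	})
	if err != nil {
		return err
	}
	for _, n := range nets {
		fmt.Printf("%s\t%s\t%s\n", n.ID[:12], n.Name, n.Scope)
	}
	return nil
}
```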
type: "string" tags: ["Network"] /networks/{id}: get: summary: "Inspect a network" operationId: "NetworkInspect" produces: - "application/json" responses: 200: description: "No error" schema: $ref: "#/definitions/Network" 404: description: "Network not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" - name: "verbose" in: "query" description: "Detailed inspect output for troubleshooting" type: "boolean" default: false - name: "scope" in: "query" description: "Filter the network by scope (swarm, global, or local)" type: "string" tags: ["Network"] delete: summary: "Remove a network" operationId: "NetworkDelete" responses: 204: description: "No error" 403: description: "operation not supported for pre-defined networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such network" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" tags: ["Network"] /networks/create: post: summary: "Create a network" operationId: "NetworkCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "No error" schema: type: "object" title: "NetworkCreateResponse" properties: Id: description: "The ID of the created network." type: "string" Warning: type: "string" example: Id: "22be93d5babb089c5aab8dbc369042fad48ff791584ca2da2100db837a1c7c30" Warning: "" 403: description: "operation not supported for pre-defined networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "plugin not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "networkConfig" in: "body" description: "Network configuration" required: true schema: type: "object" required: ["Name"] properties: Name: description: "The network's name." type: "string" CheckDuplicate: description: | Check for networks with duplicate names. Since Network is primarily keyed based on a random ID and not on the name, and network name is strictly a user-friendly alias to the network which is uniquely identified using ID, there is no guaranteed way to check for duplicates. CheckDuplicate is there to provide a best effort checking of any networks which has the same name but it is not guaranteed to catch all name collisions. type: "boolean" Driver: description: "Name of the network driver plugin to use." type: "string" default: "bridge" Internal: description: "Restrict external access to the network." type: "boolean" Attachable: description: | Globally scoped network is manually attachable by regular containers from workers in swarm mode. type: "boolean" Ingress: description: | Ingress network is the network which provides the routing-mesh in swarm mode. type: "boolean" IPAM: description: "Optional custom IP scheme for the network." $ref: "#/definitions/IPAM" EnableIPv6: description: "Enable IPv6 on the network." type: "boolean" Options: description: "Network specific options to be used by the drivers." type: "object" additionalProperties: type: "string" Labels: description: "User-defined key/value metadata." 
type: "object" additionalProperties: type: "string" example: Name: "isolated_nw" CheckDuplicate: false Driver: "bridge" EnableIPv6: true IPAM: Driver: "default" Config: - Subnet: "172.20.0.0/16" IPRange: "172.20.10.0/24" Gateway: "172.20.10.11" - Subnet: "2001:db8:abcd::/64" Gateway: "2001:db8:abcd::1011" Options: foo: "bar" Internal: true Attachable: false Ingress: false Options: com.docker.network.bridge.default_bridge: "true" com.docker.network.bridge.enable_icc: "true" com.docker.network.bridge.enable_ip_masquerade: "true" com.docker.network.bridge.host_binding_ipv4: "0.0.0.0" com.docker.network.bridge.name: "docker0" com.docker.network.driver.mtu: "1500" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" tags: ["Network"] /networks/{id}/connect: post: summary: "Connect a container to a network" operationId: "NetworkConnect" consumes: - "application/json" responses: 200: description: "No error" 403: description: "Operation not supported for swarm scoped networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "Network or container not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" - name: "container" in: "body" required: true schema: type: "object" properties: Container: type: "string" description: "The ID or name of the container to connect to the network." EndpointConfig: $ref: "#/definitions/EndpointSettings" example: Container: "3613f73ba0e4" EndpointConfig: IPAMConfig: IPv4Address: "172.24.56.89" IPv6Address: "2001:db8::5689" tags: ["Network"] /networks/{id}/disconnect: post: summary: "Disconnect a container from a network" operationId: "NetworkDisconnect" consumes: - "application/json" responses: 200: description: "No error" 403: description: "Operation not supported for swarm scoped networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "Network or container not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" - name: "container" in: "body" required: true schema: type: "object" properties: Container: type: "string" description: | The ID or name of the container to disconnect from the network. Force: type: "boolean" description: | Force the container to disconnect from the network. tags: ["Network"] /networks/prune: post: summary: "Delete unused networks" produces: - "application/json" operationId: "NetworkPrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `until=<timestamp>` Prune networks created before this timestamp. The `<timestamp>` can be Unix timestamps, date formatted timestamps, or Go duration strings (e.g. `10m`, `1h30m`) computed relative to the daemon machine’s time. - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune networks with (or without, in case `label!=...` is used) the specified labels. 
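The create and connect endpoints are often used back to back. Here is a sketch with the Go client, using placeholder names and addresses; `types.NetworkCreate` and `network.EndpointSettings` are the struct names in clients contemporary with this API version and may differ in newer releases.

```go
package main

import (
	"context"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/api/types/network"
	"github.com/docker/docker/client"
)

// createAndAttach creates an isolated bridge network with a custom subnet and
// connects an existing container to it with a static address.
func createAndAttach(ctx context.Context, cli *client.Client, containerID string) error {
	// POST /networks/create
	resp, err := cli.NetworkCreate(ctx, "isolated_nw", types.NetworkCreate{
		Driver:         "bridge",
		CheckDuplicate: true,
		Internal:       true,
		IPAM: &network.IPAM{
			Driver: "default",
			Config: []network.IPAMConfig{{Subnet: "172.20.0.0/16", Gateway: "172.20.0.1"}},
		},
		Labels: map[string]string{"com.example.some-label": "some-value"},
	})
	if err != nil {
		return err
	}

	// POST /networks/{id}/connect with a fixed IPv4 address for the endpoint.
	return cli.NetworkConnect(ctx, resp.ID, containerID, &network.EndpointSettings{
		IPAMConfig: &network.EndpointIPAMConfig{IPv4Address: "172.20.10.5"},
	})
}
```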
type: "string" responses: 200: description: "No error" schema: type: "object" title: "NetworkPruneResponse" properties: NetworksDeleted: description: "Networks that were deleted" type: "array" items: type: "string" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Network"] /plugins: get: summary: "List plugins" operationId: "PluginList" description: "Returns information about installed plugins." produces: ["application/json"] responses: 200: description: "No error" schema: type: "array" items: $ref: "#/definitions/Plugin" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the plugin list. Available filters: - `capability=<capability name>` - `enable=<true>|<false>` tags: ["Plugin"] /plugins/privileges: get: summary: "Get plugin privileges" operationId: "GetPluginPrivileges" responses: 200: description: "no error" schema: type: "array" items: description: | Describes a permission the user has to accept upon installing the plugin. type: "object" title: "PluginPrivilegeItem" properties: Name: type: "string" Description: type: "string" Value: type: "array" items: type: "string" example: - Name: "network" Description: "" Value: - "host" - Name: "mount" Description: "" Value: - "/data" - Name: "device" Description: "" Value: - "/dev/cpu_dma_latency" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "remote" in: "query" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" tags: - "Plugin" /plugins/pull: post: summary: "Install a plugin" operationId: "PluginPull" description: | Pulls and installs a plugin. After the plugin is installed, it can be enabled using the [`POST /plugins/{name}/enable` endpoint](#operation/PostPluginsEnable). produces: - "application/json" responses: 204: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "remote" in: "query" description: | Remote reference for plugin to install. The `:latest` tag is optional, and is used as the default if omitted. required: true type: "string" - name: "name" in: "query" description: | Local name for the pulled plugin. The `:latest` tag is optional, and is used as the default if omitted. required: false type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration to use when pulling a plugin from a registry. Refer to the [authentication section](#section/Authentication) for details. type: "string" - name: "body" in: "body" schema: type: "array" items: description: | Describes a permission accepted by the user upon installing the plugin. 
type: "object" properties: Name: type: "string" Description: type: "string" Value: type: "array" items: type: "string" example: - Name: "network" Description: "" Value: - "host" - Name: "mount" Description: "" Value: - "/data" - Name: "device" Description: "" Value: - "/dev/cpu_dma_latency" tags: ["Plugin"] /plugins/{name}/json: get: summary: "Inspect a plugin" operationId: "PluginInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Plugin" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" tags: ["Plugin"] /plugins/{name}: delete: summary: "Remove a plugin" operationId: "PluginDelete" responses: 200: description: "no error" schema: $ref: "#/definitions/Plugin" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "force" in: "query" description: | Disable the plugin before removing. This may result in issues if the plugin is in use by a container. type: "boolean" default: false tags: ["Plugin"] /plugins/{name}/enable: post: summary: "Enable a plugin" operationId: "PluginEnable" responses: 200: description: "no error" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "timeout" in: "query" description: "Set the HTTP client timeout (in seconds)" type: "integer" default: 0 tags: ["Plugin"] /plugins/{name}/disable: post: summary: "Disable a plugin" operationId: "PluginDisable" responses: 200: description: "no error" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" tags: ["Plugin"] /plugins/{name}/upgrade: post: summary: "Upgrade a plugin" operationId: "PluginUpgrade" responses: 204: description: "no error" 404: description: "plugin not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "remote" in: "query" description: | Remote reference to upgrade to. The `:latest` tag is optional, and is used as the default if omitted. required: true type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration to use when pulling a plugin from a registry. Refer to the [authentication section](#section/Authentication) for details. 
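A sketch of the typical plugin install flow (pull with accepted privileges, then enable) through the Go client. The plugin reference and timeout are placeholders, the progress stream returned by `PluginInstall` has to be drained, and an initialized `cli` is assumed.

```go
package main

import (
	"context"
	"io"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/client"
)

// installPlugin pulls a plugin, grants the privileges it asks for, and enables it.
// Roughly: GET /plugins/privileges + POST /plugins/pull + POST /plugins/{name}/enable.
func installPlugin(ctx context.Context, cli *client.Client, ref string) error {
	rc, err := cli.PluginInstall(ctx, ref, types.PluginInstallOptions{
		RemoteRef:            ref,
		AcceptAllPermissions: true, // accept the privileges the plugin requests
	})
	if err != nil {
		return err
	}
	// The body is a JSON progress stream; drain it so the install completes.
	if _, err := io.Copy(io.Discard, rc); err != nil {
		rc.Close()
		return err
	}
	rc.Close()

	return cli.PluginEnable(ctx, ref, types.PluginEnableOptions{Timeout: 30})
}
```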
type: "string" - name: "body" in: "body" schema: type: "array" items: description: | Describes a permission accepted by the user upon installing the plugin. type: "object" properties: Name: type: "string" Description: type: "string" Value: type: "array" items: type: "string" example: - Name: "network" Description: "" Value: - "host" - Name: "mount" Description: "" Value: - "/data" - Name: "device" Description: "" Value: - "/dev/cpu_dma_latency" tags: ["Plugin"] /plugins/create: post: summary: "Create a plugin" operationId: "PluginCreate" consumes: - "application/x-tar" responses: 204: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "query" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "tarContext" in: "body" description: "Path to tar containing plugin rootfs and manifest" schema: type: "string" format: "binary" tags: ["Plugin"] /plugins/{name}/push: post: summary: "Push a plugin" operationId: "PluginPush" description: | Push a plugin to the registry. parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" responses: 200: description: "no error" 404: description: "plugin not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Plugin"] /plugins/{name}/set: post: summary: "Configure a plugin" operationId: "PluginSet" consumes: - "application/json" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "body" in: "body" schema: type: "array" items: type: "string" example: ["DEBUG=1"] responses: 204: description: "No error" 404: description: "Plugin not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Plugin"] /nodes: get: summary: "List nodes" operationId: "NodeList" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Node" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" description: | Filters to process on the nodes list, encoded as JSON (a `map[string][]string`). 
Available filters: - `id=<node id>` - `label=<engine label>` - `membership=`(`accepted`|`pending`)` - `name=<node name>` - `node.label=<node label>` - `role=`(`manager`|`worker`)` type: "string" tags: ["Node"] /nodes/{id}: get: summary: "Inspect a node" operationId: "NodeInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Node" 404: description: "no such node" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the node" type: "string" required: true tags: ["Node"] delete: summary: "Delete a node" operationId: "NodeDelete" responses: 200: description: "no error" 404: description: "no such node" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the node" type: "string" required: true - name: "force" in: "query" description: "Force remove a node from the swarm" default: false type: "boolean" tags: ["Node"] /nodes/{id}/update: post: summary: "Update a node" operationId: "NodeUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such node" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID of the node" type: "string" required: true - name: "body" in: "body" schema: $ref: "#/definitions/NodeSpec" - name: "version" in: "query" description: | The version number of the node object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true tags: ["Node"] /swarm: get: summary: "Inspect swarm" operationId: "SwarmInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Swarm" 404: description: "no such swarm" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" tags: ["Swarm"] /swarm/init: post: summary: "Initialize a new swarm" operationId: "SwarmInit" produces: - "application/json" - "text/plain" responses: 200: description: "no error" schema: description: "The node ID" type: "string" example: "7v2t30z9blmxuhnyo6s4cpenp" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is already part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: type: "object" properties: ListenAddr: description: | Listen address used for inter-manager communication, as well as determining the networking interface used for the VXLAN Tunnel Endpoint (VTEP). This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the default swarm listening port is used. 
type: "string" AdvertiseAddr: description: | Externally reachable address advertised to other nodes. This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the port number from the listen address is used. If `AdvertiseAddr` is not specified, it will be automatically detected when possible. type: "string" DataPathAddr: description: | Address or interface to use for data path traffic (format: `<ip|interface>`), for example, `192.168.1.1`, or an interface, like `eth0`. If `DataPathAddr` is unspecified, the same address as `AdvertiseAddr` is used. The `DataPathAddr` specifies the address that global scope network drivers will publish towards other nodes in order to reach the containers running on this node. Using this parameter it is possible to separate the container data traffic from the management traffic of the cluster. type: "string" DataPathPort: description: | DataPathPort specifies the data path port number for data traffic. Acceptable port range is 1024 to 49151. if no port is set or is set to 0, default port 4789 will be used. type: "integer" format: "uint32" DefaultAddrPool: description: | Default Address Pool specifies default subnet pools for global scope networks. type: "array" items: type: "string" example: ["10.10.0.0/16", "20.20.0.0/16"] ForceNewCluster: description: "Force creation of a new swarm." type: "boolean" SubnetSize: description: | SubnetSize specifies the subnet size of the networks created from the default subnet pool. type: "integer" format: "uint32" Spec: $ref: "#/definitions/SwarmSpec" example: ListenAddr: "0.0.0.0:2377" AdvertiseAddr: "192.168.1.1:2377" DataPathPort: 4789 DefaultAddrPool: ["10.10.0.0/8", "20.20.0.0/8"] SubnetSize: 24 ForceNewCluster: false Spec: Orchestration: {} Raft: {} Dispatcher: {} CAConfig: {} EncryptionConfig: AutoLockManagers: false tags: ["Swarm"] /swarm/join: post: summary: "Join an existing swarm" operationId: "SwarmJoin" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is already part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: type: "object" properties: ListenAddr: description: | Listen address used for inter-manager communication if the node gets promoted to manager, as well as determining the networking interface used for the VXLAN Tunnel Endpoint (VTEP). type: "string" AdvertiseAddr: description: | Externally reachable address advertised to other nodes. This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the port number from the listen address is used. If `AdvertiseAddr` is not specified, it will be automatically detected when possible. type: "string" DataPathAddr: description: | Address or interface to use for data path traffic (format: `<ip|interface>`), for example, `192.168.1.1`, or an interface, like `eth0`. If `DataPathAddr` is unspecified, the same address as `AdvertiseAddr` is used. The `DataPathAddr` specifies the address that global scope network drivers will publish towards other nodes in order to reach the containers running on this node. 
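For example, initializing a single-node swarm from the Go client, following the `/swarm/init` parameters described above; the advertise address and address pool below are placeholders, and `cli` is assumed to be an initialized client.

```go
package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/api/types/swarm"
	"github.com/docker/docker/client"
)

// initSwarm turns the current engine into a single-node swarm manager,
// i.e. POST /swarm/init with a listen and advertise address.
func initSwarm(ctx context.Context, cli *client.Client) error {
	nodeID, err := cli.SwarmInit(ctx, swarm.InitRequest{
		ListenAddr:      "0.0.0.0:2377",
		AdvertiseAddr:   "192.168.1.1:2377", // placeholder address
		DefaultAddrPool: []string{"10.10.0.0/16"},
		SubnetSize:      24,
	})
	if err != nil {
		return err
	}
	fmt.Println("manager node ID:", nodeID)
	return nil
}
```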
Using this parameter it is possible to separate the container data traffic from the management traffic of the cluster. type: "string" RemoteAddrs: description: | Addresses of manager nodes already participating in the swarm. type: "array" items: type: "string" JoinToken: description: "Secret token for joining this swarm." type: "string" example: ListenAddr: "0.0.0.0:2377" AdvertiseAddr: "192.168.1.1:2377" RemoteAddrs: - "node1:2377" JoinToken: "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-7p73s1dx5in4tatdymyhg9hu2" tags: ["Swarm"] /swarm/leave: post: summary: "Leave a swarm" operationId: "SwarmLeave" responses: 200: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "force" description: | Force leave swarm, even if this is the last manager or that it will break the cluster. in: "query" type: "boolean" default: false tags: ["Swarm"] /swarm/update: post: summary: "Update a swarm" operationId: "SwarmUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: $ref: "#/definitions/SwarmSpec" - name: "version" in: "query" description: | The version number of the swarm object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true - name: "rotateWorkerToken" in: "query" description: "Rotate the worker join token." type: "boolean" default: false - name: "rotateManagerToken" in: "query" description: "Rotate the manager join token." type: "boolean" default: false - name: "rotateManagerUnlockKey" in: "query" description: "Rotate the manager unlock key." type: "boolean" default: false tags: ["Swarm"] /swarm/unlockkey: get: summary: "Get the unlock key" operationId: "SwarmUnlockkey" consumes: - "application/json" responses: 200: description: "no error" schema: type: "object" title: "UnlockKeyResponse" properties: UnlockKey: description: "The swarm's unlock key." type: "string" example: UnlockKey: "SWMKEY-1-7c37Cc8654o6p38HnroywCi19pllOnGtbdZEgtKxZu8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" tags: ["Swarm"] /swarm/unlock: post: summary: "Unlock a locked manager" operationId: "SwarmUnlock" consumes: - "application/json" produces: - "application/json" parameters: - name: "body" in: "body" required: true schema: type: "object" properties: UnlockKey: description: "The swarm's unlock key." 
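Because `/swarm/update` requires the current object version, updates are usually done as inspect-then-update. A sketch that rotates the worker join token this way, assuming an initialized `cli`:

```go
package main

import (
	"context"

	"github.com/docker/docker/api/types/swarm"
	"github.com/docker/docker/client"
)

// rotateWorkerToken re-reads the swarm object (for its version) and asks the
// manager to rotate the worker join token: GET /swarm + POST /swarm/update.
func rotateWorkerToken(ctx context.Context, cli *client.Client) error {
	sw, err := cli.SwarmInspect(ctx)
	if err != nil {
		return err
	}
	// The version from the inspect response guards against conflicting writes.
	return cli.SwarmUpdate(ctx, sw.Version, sw.Spec, swarm.UpdateFlags{
		RotateWorkerToken: true,
	})
}
```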
type: "string" example: UnlockKey: "SWMKEY-1-7c37Cc8654o6p38HnroywCi19pllOnGtbdZEgtKxZu8" responses: 200: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" tags: ["Swarm"] /services: get: summary: "List services" operationId: "ServiceList" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Service" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the services list. Available filters: - `id=<service id>` - `label=<service label>` - `mode=["replicated"|"global"]` - `name=<service name>` - name: "status" in: "query" type: "boolean" description: | Include service status, with count of running and desired tasks. tags: ["Service"] /services/create: post: summary: "Create a service" operationId: "ServiceCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: type: "object" title: "ServiceCreateResponse" properties: ID: description: "The ID of the created service." type: "string" Warning: description: "Optional warning message" type: "string" example: ID: "ak7w3gjqoa3kuz8xcpnyy0pvl" Warning: "unable to pin image doesnotexist:latest to digest: image library/doesnotexist:latest not found" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 403: description: "network is not eligible for services" schema: $ref: "#/definitions/ErrorResponse" 409: description: "name conflicts with an existing service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: allOf: - $ref: "#/definitions/ServiceSpec" - type: "object" example: Name: "web" TaskTemplate: ContainerSpec: Image: "nginx:alpine" Mounts: - ReadOnly: true Source: "web-data" Target: "/usr/share/nginx/html" Type: "volume" VolumeOptions: DriverConfig: {} Labels: com.example.something: "something-value" Hosts: ["10.10.10.10 host1", "ABCD:EF01:2345:6789:ABCD:EF01:2345:6789 host2"] User: "33" DNSConfig: Nameservers: ["8.8.8.8"] Search: ["example.org"] Options: ["timeout:3"] Secrets: - File: Name: "www.example.org.key" UID: "33" GID: "33" Mode: 384 SecretID: "fpjqlhnwb19zds35k8wn80lq9" SecretName: "example_org_domain_key" LogDriver: Name: "json-file" Options: max-file: "3" max-size: "10M" Placement: {} Resources: Limits: MemoryBytes: 104857600 Reservations: {} RestartPolicy: Condition: "on-failure" Delay: 10000000000 MaxAttempts: 10 Mode: Replicated: Replicas: 4 UpdateConfig: Parallelism: 2 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 RollbackConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 EndpointSpec: Ports: - Protocol: "tcp" PublishedPort: 8080 TargetPort: 80 Labels: foo: "bar" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration for pulling from private registries. Refer to the [authentication section](#section/Authentication) for details. 
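A minimal replicated-service example using the Go client. The image, port numbers, and replica count are placeholders, `cli` is assumed to be initialized, and registry credentials for private images would be supplied via `types.ServiceCreateOptions.EncodedRegistryAuth` (see the authentication section).

```go
package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/api/types/swarm"
	"github.com/docker/docker/client"
)

// createWebService creates a replicated nginx service (POST /services/create).
func createWebService(ctx context.Context, cli *client.Client) error {
	replicas := uint64(4)
	resp, err := cli.ServiceCreate(ctx, swarm.ServiceSpec{
		Annotations: swarm.Annotations{Name: "web"},
		TaskTemplate: swarm.TaskSpec{
			ContainerSpec: &swarm.ContainerSpec{Image: "nginx:alpine"},
		},
		Mode: swarm.ServiceMode{Replicated: &swarm.ReplicatedService{Replicas: &replicas}},
		EndpointSpec: &swarm.EndpointSpec{
			Ports: []swarm.PortConfig{{
				Protocol:      swarm.PortConfigProtocolTCP,
				TargetPort:    80,
				PublishedPort: 8080,
			}},
		},
	}, types.ServiceCreateOptions{})
	if err != nil {
		return err
	}
	for _, w := range resp.Warnings {
		fmt.Println("warning:", w)
	}
	fmt.Println("service ID:", resp.ID)
	return nil
}
```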
type: "string" tags: ["Service"] /services/{id}: get: summary: "Inspect a service" operationId: "ServiceInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Service" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID or name of service." required: true type: "string" - name: "insertDefaults" in: "query" description: "Fill empty fields with default values." type: "boolean" default: false tags: ["Service"] delete: summary: "Delete a service" operationId: "ServiceDelete" responses: 200: description: "no error" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID or name of service." required: true type: "string" tags: ["Service"] /services/{id}/update: post: summary: "Update a service" operationId: "ServiceUpdate" consumes: ["application/json"] produces: ["application/json"] responses: 200: description: "no error" schema: $ref: "#/definitions/ServiceUpdateResponse" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID or name of service." required: true type: "string" - name: "body" in: "body" required: true schema: allOf: - $ref: "#/definitions/ServiceSpec" - type: "object" example: Name: "top" TaskTemplate: ContainerSpec: Image: "busybox" Args: - "top" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ForceUpdate: 0 Mode: Replicated: Replicas: 1 UpdateConfig: Parallelism: 2 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 RollbackConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 EndpointSpec: Mode: "vip" - name: "version" in: "query" description: | The version number of the service object being updated. This is required to avoid conflicting writes. This version number should be the value as currently set on the service *before* the update. You can find the current version by calling `GET /services/{id}` required: true type: "integer" - name: "registryAuthFrom" in: "query" description: | If the `X-Registry-Auth` header is not specified, this parameter indicates where to find registry authorization credentials. type: "string" enum: ["spec", "previous-spec"] default: "spec" - name: "rollback" in: "query" description: | Set to this parameter to `previous` to cause a server-side rollback to the previous service spec. The supplied spec will be ignored in this case. type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration for pulling from private registries. Refer to the [authentication section](#section/Authentication) for details. 
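The version requirement implies a read-modify-write pattern: inspect the service, mutate the returned spec, and send it back together with the returned version. A sketch that scales a replicated service this way, assuming an initialized `cli`:

```go
package main

import (
	"context"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/client"
)

// scaleService changes the replica count of an existing replicated service:
// GET /services/{id} first, then POST /services/{id}/update with that version.
func scaleService(ctx context.Context, cli *client.Client, serviceID string, replicas uint64) error {
	svc, _, err := cli.ServiceInspectWithRaw(ctx, serviceID, types.ServiceInspectOptions{})
	if err != nil {
		return err
	}
	if svc.Spec.Mode.Replicated == nil {
		return nil // not a replicated service; nothing to scale
	}
	svc.Spec.Mode.Replicated.Replicas = &replicas

	_, err = cli.ServiceUpdate(ctx, serviceID, svc.Version, svc.Spec, types.ServiceUpdateOptions{})
	return err
}
```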
type: "string" tags: ["Service"] /services/{id}/logs: get: summary: "Get service logs" description: | Get `stdout` and `stderr` logs from a service. See also [`/containers/{id}/logs`](#operation/ContainerLogs). **Note**: This endpoint works only for services with the `local`, `json-file` or `journald` logging drivers. operationId: "ServiceLogs" responses: 200: description: "logs returned as a stream in response body" schema: type: "string" format: "binary" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such service: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the service" type: "string" - name: "details" in: "query" description: "Show service context and extra details provided to logs." type: "boolean" default: false - name: "follow" in: "query" description: "Keep connection after returning logs." type: "boolean" default: false - name: "stdout" in: "query" description: "Return logs from `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Return logs from `stderr`" type: "boolean" default: false - name: "since" in: "query" description: "Only return logs since this time, as a UNIX timestamp" type: "integer" default: 0 - name: "timestamps" in: "query" description: "Add timestamps to every log line" type: "boolean" default: false - name: "tail" in: "query" description: | Only return this number of log lines from the end of the logs. Specify as an integer or `all` to output all log lines. type: "string" default: "all" tags: ["Service"] /tasks: get: summary: "List tasks" operationId: "TaskList" produces: - "application/json" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Task" example: - ID: "0kzzo1i0y4jz6027t0k7aezc7" Version: Index: 71 CreatedAt: "2016-06-07T21:07:31.171892745Z" UpdatedAt: "2016-06-07T21:07:31.376370513Z" Spec: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ServiceID: "9mnpnzenvg8p8tdbtq4wvbkcz" Slot: 1 NodeID: "60gvrl6tm78dmak4yl7srz94v" Status: Timestamp: "2016-06-07T21:07:31.290032978Z" State: "running" Message: "started" ContainerStatus: ContainerID: "e5d62702a1b48d01c3e02ca1e0212a250801fa8d67caca0b6f35919ebc12f035" PID: 677 DesiredState: "running" NetworksAttachments: - Network: ID: "4qvuz4ko70xaltuqbt8956gd1" Version: Index: 18 CreatedAt: "2016-06-07T20:31:11.912919752Z" UpdatedAt: "2016-06-07T21:07:29.955277358Z" Spec: Name: "ingress" Labels: com.docker.swarm.internal: "true" DriverConfiguration: {} IPAMOptions: Driver: {} Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" DriverState: Name: "overlay" Options: com.docker.network.driver.overlay.vxlanid_list: "256" IPAMOptions: Driver: Name: "default" Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" Addresses: - "10.255.0.10/16" - ID: "1yljwbmlr8er2waf8orvqpwms" Version: Index: 30 CreatedAt: "2016-06-07T21:07:30.019104782Z" UpdatedAt: "2016-06-07T21:07:30.231958098Z" Name: "hopeful_cori" Spec: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ServiceID: "9mnpnzenvg8p8tdbtq4wvbkcz" Slot: 1 NodeID: "60gvrl6tm78dmak4yl7srz94v" Status: Timestamp: "2016-06-07T21:07:30.202183143Z" State: 
"shutdown" Message: "shutdown" ContainerStatus: ContainerID: "1cf8d63d18e79668b0004a4be4c6ee58cddfad2dae29506d8781581d0688a213" DesiredState: "shutdown" NetworksAttachments: - Network: ID: "4qvuz4ko70xaltuqbt8956gd1" Version: Index: 18 CreatedAt: "2016-06-07T20:31:11.912919752Z" UpdatedAt: "2016-06-07T21:07:29.955277358Z" Spec: Name: "ingress" Labels: com.docker.swarm.internal: "true" DriverConfiguration: {} IPAMOptions: Driver: {} Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" DriverState: Name: "overlay" Options: com.docker.network.driver.overlay.vxlanid_list: "256" IPAMOptions: Driver: Name: "default" Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" Addresses: - "10.255.0.5/16" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the tasks list. Available filters: - `desired-state=(running | shutdown | accepted)` - `id=<task id>` - `label=key` or `label="key=value"` - `name=<task name>` - `node=<node id or name>` - `service=<service name>` tags: ["Task"] /tasks/{id}: get: summary: "Inspect a task" operationId: "TaskInspect" produces: - "application/json" responses: 200: description: "no error" schema: $ref: "#/definitions/Task" 404: description: "no such task" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID of the task" required: true type: "string" tags: ["Task"] /tasks/{id}/logs: get: summary: "Get task logs" description: | Get `stdout` and `stderr` logs from a task. See also [`/containers/{id}/logs`](#operation/ContainerLogs). **Note**: This endpoint works only for services with the `local`, `json-file` or `journald` logging drivers. operationId: "TaskLogs" responses: 200: description: "logs returned as a stream in response body" schema: type: "string" format: "binary" 404: description: "no such task" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such task: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID of the task" type: "string" - name: "details" in: "query" description: "Show task context and extra details provided to logs." type: "boolean" default: false - name: "follow" in: "query" description: "Keep connection after returning logs." type: "boolean" default: false - name: "stdout" in: "query" description: "Return logs from `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Return logs from `stderr`" type: "boolean" default: false - name: "since" in: "query" description: "Only return logs since this time, as a UNIX timestamp" type: "integer" default: 0 - name: "timestamps" in: "query" description: "Add timestamps to every log line" type: "boolean" default: false - name: "tail" in: "query" description: | Only return this number of log lines from the end of the logs. Specify as an integer or `all` to output all log lines. 
type: "string" default: "all" tags: ["Task"] /secrets: get: summary: "List secrets" operationId: "SecretList" produces: - "application/json" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Secret" example: - ID: "blt1owaxmitz71s9v5zh81zun" Version: Index: 85 CreatedAt: "2017-07-20T13:55:28.678958722Z" UpdatedAt: "2017-07-20T13:55:28.678958722Z" Spec: Name: "mysql-passwd" Labels: some.label: "some.value" Driver: Name: "secret-bucket" Options: OptionA: "value for driver option A" OptionB: "value for driver option B" - ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "app-dev.crt" Labels: foo: "bar" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the secrets list. Available filters: - `id=<secret id>` - `label=<key> or label=<key>=value` - `name=<secret name>` - `names=<secret name>` tags: ["Secret"] /secrets/create: post: summary: "Create a secret" operationId: "SecretCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 409: description: "name conflicts with an existing object" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" schema: allOf: - $ref: "#/definitions/SecretSpec" - type: "object" example: Name: "app-key.crt" Labels: foo: "bar" Data: "VEhJUyBJUyBOT1QgQSBSRUFMIENFUlRJRklDQVRFCg==" Driver: Name: "secret-bucket" Options: OptionA: "value for driver option A" OptionB: "value for driver option B" tags: ["Secret"] /secrets/{id}: get: summary: "Inspect a secret" operationId: "SecretInspect" produces: - "application/json" responses: 200: description: "no error" schema: $ref: "#/definitions/Secret" examples: application/json: ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "app-dev.crt" Labels: foo: "bar" Driver: Name: "secret-bucket" Options: OptionA: "value for driver option A" OptionB: "value for driver option B" 404: description: "secret not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the secret" tags: ["Secret"] delete: summary: "Delete a secret" operationId: "SecretDelete" produces: - "application/json" responses: 204: description: "no error" 404: description: "secret not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the secret" tags: ["Secret"] /secrets/{id}/update: post: summary: "Update a Secret" operationId: "SecretUpdate" responses: 200: description: "no error" 400: 
description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such secret" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the secret" type: "string" required: true - name: "body" in: "body" schema: $ref: "#/definitions/SecretSpec" description: | The spec of the secret to update. Currently, only the Labels field can be updated. All other fields must remain unchanged from the [SecretInspect endpoint](#operation/SecretInspect) response values. - name: "version" in: "query" description: | The version number of the secret object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true tags: ["Secret"] /configs: get: summary: "List configs" operationId: "ConfigList" produces: - "application/json" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Config" example: - ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "server.conf" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the configs list. Available filters: - `id=<config id>` - `label=<key> or label=<key>=value` - `name=<config name>` - `names=<config name>` tags: ["Config"] /configs/create: post: summary: "Create a config" operationId: "ConfigCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 409: description: "name conflicts with an existing object" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" schema: allOf: - $ref: "#/definitions/ConfigSpec" - type: "object" example: Name: "server.conf" Labels: foo: "bar" Data: "VEhJUyBJUyBOT1QgQSBSRUFMIENFUlRJRklDQVRFCg==" tags: ["Config"] /configs/{id}: get: summary: "Inspect a config" operationId: "ConfigInspect" produces: - "application/json" responses: 200: description: "no error" schema: $ref: "#/definitions/Config" examples: application/json: ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "app-dev.crt" 404: description: "config not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the config" tags: ["Config"] delete: summary: "Delete a config" operationId: "ConfigDelete" produces: - "application/json" responses: 204: description: "no error" 404: description: "config not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part 
of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the config" tags: ["Config"] /configs/{id}/update: post: summary: "Update a Config" operationId: "ConfigUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such config" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the config" type: "string" required: true - name: "body" in: "body" schema: $ref: "#/definitions/ConfigSpec" description: | The spec of the config to update. Currently, only the Labels field can be updated. All other fields must remain unchanged from the [ConfigInspect endpoint](#operation/ConfigInspect) response values. - name: "version" in: "query" description: | The version number of the config object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true tags: ["Config"] /distribution/{name}/json: get: summary: "Get image information from the registry" description: | Return image digest and platform information by contacting the registry. operationId: "DistributionInspect" produces: - "application/json" responses: 200: description: "descriptor and platform information" schema: type: "object" x-go-name: DistributionInspect title: "DistributionInspectResponse" required: [Descriptor, Platforms] properties: Descriptor: type: "object" description: | A descriptor struct containing digest, media type, and size. properties: MediaType: type: "string" Size: type: "integer" format: "int64" Digest: type: "string" URLs: type: "array" items: type: "string" Platforms: type: "array" description: | An array containing all platforms supported by the image. items: type: "object" properties: Architecture: type: "string" OS: type: "string" OSVersion: type: "string" OSFeatures: type: "array" items: type: "string" Variant: type: "string" Features: type: "array" items: type: "string" examples: application/json: Descriptor: MediaType: "application/vnd.docker.distribution.manifest.v2+json" Digest: "sha256:c0537ff6a5218ef531ece93d4984efc99bbf3f7497c0a7726c88e2bb7584dc96" Size: 3987495 URLs: - "" Platforms: - Architecture: "amd64" OS: "linux" OSVersion: "" OSFeatures: - "" Variant: "" Features: - "" 401: description: "Failed authentication or no image found" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such image: someimage (tag: latest)" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or id" type: "string" required: true tags: ["Distribution"] /session: post: summary: "Initialize interactive session" description: | Start a new interactive session with a server. Session allows server to call back to the client for advanced capabilities. ### Hijacking This endpoint hijacks the HTTP connection to HTTP2 transport that allows the client to expose gPRC services on that connection. 
        For example, the client sends this request to upgrade the connection:

        ```
        POST /session HTTP/1.1
        Upgrade: h2c
        Connection: Upgrade
        ```

        The Docker daemon responds with a `101 UPGRADED` response followed by
        the raw stream:

        ```
        HTTP/1.1 101 UPGRADED
        Connection: Upgrade
        Upgrade: h2c
        ```
      operationId: "Session"
      produces:
        - "application/vnd.docker.raw-stream"
      responses:
        101:
          description: "no error, hijacking successful"
        400:
          description: "bad parameter"
          schema:
            $ref: "#/definitions/ErrorResponse"
        500:
          description: "server error"
          schema:
            $ref: "#/definitions/ErrorResponse"
      tags: ["Session"]
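The `/distribution/{name}/json` endpoint described above can be exercised from the Go client as in the sketch below. The image reference is a placeholder, `cli` is assumed to be initialized, and an empty auth string is only sufficient for public images; private registries need the base64url-encoded auth configuration described in the authentication section.

```go
package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/client"
)

// resolveDigest asks the registry for the manifest digest and supported
// platforms of an image reference (GET /distribution/{name}/json).
func resolveDigest(ctx context.Context, cli *client.Client, ref string) error {
	info, err := cli.DistributionInspect(ctx, ref, "")
	if err != nil {
		return err
	}
	fmt.Println("digest:", info.Descriptor.Digest)
	for _, p := range info.Platforms {
		fmt.Printf("platform: %s/%s\n", p.OS, p.Architecture)
	}
	return nil
}
```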
# A Swagger 2.0 (a.k.a. OpenAPI) definition of the Engine API. # # This is used for generating API documentation and the types used by the # client/server. See api/README.md for more information. # # Some style notes: # - This file is used by ReDoc, which allows GitHub Flavored Markdown in # descriptions. # - There is no maximum line length, for ease of editing and pretty diffs. # - operationIds are in the format "NounVerb", with a singular noun. swagger: "2.0" schemes: - "http" - "https" produces: - "application/json" - "text/plain" consumes: - "application/json" - "text/plain" basePath: "/v1.42" info: title: "Docker Engine API" version: "1.42" x-logo: url: "https://docs.docker.com/images/logo-docker-main.png" description: | The Engine API is an HTTP API served by Docker Engine. It is the API the Docker client uses to communicate with the Engine, so everything the Docker client can do can be done with the API. Most of the client's commands map directly to API endpoints (e.g. `docker ps` is `GET /containers/json`). The notable exception is running containers, which consists of several API calls. # Errors The API uses standard HTTP status codes to indicate the success or failure of the API call. The body of the response will be JSON in the following format: ``` { "message": "page not found" } ``` # Versioning The API is usually changed in each release, so API calls are versioned to ensure that clients don't break. To lock to a specific version of the API, you prefix the URL with its version, for example, call `/v1.30/info` to use the v1.30 version of the `/info` endpoint. If the API version specified in the URL is not supported by the daemon, a HTTP `400 Bad Request` error message is returned. If you omit the version-prefix, the current version of the API (v1.42) is used. For example, calling `/info` is the same as calling `/v1.42/info`. Using the API without a version-prefix is deprecated and will be removed in a future release. Engine releases in the near future should support this version of the API, so your client will continue to work even if it is talking to a newer Engine. The API uses an open schema model, which means server may add extra properties to responses. Likewise, the server will ignore any extra query parameters and request body properties. When you write clients, you need to ignore additional properties in responses to ensure they do not break when talking to newer daemons. # Authentication Authentication for registries is handled client side. The client has to send authentication details to various endpoints that need to communicate with registries, such as `POST /images/(name)/push`. These are sent as `X-Registry-Auth` header as a [base64url encoded](https://tools.ietf.org/html/rfc4648#section-5) (JSON) string with the following structure: ``` { "username": "string", "password": "string", "email": "string", "serveraddress": "string" } ``` The `serveraddress` is a domain/IP without a protocol. Throughout this structure, double quotes are required. If you have already got an identity token from the [`/auth` endpoint](#operation/SystemAuth), you can just pass this instead of credentials: ``` { "identitytoken": "9cbaf023786cd7..." } ``` # The tags on paths define the menu sections in the ReDoc documentation, so # the usage of tags must make sense for that: # - They should be singular, not plural. # - There should not be too many tags, or the menu becomes unwieldy. 
For # example, it is preferable to add a path to the "System" tag instead of # creating a tag with a single path in it. # - The order of tags in this list defines the order in the menu. tags: # Primary objects - name: "Container" x-displayName: "Containers" description: | Create and manage containers. - name: "Image" x-displayName: "Images" - name: "Network" x-displayName: "Networks" description: | Networks are user-defined networks that containers can be attached to. See the [networking documentation](https://docs.docker.com/network/) for more information. - name: "Volume" x-displayName: "Volumes" description: | Create and manage persistent storage that can be attached to containers. - name: "Exec" x-displayName: "Exec" description: | Run new commands inside running containers. Refer to the [command-line reference](https://docs.docker.com/engine/reference/commandline/exec/) for more information. To exec a command in a container, you first need to create an exec instance, then start it. These two API endpoints are wrapped up in a single command-line command, `docker exec`. # Swarm things - name: "Swarm" x-displayName: "Swarm" description: | Engines can be clustered together in a swarm. Refer to the [swarm mode documentation](https://docs.docker.com/engine/swarm/) for more information. - name: "Node" x-displayName: "Nodes" description: | Nodes are instances of the Engine participating in a swarm. Swarm mode must be enabled for these endpoints to work. - name: "Service" x-displayName: "Services" description: | Services are the definitions of tasks to run on a swarm. Swarm mode must be enabled for these endpoints to work. - name: "Task" x-displayName: "Tasks" description: | A task is a container running on a swarm. It is the atomic scheduling unit of swarm. Swarm mode must be enabled for these endpoints to work. - name: "Secret" x-displayName: "Secrets" description: | Secrets are sensitive data that can be used by services. Swarm mode must be enabled for these endpoints to work. - name: "Config" x-displayName: "Configs" description: | Configs are application configurations that can be used by services. Swarm mode must be enabled for these endpoints to work. 
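# Usage sketch for the two-step exec flow described in the "Exec" tag above
# (not part of the definition; container name, command, and socket path are
# made up):
#
#   curl -sS --unix-socket /var/run/docker.sock \
#        -H "Content-Type: application/json" \
#        -d '{"Cmd": ["date"], "AttachStdout": true}' \
#        http://localhost/v1.42/containers/my-container/exec
#   # returns {"Id": "<exec instance id>"}, which is then started:
#   curl -sS --unix-socket /var/run/docker.sock \
#        -H "Content-Type: application/json" \
#        -d '{"Detach": false, "Tty": false}' \
#        http://localhost/v1.42/exec/<exec instance id>/start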
# System things - name: "Plugin" x-displayName: "Plugins" - name: "System" x-displayName: "System" definitions: Port: type: "object" description: "An open port on a container" required: [PrivatePort, Type] properties: IP: type: "string" format: "ip-address" description: "Host IP address that the container's port is mapped to" PrivatePort: type: "integer" format: "uint16" x-nullable: false description: "Port on the container" PublicPort: type: "integer" format: "uint16" description: "Port exposed on the host" Type: type: "string" x-nullable: false enum: ["tcp", "udp", "sctp"] example: PrivatePort: 8080 PublicPort: 80 Type: "tcp" MountPoint: type: "object" description: "A mount point inside a container" properties: Type: type: "string" Name: type: "string" Source: type: "string" Destination: type: "string" Driver: type: "string" Mode: type: "string" RW: type: "boolean" Propagation: type: "string" DeviceMapping: type: "object" description: "A device mapping between the host and container" properties: PathOnHost: type: "string" PathInContainer: type: "string" CgroupPermissions: type: "string" example: PathOnHost: "/dev/deviceName" PathInContainer: "/dev/deviceName" CgroupPermissions: "mrw" DeviceRequest: type: "object" description: "A request for devices to be sent to device drivers" properties: Driver: type: "string" example: "nvidia" Count: type: "integer" example: -1 DeviceIDs: type: "array" items: type: "string" example: - "0" - "1" - "GPU-fef8089b-4820-abfc-e83e-94318197576e" Capabilities: description: | A list of capabilities; an OR list of AND lists of capabilities. type: "array" items: type: "array" items: type: "string" example: # gpu AND nvidia AND compute - ["gpu", "nvidia", "compute"] Options: description: | Driver-specific options, specified as a key/value pairs. These options are passed directly to the driver. type: "object" additionalProperties: type: "string" ThrottleDevice: type: "object" properties: Path: description: "Device path" type: "string" Rate: description: "Rate" type: "integer" format: "int64" minimum: 0 Mount: type: "object" properties: Target: description: "Container path." type: "string" Source: description: "Mount source (e.g. a volume name, a host path)." type: "string" Type: description: | The mount type. Available types: - `bind` Mounts a file or directory from the host into the container. Must exist prior to creating the container. - `volume` Creates a volume with the given name and options (or uses a pre-existing volume with the same name and options). These are **not** removed when the container is removed. - `tmpfs` Create a tmpfs with the given options. The mount source cannot be specified for tmpfs. - `npipe` Mounts a named pipe from the host into the container. Must exist prior to creating the container. type: "string" enum: - "bind" - "volume" - "tmpfs" - "npipe" ReadOnly: description: "Whether the mount should be read-only." type: "boolean" Consistency: description: "The consistency requirement for the mount: `default`, `consistent`, `cached`, or `delegated`." type: "string" BindOptions: description: "Optional configuration for the `bind` type." type: "object" properties: Propagation: description: "A propagation mode with the value `[r]private`, `[r]shared`, or `[r]slave`." type: "string" enum: - "private" - "rprivate" - "shared" - "rshared" - "slave" - "rslave" NonRecursive: description: "Disable recursive bind mount." type: "boolean" default: false VolumeOptions: description: "Optional configuration for the `volume` type." 
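# Usage sketch for the DeviceRequest definition above, as it would appear in a
# container's HostConfig (not part of the definition; a Count of -1 is
# commonly used to request all available devices of a driver, and the
# capability list here is made up):
#
#   "DeviceRequests": [
#     {
#       "Driver": "nvidia",
#       "Count": -1,
#       "Capabilities": [["gpu", "compute"]]
#     }
#   ]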
type: "object" properties: NoCopy: description: "Populate volume with data from the target." type: "boolean" default: false Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" DriverConfig: description: "Map of driver specific options" type: "object" properties: Name: description: "Name of the driver to use to create the volume." type: "string" Options: description: "key/value map of driver specific options." type: "object" additionalProperties: type: "string" TmpfsOptions: description: "Optional configuration for the `tmpfs` type." type: "object" properties: SizeBytes: description: "The size for the tmpfs mount in bytes." type: "integer" format: "int64" Mode: description: "The permission mode for the tmpfs mount in an integer." type: "integer" RestartPolicy: description: | The behavior to apply when the container exits. The default is not to restart. An ever increasing delay (double the previous delay, starting at 100ms) is added before each restart to prevent flooding the server. type: "object" properties: Name: type: "string" description: | - Empty string means not to restart - `no` Do not automatically restart - `always` Always restart - `unless-stopped` Restart always except when the user has manually stopped the container - `on-failure` Restart only when the container exit code is non-zero enum: - "" - "no" - "always" - "unless-stopped" - "on-failure" MaximumRetryCount: type: "integer" description: | If `on-failure` is used, the number of times to retry before giving up. Resources: description: "A container's resources (cgroups config, ulimits, etc)" type: "object" properties: # Applicable to all platforms CpuShares: description: | An integer value representing this container's relative CPU weight versus other containers. type: "integer" Memory: description: "Memory limit in bytes." type: "integer" format: "int64" default: 0 # Applicable to UNIX platforms CgroupParent: description: | Path to `cgroups` under which the container's `cgroup` is created. If the path is not absolute, the path is considered to be relative to the `cgroups` path of the init process. Cgroups are created if they do not already exist. type: "string" BlkioWeight: description: "Block IO weight (relative weight)." type: "integer" minimum: 0 maximum: 1000 BlkioWeightDevice: description: | Block IO weight (relative device weight) in the form: ``` [{"Path": "device_path", "Weight": weight}] ``` type: "array" items: type: "object" properties: Path: type: "string" Weight: type: "integer" minimum: 0 BlkioDeviceReadBps: description: | Limit read rate (bytes per second) from a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" BlkioDeviceWriteBps: description: | Limit write rate (bytes per second) to a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" BlkioDeviceReadIOps: description: | Limit read rate (IO per second) from a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" BlkioDeviceWriteIOps: description: | Limit write rate (IO per second) to a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" CpuPeriod: description: "The length of a CPU period in microseconds." 
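# Usage sketch for the RestartPolicy definition above, as it would appear in a
# container's HostConfig (not part of the definition; the retry count is made
# up):
#
#   "RestartPolicy": {"Name": "on-failure", "MaximumRetryCount": 3}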
type: "integer" format: "int64" CpuQuota: description: | Microseconds of CPU time that the container can get in a CPU period. type: "integer" format: "int64" CpuRealtimePeriod: description: | The length of a CPU real-time period in microseconds. Set to 0 to allocate no time allocated to real-time tasks. type: "integer" format: "int64" CpuRealtimeRuntime: description: | The length of a CPU real-time runtime in microseconds. Set to 0 to allocate no time allocated to real-time tasks. type: "integer" format: "int64" CpusetCpus: description: | CPUs in which to allow execution (e.g., `0-3`, `0,1`). type: "string" example: "0-3" CpusetMems: description: | Memory nodes (MEMs) in which to allow execution (0-3, 0,1). Only effective on NUMA systems. type: "string" Devices: description: "A list of devices to add to the container." type: "array" items: $ref: "#/definitions/DeviceMapping" DeviceCgroupRules: description: "a list of cgroup rules to apply to the container" type: "array" items: type: "string" example: "c 13:* rwm" DeviceRequests: description: | A list of requests for devices to be sent to device drivers. type: "array" items: $ref: "#/definitions/DeviceRequest" KernelMemory: description: | Kernel memory limit in bytes. <p><br /></p> > **Deprecated**: This field is deprecated as the kernel 5.4 deprecated > `kmem.limit_in_bytes`. type: "integer" format: "int64" example: 209715200 KernelMemoryTCP: description: "Hard limit for kernel TCP buffer memory (in bytes)." type: "integer" format: "int64" MemoryReservation: description: "Memory soft limit in bytes." type: "integer" format: "int64" MemorySwap: description: | Total memory limit (memory + swap). Set as `-1` to enable unlimited swap. type: "integer" format: "int64" MemorySwappiness: description: | Tune a container's memory swappiness behavior. Accepts an integer between 0 and 100. type: "integer" format: "int64" minimum: 0 maximum: 100 NanoCpus: description: "CPU quota in units of 10<sup>-9</sup> CPUs." type: "integer" format: "int64" OomKillDisable: description: "Disable OOM Killer for the container." type: "boolean" Init: description: | Run an init inside the container that forwards signals and reaps processes. This field is omitted if empty, and the default (as configured on the daemon) is used. type: "boolean" x-nullable: true PidsLimit: description: | Tune a container's PIDs limit. Set `0` or `-1` for unlimited, or `null` to not change. type: "integer" format: "int64" x-nullable: true Ulimits: description: | A list of resource limits to set in the container. For example: ``` {"Name": "nofile", "Soft": 1024, "Hard": 2048} ``` type: "array" items: type: "object" properties: Name: description: "Name of ulimit" type: "string" Soft: description: "Soft limit" type: "integer" Hard: description: "Hard limit" type: "integer" # Applicable to Windows CpuCount: description: | The number of usable CPUs (Windows only). On Windows Server containers, the processor resource controls are mutually exclusive. The order of precedence is `CPUCount` first, then `CPUShares`, and `CPUPercent` last. type: "integer" format: "int64" CpuPercent: description: | The usable percentage of the available CPUs (Windows only). On Windows Server containers, the processor resource controls are mutually exclusive. The order of precedence is `CPUCount` first, then `CPUShares`, and `CPUPercent` last. 
type: "integer" format: "int64" IOMaximumIOps: description: "Maximum IOps for the container system drive (Windows only)" type: "integer" format: "int64" IOMaximumBandwidth: description: | Maximum IO in bytes per second for the container system drive (Windows only). type: "integer" format: "int64" Limit: description: | An object describing a limit on resources which can be requested by a task. type: "object" properties: NanoCPUs: type: "integer" format: "int64" example: 4000000000 MemoryBytes: type: "integer" format: "int64" example: 8272408576 Pids: description: | Limits the maximum number of PIDs in the container. Set `0` for unlimited. type: "integer" format: "int64" default: 0 example: 100 ResourceObject: description: | An object describing the resources which can be advertised by a node and requested by a task. type: "object" properties: NanoCPUs: type: "integer" format: "int64" example: 4000000000 MemoryBytes: type: "integer" format: "int64" example: 8272408576 GenericResources: $ref: "#/definitions/GenericResources" GenericResources: description: | User-defined resources can be either Integer resources (e.g, `SSD=3`) or String resources (e.g, `GPU=UUID1`). type: "array" items: type: "object" properties: NamedResourceSpec: type: "object" properties: Kind: type: "string" Value: type: "string" DiscreteResourceSpec: type: "object" properties: Kind: type: "string" Value: type: "integer" format: "int64" example: - DiscreteResourceSpec: Kind: "SSD" Value: 3 - NamedResourceSpec: Kind: "GPU" Value: "UUID1" - NamedResourceSpec: Kind: "GPU" Value: "UUID2" HealthConfig: description: "A test to perform to check that the container is healthy." type: "object" properties: Test: description: | The test to perform. Possible values are: - `[]` inherit healthcheck from image or parent image - `["NONE"]` disable healthcheck - `["CMD", args...]` exec arguments directly - `["CMD-SHELL", command]` run command with system's default shell type: "array" items: type: "string" Interval: description: | The time to wait between checks in nanoseconds. It should be 0 or at least 1000000 (1 ms). 0 means inherit. type: "integer" Timeout: description: | The time to wait before considering the check to have hung. It should be 0 or at least 1000000 (1 ms). 0 means inherit. type: "integer" Retries: description: | The number of consecutive failures needed to consider a container as unhealthy. 0 means inherit. type: "integer" StartPeriod: description: | Start period for the container to initialize before starting health-retries countdown in nanoseconds. It should be 0 or at least 1000000 (1 ms). 0 means inherit. type: "integer" Health: description: | Health stores information about the container's healthcheck results. 
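# Usage sketch for the HealthConfig definition above, as it would appear in a
# container configuration (not part of the definition; the probe command is
# made up). Note that Interval, Timeout, and StartPeriod are nanoseconds, so
# 30000000000 is 30 seconds:
#
#   "Healthcheck": {
#     "Test": ["CMD-SHELL", "curl -f http://localhost/ || exit 1"],
#     "Interval": 30000000000,
#     "Timeout": 3000000000,
#     "Retries": 3
#   }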
type: "object" properties: Status: description: | Status is one of `none`, `starting`, `healthy` or `unhealthy` - "none" Indicates there is no healthcheck - "starting" Starting indicates that the container is not yet ready - "healthy" Healthy indicates that the container is running correctly - "unhealthy" Unhealthy indicates that the container has a problem type: "string" enum: - "none" - "starting" - "healthy" - "unhealthy" example: "healthy" FailingStreak: description: "FailingStreak is the number of consecutive failures" type: "integer" example: 0 Log: type: "array" description: | Log contains the last few results (oldest first) items: x-nullable: true $ref: "#/definitions/HealthcheckResult" HealthcheckResult: description: | HealthcheckResult stores information about a single run of a healthcheck probe type: "object" properties: Start: description: | Date and time at which this check started in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "date-time" example: "2020-01-04T10:44:24.496525531Z" End: description: | Date and time at which this check ended in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2020-01-04T10:45:21.364524523Z" ExitCode: description: | ExitCode meanings: - `0` healthy - `1` unhealthy - `2` reserved (considered unhealthy) - other values: error running probe type: "integer" example: 0 Output: description: "Output from last check" type: "string" HostConfig: description: "Container configuration that depends on the host we are running on" allOf: - $ref: "#/definitions/Resources" - type: "object" properties: # Applicable to all platforms Binds: type: "array" description: | A list of volume bindings for this container. Each volume binding is a string in one of these forms: - `host-src:container-dest[:options]` to bind-mount a host path into the container. Both `host-src`, and `container-dest` must be an _absolute_ path. - `volume-name:container-dest[:options]` to bind-mount a volume managed by a volume driver into the container. `container-dest` must be an _absolute_ path. `options` is an optional, comma-delimited list of: - `nocopy` disables automatic copying of data from the container path to the volume. The `nocopy` flag only applies to named volumes. - `[ro|rw]` mounts a volume read-only or read-write, respectively. If omitted or set to `rw`, volumes are mounted read-write. - `[z|Z]` applies SELinux labels to allow or deny multiple containers to read and write to the same volume. - `z`: a _shared_ content label is applied to the content. This label indicates that multiple containers can share the volume content, for both reading and writing. - `Z`: a _private unshared_ label is applied to the content. This label indicates that only the current container can use a private volume. Labeling systems such as SELinux require proper labels to be placed on volume content that is mounted into a container. Without a label, the security system can prevent a container's processes from using the content. By default, the labels set by the host operating system are not modified. - `[[r]shared|[r]slave|[r]private]` specifies mount [propagation behavior](https://www.kernel.org/doc/Documentation/filesystems/sharedsubtree.txt). This only applies to bind-mounted volumes, not internal volumes or named volumes. 
Mount propagation requires the source mount point (the location where the source directory is mounted in the host operating system) to have the correct propagation properties. For shared volumes, the source mount point must be set to `shared`. For slave volumes, the mount must be set to either `shared` or `slave`. items: type: "string" ContainerIDFile: type: "string" description: "Path to a file where the container ID is written" LogConfig: type: "object" description: "The logging configuration for this container" properties: Type: type: "string" enum: - "json-file" - "syslog" - "journald" - "gelf" - "fluentd" - "awslogs" - "splunk" - "etwlogs" - "none" Config: type: "object" additionalProperties: type: "string" NetworkMode: type: "string" description: | Network mode to use for this container. Supported standard values are: `bridge`, `host`, `none`, and `container:<name|id>`. Any other value is taken as a custom network's name to which this container should connect to. PortBindings: $ref: "#/definitions/PortMap" RestartPolicy: $ref: "#/definitions/RestartPolicy" AutoRemove: type: "boolean" description: | Automatically remove the container when the container's process exits. This has no effect if `RestartPolicy` is set. VolumeDriver: type: "string" description: "Driver that this container uses to mount volumes." VolumesFrom: type: "array" description: | A list of volumes to inherit from another container, specified in the form `<container name>[:<ro|rw>]`. items: type: "string" Mounts: description: | Specification for mounts to be added to the container. type: "array" items: $ref: "#/definitions/Mount" # Applicable to UNIX platforms CapAdd: type: "array" description: | A list of kernel capabilities to add to the container. Conflicts with option 'Capabilities'. items: type: "string" CapDrop: type: "array" description: | A list of kernel capabilities to drop from the container. Conflicts with option 'Capabilities'. items: type: "string" CgroupnsMode: type: "string" enum: - "private" - "host" description: | cgroup namespace mode for the container. Possible values are: - `"private"`: the container runs in its own private cgroup namespace - `"host"`: use the host system's cgroup namespace If not specified, the daemon default is used, which can either be `"private"` or `"host"`, depending on daemon version, kernel support and configuration. Dns: type: "array" description: "A list of DNS servers for the container to use." items: type: "string" DnsOptions: type: "array" description: "A list of DNS options." items: type: "string" DnsSearch: type: "array" description: "A list of DNS search domains." items: type: "string" ExtraHosts: type: "array" description: | A list of hostnames/IP mappings to add to the container's `/etc/hosts` file. Specified in the form `["hostname:IP"]`. items: type: "string" GroupAdd: type: "array" description: | A list of additional groups that the container process will run as. items: type: "string" IpcMode: type: "string" description: | IPC sharing mode for the container. Possible values are: - `"none"`: own private IPC namespace, with /dev/shm not mounted - `"private"`: own private IPC namespace - `"shareable"`: own private IPC namespace, with a possibility to share it with other containers - `"container:<name|id>"`: join another (shareable) container's IPC namespace - `"host"`: use the host system's IPC namespace If not specified, daemon default is used, which can either be `"private"` or `"shareable"`, depending on daemon version and configuration. 
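# Usage sketch for the HostConfig.Binds syntax described above (not part of
# the definition; the paths and volume name are made up). The first entry is a
# read-only host bind, the second a named volume mounted read-write:
#
#   "Binds": ["/srv/app/config:/etc/app:ro", "app-data:/var/lib/app"]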
Cgroup: type: "string" description: "Cgroup to use for the container." Links: type: "array" description: | A list of links for the container in the form `container_name:alias`. items: type: "string" OomScoreAdj: type: "integer" description: | An integer value containing the score given to the container in order to tune OOM killer preferences. example: 500 PidMode: type: "string" description: | Set the PID (Process) Namespace mode for the container. It can be either: - `"container:<name|id>"`: joins another container's PID namespace - `"host"`: use the host's PID namespace inside the container Privileged: type: "boolean" description: "Gives the container full access to the host." PublishAllPorts: type: "boolean" description: | Allocates an ephemeral host port for all of a container's exposed ports. Ports are de-allocated when the container stops and allocated when the container starts. The allocated port might be changed when restarting the container. The port is selected from the ephemeral port range that depends on the kernel. For example, on Linux the range is defined by `/proc/sys/net/ipv4/ip_local_port_range`. ReadonlyRootfs: type: "boolean" description: "Mount the container's root filesystem as read only." SecurityOpt: type: "array" description: "A list of string values to customize labels for MLS systems, such as SELinux." items: type: "string" StorageOpt: type: "object" description: | Storage driver options for this container, in the form `{"size": "120G"}`. additionalProperties: type: "string" Tmpfs: type: "object" description: | A map of container directories which should be replaced by tmpfs mounts, and their corresponding mount options. For example: ``` { "/run": "rw,noexec,nosuid,size=65536k" } ``` additionalProperties: type: "string" UTSMode: type: "string" description: "UTS namespace to use for the container." UsernsMode: type: "string" description: | Sets the usernamespace mode for the container when usernamespace remapping option is enabled. ShmSize: type: "integer" description: | Size of `/dev/shm` in bytes. If omitted, the system uses 64MB. minimum: 0 Sysctls: type: "object" description: | A list of kernel parameters (sysctls) to set in the container. For example: ``` {"net.ipv4.ip_forward": "1"} ``` additionalProperties: type: "string" Runtime: type: "string" description: "Runtime to use with this container." # Applicable to Windows ConsoleSize: type: "array" description: | Initial console size, as an `[height, width]` array. (Windows only) minItems: 2 maxItems: 2 items: type: "integer" minimum: 0 Isolation: type: "string" description: | Isolation technology of the container. (Windows only) enum: - "default" - "process" - "hyperv" MaskedPaths: type: "array" description: | The list of paths to be masked inside the container (this overrides the default set of paths). items: type: "string" ReadonlyPaths: type: "array" description: | The list of paths to be set as read-only inside the container (this overrides the default set of paths). items: type: "string" ContainerConfig: description: "Configuration for a container that is portable between hosts" type: "object" properties: Hostname: description: "The hostname to use for the container, as a valid RFC 1123 hostname." type: "string" Domainname: description: "The domain name to use for the container." type: "string" User: description: "The user that commands are run as inside the container." type: "string" AttachStdin: description: "Whether to attach to `stdin`." 
type: "boolean" default: false AttachStdout: description: "Whether to attach to `stdout`." type: "boolean" default: true AttachStderr: description: "Whether to attach to `stderr`." type: "boolean" default: true ExposedPorts: description: | An object mapping ports to an empty object in the form: `{"<port>/<tcp|udp|sctp>": {}}` type: "object" additionalProperties: type: "object" enum: - {} default: {} Tty: description: | Attach standard streams to a TTY, including `stdin` if it is not closed. type: "boolean" default: false OpenStdin: description: "Open `stdin`" type: "boolean" default: false StdinOnce: description: "Close `stdin` after one attached client disconnects" type: "boolean" default: false Env: description: | A list of environment variables to set inside the container in the form `["VAR=value", ...]`. A variable without `=` is removed from the environment, rather than to have an empty value. type: "array" items: type: "string" Cmd: description: | Command to run specified as a string or an array of strings. type: "array" items: type: "string" Healthcheck: $ref: "#/definitions/HealthConfig" ArgsEscaped: description: "Command is already escaped (Windows only)" type: "boolean" Image: description: | The name of the image to use when creating the container/ type: "string" Volumes: description: | An object mapping mount point paths inside the container to empty objects. type: "object" additionalProperties: type: "object" enum: - {} default: {} WorkingDir: description: "The working directory for commands to run in." type: "string" Entrypoint: description: | The entry point for the container as a string or an array of strings. If the array consists of exactly one empty string (`[""]`) then the entry point is reset to system default (i.e., the entry point used by docker when there is no `ENTRYPOINT` instruction in the `Dockerfile`). type: "array" items: type: "string" NetworkDisabled: description: "Disable networking for the container." type: "boolean" MacAddress: description: "MAC address of the container." type: "string" OnBuild: description: | `ONBUILD` metadata that were defined in the image's `Dockerfile`. type: "array" items: type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" StopSignal: description: | Signal to stop a container as a string or unsigned integer. type: "string" default: "SIGTERM" StopTimeout: description: "Timeout to stop a container in seconds." type: "integer" default: 10 Shell: description: | Shell for when `RUN`, `CMD`, and `ENTRYPOINT` uses a shell. type: "array" items: type: "string" NetworkingConfig: description: | NetworkingConfig represents the container's networking configuration for each of its interfaces. It is used for the networking configs specified in the `docker create` and `docker network connect` commands. type: "object" properties: EndpointsConfig: description: | A mapping of network name to endpoint configuration for that network. type: "object" additionalProperties: $ref: "#/definitions/EndpointSettings" example: # putting an example here, instead of using the example values from # /definitions/EndpointSettings, because containers/create currently # does not support attaching to multiple networks, so the example request # would be confusing if it showed that multiple networks can be contained # in the EndpointsConfig. 
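# Usage sketch combining ContainerConfig, HostConfig, and NetworkingConfig from
# the definitions above in a single create request (not part of the
# definition; image, port, network name, and socket path are made up):
#
#   curl -sS --unix-socket /var/run/docker.sock \
#        -H "Content-Type: application/json" \
#        -d '{"Image": "nginx:alpine",
#             "ExposedPorts": {"80/tcp": {}},
#             "HostConfig": {"PortBindings": {"80/tcp": [{"HostPort": "8080"}]}},
#             "NetworkingConfig": {"EndpointsConfig": {"isolated_nw": {}}}}' \
#        "http://localhost/v1.42/containers/create?name=web"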
# TODO remove once we support multiple networks on container create (see https://github.com/moby/moby/blob/07e6b843594e061f82baa5fa23c2ff7d536c2a05/daemon/create.go#L323) EndpointsConfig: isolated_nw: IPAMConfig: IPv4Address: "172.20.30.33" IPv6Address: "2001:db8:abcd::3033" LinkLocalIPs: - "169.254.34.68" - "fe80::3468" Links: - "container_1" - "container_2" Aliases: - "server_x" - "server_y" NetworkSettings: description: "NetworkSettings exposes the network settings in the API" type: "object" properties: Bridge: description: Name of the network's bridge (for example, `docker0`). type: "string" example: "docker0" SandboxID: description: SandboxID uniquely represents a container's network stack. type: "string" example: "9d12daf2c33f5959c8bf90aa513e4f65b561738661003029ec84830cd503a0c3" HairpinMode: description: | Indicates if hairpin NAT should be enabled on the virtual interface. type: "boolean" example: false LinkLocalIPv6Address: description: IPv6 unicast address using the link-local prefix. type: "string" example: "fe80::42:acff:fe11:1" LinkLocalIPv6PrefixLen: description: Prefix length of the IPv6 unicast address. type: "integer" example: "64" Ports: $ref: "#/definitions/PortMap" SandboxKey: description: SandboxKey identifies the sandbox type: "string" example: "/var/run/docker/netns/8ab54b426c38" # TODO is SecondaryIPAddresses actually used? SecondaryIPAddresses: description: "" type: "array" items: $ref: "#/definitions/Address" x-nullable: true # TODO is SecondaryIPv6Addresses actually used? SecondaryIPv6Addresses: description: "" type: "array" items: $ref: "#/definitions/Address" x-nullable: true # TODO properties below are part of DefaultNetworkSettings, which is # marked as deprecated since Docker 1.9 and to be removed in Docker v17.12 EndpointID: description: | EndpointID uniquely represents a service endpoint in a Sandbox. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "b88f5b905aabf2893f3cbc4ee42d1ea7980bbc0a92e2c8922b1e1795298afb0b" Gateway: description: | Gateway address for the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "172.17.0.1" GlobalIPv6Address: description: | Global IPv6 address for the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "2001:db8::5689" GlobalIPv6PrefixLen: description: | Mask length of the global IPv6 address. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. 
This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "integer" example: 64 IPAddress: description: | IPv4 address for the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "172.17.0.4" IPPrefixLen: description: | Mask length of the IPv4 address. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "integer" example: 16 IPv6Gateway: description: | IPv6 gateway address for this network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "2001:db8:2::100" MacAddress: description: | MAC address for the container on the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "02:42:ac:11:00:04" Networks: description: | Information about all networks that the container is connected to. type: "object" additionalProperties: $ref: "#/definitions/EndpointSettings" Address: description: Address represents an IPv4 or IPv6 IP address. type: "object" properties: Addr: description: IP address. type: "string" PrefixLen: description: Mask length of the IP address. type: "integer" PortMap: description: | PortMap describes the mapping of container ports to host ports, using the container's port-number and protocol as key in the format `<port>/<protocol>`, for example, `80/udp`. If a container's port is mapped for multiple protocols, separate entries are added to the mapping table. type: "object" additionalProperties: type: "array" x-nullable: true items: $ref: "#/definitions/PortBinding" example: "443/tcp": - HostIp: "127.0.0.1" HostPort: "4443" "80/tcp": - HostIp: "0.0.0.0" HostPort: "80" - HostIp: "0.0.0.0" HostPort: "8080" "80/udp": - HostIp: "0.0.0.0" HostPort: "80" "53/udp": - HostIp: "0.0.0.0" HostPort: "53" "2377/tcp": null PortBinding: description: | PortBinding represents a binding between a host IP address and a host port. type: "object" properties: HostIp: description: "Host IP address that the container's port is mapped to." type: "string" example: "127.0.0.1" HostPort: description: "Host port number that the container's port is mapped to." type: "string" example: "4443" GraphDriverData: description: "Information about a container's graph driver." 
type: "object" required: [Name, Data] properties: Name: type: "string" x-nullable: false Data: type: "object" x-nullable: false additionalProperties: type: "string" Image: type: "object" required: - Id - Parent - Comment - Created - Container - DockerVersion - Author - Architecture - Os - Size - VirtualSize - GraphDriver - RootFS properties: Id: type: "string" x-nullable: false RepoTags: type: "array" items: type: "string" RepoDigests: type: "array" items: type: "string" Parent: type: "string" x-nullable: false Comment: type: "string" x-nullable: false Created: type: "string" x-nullable: false Container: type: "string" x-nullable: false ContainerConfig: $ref: "#/definitions/ContainerConfig" DockerVersion: type: "string" x-nullable: false Author: type: "string" x-nullable: false Config: $ref: "#/definitions/ContainerConfig" Architecture: type: "string" x-nullable: false Os: type: "string" x-nullable: false OsVersion: type: "string" Size: type: "integer" format: "int64" x-nullable: false VirtualSize: type: "integer" format: "int64" x-nullable: false GraphDriver: $ref: "#/definitions/GraphDriverData" RootFS: type: "object" required: [Type] properties: Type: type: "string" x-nullable: false Layers: type: "array" items: type: "string" BaseLayer: type: "string" Metadata: type: "object" properties: LastTagTime: type: "string" format: "dateTime" ImageSummary: type: "object" required: - Id - ParentId - RepoTags - RepoDigests - Created - Size - SharedSize - VirtualSize - Labels - Containers properties: Id: type: "string" x-nullable: false ParentId: type: "string" x-nullable: false RepoTags: type: "array" x-nullable: false items: type: "string" RepoDigests: type: "array" x-nullable: false items: type: "string" Created: type: "integer" x-nullable: false Size: type: "integer" x-nullable: false SharedSize: type: "integer" x-nullable: false VirtualSize: type: "integer" x-nullable: false Labels: type: "object" x-nullable: false additionalProperties: type: "string" Containers: x-nullable: false type: "integer" AuthConfig: type: "object" properties: username: type: "string" password: type: "string" email: type: "string" serveraddress: type: "string" example: username: "hannibal" password: "xxxx" serveraddress: "https://index.docker.io/v1/" ProcessConfig: type: "object" properties: privileged: type: "boolean" user: type: "string" tty: type: "boolean" entrypoint: type: "string" arguments: type: "array" items: type: "string" Volume: type: "object" required: [Name, Driver, Mountpoint, Labels, Scope, Options] properties: Name: type: "string" description: "Name of the volume." x-nullable: false Driver: type: "string" description: "Name of the volume driver used by the volume." x-nullable: false Mountpoint: type: "string" description: "Mount path of the volume on the host." x-nullable: false CreatedAt: type: "string" format: "dateTime" description: "Date/Time the volume was created." Status: type: "object" description: | Low-level details about the volume, provided by the volume driver. Details are returned as a map with key/value pairs: `{"key":"value","key2":"value2"}`. The `Status` field is optional, and is omitted if the volume driver does not support this feature. additionalProperties: type: "object" Labels: type: "object" description: "User-defined key/value metadata." x-nullable: false additionalProperties: type: "string" Scope: type: "string" description: | The level at which the volume exists. Either `global` for cluster-wide, or `local` for machine level. 
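# Usage sketch for the AuthConfig definition above: the same structure is sent
# as the X-Registry-Auth header described in the introduction, encoded as
# base64url JSON (not part of the definition; credentials are the example
# values above, and the plain `base64 -w0` call is only an approximation of a
# URL-safe encoder):
#
#   AUTH=$(printf '%s' '{"username":"hannibal","password":"xxxx","serveraddress":"https://index.docker.io/v1/"}' | base64 -w0)
#   curl -sS --unix-socket /var/run/docker.sock \
#        -H "X-Registry-Auth: $AUTH" \
#        -X POST "http://localhost/v1.42/images/create?fromImage=hello-world&tag=latest"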
default: "local" x-nullable: false enum: ["local", "global"] Options: type: "object" description: | The driver specific options used when creating the volume. additionalProperties: type: "string" UsageData: type: "object" x-nullable: true required: [Size, RefCount] description: | Usage details about the volume. This information is used by the `GET /system/df` endpoint, and omitted in other endpoints. properties: Size: type: "integer" default: -1 description: | Amount of disk space used by the volume (in bytes). This information is only available for volumes created with the `"local"` volume driver. For volumes created with other volume drivers, this field is set to `-1` ("not available") x-nullable: false RefCount: type: "integer" default: -1 description: | The number of containers referencing this volume. This field is set to `-1` if the reference-count is not available. x-nullable: false example: Name: "tardis" Driver: "custom" Mountpoint: "/var/lib/docker/volumes/tardis" Status: hello: "world" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Scope: "local" CreatedAt: "2016-06-07T20:31:11.853781916Z" Network: type: "object" properties: Name: type: "string" Id: type: "string" Created: type: "string" format: "dateTime" Scope: type: "string" Driver: type: "string" EnableIPv6: type: "boolean" IPAM: $ref: "#/definitions/IPAM" Internal: type: "boolean" Attachable: type: "boolean" Ingress: type: "boolean" Containers: type: "object" additionalProperties: $ref: "#/definitions/NetworkContainer" Options: type: "object" additionalProperties: type: "string" Labels: type: "object" additionalProperties: type: "string" example: Name: "net01" Id: "7d86d31b1478e7cca9ebed7e73aa0fdeec46c5ca29497431d3007d2d9e15ed99" Created: "2016-10-19T04:33:30.360899459Z" Scope: "local" Driver: "bridge" EnableIPv6: false IPAM: Driver: "default" Config: - Subnet: "172.19.0.0/16" Gateway: "172.19.0.1" Options: foo: "bar" Internal: false Attachable: false Ingress: false Containers: 19a4d5d687db25203351ed79d478946f861258f018fe384f229f2efa4b23513c: Name: "test" EndpointID: "628cadb8bcb92de107b2a1e516cbffe463e321f548feb37697cce00ad694f21a" MacAddress: "02:42:ac:13:00:02" IPv4Address: "172.19.0.2/16" IPv6Address: "" Options: com.docker.network.bridge.default_bridge: "true" com.docker.network.bridge.enable_icc: "true" com.docker.network.bridge.enable_ip_masquerade: "true" com.docker.network.bridge.host_binding_ipv4: "0.0.0.0" com.docker.network.bridge.name: "docker0" com.docker.network.driver.mtu: "1500" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" IPAM: type: "object" properties: Driver: description: "Name of the IPAM driver to use." type: "string" default: "default" Config: description: | List of IPAM configuration options, specified as a map: ``` {"Subnet": <CIDR>, "IPRange": <CIDR>, "Gateway": <IP address>, "AuxAddress": <device_name:IP address>} ``` type: "array" items: type: "object" additionalProperties: type: "string" Options: description: "Driver-specific options, specified as a map." 
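# Usage sketch for the Network and IPAM definitions above (not part of the
# definition; the network name, subnet, and gateway are made up):
#
#   curl -sS --unix-socket /var/run/docker.sock \
#        -H "Content-Type: application/json" \
#        -d '{"Name": "isolated_nw",
#             "Driver": "bridge",
#             "IPAM": {"Driver": "default",
#                      "Config": [{"Subnet": "172.20.0.0/16", "Gateway": "172.20.0.1"}]}}' \
#        http://localhost/v1.42/networks/create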
type: "object" additionalProperties: type: "string" NetworkContainer: type: "object" properties: Name: type: "string" EndpointID: type: "string" MacAddress: type: "string" IPv4Address: type: "string" IPv6Address: type: "string" BuildInfo: type: "object" properties: id: type: "string" stream: type: "string" error: type: "string" errorDetail: $ref: "#/definitions/ErrorDetail" status: type: "string" progress: type: "string" progressDetail: $ref: "#/definitions/ProgressDetail" aux: $ref: "#/definitions/ImageID" BuildCache: type: "object" properties: ID: type: "string" Parent: type: "string" Type: type: "string" Description: type: "string" InUse: type: "boolean" Shared: type: "boolean" Size: description: | Amount of disk space used by the build cache (in bytes). type: "integer" CreatedAt: description: | Date and time at which the build cache was created in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2016-08-18T10:44:24.496525531Z" LastUsedAt: description: | Date and time at which the build cache was last used in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" x-nullable: true example: "2017-08-09T07:09:37.632105588Z" UsageCount: type: "integer" ImageID: type: "object" description: "Image ID or Digest" properties: ID: type: "string" example: ID: "sha256:85f05633ddc1c50679be2b16a0479ab6f7637f8884e0cfe0f4d20e1ebb3d6e7c" CreateImageInfo: type: "object" properties: id: type: "string" error: type: "string" status: type: "string" progress: type: "string" progressDetail: $ref: "#/definitions/ProgressDetail" PushImageInfo: type: "object" properties: error: type: "string" status: type: "string" progress: type: "string" progressDetail: $ref: "#/definitions/ProgressDetail" ErrorDetail: type: "object" properties: code: type: "integer" message: type: "string" ProgressDetail: type: "object" properties: current: type: "integer" total: type: "integer" ErrorResponse: description: "Represents an error." type: "object" required: ["message"] properties: message: description: "The error message." type: "string" x-nullable: false example: message: "Something went wrong." IdResponse: description: "Response to an API call that returns just an Id" type: "object" required: ["Id"] properties: Id: description: "The id of the newly created object." type: "string" x-nullable: false EndpointSettings: description: "Configuration for a network endpoint." type: "object" properties: # Configurations IPAMConfig: $ref: "#/definitions/EndpointIPAMConfig" Links: type: "array" items: type: "string" example: - "container_1" - "container_2" Aliases: type: "array" items: type: "string" example: - "server_x" - "server_y" # Operational data NetworkID: description: | Unique ID of the network. type: "string" example: "08754567f1f40222263eab4102e1c733ae697e8e354aa9cd6e18d7402835292a" EndpointID: description: | Unique ID for the service endpoint in a Sandbox. type: "string" example: "b88f5b905aabf2893f3cbc4ee42d1ea7980bbc0a92e2c8922b1e1795298afb0b" Gateway: description: | Gateway address for this network. type: "string" example: "172.17.0.1" IPAddress: description: | IPv4 address. type: "string" example: "172.17.0.4" IPPrefixLen: description: | Mask length of the IPv4 address. type: "integer" example: 16 IPv6Gateway: description: | IPv6 gateway address. type: "string" example: "2001:db8:2::100" GlobalIPv6Address: description: | Global IPv6 address. 
type: "string" example: "2001:db8::5689" GlobalIPv6PrefixLen: description: | Mask length of the global IPv6 address. type: "integer" format: "int64" example: 64 MacAddress: description: | MAC address for the endpoint on this network. type: "string" example: "02:42:ac:11:00:04" DriverOpts: description: | DriverOpts is a mapping of driver options and values. These options are passed directly to the driver and are driver specific. type: "object" x-nullable: true additionalProperties: type: "string" example: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" EndpointIPAMConfig: description: | EndpointIPAMConfig represents an endpoint's IPAM configuration. type: "object" x-nullable: true properties: IPv4Address: type: "string" example: "172.20.30.33" IPv6Address: type: "string" example: "2001:db8:abcd::3033" LinkLocalIPs: type: "array" items: type: "string" example: - "169.254.34.68" - "fe80::3468" PluginMount: type: "object" x-nullable: false required: [Name, Description, Settable, Source, Destination, Type, Options] properties: Name: type: "string" x-nullable: false example: "some-mount" Description: type: "string" x-nullable: false example: "This is a mount that's used by the plugin." Settable: type: "array" items: type: "string" Source: type: "string" example: "/var/lib/docker/plugins/" Destination: type: "string" x-nullable: false example: "/mnt/state" Type: type: "string" x-nullable: false example: "bind" Options: type: "array" items: type: "string" example: - "rbind" - "rw" PluginDevice: type: "object" required: [Name, Description, Settable, Path] x-nullable: false properties: Name: type: "string" x-nullable: false Description: type: "string" x-nullable: false Settable: type: "array" items: type: "string" Path: type: "string" example: "/dev/fuse" PluginEnv: type: "object" x-nullable: false required: [Name, Description, Settable, Value] properties: Name: x-nullable: false type: "string" Description: x-nullable: false type: "string" Settable: type: "array" items: type: "string" Value: type: "string" PluginInterfaceType: type: "object" x-nullable: false required: [Prefix, Capability, Version] properties: Prefix: type: "string" x-nullable: false Capability: type: "string" x-nullable: false Version: type: "string" x-nullable: false PluginPrivilegeItem: description: | Describes a permission the user has to accept upon installing the plugin. type: "object" properties: Name: type: "string" example: "network" Description: type: "string" Value: type: "array" items: type: "string" example: - "host" Plugin: description: "A plugin for the Engine API" type: "object" required: [Settings, Enabled, Config, Name] properties: Id: type: "string" example: "5724e2c8652da337ab2eedd19fc6fc0ec908e4bd907c7421bf6a8dfc70c4c078" Name: type: "string" x-nullable: false example: "tiborvass/sample-volume-plugin" Enabled: description: True if the plugin is running. False if the plugin is not running, only installed. type: "boolean" x-nullable: false example: true Settings: description: "Settings that can be modified by users." 
type: "object" x-nullable: false required: [Args, Devices, Env, Mounts] properties: Mounts: type: "array" items: $ref: "#/definitions/PluginMount" Env: type: "array" items: type: "string" example: - "DEBUG=0" Args: type: "array" items: type: "string" Devices: type: "array" items: $ref: "#/definitions/PluginDevice" PluginReference: description: "plugin remote reference used to push/pull the plugin" type: "string" x-nullable: false example: "localhost:5000/tiborvass/sample-volume-plugin:latest" Config: description: "The config of a plugin." type: "object" x-nullable: false required: - Description - Documentation - Interface - Entrypoint - WorkDir - Network - Linux - PidHost - PropagatedMount - IpcHost - Mounts - Env - Args properties: DockerVersion: description: "Docker Version used to create the plugin" type: "string" x-nullable: false example: "17.06.0-ce" Description: type: "string" x-nullable: false example: "A sample volume plugin for Docker" Documentation: type: "string" x-nullable: false example: "https://docs.docker.com/engine/extend/plugins/" Interface: description: "The interface between Docker and the plugin" x-nullable: false type: "object" required: [Types, Socket] properties: Types: type: "array" items: $ref: "#/definitions/PluginInterfaceType" example: - "docker.volumedriver/1.0" Socket: type: "string" x-nullable: false example: "plugins.sock" ProtocolScheme: type: "string" example: "some.protocol/v1.0" description: "Protocol to use for clients connecting to the plugin." enum: - "" - "moby.plugins.http/v1" Entrypoint: type: "array" items: type: "string" example: - "/usr/bin/sample-volume-plugin" - "/data" WorkDir: type: "string" x-nullable: false example: "/bin/" User: type: "object" x-nullable: false properties: UID: type: "integer" format: "uint32" example: 1000 GID: type: "integer" format: "uint32" example: 1000 Network: type: "object" x-nullable: false required: [Type] properties: Type: x-nullable: false type: "string" example: "host" Linux: type: "object" x-nullable: false required: [Capabilities, AllowAllDevices, Devices] properties: Capabilities: type: "array" items: type: "string" example: - "CAP_SYS_ADMIN" - "CAP_SYSLOG" AllowAllDevices: type: "boolean" x-nullable: false example: false Devices: type: "array" items: $ref: "#/definitions/PluginDevice" PropagatedMount: type: "string" x-nullable: false example: "/mnt/volumes" IpcHost: type: "boolean" x-nullable: false example: false PidHost: type: "boolean" x-nullable: false example: false Mounts: type: "array" items: $ref: "#/definitions/PluginMount" Env: type: "array" items: $ref: "#/definitions/PluginEnv" example: - Name: "DEBUG" Description: "If set, prints debug messages" Settable: null Value: "0" Args: type: "object" x-nullable: false required: [Name, Description, Settable, Value] properties: Name: x-nullable: false type: "string" example: "args" Description: x-nullable: false type: "string" example: "command line arguments" Settable: type: "array" items: type: "string" Value: type: "array" items: type: "string" rootfs: type: "object" properties: type: type: "string" example: "layers" diff_ids: type: "array" items: type: "string" example: - "sha256:675532206fbf3030b8458f88d6e26d4eb1577688a25efec97154c94e8b6b4887" - "sha256:e216a057b1cb1efc11f8a268f37ef62083e70b1b38323ba252e25ac88904a7e8" ObjectVersion: description: | The version number of the object such as node, service, etc. This is needed to avoid conflicting writes. 
The client must send the version number along with the modified specification when updating these objects. This approach ensures safe concurrency and determinism in that the change on the object may not be applied if the version number has changed from the last read. In other words, if two update requests specify the same base version, only one of the requests can succeed. As a result, two separate update requests that happen at the same time will not unintentionally overwrite each other. type: "object" properties: Index: type: "integer" format: "uint64" example: 373531 NodeSpec: type: "object" properties: Name: description: "Name for the node." type: "string" example: "my-node" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" Role: description: "Role of the node." type: "string" enum: - "worker" - "manager" example: "manager" Availability: description: "Availability of the node." type: "string" enum: - "active" - "pause" - "drain" example: "active" example: Availability: "active" Name: "node-name" Role: "manager" Labels: foo: "bar" Node: type: "object" properties: ID: type: "string" example: "24ifsmvkjbyhk" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: description: | Date and time at which the node was added to the swarm in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2016-08-18T10:44:24.496525531Z" UpdatedAt: description: | Date and time at which the node was last updated in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2017-08-09T07:09:37.632105588Z" Spec: $ref: "#/definitions/NodeSpec" Description: $ref: "#/definitions/NodeDescription" Status: $ref: "#/definitions/NodeStatus" ManagerStatus: $ref: "#/definitions/ManagerStatus" NodeDescription: description: | NodeDescription encapsulates the properties of the Node as reported by the agent. type: "object" properties: Hostname: type: "string" example: "bf3067039e47" Platform: $ref: "#/definitions/Platform" Resources: $ref: "#/definitions/ResourceObject" Engine: $ref: "#/definitions/EngineDescription" TLSInfo: $ref: "#/definitions/TLSInfo" Platform: description: | Platform represents the platform (Arch/OS). type: "object" properties: Architecture: description: | Architecture represents the hardware architecture (for example, `x86_64`). type: "string" example: "x86_64" OS: description: | OS represents the Operating System (for example, `linux` or `windows`). type: "string" example: "linux" EngineDescription: description: "EngineDescription provides information about an engine." 
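# Usage sketch for the ObjectVersion and NodeSpec definitions above: the
# version read from the node is echoed back as a query parameter so that
# conflicting writes are rejected (not part of the definition; the node ID and
# version reuse the example values above, and the spec body is made up):
#
#   curl -sS --unix-socket /var/run/docker.sock \
#        -H "Content-Type: application/json" \
#        -d '{"Role": "worker", "Availability": "drain"}' \
#        -X POST "http://localhost/v1.42/nodes/24ifsmvkjbyhk/update?version=373531"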
type: "object" properties: EngineVersion: type: "string" example: "17.06.0" Labels: type: "object" additionalProperties: type: "string" example: foo: "bar" Plugins: type: "array" items: type: "object" properties: Type: type: "string" Name: type: "string" example: - Type: "Log" Name: "awslogs" - Type: "Log" Name: "fluentd" - Type: "Log" Name: "gcplogs" - Type: "Log" Name: "gelf" - Type: "Log" Name: "journald" - Type: "Log" Name: "json-file" - Type: "Log" Name: "logentries" - Type: "Log" Name: "splunk" - Type: "Log" Name: "syslog" - Type: "Network" Name: "bridge" - Type: "Network" Name: "host" - Type: "Network" Name: "ipvlan" - Type: "Network" Name: "macvlan" - Type: "Network" Name: "null" - Type: "Network" Name: "overlay" - Type: "Volume" Name: "local" - Type: "Volume" Name: "localhost:5000/vieux/sshfs:latest" - Type: "Volume" Name: "vieux/sshfs:latest" TLSInfo: description: | Information about the issuer of leaf TLS certificates and the trusted root CA certificate. type: "object" properties: TrustRoot: description: | The root CA certificate(s) that are used to validate leaf TLS certificates. type: "string" CertIssuerSubject: description: The base64-url-safe-encoded raw subject bytes of the issuer. type: "string" CertIssuerPublicKey: description: | The base64-url-safe-encoded raw public key bytes of the issuer. type: "string" example: TrustRoot: | -----BEGIN CERTIFICATE----- MIIBajCCARCgAwIBAgIUbYqrLSOSQHoxD8CwG6Bi2PJi9c8wCgYIKoZIzj0EAwIw EzERMA8GA1UEAxMIc3dhcm0tY2EwHhcNMTcwNDI0MjE0MzAwWhcNMzcwNDE5MjE0 MzAwWjATMREwDwYDVQQDEwhzd2FybS1jYTBZMBMGByqGSM49AgEGCCqGSM49AwEH A0IABJk/VyMPYdaqDXJb/VXh5n/1Yuv7iNrxV3Qb3l06XD46seovcDWs3IZNV1lf 3Skyr0ofcchipoiHkXBODojJydSjQjBAMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMB Af8EBTADAQH/MB0GA1UdDgQWBBRUXxuRcnFjDfR/RIAUQab8ZV/n4jAKBggqhkjO PQQDAgNIADBFAiAy+JTe6Uc3KyLCMiqGl2GyWGQqQDEcO3/YG36x7om65AIhAJvz pxv6zFeVEkAEEkqIYi0omA9+CjanB/6Bz4n1uw8H -----END CERTIFICATE----- CertIssuerSubject: "MBMxETAPBgNVBAMTCHN3YXJtLWNh" CertIssuerPublicKey: "MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEmT9XIw9h1qoNclv9VeHmf/Vi6/uI2vFXdBveXTpcPjqx6i9wNazchk1XWV/dKTKvSh9xyGKmiIeRcE4OiMnJ1A==" NodeStatus: description: | NodeStatus represents the status of a node. It provides the current status of the node, as seen by the manager. type: "object" properties: State: $ref: "#/definitions/NodeState" Message: type: "string" example: "" Addr: description: "IP address of the node." type: "string" example: "172.17.0.2" NodeState: description: "NodeState represents the state of a node." type: "string" enum: - "unknown" - "down" - "ready" - "disconnected" example: "ready" ManagerStatus: description: | ManagerStatus represents the status of a manager. It provides the current status of a node's manager component, if the node is a manager. x-nullable: true type: "object" properties: Leader: type: "boolean" default: false example: true Reachability: $ref: "#/definitions/Reachability" Addr: description: | The IP address and port at which the manager is reachable. type: "string" example: "10.0.0.46:2377" Reachability: description: "Reachability represents the reachability of a node." type: "string" enum: - "unknown" - "unreachable" - "reachable" example: "reachable" SwarmSpec: description: "User modifiable swarm configuration." type: "object" properties: Name: description: "Name of the swarm." type: "string" example: "default" Labels: description: "User-defined key/value metadata." 
type: "object" additionalProperties: type: "string" example: com.example.corp.type: "production" com.example.corp.department: "engineering" Orchestration: description: "Orchestration configuration." type: "object" x-nullable: true properties: TaskHistoryRetentionLimit: description: | The number of historic tasks to keep per instance or node. If negative, never remove completed or failed tasks. type: "integer" format: "int64" example: 10 Raft: description: "Raft configuration." type: "object" properties: SnapshotInterval: description: "The number of log entries between snapshots." type: "integer" format: "uint64" example: 10000 KeepOldSnapshots: description: | The number of snapshots to keep beyond the current snapshot. type: "integer" format: "uint64" LogEntriesForSlowFollowers: description: | The number of log entries to keep around to sync up slow followers after a snapshot is created. type: "integer" format: "uint64" example: 500 ElectionTick: description: | The number of ticks that a follower will wait for a message from the leader before becoming a candidate and starting an election. `ElectionTick` must be greater than `HeartbeatTick`. A tick currently defaults to one second, so these translate directly to seconds currently, but this is NOT guaranteed. type: "integer" example: 3 HeartbeatTick: description: | The number of ticks between heartbeats. Every HeartbeatTick ticks, the leader will send a heartbeat to the followers. A tick currently defaults to one second, so these translate directly to seconds currently, but this is NOT guaranteed. type: "integer" example: 1 Dispatcher: description: "Dispatcher configuration." type: "object" x-nullable: true properties: HeartbeatPeriod: description: | The delay for an agent to send a heartbeat to the dispatcher. type: "integer" format: "int64" example: 5000000000 CAConfig: description: "CA configuration." type: "object" x-nullable: true properties: NodeCertExpiry: description: "The duration node certificates are issued for." type: "integer" format: "int64" example: 7776000000000000 ExternalCAs: description: | Configuration for forwarding signing requests to an external certificate authority. type: "array" items: type: "object" properties: Protocol: description: | Protocol for communication with the external CA (currently only `cfssl` is supported). type: "string" enum: - "cfssl" default: "cfssl" URL: description: | URL where certificate signing requests should be sent. type: "string" Options: description: | An object with key/value pairs that are interpreted as protocol-specific options for the external CA driver. type: "object" additionalProperties: type: "string" CACert: description: | The root CA certificate (in PEM format) this external CA uses to issue TLS certificates (assumed to be to the current swarm root CA certificate if not provided). type: "string" SigningCACert: description: | The desired signing CA certificate for all swarm node TLS leaf certificates, in PEM format. type: "string" SigningCAKey: description: | The desired signing CA key for all swarm node TLS leaf certificates, in PEM format. type: "string" ForceRotate: description: | An integer whose purpose is to force swarm to generate a new signing CA certificate and key, if none have been specified in `SigningCACert` and `SigningCAKey` format: "uint64" type: "integer" EncryptionConfig: description: "Parameters related to encryption-at-rest." type: "object" properties: AutoLockManagers: description: | If set, generate a key and use it to lock data stored on the managers. 
type: "boolean" example: false TaskDefaults: description: "Defaults for creating tasks in this cluster." type: "object" properties: LogDriver: description: | The log driver to use for tasks created in the orchestrator if unspecified by a service. Updating this value only affects new tasks. Existing tasks continue to use their previously configured log driver until recreated. type: "object" properties: Name: description: | The log driver to use as a default for new tasks. type: "string" example: "json-file" Options: description: | Driver-specific options for the selectd log driver, specified as key/value pairs. type: "object" additionalProperties: type: "string" example: "max-file": "10" "max-size": "100m" # The Swarm information for `GET /info`. It is the same as `GET /swarm`, but # without `JoinTokens`. ClusterInfo: description: | ClusterInfo represents information about the swarm as is returned by the "/info" endpoint. Join-tokens are not included. x-nullable: true type: "object" properties: ID: description: "The ID of the swarm." type: "string" example: "abajmipo7b4xz5ip2nrla6b11" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: description: | Date and time at which the swarm was initialised in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2016-08-18T10:44:24.496525531Z" UpdatedAt: description: | Date and time at which the swarm was last updated in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2017-08-09T07:09:37.632105588Z" Spec: $ref: "#/definitions/SwarmSpec" TLSInfo: $ref: "#/definitions/TLSInfo" RootRotationInProgress: description: | Whether there is currently a root CA rotation in progress for the swarm type: "boolean" example: false DataPathPort: description: | DataPathPort specifies the data path port number for data traffic. Acceptable port range is 1024 to 49151. If no port is set or is set to 0, the default port (4789) is used. type: "integer" format: "uint32" default: 4789 example: 4789 DefaultAddrPool: description: | Default Address Pool specifies default subnet pools for global scope networks. type: "array" items: type: "string" format: "CIDR" example: ["10.10.0.0/16", "20.20.0.0/16"] SubnetSize: description: | SubnetSize specifies the subnet size of the networks created from the default subnet pool. type: "integer" format: "uint32" maximum: 29 default: 24 example: 24 JoinTokens: description: | JoinTokens contains the tokens workers and managers need to join the swarm. type: "object" properties: Worker: description: | The token workers can use to join the swarm. type: "string" example: "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-1awxwuwd3z9j1z3puu7rcgdbx" Manager: description: | The token managers can use to join the swarm. type: "string" example: "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-7p73s1dx5in4tatdymyhg9hu2" Swarm: type: "object" allOf: - $ref: "#/definitions/ClusterInfo" - type: "object" properties: JoinTokens: $ref: "#/definitions/JoinTokens" TaskSpec: description: "User modifiable task configuration." type: "object" properties: PluginSpec: type: "object" description: | Plugin spec for the service. *(Experimental release only.)* <p><br /></p> > **Note**: ContainerSpec, NetworkAttachmentSpec, and PluginSpec are > mutually exclusive. PluginSpec is only used when the Runtime field > is set to `plugin`. NetworkAttachmentSpec is used when the Runtime > field is set to `attachment`. 
properties: Name: description: "The name or 'alias' to use for the plugin." type: "string" Remote: description: "The plugin image reference to use." type: "string" Disabled: description: "Disable the plugin once scheduled." type: "boolean" PluginPrivilege: type: "array" items: $ref: "#/definitions/PluginPrivilegeItem" ContainerSpec: type: "object" description: | Container spec for the service. <p><br /></p> > **Note**: ContainerSpec, NetworkAttachmentSpec, and PluginSpec are > mutually exclusive. PluginSpec is only used when the Runtime field > is set to `plugin`. NetworkAttachmentSpec is used when the Runtime > field is set to `attachment`. properties: Image: description: "The image name to use for the container" type: "string" Labels: description: "User-defined key/value data." type: "object" additionalProperties: type: "string" Command: description: "The command to be run in the image." type: "array" items: type: "string" Args: description: "Arguments to the command." type: "array" items: type: "string" Hostname: description: | The hostname to use for the container, as a valid [RFC 1123](https://tools.ietf.org/html/rfc1123) hostname. type: "string" Env: description: | A list of environment variables in the form `VAR=value`. type: "array" items: type: "string" Dir: description: "The working directory for commands to run in." type: "string" User: description: "The user inside the container." type: "string" Groups: type: "array" description: | A list of additional groups that the container process will run as. items: type: "string" Privileges: type: "object" description: "Security options for the container" properties: CredentialSpec: type: "object" description: "CredentialSpec for managed service account (Windows only)" properties: Config: type: "string" example: "0bt9dmxjvjiqermk6xrop3ekq" description: | Load credential spec from a Swarm Config with the given ID. The specified config must also be present in the Configs field with the Runtime property set. <p><br /></p> > **Note**: `CredentialSpec.File`, `CredentialSpec.Registry`, > and `CredentialSpec.Config` are mutually exclusive. File: type: "string" example: "spec.json" description: | Load credential spec from this file. The file is read by the daemon, and must be present in the `CredentialSpecs` subdirectory in the docker data directory, which defaults to `C:\ProgramData\Docker\` on Windows. For example, specifying `spec.json` loads `C:\ProgramData\Docker\CredentialSpecs\spec.json`. <p><br /></p> > **Note**: `CredentialSpec.File`, `CredentialSpec.Registry`, > and `CredentialSpec.Config` are mutually exclusive. Registry: type: "string" description: | Load credential spec from this value in the Windows registry. The specified registry value must be located in: `HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization\Containers\CredentialSpecs` <p><br /></p> > **Note**: `CredentialSpec.File`, `CredentialSpec.Registry`, > and `CredentialSpec.Config` are mutually exclusive. SELinuxContext: type: "object" description: "SELinux labels of the container" properties: Disable: type: "boolean" description: "Disable SELinux" User: type: "string" description: "SELinux user label" Role: type: "string" description: "SELinux role label" Type: type: "string" description: "SELinux type label" Level: type: "string" description: "SELinux level label" TTY: description: "Whether a pseudo-TTY should be allocated." 
type: "boolean" OpenStdin: description: "Open `stdin`" type: "boolean" ReadOnly: description: "Mount the container's root filesystem as read only." type: "boolean" Mounts: description: | Specification for mounts to be added to containers created as part of the service. type: "array" items: $ref: "#/definitions/Mount" StopSignal: description: "Signal to stop the container." type: "string" StopGracePeriod: description: | Amount of time to wait for the container to terminate before forcefully killing it. type: "integer" format: "int64" HealthCheck: $ref: "#/definitions/HealthConfig" Hosts: type: "array" description: | A list of hostname/IP mappings to add to the container's `hosts` file. The format of extra hosts is specified in the [hosts(5)](http://man7.org/linux/man-pages/man5/hosts.5.html) man page: IP_address canonical_hostname [aliases...] items: type: "string" DNSConfig: description: | Specification for DNS related configurations in resolver configuration file (`resolv.conf`). type: "object" properties: Nameservers: description: "The IP addresses of the name servers." type: "array" items: type: "string" Search: description: "A search list for host-name lookup." type: "array" items: type: "string" Options: description: | A list of internal resolver variables to be modified (e.g., `debug`, `ndots:3`, etc.). type: "array" items: type: "string" Secrets: description: | Secrets contains references to zero or more secrets that will be exposed to the service. type: "array" items: type: "object" properties: File: description: | File represents a specific target that is backed by a file. type: "object" properties: Name: description: | Name represents the final filename in the filesystem. type: "string" UID: description: "UID represents the file UID." type: "string" GID: description: "GID represents the file GID." type: "string" Mode: description: "Mode represents the FileMode of the file." type: "integer" format: "uint32" SecretID: description: | SecretID represents the ID of the specific secret that we're referencing. type: "string" SecretName: description: | SecretName is the name of the secret that this references, but this is just provided for lookup/display purposes. The secret in the reference will be identified by its ID. type: "string" Configs: description: | Configs contains references to zero or more configs that will be exposed to the service. type: "array" items: type: "object" properties: File: description: | File represents a specific target that is backed by a file. <p><br /><p> > **Note**: `Configs.File` and `Configs.Runtime` are mutually exclusive type: "object" properties: Name: description: | Name represents the final filename in the filesystem. type: "string" UID: description: "UID represents the file UID." type: "string" GID: description: "GID represents the file GID." type: "string" Mode: description: "Mode represents the FileMode of the file." type: "integer" format: "uint32" Runtime: description: | Runtime represents a target that is not mounted into the container but is used by the task <p><br /><p> > **Note**: `Configs.File` and `Configs.Runtime` are mutually > exclusive type: "object" ConfigID: description: | ConfigID represents the ID of the specific config that we're referencing. type: "string" ConfigName: description: | ConfigName is the name of the config that this references, but this is just provided for lookup/display purposes. The config in the reference will be identified by its ID. 
type: "string" Isolation: type: "string" description: | Isolation technology of the containers running the service. (Windows only) enum: - "default" - "process" - "hyperv" Init: description: | Run an init inside the container that forwards signals and reaps processes. This field is omitted if empty, and the default (as configured on the daemon) is used. type: "boolean" x-nullable: true Sysctls: description: | Set kernel namedspaced parameters (sysctls) in the container. The Sysctls option on services accepts the same sysctls as the are supported on containers. Note that while the same sysctls are supported, no guarantees or checks are made about their suitability for a clustered environment, and it's up to the user to determine whether a given sysctl will work properly in a Service. type: "object" additionalProperties: type: "string" # This option is not used by Windows containers CapabilityAdd: type: "array" description: | A list of kernel capabilities to add to the default set for the container. items: type: "string" example: - "CAP_NET_RAW" - "CAP_SYS_ADMIN" - "CAP_SYS_CHROOT" - "CAP_SYSLOG" CapabilityDrop: type: "array" description: | A list of kernel capabilities to drop from the default set for the container. items: type: "string" example: - "CAP_NET_RAW" Ulimits: description: | A list of resource limits to set in the container. For example: `{"Name": "nofile", "Soft": 1024, "Hard": 2048}`" type: "array" items: type: "object" properties: Name: description: "Name of ulimit" type: "string" Soft: description: "Soft limit" type: "integer" Hard: description: "Hard limit" type: "integer" NetworkAttachmentSpec: description: | Read-only spec type for non-swarm containers attached to swarm overlay networks. <p><br /></p> > **Note**: ContainerSpec, NetworkAttachmentSpec, and PluginSpec are > mutually exclusive. PluginSpec is only used when the Runtime field > is set to `plugin`. NetworkAttachmentSpec is used when the Runtime > field is set to `attachment`. type: "object" properties: ContainerID: description: "ID of the container represented by this task" type: "string" Resources: description: | Resource requirements which apply to each individual container created as part of the service. type: "object" properties: Limits: description: "Define resources limits." $ref: "#/definitions/Limit" Reservation: description: "Define resources reservation." $ref: "#/definitions/ResourceObject" RestartPolicy: description: | Specification for the restart policy which applies to containers created as part of this service. type: "object" properties: Condition: description: "Condition for restart." type: "string" enum: - "none" - "on-failure" - "any" Delay: description: "Delay between restart attempts." type: "integer" format: "int64" MaxAttempts: description: | Maximum attempts to restart a given container before giving up (default value is 0, which is ignored). type: "integer" format: "int64" default: 0 Window: description: | Windows is the time window used to evaluate the restart policy (default value is 0, which is unbounded). type: "integer" format: "int64" default: 0 Placement: type: "object" properties: Constraints: description: | An array of constraint expressions to limit the set of nodes where a task can be scheduled. Constraint expressions can either use a _match_ (`==`) or _exclude_ (`!=`) rule. Multiple constraints find nodes that satisfy every expression (AND match). 
Constraints can match node or Docker Engine labels as follows: node attribute | matches | example ---------------------|--------------------------------|----------------------------------------------- `node.id` | Node ID | `node.id==2ivku8v2gvtg4` `node.hostname` | Node hostname | `node.hostname!=node-2` `node.role` | Node role (`manager`/`worker`) | `node.role==manager` `node.platform.os` | Node operating system | `node.platform.os==windows` `node.platform.arch` | Node architecture | `node.platform.arch==x86_64` `node.labels` | User-defined node labels | `node.labels.security==high` `engine.labels` | Docker Engine's labels | `engine.labels.operatingsystem==ubuntu-14.04` `engine.labels` apply to Docker Engine labels like operating system, drivers, etc. Swarm administrators add `node.labels` for operational purposes by using the [`node update endpoint`](#operation/NodeUpdate). type: "array" items: type: "string" example: - "node.hostname!=node3.corp.example.com" - "node.role!=manager" - "node.labels.type==production" - "node.platform.os==linux" - "node.platform.arch==x86_64" Preferences: description: | Preferences provide a way to make the scheduler aware of factors such as topology. They are provided in order from highest to lowest precedence. type: "array" items: type: "object" properties: Spread: type: "object" properties: SpreadDescriptor: description: | label descriptor, such as `engine.labels.az`. type: "string" example: - Spread: SpreadDescriptor: "node.labels.datacenter" - Spread: SpreadDescriptor: "node.labels.rack" MaxReplicas: description: | Maximum number of replicas for per node (default value is 0, which is unlimited) type: "integer" format: "int64" default: 0 Platforms: description: | Platforms stores all the platforms that the service's image can run on. This field is used in the platform filter for scheduling. If empty, then the platform filter is off, meaning there are no scheduling restrictions. type: "array" items: $ref: "#/definitions/Platform" ForceUpdate: description: | A counter that triggers an update even if no relevant parameters have been changed. type: "integer" Runtime: description: | Runtime is the type of runtime specified for the task executor. type: "string" Networks: description: "Specifies which networks the service should attach to." type: "array" items: $ref: "#/definitions/NetworkAttachmentConfig" LogDriver: description: | Specifies the log driver to use for tasks created from this spec. If not present, the default one for the swarm will be used, finally falling back to the engine default if not specified. type: "object" properties: Name: type: "string" Options: type: "object" additionalProperties: type: "string" TaskState: type: "string" enum: - "new" - "allocated" - "pending" - "assigned" - "accepted" - "preparing" - "ready" - "starting" - "running" - "complete" - "shutdown" - "failed" - "rejected" - "remove" - "orphaned" Task: type: "object" properties: ID: description: "The ID of the task." type: "string" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" UpdatedAt: type: "string" format: "dateTime" Name: description: "Name of the task." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" Spec: $ref: "#/definitions/TaskSpec" ServiceID: description: "The ID of the service this task is part of." type: "string" Slot: type: "integer" NodeID: description: "The ID of the node that this task is on." 
type: "string" AssignedGenericResources: $ref: "#/definitions/GenericResources" Status: type: "object" properties: Timestamp: type: "string" format: "dateTime" State: $ref: "#/definitions/TaskState" Message: type: "string" Err: type: "string" ContainerStatus: type: "object" properties: ContainerID: type: "string" PID: type: "integer" ExitCode: type: "integer" DesiredState: $ref: "#/definitions/TaskState" JobIteration: description: | If the Service this Task belongs to is a job-mode service, contains the JobIteration of the Service this Task was created for. Absent if the Task was created for a Replicated or Global Service. $ref: "#/definitions/ObjectVersion" example: ID: "0kzzo1i0y4jz6027t0k7aezc7" Version: Index: 71 CreatedAt: "2016-06-07T21:07:31.171892745Z" UpdatedAt: "2016-06-07T21:07:31.376370513Z" Spec: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ServiceID: "9mnpnzenvg8p8tdbtq4wvbkcz" Slot: 1 NodeID: "60gvrl6tm78dmak4yl7srz94v" Status: Timestamp: "2016-06-07T21:07:31.290032978Z" State: "running" Message: "started" ContainerStatus: ContainerID: "e5d62702a1b48d01c3e02ca1e0212a250801fa8d67caca0b6f35919ebc12f035" PID: 677 DesiredState: "running" NetworksAttachments: - Network: ID: "4qvuz4ko70xaltuqbt8956gd1" Version: Index: 18 CreatedAt: "2016-06-07T20:31:11.912919752Z" UpdatedAt: "2016-06-07T21:07:29.955277358Z" Spec: Name: "ingress" Labels: com.docker.swarm.internal: "true" DriverConfiguration: {} IPAMOptions: Driver: {} Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" DriverState: Name: "overlay" Options: com.docker.network.driver.overlay.vxlanid_list: "256" IPAMOptions: Driver: Name: "default" Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" Addresses: - "10.255.0.10/16" AssignedGenericResources: - DiscreteResourceSpec: Kind: "SSD" Value: 3 - NamedResourceSpec: Kind: "GPU" Value: "UUID1" - NamedResourceSpec: Kind: "GPU" Value: "UUID2" ServiceSpec: description: "User modifiable configuration for a service." properties: Name: description: "Name of the service." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" TaskTemplate: $ref: "#/definitions/TaskSpec" Mode: description: "Scheduling mode for the service." type: "object" properties: Replicated: type: "object" properties: Replicas: type: "integer" format: "int64" Global: type: "object" ReplicatedJob: description: | The mode used for services with a finite number of tasks that run to a completed state. type: "object" properties: MaxConcurrent: description: | The maximum number of replicas to run simultaneously. type: "integer" format: "int64" default: 1 TotalCompletions: description: | The total number of replicas desired to reach the Completed state. If unset, will default to the value of `MaxConcurrent` type: "integer" format: "int64" GlobalJob: description: | The mode used for services which run a task to the completed state on each valid node. type: "object" UpdateConfig: description: "Specification for the update strategy of the service." type: "object" properties: Parallelism: description: | Maximum number of tasks to be updated in one iteration (0 means unlimited parallelism). type: "integer" format: "int64" Delay: description: "Amount of time between updates, in nanoseconds." type: "integer" format: "int64" FailureAction: description: | Action to take if an updated task fails to run, or stops running during the update. 
type: "string" enum: - "continue" - "pause" - "rollback" Monitor: description: | Amount of time to monitor each updated task for failures, in nanoseconds. type: "integer" format: "int64" MaxFailureRatio: description: | The fraction of tasks that may fail during an update before the failure action is invoked, specified as a floating point number between 0 and 1. type: "number" default: 0 Order: description: | The order of operations when rolling out an updated task. Either the old task is shut down before the new task is started, or the new task is started before the old task is shut down. type: "string" enum: - "stop-first" - "start-first" RollbackConfig: description: "Specification for the rollback strategy of the service." type: "object" properties: Parallelism: description: | Maximum number of tasks to be rolled back in one iteration (0 means unlimited parallelism). type: "integer" format: "int64" Delay: description: | Amount of time between rollback iterations, in nanoseconds. type: "integer" format: "int64" FailureAction: description: | Action to take if an rolled back task fails to run, or stops running during the rollback. type: "string" enum: - "continue" - "pause" Monitor: description: | Amount of time to monitor each rolled back task for failures, in nanoseconds. type: "integer" format: "int64" MaxFailureRatio: description: | The fraction of tasks that may fail during a rollback before the failure action is invoked, specified as a floating point number between 0 and 1. type: "number" default: 0 Order: description: | The order of operations when rolling back a task. Either the old task is shut down before the new task is started, or the new task is started before the old task is shut down. type: "string" enum: - "stop-first" - "start-first" Networks: description: "Specifies which networks the service should attach to." type: "array" items: $ref: "#/definitions/NetworkAttachmentConfig" EndpointSpec: $ref: "#/definitions/EndpointSpec" EndpointPortConfig: type: "object" properties: Name: type: "string" Protocol: type: "string" enum: - "tcp" - "udp" - "sctp" TargetPort: description: "The port inside the container." type: "integer" PublishedPort: description: "The port on the swarm hosts." type: "integer" PublishMode: description: | The mode in which port is published. <p><br /></p> - "ingress" makes the target port accessible on every node, regardless of whether there is a task for the service running on that node or not. - "host" bypasses the routing mesh and publish the port directly on the swarm node where that service is running. type: "string" enum: - "ingress" - "host" default: "ingress" example: "ingress" EndpointSpec: description: "Properties that can be configured to access and load balance a service." type: "object" properties: Mode: description: | The mode of resolution to use for internal load balancing between tasks. type: "string" enum: - "vip" - "dnsrr" default: "vip" Ports: description: | List of exposed ports that this service is accessible on from the outside. Ports can only be provided if `vip` resolution mode is used. 
type: "array" items: $ref: "#/definitions/EndpointPortConfig" Service: type: "object" properties: ID: type: "string" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" UpdatedAt: type: "string" format: "dateTime" Spec: $ref: "#/definitions/ServiceSpec" Endpoint: type: "object" properties: Spec: $ref: "#/definitions/EndpointSpec" Ports: type: "array" items: $ref: "#/definitions/EndpointPortConfig" VirtualIPs: type: "array" items: type: "object" properties: NetworkID: type: "string" Addr: type: "string" UpdateStatus: description: "The status of a service update." type: "object" properties: State: type: "string" enum: - "updating" - "paused" - "completed" StartedAt: type: "string" format: "dateTime" CompletedAt: type: "string" format: "dateTime" Message: type: "string" ServiceStatus: description: | The status of the service's tasks. Provided only when requested as part of a ServiceList operation. type: "object" properties: RunningTasks: description: | The number of tasks for the service currently in the Running state. type: "integer" format: "uint64" example: 7 DesiredTasks: description: | The number of tasks for the service desired to be running. For replicated services, this is the replica count from the service spec. For global services, this is computed by taking count of all tasks for the service with a Desired State other than Shutdown. type: "integer" format: "uint64" example: 10 CompletedTasks: description: | The number of tasks for a job that are in the Completed state. This field must be cross-referenced with the service type, as the value of 0 may mean the service is not in a job mode, or it may mean the job-mode service has no tasks yet Completed. type: "integer" format: "uint64" JobStatus: description: | The status of the service when it is in one of ReplicatedJob or GlobalJob modes. Absent on Replicated and Global mode services. The JobIteration is an ObjectVersion, but unlike the Service's version, does not need to be sent with an update request. type: "object" properties: JobIteration: description: | JobIteration is a value increased each time a Job is executed, successfully or otherwise. "Executed", in this case, means the job as a whole has been started, not that an individual Task has been launched. A job is "Executed" when its ServiceSpec is updated. JobIteration can be used to disambiguate Tasks belonging to different executions of a job. Though JobIteration will increase with each subsequent execution, it may not necessarily increase by 1, and so JobIteration should not be used to $ref: "#/definitions/ObjectVersion" LastExecution: description: | The last time, as observed by the server, that this job was started. 
type: "string" format: "dateTime" example: ID: "9mnpnzenvg8p8tdbtq4wvbkcz" Version: Index: 19 CreatedAt: "2016-06-07T21:05:51.880065305Z" UpdatedAt: "2016-06-07T21:07:29.962229872Z" Spec: Name: "hopeful_cori" TaskTemplate: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ForceUpdate: 0 Mode: Replicated: Replicas: 1 UpdateConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 RollbackConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 EndpointSpec: Mode: "vip" Ports: - Protocol: "tcp" TargetPort: 6379 PublishedPort: 30001 Endpoint: Spec: Mode: "vip" Ports: - Protocol: "tcp" TargetPort: 6379 PublishedPort: 30001 Ports: - Protocol: "tcp" TargetPort: 6379 PublishedPort: 30001 VirtualIPs: - NetworkID: "4qvuz4ko70xaltuqbt8956gd1" Addr: "10.255.0.2/16" - NetworkID: "4qvuz4ko70xaltuqbt8956gd1" Addr: "10.255.0.3/16" ImageDeleteResponseItem: type: "object" properties: Untagged: description: "The image ID of an image that was untagged" type: "string" Deleted: description: "The image ID of an image that was deleted" type: "string" ServiceUpdateResponse: type: "object" properties: Warnings: description: "Optional warning messages" type: "array" items: type: "string" example: Warning: "unable to pin image doesnotexist:latest to digest: image library/doesnotexist:latest not found" ContainerSummary: type: "object" properties: Id: description: "The ID of this container" type: "string" x-go-name: "ID" Names: description: "The names that this container has been given" type: "array" items: type: "string" Image: description: "The name of the image used when creating this container" type: "string" ImageID: description: "The ID of the image that this container was created from" type: "string" Command: description: "Command to run when starting the container" type: "string" Created: description: "When the container was created" type: "integer" format: "int64" Ports: description: "The ports exposed by this container" type: "array" items: $ref: "#/definitions/Port" SizeRw: description: "The size of files that have been created or changed by this container" type: "integer" format: "int64" SizeRootFs: description: "The total size of all the files in this container" type: "integer" format: "int64" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" State: description: "The state of this container (e.g. `Exited`)" type: "string" Status: description: "Additional human-readable status of this container (e.g. `Exit 0`)" type: "string" HostConfig: type: "object" properties: NetworkMode: type: "string" NetworkSettings: description: "A summary of the container's network settings" type: "object" properties: Networks: type: "object" additionalProperties: $ref: "#/definitions/EndpointSettings" Mounts: type: "array" items: $ref: "#/definitions/Mount" Driver: description: "Driver represents a driver (network, logging, secrets)." type: "object" required: [Name] properties: Name: description: "Name of the driver." type: "string" x-nullable: false example: "some-driver" Options: description: "Key/value map of driver-specific options." type: "object" x-nullable: false additionalProperties: type: "string" example: OptionA: "value for driver-specific option A" OptionB: "value for driver-specific option B" SecretSpec: type: "object" properties: Name: description: "User-defined name of the secret." 
type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" example: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Data: description: | Base64-url-safe-encoded ([RFC 4648](https://tools.ietf.org/html/rfc4648#section-5)) data to store as secret. This field is only used to _create_ a secret, and is not returned by other endpoints. type: "string" example: "" Driver: description: | Name of the secrets driver used to fetch the secret's value from an external secret store. $ref: "#/definitions/Driver" Templating: description: | Templating driver, if applicable Templating controls whether and how to evaluate the config payload as a template. If no driver is set, no templating is used. $ref: "#/definitions/Driver" Secret: type: "object" properties: ID: type: "string" example: "blt1owaxmitz71s9v5zh81zun" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" example: "2017-07-20T13:55:28.678958722Z" UpdatedAt: type: "string" format: "dateTime" example: "2017-07-20T13:55:28.678958722Z" Spec: $ref: "#/definitions/SecretSpec" ConfigSpec: type: "object" properties: Name: description: "User-defined name of the config." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" Data: description: | Base64-url-safe-encoded ([RFC 4648](https://tools.ietf.org/html/rfc4648#section-5)) config data. type: "string" Templating: description: | Templating driver, if applicable Templating controls whether and how to evaluate the config payload as a template. If no driver is set, no templating is used. $ref: "#/definitions/Driver" Config: type: "object" properties: ID: type: "string" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" UpdatedAt: type: "string" format: "dateTime" Spec: $ref: "#/definitions/ConfigSpec" ContainerState: description: | ContainerState stores container's running state. It's part of ContainerJSONBase and will be returned by the "inspect" command. type: "object" properties: Status: description: | String representation of the container state. Can be one of "created", "running", "paused", "restarting", "removing", "exited", or "dead". type: "string" enum: ["created", "running", "paused", "restarting", "removing", "exited", "dead"] example: "running" Running: description: | Whether this container is running. Note that a running container can be _paused_. The `Running` and `Paused` booleans are not mutually exclusive: When pausing a container (on Linux), the freezer cgroup is used to suspend all processes in the container. Freezing the process requires the process to be running. As a result, paused containers are both `Running` _and_ `Paused`. Use the `Status` field instead to determine if a container's state is "running". type: "boolean" example: true Paused: description: "Whether this container is paused." type: "boolean" example: false Restarting: description: "Whether this container is restarting." type: "boolean" example: false OOMKilled: description: | Whether this container has been killed because it ran out of memory. type: "boolean" example: false Dead: type: "boolean" example: false Pid: description: "The process ID of this container" type: "integer" example: 1234 ExitCode: description: "The last exit code of this container" type: "integer" example: 0 Error: type: "string" StartedAt: description: "The time when this container was last started." 
type: "string" example: "2020-01-06T09:06:59.461876391Z" FinishedAt: description: "The time when this container last exited." type: "string" example: "2020-01-06T09:07:59.461876391Z" Health: x-nullable: true $ref: "#/definitions/Health" SystemVersion: type: "object" description: | Response of Engine API: GET "/version" properties: Platform: type: "object" required: [Name] properties: Name: type: "string" Components: type: "array" description: | Information about system components items: type: "object" x-go-name: ComponentVersion required: [Name, Version] properties: Name: description: | Name of the component type: "string" example: "Engine" Version: description: | Version of the component type: "string" x-nullable: false example: "19.03.12" Details: description: | Key/value pairs of strings with additional information about the component. These values are intended for informational purposes only, and their content is not defined, and not part of the API specification. These messages can be printed by the client as information to the user. type: "object" x-nullable: true Version: description: "The version of the daemon" type: "string" example: "19.03.12" ApiVersion: description: | The default (and highest) API version that is supported by the daemon type: "string" example: "1.40" MinAPIVersion: description: | The minimum API version that is supported by the daemon type: "string" example: "1.12" GitCommit: description: | The Git commit of the source code that was used to build the daemon type: "string" example: "48a66213fe" GoVersion: description: | The version Go used to compile the daemon, and the version of the Go runtime in use. type: "string" example: "go1.13.14" Os: description: | The operating system that the daemon is running on ("linux" or "windows") type: "string" example: "linux" Arch: description: | The architecture that the daemon is running on type: "string" example: "amd64" KernelVersion: description: | The kernel version (`uname -r`) that the daemon is running on. This field is omitted when empty. type: "string" example: "4.19.76-linuxkit" Experimental: description: | Indicates if the daemon is started with experimental features enabled. This field is omitted when empty / false. type: "boolean" example: true BuildTime: description: | The date and time that the daemon was compiled. type: "string" example: "2020-06-22T15:49:27.000000000+00:00" SystemInfo: type: "object" properties: ID: description: | Unique identifier of the daemon. <p><br /></p> > **Note**: The format of the ID itself is not part of the API, and > should not be considered stable. type: "string" example: "7TRN:IPZB:QYBB:VPBQ:UMPP:KARE:6ZNR:XE6T:7EWV:PKF4:ZOJD:TPYS" Containers: description: "Total number of containers on the host." type: "integer" example: 14 ContainersRunning: description: | Number of containers with status `"running"`. type: "integer" example: 3 ContainersPaused: description: | Number of containers with status `"paused"`. type: "integer" example: 1 ContainersStopped: description: | Number of containers with status `"stopped"`. type: "integer" example: 10 Images: description: | Total number of images on the host. Both _tagged_ and _untagged_ (dangling) images are counted. type: "integer" example: 508 Driver: description: "Name of the storage driver in use." type: "string" example: "overlay2" DriverStatus: description: | Information specific to the storage driver, provided as "label" / "value" pairs. 
This information is provided by the storage driver, and formatted in a way consistent with the output of `docker info` on the command line. <p><br /></p> > **Note**: The information returned in this field, including the > formatting of values and labels, should not be considered stable, > and may change without notice. type: "array" items: type: "array" items: type: "string" example: - ["Backing Filesystem", "extfs"] - ["Supports d_type", "true"] - ["Native Overlay Diff", "true"] DockerRootDir: description: | Root directory of persistent Docker state. Defaults to `/var/lib/docker` on Linux, and `C:\ProgramData\docker` on Windows. type: "string" example: "/var/lib/docker" Plugins: $ref: "#/definitions/PluginsInfo" MemoryLimit: description: "Indicates if the host has memory limit support enabled." type: "boolean" example: true SwapLimit: description: "Indicates if the host has memory swap limit support enabled." type: "boolean" example: true KernelMemory: description: | Indicates if the host has kernel memory limit support enabled. <p><br /></p> > **Deprecated**: This field is deprecated as the kernel 5.4 deprecated > `kmem.limit_in_bytes`. type: "boolean" example: true CpuCfsPeriod: description: | Indicates if CPU CFS(Completely Fair Scheduler) period is supported by the host. type: "boolean" example: true CpuCfsQuota: description: | Indicates if CPU CFS(Completely Fair Scheduler) quota is supported by the host. type: "boolean" example: true CPUShares: description: | Indicates if CPU Shares limiting is supported by the host. type: "boolean" example: true CPUSet: description: | Indicates if CPUsets (cpuset.cpus, cpuset.mems) are supported by the host. See [cpuset(7)](https://www.kernel.org/doc/Documentation/cgroup-v1/cpusets.txt) type: "boolean" example: true PidsLimit: description: "Indicates if the host kernel has PID limit support enabled." type: "boolean" example: true OomKillDisable: description: "Indicates if OOM killer disable is supported on the host." type: "boolean" IPv4Forwarding: description: "Indicates IPv4 forwarding is enabled." type: "boolean" example: true BridgeNfIptables: description: "Indicates if `bridge-nf-call-iptables` is available on the host." type: "boolean" example: true BridgeNfIp6tables: description: "Indicates if `bridge-nf-call-ip6tables` is available on the host." type: "boolean" example: true Debug: description: | Indicates if the daemon is running in debug-mode / with debug-level logging enabled. type: "boolean" example: true NFd: description: | The total number of file Descriptors in use by the daemon process. This information is only returned if debug-mode is enabled. type: "integer" example: 64 NGoroutines: description: | The number of goroutines that currently exist. This information is only returned if debug-mode is enabled. type: "integer" example: 174 SystemTime: description: | Current system-time in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" example: "2017-08-08T20:28:29.06202363Z" LoggingDriver: description: | The logging driver to use as a default for new containers. type: "string" CgroupDriver: description: | The driver to use for managing cgroups. type: "string" enum: ["cgroupfs", "systemd", "none"] default: "cgroupfs" example: "cgroupfs" CgroupVersion: description: | The version of the cgroup. type: "string" enum: ["1", "2"] default: "1" example: "1" NEventsListener: description: "Number of event listeners subscribed." 
type: "integer" example: 30 KernelVersion: description: | Kernel version of the host. On Linux, this information obtained from `uname`. On Windows this information is queried from the <kbd>HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\</kbd> registry value, for example _"10.0 14393 (14393.1198.amd64fre.rs1_release_sec.170427-1353)"_. type: "string" example: "4.9.38-moby" OperatingSystem: description: | Name of the host's operating system, for example: "Ubuntu 16.04.2 LTS" or "Windows Server 2016 Datacenter" type: "string" example: "Alpine Linux v3.5" OSVersion: description: | Version of the host's operating system <p><br /></p> > **Note**: The information returned in this field, including its > very existence, and the formatting of values, should not be considered > stable, and may change without notice. type: "string" example: "16.04" OSType: description: | Generic type of the operating system of the host, as returned by the Go runtime (`GOOS`). Currently returned values are "linux" and "windows". A full list of possible values can be found in the [Go documentation](https://golang.org/doc/install/source#environment). type: "string" example: "linux" Architecture: description: | Hardware architecture of the host, as returned by the Go runtime (`GOARCH`). A full list of possible values can be found in the [Go documentation](https://golang.org/doc/install/source#environment). type: "string" example: "x86_64" NCPU: description: | The number of logical CPUs usable by the daemon. The number of available CPUs is checked by querying the operating system when the daemon starts. Changes to operating system CPU allocation after the daemon is started are not reflected. type: "integer" example: 4 MemTotal: description: | Total amount of physical memory available on the host, in bytes. type: "integer" format: "int64" example: 2095882240 IndexServerAddress: description: | Address / URL of the index server that is used for image search, and as a default for user authentication for Docker Hub and Docker Cloud. default: "https://index.docker.io/v1/" type: "string" example: "https://index.docker.io/v1/" RegistryConfig: $ref: "#/definitions/RegistryServiceConfig" GenericResources: $ref: "#/definitions/GenericResources" HttpProxy: description: | HTTP-proxy configured for the daemon. This value is obtained from the [`HTTP_PROXY`](https://www.gnu.org/software/wget/manual/html_node/Proxies.html) environment variable. Credentials ([user info component](https://tools.ietf.org/html/rfc3986#section-3.2.1)) in the proxy URL are masked in the API response. Containers do not automatically inherit this configuration. type: "string" example: "http://xxxxx:[email protected]:8080" HttpsProxy: description: | HTTPS-proxy configured for the daemon. This value is obtained from the [`HTTPS_PROXY`](https://www.gnu.org/software/wget/manual/html_node/Proxies.html) environment variable. Credentials ([user info component](https://tools.ietf.org/html/rfc3986#section-3.2.1)) in the proxy URL are masked in the API response. Containers do not automatically inherit this configuration. type: "string" example: "https://xxxxx:[email protected]:4443" NoProxy: description: | Comma-separated list of domain extensions for which no proxy should be used. This value is obtained from the [`NO_PROXY`](https://www.gnu.org/software/wget/manual/html_node/Proxies.html) environment variable. Containers do not automatically inherit this configuration. 
type: "string" example: "*.local, 169.254/16" Name: description: "Hostname of the host." type: "string" example: "node5.corp.example.com" Labels: description: | User-defined labels (key/value metadata) as set on the daemon. <p><br /></p> > **Note**: When part of a Swarm, nodes can both have _daemon_ labels, > set through the daemon configuration, and _node_ labels, set from a > manager node in the Swarm. Node labels are not included in this > field. Node labels can be retrieved using the `/nodes/(id)` endpoint > on a manager node in the Swarm. type: "array" items: type: "string" example: ["storage=ssd", "production"] ExperimentalBuild: description: | Indicates if experimental features are enabled on the daemon. type: "boolean" example: true ServerVersion: description: | Version string of the daemon. > **Note**: the [standalone Swarm API](https://docs.docker.com/swarm/swarm-api/) > returns the Swarm version instead of the daemon version, for example > `swarm/1.2.8`. type: "string" example: "17.06.0-ce" ClusterStore: description: | URL of the distributed storage backend. The storage backend is used for multihost networking (to store network and endpoint information) and by the node discovery mechanism. <p><br /></p> > **Deprecated**: This field is only propagated when using standalone Swarm > mode, and overlay networking using an external k/v store. Overlay > networks with Swarm mode enabled use the built-in raft store, and > this field will be empty. type: "string" example: "consul://consul.corp.example.com:8600/some/path" ClusterAdvertise: description: | The network endpoint that the Engine advertises for the purpose of node discovery. ClusterAdvertise is a `host:port` combination on which the daemon is reachable by other hosts. <p><br /></p> > **Deprecated**: This field is only propagated when using standalone Swarm > mode, and overlay networking using an external k/v store. Overlay > networks with Swarm mode enabled use the built-in raft store, and > this field will be empty. type: "string" example: "node5.corp.example.com:8000" Runtimes: description: | List of [OCI compliant](https://github.com/opencontainers/runtime-spec) runtimes configured on the daemon. Keys hold the "name" used to reference the runtime. The Docker daemon relies on an OCI compliant runtime (invoked via the `containerd` daemon) as its interface to the Linux kernel namespaces, cgroups, and SELinux. The default runtime is `runc`, and automatically configured. Additional runtimes can be configured by the user and will be listed here. type: "object" additionalProperties: $ref: "#/definitions/Runtime" default: runc: path: "runc" example: runc: path: "runc" runc-master: path: "/go/bin/runc" custom: path: "/usr/local/bin/my-oci-runtime" runtimeArgs: ["--debug", "--systemd-cgroup=false"] DefaultRuntime: description: | Name of the default OCI runtime that is used when starting containers. The default can be overridden per-container at create time. type: "string" default: "runc" example: "runc" Swarm: $ref: "#/definitions/SwarmInfo" LiveRestoreEnabled: description: | Indicates if live restore is enabled. If enabled, containers are kept running when the daemon is shutdown or upon daemon start if running containers are detected. type: "boolean" default: false example: false Isolation: description: | Represents the isolation technology to use as a default for containers. The supported values are platform-specific. 
If no isolation value is specified on daemon start, on Windows client, the default is `hyperv`, and on Windows server, the default is `process`. This option is currently not used on other platforms. default: "default" type: "string" enum: - "default" - "hyperv" - "process" InitBinary: description: | Name and, optional, path of the `docker-init` binary. If the path is omitted, the daemon searches the host's `$PATH` for the binary and uses the first result. type: "string" example: "docker-init" ContainerdCommit: $ref: "#/definitions/Commit" RuncCommit: $ref: "#/definitions/Commit" InitCommit: $ref: "#/definitions/Commit" SecurityOptions: description: | List of security features that are enabled on the daemon, such as apparmor, seccomp, SELinux, user-namespaces (userns), and rootless. Additional configuration options for each security feature may be present, and are included as a comma-separated list of key/value pairs. type: "array" items: type: "string" example: - "name=apparmor" - "name=seccomp,profile=default" - "name=selinux" - "name=userns" - "name=rootless" ProductLicense: description: | Reports a summary of the product license on the daemon. If a commercial license has been applied to the daemon, information such as number of nodes, and expiration are included. type: "string" example: "Community Engine" DefaultAddressPools: description: | List of custom default address pools for local networks, which can be specified in the daemon.json file or dockerd option. Example: a Base "10.10.0.0/16" with Size 24 will define the set of 256 10.10.[0-255].0/24 address pools. type: "array" items: type: "object" properties: Base: description: "The network address in CIDR format" type: "string" example: "10.10.0.0/16" Size: description: "The network pool size" type: "integer" example: "24" Warnings: description: | List of warnings / informational messages about missing features, or issues related to the daemon configuration. These messages can be printed by the client as information to the user. type: "array" items: type: "string" example: - "WARNING: No memory limit support" - "WARNING: bridge-nf-call-iptables is disabled" - "WARNING: bridge-nf-call-ip6tables is disabled" # PluginsInfo is a temp struct holding Plugins name # registered with docker daemon. It is used by Info struct PluginsInfo: description: | Available plugins per type. <p><br /></p> > **Note**: Only unmanaged (V1) plugins are included in this list. > V1 plugins are "lazily" loaded, and are not returned in this list > if there is no resource using the plugin. type: "object" properties: Volume: description: "Names of available volume-drivers, and network-driver plugins." type: "array" items: type: "string" example: ["local"] Network: description: "Names of available network-drivers, and network-driver plugins." type: "array" items: type: "string" example: ["bridge", "host", "ipvlan", "macvlan", "null", "overlay"] Authorization: description: "Names of available authorization plugins." type: "array" items: type: "string" example: ["img-authz-plugin", "hbm"] Log: description: "Names of available logging-drivers, and logging-driver plugins." type: "array" items: type: "string" example: ["awslogs", "fluentd", "gcplogs", "gelf", "journald", "json-file", "logentries", "splunk", "syslog"] RegistryServiceConfig: description: | RegistryServiceConfig stores daemon registry services configuration. 
type: "object" x-nullable: true properties: AllowNondistributableArtifactsCIDRs: description: | List of IP ranges to which nondistributable artifacts can be pushed, using the CIDR syntax [RFC 4632](https://tools.ietf.org/html/4632). Some images (for example, Windows base images) contain artifacts whose distribution is restricted by license. When these images are pushed to a registry, restricted artifacts are not included. This configuration override this behavior, and enables the daemon to push nondistributable artifacts to all registries whose resolved IP address is within the subnet described by the CIDR syntax. This option is useful when pushing images containing nondistributable artifacts to a registry on an air-gapped network so hosts on that network can pull the images without connecting to another server. > **Warning**: Nondistributable artifacts typically have restrictions > on how and where they can be distributed and shared. Only use this > feature to push artifacts to private registries and ensure that you > are in compliance with any terms that cover redistributing > nondistributable artifacts. type: "array" items: type: "string" example: ["::1/128", "127.0.0.0/8"] AllowNondistributableArtifactsHostnames: description: | List of registry hostnames to which nondistributable artifacts can be pushed, using the format `<hostname>[:<port>]` or `<IP address>[:<port>]`. Some images (for example, Windows base images) contain artifacts whose distribution is restricted by license. When these images are pushed to a registry, restricted artifacts are not included. This configuration override this behavior for the specified registries. This option is useful when pushing images containing nondistributable artifacts to a registry on an air-gapped network so hosts on that network can pull the images without connecting to another server. > **Warning**: Nondistributable artifacts typically have restrictions > on how and where they can be distributed and shared. Only use this > feature to push artifacts to private registries and ensure that you > are in compliance with any terms that cover redistributing > nondistributable artifacts. type: "array" items: type: "string" example: ["registry.internal.corp.example.com:3000", "[2001:db8:a0b:12f0::1]:443"] InsecureRegistryCIDRs: description: | List of IP ranges of insecure registries, using the CIDR syntax ([RFC 4632](https://tools.ietf.org/html/4632)). Insecure registries accept un-encrypted (HTTP) and/or untrusted (HTTPS with certificates from unknown CAs) communication. By default, local registries (`127.0.0.0/8`) are configured as insecure. All other registries are secure. Communicating with an insecure registry is not possible if the daemon assumes that registry is secure. This configuration override this behavior, insecure communication with registries whose resolved IP address is within the subnet described by the CIDR syntax. Registries can also be marked insecure by hostname. Those registries are listed under `IndexConfigs` and have their `Secure` field set to `false`. > **Warning**: Using this option can be useful when running a local > registry, but introduces security vulnerabilities. This option > should therefore ONLY be used for testing purposes. For increased > security, users should add their CA to their system's list of trusted > CAs instead of enabling this option. 
type: "array" items: type: "string" example: ["::1/128", "127.0.0.0/8"] IndexConfigs: type: "object" additionalProperties: $ref: "#/definitions/IndexInfo" example: "127.0.0.1:5000": "Name": "127.0.0.1:5000" "Mirrors": [] "Secure": false "Official": false "[2001:db8:a0b:12f0::1]:80": "Name": "[2001:db8:a0b:12f0::1]:80" "Mirrors": [] "Secure": false "Official": false "docker.io": Name: "docker.io" Mirrors: ["https://hub-mirror.corp.example.com:5000/"] Secure: true Official: true "registry.internal.corp.example.com:3000": Name: "registry.internal.corp.example.com:3000" Mirrors: [] Secure: false Official: false Mirrors: description: | List of registry URLs that act as a mirror for the official (`docker.io`) registry. type: "array" items: type: "string" example: - "https://hub-mirror.corp.example.com:5000/" - "https://[2001:db8:a0b:12f0::1]/" IndexInfo: description: IndexInfo contains information about a registry. type: "object" x-nullable: true properties: Name: description: | Name of the registry, such as "docker.io". type: "string" example: "docker.io" Mirrors: description: | List of mirrors, expressed as URIs. type: "array" items: type: "string" example: - "https://hub-mirror.corp.example.com:5000/" - "https://registry-2.docker.io/" - "https://registry-3.docker.io/" Secure: description: | Indicates if the registry is part of the list of insecure registries. If `false`, the registry is insecure. Insecure registries accept un-encrypted (HTTP) and/or untrusted (HTTPS with certificates from unknown CAs) communication. > **Warning**: Insecure registries can be useful when running a local > registry. However, because its use creates security vulnerabilities > it should ONLY be enabled for testing purposes. For increased > security, users should add their CA to their system's list of > trusted CAs instead of enabling this option. type: "boolean" example: true Official: description: | Indicates whether this is an official registry (i.e., Docker Hub / docker.io) type: "boolean" example: true Runtime: description: | Runtime describes an [OCI compliant](https://github.com/opencontainers/runtime-spec) runtime. The runtime is invoked by the daemon via the `containerd` daemon. OCI runtimes act as an interface to the Linux kernel namespaces, cgroups, and SELinux. type: "object" properties: path: description: | Name and, optional, path, of the OCI executable binary. If the path is omitted, the daemon searches the host's `$PATH` for the binary and uses the first result. type: "string" example: "/usr/local/bin/my-oci-runtime" runtimeArgs: description: | List of command-line arguments to pass to the runtime when invoked. type: "array" x-nullable: true items: type: "string" example: ["--debug", "--systemd-cgroup=false"] Commit: description: | Commit holds the Git-commit (SHA1) that a binary was built from, as reported in the version-string of external tools, such as `containerd`, or `runC`. type: "object" properties: ID: description: "Actual commit ID of external tool." type: "string" example: "cfb82a876ecc11b5ca0977d1733adbe58599088a" Expected: description: | Commit ID of external tool expected by dockerd as set at build time. type: "string" example: "2d41c047c83e09a6d61d464906feb2a2f3c52aa4" SwarmInfo: description: | Represents generic information about swarm. type: "object" properties: NodeID: description: "Unique identifier of for this node in the swarm." 
type: "string" default: "" example: "k67qz4598weg5unwwffg6z1m1" NodeAddr: description: | IP address at which this node can be reached by other nodes in the swarm. type: "string" default: "" example: "10.0.0.46" LocalNodeState: $ref: "#/definitions/LocalNodeState" ControlAvailable: type: "boolean" default: false example: true Error: type: "string" default: "" RemoteManagers: description: | List of ID's and addresses of other managers in the swarm. type: "array" default: null x-nullable: true items: $ref: "#/definitions/PeerNode" example: - NodeID: "71izy0goik036k48jg985xnds" Addr: "10.0.0.158:2377" - NodeID: "79y6h1o4gv8n120drcprv5nmc" Addr: "10.0.0.159:2377" - NodeID: "k67qz4598weg5unwwffg6z1m1" Addr: "10.0.0.46:2377" Nodes: description: "Total number of nodes in the swarm." type: "integer" x-nullable: true example: 4 Managers: description: "Total number of managers in the swarm." type: "integer" x-nullable: true example: 3 Cluster: $ref: "#/definitions/ClusterInfo" LocalNodeState: description: "Current local status of this node." type: "string" default: "" enum: - "" - "inactive" - "pending" - "active" - "error" - "locked" example: "active" PeerNode: description: "Represents a peer-node in the swarm" properties: NodeID: description: "Unique identifier of for this node in the swarm." type: "string" Addr: description: | IP address and ports at which this node can be reached. type: "string" NetworkAttachmentConfig: description: | Specifies how a service should be attached to a particular network. type: "object" properties: Target: description: | The target network for attachment. Must be a network name or ID. type: "string" Aliases: description: | Discoverable alternate names for the service on this network. type: "array" items: type: "string" DriverOpts: description: | Driver attachment options for the network target. type: "object" additionalProperties: type: "string" paths: /containers/json: get: summary: "List containers" description: | Returns a list of containers. For details on the format, see the [inspect endpoint](#operation/ContainerInspect). Note that it uses a different, smaller representation of a container than inspecting a single container. For example, the list of linked containers is not propagated . operationId: "ContainerList" produces: - "application/json" parameters: - name: "all" in: "query" description: | Return all containers. By default, only running containers are shown. type: "boolean" default: false - name: "limit" in: "query" description: | Return this number of most recently created containers, including non-running ones. type: "integer" - name: "size" in: "query" description: | Return the size of container as fields `SizeRw` and `SizeRootFs`. type: "boolean" default: false - name: "filters" in: "query" description: | Filters to process on the container list, encoded as JSON (a `map[string][]string`). For example, `{"status": ["paused"]}` will only return paused containers. 
Available filters: - `ancestor`=(`<image-name>[:<tag>]`, `<image id>`, or `<image@digest>`) - `before`=(`<container id>` or `<container name>`) - `expose`=(`<port>[/<proto>]`|`<startport-endport>/[<proto>]`) - `exited=<int>` containers with exit code of `<int>` - `health`=(`starting`|`healthy`|`unhealthy`|`none`) - `id=<ID>` a container's ID - `isolation=`(`default`|`process`|`hyperv`) (Windows daemon only) - `is-task=`(`true`|`false`) - `label=key` or `label="key=value"` of a container label - `name=<name>` a container's name - `network`=(`<network id>` or `<network name>`) - `publish`=(`<port>[/<proto>]`|`<startport-endport>/[<proto>]`) - `since`=(`<container id>` or `<container name>`) - `status=`(`created`|`restarting`|`running`|`removing`|`paused`|`exited`|`dead`) - `volume`=(`<volume name>` or `<mount point destination>`) type: "string" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/ContainerSummary" examples: application/json: - Id: "8dfafdbc3a40" Names: - "/boring_feynman" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 1" Created: 1367854155 State: "Exited" Status: "Exit 0" Ports: - PrivatePort: 2222 PublicPort: 3333 Type: "tcp" Labels: com.example.vendor: "Acme" com.example.license: "GPL" com.example.version: "1.0" SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "2cdc4edb1ded3631c81f57966563e5c8525b81121bb3706a9a9a3ae102711f3f" Gateway: "172.17.0.1" IPAddress: "172.17.0.2" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:02" Mounts: - Name: "fac362...80535" Source: "/data" Destination: "/data" Driver: "local" Mode: "ro,Z" RW: false Propagation: "" - Id: "9cd87474be90" Names: - "/coolName" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 222222" Created: 1367854155 State: "Exited" Status: "Exit 0" Ports: [] Labels: {} SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "88eaed7b37b38c2a3f0c4bc796494fdf51b270c2d22656412a2ca5d559a64d7a" Gateway: "172.17.0.1" IPAddress: "172.17.0.8" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:08" Mounts: [] - Id: "3176a2479c92" Names: - "/sleepy_dog" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 3333333333333333" Created: 1367854154 State: "Exited" Status: "Exit 0" Ports: [] Labels: {} SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "8b27c041c30326d59cd6e6f510d4f8d1d570a228466f956edf7815508f78e30d" Gateway: "172.17.0.1" IPAddress: "172.17.0.6" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:06" Mounts: [] - Id: "4cb07b47f9fb" Names: - "/running_cat" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 444444444444444444444444444444444" Created: 1367854152 State: "Exited" Status: "Exit 0" Ports: [] Labels: {} SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" 
NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "d91c7b2f0644403d7ef3095985ea0e2370325cd2332ff3a3225c4247328e66e9" Gateway: "172.17.0.1" IPAddress: "172.17.0.5" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:05" Mounts: [] 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Container"] /containers/create: post: summary: "Create a container" operationId: "ContainerCreate" consumes: - "application/json" - "application/octet-stream" produces: - "application/json" parameters: - name: "name" in: "query" description: | Assign the specified name to the container. Must match `/?[a-zA-Z0-9][a-zA-Z0-9_.-]+`. type: "string" pattern: "^/?[a-zA-Z0-9][a-zA-Z0-9_.-]+$" - name: "body" in: "body" description: "Container to create" schema: allOf: - $ref: "#/definitions/ContainerConfig" - type: "object" properties: HostConfig: $ref: "#/definitions/HostConfig" NetworkingConfig: $ref: "#/definitions/NetworkingConfig" example: Hostname: "" Domainname: "" User: "" AttachStdin: false AttachStdout: true AttachStderr: true Tty: false OpenStdin: false StdinOnce: false Env: - "FOO=bar" - "BAZ=quux" Cmd: - "date" Entrypoint: "" Image: "ubuntu" Labels: com.example.vendor: "Acme" com.example.license: "GPL" com.example.version: "1.0" Volumes: /volumes/data: {} WorkingDir: "" NetworkDisabled: false MacAddress: "12:34:56:78:9a:bc" ExposedPorts: 22/tcp: {} StopSignal: "SIGTERM" StopTimeout: 10 HostConfig: Binds: - "/tmp:/tmp" Links: - "redis3:redis" Memory: 0 MemorySwap: 0 MemoryReservation: 0 KernelMemory: 0 NanoCpus: 500000 CpuPercent: 80 CpuShares: 512 CpuPeriod: 100000 CpuRealtimePeriod: 1000000 CpuRealtimeRuntime: 10000 CpuQuota: 50000 CpusetCpus: "0,1" CpusetMems: "0,1" MaximumIOps: 0 MaximumIOBps: 0 BlkioWeight: 300 BlkioWeightDevice: - {} BlkioDeviceReadBps: - {} BlkioDeviceReadIOps: - {} BlkioDeviceWriteBps: - {} BlkioDeviceWriteIOps: - {} DeviceRequests: - Driver: "nvidia" Count: -1 DeviceIDs": ["0", "1", "GPU-fef8089b-4820-abfc-e83e-94318197576e"] Capabilities: [["gpu", "nvidia", "compute"]] Options: property1: "string" property2: "string" MemorySwappiness: 60 OomKillDisable: false OomScoreAdj: 500 PidMode: "" PidsLimit: 0 PortBindings: 22/tcp: - HostPort: "11022" PublishAllPorts: false Privileged: false ReadonlyRootfs: false Dns: - "8.8.8.8" DnsOptions: - "" DnsSearch: - "" VolumesFrom: - "parent" - "other:ro" CapAdd: - "NET_ADMIN" CapDrop: - "MKNOD" GroupAdd: - "newgroup" RestartPolicy: Name: "" MaximumRetryCount: 0 AutoRemove: true NetworkMode: "bridge" Devices: [] Ulimits: - {} LogConfig: Type: "json-file" Config: {} SecurityOpt: [] StorageOpt: {} CgroupParent: "" VolumeDriver: "" ShmSize: 67108864 NetworkingConfig: EndpointsConfig: isolated_nw: IPAMConfig: IPv4Address: "172.20.30.33" IPv6Address: "2001:db8:abcd::3033" LinkLocalIPs: - "169.254.34.68" - "fe80::3468" Links: - "container_1" - "container_2" Aliases: - "server_x" - "server_y" required: true responses: 201: description: "Container created successfully" schema: type: "object" title: "ContainerCreateResponse" description: "OK response to ContainerCreate operation" required: [Id, Warnings] properties: Id: description: "The ID of the created container" type: "string" x-nullable: false Warnings: description: "Warnings encountered when creating the container" type: "array" x-nullable: false items: type: "string" 
examples: application/json: Id: "e90e34656806" Warnings: [] 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such image" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such image: c2ada9df5af8" 409: description: "conflict" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Container"] /containers/{id}/json: get: summary: "Inspect a container" description: "Return low-level information about a container." operationId: "ContainerInspect" produces: - "application/json" responses: 200: description: "no error" schema: type: "object" title: "ContainerInspectResponse" properties: Id: description: "The ID of the container" type: "string" Created: description: "The time the container was created" type: "string" Path: description: "The path to the command being run" type: "string" Args: description: "The arguments to the command being run" type: "array" items: type: "string" State: x-nullable: true $ref: "#/definitions/ContainerState" Image: description: "The container's image ID" type: "string" ResolvConfPath: type: "string" HostnamePath: type: "string" HostsPath: type: "string" LogPath: type: "string" Name: type: "string" RestartCount: type: "integer" Driver: type: "string" Platform: type: "string" MountLabel: type: "string" ProcessLabel: type: "string" AppArmorProfile: type: "string" ExecIDs: description: "IDs of exec instances that are running in the container." type: "array" items: type: "string" x-nullable: true HostConfig: $ref: "#/definitions/HostConfig" GraphDriver: $ref: "#/definitions/GraphDriverData" SizeRw: description: | The size of files that have been created or changed by this container. type: "integer" format: "int64" SizeRootFs: description: "The total size of all the files in this container." 
type: "integer" format: "int64" Mounts: type: "array" items: $ref: "#/definitions/MountPoint" Config: $ref: "#/definitions/ContainerConfig" NetworkSettings: $ref: "#/definitions/NetworkSettings" examples: application/json: AppArmorProfile: "" Args: - "-c" - "exit 9" Config: AttachStderr: true AttachStdin: false AttachStdout: true Cmd: - "/bin/sh" - "-c" - "exit 9" Domainname: "" Env: - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" Healthcheck: Test: ["CMD-SHELL", "exit 0"] Hostname: "ba033ac44011" Image: "ubuntu" Labels: com.example.vendor: "Acme" com.example.license: "GPL" com.example.version: "1.0" MacAddress: "" NetworkDisabled: false OpenStdin: false StdinOnce: false Tty: false User: "" Volumes: /volumes/data: {} WorkingDir: "" StopSignal: "SIGTERM" StopTimeout: 10 Created: "2015-01-06T15:47:31.485331387Z" Driver: "devicemapper" ExecIDs: - "b35395de42bc8abd327f9dd65d913b9ba28c74d2f0734eeeae84fa1c616a0fca" - "3fc1232e5cd20c8de182ed81178503dc6437f4e7ef12b52cc5e8de020652f1c4" HostConfig: MaximumIOps: 0 MaximumIOBps: 0 BlkioWeight: 0 BlkioWeightDevice: - {} BlkioDeviceReadBps: - {} BlkioDeviceWriteBps: - {} BlkioDeviceReadIOps: - {} BlkioDeviceWriteIOps: - {} ContainerIDFile: "" CpusetCpus: "" CpusetMems: "" CpuPercent: 80 CpuShares: 0 CpuPeriod: 100000 CpuRealtimePeriod: 1000000 CpuRealtimeRuntime: 10000 Devices: [] DeviceRequests: - Driver: "nvidia" Count: -1 DeviceIDs": ["0", "1", "GPU-fef8089b-4820-abfc-e83e-94318197576e"] Capabilities: [["gpu", "nvidia", "compute"]] Options: property1: "string" property2: "string" IpcMode: "" LxcConf: [] Memory: 0 MemorySwap: 0 MemoryReservation: 0 KernelMemory: 0 OomKillDisable: false OomScoreAdj: 500 NetworkMode: "bridge" PidMode: "" PortBindings: {} Privileged: false ReadonlyRootfs: false PublishAllPorts: false RestartPolicy: MaximumRetryCount: 2 Name: "on-failure" LogConfig: Type: "json-file" Sysctls: net.ipv4.ip_forward: "1" Ulimits: - {} VolumeDriver: "" ShmSize: 67108864 HostnamePath: "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/hostname" HostsPath: "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/hosts" LogPath: "/var/lib/docker/containers/1eb5fabf5a03807136561b3c00adcd2992b535d624d5e18b6cdc6a6844d9767b/1eb5fabf5a03807136561b3c00adcd2992b535d624d5e18b6cdc6a6844d9767b-json.log" Id: "ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39" Image: "04c5d3b7b0656168630d3ba35d8889bd0e9caafcaeb3004d2bfbc47e7c5d35d2" MountLabel: "" Name: "/boring_euclid" NetworkSettings: Bridge: "" SandboxID: "" HairpinMode: false LinkLocalIPv6Address: "" LinkLocalIPv6PrefixLen: 0 SandboxKey: "" EndpointID: "" Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 IPAddress: "" IPPrefixLen: 0 IPv6Gateway: "" MacAddress: "" Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "7587b82f0dada3656fda26588aee72630c6fab1536d36e394b2bfbcf898c971d" Gateway: "172.17.0.1" IPAddress: "172.17.0.2" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:12:00:02" Path: "/bin/sh" ProcessLabel: "" ResolvConfPath: "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/resolv.conf" RestartCount: 1 State: Error: "" ExitCode: 9 FinishedAt: "2015-01-06T15:47:32.080254511Z" Health: Status: "healthy" FailingStreak: 0 Log: - Start: "2019-12-22T10:59:05.6385933Z" End: "2019-12-22T10:59:05.8078452Z" ExitCode: 0 Output: "" OOMKilled: 
false Dead: false Paused: false Pid: 0 Restarting: false Running: true StartedAt: "2015-01-06T15:47:32.072697474Z" Status: "running" Mounts: - Name: "fac362...80535" Source: "/data" Destination: "/data" Driver: "local" Mode: "ro,Z" RW: false Propagation: "" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "size" in: "query" type: "boolean" default: false description: "Return the size of container as fields `SizeRw` and `SizeRootFs`" tags: ["Container"] /containers/{id}/top: get: summary: "List processes running inside a container" description: | On Unix systems, this is done by running the `ps` command. This endpoint is not supported on Windows. operationId: "ContainerTop" responses: 200: description: "no error" schema: type: "object" title: "ContainerTopResponse" description: "OK response to ContainerTop operation" properties: Titles: description: "The ps column titles" type: "array" items: type: "string" Processes: description: | Each process running in the container, where each is process is an array of values corresponding to the titles. type: "array" items: type: "array" items: type: "string" examples: application/json: Titles: - "UID" - "PID" - "PPID" - "C" - "STIME" - "TTY" - "TIME" - "CMD" Processes: - - "root" - "13642" - "882" - "0" - "17:03" - "pts/0" - "00:00:00" - "/bin/bash" - - "root" - "13735" - "13642" - "0" - "17:06" - "pts/0" - "00:00:00" - "sleep 10" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "ps_args" in: "query" description: "The arguments to pass to `ps`. For example, `aux`" type: "string" default: "-ef" tags: ["Container"] /containers/{id}/logs: get: summary: "Get container logs" description: | Get `stdout` and `stderr` logs from a container. Note: This endpoint works only for containers with the `json-file` or `journald` logging driver. operationId: "ContainerLogs" responses: 200: description: | logs returned as a stream in response body. For the stream format, [see the documentation for the attach endpoint](#operation/ContainerAttach). Note that unlike the attach endpoint, the logs endpoint does not upgrade the connection and does not set Content-Type. schema: type: "string" format: "binary" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "follow" in: "query" description: "Keep connection after returning logs." 
type: "boolean" default: false - name: "stdout" in: "query" description: "Return logs from `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Return logs from `stderr`" type: "boolean" default: false - name: "since" in: "query" description: "Only return logs since this time, as a UNIX timestamp" type: "integer" default: 0 - name: "until" in: "query" description: "Only return logs before this time, as a UNIX timestamp" type: "integer" default: 0 - name: "timestamps" in: "query" description: "Add timestamps to every log line" type: "boolean" default: false - name: "tail" in: "query" description: | Only return this number of log lines from the end of the logs. Specify as an integer or `all` to output all log lines. type: "string" default: "all" tags: ["Container"] /containers/{id}/changes: get: summary: "Get changes on a container’s filesystem" description: | Returns which files in a container's filesystem have been added, deleted, or modified. The `Kind` of modification can be one of: - `0`: Modified - `1`: Added - `2`: Deleted operationId: "ContainerChanges" produces: ["application/json"] responses: 200: description: "The list of changes" schema: type: "array" items: type: "object" x-go-name: "ContainerChangeResponseItem" title: "ContainerChangeResponseItem" description: "change item in response to ContainerChanges operation" required: [Path, Kind] properties: Path: description: "Path to file that has changed" type: "string" x-nullable: false Kind: description: "Kind of change" type: "integer" format: "uint8" enum: [0, 1, 2] x-nullable: false examples: application/json: - Path: "/dev" Kind: 0 - Path: "/dev/kmsg" Kind: 1 - Path: "/test" Kind: 1 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/export: get: summary: "Export a container" description: "Export the contents of a container as a tarball." operationId: "ContainerExport" produces: - "application/octet-stream" responses: 200: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/stats: get: summary: "Get container stats based on resource usage" description: | This endpoint returns a live stream of a container’s resource usage statistics. The `precpu_stats` is the CPU statistic of the *previous* read, and is used to calculate the CPU usage percentage. It is not an exact copy of the `cpu_stats` field. If either `precpu_stats.online_cpus` or `cpu_stats.online_cpus` is nil then for compatibility with older daemons the length of the corresponding `cpu_usage.percpu_usage` array should be used. On a cgroup v2 host, the following fields are not set * `blkio_stats`: all fields other than `io_service_bytes_recursive` * `cpu_stats`: `cpu_usage.percpu_usage` * `memory_stats`: `max_usage` and `failcnt` Also, `memory_stats.stats` fields are incompatible with cgroup v1. 
To calculate the values shown by the `stats` command of the docker cli tool, the following formulas can be used: * used_memory = `memory_stats.usage - memory_stats.stats.cache` * available_memory = `memory_stats.limit` * Memory usage % = `(used_memory / available_memory) * 100.0` * cpu_delta = `cpu_stats.cpu_usage.total_usage - precpu_stats.cpu_usage.total_usage` * system_cpu_delta = `cpu_stats.system_cpu_usage - precpu_stats.system_cpu_usage` * number_cpus = `length(cpu_stats.cpu_usage.percpu_usage)` or `cpu_stats.online_cpus` * CPU usage % = `(cpu_delta / system_cpu_delta) * number_cpus * 100.0` operationId: "ContainerStats" produces: ["application/json"] responses: 200: description: "no error" schema: type: "object" examples: application/json: read: "2015-01-08T22:57:31.547920715Z" pids_stats: current: 3 networks: eth0: rx_bytes: 5338 rx_dropped: 0 rx_errors: 0 rx_packets: 36 tx_bytes: 648 tx_dropped: 0 tx_errors: 0 tx_packets: 8 eth5: rx_bytes: 4641 rx_dropped: 0 rx_errors: 0 rx_packets: 26 tx_bytes: 690 tx_dropped: 0 tx_errors: 0 tx_packets: 9 memory_stats: stats: total_pgmajfault: 0 cache: 0 mapped_file: 0 total_inactive_file: 0 pgpgout: 414 rss: 6537216 total_mapped_file: 0 writeback: 0 unevictable: 0 pgpgin: 477 total_unevictable: 0 pgmajfault: 0 total_rss: 6537216 total_rss_huge: 6291456 total_writeback: 0 total_inactive_anon: 0 rss_huge: 6291456 hierarchical_memory_limit: 67108864 total_pgfault: 964 total_active_file: 0 active_anon: 6537216 total_active_anon: 6537216 total_pgpgout: 414 total_cache: 0 inactive_anon: 0 active_file: 0 pgfault: 964 inactive_file: 0 total_pgpgin: 477 max_usage: 6651904 usage: 6537216 failcnt: 0 limit: 67108864 blkio_stats: {} cpu_stats: cpu_usage: percpu_usage: - 8646879 - 24472255 - 36438778 - 30657443 usage_in_usermode: 50000000 total_usage: 100215355 usage_in_kernelmode: 30000000 system_cpu_usage: 739306590000000 online_cpus: 4 throttling_data: periods: 0 throttled_periods: 0 throttled_time: 0 precpu_stats: cpu_usage: percpu_usage: - 8646879 - 24350896 - 36438778 - 30657443 usage_in_usermode: 50000000 total_usage: 100093996 usage_in_kernelmode: 30000000 system_cpu_usage: 9492140000000 online_cpus: 4 throttling_data: periods: 0 throttled_periods: 0 throttled_time: 0 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "stream" in: "query" description: | Stream the output. If false, the stats will be output once and then it will disconnect. type: "boolean" default: true - name: "one-shot" in: "query" description: | Only get a single stat instead of waiting for 2 cycles. Must be used with `stream=false`. type: "boolean" default: false tags: ["Container"] /containers/{id}/resize: post: summary: "Resize a container TTY" description: "Resize the TTY for a container."
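As a rough illustration of the stats formulas above, a Go sketch of the CPU-percentage calculation; the `cpuPercent` helper and its argument names are invented for this example, and simply map onto `cpu_stats`, `precpu_stats`, and `online_cpus` as described.

```go
// cpuPercent applies the CPU formulas above to one stats sample.
// totalUsage/prevTotalUsage come from cpu_stats.cpu_usage.total_usage and
// precpu_stats.cpu_usage.total_usage; systemUsage/prevSystemUsage from the
// two system_cpu_usage fields; onlineCPUs falls back to the length of
// percpu_usage when online_cpus is not reported (older daemons).
func cpuPercent(totalUsage, prevTotalUsage, systemUsage, prevSystemUsage uint64, onlineCPUs, percpuLen int) float64 {
	cpuDelta := float64(totalUsage) - float64(prevTotalUsage)
	systemDelta := float64(systemUsage) - float64(prevSystemUsage)
	numCPUs := float64(onlineCPUs)
	if onlineCPUs == 0 {
		numCPUs = float64(percpuLen)
	}
	if cpuDelta <= 0 || systemDelta <= 0 {
		return 0
	}
	return (cpuDelta / systemDelta) * numCPUs * 100.0
}
```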
operationId: "ContainerResize" consumes: - "application/octet-stream" produces: - "text/plain" responses: 200: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "cannot resize container" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "h" in: "query" description: "Height of the TTY session in characters" type: "integer" - name: "w" in: "query" description: "Width of the TTY session in characters" type: "integer" tags: ["Container"] /containers/{id}/start: post: summary: "Start a container" operationId: "ContainerStart" responses: 204: description: "no error" 304: description: "container already started" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "detachKeys" in: "query" description: | Override the key sequence for detaching a container. Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,` or `_`. type: "string" tags: ["Container"] /containers/{id}/stop: post: summary: "Stop a container" operationId: "ContainerStop" responses: 204: description: "no error" 304: description: "container already stopped" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "t" in: "query" description: "Number of seconds to wait before killing the container" type: "integer" tags: ["Container"] /containers/{id}/restart: post: summary: "Restart a container" operationId: "ContainerRestart" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "t" in: "query" description: "Number of seconds to wait before killing the container" type: "integer" tags: ["Container"] /containers/{id}/kill: post: summary: "Kill a container" description: | Send a POSIX signal to a container, defaulting to killing to the container. 
operationId: "ContainerKill" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "container is not running" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "Container d37cde0fe4ad63c3a7252023b2f9800282894247d145cb5933ddf6e52cc03a28 is not running" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "signal" in: "query" description: "Signal to send to the container as an integer or string (e.g. `SIGINT`)" type: "string" default: "SIGKILL" tags: ["Container"] /containers/{id}/update: post: summary: "Update a container" description: | Change various configuration options of a container without having to recreate it. operationId: "ContainerUpdate" consumes: ["application/json"] produces: ["application/json"] responses: 200: description: "The container has been updated." schema: type: "object" title: "ContainerUpdateResponse" description: "OK response to ContainerUpdate operation" properties: Warnings: type: "array" items: type: "string" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "update" in: "body" required: true schema: allOf: - $ref: "#/definitions/Resources" - type: "object" properties: RestartPolicy: $ref: "#/definitions/RestartPolicy" example: BlkioWeight: 300 CpuShares: 512 CpuPeriod: 100000 CpuQuota: 50000 CpuRealtimePeriod: 1000000 CpuRealtimeRuntime: 10000 CpusetCpus: "0,1" CpusetMems: "0" Memory: 314572800 MemorySwap: 514288000 MemoryReservation: 209715200 KernelMemory: 52428800 RestartPolicy: MaximumRetryCount: 4 Name: "on-failure" tags: ["Container"] /containers/{id}/rename: post: summary: "Rename a container" operationId: "ContainerRename" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "name already in use" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "name" in: "query" required: true description: "New name for the container" type: "string" tags: ["Container"] /containers/{id}/pause: post: summary: "Pause a container" description: | Use the freezer cgroup to suspend all processes in a container. Traditionally, when suspending a process the `SIGSTOP` signal is used, which is observable by the process being suspended. With the freezer cgroup the process is unaware, and unable to capture, that it is being suspended, and subsequently resumed. 
operationId: "ContainerPause" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/unpause: post: summary: "Unpause a container" description: "Resume a container which has been paused." operationId: "ContainerUnpause" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/attach: post: summary: "Attach to a container" description: | Attach to a container to read its output or send it input. You can attach to the same container multiple times and you can reattach to containers that have been detached. Either the `stream` or `logs` parameter must be `true` for this endpoint to do anything. See the [documentation for the `docker attach` command](https://docs.docker.com/engine/reference/commandline/attach/) for more details. ### Hijacking This endpoint hijacks the HTTP connection to transport `stdin`, `stdout`, and `stderr` on the same socket. This is the response from the daemon for an attach request: ``` HTTP/1.1 200 OK Content-Type: application/vnd.docker.raw-stream [STREAM] ``` After the headers and two new lines, the TCP connection can now be used for raw, bidirectional communication between the client and server. To hint potential proxies about connection hijacking, the Docker client can also optionally send connection upgrade headers. For example, the client sends this request to upgrade the connection: ``` POST /containers/16253994b7c4/attach?stream=1&stdout=1 HTTP/1.1 Upgrade: tcp Connection: Upgrade ``` The Docker daemon will respond with a `101 UPGRADED` response, and will similarly follow with the raw stream: ``` HTTP/1.1 101 UPGRADED Content-Type: application/vnd.docker.raw-stream Connection: Upgrade Upgrade: tcp [STREAM] ``` ### Stream format When the TTY setting is disabled in [`POST /containers/create`](#operation/ContainerCreate), the stream over the hijacked connected is multiplexed to separate out `stdout` and `stderr`. The stream consists of a series of frames, each containing a header and a payload. The header contains the information which the stream writes (`stdout` or `stderr`). It also contains the size of the associated frame encoded in the last four bytes (`uint32`). It is encoded on the first eight bytes like this: ```go header := [8]byte{STREAM_TYPE, 0, 0, 0, SIZE1, SIZE2, SIZE3, SIZE4} ``` `STREAM_TYPE` can be: - 0: `stdin` (is written on `stdout`) - 1: `stdout` - 2: `stderr` `SIZE1, SIZE2, SIZE3, SIZE4` are the four bytes of the `uint32` size encoded as big endian. Following the header is the payload, which is the specified number of bytes of `STREAM_TYPE`. The simplest way to implement this protocol is the following: 1. Read 8 bytes. 2. Choose `stdout` or `stderr` depending on the first byte. 3. Extract the frame size from the last four bytes. 4. Read the extracted size and output it on the correct output. 5. Goto 1. 
### Stream format when using a TTY When the TTY setting is enabled in [`POST /containers/create`](#operation/ContainerCreate), the stream is not multiplexed. The data exchanged over the hijacked connection is simply the raw data from the process PTY and client's `stdin`. operationId: "ContainerAttach" produces: - "application/vnd.docker.raw-stream" responses: 101: description: "no error, hints proxy about hijacking" 200: description: "no error, no upgrade header found" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "detachKeys" in: "query" description: | Override the key sequence for detaching a container.Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,` or `_`. type: "string" - name: "logs" in: "query" description: | Replay previous logs from the container. This is useful for attaching to a container that has started and you want to output everything since the container started. If `stream` is also enabled, once all the previous output has been returned, it will seamlessly transition into streaming current output. type: "boolean" default: false - name: "stream" in: "query" description: | Stream attached streams from the time the request was made onwards. type: "boolean" default: false - name: "stdin" in: "query" description: "Attach to `stdin`" type: "boolean" default: false - name: "stdout" in: "query" description: "Attach to `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Attach to `stderr`" type: "boolean" default: false tags: ["Container"] /containers/{id}/attach/ws: get: summary: "Attach to a container via a websocket" operationId: "ContainerAttachWebsocket" responses: 101: description: "no error, hints proxy about hijacking" 200: description: "no error, no upgrade header found" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "detachKeys" in: "query" description: | Override the key sequence for detaching a container.Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,`, or `_`. type: "string" - name: "logs" in: "query" description: "Return logs" type: "boolean" default: false - name: "stream" in: "query" description: "Return stream" type: "boolean" default: false - name: "stdin" in: "query" description: "Attach to `stdin`" type: "boolean" default: false - name: "stdout" in: "query" description: "Attach to `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Attach to `stderr`" type: "boolean" default: false tags: ["Container"] /containers/{id}/wait: post: summary: "Wait for a container" description: "Block until a container stops, then returns the exit code." 
operationId: "ContainerWait" produces: ["application/json"] responses: 200: description: "The container has exit." schema: type: "object" title: "ContainerWaitResponse" description: "OK response to ContainerWait operation" required: [StatusCode] properties: StatusCode: description: "Exit code of the container" type: "integer" x-nullable: false Error: description: "container waiting error, if any" type: "object" properties: Message: description: "Details of an error" type: "string" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "condition" in: "query" description: | Wait until a container state reaches the given condition, either 'not-running' (default), 'next-exit', or 'removed'. type: "string" default: "not-running" tags: ["Container"] /containers/{id}: delete: summary: "Remove a container" operationId: "ContainerDelete" responses: 204: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "conflict" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: | You cannot remove a running container: c2ada9df5af8. Stop the container before attempting removal or force remove 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "v" in: "query" description: "Remove anonymous volumes associated with the container." type: "boolean" default: false - name: "force" in: "query" description: "If the container is running, kill it before removing it." type: "boolean" default: false - name: "link" in: "query" description: "Remove the specified link associated with the container." type: "boolean" default: false tags: ["Container"] /containers/{id}/archive: head: summary: "Get information about files in a container" description: | A response header `X-Docker-Container-Path-Stat` is returned, containing a base64 - encoded JSON object with some filesystem header information about the path. operationId: "ContainerArchiveInfo" responses: 200: description: "no error" headers: X-Docker-Container-Path-Stat: type: "string" description: | A base64 - encoded JSON object with some filesystem header information about the path 400: description: "Bad parameter" schema: allOf: - $ref: "#/definitions/ErrorResponse" - type: "object" properties: message: description: | The error message. Either "must specify path parameter" (path cannot be empty) or "not a directory" (path was asserted to be a directory but exists as a file). type: "string" x-nullable: false 404: description: "Container or path does not exist" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "path" in: "query" required: true description: "Resource in the container’s filesystem to archive." 
type: "string" tags: ["Container"] get: summary: "Get an archive of a filesystem resource in a container" description: "Get a tar archive of a resource in the filesystem of container id." operationId: "ContainerArchive" produces: ["application/x-tar"] responses: 200: description: "no error" 400: description: "Bad parameter" schema: allOf: - $ref: "#/definitions/ErrorResponse" - type: "object" properties: message: description: | The error message. Either "must specify path parameter" (path cannot be empty) or "not a directory" (path was asserted to be a directory but exists as a file). type: "string" x-nullable: false 404: description: "Container or path does not exist" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "path" in: "query" required: true description: "Resource in the container’s filesystem to archive." type: "string" tags: ["Container"] put: summary: "Extract an archive of files or folders to a directory in a container" description: "Upload a tar archive to be extracted to a path in the filesystem of container id." operationId: "PutContainerArchive" consumes: ["application/x-tar", "application/octet-stream"] responses: 200: description: "The content was extracted successfully" 400: description: "Bad parameter" schema: $ref: "#/definitions/ErrorResponse" 403: description: "Permission denied, the volume or container rootfs is marked as read-only." schema: $ref: "#/definitions/ErrorResponse" 404: description: "No such container or path does not exist inside the container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "path" in: "query" required: true description: "Path to a directory in the container to extract the archive’s contents into. " type: "string" - name: "noOverwriteDirNonDir" in: "query" description: | If `1`, `true`, or `True` then it will be an error if unpacking the given content would cause an existing directory to be replaced with a non-directory and vice versa. type: "string" - name: "copyUIDGID" in: "query" description: | If `1`, `true`, then it will copy UID/GID maps to the dest file or dir type: "string" - name: "inputStream" in: "body" required: true description: | The input stream must be a tar archive compressed with one of the following algorithms: `identity` (no compression), `gzip`, `bzip2`, or `xz`. schema: type: "string" format: "binary" tags: ["Container"] /containers/prune: post: summary: "Delete stopped containers" produces: - "application/json" operationId: "ContainerPrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `until=<timestamp>` Prune containers created before this timestamp. The `<timestamp>` can be Unix timestamps, date formatted timestamps, or Go duration strings (e.g. `10m`, `1h30m`) computed relative to the daemon machine’s time. - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune containers with (or without, in case `label!=...` is used) the specified labels. 
type: "string" responses: 200: description: "No error" schema: type: "object" title: "ContainerPruneResponse" properties: ContainersDeleted: description: "Container IDs that were deleted" type: "array" items: type: "string" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Container"] /images/json: get: summary: "List Images" description: "Returns a list of images on the server. Note that it uses a different, smaller representation of an image than inspecting a single image." operationId: "ImageList" produces: - "application/json" responses: 200: description: "Summary image data for the images matching the query" schema: type: "array" items: $ref: "#/definitions/ImageSummary" examples: application/json: - Id: "sha256:e216a057b1cb1efc11f8a268f37ef62083e70b1b38323ba252e25ac88904a7e8" ParentId: "" RepoTags: - "ubuntu:12.04" - "ubuntu:precise" RepoDigests: - "ubuntu@sha256:992069aee4016783df6345315302fa59681aae51a8eeb2f889dea59290f21787" Created: 1474925151 Size: 103579269 VirtualSize: 103579269 SharedSize: 0 Labels: {} Containers: 2 - Id: "sha256:3e314f95dcace0f5e4fd37b10862fe8398e3c60ed36600bc0ca5fda78b087175" ParentId: "" RepoTags: - "ubuntu:12.10" - "ubuntu:quantal" RepoDigests: - "ubuntu@sha256:002fba3e3255af10be97ea26e476692a7ebed0bb074a9ab960b2e7a1526b15d7" - "ubuntu@sha256:68ea0200f0b90df725d99d823905b04cf844f6039ef60c60bf3e019915017bd3" Created: 1403128455 Size: 172064416 VirtualSize: 172064416 SharedSize: 0 Labels: {} Containers: 5 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "all" in: "query" description: "Show all images. Only images from a final layer (no children) are shown by default." type: "boolean" default: false - name: "filters" in: "query" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the images list. Available filters: - `before`=(`<image-name>[:<tag>]`, `<image id>` or `<image@digest>`) - `dangling=true` - `label=key` or `label="key=value"` of an image label - `reference`=(`<image-name>[:<tag>]`) - `since`=(`<image-name>[:<tag>]`, `<image id>` or `<image@digest>`) type: "string" - name: "shared-size" in: "query" description: "Compute and show shared size as a `SharedSize` field on each image." type: "boolean" default: false - name: "digests" in: "query" description: "Show digest information as a `RepoDigests` field on each image." type: "boolean" default: false tags: ["Image"] /build: post: summary: "Build an image" description: | Build an image from a tar archive with a `Dockerfile` in it. The `Dockerfile` specifies how the image is built from the tar archive. It is typically in the archive's root, but can be at a different path or have a different name by specifying the `dockerfile` parameter. [See the `Dockerfile` reference for more information](https://docs.docker.com/engine/reference/builder/). The Docker daemon performs a preliminary validation of the `Dockerfile` before starting the build, and returns an error if the syntax is incorrect. After that, each instruction is run one-by-one until the ID of the new image is output. The build is canceled if the client drops the connection by quitting or being killed. 
operationId: "ImageBuild" consumes: - "application/octet-stream" produces: - "application/json" parameters: - name: "inputStream" in: "body" description: "A tar archive compressed with one of the following algorithms: identity (no compression), gzip, bzip2, xz." schema: type: "string" format: "binary" - name: "dockerfile" in: "query" description: "Path within the build context to the `Dockerfile`. This is ignored if `remote` is specified and points to an external `Dockerfile`." type: "string" default: "Dockerfile" - name: "t" in: "query" description: "A name and optional tag to apply to the image in the `name:tag` format. If you omit the tag the default `latest` value is assumed. You can provide several `t` parameters." type: "string" - name: "extrahosts" in: "query" description: "Extra hosts to add to /etc/hosts" type: "string" - name: "remote" in: "query" description: "A Git repository URI or HTTP/HTTPS context URI. If the URI points to a single text file, the file’s contents are placed into a file called `Dockerfile` and the image is built from that file. If the URI points to a tarball, the file is downloaded by the daemon and the contents therein used as the context for the build. If the URI points to a tarball and the `dockerfile` parameter is also specified, there must be a file with the corresponding path inside the tarball." type: "string" - name: "q" in: "query" description: "Suppress verbose build output." type: "boolean" default: false - name: "nocache" in: "query" description: "Do not use the cache when building the image." type: "boolean" default: false - name: "cachefrom" in: "query" description: "JSON array of images used for build cache resolution." type: "string" - name: "pull" in: "query" description: "Attempt to pull the image even if an older image exists locally." type: "string" - name: "rm" in: "query" description: "Remove intermediate containers after a successful build." type: "boolean" default: true - name: "forcerm" in: "query" description: "Always remove intermediate containers, even upon failure." type: "boolean" default: false - name: "memory" in: "query" description: "Set memory limit for build." type: "integer" - name: "memswap" in: "query" description: "Total memory (memory + swap). Set as `-1` to disable swap." type: "integer" - name: "cpushares" in: "query" description: "CPU shares (relative weight)." type: "integer" - name: "cpusetcpus" in: "query" description: "CPUs in which to allow execution (e.g., `0-3`, `0,1`)." type: "string" - name: "cpuperiod" in: "query" description: "The length of a CPU period in microseconds." type: "integer" - name: "cpuquota" in: "query" description: "Microseconds of CPU time that the container can get in a CPU period." type: "integer" - name: "buildargs" in: "query" description: > JSON map of string pairs for build-time variables. Users pass these values at build-time. Docker uses the buildargs as the environment context for commands run via the `Dockerfile` RUN instruction, or for variable expansion in other `Dockerfile` instructions. This is not meant for passing secret values. For example, the build arg `FOO=bar` would become `{"FOO":"bar"}` in JSON. This would result in the query parameter `buildargs={"FOO":"bar"}`. Note that `{"FOO":"bar"}` should be URI component encoded. [Read more about the buildargs instruction.](https://docs.docker.com/engine/reference/builder/#arg) type: "string" - name: "shmsize" in: "query" description: "Size of `/dev/shm` in bytes. The size must be greater than 0. 
If omitted the system uses 64MB." type: "integer" - name: "squash" in: "query" description: "Squash the resulting images layers into a single layer. *(Experimental release only.)*" type: "boolean" - name: "labels" in: "query" description: "Arbitrary key/value labels to set on the image, as a JSON map of string pairs." type: "string" - name: "networkmode" in: "query" description: | Sets the networking mode for the run commands during build. Supported standard values are: `bridge`, `host`, `none`, and `container:<name|id>`. Any other value is taken as a custom network's name or ID to which this container should connect to. type: "string" - name: "Content-type" in: "header" type: "string" enum: - "application/x-tar" default: "application/x-tar" - name: "X-Registry-Config" in: "header" description: | This is a base64-encoded JSON object with auth configurations for multiple registries that a build may refer to. The key is a registry URL, and the value is an auth configuration object, [as described in the authentication section](#section/Authentication). For example: ``` { "docker.example.com": { "username": "janedoe", "password": "hunter2" }, "https://index.docker.io/v1/": { "username": "mobydock", "password": "conta1n3rize14" } } ``` Only the registry domain name (and port if not the default 443) are required. However, for legacy reasons, the Docker Hub registry must be specified with both a `https://` prefix and a `/v1/` suffix even though Docker will prefer to use the v2 registry API. type: "string" - name: "platform" in: "query" description: "Platform in the format os[/arch[/variant]]" type: "string" default: "" - name: "target" in: "query" description: "Target build stage" type: "string" default: "" - name: "outputs" in: "query" description: "BuildKit output configuration" type: "string" default: "" responses: 200: description: "no error" 400: description: "Bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Image"] /build/prune: post: summary: "Delete builder cache" produces: - "application/json" operationId: "BuildPrune" parameters: - name: "keep-storage" in: "query" description: "Amount of disk space in bytes to keep for cache" type: "integer" format: "int64" - name: "all" in: "query" type: "boolean" description: "Remove all types of build cache" - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the list of build cache objects. Available filters: - `until=<duration>`: duration relative to daemon's time, during which build cache was not used, in Go's duration format (e.g., '24h') - `id=<id>` - `parent=<id>` - `type=<string>` - `description=<string>` - `inuse` - `shared` - `private` responses: 200: description: "No error" schema: type: "object" title: "BuildPruneResponse" properties: CachesDeleted: type: "array" items: description: "ID of build cache object" type: "string" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Image"] /images/create: post: summary: "Create an image" description: "Create an image by either pulling it from a registry or importing it." 
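The `X-Registry-Config` value described above is just that JSON object serialized and base64-encoded; a hedged Go sketch of producing it (URL-safe base64 is assumed here, and the registry name and credentials are placeholders taken from the example):

```go
package main

import (
	"encoding/base64"
	"encoding/json"
)

// encodeRegistryConfig serializes per-registry credentials into the value
// expected by the X-Registry-Config build header: JSON, then base64.
func encodeRegistryConfig() (string, error) {
	auths := map[string]map[string]string{
		"docker.example.com": {"username": "janedoe", "password": "hunter2"},
	}
	raw, err := json.Marshal(auths)
	if err != nil {
		return "", err
	}
	return base64.URLEncoding.EncodeToString(raw), nil
}
```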
operationId: "ImageCreate" consumes: - "text/plain" - "application/octet-stream" produces: - "application/json" responses: 200: description: "no error" 404: description: "repository does not exist or no read access" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "fromImage" in: "query" description: "Name of the image to pull. The name may include a tag or digest. This parameter may only be used when pulling an image. The pull is cancelled if the HTTP connection is closed." type: "string" - name: "fromSrc" in: "query" description: "Source to import. The value may be a URL from which the image can be retrieved or `-` to read the image from the request body. This parameter may only be used when importing an image." type: "string" - name: "repo" in: "query" description: "Repository name given to an image when it is imported. The repo may include a tag. This parameter may only be used when importing an image." type: "string" - name: "tag" in: "query" description: "Tag or digest. If empty when pulling an image, this causes all tags for the given image to be pulled." type: "string" - name: "message" in: "query" description: "Set commit message for imported image." type: "string" - name: "inputImage" in: "body" description: "Image content if the value `-` has been specified in fromSrc query parameter" schema: type: "string" required: false - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration. Refer to the [authentication section](#section/Authentication) for details. type: "string" - name: "changes" in: "query" description: | Apply `Dockerfile` instructions to the image that is created, for example: `changes=ENV DEBUG=true`. Note that `ENV DEBUG=true` should be URI component encoded. Supported `Dockerfile` instructions: `CMD`|`ENTRYPOINT`|`ENV`|`EXPOSE`|`ONBUILD`|`USER`|`VOLUME`|`WORKDIR` type: "array" items: type: "string" - name: "platform" in: "query" description: "Platform in the format os[/arch[/variant]]" type: "string" default: "" tags: ["Image"] /images/{name}/json: get: summary: "Inspect an image" description: "Return low-level information about an image." 
operationId: "ImageInspect" produces: - "application/json" responses: 200: description: "No error" schema: $ref: "#/definitions/Image" examples: application/json: Id: "sha256:85f05633ddc1c50679be2b16a0479ab6f7637f8884e0cfe0f4d20e1ebb3d6e7c" Container: "cb91e48a60d01f1e27028b4fc6819f4f290b3cf12496c8176ec714d0d390984a" Comment: "" Os: "linux" Architecture: "amd64" Parent: "sha256:91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c" ContainerConfig: Tty: false Hostname: "e611e15f9c9d" Domainname: "" AttachStdout: false PublishService: "" AttachStdin: false OpenStdin: false StdinOnce: false NetworkDisabled: false OnBuild: [] Image: "91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c" User: "" WorkingDir: "" MacAddress: "" AttachStderr: false Labels: com.example.license: "GPL" com.example.version: "1.0" com.example.vendor: "Acme" Env: - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" Cmd: - "/bin/sh" - "-c" - "#(nop) LABEL com.example.vendor=Acme com.example.license=GPL com.example.version=1.0" DockerVersion: "1.9.0-dev" VirtualSize: 188359297 Size: 0 Author: "" Created: "2015-09-10T08:30:53.26995814Z" GraphDriver: Name: "aufs" Data: {} RepoDigests: - "localhost:5000/test/busybox/example@sha256:cbbf2f9a99b47fc460d422812b6a5adff7dfee951d8fa2e4a98caa0382cfbdbf" RepoTags: - "example:1.0" - "example:latest" - "example:stable" Config: Image: "91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c" NetworkDisabled: false OnBuild: [] StdinOnce: false PublishService: "" AttachStdin: false OpenStdin: false Domainname: "" AttachStdout: false Tty: false Hostname: "e611e15f9c9d" Cmd: - "/bin/bash" Env: - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" Labels: com.example.vendor: "Acme" com.example.version: "1.0" com.example.license: "GPL" MacAddress: "" AttachStderr: false WorkingDir: "" User: "" RootFS: Type: "layers" Layers: - "sha256:1834950e52ce4d5a88a1bbd131c537f4d0e56d10ff0dd69e66be3b7dfa9df7e6" - "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such image: someimage (tag: latest)" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or id" type: "string" required: true tags: ["Image"] /images/{name}/history: get: summary: "Get the history of an image" description: "Return parent layers of an image." 
operationId: "ImageHistory" produces: ["application/json"] responses: 200: description: "List of image layers" schema: type: "array" items: type: "object" x-go-name: HistoryResponseItem title: "HistoryResponseItem" description: "individual image layer information in response to ImageHistory operation" required: [Id, Created, CreatedBy, Tags, Size, Comment] properties: Id: type: "string" x-nullable: false Created: type: "integer" format: "int64" x-nullable: false CreatedBy: type: "string" x-nullable: false Tags: type: "array" items: type: "string" Size: type: "integer" format: "int64" x-nullable: false Comment: type: "string" x-nullable: false examples: application/json: - Id: "3db9c44f45209632d6050b35958829c3a2aa256d81b9a7be45b362ff85c54710" Created: 1398108230 CreatedBy: "/bin/sh -c #(nop) ADD file:eb15dbd63394e063b805a3c32ca7bf0266ef64676d5a6fab4801f2e81e2a5148 in /" Tags: - "ubuntu:lucid" - "ubuntu:10.04" Size: 182964289 Comment: "" - Id: "6cfa4d1f33fb861d4d114f43b25abd0ac737509268065cdfd69d544a59c85ab8" Created: 1398108222 CreatedBy: "/bin/sh -c #(nop) MAINTAINER Tianon Gravi <[email protected]> - mkimage-debootstrap.sh -i iproute,iputils-ping,ubuntu-minimal -t lucid.tar.xz lucid http://archive.ubuntu.com/ubuntu/" Tags: [] Size: 0 Comment: "" - Id: "511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158" Created: 1371157430 CreatedBy: "" Tags: - "scratch12:latest" - "scratch:latest" Size: 0 Comment: "Imported from -" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID" type: "string" required: true tags: ["Image"] /images/{name}/push: post: summary: "Push an image" description: | Push an image to a registry. If you wish to push an image on to a private registry, that image must already have a tag which references the registry. For example, `registry.example.com/myimage:latest`. The push is cancelled if the HTTP connection is closed. operationId: "ImagePush" consumes: - "application/octet-stream" responses: 200: description: "No error" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID." type: "string" required: true - name: "tag" in: "query" description: "The tag to associate with the image on the registry." type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration. Refer to the [authentication section](#section/Authentication) for details. type: "string" required: true tags: ["Image"] /images/{name}/tag: post: summary: "Tag an image" description: "Tag an image so that it becomes part of a repository." operationId: "ImageTag" responses: 201: description: "No error" 400: description: "Bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Conflict" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID to tag." type: "string" required: true - name: "repo" in: "query" description: "The repository to tag in. For example, `someuser/someimage`." type: "string" - name: "tag" in: "query" description: "The name of the new tag." 
type: "string" tags: ["Image"] /images/{name}: delete: summary: "Remove an image" description: | Remove an image, along with any untagged parent images that were referenced by that image. Images can't be removed if they have descendant images, are being used by a running container or are being used by a build. operationId: "ImageDelete" produces: ["application/json"] responses: 200: description: "The image was deleted successfully" schema: type: "array" items: $ref: "#/definitions/ImageDeleteResponseItem" examples: application/json: - Untagged: "3e2f21a89f" - Deleted: "3e2f21a89f" - Deleted: "53b4f83ac9" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Conflict" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID" type: "string" required: true - name: "force" in: "query" description: "Remove the image even if it is being used by stopped containers or has other tags" type: "boolean" default: false - name: "noprune" in: "query" description: "Do not delete untagged parent images" type: "boolean" default: false tags: ["Image"] /images/search: get: summary: "Search images" description: "Search for an image on Docker Hub." operationId: "ImageSearch" produces: - "application/json" responses: 200: description: "No error" schema: type: "array" items: type: "object" title: "ImageSearchResponseItem" properties: description: type: "string" is_official: type: "boolean" is_automated: type: "boolean" name: type: "string" star_count: type: "integer" examples: application/json: - description: "" is_official: false is_automated: false name: "wma55/u1210sshd" star_count: 0 - description: "" is_official: false is_automated: false name: "jdswinbank/sshd" star_count: 0 - description: "" is_official: false is_automated: false name: "vgauthier/sshd" star_count: 0 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "term" in: "query" description: "Term to search" type: "string" required: true - name: "limit" in: "query" description: "Maximum number of results to return" type: "integer" - name: "filters" in: "query" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the images list. Available filters: - `is-automated=(true|false)` - `is-official=(true|false)` - `stars=<number>` Matches images that has at least 'number' stars. type: "string" tags: ["Image"] /images/prune: post: summary: "Delete unused images" produces: - "application/json" operationId: "ImagePrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `dangling=<boolean>` When set to `true` (or `1`), prune only unused *and* untagged images. When set to `false` (or `0`), all unused images are pruned. - `until=<string>` Prune images created before this timestamp. The `<timestamp>` can be Unix timestamps, date formatted timestamps, or Go duration strings (e.g. `10m`, `1h30m`) computed relative to the daemon machine’s time. - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune images with (or without, in case `label!=...` is used) the specified labels. 
type: "string" responses: 200: description: "No error" schema: type: "object" title: "ImagePruneResponse" properties: ImagesDeleted: description: "Images that were deleted" type: "array" items: $ref: "#/definitions/ImageDeleteResponseItem" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Image"] /auth: post: summary: "Check auth configuration" description: | Validate credentials for a registry and, if available, get an identity token for accessing the registry without password. operationId: "SystemAuth" consumes: ["application/json"] produces: ["application/json"] responses: 200: description: "An identity token was generated successfully." schema: type: "object" title: "SystemAuthResponse" required: [Status] properties: Status: description: "The status of the authentication" type: "string" x-nullable: false IdentityToken: description: "An opaque token used to authenticate a user after a successful login" type: "string" x-nullable: false examples: application/json: Status: "Login Succeeded" IdentityToken: "9cbaf023786cd7..." 204: description: "No error" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "authConfig" in: "body" description: "Authentication to check" schema: $ref: "#/definitions/AuthConfig" tags: ["System"] /info: get: summary: "Get system information" operationId: "SystemInfo" produces: - "application/json" responses: 200: description: "No error" schema: $ref: "#/definitions/SystemInfo" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /version: get: summary: "Get version" description: "Returns the version of Docker that is running and various information about the system that Docker is running on." operationId: "SystemVersion" produces: ["application/json"] responses: 200: description: "no error" schema: $ref: "#/definitions/SystemVersion" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /_ping: get: summary: "Ping" description: "This is a dummy endpoint you can use to test if the server is accessible." operationId: "SystemPing" produces: ["text/plain"] responses: 200: description: "no error" schema: type: "string" example: "OK" headers: API-Version: type: "string" description: "Max API Version the server supports" Builder-Version: type: "string" description: "Default version of docker image builder" Docker-Experimental: type: "boolean" description: "If the server is running with experimental mode enabled" Cache-Control: type: "string" default: "no-cache, no-store, must-revalidate" Pragma: type: "string" default: "no-cache" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" headers: Cache-Control: type: "string" default: "no-cache, no-store, must-revalidate" Pragma: type: "string" default: "no-cache" tags: ["System"] head: summary: "Ping" description: "This is a dummy endpoint you can use to test if the server is accessible." 
operationId: "SystemPingHead" produces: ["text/plain"] responses: 200: description: "no error" schema: type: "string" example: "(empty)" headers: API-Version: type: "string" description: "Max API Version the server supports" Builder-Version: type: "string" description: "Default version of docker image builder" Docker-Experimental: type: "boolean" description: "If the server is running with experimental mode enabled" Cache-Control: type: "string" default: "no-cache, no-store, must-revalidate" Pragma: type: "string" default: "no-cache" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /commit: post: summary: "Create a new image from a container" operationId: "ImageCommit" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "containerConfig" in: "body" description: "The container configuration" schema: $ref: "#/definitions/ContainerConfig" - name: "container" in: "query" description: "The ID or name of the container to commit" type: "string" - name: "repo" in: "query" description: "Repository name for the created image" type: "string" - name: "tag" in: "query" description: "Tag name for the create image" type: "string" - name: "comment" in: "query" description: "Commit message" type: "string" - name: "author" in: "query" description: "Author of the image (e.g., `John Hannibal Smith <[email protected]>`)" type: "string" - name: "pause" in: "query" description: "Whether to pause the container before committing" type: "boolean" default: true - name: "changes" in: "query" description: "`Dockerfile` instructions to apply while committing" type: "string" tags: ["Image"] /events: get: summary: "Monitor events" description: | Stream real-time events from the server. Various objects within Docker report events when something happens to them. 
Containers report these events: `attach`, `commit`, `copy`, `create`, `destroy`, `detach`, `die`, `exec_create`, `exec_detach`, `exec_start`, `exec_die`, `export`, `health_status`, `kill`, `oom`, `pause`, `rename`, `resize`, `restart`, `start`, `stop`, `top`, `unpause`, `update`, and `prune` Images report these events: `delete`, `import`, `load`, `pull`, `push`, `save`, `tag`, `untag`, and `prune` Volumes report these events: `create`, `mount`, `unmount`, `destroy`, and `prune` Networks report these events: `create`, `connect`, `disconnect`, `destroy`, `update`, `remove`, and `prune` The Docker daemon reports these events: `reload` Services report these events: `create`, `update`, and `remove` Nodes report these events: `create`, `update`, and `remove` Secrets report these events: `create`, `update`, and `remove` Configs report these events: `create`, `update`, and `remove` The Builder reports `prune` events operationId: "SystemEvents" produces: - "application/json" responses: 200: description: "no error" schema: type: "object" title: "SystemEventsResponse" properties: Type: description: "The type of object emitting the event" type: "string" Action: description: "The type of event" type: "string" Actor: type: "object" properties: ID: description: "The ID of the object emitting the event" type: "string" Attributes: description: "Various key/value attributes of the object, depending on its type" type: "object" additionalProperties: type: "string" time: description: "Timestamp of event" type: "integer" timeNano: description: "Timestamp of event, with nanosecond accuracy" type: "integer" format: "int64" examples: application/json: Type: "container" Action: "create" Actor: ID: "ede54ee1afda366ab42f824e8a5ffd195155d853ceaec74a927f249ea270c743" Attributes: com.example.some-label: "some-label-value" image: "alpine" name: "my-container" time: 1461943101 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "since" in: "query" description: "Show events created since this timestamp then stream new events." type: "string" - name: "until" in: "query" description: "Show events created until this timestamp then stop streaming." type: "string" - name: "filters" in: "query" description: | A JSON encoded value of filters (a `map[string][]string`) to process on the event list. 
Available filters: - `config=<string>` config name or ID - `container=<string>` container name or ID - `daemon=<string>` daemon name or ID - `event=<string>` event type - `image=<string>` image name or ID - `label=<string>` image or container label - `network=<string>` network name or ID - `node=<string>` node ID - `plugin`=<string> plugin name or ID - `scope`=<string> local or swarm - `secret=<string>` secret name or ID - `service=<string>` service name or ID - `type=<string>` object to filter by, one of `container`, `image`, `volume`, `network`, `daemon`, `plugin`, `node`, `service`, `secret` or `config` - `volume=<string>` volume name type: "string" tags: ["System"] /system/df: get: summary: "Get data usage information" operationId: "SystemDataUsage" responses: 200: description: "no error" schema: type: "object" title: "SystemDataUsageResponse" properties: LayersSize: type: "integer" format: "int64" Images: type: "array" items: $ref: "#/definitions/ImageSummary" Containers: type: "array" items: $ref: "#/definitions/ContainerSummary" Volumes: type: "array" items: $ref: "#/definitions/Volume" BuildCache: type: "array" items: $ref: "#/definitions/BuildCache" example: LayersSize: 1092588 Images: - Id: "sha256:2b8fd9751c4c0f5dd266fcae00707e67a2545ef34f9a29354585f93dac906749" ParentId: "" RepoTags: - "busybox:latest" RepoDigests: - "busybox@sha256:a59906e33509d14c036c8678d687bd4eec81ed7c4b8ce907b888c607f6a1e0e6" Created: 1466724217 Size: 1092588 SharedSize: 0 VirtualSize: 1092588 Labels: {} Containers: 1 Containers: - Id: "e575172ed11dc01bfce087fb27bee502db149e1a0fad7c296ad300bbff178148" Names: - "/top" Image: "busybox" ImageID: "sha256:2b8fd9751c4c0f5dd266fcae00707e67a2545ef34f9a29354585f93dac906749" Command: "top" Created: 1472592424 Ports: [] SizeRootFs: 1092588 Labels: {} State: "exited" Status: "Exited (0) 56 minutes ago" HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: IPAMConfig: null Links: null Aliases: null NetworkID: "d687bc59335f0e5c9ee8193e5612e8aee000c8c62ea170cfb99c098f95899d92" EndpointID: "8ed5115aeaad9abb174f68dcf135b49f11daf597678315231a32ca28441dec6a" Gateway: "172.18.0.1" IPAddress: "172.18.0.2" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:12:00:02" Mounts: [] Volumes: - Name: "my-volume" Driver: "local" Mountpoint: "/var/lib/docker/volumes/my-volume/_data" Labels: null Scope: "local" Options: null UsageData: Size: 10920104 RefCount: 2 BuildCache: - ID: "hw53o5aio51xtltp5xjp8v7fx" Parent: "" Type: "regular" Description: "pulled from docker.io/library/debian@sha256:234cb88d3020898631af0ccbbcca9a66ae7306ecd30c9720690858c1b007d2a0" InUse: false Shared: true Size: 0 CreatedAt: "2021-06-28T13:31:01.474619385Z" LastUsedAt: "2021-07-07T22:02:32.738075951Z" UsageCount: 26 - ID: "ndlpt0hhvkqcdfkputsk4cq9c" Parent: "hw53o5aio51xtltp5xjp8v7fx" Type: "regular" Description: "mount / from exec /bin/sh -c echo 'Binary::apt::APT::Keep-Downloaded-Packages \"true\";' > /etc/apt/apt.conf.d/keep-cache" InUse: false Shared: true Size: 51 CreatedAt: "2021-06-28T13:31:03.002625487Z" LastUsedAt: "2021-07-07T22:02:32.773909517Z" UsageCount: 26 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "type" in: "query" description: | Object types, for which to compute and return data. 
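As a sketch of consuming the `/events` stream described above with the filters just listed, the Go SDK returns a message channel and an error channel; the filter values are examples only.

```go
package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/api/types/filters"
	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}

	// Equivalent to GET /events?filters={"type":["container"],"event":["start","die"]}.
	msgs, errs := cli.Events(context.Background(), types.EventsOptions{
		Filters: filters.NewArgs(
			filters.Arg("type", "container"),
			filters.Arg("event", "start"),
			filters.Arg("event", "die"),
		),
	})

	// The stream runs until the context is cancelled or the connection drops.
	for {
		select {
		case m := <-msgs:
			fmt.Println(m.Type, m.Action, m.Actor.ID)
		case err := <-errs:
			panic(err) // includes io.EOF when the stream ends
		}
	}
}
```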
type: "array" collectionFormat: multi items: type: "string" enum: ["container", "image", "volume", "build-cache"] tags: ["System"] /images/{name}/get: get: summary: "Export an image" description: | Get a tarball containing all images and metadata for a repository. If `name` is a specific name and tag (e.g. `ubuntu:latest`), then only that image (and its parents) are returned. If `name` is an image ID, similarly only that image (and its parents) are returned, but with the exclusion of the `repositories` file in the tarball, as there were no image names referenced. ### Image tarball format An image tarball contains one directory per image layer (named using its long ID), each containing these files: - `VERSION`: currently `1.0` - the file format version - `json`: detailed layer information, similar to `docker inspect layer_id` - `layer.tar`: A tarfile containing the filesystem changes in this layer The `layer.tar` file contains `aufs` style `.wh..wh.aufs` files and directories for storing attribute changes and deletions. If the tarball defines a repository, the tarball should also include a `repositories` file at the root that contains a list of repository and tag names mapped to layer IDs. ```json { "hello-world": { "latest": "565a9d68a73f6706862bfe8409a7f659776d4d60a8d096eb4a3cbce6999cc2a1" } } ``` operationId: "ImageGet" produces: - "application/x-tar" responses: 200: description: "no error" schema: type: "string" format: "binary" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID" type: "string" required: true tags: ["Image"] /images/get: get: summary: "Export several images" description: | Get a tarball containing all images and metadata for several image repositories. For each value of the `names` parameter: if it is a specific name and tag (e.g. `ubuntu:latest`), then only that image (and its parents) are returned; if it is an image ID, similarly only that image (and its parents) are returned and there would be no names referenced in the 'repositories' file for this image ID. For details on the format, see the [export image endpoint](#operation/ImageGet). operationId: "ImageGetAll" produces: - "application/x-tar" responses: 200: description: "no error" schema: type: "string" format: "binary" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "names" in: "query" description: "Image names to filter by" type: "array" items: type: "string" tags: ["Image"] /images/load: post: summary: "Import images" description: | Load a set of images and tags into a repository. For details on the format, see the [export image endpoint](#operation/ImageGet). operationId: "ImageLoad" consumes: - "application/x-tar" produces: - "application/json" responses: 200: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "imagesTarball" in: "body" description: "Tar archive containing images" schema: type: "string" format: "binary" - name: "quiet" in: "query" description: "Suppress progress details during load." type: "boolean" default: false tags: ["Image"] /containers/{id}/exec: post: summary: "Create an exec instance" description: "Run a command inside a running container." 
operationId: "ContainerExec" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "container is paused" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "execConfig" in: "body" description: "Exec configuration" schema: type: "object" title: "ExecConfig" properties: AttachStdin: type: "boolean" description: "Attach to `stdin` of the exec command." AttachStdout: type: "boolean" description: "Attach to `stdout` of the exec command." AttachStderr: type: "boolean" description: "Attach to `stderr` of the exec command." DetachKeys: type: "string" description: | Override the key sequence for detaching a container. Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,` or `_`. Tty: type: "boolean" description: "Allocate a pseudo-TTY." Env: description: | A list of environment variables in the form `["VAR=value", ...]`. type: "array" items: type: "string" Cmd: type: "array" description: "Command to run, as a string or array of strings." items: type: "string" Privileged: type: "boolean" description: "Runs the exec process with extended privileges." default: false User: type: "string" description: | The user, and optionally, group to run the exec process inside the container. Format is one of: `user`, `user:group`, `uid`, or `uid:gid`. WorkingDir: type: "string" description: | The working directory for the exec process inside the container. example: AttachStdin: false AttachStdout: true AttachStderr: true DetachKeys: "ctrl-p,ctrl-q" Tty: false Cmd: - "date" Env: - "FOO=bar" - "BAZ=quux" required: true - name: "id" in: "path" description: "ID or name of container" type: "string" required: true tags: ["Exec"] /exec/{id}/start: post: summary: "Start an exec instance" description: | Starts a previously set up exec instance. If detach is true, this endpoint returns immediately after starting the command. Otherwise, it sets up an interactive session with the command. operationId: "ExecStart" consumes: - "application/json" produces: - "application/vnd.docker.raw-stream" responses: 200: description: "No error" 404: description: "No such exec instance" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Container is stopped or paused" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "execStartConfig" in: "body" schema: type: "object" title: "ExecStartConfig" properties: Detach: type: "boolean" description: "Detach from the command." Tty: type: "boolean" description: "Allocate a pseudo-TTY." example: Detach: false Tty: false - name: "id" in: "path" description: "Exec instance ID" required: true type: "string" tags: ["Exec"] /exec/{id}/resize: post: summary: "Resize an exec instance" description: | Resize the TTY session used by an exec instance. This endpoint only works if `tty` was specified as part of creating and starting the exec instance. 
operationId: "ExecResize" responses: 201: description: "No error" 404: description: "No such exec instance" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Exec instance ID" required: true type: "string" - name: "h" in: "query" description: "Height of the TTY session in characters" type: "integer" - name: "w" in: "query" description: "Width of the TTY session in characters" type: "integer" tags: ["Exec"] /exec/{id}/json: get: summary: "Inspect an exec instance" description: "Return low-level information about an exec instance." operationId: "ExecInspect" produces: - "application/json" responses: 200: description: "No error" schema: type: "object" title: "ExecInspectResponse" properties: CanRemove: type: "boolean" DetachKeys: type: "string" ID: type: "string" Running: type: "boolean" ExitCode: type: "integer" ProcessConfig: $ref: "#/definitions/ProcessConfig" OpenStdin: type: "boolean" OpenStderr: type: "boolean" OpenStdout: type: "boolean" ContainerID: type: "string" Pid: type: "integer" description: "The system process ID for the exec process." examples: application/json: CanRemove: false ContainerID: "b53ee82b53a40c7dca428523e34f741f3abc51d9f297a14ff874bf761b995126" DetachKeys: "" ExitCode: 2 ID: "f33bbfb39f5b142420f4759b2348913bd4a8d1a6d7fd56499cb41a1bb91d7b3b" OpenStderr: true OpenStdin: true OpenStdout: true ProcessConfig: arguments: - "-c" - "exit 2" entrypoint: "sh" privileged: false tty: true user: "1000" Running: false Pid: 42000 404: description: "No such exec instance" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Exec instance ID" required: true type: "string" tags: ["Exec"] /volumes: get: summary: "List volumes" operationId: "VolumeList" produces: ["application/json"] responses: 200: description: "Summary volume data that matches the query" schema: type: "object" title: "VolumeListResponse" description: "Volume list response" required: [Volumes, Warnings] properties: Volumes: type: "array" x-nullable: false description: "List of volumes" items: $ref: "#/definitions/Volume" Warnings: type: "array" x-nullable: false description: | Warnings that occurred when fetching the list of volumes. items: type: "string" examples: application/json: Volumes: - CreatedAt: "2017-07-19T12:00:26Z" Name: "tardis" Driver: "local" Mountpoint: "/var/lib/docker/volumes/tardis" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Scope: "local" Options: device: "tmpfs" o: "size=100m,uid=1000" type: "tmpfs" Warnings: [] 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" description: | JSON encoded value of the filters (a `map[string][]string`) to process on the volumes list. Available filters: - `dangling=<boolean>` When set to `true` (or `1`), returns all volumes that are not in use by a container. When set to `false` (or `0`), only volumes that are in use by one or more containers are returned. - `driver=<volume-driver-name>` Matches volumes based on their driver. - `label=<key>` or `label=<key>:<value>` Matches volumes based on the presence of a `label` alone or a `label` and a value. - `name=<volume-name>` Matches all or part of a volume name. 
type: "string" format: "json" tags: ["Volume"] /volumes/create: post: summary: "Create a volume" operationId: "VolumeCreate" consumes: ["application/json"] produces: ["application/json"] responses: 201: description: "The volume was created successfully" schema: $ref: "#/definitions/Volume" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "volumeConfig" in: "body" required: true description: "Volume configuration" schema: type: "object" description: "Volume configuration" title: "VolumeConfig" properties: Name: description: | The new volume's name. If not specified, Docker generates a name. type: "string" x-nullable: false Driver: description: "Name of the volume driver to use." type: "string" default: "local" x-nullable: false DriverOpts: description: | A mapping of driver options and values. These options are passed directly to the driver and are driver specific. type: "object" additionalProperties: type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" example: Name: "tardis" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Driver: "custom" tags: ["Volume"] /volumes/{name}: get: summary: "Inspect a volume" operationId: "VolumeInspect" produces: ["application/json"] responses: 200: description: "No error" schema: $ref: "#/definitions/Volume" 404: description: "No such volume" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" required: true description: "Volume name or ID" type: "string" tags: ["Volume"] delete: summary: "Remove a volume" description: "Instruct the driver to remove the volume." operationId: "VolumeDelete" responses: 204: description: "The volume was removed" 404: description: "No such volume or volume driver" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Volume is in use and cannot be removed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" required: true description: "Volume name or ID" type: "string" - name: "force" in: "query" description: "Force the removal of the volume" type: "boolean" default: false tags: ["Volume"] /volumes/prune: post: summary: "Delete unused volumes" produces: - "application/json" operationId: "VolumePrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune volumes with (or without, in case `label!=...` is used) the specified labels. type: "string" responses: 200: description: "No error" schema: type: "object" title: "VolumePruneResponse" properties: VolumesDeleted: description: "Volumes that were deleted" type: "array" items: type: "string" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Volume"] /networks: get: summary: "List networks" description: | Returns a list of networks. For details on the format, see the [network inspect endpoint](#operation/NetworkInspect). Note that it uses a different, smaller representation of a network than inspecting a single network. 
For example, the list of containers attached to the network is not propagated in API versions 1.28 and up. operationId: "NetworkList" produces: - "application/json" responses: 200: description: "No error" schema: type: "array" items: $ref: "#/definitions/Network" examples: application/json: - Name: "bridge" Id: "f2de39df4171b0dc801e8002d1d999b77256983dfc63041c0f34030aa3977566" Created: "2016-10-19T06:21:00.416543526Z" Scope: "local" Driver: "bridge" EnableIPv6: false Internal: false Attachable: false Ingress: false IPAM: Driver: "default" Config: - Subnet: "172.17.0.0/16" Options: com.docker.network.bridge.default_bridge: "true" com.docker.network.bridge.enable_icc: "true" com.docker.network.bridge.enable_ip_masquerade: "true" com.docker.network.bridge.host_binding_ipv4: "0.0.0.0" com.docker.network.bridge.name: "docker0" com.docker.network.driver.mtu: "1500" - Name: "none" Id: "e086a3893b05ab69242d3c44e49483a3bbbd3a26b46baa8f61ab797c1088d794" Created: "0001-01-01T00:00:00Z" Scope: "local" Driver: "null" EnableIPv6: false Internal: false Attachable: false Ingress: false IPAM: Driver: "default" Config: [] Containers: {} Options: {} - Name: "host" Id: "13e871235c677f196c4e1ecebb9dc733b9b2d2ab589e30c539efeda84a24215e" Created: "0001-01-01T00:00:00Z" Scope: "local" Driver: "host" EnableIPv6: false Internal: false Attachable: false Ingress: false IPAM: Driver: "default" Config: [] Containers: {} Options: {} 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" description: | JSON encoded value of the filters (a `map[string][]string`) to process on the networks list. Available filters: - `dangling=<boolean>` When set to `true` (or `1`), returns all networks that are not in use by a container. When set to `false` (or `0`), only networks that are in use by one or more containers are returned. - `driver=<driver-name>` Matches a network's driver. - `id=<network-id>` Matches all or part of a network ID. - `label=<key>` or `label=<key>=<value>` of a network label. - `name=<network-name>` Matches all or part of a network name. - `scope=["swarm"|"global"|"local"]` Filters networks by scope (`swarm`, `global`, or `local`). - `type=["custom"|"builtin"]` Filters networks by type. The `custom` keyword returns all user-defined networks. 
type: "string" tags: ["Network"] /networks/{id}: get: summary: "Inspect a network" operationId: "NetworkInspect" produces: - "application/json" responses: 200: description: "No error" schema: $ref: "#/definitions/Network" 404: description: "Network not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" - name: "verbose" in: "query" description: "Detailed inspect output for troubleshooting" type: "boolean" default: false - name: "scope" in: "query" description: "Filter the network by scope (swarm, global, or local)" type: "string" tags: ["Network"] delete: summary: "Remove a network" operationId: "NetworkDelete" responses: 204: description: "No error" 403: description: "operation not supported for pre-defined networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such network" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" tags: ["Network"] /networks/create: post: summary: "Create a network" operationId: "NetworkCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "No error" schema: type: "object" title: "NetworkCreateResponse" properties: Id: description: "The ID of the created network." type: "string" Warning: type: "string" example: Id: "22be93d5babb089c5aab8dbc369042fad48ff791584ca2da2100db837a1c7c30" Warning: "" 403: description: "operation not supported for pre-defined networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "plugin not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "networkConfig" in: "body" description: "Network configuration" required: true schema: type: "object" title: "NetworkCreateRequest" required: ["Name"] properties: Name: description: "The network's name." type: "string" CheckDuplicate: description: | Check for networks with duplicate names. Since Network is primarily keyed based on a random ID and not on the name, and network name is strictly a user-friendly alias to the network which is uniquely identified using ID, there is no guaranteed way to check for duplicates. CheckDuplicate is there to provide a best effort checking of any networks which has the same name but it is not guaranteed to catch all name collisions. type: "boolean" Driver: description: "Name of the network driver plugin to use." type: "string" default: "bridge" Internal: description: "Restrict external access to the network." type: "boolean" Attachable: description: | Globally scoped network is manually attachable by regular containers from workers in swarm mode. type: "boolean" Ingress: description: | Ingress network is the network which provides the routing-mesh in swarm mode. type: "boolean" IPAM: description: "Optional custom IP scheme for the network." $ref: "#/definitions/IPAM" EnableIPv6: description: "Enable IPv6 on the network." type: "boolean" Options: description: "Network specific options to be used by the drivers." type: "object" additionalProperties: type: "string" Labels: description: "User-defined key/value metadata." 
type: "object" additionalProperties: type: "string" example: Name: "isolated_nw" CheckDuplicate: false Driver: "bridge" EnableIPv6: true IPAM: Driver: "default" Config: - Subnet: "172.20.0.0/16" IPRange: "172.20.10.0/24" Gateway: "172.20.10.11" - Subnet: "2001:db8:abcd::/64" Gateway: "2001:db8:abcd::1011" Options: foo: "bar" Internal: true Attachable: false Ingress: false Options: com.docker.network.bridge.default_bridge: "true" com.docker.network.bridge.enable_icc: "true" com.docker.network.bridge.enable_ip_masquerade: "true" com.docker.network.bridge.host_binding_ipv4: "0.0.0.0" com.docker.network.bridge.name: "docker0" com.docker.network.driver.mtu: "1500" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" tags: ["Network"] /networks/{id}/connect: post: summary: "Connect a container to a network" operationId: "NetworkConnect" consumes: - "application/json" responses: 200: description: "No error" 403: description: "Operation not supported for swarm scoped networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "Network or container not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" - name: "container" in: "body" required: true schema: type: "object" title: "NetworkConnectRequest" properties: Container: type: "string" description: "The ID or name of the container to connect to the network." EndpointConfig: $ref: "#/definitions/EndpointSettings" example: Container: "3613f73ba0e4" EndpointConfig: IPAMConfig: IPv4Address: "172.24.56.89" IPv6Address: "2001:db8::5689" tags: ["Network"] /networks/{id}/disconnect: post: summary: "Disconnect a container from a network" operationId: "NetworkDisconnect" consumes: - "application/json" responses: 200: description: "No error" 403: description: "Operation not supported for swarm scoped networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "Network or container not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" - name: "container" in: "body" required: true schema: type: "object" title: "NetworkDisconnectRequest" properties: Container: type: "string" description: | The ID or name of the container to disconnect from the network. Force: type: "boolean" description: | Force the container to disconnect from the network. tags: ["Network"] /networks/prune: post: summary: "Delete unused networks" produces: - "application/json" operationId: "NetworkPrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `until=<timestamp>` Prune networks created before this timestamp. The `<timestamp>` can be Unix timestamps, date formatted timestamps, or Go duration strings (e.g. `10m`, `1h30m`) computed relative to the daemon machine’s time. - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune networks with (or without, in case `label!=...` is used) the specified labels. 
type: "string" responses: 200: description: "No error" schema: type: "object" title: "NetworkPruneResponse" properties: NetworksDeleted: description: "Networks that were deleted" type: "array" items: type: "string" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Network"] /plugins: get: summary: "List plugins" operationId: "PluginList" description: "Returns information about installed plugins." produces: ["application/json"] responses: 200: description: "No error" schema: type: "array" items: $ref: "#/definitions/Plugin" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the plugin list. Available filters: - `capability=<capability name>` - `enable=<true>|<false>` tags: ["Plugin"] /plugins/privileges: get: summary: "Get plugin privileges" operationId: "GetPluginPrivileges" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/PluginPrivilegeItem" example: - Name: "network" Description: "" Value: - "host" - Name: "mount" Description: "" Value: - "/data" - Name: "device" Description: "" Value: - "/dev/cpu_dma_latency" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "remote" in: "query" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" tags: - "Plugin" /plugins/pull: post: summary: "Install a plugin" operationId: "PluginPull" description: | Pulls and installs a plugin. After the plugin is installed, it can be enabled using the [`POST /plugins/{name}/enable` endpoint](#operation/PostPluginsEnable). produces: - "application/json" responses: 204: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "remote" in: "query" description: | Remote reference for plugin to install. The `:latest` tag is optional, and is used as the default if omitted. required: true type: "string" - name: "name" in: "query" description: | Local name for the pulled plugin. The `:latest` tag is optional, and is used as the default if omitted. required: false type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration to use when pulling a plugin from a registry. Refer to the [authentication section](#section/Authentication) for details. type: "string" - name: "body" in: "body" schema: type: "array" items: $ref: "#/definitions/PluginPrivilegeItem" example: - Name: "network" Description: "" Value: - "host" - Name: "mount" Description: "" Value: - "/data" - Name: "device" Description: "" Value: - "/dev/cpu_dma_latency" tags: ["Plugin"] /plugins/{name}/json: get: summary: "Inspect a plugin" operationId: "PluginInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Plugin" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. 
required: true type: "string" tags: ["Plugin"] /plugins/{name}: delete: summary: "Remove a plugin" operationId: "PluginDelete" responses: 200: description: "no error" schema: $ref: "#/definitions/Plugin" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "force" in: "query" description: | Disable the plugin before removing. This may result in issues if the plugin is in use by a container. type: "boolean" default: false tags: ["Plugin"] /plugins/{name}/enable: post: summary: "Enable a plugin" operationId: "PluginEnable" responses: 200: description: "no error" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "timeout" in: "query" description: "Set the HTTP client timeout (in seconds)" type: "integer" default: 0 tags: ["Plugin"] /plugins/{name}/disable: post: summary: "Disable a plugin" operationId: "PluginDisable" responses: 200: description: "no error" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" tags: ["Plugin"] /plugins/{name}/upgrade: post: summary: "Upgrade a plugin" operationId: "PluginUpgrade" responses: 204: description: "no error" 404: description: "plugin not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "remote" in: "query" description: | Remote reference to upgrade to. The `:latest` tag is optional, and is used as the default if omitted. required: true type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration to use when pulling a plugin from a registry. Refer to the [authentication section](#section/Authentication) for details. type: "string" - name: "body" in: "body" schema: type: "array" items: $ref: "#/definitions/PluginPrivilegeItem" example: - Name: "network" Description: "" Value: - "host" - Name: "mount" Description: "" Value: - "/data" - Name: "device" Description: "" Value: - "/dev/cpu_dma_latency" tags: ["Plugin"] /plugins/create: post: summary: "Create a plugin" operationId: "PluginCreate" consumes: - "application/x-tar" responses: 204: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "query" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. 
required: true type: "string" - name: "tarContext" in: "body" description: "Path to tar containing plugin rootfs and manifest" schema: type: "string" format: "binary" tags: ["Plugin"] /plugins/{name}/push: post: summary: "Push a plugin" operationId: "PluginPush" description: | Push a plugin to the registry. parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" responses: 200: description: "no error" 404: description: "plugin not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Plugin"] /plugins/{name}/set: post: summary: "Configure a plugin" operationId: "PluginSet" consumes: - "application/json" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "body" in: "body" schema: type: "array" items: type: "string" example: ["DEBUG=1"] responses: 204: description: "No error" 404: description: "Plugin not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Plugin"] /nodes: get: summary: "List nodes" operationId: "NodeList" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Node" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" description: | Filters to process on the nodes list, encoded as JSON (a `map[string][]string`). Available filters: - `id=<node id>` - `label=<engine label>` - `membership=`(`accepted`|`pending`)` - `name=<node name>` - `node.label=<node label>` - `role=`(`manager`|`worker`)` type: "string" tags: ["Node"] /nodes/{id}: get: summary: "Inspect a node" operationId: "NodeInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Node" 404: description: "no such node" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the node" type: "string" required: true tags: ["Node"] delete: summary: "Delete a node" operationId: "NodeDelete" responses: 200: description: "no error" 404: description: "no such node" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the node" type: "string" required: true - name: "force" in: "query" description: "Force remove a node from the swarm" default: false type: "boolean" tags: ["Node"] /nodes/{id}/update: post: summary: "Update a node" operationId: "NodeUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such node" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID of 
the node" type: "string" required: true - name: "body" in: "body" schema: $ref: "#/definitions/NodeSpec" - name: "version" in: "query" description: | The version number of the node object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true tags: ["Node"] /swarm: get: summary: "Inspect swarm" operationId: "SwarmInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Swarm" 404: description: "no such swarm" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" tags: ["Swarm"] /swarm/init: post: summary: "Initialize a new swarm" operationId: "SwarmInit" produces: - "application/json" - "text/plain" responses: 200: description: "no error" schema: description: "The node ID" type: "string" example: "7v2t30z9blmxuhnyo6s4cpenp" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is already part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: type: "object" title: "SwarmInitRequest" properties: ListenAddr: description: | Listen address used for inter-manager communication, as well as determining the networking interface used for the VXLAN Tunnel Endpoint (VTEP). This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the default swarm listening port is used. type: "string" AdvertiseAddr: description: | Externally reachable address advertised to other nodes. This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the port number from the listen address is used. If `AdvertiseAddr` is not specified, it will be automatically detected when possible. type: "string" DataPathAddr: description: | Address or interface to use for data path traffic (format: `<ip|interface>`), for example, `192.168.1.1`, or an interface, like `eth0`. If `DataPathAddr` is unspecified, the same address as `AdvertiseAddr` is used. The `DataPathAddr` specifies the address that global scope network drivers will publish towards other nodes in order to reach the containers running on this node. Using this parameter it is possible to separate the container data traffic from the management traffic of the cluster. type: "string" DataPathPort: description: | DataPathPort specifies the data path port number for data traffic. Acceptable port range is 1024 to 49151. if no port is set or is set to 0, default port 4789 will be used. type: "integer" format: "uint32" DefaultAddrPool: description: | Default Address Pool specifies default subnet pools for global scope networks. type: "array" items: type: "string" example: ["10.10.0.0/16", "20.20.0.0/16"] ForceNewCluster: description: "Force creation of a new swarm." type: "boolean" SubnetSize: description: | SubnetSize specifies the subnet size of the networks created from the default subnet pool. 
type: "integer" format: "uint32" Spec: $ref: "#/definitions/SwarmSpec" example: ListenAddr: "0.0.0.0:2377" AdvertiseAddr: "192.168.1.1:2377" DataPathPort: 4789 DefaultAddrPool: ["10.10.0.0/8", "20.20.0.0/8"] SubnetSize: 24 ForceNewCluster: false Spec: Orchestration: {} Raft: {} Dispatcher: {} CAConfig: {} EncryptionConfig: AutoLockManagers: false tags: ["Swarm"] /swarm/join: post: summary: "Join an existing swarm" operationId: "SwarmJoin" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is already part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: type: "object" title: "SwarmJoinRequest" properties: ListenAddr: description: | Listen address used for inter-manager communication if the node gets promoted to manager, as well as determining the networking interface used for the VXLAN Tunnel Endpoint (VTEP). type: "string" AdvertiseAddr: description: | Externally reachable address advertised to other nodes. This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the port number from the listen address is used. If `AdvertiseAddr` is not specified, it will be automatically detected when possible. type: "string" DataPathAddr: description: | Address or interface to use for data path traffic (format: `<ip|interface>`), for example, `192.168.1.1`, or an interface, like `eth0`. If `DataPathAddr` is unspecified, the same address as `AdvertiseAddr` is used. The `DataPathAddr` specifies the address that global scope network drivers will publish towards other nodes in order to reach the containers running on this node. Using this parameter it is possible to separate the container data traffic from the management traffic of the cluster. type: "string" RemoteAddrs: description: | Addresses of manager nodes already participating in the swarm. type: "array" items: type: "string" JoinToken: description: "Secret token for joining this swarm." type: "string" example: ListenAddr: "0.0.0.0:2377" AdvertiseAddr: "192.168.1.1:2377" RemoteAddrs: - "node1:2377" JoinToken: "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-7p73s1dx5in4tatdymyhg9hu2" tags: ["Swarm"] /swarm/leave: post: summary: "Leave a swarm" operationId: "SwarmLeave" responses: 200: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "force" description: | Force leave swarm, even if this is the last manager or that it will break the cluster. in: "query" type: "boolean" default: false tags: ["Swarm"] /swarm/update: post: summary: "Update a swarm" operationId: "SwarmUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: $ref: "#/definitions/SwarmSpec" - name: "version" in: "query" description: | The version number of the swarm object being updated. This is required to avoid conflicting writes. 
type: "integer" format: "int64" required: true - name: "rotateWorkerToken" in: "query" description: "Rotate the worker join token." type: "boolean" default: false - name: "rotateManagerToken" in: "query" description: "Rotate the manager join token." type: "boolean" default: false - name: "rotateManagerUnlockKey" in: "query" description: "Rotate the manager unlock key." type: "boolean" default: false tags: ["Swarm"] /swarm/unlockkey: get: summary: "Get the unlock key" operationId: "SwarmUnlockkey" consumes: - "application/json" responses: 200: description: "no error" schema: type: "object" title: "UnlockKeyResponse" properties: UnlockKey: description: "The swarm's unlock key." type: "string" example: UnlockKey: "SWMKEY-1-7c37Cc8654o6p38HnroywCi19pllOnGtbdZEgtKxZu8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" tags: ["Swarm"] /swarm/unlock: post: summary: "Unlock a locked manager" operationId: "SwarmUnlock" consumes: - "application/json" produces: - "application/json" parameters: - name: "body" in: "body" required: true schema: type: "object" title: "SwarmUnlockRequest" properties: UnlockKey: description: "The swarm's unlock key." type: "string" example: UnlockKey: "SWMKEY-1-7c37Cc8654o6p38HnroywCi19pllOnGtbdZEgtKxZu8" responses: 200: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" tags: ["Swarm"] /services: get: summary: "List services" operationId: "ServiceList" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Service" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the services list. Available filters: - `id=<service id>` - `label=<service label>` - `mode=["replicated"|"global"]` - `name=<service name>` - name: "status" in: "query" type: "boolean" description: | Include service status, with count of running and desired tasks. tags: ["Service"] /services/create: post: summary: "Create a service" operationId: "ServiceCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: type: "object" title: "ServiceCreateResponse" properties: ID: description: "The ID of the created service." 
type: "string" Warning: description: "Optional warning message" type: "string" example: ID: "ak7w3gjqoa3kuz8xcpnyy0pvl" Warning: "unable to pin image doesnotexist:latest to digest: image library/doesnotexist:latest not found" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 403: description: "network is not eligible for services" schema: $ref: "#/definitions/ErrorResponse" 409: description: "name conflicts with an existing service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: allOf: - $ref: "#/definitions/ServiceSpec" - type: "object" example: Name: "web" TaskTemplate: ContainerSpec: Image: "nginx:alpine" Mounts: - ReadOnly: true Source: "web-data" Target: "/usr/share/nginx/html" Type: "volume" VolumeOptions: DriverConfig: {} Labels: com.example.something: "something-value" Hosts: ["10.10.10.10 host1", "ABCD:EF01:2345:6789:ABCD:EF01:2345:6789 host2"] User: "33" DNSConfig: Nameservers: ["8.8.8.8"] Search: ["example.org"] Options: ["timeout:3"] Secrets: - File: Name: "www.example.org.key" UID: "33" GID: "33" Mode: 384 SecretID: "fpjqlhnwb19zds35k8wn80lq9" SecretName: "example_org_domain_key" LogDriver: Name: "json-file" Options: max-file: "3" max-size: "10M" Placement: {} Resources: Limits: MemoryBytes: 104857600 Reservations: {} RestartPolicy: Condition: "on-failure" Delay: 10000000000 MaxAttempts: 10 Mode: Replicated: Replicas: 4 UpdateConfig: Parallelism: 2 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 RollbackConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 EndpointSpec: Ports: - Protocol: "tcp" PublishedPort: 8080 TargetPort: 80 Labels: foo: "bar" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration for pulling from private registries. Refer to the [authentication section](#section/Authentication) for details. type: "string" tags: ["Service"] /services/{id}: get: summary: "Inspect a service" operationId: "ServiceInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Service" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID or name of service." required: true type: "string" - name: "insertDefaults" in: "query" description: "Fill empty fields with default values." type: "boolean" default: false tags: ["Service"] delete: summary: "Delete a service" operationId: "ServiceDelete" responses: 200: description: "no error" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID or name of service." 
required: true type: "string" tags: ["Service"] /services/{id}/update: post: summary: "Update a service" operationId: "ServiceUpdate" consumes: ["application/json"] produces: ["application/json"] responses: 200: description: "no error" schema: $ref: "#/definitions/ServiceUpdateResponse" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID or name of service." required: true type: "string" - name: "body" in: "body" required: true schema: allOf: - $ref: "#/definitions/ServiceSpec" - type: "object" example: Name: "top" TaskTemplate: ContainerSpec: Image: "busybox" Args: - "top" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ForceUpdate: 0 Mode: Replicated: Replicas: 1 UpdateConfig: Parallelism: 2 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 RollbackConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 EndpointSpec: Mode: "vip" - name: "version" in: "query" description: | The version number of the service object being updated. This is required to avoid conflicting writes. This version number should be the value as currently set on the service *before* the update. You can find the current version by calling `GET /services/{id}` required: true type: "integer" - name: "registryAuthFrom" in: "query" description: | If the `X-Registry-Auth` header is not specified, this parameter indicates where to find registry authorization credentials. type: "string" enum: ["spec", "previous-spec"] default: "spec" - name: "rollback" in: "query" description: | Set to this parameter to `previous` to cause a server-side rollback to the previous service spec. The supplied spec will be ignored in this case. type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration for pulling from private registries. Refer to the [authentication section](#section/Authentication) for details. type: "string" tags: ["Service"] /services/{id}/logs: get: summary: "Get service logs" description: | Get `stdout` and `stderr` logs from a service. See also [`/containers/{id}/logs`](#operation/ContainerLogs). **Note**: This endpoint works only for services with the `local`, `json-file` or `journald` logging drivers. operationId: "ServiceLogs" responses: 200: description: "logs returned as a stream in response body" schema: type: "string" format: "binary" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such service: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the service" type: "string" - name: "details" in: "query" description: "Show service context and extra details provided to logs." type: "boolean" default: false - name: "follow" in: "query" description: "Keep connection after returning logs." 
type: "boolean" default: false - name: "stdout" in: "query" description: "Return logs from `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Return logs from `stderr`" type: "boolean" default: false - name: "since" in: "query" description: "Only return logs since this time, as a UNIX timestamp" type: "integer" default: 0 - name: "timestamps" in: "query" description: "Add timestamps to every log line" type: "boolean" default: false - name: "tail" in: "query" description: | Only return this number of log lines from the end of the logs. Specify as an integer or `all` to output all log lines. type: "string" default: "all" tags: ["Service"] /tasks: get: summary: "List tasks" operationId: "TaskList" produces: - "application/json" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Task" example: - ID: "0kzzo1i0y4jz6027t0k7aezc7" Version: Index: 71 CreatedAt: "2016-06-07T21:07:31.171892745Z" UpdatedAt: "2016-06-07T21:07:31.376370513Z" Spec: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ServiceID: "9mnpnzenvg8p8tdbtq4wvbkcz" Slot: 1 NodeID: "60gvrl6tm78dmak4yl7srz94v" Status: Timestamp: "2016-06-07T21:07:31.290032978Z" State: "running" Message: "started" ContainerStatus: ContainerID: "e5d62702a1b48d01c3e02ca1e0212a250801fa8d67caca0b6f35919ebc12f035" PID: 677 DesiredState: "running" NetworksAttachments: - Network: ID: "4qvuz4ko70xaltuqbt8956gd1" Version: Index: 18 CreatedAt: "2016-06-07T20:31:11.912919752Z" UpdatedAt: "2016-06-07T21:07:29.955277358Z" Spec: Name: "ingress" Labels: com.docker.swarm.internal: "true" DriverConfiguration: {} IPAMOptions: Driver: {} Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" DriverState: Name: "overlay" Options: com.docker.network.driver.overlay.vxlanid_list: "256" IPAMOptions: Driver: Name: "default" Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" Addresses: - "10.255.0.10/16" - ID: "1yljwbmlr8er2waf8orvqpwms" Version: Index: 30 CreatedAt: "2016-06-07T21:07:30.019104782Z" UpdatedAt: "2016-06-07T21:07:30.231958098Z" Name: "hopeful_cori" Spec: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ServiceID: "9mnpnzenvg8p8tdbtq4wvbkcz" Slot: 1 NodeID: "60gvrl6tm78dmak4yl7srz94v" Status: Timestamp: "2016-06-07T21:07:30.202183143Z" State: "shutdown" Message: "shutdown" ContainerStatus: ContainerID: "1cf8d63d18e79668b0004a4be4c6ee58cddfad2dae29506d8781581d0688a213" DesiredState: "shutdown" NetworksAttachments: - Network: ID: "4qvuz4ko70xaltuqbt8956gd1" Version: Index: 18 CreatedAt: "2016-06-07T20:31:11.912919752Z" UpdatedAt: "2016-06-07T21:07:29.955277358Z" Spec: Name: "ingress" Labels: com.docker.swarm.internal: "true" DriverConfiguration: {} IPAMOptions: Driver: {} Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" DriverState: Name: "overlay" Options: com.docker.network.driver.overlay.vxlanid_list: "256" IPAMOptions: Driver: Name: "default" Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" Addresses: - "10.255.0.5/16" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the tasks list. 
Available filters: - `desired-state=(running | shutdown | accepted)` - `id=<task id>` - `label=key` or `label="key=value"` - `name=<task name>` - `node=<node id or name>` - `service=<service name>` tags: ["Task"] /tasks/{id}: get: summary: "Inspect a task" operationId: "TaskInspect" produces: - "application/json" responses: 200: description: "no error" schema: $ref: "#/definitions/Task" 404: description: "no such task" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID of the task" required: true type: "string" tags: ["Task"] /tasks/{id}/logs: get: summary: "Get task logs" description: | Get `stdout` and `stderr` logs from a task. See also [`/containers/{id}/logs`](#operation/ContainerLogs). **Note**: This endpoint works only for services with the `local`, `json-file` or `journald` logging drivers. operationId: "TaskLogs" responses: 200: description: "logs returned as a stream in response body" schema: type: "string" format: "binary" 404: description: "no such task" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such task: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID of the task" type: "string" - name: "details" in: "query" description: "Show task context and extra details provided to logs." type: "boolean" default: false - name: "follow" in: "query" description: "Keep connection after returning logs." type: "boolean" default: false - name: "stdout" in: "query" description: "Return logs from `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Return logs from `stderr`" type: "boolean" default: false - name: "since" in: "query" description: "Only return logs since this time, as a UNIX timestamp" type: "integer" default: 0 - name: "timestamps" in: "query" description: "Add timestamps to every log line" type: "boolean" default: false - name: "tail" in: "query" description: | Only return this number of log lines from the end of the logs. Specify as an integer or `all` to output all log lines. type: "string" default: "all" tags: ["Task"] /secrets: get: summary: "List secrets" operationId: "SecretList" produces: - "application/json" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Secret" example: - ID: "blt1owaxmitz71s9v5zh81zun" Version: Index: 85 CreatedAt: "2017-07-20T13:55:28.678958722Z" UpdatedAt: "2017-07-20T13:55:28.678958722Z" Spec: Name: "mysql-passwd" Labels: some.label: "some.value" Driver: Name: "secret-bucket" Options: OptionA: "value for driver option A" OptionB: "value for driver option B" - ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "app-dev.crt" Labels: foo: "bar" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the secrets list. 
Available filters: - `id=<secret id>` - `label=<key> or label=<key>=value` - `name=<secret name>` - `names=<secret name>` tags: ["Secret"] /secrets/create: post: summary: "Create a secret" operationId: "SecretCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 409: description: "name conflicts with an existing object" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" schema: allOf: - $ref: "#/definitions/SecretSpec" - type: "object" example: Name: "app-key.crt" Labels: foo: "bar" Data: "VEhJUyBJUyBOT1QgQSBSRUFMIENFUlRJRklDQVRFCg==" Driver: Name: "secret-bucket" Options: OptionA: "value for driver option A" OptionB: "value for driver option B" tags: ["Secret"] /secrets/{id}: get: summary: "Inspect a secret" operationId: "SecretInspect" produces: - "application/json" responses: 200: description: "no error" schema: $ref: "#/definitions/Secret" examples: application/json: ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "app-dev.crt" Labels: foo: "bar" Driver: Name: "secret-bucket" Options: OptionA: "value for driver option A" OptionB: "value for driver option B" 404: description: "secret not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the secret" tags: ["Secret"] delete: summary: "Delete a secret" operationId: "SecretDelete" produces: - "application/json" responses: 204: description: "no error" 404: description: "secret not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the secret" tags: ["Secret"] /secrets/{id}/update: post: summary: "Update a Secret" operationId: "SecretUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such secret" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the secret" type: "string" required: true - name: "body" in: "body" schema: $ref: "#/definitions/SecretSpec" description: | The spec of the secret to update. Currently, only the Labels field can be updated. All other fields must remain unchanged from the [SecretInspect endpoint](#operation/SecretInspect) response values. - name: "version" in: "query" description: | The version number of the secret object being updated. This is required to avoid conflicting writes. 
type: "integer" format: "int64" required: true tags: ["Secret"] /configs: get: summary: "List configs" operationId: "ConfigList" produces: - "application/json" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Config" example: - ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "server.conf" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the configs list. Available filters: - `id=<config id>` - `label=<key> or label=<key>=value` - `name=<config name>` - `names=<config name>` tags: ["Config"] /configs/create: post: summary: "Create a config" operationId: "ConfigCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 409: description: "name conflicts with an existing object" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" schema: allOf: - $ref: "#/definitions/ConfigSpec" - type: "object" example: Name: "server.conf" Labels: foo: "bar" Data: "VEhJUyBJUyBOT1QgQSBSRUFMIENFUlRJRklDQVRFCg==" tags: ["Config"] /configs/{id}: get: summary: "Inspect a config" operationId: "ConfigInspect" produces: - "application/json" responses: 200: description: "no error" schema: $ref: "#/definitions/Config" examples: application/json: ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "app-dev.crt" 404: description: "config not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the config" tags: ["Config"] delete: summary: "Delete a config" operationId: "ConfigDelete" produces: - "application/json" responses: 204: description: "no error" 404: description: "config not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the config" tags: ["Config"] /configs/{id}/update: post: summary: "Update a Config" operationId: "ConfigUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such config" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the config" type: "string" required: true - name: "body" in: "body" schema: $ref: "#/definitions/ConfigSpec" description: | The spec of the config to update. 
Currently, only the Labels field can be updated. All other fields must remain unchanged from the [ConfigInspect endpoint](#operation/ConfigInspect) response values. - name: "version" in: "query" description: | The version number of the config object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true tags: ["Config"] /distribution/{name}/json: get: summary: "Get image information from the registry" description: | Return image digest and platform information by contacting the registry. operationId: "DistributionInspect" produces: - "application/json" responses: 200: description: "descriptor and platform information" schema: type: "object" x-go-name: DistributionInspect title: "DistributionInspectResponse" required: [Descriptor, Platforms] properties: Descriptor: type: "object" description: | A descriptor struct containing digest, media type, and size. properties: mediaType: type: "string" size: type: "integer" format: "int64" digest: type: "string" urls: type: "array" items: type: "string" Platforms: type: "array" description: | An array containing all platforms supported by the image. items: type: "object" properties: architecture: type: "string" os: type: "string" os.version: type: "string" os.features: type: "array" items: type: "string" variant: type: "string" Features: type: "array" items: type: "string" examples: application/json: Descriptor: MediaType: "application/vnd.docker.distribution.manifest.v2+json" Digest: "sha256:c0537ff6a5218ef531ece93d4984efc99bbf3f7497c0a7726c88e2bb7584dc96" Size: 3987495 URLs: - "" Platforms: - architecture: "amd64" os: "linux" os.version: "" os.features: - "" variant: "" Features: - "" 401: description: "Failed authentication or no image found" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such image: someimage (tag: latest)" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or id" type: "string" required: true tags: ["Distribution"] /session: post: summary: "Initialize interactive session" description: | Start a new interactive session with a server. Session allows server to call back to the client for advanced capabilities. ### Hijacking This endpoint hijacks the HTTP connection to HTTP2 transport that allows the client to expose gPRC services on that connection. For example, the client sends this request to upgrade the connection: ``` POST /session HTTP/1.1 Upgrade: h2c Connection: Upgrade ``` The Docker daemon responds with a `101 UPGRADED` response follow with the raw stream: ``` HTTP/1.1 101 UPGRADED Connection: Upgrade Upgrade: h2c ``` operationId: "Session" produces: - "application/vnd.docker.raw-stream" responses: 101: description: "no error, hijacking successful" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Session"]
gesellix
41568dfc665ad4033d7fc860a5dac8102915008e
9bc0c4903f7f02ce287b9918f64795368e507f9d
Extraction of this type definition should probably be done in a separate commit
rvolosatovs
4,558
moby/moby
42,621
Improve swagger.yaml to match the 1.41 api version
I recently tried to rely on the swagger.yaml for code generation of an api client. Some parts required manual fixes of the swagger docs to match the current api version 1.41.

Each change described below lives in a dedicated commit, because they aim at different parts of the api description and aren't related per se. If you'd prefer to merge them all into a single commit, please leave a note.

**- What I did**

1. Add RestartPolicy "no" to swagger docs (a sketch of this change is shown below)
2. Add "changes" query parameter for /image/create to swagger docs
3. Fix ContainerSummary swagger docs (flattened)
4. Use explicit object names for improved swagger-based code generation (otherwise generic names would have been generated)
5. Fix swagger docs to match the [opencontainers image-spec](https://github.com/opencontainers/image-spec/blob/5ced465cc63831baf25330431a428a9d9444e192/specs-go/v1/descriptor.go)

**- How I did it**

Used the 1.41 swagger.yaml with swagger's codegen to generate Java code. Used that code to perform requests against a 1.41 Docker engine. Applied some fixes, then ran `hack/validate/swagger` and `make swagger-docs` to validate the changes.

**- How to verify it**

Compare the actual 1.41 api with the swagger.yaml.

**- Description for the changelog**

Update the swagger.yaml to match the version 1.41 api.

**- A picture of a cute animal (not mandatory but encouraged)**

![062321_JC_mountain-cat_feat](https://user-images.githubusercontent.com/432791/125209896-77588e80-e29c-11eb-9a5f-439f5f04a69b.jpeg)
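To make point 1 more concrete, the fragment below is a minimal sketch of what adding the missing `no` value to the `RestartPolicy` definition in `api/swagger.yaml` could look like. It mirrors the existing definition already present in the file; the wording of the added `no` bullet is illustrative and not necessarily the literal patch from this PR.

```yaml
# Illustrative sketch only (not the exact diff from the PR):
# the RestartPolicy enum gains "no" as an accepted value, matching
# what the daemon already accepts for the 1.41 API.
RestartPolicy:
  type: "object"
  properties:
    Name:
      type: "string"
      description: |
        - Empty string means not to restart
        - `no` Do not automatically restart
        - `always` Always restart
        - `unless-stopped` Restart always except when the user has manually stopped the container
        - `on-failure` Restart only when the container exit code is non-zero
      enum:
        - ""
        - "no"
        - "always"
        - "unless-stopped"
        - "on-failure"
    MaximumRetryCount:
      type: "integer"
      description: |
        If `on-failure` is used, the number of times to retry before giving up.
```

Without `no` listed in the enum, strict code generators tend to produce clients that reject a value the daemon itself accepts, which appears to be the motivation for that commit.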
null
2021-07-11 21:09:21+00:00
2021-08-21 22:28:57+00:00
api/swagger.yaml
# A Swagger 2.0 (a.k.a. OpenAPI) definition of the Engine API. # # This is used for generating API documentation and the types used by the # client/server. See api/README.md for more information. # # Some style notes: # - This file is used by ReDoc, which allows GitHub Flavored Markdown in # descriptions. # - There is no maximum line length, for ease of editing and pretty diffs. # - operationIds are in the format "NounVerb", with a singular noun. swagger: "2.0" schemes: - "http" - "https" produces: - "application/json" - "text/plain" consumes: - "application/json" - "text/plain" basePath: "/v1.42" info: title: "Docker Engine API" version: "1.42" x-logo: url: "https://docs.docker.com/images/logo-docker-main.png" description: | The Engine API is an HTTP API served by Docker Engine. It is the API the Docker client uses to communicate with the Engine, so everything the Docker client can do can be done with the API. Most of the client's commands map directly to API endpoints (e.g. `docker ps` is `GET /containers/json`). The notable exception is running containers, which consists of several API calls. # Errors The API uses standard HTTP status codes to indicate the success or failure of the API call. The body of the response will be JSON in the following format: ``` { "message": "page not found" } ``` # Versioning The API is usually changed in each release, so API calls are versioned to ensure that clients don't break. To lock to a specific version of the API, you prefix the URL with its version, for example, call `/v1.30/info` to use the v1.30 version of the `/info` endpoint. If the API version specified in the URL is not supported by the daemon, a HTTP `400 Bad Request` error message is returned. If you omit the version-prefix, the current version of the API (v1.42) is used. For example, calling `/info` is the same as calling `/v1.42/info`. Using the API without a version-prefix is deprecated and will be removed in a future release. Engine releases in the near future should support this version of the API, so your client will continue to work even if it is talking to a newer Engine. The API uses an open schema model, which means server may add extra properties to responses. Likewise, the server will ignore any extra query parameters and request body properties. When you write clients, you need to ignore additional properties in responses to ensure they do not break when talking to newer daemons. # Authentication Authentication for registries is handled client side. The client has to send authentication details to various endpoints that need to communicate with registries, such as `POST /images/(name)/push`. These are sent as `X-Registry-Auth` header as a [base64url encoded](https://tools.ietf.org/html/rfc4648#section-5) (JSON) string with the following structure: ``` { "username": "string", "password": "string", "email": "string", "serveraddress": "string" } ``` The `serveraddress` is a domain/IP without a protocol. Throughout this structure, double quotes are required. If you have already got an identity token from the [`/auth` endpoint](#operation/SystemAuth), you can just pass this instead of credentials: ``` { "identitytoken": "9cbaf023786cd7..." } ``` # The tags on paths define the menu sections in the ReDoc documentation, so # the usage of tags must make sense for that: # - They should be singular, not plural. # - There should not be too many tags, or the menu becomes unwieldy. 
For # example, it is preferable to add a path to the "System" tag instead of # creating a tag with a single path in it. # - The order of tags in this list defines the order in the menu. tags: # Primary objects - name: "Container" x-displayName: "Containers" description: | Create and manage containers. - name: "Image" x-displayName: "Images" - name: "Network" x-displayName: "Networks" description: | Networks are user-defined networks that containers can be attached to. See the [networking documentation](https://docs.docker.com/network/) for more information. - name: "Volume" x-displayName: "Volumes" description: | Create and manage persistent storage that can be attached to containers. - name: "Exec" x-displayName: "Exec" description: | Run new commands inside running containers. Refer to the [command-line reference](https://docs.docker.com/engine/reference/commandline/exec/) for more information. To exec a command in a container, you first need to create an exec instance, then start it. These two API endpoints are wrapped up in a single command-line command, `docker exec`. # Swarm things - name: "Swarm" x-displayName: "Swarm" description: | Engines can be clustered together in a swarm. Refer to the [swarm mode documentation](https://docs.docker.com/engine/swarm/) for more information. - name: "Node" x-displayName: "Nodes" description: | Nodes are instances of the Engine participating in a swarm. Swarm mode must be enabled for these endpoints to work. - name: "Service" x-displayName: "Services" description: | Services are the definitions of tasks to run on a swarm. Swarm mode must be enabled for these endpoints to work. - name: "Task" x-displayName: "Tasks" description: | A task is a container running on a swarm. It is the atomic scheduling unit of swarm. Swarm mode must be enabled for these endpoints to work. - name: "Secret" x-displayName: "Secrets" description: | Secrets are sensitive data that can be used by services. Swarm mode must be enabled for these endpoints to work. - name: "Config" x-displayName: "Configs" description: | Configs are application configurations that can be used by services. Swarm mode must be enabled for these endpoints to work. 
# System things - name: "Plugin" x-displayName: "Plugins" - name: "System" x-displayName: "System" definitions: Port: type: "object" description: "An open port on a container" required: [PrivatePort, Type] properties: IP: type: "string" format: "ip-address" description: "Host IP address that the container's port is mapped to" PrivatePort: type: "integer" format: "uint16" x-nullable: false description: "Port on the container" PublicPort: type: "integer" format: "uint16" description: "Port exposed on the host" Type: type: "string" x-nullable: false enum: ["tcp", "udp", "sctp"] example: PrivatePort: 8080 PublicPort: 80 Type: "tcp" MountPoint: type: "object" description: "A mount point inside a container" properties: Type: type: "string" Name: type: "string" Source: type: "string" Destination: type: "string" Driver: type: "string" Mode: type: "string" RW: type: "boolean" Propagation: type: "string" DeviceMapping: type: "object" description: "A device mapping between the host and container" properties: PathOnHost: type: "string" PathInContainer: type: "string" CgroupPermissions: type: "string" example: PathOnHost: "/dev/deviceName" PathInContainer: "/dev/deviceName" CgroupPermissions: "mrw" DeviceRequest: type: "object" description: "A request for devices to be sent to device drivers" properties: Driver: type: "string" example: "nvidia" Count: type: "integer" example: -1 DeviceIDs: type: "array" items: type: "string" example: - "0" - "1" - "GPU-fef8089b-4820-abfc-e83e-94318197576e" Capabilities: description: | A list of capabilities; an OR list of AND lists of capabilities. type: "array" items: type: "array" items: type: "string" example: # gpu AND nvidia AND compute - ["gpu", "nvidia", "compute"] Options: description: | Driver-specific options, specified as a key/value pairs. These options are passed directly to the driver. type: "object" additionalProperties: type: "string" ThrottleDevice: type: "object" properties: Path: description: "Device path" type: "string" Rate: description: "Rate" type: "integer" format: "int64" minimum: 0 Mount: type: "object" properties: Target: description: "Container path." type: "string" Source: description: "Mount source (e.g. a volume name, a host path)." type: "string" Type: description: | The mount type. Available types: - `bind` Mounts a file or directory from the host into the container. Must exist prior to creating the container. - `volume` Creates a volume with the given name and options (or uses a pre-existing volume with the same name and options). These are **not** removed when the container is removed. - `tmpfs` Create a tmpfs with the given options. The mount source cannot be specified for tmpfs. - `npipe` Mounts a named pipe from the host into the container. Must exist prior to creating the container. type: "string" enum: - "bind" - "volume" - "tmpfs" - "npipe" ReadOnly: description: "Whether the mount should be read-only." type: "boolean" Consistency: description: "The consistency requirement for the mount: `default`, `consistent`, `cached`, or `delegated`." type: "string" BindOptions: description: "Optional configuration for the `bind` type." type: "object" properties: Propagation: description: "A propagation mode with the value `[r]private`, `[r]shared`, or `[r]slave`." type: "string" enum: - "private" - "rprivate" - "shared" - "rshared" - "slave" - "rslave" NonRecursive: description: "Disable recursive bind mount." type: "boolean" default: false VolumeOptions: description: "Optional configuration for the `volume` type." 
type: "object" properties: NoCopy: description: "Populate volume with data from the target." type: "boolean" default: false Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" DriverConfig: description: "Map of driver specific options" type: "object" properties: Name: description: "Name of the driver to use to create the volume." type: "string" Options: description: "key/value map of driver specific options." type: "object" additionalProperties: type: "string" TmpfsOptions: description: "Optional configuration for the `tmpfs` type." type: "object" properties: SizeBytes: description: "The size for the tmpfs mount in bytes." type: "integer" format: "int64" Mode: description: "The permission mode for the tmpfs mount in an integer." type: "integer" RestartPolicy: description: | The behavior to apply when the container exits. The default is not to restart. An ever increasing delay (double the previous delay, starting at 100ms) is added before each restart to prevent flooding the server. type: "object" properties: Name: type: "string" description: | - Empty string means not to restart - `always` Always restart - `unless-stopped` Restart always except when the user has manually stopped the container - `on-failure` Restart only when the container exit code is non-zero enum: - "" - "always" - "unless-stopped" - "on-failure" MaximumRetryCount: type: "integer" description: | If `on-failure` is used, the number of times to retry before giving up. Resources: description: "A container's resources (cgroups config, ulimits, etc)" type: "object" properties: # Applicable to all platforms CpuShares: description: | An integer value representing this container's relative CPU weight versus other containers. type: "integer" Memory: description: "Memory limit in bytes." type: "integer" format: "int64" default: 0 # Applicable to UNIX platforms CgroupParent: description: | Path to `cgroups` under which the container's `cgroup` is created. If the path is not absolute, the path is considered to be relative to the `cgroups` path of the init process. Cgroups are created if they do not already exist. type: "string" BlkioWeight: description: "Block IO weight (relative weight)." type: "integer" minimum: 0 maximum: 1000 BlkioWeightDevice: description: | Block IO weight (relative device weight) in the form: ``` [{"Path": "device_path", "Weight": weight}] ``` type: "array" items: type: "object" properties: Path: type: "string" Weight: type: "integer" minimum: 0 BlkioDeviceReadBps: description: | Limit read rate (bytes per second) from a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" BlkioDeviceWriteBps: description: | Limit write rate (bytes per second) to a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" BlkioDeviceReadIOps: description: | Limit read rate (IO per second) from a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" BlkioDeviceWriteIOps: description: | Limit write rate (IO per second) to a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" CpuPeriod: description: "The length of a CPU period in microseconds." type: "integer" format: "int64" CpuQuota: description: | Microseconds of CPU time that the container can get in a CPU period. 
type: "integer" format: "int64" CpuRealtimePeriod: description: | The length of a CPU real-time period in microseconds. Set to 0 to allocate no time allocated to real-time tasks. type: "integer" format: "int64" CpuRealtimeRuntime: description: | The length of a CPU real-time runtime in microseconds. Set to 0 to allocate no time allocated to real-time tasks. type: "integer" format: "int64" CpusetCpus: description: | CPUs in which to allow execution (e.g., `0-3`, `0,1`). type: "string" example: "0-3" CpusetMems: description: | Memory nodes (MEMs) in which to allow execution (0-3, 0,1). Only effective on NUMA systems. type: "string" Devices: description: "A list of devices to add to the container." type: "array" items: $ref: "#/definitions/DeviceMapping" DeviceCgroupRules: description: "a list of cgroup rules to apply to the container" type: "array" items: type: "string" example: "c 13:* rwm" DeviceRequests: description: | A list of requests for devices to be sent to device drivers. type: "array" items: $ref: "#/definitions/DeviceRequest" KernelMemory: description: | Kernel memory limit in bytes. <p><br /></p> > **Deprecated**: This field is deprecated as the kernel 5.4 deprecated > `kmem.limit_in_bytes`. type: "integer" format: "int64" example: 209715200 KernelMemoryTCP: description: "Hard limit for kernel TCP buffer memory (in bytes)." type: "integer" format: "int64" MemoryReservation: description: "Memory soft limit in bytes." type: "integer" format: "int64" MemorySwap: description: | Total memory limit (memory + swap). Set as `-1` to enable unlimited swap. type: "integer" format: "int64" MemorySwappiness: description: | Tune a container's memory swappiness behavior. Accepts an integer between 0 and 100. type: "integer" format: "int64" minimum: 0 maximum: 100 NanoCpus: description: "CPU quota in units of 10<sup>-9</sup> CPUs." type: "integer" format: "int64" OomKillDisable: description: "Disable OOM Killer for the container." type: "boolean" Init: description: | Run an init inside the container that forwards signals and reaps processes. This field is omitted if empty, and the default (as configured on the daemon) is used. type: "boolean" x-nullable: true PidsLimit: description: | Tune a container's PIDs limit. Set `0` or `-1` for unlimited, or `null` to not change. type: "integer" format: "int64" x-nullable: true Ulimits: description: | A list of resource limits to set in the container. For example: ``` {"Name": "nofile", "Soft": 1024, "Hard": 2048} ``` type: "array" items: type: "object" properties: Name: description: "Name of ulimit" type: "string" Soft: description: "Soft limit" type: "integer" Hard: description: "Hard limit" type: "integer" # Applicable to Windows CpuCount: description: | The number of usable CPUs (Windows only). On Windows Server containers, the processor resource controls are mutually exclusive. The order of precedence is `CPUCount` first, then `CPUShares`, and `CPUPercent` last. type: "integer" format: "int64" CpuPercent: description: | The usable percentage of the available CPUs (Windows only). On Windows Server containers, the processor resource controls are mutually exclusive. The order of precedence is `CPUCount` first, then `CPUShares`, and `CPUPercent` last. type: "integer" format: "int64" IOMaximumIOps: description: "Maximum IOps for the container system drive (Windows only)" type: "integer" format: "int64" IOMaximumBandwidth: description: | Maximum IO in bytes per second for the container system drive (Windows only). 
type: "integer" format: "int64" Limit: description: | An object describing a limit on resources which can be requested by a task. type: "object" properties: NanoCPUs: type: "integer" format: "int64" example: 4000000000 MemoryBytes: type: "integer" format: "int64" example: 8272408576 Pids: description: | Limits the maximum number of PIDs in the container. Set `0` for unlimited. type: "integer" format: "int64" default: 0 example: 100 ResourceObject: description: | An object describing the resources which can be advertised by a node and requested by a task. type: "object" properties: NanoCPUs: type: "integer" format: "int64" example: 4000000000 MemoryBytes: type: "integer" format: "int64" example: 8272408576 GenericResources: $ref: "#/definitions/GenericResources" GenericResources: description: | User-defined resources can be either Integer resources (e.g, `SSD=3`) or String resources (e.g, `GPU=UUID1`). type: "array" items: type: "object" properties: NamedResourceSpec: type: "object" properties: Kind: type: "string" Value: type: "string" DiscreteResourceSpec: type: "object" properties: Kind: type: "string" Value: type: "integer" format: "int64" example: - DiscreteResourceSpec: Kind: "SSD" Value: 3 - NamedResourceSpec: Kind: "GPU" Value: "UUID1" - NamedResourceSpec: Kind: "GPU" Value: "UUID2" HealthConfig: description: "A test to perform to check that the container is healthy." type: "object" properties: Test: description: | The test to perform. Possible values are: - `[]` inherit healthcheck from image or parent image - `["NONE"]` disable healthcheck - `["CMD", args...]` exec arguments directly - `["CMD-SHELL", command]` run command with system's default shell type: "array" items: type: "string" Interval: description: | The time to wait between checks in nanoseconds. It should be 0 or at least 1000000 (1 ms). 0 means inherit. type: "integer" Timeout: description: | The time to wait before considering the check to have hung. It should be 0 or at least 1000000 (1 ms). 0 means inherit. type: "integer" Retries: description: | The number of consecutive failures needed to consider a container as unhealthy. 0 means inherit. type: "integer" StartPeriod: description: | Start period for the container to initialize before starting health-retries countdown in nanoseconds. It should be 0 or at least 1000000 (1 ms). 0 means inherit. type: "integer" Health: description: | Health stores information about the container's healthcheck results. type: "object" properties: Status: description: | Status is one of `none`, `starting`, `healthy` or `unhealthy` - "none" Indicates there is no healthcheck - "starting" Starting indicates that the container is not yet ready - "healthy" Healthy indicates that the container is running correctly - "unhealthy" Unhealthy indicates that the container has a problem type: "string" enum: - "none" - "starting" - "healthy" - "unhealthy" example: "healthy" FailingStreak: description: "FailingStreak is the number of consecutive failures" type: "integer" example: 0 Log: type: "array" description: | Log contains the last few results (oldest first) items: x-nullable: true $ref: "#/definitions/HealthcheckResult" HealthcheckResult: description: | HealthcheckResult stores information about a single run of a healthcheck probe type: "object" properties: Start: description: | Date and time at which this check started in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. 
type: "string" format: "date-time" example: "2020-01-04T10:44:24.496525531Z" End: description: | Date and time at which this check ended in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2020-01-04T10:45:21.364524523Z" ExitCode: description: | ExitCode meanings: - `0` healthy - `1` unhealthy - `2` reserved (considered unhealthy) - other values: error running probe type: "integer" example: 0 Output: description: "Output from last check" type: "string" HostConfig: description: "Container configuration that depends on the host we are running on" allOf: - $ref: "#/definitions/Resources" - type: "object" properties: # Applicable to all platforms Binds: type: "array" description: | A list of volume bindings for this container. Each volume binding is a string in one of these forms: - `host-src:container-dest[:options]` to bind-mount a host path into the container. Both `host-src`, and `container-dest` must be an _absolute_ path. - `volume-name:container-dest[:options]` to bind-mount a volume managed by a volume driver into the container. `container-dest` must be an _absolute_ path. `options` is an optional, comma-delimited list of: - `nocopy` disables automatic copying of data from the container path to the volume. The `nocopy` flag only applies to named volumes. - `[ro|rw]` mounts a volume read-only or read-write, respectively. If omitted or set to `rw`, volumes are mounted read-write. - `[z|Z]` applies SELinux labels to allow or deny multiple containers to read and write to the same volume. - `z`: a _shared_ content label is applied to the content. This label indicates that multiple containers can share the volume content, for both reading and writing. - `Z`: a _private unshared_ label is applied to the content. This label indicates that only the current container can use a private volume. Labeling systems such as SELinux require proper labels to be placed on volume content that is mounted into a container. Without a label, the security system can prevent a container's processes from using the content. By default, the labels set by the host operating system are not modified. - `[[r]shared|[r]slave|[r]private]` specifies mount [propagation behavior](https://www.kernel.org/doc/Documentation/filesystems/sharedsubtree.txt). This only applies to bind-mounted volumes, not internal volumes or named volumes. Mount propagation requires the source mount point (the location where the source directory is mounted in the host operating system) to have the correct propagation properties. For shared volumes, the source mount point must be set to `shared`. For slave volumes, the mount must be set to either `shared` or `slave`. items: type: "string" ContainerIDFile: type: "string" description: "Path to a file where the container ID is written" LogConfig: type: "object" description: "The logging configuration for this container" properties: Type: type: "string" enum: - "json-file" - "syslog" - "journald" - "gelf" - "fluentd" - "awslogs" - "splunk" - "etwlogs" - "none" Config: type: "object" additionalProperties: type: "string" NetworkMode: type: "string" description: | Network mode to use for this container. Supported standard values are: `bridge`, `host`, `none`, and `container:<name|id>`. Any other value is taken as a custom network's name to which this container should connect to. 
PortBindings: $ref: "#/definitions/PortMap" RestartPolicy: $ref: "#/definitions/RestartPolicy" AutoRemove: type: "boolean" description: | Automatically remove the container when the container's process exits. This has no effect if `RestartPolicy` is set. VolumeDriver: type: "string" description: "Driver that this container uses to mount volumes." VolumesFrom: type: "array" description: | A list of volumes to inherit from another container, specified in the form `<container name>[:<ro|rw>]`. items: type: "string" Mounts: description: | Specification for mounts to be added to the container. type: "array" items: $ref: "#/definitions/Mount" # Applicable to UNIX platforms CapAdd: type: "array" description: | A list of kernel capabilities to add to the container. Conflicts with option 'Capabilities'. items: type: "string" CapDrop: type: "array" description: | A list of kernel capabilities to drop from the container. Conflicts with option 'Capabilities'. items: type: "string" CgroupnsMode: type: "string" enum: - "private" - "host" description: | cgroup namespace mode for the container. Possible values are: - `"private"`: the container runs in its own private cgroup namespace - `"host"`: use the host system's cgroup namespace If not specified, the daemon default is used, which can either be `"private"` or `"host"`, depending on daemon version, kernel support and configuration. Dns: type: "array" description: "A list of DNS servers for the container to use." items: type: "string" DnsOptions: type: "array" description: "A list of DNS options." items: type: "string" DnsSearch: type: "array" description: "A list of DNS search domains." items: type: "string" ExtraHosts: type: "array" description: | A list of hostnames/IP mappings to add to the container's `/etc/hosts` file. Specified in the form `["hostname:IP"]`. items: type: "string" GroupAdd: type: "array" description: | A list of additional groups that the container process will run as. items: type: "string" IpcMode: type: "string" description: | IPC sharing mode for the container. Possible values are: - `"none"`: own private IPC namespace, with /dev/shm not mounted - `"private"`: own private IPC namespace - `"shareable"`: own private IPC namespace, with a possibility to share it with other containers - `"container:<name|id>"`: join another (shareable) container's IPC namespace - `"host"`: use the host system's IPC namespace If not specified, daemon default is used, which can either be `"private"` or `"shareable"`, depending on daemon version and configuration. Cgroup: type: "string" description: "Cgroup to use for the container." Links: type: "array" description: | A list of links for the container in the form `container_name:alias`. items: type: "string" OomScoreAdj: type: "integer" description: | An integer value containing the score given to the container in order to tune OOM killer preferences. example: 500 PidMode: type: "string" description: | Set the PID (Process) Namespace mode for the container. It can be either: - `"container:<name|id>"`: joins another container's PID namespace - `"host"`: use the host's PID namespace inside the container Privileged: type: "boolean" description: "Gives the container full access to the host." PublishAllPorts: type: "boolean" description: | Allocates an ephemeral host port for all of a container's exposed ports. Ports are de-allocated when the container stops and allocated when the container starts. The allocated port might be changed when restarting the container. 
The port is selected from the ephemeral port range that depends on the kernel. For example, on Linux the range is defined by `/proc/sys/net/ipv4/ip_local_port_range`. ReadonlyRootfs: type: "boolean" description: "Mount the container's root filesystem as read only." SecurityOpt: type: "array" description: "A list of string values to customize labels for MLS systems, such as SELinux." items: type: "string" StorageOpt: type: "object" description: | Storage driver options for this container, in the form `{"size": "120G"}`. additionalProperties: type: "string" Tmpfs: type: "object" description: | A map of container directories which should be replaced by tmpfs mounts, and their corresponding mount options. For example: ``` { "/run": "rw,noexec,nosuid,size=65536k" } ``` additionalProperties: type: "string" UTSMode: type: "string" description: "UTS namespace to use for the container." UsernsMode: type: "string" description: | Sets the usernamespace mode for the container when usernamespace remapping option is enabled. ShmSize: type: "integer" description: | Size of `/dev/shm` in bytes. If omitted, the system uses 64MB. minimum: 0 Sysctls: type: "object" description: | A list of kernel parameters (sysctls) to set in the container. For example: ``` {"net.ipv4.ip_forward": "1"} ``` additionalProperties: type: "string" Runtime: type: "string" description: "Runtime to use with this container." # Applicable to Windows ConsoleSize: type: "array" description: | Initial console size, as an `[height, width]` array. (Windows only) minItems: 2 maxItems: 2 items: type: "integer" minimum: 0 Isolation: type: "string" description: | Isolation technology of the container. (Windows only) enum: - "default" - "process" - "hyperv" MaskedPaths: type: "array" description: | The list of paths to be masked inside the container (this overrides the default set of paths). items: type: "string" ReadonlyPaths: type: "array" description: | The list of paths to be set as read-only inside the container (this overrides the default set of paths). items: type: "string" ContainerConfig: description: "Configuration for a container that is portable between hosts" type: "object" properties: Hostname: description: "The hostname to use for the container, as a valid RFC 1123 hostname." type: "string" Domainname: description: "The domain name to use for the container." type: "string" User: description: "The user that commands are run as inside the container." type: "string" AttachStdin: description: "Whether to attach to `stdin`." type: "boolean" default: false AttachStdout: description: "Whether to attach to `stdout`." type: "boolean" default: true AttachStderr: description: "Whether to attach to `stderr`." type: "boolean" default: true ExposedPorts: description: | An object mapping ports to an empty object in the form: `{"<port>/<tcp|udp|sctp>": {}}` type: "object" additionalProperties: type: "object" enum: - {} default: {} Tty: description: | Attach standard streams to a TTY, including `stdin` if it is not closed. type: "boolean" default: false OpenStdin: description: "Open `stdin`" type: "boolean" default: false StdinOnce: description: "Close `stdin` after one attached client disconnects" type: "boolean" default: false Env: description: | A list of environment variables to set inside the container in the form `["VAR=value", ...]`. A variable without `=` is removed from the environment, rather than to have an empty value. type: "array" items: type: "string" Cmd: description: | Command to run specified as a string or an array of strings. 
type: "array" items: type: "string" Healthcheck: $ref: "#/definitions/HealthConfig" ArgsEscaped: description: "Command is already escaped (Windows only)" type: "boolean" Image: description: | The name of the image to use when creating the container/ type: "string" Volumes: description: | An object mapping mount point paths inside the container to empty objects. type: "object" additionalProperties: type: "object" enum: - {} default: {} WorkingDir: description: "The working directory for commands to run in." type: "string" Entrypoint: description: | The entry point for the container as a string or an array of strings. If the array consists of exactly one empty string (`[""]`) then the entry point is reset to system default (i.e., the entry point used by docker when there is no `ENTRYPOINT` instruction in the `Dockerfile`). type: "array" items: type: "string" NetworkDisabled: description: "Disable networking for the container." type: "boolean" MacAddress: description: "MAC address of the container." type: "string" OnBuild: description: | `ONBUILD` metadata that were defined in the image's `Dockerfile`. type: "array" items: type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" StopSignal: description: | Signal to stop a container as a string or unsigned integer. type: "string" default: "SIGTERM" StopTimeout: description: "Timeout to stop a container in seconds." type: "integer" default: 10 Shell: description: | Shell for when `RUN`, `CMD`, and `ENTRYPOINT` uses a shell. type: "array" items: type: "string" NetworkingConfig: description: | NetworkingConfig represents the container's networking configuration for each of its interfaces. It is used for the networking configs specified in the `docker create` and `docker network connect` commands. type: "object" properties: EndpointsConfig: description: | A mapping of network name to endpoint configuration for that network. type: "object" additionalProperties: $ref: "#/definitions/EndpointSettings" example: # putting an example here, instead of using the example values from # /definitions/EndpointSettings, because containers/create currently # does not support attaching to multiple networks, so the example request # would be confusing if it showed that multiple networks can be contained # in the EndpointsConfig. # TODO remove once we support multiple networks on container create (see https://github.com/moby/moby/blob/07e6b843594e061f82baa5fa23c2ff7d536c2a05/daemon/create.go#L323) EndpointsConfig: isolated_nw: IPAMConfig: IPv4Address: "172.20.30.33" IPv6Address: "2001:db8:abcd::3033" LinkLocalIPs: - "169.254.34.68" - "fe80::3468" Links: - "container_1" - "container_2" Aliases: - "server_x" - "server_y" NetworkSettings: description: "NetworkSettings exposes the network settings in the API" type: "object" properties: Bridge: description: Name of the network's bridge (for example, `docker0`). type: "string" example: "docker0" SandboxID: description: SandboxID uniquely represents a container's network stack. type: "string" example: "9d12daf2c33f5959c8bf90aa513e4f65b561738661003029ec84830cd503a0c3" HairpinMode: description: | Indicates if hairpin NAT should be enabled on the virtual interface. type: "boolean" example: false LinkLocalIPv6Address: description: IPv6 unicast address using the link-local prefix. type: "string" example: "fe80::42:acff:fe11:1" LinkLocalIPv6PrefixLen: description: Prefix length of the IPv6 unicast address. 
type: "integer" example: "64" Ports: $ref: "#/definitions/PortMap" SandboxKey: description: SandboxKey identifies the sandbox type: "string" example: "/var/run/docker/netns/8ab54b426c38" # TODO is SecondaryIPAddresses actually used? SecondaryIPAddresses: description: "" type: "array" items: $ref: "#/definitions/Address" x-nullable: true # TODO is SecondaryIPv6Addresses actually used? SecondaryIPv6Addresses: description: "" type: "array" items: $ref: "#/definitions/Address" x-nullable: true # TODO properties below are part of DefaultNetworkSettings, which is # marked as deprecated since Docker 1.9 and to be removed in Docker v17.12 EndpointID: description: | EndpointID uniquely represents a service endpoint in a Sandbox. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "b88f5b905aabf2893f3cbc4ee42d1ea7980bbc0a92e2c8922b1e1795298afb0b" Gateway: description: | Gateway address for the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "172.17.0.1" GlobalIPv6Address: description: | Global IPv6 address for the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "2001:db8::5689" GlobalIPv6PrefixLen: description: | Mask length of the global IPv6 address. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "integer" example: 64 IPAddress: description: | IPv4 address for the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "172.17.0.4" IPPrefixLen: description: | Mask length of the IPv4 address. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "integer" example: 16 IPv6Gateway: description: | IPv6 gateway address for this network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. 
Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "2001:db8:2::100" MacAddress: description: | MAC address for the container on the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "02:42:ac:11:00:04" Networks: description: | Information about all networks that the container is connected to. type: "object" additionalProperties: $ref: "#/definitions/EndpointSettings" Address: description: Address represents an IPv4 or IPv6 IP address. type: "object" properties: Addr: description: IP address. type: "string" PrefixLen: description: Mask length of the IP address. type: "integer" PortMap: description: | PortMap describes the mapping of container ports to host ports, using the container's port-number and protocol as key in the format `<port>/<protocol>`, for example, `80/udp`. If a container's port is mapped for multiple protocols, separate entries are added to the mapping table. type: "object" additionalProperties: type: "array" x-nullable: true items: $ref: "#/definitions/PortBinding" example: "443/tcp": - HostIp: "127.0.0.1" HostPort: "4443" "80/tcp": - HostIp: "0.0.0.0" HostPort: "80" - HostIp: "0.0.0.0" HostPort: "8080" "80/udp": - HostIp: "0.0.0.0" HostPort: "80" "53/udp": - HostIp: "0.0.0.0" HostPort: "53" "2377/tcp": null PortBinding: description: | PortBinding represents a binding between a host IP address and a host port. type: "object" properties: HostIp: description: "Host IP address that the container's port is mapped to." type: "string" example: "127.0.0.1" HostPort: description: "Host port number that the container's port is mapped to." type: "string" example: "4443" GraphDriverData: description: "Information about a container's graph driver." 
type: "object" required: [Name, Data] properties: Name: type: "string" x-nullable: false Data: type: "object" x-nullable: false additionalProperties: type: "string" Image: type: "object" required: - Id - Parent - Comment - Created - Container - DockerVersion - Author - Architecture - Os - Size - VirtualSize - GraphDriver - RootFS properties: Id: type: "string" x-nullable: false RepoTags: type: "array" items: type: "string" RepoDigests: type: "array" items: type: "string" Parent: type: "string" x-nullable: false Comment: type: "string" x-nullable: false Created: type: "string" x-nullable: false Container: type: "string" x-nullable: false ContainerConfig: $ref: "#/definitions/ContainerConfig" DockerVersion: type: "string" x-nullable: false Author: type: "string" x-nullable: false Config: $ref: "#/definitions/ContainerConfig" Architecture: type: "string" x-nullable: false Os: type: "string" x-nullable: false OsVersion: type: "string" Size: type: "integer" format: "int64" x-nullable: false VirtualSize: type: "integer" format: "int64" x-nullable: false GraphDriver: $ref: "#/definitions/GraphDriverData" RootFS: type: "object" required: [Type] properties: Type: type: "string" x-nullable: false Layers: type: "array" items: type: "string" BaseLayer: type: "string" Metadata: type: "object" properties: LastTagTime: type: "string" format: "dateTime" ImageSummary: type: "object" required: - Id - ParentId - RepoTags - RepoDigests - Created - Size - SharedSize - VirtualSize - Labels - Containers properties: Id: type: "string" x-nullable: false ParentId: type: "string" x-nullable: false RepoTags: type: "array" x-nullable: false items: type: "string" RepoDigests: type: "array" x-nullable: false items: type: "string" Created: type: "integer" x-nullable: false Size: type: "integer" x-nullable: false SharedSize: type: "integer" x-nullable: false VirtualSize: type: "integer" x-nullable: false Labels: type: "object" x-nullable: false additionalProperties: type: "string" Containers: x-nullable: false type: "integer" AuthConfig: type: "object" properties: username: type: "string" password: type: "string" email: type: "string" serveraddress: type: "string" example: username: "hannibal" password: "xxxx" serveraddress: "https://index.docker.io/v1/" ProcessConfig: type: "object" properties: privileged: type: "boolean" user: type: "string" tty: type: "boolean" entrypoint: type: "string" arguments: type: "array" items: type: "string" Volume: type: "object" required: [Name, Driver, Mountpoint, Labels, Scope, Options] properties: Name: type: "string" description: "Name of the volume." x-nullable: false Driver: type: "string" description: "Name of the volume driver used by the volume." x-nullable: false Mountpoint: type: "string" description: "Mount path of the volume on the host." x-nullable: false CreatedAt: type: "string" format: "dateTime" description: "Date/Time the volume was created." Status: type: "object" description: | Low-level details about the volume, provided by the volume driver. Details are returned as a map with key/value pairs: `{"key":"value","key2":"value2"}`. The `Status` field is optional, and is omitted if the volume driver does not support this feature. additionalProperties: type: "object" Labels: type: "object" description: "User-defined key/value metadata." x-nullable: false additionalProperties: type: "string" Scope: type: "string" description: | The level at which the volume exists. Either `global` for cluster-wide, or `local` for machine level. 
default: "local" x-nullable: false enum: ["local", "global"] Options: type: "object" description: | The driver specific options used when creating the volume. additionalProperties: type: "string" UsageData: type: "object" x-nullable: true required: [Size, RefCount] description: | Usage details about the volume. This information is used by the `GET /system/df` endpoint, and omitted in other endpoints. properties: Size: type: "integer" default: -1 description: | Amount of disk space used by the volume (in bytes). This information is only available for volumes created with the `"local"` volume driver. For volumes created with other volume drivers, this field is set to `-1` ("not available") x-nullable: false RefCount: type: "integer" default: -1 description: | The number of containers referencing this volume. This field is set to `-1` if the reference-count is not available. x-nullable: false example: Name: "tardis" Driver: "custom" Mountpoint: "/var/lib/docker/volumes/tardis" Status: hello: "world" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Scope: "local" CreatedAt: "2016-06-07T20:31:11.853781916Z" Network: type: "object" properties: Name: type: "string" Id: type: "string" Created: type: "string" format: "dateTime" Scope: type: "string" Driver: type: "string" EnableIPv6: type: "boolean" IPAM: $ref: "#/definitions/IPAM" Internal: type: "boolean" Attachable: type: "boolean" Ingress: type: "boolean" Containers: type: "object" additionalProperties: $ref: "#/definitions/NetworkContainer" Options: type: "object" additionalProperties: type: "string" Labels: type: "object" additionalProperties: type: "string" example: Name: "net01" Id: "7d86d31b1478e7cca9ebed7e73aa0fdeec46c5ca29497431d3007d2d9e15ed99" Created: "2016-10-19T04:33:30.360899459Z" Scope: "local" Driver: "bridge" EnableIPv6: false IPAM: Driver: "default" Config: - Subnet: "172.19.0.0/16" Gateway: "172.19.0.1" Options: foo: "bar" Internal: false Attachable: false Ingress: false Containers: 19a4d5d687db25203351ed79d478946f861258f018fe384f229f2efa4b23513c: Name: "test" EndpointID: "628cadb8bcb92de107b2a1e516cbffe463e321f548feb37697cce00ad694f21a" MacAddress: "02:42:ac:13:00:02" IPv4Address: "172.19.0.2/16" IPv6Address: "" Options: com.docker.network.bridge.default_bridge: "true" com.docker.network.bridge.enable_icc: "true" com.docker.network.bridge.enable_ip_masquerade: "true" com.docker.network.bridge.host_binding_ipv4: "0.0.0.0" com.docker.network.bridge.name: "docker0" com.docker.network.driver.mtu: "1500" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" IPAM: type: "object" properties: Driver: description: "Name of the IPAM driver to use." type: "string" default: "default" Config: description: | List of IPAM configuration options, specified as a map: ``` {"Subnet": <CIDR>, "IPRange": <CIDR>, "Gateway": <IP address>, "AuxAddress": <device_name:IP address>} ``` type: "array" items: type: "object" additionalProperties: type: "string" Options: description: "Driver-specific options, specified as a map." 
type: "object" additionalProperties: type: "string" NetworkContainer: type: "object" properties: Name: type: "string" EndpointID: type: "string" MacAddress: type: "string" IPv4Address: type: "string" IPv6Address: type: "string" BuildInfo: type: "object" properties: id: type: "string" stream: type: "string" error: type: "string" errorDetail: $ref: "#/definitions/ErrorDetail" status: type: "string" progress: type: "string" progressDetail: $ref: "#/definitions/ProgressDetail" aux: $ref: "#/definitions/ImageID" BuildCache: type: "object" properties: ID: type: "string" Parent: type: "string" Type: type: "string" Description: type: "string" InUse: type: "boolean" Shared: type: "boolean" Size: description: | Amount of disk space used by the build cache (in bytes). type: "integer" CreatedAt: description: | Date and time at which the build cache was created in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2016-08-18T10:44:24.496525531Z" LastUsedAt: description: | Date and time at which the build cache was last used in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" x-nullable: true example: "2017-08-09T07:09:37.632105588Z" UsageCount: type: "integer" ImageID: type: "object" description: "Image ID or Digest" properties: ID: type: "string" example: ID: "sha256:85f05633ddc1c50679be2b16a0479ab6f7637f8884e0cfe0f4d20e1ebb3d6e7c" CreateImageInfo: type: "object" properties: id: type: "string" error: type: "string" status: type: "string" progress: type: "string" progressDetail: $ref: "#/definitions/ProgressDetail" PushImageInfo: type: "object" properties: error: type: "string" status: type: "string" progress: type: "string" progressDetail: $ref: "#/definitions/ProgressDetail" ErrorDetail: type: "object" properties: code: type: "integer" message: type: "string" ProgressDetail: type: "object" properties: current: type: "integer" total: type: "integer" ErrorResponse: description: "Represents an error." type: "object" required: ["message"] properties: message: description: "The error message." type: "string" x-nullable: false example: message: "Something went wrong." IdResponse: description: "Response to an API call that returns just an Id" type: "object" required: ["Id"] properties: Id: description: "The id of the newly created object." type: "string" x-nullable: false EndpointSettings: description: "Configuration for a network endpoint." type: "object" properties: # Configurations IPAMConfig: $ref: "#/definitions/EndpointIPAMConfig" Links: type: "array" items: type: "string" example: - "container_1" - "container_2" Aliases: type: "array" items: type: "string" example: - "server_x" - "server_y" # Operational data NetworkID: description: | Unique ID of the network. type: "string" example: "08754567f1f40222263eab4102e1c733ae697e8e354aa9cd6e18d7402835292a" EndpointID: description: | Unique ID for the service endpoint in a Sandbox. type: "string" example: "b88f5b905aabf2893f3cbc4ee42d1ea7980bbc0a92e2c8922b1e1795298afb0b" Gateway: description: | Gateway address for this network. type: "string" example: "172.17.0.1" IPAddress: description: | IPv4 address. type: "string" example: "172.17.0.4" IPPrefixLen: description: | Mask length of the IPv4 address. type: "integer" example: 16 IPv6Gateway: description: | IPv6 gateway address. type: "string" example: "2001:db8:2::100" GlobalIPv6Address: description: | Global IPv6 address. 
type: "string" example: "2001:db8::5689" GlobalIPv6PrefixLen: description: | Mask length of the global IPv6 address. type: "integer" format: "int64" example: 64 MacAddress: description: | MAC address for the endpoint on this network. type: "string" example: "02:42:ac:11:00:04" DriverOpts: description: | DriverOpts is a mapping of driver options and values. These options are passed directly to the driver and are driver specific. type: "object" x-nullable: true additionalProperties: type: "string" example: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" EndpointIPAMConfig: description: | EndpointIPAMConfig represents an endpoint's IPAM configuration. type: "object" x-nullable: true properties: IPv4Address: type: "string" example: "172.20.30.33" IPv6Address: type: "string" example: "2001:db8:abcd::3033" LinkLocalIPs: type: "array" items: type: "string" example: - "169.254.34.68" - "fe80::3468" PluginMount: type: "object" x-nullable: false required: [Name, Description, Settable, Source, Destination, Type, Options] properties: Name: type: "string" x-nullable: false example: "some-mount" Description: type: "string" x-nullable: false example: "This is a mount that's used by the plugin." Settable: type: "array" items: type: "string" Source: type: "string" example: "/var/lib/docker/plugins/" Destination: type: "string" x-nullable: false example: "/mnt/state" Type: type: "string" x-nullable: false example: "bind" Options: type: "array" items: type: "string" example: - "rbind" - "rw" PluginDevice: type: "object" required: [Name, Description, Settable, Path] x-nullable: false properties: Name: type: "string" x-nullable: false Description: type: "string" x-nullable: false Settable: type: "array" items: type: "string" Path: type: "string" example: "/dev/fuse" PluginEnv: type: "object" x-nullable: false required: [Name, Description, Settable, Value] properties: Name: x-nullable: false type: "string" Description: x-nullable: false type: "string" Settable: type: "array" items: type: "string" Value: type: "string" PluginInterfaceType: type: "object" x-nullable: false required: [Prefix, Capability, Version] properties: Prefix: type: "string" x-nullable: false Capability: type: "string" x-nullable: false Version: type: "string" x-nullable: false Plugin: description: "A plugin for the Engine API" type: "object" required: [Settings, Enabled, Config, Name] properties: Id: type: "string" example: "5724e2c8652da337ab2eedd19fc6fc0ec908e4bd907c7421bf6a8dfc70c4c078" Name: type: "string" x-nullable: false example: "tiborvass/sample-volume-plugin" Enabled: description: True if the plugin is running. False if the plugin is not running, only installed. type: "boolean" x-nullable: false example: true Settings: description: "Settings that can be modified by users." type: "object" x-nullable: false required: [Args, Devices, Env, Mounts] properties: Mounts: type: "array" items: $ref: "#/definitions/PluginMount" Env: type: "array" items: type: "string" example: - "DEBUG=0" Args: type: "array" items: type: "string" Devices: type: "array" items: $ref: "#/definitions/PluginDevice" PluginReference: description: "plugin remote reference used to push/pull the plugin" type: "string" x-nullable: false example: "localhost:5000/tiborvass/sample-volume-plugin:latest" Config: description: "The config of a plugin." 
type: "object" x-nullable: false required: - Description - Documentation - Interface - Entrypoint - WorkDir - Network - Linux - PidHost - PropagatedMount - IpcHost - Mounts - Env - Args properties: DockerVersion: description: "Docker Version used to create the plugin" type: "string" x-nullable: false example: "17.06.0-ce" Description: type: "string" x-nullable: false example: "A sample volume plugin for Docker" Documentation: type: "string" x-nullable: false example: "https://docs.docker.com/engine/extend/plugins/" Interface: description: "The interface between Docker and the plugin" x-nullable: false type: "object" required: [Types, Socket] properties: Types: type: "array" items: $ref: "#/definitions/PluginInterfaceType" example: - "docker.volumedriver/1.0" Socket: type: "string" x-nullable: false example: "plugins.sock" ProtocolScheme: type: "string" example: "some.protocol/v1.0" description: "Protocol to use for clients connecting to the plugin." enum: - "" - "moby.plugins.http/v1" Entrypoint: type: "array" items: type: "string" example: - "/usr/bin/sample-volume-plugin" - "/data" WorkDir: type: "string" x-nullable: false example: "/bin/" User: type: "object" x-nullable: false properties: UID: type: "integer" format: "uint32" example: 1000 GID: type: "integer" format: "uint32" example: 1000 Network: type: "object" x-nullable: false required: [Type] properties: Type: x-nullable: false type: "string" example: "host" Linux: type: "object" x-nullable: false required: [Capabilities, AllowAllDevices, Devices] properties: Capabilities: type: "array" items: type: "string" example: - "CAP_SYS_ADMIN" - "CAP_SYSLOG" AllowAllDevices: type: "boolean" x-nullable: false example: false Devices: type: "array" items: $ref: "#/definitions/PluginDevice" PropagatedMount: type: "string" x-nullable: false example: "/mnt/volumes" IpcHost: type: "boolean" x-nullable: false example: false PidHost: type: "boolean" x-nullable: false example: false Mounts: type: "array" items: $ref: "#/definitions/PluginMount" Env: type: "array" items: $ref: "#/definitions/PluginEnv" example: - Name: "DEBUG" Description: "If set, prints debug messages" Settable: null Value: "0" Args: type: "object" x-nullable: false required: [Name, Description, Settable, Value] properties: Name: x-nullable: false type: "string" example: "args" Description: x-nullable: false type: "string" example: "command line arguments" Settable: type: "array" items: type: "string" Value: type: "array" items: type: "string" rootfs: type: "object" properties: type: type: "string" example: "layers" diff_ids: type: "array" items: type: "string" example: - "sha256:675532206fbf3030b8458f88d6e26d4eb1577688a25efec97154c94e8b6b4887" - "sha256:e216a057b1cb1efc11f8a268f37ef62083e70b1b38323ba252e25ac88904a7e8" ObjectVersion: description: | The version number of the object such as node, service, etc. This is needed to avoid conflicting writes. The client must send the version number along with the modified specification when updating these objects. This approach ensures safe concurrency and determinism in that the change on the object may not be applied if the version number has changed from the last read. In other words, if two update requests specify the same base version, only one of the requests can succeed. As a result, two separate update requests that happen at the same time will not unintentionally overwrite each other. 
type: "object" properties: Index: type: "integer" format: "uint64" example: 373531 NodeSpec: type: "object" properties: Name: description: "Name for the node." type: "string" example: "my-node" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" Role: description: "Role of the node." type: "string" enum: - "worker" - "manager" example: "manager" Availability: description: "Availability of the node." type: "string" enum: - "active" - "pause" - "drain" example: "active" example: Availability: "active" Name: "node-name" Role: "manager" Labels: foo: "bar" Node: type: "object" properties: ID: type: "string" example: "24ifsmvkjbyhk" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: description: | Date and time at which the node was added to the swarm in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2016-08-18T10:44:24.496525531Z" UpdatedAt: description: | Date and time at which the node was last updated in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2017-08-09T07:09:37.632105588Z" Spec: $ref: "#/definitions/NodeSpec" Description: $ref: "#/definitions/NodeDescription" Status: $ref: "#/definitions/NodeStatus" ManagerStatus: $ref: "#/definitions/ManagerStatus" NodeDescription: description: | NodeDescription encapsulates the properties of the Node as reported by the agent. type: "object" properties: Hostname: type: "string" example: "bf3067039e47" Platform: $ref: "#/definitions/Platform" Resources: $ref: "#/definitions/ResourceObject" Engine: $ref: "#/definitions/EngineDescription" TLSInfo: $ref: "#/definitions/TLSInfo" Platform: description: | Platform represents the platform (Arch/OS). type: "object" properties: Architecture: description: | Architecture represents the hardware architecture (for example, `x86_64`). type: "string" example: "x86_64" OS: description: | OS represents the Operating System (for example, `linux` or `windows`). type: "string" example: "linux" EngineDescription: description: "EngineDescription provides information about an engine." type: "object" properties: EngineVersion: type: "string" example: "17.06.0" Labels: type: "object" additionalProperties: type: "string" example: foo: "bar" Plugins: type: "array" items: type: "object" properties: Type: type: "string" Name: type: "string" example: - Type: "Log" Name: "awslogs" - Type: "Log" Name: "fluentd" - Type: "Log" Name: "gcplogs" - Type: "Log" Name: "gelf" - Type: "Log" Name: "journald" - Type: "Log" Name: "json-file" - Type: "Log" Name: "logentries" - Type: "Log" Name: "splunk" - Type: "Log" Name: "syslog" - Type: "Network" Name: "bridge" - Type: "Network" Name: "host" - Type: "Network" Name: "ipvlan" - Type: "Network" Name: "macvlan" - Type: "Network" Name: "null" - Type: "Network" Name: "overlay" - Type: "Volume" Name: "local" - Type: "Volume" Name: "localhost:5000/vieux/sshfs:latest" - Type: "Volume" Name: "vieux/sshfs:latest" TLSInfo: description: | Information about the issuer of leaf TLS certificates and the trusted root CA certificate. type: "object" properties: TrustRoot: description: | The root CA certificate(s) that are used to validate leaf TLS certificates. type: "string" CertIssuerSubject: description: The base64-url-safe-encoded raw subject bytes of the issuer. type: "string" CertIssuerPublicKey: description: | The base64-url-safe-encoded raw public key bytes of the issuer. 
type: "string" example: TrustRoot: | -----BEGIN CERTIFICATE----- MIIBajCCARCgAwIBAgIUbYqrLSOSQHoxD8CwG6Bi2PJi9c8wCgYIKoZIzj0EAwIw EzERMA8GA1UEAxMIc3dhcm0tY2EwHhcNMTcwNDI0MjE0MzAwWhcNMzcwNDE5MjE0 MzAwWjATMREwDwYDVQQDEwhzd2FybS1jYTBZMBMGByqGSM49AgEGCCqGSM49AwEH A0IABJk/VyMPYdaqDXJb/VXh5n/1Yuv7iNrxV3Qb3l06XD46seovcDWs3IZNV1lf 3Skyr0ofcchipoiHkXBODojJydSjQjBAMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMB Af8EBTADAQH/MB0GA1UdDgQWBBRUXxuRcnFjDfR/RIAUQab8ZV/n4jAKBggqhkjO PQQDAgNIADBFAiAy+JTe6Uc3KyLCMiqGl2GyWGQqQDEcO3/YG36x7om65AIhAJvz pxv6zFeVEkAEEkqIYi0omA9+CjanB/6Bz4n1uw8H -----END CERTIFICATE----- CertIssuerSubject: "MBMxETAPBgNVBAMTCHN3YXJtLWNh" CertIssuerPublicKey: "MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEmT9XIw9h1qoNclv9VeHmf/Vi6/uI2vFXdBveXTpcPjqx6i9wNazchk1XWV/dKTKvSh9xyGKmiIeRcE4OiMnJ1A==" NodeStatus: description: | NodeStatus represents the status of a node. It provides the current status of the node, as seen by the manager. type: "object" properties: State: $ref: "#/definitions/NodeState" Message: type: "string" example: "" Addr: description: "IP address of the node." type: "string" example: "172.17.0.2" NodeState: description: "NodeState represents the state of a node." type: "string" enum: - "unknown" - "down" - "ready" - "disconnected" example: "ready" ManagerStatus: description: | ManagerStatus represents the status of a manager. It provides the current status of a node's manager component, if the node is a manager. x-nullable: true type: "object" properties: Leader: type: "boolean" default: false example: true Reachability: $ref: "#/definitions/Reachability" Addr: description: | The IP address and port at which the manager is reachable. type: "string" example: "10.0.0.46:2377" Reachability: description: "Reachability represents the reachability of a node." type: "string" enum: - "unknown" - "unreachable" - "reachable" example: "reachable" SwarmSpec: description: "User modifiable swarm configuration." type: "object" properties: Name: description: "Name of the swarm." type: "string" example: "default" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" example: com.example.corp.type: "production" com.example.corp.department: "engineering" Orchestration: description: "Orchestration configuration." type: "object" x-nullable: true properties: TaskHistoryRetentionLimit: description: | The number of historic tasks to keep per instance or node. If negative, never remove completed or failed tasks. type: "integer" format: "int64" example: 10 Raft: description: "Raft configuration." type: "object" properties: SnapshotInterval: description: "The number of log entries between snapshots." type: "integer" format: "uint64" example: 10000 KeepOldSnapshots: description: | The number of snapshots to keep beyond the current snapshot. type: "integer" format: "uint64" LogEntriesForSlowFollowers: description: | The number of log entries to keep around to sync up slow followers after a snapshot is created. type: "integer" format: "uint64" example: 500 ElectionTick: description: | The number of ticks that a follower will wait for a message from the leader before becoming a candidate and starting an election. `ElectionTick` must be greater than `HeartbeatTick`. A tick currently defaults to one second, so these translate directly to seconds currently, but this is NOT guaranteed. type: "integer" example: 3 HeartbeatTick: description: | The number of ticks between heartbeats. Every HeartbeatTick ticks, the leader will send a heartbeat to the followers. 
A tick currently defaults to one second, so these translate directly to seconds currently, but this is NOT guaranteed. type: "integer" example: 1 Dispatcher: description: "Dispatcher configuration." type: "object" x-nullable: true properties: HeartbeatPeriod: description: | The delay for an agent to send a heartbeat to the dispatcher. type: "integer" format: "int64" example: 5000000000 CAConfig: description: "CA configuration." type: "object" x-nullable: true properties: NodeCertExpiry: description: "The duration node certificates are issued for." type: "integer" format: "int64" example: 7776000000000000 ExternalCAs: description: | Configuration for forwarding signing requests to an external certificate authority. type: "array" items: type: "object" properties: Protocol: description: | Protocol for communication with the external CA (currently only `cfssl` is supported). type: "string" enum: - "cfssl" default: "cfssl" URL: description: | URL where certificate signing requests should be sent. type: "string" Options: description: | An object with key/value pairs that are interpreted as protocol-specific options for the external CA driver. type: "object" additionalProperties: type: "string" CACert: description: | The root CA certificate (in PEM format) this external CA uses to issue TLS certificates (assumed to be to the current swarm root CA certificate if not provided). type: "string" SigningCACert: description: | The desired signing CA certificate for all swarm node TLS leaf certificates, in PEM format. type: "string" SigningCAKey: description: | The desired signing CA key for all swarm node TLS leaf certificates, in PEM format. type: "string" ForceRotate: description: | An integer whose purpose is to force swarm to generate a new signing CA certificate and key, if none have been specified in `SigningCACert` and `SigningCAKey`. format: "uint64" type: "integer" EncryptionConfig: description: "Parameters related to encryption-at-rest." type: "object" properties: AutoLockManagers: description: | If set, generate a key and use it to lock data stored on the managers. type: "boolean" example: false TaskDefaults: description: "Defaults for creating tasks in this cluster." type: "object" properties: LogDriver: description: | The log driver to use for tasks created in the orchestrator if unspecified by a service. Updating this value only affects new tasks. Existing tasks continue to use their previously configured log driver until recreated. type: "object" properties: Name: description: | The log driver to use as a default for new tasks. type: "string" example: "json-file" Options: description: | Driver-specific options for the selected log driver, specified as key/value pairs. type: "object" additionalProperties: type: "string" example: "max-file": "10" "max-size": "100m" # The Swarm information for `GET /info`. It is the same as `GET /swarm`, but # without `JoinTokens`. ClusterInfo: description: | ClusterInfo represents information about the swarm as is returned by the "/info" endpoint. Join-tokens are not included. x-nullable: true type: "object" properties: ID: description: "The ID of the swarm." type: "string" example: "abajmipo7b4xz5ip2nrla6b11" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: description: | Date and time at which the swarm was initialised in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. 
type: "string" format: "dateTime" example: "2016-08-18T10:44:24.496525531Z" UpdatedAt: description: | Date and time at which the swarm was last updated in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2017-08-09T07:09:37.632105588Z" Spec: $ref: "#/definitions/SwarmSpec" TLSInfo: $ref: "#/definitions/TLSInfo" RootRotationInProgress: description: | Whether there is currently a root CA rotation in progress for the swarm type: "boolean" example: false DataPathPort: description: | DataPathPort specifies the data path port number for data traffic. Acceptable port range is 1024 to 49151. If no port is set or is set to 0, the default port (4789) is used. type: "integer" format: "uint32" default: 4789 example: 4789 DefaultAddrPool: description: | Default Address Pool specifies default subnet pools for global scope networks. type: "array" items: type: "string" format: "CIDR" example: ["10.10.0.0/16", "20.20.0.0/16"] SubnetSize: description: | SubnetSize specifies the subnet size of the networks created from the default subnet pool. type: "integer" format: "uint32" maximum: 29 default: 24 example: 24 JoinTokens: description: | JoinTokens contains the tokens workers and managers need to join the swarm. type: "object" properties: Worker: description: | The token workers can use to join the swarm. type: "string" example: "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-1awxwuwd3z9j1z3puu7rcgdbx" Manager: description: | The token managers can use to join the swarm. type: "string" example: "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-7p73s1dx5in4tatdymyhg9hu2" Swarm: type: "object" allOf: - $ref: "#/definitions/ClusterInfo" - type: "object" properties: JoinTokens: $ref: "#/definitions/JoinTokens" TaskSpec: description: "User modifiable task configuration." type: "object" properties: PluginSpec: type: "object" description: | Plugin spec for the service. *(Experimental release only.)* <p><br /></p> > **Note**: ContainerSpec, NetworkAttachmentSpec, and PluginSpec are > mutually exclusive. PluginSpec is only used when the Runtime field > is set to `plugin`. NetworkAttachmentSpec is used when the Runtime > field is set to `attachment`. properties: Name: description: "The name or 'alias' to use for the plugin." type: "string" Remote: description: "The plugin image reference to use." type: "string" Disabled: description: "Disable the plugin once scheduled." type: "boolean" PluginPrivilege: type: "array" items: description: | Describes a permission accepted by the user upon installing the plugin. type: "object" properties: Name: type: "string" Description: type: "string" Value: type: "array" items: type: "string" ContainerSpec: type: "object" description: | Container spec for the service. <p><br /></p> > **Note**: ContainerSpec, NetworkAttachmentSpec, and PluginSpec are > mutually exclusive. PluginSpec is only used when the Runtime field > is set to `plugin`. NetworkAttachmentSpec is used when the Runtime > field is set to `attachment`. properties: Image: description: "The image name to use for the container" type: "string" Labels: description: "User-defined key/value data." type: "object" additionalProperties: type: "string" Command: description: "The command to be run in the image." type: "array" items: type: "string" Args: description: "Arguments to the command." 
type: "array" items: type: "string" Hostname: description: | The hostname to use for the container, as a valid [RFC 1123](https://tools.ietf.org/html/rfc1123) hostname. type: "string" Env: description: | A list of environment variables in the form `VAR=value`. type: "array" items: type: "string" Dir: description: "The working directory for commands to run in." type: "string" User: description: "The user inside the container." type: "string" Groups: type: "array" description: | A list of additional groups that the container process will run as. items: type: "string" Privileges: type: "object" description: "Security options for the container" properties: CredentialSpec: type: "object" description: "CredentialSpec for managed service account (Windows only)" properties: Config: type: "string" example: "0bt9dmxjvjiqermk6xrop3ekq" description: | Load credential spec from a Swarm Config with the given ID. The specified config must also be present in the Configs field with the Runtime property set. <p><br /></p> > **Note**: `CredentialSpec.File`, `CredentialSpec.Registry`, > and `CredentialSpec.Config` are mutually exclusive. File: type: "string" example: "spec.json" description: | Load credential spec from this file. The file is read by the daemon, and must be present in the `CredentialSpecs` subdirectory in the docker data directory, which defaults to `C:\ProgramData\Docker\` on Windows. For example, specifying `spec.json` loads `C:\ProgramData\Docker\CredentialSpecs\spec.json`. <p><br /></p> > **Note**: `CredentialSpec.File`, `CredentialSpec.Registry`, > and `CredentialSpec.Config` are mutually exclusive. Registry: type: "string" description: | Load credential spec from this value in the Windows registry. The specified registry value must be located in: `HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization\Containers\CredentialSpecs` <p><br /></p> > **Note**: `CredentialSpec.File`, `CredentialSpec.Registry`, > and `CredentialSpec.Config` are mutually exclusive. SELinuxContext: type: "object" description: "SELinux labels of the container" properties: Disable: type: "boolean" description: "Disable SELinux" User: type: "string" description: "SELinux user label" Role: type: "string" description: "SELinux role label" Type: type: "string" description: "SELinux type label" Level: type: "string" description: "SELinux level label" TTY: description: "Whether a pseudo-TTY should be allocated." type: "boolean" OpenStdin: description: "Open `stdin`" type: "boolean" ReadOnly: description: "Mount the container's root filesystem as read only." type: "boolean" Mounts: description: | Specification for mounts to be added to containers created as part of the service. type: "array" items: $ref: "#/definitions/Mount" StopSignal: description: "Signal to stop the container." type: "string" StopGracePeriod: description: | Amount of time to wait for the container to terminate before forcefully killing it. type: "integer" format: "int64" HealthCheck: $ref: "#/definitions/HealthConfig" Hosts: type: "array" description: | A list of hostname/IP mappings to add to the container's `hosts` file. The format of extra hosts is specified in the [hosts(5)](http://man7.org/linux/man-pages/man5/hosts.5.html) man page: IP_address canonical_hostname [aliases...] items: type: "string" DNSConfig: description: | Specification for DNS related configurations in resolver configuration file (`resolv.conf`). type: "object" properties: Nameservers: description: "The IP addresses of the name servers." 
type: "array" items: type: "string" Search: description: "A search list for host-name lookup." type: "array" items: type: "string" Options: description: | A list of internal resolver variables to be modified (e.g., `debug`, `ndots:3`, etc.). type: "array" items: type: "string" Secrets: description: | Secrets contains references to zero or more secrets that will be exposed to the service. type: "array" items: type: "object" properties: File: description: | File represents a specific target that is backed by a file. type: "object" properties: Name: description: | Name represents the final filename in the filesystem. type: "string" UID: description: "UID represents the file UID." type: "string" GID: description: "GID represents the file GID." type: "string" Mode: description: "Mode represents the FileMode of the file." type: "integer" format: "uint32" SecretID: description: | SecretID represents the ID of the specific secret that we're referencing. type: "string" SecretName: description: | SecretName is the name of the secret that this references, but this is just provided for lookup/display purposes. The secret in the reference will be identified by its ID. type: "string" Configs: description: | Configs contains references to zero or more configs that will be exposed to the service. type: "array" items: type: "object" properties: File: description: | File represents a specific target that is backed by a file. <p><br /><p> > **Note**: `Configs.File` and `Configs.Runtime` are mutually exclusive type: "object" properties: Name: description: | Name represents the final filename in the filesystem. type: "string" UID: description: "UID represents the file UID." type: "string" GID: description: "GID represents the file GID." type: "string" Mode: description: "Mode represents the FileMode of the file." type: "integer" format: "uint32" Runtime: description: | Runtime represents a target that is not mounted into the container but is used by the task <p><br /><p> > **Note**: `Configs.File` and `Configs.Runtime` are mutually > exclusive type: "object" ConfigID: description: | ConfigID represents the ID of the specific config that we're referencing. type: "string" ConfigName: description: | ConfigName is the name of the config that this references, but this is just provided for lookup/display purposes. The config in the reference will be identified by its ID. type: "string" Isolation: type: "string" description: | Isolation technology of the containers running the service. (Windows only) enum: - "default" - "process" - "hyperv" Init: description: | Run an init inside the container that forwards signals and reaps processes. This field is omitted if empty, and the default (as configured on the daemon) is used. type: "boolean" x-nullable: true Sysctls: description: | Set kernel namedspaced parameters (sysctls) in the container. The Sysctls option on services accepts the same sysctls as the are supported on containers. Note that while the same sysctls are supported, no guarantees or checks are made about their suitability for a clustered environment, and it's up to the user to determine whether a given sysctl will work properly in a Service. type: "object" additionalProperties: type: "string" # This option is not used by Windows containers CapabilityAdd: type: "array" description: | A list of kernel capabilities to add to the default set for the container. 
items: type: "string" example: - "CAP_NET_RAW" - "CAP_SYS_ADMIN" - "CAP_SYS_CHROOT" - "CAP_SYSLOG" CapabilityDrop: type: "array" description: | A list of kernel capabilities to drop from the default set for the container. items: type: "string" example: - "CAP_NET_RAW" Ulimits: description: | A list of resource limits to set in the container. For example: `{"Name": "nofile", "Soft": 1024, "Hard": 2048}`" type: "array" items: type: "object" properties: Name: description: "Name of ulimit" type: "string" Soft: description: "Soft limit" type: "integer" Hard: description: "Hard limit" type: "integer" NetworkAttachmentSpec: description: | Read-only spec type for non-swarm containers attached to swarm overlay networks. <p><br /></p> > **Note**: ContainerSpec, NetworkAttachmentSpec, and PluginSpec are > mutually exclusive. PluginSpec is only used when the Runtime field > is set to `plugin`. NetworkAttachmentSpec is used when the Runtime > field is set to `attachment`. type: "object" properties: ContainerID: description: "ID of the container represented by this task" type: "string" Resources: description: | Resource requirements which apply to each individual container created as part of the service. type: "object" properties: Limits: description: "Define resources limits." $ref: "#/definitions/Limit" Reservation: description: "Define resources reservation." $ref: "#/definitions/ResourceObject" RestartPolicy: description: | Specification for the restart policy which applies to containers created as part of this service. type: "object" properties: Condition: description: "Condition for restart." type: "string" enum: - "none" - "on-failure" - "any" Delay: description: "Delay between restart attempts." type: "integer" format: "int64" MaxAttempts: description: | Maximum attempts to restart a given container before giving up (default value is 0, which is ignored). type: "integer" format: "int64" default: 0 Window: description: | Windows is the time window used to evaluate the restart policy (default value is 0, which is unbounded). type: "integer" format: "int64" default: 0 Placement: type: "object" properties: Constraints: description: | An array of constraint expressions to limit the set of nodes where a task can be scheduled. Constraint expressions can either use a _match_ (`==`) or _exclude_ (`!=`) rule. Multiple constraints find nodes that satisfy every expression (AND match). Constraints can match node or Docker Engine labels as follows: node attribute | matches | example ---------------------|--------------------------------|----------------------------------------------- `node.id` | Node ID | `node.id==2ivku8v2gvtg4` `node.hostname` | Node hostname | `node.hostname!=node-2` `node.role` | Node role (`manager`/`worker`) | `node.role==manager` `node.platform.os` | Node operating system | `node.platform.os==windows` `node.platform.arch` | Node architecture | `node.platform.arch==x86_64` `node.labels` | User-defined node labels | `node.labels.security==high` `engine.labels` | Docker Engine's labels | `engine.labels.operatingsystem==ubuntu-14.04` `engine.labels` apply to Docker Engine labels like operating system, drivers, etc. Swarm administrators add `node.labels` for operational purposes by using the [`node update endpoint`](#operation/NodeUpdate). 
type: "array" items: type: "string" example: - "node.hostname!=node3.corp.example.com" - "node.role!=manager" - "node.labels.type==production" - "node.platform.os==linux" - "node.platform.arch==x86_64" Preferences: description: | Preferences provide a way to make the scheduler aware of factors such as topology. They are provided in order from highest to lowest precedence. type: "array" items: type: "object" properties: Spread: type: "object" properties: SpreadDescriptor: description: | label descriptor, such as `engine.labels.az`. type: "string" example: - Spread: SpreadDescriptor: "node.labels.datacenter" - Spread: SpreadDescriptor: "node.labels.rack" MaxReplicas: description: | Maximum number of replicas for per node (default value is 0, which is unlimited) type: "integer" format: "int64" default: 0 Platforms: description: | Platforms stores all the platforms that the service's image can run on. This field is used in the platform filter for scheduling. If empty, then the platform filter is off, meaning there are no scheduling restrictions. type: "array" items: $ref: "#/definitions/Platform" ForceUpdate: description: | A counter that triggers an update even if no relevant parameters have been changed. type: "integer" Runtime: description: | Runtime is the type of runtime specified for the task executor. type: "string" Networks: description: "Specifies which networks the service should attach to." type: "array" items: $ref: "#/definitions/NetworkAttachmentConfig" LogDriver: description: | Specifies the log driver to use for tasks created from this spec. If not present, the default one for the swarm will be used, finally falling back to the engine default if not specified. type: "object" properties: Name: type: "string" Options: type: "object" additionalProperties: type: "string" TaskState: type: "string" enum: - "new" - "allocated" - "pending" - "assigned" - "accepted" - "preparing" - "ready" - "starting" - "running" - "complete" - "shutdown" - "failed" - "rejected" - "remove" - "orphaned" Task: type: "object" properties: ID: description: "The ID of the task." type: "string" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" UpdatedAt: type: "string" format: "dateTime" Name: description: "Name of the task." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" Spec: $ref: "#/definitions/TaskSpec" ServiceID: description: "The ID of the service this task is part of." type: "string" Slot: type: "integer" NodeID: description: "The ID of the node that this task is on." type: "string" AssignedGenericResources: $ref: "#/definitions/GenericResources" Status: type: "object" properties: Timestamp: type: "string" format: "dateTime" State: $ref: "#/definitions/TaskState" Message: type: "string" Err: type: "string" ContainerStatus: type: "object" properties: ContainerID: type: "string" PID: type: "integer" ExitCode: type: "integer" DesiredState: $ref: "#/definitions/TaskState" JobIteration: description: | If the Service this Task belongs to is a job-mode service, contains the JobIteration of the Service this Task was created for. Absent if the Task was created for a Replicated or Global Service. 
$ref: "#/definitions/ObjectVersion" example: ID: "0kzzo1i0y4jz6027t0k7aezc7" Version: Index: 71 CreatedAt: "2016-06-07T21:07:31.171892745Z" UpdatedAt: "2016-06-07T21:07:31.376370513Z" Spec: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ServiceID: "9mnpnzenvg8p8tdbtq4wvbkcz" Slot: 1 NodeID: "60gvrl6tm78dmak4yl7srz94v" Status: Timestamp: "2016-06-07T21:07:31.290032978Z" State: "running" Message: "started" ContainerStatus: ContainerID: "e5d62702a1b48d01c3e02ca1e0212a250801fa8d67caca0b6f35919ebc12f035" PID: 677 DesiredState: "running" NetworksAttachments: - Network: ID: "4qvuz4ko70xaltuqbt8956gd1" Version: Index: 18 CreatedAt: "2016-06-07T20:31:11.912919752Z" UpdatedAt: "2016-06-07T21:07:29.955277358Z" Spec: Name: "ingress" Labels: com.docker.swarm.internal: "true" DriverConfiguration: {} IPAMOptions: Driver: {} Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" DriverState: Name: "overlay" Options: com.docker.network.driver.overlay.vxlanid_list: "256" IPAMOptions: Driver: Name: "default" Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" Addresses: - "10.255.0.10/16" AssignedGenericResources: - DiscreteResourceSpec: Kind: "SSD" Value: 3 - NamedResourceSpec: Kind: "GPU" Value: "UUID1" - NamedResourceSpec: Kind: "GPU" Value: "UUID2" ServiceSpec: description: "User modifiable configuration for a service." properties: Name: description: "Name of the service." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" TaskTemplate: $ref: "#/definitions/TaskSpec" Mode: description: "Scheduling mode for the service." type: "object" properties: Replicated: type: "object" properties: Replicas: type: "integer" format: "int64" Global: type: "object" ReplicatedJob: description: | The mode used for services with a finite number of tasks that run to a completed state. type: "object" properties: MaxConcurrent: description: | The maximum number of replicas to run simultaneously. type: "integer" format: "int64" default: 1 TotalCompletions: description: | The total number of replicas desired to reach the Completed state. If unset, will default to the value of `MaxConcurrent` type: "integer" format: "int64" GlobalJob: description: | The mode used for services which run a task to the completed state on each valid node. type: "object" UpdateConfig: description: "Specification for the update strategy of the service." type: "object" properties: Parallelism: description: | Maximum number of tasks to be updated in one iteration (0 means unlimited parallelism). type: "integer" format: "int64" Delay: description: "Amount of time between updates, in nanoseconds." type: "integer" format: "int64" FailureAction: description: | Action to take if an updated task fails to run, or stops running during the update. type: "string" enum: - "continue" - "pause" - "rollback" Monitor: description: | Amount of time to monitor each updated task for failures, in nanoseconds. type: "integer" format: "int64" MaxFailureRatio: description: | The fraction of tasks that may fail during an update before the failure action is invoked, specified as a floating point number between 0 and 1. type: "number" default: 0 Order: description: | The order of operations when rolling out an updated task. Either the old task is shut down before the new task is started, or the new task is started before the old task is shut down. 
type: "string" enum: - "stop-first" - "start-first" RollbackConfig: description: "Specification for the rollback strategy of the service." type: "object" properties: Parallelism: description: | Maximum number of tasks to be rolled back in one iteration (0 means unlimited parallelism). type: "integer" format: "int64" Delay: description: | Amount of time between rollback iterations, in nanoseconds. type: "integer" format: "int64" FailureAction: description: | Action to take if an rolled back task fails to run, or stops running during the rollback. type: "string" enum: - "continue" - "pause" Monitor: description: | Amount of time to monitor each rolled back task for failures, in nanoseconds. type: "integer" format: "int64" MaxFailureRatio: description: | The fraction of tasks that may fail during a rollback before the failure action is invoked, specified as a floating point number between 0 and 1. type: "number" default: 0 Order: description: | The order of operations when rolling back a task. Either the old task is shut down before the new task is started, or the new task is started before the old task is shut down. type: "string" enum: - "stop-first" - "start-first" Networks: description: "Specifies which networks the service should attach to." type: "array" items: $ref: "#/definitions/NetworkAttachmentConfig" EndpointSpec: $ref: "#/definitions/EndpointSpec" EndpointPortConfig: type: "object" properties: Name: type: "string" Protocol: type: "string" enum: - "tcp" - "udp" - "sctp" TargetPort: description: "The port inside the container." type: "integer" PublishedPort: description: "The port on the swarm hosts." type: "integer" PublishMode: description: | The mode in which port is published. <p><br /></p> - "ingress" makes the target port accessible on every node, regardless of whether there is a task for the service running on that node or not. - "host" bypasses the routing mesh and publish the port directly on the swarm node where that service is running. type: "string" enum: - "ingress" - "host" default: "ingress" example: "ingress" EndpointSpec: description: "Properties that can be configured to access and load balance a service." type: "object" properties: Mode: description: | The mode of resolution to use for internal load balancing between tasks. type: "string" enum: - "vip" - "dnsrr" default: "vip" Ports: description: | List of exposed ports that this service is accessible on from the outside. Ports can only be provided if `vip` resolution mode is used. type: "array" items: $ref: "#/definitions/EndpointPortConfig" Service: type: "object" properties: ID: type: "string" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" UpdatedAt: type: "string" format: "dateTime" Spec: $ref: "#/definitions/ServiceSpec" Endpoint: type: "object" properties: Spec: $ref: "#/definitions/EndpointSpec" Ports: type: "array" items: $ref: "#/definitions/EndpointPortConfig" VirtualIPs: type: "array" items: type: "object" properties: NetworkID: type: "string" Addr: type: "string" UpdateStatus: description: "The status of a service update." type: "object" properties: State: type: "string" enum: - "updating" - "paused" - "completed" StartedAt: type: "string" format: "dateTime" CompletedAt: type: "string" format: "dateTime" Message: type: "string" ServiceStatus: description: | The status of the service's tasks. Provided only when requested as part of a ServiceList operation. 
type: "object" properties: RunningTasks: description: | The number of tasks for the service currently in the Running state. type: "integer" format: "uint64" example: 7 DesiredTasks: description: | The number of tasks for the service desired to be running. For replicated services, this is the replica count from the service spec. For global services, this is computed by taking count of all tasks for the service with a Desired State other than Shutdown. type: "integer" format: "uint64" example: 10 CompletedTasks: description: | The number of tasks for a job that are in the Completed state. This field must be cross-referenced with the service type, as the value of 0 may mean the service is not in a job mode, or it may mean the job-mode service has no tasks yet Completed. type: "integer" format: "uint64" JobStatus: description: | The status of the service when it is in one of ReplicatedJob or GlobalJob modes. Absent on Replicated and Global mode services. The JobIteration is an ObjectVersion, but unlike the Service's version, does not need to be sent with an update request. type: "object" properties: JobIteration: description: | JobIteration is a value increased each time a Job is executed, successfully or otherwise. "Executed", in this case, means the job as a whole has been started, not that an individual Task has been launched. A job is "Executed" when its ServiceSpec is updated. JobIteration can be used to disambiguate Tasks belonging to different executions of a job. Though JobIteration will increase with each subsequent execution, it may not necessarily increase by 1, and so JobIteration should not be used to $ref: "#/definitions/ObjectVersion" LastExecution: description: | The last time, as observed by the server, that this job was started. type: "string" format: "dateTime" example: ID: "9mnpnzenvg8p8tdbtq4wvbkcz" Version: Index: 19 CreatedAt: "2016-06-07T21:05:51.880065305Z" UpdatedAt: "2016-06-07T21:07:29.962229872Z" Spec: Name: "hopeful_cori" TaskTemplate: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ForceUpdate: 0 Mode: Replicated: Replicas: 1 UpdateConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 RollbackConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 EndpointSpec: Mode: "vip" Ports: - Protocol: "tcp" TargetPort: 6379 PublishedPort: 30001 Endpoint: Spec: Mode: "vip" Ports: - Protocol: "tcp" TargetPort: 6379 PublishedPort: 30001 Ports: - Protocol: "tcp" TargetPort: 6379 PublishedPort: 30001 VirtualIPs: - NetworkID: "4qvuz4ko70xaltuqbt8956gd1" Addr: "10.255.0.2/16" - NetworkID: "4qvuz4ko70xaltuqbt8956gd1" Addr: "10.255.0.3/16" ImageDeleteResponseItem: type: "object" properties: Untagged: description: "The image ID of an image that was untagged" type: "string" Deleted: description: "The image ID of an image that was deleted" type: "string" ServiceUpdateResponse: type: "object" properties: Warnings: description: "Optional warning messages" type: "array" items: type: "string" example: Warning: "unable to pin image doesnotexist:latest to digest: image library/doesnotexist:latest not found" ContainerSummary: type: "array" items: type: "object" properties: Id: description: "The ID of this container" type: "string" x-go-name: "ID" Names: description: "The names that this container has been given" type: "array" items: type: "string" Image: description: "The name of the image used when creating 
this container" type: "string" ImageID: description: "The ID of the image that this container was created from" type: "string" Command: description: "Command to run when starting the container" type: "string" Created: description: "When the container was created" type: "integer" format: "int64" Ports: description: "The ports exposed by this container" type: "array" items: $ref: "#/definitions/Port" SizeRw: description: "The size of files that have been created or changed by this container" type: "integer" format: "int64" SizeRootFs: description: "The total size of all the files in this container" type: "integer" format: "int64" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" State: description: "The state of this container (e.g. `Exited`)" type: "string" Status: description: "Additional human-readable status of this container (e.g. `Exit 0`)" type: "string" HostConfig: type: "object" properties: NetworkMode: type: "string" NetworkSettings: description: "A summary of the container's network settings" type: "object" properties: Networks: type: "object" additionalProperties: $ref: "#/definitions/EndpointSettings" Mounts: type: "array" items: $ref: "#/definitions/Mount" Driver: description: "Driver represents a driver (network, logging, secrets)." type: "object" required: [Name] properties: Name: description: "Name of the driver." type: "string" x-nullable: false example: "some-driver" Options: description: "Key/value map of driver-specific options." type: "object" x-nullable: false additionalProperties: type: "string" example: OptionA: "value for driver-specific option A" OptionB: "value for driver-specific option B" SecretSpec: type: "object" properties: Name: description: "User-defined name of the secret." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" example: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Data: description: | Base64-url-safe-encoded ([RFC 4648](https://tools.ietf.org/html/rfc4648#section-5)) data to store as secret. This field is only used to _create_ a secret, and is not returned by other endpoints. type: "string" example: "" Driver: description: | Name of the secrets driver used to fetch the secret's value from an external secret store. $ref: "#/definitions/Driver" Templating: description: | Templating driver, if applicable Templating controls whether and how to evaluate the config payload as a template. If no driver is set, no templating is used. $ref: "#/definitions/Driver" Secret: type: "object" properties: ID: type: "string" example: "blt1owaxmitz71s9v5zh81zun" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" example: "2017-07-20T13:55:28.678958722Z" UpdatedAt: type: "string" format: "dateTime" example: "2017-07-20T13:55:28.678958722Z" Spec: $ref: "#/definitions/SecretSpec" ConfigSpec: type: "object" properties: Name: description: "User-defined name of the config." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" Data: description: | Base64-url-safe-encoded ([RFC 4648](https://tools.ietf.org/html/rfc4648#section-5)) config data. type: "string" Templating: description: | Templating driver, if applicable Templating controls whether and how to evaluate the config payload as a template. If no driver is set, no templating is used. 
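For the SecretSpec definition above, the payload travels base64-encoded in the `Data` field; with the Go client one passes raw bytes and the encoding is handled when the request is marshalled. A minimal sketch, with a hypothetical secret name and value:

```go
package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/api/types/swarm"
	"github.com/docker/docker/client"
)

func main() {
	ctx := context.Background()
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}

	// The Go client accepts raw bytes for Data and takes care of the
	// base64 encoding expected by the API.
	spec := swarm.SecretSpec{
		Annotations: swarm.Annotations{
			Name:   "db-password", // hypothetical secret name
			Labels: map[string]string{"com.example.some-label": "some-value"},
		},
		Data: []byte("s3cr3t"),
	}

	resp, err := cli.SecretCreate(ctx, spec)
	if err != nil {
		panic(err)
	}
	fmt.Println("created secret", resp.ID)
}
```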
$ref: "#/definitions/Driver" Config: type: "object" properties: ID: type: "string" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" UpdatedAt: type: "string" format: "dateTime" Spec: $ref: "#/definitions/ConfigSpec" ContainerState: description: | ContainerState stores container's running state. It's part of ContainerJSONBase and will be returned by the "inspect" command. type: "object" properties: Status: description: | String representation of the container state. Can be one of "created", "running", "paused", "restarting", "removing", "exited", or "dead". type: "string" enum: ["created", "running", "paused", "restarting", "removing", "exited", "dead"] example: "running" Running: description: | Whether this container is running. Note that a running container can be _paused_. The `Running` and `Paused` booleans are not mutually exclusive: When pausing a container (on Linux), the freezer cgroup is used to suspend all processes in the container. Freezing the process requires the process to be running. As a result, paused containers are both `Running` _and_ `Paused`. Use the `Status` field instead to determine if a container's state is "running". type: "boolean" example: true Paused: description: "Whether this container is paused." type: "boolean" example: false Restarting: description: "Whether this container is restarting." type: "boolean" example: false OOMKilled: description: | Whether this container has been killed because it ran out of memory. type: "boolean" example: false Dead: type: "boolean" example: false Pid: description: "The process ID of this container" type: "integer" example: 1234 ExitCode: description: "The last exit code of this container" type: "integer" example: 0 Error: type: "string" StartedAt: description: "The time when this container was last started." type: "string" example: "2020-01-06T09:06:59.461876391Z" FinishedAt: description: "The time when this container last exited." type: "string" example: "2020-01-06T09:07:59.461876391Z" Health: x-nullable: true $ref: "#/definitions/Health" SystemVersion: type: "object" description: | Response of Engine API: GET "/version" properties: Platform: type: "object" required: [Name] properties: Name: type: "string" Components: type: "array" description: | Information about system components items: type: "object" x-go-name: ComponentVersion required: [Name, Version] properties: Name: description: | Name of the component type: "string" example: "Engine" Version: description: | Version of the component type: "string" x-nullable: false example: "19.03.12" Details: description: | Key/value pairs of strings with additional information about the component. These values are intended for informational purposes only, and their content is not defined, and not part of the API specification. These messages can be printed by the client as information to the user. type: "object" x-nullable: true Version: description: "The version of the daemon" type: "string" example: "19.03.12" ApiVersion: description: | The default (and highest) API version that is supported by the daemon type: "string" example: "1.40" MinAPIVersion: description: | The minimum API version that is supported by the daemon type: "string" example: "1.12" GitCommit: description: | The Git commit of the source code that was used to build the daemon type: "string" example: "48a66213fe" GoVersion: description: | The version Go used to compile the daemon, and the version of the Go runtime in use. 
type: "string" example: "go1.13.14" Os: description: | The operating system that the daemon is running on ("linux" or "windows") type: "string" example: "linux" Arch: description: | The architecture that the daemon is running on type: "string" example: "amd64" KernelVersion: description: | The kernel version (`uname -r`) that the daemon is running on. This field is omitted when empty. type: "string" example: "4.19.76-linuxkit" Experimental: description: | Indicates if the daemon is started with experimental features enabled. This field is omitted when empty / false. type: "boolean" example: true BuildTime: description: | The date and time that the daemon was compiled. type: "string" example: "2020-06-22T15:49:27.000000000+00:00" SystemInfo: type: "object" properties: ID: description: | Unique identifier of the daemon. <p><br /></p> > **Note**: The format of the ID itself is not part of the API, and > should not be considered stable. type: "string" example: "7TRN:IPZB:QYBB:VPBQ:UMPP:KARE:6ZNR:XE6T:7EWV:PKF4:ZOJD:TPYS" Containers: description: "Total number of containers on the host." type: "integer" example: 14 ContainersRunning: description: | Number of containers with status `"running"`. type: "integer" example: 3 ContainersPaused: description: | Number of containers with status `"paused"`. type: "integer" example: 1 ContainersStopped: description: | Number of containers with status `"stopped"`. type: "integer" example: 10 Images: description: | Total number of images on the host. Both _tagged_ and _untagged_ (dangling) images are counted. type: "integer" example: 508 Driver: description: "Name of the storage driver in use." type: "string" example: "overlay2" DriverStatus: description: | Information specific to the storage driver, provided as "label" / "value" pairs. This information is provided by the storage driver, and formatted in a way consistent with the output of `docker info` on the command line. <p><br /></p> > **Note**: The information returned in this field, including the > formatting of values and labels, should not be considered stable, > and may change without notice. type: "array" items: type: "array" items: type: "string" example: - ["Backing Filesystem", "extfs"] - ["Supports d_type", "true"] - ["Native Overlay Diff", "true"] DockerRootDir: description: | Root directory of persistent Docker state. Defaults to `/var/lib/docker` on Linux, and `C:\ProgramData\docker` on Windows. type: "string" example: "/var/lib/docker" Plugins: $ref: "#/definitions/PluginsInfo" MemoryLimit: description: "Indicates if the host has memory limit support enabled." type: "boolean" example: true SwapLimit: description: "Indicates if the host has memory swap limit support enabled." type: "boolean" example: true KernelMemory: description: | Indicates if the host has kernel memory limit support enabled. <p><br /></p> > **Deprecated**: This field is deprecated as the kernel 5.4 deprecated > `kmem.limit_in_bytes`. type: "boolean" example: true CpuCfsPeriod: description: | Indicates if CPU CFS(Completely Fair Scheduler) period is supported by the host. type: "boolean" example: true CpuCfsQuota: description: | Indicates if CPU CFS(Completely Fair Scheduler) quota is supported by the host. type: "boolean" example: true CPUShares: description: | Indicates if CPU Shares limiting is supported by the host. type: "boolean" example: true CPUSet: description: | Indicates if CPUsets (cpuset.cpus, cpuset.mems) are supported by the host. 
See [cpuset(7)](https://www.kernel.org/doc/Documentation/cgroup-v1/cpusets.txt) type: "boolean" example: true PidsLimit: description: "Indicates if the host kernel has PID limit support enabled." type: "boolean" example: true OomKillDisable: description: "Indicates if OOM killer disable is supported on the host." type: "boolean" IPv4Forwarding: description: "Indicates IPv4 forwarding is enabled." type: "boolean" example: true BridgeNfIptables: description: "Indicates if `bridge-nf-call-iptables` is available on the host." type: "boolean" example: true BridgeNfIp6tables: description: "Indicates if `bridge-nf-call-ip6tables` is available on the host." type: "boolean" example: true Debug: description: | Indicates if the daemon is running in debug-mode / with debug-level logging enabled. type: "boolean" example: true NFd: description: | The total number of file Descriptors in use by the daemon process. This information is only returned if debug-mode is enabled. type: "integer" example: 64 NGoroutines: description: | The number of goroutines that currently exist. This information is only returned if debug-mode is enabled. type: "integer" example: 174 SystemTime: description: | Current system-time in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" example: "2017-08-08T20:28:29.06202363Z" LoggingDriver: description: | The logging driver to use as a default for new containers. type: "string" CgroupDriver: description: | The driver to use for managing cgroups. type: "string" enum: ["cgroupfs", "systemd", "none"] default: "cgroupfs" example: "cgroupfs" CgroupVersion: description: | The version of the cgroup. type: "string" enum: ["1", "2"] default: "1" example: "1" NEventsListener: description: "Number of event listeners subscribed." type: "integer" example: 30 KernelVersion: description: | Kernel version of the host. On Linux, this information obtained from `uname`. On Windows this information is queried from the <kbd>HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\</kbd> registry value, for example _"10.0 14393 (14393.1198.amd64fre.rs1_release_sec.170427-1353)"_. type: "string" example: "4.9.38-moby" OperatingSystem: description: | Name of the host's operating system, for example: "Ubuntu 16.04.2 LTS" or "Windows Server 2016 Datacenter" type: "string" example: "Alpine Linux v3.5" OSVersion: description: | Version of the host's operating system <p><br /></p> > **Note**: The information returned in this field, including its > very existence, and the formatting of values, should not be considered > stable, and may change without notice. type: "string" example: "16.04" OSType: description: | Generic type of the operating system of the host, as returned by the Go runtime (`GOOS`). Currently returned values are "linux" and "windows". A full list of possible values can be found in the [Go documentation](https://golang.org/doc/install/source#environment). type: "string" example: "linux" Architecture: description: | Hardware architecture of the host, as returned by the Go runtime (`GOARCH`). A full list of possible values can be found in the [Go documentation](https://golang.org/doc/install/source#environment). type: "string" example: "x86_64" NCPU: description: | The number of logical CPUs usable by the daemon. The number of available CPUs is checked by querying the operating system when the daemon starts. Changes to operating system CPU allocation after the daemon is started are not reflected. 
type: "integer" example: 4 MemTotal: description: | Total amount of physical memory available on the host, in bytes. type: "integer" format: "int64" example: 2095882240 IndexServerAddress: description: | Address / URL of the index server that is used for image search, and as a default for user authentication for Docker Hub and Docker Cloud. default: "https://index.docker.io/v1/" type: "string" example: "https://index.docker.io/v1/" RegistryConfig: $ref: "#/definitions/RegistryServiceConfig" GenericResources: $ref: "#/definitions/GenericResources" HttpProxy: description: | HTTP-proxy configured for the daemon. This value is obtained from the [`HTTP_PROXY`](https://www.gnu.org/software/wget/manual/html_node/Proxies.html) environment variable. Credentials ([user info component](https://tools.ietf.org/html/rfc3986#section-3.2.1)) in the proxy URL are masked in the API response. Containers do not automatically inherit this configuration. type: "string" example: "http://xxxxx:[email protected]:8080" HttpsProxy: description: | HTTPS-proxy configured for the daemon. This value is obtained from the [`HTTPS_PROXY`](https://www.gnu.org/software/wget/manual/html_node/Proxies.html) environment variable. Credentials ([user info component](https://tools.ietf.org/html/rfc3986#section-3.2.1)) in the proxy URL are masked in the API response. Containers do not automatically inherit this configuration. type: "string" example: "https://xxxxx:[email protected]:4443" NoProxy: description: | Comma-separated list of domain extensions for which no proxy should be used. This value is obtained from the [`NO_PROXY`](https://www.gnu.org/software/wget/manual/html_node/Proxies.html) environment variable. Containers do not automatically inherit this configuration. type: "string" example: "*.local, 169.254/16" Name: description: "Hostname of the host." type: "string" example: "node5.corp.example.com" Labels: description: | User-defined labels (key/value metadata) as set on the daemon. <p><br /></p> > **Note**: When part of a Swarm, nodes can both have _daemon_ labels, > set through the daemon configuration, and _node_ labels, set from a > manager node in the Swarm. Node labels are not included in this > field. Node labels can be retrieved using the `/nodes/(id)` endpoint > on a manager node in the Swarm. type: "array" items: type: "string" example: ["storage=ssd", "production"] ExperimentalBuild: description: | Indicates if experimental features are enabled on the daemon. type: "boolean" example: true ServerVersion: description: | Version string of the daemon. > **Note**: the [standalone Swarm API](https://docs.docker.com/swarm/swarm-api/) > returns the Swarm version instead of the daemon version, for example > `swarm/1.2.8`. type: "string" example: "17.06.0-ce" ClusterStore: description: | URL of the distributed storage backend. The storage backend is used for multihost networking (to store network and endpoint information) and by the node discovery mechanism. <p><br /></p> > **Deprecated**: This field is only propagated when using standalone Swarm > mode, and overlay networking using an external k/v store. Overlay > networks with Swarm mode enabled use the built-in raft store, and > this field will be empty. type: "string" example: "consul://consul.corp.example.com:8600/some/path" ClusterAdvertise: description: | The network endpoint that the Engine advertises for the purpose of node discovery. ClusterAdvertise is a `host:port` combination on which the daemon is reachable by other hosts. 
<p><br /></p> > **Deprecated**: This field is only propagated when using standalone Swarm > mode, and overlay networking using an external k/v store. Overlay > networks with Swarm mode enabled use the built-in raft store, and > this field will be empty. type: "string" example: "node5.corp.example.com:8000" Runtimes: description: | List of [OCI compliant](https://github.com/opencontainers/runtime-spec) runtimes configured on the daemon. Keys hold the "name" used to reference the runtime. The Docker daemon relies on an OCI compliant runtime (invoked via the `containerd` daemon) as its interface to the Linux kernel namespaces, cgroups, and SELinux. The default runtime is `runc`, and automatically configured. Additional runtimes can be configured by the user and will be listed here. type: "object" additionalProperties: $ref: "#/definitions/Runtime" default: runc: path: "runc" example: runc: path: "runc" runc-master: path: "/go/bin/runc" custom: path: "/usr/local/bin/my-oci-runtime" runtimeArgs: ["--debug", "--systemd-cgroup=false"] DefaultRuntime: description: | Name of the default OCI runtime that is used when starting containers. The default can be overridden per-container at create time. type: "string" default: "runc" example: "runc" Swarm: $ref: "#/definitions/SwarmInfo" LiveRestoreEnabled: description: | Indicates if live restore is enabled. If enabled, containers are kept running when the daemon is shutdown or upon daemon start if running containers are detected. type: "boolean" default: false example: false Isolation: description: | Represents the isolation technology to use as a default for containers. The supported values are platform-specific. If no isolation value is specified on daemon start, on Windows client, the default is `hyperv`, and on Windows server, the default is `process`. This option is currently not used on other platforms. default: "default" type: "string" enum: - "default" - "hyperv" - "process" InitBinary: description: | Name and, optional, path of the `docker-init` binary. If the path is omitted, the daemon searches the host's `$PATH` for the binary and uses the first result. type: "string" example: "docker-init" ContainerdCommit: $ref: "#/definitions/Commit" RuncCommit: $ref: "#/definitions/Commit" InitCommit: $ref: "#/definitions/Commit" SecurityOptions: description: | List of security features that are enabled on the daemon, such as apparmor, seccomp, SELinux, user-namespaces (userns), and rootless. Additional configuration options for each security feature may be present, and are included as a comma-separated list of key/value pairs. type: "array" items: type: "string" example: - "name=apparmor" - "name=seccomp,profile=default" - "name=selinux" - "name=userns" - "name=rootless" ProductLicense: description: | Reports a summary of the product license on the daemon. If a commercial license has been applied to the daemon, information such as number of nodes, and expiration are included. type: "string" example: "Community Engine" DefaultAddressPools: description: | List of custom default address pools for local networks, which can be specified in the daemon.json file or dockerd option. Example: a Base "10.10.0.0/16" with Size 24 will define the set of 256 10.10.[0-255].0/24 address pools. 
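Most of the SystemInfo fields above can be read in one call to `GET /info`. A sketch with the Go client follows; only a handful of fields are printed, and the struct field names are given as I recall them from `types.Info`, so treat them as illustrative.

```go
package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/client"
)

func main() {
	ctx := context.Background()
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}

	info, err := cli.Info(ctx) // GET /info
	if err != nil {
		panic(err)
	}

	fmt.Println("storage driver:", info.Driver)
	fmt.Println("cpus:", info.NCPU, "memory:", info.MemTotal)
	fmt.Println("default runtime:", info.DefaultRuntime)
	for name, rt := range info.Runtimes {
		fmt.Printf("runtime %s -> %s\n", name, rt.Path)
	}
	for _, opt := range info.SecurityOptions {
		fmt.Println("security option:", opt)
	}
}
```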
type: "array" items: type: "object" properties: Base: description: "The network address in CIDR format" type: "string" example: "10.10.0.0/16" Size: description: "The network pool size" type: "integer" example: "24" Warnings: description: | List of warnings / informational messages about missing features, or issues related to the daemon configuration. These messages can be printed by the client as information to the user. type: "array" items: type: "string" example: - "WARNING: No memory limit support" - "WARNING: bridge-nf-call-iptables is disabled" - "WARNING: bridge-nf-call-ip6tables is disabled" # PluginsInfo is a temp struct holding Plugins name # registered with docker daemon. It is used by Info struct PluginsInfo: description: | Available plugins per type. <p><br /></p> > **Note**: Only unmanaged (V1) plugins are included in this list. > V1 plugins are "lazily" loaded, and are not returned in this list > if there is no resource using the plugin. type: "object" properties: Volume: description: "Names of available volume-drivers, and network-driver plugins." type: "array" items: type: "string" example: ["local"] Network: description: "Names of available network-drivers, and network-driver plugins." type: "array" items: type: "string" example: ["bridge", "host", "ipvlan", "macvlan", "null", "overlay"] Authorization: description: "Names of available authorization plugins." type: "array" items: type: "string" example: ["img-authz-plugin", "hbm"] Log: description: "Names of available logging-drivers, and logging-driver plugins." type: "array" items: type: "string" example: ["awslogs", "fluentd", "gcplogs", "gelf", "journald", "json-file", "logentries", "splunk", "syslog"] RegistryServiceConfig: description: | RegistryServiceConfig stores daemon registry services configuration. type: "object" x-nullable: true properties: AllowNondistributableArtifactsCIDRs: description: | List of IP ranges to which nondistributable artifacts can be pushed, using the CIDR syntax [RFC 4632](https://tools.ietf.org/html/4632). Some images (for example, Windows base images) contain artifacts whose distribution is restricted by license. When these images are pushed to a registry, restricted artifacts are not included. This configuration override this behavior, and enables the daemon to push nondistributable artifacts to all registries whose resolved IP address is within the subnet described by the CIDR syntax. This option is useful when pushing images containing nondistributable artifacts to a registry on an air-gapped network so hosts on that network can pull the images without connecting to another server. > **Warning**: Nondistributable artifacts typically have restrictions > on how and where they can be distributed and shared. Only use this > feature to push artifacts to private registries and ensure that you > are in compliance with any terms that cover redistributing > nondistributable artifacts. type: "array" items: type: "string" example: ["::1/128", "127.0.0.0/8"] AllowNondistributableArtifactsHostnames: description: | List of registry hostnames to which nondistributable artifacts can be pushed, using the format `<hostname>[:<port>]` or `<IP address>[:<port>]`. Some images (for example, Windows base images) contain artifacts whose distribution is restricted by license. When these images are pushed to a registry, restricted artifacts are not included. This configuration override this behavior for the specified registries. 
This option is useful when pushing images containing nondistributable artifacts to a registry on an air-gapped network so hosts on that network can pull the images without connecting to another server. > **Warning**: Nondistributable artifacts typically have restrictions > on how and where they can be distributed and shared. Only use this > feature to push artifacts to private registries and ensure that you > are in compliance with any terms that cover redistributing > nondistributable artifacts. type: "array" items: type: "string" example: ["registry.internal.corp.example.com:3000", "[2001:db8:a0b:12f0::1]:443"] InsecureRegistryCIDRs: description: | List of IP ranges of insecure registries, using the CIDR syntax ([RFC 4632](https://tools.ietf.org/html/4632)). Insecure registries accept un-encrypted (HTTP) and/or untrusted (HTTPS with certificates from unknown CAs) communication. By default, local registries (`127.0.0.0/8`) are configured as insecure. All other registries are secure. Communicating with an insecure registry is not possible if the daemon assumes that registry is secure. This configuration override this behavior, insecure communication with registries whose resolved IP address is within the subnet described by the CIDR syntax. Registries can also be marked insecure by hostname. Those registries are listed under `IndexConfigs` and have their `Secure` field set to `false`. > **Warning**: Using this option can be useful when running a local > registry, but introduces security vulnerabilities. This option > should therefore ONLY be used for testing purposes. For increased > security, users should add their CA to their system's list of trusted > CAs instead of enabling this option. type: "array" items: type: "string" example: ["::1/128", "127.0.0.0/8"] IndexConfigs: type: "object" additionalProperties: $ref: "#/definitions/IndexInfo" example: "127.0.0.1:5000": "Name": "127.0.0.1:5000" "Mirrors": [] "Secure": false "Official": false "[2001:db8:a0b:12f0::1]:80": "Name": "[2001:db8:a0b:12f0::1]:80" "Mirrors": [] "Secure": false "Official": false "docker.io": Name: "docker.io" Mirrors: ["https://hub-mirror.corp.example.com:5000/"] Secure: true Official: true "registry.internal.corp.example.com:3000": Name: "registry.internal.corp.example.com:3000" Mirrors: [] Secure: false Official: false Mirrors: description: | List of registry URLs that act as a mirror for the official (`docker.io`) registry. type: "array" items: type: "string" example: - "https://hub-mirror.corp.example.com:5000/" - "https://[2001:db8:a0b:12f0::1]/" IndexInfo: description: IndexInfo contains information about a registry. type: "object" x-nullable: true properties: Name: description: | Name of the registry, such as "docker.io". type: "string" example: "docker.io" Mirrors: description: | List of mirrors, expressed as URIs. type: "array" items: type: "string" example: - "https://hub-mirror.corp.example.com:5000/" - "https://registry-2.docker.io/" - "https://registry-3.docker.io/" Secure: description: | Indicates if the registry is part of the list of insecure registries. If `false`, the registry is insecure. Insecure registries accept un-encrypted (HTTP) and/or untrusted (HTTPS with certificates from unknown CAs) communication. > **Warning**: Insecure registries can be useful when running a local > registry. However, because its use creates security vulnerabilities > it should ONLY be enabled for testing purposes. 
For increased > security, users should add their CA to their system's list of > trusted CAs instead of enabling this option. type: "boolean" example: true Official: description: | Indicates whether this is an official registry (i.e., Docker Hub / docker.io) type: "boolean" example: true Runtime: description: | Runtime describes an [OCI compliant](https://github.com/opencontainers/runtime-spec) runtime. The runtime is invoked by the daemon via the `containerd` daemon. OCI runtimes act as an interface to the Linux kernel namespaces, cgroups, and SELinux. type: "object" properties: path: description: | Name and, optional, path, of the OCI executable binary. If the path is omitted, the daemon searches the host's `$PATH` for the binary and uses the first result. type: "string" example: "/usr/local/bin/my-oci-runtime" runtimeArgs: description: | List of command-line arguments to pass to the runtime when invoked. type: "array" x-nullable: true items: type: "string" example: ["--debug", "--systemd-cgroup=false"] Commit: description: | Commit holds the Git-commit (SHA1) that a binary was built from, as reported in the version-string of external tools, such as `containerd`, or `runC`. type: "object" properties: ID: description: "Actual commit ID of external tool." type: "string" example: "cfb82a876ecc11b5ca0977d1733adbe58599088a" Expected: description: | Commit ID of external tool expected by dockerd as set at build time. type: "string" example: "2d41c047c83e09a6d61d464906feb2a2f3c52aa4" SwarmInfo: description: | Represents generic information about swarm. type: "object" properties: NodeID: description: "Unique identifier of for this node in the swarm." type: "string" default: "" example: "k67qz4598weg5unwwffg6z1m1" NodeAddr: description: | IP address at which this node can be reached by other nodes in the swarm. type: "string" default: "" example: "10.0.0.46" LocalNodeState: $ref: "#/definitions/LocalNodeState" ControlAvailable: type: "boolean" default: false example: true Error: type: "string" default: "" RemoteManagers: description: | List of ID's and addresses of other managers in the swarm. type: "array" default: null x-nullable: true items: $ref: "#/definitions/PeerNode" example: - NodeID: "71izy0goik036k48jg985xnds" Addr: "10.0.0.158:2377" - NodeID: "79y6h1o4gv8n120drcprv5nmc" Addr: "10.0.0.159:2377" - NodeID: "k67qz4598weg5unwwffg6z1m1" Addr: "10.0.0.46:2377" Nodes: description: "Total number of nodes in the swarm." type: "integer" x-nullable: true example: 4 Managers: description: "Total number of managers in the swarm." type: "integer" x-nullable: true example: 3 Cluster: $ref: "#/definitions/ClusterInfo" LocalNodeState: description: "Current local status of this node." type: "string" default: "" enum: - "" - "inactive" - "pending" - "active" - "error" - "locked" example: "active" PeerNode: description: "Represents a peer-node in the swarm" properties: NodeID: description: "Unique identifier of for this node in the swarm." type: "string" Addr: description: | IP address and ports at which this node can be reached. type: "string" NetworkAttachmentConfig: description: | Specifies how a service should be attached to a particular network. type: "object" properties: Target: description: | The target network for attachment. Must be a network name or ID. type: "string" Aliases: description: | Discoverable alternate names for the service on this network. type: "array" items: type: "string" DriverOpts: description: | Driver attachment options for the network target. 
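The SwarmInfo block defined above is embedded in the same `GET /info` response, which makes it a convenient way to check whether the local node is an active manager. A minimal sketch, assuming the `swarm.LocalNodeStateActive` constant and the `Info.Swarm` field behave as described here:

```go
package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/api/types/swarm"
	"github.com/docker/docker/client"
)

func main() {
	ctx := context.Background()
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}

	info, err := cli.Info(ctx)
	if err != nil {
		panic(err)
	}

	s := info.Swarm // corresponds to the SwarmInfo definition above
	fmt.Println("node:", s.NodeID, "state:", s.LocalNodeState, "manager:", s.ControlAvailable)
	if s.LocalNodeState == swarm.LocalNodeStateActive {
		for _, m := range s.RemoteManagers {
			fmt.Println("manager:", m.NodeID, m.Addr)
		}
	}
}
```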
type: "object" additionalProperties: type: "string" paths: /containers/json: get: summary: "List containers" description: | Returns a list of containers. For details on the format, see the [inspect endpoint](#operation/ContainerInspect). Note that it uses a different, smaller representation of a container than inspecting a single container. For example, the list of linked containers is not propagated . operationId: "ContainerList" produces: - "application/json" parameters: - name: "all" in: "query" description: | Return all containers. By default, only running containers are shown. type: "boolean" default: false - name: "limit" in: "query" description: | Return this number of most recently created containers, including non-running ones. type: "integer" - name: "size" in: "query" description: | Return the size of container as fields `SizeRw` and `SizeRootFs`. type: "boolean" default: false - name: "filters" in: "query" description: | Filters to process on the container list, encoded as JSON (a `map[string][]string`). For example, `{"status": ["paused"]}` will only return paused containers. Available filters: - `ancestor`=(`<image-name>[:<tag>]`, `<image id>`, or `<image@digest>`) - `before`=(`<container id>` or `<container name>`) - `expose`=(`<port>[/<proto>]`|`<startport-endport>/[<proto>]`) - `exited=<int>` containers with exit code of `<int>` - `health`=(`starting`|`healthy`|`unhealthy`|`none`) - `id=<ID>` a container's ID - `isolation=`(`default`|`process`|`hyperv`) (Windows daemon only) - `is-task=`(`true`|`false`) - `label=key` or `label="key=value"` of a container label - `name=<name>` a container's name - `network`=(`<network id>` or `<network name>`) - `publish`=(`<port>[/<proto>]`|`<startport-endport>/[<proto>]`) - `since`=(`<container id>` or `<container name>`) - `status=`(`created`|`restarting`|`running`|`removing`|`paused`|`exited`|`dead`) - `volume`=(`<volume name>` or `<mount point destination>`) type: "string" responses: 200: description: "no error" schema: $ref: "#/definitions/ContainerSummary" examples: application/json: - Id: "8dfafdbc3a40" Names: - "/boring_feynman" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 1" Created: 1367854155 State: "Exited" Status: "Exit 0" Ports: - PrivatePort: 2222 PublicPort: 3333 Type: "tcp" Labels: com.example.vendor: "Acme" com.example.license: "GPL" com.example.version: "1.0" SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "2cdc4edb1ded3631c81f57966563e5c8525b81121bb3706a9a9a3ae102711f3f" Gateway: "172.17.0.1" IPAddress: "172.17.0.2" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:02" Mounts: - Name: "fac362...80535" Source: "/data" Destination: "/data" Driver: "local" Mode: "ro,Z" RW: false Propagation: "" - Id: "9cd87474be90" Names: - "/coolName" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 222222" Created: 1367854155 State: "Exited" Status: "Exit 0" Ports: [] Labels: {} SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "88eaed7b37b38c2a3f0c4bc796494fdf51b270c2d22656412a2ca5d559a64d7a" Gateway: "172.17.0.1" IPAddress: "172.17.0.8" IPPrefixLen: 16 IPv6Gateway: "" 
GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:08" Mounts: [] - Id: "3176a2479c92" Names: - "/sleepy_dog" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 3333333333333333" Created: 1367854154 State: "Exited" Status: "Exit 0" Ports: [] Labels: {} SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "8b27c041c30326d59cd6e6f510d4f8d1d570a228466f956edf7815508f78e30d" Gateway: "172.17.0.1" IPAddress: "172.17.0.6" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:06" Mounts: [] - Id: "4cb07b47f9fb" Names: - "/running_cat" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 444444444444444444444444444444444" Created: 1367854152 State: "Exited" Status: "Exit 0" Ports: [] Labels: {} SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "d91c7b2f0644403d7ef3095985ea0e2370325cd2332ff3a3225c4247328e66e9" Gateway: "172.17.0.1" IPAddress: "172.17.0.5" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:05" Mounts: [] 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Container"] /containers/create: post: summary: "Create a container" operationId: "ContainerCreate" consumes: - "application/json" - "application/octet-stream" produces: - "application/json" parameters: - name: "name" in: "query" description: | Assign the specified name to the container. Must match `/?[a-zA-Z0-9][a-zA-Z0-9_.-]+`. 
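For the container list endpoint above, the `filters` parameter is JSON-encoded on the wire; with the Go client the `filters.Args` helper builds it. A sketch listing stopped containers (the filter value is just an example):

```go
package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/api/types/filters"
	"github.com/docker/docker/client"
)

func main() {
	ctx := context.Background()
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}

	// Roughly GET /containers/json?all=true&filters={"status":["exited"]}
	f := filters.NewArgs()
	f.Add("status", "exited")

	containers, err := cli.ContainerList(ctx, types.ContainerListOptions{
		All:     true,
		Filters: f,
	})
	if err != nil {
		panic(err)
	}
	for _, c := range containers {
		fmt.Printf("%s %v %s %s\n", c.ID[:12], c.Names, c.Image, c.Status)
	}
}
```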
type: "string" pattern: "^/?[a-zA-Z0-9][a-zA-Z0-9_.-]+$" - name: "body" in: "body" description: "Container to create" schema: allOf: - $ref: "#/definitions/ContainerConfig" - type: "object" properties: HostConfig: $ref: "#/definitions/HostConfig" NetworkingConfig: $ref: "#/definitions/NetworkingConfig" example: Hostname: "" Domainname: "" User: "" AttachStdin: false AttachStdout: true AttachStderr: true Tty: false OpenStdin: false StdinOnce: false Env: - "FOO=bar" - "BAZ=quux" Cmd: - "date" Entrypoint: "" Image: "ubuntu" Labels: com.example.vendor: "Acme" com.example.license: "GPL" com.example.version: "1.0" Volumes: /volumes/data: {} WorkingDir: "" NetworkDisabled: false MacAddress: "12:34:56:78:9a:bc" ExposedPorts: 22/tcp: {} StopSignal: "SIGTERM" StopTimeout: 10 HostConfig: Binds: - "/tmp:/tmp" Links: - "redis3:redis" Memory: 0 MemorySwap: 0 MemoryReservation: 0 KernelMemory: 0 NanoCpus: 500000 CpuPercent: 80 CpuShares: 512 CpuPeriod: 100000 CpuRealtimePeriod: 1000000 CpuRealtimeRuntime: 10000 CpuQuota: 50000 CpusetCpus: "0,1" CpusetMems: "0,1" MaximumIOps: 0 MaximumIOBps: 0 BlkioWeight: 300 BlkioWeightDevice: - {} BlkioDeviceReadBps: - {} BlkioDeviceReadIOps: - {} BlkioDeviceWriteBps: - {} BlkioDeviceWriteIOps: - {} DeviceRequests: - Driver: "nvidia" Count: -1 DeviceIDs": ["0", "1", "GPU-fef8089b-4820-abfc-e83e-94318197576e"] Capabilities: [["gpu", "nvidia", "compute"]] Options: property1: "string" property2: "string" MemorySwappiness: 60 OomKillDisable: false OomScoreAdj: 500 PidMode: "" PidsLimit: 0 PortBindings: 22/tcp: - HostPort: "11022" PublishAllPorts: false Privileged: false ReadonlyRootfs: false Dns: - "8.8.8.8" DnsOptions: - "" DnsSearch: - "" VolumesFrom: - "parent" - "other:ro" CapAdd: - "NET_ADMIN" CapDrop: - "MKNOD" GroupAdd: - "newgroup" RestartPolicy: Name: "" MaximumRetryCount: 0 AutoRemove: true NetworkMode: "bridge" Devices: [] Ulimits: - {} LogConfig: Type: "json-file" Config: {} SecurityOpt: [] StorageOpt: {} CgroupParent: "" VolumeDriver: "" ShmSize: 67108864 NetworkingConfig: EndpointsConfig: isolated_nw: IPAMConfig: IPv4Address: "172.20.30.33" IPv6Address: "2001:db8:abcd::3033" LinkLocalIPs: - "169.254.34.68" - "fe80::3468" Links: - "container_1" - "container_2" Aliases: - "server_x" - "server_y" required: true responses: 201: description: "Container created successfully" schema: type: "object" title: "ContainerCreateResponse" description: "OK response to ContainerCreate operation" required: [Id, Warnings] properties: Id: description: "The ID of the created container" type: "string" x-nullable: false Warnings: description: "Warnings encountered when creating the container" type: "array" x-nullable: false items: type: "string" examples: application/json: Id: "e90e34656806" Warnings: [] 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such image" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such image: c2ada9df5af8" 409: description: "conflict" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Container"] /containers/{id}/json: get: summary: "Inspect a container" description: "Return low-level information about a container." 
operationId: "ContainerInspect" produces: - "application/json" responses: 200: description: "no error" schema: type: "object" title: "ContainerInspectResponse" properties: Id: description: "The ID of the container" type: "string" Created: description: "The time the container was created" type: "string" Path: description: "The path to the command being run" type: "string" Args: description: "The arguments to the command being run" type: "array" items: type: "string" State: x-nullable: true $ref: "#/definitions/ContainerState" Image: description: "The container's image ID" type: "string" ResolvConfPath: type: "string" HostnamePath: type: "string" HostsPath: type: "string" LogPath: type: "string" Name: type: "string" RestartCount: type: "integer" Driver: type: "string" Platform: type: "string" MountLabel: type: "string" ProcessLabel: type: "string" AppArmorProfile: type: "string" ExecIDs: description: "IDs of exec instances that are running in the container." type: "array" items: type: "string" x-nullable: true HostConfig: $ref: "#/definitions/HostConfig" GraphDriver: $ref: "#/definitions/GraphDriverData" SizeRw: description: | The size of files that have been created or changed by this container. type: "integer" format: "int64" SizeRootFs: description: "The total size of all the files in this container." type: "integer" format: "int64" Mounts: type: "array" items: $ref: "#/definitions/MountPoint" Config: $ref: "#/definitions/ContainerConfig" NetworkSettings: $ref: "#/definitions/NetworkSettings" examples: application/json: AppArmorProfile: "" Args: - "-c" - "exit 9" Config: AttachStderr: true AttachStdin: false AttachStdout: true Cmd: - "/bin/sh" - "-c" - "exit 9" Domainname: "" Env: - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" Healthcheck: Test: ["CMD-SHELL", "exit 0"] Hostname: "ba033ac44011" Image: "ubuntu" Labels: com.example.vendor: "Acme" com.example.license: "GPL" com.example.version: "1.0" MacAddress: "" NetworkDisabled: false OpenStdin: false StdinOnce: false Tty: false User: "" Volumes: /volumes/data: {} WorkingDir: "" StopSignal: "SIGTERM" StopTimeout: 10 Created: "2015-01-06T15:47:31.485331387Z" Driver: "devicemapper" ExecIDs: - "b35395de42bc8abd327f9dd65d913b9ba28c74d2f0734eeeae84fa1c616a0fca" - "3fc1232e5cd20c8de182ed81178503dc6437f4e7ef12b52cc5e8de020652f1c4" HostConfig: MaximumIOps: 0 MaximumIOBps: 0 BlkioWeight: 0 BlkioWeightDevice: - {} BlkioDeviceReadBps: - {} BlkioDeviceWriteBps: - {} BlkioDeviceReadIOps: - {} BlkioDeviceWriteIOps: - {} ContainerIDFile: "" CpusetCpus: "" CpusetMems: "" CpuPercent: 80 CpuShares: 0 CpuPeriod: 100000 CpuRealtimePeriod: 1000000 CpuRealtimeRuntime: 10000 Devices: [] DeviceRequests: - Driver: "nvidia" Count: -1 DeviceIDs": ["0", "1", "GPU-fef8089b-4820-abfc-e83e-94318197576e"] Capabilities: [["gpu", "nvidia", "compute"]] Options: property1: "string" property2: "string" IpcMode: "" LxcConf: [] Memory: 0 MemorySwap: 0 MemoryReservation: 0 KernelMemory: 0 OomKillDisable: false OomScoreAdj: 500 NetworkMode: "bridge" PidMode: "" PortBindings: {} Privileged: false ReadonlyRootfs: false PublishAllPorts: false RestartPolicy: MaximumRetryCount: 2 Name: "on-failure" LogConfig: Type: "json-file" Sysctls: net.ipv4.ip_forward: "1" Ulimits: - {} VolumeDriver: "" ShmSize: 67108864 HostnamePath: "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/hostname" HostsPath: "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/hosts" LogPath: 
"/var/lib/docker/containers/1eb5fabf5a03807136561b3c00adcd2992b535d624d5e18b6cdc6a6844d9767b/1eb5fabf5a03807136561b3c00adcd2992b535d624d5e18b6cdc6a6844d9767b-json.log" Id: "ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39" Image: "04c5d3b7b0656168630d3ba35d8889bd0e9caafcaeb3004d2bfbc47e7c5d35d2" MountLabel: "" Name: "/boring_euclid" NetworkSettings: Bridge: "" SandboxID: "" HairpinMode: false LinkLocalIPv6Address: "" LinkLocalIPv6PrefixLen: 0 SandboxKey: "" EndpointID: "" Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 IPAddress: "" IPPrefixLen: 0 IPv6Gateway: "" MacAddress: "" Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "7587b82f0dada3656fda26588aee72630c6fab1536d36e394b2bfbcf898c971d" Gateway: "172.17.0.1" IPAddress: "172.17.0.2" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:12:00:02" Path: "/bin/sh" ProcessLabel: "" ResolvConfPath: "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/resolv.conf" RestartCount: 1 State: Error: "" ExitCode: 9 FinishedAt: "2015-01-06T15:47:32.080254511Z" Health: Status: "healthy" FailingStreak: 0 Log: - Start: "2019-12-22T10:59:05.6385933Z" End: "2019-12-22T10:59:05.8078452Z" ExitCode: 0 Output: "" OOMKilled: false Dead: false Paused: false Pid: 0 Restarting: false Running: true StartedAt: "2015-01-06T15:47:32.072697474Z" Status: "running" Mounts: - Name: "fac362...80535" Source: "/data" Destination: "/data" Driver: "local" Mode: "ro,Z" RW: false Propagation: "" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "size" in: "query" type: "boolean" default: false description: "Return the size of container as fields `SizeRw` and `SizeRootFs`" tags: ["Container"] /containers/{id}/top: get: summary: "List processes running inside a container" description: | On Unix systems, this is done by running the `ps` command. This endpoint is not supported on Windows. operationId: "ContainerTop" responses: 200: description: "no error" schema: type: "object" title: "ContainerTopResponse" description: "OK response to ContainerTop operation" properties: Titles: description: "The ps column titles" type: "array" items: type: "string" Processes: description: | Each process running in the container, where each is process is an array of values corresponding to the titles. type: "array" items: type: "array" items: type: "string" examples: application/json: Titles: - "UID" - "PID" - "PPID" - "C" - "STIME" - "TTY" - "TIME" - "CMD" Processes: - - "root" - "13642" - "882" - "0" - "17:03" - "pts/0" - "00:00:00" - "/bin/bash" - - "root" - "13735" - "13642" - "0" - "17:06" - "pts/0" - "00:00:00" - "sleep 10" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "ps_args" in: "query" description: "The arguments to pass to `ps`. 
For example, `aux`" type: "string" default: "-ef" tags: ["Container"] /containers/{id}/logs: get: summary: "Get container logs" description: | Get `stdout` and `stderr` logs from a container. Note: This endpoint works only for containers with the `json-file` or `journald` logging driver. operationId: "ContainerLogs" responses: 200: description: | logs returned as a stream in response body. For the stream format, [see the documentation for the attach endpoint](#operation/ContainerAttach). Note that unlike the attach endpoint, the logs endpoint does not upgrade the connection and does not set Content-Type. schema: type: "string" format: "binary" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "follow" in: "query" description: "Keep connection after returning logs." type: "boolean" default: false - name: "stdout" in: "query" description: "Return logs from `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Return logs from `stderr`" type: "boolean" default: false - name: "since" in: "query" description: "Only return logs since this time, as a UNIX timestamp" type: "integer" default: 0 - name: "until" in: "query" description: "Only return logs before this time, as a UNIX timestamp" type: "integer" default: 0 - name: "timestamps" in: "query" description: "Add timestamps to every log line" type: "boolean" default: false - name: "tail" in: "query" description: | Only return this number of log lines from the end of the logs. Specify as an integer or `all` to output all log lines. type: "string" default: "all" tags: ["Container"] /containers/{id}/changes: get: summary: "Get changes on a container’s filesystem" description: | Returns which files in a container's filesystem have been added, deleted, or modified. The `Kind` of modification can be one of: - `0`: Modified - `1`: Added - `2`: Deleted operationId: "ContainerChanges" produces: ["application/json"] responses: 200: description: "The list of changes" schema: type: "array" items: type: "object" x-go-name: "ContainerChangeResponseItem" title: "ContainerChangeResponseItem" description: "change item in response to ContainerChanges operation" required: [Path, Kind] properties: Path: description: "Path to file that has changed" type: "string" x-nullable: false Kind: description: "Kind of change" type: "integer" format: "uint8" enum: [0, 1, 2] x-nullable: false examples: application/json: - Path: "/dev" Kind: 0 - Path: "/dev/kmsg" Kind: 1 - Path: "/test" Kind: 1 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/export: get: summary: "Export a container" description: "Export the contents of a container as a tarball." 
operationId: "ContainerExport" produces: - "application/octet-stream" responses: 200: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/stats: get: summary: "Get container stats based on resource usage" description: | This endpoint returns a live stream of a container’s resource usage statistics. The `precpu_stats` is the CPU statistic of the *previous* read, and is used to calculate the CPU usage percentage. It is not an exact copy of the `cpu_stats` field. If either `precpu_stats.online_cpus` or `cpu_stats.online_cpus` is nil then for compatibility with older daemons the length of the corresponding `cpu_usage.percpu_usage` array should be used. On a cgroup v2 host, the following fields are not set * `blkio_stats`: all fields other than `io_service_bytes_recursive` * `cpu_stats`: `cpu_usage.percpu_usage` * `memory_stats`: `max_usage` and `failcnt` Also, `memory_stats.stats` fields are incompatible with cgroup v1. To calculate the values shown by the `stats` command of the docker cli tool the following formulas can be used: * used_memory = `memory_stats.usage - memory_stats.stats.cache` * available_memory = `memory_stats.limit` * Memory usage % = `(used_memory / available_memory) * 100.0` * cpu_delta = `cpu_stats.cpu_usage.total_usage - precpu_stats.cpu_usage.total_usage` * system_cpu_delta = `cpu_stats.system_cpu_usage - precpu_stats.system_cpu_usage` * number_cpus = `lenght(cpu_stats.cpu_usage.percpu_usage)` or `cpu_stats.online_cpus` * CPU usage % = `(cpu_delta / system_cpu_delta) * number_cpus * 100.0` operationId: "ContainerStats" produces: ["application/json"] responses: 200: description: "no error" schema: type: "object" examples: application/json: read: "2015-01-08T22:57:31.547920715Z" pids_stats: current: 3 networks: eth0: rx_bytes: 5338 rx_dropped: 0 rx_errors: 0 rx_packets: 36 tx_bytes: 648 tx_dropped: 0 tx_errors: 0 tx_packets: 8 eth5: rx_bytes: 4641 rx_dropped: 0 rx_errors: 0 rx_packets: 26 tx_bytes: 690 tx_dropped: 0 tx_errors: 0 tx_packets: 9 memory_stats: stats: total_pgmajfault: 0 cache: 0 mapped_file: 0 total_inactive_file: 0 pgpgout: 414 rss: 6537216 total_mapped_file: 0 writeback: 0 unevictable: 0 pgpgin: 477 total_unevictable: 0 pgmajfault: 0 total_rss: 6537216 total_rss_huge: 6291456 total_writeback: 0 total_inactive_anon: 0 rss_huge: 6291456 hierarchical_memory_limit: 67108864 total_pgfault: 964 total_active_file: 0 active_anon: 6537216 total_active_anon: 6537216 total_pgpgout: 414 total_cache: 0 inactive_anon: 0 active_file: 0 pgfault: 964 inactive_file: 0 total_pgpgin: 477 max_usage: 6651904 usage: 6537216 failcnt: 0 limit: 67108864 blkio_stats: {} cpu_stats: cpu_usage: percpu_usage: - 8646879 - 24472255 - 36438778 - 30657443 usage_in_usermode: 50000000 total_usage: 100215355 usage_in_kernelmode: 30000000 system_cpu_usage: 739306590000000 online_cpus: 4 throttling_data: periods: 0 throttled_periods: 0 throttled_time: 0 precpu_stats: cpu_usage: percpu_usage: - 8646879 - 24350896 - 36438778 - 30657443 usage_in_usermode: 50000000 total_usage: 100093996 usage_in_kernelmode: 30000000 system_cpu_usage: 9492140000000 online_cpus: 4 throttling_data: periods: 0 throttled_periods: 0 throttled_time: 0 404: description: "no such 
container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "stream" in: "query" description: | Stream the output. If false, the stats will be output once and then it will disconnect. type: "boolean" default: true - name: "one-shot" in: "query" description: | Only get a single stat instead of waiting for 2 cycles. Must be used with `stream=false`. type: "boolean" default: false tags: ["Container"] /containers/{id}/resize: post: summary: "Resize a container TTY" description: "Resize the TTY for a container." operationId: "ContainerResize" consumes: - "application/octet-stream" produces: - "text/plain" responses: 200: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "cannot resize container" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "h" in: "query" description: "Height of the TTY session in characters" type: "integer" - name: "w" in: "query" description: "Width of the TTY session in characters" type: "integer" tags: ["Container"] /containers/{id}/start: post: summary: "Start a container" operationId: "ContainerStart" responses: 204: description: "no error" 304: description: "container already started" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "detachKeys" in: "query" description: | Override the key sequence for detaching a container. Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,` or `_`. 
type: "string" tags: ["Container"] /containers/{id}/stop: post: summary: "Stop a container" operationId: "ContainerStop" responses: 204: description: "no error" 304: description: "container already stopped" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "t" in: "query" description: "Number of seconds to wait before killing the container" type: "integer" tags: ["Container"] /containers/{id}/restart: post: summary: "Restart a container" operationId: "ContainerRestart" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "t" in: "query" description: "Number of seconds to wait before killing the container" type: "integer" tags: ["Container"] /containers/{id}/kill: post: summary: "Kill a container" description: | Send a POSIX signal to a container, defaulting to killing to the container. operationId: "ContainerKill" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "container is not running" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "Container d37cde0fe4ad63c3a7252023b2f9800282894247d145cb5933ddf6e52cc03a28 is not running" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "signal" in: "query" description: "Signal to send to the container as an integer or string (e.g. `SIGINT`)" type: "string" default: "SIGKILL" tags: ["Container"] /containers/{id}/update: post: summary: "Update a container" description: | Change various configuration options of a container without having to recreate it. operationId: "ContainerUpdate" consumes: ["application/json"] produces: ["application/json"] responses: 200: description: "The container has been updated." 
schema: type: "object" title: "ContainerUpdateResponse" description: "OK response to ContainerUpdate operation" properties: Warnings: type: "array" items: type: "string" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "update" in: "body" required: true schema: allOf: - $ref: "#/definitions/Resources" - type: "object" properties: RestartPolicy: $ref: "#/definitions/RestartPolicy" example: BlkioWeight: 300 CpuShares: 512 CpuPeriod: 100000 CpuQuota: 50000 CpuRealtimePeriod: 1000000 CpuRealtimeRuntime: 10000 CpusetCpus: "0,1" CpusetMems: "0" Memory: 314572800 MemorySwap: 514288000 MemoryReservation: 209715200 KernelMemory: 52428800 RestartPolicy: MaximumRetryCount: 4 Name: "on-failure" tags: ["Container"] /containers/{id}/rename: post: summary: "Rename a container" operationId: "ContainerRename" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "name already in use" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "name" in: "query" required: true description: "New name for the container" type: "string" tags: ["Container"] /containers/{id}/pause: post: summary: "Pause a container" description: | Use the freezer cgroup to suspend all processes in a container. Traditionally, when suspending a process the `SIGSTOP` signal is used, which is observable by the process being suspended. With the freezer cgroup the process is unaware, and unable to capture, that it is being suspended, and subsequently resumed. operationId: "ContainerPause" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/unpause: post: summary: "Unpause a container" description: "Resume a container which has been paused." operationId: "ContainerUnpause" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/attach: post: summary: "Attach to a container" description: | Attach to a container to read its output or send it input. You can attach to the same container multiple times and you can reattach to containers that have been detached. Either the `stream` or `logs` parameter must be `true` for this endpoint to do anything. See the [documentation for the `docker attach` command](https://docs.docker.com/engine/reference/commandline/attach/) for more details. 
### Hijacking This endpoint hijacks the HTTP connection to transport `stdin`, `stdout`, and `stderr` on the same socket. This is the response from the daemon for an attach request: ``` HTTP/1.1 200 OK Content-Type: application/vnd.docker.raw-stream [STREAM] ``` After the headers and two new lines, the TCP connection can now be used for raw, bidirectional communication between the client and server. To hint potential proxies about connection hijacking, the Docker client can also optionally send connection upgrade headers. For example, the client sends this request to upgrade the connection: ``` POST /containers/16253994b7c4/attach?stream=1&stdout=1 HTTP/1.1 Upgrade: tcp Connection: Upgrade ``` The Docker daemon will respond with a `101 UPGRADED` response, and will similarly follow with the raw stream: ``` HTTP/1.1 101 UPGRADED Content-Type: application/vnd.docker.raw-stream Connection: Upgrade Upgrade: tcp [STREAM] ``` ### Stream format When the TTY setting is disabled in [`POST /containers/create`](#operation/ContainerCreate), the stream over the hijacked connected is multiplexed to separate out `stdout` and `stderr`. The stream consists of a series of frames, each containing a header and a payload. The header contains the information which the stream writes (`stdout` or `stderr`). It also contains the size of the associated frame encoded in the last four bytes (`uint32`). It is encoded on the first eight bytes like this: ```go header := [8]byte{STREAM_TYPE, 0, 0, 0, SIZE1, SIZE2, SIZE3, SIZE4} ``` `STREAM_TYPE` can be: - 0: `stdin` (is written on `stdout`) - 1: `stdout` - 2: `stderr` `SIZE1, SIZE2, SIZE3, SIZE4` are the four bytes of the `uint32` size encoded as big endian. Following the header is the payload, which is the specified number of bytes of `STREAM_TYPE`. The simplest way to implement this protocol is the following: 1. Read 8 bytes. 2. Choose `stdout` or `stderr` depending on the first byte. 3. Extract the frame size from the last four bytes. 4. Read the extracted size and output it on the correct output. 5. Goto 1. ### Stream format when using a TTY When the TTY setting is enabled in [`POST /containers/create`](#operation/ContainerCreate), the stream is not multiplexed. The data exchanged over the hijacked connection is simply the raw data from the process PTY and client's `stdin`. operationId: "ContainerAttach" produces: - "application/vnd.docker.raw-stream" responses: 101: description: "no error, hints proxy about hijacking" 200: description: "no error, no upgrade header found" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "detachKeys" in: "query" description: | Override the key sequence for detaching a container.Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,` or `_`. type: "string" - name: "logs" in: "query" description: | Replay previous logs from the container. This is useful for attaching to a container that has started and you want to output everything since the container started. If `stream` is also enabled, once all the previous output has been returned, it will seamlessly transition into streaming current output. 
type: "boolean" default: false - name: "stream" in: "query" description: | Stream attached streams from the time the request was made onwards. type: "boolean" default: false - name: "stdin" in: "query" description: "Attach to `stdin`" type: "boolean" default: false - name: "stdout" in: "query" description: "Attach to `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Attach to `stderr`" type: "boolean" default: false tags: ["Container"] /containers/{id}/attach/ws: get: summary: "Attach to a container via a websocket" operationId: "ContainerAttachWebsocket" responses: 101: description: "no error, hints proxy about hijacking" 200: description: "no error, no upgrade header found" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "detachKeys" in: "query" description: | Override the key sequence for detaching a container.Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,`, or `_`. type: "string" - name: "logs" in: "query" description: "Return logs" type: "boolean" default: false - name: "stream" in: "query" description: "Return stream" type: "boolean" default: false - name: "stdin" in: "query" description: "Attach to `stdin`" type: "boolean" default: false - name: "stdout" in: "query" description: "Attach to `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Attach to `stderr`" type: "boolean" default: false tags: ["Container"] /containers/{id}/wait: post: summary: "Wait for a container" description: "Block until a container stops, then returns the exit code." operationId: "ContainerWait" produces: ["application/json"] responses: 200: description: "The container has exit." schema: type: "object" title: "ContainerWaitResponse" description: "OK response to ContainerWait operation" required: [StatusCode] properties: StatusCode: description: "Exit code of the container" type: "integer" x-nullable: false Error: description: "container waiting error, if any" type: "object" properties: Message: description: "Details of an error" type: "string" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "condition" in: "query" description: | Wait until a container state reaches the given condition, either 'not-running' (default), 'next-exit', or 'removed'. type: "string" default: "not-running" tags: ["Container"] /containers/{id}: delete: summary: "Remove a container" operationId: "ContainerDelete" responses: 204: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "conflict" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: | You cannot remove a running container: c2ada9df5af8. 
Stop the container before attempting removal or force remove 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "v" in: "query" description: "Remove anonymous volumes associated with the container." type: "boolean" default: false - name: "force" in: "query" description: "If the container is running, kill it before removing it." type: "boolean" default: false - name: "link" in: "query" description: "Remove the specified link associated with the container." type: "boolean" default: false tags: ["Container"] /containers/{id}/archive: head: summary: "Get information about files in a container" description: | A response header `X-Docker-Container-Path-Stat` is returned, containing a base64 - encoded JSON object with some filesystem header information about the path. operationId: "ContainerArchiveInfo" responses: 200: description: "no error" headers: X-Docker-Container-Path-Stat: type: "string" description: | A base64 - encoded JSON object with some filesystem header information about the path 400: description: "Bad parameter" schema: allOf: - $ref: "#/definitions/ErrorResponse" - type: "object" properties: message: description: | The error message. Either "must specify path parameter" (path cannot be empty) or "not a directory" (path was asserted to be a directory but exists as a file). type: "string" x-nullable: false 404: description: "Container or path does not exist" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "path" in: "query" required: true description: "Resource in the container’s filesystem to archive." type: "string" tags: ["Container"] get: summary: "Get an archive of a filesystem resource in a container" description: "Get a tar archive of a resource in the filesystem of container id." operationId: "ContainerArchive" produces: ["application/x-tar"] responses: 200: description: "no error" 400: description: "Bad parameter" schema: allOf: - $ref: "#/definitions/ErrorResponse" - type: "object" properties: message: description: | The error message. Either "must specify path parameter" (path cannot be empty) or "not a directory" (path was asserted to be a directory but exists as a file). type: "string" x-nullable: false 404: description: "Container or path does not exist" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "path" in: "query" required: true description: "Resource in the container’s filesystem to archive." type: "string" tags: ["Container"] put: summary: "Extract an archive of files or folders to a directory in a container" description: "Upload a tar archive to be extracted to a path in the filesystem of container id." 
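# Example (not part of the API schema): a minimal Go sketch for the
# `HEAD /containers/{id}/archive` endpoint documented above. It issues the HEAD
# request and decodes the `X-Docker-Container-Path-Stat` header; standard (non-URL)
# base64 is assumed for that header. The default unix socket, the hypothetical
# container name "my-container", and the path /etc/hostname are assumptions.
#
#   package main
#
#   import (
#       "context"
#       "encoding/base64"
#       "fmt"
#       "net"
#       "net/http"
#   )
#
#   func main() {
#       c := &http.Client{Transport: &http.Transport{
#           DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
#               return (&net.Dialer{}).DialContext(ctx, "unix", "/var/run/docker.sock")
#           },
#       }}
#       resp, err := c.Head("http://docker/containers/my-container/archive?path=/etc/hostname")
#       if err != nil {
#           panic(err)
#       }
#       resp.Body.Close()
#       // The header carries a base64-encoded JSON object describing the path.
#       raw, err := base64.StdEncoding.DecodeString(resp.Header.Get("X-Docker-Container-Path-Stat"))
#       if err != nil {
#           panic(err)
#       }
#       fmt.Println(resp.Status, string(raw))
#   }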
operationId: "PutContainerArchive" consumes: ["application/x-tar", "application/octet-stream"] responses: 200: description: "The content was extracted successfully" 400: description: "Bad parameter" schema: $ref: "#/definitions/ErrorResponse" 403: description: "Permission denied, the volume or container rootfs is marked as read-only." schema: $ref: "#/definitions/ErrorResponse" 404: description: "No such container or path does not exist inside the container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "path" in: "query" required: true description: "Path to a directory in the container to extract the archive’s contents into. " type: "string" - name: "noOverwriteDirNonDir" in: "query" description: | If `1`, `true`, or `True` then it will be an error if unpacking the given content would cause an existing directory to be replaced with a non-directory and vice versa. type: "string" - name: "copyUIDGID" in: "query" description: | If `1`, `true`, then it will copy UID/GID maps to the dest file or dir type: "string" - name: "inputStream" in: "body" required: true description: | The input stream must be a tar archive compressed with one of the following algorithms: `identity` (no compression), `gzip`, `bzip2`, or `xz`. schema: type: "string" format: "binary" tags: ["Container"] /containers/prune: post: summary: "Delete stopped containers" produces: - "application/json" operationId: "ContainerPrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `until=<timestamp>` Prune containers created before this timestamp. The `<timestamp>` can be Unix timestamps, date formatted timestamps, or Go duration strings (e.g. `10m`, `1h30m`) computed relative to the daemon machine’s time. - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune containers with (or without, in case `label!=...` is used) the specified labels. type: "string" responses: 200: description: "No error" schema: type: "object" title: "ContainerPruneResponse" properties: ContainersDeleted: description: "Container IDs that were deleted" type: "array" items: type: "string" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Container"] /images/json: get: summary: "List Images" description: "Returns a list of images on the server. Note that it uses a different, smaller representation of an image than inspecting a single image." 
operationId: "ImageList" produces: - "application/json" responses: 200: description: "Summary image data for the images matching the query" schema: type: "array" items: $ref: "#/definitions/ImageSummary" examples: application/json: - Id: "sha256:e216a057b1cb1efc11f8a268f37ef62083e70b1b38323ba252e25ac88904a7e8" ParentId: "" RepoTags: - "ubuntu:12.04" - "ubuntu:precise" RepoDigests: - "ubuntu@sha256:992069aee4016783df6345315302fa59681aae51a8eeb2f889dea59290f21787" Created: 1474925151 Size: 103579269 VirtualSize: 103579269 SharedSize: 0 Labels: {} Containers: 2 - Id: "sha256:3e314f95dcace0f5e4fd37b10862fe8398e3c60ed36600bc0ca5fda78b087175" ParentId: "" RepoTags: - "ubuntu:12.10" - "ubuntu:quantal" RepoDigests: - "ubuntu@sha256:002fba3e3255af10be97ea26e476692a7ebed0bb074a9ab960b2e7a1526b15d7" - "ubuntu@sha256:68ea0200f0b90df725d99d823905b04cf844f6039ef60c60bf3e019915017bd3" Created: 1403128455 Size: 172064416 VirtualSize: 172064416 SharedSize: 0 Labels: {} Containers: 5 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "all" in: "query" description: "Show all images. Only images from a final layer (no children) are shown by default." type: "boolean" default: false - name: "filters" in: "query" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the images list. Available filters: - `before`=(`<image-name>[:<tag>]`, `<image id>` or `<image@digest>`) - `dangling=true` - `label=key` or `label="key=value"` of an image label - `reference`=(`<image-name>[:<tag>]`) - `since`=(`<image-name>[:<tag>]`, `<image id>` or `<image@digest>`) type: "string" - name: "shared-size" in: "query" description: "Compute and show shared size as a `SharedSize` field on each image." type: "boolean" default: false - name: "digests" in: "query" description: "Show digest information as a `RepoDigests` field on each image." type: "boolean" default: false tags: ["Image"] /build: post: summary: "Build an image" description: | Build an image from a tar archive with a `Dockerfile` in it. The `Dockerfile` specifies how the image is built from the tar archive. It is typically in the archive's root, but can be at a different path or have a different name by specifying the `dockerfile` parameter. [See the `Dockerfile` reference for more information](https://docs.docker.com/engine/reference/builder/). The Docker daemon performs a preliminary validation of the `Dockerfile` before starting the build, and returns an error if the syntax is incorrect. After that, each instruction is run one-by-one until the ID of the new image is output. The build is canceled if the client drops the connection by quitting or being killed. operationId: "ImageBuild" consumes: - "application/octet-stream" produces: - "application/json" parameters: - name: "inputStream" in: "body" description: "A tar archive compressed with one of the following algorithms: identity (no compression), gzip, bzip2, xz." schema: type: "string" format: "binary" - name: "dockerfile" in: "query" description: "Path within the build context to the `Dockerfile`. This is ignored if `remote` is specified and points to an external `Dockerfile`." type: "string" default: "Dockerfile" - name: "t" in: "query" description: "A name and optional tag to apply to the image in the `name:tag` format. If you omit the tag the default `latest` value is assumed. You can provide several `t` parameters." 
type: "string" - name: "extrahosts" in: "query" description: "Extra hosts to add to /etc/hosts" type: "string" - name: "remote" in: "query" description: "A Git repository URI or HTTP/HTTPS context URI. If the URI points to a single text file, the file’s contents are placed into a file called `Dockerfile` and the image is built from that file. If the URI points to a tarball, the file is downloaded by the daemon and the contents therein used as the context for the build. If the URI points to a tarball and the `dockerfile` parameter is also specified, there must be a file with the corresponding path inside the tarball." type: "string" - name: "q" in: "query" description: "Suppress verbose build output." type: "boolean" default: false - name: "nocache" in: "query" description: "Do not use the cache when building the image." type: "boolean" default: false - name: "cachefrom" in: "query" description: "JSON array of images used for build cache resolution." type: "string" - name: "pull" in: "query" description: "Attempt to pull the image even if an older image exists locally." type: "string" - name: "rm" in: "query" description: "Remove intermediate containers after a successful build." type: "boolean" default: true - name: "forcerm" in: "query" description: "Always remove intermediate containers, even upon failure." type: "boolean" default: false - name: "memory" in: "query" description: "Set memory limit for build." type: "integer" - name: "memswap" in: "query" description: "Total memory (memory + swap). Set as `-1` to disable swap." type: "integer" - name: "cpushares" in: "query" description: "CPU shares (relative weight)." type: "integer" - name: "cpusetcpus" in: "query" description: "CPUs in which to allow execution (e.g., `0-3`, `0,1`)." type: "string" - name: "cpuperiod" in: "query" description: "The length of a CPU period in microseconds." type: "integer" - name: "cpuquota" in: "query" description: "Microseconds of CPU time that the container can get in a CPU period." type: "integer" - name: "buildargs" in: "query" description: > JSON map of string pairs for build-time variables. Users pass these values at build-time. Docker uses the buildargs as the environment context for commands run via the `Dockerfile` RUN instruction, or for variable expansion in other `Dockerfile` instructions. This is not meant for passing secret values. For example, the build arg `FOO=bar` would become `{"FOO":"bar"}` in JSON. This would result in the query parameter `buildargs={"FOO":"bar"}`. Note that `{"FOO":"bar"}` should be URI component encoded. [Read more about the buildargs instruction.](https://docs.docker.com/engine/reference/builder/#arg) type: "string" - name: "shmsize" in: "query" description: "Size of `/dev/shm` in bytes. The size must be greater than 0. If omitted the system uses 64MB." type: "integer" - name: "squash" in: "query" description: "Squash the resulting images layers into a single layer. *(Experimental release only.)*" type: "boolean" - name: "labels" in: "query" description: "Arbitrary key/value labels to set on the image, as a JSON map of string pairs." type: "string" - name: "networkmode" in: "query" description: | Sets the networking mode for the run commands during build. Supported standard values are: `bridge`, `host`, `none`, and `container:<name|id>`. Any other value is taken as a custom network's name or ID to which this container should connect to. 
type: "string" - name: "Content-type" in: "header" type: "string" enum: - "application/x-tar" default: "application/x-tar" - name: "X-Registry-Config" in: "header" description: | This is a base64-encoded JSON object with auth configurations for multiple registries that a build may refer to. The key is a registry URL, and the value is an auth configuration object, [as described in the authentication section](#section/Authentication). For example: ``` { "docker.example.com": { "username": "janedoe", "password": "hunter2" }, "https://index.docker.io/v1/": { "username": "mobydock", "password": "conta1n3rize14" } } ``` Only the registry domain name (and port if not the default 443) are required. However, for legacy reasons, the Docker Hub registry must be specified with both a `https://` prefix and a `/v1/` suffix even though Docker will prefer to use the v2 registry API. type: "string" - name: "platform" in: "query" description: "Platform in the format os[/arch[/variant]]" type: "string" default: "" - name: "target" in: "query" description: "Target build stage" type: "string" default: "" - name: "outputs" in: "query" description: "BuildKit output configuration" type: "string" default: "" responses: 200: description: "no error" 400: description: "Bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Image"] /build/prune: post: summary: "Delete builder cache" produces: - "application/json" operationId: "BuildPrune" parameters: - name: "keep-storage" in: "query" description: "Amount of disk space in bytes to keep for cache" type: "integer" format: "int64" - name: "all" in: "query" type: "boolean" description: "Remove all types of build cache" - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the list of build cache objects. Available filters: - `until=<duration>`: duration relative to daemon's time, during which build cache was not used, in Go's duration format (e.g., '24h') - `id=<id>` - `parent=<id>` - `type=<string>` - `description=<string>` - `inuse` - `shared` - `private` responses: 200: description: "No error" schema: type: "object" title: "BuildPruneResponse" properties: CachesDeleted: type: "array" items: description: "ID of build cache object" type: "string" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Image"] /images/create: post: summary: "Create an image" description: "Create an image by either pulling it from a registry or importing it." operationId: "ImageCreate" consumes: - "text/plain" - "application/octet-stream" produces: - "application/json" responses: 200: description: "no error" 404: description: "repository does not exist or no read access" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "fromImage" in: "query" description: "Name of the image to pull. The name may include a tag or digest. This parameter may only be used when pulling an image. The pull is cancelled if the HTTP connection is closed." type: "string" - name: "fromSrc" in: "query" description: "Source to import. The value may be a URL from which the image can be retrieved or `-` to read the image from the request body. This parameter may only be used when importing an image." 
type: "string" - name: "repo" in: "query" description: "Repository name given to an image when it is imported. The repo may include a tag. This parameter may only be used when importing an image." type: "string" - name: "tag" in: "query" description: "Tag or digest. If empty when pulling an image, this causes all tags for the given image to be pulled." type: "string" - name: "message" in: "query" description: "Set commit message for imported image." type: "string" - name: "inputImage" in: "body" description: "Image content if the value `-` has been specified in fromSrc query parameter" schema: type: "string" required: false - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration. Refer to the [authentication section](#section/Authentication) for details. type: "string" - name: "platform" in: "query" description: "Platform in the format os[/arch[/variant]]" type: "string" default: "" tags: ["Image"] /images/{name}/json: get: summary: "Inspect an image" description: "Return low-level information about an image." operationId: "ImageInspect" produces: - "application/json" responses: 200: description: "No error" schema: $ref: "#/definitions/Image" examples: application/json: Id: "sha256:85f05633ddc1c50679be2b16a0479ab6f7637f8884e0cfe0f4d20e1ebb3d6e7c" Container: "cb91e48a60d01f1e27028b4fc6819f4f290b3cf12496c8176ec714d0d390984a" Comment: "" Os: "linux" Architecture: "amd64" Parent: "sha256:91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c" ContainerConfig: Tty: false Hostname: "e611e15f9c9d" Domainname: "" AttachStdout: false PublishService: "" AttachStdin: false OpenStdin: false StdinOnce: false NetworkDisabled: false OnBuild: [] Image: "91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c" User: "" WorkingDir: "" MacAddress: "" AttachStderr: false Labels: com.example.license: "GPL" com.example.version: "1.0" com.example.vendor: "Acme" Env: - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" Cmd: - "/bin/sh" - "-c" - "#(nop) LABEL com.example.vendor=Acme com.example.license=GPL com.example.version=1.0" DockerVersion: "1.9.0-dev" VirtualSize: 188359297 Size: 0 Author: "" Created: "2015-09-10T08:30:53.26995814Z" GraphDriver: Name: "aufs" Data: {} RepoDigests: - "localhost:5000/test/busybox/example@sha256:cbbf2f9a99b47fc460d422812b6a5adff7dfee951d8fa2e4a98caa0382cfbdbf" RepoTags: - "example:1.0" - "example:latest" - "example:stable" Config: Image: "91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c" NetworkDisabled: false OnBuild: [] StdinOnce: false PublishService: "" AttachStdin: false OpenStdin: false Domainname: "" AttachStdout: false Tty: false Hostname: "e611e15f9c9d" Cmd: - "/bin/bash" Env: - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" Labels: com.example.vendor: "Acme" com.example.version: "1.0" com.example.license: "GPL" MacAddress: "" AttachStderr: false WorkingDir: "" User: "" RootFS: Type: "layers" Layers: - "sha256:1834950e52ce4d5a88a1bbd131c537f4d0e56d10ff0dd69e66be3b7dfa9df7e6" - "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such image: someimage (tag: latest)" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or id" type: "string" required: true tags: ["Image"] /images/{name}/history: get: summary: "Get the history of an image" 
description: "Return parent layers of an image." operationId: "ImageHistory" produces: ["application/json"] responses: 200: description: "List of image layers" schema: type: "array" items: type: "object" x-go-name: HistoryResponseItem title: "HistoryResponseItem" description: "individual image layer information in response to ImageHistory operation" required: [Id, Created, CreatedBy, Tags, Size, Comment] properties: Id: type: "string" x-nullable: false Created: type: "integer" format: "int64" x-nullable: false CreatedBy: type: "string" x-nullable: false Tags: type: "array" items: type: "string" Size: type: "integer" format: "int64" x-nullable: false Comment: type: "string" x-nullable: false examples: application/json: - Id: "3db9c44f45209632d6050b35958829c3a2aa256d81b9a7be45b362ff85c54710" Created: 1398108230 CreatedBy: "/bin/sh -c #(nop) ADD file:eb15dbd63394e063b805a3c32ca7bf0266ef64676d5a6fab4801f2e81e2a5148 in /" Tags: - "ubuntu:lucid" - "ubuntu:10.04" Size: 182964289 Comment: "" - Id: "6cfa4d1f33fb861d4d114f43b25abd0ac737509268065cdfd69d544a59c85ab8" Created: 1398108222 CreatedBy: "/bin/sh -c #(nop) MAINTAINER Tianon Gravi <[email protected]> - mkimage-debootstrap.sh -i iproute,iputils-ping,ubuntu-minimal -t lucid.tar.xz lucid http://archive.ubuntu.com/ubuntu/" Tags: [] Size: 0 Comment: "" - Id: "511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158" Created: 1371157430 CreatedBy: "" Tags: - "scratch12:latest" - "scratch:latest" Size: 0 Comment: "Imported from -" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID" type: "string" required: true tags: ["Image"] /images/{name}/push: post: summary: "Push an image" description: | Push an image to a registry. If you wish to push an image on to a private registry, that image must already have a tag which references the registry. For example, `registry.example.com/myimage:latest`. The push is cancelled if the HTTP connection is closed. operationId: "ImagePush" consumes: - "application/octet-stream" responses: 200: description: "No error" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID." type: "string" required: true - name: "tag" in: "query" description: "The tag to associate with the image on the registry." type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration. Refer to the [authentication section](#section/Authentication) for details. type: "string" required: true tags: ["Image"] /images/{name}/tag: post: summary: "Tag an image" description: "Tag an image so that it becomes part of a repository." operationId: "ImageTag" responses: 201: description: "No error" 400: description: "Bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Conflict" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID to tag." type: "string" required: true - name: "repo" in: "query" description: "The repository to tag in. For example, `someuser/someimage`." 
type: "string" - name: "tag" in: "query" description: "The name of the new tag." type: "string" tags: ["Image"] /images/{name}: delete: summary: "Remove an image" description: | Remove an image, along with any untagged parent images that were referenced by that image. Images can't be removed if they have descendant images, are being used by a running container or are being used by a build. operationId: "ImageDelete" produces: ["application/json"] responses: 200: description: "The image was deleted successfully" schema: type: "array" items: $ref: "#/definitions/ImageDeleteResponseItem" examples: application/json: - Untagged: "3e2f21a89f" - Deleted: "3e2f21a89f" - Deleted: "53b4f83ac9" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Conflict" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID" type: "string" required: true - name: "force" in: "query" description: "Remove the image even if it is being used by stopped containers or has other tags" type: "boolean" default: false - name: "noprune" in: "query" description: "Do not delete untagged parent images" type: "boolean" default: false tags: ["Image"] /images/search: get: summary: "Search images" description: "Search for an image on Docker Hub." operationId: "ImageSearch" produces: - "application/json" responses: 200: description: "No error" schema: type: "array" items: type: "object" title: "ImageSearchResponseItem" properties: description: type: "string" is_official: type: "boolean" is_automated: type: "boolean" name: type: "string" star_count: type: "integer" examples: application/json: - description: "" is_official: false is_automated: false name: "wma55/u1210sshd" star_count: 0 - description: "" is_official: false is_automated: false name: "jdswinbank/sshd" star_count: 0 - description: "" is_official: false is_automated: false name: "vgauthier/sshd" star_count: 0 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "term" in: "query" description: "Term to search" type: "string" required: true - name: "limit" in: "query" description: "Maximum number of results to return" type: "integer" - name: "filters" in: "query" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the images list. Available filters: - `is-automated=(true|false)` - `is-official=(true|false)` - `stars=<number>` Matches images that has at least 'number' stars. type: "string" tags: ["Image"] /images/prune: post: summary: "Delete unused images" produces: - "application/json" operationId: "ImagePrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `dangling=<boolean>` When set to `true` (or `1`), prune only unused *and* untagged images. When set to `false` (or `0`), all unused images are pruned. - `until=<string>` Prune images created before this timestamp. The `<timestamp>` can be Unix timestamps, date formatted timestamps, or Go duration strings (e.g. `10m`, `1h30m`) computed relative to the daemon machine’s time. - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune images with (or without, in case `label!=...` is used) the specified labels. 
type: "string" responses: 200: description: "No error" schema: type: "object" title: "ImagePruneResponse" properties: ImagesDeleted: description: "Images that were deleted" type: "array" items: $ref: "#/definitions/ImageDeleteResponseItem" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Image"] /auth: post: summary: "Check auth configuration" description: | Validate credentials for a registry and, if available, get an identity token for accessing the registry without password. operationId: "SystemAuth" consumes: ["application/json"] produces: ["application/json"] responses: 200: description: "An identity token was generated successfully." schema: type: "object" title: "SystemAuthResponse" required: [Status] properties: Status: description: "The status of the authentication" type: "string" x-nullable: false IdentityToken: description: "An opaque token used to authenticate a user after a successful login" type: "string" x-nullable: false examples: application/json: Status: "Login Succeeded" IdentityToken: "9cbaf023786cd7..." 204: description: "No error" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "authConfig" in: "body" description: "Authentication to check" schema: $ref: "#/definitions/AuthConfig" tags: ["System"] /info: get: summary: "Get system information" operationId: "SystemInfo" produces: - "application/json" responses: 200: description: "No error" schema: $ref: "#/definitions/SystemInfo" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /version: get: summary: "Get version" description: "Returns the version of Docker that is running and various information about the system that Docker is running on." operationId: "SystemVersion" produces: ["application/json"] responses: 200: description: "no error" schema: $ref: "#/definitions/SystemVersion" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /_ping: get: summary: "Ping" description: "This is a dummy endpoint you can use to test if the server is accessible." operationId: "SystemPing" produces: ["text/plain"] responses: 200: description: "no error" schema: type: "string" example: "OK" headers: API-Version: type: "string" description: "Max API Version the server supports" Builder-Version: type: "string" description: "Default version of docker image builder" Docker-Experimental: type: "boolean" description: "If the server is running with experimental mode enabled" Cache-Control: type: "string" default: "no-cache, no-store, must-revalidate" Pragma: type: "string" default: "no-cache" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" headers: Cache-Control: type: "string" default: "no-cache, no-store, must-revalidate" Pragma: type: "string" default: "no-cache" tags: ["System"] head: summary: "Ping" description: "This is a dummy endpoint you can use to test if the server is accessible." 
operationId: "SystemPingHead" produces: ["text/plain"] responses: 200: description: "no error" schema: type: "string" example: "(empty)" headers: API-Version: type: "string" description: "Max API Version the server supports" Builder-Version: type: "string" description: "Default version of docker image builder" Docker-Experimental: type: "boolean" description: "If the server is running with experimental mode enabled" Cache-Control: type: "string" default: "no-cache, no-store, must-revalidate" Pragma: type: "string" default: "no-cache" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /commit: post: summary: "Create a new image from a container" operationId: "ImageCommit" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "containerConfig" in: "body" description: "The container configuration" schema: $ref: "#/definitions/ContainerConfig" - name: "container" in: "query" description: "The ID or name of the container to commit" type: "string" - name: "repo" in: "query" description: "Repository name for the created image" type: "string" - name: "tag" in: "query" description: "Tag name for the create image" type: "string" - name: "comment" in: "query" description: "Commit message" type: "string" - name: "author" in: "query" description: "Author of the image (e.g., `John Hannibal Smith <[email protected]>`)" type: "string" - name: "pause" in: "query" description: "Whether to pause the container before committing" type: "boolean" default: true - name: "changes" in: "query" description: "`Dockerfile` instructions to apply while committing" type: "string" tags: ["Image"] /events: get: summary: "Monitor events" description: | Stream real-time events from the server. Various objects within Docker report events when something happens to them. 
Containers report these events: `attach`, `commit`, `copy`, `create`, `destroy`, `detach`, `die`, `exec_create`, `exec_detach`, `exec_start`, `exec_die`, `export`, `health_status`, `kill`, `oom`, `pause`, `rename`, `resize`, `restart`, `start`, `stop`, `top`, `unpause`, `update`, and `prune` Images report these events: `delete`, `import`, `load`, `pull`, `push`, `save`, `tag`, `untag`, and `prune` Volumes report these events: `create`, `mount`, `unmount`, `destroy`, and `prune` Networks report these events: `create`, `connect`, `disconnect`, `destroy`, `update`, `remove`, and `prune` The Docker daemon reports these events: `reload` Services report these events: `create`, `update`, and `remove` Nodes report these events: `create`, `update`, and `remove` Secrets report these events: `create`, `update`, and `remove` Configs report these events: `create`, `update`, and `remove` The Builder reports `prune` events operationId: "SystemEvents" produces: - "application/json" responses: 200: description: "no error" schema: type: "object" title: "SystemEventsResponse" properties: Type: description: "The type of object emitting the event" type: "string" Action: description: "The type of event" type: "string" Actor: type: "object" properties: ID: description: "The ID of the object emitting the event" type: "string" Attributes: description: "Various key/value attributes of the object, depending on its type" type: "object" additionalProperties: type: "string" time: description: "Timestamp of event" type: "integer" timeNano: description: "Timestamp of event, with nanosecond accuracy" type: "integer" format: "int64" examples: application/json: Type: "container" Action: "create" Actor: ID: "ede54ee1afda366ab42f824e8a5ffd195155d853ceaec74a927f249ea270c743" Attributes: com.example.some-label: "some-label-value" image: "alpine" name: "my-container" time: 1461943101 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "since" in: "query" description: "Show events created since this timestamp then stream new events." type: "string" - name: "until" in: "query" description: "Show events created until this timestamp then stop streaming." type: "string" - name: "filters" in: "query" description: | A JSON encoded value of filters (a `map[string][]string`) to process on the event list. 
Available filters: - `config=<string>` config name or ID - `container=<string>` container name or ID - `daemon=<string>` daemon name or ID - `event=<string>` event type - `image=<string>` image name or ID - `label=<string>` image or container label - `network=<string>` network name or ID - `node=<string>` node ID - `plugin`=<string> plugin name or ID - `scope`=<string> local or swarm - `secret=<string>` secret name or ID - `service=<string>` service name or ID - `type=<string>` object to filter by, one of `container`, `image`, `volume`, `network`, `daemon`, `plugin`, `node`, `service`, `secret` or `config` - `volume=<string>` volume name type: "string" tags: ["System"] /system/df: get: summary: "Get data usage information" operationId: "SystemDataUsage" responses: 200: description: "no error" schema: type: "object" title: "SystemDataUsageResponse" properties: LayersSize: type: "integer" format: "int64" Images: type: "array" items: $ref: "#/definitions/ImageSummary" Containers: type: "array" items: $ref: "#/definitions/ContainerSummary" Volumes: type: "array" items: $ref: "#/definitions/Volume" BuildCache: type: "array" items: $ref: "#/definitions/BuildCache" example: LayersSize: 1092588 Images: - Id: "sha256:2b8fd9751c4c0f5dd266fcae00707e67a2545ef34f9a29354585f93dac906749" ParentId: "" RepoTags: - "busybox:latest" RepoDigests: - "busybox@sha256:a59906e33509d14c036c8678d687bd4eec81ed7c4b8ce907b888c607f6a1e0e6" Created: 1466724217 Size: 1092588 SharedSize: 0 VirtualSize: 1092588 Labels: {} Containers: 1 Containers: - Id: "e575172ed11dc01bfce087fb27bee502db149e1a0fad7c296ad300bbff178148" Names: - "/top" Image: "busybox" ImageID: "sha256:2b8fd9751c4c0f5dd266fcae00707e67a2545ef34f9a29354585f93dac906749" Command: "top" Created: 1472592424 Ports: [] SizeRootFs: 1092588 Labels: {} State: "exited" Status: "Exited (0) 56 minutes ago" HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: IPAMConfig: null Links: null Aliases: null NetworkID: "d687bc59335f0e5c9ee8193e5612e8aee000c8c62ea170cfb99c098f95899d92" EndpointID: "8ed5115aeaad9abb174f68dcf135b49f11daf597678315231a32ca28441dec6a" Gateway: "172.18.0.1" IPAddress: "172.18.0.2" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:12:00:02" Mounts: [] Volumes: - Name: "my-volume" Driver: "local" Mountpoint: "/var/lib/docker/volumes/my-volume/_data" Labels: null Scope: "local" Options: null UsageData: Size: 10920104 RefCount: 2 BuildCache: - ID: "hw53o5aio51xtltp5xjp8v7fx" Parent: "" Type: "regular" Description: "pulled from docker.io/library/debian@sha256:234cb88d3020898631af0ccbbcca9a66ae7306ecd30c9720690858c1b007d2a0" InUse: false Shared: true Size: 0 CreatedAt: "2021-06-28T13:31:01.474619385Z" LastUsedAt: "2021-07-07T22:02:32.738075951Z" UsageCount: 26 - ID: "ndlpt0hhvkqcdfkputsk4cq9c" Parent: "hw53o5aio51xtltp5xjp8v7fx" Type: "regular" Description: "mount / from exec /bin/sh -c echo 'Binary::apt::APT::Keep-Downloaded-Packages \"true\";' > /etc/apt/apt.conf.d/keep-cache" InUse: false Shared: true Size: 51 CreatedAt: "2021-06-28T13:31:03.002625487Z" LastUsedAt: "2021-07-07T22:02:32.773909517Z" UsageCount: 26 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "type" in: "query" description: | Object types, for which to compute and return data. 
type: "array" collectionFormat: multi items: type: "string" enum: ["container", "image", "volume", "build-cache"] tags: ["System"] /images/{name}/get: get: summary: "Export an image" description: | Get a tarball containing all images and metadata for a repository. If `name` is a specific name and tag (e.g. `ubuntu:latest`), then only that image (and its parents) are returned. If `name` is an image ID, similarly only that image (and its parents) are returned, but with the exclusion of the `repositories` file in the tarball, as there were no image names referenced. ### Image tarball format An image tarball contains one directory per image layer (named using its long ID), each containing these files: - `VERSION`: currently `1.0` - the file format version - `json`: detailed layer information, similar to `docker inspect layer_id` - `layer.tar`: A tarfile containing the filesystem changes in this layer The `layer.tar` file contains `aufs` style `.wh..wh.aufs` files and directories for storing attribute changes and deletions. If the tarball defines a repository, the tarball should also include a `repositories` file at the root that contains a list of repository and tag names mapped to layer IDs. ```json { "hello-world": { "latest": "565a9d68a73f6706862bfe8409a7f659776d4d60a8d096eb4a3cbce6999cc2a1" } } ``` operationId: "ImageGet" produces: - "application/x-tar" responses: 200: description: "no error" schema: type: "string" format: "binary" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID" type: "string" required: true tags: ["Image"] /images/get: get: summary: "Export several images" description: | Get a tarball containing all images and metadata for several image repositories. For each value of the `names` parameter: if it is a specific name and tag (e.g. `ubuntu:latest`), then only that image (and its parents) are returned; if it is an image ID, similarly only that image (and its parents) are returned and there would be no names referenced in the 'repositories' file for this image ID. For details on the format, see the [export image endpoint](#operation/ImageGet). operationId: "ImageGetAll" produces: - "application/x-tar" responses: 200: description: "no error" schema: type: "string" format: "binary" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "names" in: "query" description: "Image names to filter by" type: "array" items: type: "string" tags: ["Image"] /images/load: post: summary: "Import images" description: | Load a set of images and tags into a repository. For details on the format, see the [export image endpoint](#operation/ImageGet). operationId: "ImageLoad" consumes: - "application/x-tar" produces: - "application/json" responses: 200: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "imagesTarball" in: "body" description: "Tar archive containing images" schema: type: "string" format: "binary" - name: "quiet" in: "query" description: "Suppress progress details during load." type: "boolean" default: false tags: ["Image"] /containers/{id}/exec: post: summary: "Create an exec instance" description: "Run a command inside a running container." 
operationId: "ContainerExec" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "container is paused" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "execConfig" in: "body" description: "Exec configuration" schema: type: "object" properties: AttachStdin: type: "boolean" description: "Attach to `stdin` of the exec command." AttachStdout: type: "boolean" description: "Attach to `stdout` of the exec command." AttachStderr: type: "boolean" description: "Attach to `stderr` of the exec command." DetachKeys: type: "string" description: | Override the key sequence for detaching a container. Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,` or `_`. Tty: type: "boolean" description: "Allocate a pseudo-TTY." Env: description: | A list of environment variables in the form `["VAR=value", ...]`. type: "array" items: type: "string" Cmd: type: "array" description: "Command to run, as a string or array of strings." items: type: "string" Privileged: type: "boolean" description: "Runs the exec process with extended privileges." default: false User: type: "string" description: | The user, and optionally, group to run the exec process inside the container. Format is one of: `user`, `user:group`, `uid`, or `uid:gid`. WorkingDir: type: "string" description: | The working directory for the exec process inside the container. example: AttachStdin: false AttachStdout: true AttachStderr: true DetachKeys: "ctrl-p,ctrl-q" Tty: false Cmd: - "date" Env: - "FOO=bar" - "BAZ=quux" required: true - name: "id" in: "path" description: "ID or name of container" type: "string" required: true tags: ["Exec"] /exec/{id}/start: post: summary: "Start an exec instance" description: | Starts a previously set up exec instance. If detach is true, this endpoint returns immediately after starting the command. Otherwise, it sets up an interactive session with the command. operationId: "ExecStart" consumes: - "application/json" produces: - "application/vnd.docker.raw-stream" responses: 200: description: "No error" 404: description: "No such exec instance" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Container is stopped or paused" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "execStartConfig" in: "body" schema: type: "object" properties: Detach: type: "boolean" description: "Detach from the command." Tty: type: "boolean" description: "Allocate a pseudo-TTY." example: Detach: false Tty: false - name: "id" in: "path" description: "Exec instance ID" required: true type: "string" tags: ["Exec"] /exec/{id}/resize: post: summary: "Resize an exec instance" description: | Resize the TTY session used by an exec instance. This endpoint only works if `tty` was specified as part of creating and starting the exec instance. 
operationId: "ExecResize" responses: 201: description: "No error" 404: description: "No such exec instance" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Exec instance ID" required: true type: "string" - name: "h" in: "query" description: "Height of the TTY session in characters" type: "integer" - name: "w" in: "query" description: "Width of the TTY session in characters" type: "integer" tags: ["Exec"] /exec/{id}/json: get: summary: "Inspect an exec instance" description: "Return low-level information about an exec instance." operationId: "ExecInspect" produces: - "application/json" responses: 200: description: "No error" schema: type: "object" title: "ExecInspectResponse" properties: CanRemove: type: "boolean" DetachKeys: type: "string" ID: type: "string" Running: type: "boolean" ExitCode: type: "integer" ProcessConfig: $ref: "#/definitions/ProcessConfig" OpenStdin: type: "boolean" OpenStderr: type: "boolean" OpenStdout: type: "boolean" ContainerID: type: "string" Pid: type: "integer" description: "The system process ID for the exec process." examples: application/json: CanRemove: false ContainerID: "b53ee82b53a40c7dca428523e34f741f3abc51d9f297a14ff874bf761b995126" DetachKeys: "" ExitCode: 2 ID: "f33bbfb39f5b142420f4759b2348913bd4a8d1a6d7fd56499cb41a1bb91d7b3b" OpenStderr: true OpenStdin: true OpenStdout: true ProcessConfig: arguments: - "-c" - "exit 2" entrypoint: "sh" privileged: false tty: true user: "1000" Running: false Pid: 42000 404: description: "No such exec instance" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Exec instance ID" required: true type: "string" tags: ["Exec"] /volumes: get: summary: "List volumes" operationId: "VolumeList" produces: ["application/json"] responses: 200: description: "Summary volume data that matches the query" schema: type: "object" title: "VolumeListResponse" description: "Volume list response" required: [Volumes, Warnings] properties: Volumes: type: "array" x-nullable: false description: "List of volumes" items: $ref: "#/definitions/Volume" Warnings: type: "array" x-nullable: false description: | Warnings that occurred when fetching the list of volumes. items: type: "string" examples: application/json: Volumes: - CreatedAt: "2017-07-19T12:00:26Z" Name: "tardis" Driver: "local" Mountpoint: "/var/lib/docker/volumes/tardis" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Scope: "local" Options: device: "tmpfs" o: "size=100m,uid=1000" type: "tmpfs" Warnings: [] 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" description: | JSON encoded value of the filters (a `map[string][]string`) to process on the volumes list. Available filters: - `dangling=<boolean>` When set to `true` (or `1`), returns all volumes that are not in use by a container. When set to `false` (or `0`), only volumes that are in use by one or more containers are returned. - `driver=<volume-driver-name>` Matches volumes based on their driver. - `label=<key>` or `label=<key>:<value>` Matches volumes based on the presence of a `label` alone or a `label` and a value. - `name=<volume-name>` Matches all or part of a volume name. 
type: "string" format: "json" tags: ["Volume"] /volumes/create: post: summary: "Create a volume" operationId: "VolumeCreate" consumes: ["application/json"] produces: ["application/json"] responses: 201: description: "The volume was created successfully" schema: $ref: "#/definitions/Volume" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "volumeConfig" in: "body" required: true description: "Volume configuration" schema: type: "object" description: "Volume configuration" title: "VolumeConfig" properties: Name: description: | The new volume's name. If not specified, Docker generates a name. type: "string" x-nullable: false Driver: description: "Name of the volume driver to use." type: "string" default: "local" x-nullable: false DriverOpts: description: | A mapping of driver options and values. These options are passed directly to the driver and are driver specific. type: "object" additionalProperties: type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" example: Name: "tardis" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Driver: "custom" tags: ["Volume"] /volumes/{name}: get: summary: "Inspect a volume" operationId: "VolumeInspect" produces: ["application/json"] responses: 200: description: "No error" schema: $ref: "#/definitions/Volume" 404: description: "No such volume" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" required: true description: "Volume name or ID" type: "string" tags: ["Volume"] delete: summary: "Remove a volume" description: "Instruct the driver to remove the volume." operationId: "VolumeDelete" responses: 204: description: "The volume was removed" 404: description: "No such volume or volume driver" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Volume is in use and cannot be removed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" required: true description: "Volume name or ID" type: "string" - name: "force" in: "query" description: "Force the removal of the volume" type: "boolean" default: false tags: ["Volume"] /volumes/prune: post: summary: "Delete unused volumes" produces: - "application/json" operationId: "VolumePrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune volumes with (or without, in case `label!=...` is used) the specified labels. type: "string" responses: 200: description: "No error" schema: type: "object" title: "VolumePruneResponse" properties: VolumesDeleted: description: "Volumes that were deleted" type: "array" items: type: "string" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Volume"] /networks: get: summary: "List networks" description: | Returns a list of networks. For details on the format, see the [network inspect endpoint](#operation/NetworkInspect). Note that it uses a different, smaller representation of a network than inspecting a single network. 
For example, the list of containers attached to the network is not propagated in API versions 1.28 and up. operationId: "NetworkList" produces: - "application/json" responses: 200: description: "No error" schema: type: "array" items: $ref: "#/definitions/Network" examples: application/json: - Name: "bridge" Id: "f2de39df4171b0dc801e8002d1d999b77256983dfc63041c0f34030aa3977566" Created: "2016-10-19T06:21:00.416543526Z" Scope: "local" Driver: "bridge" EnableIPv6: false Internal: false Attachable: false Ingress: false IPAM: Driver: "default" Config: - Subnet: "172.17.0.0/16" Options: com.docker.network.bridge.default_bridge: "true" com.docker.network.bridge.enable_icc: "true" com.docker.network.bridge.enable_ip_masquerade: "true" com.docker.network.bridge.host_binding_ipv4: "0.0.0.0" com.docker.network.bridge.name: "docker0" com.docker.network.driver.mtu: "1500" - Name: "none" Id: "e086a3893b05ab69242d3c44e49483a3bbbd3a26b46baa8f61ab797c1088d794" Created: "0001-01-01T00:00:00Z" Scope: "local" Driver: "null" EnableIPv6: false Internal: false Attachable: false Ingress: false IPAM: Driver: "default" Config: [] Containers: {} Options: {} - Name: "host" Id: "13e871235c677f196c4e1ecebb9dc733b9b2d2ab589e30c539efeda84a24215e" Created: "0001-01-01T00:00:00Z" Scope: "local" Driver: "host" EnableIPv6: false Internal: false Attachable: false Ingress: false IPAM: Driver: "default" Config: [] Containers: {} Options: {} 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" description: | JSON encoded value of the filters (a `map[string][]string`) to process on the networks list. Available filters: - `dangling=<boolean>` When set to `true` (or `1`), returns all networks that are not in use by a container. When set to `false` (or `0`), only networks that are in use by one or more containers are returned. - `driver=<driver-name>` Matches a network's driver. - `id=<network-id>` Matches all or part of a network ID. - `label=<key>` or `label=<key>=<value>` of a network label. - `name=<network-name>` Matches all or part of a network name. - `scope=["swarm"|"global"|"local"]` Filters networks by scope (`swarm`, `global`, or `local`). - `type=["custom"|"builtin"]` Filters networks by type. The `custom` keyword returns all user-defined networks. 
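To make the summary-listing behaviour concrete, a minimal sketch of listing networks with a driver filter using the Go client; the `driver=bridge` filter is only an example.

```go
package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/api/types/filters"
	"github.com/docker/docker/client"
)

func main() {
	ctx := context.Background()
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	// GET /networks?filters={"driver":["bridge"]}: returns the smaller summary
	// representation, without the per-network container list.
	opts := types.NetworkListOptions{Filters: filters.NewArgs(filters.Arg("driver", "bridge"))}
	networks, err := cli.NetworkList(ctx, opts)
	if err != nil {
		panic(err)
	}
	for _, n := range networks {
		fmt.Println(n.ID, n.Name, n.Scope, n.Driver)
	}
}
```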
type: "string" tags: ["Network"] /networks/{id}: get: summary: "Inspect a network" operationId: "NetworkInspect" produces: - "application/json" responses: 200: description: "No error" schema: $ref: "#/definitions/Network" 404: description: "Network not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" - name: "verbose" in: "query" description: "Detailed inspect output for troubleshooting" type: "boolean" default: false - name: "scope" in: "query" description: "Filter the network by scope (swarm, global, or local)" type: "string" tags: ["Network"] delete: summary: "Remove a network" operationId: "NetworkDelete" responses: 204: description: "No error" 403: description: "operation not supported for pre-defined networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such network" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" tags: ["Network"] /networks/create: post: summary: "Create a network" operationId: "NetworkCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "No error" schema: type: "object" title: "NetworkCreateResponse" properties: Id: description: "The ID of the created network." type: "string" Warning: type: "string" example: Id: "22be93d5babb089c5aab8dbc369042fad48ff791584ca2da2100db837a1c7c30" Warning: "" 403: description: "operation not supported for pre-defined networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "plugin not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "networkConfig" in: "body" description: "Network configuration" required: true schema: type: "object" required: ["Name"] properties: Name: description: "The network's name." type: "string" CheckDuplicate: description: | Check for networks with duplicate names. Since Network is primarily keyed based on a random ID and not on the name, and network name is strictly a user-friendly alias to the network which is uniquely identified using ID, there is no guaranteed way to check for duplicates. CheckDuplicate is there to provide a best effort checking of any networks which has the same name but it is not guaranteed to catch all name collisions. type: "boolean" Driver: description: "Name of the network driver plugin to use." type: "string" default: "bridge" Internal: description: "Restrict external access to the network." type: "boolean" Attachable: description: | Globally scoped network is manually attachable by regular containers from workers in swarm mode. type: "boolean" Ingress: description: | Ingress network is the network which provides the routing-mesh in swarm mode. type: "boolean" IPAM: description: "Optional custom IP scheme for the network." $ref: "#/definitions/IPAM" EnableIPv6: description: "Enable IPv6 on the network." type: "boolean" Options: description: "Network specific options to be used by the drivers." type: "object" additionalProperties: type: "string" Labels: description: "User-defined key/value metadata." 
type: "object" additionalProperties: type: "string" example: Name: "isolated_nw" CheckDuplicate: false Driver: "bridge" EnableIPv6: true IPAM: Driver: "default" Config: - Subnet: "172.20.0.0/16" IPRange: "172.20.10.0/24" Gateway: "172.20.10.11" - Subnet: "2001:db8:abcd::/64" Gateway: "2001:db8:abcd::1011" Options: foo: "bar" Internal: true Attachable: false Ingress: false Options: com.docker.network.bridge.default_bridge: "true" com.docker.network.bridge.enable_icc: "true" com.docker.network.bridge.enable_ip_masquerade: "true" com.docker.network.bridge.host_binding_ipv4: "0.0.0.0" com.docker.network.bridge.name: "docker0" com.docker.network.driver.mtu: "1500" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" tags: ["Network"] /networks/{id}/connect: post: summary: "Connect a container to a network" operationId: "NetworkConnect" consumes: - "application/json" responses: 200: description: "No error" 403: description: "Operation not supported for swarm scoped networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "Network or container not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" - name: "container" in: "body" required: true schema: type: "object" properties: Container: type: "string" description: "The ID or name of the container to connect to the network." EndpointConfig: $ref: "#/definitions/EndpointSettings" example: Container: "3613f73ba0e4" EndpointConfig: IPAMConfig: IPv4Address: "172.24.56.89" IPv6Address: "2001:db8::5689" tags: ["Network"] /networks/{id}/disconnect: post: summary: "Disconnect a container from a network" operationId: "NetworkDisconnect" consumes: - "application/json" responses: 200: description: "No error" 403: description: "Operation not supported for swarm scoped networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "Network or container not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" - name: "container" in: "body" required: true schema: type: "object" properties: Container: type: "string" description: | The ID or name of the container to disconnect from the network. Force: type: "boolean" description: | Force the container to disconnect from the network. tags: ["Network"] /networks/prune: post: summary: "Delete unused networks" produces: - "application/json" operationId: "NetworkPrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `until=<timestamp>` Prune networks created before this timestamp. The `<timestamp>` can be Unix timestamps, date formatted timestamps, or Go duration strings (e.g. `10m`, `1h30m`) computed relative to the daemon machine’s time. - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune networks with (or without, in case `label!=...` is used) the specified labels. 
type: "string" responses: 200: description: "No error" schema: type: "object" title: "NetworkPruneResponse" properties: NetworksDeleted: description: "Networks that were deleted" type: "array" items: type: "string" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Network"] /plugins: get: summary: "List plugins" operationId: "PluginList" description: "Returns information about installed plugins." produces: ["application/json"] responses: 200: description: "No error" schema: type: "array" items: $ref: "#/definitions/Plugin" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the plugin list. Available filters: - `capability=<capability name>` - `enable=<true>|<false>` tags: ["Plugin"] /plugins/privileges: get: summary: "Get plugin privileges" operationId: "GetPluginPrivileges" responses: 200: description: "no error" schema: type: "array" items: description: | Describes a permission the user has to accept upon installing the plugin. type: "object" title: "PluginPrivilegeItem" properties: Name: type: "string" Description: type: "string" Value: type: "array" items: type: "string" example: - Name: "network" Description: "" Value: - "host" - Name: "mount" Description: "" Value: - "/data" - Name: "device" Description: "" Value: - "/dev/cpu_dma_latency" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "remote" in: "query" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" tags: - "Plugin" /plugins/pull: post: summary: "Install a plugin" operationId: "PluginPull" description: | Pulls and installs a plugin. After the plugin is installed, it can be enabled using the [`POST /plugins/{name}/enable` endpoint](#operation/PostPluginsEnable). produces: - "application/json" responses: 204: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "remote" in: "query" description: | Remote reference for plugin to install. The `:latest` tag is optional, and is used as the default if omitted. required: true type: "string" - name: "name" in: "query" description: | Local name for the pulled plugin. The `:latest` tag is optional, and is used as the default if omitted. required: false type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration to use when pulling a plugin from a registry. Refer to the [authentication section](#section/Authentication) for details. type: "string" - name: "body" in: "body" schema: type: "array" items: description: | Describes a permission accepted by the user upon installing the plugin. 
type: "object" properties: Name: type: "string" Description: type: "string" Value: type: "array" items: type: "string" example: - Name: "network" Description: "" Value: - "host" - Name: "mount" Description: "" Value: - "/data" - Name: "device" Description: "" Value: - "/dev/cpu_dma_latency" tags: ["Plugin"] /plugins/{name}/json: get: summary: "Inspect a plugin" operationId: "PluginInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Plugin" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" tags: ["Plugin"] /plugins/{name}: delete: summary: "Remove a plugin" operationId: "PluginDelete" responses: 200: description: "no error" schema: $ref: "#/definitions/Plugin" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "force" in: "query" description: | Disable the plugin before removing. This may result in issues if the plugin is in use by a container. type: "boolean" default: false tags: ["Plugin"] /plugins/{name}/enable: post: summary: "Enable a plugin" operationId: "PluginEnable" responses: 200: description: "no error" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "timeout" in: "query" description: "Set the HTTP client timeout (in seconds)" type: "integer" default: 0 tags: ["Plugin"] /plugins/{name}/disable: post: summary: "Disable a plugin" operationId: "PluginDisable" responses: 200: description: "no error" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" tags: ["Plugin"] /plugins/{name}/upgrade: post: summary: "Upgrade a plugin" operationId: "PluginUpgrade" responses: 204: description: "no error" 404: description: "plugin not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "remote" in: "query" description: | Remote reference to upgrade to. The `:latest` tag is optional, and is used as the default if omitted. required: true type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration to use when pulling a plugin from a registry. Refer to the [authentication section](#section/Authentication) for details. 
type: "string" - name: "body" in: "body" schema: type: "array" items: description: | Describes a permission accepted by the user upon installing the plugin. type: "object" properties: Name: type: "string" Description: type: "string" Value: type: "array" items: type: "string" example: - Name: "network" Description: "" Value: - "host" - Name: "mount" Description: "" Value: - "/data" - Name: "device" Description: "" Value: - "/dev/cpu_dma_latency" tags: ["Plugin"] /plugins/create: post: summary: "Create a plugin" operationId: "PluginCreate" consumes: - "application/x-tar" responses: 204: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "query" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "tarContext" in: "body" description: "Path to tar containing plugin rootfs and manifest" schema: type: "string" format: "binary" tags: ["Plugin"] /plugins/{name}/push: post: summary: "Push a plugin" operationId: "PluginPush" description: | Push a plugin to the registry. parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" responses: 200: description: "no error" 404: description: "plugin not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Plugin"] /plugins/{name}/set: post: summary: "Configure a plugin" operationId: "PluginSet" consumes: - "application/json" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "body" in: "body" schema: type: "array" items: type: "string" example: ["DEBUG=1"] responses: 204: description: "No error" 404: description: "Plugin not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Plugin"] /nodes: get: summary: "List nodes" operationId: "NodeList" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Node" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" description: | Filters to process on the nodes list, encoded as JSON (a `map[string][]string`). 
Available filters: - `id=<node id>` - `label=<engine label>` - `membership=`(`accepted`|`pending`)` - `name=<node name>` - `node.label=<node label>` - `role=`(`manager`|`worker`)` type: "string" tags: ["Node"] /nodes/{id}: get: summary: "Inspect a node" operationId: "NodeInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Node" 404: description: "no such node" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the node" type: "string" required: true tags: ["Node"] delete: summary: "Delete a node" operationId: "NodeDelete" responses: 200: description: "no error" 404: description: "no such node" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the node" type: "string" required: true - name: "force" in: "query" description: "Force remove a node from the swarm" default: false type: "boolean" tags: ["Node"] /nodes/{id}/update: post: summary: "Update a node" operationId: "NodeUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such node" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID of the node" type: "string" required: true - name: "body" in: "body" schema: $ref: "#/definitions/NodeSpec" - name: "version" in: "query" description: | The version number of the node object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true tags: ["Node"] /swarm: get: summary: "Inspect swarm" operationId: "SwarmInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Swarm" 404: description: "no such swarm" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" tags: ["Swarm"] /swarm/init: post: summary: "Initialize a new swarm" operationId: "SwarmInit" produces: - "application/json" - "text/plain" responses: 200: description: "no error" schema: description: "The node ID" type: "string" example: "7v2t30z9blmxuhnyo6s4cpenp" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is already part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: type: "object" properties: ListenAddr: description: | Listen address used for inter-manager communication, as well as determining the networking interface used for the VXLAN Tunnel Endpoint (VTEP). This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the default swarm listening port is used. 
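A small sketch of listing manager nodes with the `role` filter via the Go client; this request must be sent to a manager node.

```go
package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/api/types/filters"
	"github.com/docker/docker/client"
)

func main() {
	ctx := context.Background()
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	// GET /nodes?filters={"role":["manager"]}
	nodes, err := cli.NodeList(ctx, types.NodeListOptions{
		Filters: filters.NewArgs(filters.Arg("role", "manager")),
	})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes {
		fmt.Println(n.ID, n.Description.Hostname, n.Spec.Role, n.Status.State)
	}
}
```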
type: "string" AdvertiseAddr: description: | Externally reachable address advertised to other nodes. This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the port number from the listen address is used. If `AdvertiseAddr` is not specified, it will be automatically detected when possible. type: "string" DataPathAddr: description: | Address or interface to use for data path traffic (format: `<ip|interface>`), for example, `192.168.1.1`, or an interface, like `eth0`. If `DataPathAddr` is unspecified, the same address as `AdvertiseAddr` is used. The `DataPathAddr` specifies the address that global scope network drivers will publish towards other nodes in order to reach the containers running on this node. Using this parameter it is possible to separate the container data traffic from the management traffic of the cluster. type: "string" DataPathPort: description: | DataPathPort specifies the data path port number for data traffic. Acceptable port range is 1024 to 49151. if no port is set or is set to 0, default port 4789 will be used. type: "integer" format: "uint32" DefaultAddrPool: description: | Default Address Pool specifies default subnet pools for global scope networks. type: "array" items: type: "string" example: ["10.10.0.0/16", "20.20.0.0/16"] ForceNewCluster: description: "Force creation of a new swarm." type: "boolean" SubnetSize: description: | SubnetSize specifies the subnet size of the networks created from the default subnet pool. type: "integer" format: "uint32" Spec: $ref: "#/definitions/SwarmSpec" example: ListenAddr: "0.0.0.0:2377" AdvertiseAddr: "192.168.1.1:2377" DataPathPort: 4789 DefaultAddrPool: ["10.10.0.0/8", "20.20.0.0/8"] SubnetSize: 24 ForceNewCluster: false Spec: Orchestration: {} Raft: {} Dispatcher: {} CAConfig: {} EncryptionConfig: AutoLockManagers: false tags: ["Swarm"] /swarm/join: post: summary: "Join an existing swarm" operationId: "SwarmJoin" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is already part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: type: "object" properties: ListenAddr: description: | Listen address used for inter-manager communication if the node gets promoted to manager, as well as determining the networking interface used for the VXLAN Tunnel Endpoint (VTEP). type: "string" AdvertiseAddr: description: | Externally reachable address advertised to other nodes. This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the port number from the listen address is used. If `AdvertiseAddr` is not specified, it will be automatically detected when possible. type: "string" DataPathAddr: description: | Address or interface to use for data path traffic (format: `<ip|interface>`), for example, `192.168.1.1`, or an interface, like `eth0`. If `DataPathAddr` is unspecified, the same address as `AdvertiseAddr` is used. The `DataPathAddr` specifies the address that global scope network drivers will publish towards other nodes in order to reach the containers running on this node. 
Using this parameter it is possible to separate the container data traffic from the management traffic of the cluster. type: "string" RemoteAddrs: description: | Addresses of manager nodes already participating in the swarm. type: "array" items: type: "string" JoinToken: description: "Secret token for joining this swarm." type: "string" example: ListenAddr: "0.0.0.0:2377" AdvertiseAddr: "192.168.1.1:2377" RemoteAddrs: - "node1:2377" JoinToken: "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-7p73s1dx5in4tatdymyhg9hu2" tags: ["Swarm"] /swarm/leave: post: summary: "Leave a swarm" operationId: "SwarmLeave" responses: 200: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "force" description: | Force leave swarm, even if this is the last manager or that it will break the cluster. in: "query" type: "boolean" default: false tags: ["Swarm"] /swarm/update: post: summary: "Update a swarm" operationId: "SwarmUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: $ref: "#/definitions/SwarmSpec" - name: "version" in: "query" description: | The version number of the swarm object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true - name: "rotateWorkerToken" in: "query" description: "Rotate the worker join token." type: "boolean" default: false - name: "rotateManagerToken" in: "query" description: "Rotate the manager join token." type: "boolean" default: false - name: "rotateManagerUnlockKey" in: "query" description: "Rotate the manager unlock key." type: "boolean" default: false tags: ["Swarm"] /swarm/unlockkey: get: summary: "Get the unlock key" operationId: "SwarmUnlockkey" consumes: - "application/json" responses: 200: description: "no error" schema: type: "object" title: "UnlockKeyResponse" properties: UnlockKey: description: "The swarm's unlock key." type: "string" example: UnlockKey: "SWMKEY-1-7c37Cc8654o6p38HnroywCi19pllOnGtbdZEgtKxZu8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" tags: ["Swarm"] /swarm/unlock: post: summary: "Unlock a locked manager" operationId: "SwarmUnlock" consumes: - "application/json" produces: - "application/json" parameters: - name: "body" in: "body" required: true schema: type: "object" properties: UnlockKey: description: "The swarm's unlock key." 
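To tie the init/join flow together, a hedged sketch using the Go client: initialize a swarm, read the join tokens, and (on another machine) join with the worker token. The addresses shown are placeholders.

```go
package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/api/types/swarm"
	"github.com/docker/docker/client"
)

func main() {
	ctx := context.Background()
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	// POST /swarm/init: AdvertiseAddr is a placeholder for this node's reachable IP.
	nodeID, err := cli.SwarmInit(ctx, swarm.InitRequest{
		ListenAddr:    "0.0.0.0:2377",
		AdvertiseAddr: "192.168.1.1:2377",
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("manager node ID:", nodeID)

	// GET /swarm: the join tokens are what other nodes present to join the cluster.
	sw, err := cli.SwarmInspect(ctx)
	if err != nil {
		panic(err)
	}
	fmt.Println("worker join token:", sw.JoinTokens.Worker)

	// On another node, POST /swarm/join would look roughly like:
	//   cli.SwarmJoin(ctx, swarm.JoinRequest{
	//       ListenAddr:  "0.0.0.0:2377",
	//       RemoteAddrs: []string{"192.168.1.1:2377"},
	//       JoinToken:   sw.JoinTokens.Worker,
	//   })
}
```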
type: "string" example: UnlockKey: "SWMKEY-1-7c37Cc8654o6p38HnroywCi19pllOnGtbdZEgtKxZu8" responses: 200: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" tags: ["Swarm"] /services: get: summary: "List services" operationId: "ServiceList" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Service" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the services list. Available filters: - `id=<service id>` - `label=<service label>` - `mode=["replicated"|"global"]` - `name=<service name>` - name: "status" in: "query" type: "boolean" description: | Include service status, with count of running and desired tasks. tags: ["Service"] /services/create: post: summary: "Create a service" operationId: "ServiceCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: type: "object" title: "ServiceCreateResponse" properties: ID: description: "The ID of the created service." type: "string" Warning: description: "Optional warning message" type: "string" example: ID: "ak7w3gjqoa3kuz8xcpnyy0pvl" Warning: "unable to pin image doesnotexist:latest to digest: image library/doesnotexist:latest not found" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 403: description: "network is not eligible for services" schema: $ref: "#/definitions/ErrorResponse" 409: description: "name conflicts with an existing service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: allOf: - $ref: "#/definitions/ServiceSpec" - type: "object" example: Name: "web" TaskTemplate: ContainerSpec: Image: "nginx:alpine" Mounts: - ReadOnly: true Source: "web-data" Target: "/usr/share/nginx/html" Type: "volume" VolumeOptions: DriverConfig: {} Labels: com.example.something: "something-value" Hosts: ["10.10.10.10 host1", "ABCD:EF01:2345:6789:ABCD:EF01:2345:6789 host2"] User: "33" DNSConfig: Nameservers: ["8.8.8.8"] Search: ["example.org"] Options: ["timeout:3"] Secrets: - File: Name: "www.example.org.key" UID: "33" GID: "33" Mode: 384 SecretID: "fpjqlhnwb19zds35k8wn80lq9" SecretName: "example_org_domain_key" LogDriver: Name: "json-file" Options: max-file: "3" max-size: "10M" Placement: {} Resources: Limits: MemoryBytes: 104857600 Reservations: {} RestartPolicy: Condition: "on-failure" Delay: 10000000000 MaxAttempts: 10 Mode: Replicated: Replicas: 4 UpdateConfig: Parallelism: 2 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 RollbackConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 EndpointSpec: Ports: - Protocol: "tcp" PublishedPort: 8080 TargetPort: 80 Labels: foo: "bar" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration for pulling from private registries. Refer to the [authentication section](#section/Authentication) for details. 
type: "string" tags: ["Service"] /services/{id}: get: summary: "Inspect a service" operationId: "ServiceInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Service" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID or name of service." required: true type: "string" - name: "insertDefaults" in: "query" description: "Fill empty fields with default values." type: "boolean" default: false tags: ["Service"] delete: summary: "Delete a service" operationId: "ServiceDelete" responses: 200: description: "no error" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID or name of service." required: true type: "string" tags: ["Service"] /services/{id}/update: post: summary: "Update a service" operationId: "ServiceUpdate" consumes: ["application/json"] produces: ["application/json"] responses: 200: description: "no error" schema: $ref: "#/definitions/ServiceUpdateResponse" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID or name of service." required: true type: "string" - name: "body" in: "body" required: true schema: allOf: - $ref: "#/definitions/ServiceSpec" - type: "object" example: Name: "top" TaskTemplate: ContainerSpec: Image: "busybox" Args: - "top" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ForceUpdate: 0 Mode: Replicated: Replicas: 1 UpdateConfig: Parallelism: 2 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 RollbackConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 EndpointSpec: Mode: "vip" - name: "version" in: "query" description: | The version number of the service object being updated. This is required to avoid conflicting writes. This version number should be the value as currently set on the service *before* the update. You can find the current version by calling `GET /services/{id}` required: true type: "integer" - name: "registryAuthFrom" in: "query" description: | If the `X-Registry-Auth` header is not specified, this parameter indicates where to find registry authorization credentials. type: "string" enum: ["spec", "previous-spec"] default: "spec" - name: "rollback" in: "query" description: | Set to this parameter to `previous` to cause a server-side rollback to the previous service spec. The supplied spec will be ignored in this case. type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration for pulling from private registries. Refer to the [authentication section](#section/Authentication) for details. 
type: "string" tags: ["Service"] /services/{id}/logs: get: summary: "Get service logs" description: | Get `stdout` and `stderr` logs from a service. See also [`/containers/{id}/logs`](#operation/ContainerLogs). **Note**: This endpoint works only for services with the `local`, `json-file` or `journald` logging drivers. operationId: "ServiceLogs" responses: 200: description: "logs returned as a stream in response body" schema: type: "string" format: "binary" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such service: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the service" type: "string" - name: "details" in: "query" description: "Show service context and extra details provided to logs." type: "boolean" default: false - name: "follow" in: "query" description: "Keep connection after returning logs." type: "boolean" default: false - name: "stdout" in: "query" description: "Return logs from `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Return logs from `stderr`" type: "boolean" default: false - name: "since" in: "query" description: "Only return logs since this time, as a UNIX timestamp" type: "integer" default: 0 - name: "timestamps" in: "query" description: "Add timestamps to every log line" type: "boolean" default: false - name: "tail" in: "query" description: | Only return this number of log lines from the end of the logs. Specify as an integer or `all` to output all log lines. type: "string" default: "all" tags: ["Service"] /tasks: get: summary: "List tasks" operationId: "TaskList" produces: - "application/json" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Task" example: - ID: "0kzzo1i0y4jz6027t0k7aezc7" Version: Index: 71 CreatedAt: "2016-06-07T21:07:31.171892745Z" UpdatedAt: "2016-06-07T21:07:31.376370513Z" Spec: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ServiceID: "9mnpnzenvg8p8tdbtq4wvbkcz" Slot: 1 NodeID: "60gvrl6tm78dmak4yl7srz94v" Status: Timestamp: "2016-06-07T21:07:31.290032978Z" State: "running" Message: "started" ContainerStatus: ContainerID: "e5d62702a1b48d01c3e02ca1e0212a250801fa8d67caca0b6f35919ebc12f035" PID: 677 DesiredState: "running" NetworksAttachments: - Network: ID: "4qvuz4ko70xaltuqbt8956gd1" Version: Index: 18 CreatedAt: "2016-06-07T20:31:11.912919752Z" UpdatedAt: "2016-06-07T21:07:29.955277358Z" Spec: Name: "ingress" Labels: com.docker.swarm.internal: "true" DriverConfiguration: {} IPAMOptions: Driver: {} Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" DriverState: Name: "overlay" Options: com.docker.network.driver.overlay.vxlanid_list: "256" IPAMOptions: Driver: Name: "default" Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" Addresses: - "10.255.0.10/16" - ID: "1yljwbmlr8er2waf8orvqpwms" Version: Index: 30 CreatedAt: "2016-06-07T21:07:30.019104782Z" UpdatedAt: "2016-06-07T21:07:30.231958098Z" Name: "hopeful_cori" Spec: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ServiceID: "9mnpnzenvg8p8tdbtq4wvbkcz" Slot: 1 NodeID: "60gvrl6tm78dmak4yl7srz94v" Status: Timestamp: "2016-06-07T21:07:30.202183143Z" State: 
"shutdown" Message: "shutdown" ContainerStatus: ContainerID: "1cf8d63d18e79668b0004a4be4c6ee58cddfad2dae29506d8781581d0688a213" DesiredState: "shutdown" NetworksAttachments: - Network: ID: "4qvuz4ko70xaltuqbt8956gd1" Version: Index: 18 CreatedAt: "2016-06-07T20:31:11.912919752Z" UpdatedAt: "2016-06-07T21:07:29.955277358Z" Spec: Name: "ingress" Labels: com.docker.swarm.internal: "true" DriverConfiguration: {} IPAMOptions: Driver: {} Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" DriverState: Name: "overlay" Options: com.docker.network.driver.overlay.vxlanid_list: "256" IPAMOptions: Driver: Name: "default" Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" Addresses: - "10.255.0.5/16" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the tasks list. Available filters: - `desired-state=(running | shutdown | accepted)` - `id=<task id>` - `label=key` or `label="key=value"` - `name=<task name>` - `node=<node id or name>` - `service=<service name>` tags: ["Task"] /tasks/{id}: get: summary: "Inspect a task" operationId: "TaskInspect" produces: - "application/json" responses: 200: description: "no error" schema: $ref: "#/definitions/Task" 404: description: "no such task" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID of the task" required: true type: "string" tags: ["Task"] /tasks/{id}/logs: get: summary: "Get task logs" description: | Get `stdout` and `stderr` logs from a task. See also [`/containers/{id}/logs`](#operation/ContainerLogs). **Note**: This endpoint works only for services with the `local`, `json-file` or `journald` logging drivers. operationId: "TaskLogs" responses: 200: description: "logs returned as a stream in response body" schema: type: "string" format: "binary" 404: description: "no such task" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such task: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID of the task" type: "string" - name: "details" in: "query" description: "Show task context and extra details provided to logs." type: "boolean" default: false - name: "follow" in: "query" description: "Keep connection after returning logs." type: "boolean" default: false - name: "stdout" in: "query" description: "Return logs from `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Return logs from `stderr`" type: "boolean" default: false - name: "since" in: "query" description: "Only return logs since this time, as a UNIX timestamp" type: "integer" default: 0 - name: "timestamps" in: "query" description: "Add timestamps to every log line" type: "boolean" default: false - name: "tail" in: "query" description: | Only return this number of log lines from the end of the logs. Specify as an integer or `all` to output all log lines. 
type: "string" default: "all" tags: ["Task"] /secrets: get: summary: "List secrets" operationId: "SecretList" produces: - "application/json" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Secret" example: - ID: "blt1owaxmitz71s9v5zh81zun" Version: Index: 85 CreatedAt: "2017-07-20T13:55:28.678958722Z" UpdatedAt: "2017-07-20T13:55:28.678958722Z" Spec: Name: "mysql-passwd" Labels: some.label: "some.value" Driver: Name: "secret-bucket" Options: OptionA: "value for driver option A" OptionB: "value for driver option B" - ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "app-dev.crt" Labels: foo: "bar" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the secrets list. Available filters: - `id=<secret id>` - `label=<key> or label=<key>=value` - `name=<secret name>` - `names=<secret name>` tags: ["Secret"] /secrets/create: post: summary: "Create a secret" operationId: "SecretCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 409: description: "name conflicts with an existing object" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" schema: allOf: - $ref: "#/definitions/SecretSpec" - type: "object" example: Name: "app-key.crt" Labels: foo: "bar" Data: "VEhJUyBJUyBOT1QgQSBSRUFMIENFUlRJRklDQVRFCg==" Driver: Name: "secret-bucket" Options: OptionA: "value for driver option A" OptionB: "value for driver option B" tags: ["Secret"] /secrets/{id}: get: summary: "Inspect a secret" operationId: "SecretInspect" produces: - "application/json" responses: 200: description: "no error" schema: $ref: "#/definitions/Secret" examples: application/json: ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "app-dev.crt" Labels: foo: "bar" Driver: Name: "secret-bucket" Options: OptionA: "value for driver option A" OptionB: "value for driver option B" 404: description: "secret not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the secret" tags: ["Secret"] delete: summary: "Delete a secret" operationId: "SecretDelete" produces: - "application/json" responses: 204: description: "no error" 404: description: "secret not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the secret" tags: ["Secret"] /secrets/{id}/update: post: summary: "Update a Secret" operationId: "SecretUpdate" responses: 200: description: "no error" 400: 
description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such secret" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the secret" type: "string" required: true - name: "body" in: "body" schema: $ref: "#/definitions/SecretSpec" description: | The spec of the secret to update. Currently, only the Labels field can be updated. All other fields must remain unchanged from the [SecretInspect endpoint](#operation/SecretInspect) response values. - name: "version" in: "query" description: | The version number of the secret object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true tags: ["Secret"] /configs: get: summary: "List configs" operationId: "ConfigList" produces: - "application/json" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Config" example: - ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "server.conf" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the configs list. Available filters: - `id=<config id>` - `label=<key> or label=<key>=value` - `name=<config name>` - `names=<config name>` tags: ["Config"] /configs/create: post: summary: "Create a config" operationId: "ConfigCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 409: description: "name conflicts with an existing object" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" schema: allOf: - $ref: "#/definitions/ConfigSpec" - type: "object" example: Name: "server.conf" Labels: foo: "bar" Data: "VEhJUyBJUyBOT1QgQSBSRUFMIENFUlRJRklDQVRFCg==" tags: ["Config"] /configs/{id}: get: summary: "Inspect a config" operationId: "ConfigInspect" produces: - "application/json" responses: 200: description: "no error" schema: $ref: "#/definitions/Config" examples: application/json: ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "app-dev.crt" 404: description: "config not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the config" tags: ["Config"] delete: summary: "Delete a config" operationId: "ConfigDelete" produces: - "application/json" responses: 204: description: "no error" 404: description: "config not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part 
of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the config" tags: ["Config"] /configs/{id}/update: post: summary: "Update a Config" operationId: "ConfigUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such config" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the config" type: "string" required: true - name: "body" in: "body" schema: $ref: "#/definitions/ConfigSpec" description: | The spec of the config to update. Currently, only the Labels field can be updated. All other fields must remain unchanged from the [ConfigInspect endpoint](#operation/ConfigInspect) response values. - name: "version" in: "query" description: | The version number of the config object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true tags: ["Config"] /distribution/{name}/json: get: summary: "Get image information from the registry" description: | Return image digest and platform information by contacting the registry. operationId: "DistributionInspect" produces: - "application/json" responses: 200: description: "descriptor and platform information" schema: type: "object" x-go-name: DistributionInspect title: "DistributionInspectResponse" required: [Descriptor, Platforms] properties: Descriptor: type: "object" description: | A descriptor struct containing digest, media type, and size. properties: MediaType: type: "string" Size: type: "integer" format: "int64" Digest: type: "string" URLs: type: "array" items: type: "string" Platforms: type: "array" description: | An array containing all platforms supported by the image. items: type: "object" properties: Architecture: type: "string" OS: type: "string" OSVersion: type: "string" OSFeatures: type: "array" items: type: "string" Variant: type: "string" Features: type: "array" items: type: "string" examples: application/json: Descriptor: MediaType: "application/vnd.docker.distribution.manifest.v2+json" Digest: "sha256:c0537ff6a5218ef531ece93d4984efc99bbf3f7497c0a7726c88e2bb7584dc96" Size: 3987495 URLs: - "" Platforms: - Architecture: "amd64" OS: "linux" OSVersion: "" OSFeatures: - "" Variant: "" Features: - "" 401: description: "Failed authentication or no image found" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such image: someimage (tag: latest)" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or id" type: "string" required: true tags: ["Distribution"] /session: post: summary: "Initialize interactive session" description: | Start a new interactive session with a server. Session allows server to call back to the client for advanced capabilities. ### Hijacking This endpoint hijacks the HTTP connection to HTTP2 transport that allows the client to expose gPRC services on that connection. 
        For example, the client sends this request to upgrade the connection:

        ```
        POST /session HTTP/1.1
        Upgrade: h2c
        Connection: Upgrade
        ```

        The Docker daemon responds with a `101 UPGRADED` response, followed by
        the raw stream:

        ```
        HTTP/1.1 101 UPGRADED
        Connection: Upgrade
        Upgrade: h2c
        ```
      operationId: "Session"
      produces:
        - "application/vnd.docker.raw-stream"
      responses:
        101:
          description: "no error, hijacking successful"
        400:
          description: "bad parameter"
          schema:
            $ref: "#/definitions/ErrorResponse"
        500:
          description: "server error"
          schema:
            $ref: "#/definitions/ErrorResponse"
      tags: ["Session"]
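The upgrade handshake above can be exercised directly against the daemon socket. The following is a minimal Go sketch of that request/response exchange, not the official client implementation; the default unix socket path `/var/run/docker.sock` and the bare-bones error handling are assumptions for illustration only. In practice the Docker CLI and BuildKit clients perform this handshake internally before speaking gRPC over the hijacked connection.

```go
// Minimal sketch of the POST /session upgrade handshake described above.
// Assumes the daemon listens on the default unix socket (an assumption,
// not part of the API definition).
package main

import (
	"bufio"
	"fmt"
	"net"
	"net/http"
)

func main() {
	conn, err := net.Dial("unix", "/var/run/docker.sock")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Send the upgrade request shown in the endpoint description.
	fmt.Fprintf(conn, "POST /session HTTP/1.1\r\nHost: docker\r\nUpgrade: h2c\r\nConnection: Upgrade\r\n\r\n")

	// Read the daemon's reply; a 101 status means the connection has been
	// hijacked and can now carry HTTP/2 (gRPC) traffic in both directions.
	resp, err := http.ReadResponse(bufio.NewReader(conn), nil)
	if err != nil {
		panic(err)
	}
	fmt.Println(resp.Status) // expect "101 UPGRADED" on success
}
```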
# A Swagger 2.0 (a.k.a. OpenAPI) definition of the Engine API. # # This is used for generating API documentation and the types used by the # client/server. See api/README.md for more information. # # Some style notes: # - This file is used by ReDoc, which allows GitHub Flavored Markdown in # descriptions. # - There is no maximum line length, for ease of editing and pretty diffs. # - operationIds are in the format "NounVerb", with a singular noun. swagger: "2.0" schemes: - "http" - "https" produces: - "application/json" - "text/plain" consumes: - "application/json" - "text/plain" basePath: "/v1.42" info: title: "Docker Engine API" version: "1.42" x-logo: url: "https://docs.docker.com/images/logo-docker-main.png" description: | The Engine API is an HTTP API served by Docker Engine. It is the API the Docker client uses to communicate with the Engine, so everything the Docker client can do can be done with the API. Most of the client's commands map directly to API endpoints (e.g. `docker ps` is `GET /containers/json`). The notable exception is running containers, which consists of several API calls. # Errors The API uses standard HTTP status codes to indicate the success or failure of the API call. The body of the response will be JSON in the following format: ``` { "message": "page not found" } ``` # Versioning The API is usually changed in each release, so API calls are versioned to ensure that clients don't break. To lock to a specific version of the API, you prefix the URL with its version, for example, call `/v1.30/info` to use the v1.30 version of the `/info` endpoint. If the API version specified in the URL is not supported by the daemon, a HTTP `400 Bad Request` error message is returned. If you omit the version-prefix, the current version of the API (v1.42) is used. For example, calling `/info` is the same as calling `/v1.42/info`. Using the API without a version-prefix is deprecated and will be removed in a future release. Engine releases in the near future should support this version of the API, so your client will continue to work even if it is talking to a newer Engine. The API uses an open schema model, which means server may add extra properties to responses. Likewise, the server will ignore any extra query parameters and request body properties. When you write clients, you need to ignore additional properties in responses to ensure they do not break when talking to newer daemons. # Authentication Authentication for registries is handled client side. The client has to send authentication details to various endpoints that need to communicate with registries, such as `POST /images/(name)/push`. These are sent as `X-Registry-Auth` header as a [base64url encoded](https://tools.ietf.org/html/rfc4648#section-5) (JSON) string with the following structure: ``` { "username": "string", "password": "string", "email": "string", "serveraddress": "string" } ``` The `serveraddress` is a domain/IP without a protocol. Throughout this structure, double quotes are required. If you have already got an identity token from the [`/auth` endpoint](#operation/SystemAuth), you can just pass this instead of credentials: ``` { "identitytoken": "9cbaf023786cd7..." } ``` # The tags on paths define the menu sections in the ReDoc documentation, so # the usage of tags must make sense for that: # - They should be singular, not plural. # - There should not be too many tags, or the menu becomes unwieldy. 
For # example, it is preferable to add a path to the "System" tag instead of # creating a tag with a single path in it. # - The order of tags in this list defines the order in the menu. tags: # Primary objects - name: "Container" x-displayName: "Containers" description: | Create and manage containers. - name: "Image" x-displayName: "Images" - name: "Network" x-displayName: "Networks" description: | Networks are user-defined networks that containers can be attached to. See the [networking documentation](https://docs.docker.com/network/) for more information. - name: "Volume" x-displayName: "Volumes" description: | Create and manage persistent storage that can be attached to containers. - name: "Exec" x-displayName: "Exec" description: | Run new commands inside running containers. Refer to the [command-line reference](https://docs.docker.com/engine/reference/commandline/exec/) for more information. To exec a command in a container, you first need to create an exec instance, then start it. These two API endpoints are wrapped up in a single command-line command, `docker exec`. # Swarm things - name: "Swarm" x-displayName: "Swarm" description: | Engines can be clustered together in a swarm. Refer to the [swarm mode documentation](https://docs.docker.com/engine/swarm/) for more information. - name: "Node" x-displayName: "Nodes" description: | Nodes are instances of the Engine participating in a swarm. Swarm mode must be enabled for these endpoints to work. - name: "Service" x-displayName: "Services" description: | Services are the definitions of tasks to run on a swarm. Swarm mode must be enabled for these endpoints to work. - name: "Task" x-displayName: "Tasks" description: | A task is a container running on a swarm. It is the atomic scheduling unit of swarm. Swarm mode must be enabled for these endpoints to work. - name: "Secret" x-displayName: "Secrets" description: | Secrets are sensitive data that can be used by services. Swarm mode must be enabled for these endpoints to work. - name: "Config" x-displayName: "Configs" description: | Configs are application configurations that can be used by services. Swarm mode must be enabled for these endpoints to work. 
# System things - name: "Plugin" x-displayName: "Plugins" - name: "System" x-displayName: "System" definitions: Port: type: "object" description: "An open port on a container" required: [PrivatePort, Type] properties: IP: type: "string" format: "ip-address" description: "Host IP address that the container's port is mapped to" PrivatePort: type: "integer" format: "uint16" x-nullable: false description: "Port on the container" PublicPort: type: "integer" format: "uint16" description: "Port exposed on the host" Type: type: "string" x-nullable: false enum: ["tcp", "udp", "sctp"] example: PrivatePort: 8080 PublicPort: 80 Type: "tcp" MountPoint: type: "object" description: "A mount point inside a container" properties: Type: type: "string" Name: type: "string" Source: type: "string" Destination: type: "string" Driver: type: "string" Mode: type: "string" RW: type: "boolean" Propagation: type: "string" DeviceMapping: type: "object" description: "A device mapping between the host and container" properties: PathOnHost: type: "string" PathInContainer: type: "string" CgroupPermissions: type: "string" example: PathOnHost: "/dev/deviceName" PathInContainer: "/dev/deviceName" CgroupPermissions: "mrw" DeviceRequest: type: "object" description: "A request for devices to be sent to device drivers" properties: Driver: type: "string" example: "nvidia" Count: type: "integer" example: -1 DeviceIDs: type: "array" items: type: "string" example: - "0" - "1" - "GPU-fef8089b-4820-abfc-e83e-94318197576e" Capabilities: description: | A list of capabilities; an OR list of AND lists of capabilities. type: "array" items: type: "array" items: type: "string" example: # gpu AND nvidia AND compute - ["gpu", "nvidia", "compute"] Options: description: | Driver-specific options, specified as a key/value pairs. These options are passed directly to the driver. type: "object" additionalProperties: type: "string" ThrottleDevice: type: "object" properties: Path: description: "Device path" type: "string" Rate: description: "Rate" type: "integer" format: "int64" minimum: 0 Mount: type: "object" properties: Target: description: "Container path." type: "string" Source: description: "Mount source (e.g. a volume name, a host path)." type: "string" Type: description: | The mount type. Available types: - `bind` Mounts a file or directory from the host into the container. Must exist prior to creating the container. - `volume` Creates a volume with the given name and options (or uses a pre-existing volume with the same name and options). These are **not** removed when the container is removed. - `tmpfs` Create a tmpfs with the given options. The mount source cannot be specified for tmpfs. - `npipe` Mounts a named pipe from the host into the container. Must exist prior to creating the container. type: "string" enum: - "bind" - "volume" - "tmpfs" - "npipe" ReadOnly: description: "Whether the mount should be read-only." type: "boolean" Consistency: description: "The consistency requirement for the mount: `default`, `consistent`, `cached`, or `delegated`." type: "string" BindOptions: description: "Optional configuration for the `bind` type." type: "object" properties: Propagation: description: "A propagation mode with the value `[r]private`, `[r]shared`, or `[r]slave`." type: "string" enum: - "private" - "rprivate" - "shared" - "rshared" - "slave" - "rslave" NonRecursive: description: "Disable recursive bind mount." type: "boolean" default: false VolumeOptions: description: "Optional configuration for the `volume` type." 
type: "object" properties: NoCopy: description: "Populate volume with data from the target." type: "boolean" default: false Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" DriverConfig: description: "Map of driver specific options" type: "object" properties: Name: description: "Name of the driver to use to create the volume." type: "string" Options: description: "key/value map of driver specific options." type: "object" additionalProperties: type: "string" TmpfsOptions: description: "Optional configuration for the `tmpfs` type." type: "object" properties: SizeBytes: description: "The size for the tmpfs mount in bytes." type: "integer" format: "int64" Mode: description: "The permission mode for the tmpfs mount in an integer." type: "integer" RestartPolicy: description: | The behavior to apply when the container exits. The default is not to restart. An ever increasing delay (double the previous delay, starting at 100ms) is added before each restart to prevent flooding the server. type: "object" properties: Name: type: "string" description: | - Empty string means not to restart - `no` Do not automatically restart - `always` Always restart - `unless-stopped` Restart always except when the user has manually stopped the container - `on-failure` Restart only when the container exit code is non-zero enum: - "" - "no" - "always" - "unless-stopped" - "on-failure" MaximumRetryCount: type: "integer" description: | If `on-failure` is used, the number of times to retry before giving up. Resources: description: "A container's resources (cgroups config, ulimits, etc)" type: "object" properties: # Applicable to all platforms CpuShares: description: | An integer value representing this container's relative CPU weight versus other containers. type: "integer" Memory: description: "Memory limit in bytes." type: "integer" format: "int64" default: 0 # Applicable to UNIX platforms CgroupParent: description: | Path to `cgroups` under which the container's `cgroup` is created. If the path is not absolute, the path is considered to be relative to the `cgroups` path of the init process. Cgroups are created if they do not already exist. type: "string" BlkioWeight: description: "Block IO weight (relative weight)." type: "integer" minimum: 0 maximum: 1000 BlkioWeightDevice: description: | Block IO weight (relative device weight) in the form: ``` [{"Path": "device_path", "Weight": weight}] ``` type: "array" items: type: "object" properties: Path: type: "string" Weight: type: "integer" minimum: 0 BlkioDeviceReadBps: description: | Limit read rate (bytes per second) from a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" BlkioDeviceWriteBps: description: | Limit write rate (bytes per second) to a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" BlkioDeviceReadIOps: description: | Limit read rate (IO per second) from a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" BlkioDeviceWriteIOps: description: | Limit write rate (IO per second) to a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" CpuPeriod: description: "The length of a CPU period in microseconds." 
type: "integer" format: "int64" CpuQuota: description: | Microseconds of CPU time that the container can get in a CPU period. type: "integer" format: "int64" CpuRealtimePeriod: description: | The length of a CPU real-time period in microseconds. Set to 0 to allocate no time allocated to real-time tasks. type: "integer" format: "int64" CpuRealtimeRuntime: description: | The length of a CPU real-time runtime in microseconds. Set to 0 to allocate no time allocated to real-time tasks. type: "integer" format: "int64" CpusetCpus: description: | CPUs in which to allow execution (e.g., `0-3`, `0,1`). type: "string" example: "0-3" CpusetMems: description: | Memory nodes (MEMs) in which to allow execution (0-3, 0,1). Only effective on NUMA systems. type: "string" Devices: description: "A list of devices to add to the container." type: "array" items: $ref: "#/definitions/DeviceMapping" DeviceCgroupRules: description: "a list of cgroup rules to apply to the container" type: "array" items: type: "string" example: "c 13:* rwm" DeviceRequests: description: | A list of requests for devices to be sent to device drivers. type: "array" items: $ref: "#/definitions/DeviceRequest" KernelMemory: description: | Kernel memory limit in bytes. <p><br /></p> > **Deprecated**: This field is deprecated as the kernel 5.4 deprecated > `kmem.limit_in_bytes`. type: "integer" format: "int64" example: 209715200 KernelMemoryTCP: description: "Hard limit for kernel TCP buffer memory (in bytes)." type: "integer" format: "int64" MemoryReservation: description: "Memory soft limit in bytes." type: "integer" format: "int64" MemorySwap: description: | Total memory limit (memory + swap). Set as `-1` to enable unlimited swap. type: "integer" format: "int64" MemorySwappiness: description: | Tune a container's memory swappiness behavior. Accepts an integer between 0 and 100. type: "integer" format: "int64" minimum: 0 maximum: 100 NanoCpus: description: "CPU quota in units of 10<sup>-9</sup> CPUs." type: "integer" format: "int64" OomKillDisable: description: "Disable OOM Killer for the container." type: "boolean" Init: description: | Run an init inside the container that forwards signals and reaps processes. This field is omitted if empty, and the default (as configured on the daemon) is used. type: "boolean" x-nullable: true PidsLimit: description: | Tune a container's PIDs limit. Set `0` or `-1` for unlimited, or `null` to not change. type: "integer" format: "int64" x-nullable: true Ulimits: description: | A list of resource limits to set in the container. For example: ``` {"Name": "nofile", "Soft": 1024, "Hard": 2048} ``` type: "array" items: type: "object" properties: Name: description: "Name of ulimit" type: "string" Soft: description: "Soft limit" type: "integer" Hard: description: "Hard limit" type: "integer" # Applicable to Windows CpuCount: description: | The number of usable CPUs (Windows only). On Windows Server containers, the processor resource controls are mutually exclusive. The order of precedence is `CPUCount` first, then `CPUShares`, and `CPUPercent` last. type: "integer" format: "int64" CpuPercent: description: | The usable percentage of the available CPUs (Windows only). On Windows Server containers, the processor resource controls are mutually exclusive. The order of precedence is `CPUCount` first, then `CPUShares`, and `CPUPercent` last. 
type: "integer" format: "int64" IOMaximumIOps: description: "Maximum IOps for the container system drive (Windows only)" type: "integer" format: "int64" IOMaximumBandwidth: description: | Maximum IO in bytes per second for the container system drive (Windows only). type: "integer" format: "int64" Limit: description: | An object describing a limit on resources which can be requested by a task. type: "object" properties: NanoCPUs: type: "integer" format: "int64" example: 4000000000 MemoryBytes: type: "integer" format: "int64" example: 8272408576 Pids: description: | Limits the maximum number of PIDs in the container. Set `0` for unlimited. type: "integer" format: "int64" default: 0 example: 100 ResourceObject: description: | An object describing the resources which can be advertised by a node and requested by a task. type: "object" properties: NanoCPUs: type: "integer" format: "int64" example: 4000000000 MemoryBytes: type: "integer" format: "int64" example: 8272408576 GenericResources: $ref: "#/definitions/GenericResources" GenericResources: description: | User-defined resources can be either Integer resources (e.g, `SSD=3`) or String resources (e.g, `GPU=UUID1`). type: "array" items: type: "object" properties: NamedResourceSpec: type: "object" properties: Kind: type: "string" Value: type: "string" DiscreteResourceSpec: type: "object" properties: Kind: type: "string" Value: type: "integer" format: "int64" example: - DiscreteResourceSpec: Kind: "SSD" Value: 3 - NamedResourceSpec: Kind: "GPU" Value: "UUID1" - NamedResourceSpec: Kind: "GPU" Value: "UUID2" HealthConfig: description: "A test to perform to check that the container is healthy." type: "object" properties: Test: description: | The test to perform. Possible values are: - `[]` inherit healthcheck from image or parent image - `["NONE"]` disable healthcheck - `["CMD", args...]` exec arguments directly - `["CMD-SHELL", command]` run command with system's default shell type: "array" items: type: "string" Interval: description: | The time to wait between checks in nanoseconds. It should be 0 or at least 1000000 (1 ms). 0 means inherit. type: "integer" Timeout: description: | The time to wait before considering the check to have hung. It should be 0 or at least 1000000 (1 ms). 0 means inherit. type: "integer" Retries: description: | The number of consecutive failures needed to consider a container as unhealthy. 0 means inherit. type: "integer" StartPeriod: description: | Start period for the container to initialize before starting health-retries countdown in nanoseconds. It should be 0 or at least 1000000 (1 ms). 0 means inherit. type: "integer" Health: description: | Health stores information about the container's healthcheck results. 
type: "object" properties: Status: description: | Status is one of `none`, `starting`, `healthy` or `unhealthy` - "none" Indicates there is no healthcheck - "starting" Starting indicates that the container is not yet ready - "healthy" Healthy indicates that the container is running correctly - "unhealthy" Unhealthy indicates that the container has a problem type: "string" enum: - "none" - "starting" - "healthy" - "unhealthy" example: "healthy" FailingStreak: description: "FailingStreak is the number of consecutive failures" type: "integer" example: 0 Log: type: "array" description: | Log contains the last few results (oldest first) items: x-nullable: true $ref: "#/definitions/HealthcheckResult" HealthcheckResult: description: | HealthcheckResult stores information about a single run of a healthcheck probe type: "object" properties: Start: description: | Date and time at which this check started in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "date-time" example: "2020-01-04T10:44:24.496525531Z" End: description: | Date and time at which this check ended in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2020-01-04T10:45:21.364524523Z" ExitCode: description: | ExitCode meanings: - `0` healthy - `1` unhealthy - `2` reserved (considered unhealthy) - other values: error running probe type: "integer" example: 0 Output: description: "Output from last check" type: "string" HostConfig: description: "Container configuration that depends on the host we are running on" allOf: - $ref: "#/definitions/Resources" - type: "object" properties: # Applicable to all platforms Binds: type: "array" description: | A list of volume bindings for this container. Each volume binding is a string in one of these forms: - `host-src:container-dest[:options]` to bind-mount a host path into the container. Both `host-src`, and `container-dest` must be an _absolute_ path. - `volume-name:container-dest[:options]` to bind-mount a volume managed by a volume driver into the container. `container-dest` must be an _absolute_ path. `options` is an optional, comma-delimited list of: - `nocopy` disables automatic copying of data from the container path to the volume. The `nocopy` flag only applies to named volumes. - `[ro|rw]` mounts a volume read-only or read-write, respectively. If omitted or set to `rw`, volumes are mounted read-write. - `[z|Z]` applies SELinux labels to allow or deny multiple containers to read and write to the same volume. - `z`: a _shared_ content label is applied to the content. This label indicates that multiple containers can share the volume content, for both reading and writing. - `Z`: a _private unshared_ label is applied to the content. This label indicates that only the current container can use a private volume. Labeling systems such as SELinux require proper labels to be placed on volume content that is mounted into a container. Without a label, the security system can prevent a container's processes from using the content. By default, the labels set by the host operating system are not modified. - `[[r]shared|[r]slave|[r]private]` specifies mount [propagation behavior](https://www.kernel.org/doc/Documentation/filesystems/sharedsubtree.txt). This only applies to bind-mounted volumes, not internal volumes or named volumes. 
Mount propagation requires the source mount point (the location where the source directory is mounted in the host operating system) to have the correct propagation properties. For shared volumes, the source mount point must be set to `shared`. For slave volumes, the mount must be set to either `shared` or `slave`. items: type: "string" ContainerIDFile: type: "string" description: "Path to a file where the container ID is written" LogConfig: type: "object" description: "The logging configuration for this container" properties: Type: type: "string" enum: - "json-file" - "syslog" - "journald" - "gelf" - "fluentd" - "awslogs" - "splunk" - "etwlogs" - "none" Config: type: "object" additionalProperties: type: "string" NetworkMode: type: "string" description: | Network mode to use for this container. Supported standard values are: `bridge`, `host`, `none`, and `container:<name|id>`. Any other value is taken as a custom network's name to which this container should connect to. PortBindings: $ref: "#/definitions/PortMap" RestartPolicy: $ref: "#/definitions/RestartPolicy" AutoRemove: type: "boolean" description: | Automatically remove the container when the container's process exits. This has no effect if `RestartPolicy` is set. VolumeDriver: type: "string" description: "Driver that this container uses to mount volumes." VolumesFrom: type: "array" description: | A list of volumes to inherit from another container, specified in the form `<container name>[:<ro|rw>]`. items: type: "string" Mounts: description: | Specification for mounts to be added to the container. type: "array" items: $ref: "#/definitions/Mount" # Applicable to UNIX platforms CapAdd: type: "array" description: | A list of kernel capabilities to add to the container. Conflicts with option 'Capabilities'. items: type: "string" CapDrop: type: "array" description: | A list of kernel capabilities to drop from the container. Conflicts with option 'Capabilities'. items: type: "string" CgroupnsMode: type: "string" enum: - "private" - "host" description: | cgroup namespace mode for the container. Possible values are: - `"private"`: the container runs in its own private cgroup namespace - `"host"`: use the host system's cgroup namespace If not specified, the daemon default is used, which can either be `"private"` or `"host"`, depending on daemon version, kernel support and configuration. Dns: type: "array" description: "A list of DNS servers for the container to use." items: type: "string" DnsOptions: type: "array" description: "A list of DNS options." items: type: "string" DnsSearch: type: "array" description: "A list of DNS search domains." items: type: "string" ExtraHosts: type: "array" description: | A list of hostnames/IP mappings to add to the container's `/etc/hosts` file. Specified in the form `["hostname:IP"]`. items: type: "string" GroupAdd: type: "array" description: | A list of additional groups that the container process will run as. items: type: "string" IpcMode: type: "string" description: | IPC sharing mode for the container. Possible values are: - `"none"`: own private IPC namespace, with /dev/shm not mounted - `"private"`: own private IPC namespace - `"shareable"`: own private IPC namespace, with a possibility to share it with other containers - `"container:<name|id>"`: join another (shareable) container's IPC namespace - `"host"`: use the host system's IPC namespace If not specified, daemon default is used, which can either be `"private"` or `"shareable"`, depending on daemon version and configuration. 
Cgroup: type: "string" description: "Cgroup to use for the container." Links: type: "array" description: | A list of links for the container in the form `container_name:alias`. items: type: "string" OomScoreAdj: type: "integer" description: | An integer value containing the score given to the container in order to tune OOM killer preferences. example: 500 PidMode: type: "string" description: | Set the PID (Process) Namespace mode for the container. It can be either: - `"container:<name|id>"`: joins another container's PID namespace - `"host"`: use the host's PID namespace inside the container Privileged: type: "boolean" description: "Gives the container full access to the host." PublishAllPorts: type: "boolean" description: | Allocates an ephemeral host port for all of a container's exposed ports. Ports are de-allocated when the container stops and allocated when the container starts. The allocated port might be changed when restarting the container. The port is selected from the ephemeral port range that depends on the kernel. For example, on Linux the range is defined by `/proc/sys/net/ipv4/ip_local_port_range`. ReadonlyRootfs: type: "boolean" description: "Mount the container's root filesystem as read only." SecurityOpt: type: "array" description: "A list of string values to customize labels for MLS systems, such as SELinux." items: type: "string" StorageOpt: type: "object" description: | Storage driver options for this container, in the form `{"size": "120G"}`. additionalProperties: type: "string" Tmpfs: type: "object" description: | A map of container directories which should be replaced by tmpfs mounts, and their corresponding mount options. For example: ``` { "/run": "rw,noexec,nosuid,size=65536k" } ``` additionalProperties: type: "string" UTSMode: type: "string" description: "UTS namespace to use for the container." UsernsMode: type: "string" description: | Sets the usernamespace mode for the container when usernamespace remapping option is enabled. ShmSize: type: "integer" description: | Size of `/dev/shm` in bytes. If omitted, the system uses 64MB. minimum: 0 Sysctls: type: "object" description: | A list of kernel parameters (sysctls) to set in the container. For example: ``` {"net.ipv4.ip_forward": "1"} ``` additionalProperties: type: "string" Runtime: type: "string" description: "Runtime to use with this container." # Applicable to Windows ConsoleSize: type: "array" description: | Initial console size, as an `[height, width]` array. (Windows only) minItems: 2 maxItems: 2 items: type: "integer" minimum: 0 Isolation: type: "string" description: | Isolation technology of the container. (Windows only) enum: - "default" - "process" - "hyperv" MaskedPaths: type: "array" description: | The list of paths to be masked inside the container (this overrides the default set of paths). items: type: "string" ReadonlyPaths: type: "array" description: | The list of paths to be set as read-only inside the container (this overrides the default set of paths). items: type: "string" ContainerConfig: description: "Configuration for a container that is portable between hosts" type: "object" properties: Hostname: description: "The hostname to use for the container, as a valid RFC 1123 hostname." type: "string" Domainname: description: "The domain name to use for the container." type: "string" User: description: "The user that commands are run as inside the container." type: "string" AttachStdin: description: "Whether to attach to `stdin`." 
type: "boolean" default: false AttachStdout: description: "Whether to attach to `stdout`." type: "boolean" default: true AttachStderr: description: "Whether to attach to `stderr`." type: "boolean" default: true ExposedPorts: description: | An object mapping ports to an empty object in the form: `{"<port>/<tcp|udp|sctp>": {}}` type: "object" additionalProperties: type: "object" enum: - {} default: {} Tty: description: | Attach standard streams to a TTY, including `stdin` if it is not closed. type: "boolean" default: false OpenStdin: description: "Open `stdin`" type: "boolean" default: false StdinOnce: description: "Close `stdin` after one attached client disconnects" type: "boolean" default: false Env: description: | A list of environment variables to set inside the container in the form `["VAR=value", ...]`. A variable without `=` is removed from the environment, rather than to have an empty value. type: "array" items: type: "string" Cmd: description: | Command to run specified as a string or an array of strings. type: "array" items: type: "string" Healthcheck: $ref: "#/definitions/HealthConfig" ArgsEscaped: description: "Command is already escaped (Windows only)" type: "boolean" Image: description: | The name of the image to use when creating the container/ type: "string" Volumes: description: | An object mapping mount point paths inside the container to empty objects. type: "object" additionalProperties: type: "object" enum: - {} default: {} WorkingDir: description: "The working directory for commands to run in." type: "string" Entrypoint: description: | The entry point for the container as a string or an array of strings. If the array consists of exactly one empty string (`[""]`) then the entry point is reset to system default (i.e., the entry point used by docker when there is no `ENTRYPOINT` instruction in the `Dockerfile`). type: "array" items: type: "string" NetworkDisabled: description: "Disable networking for the container." type: "boolean" MacAddress: description: "MAC address of the container." type: "string" OnBuild: description: | `ONBUILD` metadata that were defined in the image's `Dockerfile`. type: "array" items: type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" StopSignal: description: | Signal to stop a container as a string or unsigned integer. type: "string" default: "SIGTERM" StopTimeout: description: "Timeout to stop a container in seconds." type: "integer" default: 10 Shell: description: | Shell for when `RUN`, `CMD`, and `ENTRYPOINT` uses a shell. type: "array" items: type: "string" NetworkingConfig: description: | NetworkingConfig represents the container's networking configuration for each of its interfaces. It is used for the networking configs specified in the `docker create` and `docker network connect` commands. type: "object" properties: EndpointsConfig: description: | A mapping of network name to endpoint configuration for that network. type: "object" additionalProperties: $ref: "#/definitions/EndpointSettings" example: # putting an example here, instead of using the example values from # /definitions/EndpointSettings, because containers/create currently # does not support attaching to multiple networks, so the example request # would be confusing if it showed that multiple networks can be contained # in the EndpointsConfig. 
# TODO remove once we support multiple networks on container create (see https://github.com/moby/moby/blob/07e6b843594e061f82baa5fa23c2ff7d536c2a05/daemon/create.go#L323) EndpointsConfig: isolated_nw: IPAMConfig: IPv4Address: "172.20.30.33" IPv6Address: "2001:db8:abcd::3033" LinkLocalIPs: - "169.254.34.68" - "fe80::3468" Links: - "container_1" - "container_2" Aliases: - "server_x" - "server_y" NetworkSettings: description: "NetworkSettings exposes the network settings in the API" type: "object" properties: Bridge: description: Name of the network's bridge (for example, `docker0`). type: "string" example: "docker0" SandboxID: description: SandboxID uniquely represents a container's network stack. type: "string" example: "9d12daf2c33f5959c8bf90aa513e4f65b561738661003029ec84830cd503a0c3" HairpinMode: description: | Indicates if hairpin NAT should be enabled on the virtual interface. type: "boolean" example: false LinkLocalIPv6Address: description: IPv6 unicast address using the link-local prefix. type: "string" example: "fe80::42:acff:fe11:1" LinkLocalIPv6PrefixLen: description: Prefix length of the IPv6 unicast address. type: "integer" example: "64" Ports: $ref: "#/definitions/PortMap" SandboxKey: description: SandboxKey identifies the sandbox type: "string" example: "/var/run/docker/netns/8ab54b426c38" # TODO is SecondaryIPAddresses actually used? SecondaryIPAddresses: description: "" type: "array" items: $ref: "#/definitions/Address" x-nullable: true # TODO is SecondaryIPv6Addresses actually used? SecondaryIPv6Addresses: description: "" type: "array" items: $ref: "#/definitions/Address" x-nullable: true # TODO properties below are part of DefaultNetworkSettings, which is # marked as deprecated since Docker 1.9 and to be removed in Docker v17.12 EndpointID: description: | EndpointID uniquely represents a service endpoint in a Sandbox. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "b88f5b905aabf2893f3cbc4ee42d1ea7980bbc0a92e2c8922b1e1795298afb0b" Gateway: description: | Gateway address for the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "172.17.0.1" GlobalIPv6Address: description: | Global IPv6 address for the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "2001:db8::5689" GlobalIPv6PrefixLen: description: | Mask length of the global IPv6 address. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. 
This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "integer" example: 64 IPAddress: description: | IPv4 address for the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "172.17.0.4" IPPrefixLen: description: | Mask length of the IPv4 address. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "integer" example: 16 IPv6Gateway: description: | IPv6 gateway address for this network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "2001:db8:2::100" MacAddress: description: | MAC address for the container on the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "02:42:ac:11:00:04" Networks: description: | Information about all networks that the container is connected to. type: "object" additionalProperties: $ref: "#/definitions/EndpointSettings" Address: description: Address represents an IPv4 or IPv6 IP address. type: "object" properties: Addr: description: IP address. type: "string" PrefixLen: description: Mask length of the IP address. type: "integer" PortMap: description: | PortMap describes the mapping of container ports to host ports, using the container's port-number and protocol as key in the format `<port>/<protocol>`, for example, `80/udp`. If a container's port is mapped for multiple protocols, separate entries are added to the mapping table. type: "object" additionalProperties: type: "array" x-nullable: true items: $ref: "#/definitions/PortBinding" example: "443/tcp": - HostIp: "127.0.0.1" HostPort: "4443" "80/tcp": - HostIp: "0.0.0.0" HostPort: "80" - HostIp: "0.0.0.0" HostPort: "8080" "80/udp": - HostIp: "0.0.0.0" HostPort: "80" "53/udp": - HostIp: "0.0.0.0" HostPort: "53" "2377/tcp": null PortBinding: description: | PortBinding represents a binding between a host IP address and a host port. type: "object" properties: HostIp: description: "Host IP address that the container's port is mapped to." type: "string" example: "127.0.0.1" HostPort: description: "Host port number that the container's port is mapped to." type: "string" example: "4443" GraphDriverData: description: "Information about a container's graph driver." 
type: "object" required: [Name, Data] properties: Name: type: "string" x-nullable: false Data: type: "object" x-nullable: false additionalProperties: type: "string" Image: type: "object" required: - Id - Parent - Comment - Created - Container - DockerVersion - Author - Architecture - Os - Size - VirtualSize - GraphDriver - RootFS properties: Id: type: "string" x-nullable: false RepoTags: type: "array" items: type: "string" RepoDigests: type: "array" items: type: "string" Parent: type: "string" x-nullable: false Comment: type: "string" x-nullable: false Created: type: "string" x-nullable: false Container: type: "string" x-nullable: false ContainerConfig: $ref: "#/definitions/ContainerConfig" DockerVersion: type: "string" x-nullable: false Author: type: "string" x-nullable: false Config: $ref: "#/definitions/ContainerConfig" Architecture: type: "string" x-nullable: false Os: type: "string" x-nullable: false OsVersion: type: "string" Size: type: "integer" format: "int64" x-nullable: false VirtualSize: type: "integer" format: "int64" x-nullable: false GraphDriver: $ref: "#/definitions/GraphDriverData" RootFS: type: "object" required: [Type] properties: Type: type: "string" x-nullable: false Layers: type: "array" items: type: "string" BaseLayer: type: "string" Metadata: type: "object" properties: LastTagTime: type: "string" format: "dateTime" ImageSummary: type: "object" required: - Id - ParentId - RepoTags - RepoDigests - Created - Size - SharedSize - VirtualSize - Labels - Containers properties: Id: type: "string" x-nullable: false ParentId: type: "string" x-nullable: false RepoTags: type: "array" x-nullable: false items: type: "string" RepoDigests: type: "array" x-nullable: false items: type: "string" Created: type: "integer" x-nullable: false Size: type: "integer" x-nullable: false SharedSize: type: "integer" x-nullable: false VirtualSize: type: "integer" x-nullable: false Labels: type: "object" x-nullable: false additionalProperties: type: "string" Containers: x-nullable: false type: "integer" AuthConfig: type: "object" properties: username: type: "string" password: type: "string" email: type: "string" serveraddress: type: "string" example: username: "hannibal" password: "xxxx" serveraddress: "https://index.docker.io/v1/" ProcessConfig: type: "object" properties: privileged: type: "boolean" user: type: "string" tty: type: "boolean" entrypoint: type: "string" arguments: type: "array" items: type: "string" Volume: type: "object" required: [Name, Driver, Mountpoint, Labels, Scope, Options] properties: Name: type: "string" description: "Name of the volume." x-nullable: false Driver: type: "string" description: "Name of the volume driver used by the volume." x-nullable: false Mountpoint: type: "string" description: "Mount path of the volume on the host." x-nullable: false CreatedAt: type: "string" format: "dateTime" description: "Date/Time the volume was created." Status: type: "object" description: | Low-level details about the volume, provided by the volume driver. Details are returned as a map with key/value pairs: `{"key":"value","key2":"value2"}`. The `Status` field is optional, and is omitted if the volume driver does not support this feature. additionalProperties: type: "object" Labels: type: "object" description: "User-defined key/value metadata." x-nullable: false additionalProperties: type: "string" Scope: type: "string" description: | The level at which the volume exists. Either `global` for cluster-wide, or `local` for machine level. 
default: "local" x-nullable: false enum: ["local", "global"] Options: type: "object" description: | The driver specific options used when creating the volume. additionalProperties: type: "string" UsageData: type: "object" x-nullable: true required: [Size, RefCount] description: | Usage details about the volume. This information is used by the `GET /system/df` endpoint, and omitted in other endpoints. properties: Size: type: "integer" default: -1 description: | Amount of disk space used by the volume (in bytes). This information is only available for volumes created with the `"local"` volume driver. For volumes created with other volume drivers, this field is set to `-1` ("not available") x-nullable: false RefCount: type: "integer" default: -1 description: | The number of containers referencing this volume. This field is set to `-1` if the reference-count is not available. x-nullable: false example: Name: "tardis" Driver: "custom" Mountpoint: "/var/lib/docker/volumes/tardis" Status: hello: "world" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Scope: "local" CreatedAt: "2016-06-07T20:31:11.853781916Z" Network: type: "object" properties: Name: type: "string" Id: type: "string" Created: type: "string" format: "dateTime" Scope: type: "string" Driver: type: "string" EnableIPv6: type: "boolean" IPAM: $ref: "#/definitions/IPAM" Internal: type: "boolean" Attachable: type: "boolean" Ingress: type: "boolean" Containers: type: "object" additionalProperties: $ref: "#/definitions/NetworkContainer" Options: type: "object" additionalProperties: type: "string" Labels: type: "object" additionalProperties: type: "string" example: Name: "net01" Id: "7d86d31b1478e7cca9ebed7e73aa0fdeec46c5ca29497431d3007d2d9e15ed99" Created: "2016-10-19T04:33:30.360899459Z" Scope: "local" Driver: "bridge" EnableIPv6: false IPAM: Driver: "default" Config: - Subnet: "172.19.0.0/16" Gateway: "172.19.0.1" Options: foo: "bar" Internal: false Attachable: false Ingress: false Containers: 19a4d5d687db25203351ed79d478946f861258f018fe384f229f2efa4b23513c: Name: "test" EndpointID: "628cadb8bcb92de107b2a1e516cbffe463e321f548feb37697cce00ad694f21a" MacAddress: "02:42:ac:13:00:02" IPv4Address: "172.19.0.2/16" IPv6Address: "" Options: com.docker.network.bridge.default_bridge: "true" com.docker.network.bridge.enable_icc: "true" com.docker.network.bridge.enable_ip_masquerade: "true" com.docker.network.bridge.host_binding_ipv4: "0.0.0.0" com.docker.network.bridge.name: "docker0" com.docker.network.driver.mtu: "1500" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" IPAM: type: "object" properties: Driver: description: "Name of the IPAM driver to use." type: "string" default: "default" Config: description: | List of IPAM configuration options, specified as a map: ``` {"Subnet": <CIDR>, "IPRange": <CIDR>, "Gateway": <IP address>, "AuxAddress": <device_name:IP address>} ``` type: "array" items: type: "object" additionalProperties: type: "string" Options: description: "Driver-specific options, specified as a map." 
type: "object" additionalProperties: type: "string" NetworkContainer: type: "object" properties: Name: type: "string" EndpointID: type: "string" MacAddress: type: "string" IPv4Address: type: "string" IPv6Address: type: "string" BuildInfo: type: "object" properties: id: type: "string" stream: type: "string" error: type: "string" errorDetail: $ref: "#/definitions/ErrorDetail" status: type: "string" progress: type: "string" progressDetail: $ref: "#/definitions/ProgressDetail" aux: $ref: "#/definitions/ImageID" BuildCache: type: "object" properties: ID: type: "string" Parent: type: "string" Type: type: "string" Description: type: "string" InUse: type: "boolean" Shared: type: "boolean" Size: description: | Amount of disk space used by the build cache (in bytes). type: "integer" CreatedAt: description: | Date and time at which the build cache was created in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2016-08-18T10:44:24.496525531Z" LastUsedAt: description: | Date and time at which the build cache was last used in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" x-nullable: true example: "2017-08-09T07:09:37.632105588Z" UsageCount: type: "integer" ImageID: type: "object" description: "Image ID or Digest" properties: ID: type: "string" example: ID: "sha256:85f05633ddc1c50679be2b16a0479ab6f7637f8884e0cfe0f4d20e1ebb3d6e7c" CreateImageInfo: type: "object" properties: id: type: "string" error: type: "string" status: type: "string" progress: type: "string" progressDetail: $ref: "#/definitions/ProgressDetail" PushImageInfo: type: "object" properties: error: type: "string" status: type: "string" progress: type: "string" progressDetail: $ref: "#/definitions/ProgressDetail" ErrorDetail: type: "object" properties: code: type: "integer" message: type: "string" ProgressDetail: type: "object" properties: current: type: "integer" total: type: "integer" ErrorResponse: description: "Represents an error." type: "object" required: ["message"] properties: message: description: "The error message." type: "string" x-nullable: false example: message: "Something went wrong." IdResponse: description: "Response to an API call that returns just an Id" type: "object" required: ["Id"] properties: Id: description: "The id of the newly created object." type: "string" x-nullable: false EndpointSettings: description: "Configuration for a network endpoint." type: "object" properties: # Configurations IPAMConfig: $ref: "#/definitions/EndpointIPAMConfig" Links: type: "array" items: type: "string" example: - "container_1" - "container_2" Aliases: type: "array" items: type: "string" example: - "server_x" - "server_y" # Operational data NetworkID: description: | Unique ID of the network. type: "string" example: "08754567f1f40222263eab4102e1c733ae697e8e354aa9cd6e18d7402835292a" EndpointID: description: | Unique ID for the service endpoint in a Sandbox. type: "string" example: "b88f5b905aabf2893f3cbc4ee42d1ea7980bbc0a92e2c8922b1e1795298afb0b" Gateway: description: | Gateway address for this network. type: "string" example: "172.17.0.1" IPAddress: description: | IPv4 address. type: "string" example: "172.17.0.4" IPPrefixLen: description: | Mask length of the IPv4 address. type: "integer" example: 16 IPv6Gateway: description: | IPv6 gateway address. type: "string" example: "2001:db8:2::100" GlobalIPv6Address: description: | Global IPv6 address. 
type: "string" example: "2001:db8::5689" GlobalIPv6PrefixLen: description: | Mask length of the global IPv6 address. type: "integer" format: "int64" example: 64 MacAddress: description: | MAC address for the endpoint on this network. type: "string" example: "02:42:ac:11:00:04" DriverOpts: description: | DriverOpts is a mapping of driver options and values. These options are passed directly to the driver and are driver specific. type: "object" x-nullable: true additionalProperties: type: "string" example: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" EndpointIPAMConfig: description: | EndpointIPAMConfig represents an endpoint's IPAM configuration. type: "object" x-nullable: true properties: IPv4Address: type: "string" example: "172.20.30.33" IPv6Address: type: "string" example: "2001:db8:abcd::3033" LinkLocalIPs: type: "array" items: type: "string" example: - "169.254.34.68" - "fe80::3468" PluginMount: type: "object" x-nullable: false required: [Name, Description, Settable, Source, Destination, Type, Options] properties: Name: type: "string" x-nullable: false example: "some-mount" Description: type: "string" x-nullable: false example: "This is a mount that's used by the plugin." Settable: type: "array" items: type: "string" Source: type: "string" example: "/var/lib/docker/plugins/" Destination: type: "string" x-nullable: false example: "/mnt/state" Type: type: "string" x-nullable: false example: "bind" Options: type: "array" items: type: "string" example: - "rbind" - "rw" PluginDevice: type: "object" required: [Name, Description, Settable, Path] x-nullable: false properties: Name: type: "string" x-nullable: false Description: type: "string" x-nullable: false Settable: type: "array" items: type: "string" Path: type: "string" example: "/dev/fuse" PluginEnv: type: "object" x-nullable: false required: [Name, Description, Settable, Value] properties: Name: x-nullable: false type: "string" Description: x-nullable: false type: "string" Settable: type: "array" items: type: "string" Value: type: "string" PluginInterfaceType: type: "object" x-nullable: false required: [Prefix, Capability, Version] properties: Prefix: type: "string" x-nullable: false Capability: type: "string" x-nullable: false Version: type: "string" x-nullable: false PluginPrivilegeItem: description: | Describes a permission the user has to accept upon installing the plugin. type: "object" properties: Name: type: "string" example: "network" Description: type: "string" Value: type: "array" items: type: "string" example: - "host" Plugin: description: "A plugin for the Engine API" type: "object" required: [Settings, Enabled, Config, Name] properties: Id: type: "string" example: "5724e2c8652da337ab2eedd19fc6fc0ec908e4bd907c7421bf6a8dfc70c4c078" Name: type: "string" x-nullable: false example: "tiborvass/sample-volume-plugin" Enabled: description: True if the plugin is running. False if the plugin is not running, only installed. type: "boolean" x-nullable: false example: true Settings: description: "Settings that can be modified by users." 
type: "object" x-nullable: false required: [Args, Devices, Env, Mounts] properties: Mounts: type: "array" items: $ref: "#/definitions/PluginMount" Env: type: "array" items: type: "string" example: - "DEBUG=0" Args: type: "array" items: type: "string" Devices: type: "array" items: $ref: "#/definitions/PluginDevice" PluginReference: description: "plugin remote reference used to push/pull the plugin" type: "string" x-nullable: false example: "localhost:5000/tiborvass/sample-volume-plugin:latest" Config: description: "The config of a plugin." type: "object" x-nullable: false required: - Description - Documentation - Interface - Entrypoint - WorkDir - Network - Linux - PidHost - PropagatedMount - IpcHost - Mounts - Env - Args properties: DockerVersion: description: "Docker Version used to create the plugin" type: "string" x-nullable: false example: "17.06.0-ce" Description: type: "string" x-nullable: false example: "A sample volume plugin for Docker" Documentation: type: "string" x-nullable: false example: "https://docs.docker.com/engine/extend/plugins/" Interface: description: "The interface between Docker and the plugin" x-nullable: false type: "object" required: [Types, Socket] properties: Types: type: "array" items: $ref: "#/definitions/PluginInterfaceType" example: - "docker.volumedriver/1.0" Socket: type: "string" x-nullable: false example: "plugins.sock" ProtocolScheme: type: "string" example: "some.protocol/v1.0" description: "Protocol to use for clients connecting to the plugin." enum: - "" - "moby.plugins.http/v1" Entrypoint: type: "array" items: type: "string" example: - "/usr/bin/sample-volume-plugin" - "/data" WorkDir: type: "string" x-nullable: false example: "/bin/" User: type: "object" x-nullable: false properties: UID: type: "integer" format: "uint32" example: 1000 GID: type: "integer" format: "uint32" example: 1000 Network: type: "object" x-nullable: false required: [Type] properties: Type: x-nullable: false type: "string" example: "host" Linux: type: "object" x-nullable: false required: [Capabilities, AllowAllDevices, Devices] properties: Capabilities: type: "array" items: type: "string" example: - "CAP_SYS_ADMIN" - "CAP_SYSLOG" AllowAllDevices: type: "boolean" x-nullable: false example: false Devices: type: "array" items: $ref: "#/definitions/PluginDevice" PropagatedMount: type: "string" x-nullable: false example: "/mnt/volumes" IpcHost: type: "boolean" x-nullable: false example: false PidHost: type: "boolean" x-nullable: false example: false Mounts: type: "array" items: $ref: "#/definitions/PluginMount" Env: type: "array" items: $ref: "#/definitions/PluginEnv" example: - Name: "DEBUG" Description: "If set, prints debug messages" Settable: null Value: "0" Args: type: "object" x-nullable: false required: [Name, Description, Settable, Value] properties: Name: x-nullable: false type: "string" example: "args" Description: x-nullable: false type: "string" example: "command line arguments" Settable: type: "array" items: type: "string" Value: type: "array" items: type: "string" rootfs: type: "object" properties: type: type: "string" example: "layers" diff_ids: type: "array" items: type: "string" example: - "sha256:675532206fbf3030b8458f88d6e26d4eb1577688a25efec97154c94e8b6b4887" - "sha256:e216a057b1cb1efc11f8a268f37ef62083e70b1b38323ba252e25ac88904a7e8" ObjectVersion: description: | The version number of the object such as node, service, etc. This is needed to avoid conflicting writes. 
The client must send the version number along with the modified specification when updating these objects. This approach ensures safe concurrency and determinism in that the change on the object may not be applied if the version number has changed from the last read. In other words, if two update requests specify the same base version, only one of the requests can succeed. As a result, two separate update requests that happen at the same time will not unintentionally overwrite each other. type: "object" properties: Index: type: "integer" format: "uint64" example: 373531 NodeSpec: type: "object" properties: Name: description: "Name for the node." type: "string" example: "my-node" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" Role: description: "Role of the node." type: "string" enum: - "worker" - "manager" example: "manager" Availability: description: "Availability of the node." type: "string" enum: - "active" - "pause" - "drain" example: "active" example: Availability: "active" Name: "node-name" Role: "manager" Labels: foo: "bar" Node: type: "object" properties: ID: type: "string" example: "24ifsmvkjbyhk" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: description: | Date and time at which the node was added to the swarm in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2016-08-18T10:44:24.496525531Z" UpdatedAt: description: | Date and time at which the node was last updated in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2017-08-09T07:09:37.632105588Z" Spec: $ref: "#/definitions/NodeSpec" Description: $ref: "#/definitions/NodeDescription" Status: $ref: "#/definitions/NodeStatus" ManagerStatus: $ref: "#/definitions/ManagerStatus" NodeDescription: description: | NodeDescription encapsulates the properties of the Node as reported by the agent. type: "object" properties: Hostname: type: "string" example: "bf3067039e47" Platform: $ref: "#/definitions/Platform" Resources: $ref: "#/definitions/ResourceObject" Engine: $ref: "#/definitions/EngineDescription" TLSInfo: $ref: "#/definitions/TLSInfo" Platform: description: | Platform represents the platform (Arch/OS). type: "object" properties: Architecture: description: | Architecture represents the hardware architecture (for example, `x86_64`). type: "string" example: "x86_64" OS: description: | OS represents the Operating System (for example, `linux` or `windows`). type: "string" example: "linux" EngineDescription: description: "EngineDescription provides information about an engine." 
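  # Illustrative sketch only (not rendered into the generated API docs): a Go
  # SDK example of the read-modify-write pattern described for ObjectVersion,
  # draining all worker nodes by updating their NodeSpec. All values other than
  # the field names defined above are placeholders.
  #
  #   package main
  #
  #   import (
  #       "context"
  #
  #       "github.com/docker/docker/api/types"
  #       "github.com/docker/docker/api/types/swarm"
  #       "github.com/docker/docker/client"
  #   )
  #
  #   func main() {
  #       ctx := context.Background()
  #       cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
  #       if err != nil {
  #           panic(err)
  #       }
  #       defer cli.Close()
  #
  #       nodes, err := cli.NodeList(ctx, types.NodeListOptions{})
  #       if err != nil {
  #           panic(err)
  #       }
  #       for _, n := range nodes {
  #           if n.Spec.Role == swarm.NodeRoleWorker {
  #               spec := n.Spec
  #               spec.Availability = swarm.NodeAvailabilityDrain
  #               // The ObjectVersion read above is sent back so that a
  #               // conflicting concurrent update is rejected.
  #               if err := cli.NodeUpdate(ctx, n.ID, n.Version, spec); err != nil {
  #                   panic(err)
  #               }
  #           }
  #       }
  #   }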
type: "object" properties: EngineVersion: type: "string" example: "17.06.0" Labels: type: "object" additionalProperties: type: "string" example: foo: "bar" Plugins: type: "array" items: type: "object" properties: Type: type: "string" Name: type: "string" example: - Type: "Log" Name: "awslogs" - Type: "Log" Name: "fluentd" - Type: "Log" Name: "gcplogs" - Type: "Log" Name: "gelf" - Type: "Log" Name: "journald" - Type: "Log" Name: "json-file" - Type: "Log" Name: "logentries" - Type: "Log" Name: "splunk" - Type: "Log" Name: "syslog" - Type: "Network" Name: "bridge" - Type: "Network" Name: "host" - Type: "Network" Name: "ipvlan" - Type: "Network" Name: "macvlan" - Type: "Network" Name: "null" - Type: "Network" Name: "overlay" - Type: "Volume" Name: "local" - Type: "Volume" Name: "localhost:5000/vieux/sshfs:latest" - Type: "Volume" Name: "vieux/sshfs:latest" TLSInfo: description: | Information about the issuer of leaf TLS certificates and the trusted root CA certificate. type: "object" properties: TrustRoot: description: | The root CA certificate(s) that are used to validate leaf TLS certificates. type: "string" CertIssuerSubject: description: The base64-url-safe-encoded raw subject bytes of the issuer. type: "string" CertIssuerPublicKey: description: | The base64-url-safe-encoded raw public key bytes of the issuer. type: "string" example: TrustRoot: | -----BEGIN CERTIFICATE----- MIIBajCCARCgAwIBAgIUbYqrLSOSQHoxD8CwG6Bi2PJi9c8wCgYIKoZIzj0EAwIw EzERMA8GA1UEAxMIc3dhcm0tY2EwHhcNMTcwNDI0MjE0MzAwWhcNMzcwNDE5MjE0 MzAwWjATMREwDwYDVQQDEwhzd2FybS1jYTBZMBMGByqGSM49AgEGCCqGSM49AwEH A0IABJk/VyMPYdaqDXJb/VXh5n/1Yuv7iNrxV3Qb3l06XD46seovcDWs3IZNV1lf 3Skyr0ofcchipoiHkXBODojJydSjQjBAMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMB Af8EBTADAQH/MB0GA1UdDgQWBBRUXxuRcnFjDfR/RIAUQab8ZV/n4jAKBggqhkjO PQQDAgNIADBFAiAy+JTe6Uc3KyLCMiqGl2GyWGQqQDEcO3/YG36x7om65AIhAJvz pxv6zFeVEkAEEkqIYi0omA9+CjanB/6Bz4n1uw8H -----END CERTIFICATE----- CertIssuerSubject: "MBMxETAPBgNVBAMTCHN3YXJtLWNh" CertIssuerPublicKey: "MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEmT9XIw9h1qoNclv9VeHmf/Vi6/uI2vFXdBveXTpcPjqx6i9wNazchk1XWV/dKTKvSh9xyGKmiIeRcE4OiMnJ1A==" NodeStatus: description: | NodeStatus represents the status of a node. It provides the current status of the node, as seen by the manager. type: "object" properties: State: $ref: "#/definitions/NodeState" Message: type: "string" example: "" Addr: description: "IP address of the node." type: "string" example: "172.17.0.2" NodeState: description: "NodeState represents the state of a node." type: "string" enum: - "unknown" - "down" - "ready" - "disconnected" example: "ready" ManagerStatus: description: | ManagerStatus represents the status of a manager. It provides the current status of a node's manager component, if the node is a manager. x-nullable: true type: "object" properties: Leader: type: "boolean" default: false example: true Reachability: $ref: "#/definitions/Reachability" Addr: description: | The IP address and port at which the manager is reachable. type: "string" example: "10.0.0.46:2377" Reachability: description: "Reachability represents the reachability of a node." type: "string" enum: - "unknown" - "unreachable" - "reachable" example: "reachable" SwarmSpec: description: "User modifiable swarm configuration." type: "object" properties: Name: description: "Name of the swarm." type: "string" example: "default" Labels: description: "User-defined key/value metadata." 
type: "object" additionalProperties: type: "string" example: com.example.corp.type: "production" com.example.corp.department: "engineering" Orchestration: description: "Orchestration configuration." type: "object" x-nullable: true properties: TaskHistoryRetentionLimit: description: | The number of historic tasks to keep per instance or node. If negative, never remove completed or failed tasks. type: "integer" format: "int64" example: 10 Raft: description: "Raft configuration." type: "object" properties: SnapshotInterval: description: "The number of log entries between snapshots." type: "integer" format: "uint64" example: 10000 KeepOldSnapshots: description: | The number of snapshots to keep beyond the current snapshot. type: "integer" format: "uint64" LogEntriesForSlowFollowers: description: | The number of log entries to keep around to sync up slow followers after a snapshot is created. type: "integer" format: "uint64" example: 500 ElectionTick: description: | The number of ticks that a follower will wait for a message from the leader before becoming a candidate and starting an election. `ElectionTick` must be greater than `HeartbeatTick`. A tick currently defaults to one second, so these translate directly to seconds currently, but this is NOT guaranteed. type: "integer" example: 3 HeartbeatTick: description: | The number of ticks between heartbeats. Every HeartbeatTick ticks, the leader will send a heartbeat to the followers. A tick currently defaults to one second, so these translate directly to seconds currently, but this is NOT guaranteed. type: "integer" example: 1 Dispatcher: description: "Dispatcher configuration." type: "object" x-nullable: true properties: HeartbeatPeriod: description: | The delay for an agent to send a heartbeat to the dispatcher. type: "integer" format: "int64" example: 5000000000 CAConfig: description: "CA configuration." type: "object" x-nullable: true properties: NodeCertExpiry: description: "The duration node certificates are issued for." type: "integer" format: "int64" example: 7776000000000000 ExternalCAs: description: | Configuration for forwarding signing requests to an external certificate authority. type: "array" items: type: "object" properties: Protocol: description: | Protocol for communication with the external CA (currently only `cfssl` is supported). type: "string" enum: - "cfssl" default: "cfssl" URL: description: | URL where certificate signing requests should be sent. type: "string" Options: description: | An object with key/value pairs that are interpreted as protocol-specific options for the external CA driver. type: "object" additionalProperties: type: "string" CACert: description: | The root CA certificate (in PEM format) this external CA uses to issue TLS certificates (assumed to be to the current swarm root CA certificate if not provided). type: "string" SigningCACert: description: | The desired signing CA certificate for all swarm node TLS leaf certificates, in PEM format. type: "string" SigningCAKey: description: | The desired signing CA key for all swarm node TLS leaf certificates, in PEM format. type: "string" ForceRotate: description: | An integer whose purpose is to force swarm to generate a new signing CA certificate and key, if none have been specified in `SigningCACert` and `SigningCAKey` format: "uint64" type: "integer" EncryptionConfig: description: "Parameters related to encryption-at-rest." type: "object" properties: AutoLockManagers: description: | If set, generate a key and use it to lock data stored on the managers. 
type: "boolean" example: false TaskDefaults: description: "Defaults for creating tasks in this cluster." type: "object" properties: LogDriver: description: | The log driver to use for tasks created in the orchestrator if unspecified by a service. Updating this value only affects new tasks. Existing tasks continue to use their previously configured log driver until recreated. type: "object" properties: Name: description: | The log driver to use as a default for new tasks. type: "string" example: "json-file" Options: description: | Driver-specific options for the selectd log driver, specified as key/value pairs. type: "object" additionalProperties: type: "string" example: "max-file": "10" "max-size": "100m" # The Swarm information for `GET /info`. It is the same as `GET /swarm`, but # without `JoinTokens`. ClusterInfo: description: | ClusterInfo represents information about the swarm as is returned by the "/info" endpoint. Join-tokens are not included. x-nullable: true type: "object" properties: ID: description: "The ID of the swarm." type: "string" example: "abajmipo7b4xz5ip2nrla6b11" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: description: | Date and time at which the swarm was initialised in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2016-08-18T10:44:24.496525531Z" UpdatedAt: description: | Date and time at which the swarm was last updated in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2017-08-09T07:09:37.632105588Z" Spec: $ref: "#/definitions/SwarmSpec" TLSInfo: $ref: "#/definitions/TLSInfo" RootRotationInProgress: description: | Whether there is currently a root CA rotation in progress for the swarm type: "boolean" example: false DataPathPort: description: | DataPathPort specifies the data path port number for data traffic. Acceptable port range is 1024 to 49151. If no port is set or is set to 0, the default port (4789) is used. type: "integer" format: "uint32" default: 4789 example: 4789 DefaultAddrPool: description: | Default Address Pool specifies default subnet pools for global scope networks. type: "array" items: type: "string" format: "CIDR" example: ["10.10.0.0/16", "20.20.0.0/16"] SubnetSize: description: | SubnetSize specifies the subnet size of the networks created from the default subnet pool. type: "integer" format: "uint32" maximum: 29 default: 24 example: 24 JoinTokens: description: | JoinTokens contains the tokens workers and managers need to join the swarm. type: "object" properties: Worker: description: | The token workers can use to join the swarm. type: "string" example: "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-1awxwuwd3z9j1z3puu7rcgdbx" Manager: description: | The token managers can use to join the swarm. type: "string" example: "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-7p73s1dx5in4tatdymyhg9hu2" Swarm: type: "object" allOf: - $ref: "#/definitions/ClusterInfo" - type: "object" properties: JoinTokens: $ref: "#/definitions/JoinTokens" TaskSpec: description: "User modifiable task configuration." type: "object" properties: PluginSpec: type: "object" description: | Plugin spec for the service. *(Experimental release only.)* <p><br /></p> > **Note**: ContainerSpec, NetworkAttachmentSpec, and PluginSpec are > mutually exclusive. PluginSpec is only used when the Runtime field > is set to `plugin`. NetworkAttachmentSpec is used when the Runtime > field is set to `attachment`. 
properties: Name: description: "The name or 'alias' to use for the plugin." type: "string" Remote: description: "The plugin image reference to use." type: "string" Disabled: description: "Disable the plugin once scheduled." type: "boolean" PluginPrivilege: type: "array" items: $ref: "#/definitions/PluginPrivilegeItem" ContainerSpec: type: "object" description: | Container spec for the service. <p><br /></p> > **Note**: ContainerSpec, NetworkAttachmentSpec, and PluginSpec are > mutually exclusive. PluginSpec is only used when the Runtime field > is set to `plugin`. NetworkAttachmentSpec is used when the Runtime > field is set to `attachment`. properties: Image: description: "The image name to use for the container" type: "string" Labels: description: "User-defined key/value data." type: "object" additionalProperties: type: "string" Command: description: "The command to be run in the image." type: "array" items: type: "string" Args: description: "Arguments to the command." type: "array" items: type: "string" Hostname: description: | The hostname to use for the container, as a valid [RFC 1123](https://tools.ietf.org/html/rfc1123) hostname. type: "string" Env: description: | A list of environment variables in the form `VAR=value`. type: "array" items: type: "string" Dir: description: "The working directory for commands to run in." type: "string" User: description: "The user inside the container." type: "string" Groups: type: "array" description: | A list of additional groups that the container process will run as. items: type: "string" Privileges: type: "object" description: "Security options for the container" properties: CredentialSpec: type: "object" description: "CredentialSpec for managed service account (Windows only)" properties: Config: type: "string" example: "0bt9dmxjvjiqermk6xrop3ekq" description: | Load credential spec from a Swarm Config with the given ID. The specified config must also be present in the Configs field with the Runtime property set. <p><br /></p> > **Note**: `CredentialSpec.File`, `CredentialSpec.Registry`, > and `CredentialSpec.Config` are mutually exclusive. File: type: "string" example: "spec.json" description: | Load credential spec from this file. The file is read by the daemon, and must be present in the `CredentialSpecs` subdirectory in the docker data directory, which defaults to `C:\ProgramData\Docker\` on Windows. For example, specifying `spec.json` loads `C:\ProgramData\Docker\CredentialSpecs\spec.json`. <p><br /></p> > **Note**: `CredentialSpec.File`, `CredentialSpec.Registry`, > and `CredentialSpec.Config` are mutually exclusive. Registry: type: "string" description: | Load credential spec from this value in the Windows registry. The specified registry value must be located in: `HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization\Containers\CredentialSpecs` <p><br /></p> > **Note**: `CredentialSpec.File`, `CredentialSpec.Registry`, > and `CredentialSpec.Config` are mutually exclusive. SELinuxContext: type: "object" description: "SELinux labels of the container" properties: Disable: type: "boolean" description: "Disable SELinux" User: type: "string" description: "SELinux user label" Role: type: "string" description: "SELinux role label" Type: type: "string" description: "SELinux type label" Level: type: "string" description: "SELinux level label" TTY: description: "Whether a pseudo-TTY should be allocated." 
type: "boolean" OpenStdin: description: "Open `stdin`" type: "boolean" ReadOnly: description: "Mount the container's root filesystem as read only." type: "boolean" Mounts: description: | Specification for mounts to be added to containers created as part of the service. type: "array" items: $ref: "#/definitions/Mount" StopSignal: description: "Signal to stop the container." type: "string" StopGracePeriod: description: | Amount of time to wait for the container to terminate before forcefully killing it. type: "integer" format: "int64" HealthCheck: $ref: "#/definitions/HealthConfig" Hosts: type: "array" description: | A list of hostname/IP mappings to add to the container's `hosts` file. The format of extra hosts is specified in the [hosts(5)](http://man7.org/linux/man-pages/man5/hosts.5.html) man page: IP_address canonical_hostname [aliases...] items: type: "string" DNSConfig: description: | Specification for DNS related configurations in resolver configuration file (`resolv.conf`). type: "object" properties: Nameservers: description: "The IP addresses of the name servers." type: "array" items: type: "string" Search: description: "A search list for host-name lookup." type: "array" items: type: "string" Options: description: | A list of internal resolver variables to be modified (e.g., `debug`, `ndots:3`, etc.). type: "array" items: type: "string" Secrets: description: | Secrets contains references to zero or more secrets that will be exposed to the service. type: "array" items: type: "object" properties: File: description: | File represents a specific target that is backed by a file. type: "object" properties: Name: description: | Name represents the final filename in the filesystem. type: "string" UID: description: "UID represents the file UID." type: "string" GID: description: "GID represents the file GID." type: "string" Mode: description: "Mode represents the FileMode of the file." type: "integer" format: "uint32" SecretID: description: | SecretID represents the ID of the specific secret that we're referencing. type: "string" SecretName: description: | SecretName is the name of the secret that this references, but this is just provided for lookup/display purposes. The secret in the reference will be identified by its ID. type: "string" Configs: description: | Configs contains references to zero or more configs that will be exposed to the service. type: "array" items: type: "object" properties: File: description: | File represents a specific target that is backed by a file. <p><br /><p> > **Note**: `Configs.File` and `Configs.Runtime` are mutually exclusive type: "object" properties: Name: description: | Name represents the final filename in the filesystem. type: "string" UID: description: "UID represents the file UID." type: "string" GID: description: "GID represents the file GID." type: "string" Mode: description: "Mode represents the FileMode of the file." type: "integer" format: "uint32" Runtime: description: | Runtime represents a target that is not mounted into the container but is used by the task <p><br /><p> > **Note**: `Configs.File` and `Configs.Runtime` are mutually > exclusive type: "object" ConfigID: description: | ConfigID represents the ID of the specific config that we're referencing. type: "string" ConfigName: description: | ConfigName is the name of the config that this references, but this is just provided for lookup/display purposes. The config in the reference will be identified by its ID. 
type: "string" Isolation: type: "string" description: | Isolation technology of the containers running the service. (Windows only) enum: - "default" - "process" - "hyperv" Init: description: | Run an init inside the container that forwards signals and reaps processes. This field is omitted if empty, and the default (as configured on the daemon) is used. type: "boolean" x-nullable: true Sysctls: description: | Set kernel namedspaced parameters (sysctls) in the container. The Sysctls option on services accepts the same sysctls as the are supported on containers. Note that while the same sysctls are supported, no guarantees or checks are made about their suitability for a clustered environment, and it's up to the user to determine whether a given sysctl will work properly in a Service. type: "object" additionalProperties: type: "string" # This option is not used by Windows containers CapabilityAdd: type: "array" description: | A list of kernel capabilities to add to the default set for the container. items: type: "string" example: - "CAP_NET_RAW" - "CAP_SYS_ADMIN" - "CAP_SYS_CHROOT" - "CAP_SYSLOG" CapabilityDrop: type: "array" description: | A list of kernel capabilities to drop from the default set for the container. items: type: "string" example: - "CAP_NET_RAW" Ulimits: description: | A list of resource limits to set in the container. For example: `{"Name": "nofile", "Soft": 1024, "Hard": 2048}`" type: "array" items: type: "object" properties: Name: description: "Name of ulimit" type: "string" Soft: description: "Soft limit" type: "integer" Hard: description: "Hard limit" type: "integer" NetworkAttachmentSpec: description: | Read-only spec type for non-swarm containers attached to swarm overlay networks. <p><br /></p> > **Note**: ContainerSpec, NetworkAttachmentSpec, and PluginSpec are > mutually exclusive. PluginSpec is only used when the Runtime field > is set to `plugin`. NetworkAttachmentSpec is used when the Runtime > field is set to `attachment`. type: "object" properties: ContainerID: description: "ID of the container represented by this task" type: "string" Resources: description: | Resource requirements which apply to each individual container created as part of the service. type: "object" properties: Limits: description: "Define resources limits." $ref: "#/definitions/Limit" Reservation: description: "Define resources reservation." $ref: "#/definitions/ResourceObject" RestartPolicy: description: | Specification for the restart policy which applies to containers created as part of this service. type: "object" properties: Condition: description: "Condition for restart." type: "string" enum: - "none" - "on-failure" - "any" Delay: description: "Delay between restart attempts." type: "integer" format: "int64" MaxAttempts: description: | Maximum attempts to restart a given container before giving up (default value is 0, which is ignored). type: "integer" format: "int64" default: 0 Window: description: | Windows is the time window used to evaluate the restart policy (default value is 0, which is unbounded). type: "integer" format: "int64" default: 0 Placement: type: "object" properties: Constraints: description: | An array of constraint expressions to limit the set of nodes where a task can be scheduled. Constraint expressions can either use a _match_ (`==`) or _exclude_ (`!=`) rule. Multiple constraints find nodes that satisfy every expression (AND match). 
Constraints can match node or Docker Engine labels as follows: node attribute | matches | example ---------------------|--------------------------------|----------------------------------------------- `node.id` | Node ID | `node.id==2ivku8v2gvtg4` `node.hostname` | Node hostname | `node.hostname!=node-2` `node.role` | Node role (`manager`/`worker`) | `node.role==manager` `node.platform.os` | Node operating system | `node.platform.os==windows` `node.platform.arch` | Node architecture | `node.platform.arch==x86_64` `node.labels` | User-defined node labels | `node.labels.security==high` `engine.labels` | Docker Engine's labels | `engine.labels.operatingsystem==ubuntu-14.04` `engine.labels` apply to Docker Engine labels like operating system, drivers, etc. Swarm administrators add `node.labels` for operational purposes by using the [`node update endpoint`](#operation/NodeUpdate). type: "array" items: type: "string" example: - "node.hostname!=node3.corp.example.com" - "node.role!=manager" - "node.labels.type==production" - "node.platform.os==linux" - "node.platform.arch==x86_64" Preferences: description: | Preferences provide a way to make the scheduler aware of factors such as topology. They are provided in order from highest to lowest precedence. type: "array" items: type: "object" properties: Spread: type: "object" properties: SpreadDescriptor: description: | Label descriptor, such as `engine.labels.az`. type: "string" example: - Spread: SpreadDescriptor: "node.labels.datacenter" - Spread: SpreadDescriptor: "node.labels.rack" MaxReplicas: description: | Maximum number of replicas per node (default value is 0, which is unlimited). type: "integer" format: "int64" default: 0 Platforms: description: | Platforms stores all the platforms that the service's image can run on. This field is used in the platform filter for scheduling. If empty, then the platform filter is off, meaning there are no scheduling restrictions. type: "array" items: $ref: "#/definitions/Platform" ForceUpdate: description: | A counter that triggers an update even if no relevant parameters have been changed. type: "integer" Runtime: description: | Runtime is the type of runtime specified for the task executor. type: "string" Networks: description: "Specifies which networks the service should attach to." type: "array" items: $ref: "#/definitions/NetworkAttachmentConfig" LogDriver: description: | Specifies the log driver to use for tasks created from this spec. If not present, the default one for the swarm will be used, finally falling back to the engine default if not specified. type: "object" properties: Name: type: "string" Options: type: "object" additionalProperties: type: "string" TaskState: type: "string" enum: - "new" - "allocated" - "pending" - "assigned" - "accepted" - "preparing" - "ready" - "starting" - "running" - "complete" - "shutdown" - "failed" - "rejected" - "remove" - "orphaned" Task: type: "object" properties: ID: description: "The ID of the task." type: "string" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" UpdatedAt: type: "string" format: "dateTime" Name: description: "Name of the task." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" Spec: $ref: "#/definitions/TaskSpec" ServiceID: description: "The ID of the service this task is part of." type: "string" Slot: type: "integer" NodeID: description: "The ID of the node that this task is on."
type: "string" AssignedGenericResources: $ref: "#/definitions/GenericResources" Status: type: "object" properties: Timestamp: type: "string" format: "dateTime" State: $ref: "#/definitions/TaskState" Message: type: "string" Err: type: "string" ContainerStatus: type: "object" properties: ContainerID: type: "string" PID: type: "integer" ExitCode: type: "integer" DesiredState: $ref: "#/definitions/TaskState" JobIteration: description: | If the Service this Task belongs to is a job-mode service, contains the JobIteration of the Service this Task was created for. Absent if the Task was created for a Replicated or Global Service. $ref: "#/definitions/ObjectVersion" example: ID: "0kzzo1i0y4jz6027t0k7aezc7" Version: Index: 71 CreatedAt: "2016-06-07T21:07:31.171892745Z" UpdatedAt: "2016-06-07T21:07:31.376370513Z" Spec: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ServiceID: "9mnpnzenvg8p8tdbtq4wvbkcz" Slot: 1 NodeID: "60gvrl6tm78dmak4yl7srz94v" Status: Timestamp: "2016-06-07T21:07:31.290032978Z" State: "running" Message: "started" ContainerStatus: ContainerID: "e5d62702a1b48d01c3e02ca1e0212a250801fa8d67caca0b6f35919ebc12f035" PID: 677 DesiredState: "running" NetworksAttachments: - Network: ID: "4qvuz4ko70xaltuqbt8956gd1" Version: Index: 18 CreatedAt: "2016-06-07T20:31:11.912919752Z" UpdatedAt: "2016-06-07T21:07:29.955277358Z" Spec: Name: "ingress" Labels: com.docker.swarm.internal: "true" DriverConfiguration: {} IPAMOptions: Driver: {} Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" DriverState: Name: "overlay" Options: com.docker.network.driver.overlay.vxlanid_list: "256" IPAMOptions: Driver: Name: "default" Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" Addresses: - "10.255.0.10/16" AssignedGenericResources: - DiscreteResourceSpec: Kind: "SSD" Value: 3 - NamedResourceSpec: Kind: "GPU" Value: "UUID1" - NamedResourceSpec: Kind: "GPU" Value: "UUID2" ServiceSpec: description: "User modifiable configuration for a service." properties: Name: description: "Name of the service." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" TaskTemplate: $ref: "#/definitions/TaskSpec" Mode: description: "Scheduling mode for the service." type: "object" properties: Replicated: type: "object" properties: Replicas: type: "integer" format: "int64" Global: type: "object" ReplicatedJob: description: | The mode used for services with a finite number of tasks that run to a completed state. type: "object" properties: MaxConcurrent: description: | The maximum number of replicas to run simultaneously. type: "integer" format: "int64" default: 1 TotalCompletions: description: | The total number of replicas desired to reach the Completed state. If unset, will default to the value of `MaxConcurrent` type: "integer" format: "int64" GlobalJob: description: | The mode used for services which run a task to the completed state on each valid node. type: "object" UpdateConfig: description: "Specification for the update strategy of the service." type: "object" properties: Parallelism: description: | Maximum number of tasks to be updated in one iteration (0 means unlimited parallelism). type: "integer" format: "int64" Delay: description: "Amount of time between updates, in nanoseconds." type: "integer" format: "int64" FailureAction: description: | Action to take if an updated task fails to run, or stops running during the update. 
type: "string" enum: - "continue" - "pause" - "rollback" Monitor: description: | Amount of time to monitor each updated task for failures, in nanoseconds. type: "integer" format: "int64" MaxFailureRatio: description: | The fraction of tasks that may fail during an update before the failure action is invoked, specified as a floating point number between 0 and 1. type: "number" default: 0 Order: description: | The order of operations when rolling out an updated task. Either the old task is shut down before the new task is started, or the new task is started before the old task is shut down. type: "string" enum: - "stop-first" - "start-first" RollbackConfig: description: "Specification for the rollback strategy of the service." type: "object" properties: Parallelism: description: | Maximum number of tasks to be rolled back in one iteration (0 means unlimited parallelism). type: "integer" format: "int64" Delay: description: | Amount of time between rollback iterations, in nanoseconds. type: "integer" format: "int64" FailureAction: description: | Action to take if an rolled back task fails to run, or stops running during the rollback. type: "string" enum: - "continue" - "pause" Monitor: description: | Amount of time to monitor each rolled back task for failures, in nanoseconds. type: "integer" format: "int64" MaxFailureRatio: description: | The fraction of tasks that may fail during a rollback before the failure action is invoked, specified as a floating point number between 0 and 1. type: "number" default: 0 Order: description: | The order of operations when rolling back a task. Either the old task is shut down before the new task is started, or the new task is started before the old task is shut down. type: "string" enum: - "stop-first" - "start-first" Networks: description: "Specifies which networks the service should attach to." type: "array" items: $ref: "#/definitions/NetworkAttachmentConfig" EndpointSpec: $ref: "#/definitions/EndpointSpec" EndpointPortConfig: type: "object" properties: Name: type: "string" Protocol: type: "string" enum: - "tcp" - "udp" - "sctp" TargetPort: description: "The port inside the container." type: "integer" PublishedPort: description: "The port on the swarm hosts." type: "integer" PublishMode: description: | The mode in which port is published. <p><br /></p> - "ingress" makes the target port accessible on every node, regardless of whether there is a task for the service running on that node or not. - "host" bypasses the routing mesh and publish the port directly on the swarm node where that service is running. type: "string" enum: - "ingress" - "host" default: "ingress" example: "ingress" EndpointSpec: description: "Properties that can be configured to access and load balance a service." type: "object" properties: Mode: description: | The mode of resolution to use for internal load balancing between tasks. type: "string" enum: - "vip" - "dnsrr" default: "vip" Ports: description: | List of exposed ports that this service is accessible on from the outside. Ports can only be provided if `vip` resolution mode is used. 
type: "array" items: $ref: "#/definitions/EndpointPortConfig" Service: type: "object" properties: ID: type: "string" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" UpdatedAt: type: "string" format: "dateTime" Spec: $ref: "#/definitions/ServiceSpec" Endpoint: type: "object" properties: Spec: $ref: "#/definitions/EndpointSpec" Ports: type: "array" items: $ref: "#/definitions/EndpointPortConfig" VirtualIPs: type: "array" items: type: "object" properties: NetworkID: type: "string" Addr: type: "string" UpdateStatus: description: "The status of a service update." type: "object" properties: State: type: "string" enum: - "updating" - "paused" - "completed" StartedAt: type: "string" format: "dateTime" CompletedAt: type: "string" format: "dateTime" Message: type: "string" ServiceStatus: description: | The status of the service's tasks. Provided only when requested as part of a ServiceList operation. type: "object" properties: RunningTasks: description: | The number of tasks for the service currently in the Running state. type: "integer" format: "uint64" example: 7 DesiredTasks: description: | The number of tasks for the service desired to be running. For replicated services, this is the replica count from the service spec. For global services, this is computed by taking count of all tasks for the service with a Desired State other than Shutdown. type: "integer" format: "uint64" example: 10 CompletedTasks: description: | The number of tasks for a job that are in the Completed state. This field must be cross-referenced with the service type, as the value of 0 may mean the service is not in a job mode, or it may mean the job-mode service has no tasks yet Completed. type: "integer" format: "uint64" JobStatus: description: | The status of the service when it is in one of ReplicatedJob or GlobalJob modes. Absent on Replicated and Global mode services. The JobIteration is an ObjectVersion, but unlike the Service's version, does not need to be sent with an update request. type: "object" properties: JobIteration: description: | JobIteration is a value increased each time a Job is executed, successfully or otherwise. "Executed", in this case, means the job as a whole has been started, not that an individual Task has been launched. A job is "Executed" when its ServiceSpec is updated. JobIteration can be used to disambiguate Tasks belonging to different executions of a job. Though JobIteration will increase with each subsequent execution, it may not necessarily increase by 1, and so JobIteration should not be used to $ref: "#/definitions/ObjectVersion" LastExecution: description: | The last time, as observed by the server, that this job was started. 
type: "string" format: "dateTime" example: ID: "9mnpnzenvg8p8tdbtq4wvbkcz" Version: Index: 19 CreatedAt: "2016-06-07T21:05:51.880065305Z" UpdatedAt: "2016-06-07T21:07:29.962229872Z" Spec: Name: "hopeful_cori" TaskTemplate: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ForceUpdate: 0 Mode: Replicated: Replicas: 1 UpdateConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 RollbackConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 EndpointSpec: Mode: "vip" Ports: - Protocol: "tcp" TargetPort: 6379 PublishedPort: 30001 Endpoint: Spec: Mode: "vip" Ports: - Protocol: "tcp" TargetPort: 6379 PublishedPort: 30001 Ports: - Protocol: "tcp" TargetPort: 6379 PublishedPort: 30001 VirtualIPs: - NetworkID: "4qvuz4ko70xaltuqbt8956gd1" Addr: "10.255.0.2/16" - NetworkID: "4qvuz4ko70xaltuqbt8956gd1" Addr: "10.255.0.3/16" ImageDeleteResponseItem: type: "object" properties: Untagged: description: "The image ID of an image that was untagged" type: "string" Deleted: description: "The image ID of an image that was deleted" type: "string" ServiceUpdateResponse: type: "object" properties: Warnings: description: "Optional warning messages" type: "array" items: type: "string" example: Warning: "unable to pin image doesnotexist:latest to digest: image library/doesnotexist:latest not found" ContainerSummary: type: "object" properties: Id: description: "The ID of this container" type: "string" x-go-name: "ID" Names: description: "The names that this container has been given" type: "array" items: type: "string" Image: description: "The name of the image used when creating this container" type: "string" ImageID: description: "The ID of the image that this container was created from" type: "string" Command: description: "Command to run when starting the container" type: "string" Created: description: "When the container was created" type: "integer" format: "int64" Ports: description: "The ports exposed by this container" type: "array" items: $ref: "#/definitions/Port" SizeRw: description: "The size of files that have been created or changed by this container" type: "integer" format: "int64" SizeRootFs: description: "The total size of all the files in this container" type: "integer" format: "int64" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" State: description: "The state of this container (e.g. `Exited`)" type: "string" Status: description: "Additional human-readable status of this container (e.g. `Exit 0`)" type: "string" HostConfig: type: "object" properties: NetworkMode: type: "string" NetworkSettings: description: "A summary of the container's network settings" type: "object" properties: Networks: type: "object" additionalProperties: $ref: "#/definitions/EndpointSettings" Mounts: type: "array" items: $ref: "#/definitions/Mount" Driver: description: "Driver represents a driver (network, logging, secrets)." type: "object" required: [Name] properties: Name: description: "Name of the driver." type: "string" x-nullable: false example: "some-driver" Options: description: "Key/value map of driver-specific options." type: "object" x-nullable: false additionalProperties: type: "string" example: OptionA: "value for driver-specific option A" OptionB: "value for driver-specific option B" SecretSpec: type: "object" properties: Name: description: "User-defined name of the secret." 
type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" example: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Data: description: | Base64-url-safe-encoded ([RFC 4648](https://tools.ietf.org/html/rfc4648#section-5)) data to store as secret. This field is only used to _create_ a secret, and is not returned by other endpoints. type: "string" example: "" Driver: description: | Name of the secrets driver used to fetch the secret's value from an external secret store. $ref: "#/definitions/Driver" Templating: description: | Templating driver, if applicable Templating controls whether and how to evaluate the config payload as a template. If no driver is set, no templating is used. $ref: "#/definitions/Driver" Secret: type: "object" properties: ID: type: "string" example: "blt1owaxmitz71s9v5zh81zun" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" example: "2017-07-20T13:55:28.678958722Z" UpdatedAt: type: "string" format: "dateTime" example: "2017-07-20T13:55:28.678958722Z" Spec: $ref: "#/definitions/SecretSpec" ConfigSpec: type: "object" properties: Name: description: "User-defined name of the config." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" Data: description: | Base64-url-safe-encoded ([RFC 4648](https://tools.ietf.org/html/rfc4648#section-5)) config data. type: "string" Templating: description: | Templating driver, if applicable Templating controls whether and how to evaluate the config payload as a template. If no driver is set, no templating is used. $ref: "#/definitions/Driver" Config: type: "object" properties: ID: type: "string" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" UpdatedAt: type: "string" format: "dateTime" Spec: $ref: "#/definitions/ConfigSpec" ContainerState: description: | ContainerState stores container's running state. It's part of ContainerJSONBase and will be returned by the "inspect" command. type: "object" properties: Status: description: | String representation of the container state. Can be one of "created", "running", "paused", "restarting", "removing", "exited", or "dead". type: "string" enum: ["created", "running", "paused", "restarting", "removing", "exited", "dead"] example: "running" Running: description: | Whether this container is running. Note that a running container can be _paused_. The `Running` and `Paused` booleans are not mutually exclusive: When pausing a container (on Linux), the freezer cgroup is used to suspend all processes in the container. Freezing the process requires the process to be running. As a result, paused containers are both `Running` _and_ `Paused`. Use the `Status` field instead to determine if a container's state is "running". type: "boolean" example: true Paused: description: "Whether this container is paused." type: "boolean" example: false Restarting: description: "Whether this container is restarting." type: "boolean" example: false OOMKilled: description: | Whether this container has been killed because it ran out of memory. type: "boolean" example: false Dead: type: "boolean" example: false Pid: description: "The process ID of this container" type: "integer" example: 1234 ExitCode: description: "The last exit code of this container" type: "integer" example: 0 Error: type: "string" StartedAt: description: "The time when this container was last started." 
type: "string" example: "2020-01-06T09:06:59.461876391Z" FinishedAt: description: "The time when this container last exited." type: "string" example: "2020-01-06T09:07:59.461876391Z" Health: x-nullable: true $ref: "#/definitions/Health" SystemVersion: type: "object" description: | Response of Engine API: GET "/version" properties: Platform: type: "object" required: [Name] properties: Name: type: "string" Components: type: "array" description: | Information about system components items: type: "object" x-go-name: ComponentVersion required: [Name, Version] properties: Name: description: | Name of the component type: "string" example: "Engine" Version: description: | Version of the component type: "string" x-nullable: false example: "19.03.12" Details: description: | Key/value pairs of strings with additional information about the component. These values are intended for informational purposes only, and their content is not defined, and not part of the API specification. These messages can be printed by the client as information to the user. type: "object" x-nullable: true Version: description: "The version of the daemon" type: "string" example: "19.03.12" ApiVersion: description: | The default (and highest) API version that is supported by the daemon type: "string" example: "1.40" MinAPIVersion: description: | The minimum API version that is supported by the daemon type: "string" example: "1.12" GitCommit: description: | The Git commit of the source code that was used to build the daemon type: "string" example: "48a66213fe" GoVersion: description: | The version Go used to compile the daemon, and the version of the Go runtime in use. type: "string" example: "go1.13.14" Os: description: | The operating system that the daemon is running on ("linux" or "windows") type: "string" example: "linux" Arch: description: | The architecture that the daemon is running on type: "string" example: "amd64" KernelVersion: description: | The kernel version (`uname -r`) that the daemon is running on. This field is omitted when empty. type: "string" example: "4.19.76-linuxkit" Experimental: description: | Indicates if the daemon is started with experimental features enabled. This field is omitted when empty / false. type: "boolean" example: true BuildTime: description: | The date and time that the daemon was compiled. type: "string" example: "2020-06-22T15:49:27.000000000+00:00" SystemInfo: type: "object" properties: ID: description: | Unique identifier of the daemon. <p><br /></p> > **Note**: The format of the ID itself is not part of the API, and > should not be considered stable. type: "string" example: "7TRN:IPZB:QYBB:VPBQ:UMPP:KARE:6ZNR:XE6T:7EWV:PKF4:ZOJD:TPYS" Containers: description: "Total number of containers on the host." type: "integer" example: 14 ContainersRunning: description: | Number of containers with status `"running"`. type: "integer" example: 3 ContainersPaused: description: | Number of containers with status `"paused"`. type: "integer" example: 1 ContainersStopped: description: | Number of containers with status `"stopped"`. type: "integer" example: 10 Images: description: | Total number of images on the host. Both _tagged_ and _untagged_ (dangling) images are counted. type: "integer" example: 508 Driver: description: "Name of the storage driver in use." type: "string" example: "overlay2" DriverStatus: description: | Information specific to the storage driver, provided as "label" / "value" pairs. 
This information is provided by the storage driver, and formatted in a way consistent with the output of `docker info` on the command line. <p><br /></p> > **Note**: The information returned in this field, including the > formatting of values and labels, should not be considered stable, > and may change without notice. type: "array" items: type: "array" items: type: "string" example: - ["Backing Filesystem", "extfs"] - ["Supports d_type", "true"] - ["Native Overlay Diff", "true"] DockerRootDir: description: | Root directory of persistent Docker state. Defaults to `/var/lib/docker` on Linux, and `C:\ProgramData\docker` on Windows. type: "string" example: "/var/lib/docker" Plugins: $ref: "#/definitions/PluginsInfo" MemoryLimit: description: "Indicates if the host has memory limit support enabled." type: "boolean" example: true SwapLimit: description: "Indicates if the host has memory swap limit support enabled." type: "boolean" example: true KernelMemory: description: | Indicates if the host has kernel memory limit support enabled. <p><br /></p> > **Deprecated**: This field is deprecated as the kernel 5.4 deprecated > `kmem.limit_in_bytes`. type: "boolean" example: true CpuCfsPeriod: description: | Indicates if CPU CFS(Completely Fair Scheduler) period is supported by the host. type: "boolean" example: true CpuCfsQuota: description: | Indicates if CPU CFS(Completely Fair Scheduler) quota is supported by the host. type: "boolean" example: true CPUShares: description: | Indicates if CPU Shares limiting is supported by the host. type: "boolean" example: true CPUSet: description: | Indicates if CPUsets (cpuset.cpus, cpuset.mems) are supported by the host. See [cpuset(7)](https://www.kernel.org/doc/Documentation/cgroup-v1/cpusets.txt) type: "boolean" example: true PidsLimit: description: "Indicates if the host kernel has PID limit support enabled." type: "boolean" example: true OomKillDisable: description: "Indicates if OOM killer disable is supported on the host." type: "boolean" IPv4Forwarding: description: "Indicates IPv4 forwarding is enabled." type: "boolean" example: true BridgeNfIptables: description: "Indicates if `bridge-nf-call-iptables` is available on the host." type: "boolean" example: true BridgeNfIp6tables: description: "Indicates if `bridge-nf-call-ip6tables` is available on the host." type: "boolean" example: true Debug: description: | Indicates if the daemon is running in debug-mode / with debug-level logging enabled. type: "boolean" example: true NFd: description: | The total number of file Descriptors in use by the daemon process. This information is only returned if debug-mode is enabled. type: "integer" example: 64 NGoroutines: description: | The number of goroutines that currently exist. This information is only returned if debug-mode is enabled. type: "integer" example: 174 SystemTime: description: | Current system-time in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" example: "2017-08-08T20:28:29.06202363Z" LoggingDriver: description: | The logging driver to use as a default for new containers. type: "string" CgroupDriver: description: | The driver to use for managing cgroups. type: "string" enum: ["cgroupfs", "systemd", "none"] default: "cgroupfs" example: "cgroupfs" CgroupVersion: description: | The version of the cgroup. type: "string" enum: ["1", "2"] default: "1" example: "1" NEventsListener: description: "Number of event listeners subscribed." 
type: "integer" example: 30 KernelVersion: description: | Kernel version of the host. On Linux, this information obtained from `uname`. On Windows this information is queried from the <kbd>HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\</kbd> registry value, for example _"10.0 14393 (14393.1198.amd64fre.rs1_release_sec.170427-1353)"_. type: "string" example: "4.9.38-moby" OperatingSystem: description: | Name of the host's operating system, for example: "Ubuntu 16.04.2 LTS" or "Windows Server 2016 Datacenter" type: "string" example: "Alpine Linux v3.5" OSVersion: description: | Version of the host's operating system <p><br /></p> > **Note**: The information returned in this field, including its > very existence, and the formatting of values, should not be considered > stable, and may change without notice. type: "string" example: "16.04" OSType: description: | Generic type of the operating system of the host, as returned by the Go runtime (`GOOS`). Currently returned values are "linux" and "windows". A full list of possible values can be found in the [Go documentation](https://golang.org/doc/install/source#environment). type: "string" example: "linux" Architecture: description: | Hardware architecture of the host, as returned by the Go runtime (`GOARCH`). A full list of possible values can be found in the [Go documentation](https://golang.org/doc/install/source#environment). type: "string" example: "x86_64" NCPU: description: | The number of logical CPUs usable by the daemon. The number of available CPUs is checked by querying the operating system when the daemon starts. Changes to operating system CPU allocation after the daemon is started are not reflected. type: "integer" example: 4 MemTotal: description: | Total amount of physical memory available on the host, in bytes. type: "integer" format: "int64" example: 2095882240 IndexServerAddress: description: | Address / URL of the index server that is used for image search, and as a default for user authentication for Docker Hub and Docker Cloud. default: "https://index.docker.io/v1/" type: "string" example: "https://index.docker.io/v1/" RegistryConfig: $ref: "#/definitions/RegistryServiceConfig" GenericResources: $ref: "#/definitions/GenericResources" HttpProxy: description: | HTTP-proxy configured for the daemon. This value is obtained from the [`HTTP_PROXY`](https://www.gnu.org/software/wget/manual/html_node/Proxies.html) environment variable. Credentials ([user info component](https://tools.ietf.org/html/rfc3986#section-3.2.1)) in the proxy URL are masked in the API response. Containers do not automatically inherit this configuration. type: "string" example: "http://xxxxx:[email protected]:8080" HttpsProxy: description: | HTTPS-proxy configured for the daemon. This value is obtained from the [`HTTPS_PROXY`](https://www.gnu.org/software/wget/manual/html_node/Proxies.html) environment variable. Credentials ([user info component](https://tools.ietf.org/html/rfc3986#section-3.2.1)) in the proxy URL are masked in the API response. Containers do not automatically inherit this configuration. type: "string" example: "https://xxxxx:[email protected]:4443" NoProxy: description: | Comma-separated list of domain extensions for which no proxy should be used. This value is obtained from the [`NO_PROXY`](https://www.gnu.org/software/wget/manual/html_node/Proxies.html) environment variable. Containers do not automatically inherit this configuration. 
type: "string" example: "*.local, 169.254/16" Name: description: "Hostname of the host." type: "string" example: "node5.corp.example.com" Labels: description: | User-defined labels (key/value metadata) as set on the daemon. <p><br /></p> > **Note**: When part of a Swarm, nodes can both have _daemon_ labels, > set through the daemon configuration, and _node_ labels, set from a > manager node in the Swarm. Node labels are not included in this > field. Node labels can be retrieved using the `/nodes/(id)` endpoint > on a manager node in the Swarm. type: "array" items: type: "string" example: ["storage=ssd", "production"] ExperimentalBuild: description: | Indicates if experimental features are enabled on the daemon. type: "boolean" example: true ServerVersion: description: | Version string of the daemon. > **Note**: the [standalone Swarm API](https://docs.docker.com/swarm/swarm-api/) > returns the Swarm version instead of the daemon version, for example > `swarm/1.2.8`. type: "string" example: "17.06.0-ce" ClusterStore: description: | URL of the distributed storage backend. The storage backend is used for multihost networking (to store network and endpoint information) and by the node discovery mechanism. <p><br /></p> > **Deprecated**: This field is only propagated when using standalone Swarm > mode, and overlay networking using an external k/v store. Overlay > networks with Swarm mode enabled use the built-in raft store, and > this field will be empty. type: "string" example: "consul://consul.corp.example.com:8600/some/path" ClusterAdvertise: description: | The network endpoint that the Engine advertises for the purpose of node discovery. ClusterAdvertise is a `host:port` combination on which the daemon is reachable by other hosts. <p><br /></p> > **Deprecated**: This field is only propagated when using standalone Swarm > mode, and overlay networking using an external k/v store. Overlay > networks with Swarm mode enabled use the built-in raft store, and > this field will be empty. type: "string" example: "node5.corp.example.com:8000" Runtimes: description: | List of [OCI compliant](https://github.com/opencontainers/runtime-spec) runtimes configured on the daemon. Keys hold the "name" used to reference the runtime. The Docker daemon relies on an OCI compliant runtime (invoked via the `containerd` daemon) as its interface to the Linux kernel namespaces, cgroups, and SELinux. The default runtime is `runc`, and automatically configured. Additional runtimes can be configured by the user and will be listed here. type: "object" additionalProperties: $ref: "#/definitions/Runtime" default: runc: path: "runc" example: runc: path: "runc" runc-master: path: "/go/bin/runc" custom: path: "/usr/local/bin/my-oci-runtime" runtimeArgs: ["--debug", "--systemd-cgroup=false"] DefaultRuntime: description: | Name of the default OCI runtime that is used when starting containers. The default can be overridden per-container at create time. type: "string" default: "runc" example: "runc" Swarm: $ref: "#/definitions/SwarmInfo" LiveRestoreEnabled: description: | Indicates if live restore is enabled. If enabled, containers are kept running when the daemon is shutdown or upon daemon start if running containers are detected. type: "boolean" default: false example: false Isolation: description: | Represents the isolation technology to use as a default for containers. The supported values are platform-specific. 
If no isolation value is specified on daemon start, on Windows client, the default is `hyperv`, and on Windows server, the default is `process`. This option is currently not used on other platforms. default: "default" type: "string" enum: - "default" - "hyperv" - "process" InitBinary: description: | Name and, optional, path of the `docker-init` binary. If the path is omitted, the daemon searches the host's `$PATH` for the binary and uses the first result. type: "string" example: "docker-init" ContainerdCommit: $ref: "#/definitions/Commit" RuncCommit: $ref: "#/definitions/Commit" InitCommit: $ref: "#/definitions/Commit" SecurityOptions: description: | List of security features that are enabled on the daemon, such as apparmor, seccomp, SELinux, user-namespaces (userns), and rootless. Additional configuration options for each security feature may be present, and are included as a comma-separated list of key/value pairs. type: "array" items: type: "string" example: - "name=apparmor" - "name=seccomp,profile=default" - "name=selinux" - "name=userns" - "name=rootless" ProductLicense: description: | Reports a summary of the product license on the daemon. If a commercial license has been applied to the daemon, information such as number of nodes, and expiration are included. type: "string" example: "Community Engine" DefaultAddressPools: description: | List of custom default address pools for local networks, which can be specified in the daemon.json file or dockerd option. Example: a Base "10.10.0.0/16" with Size 24 will define the set of 256 10.10.[0-255].0/24 address pools. type: "array" items: type: "object" properties: Base: description: "The network address in CIDR format" type: "string" example: "10.10.0.0/16" Size: description: "The network pool size" type: "integer" example: "24" Warnings: description: | List of warnings / informational messages about missing features, or issues related to the daemon configuration. These messages can be printed by the client as information to the user. type: "array" items: type: "string" example: - "WARNING: No memory limit support" - "WARNING: bridge-nf-call-iptables is disabled" - "WARNING: bridge-nf-call-ip6tables is disabled" # PluginsInfo is a temp struct holding Plugins name # registered with docker daemon. It is used by Info struct PluginsInfo: description: | Available plugins per type. <p><br /></p> > **Note**: Only unmanaged (V1) plugins are included in this list. > V1 plugins are "lazily" loaded, and are not returned in this list > if there is no resource using the plugin. type: "object" properties: Volume: description: "Names of available volume-drivers, and network-driver plugins." type: "array" items: type: "string" example: ["local"] Network: description: "Names of available network-drivers, and network-driver plugins." type: "array" items: type: "string" example: ["bridge", "host", "ipvlan", "macvlan", "null", "overlay"] Authorization: description: "Names of available authorization plugins." type: "array" items: type: "string" example: ["img-authz-plugin", "hbm"] Log: description: "Names of available logging-drivers, and logging-driver plugins." type: "array" items: type: "string" example: ["awslogs", "fluentd", "gcplogs", "gelf", "journald", "json-file", "logentries", "splunk", "syslog"] RegistryServiceConfig: description: | RegistryServiceConfig stores daemon registry services configuration. 
type: "object" x-nullable: true properties: AllowNondistributableArtifactsCIDRs: description: | List of IP ranges to which nondistributable artifacts can be pushed, using the CIDR syntax [RFC 4632](https://tools.ietf.org/html/4632). Some images (for example, Windows base images) contain artifacts whose distribution is restricted by license. When these images are pushed to a registry, restricted artifacts are not included. This configuration override this behavior, and enables the daemon to push nondistributable artifacts to all registries whose resolved IP address is within the subnet described by the CIDR syntax. This option is useful when pushing images containing nondistributable artifacts to a registry on an air-gapped network so hosts on that network can pull the images without connecting to another server. > **Warning**: Nondistributable artifacts typically have restrictions > on how and where they can be distributed and shared. Only use this > feature to push artifacts to private registries and ensure that you > are in compliance with any terms that cover redistributing > nondistributable artifacts. type: "array" items: type: "string" example: ["::1/128", "127.0.0.0/8"] AllowNondistributableArtifactsHostnames: description: | List of registry hostnames to which nondistributable artifacts can be pushed, using the format `<hostname>[:<port>]` or `<IP address>[:<port>]`. Some images (for example, Windows base images) contain artifacts whose distribution is restricted by license. When these images are pushed to a registry, restricted artifacts are not included. This configuration override this behavior for the specified registries. This option is useful when pushing images containing nondistributable artifacts to a registry on an air-gapped network so hosts on that network can pull the images without connecting to another server. > **Warning**: Nondistributable artifacts typically have restrictions > on how and where they can be distributed and shared. Only use this > feature to push artifacts to private registries and ensure that you > are in compliance with any terms that cover redistributing > nondistributable artifacts. type: "array" items: type: "string" example: ["registry.internal.corp.example.com:3000", "[2001:db8:a0b:12f0::1]:443"] InsecureRegistryCIDRs: description: | List of IP ranges of insecure registries, using the CIDR syntax ([RFC 4632](https://tools.ietf.org/html/4632)). Insecure registries accept un-encrypted (HTTP) and/or untrusted (HTTPS with certificates from unknown CAs) communication. By default, local registries (`127.0.0.0/8`) are configured as insecure. All other registries are secure. Communicating with an insecure registry is not possible if the daemon assumes that registry is secure. This configuration override this behavior, insecure communication with registries whose resolved IP address is within the subnet described by the CIDR syntax. Registries can also be marked insecure by hostname. Those registries are listed under `IndexConfigs` and have their `Secure` field set to `false`. > **Warning**: Using this option can be useful when running a local > registry, but introduces security vulnerabilities. This option > should therefore ONLY be used for testing purposes. For increased > security, users should add their CA to their system's list of trusted > CAs instead of enabling this option. 
type: "array" items: type: "string" example: ["::1/128", "127.0.0.0/8"] IndexConfigs: type: "object" additionalProperties: $ref: "#/definitions/IndexInfo" example: "127.0.0.1:5000": "Name": "127.0.0.1:5000" "Mirrors": [] "Secure": false "Official": false "[2001:db8:a0b:12f0::1]:80": "Name": "[2001:db8:a0b:12f0::1]:80" "Mirrors": [] "Secure": false "Official": false "docker.io": Name: "docker.io" Mirrors: ["https://hub-mirror.corp.example.com:5000/"] Secure: true Official: true "registry.internal.corp.example.com:3000": Name: "registry.internal.corp.example.com:3000" Mirrors: [] Secure: false Official: false Mirrors: description: | List of registry URLs that act as a mirror for the official (`docker.io`) registry. type: "array" items: type: "string" example: - "https://hub-mirror.corp.example.com:5000/" - "https://[2001:db8:a0b:12f0::1]/" IndexInfo: description: IndexInfo contains information about a registry. type: "object" x-nullable: true properties: Name: description: | Name of the registry, such as "docker.io". type: "string" example: "docker.io" Mirrors: description: | List of mirrors, expressed as URIs. type: "array" items: type: "string" example: - "https://hub-mirror.corp.example.com:5000/" - "https://registry-2.docker.io/" - "https://registry-3.docker.io/" Secure: description: | Indicates if the registry is part of the list of insecure registries. If `false`, the registry is insecure. Insecure registries accept un-encrypted (HTTP) and/or untrusted (HTTPS with certificates from unknown CAs) communication. > **Warning**: Insecure registries can be useful when running a local > registry. However, because its use creates security vulnerabilities > it should ONLY be enabled for testing purposes. For increased > security, users should add their CA to their system's list of > trusted CAs instead of enabling this option. type: "boolean" example: true Official: description: | Indicates whether this is an official registry (i.e., Docker Hub / docker.io) type: "boolean" example: true Runtime: description: | Runtime describes an [OCI compliant](https://github.com/opencontainers/runtime-spec) runtime. The runtime is invoked by the daemon via the `containerd` daemon. OCI runtimes act as an interface to the Linux kernel namespaces, cgroups, and SELinux. type: "object" properties: path: description: | Name and, optional, path, of the OCI executable binary. If the path is omitted, the daemon searches the host's `$PATH` for the binary and uses the first result. type: "string" example: "/usr/local/bin/my-oci-runtime" runtimeArgs: description: | List of command-line arguments to pass to the runtime when invoked. type: "array" x-nullable: true items: type: "string" example: ["--debug", "--systemd-cgroup=false"] Commit: description: | Commit holds the Git-commit (SHA1) that a binary was built from, as reported in the version-string of external tools, such as `containerd`, or `runC`. type: "object" properties: ID: description: "Actual commit ID of external tool." type: "string" example: "cfb82a876ecc11b5ca0977d1733adbe58599088a" Expected: description: | Commit ID of external tool expected by dockerd as set at build time. type: "string" example: "2d41c047c83e09a6d61d464906feb2a2f3c52aa4" SwarmInfo: description: | Represents generic information about swarm. type: "object" properties: NodeID: description: "Unique identifier of for this node in the swarm." 
type: "string" default: "" example: "k67qz4598weg5unwwffg6z1m1" NodeAddr: description: | IP address at which this node can be reached by other nodes in the swarm. type: "string" default: "" example: "10.0.0.46" LocalNodeState: $ref: "#/definitions/LocalNodeState" ControlAvailable: type: "boolean" default: false example: true Error: type: "string" default: "" RemoteManagers: description: | List of ID's and addresses of other managers in the swarm. type: "array" default: null x-nullable: true items: $ref: "#/definitions/PeerNode" example: - NodeID: "71izy0goik036k48jg985xnds" Addr: "10.0.0.158:2377" - NodeID: "79y6h1o4gv8n120drcprv5nmc" Addr: "10.0.0.159:2377" - NodeID: "k67qz4598weg5unwwffg6z1m1" Addr: "10.0.0.46:2377" Nodes: description: "Total number of nodes in the swarm." type: "integer" x-nullable: true example: 4 Managers: description: "Total number of managers in the swarm." type: "integer" x-nullable: true example: 3 Cluster: $ref: "#/definitions/ClusterInfo" LocalNodeState: description: "Current local status of this node." type: "string" default: "" enum: - "" - "inactive" - "pending" - "active" - "error" - "locked" example: "active" PeerNode: description: "Represents a peer-node in the swarm" properties: NodeID: description: "Unique identifier of for this node in the swarm." type: "string" Addr: description: | IP address and ports at which this node can be reached. type: "string" NetworkAttachmentConfig: description: | Specifies how a service should be attached to a particular network. type: "object" properties: Target: description: | The target network for attachment. Must be a network name or ID. type: "string" Aliases: description: | Discoverable alternate names for the service on this network. type: "array" items: type: "string" DriverOpts: description: | Driver attachment options for the network target. type: "object" additionalProperties: type: "string" paths: /containers/json: get: summary: "List containers" description: | Returns a list of containers. For details on the format, see the [inspect endpoint](#operation/ContainerInspect). Note that it uses a different, smaller representation of a container than inspecting a single container. For example, the list of linked containers is not propagated . operationId: "ContainerList" produces: - "application/json" parameters: - name: "all" in: "query" description: | Return all containers. By default, only running containers are shown. type: "boolean" default: false - name: "limit" in: "query" description: | Return this number of most recently created containers, including non-running ones. type: "integer" - name: "size" in: "query" description: | Return the size of container as fields `SizeRw` and `SizeRootFs`. type: "boolean" default: false - name: "filters" in: "query" description: | Filters to process on the container list, encoded as JSON (a `map[string][]string`). For example, `{"status": ["paused"]}` will only return paused containers. 
Available filters: - `ancestor`=(`<image-name>[:<tag>]`, `<image id>`, or `<image@digest>`) - `before`=(`<container id>` or `<container name>`) - `expose`=(`<port>[/<proto>]`|`<startport-endport>/[<proto>]`) - `exited=<int>` containers with exit code of `<int>` - `health`=(`starting`|`healthy`|`unhealthy`|`none`) - `id=<ID>` a container's ID - `isolation=`(`default`|`process`|`hyperv`) (Windows daemon only) - `is-task=`(`true`|`false`) - `label=key` or `label="key=value"` of a container label - `name=<name>` a container's name - `network`=(`<network id>` or `<network name>`) - `publish`=(`<port>[/<proto>]`|`<startport-endport>/[<proto>]`) - `since`=(`<container id>` or `<container name>`) - `status=`(`created`|`restarting`|`running`|`removing`|`paused`|`exited`|`dead`) - `volume`=(`<volume name>` or `<mount point destination>`) type: "string" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/ContainerSummary" examples: application/json: - Id: "8dfafdbc3a40" Names: - "/boring_feynman" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 1" Created: 1367854155 State: "Exited" Status: "Exit 0" Ports: - PrivatePort: 2222 PublicPort: 3333 Type: "tcp" Labels: com.example.vendor: "Acme" com.example.license: "GPL" com.example.version: "1.0" SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "2cdc4edb1ded3631c81f57966563e5c8525b81121bb3706a9a9a3ae102711f3f" Gateway: "172.17.0.1" IPAddress: "172.17.0.2" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:02" Mounts: - Name: "fac362...80535" Source: "/data" Destination: "/data" Driver: "local" Mode: "ro,Z" RW: false Propagation: "" - Id: "9cd87474be90" Names: - "/coolName" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 222222" Created: 1367854155 State: "Exited" Status: "Exit 0" Ports: [] Labels: {} SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "88eaed7b37b38c2a3f0c4bc796494fdf51b270c2d22656412a2ca5d559a64d7a" Gateway: "172.17.0.1" IPAddress: "172.17.0.8" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:08" Mounts: [] - Id: "3176a2479c92" Names: - "/sleepy_dog" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 3333333333333333" Created: 1367854154 State: "Exited" Status: "Exit 0" Ports: [] Labels: {} SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "8b27c041c30326d59cd6e6f510d4f8d1d570a228466f956edf7815508f78e30d" Gateway: "172.17.0.1" IPAddress: "172.17.0.6" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:06" Mounts: [] - Id: "4cb07b47f9fb" Names: - "/running_cat" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 444444444444444444444444444444444" Created: 1367854152 State: "Exited" Status: "Exit 0" Ports: [] Labels: {} SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" 
NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "d91c7b2f0644403d7ef3095985ea0e2370325cd2332ff3a3225c4247328e66e9" Gateway: "172.17.0.1" IPAddress: "172.17.0.5" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:05" Mounts: [] 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Container"] /containers/create: post: summary: "Create a container" operationId: "ContainerCreate" consumes: - "application/json" - "application/octet-stream" produces: - "application/json" parameters: - name: "name" in: "query" description: | Assign the specified name to the container. Must match `/?[a-zA-Z0-9][a-zA-Z0-9_.-]+`. type: "string" pattern: "^/?[a-zA-Z0-9][a-zA-Z0-9_.-]+$" - name: "body" in: "body" description: "Container to create" schema: allOf: - $ref: "#/definitions/ContainerConfig" - type: "object" properties: HostConfig: $ref: "#/definitions/HostConfig" NetworkingConfig: $ref: "#/definitions/NetworkingConfig" example: Hostname: "" Domainname: "" User: "" AttachStdin: false AttachStdout: true AttachStderr: true Tty: false OpenStdin: false StdinOnce: false Env: - "FOO=bar" - "BAZ=quux" Cmd: - "date" Entrypoint: "" Image: "ubuntu" Labels: com.example.vendor: "Acme" com.example.license: "GPL" com.example.version: "1.0" Volumes: /volumes/data: {} WorkingDir: "" NetworkDisabled: false MacAddress: "12:34:56:78:9a:bc" ExposedPorts: 22/tcp: {} StopSignal: "SIGTERM" StopTimeout: 10 HostConfig: Binds: - "/tmp:/tmp" Links: - "redis3:redis" Memory: 0 MemorySwap: 0 MemoryReservation: 0 KernelMemory: 0 NanoCpus: 500000 CpuPercent: 80 CpuShares: 512 CpuPeriod: 100000 CpuRealtimePeriod: 1000000 CpuRealtimeRuntime: 10000 CpuQuota: 50000 CpusetCpus: "0,1" CpusetMems: "0,1" MaximumIOps: 0 MaximumIOBps: 0 BlkioWeight: 300 BlkioWeightDevice: - {} BlkioDeviceReadBps: - {} BlkioDeviceReadIOps: - {} BlkioDeviceWriteBps: - {} BlkioDeviceWriteIOps: - {} DeviceRequests: - Driver: "nvidia" Count: -1 DeviceIDs": ["0", "1", "GPU-fef8089b-4820-abfc-e83e-94318197576e"] Capabilities: [["gpu", "nvidia", "compute"]] Options: property1: "string" property2: "string" MemorySwappiness: 60 OomKillDisable: false OomScoreAdj: 500 PidMode: "" PidsLimit: 0 PortBindings: 22/tcp: - HostPort: "11022" PublishAllPorts: false Privileged: false ReadonlyRootfs: false Dns: - "8.8.8.8" DnsOptions: - "" DnsSearch: - "" VolumesFrom: - "parent" - "other:ro" CapAdd: - "NET_ADMIN" CapDrop: - "MKNOD" GroupAdd: - "newgroup" RestartPolicy: Name: "" MaximumRetryCount: 0 AutoRemove: true NetworkMode: "bridge" Devices: [] Ulimits: - {} LogConfig: Type: "json-file" Config: {} SecurityOpt: [] StorageOpt: {} CgroupParent: "" VolumeDriver: "" ShmSize: 67108864 NetworkingConfig: EndpointsConfig: isolated_nw: IPAMConfig: IPv4Address: "172.20.30.33" IPv6Address: "2001:db8:abcd::3033" LinkLocalIPs: - "169.254.34.68" - "fe80::3468" Links: - "container_1" - "container_2" Aliases: - "server_x" - "server_y" required: true responses: 201: description: "Container created successfully" schema: type: "object" title: "ContainerCreateResponse" description: "OK response to ContainerCreate operation" required: [Id, Warnings] properties: Id: description: "The ID of the created container" type: "string" x-nullable: false Warnings: description: "Warnings encountered when creating the container" type: "array" x-nullable: false items: type: "string" 
examples: application/json: Id: "e90e34656806" Warnings: [] 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such image" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such image: c2ada9df5af8" 409: description: "conflict" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Container"] /containers/{id}/json: get: summary: "Inspect a container" description: "Return low-level information about a container." operationId: "ContainerInspect" produces: - "application/json" responses: 200: description: "no error" schema: type: "object" title: "ContainerInspectResponse" properties: Id: description: "The ID of the container" type: "string" Created: description: "The time the container was created" type: "string" Path: description: "The path to the command being run" type: "string" Args: description: "The arguments to the command being run" type: "array" items: type: "string" State: x-nullable: true $ref: "#/definitions/ContainerState" Image: description: "The container's image ID" type: "string" ResolvConfPath: type: "string" HostnamePath: type: "string" HostsPath: type: "string" LogPath: type: "string" Name: type: "string" RestartCount: type: "integer" Driver: type: "string" Platform: type: "string" MountLabel: type: "string" ProcessLabel: type: "string" AppArmorProfile: type: "string" ExecIDs: description: "IDs of exec instances that are running in the container." type: "array" items: type: "string" x-nullable: true HostConfig: $ref: "#/definitions/HostConfig" GraphDriver: $ref: "#/definitions/GraphDriverData" SizeRw: description: | The size of files that have been created or changed by this container. type: "integer" format: "int64" SizeRootFs: description: "The total size of all the files in this container." 
type: "integer" format: "int64" Mounts: type: "array" items: $ref: "#/definitions/MountPoint" Config: $ref: "#/definitions/ContainerConfig" NetworkSettings: $ref: "#/definitions/NetworkSettings" examples: application/json: AppArmorProfile: "" Args: - "-c" - "exit 9" Config: AttachStderr: true AttachStdin: false AttachStdout: true Cmd: - "/bin/sh" - "-c" - "exit 9" Domainname: "" Env: - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" Healthcheck: Test: ["CMD-SHELL", "exit 0"] Hostname: "ba033ac44011" Image: "ubuntu" Labels: com.example.vendor: "Acme" com.example.license: "GPL" com.example.version: "1.0" MacAddress: "" NetworkDisabled: false OpenStdin: false StdinOnce: false Tty: false User: "" Volumes: /volumes/data: {} WorkingDir: "" StopSignal: "SIGTERM" StopTimeout: 10 Created: "2015-01-06T15:47:31.485331387Z" Driver: "devicemapper" ExecIDs: - "b35395de42bc8abd327f9dd65d913b9ba28c74d2f0734eeeae84fa1c616a0fca" - "3fc1232e5cd20c8de182ed81178503dc6437f4e7ef12b52cc5e8de020652f1c4" HostConfig: MaximumIOps: 0 MaximumIOBps: 0 BlkioWeight: 0 BlkioWeightDevice: - {} BlkioDeviceReadBps: - {} BlkioDeviceWriteBps: - {} BlkioDeviceReadIOps: - {} BlkioDeviceWriteIOps: - {} ContainerIDFile: "" CpusetCpus: "" CpusetMems: "" CpuPercent: 80 CpuShares: 0 CpuPeriod: 100000 CpuRealtimePeriod: 1000000 CpuRealtimeRuntime: 10000 Devices: [] DeviceRequests: - Driver: "nvidia" Count: -1 DeviceIDs": ["0", "1", "GPU-fef8089b-4820-abfc-e83e-94318197576e"] Capabilities: [["gpu", "nvidia", "compute"]] Options: property1: "string" property2: "string" IpcMode: "" LxcConf: [] Memory: 0 MemorySwap: 0 MemoryReservation: 0 KernelMemory: 0 OomKillDisable: false OomScoreAdj: 500 NetworkMode: "bridge" PidMode: "" PortBindings: {} Privileged: false ReadonlyRootfs: false PublishAllPorts: false RestartPolicy: MaximumRetryCount: 2 Name: "on-failure" LogConfig: Type: "json-file" Sysctls: net.ipv4.ip_forward: "1" Ulimits: - {} VolumeDriver: "" ShmSize: 67108864 HostnamePath: "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/hostname" HostsPath: "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/hosts" LogPath: "/var/lib/docker/containers/1eb5fabf5a03807136561b3c00adcd2992b535d624d5e18b6cdc6a6844d9767b/1eb5fabf5a03807136561b3c00adcd2992b535d624d5e18b6cdc6a6844d9767b-json.log" Id: "ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39" Image: "04c5d3b7b0656168630d3ba35d8889bd0e9caafcaeb3004d2bfbc47e7c5d35d2" MountLabel: "" Name: "/boring_euclid" NetworkSettings: Bridge: "" SandboxID: "" HairpinMode: false LinkLocalIPv6Address: "" LinkLocalIPv6PrefixLen: 0 SandboxKey: "" EndpointID: "" Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 IPAddress: "" IPPrefixLen: 0 IPv6Gateway: "" MacAddress: "" Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "7587b82f0dada3656fda26588aee72630c6fab1536d36e394b2bfbcf898c971d" Gateway: "172.17.0.1" IPAddress: "172.17.0.2" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:12:00:02" Path: "/bin/sh" ProcessLabel: "" ResolvConfPath: "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/resolv.conf" RestartCount: 1 State: Error: "" ExitCode: 9 FinishedAt: "2015-01-06T15:47:32.080254511Z" Health: Status: "healthy" FailingStreak: 0 Log: - Start: "2019-12-22T10:59:05.6385933Z" End: "2019-12-22T10:59:05.8078452Z" ExitCode: 0 Output: "" OOMKilled: 
false Dead: false Paused: false Pid: 0 Restarting: false Running: true StartedAt: "2015-01-06T15:47:32.072697474Z" Status: "running" Mounts: - Name: "fac362...80535" Source: "/data" Destination: "/data" Driver: "local" Mode: "ro,Z" RW: false Propagation: "" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "size" in: "query" type: "boolean" default: false description: "Return the size of container as fields `SizeRw` and `SizeRootFs`" tags: ["Container"] /containers/{id}/top: get: summary: "List processes running inside a container" description: | On Unix systems, this is done by running the `ps` command. This endpoint is not supported on Windows. operationId: "ContainerTop" responses: 200: description: "no error" schema: type: "object" title: "ContainerTopResponse" description: "OK response to ContainerTop operation" properties: Titles: description: "The ps column titles" type: "array" items: type: "string" Processes: description: | Each process running in the container, where each is process is an array of values corresponding to the titles. type: "array" items: type: "array" items: type: "string" examples: application/json: Titles: - "UID" - "PID" - "PPID" - "C" - "STIME" - "TTY" - "TIME" - "CMD" Processes: - - "root" - "13642" - "882" - "0" - "17:03" - "pts/0" - "00:00:00" - "/bin/bash" - - "root" - "13735" - "13642" - "0" - "17:06" - "pts/0" - "00:00:00" - "sleep 10" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "ps_args" in: "query" description: "The arguments to pass to `ps`. For example, `aux`" type: "string" default: "-ef" tags: ["Container"] /containers/{id}/logs: get: summary: "Get container logs" description: | Get `stdout` and `stderr` logs from a container. Note: This endpoint works only for containers with the `json-file` or `journald` logging driver. operationId: "ContainerLogs" responses: 200: description: | logs returned as a stream in response body. For the stream format, [see the documentation for the attach endpoint](#operation/ContainerAttach). Note that unlike the attach endpoint, the logs endpoint does not upgrade the connection and does not set Content-Type. schema: type: "string" format: "binary" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "follow" in: "query" description: "Keep connection after returning logs." 
type: "boolean" default: false - name: "stdout" in: "query" description: "Return logs from `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Return logs from `stderr`" type: "boolean" default: false - name: "since" in: "query" description: "Only return logs since this time, as a UNIX timestamp" type: "integer" default: 0 - name: "until" in: "query" description: "Only return logs before this time, as a UNIX timestamp" type: "integer" default: 0 - name: "timestamps" in: "query" description: "Add timestamps to every log line" type: "boolean" default: false - name: "tail" in: "query" description: | Only return this number of log lines from the end of the logs. Specify as an integer or `all` to output all log lines. type: "string" default: "all" tags: ["Container"] /containers/{id}/changes: get: summary: "Get changes on a container’s filesystem" description: | Returns which files in a container's filesystem have been added, deleted, or modified. The `Kind` of modification can be one of: - `0`: Modified - `1`: Added - `2`: Deleted operationId: "ContainerChanges" produces: ["application/json"] responses: 200: description: "The list of changes" schema: type: "array" items: type: "object" x-go-name: "ContainerChangeResponseItem" title: "ContainerChangeResponseItem" description: "change item in response to ContainerChanges operation" required: [Path, Kind] properties: Path: description: "Path to file that has changed" type: "string" x-nullable: false Kind: description: "Kind of change" type: "integer" format: "uint8" enum: [0, 1, 2] x-nullable: false examples: application/json: - Path: "/dev" Kind: 0 - Path: "/dev/kmsg" Kind: 1 - Path: "/test" Kind: 1 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/export: get: summary: "Export a container" description: "Export the contents of a container as a tarball." operationId: "ContainerExport" produces: - "application/octet-stream" responses: 200: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/stats: get: summary: "Get container stats based on resource usage" description: | This endpoint returns a live stream of a container’s resource usage statistics. The `precpu_stats` is the CPU statistic of the *previous* read, and is used to calculate the CPU usage percentage. It is not an exact copy of the `cpu_stats` field. If either `precpu_stats.online_cpus` or `cpu_stats.online_cpus` is nil then for compatibility with older daemons the length of the corresponding `cpu_usage.percpu_usage` array should be used. On a cgroup v2 host, the following fields are not set * `blkio_stats`: all fields other than `io_service_bytes_recursive` * `cpu_stats`: `cpu_usage.percpu_usage` * `memory_stats`: `max_usage` and `failcnt` Also, `memory_stats.stats` fields are incompatible with cgroup v1. 
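As an illustration of the CPU-usage calculation spelled out in the formulas below, here is a minimal Go sketch. The `statsResponse` struct is a hypothetical, trimmed-down decoding of the example JSON in this response (it is not a type defined by this API), and the fallback to `percpu_usage` handles the nil `online_cpus` case described above.

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// Hypothetical, trimmed decoding of the stats JSON; only the fields
// needed for the CPU calculation are declared.
type cpuUsage struct {
	TotalUsage  uint64   `json:"total_usage"`
	PercpuUsage []uint64 `json:"percpu_usage"`
}

type cpuStats struct {
	CPUUsage       cpuUsage `json:"cpu_usage"`
	SystemCPUUsage uint64   `json:"system_cpu_usage"`
	OnlineCPUs     uint32   `json:"online_cpus"`
}

type statsResponse struct {
	CPUStats    cpuStats `json:"cpu_stats"`
	PreCPUStats cpuStats `json:"precpu_stats"`
}

// cpuPercent implements the cpu_delta / system_cpu_delta formula shown below.
func cpuPercent(s statsResponse) float64 {
	cpuDelta := float64(s.CPUStats.CPUUsage.TotalUsage) - float64(s.PreCPUStats.CPUUsage.TotalUsage)
	systemDelta := float64(s.CPUStats.SystemCPUUsage) - float64(s.PreCPUStats.SystemCPUUsage)
	numCPUs := float64(s.CPUStats.OnlineCPUs)
	if numCPUs == 0 {
		// Older daemons: fall back to the length of percpu_usage.
		numCPUs = float64(len(s.CPUStats.CPUUsage.PercpuUsage))
	}
	if systemDelta <= 0 || cpuDelta < 0 {
		return 0
	}
	return cpuDelta / systemDelta * numCPUs * 100.0
}

func main() {
	// Reads a single stats JSON object (e.g. captured with ?stream=false) from stdin.
	var s statsResponse
	if err := json.NewDecoder(os.Stdin).Decode(&s); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("CPU usage %%: %.2f\n", cpuPercent(s))
}
```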
To calculate the values shown by the `stats` command of the docker cli tool the following formulas can be used: * used_memory = `memory_stats.usage - memory_stats.stats.cache` * available_memory = `memory_stats.limit` * Memory usage % = `(used_memory / available_memory) * 100.0` * cpu_delta = `cpu_stats.cpu_usage.total_usage - precpu_stats.cpu_usage.total_usage` * system_cpu_delta = `cpu_stats.system_cpu_usage - precpu_stats.system_cpu_usage` * number_cpus = `lenght(cpu_stats.cpu_usage.percpu_usage)` or `cpu_stats.online_cpus` * CPU usage % = `(cpu_delta / system_cpu_delta) * number_cpus * 100.0` operationId: "ContainerStats" produces: ["application/json"] responses: 200: description: "no error" schema: type: "object" examples: application/json: read: "2015-01-08T22:57:31.547920715Z" pids_stats: current: 3 networks: eth0: rx_bytes: 5338 rx_dropped: 0 rx_errors: 0 rx_packets: 36 tx_bytes: 648 tx_dropped: 0 tx_errors: 0 tx_packets: 8 eth5: rx_bytes: 4641 rx_dropped: 0 rx_errors: 0 rx_packets: 26 tx_bytes: 690 tx_dropped: 0 tx_errors: 0 tx_packets: 9 memory_stats: stats: total_pgmajfault: 0 cache: 0 mapped_file: 0 total_inactive_file: 0 pgpgout: 414 rss: 6537216 total_mapped_file: 0 writeback: 0 unevictable: 0 pgpgin: 477 total_unevictable: 0 pgmajfault: 0 total_rss: 6537216 total_rss_huge: 6291456 total_writeback: 0 total_inactive_anon: 0 rss_huge: 6291456 hierarchical_memory_limit: 67108864 total_pgfault: 964 total_active_file: 0 active_anon: 6537216 total_active_anon: 6537216 total_pgpgout: 414 total_cache: 0 inactive_anon: 0 active_file: 0 pgfault: 964 inactive_file: 0 total_pgpgin: 477 max_usage: 6651904 usage: 6537216 failcnt: 0 limit: 67108864 blkio_stats: {} cpu_stats: cpu_usage: percpu_usage: - 8646879 - 24472255 - 36438778 - 30657443 usage_in_usermode: 50000000 total_usage: 100215355 usage_in_kernelmode: 30000000 system_cpu_usage: 739306590000000 online_cpus: 4 throttling_data: periods: 0 throttled_periods: 0 throttled_time: 0 precpu_stats: cpu_usage: percpu_usage: - 8646879 - 24350896 - 36438778 - 30657443 usage_in_usermode: 50000000 total_usage: 100093996 usage_in_kernelmode: 30000000 system_cpu_usage: 9492140000000 online_cpus: 4 throttling_data: periods: 0 throttled_periods: 0 throttled_time: 0 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "stream" in: "query" description: | Stream the output. If false, the stats will be output once and then it will disconnect. type: "boolean" default: true - name: "one-shot" in: "query" description: | Only get a single stat instead of waiting for 2 cycles. Must be used with `stream=false`. type: "boolean" default: false tags: ["Container"] /containers/{id}/resize: post: summary: "Resize a container TTY" description: "Resize the TTY for a container." 
operationId: "ContainerResize" consumes: - "application/octet-stream" produces: - "text/plain" responses: 200: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "cannot resize container" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "h" in: "query" description: "Height of the TTY session in characters" type: "integer" - name: "w" in: "query" description: "Width of the TTY session in characters" type: "integer" tags: ["Container"] /containers/{id}/start: post: summary: "Start a container" operationId: "ContainerStart" responses: 204: description: "no error" 304: description: "container already started" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "detachKeys" in: "query" description: | Override the key sequence for detaching a container. Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,` or `_`. type: "string" tags: ["Container"] /containers/{id}/stop: post: summary: "Stop a container" operationId: "ContainerStop" responses: 204: description: "no error" 304: description: "container already stopped" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "t" in: "query" description: "Number of seconds to wait before killing the container" type: "integer" tags: ["Container"] /containers/{id}/restart: post: summary: "Restart a container" operationId: "ContainerRestart" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "t" in: "query" description: "Number of seconds to wait before killing the container" type: "integer" tags: ["Container"] /containers/{id}/kill: post: summary: "Kill a container" description: | Send a POSIX signal to a container, defaulting to killing the container. 
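For example, the equivalent call through the Go client (`github.com/docker/docker/client`) looks roughly like the sketch below; it is illustrative only, and the container name is a placeholder.

```go
package main

import (
	"context"
	"log"

	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv)
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	// Send SIGTERM instead of the default SIGKILL; "my-container" is a placeholder.
	if err := cli.ContainerKill(context.Background(), "my-container", "SIGTERM"); err != nil {
		log.Fatal(err)
	}
}
```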
operationId: "ContainerKill" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "container is not running" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "Container d37cde0fe4ad63c3a7252023b2f9800282894247d145cb5933ddf6e52cc03a28 is not running" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "signal" in: "query" description: "Signal to send to the container as an integer or string (e.g. `SIGINT`)" type: "string" default: "SIGKILL" tags: ["Container"] /containers/{id}/update: post: summary: "Update a container" description: | Change various configuration options of a container without having to recreate it. operationId: "ContainerUpdate" consumes: ["application/json"] produces: ["application/json"] responses: 200: description: "The container has been updated." schema: type: "object" title: "ContainerUpdateResponse" description: "OK response to ContainerUpdate operation" properties: Warnings: type: "array" items: type: "string" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "update" in: "body" required: true schema: allOf: - $ref: "#/definitions/Resources" - type: "object" properties: RestartPolicy: $ref: "#/definitions/RestartPolicy" example: BlkioWeight: 300 CpuShares: 512 CpuPeriod: 100000 CpuQuota: 50000 CpuRealtimePeriod: 1000000 CpuRealtimeRuntime: 10000 CpusetCpus: "0,1" CpusetMems: "0" Memory: 314572800 MemorySwap: 514288000 MemoryReservation: 209715200 KernelMemory: 52428800 RestartPolicy: MaximumRetryCount: 4 Name: "on-failure" tags: ["Container"] /containers/{id}/rename: post: summary: "Rename a container" operationId: "ContainerRename" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "name already in use" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "name" in: "query" required: true description: "New name for the container" type: "string" tags: ["Container"] /containers/{id}/pause: post: summary: "Pause a container" description: | Use the freezer cgroup to suspend all processes in a container. Traditionally, when suspending a process the `SIGSTOP` signal is used, which is observable by the process being suspended. With the freezer cgroup the process is unaware, and unable to capture, that it is being suspended, and subsequently resumed. 
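As a rough illustration (not part of the API contract), pausing and later resuming a container through the Go client (`github.com/docker/docker/client`) can be sketched as follows; the container name is a placeholder.

```go
package main

import (
	"context"
	"log"
	"time"

	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv)
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ctx := context.Background()
	if err := cli.ContainerPause(ctx, "my-container"); err != nil {
		log.Fatal(err)
	}
	// All processes in the container are frozen at this point.
	time.Sleep(5 * time.Second)
	if err := cli.ContainerUnpause(ctx, "my-container"); err != nil {
		log.Fatal(err)
	}
}
```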
operationId: "ContainerPause" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/unpause: post: summary: "Unpause a container" description: "Resume a container which has been paused." operationId: "ContainerUnpause" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/attach: post: summary: "Attach to a container" description: | Attach to a container to read its output or send it input. You can attach to the same container multiple times and you can reattach to containers that have been detached. Either the `stream` or `logs` parameter must be `true` for this endpoint to do anything. See the [documentation for the `docker attach` command](https://docs.docker.com/engine/reference/commandline/attach/) for more details. ### Hijacking This endpoint hijacks the HTTP connection to transport `stdin`, `stdout`, and `stderr` on the same socket. This is the response from the daemon for an attach request: ``` HTTP/1.1 200 OK Content-Type: application/vnd.docker.raw-stream [STREAM] ``` After the headers and two new lines, the TCP connection can now be used for raw, bidirectional communication between the client and server. To hint potential proxies about connection hijacking, the Docker client can also optionally send connection upgrade headers. For example, the client sends this request to upgrade the connection: ``` POST /containers/16253994b7c4/attach?stream=1&stdout=1 HTTP/1.1 Upgrade: tcp Connection: Upgrade ``` The Docker daemon will respond with a `101 UPGRADED` response, and will similarly follow with the raw stream: ``` HTTP/1.1 101 UPGRADED Content-Type: application/vnd.docker.raw-stream Connection: Upgrade Upgrade: tcp [STREAM] ``` ### Stream format When the TTY setting is disabled in [`POST /containers/create`](#operation/ContainerCreate), the stream over the hijacked connected is multiplexed to separate out `stdout` and `stderr`. The stream consists of a series of frames, each containing a header and a payload. The header contains the information which the stream writes (`stdout` or `stderr`). It also contains the size of the associated frame encoded in the last four bytes (`uint32`). It is encoded on the first eight bytes like this: ```go header := [8]byte{STREAM_TYPE, 0, 0, 0, SIZE1, SIZE2, SIZE3, SIZE4} ``` `STREAM_TYPE` can be: - 0: `stdin` (is written on `stdout`) - 1: `stdout` - 2: `stderr` `SIZE1, SIZE2, SIZE3, SIZE4` are the four bytes of the `uint32` size encoded as big endian. Following the header is the payload, which is the specified number of bytes of `STREAM_TYPE`. The simplest way to implement this protocol is the following: 1. Read 8 bytes. 2. Choose `stdout` or `stderr` depending on the first byte. 3. Extract the frame size from the last four bytes. 4. Read the extracted size and output it on the correct output. 5. Goto 1. 
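As a reference for the five steps above, a minimal Go sketch of the de-multiplexing loop could look like this (illustrative only; it copies frames from a raw multiplexed stream on stdin to stdout/stderr):

```go
package main

import (
	"encoding/binary"
	"fmt"
	"io"
	"log"
	"os"
)

// demux reads 8-byte frame headers and copies each payload to the
// writer selected by the STREAM_TYPE byte.
func demux(src io.Reader, stdout, stderr io.Writer) error {
	header := make([]byte, 8)
	for {
		if _, err := io.ReadFull(src, header); err != nil {
			if err == io.EOF {
				return nil // clean end of stream
			}
			return err
		}
		var dst io.Writer
		switch header[0] {
		case 0, 1: // stdin (written on stdout) and stdout
			dst = stdout
		case 2: // stderr
			dst = stderr
		default:
			return fmt.Errorf("unknown stream type %d", header[0])
		}
		size := binary.BigEndian.Uint32(header[4:8])
		if _, err := io.CopyN(dst, src, int64(size)); err != nil {
			return err
		}
	}
}

func main() {
	if err := demux(os.Stdin, os.Stdout, os.Stderr); err != nil {
		log.Fatal(err)
	}
}
```

The `stdcopy` package in the Docker source tree (`github.com/docker/docker/pkg/stdcopy`) provides a ready-made implementation of this stream format.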
### Stream format when using a TTY When the TTY setting is enabled in [`POST /containers/create`](#operation/ContainerCreate), the stream is not multiplexed. The data exchanged over the hijacked connection is simply the raw data from the process PTY and client's `stdin`. operationId: "ContainerAttach" produces: - "application/vnd.docker.raw-stream" responses: 101: description: "no error, hints proxy about hijacking" 200: description: "no error, no upgrade header found" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "detachKeys" in: "query" description: | Override the key sequence for detaching a container.Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,` or `_`. type: "string" - name: "logs" in: "query" description: | Replay previous logs from the container. This is useful for attaching to a container that has started and you want to output everything since the container started. If `stream` is also enabled, once all the previous output has been returned, it will seamlessly transition into streaming current output. type: "boolean" default: false - name: "stream" in: "query" description: | Stream attached streams from the time the request was made onwards. type: "boolean" default: false - name: "stdin" in: "query" description: "Attach to `stdin`" type: "boolean" default: false - name: "stdout" in: "query" description: "Attach to `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Attach to `stderr`" type: "boolean" default: false tags: ["Container"] /containers/{id}/attach/ws: get: summary: "Attach to a container via a websocket" operationId: "ContainerAttachWebsocket" responses: 101: description: "no error, hints proxy about hijacking" 200: description: "no error, no upgrade header found" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "detachKeys" in: "query" description: | Override the key sequence for detaching a container.Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,`, or `_`. type: "string" - name: "logs" in: "query" description: "Return logs" type: "boolean" default: false - name: "stream" in: "query" description: "Return stream" type: "boolean" default: false - name: "stdin" in: "query" description: "Attach to `stdin`" type: "boolean" default: false - name: "stdout" in: "query" description: "Attach to `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Attach to `stderr`" type: "boolean" default: false tags: ["Container"] /containers/{id}/wait: post: summary: "Wait for a container" description: "Block until a container stops, then returns the exit code." 
operationId: "ContainerWait" produces: ["application/json"] responses: 200: description: "The container has exited." schema: type: "object" title: "ContainerWaitResponse" description: "OK response to ContainerWait operation" required: [StatusCode] properties: StatusCode: description: "Exit code of the container" type: "integer" x-nullable: false Error: description: "container waiting error, if any" type: "object" properties: Message: description: "Details of an error" type: "string" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "condition" in: "query" description: | Wait until a container state reaches the given condition, either 'not-running' (default), 'next-exit', or 'removed'. type: "string" default: "not-running" tags: ["Container"] /containers/{id}: delete: summary: "Remove a container" operationId: "ContainerDelete" responses: 204: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "conflict" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: | You cannot remove a running container: c2ada9df5af8. Stop the container before attempting removal or force remove 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "v" in: "query" description: "Remove anonymous volumes associated with the container." type: "boolean" default: false - name: "force" in: "query" description: "If the container is running, kill it before removing it." type: "boolean" default: false - name: "link" in: "query" description: "Remove the specified link associated with the container." type: "boolean" default: false tags: ["Container"] /containers/{id}/archive: head: summary: "Get information about files in a container" description: | A response header `X-Docker-Container-Path-Stat` is returned, containing a base64-encoded JSON object with some filesystem header information about the path. operationId: "ContainerArchiveInfo" responses: 200: description: "no error" headers: X-Docker-Container-Path-Stat: type: "string" description: | A base64-encoded JSON object with some filesystem header information about the path 400: description: "Bad parameter" schema: allOf: - $ref: "#/definitions/ErrorResponse" - type: "object" properties: message: description: | The error message. Either "must specify path parameter" (path cannot be empty) or "not a directory" (path was asserted to be a directory but exists as a file). type: "string" x-nullable: false 404: description: "Container or path does not exist" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "path" in: "query" required: true description: "Resource in the container’s filesystem to archive." 
type: "string" tags: ["Container"] get: summary: "Get an archive of a filesystem resource in a container" description: "Get a tar archive of a resource in the filesystem of container id." operationId: "ContainerArchive" produces: ["application/x-tar"] responses: 200: description: "no error" 400: description: "Bad parameter" schema: allOf: - $ref: "#/definitions/ErrorResponse" - type: "object" properties: message: description: | The error message. Either "must specify path parameter" (path cannot be empty) or "not a directory" (path was asserted to be a directory but exists as a file). type: "string" x-nullable: false 404: description: "Container or path does not exist" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "path" in: "query" required: true description: "Resource in the container’s filesystem to archive." type: "string" tags: ["Container"] put: summary: "Extract an archive of files or folders to a directory in a container" description: "Upload a tar archive to be extracted to a path in the filesystem of container id." operationId: "PutContainerArchive" consumes: ["application/x-tar", "application/octet-stream"] responses: 200: description: "The content was extracted successfully" 400: description: "Bad parameter" schema: $ref: "#/definitions/ErrorResponse" 403: description: "Permission denied, the volume or container rootfs is marked as read-only." schema: $ref: "#/definitions/ErrorResponse" 404: description: "No such container or path does not exist inside the container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "path" in: "query" required: true description: "Path to a directory in the container to extract the archive’s contents into. " type: "string" - name: "noOverwriteDirNonDir" in: "query" description: | If `1`, `true`, or `True` then it will be an error if unpacking the given content would cause an existing directory to be replaced with a non-directory and vice versa. type: "string" - name: "copyUIDGID" in: "query" description: | If `1`, `true`, then it will copy UID/GID maps to the dest file or dir type: "string" - name: "inputStream" in: "body" required: true description: | The input stream must be a tar archive compressed with one of the following algorithms: `identity` (no compression), `gzip`, `bzip2`, or `xz`. schema: type: "string" format: "binary" tags: ["Container"] /containers/prune: post: summary: "Delete stopped containers" produces: - "application/json" operationId: "ContainerPrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `until=<timestamp>` Prune containers created before this timestamp. The `<timestamp>` can be Unix timestamps, date formatted timestamps, or Go duration strings (e.g. `10m`, `1h30m`) computed relative to the daemon machine’s time. - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune containers with (or without, in case `label!=...` is used) the specified labels. 
type: "string" responses: 200: description: "No error" schema: type: "object" title: "ContainerPruneResponse" properties: ContainersDeleted: description: "Container IDs that were deleted" type: "array" items: type: "string" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Container"] /images/json: get: summary: "List Images" description: "Returns a list of images on the server. Note that it uses a different, smaller representation of an image than inspecting a single image." operationId: "ImageList" produces: - "application/json" responses: 200: description: "Summary image data for the images matching the query" schema: type: "array" items: $ref: "#/definitions/ImageSummary" examples: application/json: - Id: "sha256:e216a057b1cb1efc11f8a268f37ef62083e70b1b38323ba252e25ac88904a7e8" ParentId: "" RepoTags: - "ubuntu:12.04" - "ubuntu:precise" RepoDigests: - "ubuntu@sha256:992069aee4016783df6345315302fa59681aae51a8eeb2f889dea59290f21787" Created: 1474925151 Size: 103579269 VirtualSize: 103579269 SharedSize: 0 Labels: {} Containers: 2 - Id: "sha256:3e314f95dcace0f5e4fd37b10862fe8398e3c60ed36600bc0ca5fda78b087175" ParentId: "" RepoTags: - "ubuntu:12.10" - "ubuntu:quantal" RepoDigests: - "ubuntu@sha256:002fba3e3255af10be97ea26e476692a7ebed0bb074a9ab960b2e7a1526b15d7" - "ubuntu@sha256:68ea0200f0b90df725d99d823905b04cf844f6039ef60c60bf3e019915017bd3" Created: 1403128455 Size: 172064416 VirtualSize: 172064416 SharedSize: 0 Labels: {} Containers: 5 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "all" in: "query" description: "Show all images. Only images from a final layer (no children) are shown by default." type: "boolean" default: false - name: "filters" in: "query" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the images list. Available filters: - `before`=(`<image-name>[:<tag>]`, `<image id>` or `<image@digest>`) - `dangling=true` - `label=key` or `label="key=value"` of an image label - `reference`=(`<image-name>[:<tag>]`) - `since`=(`<image-name>[:<tag>]`, `<image id>` or `<image@digest>`) type: "string" - name: "shared-size" in: "query" description: "Compute and show shared size as a `SharedSize` field on each image." type: "boolean" default: false - name: "digests" in: "query" description: "Show digest information as a `RepoDigests` field on each image." type: "boolean" default: false tags: ["Image"] /build: post: summary: "Build an image" description: | Build an image from a tar archive with a `Dockerfile` in it. The `Dockerfile` specifies how the image is built from the tar archive. It is typically in the archive's root, but can be at a different path or have a different name by specifying the `dockerfile` parameter. [See the `Dockerfile` reference for more information](https://docs.docker.com/engine/reference/builder/). The Docker daemon performs a preliminary validation of the `Dockerfile` before starting the build, and returns an error if the syntax is incorrect. After that, each instruction is run one-by-one until the ID of the new image is output. The build is canceled if the client drops the connection by quitting or being killed. 
operationId: "ImageBuild" consumes: - "application/octet-stream" produces: - "application/json" parameters: - name: "inputStream" in: "body" description: "A tar archive compressed with one of the following algorithms: identity (no compression), gzip, bzip2, xz." schema: type: "string" format: "binary" - name: "dockerfile" in: "query" description: "Path within the build context to the `Dockerfile`. This is ignored if `remote` is specified and points to an external `Dockerfile`." type: "string" default: "Dockerfile" - name: "t" in: "query" description: "A name and optional tag to apply to the image in the `name:tag` format. If you omit the tag the default `latest` value is assumed. You can provide several `t` parameters." type: "string" - name: "extrahosts" in: "query" description: "Extra hosts to add to /etc/hosts" type: "string" - name: "remote" in: "query" description: "A Git repository URI or HTTP/HTTPS context URI. If the URI points to a single text file, the file’s contents are placed into a file called `Dockerfile` and the image is built from that file. If the URI points to a tarball, the file is downloaded by the daemon and the contents therein used as the context for the build. If the URI points to a tarball and the `dockerfile` parameter is also specified, there must be a file with the corresponding path inside the tarball." type: "string" - name: "q" in: "query" description: "Suppress verbose build output." type: "boolean" default: false - name: "nocache" in: "query" description: "Do not use the cache when building the image." type: "boolean" default: false - name: "cachefrom" in: "query" description: "JSON array of images used for build cache resolution." type: "string" - name: "pull" in: "query" description: "Attempt to pull the image even if an older image exists locally." type: "string" - name: "rm" in: "query" description: "Remove intermediate containers after a successful build." type: "boolean" default: true - name: "forcerm" in: "query" description: "Always remove intermediate containers, even upon failure." type: "boolean" default: false - name: "memory" in: "query" description: "Set memory limit for build." type: "integer" - name: "memswap" in: "query" description: "Total memory (memory + swap). Set as `-1` to disable swap." type: "integer" - name: "cpushares" in: "query" description: "CPU shares (relative weight)." type: "integer" - name: "cpusetcpus" in: "query" description: "CPUs in which to allow execution (e.g., `0-3`, `0,1`)." type: "string" - name: "cpuperiod" in: "query" description: "The length of a CPU period in microseconds." type: "integer" - name: "cpuquota" in: "query" description: "Microseconds of CPU time that the container can get in a CPU period." type: "integer" - name: "buildargs" in: "query" description: > JSON map of string pairs for build-time variables. Users pass these values at build-time. Docker uses the buildargs as the environment context for commands run via the `Dockerfile` RUN instruction, or for variable expansion in other `Dockerfile` instructions. This is not meant for passing secret values. For example, the build arg `FOO=bar` would become `{"FOO":"bar"}` in JSON. This would result in the query parameter `buildargs={"FOO":"bar"}`. Note that `{"FOO":"bar"}` should be URI component encoded. [Read more about the buildargs instruction.](https://docs.docker.com/engine/reference/builder/#arg) type: "string" - name: "shmsize" in: "query" description: "Size of `/dev/shm` in bytes. The size must be greater than 0. 
If omitted the system uses 64MB." type: "integer" - name: "squash" in: "query" description: "Squash the resulting images layers into a single layer. *(Experimental release only.)*" type: "boolean" - name: "labels" in: "query" description: "Arbitrary key/value labels to set on the image, as a JSON map of string pairs." type: "string" - name: "networkmode" in: "query" description: | Sets the networking mode for the run commands during build. Supported standard values are: `bridge`, `host`, `none`, and `container:<name|id>`. Any other value is taken as a custom network's name or ID to which this container should connect to. type: "string" - name: "Content-type" in: "header" type: "string" enum: - "application/x-tar" default: "application/x-tar" - name: "X-Registry-Config" in: "header" description: | This is a base64-encoded JSON object with auth configurations for multiple registries that a build may refer to. The key is a registry URL, and the value is an auth configuration object, [as described in the authentication section](#section/Authentication). For example: ``` { "docker.example.com": { "username": "janedoe", "password": "hunter2" }, "https://index.docker.io/v1/": { "username": "mobydock", "password": "conta1n3rize14" } } ``` Only the registry domain name (and port if not the default 443) are required. However, for legacy reasons, the Docker Hub registry must be specified with both a `https://` prefix and a `/v1/` suffix even though Docker will prefer to use the v2 registry API. type: "string" - name: "platform" in: "query" description: "Platform in the format os[/arch[/variant]]" type: "string" default: "" - name: "target" in: "query" description: "Target build stage" type: "string" default: "" - name: "outputs" in: "query" description: "BuildKit output configuration" type: "string" default: "" responses: 200: description: "no error" 400: description: "Bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Image"] /build/prune: post: summary: "Delete builder cache" produces: - "application/json" operationId: "BuildPrune" parameters: - name: "keep-storage" in: "query" description: "Amount of disk space in bytes to keep for cache" type: "integer" format: "int64" - name: "all" in: "query" type: "boolean" description: "Remove all types of build cache" - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the list of build cache objects. Available filters: - `until=<duration>`: duration relative to daemon's time, during which build cache was not used, in Go's duration format (e.g., '24h') - `id=<id>` - `parent=<id>` - `type=<string>` - `description=<string>` - `inuse` - `shared` - `private` responses: 200: description: "No error" schema: type: "object" title: "BuildPruneResponse" properties: CachesDeleted: type: "array" items: description: "ID of build cache object" type: "string" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Image"] /images/create: post: summary: "Create an image" description: "Create an image by either pulling it from a registry or importing it." 
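# Illustrative usage (comment only): pulling an image through this endpoint. The
# response body is a stream of JSON progress messages. Image name and tag are
# placeholders.
#
#   curl -s -X POST --unix-socket /var/run/docker.sock \
#     "http://localhost/images/create?fromImage=alpine&tag=latest"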
operationId: "ImageCreate" consumes: - "text/plain" - "application/octet-stream" produces: - "application/json" responses: 200: description: "no error" 404: description: "repository does not exist or no read access" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "fromImage" in: "query" description: "Name of the image to pull. The name may include a tag or digest. This parameter may only be used when pulling an image. The pull is cancelled if the HTTP connection is closed." type: "string" - name: "fromSrc" in: "query" description: "Source to import. The value may be a URL from which the image can be retrieved or `-` to read the image from the request body. This parameter may only be used when importing an image." type: "string" - name: "repo" in: "query" description: "Repository name given to an image when it is imported. The repo may include a tag. This parameter may only be used when importing an image." type: "string" - name: "tag" in: "query" description: "Tag or digest. If empty when pulling an image, this causes all tags for the given image to be pulled." type: "string" - name: "message" in: "query" description: "Set commit message for imported image." type: "string" - name: "inputImage" in: "body" description: "Image content if the value `-` has been specified in fromSrc query parameter" schema: type: "string" required: false - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration. Refer to the [authentication section](#section/Authentication) for details. type: "string" - name: "changes" in: "query" description: | Apply `Dockerfile` instructions to the image that is created, for example: `changes=ENV DEBUG=true`. Note that `ENV DEBUG=true` should be URI component encoded. Supported `Dockerfile` instructions: `CMD`|`ENTRYPOINT`|`ENV`|`EXPOSE`|`ONBUILD`|`USER`|`VOLUME`|`WORKDIR` type: "array" items: type: "string" - name: "platform" in: "query" description: "Platform in the format os[/arch[/variant]]" type: "string" default: "" tags: ["Image"] /images/{name}/json: get: summary: "Inspect an image" description: "Return low-level information about an image." 
operationId: "ImageInspect" produces: - "application/json" responses: 200: description: "No error" schema: $ref: "#/definitions/Image" examples: application/json: Id: "sha256:85f05633ddc1c50679be2b16a0479ab6f7637f8884e0cfe0f4d20e1ebb3d6e7c" Container: "cb91e48a60d01f1e27028b4fc6819f4f290b3cf12496c8176ec714d0d390984a" Comment: "" Os: "linux" Architecture: "amd64" Parent: "sha256:91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c" ContainerConfig: Tty: false Hostname: "e611e15f9c9d" Domainname: "" AttachStdout: false PublishService: "" AttachStdin: false OpenStdin: false StdinOnce: false NetworkDisabled: false OnBuild: [] Image: "91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c" User: "" WorkingDir: "" MacAddress: "" AttachStderr: false Labels: com.example.license: "GPL" com.example.version: "1.0" com.example.vendor: "Acme" Env: - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" Cmd: - "/bin/sh" - "-c" - "#(nop) LABEL com.example.vendor=Acme com.example.license=GPL com.example.version=1.0" DockerVersion: "1.9.0-dev" VirtualSize: 188359297 Size: 0 Author: "" Created: "2015-09-10T08:30:53.26995814Z" GraphDriver: Name: "aufs" Data: {} RepoDigests: - "localhost:5000/test/busybox/example@sha256:cbbf2f9a99b47fc460d422812b6a5adff7dfee951d8fa2e4a98caa0382cfbdbf" RepoTags: - "example:1.0" - "example:latest" - "example:stable" Config: Image: "91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c" NetworkDisabled: false OnBuild: [] StdinOnce: false PublishService: "" AttachStdin: false OpenStdin: false Domainname: "" AttachStdout: false Tty: false Hostname: "e611e15f9c9d" Cmd: - "/bin/bash" Env: - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" Labels: com.example.vendor: "Acme" com.example.version: "1.0" com.example.license: "GPL" MacAddress: "" AttachStderr: false WorkingDir: "" User: "" RootFS: Type: "layers" Layers: - "sha256:1834950e52ce4d5a88a1bbd131c537f4d0e56d10ff0dd69e66be3b7dfa9df7e6" - "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such image: someimage (tag: latest)" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or id" type: "string" required: true tags: ["Image"] /images/{name}/history: get: summary: "Get the history of an image" description: "Return parent layers of an image." 
operationId: "ImageHistory" produces: ["application/json"] responses: 200: description: "List of image layers" schema: type: "array" items: type: "object" x-go-name: HistoryResponseItem title: "HistoryResponseItem" description: "individual image layer information in response to ImageHistory operation" required: [Id, Created, CreatedBy, Tags, Size, Comment] properties: Id: type: "string" x-nullable: false Created: type: "integer" format: "int64" x-nullable: false CreatedBy: type: "string" x-nullable: false Tags: type: "array" items: type: "string" Size: type: "integer" format: "int64" x-nullable: false Comment: type: "string" x-nullable: false examples: application/json: - Id: "3db9c44f45209632d6050b35958829c3a2aa256d81b9a7be45b362ff85c54710" Created: 1398108230 CreatedBy: "/bin/sh -c #(nop) ADD file:eb15dbd63394e063b805a3c32ca7bf0266ef64676d5a6fab4801f2e81e2a5148 in /" Tags: - "ubuntu:lucid" - "ubuntu:10.04" Size: 182964289 Comment: "" - Id: "6cfa4d1f33fb861d4d114f43b25abd0ac737509268065cdfd69d544a59c85ab8" Created: 1398108222 CreatedBy: "/bin/sh -c #(nop) MAINTAINER Tianon Gravi <[email protected]> - mkimage-debootstrap.sh -i iproute,iputils-ping,ubuntu-minimal -t lucid.tar.xz lucid http://archive.ubuntu.com/ubuntu/" Tags: [] Size: 0 Comment: "" - Id: "511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158" Created: 1371157430 CreatedBy: "" Tags: - "scratch12:latest" - "scratch:latest" Size: 0 Comment: "Imported from -" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID" type: "string" required: true tags: ["Image"] /images/{name}/push: post: summary: "Push an image" description: | Push an image to a registry. If you wish to push an image on to a private registry, that image must already have a tag which references the registry. For example, `registry.example.com/myimage:latest`. The push is cancelled if the HTTP connection is closed. operationId: "ImagePush" consumes: - "application/octet-stream" responses: 200: description: "No error" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID." type: "string" required: true - name: "tag" in: "query" description: "The tag to associate with the image on the registry." type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration. Refer to the [authentication section](#section/Authentication) for details. type: "string" required: true tags: ["Image"] /images/{name}/tag: post: summary: "Tag an image" description: "Tag an image so that it becomes part of a repository." operationId: "ImageTag" responses: 201: description: "No error" 400: description: "Bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Conflict" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID to tag." type: "string" required: true - name: "repo" in: "query" description: "The repository to tag in. For example, `someuser/someimage`." type: "string" - name: "tag" in: "query" description: "The name of the new tag." 
type: "string" tags: ["Image"] /images/{name}: delete: summary: "Remove an image" description: | Remove an image, along with any untagged parent images that were referenced by that image. Images can't be removed if they have descendant images, are being used by a running container or are being used by a build. operationId: "ImageDelete" produces: ["application/json"] responses: 200: description: "The image was deleted successfully" schema: type: "array" items: $ref: "#/definitions/ImageDeleteResponseItem" examples: application/json: - Untagged: "3e2f21a89f" - Deleted: "3e2f21a89f" - Deleted: "53b4f83ac9" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Conflict" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID" type: "string" required: true - name: "force" in: "query" description: "Remove the image even if it is being used by stopped containers or has other tags" type: "boolean" default: false - name: "noprune" in: "query" description: "Do not delete untagged parent images" type: "boolean" default: false tags: ["Image"] /images/search: get: summary: "Search images" description: "Search for an image on Docker Hub." operationId: "ImageSearch" produces: - "application/json" responses: 200: description: "No error" schema: type: "array" items: type: "object" title: "ImageSearchResponseItem" properties: description: type: "string" is_official: type: "boolean" is_automated: type: "boolean" name: type: "string" star_count: type: "integer" examples: application/json: - description: "" is_official: false is_automated: false name: "wma55/u1210sshd" star_count: 0 - description: "" is_official: false is_automated: false name: "jdswinbank/sshd" star_count: 0 - description: "" is_official: false is_automated: false name: "vgauthier/sshd" star_count: 0 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "term" in: "query" description: "Term to search" type: "string" required: true - name: "limit" in: "query" description: "Maximum number of results to return" type: "integer" - name: "filters" in: "query" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the images list. Available filters: - `is-automated=(true|false)` - `is-official=(true|false)` - `stars=<number>` Matches images that has at least 'number' stars. type: "string" tags: ["Image"] /images/prune: post: summary: "Delete unused images" produces: - "application/json" operationId: "ImagePrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `dangling=<boolean>` When set to `true` (or `1`), prune only unused *and* untagged images. When set to `false` (or `0`), all unused images are pruned. - `until=<string>` Prune images created before this timestamp. The `<timestamp>` can be Unix timestamps, date formatted timestamps, or Go duration strings (e.g. `10m`, `1h30m`) computed relative to the daemon machine’s time. - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune images with (or without, in case `label!=...` is used) the specified labels. 
type: "string" responses: 200: description: "No error" schema: type: "object" title: "ImagePruneResponse" properties: ImagesDeleted: description: "Images that were deleted" type: "array" items: $ref: "#/definitions/ImageDeleteResponseItem" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Image"] /auth: post: summary: "Check auth configuration" description: | Validate credentials for a registry and, if available, get an identity token for accessing the registry without password. operationId: "SystemAuth" consumes: ["application/json"] produces: ["application/json"] responses: 200: description: "An identity token was generated successfully." schema: type: "object" title: "SystemAuthResponse" required: [Status] properties: Status: description: "The status of the authentication" type: "string" x-nullable: false IdentityToken: description: "An opaque token used to authenticate a user after a successful login" type: "string" x-nullable: false examples: application/json: Status: "Login Succeeded" IdentityToken: "9cbaf023786cd7..." 204: description: "No error" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "authConfig" in: "body" description: "Authentication to check" schema: $ref: "#/definitions/AuthConfig" tags: ["System"] /info: get: summary: "Get system information" operationId: "SystemInfo" produces: - "application/json" responses: 200: description: "No error" schema: $ref: "#/definitions/SystemInfo" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /version: get: summary: "Get version" description: "Returns the version of Docker that is running and various information about the system that Docker is running on." operationId: "SystemVersion" produces: ["application/json"] responses: 200: description: "no error" schema: $ref: "#/definitions/SystemVersion" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /_ping: get: summary: "Ping" description: "This is a dummy endpoint you can use to test if the server is accessible." operationId: "SystemPing" produces: ["text/plain"] responses: 200: description: "no error" schema: type: "string" example: "OK" headers: API-Version: type: "string" description: "Max API Version the server supports" Builder-Version: type: "string" description: "Default version of docker image builder" Docker-Experimental: type: "boolean" description: "If the server is running with experimental mode enabled" Cache-Control: type: "string" default: "no-cache, no-store, must-revalidate" Pragma: type: "string" default: "no-cache" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" headers: Cache-Control: type: "string" default: "no-cache, no-store, must-revalidate" Pragma: type: "string" default: "no-cache" tags: ["System"] head: summary: "Ping" description: "This is a dummy endpoint you can use to test if the server is accessible." 
operationId: "SystemPingHead" produces: ["text/plain"] responses: 200: description: "no error" schema: type: "string" example: "(empty)" headers: API-Version: type: "string" description: "Max API Version the server supports" Builder-Version: type: "string" description: "Default version of docker image builder" Docker-Experimental: type: "boolean" description: "If the server is running with experimental mode enabled" Cache-Control: type: "string" default: "no-cache, no-store, must-revalidate" Pragma: type: "string" default: "no-cache" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /commit: post: summary: "Create a new image from a container" operationId: "ImageCommit" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "containerConfig" in: "body" description: "The container configuration" schema: $ref: "#/definitions/ContainerConfig" - name: "container" in: "query" description: "The ID or name of the container to commit" type: "string" - name: "repo" in: "query" description: "Repository name for the created image" type: "string" - name: "tag" in: "query" description: "Tag name for the create image" type: "string" - name: "comment" in: "query" description: "Commit message" type: "string" - name: "author" in: "query" description: "Author of the image (e.g., `John Hannibal Smith <[email protected]>`)" type: "string" - name: "pause" in: "query" description: "Whether to pause the container before committing" type: "boolean" default: true - name: "changes" in: "query" description: "`Dockerfile` instructions to apply while committing" type: "string" tags: ["Image"] /events: get: summary: "Monitor events" description: | Stream real-time events from the server. Various objects within Docker report events when something happens to them. 
Containers report these events: `attach`, `commit`, `copy`, `create`, `destroy`, `detach`, `die`, `exec_create`, `exec_detach`, `exec_start`, `exec_die`, `export`, `health_status`, `kill`, `oom`, `pause`, `rename`, `resize`, `restart`, `start`, `stop`, `top`, `unpause`, `update`, and `prune` Images report these events: `delete`, `import`, `load`, `pull`, `push`, `save`, `tag`, `untag`, and `prune` Volumes report these events: `create`, `mount`, `unmount`, `destroy`, and `prune` Networks report these events: `create`, `connect`, `disconnect`, `destroy`, `update`, `remove`, and `prune` The Docker daemon reports these events: `reload` Services report these events: `create`, `update`, and `remove` Nodes report these events: `create`, `update`, and `remove` Secrets report these events: `create`, `update`, and `remove` Configs report these events: `create`, `update`, and `remove` The Builder reports `prune` events operationId: "SystemEvents" produces: - "application/json" responses: 200: description: "no error" schema: type: "object" title: "SystemEventsResponse" properties: Type: description: "The type of object emitting the event" type: "string" Action: description: "The type of event" type: "string" Actor: type: "object" properties: ID: description: "The ID of the object emitting the event" type: "string" Attributes: description: "Various key/value attributes of the object, depending on its type" type: "object" additionalProperties: type: "string" time: description: "Timestamp of event" type: "integer" timeNano: description: "Timestamp of event, with nanosecond accuracy" type: "integer" format: "int64" examples: application/json: Type: "container" Action: "create" Actor: ID: "ede54ee1afda366ab42f824e8a5ffd195155d853ceaec74a927f249ea270c743" Attributes: com.example.some-label: "some-label-value" image: "alpine" name: "my-container" time: 1461943101 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "since" in: "query" description: "Show events created since this timestamp then stream new events." type: "string" - name: "until" in: "query" description: "Show events created until this timestamp then stop streaming." type: "string" - name: "filters" in: "query" description: | A JSON encoded value of filters (a `map[string][]string`) to process on the event list. 
Available filters: - `config=<string>` config name or ID - `container=<string>` container name or ID - `daemon=<string>` daemon name or ID - `event=<string>` event type - `image=<string>` image name or ID - `label=<string>` image or container label - `network=<string>` network name or ID - `node=<string>` node ID - `plugin`=<string> plugin name or ID - `scope`=<string> local or swarm - `secret=<string>` secret name or ID - `service=<string>` service name or ID - `type=<string>` object to filter by, one of `container`, `image`, `volume`, `network`, `daemon`, `plugin`, `node`, `service`, `secret` or `config` - `volume=<string>` volume name type: "string" tags: ["System"] /system/df: get: summary: "Get data usage information" operationId: "SystemDataUsage" responses: 200: description: "no error" schema: type: "object" title: "SystemDataUsageResponse" properties: LayersSize: type: "integer" format: "int64" Images: type: "array" items: $ref: "#/definitions/ImageSummary" Containers: type: "array" items: $ref: "#/definitions/ContainerSummary" Volumes: type: "array" items: $ref: "#/definitions/Volume" BuildCache: type: "array" items: $ref: "#/definitions/BuildCache" example: LayersSize: 1092588 Images: - Id: "sha256:2b8fd9751c4c0f5dd266fcae00707e67a2545ef34f9a29354585f93dac906749" ParentId: "" RepoTags: - "busybox:latest" RepoDigests: - "busybox@sha256:a59906e33509d14c036c8678d687bd4eec81ed7c4b8ce907b888c607f6a1e0e6" Created: 1466724217 Size: 1092588 SharedSize: 0 VirtualSize: 1092588 Labels: {} Containers: 1 Containers: - Id: "e575172ed11dc01bfce087fb27bee502db149e1a0fad7c296ad300bbff178148" Names: - "/top" Image: "busybox" ImageID: "sha256:2b8fd9751c4c0f5dd266fcae00707e67a2545ef34f9a29354585f93dac906749" Command: "top" Created: 1472592424 Ports: [] SizeRootFs: 1092588 Labels: {} State: "exited" Status: "Exited (0) 56 minutes ago" HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: IPAMConfig: null Links: null Aliases: null NetworkID: "d687bc59335f0e5c9ee8193e5612e8aee000c8c62ea170cfb99c098f95899d92" EndpointID: "8ed5115aeaad9abb174f68dcf135b49f11daf597678315231a32ca28441dec6a" Gateway: "172.18.0.1" IPAddress: "172.18.0.2" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:12:00:02" Mounts: [] Volumes: - Name: "my-volume" Driver: "local" Mountpoint: "/var/lib/docker/volumes/my-volume/_data" Labels: null Scope: "local" Options: null UsageData: Size: 10920104 RefCount: 2 BuildCache: - ID: "hw53o5aio51xtltp5xjp8v7fx" Parent: "" Type: "regular" Description: "pulled from docker.io/library/debian@sha256:234cb88d3020898631af0ccbbcca9a66ae7306ecd30c9720690858c1b007d2a0" InUse: false Shared: true Size: 0 CreatedAt: "2021-06-28T13:31:01.474619385Z" LastUsedAt: "2021-07-07T22:02:32.738075951Z" UsageCount: 26 - ID: "ndlpt0hhvkqcdfkputsk4cq9c" Parent: "hw53o5aio51xtltp5xjp8v7fx" Type: "regular" Description: "mount / from exec /bin/sh -c echo 'Binary::apt::APT::Keep-Downloaded-Packages \"true\";' > /etc/apt/apt.conf.d/keep-cache" InUse: false Shared: true Size: 51 CreatedAt: "2021-06-28T13:31:03.002625487Z" LastUsedAt: "2021-07-07T22:02:32.773909517Z" UsageCount: 26 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "type" in: "query" description: | Object types, for which to compute and return data. 
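# Illustrative usage (comment only): limiting the data-usage report to images and
# volumes by repeating the "type" query parameter (collectionFormat: multi).
#
#   curl -s --unix-socket /var/run/docker.sock \
#     "http://localhost/system/df?type=image&type=volume"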
type: "array" collectionFormat: multi items: type: "string" enum: ["container", "image", "volume", "build-cache"] tags: ["System"] /images/{name}/get: get: summary: "Export an image" description: | Get a tarball containing all images and metadata for a repository. If `name` is a specific name and tag (e.g. `ubuntu:latest`), then only that image (and its parents) are returned. If `name` is an image ID, similarly only that image (and its parents) are returned, but with the exclusion of the `repositories` file in the tarball, as there were no image names referenced. ### Image tarball format An image tarball contains one directory per image layer (named using its long ID), each containing these files: - `VERSION`: currently `1.0` - the file format version - `json`: detailed layer information, similar to `docker inspect layer_id` - `layer.tar`: A tarfile containing the filesystem changes in this layer The `layer.tar` file contains `aufs` style `.wh..wh.aufs` files and directories for storing attribute changes and deletions. If the tarball defines a repository, the tarball should also include a `repositories` file at the root that contains a list of repository and tag names mapped to layer IDs. ```json { "hello-world": { "latest": "565a9d68a73f6706862bfe8409a7f659776d4d60a8d096eb4a3cbce6999cc2a1" } } ``` operationId: "ImageGet" produces: - "application/x-tar" responses: 200: description: "no error" schema: type: "string" format: "binary" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID" type: "string" required: true tags: ["Image"] /images/get: get: summary: "Export several images" description: | Get a tarball containing all images and metadata for several image repositories. For each value of the `names` parameter: if it is a specific name and tag (e.g. `ubuntu:latest`), then only that image (and its parents) are returned; if it is an image ID, similarly only that image (and its parents) are returned and there would be no names referenced in the 'repositories' file for this image ID. For details on the format, see the [export image endpoint](#operation/ImageGet). operationId: "ImageGetAll" produces: - "application/x-tar" responses: 200: description: "no error" schema: type: "string" format: "binary" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "names" in: "query" description: "Image names to filter by" type: "array" items: type: "string" tags: ["Image"] /images/load: post: summary: "Import images" description: | Load a set of images and tags into a repository. For details on the format, see the [export image endpoint](#operation/ImageGet). operationId: "ImageLoad" consumes: - "application/x-tar" produces: - "application/json" responses: 200: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "imagesTarball" in: "body" description: "Tar archive containing images" schema: type: "string" format: "binary" - name: "quiet" in: "query" description: "Suppress progress details during load." type: "boolean" default: false tags: ["Image"] /containers/{id}/exec: post: summary: "Create an exec instance" description: "Run a command inside a running container." 
operationId: "ContainerExec" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "container is paused" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "execConfig" in: "body" description: "Exec configuration" schema: type: "object" title: "ExecConfig" properties: AttachStdin: type: "boolean" description: "Attach to `stdin` of the exec command." AttachStdout: type: "boolean" description: "Attach to `stdout` of the exec command." AttachStderr: type: "boolean" description: "Attach to `stderr` of the exec command." DetachKeys: type: "string" description: | Override the key sequence for detaching a container. Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,` or `_`. Tty: type: "boolean" description: "Allocate a pseudo-TTY." Env: description: | A list of environment variables in the form `["VAR=value", ...]`. type: "array" items: type: "string" Cmd: type: "array" description: "Command to run, as a string or array of strings." items: type: "string" Privileged: type: "boolean" description: "Runs the exec process with extended privileges." default: false User: type: "string" description: | The user, and optionally, group to run the exec process inside the container. Format is one of: `user`, `user:group`, `uid`, or `uid:gid`. WorkingDir: type: "string" description: | The working directory for the exec process inside the container. example: AttachStdin: false AttachStdout: true AttachStderr: true DetachKeys: "ctrl-p,ctrl-q" Tty: false Cmd: - "date" Env: - "FOO=bar" - "BAZ=quux" required: true - name: "id" in: "path" description: "ID or name of container" type: "string" required: true tags: ["Exec"] /exec/{id}/start: post: summary: "Start an exec instance" description: | Starts a previously set up exec instance. If detach is true, this endpoint returns immediately after starting the command. Otherwise, it sets up an interactive session with the command. operationId: "ExecStart" consumes: - "application/json" produces: - "application/vnd.docker.raw-stream" responses: 200: description: "No error" 404: description: "No such exec instance" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Container is stopped or paused" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "execStartConfig" in: "body" schema: type: "object" title: "ExecStartConfig" properties: Detach: type: "boolean" description: "Detach from the command." Tty: type: "boolean" description: "Allocate a pseudo-TTY." example: Detach: false Tty: false - name: "id" in: "path" description: "Exec instance ID" required: true type: "string" tags: ["Exec"] /exec/{id}/resize: post: summary: "Resize an exec instance" description: | Resize the TTY session used by an exec instance. This endpoint only works if `tty` was specified as part of creating and starting the exec instance. 
operationId: "ExecResize" responses: 201: description: "No error" 404: description: "No such exec instance" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Exec instance ID" required: true type: "string" - name: "h" in: "query" description: "Height of the TTY session in characters" type: "integer" - name: "w" in: "query" description: "Width of the TTY session in characters" type: "integer" tags: ["Exec"] /exec/{id}/json: get: summary: "Inspect an exec instance" description: "Return low-level information about an exec instance." operationId: "ExecInspect" produces: - "application/json" responses: 200: description: "No error" schema: type: "object" title: "ExecInspectResponse" properties: CanRemove: type: "boolean" DetachKeys: type: "string" ID: type: "string" Running: type: "boolean" ExitCode: type: "integer" ProcessConfig: $ref: "#/definitions/ProcessConfig" OpenStdin: type: "boolean" OpenStderr: type: "boolean" OpenStdout: type: "boolean" ContainerID: type: "string" Pid: type: "integer" description: "The system process ID for the exec process." examples: application/json: CanRemove: false ContainerID: "b53ee82b53a40c7dca428523e34f741f3abc51d9f297a14ff874bf761b995126" DetachKeys: "" ExitCode: 2 ID: "f33bbfb39f5b142420f4759b2348913bd4a8d1a6d7fd56499cb41a1bb91d7b3b" OpenStderr: true OpenStdin: true OpenStdout: true ProcessConfig: arguments: - "-c" - "exit 2" entrypoint: "sh" privileged: false tty: true user: "1000" Running: false Pid: 42000 404: description: "No such exec instance" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Exec instance ID" required: true type: "string" tags: ["Exec"] /volumes: get: summary: "List volumes" operationId: "VolumeList" produces: ["application/json"] responses: 200: description: "Summary volume data that matches the query" schema: type: "object" title: "VolumeListResponse" description: "Volume list response" required: [Volumes, Warnings] properties: Volumes: type: "array" x-nullable: false description: "List of volumes" items: $ref: "#/definitions/Volume" Warnings: type: "array" x-nullable: false description: | Warnings that occurred when fetching the list of volumes. items: type: "string" examples: application/json: Volumes: - CreatedAt: "2017-07-19T12:00:26Z" Name: "tardis" Driver: "local" Mountpoint: "/var/lib/docker/volumes/tardis" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Scope: "local" Options: device: "tmpfs" o: "size=100m,uid=1000" type: "tmpfs" Warnings: [] 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" description: | JSON encoded value of the filters (a `map[string][]string`) to process on the volumes list. Available filters: - `dangling=<boolean>` When set to `true` (or `1`), returns all volumes that are not in use by a container. When set to `false` (or `0`), only volumes that are in use by one or more containers are returned. - `driver=<volume-driver-name>` Matches volumes based on their driver. - `label=<key>` or `label=<key>:<value>` Matches volumes based on the presence of a `label` alone or a `label` and a value. - `name=<volume-name>` Matches all or part of a volume name. 
type: "string" format: "json" tags: ["Volume"] /volumes/create: post: summary: "Create a volume" operationId: "VolumeCreate" consumes: ["application/json"] produces: ["application/json"] responses: 201: description: "The volume was created successfully" schema: $ref: "#/definitions/Volume" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "volumeConfig" in: "body" required: true description: "Volume configuration" schema: type: "object" description: "Volume configuration" title: "VolumeConfig" properties: Name: description: | The new volume's name. If not specified, Docker generates a name. type: "string" x-nullable: false Driver: description: "Name of the volume driver to use." type: "string" default: "local" x-nullable: false DriverOpts: description: | A mapping of driver options and values. These options are passed directly to the driver and are driver specific. type: "object" additionalProperties: type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" example: Name: "tardis" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Driver: "custom" tags: ["Volume"] /volumes/{name}: get: summary: "Inspect a volume" operationId: "VolumeInspect" produces: ["application/json"] responses: 200: description: "No error" schema: $ref: "#/definitions/Volume" 404: description: "No such volume" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" required: true description: "Volume name or ID" type: "string" tags: ["Volume"] delete: summary: "Remove a volume" description: "Instruct the driver to remove the volume." operationId: "VolumeDelete" responses: 204: description: "The volume was removed" 404: description: "No such volume or volume driver" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Volume is in use and cannot be removed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" required: true description: "Volume name or ID" type: "string" - name: "force" in: "query" description: "Force the removal of the volume" type: "boolean" default: false tags: ["Volume"] /volumes/prune: post: summary: "Delete unused volumes" produces: - "application/json" operationId: "VolumePrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune volumes with (or without, in case `label!=...` is used) the specified labels. type: "string" responses: 200: description: "No error" schema: type: "object" title: "VolumePruneResponse" properties: VolumesDeleted: description: "Volumes that were deleted" type: "array" items: type: "string" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Volume"] /networks: get: summary: "List networks" description: | Returns a list of networks. For details on the format, see the [network inspect endpoint](#operation/NetworkInspect). Note that it uses a different, smaller representation of a network than inspecting a single network. 
For example, the list of containers attached to the network is not propagated in API versions 1.28 and up. operationId: "NetworkList" produces: - "application/json" responses: 200: description: "No error" schema: type: "array" items: $ref: "#/definitions/Network" examples: application/json: - Name: "bridge" Id: "f2de39df4171b0dc801e8002d1d999b77256983dfc63041c0f34030aa3977566" Created: "2016-10-19T06:21:00.416543526Z" Scope: "local" Driver: "bridge" EnableIPv6: false Internal: false Attachable: false Ingress: false IPAM: Driver: "default" Config: - Subnet: "172.17.0.0/16" Options: com.docker.network.bridge.default_bridge: "true" com.docker.network.bridge.enable_icc: "true" com.docker.network.bridge.enable_ip_masquerade: "true" com.docker.network.bridge.host_binding_ipv4: "0.0.0.0" com.docker.network.bridge.name: "docker0" com.docker.network.driver.mtu: "1500" - Name: "none" Id: "e086a3893b05ab69242d3c44e49483a3bbbd3a26b46baa8f61ab797c1088d794" Created: "0001-01-01T00:00:00Z" Scope: "local" Driver: "null" EnableIPv6: false Internal: false Attachable: false Ingress: false IPAM: Driver: "default" Config: [] Containers: {} Options: {} - Name: "host" Id: "13e871235c677f196c4e1ecebb9dc733b9b2d2ab589e30c539efeda84a24215e" Created: "0001-01-01T00:00:00Z" Scope: "local" Driver: "host" EnableIPv6: false Internal: false Attachable: false Ingress: false IPAM: Driver: "default" Config: [] Containers: {} Options: {} 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" description: | JSON encoded value of the filters (a `map[string][]string`) to process on the networks list. Available filters: - `dangling=<boolean>` When set to `true` (or `1`), returns all networks that are not in use by a container. When set to `false` (or `0`), only networks that are in use by one or more containers are returned. - `driver=<driver-name>` Matches a network's driver. - `id=<network-id>` Matches all or part of a network ID. - `label=<key>` or `label=<key>=<value>` of a network label. - `name=<network-name>` Matches all or part of a network name. - `scope=["swarm"|"global"|"local"]` Filters networks by scope (`swarm`, `global`, or `local`). - `type=["custom"|"builtin"]` Filters networks by type. The `custom` keyword returns all user-defined networks. 
type: "string" tags: ["Network"] /networks/{id}: get: summary: "Inspect a network" operationId: "NetworkInspect" produces: - "application/json" responses: 200: description: "No error" schema: $ref: "#/definitions/Network" 404: description: "Network not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" - name: "verbose" in: "query" description: "Detailed inspect output for troubleshooting" type: "boolean" default: false - name: "scope" in: "query" description: "Filter the network by scope (swarm, global, or local)" type: "string" tags: ["Network"] delete: summary: "Remove a network" operationId: "NetworkDelete" responses: 204: description: "No error" 403: description: "operation not supported for pre-defined networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such network" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" tags: ["Network"] /networks/create: post: summary: "Create a network" operationId: "NetworkCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "No error" schema: type: "object" title: "NetworkCreateResponse" properties: Id: description: "The ID of the created network." type: "string" Warning: type: "string" example: Id: "22be93d5babb089c5aab8dbc369042fad48ff791584ca2da2100db837a1c7c30" Warning: "" 403: description: "operation not supported for pre-defined networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "plugin not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "networkConfig" in: "body" description: "Network configuration" required: true schema: type: "object" title: "NetworkCreateRequest" required: ["Name"] properties: Name: description: "The network's name." type: "string" CheckDuplicate: description: | Check for networks with duplicate names. Since Network is primarily keyed based on a random ID and not on the name, and network name is strictly a user-friendly alias to the network which is uniquely identified using ID, there is no guaranteed way to check for duplicates. CheckDuplicate is there to provide a best effort checking of any networks which has the same name but it is not guaranteed to catch all name collisions. type: "boolean" Driver: description: "Name of the network driver plugin to use." type: "string" default: "bridge" Internal: description: "Restrict external access to the network." type: "boolean" Attachable: description: | Globally scoped network is manually attachable by regular containers from workers in swarm mode. type: "boolean" Ingress: description: | Ingress network is the network which provides the routing-mesh in swarm mode. type: "boolean" IPAM: description: "Optional custom IP scheme for the network." $ref: "#/definitions/IPAM" EnableIPv6: description: "Enable IPv6 on the network." type: "boolean" Options: description: "Network specific options to be used by the drivers." type: "object" additionalProperties: type: "string" Labels: description: "User-defined key/value metadata." 
type: "object" additionalProperties: type: "string" example: Name: "isolated_nw" CheckDuplicate: false Driver: "bridge" EnableIPv6: true IPAM: Driver: "default" Config: - Subnet: "172.20.0.0/16" IPRange: "172.20.10.0/24" Gateway: "172.20.10.11" - Subnet: "2001:db8:abcd::/64" Gateway: "2001:db8:abcd::1011" Options: foo: "bar" Internal: true Attachable: false Ingress: false Options: com.docker.network.bridge.default_bridge: "true" com.docker.network.bridge.enable_icc: "true" com.docker.network.bridge.enable_ip_masquerade: "true" com.docker.network.bridge.host_binding_ipv4: "0.0.0.0" com.docker.network.bridge.name: "docker0" com.docker.network.driver.mtu: "1500" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" tags: ["Network"] /networks/{id}/connect: post: summary: "Connect a container to a network" operationId: "NetworkConnect" consumes: - "application/json" responses: 200: description: "No error" 403: description: "Operation not supported for swarm scoped networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "Network or container not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" - name: "container" in: "body" required: true schema: type: "object" title: "NetworkConnectRequest" properties: Container: type: "string" description: "The ID or name of the container to connect to the network." EndpointConfig: $ref: "#/definitions/EndpointSettings" example: Container: "3613f73ba0e4" EndpointConfig: IPAMConfig: IPv4Address: "172.24.56.89" IPv6Address: "2001:db8::5689" tags: ["Network"] /networks/{id}/disconnect: post: summary: "Disconnect a container from a network" operationId: "NetworkDisconnect" consumes: - "application/json" responses: 200: description: "No error" 403: description: "Operation not supported for swarm scoped networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "Network or container not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" - name: "container" in: "body" required: true schema: type: "object" title: "NetworkDisconnectRequest" properties: Container: type: "string" description: | The ID or name of the container to disconnect from the network. Force: type: "boolean" description: | Force the container to disconnect from the network. tags: ["Network"] /networks/prune: post: summary: "Delete unused networks" produces: - "application/json" operationId: "NetworkPrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `until=<timestamp>` Prune networks created before this timestamp. The `<timestamp>` can be Unix timestamps, date formatted timestamps, or Go duration strings (e.g. `10m`, `1h30m`) computed relative to the daemon machine’s time. - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune networks with (or without, in case `label!=...` is used) the specified labels. 
type: "string" responses: 200: description: "No error" schema: type: "object" title: "NetworkPruneResponse" properties: NetworksDeleted: description: "Networks that were deleted" type: "array" items: type: "string" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Network"] /plugins: get: summary: "List plugins" operationId: "PluginList" description: "Returns information about installed plugins." produces: ["application/json"] responses: 200: description: "No error" schema: type: "array" items: $ref: "#/definitions/Plugin" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the plugin list. Available filters: - `capability=<capability name>` - `enable=<true>|<false>` tags: ["Plugin"] /plugins/privileges: get: summary: "Get plugin privileges" operationId: "GetPluginPrivileges" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/PluginPrivilegeItem" example: - Name: "network" Description: "" Value: - "host" - Name: "mount" Description: "" Value: - "/data" - Name: "device" Description: "" Value: - "/dev/cpu_dma_latency" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "remote" in: "query" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" tags: - "Plugin" /plugins/pull: post: summary: "Install a plugin" operationId: "PluginPull" description: | Pulls and installs a plugin. After the plugin is installed, it can be enabled using the [`POST /plugins/{name}/enable` endpoint](#operation/PostPluginsEnable). produces: - "application/json" responses: 204: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "remote" in: "query" description: | Remote reference for plugin to install. The `:latest` tag is optional, and is used as the default if omitted. required: true type: "string" - name: "name" in: "query" description: | Local name for the pulled plugin. The `:latest` tag is optional, and is used as the default if omitted. required: false type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration to use when pulling a plugin from a registry. Refer to the [authentication section](#section/Authentication) for details. type: "string" - name: "body" in: "body" schema: type: "array" items: $ref: "#/definitions/PluginPrivilegeItem" example: - Name: "network" Description: "" Value: - "host" - Name: "mount" Description: "" Value: - "/data" - Name: "device" Description: "" Value: - "/dev/cpu_dma_latency" tags: ["Plugin"] /plugins/{name}/json: get: summary: "Inspect a plugin" operationId: "PluginInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Plugin" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. 
required: true type: "string" tags: ["Plugin"] /plugins/{name}: delete: summary: "Remove a plugin" operationId: "PluginDelete" responses: 200: description: "no error" schema: $ref: "#/definitions/Plugin" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "force" in: "query" description: | Disable the plugin before removing. This may result in issues if the plugin is in use by a container. type: "boolean" default: false tags: ["Plugin"] /plugins/{name}/enable: post: summary: "Enable a plugin" operationId: "PluginEnable" responses: 200: description: "no error" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "timeout" in: "query" description: "Set the HTTP client timeout (in seconds)" type: "integer" default: 0 tags: ["Plugin"] /plugins/{name}/disable: post: summary: "Disable a plugin" operationId: "PluginDisable" responses: 200: description: "no error" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" tags: ["Plugin"] /plugins/{name}/upgrade: post: summary: "Upgrade a plugin" operationId: "PluginUpgrade" responses: 204: description: "no error" 404: description: "plugin not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "remote" in: "query" description: | Remote reference to upgrade to. The `:latest` tag is optional, and is used as the default if omitted. required: true type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration to use when pulling a plugin from a registry. Refer to the [authentication section](#section/Authentication) for details. type: "string" - name: "body" in: "body" schema: type: "array" items: $ref: "#/definitions/PluginPrivilegeItem" example: - Name: "network" Description: "" Value: - "host" - Name: "mount" Description: "" Value: - "/data" - Name: "device" Description: "" Value: - "/dev/cpu_dma_latency" tags: ["Plugin"] /plugins/create: post: summary: "Create a plugin" operationId: "PluginCreate" consumes: - "application/x-tar" responses: 204: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "query" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. 
required: true type: "string" - name: "tarContext" in: "body" description: "Path to tar containing plugin rootfs and manifest" schema: type: "string" format: "binary" tags: ["Plugin"] /plugins/{name}/push: post: summary: "Push a plugin" operationId: "PluginPush" description: | Push a plugin to the registry. parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" responses: 200: description: "no error" 404: description: "plugin not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Plugin"] /plugins/{name}/set: post: summary: "Configure a plugin" operationId: "PluginSet" consumes: - "application/json" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "body" in: "body" schema: type: "array" items: type: "string" example: ["DEBUG=1"] responses: 204: description: "No error" 404: description: "Plugin not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Plugin"] /nodes: get: summary: "List nodes" operationId: "NodeList" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Node" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" description: | Filters to process on the nodes list, encoded as JSON (a `map[string][]string`). Available filters: - `id=<node id>` - `label=<engine label>` - `membership=`(`accepted`|`pending`)` - `name=<node name>` - `node.label=<node label>` - `role=`(`manager`|`worker`)` type: "string" tags: ["Node"] /nodes/{id}: get: summary: "Inspect a node" operationId: "NodeInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Node" 404: description: "no such node" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the node" type: "string" required: true tags: ["Node"] delete: summary: "Delete a node" operationId: "NodeDelete" responses: 200: description: "no error" 404: description: "no such node" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the node" type: "string" required: true - name: "force" in: "query" description: "Force remove a node from the swarm" default: false type: "boolean" tags: ["Node"] /nodes/{id}/update: post: summary: "Update a node" operationId: "NodeUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such node" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID of 
the node" type: "string" required: true - name: "body" in: "body" schema: $ref: "#/definitions/NodeSpec" - name: "version" in: "query" description: | The version number of the node object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true tags: ["Node"] /swarm: get: summary: "Inspect swarm" operationId: "SwarmInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Swarm" 404: description: "no such swarm" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" tags: ["Swarm"] /swarm/init: post: summary: "Initialize a new swarm" operationId: "SwarmInit" produces: - "application/json" - "text/plain" responses: 200: description: "no error" schema: description: "The node ID" type: "string" example: "7v2t30z9blmxuhnyo6s4cpenp" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is already part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: type: "object" title: "SwarmInitRequest" properties: ListenAddr: description: | Listen address used for inter-manager communication, as well as determining the networking interface used for the VXLAN Tunnel Endpoint (VTEP). This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the default swarm listening port is used. type: "string" AdvertiseAddr: description: | Externally reachable address advertised to other nodes. This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the port number from the listen address is used. If `AdvertiseAddr` is not specified, it will be automatically detected when possible. type: "string" DataPathAddr: description: | Address or interface to use for data path traffic (format: `<ip|interface>`), for example, `192.168.1.1`, or an interface, like `eth0`. If `DataPathAddr` is unspecified, the same address as `AdvertiseAddr` is used. The `DataPathAddr` specifies the address that global scope network drivers will publish towards other nodes in order to reach the containers running on this node. Using this parameter it is possible to separate the container data traffic from the management traffic of the cluster. type: "string" DataPathPort: description: | DataPathPort specifies the data path port number for data traffic. Acceptable port range is 1024 to 49151. if no port is set or is set to 0, default port 4789 will be used. type: "integer" format: "uint32" DefaultAddrPool: description: | Default Address Pool specifies default subnet pools for global scope networks. type: "array" items: type: "string" example: ["10.10.0.0/16", "20.20.0.0/16"] ForceNewCluster: description: "Force creation of a new swarm." type: "boolean" SubnetSize: description: | SubnetSize specifies the subnet size of the networks created from the default subnet pool. 
type: "integer" format: "uint32" Spec: $ref: "#/definitions/SwarmSpec" example: ListenAddr: "0.0.0.0:2377" AdvertiseAddr: "192.168.1.1:2377" DataPathPort: 4789 DefaultAddrPool: ["10.10.0.0/8", "20.20.0.0/8"] SubnetSize: 24 ForceNewCluster: false Spec: Orchestration: {} Raft: {} Dispatcher: {} CAConfig: {} EncryptionConfig: AutoLockManagers: false tags: ["Swarm"] /swarm/join: post: summary: "Join an existing swarm" operationId: "SwarmJoin" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is already part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: type: "object" title: "SwarmJoinRequest" properties: ListenAddr: description: | Listen address used for inter-manager communication if the node gets promoted to manager, as well as determining the networking interface used for the VXLAN Tunnel Endpoint (VTEP). type: "string" AdvertiseAddr: description: | Externally reachable address advertised to other nodes. This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the port number from the listen address is used. If `AdvertiseAddr` is not specified, it will be automatically detected when possible. type: "string" DataPathAddr: description: | Address or interface to use for data path traffic (format: `<ip|interface>`), for example, `192.168.1.1`, or an interface, like `eth0`. If `DataPathAddr` is unspecified, the same address as `AdvertiseAddr` is used. The `DataPathAddr` specifies the address that global scope network drivers will publish towards other nodes in order to reach the containers running on this node. Using this parameter it is possible to separate the container data traffic from the management traffic of the cluster. type: "string" RemoteAddrs: description: | Addresses of manager nodes already participating in the swarm. type: "array" items: type: "string" JoinToken: description: "Secret token for joining this swarm." type: "string" example: ListenAddr: "0.0.0.0:2377" AdvertiseAddr: "192.168.1.1:2377" RemoteAddrs: - "node1:2377" JoinToken: "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-7p73s1dx5in4tatdymyhg9hu2" tags: ["Swarm"] /swarm/leave: post: summary: "Leave a swarm" operationId: "SwarmLeave" responses: 200: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "force" description: | Force leave swarm, even if this is the last manager or that it will break the cluster. in: "query" type: "boolean" default: false tags: ["Swarm"] /swarm/update: post: summary: "Update a swarm" operationId: "SwarmUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: $ref: "#/definitions/SwarmSpec" - name: "version" in: "query" description: | The version number of the swarm object being updated. This is required to avoid conflicting writes. 
type: "integer" format: "int64" required: true - name: "rotateWorkerToken" in: "query" description: "Rotate the worker join token." type: "boolean" default: false - name: "rotateManagerToken" in: "query" description: "Rotate the manager join token." type: "boolean" default: false - name: "rotateManagerUnlockKey" in: "query" description: "Rotate the manager unlock key." type: "boolean" default: false tags: ["Swarm"] /swarm/unlockkey: get: summary: "Get the unlock key" operationId: "SwarmUnlockkey" consumes: - "application/json" responses: 200: description: "no error" schema: type: "object" title: "UnlockKeyResponse" properties: UnlockKey: description: "The swarm's unlock key." type: "string" example: UnlockKey: "SWMKEY-1-7c37Cc8654o6p38HnroywCi19pllOnGtbdZEgtKxZu8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" tags: ["Swarm"] /swarm/unlock: post: summary: "Unlock a locked manager" operationId: "SwarmUnlock" consumes: - "application/json" produces: - "application/json" parameters: - name: "body" in: "body" required: true schema: type: "object" title: "SwarmUnlockRequest" properties: UnlockKey: description: "The swarm's unlock key." type: "string" example: UnlockKey: "SWMKEY-1-7c37Cc8654o6p38HnroywCi19pllOnGtbdZEgtKxZu8" responses: 200: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" tags: ["Swarm"] /services: get: summary: "List services" operationId: "ServiceList" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Service" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the services list. Available filters: - `id=<service id>` - `label=<service label>` - `mode=["replicated"|"global"]` - `name=<service name>` - name: "status" in: "query" type: "boolean" description: | Include service status, with count of running and desired tasks. tags: ["Service"] /services/create: post: summary: "Create a service" operationId: "ServiceCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: type: "object" title: "ServiceCreateResponse" properties: ID: description: "The ID of the created service." 
type: "string" Warning: description: "Optional warning message" type: "string" example: ID: "ak7w3gjqoa3kuz8xcpnyy0pvl" Warning: "unable to pin image doesnotexist:latest to digest: image library/doesnotexist:latest not found" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 403: description: "network is not eligible for services" schema: $ref: "#/definitions/ErrorResponse" 409: description: "name conflicts with an existing service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: allOf: - $ref: "#/definitions/ServiceSpec" - type: "object" example: Name: "web" TaskTemplate: ContainerSpec: Image: "nginx:alpine" Mounts: - ReadOnly: true Source: "web-data" Target: "/usr/share/nginx/html" Type: "volume" VolumeOptions: DriverConfig: {} Labels: com.example.something: "something-value" Hosts: ["10.10.10.10 host1", "ABCD:EF01:2345:6789:ABCD:EF01:2345:6789 host2"] User: "33" DNSConfig: Nameservers: ["8.8.8.8"] Search: ["example.org"] Options: ["timeout:3"] Secrets: - File: Name: "www.example.org.key" UID: "33" GID: "33" Mode: 384 SecretID: "fpjqlhnwb19zds35k8wn80lq9" SecretName: "example_org_domain_key" LogDriver: Name: "json-file" Options: max-file: "3" max-size: "10M" Placement: {} Resources: Limits: MemoryBytes: 104857600 Reservations: {} RestartPolicy: Condition: "on-failure" Delay: 10000000000 MaxAttempts: 10 Mode: Replicated: Replicas: 4 UpdateConfig: Parallelism: 2 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 RollbackConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 EndpointSpec: Ports: - Protocol: "tcp" PublishedPort: 8080 TargetPort: 80 Labels: foo: "bar" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration for pulling from private registries. Refer to the [authentication section](#section/Authentication) for details. type: "string" tags: ["Service"] /services/{id}: get: summary: "Inspect a service" operationId: "ServiceInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Service" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID or name of service." required: true type: "string" - name: "insertDefaults" in: "query" description: "Fill empty fields with default values." type: "boolean" default: false tags: ["Service"] delete: summary: "Delete a service" operationId: "ServiceDelete" responses: 200: description: "no error" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID or name of service." 
required: true type: "string" tags: ["Service"] /services/{id}/update: post: summary: "Update a service" operationId: "ServiceUpdate" consumes: ["application/json"] produces: ["application/json"] responses: 200: description: "no error" schema: $ref: "#/definitions/ServiceUpdateResponse" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID or name of service." required: true type: "string" - name: "body" in: "body" required: true schema: allOf: - $ref: "#/definitions/ServiceSpec" - type: "object" example: Name: "top" TaskTemplate: ContainerSpec: Image: "busybox" Args: - "top" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ForceUpdate: 0 Mode: Replicated: Replicas: 1 UpdateConfig: Parallelism: 2 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 RollbackConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 EndpointSpec: Mode: "vip" - name: "version" in: "query" description: | The version number of the service object being updated. This is required to avoid conflicting writes. This version number should be the value as currently set on the service *before* the update. You can find the current version by calling `GET /services/{id}` required: true type: "integer" - name: "registryAuthFrom" in: "query" description: | If the `X-Registry-Auth` header is not specified, this parameter indicates where to find registry authorization credentials. type: "string" enum: ["spec", "previous-spec"] default: "spec" - name: "rollback" in: "query" description: | Set to this parameter to `previous` to cause a server-side rollback to the previous service spec. The supplied spec will be ignored in this case. type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration for pulling from private registries. Refer to the [authentication section](#section/Authentication) for details. type: "string" tags: ["Service"] /services/{id}/logs: get: summary: "Get service logs" description: | Get `stdout` and `stderr` logs from a service. See also [`/containers/{id}/logs`](#operation/ContainerLogs). **Note**: This endpoint works only for services with the `local`, `json-file` or `journald` logging drivers. operationId: "ServiceLogs" responses: 200: description: "logs returned as a stream in response body" schema: type: "string" format: "binary" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such service: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the service" type: "string" - name: "details" in: "query" description: "Show service context and extra details provided to logs." type: "boolean" default: false - name: "follow" in: "query" description: "Keep connection after returning logs." 
type: "boolean" default: false - name: "stdout" in: "query" description: "Return logs from `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Return logs from `stderr`" type: "boolean" default: false - name: "since" in: "query" description: "Only return logs since this time, as a UNIX timestamp" type: "integer" default: 0 - name: "timestamps" in: "query" description: "Add timestamps to every log line" type: "boolean" default: false - name: "tail" in: "query" description: | Only return this number of log lines from the end of the logs. Specify as an integer or `all` to output all log lines. type: "string" default: "all" tags: ["Service"] /tasks: get: summary: "List tasks" operationId: "TaskList" produces: - "application/json" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Task" example: - ID: "0kzzo1i0y4jz6027t0k7aezc7" Version: Index: 71 CreatedAt: "2016-06-07T21:07:31.171892745Z" UpdatedAt: "2016-06-07T21:07:31.376370513Z" Spec: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ServiceID: "9mnpnzenvg8p8tdbtq4wvbkcz" Slot: 1 NodeID: "60gvrl6tm78dmak4yl7srz94v" Status: Timestamp: "2016-06-07T21:07:31.290032978Z" State: "running" Message: "started" ContainerStatus: ContainerID: "e5d62702a1b48d01c3e02ca1e0212a250801fa8d67caca0b6f35919ebc12f035" PID: 677 DesiredState: "running" NetworksAttachments: - Network: ID: "4qvuz4ko70xaltuqbt8956gd1" Version: Index: 18 CreatedAt: "2016-06-07T20:31:11.912919752Z" UpdatedAt: "2016-06-07T21:07:29.955277358Z" Spec: Name: "ingress" Labels: com.docker.swarm.internal: "true" DriverConfiguration: {} IPAMOptions: Driver: {} Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" DriverState: Name: "overlay" Options: com.docker.network.driver.overlay.vxlanid_list: "256" IPAMOptions: Driver: Name: "default" Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" Addresses: - "10.255.0.10/16" - ID: "1yljwbmlr8er2waf8orvqpwms" Version: Index: 30 CreatedAt: "2016-06-07T21:07:30.019104782Z" UpdatedAt: "2016-06-07T21:07:30.231958098Z" Name: "hopeful_cori" Spec: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ServiceID: "9mnpnzenvg8p8tdbtq4wvbkcz" Slot: 1 NodeID: "60gvrl6tm78dmak4yl7srz94v" Status: Timestamp: "2016-06-07T21:07:30.202183143Z" State: "shutdown" Message: "shutdown" ContainerStatus: ContainerID: "1cf8d63d18e79668b0004a4be4c6ee58cddfad2dae29506d8781581d0688a213" DesiredState: "shutdown" NetworksAttachments: - Network: ID: "4qvuz4ko70xaltuqbt8956gd1" Version: Index: 18 CreatedAt: "2016-06-07T20:31:11.912919752Z" UpdatedAt: "2016-06-07T21:07:29.955277358Z" Spec: Name: "ingress" Labels: com.docker.swarm.internal: "true" DriverConfiguration: {} IPAMOptions: Driver: {} Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" DriverState: Name: "overlay" Options: com.docker.network.driver.overlay.vxlanid_list: "256" IPAMOptions: Driver: Name: "default" Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" Addresses: - "10.255.0.5/16" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the tasks list. 
Available filters: - `desired-state=(running | shutdown | accepted)` - `id=<task id>` - `label=key` or `label="key=value"` - `name=<task name>` - `node=<node id or name>` - `service=<service name>` tags: ["Task"] /tasks/{id}: get: summary: "Inspect a task" operationId: "TaskInspect" produces: - "application/json" responses: 200: description: "no error" schema: $ref: "#/definitions/Task" 404: description: "no such task" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID of the task" required: true type: "string" tags: ["Task"] /tasks/{id}/logs: get: summary: "Get task logs" description: | Get `stdout` and `stderr` logs from a task. See also [`/containers/{id}/logs`](#operation/ContainerLogs). **Note**: This endpoint works only for services with the `local`, `json-file` or `journald` logging drivers. operationId: "TaskLogs" responses: 200: description: "logs returned as a stream in response body" schema: type: "string" format: "binary" 404: description: "no such task" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such task: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID of the task" type: "string" - name: "details" in: "query" description: "Show task context and extra details provided to logs." type: "boolean" default: false - name: "follow" in: "query" description: "Keep connection after returning logs." type: "boolean" default: false - name: "stdout" in: "query" description: "Return logs from `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Return logs from `stderr`" type: "boolean" default: false - name: "since" in: "query" description: "Only return logs since this time, as a UNIX timestamp" type: "integer" default: 0 - name: "timestamps" in: "query" description: "Add timestamps to every log line" type: "boolean" default: false - name: "tail" in: "query" description: | Only return this number of log lines from the end of the logs. Specify as an integer or `all` to output all log lines. type: "string" default: "all" tags: ["Task"] /secrets: get: summary: "List secrets" operationId: "SecretList" produces: - "application/json" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Secret" example: - ID: "blt1owaxmitz71s9v5zh81zun" Version: Index: 85 CreatedAt: "2017-07-20T13:55:28.678958722Z" UpdatedAt: "2017-07-20T13:55:28.678958722Z" Spec: Name: "mysql-passwd" Labels: some.label: "some.value" Driver: Name: "secret-bucket" Options: OptionA: "value for driver option A" OptionB: "value for driver option B" - ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "app-dev.crt" Labels: foo: "bar" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the secrets list. 
Available filters: - `id=<secret id>` - `label=<key> or label=<key>=value` - `name=<secret name>` - `names=<secret name>` tags: ["Secret"] /secrets/create: post: summary: "Create a secret" operationId: "SecretCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 409: description: "name conflicts with an existing object" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" schema: allOf: - $ref: "#/definitions/SecretSpec" - type: "object" example: Name: "app-key.crt" Labels: foo: "bar" Data: "VEhJUyBJUyBOT1QgQSBSRUFMIENFUlRJRklDQVRFCg==" Driver: Name: "secret-bucket" Options: OptionA: "value for driver option A" OptionB: "value for driver option B" tags: ["Secret"] /secrets/{id}: get: summary: "Inspect a secret" operationId: "SecretInspect" produces: - "application/json" responses: 200: description: "no error" schema: $ref: "#/definitions/Secret" examples: application/json: ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "app-dev.crt" Labels: foo: "bar" Driver: Name: "secret-bucket" Options: OptionA: "value for driver option A" OptionB: "value for driver option B" 404: description: "secret not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the secret" tags: ["Secret"] delete: summary: "Delete a secret" operationId: "SecretDelete" produces: - "application/json" responses: 204: description: "no error" 404: description: "secret not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the secret" tags: ["Secret"] /secrets/{id}/update: post: summary: "Update a Secret" operationId: "SecretUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such secret" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the secret" type: "string" required: true - name: "body" in: "body" schema: $ref: "#/definitions/SecretSpec" description: | The spec of the secret to update. Currently, only the Labels field can be updated. All other fields must remain unchanged from the [SecretInspect endpoint](#operation/SecretInspect) response values. - name: "version" in: "query" description: | The version number of the secret object being updated. This is required to avoid conflicting writes. 
type: "integer" format: "int64" required: true tags: ["Secret"] /configs: get: summary: "List configs" operationId: "ConfigList" produces: - "application/json" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Config" example: - ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "server.conf" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the configs list. Available filters: - `id=<config id>` - `label=<key> or label=<key>=value` - `name=<config name>` - `names=<config name>` tags: ["Config"] /configs/create: post: summary: "Create a config" operationId: "ConfigCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 409: description: "name conflicts with an existing object" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" schema: allOf: - $ref: "#/definitions/ConfigSpec" - type: "object" example: Name: "server.conf" Labels: foo: "bar" Data: "VEhJUyBJUyBOT1QgQSBSRUFMIENFUlRJRklDQVRFCg==" tags: ["Config"] /configs/{id}: get: summary: "Inspect a config" operationId: "ConfigInspect" produces: - "application/json" responses: 200: description: "no error" schema: $ref: "#/definitions/Config" examples: application/json: ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "app-dev.crt" 404: description: "config not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the config" tags: ["Config"] delete: summary: "Delete a config" operationId: "ConfigDelete" produces: - "application/json" responses: 204: description: "no error" 404: description: "config not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the config" tags: ["Config"] /configs/{id}/update: post: summary: "Update a Config" operationId: "ConfigUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such config" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the config" type: "string" required: true - name: "body" in: "body" schema: $ref: "#/definitions/ConfigSpec" description: | The spec of the config to update. 
Currently, only the Labels field can be updated. All other fields must remain unchanged from the [ConfigInspect endpoint](#operation/ConfigInspect) response values. - name: "version" in: "query" description: | The version number of the config object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true tags: ["Config"] /distribution/{name}/json: get: summary: "Get image information from the registry" description: | Return image digest and platform information by contacting the registry. operationId: "DistributionInspect" produces: - "application/json" responses: 200: description: "descriptor and platform information" schema: type: "object" x-go-name: DistributionInspect title: "DistributionInspectResponse" required: [Descriptor, Platforms] properties: Descriptor: type: "object" description: | A descriptor struct containing digest, media type, and size. properties: mediaType: type: "string" size: type: "integer" format: "int64" digest: type: "string" urls: type: "array" items: type: "string" Platforms: type: "array" description: | An array containing all platforms supported by the image. items: type: "object" properties: architecture: type: "string" os: type: "string" os.version: type: "string" os.features: type: "array" items: type: "string" variant: type: "string" Features: type: "array" items: type: "string" examples: application/json: Descriptor: MediaType: "application/vnd.docker.distribution.manifest.v2+json" Digest: "sha256:c0537ff6a5218ef531ece93d4984efc99bbf3f7497c0a7726c88e2bb7584dc96" Size: 3987495 URLs: - "" Platforms: - architecture: "amd64" os: "linux" os.version: "" os.features: - "" variant: "" Features: - "" 401: description: "Failed authentication or no image found" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such image: someimage (tag: latest)" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or id" type: "string" required: true tags: ["Distribution"] /session: post: summary: "Initialize interactive session" description: | Start a new interactive session with a server. Session allows server to call back to the client for advanced capabilities. ### Hijacking This endpoint hijacks the HTTP connection to HTTP2 transport that allows the client to expose gPRC services on that connection. For example, the client sends this request to upgrade the connection: ``` POST /session HTTP/1.1 Upgrade: h2c Connection: Upgrade ``` The Docker daemon responds with a `101 UPGRADED` response follow with the raw stream: ``` HTTP/1.1 101 UPGRADED Connection: Upgrade Upgrade: h2c ``` operationId: "Session" produces: - "application/vnd.docker.raw-stream" responses: 101: description: "no error, hijacking successful" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Session"]
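Purely as an illustration of how the swarm-mode endpoints documented above are consumed in practice, the following is a minimal sketch using the official Go client (`github.com/docker/docker/client`); it wraps the `POST /swarm/init` call, and the listen/advertise addresses are placeholder assumptions that need to be adapted to the host.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/api/types/swarm"
	"github.com/docker/docker/client"
)

func main() {
	// The client reads DOCKER_HOST etc. from the environment and negotiates
	// an API version with the daemon.
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	// Maps to POST /swarm/init; the struct mirrors the SwarmInitRequest schema above.
	nodeID, err := cli.SwarmInit(context.Background(), swarm.InitRequest{
		ListenAddr:    "0.0.0.0:2377",     // inter-manager listen address
		AdvertiseAddr: "192.168.1.1:2377", // assumption: replace with an address reachable by other nodes
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("swarm initialized, node ID:", nodeID)
}
```

The same client covers the other endpoints in this part of the spec (`ServiceCreate`, `TaskList`, `SecretList`, `ConfigList`, and so on), so the request and response schemas above map closely onto its argument and return types.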
gesellix
41568dfc665ad4033d7fc860a5dac8102915008e
9bc0c4903f7f02ce287b9918f64795368e507f9d
It's not very clear to me what this does given the description
rvolosatovs
4,559
moby/moby
42,621
Improve swagger.yaml to match the 1.41 API version
I recently tried to rely on the swagger.yaml for code generation of an API client. Some parts required manual fixes of the swagger docs to match the current API version 1.41. Each change described below lives in a dedicated commit, because the changes aim at different parts of the API description and aren't related per se. If you'd prefer to merge them all into a single commit, please leave a note. **- What I did** 1. Add RestartPolicy "no" to swagger docs 2. Add "changes" query parameter for /image/create to swagger docs 3. Fix ContainerSummary swagger docs (flattened) 4. Use explicit object names for improved swagger-based code generation (otherwise generic names would have been generated) 5. Fix swagger docs to match the [opencontainers image-spec](https://github.com/opencontainers/image-spec/blob/5ced465cc63831baf25330431a428a9d9444e192/specs-go/v1/descriptor.go) **- How I did it** Used the 1.41 swagger.yaml with swagger's codegen to generate Java code. Used that code to perform requests against a 1.41 Docker Engine. Applied some fixes and ran `hack/validate/swagger` and `make swagger-docs` to validate the changes. **- How to verify it** Compare the actual 1.41 API with the swagger.yaml. **- Description for the changelog** Update the swagger.yaml to match the version 1.41 API. **- A picture of a cute animal (not mandatory but encouraged)** ![062321_JC_mountain-cat_feat](https://user-images.githubusercontent.com/432791/125209896-77588e80-e29c-11eb-9a5f-439f5f04a69b.jpeg)
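As a loose illustration of the kind of consistency check such a validation workflow relies on, here is a small Go sketch that verifies the spec's `basePath` agrees with `info.version`. The use of `gopkg.in/yaml.v3` and the specific fields compared are assumptions made for illustration only, not necessarily what `hack/validate/swagger` does.

```go
package main

import (
	"fmt"
	"log"
	"os"

	"gopkg.in/yaml.v3" // assumption: any YAML parser would do here
)

// swaggerDoc picks out only the fields needed for the sanity check.
type swaggerDoc struct {
	Swagger  string `yaml:"swagger"`
	BasePath string `yaml:"basePath"`
	Info     struct {
		Version string `yaml:"version"`
	} `yaml:"info"`
}

func main() {
	raw, err := os.ReadFile("api/swagger.yaml")
	if err != nil {
		log.Fatal(err)
	}
	var doc swaggerDoc
	if err := yaml.Unmarshal(raw, &doc); err != nil {
		log.Fatal(err)
	}
	// basePath is expected to be "/v" followed by info.version, e.g. "/v1.41".
	if want := "/v" + doc.Info.Version; doc.BasePath != want {
		log.Fatalf("basePath %q does not match info.version %q", doc.BasePath, doc.Info.Version)
	}
	fmt.Printf("swagger %s: basePath %s agrees with API version %s\n",
		doc.Swagger, doc.BasePath, doc.Info.Version)
}
```

Running it from the repository root prints the spec version that a generated client would be pinned to.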
null
2021-07-11 21:09:21+00:00
2021-08-21 22:28:57+00:00
api/swagger.yaml
# A Swagger 2.0 (a.k.a. OpenAPI) definition of the Engine API. # # This is used for generating API documentation and the types used by the # client/server. See api/README.md for more information. # # Some style notes: # - This file is used by ReDoc, which allows GitHub Flavored Markdown in # descriptions. # - There is no maximum line length, for ease of editing and pretty diffs. # - operationIds are in the format "NounVerb", with a singular noun. swagger: "2.0" schemes: - "http" - "https" produces: - "application/json" - "text/plain" consumes: - "application/json" - "text/plain" basePath: "/v1.42" info: title: "Docker Engine API" version: "1.42" x-logo: url: "https://docs.docker.com/images/logo-docker-main.png" description: | The Engine API is an HTTP API served by Docker Engine. It is the API the Docker client uses to communicate with the Engine, so everything the Docker client can do can be done with the API. Most of the client's commands map directly to API endpoints (e.g. `docker ps` is `GET /containers/json`). The notable exception is running containers, which consists of several API calls. # Errors The API uses standard HTTP status codes to indicate the success or failure of the API call. The body of the response will be JSON in the following format: ``` { "message": "page not found" } ``` # Versioning The API is usually changed in each release, so API calls are versioned to ensure that clients don't break. To lock to a specific version of the API, you prefix the URL with its version, for example, call `/v1.30/info` to use the v1.30 version of the `/info` endpoint. If the API version specified in the URL is not supported by the daemon, a HTTP `400 Bad Request` error message is returned. If you omit the version-prefix, the current version of the API (v1.42) is used. For example, calling `/info` is the same as calling `/v1.42/info`. Using the API without a version-prefix is deprecated and will be removed in a future release. Engine releases in the near future should support this version of the API, so your client will continue to work even if it is talking to a newer Engine. The API uses an open schema model, which means server may add extra properties to responses. Likewise, the server will ignore any extra query parameters and request body properties. When you write clients, you need to ignore additional properties in responses to ensure they do not break when talking to newer daemons. # Authentication Authentication for registries is handled client side. The client has to send authentication details to various endpoints that need to communicate with registries, such as `POST /images/(name)/push`. These are sent as `X-Registry-Auth` header as a [base64url encoded](https://tools.ietf.org/html/rfc4648#section-5) (JSON) string with the following structure: ``` { "username": "string", "password": "string", "email": "string", "serveraddress": "string" } ``` The `serveraddress` is a domain/IP without a protocol. Throughout this structure, double quotes are required. If you have already got an identity token from the [`/auth` endpoint](#operation/SystemAuth), you can just pass this instead of credentials: ``` { "identitytoken": "9cbaf023786cd7..." } ``` # The tags on paths define the menu sections in the ReDoc documentation, so # the usage of tags must make sense for that: # - They should be singular, not plural. # - There should not be too many tags, or the menu becomes unwieldy. 
For # example, it is preferable to add a path to the "System" tag instead of # creating a tag with a single path in it. # - The order of tags in this list defines the order in the menu. tags: # Primary objects - name: "Container" x-displayName: "Containers" description: | Create and manage containers. - name: "Image" x-displayName: "Images" - name: "Network" x-displayName: "Networks" description: | Networks are user-defined networks that containers can be attached to. See the [networking documentation](https://docs.docker.com/network/) for more information. - name: "Volume" x-displayName: "Volumes" description: | Create and manage persistent storage that can be attached to containers. - name: "Exec" x-displayName: "Exec" description: | Run new commands inside running containers. Refer to the [command-line reference](https://docs.docker.com/engine/reference/commandline/exec/) for more information. To exec a command in a container, you first need to create an exec instance, then start it. These two API endpoints are wrapped up in a single command-line command, `docker exec`. # Swarm things - name: "Swarm" x-displayName: "Swarm" description: | Engines can be clustered together in a swarm. Refer to the [swarm mode documentation](https://docs.docker.com/engine/swarm/) for more information. - name: "Node" x-displayName: "Nodes" description: | Nodes are instances of the Engine participating in a swarm. Swarm mode must be enabled for these endpoints to work. - name: "Service" x-displayName: "Services" description: | Services are the definitions of tasks to run on a swarm. Swarm mode must be enabled for these endpoints to work. - name: "Task" x-displayName: "Tasks" description: | A task is a container running on a swarm. It is the atomic scheduling unit of swarm. Swarm mode must be enabled for these endpoints to work. - name: "Secret" x-displayName: "Secrets" description: | Secrets are sensitive data that can be used by services. Swarm mode must be enabled for these endpoints to work. - name: "Config" x-displayName: "Configs" description: | Configs are application configurations that can be used by services. Swarm mode must be enabled for these endpoints to work. 
# System things - name: "Plugin" x-displayName: "Plugins" - name: "System" x-displayName: "System" definitions: Port: type: "object" description: "An open port on a container" required: [PrivatePort, Type] properties: IP: type: "string" format: "ip-address" description: "Host IP address that the container's port is mapped to" PrivatePort: type: "integer" format: "uint16" x-nullable: false description: "Port on the container" PublicPort: type: "integer" format: "uint16" description: "Port exposed on the host" Type: type: "string" x-nullable: false enum: ["tcp", "udp", "sctp"] example: PrivatePort: 8080 PublicPort: 80 Type: "tcp" MountPoint: type: "object" description: "A mount point inside a container" properties: Type: type: "string" Name: type: "string" Source: type: "string" Destination: type: "string" Driver: type: "string" Mode: type: "string" RW: type: "boolean" Propagation: type: "string" DeviceMapping: type: "object" description: "A device mapping between the host and container" properties: PathOnHost: type: "string" PathInContainer: type: "string" CgroupPermissions: type: "string" example: PathOnHost: "/dev/deviceName" PathInContainer: "/dev/deviceName" CgroupPermissions: "mrw" DeviceRequest: type: "object" description: "A request for devices to be sent to device drivers" properties: Driver: type: "string" example: "nvidia" Count: type: "integer" example: -1 DeviceIDs: type: "array" items: type: "string" example: - "0" - "1" - "GPU-fef8089b-4820-abfc-e83e-94318197576e" Capabilities: description: | A list of capabilities; an OR list of AND lists of capabilities. type: "array" items: type: "array" items: type: "string" example: # gpu AND nvidia AND compute - ["gpu", "nvidia", "compute"] Options: description: | Driver-specific options, specified as a key/value pairs. These options are passed directly to the driver. type: "object" additionalProperties: type: "string" ThrottleDevice: type: "object" properties: Path: description: "Device path" type: "string" Rate: description: "Rate" type: "integer" format: "int64" minimum: 0 Mount: type: "object" properties: Target: description: "Container path." type: "string" Source: description: "Mount source (e.g. a volume name, a host path)." type: "string" Type: description: | The mount type. Available types: - `bind` Mounts a file or directory from the host into the container. Must exist prior to creating the container. - `volume` Creates a volume with the given name and options (or uses a pre-existing volume with the same name and options). These are **not** removed when the container is removed. - `tmpfs` Create a tmpfs with the given options. The mount source cannot be specified for tmpfs. - `npipe` Mounts a named pipe from the host into the container. Must exist prior to creating the container. type: "string" enum: - "bind" - "volume" - "tmpfs" - "npipe" ReadOnly: description: "Whether the mount should be read-only." type: "boolean" Consistency: description: "The consistency requirement for the mount: `default`, `consistent`, `cached`, or `delegated`." type: "string" BindOptions: description: "Optional configuration for the `bind` type." type: "object" properties: Propagation: description: "A propagation mode with the value `[r]private`, `[r]shared`, or `[r]slave`." type: "string" enum: - "private" - "rprivate" - "shared" - "rshared" - "slave" - "rslave" NonRecursive: description: "Disable recursive bind mount." type: "boolean" default: false VolumeOptions: description: "Optional configuration for the `volume` type." 
type: "object" properties: NoCopy: description: "Populate volume with data from the target." type: "boolean" default: false Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" DriverConfig: description: "Map of driver specific options" type: "object" properties: Name: description: "Name of the driver to use to create the volume." type: "string" Options: description: "key/value map of driver specific options." type: "object" additionalProperties: type: "string" TmpfsOptions: description: "Optional configuration for the `tmpfs` type." type: "object" properties: SizeBytes: description: "The size for the tmpfs mount in bytes." type: "integer" format: "int64" Mode: description: "The permission mode for the tmpfs mount in an integer." type: "integer" RestartPolicy: description: | The behavior to apply when the container exits. The default is not to restart. An ever increasing delay (double the previous delay, starting at 100ms) is added before each restart to prevent flooding the server. type: "object" properties: Name: type: "string" description: | - Empty string means not to restart - `always` Always restart - `unless-stopped` Restart always except when the user has manually stopped the container - `on-failure` Restart only when the container exit code is non-zero enum: - "" - "always" - "unless-stopped" - "on-failure" MaximumRetryCount: type: "integer" description: | If `on-failure` is used, the number of times to retry before giving up. Resources: description: "A container's resources (cgroups config, ulimits, etc)" type: "object" properties: # Applicable to all platforms CpuShares: description: | An integer value representing this container's relative CPU weight versus other containers. type: "integer" Memory: description: "Memory limit in bytes." type: "integer" format: "int64" default: 0 # Applicable to UNIX platforms CgroupParent: description: | Path to `cgroups` under which the container's `cgroup` is created. If the path is not absolute, the path is considered to be relative to the `cgroups` path of the init process. Cgroups are created if they do not already exist. type: "string" BlkioWeight: description: "Block IO weight (relative weight)." type: "integer" minimum: 0 maximum: 1000 BlkioWeightDevice: description: | Block IO weight (relative device weight) in the form: ``` [{"Path": "device_path", "Weight": weight}] ``` type: "array" items: type: "object" properties: Path: type: "string" Weight: type: "integer" minimum: 0 BlkioDeviceReadBps: description: | Limit read rate (bytes per second) from a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" BlkioDeviceWriteBps: description: | Limit write rate (bytes per second) to a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" BlkioDeviceReadIOps: description: | Limit read rate (IO per second) from a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" BlkioDeviceWriteIOps: description: | Limit write rate (IO per second) to a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" CpuPeriod: description: "The length of a CPU period in microseconds." type: "integer" format: "int64" CpuQuota: description: | Microseconds of CPU time that the container can get in a CPU period. 
type: "integer" format: "int64" CpuRealtimePeriod: description: | The length of a CPU real-time period in microseconds. Set to 0 to allocate no time allocated to real-time tasks. type: "integer" format: "int64" CpuRealtimeRuntime: description: | The length of a CPU real-time runtime in microseconds. Set to 0 to allocate no time allocated to real-time tasks. type: "integer" format: "int64" CpusetCpus: description: | CPUs in which to allow execution (e.g., `0-3`, `0,1`). type: "string" example: "0-3" CpusetMems: description: | Memory nodes (MEMs) in which to allow execution (0-3, 0,1). Only effective on NUMA systems. type: "string" Devices: description: "A list of devices to add to the container." type: "array" items: $ref: "#/definitions/DeviceMapping" DeviceCgroupRules: description: "a list of cgroup rules to apply to the container" type: "array" items: type: "string" example: "c 13:* rwm" DeviceRequests: description: | A list of requests for devices to be sent to device drivers. type: "array" items: $ref: "#/definitions/DeviceRequest" KernelMemory: description: | Kernel memory limit in bytes. <p><br /></p> > **Deprecated**: This field is deprecated as the kernel 5.4 deprecated > `kmem.limit_in_bytes`. type: "integer" format: "int64" example: 209715200 KernelMemoryTCP: description: "Hard limit for kernel TCP buffer memory (in bytes)." type: "integer" format: "int64" MemoryReservation: description: "Memory soft limit in bytes." type: "integer" format: "int64" MemorySwap: description: | Total memory limit (memory + swap). Set as `-1` to enable unlimited swap. type: "integer" format: "int64" MemorySwappiness: description: | Tune a container's memory swappiness behavior. Accepts an integer between 0 and 100. type: "integer" format: "int64" minimum: 0 maximum: 100 NanoCpus: description: "CPU quota in units of 10<sup>-9</sup> CPUs." type: "integer" format: "int64" OomKillDisable: description: "Disable OOM Killer for the container." type: "boolean" Init: description: | Run an init inside the container that forwards signals and reaps processes. This field is omitted if empty, and the default (as configured on the daemon) is used. type: "boolean" x-nullable: true PidsLimit: description: | Tune a container's PIDs limit. Set `0` or `-1` for unlimited, or `null` to not change. type: "integer" format: "int64" x-nullable: true Ulimits: description: | A list of resource limits to set in the container. For example: ``` {"Name": "nofile", "Soft": 1024, "Hard": 2048} ``` type: "array" items: type: "object" properties: Name: description: "Name of ulimit" type: "string" Soft: description: "Soft limit" type: "integer" Hard: description: "Hard limit" type: "integer" # Applicable to Windows CpuCount: description: | The number of usable CPUs (Windows only). On Windows Server containers, the processor resource controls are mutually exclusive. The order of precedence is `CPUCount` first, then `CPUShares`, and `CPUPercent` last. type: "integer" format: "int64" CpuPercent: description: | The usable percentage of the available CPUs (Windows only). On Windows Server containers, the processor resource controls are mutually exclusive. The order of precedence is `CPUCount` first, then `CPUShares`, and `CPUPercent` last. type: "integer" format: "int64" IOMaximumIOps: description: "Maximum IOps for the container system drive (Windows only)" type: "integer" format: "int64" IOMaximumBandwidth: description: | Maximum IO in bytes per second for the container system drive (Windows only). 
type: "integer" format: "int64" Limit: description: | An object describing a limit on resources which can be requested by a task. type: "object" properties: NanoCPUs: type: "integer" format: "int64" example: 4000000000 MemoryBytes: type: "integer" format: "int64" example: 8272408576 Pids: description: | Limits the maximum number of PIDs in the container. Set `0` for unlimited. type: "integer" format: "int64" default: 0 example: 100 ResourceObject: description: | An object describing the resources which can be advertised by a node and requested by a task. type: "object" properties: NanoCPUs: type: "integer" format: "int64" example: 4000000000 MemoryBytes: type: "integer" format: "int64" example: 8272408576 GenericResources: $ref: "#/definitions/GenericResources" GenericResources: description: | User-defined resources can be either Integer resources (e.g, `SSD=3`) or String resources (e.g, `GPU=UUID1`). type: "array" items: type: "object" properties: NamedResourceSpec: type: "object" properties: Kind: type: "string" Value: type: "string" DiscreteResourceSpec: type: "object" properties: Kind: type: "string" Value: type: "integer" format: "int64" example: - DiscreteResourceSpec: Kind: "SSD" Value: 3 - NamedResourceSpec: Kind: "GPU" Value: "UUID1" - NamedResourceSpec: Kind: "GPU" Value: "UUID2" HealthConfig: description: "A test to perform to check that the container is healthy." type: "object" properties: Test: description: | The test to perform. Possible values are: - `[]` inherit healthcheck from image or parent image - `["NONE"]` disable healthcheck - `["CMD", args...]` exec arguments directly - `["CMD-SHELL", command]` run command with system's default shell type: "array" items: type: "string" Interval: description: | The time to wait between checks in nanoseconds. It should be 0 or at least 1000000 (1 ms). 0 means inherit. type: "integer" Timeout: description: | The time to wait before considering the check to have hung. It should be 0 or at least 1000000 (1 ms). 0 means inherit. type: "integer" Retries: description: | The number of consecutive failures needed to consider a container as unhealthy. 0 means inherit. type: "integer" StartPeriod: description: | Start period for the container to initialize before starting health-retries countdown in nanoseconds. It should be 0 or at least 1000000 (1 ms). 0 means inherit. type: "integer" Health: description: | Health stores information about the container's healthcheck results. type: "object" properties: Status: description: | Status is one of `none`, `starting`, `healthy` or `unhealthy` - "none" Indicates there is no healthcheck - "starting" Starting indicates that the container is not yet ready - "healthy" Healthy indicates that the container is running correctly - "unhealthy" Unhealthy indicates that the container has a problem type: "string" enum: - "none" - "starting" - "healthy" - "unhealthy" example: "healthy" FailingStreak: description: "FailingStreak is the number of consecutive failures" type: "integer" example: 0 Log: type: "array" description: | Log contains the last few results (oldest first) items: x-nullable: true $ref: "#/definitions/HealthcheckResult" HealthcheckResult: description: | HealthcheckResult stores information about a single run of a healthcheck probe type: "object" properties: Start: description: | Date and time at which this check started in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. 
type: "string" format: "date-time" example: "2020-01-04T10:44:24.496525531Z" End: description: | Date and time at which this check ended in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2020-01-04T10:45:21.364524523Z" ExitCode: description: | ExitCode meanings: - `0` healthy - `1` unhealthy - `2` reserved (considered unhealthy) - other values: error running probe type: "integer" example: 0 Output: description: "Output from last check" type: "string" HostConfig: description: "Container configuration that depends on the host we are running on" allOf: - $ref: "#/definitions/Resources" - type: "object" properties: # Applicable to all platforms Binds: type: "array" description: | A list of volume bindings for this container. Each volume binding is a string in one of these forms: - `host-src:container-dest[:options]` to bind-mount a host path into the container. Both `host-src`, and `container-dest` must be an _absolute_ path. - `volume-name:container-dest[:options]` to bind-mount a volume managed by a volume driver into the container. `container-dest` must be an _absolute_ path. `options` is an optional, comma-delimited list of: - `nocopy` disables automatic copying of data from the container path to the volume. The `nocopy` flag only applies to named volumes. - `[ro|rw]` mounts a volume read-only or read-write, respectively. If omitted or set to `rw`, volumes are mounted read-write. - `[z|Z]` applies SELinux labels to allow or deny multiple containers to read and write to the same volume. - `z`: a _shared_ content label is applied to the content. This label indicates that multiple containers can share the volume content, for both reading and writing. - `Z`: a _private unshared_ label is applied to the content. This label indicates that only the current container can use a private volume. Labeling systems such as SELinux require proper labels to be placed on volume content that is mounted into a container. Without a label, the security system can prevent a container's processes from using the content. By default, the labels set by the host operating system are not modified. - `[[r]shared|[r]slave|[r]private]` specifies mount [propagation behavior](https://www.kernel.org/doc/Documentation/filesystems/sharedsubtree.txt). This only applies to bind-mounted volumes, not internal volumes or named volumes. Mount propagation requires the source mount point (the location where the source directory is mounted in the host operating system) to have the correct propagation properties. For shared volumes, the source mount point must be set to `shared`. For slave volumes, the mount must be set to either `shared` or `slave`. items: type: "string" ContainerIDFile: type: "string" description: "Path to a file where the container ID is written" LogConfig: type: "object" description: "The logging configuration for this container" properties: Type: type: "string" enum: - "json-file" - "syslog" - "journald" - "gelf" - "fluentd" - "awslogs" - "splunk" - "etwlogs" - "none" Config: type: "object" additionalProperties: type: "string" NetworkMode: type: "string" description: | Network mode to use for this container. Supported standard values are: `bridge`, `host`, `none`, and `container:<name|id>`. Any other value is taken as a custom network's name to which this container should connect to. 
PortBindings: $ref: "#/definitions/PortMap" RestartPolicy: $ref: "#/definitions/RestartPolicy" AutoRemove: type: "boolean" description: | Automatically remove the container when the container's process exits. This has no effect if `RestartPolicy` is set. VolumeDriver: type: "string" description: "Driver that this container uses to mount volumes." VolumesFrom: type: "array" description: | A list of volumes to inherit from another container, specified in the form `<container name>[:<ro|rw>]`. items: type: "string" Mounts: description: | Specification for mounts to be added to the container. type: "array" items: $ref: "#/definitions/Mount" # Applicable to UNIX platforms CapAdd: type: "array" description: | A list of kernel capabilities to add to the container. Conflicts with option 'Capabilities'. items: type: "string" CapDrop: type: "array" description: | A list of kernel capabilities to drop from the container. Conflicts with option 'Capabilities'. items: type: "string" CgroupnsMode: type: "string" enum: - "private" - "host" description: | cgroup namespace mode for the container. Possible values are: - `"private"`: the container runs in its own private cgroup namespace - `"host"`: use the host system's cgroup namespace If not specified, the daemon default is used, which can either be `"private"` or `"host"`, depending on daemon version, kernel support and configuration. Dns: type: "array" description: "A list of DNS servers for the container to use." items: type: "string" DnsOptions: type: "array" description: "A list of DNS options." items: type: "string" DnsSearch: type: "array" description: "A list of DNS search domains." items: type: "string" ExtraHosts: type: "array" description: | A list of hostnames/IP mappings to add to the container's `/etc/hosts` file. Specified in the form `["hostname:IP"]`. items: type: "string" GroupAdd: type: "array" description: | A list of additional groups that the container process will run as. items: type: "string" IpcMode: type: "string" description: | IPC sharing mode for the container. Possible values are: - `"none"`: own private IPC namespace, with /dev/shm not mounted - `"private"`: own private IPC namespace - `"shareable"`: own private IPC namespace, with a possibility to share it with other containers - `"container:<name|id>"`: join another (shareable) container's IPC namespace - `"host"`: use the host system's IPC namespace If not specified, daemon default is used, which can either be `"private"` or `"shareable"`, depending on daemon version and configuration. Cgroup: type: "string" description: "Cgroup to use for the container." Links: type: "array" description: | A list of links for the container in the form `container_name:alias`. items: type: "string" OomScoreAdj: type: "integer" description: | An integer value containing the score given to the container in order to tune OOM killer preferences. example: 500 PidMode: type: "string" description: | Set the PID (Process) Namespace mode for the container. It can be either: - `"container:<name|id>"`: joins another container's PID namespace - `"host"`: use the host's PID namespace inside the container Privileged: type: "boolean" description: "Gives the container full access to the host." PublishAllPorts: type: "boolean" description: | Allocates an ephemeral host port for all of a container's exposed ports. Ports are de-allocated when the container stops and allocated when the container starts. The allocated port might be changed when restarting the container. 
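As a hedged example of the `PortBindings`, `AutoRemove`, and capability fields above, the snippet below builds a `HostConfig` that publishes container port 80/tcp on two host ports. It assumes the `github.com/docker/go-connections/nat` helper types used by the Go client; the host ports and capabilities shown are arbitrary.

```go
// Sketch: publishing 80/tcp on two host ports, mirroring the PortMap shape,
// plus AutoRemove and capability adjustments.
package demo

import (
	"github.com/docker/docker/api/types/container"
	"github.com/docker/go-connections/nat"
)

func portBindings() *container.HostConfig {
	return &container.HostConfig{
		PortBindings: nat.PortMap{
			"80/tcp": []nat.PortBinding{
				{HostIP: "0.0.0.0", HostPort: "8080"},
				{HostIP: "127.0.0.1", HostPort: "8081"},
			},
		},
		AutoRemove: true, // remove the container when its process exits
		CapAdd:     []string{"NET_ADMIN"},
		CapDrop:    []string{"MKNOD"},
	}
}
```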
The port is selected from the ephemeral port range that depends on the kernel. For example, on Linux the range is defined by `/proc/sys/net/ipv4/ip_local_port_range`. ReadonlyRootfs: type: "boolean" description: "Mount the container's root filesystem as read only." SecurityOpt: type: "array" description: "A list of string values to customize labels for MLS systems, such as SELinux." items: type: "string" StorageOpt: type: "object" description: | Storage driver options for this container, in the form `{"size": "120G"}`. additionalProperties: type: "string" Tmpfs: type: "object" description: | A map of container directories which should be replaced by tmpfs mounts, and their corresponding mount options. For example: ``` { "/run": "rw,noexec,nosuid,size=65536k" } ``` additionalProperties: type: "string" UTSMode: type: "string" description: "UTS namespace to use for the container." UsernsMode: type: "string" description: | Sets the usernamespace mode for the container when usernamespace remapping option is enabled. ShmSize: type: "integer" description: | Size of `/dev/shm` in bytes. If omitted, the system uses 64MB. minimum: 0 Sysctls: type: "object" description: | A list of kernel parameters (sysctls) to set in the container. For example: ``` {"net.ipv4.ip_forward": "1"} ``` additionalProperties: type: "string" Runtime: type: "string" description: "Runtime to use with this container." # Applicable to Windows ConsoleSize: type: "array" description: | Initial console size, as an `[height, width]` array. (Windows only) minItems: 2 maxItems: 2 items: type: "integer" minimum: 0 Isolation: type: "string" description: | Isolation technology of the container. (Windows only) enum: - "default" - "process" - "hyperv" MaskedPaths: type: "array" description: | The list of paths to be masked inside the container (this overrides the default set of paths). items: type: "string" ReadonlyPaths: type: "array" description: | The list of paths to be set as read-only inside the container (this overrides the default set of paths). items: type: "string" ContainerConfig: description: "Configuration for a container that is portable between hosts" type: "object" properties: Hostname: description: "The hostname to use for the container, as a valid RFC 1123 hostname." type: "string" Domainname: description: "The domain name to use for the container." type: "string" User: description: "The user that commands are run as inside the container." type: "string" AttachStdin: description: "Whether to attach to `stdin`." type: "boolean" default: false AttachStdout: description: "Whether to attach to `stdout`." type: "boolean" default: true AttachStderr: description: "Whether to attach to `stderr`." type: "boolean" default: true ExposedPorts: description: | An object mapping ports to an empty object in the form: `{"<port>/<tcp|udp|sctp>": {}}` type: "object" additionalProperties: type: "object" enum: - {} default: {} Tty: description: | Attach standard streams to a TTY, including `stdin` if it is not closed. type: "boolean" default: false OpenStdin: description: "Open `stdin`" type: "boolean" default: false StdinOnce: description: "Close `stdin` after one attached client disconnects" type: "boolean" default: false Env: description: | A list of environment variables to set inside the container in the form `["VAR=value", ...]`. A variable without `=` is removed from the environment, rather than to have an empty value. type: "array" items: type: "string" Cmd: description: | Command to run specified as a string or an array of strings. 
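To ground the portable `ContainerConfig` fields above (hostname, environment, exposed ports, labels, stop signal), here is a minimal sketch using the Go client's `container.Config`. Note the `"<port>/<protocol>"` keys for `ExposedPorts` and that an environment entry without `=` removes the variable; all concrete values are placeholders.

```go
// Sketch: a portable container configuration. ExposedPorts keys follow the
// "<port>/<protocol>" convention; values are empty structs.
package demo

import (
	"github.com/docker/docker/api/types/container"
	"github.com/docker/go-connections/nat"
)

func baseConfig() *container.Config {
	return &container.Config{
		Hostname:     "web-1",
		Image:        "nginx:alpine",
		Env:          []string{"APP_ENV=production", "DEBUG"}, // "DEBUG" (no "=") removes the variable
		ExposedPorts: nat.PortSet{"80/tcp": struct{}{}},
		Labels:       map[string]string{"com.example.team": "platform"},
		StopSignal:   "SIGTERM",
	}
}
```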
type: "array" items: type: "string" Healthcheck: $ref: "#/definitions/HealthConfig" ArgsEscaped: description: "Command is already escaped (Windows only)" type: "boolean" Image: description: | The name of the image to use when creating the container/ type: "string" Volumes: description: | An object mapping mount point paths inside the container to empty objects. type: "object" additionalProperties: type: "object" enum: - {} default: {} WorkingDir: description: "The working directory for commands to run in." type: "string" Entrypoint: description: | The entry point for the container as a string or an array of strings. If the array consists of exactly one empty string (`[""]`) then the entry point is reset to system default (i.e., the entry point used by docker when there is no `ENTRYPOINT` instruction in the `Dockerfile`). type: "array" items: type: "string" NetworkDisabled: description: "Disable networking for the container." type: "boolean" MacAddress: description: "MAC address of the container." type: "string" OnBuild: description: | `ONBUILD` metadata that were defined in the image's `Dockerfile`. type: "array" items: type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" StopSignal: description: | Signal to stop a container as a string or unsigned integer. type: "string" default: "SIGTERM" StopTimeout: description: "Timeout to stop a container in seconds." type: "integer" default: 10 Shell: description: | Shell for when `RUN`, `CMD`, and `ENTRYPOINT` uses a shell. type: "array" items: type: "string" NetworkingConfig: description: | NetworkingConfig represents the container's networking configuration for each of its interfaces. It is used for the networking configs specified in the `docker create` and `docker network connect` commands. type: "object" properties: EndpointsConfig: description: | A mapping of network name to endpoint configuration for that network. type: "object" additionalProperties: $ref: "#/definitions/EndpointSettings" example: # putting an example here, instead of using the example values from # /definitions/EndpointSettings, because containers/create currently # does not support attaching to multiple networks, so the example request # would be confusing if it showed that multiple networks can be contained # in the EndpointsConfig. # TODO remove once we support multiple networks on container create (see https://github.com/moby/moby/blob/07e6b843594e061f82baa5fa23c2ff7d536c2a05/daemon/create.go#L323) EndpointsConfig: isolated_nw: IPAMConfig: IPv4Address: "172.20.30.33" IPv6Address: "2001:db8:abcd::3033" LinkLocalIPs: - "169.254.34.68" - "fe80::3468" Links: - "container_1" - "container_2" Aliases: - "server_x" - "server_y" NetworkSettings: description: "NetworkSettings exposes the network settings in the API" type: "object" properties: Bridge: description: Name of the network's bridge (for example, `docker0`). type: "string" example: "docker0" SandboxID: description: SandboxID uniquely represents a container's network stack. type: "string" example: "9d12daf2c33f5959c8bf90aa513e4f65b561738661003029ec84830cd503a0c3" HairpinMode: description: | Indicates if hairpin NAT should be enabled on the virtual interface. type: "boolean" example: false LinkLocalIPv6Address: description: IPv6 unicast address using the link-local prefix. type: "string" example: "fe80::42:acff:fe11:1" LinkLocalIPv6PrefixLen: description: Prefix length of the IPv6 unicast address. 
type: "integer" example: "64" Ports: $ref: "#/definitions/PortMap" SandboxKey: description: SandboxKey identifies the sandbox type: "string" example: "/var/run/docker/netns/8ab54b426c38" # TODO is SecondaryIPAddresses actually used? SecondaryIPAddresses: description: "" type: "array" items: $ref: "#/definitions/Address" x-nullable: true # TODO is SecondaryIPv6Addresses actually used? SecondaryIPv6Addresses: description: "" type: "array" items: $ref: "#/definitions/Address" x-nullable: true # TODO properties below are part of DefaultNetworkSettings, which is # marked as deprecated since Docker 1.9 and to be removed in Docker v17.12 EndpointID: description: | EndpointID uniquely represents a service endpoint in a Sandbox. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "b88f5b905aabf2893f3cbc4ee42d1ea7980bbc0a92e2c8922b1e1795298afb0b" Gateway: description: | Gateway address for the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "172.17.0.1" GlobalIPv6Address: description: | Global IPv6 address for the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "2001:db8::5689" GlobalIPv6PrefixLen: description: | Mask length of the global IPv6 address. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "integer" example: 64 IPAddress: description: | IPv4 address for the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "172.17.0.4" IPPrefixLen: description: | Mask length of the IPv4 address. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "integer" example: 16 IPv6Gateway: description: | IPv6 gateway address for this network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. 
Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "2001:db8:2::100" MacAddress: description: | MAC address for the container on the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "02:42:ac:11:00:04" Networks: description: | Information about all networks that the container is connected to. type: "object" additionalProperties: $ref: "#/definitions/EndpointSettings" Address: description: Address represents an IPv4 or IPv6 IP address. type: "object" properties: Addr: description: IP address. type: "string" PrefixLen: description: Mask length of the IP address. type: "integer" PortMap: description: | PortMap describes the mapping of container ports to host ports, using the container's port-number and protocol as key in the format `<port>/<protocol>`, for example, `80/udp`. If a container's port is mapped for multiple protocols, separate entries are added to the mapping table. type: "object" additionalProperties: type: "array" x-nullable: true items: $ref: "#/definitions/PortBinding" example: "443/tcp": - HostIp: "127.0.0.1" HostPort: "4443" "80/tcp": - HostIp: "0.0.0.0" HostPort: "80" - HostIp: "0.0.0.0" HostPort: "8080" "80/udp": - HostIp: "0.0.0.0" HostPort: "80" "53/udp": - HostIp: "0.0.0.0" HostPort: "53" "2377/tcp": null PortBinding: description: | PortBinding represents a binding between a host IP address and a host port. type: "object" properties: HostIp: description: "Host IP address that the container's port is mapped to." type: "string" example: "127.0.0.1" HostPort: description: "Host port number that the container's port is mapped to." type: "string" example: "4443" GraphDriverData: description: "Information about a container's graph driver." 
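Since the top-level `NetworkSettings` fields above are deprecated in favour of the per-network `Networks` map, a consumer typically reads ports and endpoint data as sketched below. This assumes the Go client's container-inspect response shape around API v1.41 (`NetworkSettings.Ports` as a `PortMap`, `Networks` keyed by network name); field names may differ in other versions.

```go
// Sketch: reading the PortMap and per-network EndpointSettings from a
// container-inspect response, rather than the deprecated top-level fields.
package demo

import (
	"context"
	"fmt"

	"github.com/docker/docker/client"
)

func printPorts(ctx context.Context, cli *client.Client, containerID string) error {
	info, err := cli.ContainerInspect(ctx, containerID)
	if err != nil {
		return err
	}
	// Ports is keyed by "<port>/<protocol>", e.g. "80/tcp".
	for port, bindings := range info.NetworkSettings.Ports {
		for _, b := range bindings {
			fmt.Printf("%s -> %s:%s\n", port, b.HostIP, b.HostPort)
		}
	}
	// Per-network endpoint details (preferred over the deprecated bridge fields).
	for name, ep := range info.NetworkSettings.Networks {
		fmt.Printf("network %s: ip=%s gateway=%s mac=%s\n", name, ep.IPAddress, ep.Gateway, ep.MacAddress)
	}
	return nil
}
```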
type: "object" required: [Name, Data] properties: Name: type: "string" x-nullable: false Data: type: "object" x-nullable: false additionalProperties: type: "string" Image: type: "object" required: - Id - Parent - Comment - Created - Container - DockerVersion - Author - Architecture - Os - Size - VirtualSize - GraphDriver - RootFS properties: Id: type: "string" x-nullable: false RepoTags: type: "array" items: type: "string" RepoDigests: type: "array" items: type: "string" Parent: type: "string" x-nullable: false Comment: type: "string" x-nullable: false Created: type: "string" x-nullable: false Container: type: "string" x-nullable: false ContainerConfig: $ref: "#/definitions/ContainerConfig" DockerVersion: type: "string" x-nullable: false Author: type: "string" x-nullable: false Config: $ref: "#/definitions/ContainerConfig" Architecture: type: "string" x-nullable: false Os: type: "string" x-nullable: false OsVersion: type: "string" Size: type: "integer" format: "int64" x-nullable: false VirtualSize: type: "integer" format: "int64" x-nullable: false GraphDriver: $ref: "#/definitions/GraphDriverData" RootFS: type: "object" required: [Type] properties: Type: type: "string" x-nullable: false Layers: type: "array" items: type: "string" BaseLayer: type: "string" Metadata: type: "object" properties: LastTagTime: type: "string" format: "dateTime" ImageSummary: type: "object" required: - Id - ParentId - RepoTags - RepoDigests - Created - Size - SharedSize - VirtualSize - Labels - Containers properties: Id: type: "string" x-nullable: false ParentId: type: "string" x-nullable: false RepoTags: type: "array" x-nullable: false items: type: "string" RepoDigests: type: "array" x-nullable: false items: type: "string" Created: type: "integer" x-nullable: false Size: type: "integer" x-nullable: false SharedSize: type: "integer" x-nullable: false VirtualSize: type: "integer" x-nullable: false Labels: type: "object" x-nullable: false additionalProperties: type: "string" Containers: x-nullable: false type: "integer" AuthConfig: type: "object" properties: username: type: "string" password: type: "string" email: type: "string" serveraddress: type: "string" example: username: "hannibal" password: "xxxx" serveraddress: "https://index.docker.io/v1/" ProcessConfig: type: "object" properties: privileged: type: "boolean" user: type: "string" tty: type: "boolean" entrypoint: type: "string" arguments: type: "array" items: type: "string" Volume: type: "object" required: [Name, Driver, Mountpoint, Labels, Scope, Options] properties: Name: type: "string" description: "Name of the volume." x-nullable: false Driver: type: "string" description: "Name of the volume driver used by the volume." x-nullable: false Mountpoint: type: "string" description: "Mount path of the volume on the host." x-nullable: false CreatedAt: type: "string" format: "dateTime" description: "Date/Time the volume was created." Status: type: "object" description: | Low-level details about the volume, provided by the volume driver. Details are returned as a map with key/value pairs: `{"key":"value","key2":"value2"}`. The `Status` field is optional, and is omitted if the volume driver does not support this feature. additionalProperties: type: "object" Labels: type: "object" description: "User-defined key/value metadata." x-nullable: false additionalProperties: type: "string" Scope: type: "string" description: | The level at which the volume exists. Either `global` for cluster-wide, or `local` for machine level. 
default: "local" x-nullable: false enum: ["local", "global"] Options: type: "object" description: | The driver specific options used when creating the volume. additionalProperties: type: "string" UsageData: type: "object" x-nullable: true required: [Size, RefCount] description: | Usage details about the volume. This information is used by the `GET /system/df` endpoint, and omitted in other endpoints. properties: Size: type: "integer" default: -1 description: | Amount of disk space used by the volume (in bytes). This information is only available for volumes created with the `"local"` volume driver. For volumes created with other volume drivers, this field is set to `-1` ("not available") x-nullable: false RefCount: type: "integer" default: -1 description: | The number of containers referencing this volume. This field is set to `-1` if the reference-count is not available. x-nullable: false example: Name: "tardis" Driver: "custom" Mountpoint: "/var/lib/docker/volumes/tardis" Status: hello: "world" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Scope: "local" CreatedAt: "2016-06-07T20:31:11.853781916Z" Network: type: "object" properties: Name: type: "string" Id: type: "string" Created: type: "string" format: "dateTime" Scope: type: "string" Driver: type: "string" EnableIPv6: type: "boolean" IPAM: $ref: "#/definitions/IPAM" Internal: type: "boolean" Attachable: type: "boolean" Ingress: type: "boolean" Containers: type: "object" additionalProperties: $ref: "#/definitions/NetworkContainer" Options: type: "object" additionalProperties: type: "string" Labels: type: "object" additionalProperties: type: "string" example: Name: "net01" Id: "7d86d31b1478e7cca9ebed7e73aa0fdeec46c5ca29497431d3007d2d9e15ed99" Created: "2016-10-19T04:33:30.360899459Z" Scope: "local" Driver: "bridge" EnableIPv6: false IPAM: Driver: "default" Config: - Subnet: "172.19.0.0/16" Gateway: "172.19.0.1" Options: foo: "bar" Internal: false Attachable: false Ingress: false Containers: 19a4d5d687db25203351ed79d478946f861258f018fe384f229f2efa4b23513c: Name: "test" EndpointID: "628cadb8bcb92de107b2a1e516cbffe463e321f548feb37697cce00ad694f21a" MacAddress: "02:42:ac:13:00:02" IPv4Address: "172.19.0.2/16" IPv6Address: "" Options: com.docker.network.bridge.default_bridge: "true" com.docker.network.bridge.enable_icc: "true" com.docker.network.bridge.enable_ip_masquerade: "true" com.docker.network.bridge.host_binding_ipv4: "0.0.0.0" com.docker.network.bridge.name: "docker0" com.docker.network.driver.mtu: "1500" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" IPAM: type: "object" properties: Driver: description: "Name of the IPAM driver to use." type: "string" default: "default" Config: description: | List of IPAM configuration options, specified as a map: ``` {"Subnet": <CIDR>, "IPRange": <CIDR>, "Gateway": <IP address>, "AuxAddress": <device_name:IP address>} ``` type: "array" items: type: "object" additionalProperties: type: "string" Options: description: "Driver-specific options, specified as a map." 
type: "object" additionalProperties: type: "string" NetworkContainer: type: "object" properties: Name: type: "string" EndpointID: type: "string" MacAddress: type: "string" IPv4Address: type: "string" IPv6Address: type: "string" BuildInfo: type: "object" properties: id: type: "string" stream: type: "string" error: type: "string" errorDetail: $ref: "#/definitions/ErrorDetail" status: type: "string" progress: type: "string" progressDetail: $ref: "#/definitions/ProgressDetail" aux: $ref: "#/definitions/ImageID" BuildCache: type: "object" properties: ID: type: "string" Parent: type: "string" Type: type: "string" Description: type: "string" InUse: type: "boolean" Shared: type: "boolean" Size: description: | Amount of disk space used by the build cache (in bytes). type: "integer" CreatedAt: description: | Date and time at which the build cache was created in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2016-08-18T10:44:24.496525531Z" LastUsedAt: description: | Date and time at which the build cache was last used in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" x-nullable: true example: "2017-08-09T07:09:37.632105588Z" UsageCount: type: "integer" ImageID: type: "object" description: "Image ID or Digest" properties: ID: type: "string" example: ID: "sha256:85f05633ddc1c50679be2b16a0479ab6f7637f8884e0cfe0f4d20e1ebb3d6e7c" CreateImageInfo: type: "object" properties: id: type: "string" error: type: "string" status: type: "string" progress: type: "string" progressDetail: $ref: "#/definitions/ProgressDetail" PushImageInfo: type: "object" properties: error: type: "string" status: type: "string" progress: type: "string" progressDetail: $ref: "#/definitions/ProgressDetail" ErrorDetail: type: "object" properties: code: type: "integer" message: type: "string" ProgressDetail: type: "object" properties: current: type: "integer" total: type: "integer" ErrorResponse: description: "Represents an error." type: "object" required: ["message"] properties: message: description: "The error message." type: "string" x-nullable: false example: message: "Something went wrong." IdResponse: description: "Response to an API call that returns just an Id" type: "object" required: ["Id"] properties: Id: description: "The id of the newly created object." type: "string" x-nullable: false EndpointSettings: description: "Configuration for a network endpoint." type: "object" properties: # Configurations IPAMConfig: $ref: "#/definitions/EndpointIPAMConfig" Links: type: "array" items: type: "string" example: - "container_1" - "container_2" Aliases: type: "array" items: type: "string" example: - "server_x" - "server_y" # Operational data NetworkID: description: | Unique ID of the network. type: "string" example: "08754567f1f40222263eab4102e1c733ae697e8e354aa9cd6e18d7402835292a" EndpointID: description: | Unique ID for the service endpoint in a Sandbox. type: "string" example: "b88f5b905aabf2893f3cbc4ee42d1ea7980bbc0a92e2c8922b1e1795298afb0b" Gateway: description: | Gateway address for this network. type: "string" example: "172.17.0.1" IPAddress: description: | IPv4 address. type: "string" example: "172.17.0.4" IPPrefixLen: description: | Mask length of the IPv4 address. type: "integer" example: 16 IPv6Gateway: description: | IPv6 gateway address. type: "string" example: "2001:db8:2::100" GlobalIPv6Address: description: | Global IPv6 address. 
type: "string" example: "2001:db8::5689" GlobalIPv6PrefixLen: description: | Mask length of the global IPv6 address. type: "integer" format: "int64" example: 64 MacAddress: description: | MAC address for the endpoint on this network. type: "string" example: "02:42:ac:11:00:04" DriverOpts: description: | DriverOpts is a mapping of driver options and values. These options are passed directly to the driver and are driver specific. type: "object" x-nullable: true additionalProperties: type: "string" example: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" EndpointIPAMConfig: description: | EndpointIPAMConfig represents an endpoint's IPAM configuration. type: "object" x-nullable: true properties: IPv4Address: type: "string" example: "172.20.30.33" IPv6Address: type: "string" example: "2001:db8:abcd::3033" LinkLocalIPs: type: "array" items: type: "string" example: - "169.254.34.68" - "fe80::3468" PluginMount: type: "object" x-nullable: false required: [Name, Description, Settable, Source, Destination, Type, Options] properties: Name: type: "string" x-nullable: false example: "some-mount" Description: type: "string" x-nullable: false example: "This is a mount that's used by the plugin." Settable: type: "array" items: type: "string" Source: type: "string" example: "/var/lib/docker/plugins/" Destination: type: "string" x-nullable: false example: "/mnt/state" Type: type: "string" x-nullable: false example: "bind" Options: type: "array" items: type: "string" example: - "rbind" - "rw" PluginDevice: type: "object" required: [Name, Description, Settable, Path] x-nullable: false properties: Name: type: "string" x-nullable: false Description: type: "string" x-nullable: false Settable: type: "array" items: type: "string" Path: type: "string" example: "/dev/fuse" PluginEnv: type: "object" x-nullable: false required: [Name, Description, Settable, Value] properties: Name: x-nullable: false type: "string" Description: x-nullable: false type: "string" Settable: type: "array" items: type: "string" Value: type: "string" PluginInterfaceType: type: "object" x-nullable: false required: [Prefix, Capability, Version] properties: Prefix: type: "string" x-nullable: false Capability: type: "string" x-nullable: false Version: type: "string" x-nullable: false Plugin: description: "A plugin for the Engine API" type: "object" required: [Settings, Enabled, Config, Name] properties: Id: type: "string" example: "5724e2c8652da337ab2eedd19fc6fc0ec908e4bd907c7421bf6a8dfc70c4c078" Name: type: "string" x-nullable: false example: "tiborvass/sample-volume-plugin" Enabled: description: True if the plugin is running. False if the plugin is not running, only installed. type: "boolean" x-nullable: false example: true Settings: description: "Settings that can be modified by users." type: "object" x-nullable: false required: [Args, Devices, Env, Mounts] properties: Mounts: type: "array" items: $ref: "#/definitions/PluginMount" Env: type: "array" items: type: "string" example: - "DEBUG=0" Args: type: "array" items: type: "string" Devices: type: "array" items: $ref: "#/definitions/PluginDevice" PluginReference: description: "plugin remote reference used to push/pull the plugin" type: "string" x-nullable: false example: "localhost:5000/tiborvass/sample-volume-plugin:latest" Config: description: "The config of a plugin." 
type: "object" x-nullable: false required: - Description - Documentation - Interface - Entrypoint - WorkDir - Network - Linux - PidHost - PropagatedMount - IpcHost - Mounts - Env - Args properties: DockerVersion: description: "Docker Version used to create the plugin" type: "string" x-nullable: false example: "17.06.0-ce" Description: type: "string" x-nullable: false example: "A sample volume plugin for Docker" Documentation: type: "string" x-nullable: false example: "https://docs.docker.com/engine/extend/plugins/" Interface: description: "The interface between Docker and the plugin" x-nullable: false type: "object" required: [Types, Socket] properties: Types: type: "array" items: $ref: "#/definitions/PluginInterfaceType" example: - "docker.volumedriver/1.0" Socket: type: "string" x-nullable: false example: "plugins.sock" ProtocolScheme: type: "string" example: "some.protocol/v1.0" description: "Protocol to use for clients connecting to the plugin." enum: - "" - "moby.plugins.http/v1" Entrypoint: type: "array" items: type: "string" example: - "/usr/bin/sample-volume-plugin" - "/data" WorkDir: type: "string" x-nullable: false example: "/bin/" User: type: "object" x-nullable: false properties: UID: type: "integer" format: "uint32" example: 1000 GID: type: "integer" format: "uint32" example: 1000 Network: type: "object" x-nullable: false required: [Type] properties: Type: x-nullable: false type: "string" example: "host" Linux: type: "object" x-nullable: false required: [Capabilities, AllowAllDevices, Devices] properties: Capabilities: type: "array" items: type: "string" example: - "CAP_SYS_ADMIN" - "CAP_SYSLOG" AllowAllDevices: type: "boolean" x-nullable: false example: false Devices: type: "array" items: $ref: "#/definitions/PluginDevice" PropagatedMount: type: "string" x-nullable: false example: "/mnt/volumes" IpcHost: type: "boolean" x-nullable: false example: false PidHost: type: "boolean" x-nullable: false example: false Mounts: type: "array" items: $ref: "#/definitions/PluginMount" Env: type: "array" items: $ref: "#/definitions/PluginEnv" example: - Name: "DEBUG" Description: "If set, prints debug messages" Settable: null Value: "0" Args: type: "object" x-nullable: false required: [Name, Description, Settable, Value] properties: Name: x-nullable: false type: "string" example: "args" Description: x-nullable: false type: "string" example: "command line arguments" Settable: type: "array" items: type: "string" Value: type: "array" items: type: "string" rootfs: type: "object" properties: type: type: "string" example: "layers" diff_ids: type: "array" items: type: "string" example: - "sha256:675532206fbf3030b8458f88d6e26d4eb1577688a25efec97154c94e8b6b4887" - "sha256:e216a057b1cb1efc11f8a268f37ef62083e70b1b38323ba252e25ac88904a7e8" ObjectVersion: description: | The version number of the object such as node, service, etc. This is needed to avoid conflicting writes. The client must send the version number along with the modified specification when updating these objects. This approach ensures safe concurrency and determinism in that the change on the object may not be applied if the version number has changed from the last read. In other words, if two update requests specify the same base version, only one of the requests can succeed. As a result, two separate update requests that happen at the same time will not unintentionally overwrite each other. 
type: "object" properties: Index: type: "integer" format: "uint64" example: 373531 NodeSpec: type: "object" properties: Name: description: "Name for the node." type: "string" example: "my-node" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" Role: description: "Role of the node." type: "string" enum: - "worker" - "manager" example: "manager" Availability: description: "Availability of the node." type: "string" enum: - "active" - "pause" - "drain" example: "active" example: Availability: "active" Name: "node-name" Role: "manager" Labels: foo: "bar" Node: type: "object" properties: ID: type: "string" example: "24ifsmvkjbyhk" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: description: | Date and time at which the node was added to the swarm in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2016-08-18T10:44:24.496525531Z" UpdatedAt: description: | Date and time at which the node was last updated in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2017-08-09T07:09:37.632105588Z" Spec: $ref: "#/definitions/NodeSpec" Description: $ref: "#/definitions/NodeDescription" Status: $ref: "#/definitions/NodeStatus" ManagerStatus: $ref: "#/definitions/ManagerStatus" NodeDescription: description: | NodeDescription encapsulates the properties of the Node as reported by the agent. type: "object" properties: Hostname: type: "string" example: "bf3067039e47" Platform: $ref: "#/definitions/Platform" Resources: $ref: "#/definitions/ResourceObject" Engine: $ref: "#/definitions/EngineDescription" TLSInfo: $ref: "#/definitions/TLSInfo" Platform: description: | Platform represents the platform (Arch/OS). type: "object" properties: Architecture: description: | Architecture represents the hardware architecture (for example, `x86_64`). type: "string" example: "x86_64" OS: description: | OS represents the Operating System (for example, `linux` or `windows`). type: "string" example: "linux" EngineDescription: description: "EngineDescription provides information about an engine." type: "object" properties: EngineVersion: type: "string" example: "17.06.0" Labels: type: "object" additionalProperties: type: "string" example: foo: "bar" Plugins: type: "array" items: type: "object" properties: Type: type: "string" Name: type: "string" example: - Type: "Log" Name: "awslogs" - Type: "Log" Name: "fluentd" - Type: "Log" Name: "gcplogs" - Type: "Log" Name: "gelf" - Type: "Log" Name: "journald" - Type: "Log" Name: "json-file" - Type: "Log" Name: "logentries" - Type: "Log" Name: "splunk" - Type: "Log" Name: "syslog" - Type: "Network" Name: "bridge" - Type: "Network" Name: "host" - Type: "Network" Name: "ipvlan" - Type: "Network" Name: "macvlan" - Type: "Network" Name: "null" - Type: "Network" Name: "overlay" - Type: "Volume" Name: "local" - Type: "Volume" Name: "localhost:5000/vieux/sshfs:latest" - Type: "Volume" Name: "vieux/sshfs:latest" TLSInfo: description: | Information about the issuer of leaf TLS certificates and the trusted root CA certificate. type: "object" properties: TrustRoot: description: | The root CA certificate(s) that are used to validate leaf TLS certificates. type: "string" CertIssuerSubject: description: The base64-url-safe-encoded raw subject bytes of the issuer. type: "string" CertIssuerPublicKey: description: | The base64-url-safe-encoded raw public key bytes of the issuer. 
type: "string" example: TrustRoot: | -----BEGIN CERTIFICATE----- MIIBajCCARCgAwIBAgIUbYqrLSOSQHoxD8CwG6Bi2PJi9c8wCgYIKoZIzj0EAwIw EzERMA8GA1UEAxMIc3dhcm0tY2EwHhcNMTcwNDI0MjE0MzAwWhcNMzcwNDE5MjE0 MzAwWjATMREwDwYDVQQDEwhzd2FybS1jYTBZMBMGByqGSM49AgEGCCqGSM49AwEH A0IABJk/VyMPYdaqDXJb/VXh5n/1Yuv7iNrxV3Qb3l06XD46seovcDWs3IZNV1lf 3Skyr0ofcchipoiHkXBODojJydSjQjBAMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMB Af8EBTADAQH/MB0GA1UdDgQWBBRUXxuRcnFjDfR/RIAUQab8ZV/n4jAKBggqhkjO PQQDAgNIADBFAiAy+JTe6Uc3KyLCMiqGl2GyWGQqQDEcO3/YG36x7om65AIhAJvz pxv6zFeVEkAEEkqIYi0omA9+CjanB/6Bz4n1uw8H -----END CERTIFICATE----- CertIssuerSubject: "MBMxETAPBgNVBAMTCHN3YXJtLWNh" CertIssuerPublicKey: "MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEmT9XIw9h1qoNclv9VeHmf/Vi6/uI2vFXdBveXTpcPjqx6i9wNazchk1XWV/dKTKvSh9xyGKmiIeRcE4OiMnJ1A==" NodeStatus: description: | NodeStatus represents the status of a node. It provides the current status of the node, as seen by the manager. type: "object" properties: State: $ref: "#/definitions/NodeState" Message: type: "string" example: "" Addr: description: "IP address of the node." type: "string" example: "172.17.0.2" NodeState: description: "NodeState represents the state of a node." type: "string" enum: - "unknown" - "down" - "ready" - "disconnected" example: "ready" ManagerStatus: description: | ManagerStatus represents the status of a manager. It provides the current status of a node's manager component, if the node is a manager. x-nullable: true type: "object" properties: Leader: type: "boolean" default: false example: true Reachability: $ref: "#/definitions/Reachability" Addr: description: | The IP address and port at which the manager is reachable. type: "string" example: "10.0.0.46:2377" Reachability: description: "Reachability represents the reachability of a node." type: "string" enum: - "unknown" - "unreachable" - "reachable" example: "reachable" SwarmSpec: description: "User modifiable swarm configuration." type: "object" properties: Name: description: "Name of the swarm." type: "string" example: "default" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" example: com.example.corp.type: "production" com.example.corp.department: "engineering" Orchestration: description: "Orchestration configuration." type: "object" x-nullable: true properties: TaskHistoryRetentionLimit: description: | The number of historic tasks to keep per instance or node. If negative, never remove completed or failed tasks. type: "integer" format: "int64" example: 10 Raft: description: "Raft configuration." type: "object" properties: SnapshotInterval: description: "The number of log entries between snapshots." type: "integer" format: "uint64" example: 10000 KeepOldSnapshots: description: | The number of snapshots to keep beyond the current snapshot. type: "integer" format: "uint64" LogEntriesForSlowFollowers: description: | The number of log entries to keep around to sync up slow followers after a snapshot is created. type: "integer" format: "uint64" example: 500 ElectionTick: description: | The number of ticks that a follower will wait for a message from the leader before becoming a candidate and starting an election. `ElectionTick` must be greater than `HeartbeatTick`. A tick currently defaults to one second, so these translate directly to seconds currently, but this is NOT guaranteed. type: "integer" example: 3 HeartbeatTick: description: | The number of ticks between heartbeats. Every HeartbeatTick ticks, the leader will send a heartbeat to the followers. 
A tick currently defaults to one second, so these translate directly to seconds currently, but this is NOT guaranteed. type: "integer" example: 1 Dispatcher: description: "Dispatcher configuration." type: "object" x-nullable: true properties: HeartbeatPeriod: description: | The delay for an agent to send a heartbeat to the dispatcher. type: "integer" format: "int64" example: 5000000000 CAConfig: description: "CA configuration." type: "object" x-nullable: true properties: NodeCertExpiry: description: "The duration node certificates are issued for." type: "integer" format: "int64" example: 7776000000000000 ExternalCAs: description: | Configuration for forwarding signing requests to an external certificate authority. type: "array" items: type: "object" properties: Protocol: description: | Protocol for communication with the external CA (currently only `cfssl` is supported). type: "string" enum: - "cfssl" default: "cfssl" URL: description: | URL where certificate signing requests should be sent. type: "string" Options: description: | An object with key/value pairs that are interpreted as protocol-specific options for the external CA driver. type: "object" additionalProperties: type: "string" CACert: description: | The root CA certificate (in PEM format) this external CA uses to issue TLS certificates (assumed to be to the current swarm root CA certificate if not provided). type: "string" SigningCACert: description: | The desired signing CA certificate for all swarm node TLS leaf certificates, in PEM format. type: "string" SigningCAKey: description: | The desired signing CA key for all swarm node TLS leaf certificates, in PEM format. type: "string" ForceRotate: description: | An integer whose purpose is to force swarm to generate a new signing CA certificate and key, if none have been specified in `SigningCACert` and `SigningCAKey` format: "uint64" type: "integer" EncryptionConfig: description: "Parameters related to encryption-at-rest." type: "object" properties: AutoLockManagers: description: | If set, generate a key and use it to lock data stored on the managers. type: "boolean" example: false TaskDefaults: description: "Defaults for creating tasks in this cluster." type: "object" properties: LogDriver: description: | The log driver to use for tasks created in the orchestrator if unspecified by a service. Updating this value only affects new tasks. Existing tasks continue to use their previously configured log driver until recreated. type: "object" properties: Name: description: | The log driver to use as a default for new tasks. type: "string" example: "json-file" Options: description: | Driver-specific options for the selectd log driver, specified as key/value pairs. type: "object" additionalProperties: type: "string" example: "max-file": "10" "max-size": "100m" # The Swarm information for `GET /info`. It is the same as `GET /swarm`, but # without `JoinTokens`. ClusterInfo: description: | ClusterInfo represents information about the swarm as is returned by the "/info" endpoint. Join-tokens are not included. x-nullable: true type: "object" properties: ID: description: "The ID of the swarm." type: "string" example: "abajmipo7b4xz5ip2nrla6b11" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: description: | Date and time at which the swarm was initialised in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. 
type: "string" format: "dateTime" example: "2016-08-18T10:44:24.496525531Z" UpdatedAt: description: | Date and time at which the swarm was last updated in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2017-08-09T07:09:37.632105588Z" Spec: $ref: "#/definitions/SwarmSpec" TLSInfo: $ref: "#/definitions/TLSInfo" RootRotationInProgress: description: | Whether there is currently a root CA rotation in progress for the swarm type: "boolean" example: false DataPathPort: description: | DataPathPort specifies the data path port number for data traffic. Acceptable port range is 1024 to 49151. If no port is set or is set to 0, the default port (4789) is used. type: "integer" format: "uint32" default: 4789 example: 4789 DefaultAddrPool: description: | Default Address Pool specifies default subnet pools for global scope networks. type: "array" items: type: "string" format: "CIDR" example: ["10.10.0.0/16", "20.20.0.0/16"] SubnetSize: description: | SubnetSize specifies the subnet size of the networks created from the default subnet pool. type: "integer" format: "uint32" maximum: 29 default: 24 example: 24 JoinTokens: description: | JoinTokens contains the tokens workers and managers need to join the swarm. type: "object" properties: Worker: description: | The token workers can use to join the swarm. type: "string" example: "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-1awxwuwd3z9j1z3puu7rcgdbx" Manager: description: | The token managers can use to join the swarm. type: "string" example: "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-7p73s1dx5in4tatdymyhg9hu2" Swarm: type: "object" allOf: - $ref: "#/definitions/ClusterInfo" - type: "object" properties: JoinTokens: $ref: "#/definitions/JoinTokens" TaskSpec: description: "User modifiable task configuration." type: "object" properties: PluginSpec: type: "object" description: | Plugin spec for the service. *(Experimental release only.)* <p><br /></p> > **Note**: ContainerSpec, NetworkAttachmentSpec, and PluginSpec are > mutually exclusive. PluginSpec is only used when the Runtime field > is set to `plugin`. NetworkAttachmentSpec is used when the Runtime > field is set to `attachment`. properties: Name: description: "The name or 'alias' to use for the plugin." type: "string" Remote: description: "The plugin image reference to use." type: "string" Disabled: description: "Disable the plugin once scheduled." type: "boolean" PluginPrivilege: type: "array" items: description: | Describes a permission accepted by the user upon installing the plugin. type: "object" properties: Name: type: "string" Description: type: "string" Value: type: "array" items: type: "string" ContainerSpec: type: "object" description: | Container spec for the service. <p><br /></p> > **Note**: ContainerSpec, NetworkAttachmentSpec, and PluginSpec are > mutually exclusive. PluginSpec is only used when the Runtime field > is set to `plugin`. NetworkAttachmentSpec is used when the Runtime > field is set to `attachment`. properties: Image: description: "The image name to use for the container" type: "string" Labels: description: "User-defined key/value data." type: "object" additionalProperties: type: "string" Command: description: "The command to be run in the image." type: "array" items: type: "string" Args: description: "Arguments to the command." 
type: "array" items: type: "string" Hostname: description: | The hostname to use for the container, as a valid [RFC 1123](https://tools.ietf.org/html/rfc1123) hostname. type: "string" Env: description: | A list of environment variables in the form `VAR=value`. type: "array" items: type: "string" Dir: description: "The working directory for commands to run in." type: "string" User: description: "The user inside the container." type: "string" Groups: type: "array" description: | A list of additional groups that the container process will run as. items: type: "string" Privileges: type: "object" description: "Security options for the container" properties: CredentialSpec: type: "object" description: "CredentialSpec for managed service account (Windows only)" properties: Config: type: "string" example: "0bt9dmxjvjiqermk6xrop3ekq" description: | Load credential spec from a Swarm Config with the given ID. The specified config must also be present in the Configs field with the Runtime property set. <p><br /></p> > **Note**: `CredentialSpec.File`, `CredentialSpec.Registry`, > and `CredentialSpec.Config` are mutually exclusive. File: type: "string" example: "spec.json" description: | Load credential spec from this file. The file is read by the daemon, and must be present in the `CredentialSpecs` subdirectory in the docker data directory, which defaults to `C:\ProgramData\Docker\` on Windows. For example, specifying `spec.json` loads `C:\ProgramData\Docker\CredentialSpecs\spec.json`. <p><br /></p> > **Note**: `CredentialSpec.File`, `CredentialSpec.Registry`, > and `CredentialSpec.Config` are mutually exclusive. Registry: type: "string" description: | Load credential spec from this value in the Windows registry. The specified registry value must be located in: `HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization\Containers\CredentialSpecs` <p><br /></p> > **Note**: `CredentialSpec.File`, `CredentialSpec.Registry`, > and `CredentialSpec.Config` are mutually exclusive. SELinuxContext: type: "object" description: "SELinux labels of the container" properties: Disable: type: "boolean" description: "Disable SELinux" User: type: "string" description: "SELinux user label" Role: type: "string" description: "SELinux role label" Type: type: "string" description: "SELinux type label" Level: type: "string" description: "SELinux level label" TTY: description: "Whether a pseudo-TTY should be allocated." type: "boolean" OpenStdin: description: "Open `stdin`" type: "boolean" ReadOnly: description: "Mount the container's root filesystem as read only." type: "boolean" Mounts: description: | Specification for mounts to be added to containers created as part of the service. type: "array" items: $ref: "#/definitions/Mount" StopSignal: description: "Signal to stop the container." type: "string" StopGracePeriod: description: | Amount of time to wait for the container to terminate before forcefully killing it. type: "integer" format: "int64" HealthCheck: $ref: "#/definitions/HealthConfig" Hosts: type: "array" description: | A list of hostname/IP mappings to add to the container's `hosts` file. The format of extra hosts is specified in the [hosts(5)](http://man7.org/linux/man-pages/man5/hosts.5.html) man page: IP_address canonical_hostname [aliases...] items: type: "string" DNSConfig: description: | Specification for DNS related configurations in resolver configuration file (`resolv.conf`). type: "object" properties: Nameservers: description: "The IP addresses of the name servers." 
type: "array" items: type: "string" Search: description: "A search list for host-name lookup." type: "array" items: type: "string" Options: description: | A list of internal resolver variables to be modified (e.g., `debug`, `ndots:3`, etc.). type: "array" items: type: "string" Secrets: description: | Secrets contains references to zero or more secrets that will be exposed to the service. type: "array" items: type: "object" properties: File: description: | File represents a specific target that is backed by a file. type: "object" properties: Name: description: | Name represents the final filename in the filesystem. type: "string" UID: description: "UID represents the file UID." type: "string" GID: description: "GID represents the file GID." type: "string" Mode: description: "Mode represents the FileMode of the file." type: "integer" format: "uint32" SecretID: description: | SecretID represents the ID of the specific secret that we're referencing. type: "string" SecretName: description: | SecretName is the name of the secret that this references, but this is just provided for lookup/display purposes. The secret in the reference will be identified by its ID. type: "string" Configs: description: | Configs contains references to zero or more configs that will be exposed to the service. type: "array" items: type: "object" properties: File: description: | File represents a specific target that is backed by a file. <p><br /><p> > **Note**: `Configs.File` and `Configs.Runtime` are mutually exclusive type: "object" properties: Name: description: | Name represents the final filename in the filesystem. type: "string" UID: description: "UID represents the file UID." type: "string" GID: description: "GID represents the file GID." type: "string" Mode: description: "Mode represents the FileMode of the file." type: "integer" format: "uint32" Runtime: description: | Runtime represents a target that is not mounted into the container but is used by the task <p><br /><p> > **Note**: `Configs.File` and `Configs.Runtime` are mutually > exclusive type: "object" ConfigID: description: | ConfigID represents the ID of the specific config that we're referencing. type: "string" ConfigName: description: | ConfigName is the name of the config that this references, but this is just provided for lookup/display purposes. The config in the reference will be identified by its ID. type: "string" Isolation: type: "string" description: | Isolation technology of the containers running the service. (Windows only) enum: - "default" - "process" - "hyperv" Init: description: | Run an init inside the container that forwards signals and reaps processes. This field is omitted if empty, and the default (as configured on the daemon) is used. type: "boolean" x-nullable: true Sysctls: description: | Set kernel namedspaced parameters (sysctls) in the container. The Sysctls option on services accepts the same sysctls as the are supported on containers. Note that while the same sysctls are supported, no guarantees or checks are made about their suitability for a clustered environment, and it's up to the user to determine whether a given sysctl will work properly in a Service. type: "object" additionalProperties: type: "string" # This option is not used by Windows containers CapabilityAdd: type: "array" description: | A list of kernel capabilities to add to the default set for the container. 
items: type: "string" example: - "CAP_NET_RAW" - "CAP_SYS_ADMIN" - "CAP_SYS_CHROOT" - "CAP_SYSLOG" CapabilityDrop: type: "array" description: | A list of kernel capabilities to drop from the default set for the container. items: type: "string" example: - "CAP_NET_RAW" Ulimits: description: | A list of resource limits to set in the container. For example: `{"Name": "nofile", "Soft": 1024, "Hard": 2048}`" type: "array" items: type: "object" properties: Name: description: "Name of ulimit" type: "string" Soft: description: "Soft limit" type: "integer" Hard: description: "Hard limit" type: "integer" NetworkAttachmentSpec: description: | Read-only spec type for non-swarm containers attached to swarm overlay networks. <p><br /></p> > **Note**: ContainerSpec, NetworkAttachmentSpec, and PluginSpec are > mutually exclusive. PluginSpec is only used when the Runtime field > is set to `plugin`. NetworkAttachmentSpec is used when the Runtime > field is set to `attachment`. type: "object" properties: ContainerID: description: "ID of the container represented by this task" type: "string" Resources: description: | Resource requirements which apply to each individual container created as part of the service. type: "object" properties: Limits: description: "Define resources limits." $ref: "#/definitions/Limit" Reservation: description: "Define resources reservation." $ref: "#/definitions/ResourceObject" RestartPolicy: description: | Specification for the restart policy which applies to containers created as part of this service. type: "object" properties: Condition: description: "Condition for restart." type: "string" enum: - "none" - "on-failure" - "any" Delay: description: "Delay between restart attempts." type: "integer" format: "int64" MaxAttempts: description: | Maximum attempts to restart a given container before giving up (default value is 0, which is ignored). type: "integer" format: "int64" default: 0 Window: description: | Windows is the time window used to evaluate the restart policy (default value is 0, which is unbounded). type: "integer" format: "int64" default: 0 Placement: type: "object" properties: Constraints: description: | An array of constraint expressions to limit the set of nodes where a task can be scheduled. Constraint expressions can either use a _match_ (`==`) or _exclude_ (`!=`) rule. Multiple constraints find nodes that satisfy every expression (AND match). Constraints can match node or Docker Engine labels as follows: node attribute | matches | example ---------------------|--------------------------------|----------------------------------------------- `node.id` | Node ID | `node.id==2ivku8v2gvtg4` `node.hostname` | Node hostname | `node.hostname!=node-2` `node.role` | Node role (`manager`/`worker`) | `node.role==manager` `node.platform.os` | Node operating system | `node.platform.os==windows` `node.platform.arch` | Node architecture | `node.platform.arch==x86_64` `node.labels` | User-defined node labels | `node.labels.security==high` `engine.labels` | Docker Engine's labels | `engine.labels.operatingsystem==ubuntu-14.04` `engine.labels` apply to Docker Engine labels like operating system, drivers, etc. Swarm administrators add `node.labels` for operational purposes by using the [`node update endpoint`](#operation/NodeUpdate). 
type: "array" items: type: "string" example: - "node.hostname!=node3.corp.example.com" - "node.role!=manager" - "node.labels.type==production" - "node.platform.os==linux" - "node.platform.arch==x86_64" Preferences: description: | Preferences provide a way to make the scheduler aware of factors such as topology. They are provided in order from highest to lowest precedence. type: "array" items: type: "object" properties: Spread: type: "object" properties: SpreadDescriptor: description: | label descriptor, such as `engine.labels.az`. type: "string" example: - Spread: SpreadDescriptor: "node.labels.datacenter" - Spread: SpreadDescriptor: "node.labels.rack" MaxReplicas: description: | Maximum number of replicas for per node (default value is 0, which is unlimited) type: "integer" format: "int64" default: 0 Platforms: description: | Platforms stores all the platforms that the service's image can run on. This field is used in the platform filter for scheduling. If empty, then the platform filter is off, meaning there are no scheduling restrictions. type: "array" items: $ref: "#/definitions/Platform" ForceUpdate: description: | A counter that triggers an update even if no relevant parameters have been changed. type: "integer" Runtime: description: | Runtime is the type of runtime specified for the task executor. type: "string" Networks: description: "Specifies which networks the service should attach to." type: "array" items: $ref: "#/definitions/NetworkAttachmentConfig" LogDriver: description: | Specifies the log driver to use for tasks created from this spec. If not present, the default one for the swarm will be used, finally falling back to the engine default if not specified. type: "object" properties: Name: type: "string" Options: type: "object" additionalProperties: type: "string" TaskState: type: "string" enum: - "new" - "allocated" - "pending" - "assigned" - "accepted" - "preparing" - "ready" - "starting" - "running" - "complete" - "shutdown" - "failed" - "rejected" - "remove" - "orphaned" Task: type: "object" properties: ID: description: "The ID of the task." type: "string" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" UpdatedAt: type: "string" format: "dateTime" Name: description: "Name of the task." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" Spec: $ref: "#/definitions/TaskSpec" ServiceID: description: "The ID of the service this task is part of." type: "string" Slot: type: "integer" NodeID: description: "The ID of the node that this task is on." type: "string" AssignedGenericResources: $ref: "#/definitions/GenericResources" Status: type: "object" properties: Timestamp: type: "string" format: "dateTime" State: $ref: "#/definitions/TaskState" Message: type: "string" Err: type: "string" ContainerStatus: type: "object" properties: ContainerID: type: "string" PID: type: "integer" ExitCode: type: "integer" DesiredState: $ref: "#/definitions/TaskState" JobIteration: description: | If the Service this Task belongs to is a job-mode service, contains the JobIteration of the Service this Task was created for. Absent if the Task was created for a Replicated or Global Service. 
$ref: "#/definitions/ObjectVersion" example: ID: "0kzzo1i0y4jz6027t0k7aezc7" Version: Index: 71 CreatedAt: "2016-06-07T21:07:31.171892745Z" UpdatedAt: "2016-06-07T21:07:31.376370513Z" Spec: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ServiceID: "9mnpnzenvg8p8tdbtq4wvbkcz" Slot: 1 NodeID: "60gvrl6tm78dmak4yl7srz94v" Status: Timestamp: "2016-06-07T21:07:31.290032978Z" State: "running" Message: "started" ContainerStatus: ContainerID: "e5d62702a1b48d01c3e02ca1e0212a250801fa8d67caca0b6f35919ebc12f035" PID: 677 DesiredState: "running" NetworksAttachments: - Network: ID: "4qvuz4ko70xaltuqbt8956gd1" Version: Index: 18 CreatedAt: "2016-06-07T20:31:11.912919752Z" UpdatedAt: "2016-06-07T21:07:29.955277358Z" Spec: Name: "ingress" Labels: com.docker.swarm.internal: "true" DriverConfiguration: {} IPAMOptions: Driver: {} Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" DriverState: Name: "overlay" Options: com.docker.network.driver.overlay.vxlanid_list: "256" IPAMOptions: Driver: Name: "default" Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" Addresses: - "10.255.0.10/16" AssignedGenericResources: - DiscreteResourceSpec: Kind: "SSD" Value: 3 - NamedResourceSpec: Kind: "GPU" Value: "UUID1" - NamedResourceSpec: Kind: "GPU" Value: "UUID2" ServiceSpec: description: "User modifiable configuration for a service." properties: Name: description: "Name of the service." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" TaskTemplate: $ref: "#/definitions/TaskSpec" Mode: description: "Scheduling mode for the service." type: "object" properties: Replicated: type: "object" properties: Replicas: type: "integer" format: "int64" Global: type: "object" ReplicatedJob: description: | The mode used for services with a finite number of tasks that run to a completed state. type: "object" properties: MaxConcurrent: description: | The maximum number of replicas to run simultaneously. type: "integer" format: "int64" default: 1 TotalCompletions: description: | The total number of replicas desired to reach the Completed state. If unset, will default to the value of `MaxConcurrent` type: "integer" format: "int64" GlobalJob: description: | The mode used for services which run a task to the completed state on each valid node. type: "object" UpdateConfig: description: "Specification for the update strategy of the service." type: "object" properties: Parallelism: description: | Maximum number of tasks to be updated in one iteration (0 means unlimited parallelism). type: "integer" format: "int64" Delay: description: "Amount of time between updates, in nanoseconds." type: "integer" format: "int64" FailureAction: description: | Action to take if an updated task fails to run, or stops running during the update. type: "string" enum: - "continue" - "pause" - "rollback" Monitor: description: | Amount of time to monitor each updated task for failures, in nanoseconds. type: "integer" format: "int64" MaxFailureRatio: description: | The fraction of tasks that may fail during an update before the failure action is invoked, specified as a floating point number between 0 and 1. type: "number" default: 0 Order: description: | The order of operations when rolling out an updated task. Either the old task is shut down before the new task is started, or the new task is started before the old task is shut down. 
type: "string" enum: - "stop-first" - "start-first" RollbackConfig: description: "Specification for the rollback strategy of the service." type: "object" properties: Parallelism: description: | Maximum number of tasks to be rolled back in one iteration (0 means unlimited parallelism). type: "integer" format: "int64" Delay: description: | Amount of time between rollback iterations, in nanoseconds. type: "integer" format: "int64" FailureAction: description: | Action to take if an rolled back task fails to run, or stops running during the rollback. type: "string" enum: - "continue" - "pause" Monitor: description: | Amount of time to monitor each rolled back task for failures, in nanoseconds. type: "integer" format: "int64" MaxFailureRatio: description: | The fraction of tasks that may fail during a rollback before the failure action is invoked, specified as a floating point number between 0 and 1. type: "number" default: 0 Order: description: | The order of operations when rolling back a task. Either the old task is shut down before the new task is started, or the new task is started before the old task is shut down. type: "string" enum: - "stop-first" - "start-first" Networks: description: "Specifies which networks the service should attach to." type: "array" items: $ref: "#/definitions/NetworkAttachmentConfig" EndpointSpec: $ref: "#/definitions/EndpointSpec" EndpointPortConfig: type: "object" properties: Name: type: "string" Protocol: type: "string" enum: - "tcp" - "udp" - "sctp" TargetPort: description: "The port inside the container." type: "integer" PublishedPort: description: "The port on the swarm hosts." type: "integer" PublishMode: description: | The mode in which port is published. <p><br /></p> - "ingress" makes the target port accessible on every node, regardless of whether there is a task for the service running on that node or not. - "host" bypasses the routing mesh and publish the port directly on the swarm node where that service is running. type: "string" enum: - "ingress" - "host" default: "ingress" example: "ingress" EndpointSpec: description: "Properties that can be configured to access and load balance a service." type: "object" properties: Mode: description: | The mode of resolution to use for internal load balancing between tasks. type: "string" enum: - "vip" - "dnsrr" default: "vip" Ports: description: | List of exposed ports that this service is accessible on from the outside. Ports can only be provided if `vip` resolution mode is used. type: "array" items: $ref: "#/definitions/EndpointPortConfig" Service: type: "object" properties: ID: type: "string" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" UpdatedAt: type: "string" format: "dateTime" Spec: $ref: "#/definitions/ServiceSpec" Endpoint: type: "object" properties: Spec: $ref: "#/definitions/EndpointSpec" Ports: type: "array" items: $ref: "#/definitions/EndpointPortConfig" VirtualIPs: type: "array" items: type: "object" properties: NetworkID: type: "string" Addr: type: "string" UpdateStatus: description: "The status of a service update." type: "object" properties: State: type: "string" enum: - "updating" - "paused" - "completed" StartedAt: type: "string" format: "dateTime" CompletedAt: type: "string" format: "dateTime" Message: type: "string" ServiceStatus: description: | The status of the service's tasks. Provided only when requested as part of a ServiceList operation. 
type: "object" properties: RunningTasks: description: | The number of tasks for the service currently in the Running state. type: "integer" format: "uint64" example: 7 DesiredTasks: description: | The number of tasks for the service desired to be running. For replicated services, this is the replica count from the service spec. For global services, this is computed by taking count of all tasks for the service with a Desired State other than Shutdown. type: "integer" format: "uint64" example: 10 CompletedTasks: description: | The number of tasks for a job that are in the Completed state. This field must be cross-referenced with the service type, as the value of 0 may mean the service is not in a job mode, or it may mean the job-mode service has no tasks yet Completed. type: "integer" format: "uint64" JobStatus: description: | The status of the service when it is in one of ReplicatedJob or GlobalJob modes. Absent on Replicated and Global mode services. The JobIteration is an ObjectVersion, but unlike the Service's version, does not need to be sent with an update request. type: "object" properties: JobIteration: description: | JobIteration is a value increased each time a Job is executed, successfully or otherwise. "Executed", in this case, means the job as a whole has been started, not that an individual Task has been launched. A job is "Executed" when its ServiceSpec is updated. JobIteration can be used to disambiguate Tasks belonging to different executions of a job. Though JobIteration will increase with each subsequent execution, it may not necessarily increase by 1, and so JobIteration should not be used to $ref: "#/definitions/ObjectVersion" LastExecution: description: | The last time, as observed by the server, that this job was started. type: "string" format: "dateTime" example: ID: "9mnpnzenvg8p8tdbtq4wvbkcz" Version: Index: 19 CreatedAt: "2016-06-07T21:05:51.880065305Z" UpdatedAt: "2016-06-07T21:07:29.962229872Z" Spec: Name: "hopeful_cori" TaskTemplate: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ForceUpdate: 0 Mode: Replicated: Replicas: 1 UpdateConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 RollbackConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 EndpointSpec: Mode: "vip" Ports: - Protocol: "tcp" TargetPort: 6379 PublishedPort: 30001 Endpoint: Spec: Mode: "vip" Ports: - Protocol: "tcp" TargetPort: 6379 PublishedPort: 30001 Ports: - Protocol: "tcp" TargetPort: 6379 PublishedPort: 30001 VirtualIPs: - NetworkID: "4qvuz4ko70xaltuqbt8956gd1" Addr: "10.255.0.2/16" - NetworkID: "4qvuz4ko70xaltuqbt8956gd1" Addr: "10.255.0.3/16" ImageDeleteResponseItem: type: "object" properties: Untagged: description: "The image ID of an image that was untagged" type: "string" Deleted: description: "The image ID of an image that was deleted" type: "string" ServiceUpdateResponse: type: "object" properties: Warnings: description: "Optional warning messages" type: "array" items: type: "string" example: Warning: "unable to pin image doesnotexist:latest to digest: image library/doesnotexist:latest not found" ContainerSummary: type: "array" items: type: "object" properties: Id: description: "The ID of this container" type: "string" x-go-name: "ID" Names: description: "The names that this container has been given" type: "array" items: type: "string" Image: description: "The name of the image used when creating 
this container" type: "string" ImageID: description: "The ID of the image that this container was created from" type: "string" Command: description: "Command to run when starting the container" type: "string" Created: description: "When the container was created" type: "integer" format: "int64" Ports: description: "The ports exposed by this container" type: "array" items: $ref: "#/definitions/Port" SizeRw: description: "The size of files that have been created or changed by this container" type: "integer" format: "int64" SizeRootFs: description: "The total size of all the files in this container" type: "integer" format: "int64" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" State: description: "The state of this container (e.g. `Exited`)" type: "string" Status: description: "Additional human-readable status of this container (e.g. `Exit 0`)" type: "string" HostConfig: type: "object" properties: NetworkMode: type: "string" NetworkSettings: description: "A summary of the container's network settings" type: "object" properties: Networks: type: "object" additionalProperties: $ref: "#/definitions/EndpointSettings" Mounts: type: "array" items: $ref: "#/definitions/Mount" Driver: description: "Driver represents a driver (network, logging, secrets)." type: "object" required: [Name] properties: Name: description: "Name of the driver." type: "string" x-nullable: false example: "some-driver" Options: description: "Key/value map of driver-specific options." type: "object" x-nullable: false additionalProperties: type: "string" example: OptionA: "value for driver-specific option A" OptionB: "value for driver-specific option B" SecretSpec: type: "object" properties: Name: description: "User-defined name of the secret." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" example: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Data: description: | Base64-url-safe-encoded ([RFC 4648](https://tools.ietf.org/html/rfc4648#section-5)) data to store as secret. This field is only used to _create_ a secret, and is not returned by other endpoints. type: "string" example: "" Driver: description: | Name of the secrets driver used to fetch the secret's value from an external secret store. $ref: "#/definitions/Driver" Templating: description: | Templating driver, if applicable Templating controls whether and how to evaluate the config payload as a template. If no driver is set, no templating is used. $ref: "#/definitions/Driver" Secret: type: "object" properties: ID: type: "string" example: "blt1owaxmitz71s9v5zh81zun" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" example: "2017-07-20T13:55:28.678958722Z" UpdatedAt: type: "string" format: "dateTime" example: "2017-07-20T13:55:28.678958722Z" Spec: $ref: "#/definitions/SecretSpec" ConfigSpec: type: "object" properties: Name: description: "User-defined name of the config." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" Data: description: | Base64-url-safe-encoded ([RFC 4648](https://tools.ietf.org/html/rfc4648#section-5)) config data. type: "string" Templating: description: | Templating driver, if applicable Templating controls whether and how to evaluate the config payload as a template. If no driver is set, no templating is used. 
$ref: "#/definitions/Driver" Config: type: "object" properties: ID: type: "string" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" UpdatedAt: type: "string" format: "dateTime" Spec: $ref: "#/definitions/ConfigSpec" ContainerState: description: | ContainerState stores container's running state. It's part of ContainerJSONBase and will be returned by the "inspect" command. type: "object" properties: Status: description: | String representation of the container state. Can be one of "created", "running", "paused", "restarting", "removing", "exited", or "dead". type: "string" enum: ["created", "running", "paused", "restarting", "removing", "exited", "dead"] example: "running" Running: description: | Whether this container is running. Note that a running container can be _paused_. The `Running` and `Paused` booleans are not mutually exclusive: When pausing a container (on Linux), the freezer cgroup is used to suspend all processes in the container. Freezing the process requires the process to be running. As a result, paused containers are both `Running` _and_ `Paused`. Use the `Status` field instead to determine if a container's state is "running". type: "boolean" example: true Paused: description: "Whether this container is paused." type: "boolean" example: false Restarting: description: "Whether this container is restarting." type: "boolean" example: false OOMKilled: description: | Whether this container has been killed because it ran out of memory. type: "boolean" example: false Dead: type: "boolean" example: false Pid: description: "The process ID of this container" type: "integer" example: 1234 ExitCode: description: "The last exit code of this container" type: "integer" example: 0 Error: type: "string" StartedAt: description: "The time when this container was last started." type: "string" example: "2020-01-06T09:06:59.461876391Z" FinishedAt: description: "The time when this container last exited." type: "string" example: "2020-01-06T09:07:59.461876391Z" Health: x-nullable: true $ref: "#/definitions/Health" SystemVersion: type: "object" description: | Response of Engine API: GET "/version" properties: Platform: type: "object" required: [Name] properties: Name: type: "string" Components: type: "array" description: | Information about system components items: type: "object" x-go-name: ComponentVersion required: [Name, Version] properties: Name: description: | Name of the component type: "string" example: "Engine" Version: description: | Version of the component type: "string" x-nullable: false example: "19.03.12" Details: description: | Key/value pairs of strings with additional information about the component. These values are intended for informational purposes only, and their content is not defined, and not part of the API specification. These messages can be printed by the client as information to the user. type: "object" x-nullable: true Version: description: "The version of the daemon" type: "string" example: "19.03.12" ApiVersion: description: | The default (and highest) API version that is supported by the daemon type: "string" example: "1.40" MinAPIVersion: description: | The minimum API version that is supported by the daemon type: "string" example: "1.12" GitCommit: description: | The Git commit of the source code that was used to build the daemon type: "string" example: "48a66213fe" GoVersion: description: | The version Go used to compile the daemon, and the version of the Go runtime in use. 
type: "string" example: "go1.13.14" Os: description: | The operating system that the daemon is running on ("linux" or "windows") type: "string" example: "linux" Arch: description: | The architecture that the daemon is running on type: "string" example: "amd64" KernelVersion: description: | The kernel version (`uname -r`) that the daemon is running on. This field is omitted when empty. type: "string" example: "4.19.76-linuxkit" Experimental: description: | Indicates if the daemon is started with experimental features enabled. This field is omitted when empty / false. type: "boolean" example: true BuildTime: description: | The date and time that the daemon was compiled. type: "string" example: "2020-06-22T15:49:27.000000000+00:00" SystemInfo: type: "object" properties: ID: description: | Unique identifier of the daemon. <p><br /></p> > **Note**: The format of the ID itself is not part of the API, and > should not be considered stable. type: "string" example: "7TRN:IPZB:QYBB:VPBQ:UMPP:KARE:6ZNR:XE6T:7EWV:PKF4:ZOJD:TPYS" Containers: description: "Total number of containers on the host." type: "integer" example: 14 ContainersRunning: description: | Number of containers with status `"running"`. type: "integer" example: 3 ContainersPaused: description: | Number of containers with status `"paused"`. type: "integer" example: 1 ContainersStopped: description: | Number of containers with status `"stopped"`. type: "integer" example: 10 Images: description: | Total number of images on the host. Both _tagged_ and _untagged_ (dangling) images are counted. type: "integer" example: 508 Driver: description: "Name of the storage driver in use." type: "string" example: "overlay2" DriverStatus: description: | Information specific to the storage driver, provided as "label" / "value" pairs. This information is provided by the storage driver, and formatted in a way consistent with the output of `docker info` on the command line. <p><br /></p> > **Note**: The information returned in this field, including the > formatting of values and labels, should not be considered stable, > and may change without notice. type: "array" items: type: "array" items: type: "string" example: - ["Backing Filesystem", "extfs"] - ["Supports d_type", "true"] - ["Native Overlay Diff", "true"] DockerRootDir: description: | Root directory of persistent Docker state. Defaults to `/var/lib/docker` on Linux, and `C:\ProgramData\docker` on Windows. type: "string" example: "/var/lib/docker" Plugins: $ref: "#/definitions/PluginsInfo" MemoryLimit: description: "Indicates if the host has memory limit support enabled." type: "boolean" example: true SwapLimit: description: "Indicates if the host has memory swap limit support enabled." type: "boolean" example: true KernelMemory: description: | Indicates if the host has kernel memory limit support enabled. <p><br /></p> > **Deprecated**: This field is deprecated as the kernel 5.4 deprecated > `kmem.limit_in_bytes`. type: "boolean" example: true CpuCfsPeriod: description: | Indicates if CPU CFS(Completely Fair Scheduler) period is supported by the host. type: "boolean" example: true CpuCfsQuota: description: | Indicates if CPU CFS(Completely Fair Scheduler) quota is supported by the host. type: "boolean" example: true CPUShares: description: | Indicates if CPU Shares limiting is supported by the host. type: "boolean" example: true CPUSet: description: | Indicates if CPUsets (cpuset.cpus, cpuset.mems) are supported by the host. 
See [cpuset(7)](https://www.kernel.org/doc/Documentation/cgroup-v1/cpusets.txt) type: "boolean" example: true PidsLimit: description: "Indicates if the host kernel has PID limit support enabled." type: "boolean" example: true OomKillDisable: description: "Indicates if OOM killer disable is supported on the host." type: "boolean" IPv4Forwarding: description: "Indicates IPv4 forwarding is enabled." type: "boolean" example: true BridgeNfIptables: description: "Indicates if `bridge-nf-call-iptables` is available on the host." type: "boolean" example: true BridgeNfIp6tables: description: "Indicates if `bridge-nf-call-ip6tables` is available on the host." type: "boolean" example: true Debug: description: | Indicates if the daemon is running in debug-mode / with debug-level logging enabled. type: "boolean" example: true NFd: description: | The total number of file Descriptors in use by the daemon process. This information is only returned if debug-mode is enabled. type: "integer" example: 64 NGoroutines: description: | The number of goroutines that currently exist. This information is only returned if debug-mode is enabled. type: "integer" example: 174 SystemTime: description: | Current system-time in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" example: "2017-08-08T20:28:29.06202363Z" LoggingDriver: description: | The logging driver to use as a default for new containers. type: "string" CgroupDriver: description: | The driver to use for managing cgroups. type: "string" enum: ["cgroupfs", "systemd", "none"] default: "cgroupfs" example: "cgroupfs" CgroupVersion: description: | The version of the cgroup. type: "string" enum: ["1", "2"] default: "1" example: "1" NEventsListener: description: "Number of event listeners subscribed." type: "integer" example: 30 KernelVersion: description: | Kernel version of the host. On Linux, this information obtained from `uname`. On Windows this information is queried from the <kbd>HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\</kbd> registry value, for example _"10.0 14393 (14393.1198.amd64fre.rs1_release_sec.170427-1353)"_. type: "string" example: "4.9.38-moby" OperatingSystem: description: | Name of the host's operating system, for example: "Ubuntu 16.04.2 LTS" or "Windows Server 2016 Datacenter" type: "string" example: "Alpine Linux v3.5" OSVersion: description: | Version of the host's operating system <p><br /></p> > **Note**: The information returned in this field, including its > very existence, and the formatting of values, should not be considered > stable, and may change without notice. type: "string" example: "16.04" OSType: description: | Generic type of the operating system of the host, as returned by the Go runtime (`GOOS`). Currently returned values are "linux" and "windows". A full list of possible values can be found in the [Go documentation](https://golang.org/doc/install/source#environment). type: "string" example: "linux" Architecture: description: | Hardware architecture of the host, as returned by the Go runtime (`GOARCH`). A full list of possible values can be found in the [Go documentation](https://golang.org/doc/install/source#environment). type: "string" example: "x86_64" NCPU: description: | The number of logical CPUs usable by the daemon. The number of available CPUs is checked by querying the operating system when the daemon starts. Changes to operating system CPU allocation after the daemon is started are not reflected. 
type: "integer" example: 4 MemTotal: description: | Total amount of physical memory available on the host, in bytes. type: "integer" format: "int64" example: 2095882240 IndexServerAddress: description: | Address / URL of the index server that is used for image search, and as a default for user authentication for Docker Hub and Docker Cloud. default: "https://index.docker.io/v1/" type: "string" example: "https://index.docker.io/v1/" RegistryConfig: $ref: "#/definitions/RegistryServiceConfig" GenericResources: $ref: "#/definitions/GenericResources" HttpProxy: description: | HTTP-proxy configured for the daemon. This value is obtained from the [`HTTP_PROXY`](https://www.gnu.org/software/wget/manual/html_node/Proxies.html) environment variable. Credentials ([user info component](https://tools.ietf.org/html/rfc3986#section-3.2.1)) in the proxy URL are masked in the API response. Containers do not automatically inherit this configuration. type: "string" example: "http://xxxxx:[email protected]:8080" HttpsProxy: description: | HTTPS-proxy configured for the daemon. This value is obtained from the [`HTTPS_PROXY`](https://www.gnu.org/software/wget/manual/html_node/Proxies.html) environment variable. Credentials ([user info component](https://tools.ietf.org/html/rfc3986#section-3.2.1)) in the proxy URL are masked in the API response. Containers do not automatically inherit this configuration. type: "string" example: "https://xxxxx:[email protected]:4443" NoProxy: description: | Comma-separated list of domain extensions for which no proxy should be used. This value is obtained from the [`NO_PROXY`](https://www.gnu.org/software/wget/manual/html_node/Proxies.html) environment variable. Containers do not automatically inherit this configuration. type: "string" example: "*.local, 169.254/16" Name: description: "Hostname of the host." type: "string" example: "node5.corp.example.com" Labels: description: | User-defined labels (key/value metadata) as set on the daemon. <p><br /></p> > **Note**: When part of a Swarm, nodes can both have _daemon_ labels, > set through the daemon configuration, and _node_ labels, set from a > manager node in the Swarm. Node labels are not included in this > field. Node labels can be retrieved using the `/nodes/(id)` endpoint > on a manager node in the Swarm. type: "array" items: type: "string" example: ["storage=ssd", "production"] ExperimentalBuild: description: | Indicates if experimental features are enabled on the daemon. type: "boolean" example: true ServerVersion: description: | Version string of the daemon. > **Note**: the [standalone Swarm API](https://docs.docker.com/swarm/swarm-api/) > returns the Swarm version instead of the daemon version, for example > `swarm/1.2.8`. type: "string" example: "17.06.0-ce" ClusterStore: description: | URL of the distributed storage backend. The storage backend is used for multihost networking (to store network and endpoint information) and by the node discovery mechanism. <p><br /></p> > **Deprecated**: This field is only propagated when using standalone Swarm > mode, and overlay networking using an external k/v store. Overlay > networks with Swarm mode enabled use the built-in raft store, and > this field will be empty. type: "string" example: "consul://consul.corp.example.com:8600/some/path" ClusterAdvertise: description: | The network endpoint that the Engine advertises for the purpose of node discovery. ClusterAdvertise is a `host:port` combination on which the daemon is reachable by other hosts. 
<p><br /></p> > **Deprecated**: This field is only propagated when using standalone Swarm > mode, and overlay networking using an external k/v store. Overlay > networks with Swarm mode enabled use the built-in raft store, and > this field will be empty. type: "string" example: "node5.corp.example.com:8000" Runtimes: description: | List of [OCI compliant](https://github.com/opencontainers/runtime-spec) runtimes configured on the daemon. Keys hold the "name" used to reference the runtime. The Docker daemon relies on an OCI compliant runtime (invoked via the `containerd` daemon) as its interface to the Linux kernel namespaces, cgroups, and SELinux. The default runtime is `runc`, and automatically configured. Additional runtimes can be configured by the user and will be listed here. type: "object" additionalProperties: $ref: "#/definitions/Runtime" default: runc: path: "runc" example: runc: path: "runc" runc-master: path: "/go/bin/runc" custom: path: "/usr/local/bin/my-oci-runtime" runtimeArgs: ["--debug", "--systemd-cgroup=false"] DefaultRuntime: description: | Name of the default OCI runtime that is used when starting containers. The default can be overridden per-container at create time. type: "string" default: "runc" example: "runc" Swarm: $ref: "#/definitions/SwarmInfo" LiveRestoreEnabled: description: | Indicates if live restore is enabled. If enabled, containers are kept running when the daemon is shutdown or upon daemon start if running containers are detected. type: "boolean" default: false example: false Isolation: description: | Represents the isolation technology to use as a default for containers. The supported values are platform-specific. If no isolation value is specified on daemon start, on Windows client, the default is `hyperv`, and on Windows server, the default is `process`. This option is currently not used on other platforms. default: "default" type: "string" enum: - "default" - "hyperv" - "process" InitBinary: description: | Name and, optional, path of the `docker-init` binary. If the path is omitted, the daemon searches the host's `$PATH` for the binary and uses the first result. type: "string" example: "docker-init" ContainerdCommit: $ref: "#/definitions/Commit" RuncCommit: $ref: "#/definitions/Commit" InitCommit: $ref: "#/definitions/Commit" SecurityOptions: description: | List of security features that are enabled on the daemon, such as apparmor, seccomp, SELinux, user-namespaces (userns), and rootless. Additional configuration options for each security feature may be present, and are included as a comma-separated list of key/value pairs. type: "array" items: type: "string" example: - "name=apparmor" - "name=seccomp,profile=default" - "name=selinux" - "name=userns" - "name=rootless" ProductLicense: description: | Reports a summary of the product license on the daemon. If a commercial license has been applied to the daemon, information such as number of nodes, and expiration are included. type: "string" example: "Community Engine" DefaultAddressPools: description: | List of custom default address pools for local networks, which can be specified in the daemon.json file or dockerd option. Example: a Base "10.10.0.0/16" with Size 24 will define the set of 256 10.10.[0-255].0/24 address pools. 
type: "array" items: type: "object" properties: Base: description: "The network address in CIDR format" type: "string" example: "10.10.0.0/16" Size: description: "The network pool size" type: "integer" example: "24" Warnings: description: | List of warnings / informational messages about missing features, or issues related to the daemon configuration. These messages can be printed by the client as information to the user. type: "array" items: type: "string" example: - "WARNING: No memory limit support" - "WARNING: bridge-nf-call-iptables is disabled" - "WARNING: bridge-nf-call-ip6tables is disabled" # PluginsInfo is a temp struct holding Plugins name # registered with docker daemon. It is used by Info struct PluginsInfo: description: | Available plugins per type. <p><br /></p> > **Note**: Only unmanaged (V1) plugins are included in this list. > V1 plugins are "lazily" loaded, and are not returned in this list > if there is no resource using the plugin. type: "object" properties: Volume: description: "Names of available volume-drivers, and network-driver plugins." type: "array" items: type: "string" example: ["local"] Network: description: "Names of available network-drivers, and network-driver plugins." type: "array" items: type: "string" example: ["bridge", "host", "ipvlan", "macvlan", "null", "overlay"] Authorization: description: "Names of available authorization plugins." type: "array" items: type: "string" example: ["img-authz-plugin", "hbm"] Log: description: "Names of available logging-drivers, and logging-driver plugins." type: "array" items: type: "string" example: ["awslogs", "fluentd", "gcplogs", "gelf", "journald", "json-file", "logentries", "splunk", "syslog"] RegistryServiceConfig: description: | RegistryServiceConfig stores daemon registry services configuration. type: "object" x-nullable: true properties: AllowNondistributableArtifactsCIDRs: description: | List of IP ranges to which nondistributable artifacts can be pushed, using the CIDR syntax [RFC 4632](https://tools.ietf.org/html/4632). Some images (for example, Windows base images) contain artifacts whose distribution is restricted by license. When these images are pushed to a registry, restricted artifacts are not included. This configuration override this behavior, and enables the daemon to push nondistributable artifacts to all registries whose resolved IP address is within the subnet described by the CIDR syntax. This option is useful when pushing images containing nondistributable artifacts to a registry on an air-gapped network so hosts on that network can pull the images without connecting to another server. > **Warning**: Nondistributable artifacts typically have restrictions > on how and where they can be distributed and shared. Only use this > feature to push artifacts to private registries and ensure that you > are in compliance with any terms that cover redistributing > nondistributable artifacts. type: "array" items: type: "string" example: ["::1/128", "127.0.0.0/8"] AllowNondistributableArtifactsHostnames: description: | List of registry hostnames to which nondistributable artifacts can be pushed, using the format `<hostname>[:<port>]` or `<IP address>[:<port>]`. Some images (for example, Windows base images) contain artifacts whose distribution is restricted by license. When these images are pushed to a registry, restricted artifacts are not included. This configuration override this behavior for the specified registries. 
This option is useful when pushing images containing nondistributable artifacts to a registry on an air-gapped network so hosts on that network can pull the images without connecting to another server. > **Warning**: Nondistributable artifacts typically have restrictions > on how and where they can be distributed and shared. Only use this > feature to push artifacts to private registries and ensure that you > are in compliance with any terms that cover redistributing > nondistributable artifacts. type: "array" items: type: "string" example: ["registry.internal.corp.example.com:3000", "[2001:db8:a0b:12f0::1]:443"] InsecureRegistryCIDRs: description: | List of IP ranges of insecure registries, using the CIDR syntax ([RFC 4632](https://tools.ietf.org/html/4632)). Insecure registries accept un-encrypted (HTTP) and/or untrusted (HTTPS with certificates from unknown CAs) communication. By default, local registries (`127.0.0.0/8`) are configured as insecure. All other registries are secure. Communicating with an insecure registry is not possible if the daemon assumes that registry is secure. This configuration override this behavior, insecure communication with registries whose resolved IP address is within the subnet described by the CIDR syntax. Registries can also be marked insecure by hostname. Those registries are listed under `IndexConfigs` and have their `Secure` field set to `false`. > **Warning**: Using this option can be useful when running a local > registry, but introduces security vulnerabilities. This option > should therefore ONLY be used for testing purposes. For increased > security, users should add their CA to their system's list of trusted > CAs instead of enabling this option. type: "array" items: type: "string" example: ["::1/128", "127.0.0.0/8"] IndexConfigs: type: "object" additionalProperties: $ref: "#/definitions/IndexInfo" example: "127.0.0.1:5000": "Name": "127.0.0.1:5000" "Mirrors": [] "Secure": false "Official": false "[2001:db8:a0b:12f0::1]:80": "Name": "[2001:db8:a0b:12f0::1]:80" "Mirrors": [] "Secure": false "Official": false "docker.io": Name: "docker.io" Mirrors: ["https://hub-mirror.corp.example.com:5000/"] Secure: true Official: true "registry.internal.corp.example.com:3000": Name: "registry.internal.corp.example.com:3000" Mirrors: [] Secure: false Official: false Mirrors: description: | List of registry URLs that act as a mirror for the official (`docker.io`) registry. type: "array" items: type: "string" example: - "https://hub-mirror.corp.example.com:5000/" - "https://[2001:db8:a0b:12f0::1]/" IndexInfo: description: IndexInfo contains information about a registry. type: "object" x-nullable: true properties: Name: description: | Name of the registry, such as "docker.io". type: "string" example: "docker.io" Mirrors: description: | List of mirrors, expressed as URIs. type: "array" items: type: "string" example: - "https://hub-mirror.corp.example.com:5000/" - "https://registry-2.docker.io/" - "https://registry-3.docker.io/" Secure: description: | Indicates if the registry is part of the list of insecure registries. If `false`, the registry is insecure. Insecure registries accept un-encrypted (HTTP) and/or untrusted (HTTPS with certificates from unknown CAs) communication. > **Warning**: Insecure registries can be useful when running a local > registry. However, because its use creates security vulnerabilities > it should ONLY be enabled for testing purposes. 
For increased > security, users should add their CA to their system's list of > trusted CAs instead of enabling this option. type: "boolean" example: true Official: description: | Indicates whether this is an official registry (i.e., Docker Hub / docker.io) type: "boolean" example: true Runtime: description: | Runtime describes an [OCI compliant](https://github.com/opencontainers/runtime-spec) runtime. The runtime is invoked by the daemon via the `containerd` daemon. OCI runtimes act as an interface to the Linux kernel namespaces, cgroups, and SELinux. type: "object" properties: path: description: | Name and, optional, path, of the OCI executable binary. If the path is omitted, the daemon searches the host's `$PATH` for the binary and uses the first result. type: "string" example: "/usr/local/bin/my-oci-runtime" runtimeArgs: description: | List of command-line arguments to pass to the runtime when invoked. type: "array" x-nullable: true items: type: "string" example: ["--debug", "--systemd-cgroup=false"] Commit: description: | Commit holds the Git-commit (SHA1) that a binary was built from, as reported in the version-string of external tools, such as `containerd`, or `runC`. type: "object" properties: ID: description: "Actual commit ID of external tool." type: "string" example: "cfb82a876ecc11b5ca0977d1733adbe58599088a" Expected: description: | Commit ID of external tool expected by dockerd as set at build time. type: "string" example: "2d41c047c83e09a6d61d464906feb2a2f3c52aa4" SwarmInfo: description: | Represents generic information about swarm. type: "object" properties: NodeID: description: "Unique identifier of for this node in the swarm." type: "string" default: "" example: "k67qz4598weg5unwwffg6z1m1" NodeAddr: description: | IP address at which this node can be reached by other nodes in the swarm. type: "string" default: "" example: "10.0.0.46" LocalNodeState: $ref: "#/definitions/LocalNodeState" ControlAvailable: type: "boolean" default: false example: true Error: type: "string" default: "" RemoteManagers: description: | List of ID's and addresses of other managers in the swarm. type: "array" default: null x-nullable: true items: $ref: "#/definitions/PeerNode" example: - NodeID: "71izy0goik036k48jg985xnds" Addr: "10.0.0.158:2377" - NodeID: "79y6h1o4gv8n120drcprv5nmc" Addr: "10.0.0.159:2377" - NodeID: "k67qz4598weg5unwwffg6z1m1" Addr: "10.0.0.46:2377" Nodes: description: "Total number of nodes in the swarm." type: "integer" x-nullable: true example: 4 Managers: description: "Total number of managers in the swarm." type: "integer" x-nullable: true example: 3 Cluster: $ref: "#/definitions/ClusterInfo" LocalNodeState: description: "Current local status of this node." type: "string" default: "" enum: - "" - "inactive" - "pending" - "active" - "error" - "locked" example: "active" PeerNode: description: "Represents a peer-node in the swarm" properties: NodeID: description: "Unique identifier of for this node in the swarm." type: "string" Addr: description: | IP address and ports at which this node can be reached. type: "string" NetworkAttachmentConfig: description: | Specifies how a service should be attached to a particular network. type: "object" properties: Target: description: | The target network for attachment. Must be a network name or ID. type: "string" Aliases: description: | Discoverable alternate names for the service on this network. type: "array" items: type: "string" DriverOpts: description: | Driver attachment options for the network target. 
type: "object" additionalProperties: type: "string" paths: /containers/json: get: summary: "List containers" description: | Returns a list of containers. For details on the format, see the [inspect endpoint](#operation/ContainerInspect). Note that it uses a different, smaller representation of a container than inspecting a single container. For example, the list of linked containers is not propagated . operationId: "ContainerList" produces: - "application/json" parameters: - name: "all" in: "query" description: | Return all containers. By default, only running containers are shown. type: "boolean" default: false - name: "limit" in: "query" description: | Return this number of most recently created containers, including non-running ones. type: "integer" - name: "size" in: "query" description: | Return the size of container as fields `SizeRw` and `SizeRootFs`. type: "boolean" default: false - name: "filters" in: "query" description: | Filters to process on the container list, encoded as JSON (a `map[string][]string`). For example, `{"status": ["paused"]}` will only return paused containers. Available filters: - `ancestor`=(`<image-name>[:<tag>]`, `<image id>`, or `<image@digest>`) - `before`=(`<container id>` or `<container name>`) - `expose`=(`<port>[/<proto>]`|`<startport-endport>/[<proto>]`) - `exited=<int>` containers with exit code of `<int>` - `health`=(`starting`|`healthy`|`unhealthy`|`none`) - `id=<ID>` a container's ID - `isolation=`(`default`|`process`|`hyperv`) (Windows daemon only) - `is-task=`(`true`|`false`) - `label=key` or `label="key=value"` of a container label - `name=<name>` a container's name - `network`=(`<network id>` or `<network name>`) - `publish`=(`<port>[/<proto>]`|`<startport-endport>/[<proto>]`) - `since`=(`<container id>` or `<container name>`) - `status=`(`created`|`restarting`|`running`|`removing`|`paused`|`exited`|`dead`) - `volume`=(`<volume name>` or `<mount point destination>`) type: "string" responses: 200: description: "no error" schema: $ref: "#/definitions/ContainerSummary" examples: application/json: - Id: "8dfafdbc3a40" Names: - "/boring_feynman" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 1" Created: 1367854155 State: "Exited" Status: "Exit 0" Ports: - PrivatePort: 2222 PublicPort: 3333 Type: "tcp" Labels: com.example.vendor: "Acme" com.example.license: "GPL" com.example.version: "1.0" SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "2cdc4edb1ded3631c81f57966563e5c8525b81121bb3706a9a9a3ae102711f3f" Gateway: "172.17.0.1" IPAddress: "172.17.0.2" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:02" Mounts: - Name: "fac362...80535" Source: "/data" Destination: "/data" Driver: "local" Mode: "ro,Z" RW: false Propagation: "" - Id: "9cd87474be90" Names: - "/coolName" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 222222" Created: 1367854155 State: "Exited" Status: "Exit 0" Ports: [] Labels: {} SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "88eaed7b37b38c2a3f0c4bc796494fdf51b270c2d22656412a2ca5d559a64d7a" Gateway: "172.17.0.1" IPAddress: "172.17.0.8" IPPrefixLen: 16 IPv6Gateway: "" 
GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:08" Mounts: [] - Id: "3176a2479c92" Names: - "/sleepy_dog" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 3333333333333333" Created: 1367854154 State: "Exited" Status: "Exit 0" Ports: [] Labels: {} SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "8b27c041c30326d59cd6e6f510d4f8d1d570a228466f956edf7815508f78e30d" Gateway: "172.17.0.1" IPAddress: "172.17.0.6" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:06" Mounts: [] - Id: "4cb07b47f9fb" Names: - "/running_cat" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 444444444444444444444444444444444" Created: 1367854152 State: "Exited" Status: "Exit 0" Ports: [] Labels: {} SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "d91c7b2f0644403d7ef3095985ea0e2370325cd2332ff3a3225c4247328e66e9" Gateway: "172.17.0.1" IPAddress: "172.17.0.5" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:05" Mounts: [] 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Container"] /containers/create: post: summary: "Create a container" operationId: "ContainerCreate" consumes: - "application/json" - "application/octet-stream" produces: - "application/json" parameters: - name: "name" in: "query" description: | Assign the specified name to the container. Must match `/?[a-zA-Z0-9][a-zA-Z0-9_.-]+`. 
type: "string" pattern: "^/?[a-zA-Z0-9][a-zA-Z0-9_.-]+$" - name: "body" in: "body" description: "Container to create" schema: allOf: - $ref: "#/definitions/ContainerConfig" - type: "object" properties: HostConfig: $ref: "#/definitions/HostConfig" NetworkingConfig: $ref: "#/definitions/NetworkingConfig" example: Hostname: "" Domainname: "" User: "" AttachStdin: false AttachStdout: true AttachStderr: true Tty: false OpenStdin: false StdinOnce: false Env: - "FOO=bar" - "BAZ=quux" Cmd: - "date" Entrypoint: "" Image: "ubuntu" Labels: com.example.vendor: "Acme" com.example.license: "GPL" com.example.version: "1.0" Volumes: /volumes/data: {} WorkingDir: "" NetworkDisabled: false MacAddress: "12:34:56:78:9a:bc" ExposedPorts: 22/tcp: {} StopSignal: "SIGTERM" StopTimeout: 10 HostConfig: Binds: - "/tmp:/tmp" Links: - "redis3:redis" Memory: 0 MemorySwap: 0 MemoryReservation: 0 KernelMemory: 0 NanoCpus: 500000 CpuPercent: 80 CpuShares: 512 CpuPeriod: 100000 CpuRealtimePeriod: 1000000 CpuRealtimeRuntime: 10000 CpuQuota: 50000 CpusetCpus: "0,1" CpusetMems: "0,1" MaximumIOps: 0 MaximumIOBps: 0 BlkioWeight: 300 BlkioWeightDevice: - {} BlkioDeviceReadBps: - {} BlkioDeviceReadIOps: - {} BlkioDeviceWriteBps: - {} BlkioDeviceWriteIOps: - {} DeviceRequests: - Driver: "nvidia" Count: -1 DeviceIDs": ["0", "1", "GPU-fef8089b-4820-abfc-e83e-94318197576e"] Capabilities: [["gpu", "nvidia", "compute"]] Options: property1: "string" property2: "string" MemorySwappiness: 60 OomKillDisable: false OomScoreAdj: 500 PidMode: "" PidsLimit: 0 PortBindings: 22/tcp: - HostPort: "11022" PublishAllPorts: false Privileged: false ReadonlyRootfs: false Dns: - "8.8.8.8" DnsOptions: - "" DnsSearch: - "" VolumesFrom: - "parent" - "other:ro" CapAdd: - "NET_ADMIN" CapDrop: - "MKNOD" GroupAdd: - "newgroup" RestartPolicy: Name: "" MaximumRetryCount: 0 AutoRemove: true NetworkMode: "bridge" Devices: [] Ulimits: - {} LogConfig: Type: "json-file" Config: {} SecurityOpt: [] StorageOpt: {} CgroupParent: "" VolumeDriver: "" ShmSize: 67108864 NetworkingConfig: EndpointsConfig: isolated_nw: IPAMConfig: IPv4Address: "172.20.30.33" IPv6Address: "2001:db8:abcd::3033" LinkLocalIPs: - "169.254.34.68" - "fe80::3468" Links: - "container_1" - "container_2" Aliases: - "server_x" - "server_y" required: true responses: 201: description: "Container created successfully" schema: type: "object" title: "ContainerCreateResponse" description: "OK response to ContainerCreate operation" required: [Id, Warnings] properties: Id: description: "The ID of the created container" type: "string" x-nullable: false Warnings: description: "Warnings encountered when creating the container" type: "array" x-nullable: false items: type: "string" examples: application/json: Id: "e90e34656806" Warnings: [] 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such image" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such image: c2ada9df5af8" 409: description: "conflict" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Container"] /containers/{id}/json: get: summary: "Inspect a container" description: "Return low-level information about a container." 
operationId: "ContainerInspect" produces: - "application/json" responses: 200: description: "no error" schema: type: "object" title: "ContainerInspectResponse" properties: Id: description: "The ID of the container" type: "string" Created: description: "The time the container was created" type: "string" Path: description: "The path to the command being run" type: "string" Args: description: "The arguments to the command being run" type: "array" items: type: "string" State: x-nullable: true $ref: "#/definitions/ContainerState" Image: description: "The container's image ID" type: "string" ResolvConfPath: type: "string" HostnamePath: type: "string" HostsPath: type: "string" LogPath: type: "string" Name: type: "string" RestartCount: type: "integer" Driver: type: "string" Platform: type: "string" MountLabel: type: "string" ProcessLabel: type: "string" AppArmorProfile: type: "string" ExecIDs: description: "IDs of exec instances that are running in the container." type: "array" items: type: "string" x-nullable: true HostConfig: $ref: "#/definitions/HostConfig" GraphDriver: $ref: "#/definitions/GraphDriverData" SizeRw: description: | The size of files that have been created or changed by this container. type: "integer" format: "int64" SizeRootFs: description: "The total size of all the files in this container." type: "integer" format: "int64" Mounts: type: "array" items: $ref: "#/definitions/MountPoint" Config: $ref: "#/definitions/ContainerConfig" NetworkSettings: $ref: "#/definitions/NetworkSettings" examples: application/json: AppArmorProfile: "" Args: - "-c" - "exit 9" Config: AttachStderr: true AttachStdin: false AttachStdout: true Cmd: - "/bin/sh" - "-c" - "exit 9" Domainname: "" Env: - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" Healthcheck: Test: ["CMD-SHELL", "exit 0"] Hostname: "ba033ac44011" Image: "ubuntu" Labels: com.example.vendor: "Acme" com.example.license: "GPL" com.example.version: "1.0" MacAddress: "" NetworkDisabled: false OpenStdin: false StdinOnce: false Tty: false User: "" Volumes: /volumes/data: {} WorkingDir: "" StopSignal: "SIGTERM" StopTimeout: 10 Created: "2015-01-06T15:47:31.485331387Z" Driver: "devicemapper" ExecIDs: - "b35395de42bc8abd327f9dd65d913b9ba28c74d2f0734eeeae84fa1c616a0fca" - "3fc1232e5cd20c8de182ed81178503dc6437f4e7ef12b52cc5e8de020652f1c4" HostConfig: MaximumIOps: 0 MaximumIOBps: 0 BlkioWeight: 0 BlkioWeightDevice: - {} BlkioDeviceReadBps: - {} BlkioDeviceWriteBps: - {} BlkioDeviceReadIOps: - {} BlkioDeviceWriteIOps: - {} ContainerIDFile: "" CpusetCpus: "" CpusetMems: "" CpuPercent: 80 CpuShares: 0 CpuPeriod: 100000 CpuRealtimePeriod: 1000000 CpuRealtimeRuntime: 10000 Devices: [] DeviceRequests: - Driver: "nvidia" Count: -1 DeviceIDs": ["0", "1", "GPU-fef8089b-4820-abfc-e83e-94318197576e"] Capabilities: [["gpu", "nvidia", "compute"]] Options: property1: "string" property2: "string" IpcMode: "" LxcConf: [] Memory: 0 MemorySwap: 0 MemoryReservation: 0 KernelMemory: 0 OomKillDisable: false OomScoreAdj: 500 NetworkMode: "bridge" PidMode: "" PortBindings: {} Privileged: false ReadonlyRootfs: false PublishAllPorts: false RestartPolicy: MaximumRetryCount: 2 Name: "on-failure" LogConfig: Type: "json-file" Sysctls: net.ipv4.ip_forward: "1" Ulimits: - {} VolumeDriver: "" ShmSize: 67108864 HostnamePath: "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/hostname" HostsPath: "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/hosts" LogPath: 
"/var/lib/docker/containers/1eb5fabf5a03807136561b3c00adcd2992b535d624d5e18b6cdc6a6844d9767b/1eb5fabf5a03807136561b3c00adcd2992b535d624d5e18b6cdc6a6844d9767b-json.log" Id: "ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39" Image: "04c5d3b7b0656168630d3ba35d8889bd0e9caafcaeb3004d2bfbc47e7c5d35d2" MountLabel: "" Name: "/boring_euclid" NetworkSettings: Bridge: "" SandboxID: "" HairpinMode: false LinkLocalIPv6Address: "" LinkLocalIPv6PrefixLen: 0 SandboxKey: "" EndpointID: "" Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 IPAddress: "" IPPrefixLen: 0 IPv6Gateway: "" MacAddress: "" Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "7587b82f0dada3656fda26588aee72630c6fab1536d36e394b2bfbcf898c971d" Gateway: "172.17.0.1" IPAddress: "172.17.0.2" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:12:00:02" Path: "/bin/sh" ProcessLabel: "" ResolvConfPath: "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/resolv.conf" RestartCount: 1 State: Error: "" ExitCode: 9 FinishedAt: "2015-01-06T15:47:32.080254511Z" Health: Status: "healthy" FailingStreak: 0 Log: - Start: "2019-12-22T10:59:05.6385933Z" End: "2019-12-22T10:59:05.8078452Z" ExitCode: 0 Output: "" OOMKilled: false Dead: false Paused: false Pid: 0 Restarting: false Running: true StartedAt: "2015-01-06T15:47:32.072697474Z" Status: "running" Mounts: - Name: "fac362...80535" Source: "/data" Destination: "/data" Driver: "local" Mode: "ro,Z" RW: false Propagation: "" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "size" in: "query" type: "boolean" default: false description: "Return the size of container as fields `SizeRw` and `SizeRootFs`" tags: ["Container"] /containers/{id}/top: get: summary: "List processes running inside a container" description: | On Unix systems, this is done by running the `ps` command. This endpoint is not supported on Windows. operationId: "ContainerTop" responses: 200: description: "no error" schema: type: "object" title: "ContainerTopResponse" description: "OK response to ContainerTop operation" properties: Titles: description: "The ps column titles" type: "array" items: type: "string" Processes: description: | Each process running in the container, where each is process is an array of values corresponding to the titles. type: "array" items: type: "array" items: type: "string" examples: application/json: Titles: - "UID" - "PID" - "PPID" - "C" - "STIME" - "TTY" - "TIME" - "CMD" Processes: - - "root" - "13642" - "882" - "0" - "17:03" - "pts/0" - "00:00:00" - "/bin/bash" - - "root" - "13735" - "13642" - "0" - "17:06" - "pts/0" - "00:00:00" - "sleep 10" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "ps_args" in: "query" description: "The arguments to pass to `ps`. 
For example, `aux`" type: "string" default: "-ef" tags: ["Container"] /containers/{id}/logs: get: summary: "Get container logs" description: | Get `stdout` and `stderr` logs from a container. Note: This endpoint works only for containers with the `json-file` or `journald` logging driver. operationId: "ContainerLogs" responses: 200: description: | logs returned as a stream in response body. For the stream format, [see the documentation for the attach endpoint](#operation/ContainerAttach). Note that unlike the attach endpoint, the logs endpoint does not upgrade the connection and does not set Content-Type. schema: type: "string" format: "binary" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "follow" in: "query" description: "Keep connection after returning logs." type: "boolean" default: false - name: "stdout" in: "query" description: "Return logs from `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Return logs from `stderr`" type: "boolean" default: false - name: "since" in: "query" description: "Only return logs since this time, as a UNIX timestamp" type: "integer" default: 0 - name: "until" in: "query" description: "Only return logs before this time, as a UNIX timestamp" type: "integer" default: 0 - name: "timestamps" in: "query" description: "Add timestamps to every log line" type: "boolean" default: false - name: "tail" in: "query" description: | Only return this number of log lines from the end of the logs. Specify as an integer or `all` to output all log lines. type: "string" default: "all" tags: ["Container"] /containers/{id}/changes: get: summary: "Get changes on a container’s filesystem" description: | Returns which files in a container's filesystem have been added, deleted, or modified. The `Kind` of modification can be one of: - `0`: Modified - `1`: Added - `2`: Deleted operationId: "ContainerChanges" produces: ["application/json"] responses: 200: description: "The list of changes" schema: type: "array" items: type: "object" x-go-name: "ContainerChangeResponseItem" title: "ContainerChangeResponseItem" description: "change item in response to ContainerChanges operation" required: [Path, Kind] properties: Path: description: "Path to file that has changed" type: "string" x-nullable: false Kind: description: "Kind of change" type: "integer" format: "uint8" enum: [0, 1, 2] x-nullable: false examples: application/json: - Path: "/dev" Kind: 0 - Path: "/dev/kmsg" Kind: 1 - Path: "/test" Kind: 1 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/export: get: summary: "Export a container" description: "Export the contents of a container as a tarball." 
operationId: "ContainerExport" produces: - "application/octet-stream" responses: 200: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/stats: get: summary: "Get container stats based on resource usage" description: | This endpoint returns a live stream of a container’s resource usage statistics. The `precpu_stats` is the CPU statistic of the *previous* read, and is used to calculate the CPU usage percentage. It is not an exact copy of the `cpu_stats` field. If either `precpu_stats.online_cpus` or `cpu_stats.online_cpus` is nil then for compatibility with older daemons the length of the corresponding `cpu_usage.percpu_usage` array should be used. On a cgroup v2 host, the following fields are not set * `blkio_stats`: all fields other than `io_service_bytes_recursive` * `cpu_stats`: `cpu_usage.percpu_usage` * `memory_stats`: `max_usage` and `failcnt` Also, `memory_stats.stats` fields are incompatible with cgroup v1. To calculate the values shown by the `stats` command of the docker cli tool the following formulas can be used: * used_memory = `memory_stats.usage - memory_stats.stats.cache` * available_memory = `memory_stats.limit` * Memory usage % = `(used_memory / available_memory) * 100.0` * cpu_delta = `cpu_stats.cpu_usage.total_usage - precpu_stats.cpu_usage.total_usage` * system_cpu_delta = `cpu_stats.system_cpu_usage - precpu_stats.system_cpu_usage` * number_cpus = `lenght(cpu_stats.cpu_usage.percpu_usage)` or `cpu_stats.online_cpus` * CPU usage % = `(cpu_delta / system_cpu_delta) * number_cpus * 100.0` operationId: "ContainerStats" produces: ["application/json"] responses: 200: description: "no error" schema: type: "object" examples: application/json: read: "2015-01-08T22:57:31.547920715Z" pids_stats: current: 3 networks: eth0: rx_bytes: 5338 rx_dropped: 0 rx_errors: 0 rx_packets: 36 tx_bytes: 648 tx_dropped: 0 tx_errors: 0 tx_packets: 8 eth5: rx_bytes: 4641 rx_dropped: 0 rx_errors: 0 rx_packets: 26 tx_bytes: 690 tx_dropped: 0 tx_errors: 0 tx_packets: 9 memory_stats: stats: total_pgmajfault: 0 cache: 0 mapped_file: 0 total_inactive_file: 0 pgpgout: 414 rss: 6537216 total_mapped_file: 0 writeback: 0 unevictable: 0 pgpgin: 477 total_unevictable: 0 pgmajfault: 0 total_rss: 6537216 total_rss_huge: 6291456 total_writeback: 0 total_inactive_anon: 0 rss_huge: 6291456 hierarchical_memory_limit: 67108864 total_pgfault: 964 total_active_file: 0 active_anon: 6537216 total_active_anon: 6537216 total_pgpgout: 414 total_cache: 0 inactive_anon: 0 active_file: 0 pgfault: 964 inactive_file: 0 total_pgpgin: 477 max_usage: 6651904 usage: 6537216 failcnt: 0 limit: 67108864 blkio_stats: {} cpu_stats: cpu_usage: percpu_usage: - 8646879 - 24472255 - 36438778 - 30657443 usage_in_usermode: 50000000 total_usage: 100215355 usage_in_kernelmode: 30000000 system_cpu_usage: 739306590000000 online_cpus: 4 throttling_data: periods: 0 throttled_periods: 0 throttled_time: 0 precpu_stats: cpu_usage: percpu_usage: - 8646879 - 24350896 - 36438778 - 30657443 usage_in_usermode: 50000000 total_usage: 100093996 usage_in_kernelmode: 30000000 system_cpu_usage: 9492140000000 online_cpus: 4 throttling_data: periods: 0 throttled_periods: 0 throttled_time: 0 404: description: "no such 
container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "stream" in: "query" description: | Stream the output. If false, the stats will be output once and then it will disconnect. type: "boolean" default: true - name: "one-shot" in: "query" description: | Only get a single stat instead of waiting for 2 cycles. Must be used with `stream=false`. type: "boolean" default: false tags: ["Container"] /containers/{id}/resize: post: summary: "Resize a container TTY" description: "Resize the TTY for a container." operationId: "ContainerResize" consumes: - "application/octet-stream" produces: - "text/plain" responses: 200: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "cannot resize container" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "h" in: "query" description: "Height of the TTY session in characters" type: "integer" - name: "w" in: "query" description: "Width of the TTY session in characters" type: "integer" tags: ["Container"] /containers/{id}/start: post: summary: "Start a container" operationId: "ContainerStart" responses: 204: description: "no error" 304: description: "container already started" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "detachKeys" in: "query" description: | Override the key sequence for detaching a container. Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,` or `_`. 
type: "string" tags: ["Container"] /containers/{id}/stop: post: summary: "Stop a container" operationId: "ContainerStop" responses: 204: description: "no error" 304: description: "container already stopped" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "t" in: "query" description: "Number of seconds to wait before killing the container" type: "integer" tags: ["Container"] /containers/{id}/restart: post: summary: "Restart a container" operationId: "ContainerRestart" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "t" in: "query" description: "Number of seconds to wait before killing the container" type: "integer" tags: ["Container"] /containers/{id}/kill: post: summary: "Kill a container" description: | Send a POSIX signal to a container, defaulting to killing to the container. operationId: "ContainerKill" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "container is not running" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "Container d37cde0fe4ad63c3a7252023b2f9800282894247d145cb5933ddf6e52cc03a28 is not running" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "signal" in: "query" description: "Signal to send to the container as an integer or string (e.g. `SIGINT`)" type: "string" default: "SIGKILL" tags: ["Container"] /containers/{id}/update: post: summary: "Update a container" description: | Change various configuration options of a container without having to recreate it. operationId: "ContainerUpdate" consumes: ["application/json"] produces: ["application/json"] responses: 200: description: "The container has been updated." 
schema: type: "object" title: "ContainerUpdateResponse" description: "OK response to ContainerUpdate operation" properties: Warnings: type: "array" items: type: "string" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "update" in: "body" required: true schema: allOf: - $ref: "#/definitions/Resources" - type: "object" properties: RestartPolicy: $ref: "#/definitions/RestartPolicy" example: BlkioWeight: 300 CpuShares: 512 CpuPeriod: 100000 CpuQuota: 50000 CpuRealtimePeriod: 1000000 CpuRealtimeRuntime: 10000 CpusetCpus: "0,1" CpusetMems: "0" Memory: 314572800 MemorySwap: 514288000 MemoryReservation: 209715200 KernelMemory: 52428800 RestartPolicy: MaximumRetryCount: 4 Name: "on-failure" tags: ["Container"] /containers/{id}/rename: post: summary: "Rename a container" operationId: "ContainerRename" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "name already in use" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "name" in: "query" required: true description: "New name for the container" type: "string" tags: ["Container"] /containers/{id}/pause: post: summary: "Pause a container" description: | Use the freezer cgroup to suspend all processes in a container. Traditionally, when suspending a process the `SIGSTOP` signal is used, which is observable by the process being suspended. With the freezer cgroup the process is unaware, and unable to capture, that it is being suspended, and subsequently resumed. operationId: "ContainerPause" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/unpause: post: summary: "Unpause a container" description: "Resume a container which has been paused." operationId: "ContainerUnpause" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/attach: post: summary: "Attach to a container" description: | Attach to a container to read its output or send it input. You can attach to the same container multiple times and you can reattach to containers that have been detached. Either the `stream` or `logs` parameter must be `true` for this endpoint to do anything. See the [documentation for the `docker attach` command](https://docs.docker.com/engine/reference/commandline/attach/) for more details. 
### Hijacking This endpoint hijacks the HTTP connection to transport `stdin`, `stdout`, and `stderr` on the same socket. This is the response from the daemon for an attach request: ``` HTTP/1.1 200 OK Content-Type: application/vnd.docker.raw-stream [STREAM] ``` After the headers and two new lines, the TCP connection can now be used for raw, bidirectional communication between the client and server. To hint potential proxies about connection hijacking, the Docker client can also optionally send connection upgrade headers. For example, the client sends this request to upgrade the connection: ``` POST /containers/16253994b7c4/attach?stream=1&stdout=1 HTTP/1.1 Upgrade: tcp Connection: Upgrade ``` The Docker daemon will respond with a `101 UPGRADED` response, and will similarly follow with the raw stream: ``` HTTP/1.1 101 UPGRADED Content-Type: application/vnd.docker.raw-stream Connection: Upgrade Upgrade: tcp [STREAM] ``` ### Stream format When the TTY setting is disabled in [`POST /containers/create`](#operation/ContainerCreate), the stream over the hijacked connection is multiplexed to separate out `stdout` and `stderr`. The stream consists of a series of frames, each containing a header and a payload. The header contains the information which the stream writes (`stdout` or `stderr`). It also contains the size of the associated frame encoded in the last four bytes (`uint32`). It is encoded on the first eight bytes like this: ```go header := [8]byte{STREAM_TYPE, 0, 0, 0, SIZE1, SIZE2, SIZE3, SIZE4} ``` `STREAM_TYPE` can be: - 0: `stdin` (is written on `stdout`) - 1: `stdout` - 2: `stderr` `SIZE1, SIZE2, SIZE3, SIZE4` are the four bytes of the `uint32` size encoded as big endian. Following the header is the payload, which is the specified number of bytes of `STREAM_TYPE`. The simplest way to implement this protocol is the following: 1. Read 8 bytes. 2. Choose `stdout` or `stderr` depending on the first byte. 3. Extract the frame size from the last four bytes. 4. Read the extracted size and output it on the correct output. 5. Goto 1. ### Stream format when using a TTY When the TTY setting is enabled in [`POST /containers/create`](#operation/ContainerCreate), the stream is not multiplexed. The data exchanged over the hijacked connection is simply the raw data from the process PTY and client's `stdin`. operationId: "ContainerAttach" produces: - "application/vnd.docker.raw-stream" responses: 101: description: "no error, hints proxy about hijacking" 200: description: "no error, no upgrade header found" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "detachKeys" in: "query" description: | Override the key sequence for detaching a container. Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,` or `_`. type: "string" - name: "logs" in: "query" description: | Replay previous logs from the container. This is useful for attaching to a container that has started and you want to output everything since the container started. If `stream` is also enabled, once all the previous output has been returned, it will seamlessly transition into streaming current output.
type: "boolean" default: false - name: "stream" in: "query" description: | Stream attached streams from the time the request was made onwards. type: "boolean" default: false - name: "stdin" in: "query" description: "Attach to `stdin`" type: "boolean" default: false - name: "stdout" in: "query" description: "Attach to `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Attach to `stderr`" type: "boolean" default: false tags: ["Container"] /containers/{id}/attach/ws: get: summary: "Attach to a container via a websocket" operationId: "ContainerAttachWebsocket" responses: 101: description: "no error, hints proxy about hijacking" 200: description: "no error, no upgrade header found" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "detachKeys" in: "query" description: | Override the key sequence for detaching a container.Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,`, or `_`. type: "string" - name: "logs" in: "query" description: "Return logs" type: "boolean" default: false - name: "stream" in: "query" description: "Return stream" type: "boolean" default: false - name: "stdin" in: "query" description: "Attach to `stdin`" type: "boolean" default: false - name: "stdout" in: "query" description: "Attach to `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Attach to `stderr`" type: "boolean" default: false tags: ["Container"] /containers/{id}/wait: post: summary: "Wait for a container" description: "Block until a container stops, then returns the exit code." operationId: "ContainerWait" produces: ["application/json"] responses: 200: description: "The container has exit." schema: type: "object" title: "ContainerWaitResponse" description: "OK response to ContainerWait operation" required: [StatusCode] properties: StatusCode: description: "Exit code of the container" type: "integer" x-nullable: false Error: description: "container waiting error, if any" type: "object" properties: Message: description: "Details of an error" type: "string" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "condition" in: "query" description: | Wait until a container state reaches the given condition, either 'not-running' (default), 'next-exit', or 'removed'. type: "string" default: "not-running" tags: ["Container"] /containers/{id}: delete: summary: "Remove a container" operationId: "ContainerDelete" responses: 204: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "conflict" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: | You cannot remove a running container: c2ada9df5af8. 
Stop the container before attempting removal or force remove 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "v" in: "query" description: "Remove anonymous volumes associated with the container." type: "boolean" default: false - name: "force" in: "query" description: "If the container is running, kill it before removing it." type: "boolean" default: false - name: "link" in: "query" description: "Remove the specified link associated with the container." type: "boolean" default: false tags: ["Container"] /containers/{id}/archive: head: summary: "Get information about files in a container" description: | A response header `X-Docker-Container-Path-Stat` is returned, containing a base64 - encoded JSON object with some filesystem header information about the path. operationId: "ContainerArchiveInfo" responses: 200: description: "no error" headers: X-Docker-Container-Path-Stat: type: "string" description: | A base64 - encoded JSON object with some filesystem header information about the path 400: description: "Bad parameter" schema: allOf: - $ref: "#/definitions/ErrorResponse" - type: "object" properties: message: description: | The error message. Either "must specify path parameter" (path cannot be empty) or "not a directory" (path was asserted to be a directory but exists as a file). type: "string" x-nullable: false 404: description: "Container or path does not exist" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "path" in: "query" required: true description: "Resource in the container’s filesystem to archive." type: "string" tags: ["Container"] get: summary: "Get an archive of a filesystem resource in a container" description: "Get a tar archive of a resource in the filesystem of container id." operationId: "ContainerArchive" produces: ["application/x-tar"] responses: 200: description: "no error" 400: description: "Bad parameter" schema: allOf: - $ref: "#/definitions/ErrorResponse" - type: "object" properties: message: description: | The error message. Either "must specify path parameter" (path cannot be empty) or "not a directory" (path was asserted to be a directory but exists as a file). type: "string" x-nullable: false 404: description: "Container or path does not exist" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "path" in: "query" required: true description: "Resource in the container’s filesystem to archive." type: "string" tags: ["Container"] put: summary: "Extract an archive of files or folders to a directory in a container" description: "Upload a tar archive to be extracted to a path in the filesystem of container id." 
operationId: "PutContainerArchive" consumes: ["application/x-tar", "application/octet-stream"] responses: 200: description: "The content was extracted successfully" 400: description: "Bad parameter" schema: $ref: "#/definitions/ErrorResponse" 403: description: "Permission denied, the volume or container rootfs is marked as read-only." schema: $ref: "#/definitions/ErrorResponse" 404: description: "No such container or path does not exist inside the container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "path" in: "query" required: true description: "Path to a directory in the container to extract the archive’s contents into. " type: "string" - name: "noOverwriteDirNonDir" in: "query" description: | If `1`, `true`, or `True` then it will be an error if unpacking the given content would cause an existing directory to be replaced with a non-directory and vice versa. type: "string" - name: "copyUIDGID" in: "query" description: | If `1`, `true`, then it will copy UID/GID maps to the dest file or dir type: "string" - name: "inputStream" in: "body" required: true description: | The input stream must be a tar archive compressed with one of the following algorithms: `identity` (no compression), `gzip`, `bzip2`, or `xz`. schema: type: "string" format: "binary" tags: ["Container"] /containers/prune: post: summary: "Delete stopped containers" produces: - "application/json" operationId: "ContainerPrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `until=<timestamp>` Prune containers created before this timestamp. The `<timestamp>` can be Unix timestamps, date formatted timestamps, or Go duration strings (e.g. `10m`, `1h30m`) computed relative to the daemon machine’s time. - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune containers with (or without, in case `label!=...` is used) the specified labels. type: "string" responses: 200: description: "No error" schema: type: "object" title: "ContainerPruneResponse" properties: ContainersDeleted: description: "Container IDs that were deleted" type: "array" items: type: "string" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Container"] /images/json: get: summary: "List Images" description: "Returns a list of images on the server. Note that it uses a different, smaller representation of an image than inspecting a single image." 
operationId: "ImageList" produces: - "application/json" responses: 200: description: "Summary image data for the images matching the query" schema: type: "array" items: $ref: "#/definitions/ImageSummary" examples: application/json: - Id: "sha256:e216a057b1cb1efc11f8a268f37ef62083e70b1b38323ba252e25ac88904a7e8" ParentId: "" RepoTags: - "ubuntu:12.04" - "ubuntu:precise" RepoDigests: - "ubuntu@sha256:992069aee4016783df6345315302fa59681aae51a8eeb2f889dea59290f21787" Created: 1474925151 Size: 103579269 VirtualSize: 103579269 SharedSize: 0 Labels: {} Containers: 2 - Id: "sha256:3e314f95dcace0f5e4fd37b10862fe8398e3c60ed36600bc0ca5fda78b087175" ParentId: "" RepoTags: - "ubuntu:12.10" - "ubuntu:quantal" RepoDigests: - "ubuntu@sha256:002fba3e3255af10be97ea26e476692a7ebed0bb074a9ab960b2e7a1526b15d7" - "ubuntu@sha256:68ea0200f0b90df725d99d823905b04cf844f6039ef60c60bf3e019915017bd3" Created: 1403128455 Size: 172064416 VirtualSize: 172064416 SharedSize: 0 Labels: {} Containers: 5 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "all" in: "query" description: "Show all images. Only images from a final layer (no children) are shown by default." type: "boolean" default: false - name: "filters" in: "query" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the images list. Available filters: - `before`=(`<image-name>[:<tag>]`, `<image id>` or `<image@digest>`) - `dangling=true` - `label=key` or `label="key=value"` of an image label - `reference`=(`<image-name>[:<tag>]`) - `since`=(`<image-name>[:<tag>]`, `<image id>` or `<image@digest>`) type: "string" - name: "shared-size" in: "query" description: "Compute and show shared size as a `SharedSize` field on each image." type: "boolean" default: false - name: "digests" in: "query" description: "Show digest information as a `RepoDigests` field on each image." type: "boolean" default: false tags: ["Image"] /build: post: summary: "Build an image" description: | Build an image from a tar archive with a `Dockerfile` in it. The `Dockerfile` specifies how the image is built from the tar archive. It is typically in the archive's root, but can be at a different path or have a different name by specifying the `dockerfile` parameter. [See the `Dockerfile` reference for more information](https://docs.docker.com/engine/reference/builder/). The Docker daemon performs a preliminary validation of the `Dockerfile` before starting the build, and returns an error if the syntax is incorrect. After that, each instruction is run one-by-one until the ID of the new image is output. The build is canceled if the client drops the connection by quitting or being killed. operationId: "ImageBuild" consumes: - "application/octet-stream" produces: - "application/json" parameters: - name: "inputStream" in: "body" description: "A tar archive compressed with one of the following algorithms: identity (no compression), gzip, bzip2, xz." schema: type: "string" format: "binary" - name: "dockerfile" in: "query" description: "Path within the build context to the `Dockerfile`. This is ignored if `remote` is specified and points to an external `Dockerfile`." type: "string" default: "Dockerfile" - name: "t" in: "query" description: "A name and optional tag to apply to the image in the `name:tag` format. If you omit the tag the default `latest` value is assumed. You can provide several `t` parameters." 
type: "string" - name: "extrahosts" in: "query" description: "Extra hosts to add to /etc/hosts" type: "string" - name: "remote" in: "query" description: "A Git repository URI or HTTP/HTTPS context URI. If the URI points to a single text file, the file’s contents are placed into a file called `Dockerfile` and the image is built from that file. If the URI points to a tarball, the file is downloaded by the daemon and the contents therein used as the context for the build. If the URI points to a tarball and the `dockerfile` parameter is also specified, there must be a file with the corresponding path inside the tarball." type: "string" - name: "q" in: "query" description: "Suppress verbose build output." type: "boolean" default: false - name: "nocache" in: "query" description: "Do not use the cache when building the image." type: "boolean" default: false - name: "cachefrom" in: "query" description: "JSON array of images used for build cache resolution." type: "string" - name: "pull" in: "query" description: "Attempt to pull the image even if an older image exists locally." type: "string" - name: "rm" in: "query" description: "Remove intermediate containers after a successful build." type: "boolean" default: true - name: "forcerm" in: "query" description: "Always remove intermediate containers, even upon failure." type: "boolean" default: false - name: "memory" in: "query" description: "Set memory limit for build." type: "integer" - name: "memswap" in: "query" description: "Total memory (memory + swap). Set as `-1` to disable swap." type: "integer" - name: "cpushares" in: "query" description: "CPU shares (relative weight)." type: "integer" - name: "cpusetcpus" in: "query" description: "CPUs in which to allow execution (e.g., `0-3`, `0,1`)." type: "string" - name: "cpuperiod" in: "query" description: "The length of a CPU period in microseconds." type: "integer" - name: "cpuquota" in: "query" description: "Microseconds of CPU time that the container can get in a CPU period." type: "integer" - name: "buildargs" in: "query" description: > JSON map of string pairs for build-time variables. Users pass these values at build-time. Docker uses the buildargs as the environment context for commands run via the `Dockerfile` RUN instruction, or for variable expansion in other `Dockerfile` instructions. This is not meant for passing secret values. For example, the build arg `FOO=bar` would become `{"FOO":"bar"}` in JSON. This would result in the query parameter `buildargs={"FOO":"bar"}`. Note that `{"FOO":"bar"}` should be URI component encoded. [Read more about the buildargs instruction.](https://docs.docker.com/engine/reference/builder/#arg) type: "string" - name: "shmsize" in: "query" description: "Size of `/dev/shm` in bytes. The size must be greater than 0. If omitted the system uses 64MB." type: "integer" - name: "squash" in: "query" description: "Squash the resulting images layers into a single layer. *(Experimental release only.)*" type: "boolean" - name: "labels" in: "query" description: "Arbitrary key/value labels to set on the image, as a JSON map of string pairs." type: "string" - name: "networkmode" in: "query" description: | Sets the networking mode for the run commands during build. Supported standard values are: `bridge`, `host`, `none`, and `container:<name|id>`. Any other value is taken as a custom network's name or ID to which this container should connect to. 
type: "string" - name: "Content-type" in: "header" type: "string" enum: - "application/x-tar" default: "application/x-tar" - name: "X-Registry-Config" in: "header" description: | This is a base64-encoded JSON object with auth configurations for multiple registries that a build may refer to. The key is a registry URL, and the value is an auth configuration object, [as described in the authentication section](#section/Authentication). For example: ``` { "docker.example.com": { "username": "janedoe", "password": "hunter2" }, "https://index.docker.io/v1/": { "username": "mobydock", "password": "conta1n3rize14" } } ``` Only the registry domain name (and port if not the default 443) are required. However, for legacy reasons, the Docker Hub registry must be specified with both a `https://` prefix and a `/v1/` suffix even though Docker will prefer to use the v2 registry API. type: "string" - name: "platform" in: "query" description: "Platform in the format os[/arch[/variant]]" type: "string" default: "" - name: "target" in: "query" description: "Target build stage" type: "string" default: "" - name: "outputs" in: "query" description: "BuildKit output configuration" type: "string" default: "" responses: 200: description: "no error" 400: description: "Bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Image"] /build/prune: post: summary: "Delete builder cache" produces: - "application/json" operationId: "BuildPrune" parameters: - name: "keep-storage" in: "query" description: "Amount of disk space in bytes to keep for cache" type: "integer" format: "int64" - name: "all" in: "query" type: "boolean" description: "Remove all types of build cache" - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the list of build cache objects. Available filters: - `until=<duration>`: duration relative to daemon's time, during which build cache was not used, in Go's duration format (e.g., '24h') - `id=<id>` - `parent=<id>` - `type=<string>` - `description=<string>` - `inuse` - `shared` - `private` responses: 200: description: "No error" schema: type: "object" title: "BuildPruneResponse" properties: CachesDeleted: type: "array" items: description: "ID of build cache object" type: "string" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Image"] /images/create: post: summary: "Create an image" description: "Create an image by either pulling it from a registry or importing it." operationId: "ImageCreate" consumes: - "text/plain" - "application/octet-stream" produces: - "application/json" responses: 200: description: "no error" 404: description: "repository does not exist or no read access" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "fromImage" in: "query" description: "Name of the image to pull. The name may include a tag or digest. This parameter may only be used when pulling an image. The pull is cancelled if the HTTP connection is closed." type: "string" - name: "fromSrc" in: "query" description: "Source to import. The value may be a URL from which the image can be retrieved or `-` to read the image from the request body. This parameter may only be used when importing an image." 
type: "string" - name: "repo" in: "query" description: "Repository name given to an image when it is imported. The repo may include a tag. This parameter may only be used when importing an image." type: "string" - name: "tag" in: "query" description: "Tag or digest. If empty when pulling an image, this causes all tags for the given image to be pulled." type: "string" - name: "message" in: "query" description: "Set commit message for imported image." type: "string" - name: "inputImage" in: "body" description: "Image content if the value `-` has been specified in fromSrc query parameter" schema: type: "string" required: false - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration. Refer to the [authentication section](#section/Authentication) for details. type: "string" - name: "platform" in: "query" description: "Platform in the format os[/arch[/variant]]" type: "string" default: "" tags: ["Image"] /images/{name}/json: get: summary: "Inspect an image" description: "Return low-level information about an image." operationId: "ImageInspect" produces: - "application/json" responses: 200: description: "No error" schema: $ref: "#/definitions/Image" examples: application/json: Id: "sha256:85f05633ddc1c50679be2b16a0479ab6f7637f8884e0cfe0f4d20e1ebb3d6e7c" Container: "cb91e48a60d01f1e27028b4fc6819f4f290b3cf12496c8176ec714d0d390984a" Comment: "" Os: "linux" Architecture: "amd64" Parent: "sha256:91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c" ContainerConfig: Tty: false Hostname: "e611e15f9c9d" Domainname: "" AttachStdout: false PublishService: "" AttachStdin: false OpenStdin: false StdinOnce: false NetworkDisabled: false OnBuild: [] Image: "91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c" User: "" WorkingDir: "" MacAddress: "" AttachStderr: false Labels: com.example.license: "GPL" com.example.version: "1.0" com.example.vendor: "Acme" Env: - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" Cmd: - "/bin/sh" - "-c" - "#(nop) LABEL com.example.vendor=Acme com.example.license=GPL com.example.version=1.0" DockerVersion: "1.9.0-dev" VirtualSize: 188359297 Size: 0 Author: "" Created: "2015-09-10T08:30:53.26995814Z" GraphDriver: Name: "aufs" Data: {} RepoDigests: - "localhost:5000/test/busybox/example@sha256:cbbf2f9a99b47fc460d422812b6a5adff7dfee951d8fa2e4a98caa0382cfbdbf" RepoTags: - "example:1.0" - "example:latest" - "example:stable" Config: Image: "91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c" NetworkDisabled: false OnBuild: [] StdinOnce: false PublishService: "" AttachStdin: false OpenStdin: false Domainname: "" AttachStdout: false Tty: false Hostname: "e611e15f9c9d" Cmd: - "/bin/bash" Env: - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" Labels: com.example.vendor: "Acme" com.example.version: "1.0" com.example.license: "GPL" MacAddress: "" AttachStderr: false WorkingDir: "" User: "" RootFS: Type: "layers" Layers: - "sha256:1834950e52ce4d5a88a1bbd131c537f4d0e56d10ff0dd69e66be3b7dfa9df7e6" - "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such image: someimage (tag: latest)" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or id" type: "string" required: true tags: ["Image"] /images/{name}/history: get: summary: "Get the history of an image" 
description: "Return parent layers of an image." operationId: "ImageHistory" produces: ["application/json"] responses: 200: description: "List of image layers" schema: type: "array" items: type: "object" x-go-name: HistoryResponseItem title: "HistoryResponseItem" description: "individual image layer information in response to ImageHistory operation" required: [Id, Created, CreatedBy, Tags, Size, Comment] properties: Id: type: "string" x-nullable: false Created: type: "integer" format: "int64" x-nullable: false CreatedBy: type: "string" x-nullable: false Tags: type: "array" items: type: "string" Size: type: "integer" format: "int64" x-nullable: false Comment: type: "string" x-nullable: false examples: application/json: - Id: "3db9c44f45209632d6050b35958829c3a2aa256d81b9a7be45b362ff85c54710" Created: 1398108230 CreatedBy: "/bin/sh -c #(nop) ADD file:eb15dbd63394e063b805a3c32ca7bf0266ef64676d5a6fab4801f2e81e2a5148 in /" Tags: - "ubuntu:lucid" - "ubuntu:10.04" Size: 182964289 Comment: "" - Id: "6cfa4d1f33fb861d4d114f43b25abd0ac737509268065cdfd69d544a59c85ab8" Created: 1398108222 CreatedBy: "/bin/sh -c #(nop) MAINTAINER Tianon Gravi <[email protected]> - mkimage-debootstrap.sh -i iproute,iputils-ping,ubuntu-minimal -t lucid.tar.xz lucid http://archive.ubuntu.com/ubuntu/" Tags: [] Size: 0 Comment: "" - Id: "511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158" Created: 1371157430 CreatedBy: "" Tags: - "scratch12:latest" - "scratch:latest" Size: 0 Comment: "Imported from -" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID" type: "string" required: true tags: ["Image"] /images/{name}/push: post: summary: "Push an image" description: | Push an image to a registry. If you wish to push an image on to a private registry, that image must already have a tag which references the registry. For example, `registry.example.com/myimage:latest`. The push is cancelled if the HTTP connection is closed. operationId: "ImagePush" consumes: - "application/octet-stream" responses: 200: description: "No error" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID." type: "string" required: true - name: "tag" in: "query" description: "The tag to associate with the image on the registry." type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration. Refer to the [authentication section](#section/Authentication) for details. type: "string" required: true tags: ["Image"] /images/{name}/tag: post: summary: "Tag an image" description: "Tag an image so that it becomes part of a repository." operationId: "ImageTag" responses: 201: description: "No error" 400: description: "Bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Conflict" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID to tag." type: "string" required: true - name: "repo" in: "query" description: "The repository to tag in. For example, `someuser/someimage`." 
type: "string" - name: "tag" in: "query" description: "The name of the new tag." type: "string" tags: ["Image"] /images/{name}: delete: summary: "Remove an image" description: | Remove an image, along with any untagged parent images that were referenced by that image. Images can't be removed if they have descendant images, are being used by a running container or are being used by a build. operationId: "ImageDelete" produces: ["application/json"] responses: 200: description: "The image was deleted successfully" schema: type: "array" items: $ref: "#/definitions/ImageDeleteResponseItem" examples: application/json: - Untagged: "3e2f21a89f" - Deleted: "3e2f21a89f" - Deleted: "53b4f83ac9" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Conflict" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID" type: "string" required: true - name: "force" in: "query" description: "Remove the image even if it is being used by stopped containers or has other tags" type: "boolean" default: false - name: "noprune" in: "query" description: "Do not delete untagged parent images" type: "boolean" default: false tags: ["Image"] /images/search: get: summary: "Search images" description: "Search for an image on Docker Hub." operationId: "ImageSearch" produces: - "application/json" responses: 200: description: "No error" schema: type: "array" items: type: "object" title: "ImageSearchResponseItem" properties: description: type: "string" is_official: type: "boolean" is_automated: type: "boolean" name: type: "string" star_count: type: "integer" examples: application/json: - description: "" is_official: false is_automated: false name: "wma55/u1210sshd" star_count: 0 - description: "" is_official: false is_automated: false name: "jdswinbank/sshd" star_count: 0 - description: "" is_official: false is_automated: false name: "vgauthier/sshd" star_count: 0 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "term" in: "query" description: "Term to search" type: "string" required: true - name: "limit" in: "query" description: "Maximum number of results to return" type: "integer" - name: "filters" in: "query" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the images list. Available filters: - `is-automated=(true|false)` - `is-official=(true|false)` - `stars=<number>` Matches images that has at least 'number' stars. type: "string" tags: ["Image"] /images/prune: post: summary: "Delete unused images" produces: - "application/json" operationId: "ImagePrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `dangling=<boolean>` When set to `true` (or `1`), prune only unused *and* untagged images. When set to `false` (or `0`), all unused images are pruned. - `until=<string>` Prune images created before this timestamp. The `<timestamp>` can be Unix timestamps, date formatted timestamps, or Go duration strings (e.g. `10m`, `1h30m`) computed relative to the daemon machine’s time. - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune images with (or without, in case `label!=...` is used) the specified labels. 
type: "string" responses: 200: description: "No error" schema: type: "object" title: "ImagePruneResponse" properties: ImagesDeleted: description: "Images that were deleted" type: "array" items: $ref: "#/definitions/ImageDeleteResponseItem" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Image"] /auth: post: summary: "Check auth configuration" description: | Validate credentials for a registry and, if available, get an identity token for accessing the registry without password. operationId: "SystemAuth" consumes: ["application/json"] produces: ["application/json"] responses: 200: description: "An identity token was generated successfully." schema: type: "object" title: "SystemAuthResponse" required: [Status] properties: Status: description: "The status of the authentication" type: "string" x-nullable: false IdentityToken: description: "An opaque token used to authenticate a user after a successful login" type: "string" x-nullable: false examples: application/json: Status: "Login Succeeded" IdentityToken: "9cbaf023786cd7..." 204: description: "No error" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "authConfig" in: "body" description: "Authentication to check" schema: $ref: "#/definitions/AuthConfig" tags: ["System"] /info: get: summary: "Get system information" operationId: "SystemInfo" produces: - "application/json" responses: 200: description: "No error" schema: $ref: "#/definitions/SystemInfo" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /version: get: summary: "Get version" description: "Returns the version of Docker that is running and various information about the system that Docker is running on." operationId: "SystemVersion" produces: ["application/json"] responses: 200: description: "no error" schema: $ref: "#/definitions/SystemVersion" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /_ping: get: summary: "Ping" description: "This is a dummy endpoint you can use to test if the server is accessible." operationId: "SystemPing" produces: ["text/plain"] responses: 200: description: "no error" schema: type: "string" example: "OK" headers: API-Version: type: "string" description: "Max API Version the server supports" Builder-Version: type: "string" description: "Default version of docker image builder" Docker-Experimental: type: "boolean" description: "If the server is running with experimental mode enabled" Cache-Control: type: "string" default: "no-cache, no-store, must-revalidate" Pragma: type: "string" default: "no-cache" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" headers: Cache-Control: type: "string" default: "no-cache, no-store, must-revalidate" Pragma: type: "string" default: "no-cache" tags: ["System"] head: summary: "Ping" description: "This is a dummy endpoint you can use to test if the server is accessible." 
operationId: "SystemPingHead" produces: ["text/plain"] responses: 200: description: "no error" schema: type: "string" example: "(empty)" headers: API-Version: type: "string" description: "Max API Version the server supports" Builder-Version: type: "string" description: "Default version of docker image builder" Docker-Experimental: type: "boolean" description: "If the server is running with experimental mode enabled" Cache-Control: type: "string" default: "no-cache, no-store, must-revalidate" Pragma: type: "string" default: "no-cache" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /commit: post: summary: "Create a new image from a container" operationId: "ImageCommit" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "containerConfig" in: "body" description: "The container configuration" schema: $ref: "#/definitions/ContainerConfig" - name: "container" in: "query" description: "The ID or name of the container to commit" type: "string" - name: "repo" in: "query" description: "Repository name for the created image" type: "string" - name: "tag" in: "query" description: "Tag name for the create image" type: "string" - name: "comment" in: "query" description: "Commit message" type: "string" - name: "author" in: "query" description: "Author of the image (e.g., `John Hannibal Smith <[email protected]>`)" type: "string" - name: "pause" in: "query" description: "Whether to pause the container before committing" type: "boolean" default: true - name: "changes" in: "query" description: "`Dockerfile` instructions to apply while committing" type: "string" tags: ["Image"] /events: get: summary: "Monitor events" description: | Stream real-time events from the server. Various objects within Docker report events when something happens to them. 
Containers report these events: `attach`, `commit`, `copy`, `create`, `destroy`, `detach`, `die`, `exec_create`, `exec_detach`, `exec_start`, `exec_die`, `export`, `health_status`, `kill`, `oom`, `pause`, `rename`, `resize`, `restart`, `start`, `stop`, `top`, `unpause`, `update`, and `prune` Images report these events: `delete`, `import`, `load`, `pull`, `push`, `save`, `tag`, `untag`, and `prune` Volumes report these events: `create`, `mount`, `unmount`, `destroy`, and `prune` Networks report these events: `create`, `connect`, `disconnect`, `destroy`, `update`, `remove`, and `prune` The Docker daemon reports these events: `reload` Services report these events: `create`, `update`, and `remove` Nodes report these events: `create`, `update`, and `remove` Secrets report these events: `create`, `update`, and `remove` Configs report these events: `create`, `update`, and `remove` The Builder reports `prune` events operationId: "SystemEvents" produces: - "application/json" responses: 200: description: "no error" schema: type: "object" title: "SystemEventsResponse" properties: Type: description: "The type of object emitting the event" type: "string" Action: description: "The type of event" type: "string" Actor: type: "object" properties: ID: description: "The ID of the object emitting the event" type: "string" Attributes: description: "Various key/value attributes of the object, depending on its type" type: "object" additionalProperties: type: "string" time: description: "Timestamp of event" type: "integer" timeNano: description: "Timestamp of event, with nanosecond accuracy" type: "integer" format: "int64" examples: application/json: Type: "container" Action: "create" Actor: ID: "ede54ee1afda366ab42f824e8a5ffd195155d853ceaec74a927f249ea270c743" Attributes: com.example.some-label: "some-label-value" image: "alpine" name: "my-container" time: 1461943101 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "since" in: "query" description: "Show events created since this timestamp then stream new events." type: "string" - name: "until" in: "query" description: "Show events created until this timestamp then stop streaming." type: "string" - name: "filters" in: "query" description: | A JSON encoded value of filters (a `map[string][]string`) to process on the event list. 
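Because `SystemEvents` keeps the connection open and streams events as they happen, a client normally reads the body incrementally. The sketch below (an illustration, not part of the spec) subscribes to container events only, using the JSON-encoded `filters` map described above; it assumes the default unix socket and that each event arrives as its own JSON document on the stream.

```go
package main

import (
	"bufio"
	"context"
	"fmt"
	"log"
	"net"
	"net/http"
	"net/url"
)

func main() {
	cli := &http.Client{Transport: &http.Transport{
		DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
			return (&net.Dialer{}).DialContext(ctx, "unix", "/var/run/docker.sock")
		},
	}}

	// Only watch container events: filters={"type":["container"]}.
	q := url.Values{}
	q.Set("filters", `{"type":["container"]}`)

	resp, err := cli.Get("http://docker/v1.41/events?" + q.Encode())
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// Print each streamed event document until the connection is closed.
	sc := bufio.NewScanner(resp.Body)
	for sc.Scan() {
		fmt.Println(sc.Text())
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
}
```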
Available filters: - `config=<string>` config name or ID - `container=<string>` container name or ID - `daemon=<string>` daemon name or ID - `event=<string>` event type - `image=<string>` image name or ID - `label=<string>` image or container label - `network=<string>` network name or ID - `node=<string>` node ID - `plugin`=<string> plugin name or ID - `scope`=<string> local or swarm - `secret=<string>` secret name or ID - `service=<string>` service name or ID - `type=<string>` object to filter by, one of `container`, `image`, `volume`, `network`, `daemon`, `plugin`, `node`, `service`, `secret` or `config` - `volume=<string>` volume name type: "string" tags: ["System"] /system/df: get: summary: "Get data usage information" operationId: "SystemDataUsage" responses: 200: description: "no error" schema: type: "object" title: "SystemDataUsageResponse" properties: LayersSize: type: "integer" format: "int64" Images: type: "array" items: $ref: "#/definitions/ImageSummary" Containers: type: "array" items: $ref: "#/definitions/ContainerSummary" Volumes: type: "array" items: $ref: "#/definitions/Volume" BuildCache: type: "array" items: $ref: "#/definitions/BuildCache" example: LayersSize: 1092588 Images: - Id: "sha256:2b8fd9751c4c0f5dd266fcae00707e67a2545ef34f9a29354585f93dac906749" ParentId: "" RepoTags: - "busybox:latest" RepoDigests: - "busybox@sha256:a59906e33509d14c036c8678d687bd4eec81ed7c4b8ce907b888c607f6a1e0e6" Created: 1466724217 Size: 1092588 SharedSize: 0 VirtualSize: 1092588 Labels: {} Containers: 1 Containers: - Id: "e575172ed11dc01bfce087fb27bee502db149e1a0fad7c296ad300bbff178148" Names: - "/top" Image: "busybox" ImageID: "sha256:2b8fd9751c4c0f5dd266fcae00707e67a2545ef34f9a29354585f93dac906749" Command: "top" Created: 1472592424 Ports: [] SizeRootFs: 1092588 Labels: {} State: "exited" Status: "Exited (0) 56 minutes ago" HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: IPAMConfig: null Links: null Aliases: null NetworkID: "d687bc59335f0e5c9ee8193e5612e8aee000c8c62ea170cfb99c098f95899d92" EndpointID: "8ed5115aeaad9abb174f68dcf135b49f11daf597678315231a32ca28441dec6a" Gateway: "172.18.0.1" IPAddress: "172.18.0.2" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:12:00:02" Mounts: [] Volumes: - Name: "my-volume" Driver: "local" Mountpoint: "/var/lib/docker/volumes/my-volume/_data" Labels: null Scope: "local" Options: null UsageData: Size: 10920104 RefCount: 2 BuildCache: - ID: "hw53o5aio51xtltp5xjp8v7fx" Parent: "" Type: "regular" Description: "pulled from docker.io/library/debian@sha256:234cb88d3020898631af0ccbbcca9a66ae7306ecd30c9720690858c1b007d2a0" InUse: false Shared: true Size: 0 CreatedAt: "2021-06-28T13:31:01.474619385Z" LastUsedAt: "2021-07-07T22:02:32.738075951Z" UsageCount: 26 - ID: "ndlpt0hhvkqcdfkputsk4cq9c" Parent: "hw53o5aio51xtltp5xjp8v7fx" Type: "regular" Description: "mount / from exec /bin/sh -c echo 'Binary::apt::APT::Keep-Downloaded-Packages \"true\";' > /etc/apt/apt.conf.d/keep-cache" InUse: false Shared: true Size: 51 CreatedAt: "2021-06-28T13:31:03.002625487Z" LastUsedAt: "2021-07-07T22:02:32.773909517Z" UsageCount: 26 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "type" in: "query" description: | Object types, for which to compute and return data. 
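A quick sketch for the `SystemDataUsage` endpoint shown above, decoding only a couple of the documented fields (socket path and version prefix are assumptions, as before):

```go
package main

import (
	"context"
	"encoding/json"
	"log"
	"net"
	"net/http"
)

func main() {
	cli := &http.Client{Transport: &http.Transport{
		DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
			return (&net.Dialer{}).DialContext(ctx, "unix", "/var/run/docker.sock")
		},
	}}

	resp, err := cli.Get("http://docker/v1.41/system/df")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// Decode only the fields this sketch cares about.
	var du struct {
		LayersSize int64
		Images     []struct {
			RepoTags []string
			Size     int64
		}
	}
	if err := json.NewDecoder(resp.Body).Decode(&du); err != nil {
		log.Fatal(err)
	}
	log.Printf("layers: %d bytes, %d images", du.LayersSize, len(du.Images))
}
```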
type: "array" collectionFormat: multi items: type: "string" enum: ["container", "image", "volume", "build-cache"] tags: ["System"] /images/{name}/get: get: summary: "Export an image" description: | Get a tarball containing all images and metadata for a repository. If `name` is a specific name and tag (e.g. `ubuntu:latest`), then only that image (and its parents) are returned. If `name` is an image ID, similarly only that image (and its parents) are returned, but with the exclusion of the `repositories` file in the tarball, as there were no image names referenced. ### Image tarball format An image tarball contains one directory per image layer (named using its long ID), each containing these files: - `VERSION`: currently `1.0` - the file format version - `json`: detailed layer information, similar to `docker inspect layer_id` - `layer.tar`: A tarfile containing the filesystem changes in this layer The `layer.tar` file contains `aufs` style `.wh..wh.aufs` files and directories for storing attribute changes and deletions. If the tarball defines a repository, the tarball should also include a `repositories` file at the root that contains a list of repository and tag names mapped to layer IDs. ```json { "hello-world": { "latest": "565a9d68a73f6706862bfe8409a7f659776d4d60a8d096eb4a3cbce6999cc2a1" } } ``` operationId: "ImageGet" produces: - "application/x-tar" responses: 200: description: "no error" schema: type: "string" format: "binary" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID" type: "string" required: true tags: ["Image"] /images/get: get: summary: "Export several images" description: | Get a tarball containing all images and metadata for several image repositories. For each value of the `names` parameter: if it is a specific name and tag (e.g. `ubuntu:latest`), then only that image (and its parents) are returned; if it is an image ID, similarly only that image (and its parents) are returned and there would be no names referenced in the 'repositories' file for this image ID. For details on the format, see the [export image endpoint](#operation/ImageGet). operationId: "ImageGetAll" produces: - "application/x-tar" responses: 200: description: "no error" schema: type: "string" format: "binary" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "names" in: "query" description: "Image names to filter by" type: "array" items: type: "string" tags: ["Image"] /images/load: post: summary: "Import images" description: | Load a set of images and tags into a repository. For details on the format, see the [export image endpoint](#operation/ImageGet). operationId: "ImageLoad" consumes: - "application/x-tar" produces: - "application/json" responses: 200: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "imagesTarball" in: "body" description: "Tar archive containing images" schema: type: "string" format: "binary" - name: "quiet" in: "query" description: "Suppress progress details during load." type: "boolean" default: false tags: ["Image"] /containers/{id}/exec: post: summary: "Create an exec instance" description: "Run a command inside a running container." 
operationId: "ContainerExec" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "container is paused" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "execConfig" in: "body" description: "Exec configuration" schema: type: "object" properties: AttachStdin: type: "boolean" description: "Attach to `stdin` of the exec command." AttachStdout: type: "boolean" description: "Attach to `stdout` of the exec command." AttachStderr: type: "boolean" description: "Attach to `stderr` of the exec command." DetachKeys: type: "string" description: | Override the key sequence for detaching a container. Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,` or `_`. Tty: type: "boolean" description: "Allocate a pseudo-TTY." Env: description: | A list of environment variables in the form `["VAR=value", ...]`. type: "array" items: type: "string" Cmd: type: "array" description: "Command to run, as a string or array of strings." items: type: "string" Privileged: type: "boolean" description: "Runs the exec process with extended privileges." default: false User: type: "string" description: | The user, and optionally, group to run the exec process inside the container. Format is one of: `user`, `user:group`, `uid`, or `uid:gid`. WorkingDir: type: "string" description: | The working directory for the exec process inside the container. example: AttachStdin: false AttachStdout: true AttachStderr: true DetachKeys: "ctrl-p,ctrl-q" Tty: false Cmd: - "date" Env: - "FOO=bar" - "BAZ=quux" required: true - name: "id" in: "path" description: "ID or name of container" type: "string" required: true tags: ["Exec"] /exec/{id}/start: post: summary: "Start an exec instance" description: | Starts a previously set up exec instance. If detach is true, this endpoint returns immediately after starting the command. Otherwise, it sets up an interactive session with the command. operationId: "ExecStart" consumes: - "application/json" produces: - "application/vnd.docker.raw-stream" responses: 200: description: "No error" 404: description: "No such exec instance" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Container is stopped or paused" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "execStartConfig" in: "body" schema: type: "object" properties: Detach: type: "boolean" description: "Detach from the command." Tty: type: "boolean" description: "Allocate a pseudo-TTY." example: Detach: false Tty: false - name: "id" in: "path" description: "Exec instance ID" required: true type: "string" tags: ["Exec"] /exec/{id}/resize: post: summary: "Resize an exec instance" description: | Resize the TTY session used by an exec instance. This endpoint only works if `tty` was specified as part of creating and starting the exec instance. 
operationId: "ExecResize" responses: 201: description: "No error" 404: description: "No such exec instance" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Exec instance ID" required: true type: "string" - name: "h" in: "query" description: "Height of the TTY session in characters" type: "integer" - name: "w" in: "query" description: "Width of the TTY session in characters" type: "integer" tags: ["Exec"] /exec/{id}/json: get: summary: "Inspect an exec instance" description: "Return low-level information about an exec instance." operationId: "ExecInspect" produces: - "application/json" responses: 200: description: "No error" schema: type: "object" title: "ExecInspectResponse" properties: CanRemove: type: "boolean" DetachKeys: type: "string" ID: type: "string" Running: type: "boolean" ExitCode: type: "integer" ProcessConfig: $ref: "#/definitions/ProcessConfig" OpenStdin: type: "boolean" OpenStderr: type: "boolean" OpenStdout: type: "boolean" ContainerID: type: "string" Pid: type: "integer" description: "The system process ID for the exec process." examples: application/json: CanRemove: false ContainerID: "b53ee82b53a40c7dca428523e34f741f3abc51d9f297a14ff874bf761b995126" DetachKeys: "" ExitCode: 2 ID: "f33bbfb39f5b142420f4759b2348913bd4a8d1a6d7fd56499cb41a1bb91d7b3b" OpenStderr: true OpenStdin: true OpenStdout: true ProcessConfig: arguments: - "-c" - "exit 2" entrypoint: "sh" privileged: false tty: true user: "1000" Running: false Pid: 42000 404: description: "No such exec instance" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Exec instance ID" required: true type: "string" tags: ["Exec"] /volumes: get: summary: "List volumes" operationId: "VolumeList" produces: ["application/json"] responses: 200: description: "Summary volume data that matches the query" schema: type: "object" title: "VolumeListResponse" description: "Volume list response" required: [Volumes, Warnings] properties: Volumes: type: "array" x-nullable: false description: "List of volumes" items: $ref: "#/definitions/Volume" Warnings: type: "array" x-nullable: false description: | Warnings that occurred when fetching the list of volumes. items: type: "string" examples: application/json: Volumes: - CreatedAt: "2017-07-19T12:00:26Z" Name: "tardis" Driver: "local" Mountpoint: "/var/lib/docker/volumes/tardis" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Scope: "local" Options: device: "tmpfs" o: "size=100m,uid=1000" type: "tmpfs" Warnings: [] 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" description: | JSON encoded value of the filters (a `map[string][]string`) to process on the volumes list. Available filters: - `dangling=<boolean>` When set to `true` (or `1`), returns all volumes that are not in use by a container. When set to `false` (or `0`), only volumes that are in use by one or more containers are returned. - `driver=<volume-driver-name>` Matches volumes based on their driver. - `label=<key>` or `label=<key>:<value>` Matches volumes based on the presence of a `label` alone or a `label` and a value. - `name=<volume-name>` Matches all or part of a volume name. 
type: "string" format: "json" tags: ["Volume"] /volumes/create: post: summary: "Create a volume" operationId: "VolumeCreate" consumes: ["application/json"] produces: ["application/json"] responses: 201: description: "The volume was created successfully" schema: $ref: "#/definitions/Volume" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "volumeConfig" in: "body" required: true description: "Volume configuration" schema: type: "object" description: "Volume configuration" title: "VolumeConfig" properties: Name: description: | The new volume's name. If not specified, Docker generates a name. type: "string" x-nullable: false Driver: description: "Name of the volume driver to use." type: "string" default: "local" x-nullable: false DriverOpts: description: | A mapping of driver options and values. These options are passed directly to the driver and are driver specific. type: "object" additionalProperties: type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" example: Name: "tardis" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Driver: "custom" tags: ["Volume"] /volumes/{name}: get: summary: "Inspect a volume" operationId: "VolumeInspect" produces: ["application/json"] responses: 200: description: "No error" schema: $ref: "#/definitions/Volume" 404: description: "No such volume" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" required: true description: "Volume name or ID" type: "string" tags: ["Volume"] delete: summary: "Remove a volume" description: "Instruct the driver to remove the volume." operationId: "VolumeDelete" responses: 204: description: "The volume was removed" 404: description: "No such volume or volume driver" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Volume is in use and cannot be removed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" required: true description: "Volume name or ID" type: "string" - name: "force" in: "query" description: "Force the removal of the volume" type: "boolean" default: false tags: ["Volume"] /volumes/prune: post: summary: "Delete unused volumes" produces: - "application/json" operationId: "VolumePrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune volumes with (or without, in case `label!=...` is used) the specified labels. type: "string" responses: 200: description: "No error" schema: type: "object" title: "VolumePruneResponse" properties: VolumesDeleted: description: "Volumes that were deleted" type: "array" items: type: "string" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Volume"] /networks: get: summary: "List networks" description: | Returns a list of networks. For details on the format, see the [network inspect endpoint](#operation/NetworkInspect). Note that it uses a different, smaller representation of a network than inspecting a single network. 
For example, the list of containers attached to the network is not propagated in API versions 1.28 and up. operationId: "NetworkList" produces: - "application/json" responses: 200: description: "No error" schema: type: "array" items: $ref: "#/definitions/Network" examples: application/json: - Name: "bridge" Id: "f2de39df4171b0dc801e8002d1d999b77256983dfc63041c0f34030aa3977566" Created: "2016-10-19T06:21:00.416543526Z" Scope: "local" Driver: "bridge" EnableIPv6: false Internal: false Attachable: false Ingress: false IPAM: Driver: "default" Config: - Subnet: "172.17.0.0/16" Options: com.docker.network.bridge.default_bridge: "true" com.docker.network.bridge.enable_icc: "true" com.docker.network.bridge.enable_ip_masquerade: "true" com.docker.network.bridge.host_binding_ipv4: "0.0.0.0" com.docker.network.bridge.name: "docker0" com.docker.network.driver.mtu: "1500" - Name: "none" Id: "e086a3893b05ab69242d3c44e49483a3bbbd3a26b46baa8f61ab797c1088d794" Created: "0001-01-01T00:00:00Z" Scope: "local" Driver: "null" EnableIPv6: false Internal: false Attachable: false Ingress: false IPAM: Driver: "default" Config: [] Containers: {} Options: {} - Name: "host" Id: "13e871235c677f196c4e1ecebb9dc733b9b2d2ab589e30c539efeda84a24215e" Created: "0001-01-01T00:00:00Z" Scope: "local" Driver: "host" EnableIPv6: false Internal: false Attachable: false Ingress: false IPAM: Driver: "default" Config: [] Containers: {} Options: {} 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" description: | JSON encoded value of the filters (a `map[string][]string`) to process on the networks list. Available filters: - `dangling=<boolean>` When set to `true` (or `1`), returns all networks that are not in use by a container. When set to `false` (or `0`), only networks that are in use by one or more containers are returned. - `driver=<driver-name>` Matches a network's driver. - `id=<network-id>` Matches all or part of a network ID. - `label=<key>` or `label=<key>=<value>` of a network label. - `name=<network-name>` Matches all or part of a network name. - `scope=["swarm"|"global"|"local"]` Filters networks by scope (`swarm`, `global`, or `local`). - `type=["custom"|"builtin"]` Filters networks by type. The `custom` keyword returns all user-defined networks. 
type: "string" tags: ["Network"] /networks/{id}: get: summary: "Inspect a network" operationId: "NetworkInspect" produces: - "application/json" responses: 200: description: "No error" schema: $ref: "#/definitions/Network" 404: description: "Network not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" - name: "verbose" in: "query" description: "Detailed inspect output for troubleshooting" type: "boolean" default: false - name: "scope" in: "query" description: "Filter the network by scope (swarm, global, or local)" type: "string" tags: ["Network"] delete: summary: "Remove a network" operationId: "NetworkDelete" responses: 204: description: "No error" 403: description: "operation not supported for pre-defined networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such network" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" tags: ["Network"] /networks/create: post: summary: "Create a network" operationId: "NetworkCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "No error" schema: type: "object" title: "NetworkCreateResponse" properties: Id: description: "The ID of the created network." type: "string" Warning: type: "string" example: Id: "22be93d5babb089c5aab8dbc369042fad48ff791584ca2da2100db837a1c7c30" Warning: "" 403: description: "operation not supported for pre-defined networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "plugin not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "networkConfig" in: "body" description: "Network configuration" required: true schema: type: "object" required: ["Name"] properties: Name: description: "The network's name." type: "string" CheckDuplicate: description: | Check for networks with duplicate names. Since Network is primarily keyed based on a random ID and not on the name, and network name is strictly a user-friendly alias to the network which is uniquely identified using ID, there is no guaranteed way to check for duplicates. CheckDuplicate is there to provide a best effort checking of any networks which has the same name but it is not guaranteed to catch all name collisions. type: "boolean" Driver: description: "Name of the network driver plugin to use." type: "string" default: "bridge" Internal: description: "Restrict external access to the network." type: "boolean" Attachable: description: | Globally scoped network is manually attachable by regular containers from workers in swarm mode. type: "boolean" Ingress: description: | Ingress network is the network which provides the routing-mesh in swarm mode. type: "boolean" IPAM: description: "Optional custom IP scheme for the network." $ref: "#/definitions/IPAM" EnableIPv6: description: "Enable IPv6 on the network." type: "boolean" Options: description: "Network specific options to be used by the drivers." type: "object" additionalProperties: type: "string" Labels: description: "User-defined key/value metadata." 
type: "object" additionalProperties: type: "string" example: Name: "isolated_nw" CheckDuplicate: false Driver: "bridge" EnableIPv6: true IPAM: Driver: "default" Config: - Subnet: "172.20.0.0/16" IPRange: "172.20.10.0/24" Gateway: "172.20.10.11" - Subnet: "2001:db8:abcd::/64" Gateway: "2001:db8:abcd::1011" Options: foo: "bar" Internal: true Attachable: false Ingress: false Options: com.docker.network.bridge.default_bridge: "true" com.docker.network.bridge.enable_icc: "true" com.docker.network.bridge.enable_ip_masquerade: "true" com.docker.network.bridge.host_binding_ipv4: "0.0.0.0" com.docker.network.bridge.name: "docker0" com.docker.network.driver.mtu: "1500" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" tags: ["Network"] /networks/{id}/connect: post: summary: "Connect a container to a network" operationId: "NetworkConnect" consumes: - "application/json" responses: 200: description: "No error" 403: description: "Operation not supported for swarm scoped networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "Network or container not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" - name: "container" in: "body" required: true schema: type: "object" properties: Container: type: "string" description: "The ID or name of the container to connect to the network." EndpointConfig: $ref: "#/definitions/EndpointSettings" example: Container: "3613f73ba0e4" EndpointConfig: IPAMConfig: IPv4Address: "172.24.56.89" IPv6Address: "2001:db8::5689" tags: ["Network"] /networks/{id}/disconnect: post: summary: "Disconnect a container from a network" operationId: "NetworkDisconnect" consumes: - "application/json" responses: 200: description: "No error" 403: description: "Operation not supported for swarm scoped networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "Network or container not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" - name: "container" in: "body" required: true schema: type: "object" properties: Container: type: "string" description: | The ID or name of the container to disconnect from the network. Force: type: "boolean" description: | Force the container to disconnect from the network. tags: ["Network"] /networks/prune: post: summary: "Delete unused networks" produces: - "application/json" operationId: "NetworkPrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `until=<timestamp>` Prune networks created before this timestamp. The `<timestamp>` can be Unix timestamps, date formatted timestamps, or Go duration strings (e.g. `10m`, `1h30m`) computed relative to the daemon machine’s time. - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune networks with (or without, in case `label!=...` is used) the specified labels. 
type: "string" responses: 200: description: "No error" schema: type: "object" title: "NetworkPruneResponse" properties: NetworksDeleted: description: "Networks that were deleted" type: "array" items: type: "string" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Network"] /plugins: get: summary: "List plugins" operationId: "PluginList" description: "Returns information about installed plugins." produces: ["application/json"] responses: 200: description: "No error" schema: type: "array" items: $ref: "#/definitions/Plugin" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the plugin list. Available filters: - `capability=<capability name>` - `enable=<true>|<false>` tags: ["Plugin"] /plugins/privileges: get: summary: "Get plugin privileges" operationId: "GetPluginPrivileges" responses: 200: description: "no error" schema: type: "array" items: description: | Describes a permission the user has to accept upon installing the plugin. type: "object" title: "PluginPrivilegeItem" properties: Name: type: "string" Description: type: "string" Value: type: "array" items: type: "string" example: - Name: "network" Description: "" Value: - "host" - Name: "mount" Description: "" Value: - "/data" - Name: "device" Description: "" Value: - "/dev/cpu_dma_latency" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "remote" in: "query" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" tags: - "Plugin" /plugins/pull: post: summary: "Install a plugin" operationId: "PluginPull" description: | Pulls and installs a plugin. After the plugin is installed, it can be enabled using the [`POST /plugins/{name}/enable` endpoint](#operation/PostPluginsEnable). produces: - "application/json" responses: 204: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "remote" in: "query" description: | Remote reference for plugin to install. The `:latest` tag is optional, and is used as the default if omitted. required: true type: "string" - name: "name" in: "query" description: | Local name for the pulled plugin. The `:latest` tag is optional, and is used as the default if omitted. required: false type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration to use when pulling a plugin from a registry. Refer to the [authentication section](#section/Authentication) for details. type: "string" - name: "body" in: "body" schema: type: "array" items: description: | Describes a permission accepted by the user upon installing the plugin. 
type: "object" properties: Name: type: "string" Description: type: "string" Value: type: "array" items: type: "string" example: - Name: "network" Description: "" Value: - "host" - Name: "mount" Description: "" Value: - "/data" - Name: "device" Description: "" Value: - "/dev/cpu_dma_latency" tags: ["Plugin"] /plugins/{name}/json: get: summary: "Inspect a plugin" operationId: "PluginInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Plugin" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" tags: ["Plugin"] /plugins/{name}: delete: summary: "Remove a plugin" operationId: "PluginDelete" responses: 200: description: "no error" schema: $ref: "#/definitions/Plugin" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "force" in: "query" description: | Disable the plugin before removing. This may result in issues if the plugin is in use by a container. type: "boolean" default: false tags: ["Plugin"] /plugins/{name}/enable: post: summary: "Enable a plugin" operationId: "PluginEnable" responses: 200: description: "no error" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "timeout" in: "query" description: "Set the HTTP client timeout (in seconds)" type: "integer" default: 0 tags: ["Plugin"] /plugins/{name}/disable: post: summary: "Disable a plugin" operationId: "PluginDisable" responses: 200: description: "no error" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" tags: ["Plugin"] /plugins/{name}/upgrade: post: summary: "Upgrade a plugin" operationId: "PluginUpgrade" responses: 204: description: "no error" 404: description: "plugin not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "remote" in: "query" description: | Remote reference to upgrade to. The `:latest` tag is optional, and is used as the default if omitted. required: true type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration to use when pulling a plugin from a registry. Refer to the [authentication section](#section/Authentication) for details. 
type: "string" - name: "body" in: "body" schema: type: "array" items: description: | Describes a permission accepted by the user upon installing the plugin. type: "object" properties: Name: type: "string" Description: type: "string" Value: type: "array" items: type: "string" example: - Name: "network" Description: "" Value: - "host" - Name: "mount" Description: "" Value: - "/data" - Name: "device" Description: "" Value: - "/dev/cpu_dma_latency" tags: ["Plugin"] /plugins/create: post: summary: "Create a plugin" operationId: "PluginCreate" consumes: - "application/x-tar" responses: 204: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "query" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "tarContext" in: "body" description: "Path to tar containing plugin rootfs and manifest" schema: type: "string" format: "binary" tags: ["Plugin"] /plugins/{name}/push: post: summary: "Push a plugin" operationId: "PluginPush" description: | Push a plugin to the registry. parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" responses: 200: description: "no error" 404: description: "plugin not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Plugin"] /plugins/{name}/set: post: summary: "Configure a plugin" operationId: "PluginSet" consumes: - "application/json" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "body" in: "body" schema: type: "array" items: type: "string" example: ["DEBUG=1"] responses: 204: description: "No error" 404: description: "Plugin not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Plugin"] /nodes: get: summary: "List nodes" operationId: "NodeList" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Node" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" description: | Filters to process on the nodes list, encoded as JSON (a `map[string][]string`). 
Available filters: - `id=<node id>` - `label=<engine label>` - `membership=`(`accepted`|`pending`)` - `name=<node name>` - `node.label=<node label>` - `role=`(`manager`|`worker`)` type: "string" tags: ["Node"] /nodes/{id}: get: summary: "Inspect a node" operationId: "NodeInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Node" 404: description: "no such node" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the node" type: "string" required: true tags: ["Node"] delete: summary: "Delete a node" operationId: "NodeDelete" responses: 200: description: "no error" 404: description: "no such node" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the node" type: "string" required: true - name: "force" in: "query" description: "Force remove a node from the swarm" default: false type: "boolean" tags: ["Node"] /nodes/{id}/update: post: summary: "Update a node" operationId: "NodeUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such node" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID of the node" type: "string" required: true - name: "body" in: "body" schema: $ref: "#/definitions/NodeSpec" - name: "version" in: "query" description: | The version number of the node object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true tags: ["Node"] /swarm: get: summary: "Inspect swarm" operationId: "SwarmInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Swarm" 404: description: "no such swarm" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" tags: ["Swarm"] /swarm/init: post: summary: "Initialize a new swarm" operationId: "SwarmInit" produces: - "application/json" - "text/plain" responses: 200: description: "no error" schema: description: "The node ID" type: "string" example: "7v2t30z9blmxuhnyo6s4cpenp" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is already part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: type: "object" properties: ListenAddr: description: | Listen address used for inter-manager communication, as well as determining the networking interface used for the VXLAN Tunnel Endpoint (VTEP). This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the default swarm listening port is used. 
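The node endpoints above only work against a swarm manager; other daemons answer with a 503. A minimal `NodeList` sketch that decodes just a few fields (socket path and version prefix assumed):

```go
package main

import (
	"context"
	"encoding/json"
	"log"
	"net"
	"net/http"
)

func main() {
	cli := &http.Client{Transport: &http.Transport{
		DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
			return (&net.Dialer{}).DialContext(ctx, "unix", "/var/run/docker.sock")
		},
	}}

	// Requires the daemon to be a swarm manager; otherwise a 503 is returned.
	resp, err := cli.Get("http://docker/v1.41/nodes")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	var nodes []struct {
		ID   string
		Spec struct{ Role, Availability string }
	}
	if err := json.NewDecoder(resp.Body).Decode(&nodes); err != nil {
		log.Fatal(err)
	}
	for _, n := range nodes {
		log.Printf("%s role=%s availability=%s", n.ID, n.Spec.Role, n.Spec.Availability)
	}
}
```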
type: "string" AdvertiseAddr: description: | Externally reachable address advertised to other nodes. This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the port number from the listen address is used. If `AdvertiseAddr` is not specified, it will be automatically detected when possible. type: "string" DataPathAddr: description: | Address or interface to use for data path traffic (format: `<ip|interface>`), for example, `192.168.1.1`, or an interface, like `eth0`. If `DataPathAddr` is unspecified, the same address as `AdvertiseAddr` is used. The `DataPathAddr` specifies the address that global scope network drivers will publish towards other nodes in order to reach the containers running on this node. Using this parameter it is possible to separate the container data traffic from the management traffic of the cluster. type: "string" DataPathPort: description: | DataPathPort specifies the data path port number for data traffic. Acceptable port range is 1024 to 49151. if no port is set or is set to 0, default port 4789 will be used. type: "integer" format: "uint32" DefaultAddrPool: description: | Default Address Pool specifies default subnet pools for global scope networks. type: "array" items: type: "string" example: ["10.10.0.0/16", "20.20.0.0/16"] ForceNewCluster: description: "Force creation of a new swarm." type: "boolean" SubnetSize: description: | SubnetSize specifies the subnet size of the networks created from the default subnet pool. type: "integer" format: "uint32" Spec: $ref: "#/definitions/SwarmSpec" example: ListenAddr: "0.0.0.0:2377" AdvertiseAddr: "192.168.1.1:2377" DataPathPort: 4789 DefaultAddrPool: ["10.10.0.0/8", "20.20.0.0/8"] SubnetSize: 24 ForceNewCluster: false Spec: Orchestration: {} Raft: {} Dispatcher: {} CAConfig: {} EncryptionConfig: AutoLockManagers: false tags: ["Swarm"] /swarm/join: post: summary: "Join an existing swarm" operationId: "SwarmJoin" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is already part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: type: "object" properties: ListenAddr: description: | Listen address used for inter-manager communication if the node gets promoted to manager, as well as determining the networking interface used for the VXLAN Tunnel Endpoint (VTEP). type: "string" AdvertiseAddr: description: | Externally reachable address advertised to other nodes. This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the port number from the listen address is used. If `AdvertiseAddr` is not specified, it will be automatically detected when possible. type: "string" DataPathAddr: description: | Address or interface to use for data path traffic (format: `<ip|interface>`), for example, `192.168.1.1`, or an interface, like `eth0`. If `DataPathAddr` is unspecified, the same address as `AdvertiseAddr` is used. The `DataPathAddr` specifies the address that global scope network drivers will publish towards other nodes in order to reach the containers running on this node. 
Using this parameter it is possible to separate the container data traffic from the management traffic of the cluster. type: "string" RemoteAddrs: description: | Addresses of manager nodes already participating in the swarm. type: "array" items: type: "string" JoinToken: description: "Secret token for joining this swarm." type: "string" example: ListenAddr: "0.0.0.0:2377" AdvertiseAddr: "192.168.1.1:2377" RemoteAddrs: - "node1:2377" JoinToken: "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-7p73s1dx5in4tatdymyhg9hu2" tags: ["Swarm"] /swarm/leave: post: summary: "Leave a swarm" operationId: "SwarmLeave" responses: 200: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "force" description: | Force leave swarm, even if this is the last manager or that it will break the cluster. in: "query" type: "boolean" default: false tags: ["Swarm"] /swarm/update: post: summary: "Update a swarm" operationId: "SwarmUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: $ref: "#/definitions/SwarmSpec" - name: "version" in: "query" description: | The version number of the swarm object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true - name: "rotateWorkerToken" in: "query" description: "Rotate the worker join token." type: "boolean" default: false - name: "rotateManagerToken" in: "query" description: "Rotate the manager join token." type: "boolean" default: false - name: "rotateManagerUnlockKey" in: "query" description: "Rotate the manager unlock key." type: "boolean" default: false tags: ["Swarm"] /swarm/unlockkey: get: summary: "Get the unlock key" operationId: "SwarmUnlockkey" consumes: - "application/json" responses: 200: description: "no error" schema: type: "object" title: "UnlockKeyResponse" properties: UnlockKey: description: "The swarm's unlock key." type: "string" example: UnlockKey: "SWMKEY-1-7c37Cc8654o6p38HnroywCi19pllOnGtbdZEgtKxZu8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" tags: ["Swarm"] /swarm/unlock: post: summary: "Unlock a locked manager" operationId: "SwarmUnlock" consumes: - "application/json" produces: - "application/json" parameters: - name: "body" in: "body" required: true schema: type: "object" properties: UnlockKey: description: "The swarm's unlock key." 
type: "string" example: UnlockKey: "SWMKEY-1-7c37Cc8654o6p38HnroywCi19pllOnGtbdZEgtKxZu8" responses: 200: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" tags: ["Swarm"] /services: get: summary: "List services" operationId: "ServiceList" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Service" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the services list. Available filters: - `id=<service id>` - `label=<service label>` - `mode=["replicated"|"global"]` - `name=<service name>` - name: "status" in: "query" type: "boolean" description: | Include service status, with count of running and desired tasks. tags: ["Service"] /services/create: post: summary: "Create a service" operationId: "ServiceCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: type: "object" title: "ServiceCreateResponse" properties: ID: description: "The ID of the created service." type: "string" Warning: description: "Optional warning message" type: "string" example: ID: "ak7w3gjqoa3kuz8xcpnyy0pvl" Warning: "unable to pin image doesnotexist:latest to digest: image library/doesnotexist:latest not found" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 403: description: "network is not eligible for services" schema: $ref: "#/definitions/ErrorResponse" 409: description: "name conflicts with an existing service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: allOf: - $ref: "#/definitions/ServiceSpec" - type: "object" example: Name: "web" TaskTemplate: ContainerSpec: Image: "nginx:alpine" Mounts: - ReadOnly: true Source: "web-data" Target: "/usr/share/nginx/html" Type: "volume" VolumeOptions: DriverConfig: {} Labels: com.example.something: "something-value" Hosts: ["10.10.10.10 host1", "ABCD:EF01:2345:6789:ABCD:EF01:2345:6789 host2"] User: "33" DNSConfig: Nameservers: ["8.8.8.8"] Search: ["example.org"] Options: ["timeout:3"] Secrets: - File: Name: "www.example.org.key" UID: "33" GID: "33" Mode: 384 SecretID: "fpjqlhnwb19zds35k8wn80lq9" SecretName: "example_org_domain_key" LogDriver: Name: "json-file" Options: max-file: "3" max-size: "10M" Placement: {} Resources: Limits: MemoryBytes: 104857600 Reservations: {} RestartPolicy: Condition: "on-failure" Delay: 10000000000 MaxAttempts: 10 Mode: Replicated: Replicas: 4 UpdateConfig: Parallelism: 2 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 RollbackConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 EndpointSpec: Ports: - Protocol: "tcp" PublishedPort: 8080 TargetPort: 80 Labels: foo: "bar" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration for pulling from private registries. Refer to the [authentication section](#section/Authentication) for details. 
type: "string" tags: ["Service"] /services/{id}: get: summary: "Inspect a service" operationId: "ServiceInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Service" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID or name of service." required: true type: "string" - name: "insertDefaults" in: "query" description: "Fill empty fields with default values." type: "boolean" default: false tags: ["Service"] delete: summary: "Delete a service" operationId: "ServiceDelete" responses: 200: description: "no error" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID or name of service." required: true type: "string" tags: ["Service"] /services/{id}/update: post: summary: "Update a service" operationId: "ServiceUpdate" consumes: ["application/json"] produces: ["application/json"] responses: 200: description: "no error" schema: $ref: "#/definitions/ServiceUpdateResponse" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID or name of service." required: true type: "string" - name: "body" in: "body" required: true schema: allOf: - $ref: "#/definitions/ServiceSpec" - type: "object" example: Name: "top" TaskTemplate: ContainerSpec: Image: "busybox" Args: - "top" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ForceUpdate: 0 Mode: Replicated: Replicas: 1 UpdateConfig: Parallelism: 2 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 RollbackConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 EndpointSpec: Mode: "vip" - name: "version" in: "query" description: | The version number of the service object being updated. This is required to avoid conflicting writes. This version number should be the value as currently set on the service *before* the update. You can find the current version by calling `GET /services/{id}` required: true type: "integer" - name: "registryAuthFrom" in: "query" description: | If the `X-Registry-Auth` header is not specified, this parameter indicates where to find registry authorization credentials. type: "string" enum: ["spec", "previous-spec"] default: "spec" - name: "rollback" in: "query" description: | Set to this parameter to `previous` to cause a server-side rollback to the previous service spec. The supplied spec will be ignored in this case. type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration for pulling from private registries. Refer to the [authentication section](#section/Authentication) for details. 
type: "string" tags: ["Service"] /services/{id}/logs: get: summary: "Get service logs" description: | Get `stdout` and `stderr` logs from a service. See also [`/containers/{id}/logs`](#operation/ContainerLogs). **Note**: This endpoint works only for services with the `local`, `json-file` or `journald` logging drivers. operationId: "ServiceLogs" responses: 200: description: "logs returned as a stream in response body" schema: type: "string" format: "binary" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such service: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the service" type: "string" - name: "details" in: "query" description: "Show service context and extra details provided to logs." type: "boolean" default: false - name: "follow" in: "query" description: "Keep connection after returning logs." type: "boolean" default: false - name: "stdout" in: "query" description: "Return logs from `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Return logs from `stderr`" type: "boolean" default: false - name: "since" in: "query" description: "Only return logs since this time, as a UNIX timestamp" type: "integer" default: 0 - name: "timestamps" in: "query" description: "Add timestamps to every log line" type: "boolean" default: false - name: "tail" in: "query" description: | Only return this number of log lines from the end of the logs. Specify as an integer or `all` to output all log lines. type: "string" default: "all" tags: ["Service"] /tasks: get: summary: "List tasks" operationId: "TaskList" produces: - "application/json" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Task" example: - ID: "0kzzo1i0y4jz6027t0k7aezc7" Version: Index: 71 CreatedAt: "2016-06-07T21:07:31.171892745Z" UpdatedAt: "2016-06-07T21:07:31.376370513Z" Spec: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ServiceID: "9mnpnzenvg8p8tdbtq4wvbkcz" Slot: 1 NodeID: "60gvrl6tm78dmak4yl7srz94v" Status: Timestamp: "2016-06-07T21:07:31.290032978Z" State: "running" Message: "started" ContainerStatus: ContainerID: "e5d62702a1b48d01c3e02ca1e0212a250801fa8d67caca0b6f35919ebc12f035" PID: 677 DesiredState: "running" NetworksAttachments: - Network: ID: "4qvuz4ko70xaltuqbt8956gd1" Version: Index: 18 CreatedAt: "2016-06-07T20:31:11.912919752Z" UpdatedAt: "2016-06-07T21:07:29.955277358Z" Spec: Name: "ingress" Labels: com.docker.swarm.internal: "true" DriverConfiguration: {} IPAMOptions: Driver: {} Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" DriverState: Name: "overlay" Options: com.docker.network.driver.overlay.vxlanid_list: "256" IPAMOptions: Driver: Name: "default" Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" Addresses: - "10.255.0.10/16" - ID: "1yljwbmlr8er2waf8orvqpwms" Version: Index: 30 CreatedAt: "2016-06-07T21:07:30.019104782Z" UpdatedAt: "2016-06-07T21:07:30.231958098Z" Name: "hopeful_cori" Spec: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ServiceID: "9mnpnzenvg8p8tdbtq4wvbkcz" Slot: 1 NodeID: "60gvrl6tm78dmak4yl7srz94v" Status: Timestamp: "2016-06-07T21:07:30.202183143Z" State: 
"shutdown" Message: "shutdown" ContainerStatus: ContainerID: "1cf8d63d18e79668b0004a4be4c6ee58cddfad2dae29506d8781581d0688a213" DesiredState: "shutdown" NetworksAttachments: - Network: ID: "4qvuz4ko70xaltuqbt8956gd1" Version: Index: 18 CreatedAt: "2016-06-07T20:31:11.912919752Z" UpdatedAt: "2016-06-07T21:07:29.955277358Z" Spec: Name: "ingress" Labels: com.docker.swarm.internal: "true" DriverConfiguration: {} IPAMOptions: Driver: {} Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" DriverState: Name: "overlay" Options: com.docker.network.driver.overlay.vxlanid_list: "256" IPAMOptions: Driver: Name: "default" Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" Addresses: - "10.255.0.5/16" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the tasks list. Available filters: - `desired-state=(running | shutdown | accepted)` - `id=<task id>` - `label=key` or `label="key=value"` - `name=<task name>` - `node=<node id or name>` - `service=<service name>` tags: ["Task"] /tasks/{id}: get: summary: "Inspect a task" operationId: "TaskInspect" produces: - "application/json" responses: 200: description: "no error" schema: $ref: "#/definitions/Task" 404: description: "no such task" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID of the task" required: true type: "string" tags: ["Task"] /tasks/{id}/logs: get: summary: "Get task logs" description: | Get `stdout` and `stderr` logs from a task. See also [`/containers/{id}/logs`](#operation/ContainerLogs). **Note**: This endpoint works only for services with the `local`, `json-file` or `journald` logging drivers. operationId: "TaskLogs" responses: 200: description: "logs returned as a stream in response body" schema: type: "string" format: "binary" 404: description: "no such task" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such task: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID of the task" type: "string" - name: "details" in: "query" description: "Show task context and extra details provided to logs." type: "boolean" default: false - name: "follow" in: "query" description: "Keep connection after returning logs." type: "boolean" default: false - name: "stdout" in: "query" description: "Return logs from `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Return logs from `stderr`" type: "boolean" default: false - name: "since" in: "query" description: "Only return logs since this time, as a UNIX timestamp" type: "integer" default: 0 - name: "timestamps" in: "query" description: "Add timestamps to every log line" type: "boolean" default: false - name: "tail" in: "query" description: | Only return this number of log lines from the end of the logs. Specify as an integer or `all` to output all log lines. 
type: "string" default: "all" tags: ["Task"] /secrets: get: summary: "List secrets" operationId: "SecretList" produces: - "application/json" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Secret" example: - ID: "blt1owaxmitz71s9v5zh81zun" Version: Index: 85 CreatedAt: "2017-07-20T13:55:28.678958722Z" UpdatedAt: "2017-07-20T13:55:28.678958722Z" Spec: Name: "mysql-passwd" Labels: some.label: "some.value" Driver: Name: "secret-bucket" Options: OptionA: "value for driver option A" OptionB: "value for driver option B" - ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "app-dev.crt" Labels: foo: "bar" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the secrets list. Available filters: - `id=<secret id>` - `label=<key> or label=<key>=value` - `name=<secret name>` - `names=<secret name>` tags: ["Secret"] /secrets/create: post: summary: "Create a secret" operationId: "SecretCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 409: description: "name conflicts with an existing object" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" schema: allOf: - $ref: "#/definitions/SecretSpec" - type: "object" example: Name: "app-key.crt" Labels: foo: "bar" Data: "VEhJUyBJUyBOT1QgQSBSRUFMIENFUlRJRklDQVRFCg==" Driver: Name: "secret-bucket" Options: OptionA: "value for driver option A" OptionB: "value for driver option B" tags: ["Secret"] /secrets/{id}: get: summary: "Inspect a secret" operationId: "SecretInspect" produces: - "application/json" responses: 200: description: "no error" schema: $ref: "#/definitions/Secret" examples: application/json: ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "app-dev.crt" Labels: foo: "bar" Driver: Name: "secret-bucket" Options: OptionA: "value for driver option A" OptionB: "value for driver option B" 404: description: "secret not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the secret" tags: ["Secret"] delete: summary: "Delete a secret" operationId: "SecretDelete" produces: - "application/json" responses: 204: description: "no error" 404: description: "secret not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the secret" tags: ["Secret"] /secrets/{id}/update: post: summary: "Update a Secret" operationId: "SecretUpdate" responses: 200: description: "no error" 400: 
description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such secret" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the secret" type: "string" required: true - name: "body" in: "body" schema: $ref: "#/definitions/SecretSpec" description: | The spec of the secret to update. Currently, only the Labels field can be updated. All other fields must remain unchanged from the [SecretInspect endpoint](#operation/SecretInspect) response values. - name: "version" in: "query" description: | The version number of the secret object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true tags: ["Secret"] /configs: get: summary: "List configs" operationId: "ConfigList" produces: - "application/json" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Config" example: - ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "server.conf" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the configs list. Available filters: - `id=<config id>` - `label=<key> or label=<key>=value` - `name=<config name>` - `names=<config name>` tags: ["Config"] /configs/create: post: summary: "Create a config" operationId: "ConfigCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 409: description: "name conflicts with an existing object" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" schema: allOf: - $ref: "#/definitions/ConfigSpec" - type: "object" example: Name: "server.conf" Labels: foo: "bar" Data: "VEhJUyBJUyBOT1QgQSBSRUFMIENFUlRJRklDQVRFCg==" tags: ["Config"] /configs/{id}: get: summary: "Inspect a config" operationId: "ConfigInspect" produces: - "application/json" responses: 200: description: "no error" schema: $ref: "#/definitions/Config" examples: application/json: ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "app-dev.crt" 404: description: "config not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the config" tags: ["Config"] delete: summary: "Delete a config" operationId: "ConfigDelete" produces: - "application/json" responses: 204: description: "no error" 404: description: "config not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part 
of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the config" tags: ["Config"] /configs/{id}/update: post: summary: "Update a Config" operationId: "ConfigUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such config" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the config" type: "string" required: true - name: "body" in: "body" schema: $ref: "#/definitions/ConfigSpec" description: | The spec of the config to update. Currently, only the Labels field can be updated. All other fields must remain unchanged from the [ConfigInspect endpoint](#operation/ConfigInspect) response values. - name: "version" in: "query" description: | The version number of the config object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true tags: ["Config"] /distribution/{name}/json: get: summary: "Get image information from the registry" description: | Return image digest and platform information by contacting the registry. operationId: "DistributionInspect" produces: - "application/json" responses: 200: description: "descriptor and platform information" schema: type: "object" x-go-name: DistributionInspect title: "DistributionInspectResponse" required: [Descriptor, Platforms] properties: Descriptor: type: "object" description: | A descriptor struct containing digest, media type, and size. properties: MediaType: type: "string" Size: type: "integer" format: "int64" Digest: type: "string" URLs: type: "array" items: type: "string" Platforms: type: "array" description: | An array containing all platforms supported by the image. items: type: "object" properties: Architecture: type: "string" OS: type: "string" OSVersion: type: "string" OSFeatures: type: "array" items: type: "string" Variant: type: "string" Features: type: "array" items: type: "string" examples: application/json: Descriptor: MediaType: "application/vnd.docker.distribution.manifest.v2+json" Digest: "sha256:c0537ff6a5218ef531ece93d4984efc99bbf3f7497c0a7726c88e2bb7584dc96" Size: 3987495 URLs: - "" Platforms: - Architecture: "amd64" OS: "linux" OSVersion: "" OSFeatures: - "" Variant: "" Features: - "" 401: description: "Failed authentication or no image found" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such image: someimage (tag: latest)" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or id" type: "string" required: true tags: ["Distribution"] /session: post: summary: "Initialize interactive session" description: | Start a new interactive session with a server. Session allows server to call back to the client for advanced capabilities. ### Hijacking This endpoint hijacks the HTTP connection to HTTP2 transport that allows the client to expose gPRC services on that connection. 
For example, the client sends this request to upgrade the connection: ``` POST /session HTTP/1.1 Upgrade: h2c Connection: Upgrade ``` The Docker daemon responds with a `101 UPGRADED` response followed by the raw stream: ``` HTTP/1.1 101 UPGRADED Connection: Upgrade Upgrade: h2c ``` operationId: "Session" produces: - "application/vnd.docker.raw-stream" responses: 101: description: "no error, hijacking successful" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Session"]
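  # Illustrative sketch (not part of the API schema, and assuming the daemon
  # listens on the default Unix socket /var/run/docker.sock): the upgrade
  # handshake described above could be initiated with a plain HTTP client,
  # for example:
  #
  #   curl --unix-socket /var/run/docker.sock \
  #     -X POST \
  #     -H "Connection: Upgrade" \
  #     -H "Upgrade: h2c" \
  #     http://localhost/session
  #
  # After the 101 response the connection is no longer plain HTTP/1.1: the
  # client is expected to speak HTTP/2 (h2c) on it and expose its gRPC
  # services over the hijacked stream, as described above.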
# A Swagger 2.0 (a.k.a. OpenAPI) definition of the Engine API. # # This is used for generating API documentation and the types used by the # client/server. See api/README.md for more information. # # Some style notes: # - This file is used by ReDoc, which allows GitHub Flavored Markdown in # descriptions. # - There is no maximum line length, for ease of editing and pretty diffs. # - operationIds are in the format "NounVerb", with a singular noun. swagger: "2.0" schemes: - "http" - "https" produces: - "application/json" - "text/plain" consumes: - "application/json" - "text/plain" basePath: "/v1.42" info: title: "Docker Engine API" version: "1.42" x-logo: url: "https://docs.docker.com/images/logo-docker-main.png" description: | The Engine API is an HTTP API served by Docker Engine. It is the API the Docker client uses to communicate with the Engine, so everything the Docker client can do can be done with the API. Most of the client's commands map directly to API endpoints (e.g. `docker ps` is `GET /containers/json`). The notable exception is running containers, which consists of several API calls. # Errors The API uses standard HTTP status codes to indicate the success or failure of the API call. The body of the response will be JSON in the following format: ``` { "message": "page not found" } ``` # Versioning The API is usually changed in each release, so API calls are versioned to ensure that clients don't break. To lock to a specific version of the API, you prefix the URL with its version, for example, call `/v1.30/info` to use the v1.30 version of the `/info` endpoint. If the API version specified in the URL is not supported by the daemon, a HTTP `400 Bad Request` error message is returned. If you omit the version-prefix, the current version of the API (v1.42) is used. For example, calling `/info` is the same as calling `/v1.42/info`. Using the API without a version-prefix is deprecated and will be removed in a future release. Engine releases in the near future should support this version of the API, so your client will continue to work even if it is talking to a newer Engine. The API uses an open schema model, which means server may add extra properties to responses. Likewise, the server will ignore any extra query parameters and request body properties. When you write clients, you need to ignore additional properties in responses to ensure they do not break when talking to newer daemons. # Authentication Authentication for registries is handled client side. The client has to send authentication details to various endpoints that need to communicate with registries, such as `POST /images/(name)/push`. These are sent as `X-Registry-Auth` header as a [base64url encoded](https://tools.ietf.org/html/rfc4648#section-5) (JSON) string with the following structure: ``` { "username": "string", "password": "string", "email": "string", "serveraddress": "string" } ``` The `serveraddress` is a domain/IP without a protocol. Throughout this structure, double quotes are required. If you have already got an identity token from the [`/auth` endpoint](#operation/SystemAuth), you can just pass this instead of credentials: ``` { "identitytoken": "9cbaf023786cd7..." } ``` # The tags on paths define the menu sections in the ReDoc documentation, so # the usage of tags must make sense for that: # - They should be singular, not plural. # - There should not be too many tags, or the menu becomes unwieldy. 
For # example, it is preferable to add a path to the "System" tag instead of # creating a tag with a single path in it. # - The order of tags in this list defines the order in the menu. tags: # Primary objects - name: "Container" x-displayName: "Containers" description: | Create and manage containers. - name: "Image" x-displayName: "Images" - name: "Network" x-displayName: "Networks" description: | Networks are user-defined networks that containers can be attached to. See the [networking documentation](https://docs.docker.com/network/) for more information. - name: "Volume" x-displayName: "Volumes" description: | Create and manage persistent storage that can be attached to containers. - name: "Exec" x-displayName: "Exec" description: | Run new commands inside running containers. Refer to the [command-line reference](https://docs.docker.com/engine/reference/commandline/exec/) for more information. To exec a command in a container, you first need to create an exec instance, then start it. These two API endpoints are wrapped up in a single command-line command, `docker exec`. # Swarm things - name: "Swarm" x-displayName: "Swarm" description: | Engines can be clustered together in a swarm. Refer to the [swarm mode documentation](https://docs.docker.com/engine/swarm/) for more information. - name: "Node" x-displayName: "Nodes" description: | Nodes are instances of the Engine participating in a swarm. Swarm mode must be enabled for these endpoints to work. - name: "Service" x-displayName: "Services" description: | Services are the definitions of tasks to run on a swarm. Swarm mode must be enabled for these endpoints to work. - name: "Task" x-displayName: "Tasks" description: | A task is a container running on a swarm. It is the atomic scheduling unit of swarm. Swarm mode must be enabled for these endpoints to work. - name: "Secret" x-displayName: "Secrets" description: | Secrets are sensitive data that can be used by services. Swarm mode must be enabled for these endpoints to work. - name: "Config" x-displayName: "Configs" description: | Configs are application configurations that can be used by services. Swarm mode must be enabled for these endpoints to work. 
# System things - name: "Plugin" x-displayName: "Plugins" - name: "System" x-displayName: "System" definitions: Port: type: "object" description: "An open port on a container" required: [PrivatePort, Type] properties: IP: type: "string" format: "ip-address" description: "Host IP address that the container's port is mapped to" PrivatePort: type: "integer" format: "uint16" x-nullable: false description: "Port on the container" PublicPort: type: "integer" format: "uint16" description: "Port exposed on the host" Type: type: "string" x-nullable: false enum: ["tcp", "udp", "sctp"] example: PrivatePort: 8080 PublicPort: 80 Type: "tcp" MountPoint: type: "object" description: "A mount point inside a container" properties: Type: type: "string" Name: type: "string" Source: type: "string" Destination: type: "string" Driver: type: "string" Mode: type: "string" RW: type: "boolean" Propagation: type: "string" DeviceMapping: type: "object" description: "A device mapping between the host and container" properties: PathOnHost: type: "string" PathInContainer: type: "string" CgroupPermissions: type: "string" example: PathOnHost: "/dev/deviceName" PathInContainer: "/dev/deviceName" CgroupPermissions: "mrw" DeviceRequest: type: "object" description: "A request for devices to be sent to device drivers" properties: Driver: type: "string" example: "nvidia" Count: type: "integer" example: -1 DeviceIDs: type: "array" items: type: "string" example: - "0" - "1" - "GPU-fef8089b-4820-abfc-e83e-94318197576e" Capabilities: description: | A list of capabilities; an OR list of AND lists of capabilities. type: "array" items: type: "array" items: type: "string" example: # gpu AND nvidia AND compute - ["gpu", "nvidia", "compute"] Options: description: | Driver-specific options, specified as a key/value pairs. These options are passed directly to the driver. type: "object" additionalProperties: type: "string" ThrottleDevice: type: "object" properties: Path: description: "Device path" type: "string" Rate: description: "Rate" type: "integer" format: "int64" minimum: 0 Mount: type: "object" properties: Target: description: "Container path." type: "string" Source: description: "Mount source (e.g. a volume name, a host path)." type: "string" Type: description: | The mount type. Available types: - `bind` Mounts a file or directory from the host into the container. Must exist prior to creating the container. - `volume` Creates a volume with the given name and options (or uses a pre-existing volume with the same name and options). These are **not** removed when the container is removed. - `tmpfs` Create a tmpfs with the given options. The mount source cannot be specified for tmpfs. - `npipe` Mounts a named pipe from the host into the container. Must exist prior to creating the container. type: "string" enum: - "bind" - "volume" - "tmpfs" - "npipe" ReadOnly: description: "Whether the mount should be read-only." type: "boolean" Consistency: description: "The consistency requirement for the mount: `default`, `consistent`, `cached`, or `delegated`." type: "string" BindOptions: description: "Optional configuration for the `bind` type." type: "object" properties: Propagation: description: "A propagation mode with the value `[r]private`, `[r]shared`, or `[r]slave`." type: "string" enum: - "private" - "rprivate" - "shared" - "rshared" - "slave" - "rslave" NonRecursive: description: "Disable recursive bind mount." type: "boolean" default: false VolumeOptions: description: "Optional configuration for the `volume` type." 
type: "object" properties: NoCopy: description: "Populate volume with data from the target." type: "boolean" default: false Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" DriverConfig: description: "Map of driver specific options" type: "object" properties: Name: description: "Name of the driver to use to create the volume." type: "string" Options: description: "key/value map of driver specific options." type: "object" additionalProperties: type: "string" TmpfsOptions: description: "Optional configuration for the `tmpfs` type." type: "object" properties: SizeBytes: description: "The size for the tmpfs mount in bytes." type: "integer" format: "int64" Mode: description: "The permission mode for the tmpfs mount in an integer." type: "integer" RestartPolicy: description: | The behavior to apply when the container exits. The default is not to restart. An ever increasing delay (double the previous delay, starting at 100ms) is added before each restart to prevent flooding the server. type: "object" properties: Name: type: "string" description: | - Empty string means not to restart - `no` Do not automatically restart - `always` Always restart - `unless-stopped` Restart always except when the user has manually stopped the container - `on-failure` Restart only when the container exit code is non-zero enum: - "" - "no" - "always" - "unless-stopped" - "on-failure" MaximumRetryCount: type: "integer" description: | If `on-failure` is used, the number of times to retry before giving up. Resources: description: "A container's resources (cgroups config, ulimits, etc)" type: "object" properties: # Applicable to all platforms CpuShares: description: | An integer value representing this container's relative CPU weight versus other containers. type: "integer" Memory: description: "Memory limit in bytes." type: "integer" format: "int64" default: 0 # Applicable to UNIX platforms CgroupParent: description: | Path to `cgroups` under which the container's `cgroup` is created. If the path is not absolute, the path is considered to be relative to the `cgroups` path of the init process. Cgroups are created if they do not already exist. type: "string" BlkioWeight: description: "Block IO weight (relative weight)." type: "integer" minimum: 0 maximum: 1000 BlkioWeightDevice: description: | Block IO weight (relative device weight) in the form: ``` [{"Path": "device_path", "Weight": weight}] ``` type: "array" items: type: "object" properties: Path: type: "string" Weight: type: "integer" minimum: 0 BlkioDeviceReadBps: description: | Limit read rate (bytes per second) from a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" BlkioDeviceWriteBps: description: | Limit write rate (bytes per second) to a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" BlkioDeviceReadIOps: description: | Limit read rate (IO per second) from a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" BlkioDeviceWriteIOps: description: | Limit write rate (IO per second) to a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" CpuPeriod: description: "The length of a CPU period in microseconds." 
type: "integer" format: "int64" CpuQuota: description: | Microseconds of CPU time that the container can get in a CPU period. type: "integer" format: "int64" CpuRealtimePeriod: description: | The length of a CPU real-time period in microseconds. Set to 0 to allocate no time allocated to real-time tasks. type: "integer" format: "int64" CpuRealtimeRuntime: description: | The length of a CPU real-time runtime in microseconds. Set to 0 to allocate no time allocated to real-time tasks. type: "integer" format: "int64" CpusetCpus: description: | CPUs in which to allow execution (e.g., `0-3`, `0,1`). type: "string" example: "0-3" CpusetMems: description: | Memory nodes (MEMs) in which to allow execution (0-3, 0,1). Only effective on NUMA systems. type: "string" Devices: description: "A list of devices to add to the container." type: "array" items: $ref: "#/definitions/DeviceMapping" DeviceCgroupRules: description: "a list of cgroup rules to apply to the container" type: "array" items: type: "string" example: "c 13:* rwm" DeviceRequests: description: | A list of requests for devices to be sent to device drivers. type: "array" items: $ref: "#/definitions/DeviceRequest" KernelMemory: description: | Kernel memory limit in bytes. <p><br /></p> > **Deprecated**: This field is deprecated as the kernel 5.4 deprecated > `kmem.limit_in_bytes`. type: "integer" format: "int64" example: 209715200 KernelMemoryTCP: description: "Hard limit for kernel TCP buffer memory (in bytes)." type: "integer" format: "int64" MemoryReservation: description: "Memory soft limit in bytes." type: "integer" format: "int64" MemorySwap: description: | Total memory limit (memory + swap). Set as `-1` to enable unlimited swap. type: "integer" format: "int64" MemorySwappiness: description: | Tune a container's memory swappiness behavior. Accepts an integer between 0 and 100. type: "integer" format: "int64" minimum: 0 maximum: 100 NanoCpus: description: "CPU quota in units of 10<sup>-9</sup> CPUs." type: "integer" format: "int64" OomKillDisable: description: "Disable OOM Killer for the container." type: "boolean" Init: description: | Run an init inside the container that forwards signals and reaps processes. This field is omitted if empty, and the default (as configured on the daemon) is used. type: "boolean" x-nullable: true PidsLimit: description: | Tune a container's PIDs limit. Set `0` or `-1` for unlimited, or `null` to not change. type: "integer" format: "int64" x-nullable: true Ulimits: description: | A list of resource limits to set in the container. For example: ``` {"Name": "nofile", "Soft": 1024, "Hard": 2048} ``` type: "array" items: type: "object" properties: Name: description: "Name of ulimit" type: "string" Soft: description: "Soft limit" type: "integer" Hard: description: "Hard limit" type: "integer" # Applicable to Windows CpuCount: description: | The number of usable CPUs (Windows only). On Windows Server containers, the processor resource controls are mutually exclusive. The order of precedence is `CPUCount` first, then `CPUShares`, and `CPUPercent` last. type: "integer" format: "int64" CpuPercent: description: | The usable percentage of the available CPUs (Windows only). On Windows Server containers, the processor resource controls are mutually exclusive. The order of precedence is `CPUCount` first, then `CPUShares`, and `CPUPercent` last. 
type: "integer" format: "int64" IOMaximumIOps: description: "Maximum IOps for the container system drive (Windows only)" type: "integer" format: "int64" IOMaximumBandwidth: description: | Maximum IO in bytes per second for the container system drive (Windows only). type: "integer" format: "int64" Limit: description: | An object describing a limit on resources which can be requested by a task. type: "object" properties: NanoCPUs: type: "integer" format: "int64" example: 4000000000 MemoryBytes: type: "integer" format: "int64" example: 8272408576 Pids: description: | Limits the maximum number of PIDs in the container. Set `0` for unlimited. type: "integer" format: "int64" default: 0 example: 100 ResourceObject: description: | An object describing the resources which can be advertised by a node and requested by a task. type: "object" properties: NanoCPUs: type: "integer" format: "int64" example: 4000000000 MemoryBytes: type: "integer" format: "int64" example: 8272408576 GenericResources: $ref: "#/definitions/GenericResources" GenericResources: description: | User-defined resources can be either Integer resources (e.g, `SSD=3`) or String resources (e.g, `GPU=UUID1`). type: "array" items: type: "object" properties: NamedResourceSpec: type: "object" properties: Kind: type: "string" Value: type: "string" DiscreteResourceSpec: type: "object" properties: Kind: type: "string" Value: type: "integer" format: "int64" example: - DiscreteResourceSpec: Kind: "SSD" Value: 3 - NamedResourceSpec: Kind: "GPU" Value: "UUID1" - NamedResourceSpec: Kind: "GPU" Value: "UUID2" HealthConfig: description: "A test to perform to check that the container is healthy." type: "object" properties: Test: description: | The test to perform. Possible values are: - `[]` inherit healthcheck from image or parent image - `["NONE"]` disable healthcheck - `["CMD", args...]` exec arguments directly - `["CMD-SHELL", command]` run command with system's default shell type: "array" items: type: "string" Interval: description: | The time to wait between checks in nanoseconds. It should be 0 or at least 1000000 (1 ms). 0 means inherit. type: "integer" Timeout: description: | The time to wait before considering the check to have hung. It should be 0 or at least 1000000 (1 ms). 0 means inherit. type: "integer" Retries: description: | The number of consecutive failures needed to consider a container as unhealthy. 0 means inherit. type: "integer" StartPeriod: description: | Start period for the container to initialize before starting health-retries countdown in nanoseconds. It should be 0 or at least 1000000 (1 ms). 0 means inherit. type: "integer" Health: description: | Health stores information about the container's healthcheck results. 
type: "object" properties: Status: description: | Status is one of `none`, `starting`, `healthy` or `unhealthy` - "none" Indicates there is no healthcheck - "starting" Starting indicates that the container is not yet ready - "healthy" Healthy indicates that the container is running correctly - "unhealthy" Unhealthy indicates that the container has a problem type: "string" enum: - "none" - "starting" - "healthy" - "unhealthy" example: "healthy" FailingStreak: description: "FailingStreak is the number of consecutive failures" type: "integer" example: 0 Log: type: "array" description: | Log contains the last few results (oldest first) items: x-nullable: true $ref: "#/definitions/HealthcheckResult" HealthcheckResult: description: | HealthcheckResult stores information about a single run of a healthcheck probe type: "object" properties: Start: description: | Date and time at which this check started in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "date-time" example: "2020-01-04T10:44:24.496525531Z" End: description: | Date and time at which this check ended in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2020-01-04T10:45:21.364524523Z" ExitCode: description: | ExitCode meanings: - `0` healthy - `1` unhealthy - `2` reserved (considered unhealthy) - other values: error running probe type: "integer" example: 0 Output: description: "Output from last check" type: "string" HostConfig: description: "Container configuration that depends on the host we are running on" allOf: - $ref: "#/definitions/Resources" - type: "object" properties: # Applicable to all platforms Binds: type: "array" description: | A list of volume bindings for this container. Each volume binding is a string in one of these forms: - `host-src:container-dest[:options]` to bind-mount a host path into the container. Both `host-src`, and `container-dest` must be an _absolute_ path. - `volume-name:container-dest[:options]` to bind-mount a volume managed by a volume driver into the container. `container-dest` must be an _absolute_ path. `options` is an optional, comma-delimited list of: - `nocopy` disables automatic copying of data from the container path to the volume. The `nocopy` flag only applies to named volumes. - `[ro|rw]` mounts a volume read-only or read-write, respectively. If omitted or set to `rw`, volumes are mounted read-write. - `[z|Z]` applies SELinux labels to allow or deny multiple containers to read and write to the same volume. - `z`: a _shared_ content label is applied to the content. This label indicates that multiple containers can share the volume content, for both reading and writing. - `Z`: a _private unshared_ label is applied to the content. This label indicates that only the current container can use a private volume. Labeling systems such as SELinux require proper labels to be placed on volume content that is mounted into a container. Without a label, the security system can prevent a container's processes from using the content. By default, the labels set by the host operating system are not modified. - `[[r]shared|[r]slave|[r]private]` specifies mount [propagation behavior](https://www.kernel.org/doc/Documentation/filesystems/sharedsubtree.txt). This only applies to bind-mounted volumes, not internal volumes or named volumes. 
Mount propagation requires the source mount point (the location where the source directory is mounted in the host operating system) to have the correct propagation properties. For shared volumes, the source mount point must be set to `shared`. For slave volumes, the mount must be set to either `shared` or `slave`. items: type: "string" ContainerIDFile: type: "string" description: "Path to a file where the container ID is written" LogConfig: type: "object" description: "The logging configuration for this container" properties: Type: type: "string" enum: - "json-file" - "syslog" - "journald" - "gelf" - "fluentd" - "awslogs" - "splunk" - "etwlogs" - "none" Config: type: "object" additionalProperties: type: "string" NetworkMode: type: "string" description: | Network mode to use for this container. Supported standard values are: `bridge`, `host`, `none`, and `container:<name|id>`. Any other value is taken as a custom network's name to which this container should connect to. PortBindings: $ref: "#/definitions/PortMap" RestartPolicy: $ref: "#/definitions/RestartPolicy" AutoRemove: type: "boolean" description: | Automatically remove the container when the container's process exits. This has no effect if `RestartPolicy` is set. VolumeDriver: type: "string" description: "Driver that this container uses to mount volumes." VolumesFrom: type: "array" description: | A list of volumes to inherit from another container, specified in the form `<container name>[:<ro|rw>]`. items: type: "string" Mounts: description: | Specification for mounts to be added to the container. type: "array" items: $ref: "#/definitions/Mount" # Applicable to UNIX platforms CapAdd: type: "array" description: | A list of kernel capabilities to add to the container. Conflicts with option 'Capabilities'. items: type: "string" CapDrop: type: "array" description: | A list of kernel capabilities to drop from the container. Conflicts with option 'Capabilities'. items: type: "string" CgroupnsMode: type: "string" enum: - "private" - "host" description: | cgroup namespace mode for the container. Possible values are: - `"private"`: the container runs in its own private cgroup namespace - `"host"`: use the host system's cgroup namespace If not specified, the daemon default is used, which can either be `"private"` or `"host"`, depending on daemon version, kernel support and configuration. Dns: type: "array" description: "A list of DNS servers for the container to use." items: type: "string" DnsOptions: type: "array" description: "A list of DNS options." items: type: "string" DnsSearch: type: "array" description: "A list of DNS search domains." items: type: "string" ExtraHosts: type: "array" description: | A list of hostnames/IP mappings to add to the container's `/etc/hosts` file. Specified in the form `["hostname:IP"]`. items: type: "string" GroupAdd: type: "array" description: | A list of additional groups that the container process will run as. items: type: "string" IpcMode: type: "string" description: | IPC sharing mode for the container. Possible values are: - `"none"`: own private IPC namespace, with /dev/shm not mounted - `"private"`: own private IPC namespace - `"shareable"`: own private IPC namespace, with a possibility to share it with other containers - `"container:<name|id>"`: join another (shareable) container's IPC namespace - `"host"`: use the host system's IPC namespace If not specified, daemon default is used, which can either be `"private"` or `"shareable"`, depending on daemon version and configuration. 
Cgroup: type: "string" description: "Cgroup to use for the container." Links: type: "array" description: | A list of links for the container in the form `container_name:alias`. items: type: "string" OomScoreAdj: type: "integer" description: | An integer value containing the score given to the container in order to tune OOM killer preferences. example: 500 PidMode: type: "string" description: | Set the PID (Process) Namespace mode for the container. It can be either: - `"container:<name|id>"`: joins another container's PID namespace - `"host"`: use the host's PID namespace inside the container Privileged: type: "boolean" description: "Gives the container full access to the host." PublishAllPorts: type: "boolean" description: | Allocates an ephemeral host port for all of a container's exposed ports. Ports are de-allocated when the container stops and allocated when the container starts. The allocated port might be changed when restarting the container. The port is selected from the ephemeral port range that depends on the kernel. For example, on Linux the range is defined by `/proc/sys/net/ipv4/ip_local_port_range`. ReadonlyRootfs: type: "boolean" description: "Mount the container's root filesystem as read only." SecurityOpt: type: "array" description: "A list of string values to customize labels for MLS systems, such as SELinux." items: type: "string" StorageOpt: type: "object" description: | Storage driver options for this container, in the form `{"size": "120G"}`. additionalProperties: type: "string" Tmpfs: type: "object" description: | A map of container directories which should be replaced by tmpfs mounts, and their corresponding mount options. For example: ``` { "/run": "rw,noexec,nosuid,size=65536k" } ``` additionalProperties: type: "string" UTSMode: type: "string" description: "UTS namespace to use for the container." UsernsMode: type: "string" description: | Sets the usernamespace mode for the container when usernamespace remapping option is enabled. ShmSize: type: "integer" description: | Size of `/dev/shm` in bytes. If omitted, the system uses 64MB. minimum: 0 Sysctls: type: "object" description: | A list of kernel parameters (sysctls) to set in the container. For example: ``` {"net.ipv4.ip_forward": "1"} ``` additionalProperties: type: "string" Runtime: type: "string" description: "Runtime to use with this container." # Applicable to Windows ConsoleSize: type: "array" description: | Initial console size, as an `[height, width]` array. (Windows only) minItems: 2 maxItems: 2 items: type: "integer" minimum: 0 Isolation: type: "string" description: | Isolation technology of the container. (Windows only) enum: - "default" - "process" - "hyperv" MaskedPaths: type: "array" description: | The list of paths to be masked inside the container (this overrides the default set of paths). items: type: "string" ReadonlyPaths: type: "array" description: | The list of paths to be set as read-only inside the container (this overrides the default set of paths). items: type: "string" ContainerConfig: description: "Configuration for a container that is portable between hosts" type: "object" properties: Hostname: description: "The hostname to use for the container, as a valid RFC 1123 hostname." type: "string" Domainname: description: "The domain name to use for the container." type: "string" User: description: "The user that commands are run as inside the container." type: "string" AttachStdin: description: "Whether to attach to `stdin`." 
type: "boolean" default: false AttachStdout: description: "Whether to attach to `stdout`." type: "boolean" default: true AttachStderr: description: "Whether to attach to `stderr`." type: "boolean" default: true ExposedPorts: description: | An object mapping ports to an empty object in the form: `{"<port>/<tcp|udp|sctp>": {}}` type: "object" additionalProperties: type: "object" enum: - {} default: {} Tty: description: | Attach standard streams to a TTY, including `stdin` if it is not closed. type: "boolean" default: false OpenStdin: description: "Open `stdin`" type: "boolean" default: false StdinOnce: description: "Close `stdin` after one attached client disconnects" type: "boolean" default: false Env: description: | A list of environment variables to set inside the container in the form `["VAR=value", ...]`. A variable without `=` is removed from the environment, rather than to have an empty value. type: "array" items: type: "string" Cmd: description: | Command to run specified as a string or an array of strings. type: "array" items: type: "string" Healthcheck: $ref: "#/definitions/HealthConfig" ArgsEscaped: description: "Command is already escaped (Windows only)" type: "boolean" Image: description: | The name of the image to use when creating the container/ type: "string" Volumes: description: | An object mapping mount point paths inside the container to empty objects. type: "object" additionalProperties: type: "object" enum: - {} default: {} WorkingDir: description: "The working directory for commands to run in." type: "string" Entrypoint: description: | The entry point for the container as a string or an array of strings. If the array consists of exactly one empty string (`[""]`) then the entry point is reset to system default (i.e., the entry point used by docker when there is no `ENTRYPOINT` instruction in the `Dockerfile`). type: "array" items: type: "string" NetworkDisabled: description: "Disable networking for the container." type: "boolean" MacAddress: description: "MAC address of the container." type: "string" OnBuild: description: | `ONBUILD` metadata that were defined in the image's `Dockerfile`. type: "array" items: type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" StopSignal: description: | Signal to stop a container as a string or unsigned integer. type: "string" default: "SIGTERM" StopTimeout: description: "Timeout to stop a container in seconds." type: "integer" default: 10 Shell: description: | Shell for when `RUN`, `CMD`, and `ENTRYPOINT` uses a shell. type: "array" items: type: "string" NetworkingConfig: description: | NetworkingConfig represents the container's networking configuration for each of its interfaces. It is used for the networking configs specified in the `docker create` and `docker network connect` commands. type: "object" properties: EndpointsConfig: description: | A mapping of network name to endpoint configuration for that network. type: "object" additionalProperties: $ref: "#/definitions/EndpointSettings" example: # putting an example here, instead of using the example values from # /definitions/EndpointSettings, because containers/create currently # does not support attaching to multiple networks, so the example request # would be confusing if it showed that multiple networks can be contained # in the EndpointsConfig. 
# TODO remove once we support multiple networks on container create (see https://github.com/moby/moby/blob/07e6b843594e061f82baa5fa23c2ff7d536c2a05/daemon/create.go#L323) EndpointsConfig: isolated_nw: IPAMConfig: IPv4Address: "172.20.30.33" IPv6Address: "2001:db8:abcd::3033" LinkLocalIPs: - "169.254.34.68" - "fe80::3468" Links: - "container_1" - "container_2" Aliases: - "server_x" - "server_y" NetworkSettings: description: "NetworkSettings exposes the network settings in the API" type: "object" properties: Bridge: description: Name of the network's bridge (for example, `docker0`). type: "string" example: "docker0" SandboxID: description: SandboxID uniquely represents a container's network stack. type: "string" example: "9d12daf2c33f5959c8bf90aa513e4f65b561738661003029ec84830cd503a0c3" HairpinMode: description: | Indicates if hairpin NAT should be enabled on the virtual interface. type: "boolean" example: false LinkLocalIPv6Address: description: IPv6 unicast address using the link-local prefix. type: "string" example: "fe80::42:acff:fe11:1" LinkLocalIPv6PrefixLen: description: Prefix length of the IPv6 unicast address. type: "integer" example: "64" Ports: $ref: "#/definitions/PortMap" SandboxKey: description: SandboxKey identifies the sandbox type: "string" example: "/var/run/docker/netns/8ab54b426c38" # TODO is SecondaryIPAddresses actually used? SecondaryIPAddresses: description: "" type: "array" items: $ref: "#/definitions/Address" x-nullable: true # TODO is SecondaryIPv6Addresses actually used? SecondaryIPv6Addresses: description: "" type: "array" items: $ref: "#/definitions/Address" x-nullable: true # TODO properties below are part of DefaultNetworkSettings, which is # marked as deprecated since Docker 1.9 and to be removed in Docker v17.12 EndpointID: description: | EndpointID uniquely represents a service endpoint in a Sandbox. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "b88f5b905aabf2893f3cbc4ee42d1ea7980bbc0a92e2c8922b1e1795298afb0b" Gateway: description: | Gateway address for the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "172.17.0.1" GlobalIPv6Address: description: | Global IPv6 address for the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "2001:db8::5689" GlobalIPv6PrefixLen: description: | Mask length of the global IPv6 address. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. 
This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "integer" example: 64 IPAddress: description: | IPv4 address for the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "172.17.0.4" IPPrefixLen: description: | Mask length of the IPv4 address. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "integer" example: 16 IPv6Gateway: description: | IPv6 gateway address for this network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "2001:db8:2::100" MacAddress: description: | MAC address for the container on the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "02:42:ac:11:00:04" Networks: description: | Information about all networks that the container is connected to. type: "object" additionalProperties: $ref: "#/definitions/EndpointSettings" Address: description: Address represents an IPv4 or IPv6 IP address. type: "object" properties: Addr: description: IP address. type: "string" PrefixLen: description: Mask length of the IP address. type: "integer" PortMap: description: | PortMap describes the mapping of container ports to host ports, using the container's port-number and protocol as key in the format `<port>/<protocol>`, for example, `80/udp`. If a container's port is mapped for multiple protocols, separate entries are added to the mapping table. type: "object" additionalProperties: type: "array" x-nullable: true items: $ref: "#/definitions/PortBinding" example: "443/tcp": - HostIp: "127.0.0.1" HostPort: "4443" "80/tcp": - HostIp: "0.0.0.0" HostPort: "80" - HostIp: "0.0.0.0" HostPort: "8080" "80/udp": - HostIp: "0.0.0.0" HostPort: "80" "53/udp": - HostIp: "0.0.0.0" HostPort: "53" "2377/tcp": null PortBinding: description: | PortBinding represents a binding between a host IP address and a host port. type: "object" properties: HostIp: description: "Host IP address that the container's port is mapped to." type: "string" example: "127.0.0.1" HostPort: description: "Host port number that the container's port is mapped to." type: "string" example: "4443" GraphDriverData: description: "Information about a container's graph driver." 
type: "object" required: [Name, Data] properties: Name: type: "string" x-nullable: false Data: type: "object" x-nullable: false additionalProperties: type: "string" Image: type: "object" required: - Id - Parent - Comment - Created - Container - DockerVersion - Author - Architecture - Os - Size - VirtualSize - GraphDriver - RootFS properties: Id: type: "string" x-nullable: false RepoTags: type: "array" items: type: "string" RepoDigests: type: "array" items: type: "string" Parent: type: "string" x-nullable: false Comment: type: "string" x-nullable: false Created: type: "string" x-nullable: false Container: type: "string" x-nullable: false ContainerConfig: $ref: "#/definitions/ContainerConfig" DockerVersion: type: "string" x-nullable: false Author: type: "string" x-nullable: false Config: $ref: "#/definitions/ContainerConfig" Architecture: type: "string" x-nullable: false Os: type: "string" x-nullable: false OsVersion: type: "string" Size: type: "integer" format: "int64" x-nullable: false VirtualSize: type: "integer" format: "int64" x-nullable: false GraphDriver: $ref: "#/definitions/GraphDriverData" RootFS: type: "object" required: [Type] properties: Type: type: "string" x-nullable: false Layers: type: "array" items: type: "string" BaseLayer: type: "string" Metadata: type: "object" properties: LastTagTime: type: "string" format: "dateTime" ImageSummary: type: "object" required: - Id - ParentId - RepoTags - RepoDigests - Created - Size - SharedSize - VirtualSize - Labels - Containers properties: Id: type: "string" x-nullable: false ParentId: type: "string" x-nullable: false RepoTags: type: "array" x-nullable: false items: type: "string" RepoDigests: type: "array" x-nullable: false items: type: "string" Created: type: "integer" x-nullable: false Size: type: "integer" x-nullable: false SharedSize: type: "integer" x-nullable: false VirtualSize: type: "integer" x-nullable: false Labels: type: "object" x-nullable: false additionalProperties: type: "string" Containers: x-nullable: false type: "integer" AuthConfig: type: "object" properties: username: type: "string" password: type: "string" email: type: "string" serveraddress: type: "string" example: username: "hannibal" password: "xxxx" serveraddress: "https://index.docker.io/v1/" ProcessConfig: type: "object" properties: privileged: type: "boolean" user: type: "string" tty: type: "boolean" entrypoint: type: "string" arguments: type: "array" items: type: "string" Volume: type: "object" required: [Name, Driver, Mountpoint, Labels, Scope, Options] properties: Name: type: "string" description: "Name of the volume." x-nullable: false Driver: type: "string" description: "Name of the volume driver used by the volume." x-nullable: false Mountpoint: type: "string" description: "Mount path of the volume on the host." x-nullable: false CreatedAt: type: "string" format: "dateTime" description: "Date/Time the volume was created." Status: type: "object" description: | Low-level details about the volume, provided by the volume driver. Details are returned as a map with key/value pairs: `{"key":"value","key2":"value2"}`. The `Status` field is optional, and is omitted if the volume driver does not support this feature. additionalProperties: type: "object" Labels: type: "object" description: "User-defined key/value metadata." x-nullable: false additionalProperties: type: "string" Scope: type: "string" description: | The level at which the volume exists. Either `global` for cluster-wide, or `local` for machine level. 
default: "local" x-nullable: false enum: ["local", "global"] Options: type: "object" description: | The driver specific options used when creating the volume. additionalProperties: type: "string" UsageData: type: "object" x-nullable: true required: [Size, RefCount] description: | Usage details about the volume. This information is used by the `GET /system/df` endpoint, and omitted in other endpoints. properties: Size: type: "integer" default: -1 description: | Amount of disk space used by the volume (in bytes). This information is only available for volumes created with the `"local"` volume driver. For volumes created with other volume drivers, this field is set to `-1` ("not available") x-nullable: false RefCount: type: "integer" default: -1 description: | The number of containers referencing this volume. This field is set to `-1` if the reference-count is not available. x-nullable: false example: Name: "tardis" Driver: "custom" Mountpoint: "/var/lib/docker/volumes/tardis" Status: hello: "world" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Scope: "local" CreatedAt: "2016-06-07T20:31:11.853781916Z" Network: type: "object" properties: Name: type: "string" Id: type: "string" Created: type: "string" format: "dateTime" Scope: type: "string" Driver: type: "string" EnableIPv6: type: "boolean" IPAM: $ref: "#/definitions/IPAM" Internal: type: "boolean" Attachable: type: "boolean" Ingress: type: "boolean" Containers: type: "object" additionalProperties: $ref: "#/definitions/NetworkContainer" Options: type: "object" additionalProperties: type: "string" Labels: type: "object" additionalProperties: type: "string" example: Name: "net01" Id: "7d86d31b1478e7cca9ebed7e73aa0fdeec46c5ca29497431d3007d2d9e15ed99" Created: "2016-10-19T04:33:30.360899459Z" Scope: "local" Driver: "bridge" EnableIPv6: false IPAM: Driver: "default" Config: - Subnet: "172.19.0.0/16" Gateway: "172.19.0.1" Options: foo: "bar" Internal: false Attachable: false Ingress: false Containers: 19a4d5d687db25203351ed79d478946f861258f018fe384f229f2efa4b23513c: Name: "test" EndpointID: "628cadb8bcb92de107b2a1e516cbffe463e321f548feb37697cce00ad694f21a" MacAddress: "02:42:ac:13:00:02" IPv4Address: "172.19.0.2/16" IPv6Address: "" Options: com.docker.network.bridge.default_bridge: "true" com.docker.network.bridge.enable_icc: "true" com.docker.network.bridge.enable_ip_masquerade: "true" com.docker.network.bridge.host_binding_ipv4: "0.0.0.0" com.docker.network.bridge.name: "docker0" com.docker.network.driver.mtu: "1500" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" IPAM: type: "object" properties: Driver: description: "Name of the IPAM driver to use." type: "string" default: "default" Config: description: | List of IPAM configuration options, specified as a map: ``` {"Subnet": <CIDR>, "IPRange": <CIDR>, "Gateway": <IP address>, "AuxAddress": <device_name:IP address>} ``` type: "array" items: type: "object" additionalProperties: type: "string" Options: description: "Driver-specific options, specified as a map." 
type: "object" additionalProperties: type: "string" NetworkContainer: type: "object" properties: Name: type: "string" EndpointID: type: "string" MacAddress: type: "string" IPv4Address: type: "string" IPv6Address: type: "string" BuildInfo: type: "object" properties: id: type: "string" stream: type: "string" error: type: "string" errorDetail: $ref: "#/definitions/ErrorDetail" status: type: "string" progress: type: "string" progressDetail: $ref: "#/definitions/ProgressDetail" aux: $ref: "#/definitions/ImageID" BuildCache: type: "object" properties: ID: type: "string" Parent: type: "string" Type: type: "string" Description: type: "string" InUse: type: "boolean" Shared: type: "boolean" Size: description: | Amount of disk space used by the build cache (in bytes). type: "integer" CreatedAt: description: | Date and time at which the build cache was created in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2016-08-18T10:44:24.496525531Z" LastUsedAt: description: | Date and time at which the build cache was last used in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" x-nullable: true example: "2017-08-09T07:09:37.632105588Z" UsageCount: type: "integer" ImageID: type: "object" description: "Image ID or Digest" properties: ID: type: "string" example: ID: "sha256:85f05633ddc1c50679be2b16a0479ab6f7637f8884e0cfe0f4d20e1ebb3d6e7c" CreateImageInfo: type: "object" properties: id: type: "string" error: type: "string" status: type: "string" progress: type: "string" progressDetail: $ref: "#/definitions/ProgressDetail" PushImageInfo: type: "object" properties: error: type: "string" status: type: "string" progress: type: "string" progressDetail: $ref: "#/definitions/ProgressDetail" ErrorDetail: type: "object" properties: code: type: "integer" message: type: "string" ProgressDetail: type: "object" properties: current: type: "integer" total: type: "integer" ErrorResponse: description: "Represents an error." type: "object" required: ["message"] properties: message: description: "The error message." type: "string" x-nullable: false example: message: "Something went wrong." IdResponse: description: "Response to an API call that returns just an Id" type: "object" required: ["Id"] properties: Id: description: "The id of the newly created object." type: "string" x-nullable: false EndpointSettings: description: "Configuration for a network endpoint." type: "object" properties: # Configurations IPAMConfig: $ref: "#/definitions/EndpointIPAMConfig" Links: type: "array" items: type: "string" example: - "container_1" - "container_2" Aliases: type: "array" items: type: "string" example: - "server_x" - "server_y" # Operational data NetworkID: description: | Unique ID of the network. type: "string" example: "08754567f1f40222263eab4102e1c733ae697e8e354aa9cd6e18d7402835292a" EndpointID: description: | Unique ID for the service endpoint in a Sandbox. type: "string" example: "b88f5b905aabf2893f3cbc4ee42d1ea7980bbc0a92e2c8922b1e1795298afb0b" Gateway: description: | Gateway address for this network. type: "string" example: "172.17.0.1" IPAddress: description: | IPv4 address. type: "string" example: "172.17.0.4" IPPrefixLen: description: | Mask length of the IPv4 address. type: "integer" example: 16 IPv6Gateway: description: | IPv6 gateway address. type: "string" example: "2001:db8:2::100" GlobalIPv6Address: description: | Global IPv6 address. 
type: "string" example: "2001:db8::5689" GlobalIPv6PrefixLen: description: | Mask length of the global IPv6 address. type: "integer" format: "int64" example: 64 MacAddress: description: | MAC address for the endpoint on this network. type: "string" example: "02:42:ac:11:00:04" DriverOpts: description: | DriverOpts is a mapping of driver options and values. These options are passed directly to the driver and are driver specific. type: "object" x-nullable: true additionalProperties: type: "string" example: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" EndpointIPAMConfig: description: | EndpointIPAMConfig represents an endpoint's IPAM configuration. type: "object" x-nullable: true properties: IPv4Address: type: "string" example: "172.20.30.33" IPv6Address: type: "string" example: "2001:db8:abcd::3033" LinkLocalIPs: type: "array" items: type: "string" example: - "169.254.34.68" - "fe80::3468" PluginMount: type: "object" x-nullable: false required: [Name, Description, Settable, Source, Destination, Type, Options] properties: Name: type: "string" x-nullable: false example: "some-mount" Description: type: "string" x-nullable: false example: "This is a mount that's used by the plugin." Settable: type: "array" items: type: "string" Source: type: "string" example: "/var/lib/docker/plugins/" Destination: type: "string" x-nullable: false example: "/mnt/state" Type: type: "string" x-nullable: false example: "bind" Options: type: "array" items: type: "string" example: - "rbind" - "rw" PluginDevice: type: "object" required: [Name, Description, Settable, Path] x-nullable: false properties: Name: type: "string" x-nullable: false Description: type: "string" x-nullable: false Settable: type: "array" items: type: "string" Path: type: "string" example: "/dev/fuse" PluginEnv: type: "object" x-nullable: false required: [Name, Description, Settable, Value] properties: Name: x-nullable: false type: "string" Description: x-nullable: false type: "string" Settable: type: "array" items: type: "string" Value: type: "string" PluginInterfaceType: type: "object" x-nullable: false required: [Prefix, Capability, Version] properties: Prefix: type: "string" x-nullable: false Capability: type: "string" x-nullable: false Version: type: "string" x-nullable: false PluginPrivilegeItem: description: | Describes a permission the user has to accept upon installing the plugin. type: "object" properties: Name: type: "string" example: "network" Description: type: "string" Value: type: "array" items: type: "string" example: - "host" Plugin: description: "A plugin for the Engine API" type: "object" required: [Settings, Enabled, Config, Name] properties: Id: type: "string" example: "5724e2c8652da337ab2eedd19fc6fc0ec908e4bd907c7421bf6a8dfc70c4c078" Name: type: "string" x-nullable: false example: "tiborvass/sample-volume-plugin" Enabled: description: True if the plugin is running. False if the plugin is not running, only installed. type: "boolean" x-nullable: false example: true Settings: description: "Settings that can be modified by users." 
type: "object" x-nullable: false required: [Args, Devices, Env, Mounts] properties: Mounts: type: "array" items: $ref: "#/definitions/PluginMount" Env: type: "array" items: type: "string" example: - "DEBUG=0" Args: type: "array" items: type: "string" Devices: type: "array" items: $ref: "#/definitions/PluginDevice" PluginReference: description: "plugin remote reference used to push/pull the plugin" type: "string" x-nullable: false example: "localhost:5000/tiborvass/sample-volume-plugin:latest" Config: description: "The config of a plugin." type: "object" x-nullable: false required: - Description - Documentation - Interface - Entrypoint - WorkDir - Network - Linux - PidHost - PropagatedMount - IpcHost - Mounts - Env - Args properties: DockerVersion: description: "Docker Version used to create the plugin" type: "string" x-nullable: false example: "17.06.0-ce" Description: type: "string" x-nullable: false example: "A sample volume plugin for Docker" Documentation: type: "string" x-nullable: false example: "https://docs.docker.com/engine/extend/plugins/" Interface: description: "The interface between Docker and the plugin" x-nullable: false type: "object" required: [Types, Socket] properties: Types: type: "array" items: $ref: "#/definitions/PluginInterfaceType" example: - "docker.volumedriver/1.0" Socket: type: "string" x-nullable: false example: "plugins.sock" ProtocolScheme: type: "string" example: "some.protocol/v1.0" description: "Protocol to use for clients connecting to the plugin." enum: - "" - "moby.plugins.http/v1" Entrypoint: type: "array" items: type: "string" example: - "/usr/bin/sample-volume-plugin" - "/data" WorkDir: type: "string" x-nullable: false example: "/bin/" User: type: "object" x-nullable: false properties: UID: type: "integer" format: "uint32" example: 1000 GID: type: "integer" format: "uint32" example: 1000 Network: type: "object" x-nullable: false required: [Type] properties: Type: x-nullable: false type: "string" example: "host" Linux: type: "object" x-nullable: false required: [Capabilities, AllowAllDevices, Devices] properties: Capabilities: type: "array" items: type: "string" example: - "CAP_SYS_ADMIN" - "CAP_SYSLOG" AllowAllDevices: type: "boolean" x-nullable: false example: false Devices: type: "array" items: $ref: "#/definitions/PluginDevice" PropagatedMount: type: "string" x-nullable: false example: "/mnt/volumes" IpcHost: type: "boolean" x-nullable: false example: false PidHost: type: "boolean" x-nullable: false example: false Mounts: type: "array" items: $ref: "#/definitions/PluginMount" Env: type: "array" items: $ref: "#/definitions/PluginEnv" example: - Name: "DEBUG" Description: "If set, prints debug messages" Settable: null Value: "0" Args: type: "object" x-nullable: false required: [Name, Description, Settable, Value] properties: Name: x-nullable: false type: "string" example: "args" Description: x-nullable: false type: "string" example: "command line arguments" Settable: type: "array" items: type: "string" Value: type: "array" items: type: "string" rootfs: type: "object" properties: type: type: "string" example: "layers" diff_ids: type: "array" items: type: "string" example: - "sha256:675532206fbf3030b8458f88d6e26d4eb1577688a25efec97154c94e8b6b4887" - "sha256:e216a057b1cb1efc11f8a268f37ef62083e70b1b38323ba252e25ac88904a7e8" ObjectVersion: description: | The version number of the object such as node, service, etc. This is needed to avoid conflicting writes. 
The client must send the version number along with the modified specification when updating these objects. This approach ensures safe concurrency and determinism in that the change on the object may not be applied if the version number has changed from the last read. In other words, if two update requests specify the same base version, only one of the requests can succeed. As a result, two separate update requests that happen at the same time will not unintentionally overwrite each other. type: "object" properties: Index: type: "integer" format: "uint64" example: 373531 NodeSpec: type: "object" properties: Name: description: "Name for the node." type: "string" example: "my-node" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" Role: description: "Role of the node." type: "string" enum: - "worker" - "manager" example: "manager" Availability: description: "Availability of the node." type: "string" enum: - "active" - "pause" - "drain" example: "active" example: Availability: "active" Name: "node-name" Role: "manager" Labels: foo: "bar" Node: type: "object" properties: ID: type: "string" example: "24ifsmvkjbyhk" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: description: | Date and time at which the node was added to the swarm in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2016-08-18T10:44:24.496525531Z" UpdatedAt: description: | Date and time at which the node was last updated in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2017-08-09T07:09:37.632105588Z" Spec: $ref: "#/definitions/NodeSpec" Description: $ref: "#/definitions/NodeDescription" Status: $ref: "#/definitions/NodeStatus" ManagerStatus: $ref: "#/definitions/ManagerStatus" NodeDescription: description: | NodeDescription encapsulates the properties of the Node as reported by the agent. type: "object" properties: Hostname: type: "string" example: "bf3067039e47" Platform: $ref: "#/definitions/Platform" Resources: $ref: "#/definitions/ResourceObject" Engine: $ref: "#/definitions/EngineDescription" TLSInfo: $ref: "#/definitions/TLSInfo" Platform: description: | Platform represents the platform (Arch/OS). type: "object" properties: Architecture: description: | Architecture represents the hardware architecture (for example, `x86_64`). type: "string" example: "x86_64" OS: description: | OS represents the Operating System (for example, `linux` or `windows`). type: "string" example: "linux" EngineDescription: description: "EngineDescription provides information about an engine." 
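# Illustrative, non-normative sketch of how ObjectVersion (defined above) is
# typically used for optimistic concurrency: read the object, remember
# Version.Index, and send that index back together with the modified spec
# (for example via the `version` query parameter of the corresponding update
# endpoint); the update is rejected if the object changed since the read:
#
#   GET  /nodes/24ifsmvkjbyhk                       -> Version: {"Index": 373531}
#   POST /nodes/24ifsmvkjbyhk/update?version=373531    (body: modified NodeSpec)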
type: "object" properties: EngineVersion: type: "string" example: "17.06.0" Labels: type: "object" additionalProperties: type: "string" example: foo: "bar" Plugins: type: "array" items: type: "object" properties: Type: type: "string" Name: type: "string" example: - Type: "Log" Name: "awslogs" - Type: "Log" Name: "fluentd" - Type: "Log" Name: "gcplogs" - Type: "Log" Name: "gelf" - Type: "Log" Name: "journald" - Type: "Log" Name: "json-file" - Type: "Log" Name: "logentries" - Type: "Log" Name: "splunk" - Type: "Log" Name: "syslog" - Type: "Network" Name: "bridge" - Type: "Network" Name: "host" - Type: "Network" Name: "ipvlan" - Type: "Network" Name: "macvlan" - Type: "Network" Name: "null" - Type: "Network" Name: "overlay" - Type: "Volume" Name: "local" - Type: "Volume" Name: "localhost:5000/vieux/sshfs:latest" - Type: "Volume" Name: "vieux/sshfs:latest" TLSInfo: description: | Information about the issuer of leaf TLS certificates and the trusted root CA certificate. type: "object" properties: TrustRoot: description: | The root CA certificate(s) that are used to validate leaf TLS certificates. type: "string" CertIssuerSubject: description: The base64-url-safe-encoded raw subject bytes of the issuer. type: "string" CertIssuerPublicKey: description: | The base64-url-safe-encoded raw public key bytes of the issuer. type: "string" example: TrustRoot: | -----BEGIN CERTIFICATE----- MIIBajCCARCgAwIBAgIUbYqrLSOSQHoxD8CwG6Bi2PJi9c8wCgYIKoZIzj0EAwIw EzERMA8GA1UEAxMIc3dhcm0tY2EwHhcNMTcwNDI0MjE0MzAwWhcNMzcwNDE5MjE0 MzAwWjATMREwDwYDVQQDEwhzd2FybS1jYTBZMBMGByqGSM49AgEGCCqGSM49AwEH A0IABJk/VyMPYdaqDXJb/VXh5n/1Yuv7iNrxV3Qb3l06XD46seovcDWs3IZNV1lf 3Skyr0ofcchipoiHkXBODojJydSjQjBAMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMB Af8EBTADAQH/MB0GA1UdDgQWBBRUXxuRcnFjDfR/RIAUQab8ZV/n4jAKBggqhkjO PQQDAgNIADBFAiAy+JTe6Uc3KyLCMiqGl2GyWGQqQDEcO3/YG36x7om65AIhAJvz pxv6zFeVEkAEEkqIYi0omA9+CjanB/6Bz4n1uw8H -----END CERTIFICATE----- CertIssuerSubject: "MBMxETAPBgNVBAMTCHN3YXJtLWNh" CertIssuerPublicKey: "MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEmT9XIw9h1qoNclv9VeHmf/Vi6/uI2vFXdBveXTpcPjqx6i9wNazchk1XWV/dKTKvSh9xyGKmiIeRcE4OiMnJ1A==" NodeStatus: description: | NodeStatus represents the status of a node. It provides the current status of the node, as seen by the manager. type: "object" properties: State: $ref: "#/definitions/NodeState" Message: type: "string" example: "" Addr: description: "IP address of the node." type: "string" example: "172.17.0.2" NodeState: description: "NodeState represents the state of a node." type: "string" enum: - "unknown" - "down" - "ready" - "disconnected" example: "ready" ManagerStatus: description: | ManagerStatus represents the status of a manager. It provides the current status of a node's manager component, if the node is a manager. x-nullable: true type: "object" properties: Leader: type: "boolean" default: false example: true Reachability: $ref: "#/definitions/Reachability" Addr: description: | The IP address and port at which the manager is reachable. type: "string" example: "10.0.0.46:2377" Reachability: description: "Reachability represents the reachability of a node." type: "string" enum: - "unknown" - "unreachable" - "reachable" example: "reachable" SwarmSpec: description: "User modifiable swarm configuration." type: "object" properties: Name: description: "Name of the swarm." type: "string" example: "default" Labels: description: "User-defined key/value metadata." 
type: "object" additionalProperties: type: "string" example: com.example.corp.type: "production" com.example.corp.department: "engineering" Orchestration: description: "Orchestration configuration." type: "object" x-nullable: true properties: TaskHistoryRetentionLimit: description: | The number of historic tasks to keep per instance or node. If negative, never remove completed or failed tasks. type: "integer" format: "int64" example: 10 Raft: description: "Raft configuration." type: "object" properties: SnapshotInterval: description: "The number of log entries between snapshots." type: "integer" format: "uint64" example: 10000 KeepOldSnapshots: description: | The number of snapshots to keep beyond the current snapshot. type: "integer" format: "uint64" LogEntriesForSlowFollowers: description: | The number of log entries to keep around to sync up slow followers after a snapshot is created. type: "integer" format: "uint64" example: 500 ElectionTick: description: | The number of ticks that a follower will wait for a message from the leader before becoming a candidate and starting an election. `ElectionTick` must be greater than `HeartbeatTick`. A tick currently defaults to one second, so these translate directly to seconds currently, but this is NOT guaranteed. type: "integer" example: 3 HeartbeatTick: description: | The number of ticks between heartbeats. Every HeartbeatTick ticks, the leader will send a heartbeat to the followers. A tick currently defaults to one second, so these translate directly to seconds currently, but this is NOT guaranteed. type: "integer" example: 1 Dispatcher: description: "Dispatcher configuration." type: "object" x-nullable: true properties: HeartbeatPeriod: description: | The delay for an agent to send a heartbeat to the dispatcher. type: "integer" format: "int64" example: 5000000000 CAConfig: description: "CA configuration." type: "object" x-nullable: true properties: NodeCertExpiry: description: "The duration node certificates are issued for." type: "integer" format: "int64" example: 7776000000000000 ExternalCAs: description: | Configuration for forwarding signing requests to an external certificate authority. type: "array" items: type: "object" properties: Protocol: description: | Protocol for communication with the external CA (currently only `cfssl` is supported). type: "string" enum: - "cfssl" default: "cfssl" URL: description: | URL where certificate signing requests should be sent. type: "string" Options: description: | An object with key/value pairs that are interpreted as protocol-specific options for the external CA driver. type: "object" additionalProperties: type: "string" CACert: description: | The root CA certificate (in PEM format) this external CA uses to issue TLS certificates (assumed to be to the current swarm root CA certificate if not provided). type: "string" SigningCACert: description: | The desired signing CA certificate for all swarm node TLS leaf certificates, in PEM format. type: "string" SigningCAKey: description: | The desired signing CA key for all swarm node TLS leaf certificates, in PEM format. type: "string" ForceRotate: description: | An integer whose purpose is to force swarm to generate a new signing CA certificate and key, if none have been specified in `SigningCACert` and `SigningCAKey` format: "uint64" type: "integer" EncryptionConfig: description: "Parameters related to encryption-at-rest." type: "object" properties: AutoLockManagers: description: | If set, generate a key and use it to lock data stored on the managers. 
type: "boolean" example: false TaskDefaults: description: "Defaults for creating tasks in this cluster." type: "object" properties: LogDriver: description: | The log driver to use for tasks created in the orchestrator if unspecified by a service. Updating this value only affects new tasks. Existing tasks continue to use their previously configured log driver until recreated. type: "object" properties: Name: description: | The log driver to use as a default for new tasks. type: "string" example: "json-file" Options: description: | Driver-specific options for the selectd log driver, specified as key/value pairs. type: "object" additionalProperties: type: "string" example: "max-file": "10" "max-size": "100m" # The Swarm information for `GET /info`. It is the same as `GET /swarm`, but # without `JoinTokens`. ClusterInfo: description: | ClusterInfo represents information about the swarm as is returned by the "/info" endpoint. Join-tokens are not included. x-nullable: true type: "object" properties: ID: description: "The ID of the swarm." type: "string" example: "abajmipo7b4xz5ip2nrla6b11" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: description: | Date and time at which the swarm was initialised in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2016-08-18T10:44:24.496525531Z" UpdatedAt: description: | Date and time at which the swarm was last updated in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2017-08-09T07:09:37.632105588Z" Spec: $ref: "#/definitions/SwarmSpec" TLSInfo: $ref: "#/definitions/TLSInfo" RootRotationInProgress: description: | Whether there is currently a root CA rotation in progress for the swarm type: "boolean" example: false DataPathPort: description: | DataPathPort specifies the data path port number for data traffic. Acceptable port range is 1024 to 49151. If no port is set or is set to 0, the default port (4789) is used. type: "integer" format: "uint32" default: 4789 example: 4789 DefaultAddrPool: description: | Default Address Pool specifies default subnet pools for global scope networks. type: "array" items: type: "string" format: "CIDR" example: ["10.10.0.0/16", "20.20.0.0/16"] SubnetSize: description: | SubnetSize specifies the subnet size of the networks created from the default subnet pool. type: "integer" format: "uint32" maximum: 29 default: 24 example: 24 JoinTokens: description: | JoinTokens contains the tokens workers and managers need to join the swarm. type: "object" properties: Worker: description: | The token workers can use to join the swarm. type: "string" example: "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-1awxwuwd3z9j1z3puu7rcgdbx" Manager: description: | The token managers can use to join the swarm. type: "string" example: "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-7p73s1dx5in4tatdymyhg9hu2" Swarm: type: "object" allOf: - $ref: "#/definitions/ClusterInfo" - type: "object" properties: JoinTokens: $ref: "#/definitions/JoinTokens" TaskSpec: description: "User modifiable task configuration." type: "object" properties: PluginSpec: type: "object" description: | Plugin spec for the service. *(Experimental release only.)* <p><br /></p> > **Note**: ContainerSpec, NetworkAttachmentSpec, and PluginSpec are > mutually exclusive. PluginSpec is only used when the Runtime field > is set to `plugin`. NetworkAttachmentSpec is used when the Runtime > field is set to `attachment`. 
properties: Name: description: "The name or 'alias' to use for the plugin." type: "string" Remote: description: "The plugin image reference to use." type: "string" Disabled: description: "Disable the plugin once scheduled." type: "boolean" PluginPrivilege: type: "array" items: $ref: "#/definitions/PluginPrivilegeItem" ContainerSpec: type: "object" description: | Container spec for the service. <p><br /></p> > **Note**: ContainerSpec, NetworkAttachmentSpec, and PluginSpec are > mutually exclusive. PluginSpec is only used when the Runtime field > is set to `plugin`. NetworkAttachmentSpec is used when the Runtime > field is set to `attachment`. properties: Image: description: "The image name to use for the container" type: "string" Labels: description: "User-defined key/value data." type: "object" additionalProperties: type: "string" Command: description: "The command to be run in the image." type: "array" items: type: "string" Args: description: "Arguments to the command." type: "array" items: type: "string" Hostname: description: | The hostname to use for the container, as a valid [RFC 1123](https://tools.ietf.org/html/rfc1123) hostname. type: "string" Env: description: | A list of environment variables in the form `VAR=value`. type: "array" items: type: "string" Dir: description: "The working directory for commands to run in." type: "string" User: description: "The user inside the container." type: "string" Groups: type: "array" description: | A list of additional groups that the container process will run as. items: type: "string" Privileges: type: "object" description: "Security options for the container" properties: CredentialSpec: type: "object" description: "CredentialSpec for managed service account (Windows only)" properties: Config: type: "string" example: "0bt9dmxjvjiqermk6xrop3ekq" description: | Load credential spec from a Swarm Config with the given ID. The specified config must also be present in the Configs field with the Runtime property set. <p><br /></p> > **Note**: `CredentialSpec.File`, `CredentialSpec.Registry`, > and `CredentialSpec.Config` are mutually exclusive. File: type: "string" example: "spec.json" description: | Load credential spec from this file. The file is read by the daemon, and must be present in the `CredentialSpecs` subdirectory in the docker data directory, which defaults to `C:\ProgramData\Docker\` on Windows. For example, specifying `spec.json` loads `C:\ProgramData\Docker\CredentialSpecs\spec.json`. <p><br /></p> > **Note**: `CredentialSpec.File`, `CredentialSpec.Registry`, > and `CredentialSpec.Config` are mutually exclusive. Registry: type: "string" description: | Load credential spec from this value in the Windows registry. The specified registry value must be located in: `HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization\Containers\CredentialSpecs` <p><br /></p> > **Note**: `CredentialSpec.File`, `CredentialSpec.Registry`, > and `CredentialSpec.Config` are mutually exclusive. SELinuxContext: type: "object" description: "SELinux labels of the container" properties: Disable: type: "boolean" description: "Disable SELinux" User: type: "string" description: "SELinux user label" Role: type: "string" description: "SELinux role label" Type: type: "string" description: "SELinux type label" Level: type: "string" description: "SELinux level label" TTY: description: "Whether a pseudo-TTY should be allocated." 
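# Illustrative, non-normative example of the Privileges.CredentialSpec shape
# described above. Exactly one of `File`, `Registry`, or `Config` should be
# set; the file name reuses the example given in the field description:
#
#   Privileges:
#     CredentialSpec:
#       File: "spec.json"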
type: "boolean" OpenStdin: description: "Open `stdin`" type: "boolean" ReadOnly: description: "Mount the container's root filesystem as read only." type: "boolean" Mounts: description: | Specification for mounts to be added to containers created as part of the service. type: "array" items: $ref: "#/definitions/Mount" StopSignal: description: "Signal to stop the container." type: "string" StopGracePeriod: description: | Amount of time to wait for the container to terminate before forcefully killing it. type: "integer" format: "int64" HealthCheck: $ref: "#/definitions/HealthConfig" Hosts: type: "array" description: | A list of hostname/IP mappings to add to the container's `hosts` file. The format of extra hosts is specified in the [hosts(5)](http://man7.org/linux/man-pages/man5/hosts.5.html) man page: IP_address canonical_hostname [aliases...] items: type: "string" DNSConfig: description: | Specification for DNS related configurations in resolver configuration file (`resolv.conf`). type: "object" properties: Nameservers: description: "The IP addresses of the name servers." type: "array" items: type: "string" Search: description: "A search list for host-name lookup." type: "array" items: type: "string" Options: description: | A list of internal resolver variables to be modified (e.g., `debug`, `ndots:3`, etc.). type: "array" items: type: "string" Secrets: description: | Secrets contains references to zero or more secrets that will be exposed to the service. type: "array" items: type: "object" properties: File: description: | File represents a specific target that is backed by a file. type: "object" properties: Name: description: | Name represents the final filename in the filesystem. type: "string" UID: description: "UID represents the file UID." type: "string" GID: description: "GID represents the file GID." type: "string" Mode: description: "Mode represents the FileMode of the file." type: "integer" format: "uint32" SecretID: description: | SecretID represents the ID of the specific secret that we're referencing. type: "string" SecretName: description: | SecretName is the name of the secret that this references, but this is just provided for lookup/display purposes. The secret in the reference will be identified by its ID. type: "string" Configs: description: | Configs contains references to zero or more configs that will be exposed to the service. type: "array" items: type: "object" properties: File: description: | File represents a specific target that is backed by a file. <p><br /><p> > **Note**: `Configs.File` and `Configs.Runtime` are mutually exclusive type: "object" properties: Name: description: | Name represents the final filename in the filesystem. type: "string" UID: description: "UID represents the file UID." type: "string" GID: description: "GID represents the file GID." type: "string" Mode: description: "Mode represents the FileMode of the file." type: "integer" format: "uint32" Runtime: description: | Runtime represents a target that is not mounted into the container but is used by the task <p><br /><p> > **Note**: `Configs.File` and `Configs.Runtime` are mutually > exclusive type: "object" ConfigID: description: | ConfigID represents the ID of the specific config that we're referencing. type: "string" ConfigName: description: | ConfigName is the name of the config that this references, but this is just provided for lookup/display purposes. The config in the reference will be identified by its ID. 
type: "string" Isolation: type: "string" description: | Isolation technology of the containers running the service. (Windows only) enum: - "default" - "process" - "hyperv" Init: description: | Run an init inside the container that forwards signals and reaps processes. This field is omitted if empty, and the default (as configured on the daemon) is used. type: "boolean" x-nullable: true Sysctls: description: | Set kernel namedspaced parameters (sysctls) in the container. The Sysctls option on services accepts the same sysctls as the are supported on containers. Note that while the same sysctls are supported, no guarantees or checks are made about their suitability for a clustered environment, and it's up to the user to determine whether a given sysctl will work properly in a Service. type: "object" additionalProperties: type: "string" # This option is not used by Windows containers CapabilityAdd: type: "array" description: | A list of kernel capabilities to add to the default set for the container. items: type: "string" example: - "CAP_NET_RAW" - "CAP_SYS_ADMIN" - "CAP_SYS_CHROOT" - "CAP_SYSLOG" CapabilityDrop: type: "array" description: | A list of kernel capabilities to drop from the default set for the container. items: type: "string" example: - "CAP_NET_RAW" Ulimits: description: | A list of resource limits to set in the container. For example: `{"Name": "nofile", "Soft": 1024, "Hard": 2048}`" type: "array" items: type: "object" properties: Name: description: "Name of ulimit" type: "string" Soft: description: "Soft limit" type: "integer" Hard: description: "Hard limit" type: "integer" NetworkAttachmentSpec: description: | Read-only spec type for non-swarm containers attached to swarm overlay networks. <p><br /></p> > **Note**: ContainerSpec, NetworkAttachmentSpec, and PluginSpec are > mutually exclusive. PluginSpec is only used when the Runtime field > is set to `plugin`. NetworkAttachmentSpec is used when the Runtime > field is set to `attachment`. type: "object" properties: ContainerID: description: "ID of the container represented by this task" type: "string" Resources: description: | Resource requirements which apply to each individual container created as part of the service. type: "object" properties: Limits: description: "Define resources limits." $ref: "#/definitions/Limit" Reservation: description: "Define resources reservation." $ref: "#/definitions/ResourceObject" RestartPolicy: description: | Specification for the restart policy which applies to containers created as part of this service. type: "object" properties: Condition: description: "Condition for restart." type: "string" enum: - "none" - "on-failure" - "any" Delay: description: "Delay between restart attempts." type: "integer" format: "int64" MaxAttempts: description: | Maximum attempts to restart a given container before giving up (default value is 0, which is ignored). type: "integer" format: "int64" default: 0 Window: description: | Windows is the time window used to evaluate the restart policy (default value is 0, which is unbounded). type: "integer" format: "int64" default: 0 Placement: type: "object" properties: Constraints: description: | An array of constraint expressions to limit the set of nodes where a task can be scheduled. Constraint expressions can either use a _match_ (`==`) or _exclude_ (`!=`) rule. Multiple constraints find nodes that satisfy every expression (AND match). 
Constraints can match node or Docker Engine labels as follows: node attribute | matches | example ---------------------|--------------------------------|----------------------------------------------- `node.id` | Node ID | `node.id==2ivku8v2gvtg4` `node.hostname` | Node hostname | `node.hostname!=node-2` `node.role` | Node role (`manager`/`worker`) | `node.role==manager` `node.platform.os` | Node operating system | `node.platform.os==windows` `node.platform.arch` | Node architecture | `node.platform.arch==x86_64` `node.labels` | User-defined node labels | `node.labels.security==high` `engine.labels` | Docker Engine's labels | `engine.labels.operatingsystem==ubuntu-14.04` `engine.labels` apply to Docker Engine labels like operating system, drivers, etc. Swarm administrators add `node.labels` for operational purposes by using the [`node update endpoint`](#operation/NodeUpdate). type: "array" items: type: "string" example: - "node.hostname!=node3.corp.example.com" - "node.role!=manager" - "node.labels.type==production" - "node.platform.os==linux" - "node.platform.arch==x86_64" Preferences: description: | Preferences provide a way to make the scheduler aware of factors such as topology. They are provided in order from highest to lowest precedence. type: "array" items: type: "object" properties: Spread: type: "object" properties: SpreadDescriptor: description: | label descriptor, such as `engine.labels.az`. type: "string" example: - Spread: SpreadDescriptor: "node.labels.datacenter" - Spread: SpreadDescriptor: "node.labels.rack" MaxReplicas: description: | Maximum number of replicas per node (default value is 0, which is unlimited) type: "integer" format: "int64" default: 0 Platforms: description: | Platforms stores all the platforms that the service's image can run on. This field is used in the platform filter for scheduling. If empty, then the platform filter is off, meaning there are no scheduling restrictions. type: "array" items: $ref: "#/definitions/Platform" ForceUpdate: description: | A counter that triggers an update even if no relevant parameters have been changed. type: "integer" Runtime: description: | Runtime is the type of runtime specified for the task executor. type: "string" Networks: description: "Specifies which networks the service should attach to." type: "array" items: $ref: "#/definitions/NetworkAttachmentConfig" LogDriver: description: | Specifies the log driver to use for tasks created from this spec. If not present, the default one for the swarm will be used, finally falling back to the engine default if not specified. type: "object" properties: Name: type: "string" Options: type: "object" additionalProperties: type: "string" TaskState: type: "string" enum: - "new" - "allocated" - "pending" - "assigned" - "accepted" - "preparing" - "ready" - "starting" - "running" - "complete" - "shutdown" - "failed" - "rejected" - "remove" - "orphaned" Task: type: "object" properties: ID: description: "The ID of the task." type: "string" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" UpdatedAt: type: "string" format: "dateTime" Name: description: "Name of the task." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" Spec: $ref: "#/definitions/TaskSpec" ServiceID: description: "The ID of the service this task is part of." type: "string" Slot: type: "integer" NodeID: description: "The ID of the node that this task is on."
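# Illustrative, non-normative example combining the TaskSpec.Placement fields
# described above (Constraints, Preferences, MaxReplicas); the label names and
# values are hypothetical:
#
#   Placement:
#     Constraints:
#       - "node.role==worker"
#       - "node.labels.type==production"
#     Preferences:
#       - Spread:
#           SpreadDescriptor: "node.labels.datacenter"
#     MaxReplicas: 2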
type: "string" AssignedGenericResources: $ref: "#/definitions/GenericResources" Status: type: "object" properties: Timestamp: type: "string" format: "dateTime" State: $ref: "#/definitions/TaskState" Message: type: "string" Err: type: "string" ContainerStatus: type: "object" properties: ContainerID: type: "string" PID: type: "integer" ExitCode: type: "integer" DesiredState: $ref: "#/definitions/TaskState" JobIteration: description: | If the Service this Task belongs to is a job-mode service, contains the JobIteration of the Service this Task was created for. Absent if the Task was created for a Replicated or Global Service. $ref: "#/definitions/ObjectVersion" example: ID: "0kzzo1i0y4jz6027t0k7aezc7" Version: Index: 71 CreatedAt: "2016-06-07T21:07:31.171892745Z" UpdatedAt: "2016-06-07T21:07:31.376370513Z" Spec: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ServiceID: "9mnpnzenvg8p8tdbtq4wvbkcz" Slot: 1 NodeID: "60gvrl6tm78dmak4yl7srz94v" Status: Timestamp: "2016-06-07T21:07:31.290032978Z" State: "running" Message: "started" ContainerStatus: ContainerID: "e5d62702a1b48d01c3e02ca1e0212a250801fa8d67caca0b6f35919ebc12f035" PID: 677 DesiredState: "running" NetworksAttachments: - Network: ID: "4qvuz4ko70xaltuqbt8956gd1" Version: Index: 18 CreatedAt: "2016-06-07T20:31:11.912919752Z" UpdatedAt: "2016-06-07T21:07:29.955277358Z" Spec: Name: "ingress" Labels: com.docker.swarm.internal: "true" DriverConfiguration: {} IPAMOptions: Driver: {} Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" DriverState: Name: "overlay" Options: com.docker.network.driver.overlay.vxlanid_list: "256" IPAMOptions: Driver: Name: "default" Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" Addresses: - "10.255.0.10/16" AssignedGenericResources: - DiscreteResourceSpec: Kind: "SSD" Value: 3 - NamedResourceSpec: Kind: "GPU" Value: "UUID1" - NamedResourceSpec: Kind: "GPU" Value: "UUID2" ServiceSpec: description: "User modifiable configuration for a service." properties: Name: description: "Name of the service." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" TaskTemplate: $ref: "#/definitions/TaskSpec" Mode: description: "Scheduling mode for the service." type: "object" properties: Replicated: type: "object" properties: Replicas: type: "integer" format: "int64" Global: type: "object" ReplicatedJob: description: | The mode used for services with a finite number of tasks that run to a completed state. type: "object" properties: MaxConcurrent: description: | The maximum number of replicas to run simultaneously. type: "integer" format: "int64" default: 1 TotalCompletions: description: | The total number of replicas desired to reach the Completed state. If unset, will default to the value of `MaxConcurrent` type: "integer" format: "int64" GlobalJob: description: | The mode used for services which run a task to the completed state on each valid node. type: "object" UpdateConfig: description: "Specification for the update strategy of the service." type: "object" properties: Parallelism: description: | Maximum number of tasks to be updated in one iteration (0 means unlimited parallelism). type: "integer" format: "int64" Delay: description: "Amount of time between updates, in nanoseconds." type: "integer" format: "int64" FailureAction: description: | Action to take if an updated task fails to run, or stops running during the update. 
type: "string" enum: - "continue" - "pause" - "rollback" Monitor: description: | Amount of time to monitor each updated task for failures, in nanoseconds. type: "integer" format: "int64" MaxFailureRatio: description: | The fraction of tasks that may fail during an update before the failure action is invoked, specified as a floating point number between 0 and 1. type: "number" default: 0 Order: description: | The order of operations when rolling out an updated task. Either the old task is shut down before the new task is started, or the new task is started before the old task is shut down. type: "string" enum: - "stop-first" - "start-first" RollbackConfig: description: "Specification for the rollback strategy of the service." type: "object" properties: Parallelism: description: | Maximum number of tasks to be rolled back in one iteration (0 means unlimited parallelism). type: "integer" format: "int64" Delay: description: | Amount of time between rollback iterations, in nanoseconds. type: "integer" format: "int64" FailureAction: description: | Action to take if an rolled back task fails to run, or stops running during the rollback. type: "string" enum: - "continue" - "pause" Monitor: description: | Amount of time to monitor each rolled back task for failures, in nanoseconds. type: "integer" format: "int64" MaxFailureRatio: description: | The fraction of tasks that may fail during a rollback before the failure action is invoked, specified as a floating point number between 0 and 1. type: "number" default: 0 Order: description: | The order of operations when rolling back a task. Either the old task is shut down before the new task is started, or the new task is started before the old task is shut down. type: "string" enum: - "stop-first" - "start-first" Networks: description: "Specifies which networks the service should attach to." type: "array" items: $ref: "#/definitions/NetworkAttachmentConfig" EndpointSpec: $ref: "#/definitions/EndpointSpec" EndpointPortConfig: type: "object" properties: Name: type: "string" Protocol: type: "string" enum: - "tcp" - "udp" - "sctp" TargetPort: description: "The port inside the container." type: "integer" PublishedPort: description: "The port on the swarm hosts." type: "integer" PublishMode: description: | The mode in which port is published. <p><br /></p> - "ingress" makes the target port accessible on every node, regardless of whether there is a task for the service running on that node or not. - "host" bypasses the routing mesh and publish the port directly on the swarm node where that service is running. type: "string" enum: - "ingress" - "host" default: "ingress" example: "ingress" EndpointSpec: description: "Properties that can be configured to access and load balance a service." type: "object" properties: Mode: description: | The mode of resolution to use for internal load balancing between tasks. type: "string" enum: - "vip" - "dnsrr" default: "vip" Ports: description: | List of exposed ports that this service is accessible on from the outside. Ports can only be provided if `vip` resolution mode is used. 
type: "array" items: $ref: "#/definitions/EndpointPortConfig" Service: type: "object" properties: ID: type: "string" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" UpdatedAt: type: "string" format: "dateTime" Spec: $ref: "#/definitions/ServiceSpec" Endpoint: type: "object" properties: Spec: $ref: "#/definitions/EndpointSpec" Ports: type: "array" items: $ref: "#/definitions/EndpointPortConfig" VirtualIPs: type: "array" items: type: "object" properties: NetworkID: type: "string" Addr: type: "string" UpdateStatus: description: "The status of a service update." type: "object" properties: State: type: "string" enum: - "updating" - "paused" - "completed" StartedAt: type: "string" format: "dateTime" CompletedAt: type: "string" format: "dateTime" Message: type: "string" ServiceStatus: description: | The status of the service's tasks. Provided only when requested as part of a ServiceList operation. type: "object" properties: RunningTasks: description: | The number of tasks for the service currently in the Running state. type: "integer" format: "uint64" example: 7 DesiredTasks: description: | The number of tasks for the service desired to be running. For replicated services, this is the replica count from the service spec. For global services, this is computed by taking count of all tasks for the service with a Desired State other than Shutdown. type: "integer" format: "uint64" example: 10 CompletedTasks: description: | The number of tasks for a job that are in the Completed state. This field must be cross-referenced with the service type, as the value of 0 may mean the service is not in a job mode, or it may mean the job-mode service has no tasks yet Completed. type: "integer" format: "uint64" JobStatus: description: | The status of the service when it is in one of ReplicatedJob or GlobalJob modes. Absent on Replicated and Global mode services. The JobIteration is an ObjectVersion, but unlike the Service's version, does not need to be sent with an update request. type: "object" properties: JobIteration: description: | JobIteration is a value increased each time a Job is executed, successfully or otherwise. "Executed", in this case, means the job as a whole has been started, not that an individual Task has been launched. A job is "Executed" when its ServiceSpec is updated. JobIteration can be used to disambiguate Tasks belonging to different executions of a job. Though JobIteration will increase with each subsequent execution, it may not necessarily increase by 1, and so JobIteration should not be used to $ref: "#/definitions/ObjectVersion" LastExecution: description: | The last time, as observed by the server, that this job was started. 
type: "string" format: "dateTime" example: ID: "9mnpnzenvg8p8tdbtq4wvbkcz" Version: Index: 19 CreatedAt: "2016-06-07T21:05:51.880065305Z" UpdatedAt: "2016-06-07T21:07:29.962229872Z" Spec: Name: "hopeful_cori" TaskTemplate: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ForceUpdate: 0 Mode: Replicated: Replicas: 1 UpdateConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 RollbackConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 EndpointSpec: Mode: "vip" Ports: - Protocol: "tcp" TargetPort: 6379 PublishedPort: 30001 Endpoint: Spec: Mode: "vip" Ports: - Protocol: "tcp" TargetPort: 6379 PublishedPort: 30001 Ports: - Protocol: "tcp" TargetPort: 6379 PublishedPort: 30001 VirtualIPs: - NetworkID: "4qvuz4ko70xaltuqbt8956gd1" Addr: "10.255.0.2/16" - NetworkID: "4qvuz4ko70xaltuqbt8956gd1" Addr: "10.255.0.3/16" ImageDeleteResponseItem: type: "object" properties: Untagged: description: "The image ID of an image that was untagged" type: "string" Deleted: description: "The image ID of an image that was deleted" type: "string" ServiceUpdateResponse: type: "object" properties: Warnings: description: "Optional warning messages" type: "array" items: type: "string" example: Warning: "unable to pin image doesnotexist:latest to digest: image library/doesnotexist:latest not found" ContainerSummary: type: "object" properties: Id: description: "The ID of this container" type: "string" x-go-name: "ID" Names: description: "The names that this container has been given" type: "array" items: type: "string" Image: description: "The name of the image used when creating this container" type: "string" ImageID: description: "The ID of the image that this container was created from" type: "string" Command: description: "Command to run when starting the container" type: "string" Created: description: "When the container was created" type: "integer" format: "int64" Ports: description: "The ports exposed by this container" type: "array" items: $ref: "#/definitions/Port" SizeRw: description: "The size of files that have been created or changed by this container" type: "integer" format: "int64" SizeRootFs: description: "The total size of all the files in this container" type: "integer" format: "int64" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" State: description: "The state of this container (e.g. `Exited`)" type: "string" Status: description: "Additional human-readable status of this container (e.g. `Exit 0`)" type: "string" HostConfig: type: "object" properties: NetworkMode: type: "string" NetworkSettings: description: "A summary of the container's network settings" type: "object" properties: Networks: type: "object" additionalProperties: $ref: "#/definitions/EndpointSettings" Mounts: type: "array" items: $ref: "#/definitions/Mount" Driver: description: "Driver represents a driver (network, logging, secrets)." type: "object" required: [Name] properties: Name: description: "Name of the driver." type: "string" x-nullable: false example: "some-driver" Options: description: "Key/value map of driver-specific options." type: "object" x-nullable: false additionalProperties: type: "string" example: OptionA: "value for driver-specific option A" OptionB: "value for driver-specific option B" SecretSpec: type: "object" properties: Name: description: "User-defined name of the secret." 
type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" example: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Data: description: | Base64-url-safe-encoded ([RFC 4648](https://tools.ietf.org/html/rfc4648#section-5)) data to store as secret. This field is only used to _create_ a secret, and is not returned by other endpoints. type: "string" example: "" Driver: description: | Name of the secrets driver used to fetch the secret's value from an external secret store. $ref: "#/definitions/Driver" Templating: description: | Templating driver, if applicable Templating controls whether and how to evaluate the config payload as a template. If no driver is set, no templating is used. $ref: "#/definitions/Driver" Secret: type: "object" properties: ID: type: "string" example: "blt1owaxmitz71s9v5zh81zun" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" example: "2017-07-20T13:55:28.678958722Z" UpdatedAt: type: "string" format: "dateTime" example: "2017-07-20T13:55:28.678958722Z" Spec: $ref: "#/definitions/SecretSpec" ConfigSpec: type: "object" properties: Name: description: "User-defined name of the config." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" Data: description: | Base64-url-safe-encoded ([RFC 4648](https://tools.ietf.org/html/rfc4648#section-5)) config data. type: "string" Templating: description: | Templating driver, if applicable Templating controls whether and how to evaluate the config payload as a template. If no driver is set, no templating is used. $ref: "#/definitions/Driver" Config: type: "object" properties: ID: type: "string" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" UpdatedAt: type: "string" format: "dateTime" Spec: $ref: "#/definitions/ConfigSpec" ContainerState: description: | ContainerState stores container's running state. It's part of ContainerJSONBase and will be returned by the "inspect" command. type: "object" properties: Status: description: | String representation of the container state. Can be one of "created", "running", "paused", "restarting", "removing", "exited", or "dead". type: "string" enum: ["created", "running", "paused", "restarting", "removing", "exited", "dead"] example: "running" Running: description: | Whether this container is running. Note that a running container can be _paused_. The `Running` and `Paused` booleans are not mutually exclusive: When pausing a container (on Linux), the freezer cgroup is used to suspend all processes in the container. Freezing the process requires the process to be running. As a result, paused containers are both `Running` _and_ `Paused`. Use the `Status` field instead to determine if a container's state is "running". type: "boolean" example: true Paused: description: "Whether this container is paused." type: "boolean" example: false Restarting: description: "Whether this container is restarting." type: "boolean" example: false OOMKilled: description: | Whether this container has been killed because it ran out of memory. type: "boolean" example: false Dead: type: "boolean" example: false Pid: description: "The process ID of this container" type: "integer" example: 1234 ExitCode: description: "The last exit code of this container" type: "integer" example: 0 Error: type: "string" StartedAt: description: "The time when this container was last started." 
type: "string" example: "2020-01-06T09:06:59.461876391Z" FinishedAt: description: "The time when this container last exited." type: "string" example: "2020-01-06T09:07:59.461876391Z" Health: x-nullable: true $ref: "#/definitions/Health" SystemVersion: type: "object" description: | Response of Engine API: GET "/version" properties: Platform: type: "object" required: [Name] properties: Name: type: "string" Components: type: "array" description: | Information about system components items: type: "object" x-go-name: ComponentVersion required: [Name, Version] properties: Name: description: | Name of the component type: "string" example: "Engine" Version: description: | Version of the component type: "string" x-nullable: false example: "19.03.12" Details: description: | Key/value pairs of strings with additional information about the component. These values are intended for informational purposes only, and their content is not defined, and not part of the API specification. These messages can be printed by the client as information to the user. type: "object" x-nullable: true Version: description: "The version of the daemon" type: "string" example: "19.03.12" ApiVersion: description: | The default (and highest) API version that is supported by the daemon type: "string" example: "1.40" MinAPIVersion: description: | The minimum API version that is supported by the daemon type: "string" example: "1.12" GitCommit: description: | The Git commit of the source code that was used to build the daemon type: "string" example: "48a66213fe" GoVersion: description: | The version Go used to compile the daemon, and the version of the Go runtime in use. type: "string" example: "go1.13.14" Os: description: | The operating system that the daemon is running on ("linux" or "windows") type: "string" example: "linux" Arch: description: | The architecture that the daemon is running on type: "string" example: "amd64" KernelVersion: description: | The kernel version (`uname -r`) that the daemon is running on. This field is omitted when empty. type: "string" example: "4.19.76-linuxkit" Experimental: description: | Indicates if the daemon is started with experimental features enabled. This field is omitted when empty / false. type: "boolean" example: true BuildTime: description: | The date and time that the daemon was compiled. type: "string" example: "2020-06-22T15:49:27.000000000+00:00" SystemInfo: type: "object" properties: ID: description: | Unique identifier of the daemon. <p><br /></p> > **Note**: The format of the ID itself is not part of the API, and > should not be considered stable. type: "string" example: "7TRN:IPZB:QYBB:VPBQ:UMPP:KARE:6ZNR:XE6T:7EWV:PKF4:ZOJD:TPYS" Containers: description: "Total number of containers on the host." type: "integer" example: 14 ContainersRunning: description: | Number of containers with status `"running"`. type: "integer" example: 3 ContainersPaused: description: | Number of containers with status `"paused"`. type: "integer" example: 1 ContainersStopped: description: | Number of containers with status `"stopped"`. type: "integer" example: 10 Images: description: | Total number of images on the host. Both _tagged_ and _untagged_ (dangling) images are counted. type: "integer" example: 508 Driver: description: "Name of the storage driver in use." type: "string" example: "overlay2" DriverStatus: description: | Information specific to the storage driver, provided as "label" / "value" pairs. 
This information is provided by the storage driver, and formatted in a way consistent with the output of `docker info` on the command line. <p><br /></p> > **Note**: The information returned in this field, including the > formatting of values and labels, should not be considered stable, > and may change without notice. type: "array" items: type: "array" items: type: "string" example: - ["Backing Filesystem", "extfs"] - ["Supports d_type", "true"] - ["Native Overlay Diff", "true"] DockerRootDir: description: | Root directory of persistent Docker state. Defaults to `/var/lib/docker` on Linux, and `C:\ProgramData\docker` on Windows. type: "string" example: "/var/lib/docker" Plugins: $ref: "#/definitions/PluginsInfo" MemoryLimit: description: "Indicates if the host has memory limit support enabled." type: "boolean" example: true SwapLimit: description: "Indicates if the host has memory swap limit support enabled." type: "boolean" example: true KernelMemory: description: | Indicates if the host has kernel memory limit support enabled. <p><br /></p> > **Deprecated**: This field is deprecated as the kernel 5.4 deprecated > `kmem.limit_in_bytes`. type: "boolean" example: true CpuCfsPeriod: description: | Indicates if CPU CFS(Completely Fair Scheduler) period is supported by the host. type: "boolean" example: true CpuCfsQuota: description: | Indicates if CPU CFS(Completely Fair Scheduler) quota is supported by the host. type: "boolean" example: true CPUShares: description: | Indicates if CPU Shares limiting is supported by the host. type: "boolean" example: true CPUSet: description: | Indicates if CPUsets (cpuset.cpus, cpuset.mems) are supported by the host. See [cpuset(7)](https://www.kernel.org/doc/Documentation/cgroup-v1/cpusets.txt) type: "boolean" example: true PidsLimit: description: "Indicates if the host kernel has PID limit support enabled." type: "boolean" example: true OomKillDisable: description: "Indicates if OOM killer disable is supported on the host." type: "boolean" IPv4Forwarding: description: "Indicates IPv4 forwarding is enabled." type: "boolean" example: true BridgeNfIptables: description: "Indicates if `bridge-nf-call-iptables` is available on the host." type: "boolean" example: true BridgeNfIp6tables: description: "Indicates if `bridge-nf-call-ip6tables` is available on the host." type: "boolean" example: true Debug: description: | Indicates if the daemon is running in debug-mode / with debug-level logging enabled. type: "boolean" example: true NFd: description: | The total number of file Descriptors in use by the daemon process. This information is only returned if debug-mode is enabled. type: "integer" example: 64 NGoroutines: description: | The number of goroutines that currently exist. This information is only returned if debug-mode is enabled. type: "integer" example: 174 SystemTime: description: | Current system-time in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" example: "2017-08-08T20:28:29.06202363Z" LoggingDriver: description: | The logging driver to use as a default for new containers. type: "string" CgroupDriver: description: | The driver to use for managing cgroups. type: "string" enum: ["cgroupfs", "systemd", "none"] default: "cgroupfs" example: "cgroupfs" CgroupVersion: description: | The version of the cgroup. type: "string" enum: ["1", "2"] default: "1" example: "1" NEventsListener: description: "Number of event listeners subscribed." 
type: "integer" example: 30 KernelVersion: description: | Kernel version of the host. On Linux, this information obtained from `uname`. On Windows this information is queried from the <kbd>HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\</kbd> registry value, for example _"10.0 14393 (14393.1198.amd64fre.rs1_release_sec.170427-1353)"_. type: "string" example: "4.9.38-moby" OperatingSystem: description: | Name of the host's operating system, for example: "Ubuntu 16.04.2 LTS" or "Windows Server 2016 Datacenter" type: "string" example: "Alpine Linux v3.5" OSVersion: description: | Version of the host's operating system <p><br /></p> > **Note**: The information returned in this field, including its > very existence, and the formatting of values, should not be considered > stable, and may change without notice. type: "string" example: "16.04" OSType: description: | Generic type of the operating system of the host, as returned by the Go runtime (`GOOS`). Currently returned values are "linux" and "windows". A full list of possible values can be found in the [Go documentation](https://golang.org/doc/install/source#environment). type: "string" example: "linux" Architecture: description: | Hardware architecture of the host, as returned by the Go runtime (`GOARCH`). A full list of possible values can be found in the [Go documentation](https://golang.org/doc/install/source#environment). type: "string" example: "x86_64" NCPU: description: | The number of logical CPUs usable by the daemon. The number of available CPUs is checked by querying the operating system when the daemon starts. Changes to operating system CPU allocation after the daemon is started are not reflected. type: "integer" example: 4 MemTotal: description: | Total amount of physical memory available on the host, in bytes. type: "integer" format: "int64" example: 2095882240 IndexServerAddress: description: | Address / URL of the index server that is used for image search, and as a default for user authentication for Docker Hub and Docker Cloud. default: "https://index.docker.io/v1/" type: "string" example: "https://index.docker.io/v1/" RegistryConfig: $ref: "#/definitions/RegistryServiceConfig" GenericResources: $ref: "#/definitions/GenericResources" HttpProxy: description: | HTTP-proxy configured for the daemon. This value is obtained from the [`HTTP_PROXY`](https://www.gnu.org/software/wget/manual/html_node/Proxies.html) environment variable. Credentials ([user info component](https://tools.ietf.org/html/rfc3986#section-3.2.1)) in the proxy URL are masked in the API response. Containers do not automatically inherit this configuration. type: "string" example: "http://xxxxx:[email protected]:8080" HttpsProxy: description: | HTTPS-proxy configured for the daemon. This value is obtained from the [`HTTPS_PROXY`](https://www.gnu.org/software/wget/manual/html_node/Proxies.html) environment variable. Credentials ([user info component](https://tools.ietf.org/html/rfc3986#section-3.2.1)) in the proxy URL are masked in the API response. Containers do not automatically inherit this configuration. type: "string" example: "https://xxxxx:[email protected]:4443" NoProxy: description: | Comma-separated list of domain extensions for which no proxy should be used. This value is obtained from the [`NO_PROXY`](https://www.gnu.org/software/wget/manual/html_node/Proxies.html) environment variable. Containers do not automatically inherit this configuration. 
type: "string" example: "*.local, 169.254/16" Name: description: "Hostname of the host." type: "string" example: "node5.corp.example.com" Labels: description: | User-defined labels (key/value metadata) as set on the daemon. <p><br /></p> > **Note**: When part of a Swarm, nodes can both have _daemon_ labels, > set through the daemon configuration, and _node_ labels, set from a > manager node in the Swarm. Node labels are not included in this > field. Node labels can be retrieved using the `/nodes/(id)` endpoint > on a manager node in the Swarm. type: "array" items: type: "string" example: ["storage=ssd", "production"] ExperimentalBuild: description: | Indicates if experimental features are enabled on the daemon. type: "boolean" example: true ServerVersion: description: | Version string of the daemon. > **Note**: the [standalone Swarm API](https://docs.docker.com/swarm/swarm-api/) > returns the Swarm version instead of the daemon version, for example > `swarm/1.2.8`. type: "string" example: "17.06.0-ce" ClusterStore: description: | URL of the distributed storage backend. The storage backend is used for multihost networking (to store network and endpoint information) and by the node discovery mechanism. <p><br /></p> > **Deprecated**: This field is only propagated when using standalone Swarm > mode, and overlay networking using an external k/v store. Overlay > networks with Swarm mode enabled use the built-in raft store, and > this field will be empty. type: "string" example: "consul://consul.corp.example.com:8600/some/path" ClusterAdvertise: description: | The network endpoint that the Engine advertises for the purpose of node discovery. ClusterAdvertise is a `host:port` combination on which the daemon is reachable by other hosts. <p><br /></p> > **Deprecated**: This field is only propagated when using standalone Swarm > mode, and overlay networking using an external k/v store. Overlay > networks with Swarm mode enabled use the built-in raft store, and > this field will be empty. type: "string" example: "node5.corp.example.com:8000" Runtimes: description: | List of [OCI compliant](https://github.com/opencontainers/runtime-spec) runtimes configured on the daemon. Keys hold the "name" used to reference the runtime. The Docker daemon relies on an OCI compliant runtime (invoked via the `containerd` daemon) as its interface to the Linux kernel namespaces, cgroups, and SELinux. The default runtime is `runc`, and automatically configured. Additional runtimes can be configured by the user and will be listed here. type: "object" additionalProperties: $ref: "#/definitions/Runtime" default: runc: path: "runc" example: runc: path: "runc" runc-master: path: "/go/bin/runc" custom: path: "/usr/local/bin/my-oci-runtime" runtimeArgs: ["--debug", "--systemd-cgroup=false"] DefaultRuntime: description: | Name of the default OCI runtime that is used when starting containers. The default can be overridden per-container at create time. type: "string" default: "runc" example: "runc" Swarm: $ref: "#/definitions/SwarmInfo" LiveRestoreEnabled: description: | Indicates if live restore is enabled. If enabled, containers are kept running when the daemon is shutdown or upon daemon start if running containers are detected. type: "boolean" default: false example: false Isolation: description: | Represents the isolation technology to use as a default for containers. The supported values are platform-specific. 
If no isolation value is specified on daemon start, on Windows client, the default is `hyperv`, and on Windows server, the default is `process`. This option is currently not used on other platforms. default: "default" type: "string" enum: - "default" - "hyperv" - "process" InitBinary: description: | Name and, optional, path of the `docker-init` binary. If the path is omitted, the daemon searches the host's `$PATH` for the binary and uses the first result. type: "string" example: "docker-init" ContainerdCommit: $ref: "#/definitions/Commit" RuncCommit: $ref: "#/definitions/Commit" InitCommit: $ref: "#/definitions/Commit" SecurityOptions: description: | List of security features that are enabled on the daemon, such as apparmor, seccomp, SELinux, user-namespaces (userns), and rootless. Additional configuration options for each security feature may be present, and are included as a comma-separated list of key/value pairs. type: "array" items: type: "string" example: - "name=apparmor" - "name=seccomp,profile=default" - "name=selinux" - "name=userns" - "name=rootless" ProductLicense: description: | Reports a summary of the product license on the daemon. If a commercial license has been applied to the daemon, information such as number of nodes, and expiration are included. type: "string" example: "Community Engine" DefaultAddressPools: description: | List of custom default address pools for local networks, which can be specified in the daemon.json file or dockerd option. Example: a Base "10.10.0.0/16" with Size 24 will define the set of 256 10.10.[0-255].0/24 address pools. type: "array" items: type: "object" properties: Base: description: "The network address in CIDR format" type: "string" example: "10.10.0.0/16" Size: description: "The network pool size" type: "integer" example: "24" Warnings: description: | List of warnings / informational messages about missing features, or issues related to the daemon configuration. These messages can be printed by the client as information to the user. type: "array" items: type: "string" example: - "WARNING: No memory limit support" - "WARNING: bridge-nf-call-iptables is disabled" - "WARNING: bridge-nf-call-ip6tables is disabled" # PluginsInfo is a temp struct holding Plugins name # registered with docker daemon. It is used by Info struct PluginsInfo: description: | Available plugins per type. <p><br /></p> > **Note**: Only unmanaged (V1) plugins are included in this list. > V1 plugins are "lazily" loaded, and are not returned in this list > if there is no resource using the plugin. type: "object" properties: Volume: description: "Names of available volume-drivers, and network-driver plugins." type: "array" items: type: "string" example: ["local"] Network: description: "Names of available network-drivers, and network-driver plugins." type: "array" items: type: "string" example: ["bridge", "host", "ipvlan", "macvlan", "null", "overlay"] Authorization: description: "Names of available authorization plugins." type: "array" items: type: "string" example: ["img-authz-plugin", "hbm"] Log: description: "Names of available logging-drivers, and logging-driver plugins." type: "array" items: type: "string" example: ["awslogs", "fluentd", "gcplogs", "gelf", "journald", "json-file", "logentries", "splunk", "syslog"] RegistryServiceConfig: description: | RegistryServiceConfig stores daemon registry services configuration. 
type: "object" x-nullable: true properties: AllowNondistributableArtifactsCIDRs: description: | List of IP ranges to which nondistributable artifacts can be pushed, using the CIDR syntax [RFC 4632](https://tools.ietf.org/html/4632). Some images (for example, Windows base images) contain artifacts whose distribution is restricted by license. When these images are pushed to a registry, restricted artifacts are not included. This configuration override this behavior, and enables the daemon to push nondistributable artifacts to all registries whose resolved IP address is within the subnet described by the CIDR syntax. This option is useful when pushing images containing nondistributable artifacts to a registry on an air-gapped network so hosts on that network can pull the images without connecting to another server. > **Warning**: Nondistributable artifacts typically have restrictions > on how and where they can be distributed and shared. Only use this > feature to push artifacts to private registries and ensure that you > are in compliance with any terms that cover redistributing > nondistributable artifacts. type: "array" items: type: "string" example: ["::1/128", "127.0.0.0/8"] AllowNondistributableArtifactsHostnames: description: | List of registry hostnames to which nondistributable artifacts can be pushed, using the format `<hostname>[:<port>]` or `<IP address>[:<port>]`. Some images (for example, Windows base images) contain artifacts whose distribution is restricted by license. When these images are pushed to a registry, restricted artifacts are not included. This configuration override this behavior for the specified registries. This option is useful when pushing images containing nondistributable artifacts to a registry on an air-gapped network so hosts on that network can pull the images without connecting to another server. > **Warning**: Nondistributable artifacts typically have restrictions > on how and where they can be distributed and shared. Only use this > feature to push artifacts to private registries and ensure that you > are in compliance with any terms that cover redistributing > nondistributable artifacts. type: "array" items: type: "string" example: ["registry.internal.corp.example.com:3000", "[2001:db8:a0b:12f0::1]:443"] InsecureRegistryCIDRs: description: | List of IP ranges of insecure registries, using the CIDR syntax ([RFC 4632](https://tools.ietf.org/html/4632)). Insecure registries accept un-encrypted (HTTP) and/or untrusted (HTTPS with certificates from unknown CAs) communication. By default, local registries (`127.0.0.0/8`) are configured as insecure. All other registries are secure. Communicating with an insecure registry is not possible if the daemon assumes that registry is secure. This configuration override this behavior, insecure communication with registries whose resolved IP address is within the subnet described by the CIDR syntax. Registries can also be marked insecure by hostname. Those registries are listed under `IndexConfigs` and have their `Secure` field set to `false`. > **Warning**: Using this option can be useful when running a local > registry, but introduces security vulnerabilities. This option > should therefore ONLY be used for testing purposes. For increased > security, users should add their CA to their system's list of trusted > CAs instead of enabling this option. 
type: "array" items: type: "string" example: ["::1/128", "127.0.0.0/8"] IndexConfigs: type: "object" additionalProperties: $ref: "#/definitions/IndexInfo" example: "127.0.0.1:5000": "Name": "127.0.0.1:5000" "Mirrors": [] "Secure": false "Official": false "[2001:db8:a0b:12f0::1]:80": "Name": "[2001:db8:a0b:12f0::1]:80" "Mirrors": [] "Secure": false "Official": false "docker.io": Name: "docker.io" Mirrors: ["https://hub-mirror.corp.example.com:5000/"] Secure: true Official: true "registry.internal.corp.example.com:3000": Name: "registry.internal.corp.example.com:3000" Mirrors: [] Secure: false Official: false Mirrors: description: | List of registry URLs that act as a mirror for the official (`docker.io`) registry. type: "array" items: type: "string" example: - "https://hub-mirror.corp.example.com:5000/" - "https://[2001:db8:a0b:12f0::1]/" IndexInfo: description: IndexInfo contains information about a registry. type: "object" x-nullable: true properties: Name: description: | Name of the registry, such as "docker.io". type: "string" example: "docker.io" Mirrors: description: | List of mirrors, expressed as URIs. type: "array" items: type: "string" example: - "https://hub-mirror.corp.example.com:5000/" - "https://registry-2.docker.io/" - "https://registry-3.docker.io/" Secure: description: | Indicates if the registry is part of the list of insecure registries. If `false`, the registry is insecure. Insecure registries accept un-encrypted (HTTP) and/or untrusted (HTTPS with certificates from unknown CAs) communication. > **Warning**: Insecure registries can be useful when running a local > registry. However, because its use creates security vulnerabilities > it should ONLY be enabled for testing purposes. For increased > security, users should add their CA to their system's list of > trusted CAs instead of enabling this option. type: "boolean" example: true Official: description: | Indicates whether this is an official registry (i.e., Docker Hub / docker.io) type: "boolean" example: true Runtime: description: | Runtime describes an [OCI compliant](https://github.com/opencontainers/runtime-spec) runtime. The runtime is invoked by the daemon via the `containerd` daemon. OCI runtimes act as an interface to the Linux kernel namespaces, cgroups, and SELinux. type: "object" properties: path: description: | Name and, optional, path, of the OCI executable binary. If the path is omitted, the daemon searches the host's `$PATH` for the binary and uses the first result. type: "string" example: "/usr/local/bin/my-oci-runtime" runtimeArgs: description: | List of command-line arguments to pass to the runtime when invoked. type: "array" x-nullable: true items: type: "string" example: ["--debug", "--systemd-cgroup=false"] Commit: description: | Commit holds the Git-commit (SHA1) that a binary was built from, as reported in the version-string of external tools, such as `containerd`, or `runC`. type: "object" properties: ID: description: "Actual commit ID of external tool." type: "string" example: "cfb82a876ecc11b5ca0977d1733adbe58599088a" Expected: description: | Commit ID of external tool expected by dockerd as set at build time. type: "string" example: "2d41c047c83e09a6d61d464906feb2a2f3c52aa4" SwarmInfo: description: | Represents generic information about swarm. type: "object" properties: NodeID: description: "Unique identifier of for this node in the swarm." 
type: "string" default: "" example: "k67qz4598weg5unwwffg6z1m1" NodeAddr: description: | IP address at which this node can be reached by other nodes in the swarm. type: "string" default: "" example: "10.0.0.46" LocalNodeState: $ref: "#/definitions/LocalNodeState" ControlAvailable: type: "boolean" default: false example: true Error: type: "string" default: "" RemoteManagers: description: | List of ID's and addresses of other managers in the swarm. type: "array" default: null x-nullable: true items: $ref: "#/definitions/PeerNode" example: - NodeID: "71izy0goik036k48jg985xnds" Addr: "10.0.0.158:2377" - NodeID: "79y6h1o4gv8n120drcprv5nmc" Addr: "10.0.0.159:2377" - NodeID: "k67qz4598weg5unwwffg6z1m1" Addr: "10.0.0.46:2377" Nodes: description: "Total number of nodes in the swarm." type: "integer" x-nullable: true example: 4 Managers: description: "Total number of managers in the swarm." type: "integer" x-nullable: true example: 3 Cluster: $ref: "#/definitions/ClusterInfo" LocalNodeState: description: "Current local status of this node." type: "string" default: "" enum: - "" - "inactive" - "pending" - "active" - "error" - "locked" example: "active" PeerNode: description: "Represents a peer-node in the swarm" properties: NodeID: description: "Unique identifier of for this node in the swarm." type: "string" Addr: description: | IP address and ports at which this node can be reached. type: "string" NetworkAttachmentConfig: description: | Specifies how a service should be attached to a particular network. type: "object" properties: Target: description: | The target network for attachment. Must be a network name or ID. type: "string" Aliases: description: | Discoverable alternate names for the service on this network. type: "array" items: type: "string" DriverOpts: description: | Driver attachment options for the network target. type: "object" additionalProperties: type: "string" paths: /containers/json: get: summary: "List containers" description: | Returns a list of containers. For details on the format, see the [inspect endpoint](#operation/ContainerInspect). Note that it uses a different, smaller representation of a container than inspecting a single container. For example, the list of linked containers is not propagated . operationId: "ContainerList" produces: - "application/json" parameters: - name: "all" in: "query" description: | Return all containers. By default, only running containers are shown. type: "boolean" default: false - name: "limit" in: "query" description: | Return this number of most recently created containers, including non-running ones. type: "integer" - name: "size" in: "query" description: | Return the size of container as fields `SizeRw` and `SizeRootFs`. type: "boolean" default: false - name: "filters" in: "query" description: | Filters to process on the container list, encoded as JSON (a `map[string][]string`). For example, `{"status": ["paused"]}` will only return paused containers. 
Available filters: - `ancestor`=(`<image-name>[:<tag>]`, `<image id>`, or `<image@digest>`) - `before`=(`<container id>` or `<container name>`) - `expose`=(`<port>[/<proto>]`|`<startport-endport>/[<proto>]`) - `exited=<int>` containers with exit code of `<int>` - `health`=(`starting`|`healthy`|`unhealthy`|`none`) - `id=<ID>` a container's ID - `isolation=`(`default`|`process`|`hyperv`) (Windows daemon only) - `is-task=`(`true`|`false`) - `label=key` or `label="key=value"` of a container label - `name=<name>` a container's name - `network`=(`<network id>` or `<network name>`) - `publish`=(`<port>[/<proto>]`|`<startport-endport>/[<proto>]`) - `since`=(`<container id>` or `<container name>`) - `status=`(`created`|`restarting`|`running`|`removing`|`paused`|`exited`|`dead`) - `volume`=(`<volume name>` or `<mount point destination>`) type: "string" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/ContainerSummary" examples: application/json: - Id: "8dfafdbc3a40" Names: - "/boring_feynman" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 1" Created: 1367854155 State: "Exited" Status: "Exit 0" Ports: - PrivatePort: 2222 PublicPort: 3333 Type: "tcp" Labels: com.example.vendor: "Acme" com.example.license: "GPL" com.example.version: "1.0" SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "2cdc4edb1ded3631c81f57966563e5c8525b81121bb3706a9a9a3ae102711f3f" Gateway: "172.17.0.1" IPAddress: "172.17.0.2" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:02" Mounts: - Name: "fac362...80535" Source: "/data" Destination: "/data" Driver: "local" Mode: "ro,Z" RW: false Propagation: "" - Id: "9cd87474be90" Names: - "/coolName" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 222222" Created: 1367854155 State: "Exited" Status: "Exit 0" Ports: [] Labels: {} SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "88eaed7b37b38c2a3f0c4bc796494fdf51b270c2d22656412a2ca5d559a64d7a" Gateway: "172.17.0.1" IPAddress: "172.17.0.8" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:08" Mounts: [] - Id: "3176a2479c92" Names: - "/sleepy_dog" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 3333333333333333" Created: 1367854154 State: "Exited" Status: "Exit 0" Ports: [] Labels: {} SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "8b27c041c30326d59cd6e6f510d4f8d1d570a228466f956edf7815508f78e30d" Gateway: "172.17.0.1" IPAddress: "172.17.0.6" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:06" Mounts: [] - Id: "4cb07b47f9fb" Names: - "/running_cat" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 444444444444444444444444444444444" Created: 1367854152 State: "Exited" Status: "Exit 0" Ports: [] Labels: {} SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" 
NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "d91c7b2f0644403d7ef3095985ea0e2370325cd2332ff3a3225c4247328e66e9" Gateway: "172.17.0.1" IPAddress: "172.17.0.5" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:05" Mounts: [] 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Container"] /containers/create: post: summary: "Create a container" operationId: "ContainerCreate" consumes: - "application/json" - "application/octet-stream" produces: - "application/json" parameters: - name: "name" in: "query" description: | Assign the specified name to the container. Must match `/?[a-zA-Z0-9][a-zA-Z0-9_.-]+`. type: "string" pattern: "^/?[a-zA-Z0-9][a-zA-Z0-9_.-]+$" - name: "body" in: "body" description: "Container to create" schema: allOf: - $ref: "#/definitions/ContainerConfig" - type: "object" properties: HostConfig: $ref: "#/definitions/HostConfig" NetworkingConfig: $ref: "#/definitions/NetworkingConfig" example: Hostname: "" Domainname: "" User: "" AttachStdin: false AttachStdout: true AttachStderr: true Tty: false OpenStdin: false StdinOnce: false Env: - "FOO=bar" - "BAZ=quux" Cmd: - "date" Entrypoint: "" Image: "ubuntu" Labels: com.example.vendor: "Acme" com.example.license: "GPL" com.example.version: "1.0" Volumes: /volumes/data: {} WorkingDir: "" NetworkDisabled: false MacAddress: "12:34:56:78:9a:bc" ExposedPorts: 22/tcp: {} StopSignal: "SIGTERM" StopTimeout: 10 HostConfig: Binds: - "/tmp:/tmp" Links: - "redis3:redis" Memory: 0 MemorySwap: 0 MemoryReservation: 0 KernelMemory: 0 NanoCpus: 500000 CpuPercent: 80 CpuShares: 512 CpuPeriod: 100000 CpuRealtimePeriod: 1000000 CpuRealtimeRuntime: 10000 CpuQuota: 50000 CpusetCpus: "0,1" CpusetMems: "0,1" MaximumIOps: 0 MaximumIOBps: 0 BlkioWeight: 300 BlkioWeightDevice: - {} BlkioDeviceReadBps: - {} BlkioDeviceReadIOps: - {} BlkioDeviceWriteBps: - {} BlkioDeviceWriteIOps: - {} DeviceRequests: - Driver: "nvidia" Count: -1 DeviceIDs": ["0", "1", "GPU-fef8089b-4820-abfc-e83e-94318197576e"] Capabilities: [["gpu", "nvidia", "compute"]] Options: property1: "string" property2: "string" MemorySwappiness: 60 OomKillDisable: false OomScoreAdj: 500 PidMode: "" PidsLimit: 0 PortBindings: 22/tcp: - HostPort: "11022" PublishAllPorts: false Privileged: false ReadonlyRootfs: false Dns: - "8.8.8.8" DnsOptions: - "" DnsSearch: - "" VolumesFrom: - "parent" - "other:ro" CapAdd: - "NET_ADMIN" CapDrop: - "MKNOD" GroupAdd: - "newgroup" RestartPolicy: Name: "" MaximumRetryCount: 0 AutoRemove: true NetworkMode: "bridge" Devices: [] Ulimits: - {} LogConfig: Type: "json-file" Config: {} SecurityOpt: [] StorageOpt: {} CgroupParent: "" VolumeDriver: "" ShmSize: 67108864 NetworkingConfig: EndpointsConfig: isolated_nw: IPAMConfig: IPv4Address: "172.20.30.33" IPv6Address: "2001:db8:abcd::3033" LinkLocalIPs: - "169.254.34.68" - "fe80::3468" Links: - "container_1" - "container_2" Aliases: - "server_x" - "server_y" required: true responses: 201: description: "Container created successfully" schema: type: "object" title: "ContainerCreateResponse" description: "OK response to ContainerCreate operation" required: [Id, Warnings] properties: Id: description: "The ID of the created container" type: "string" x-nullable: false Warnings: description: "Warnings encountered when creating the container" type: "array" x-nullable: false items: type: "string" 
examples: application/json: Id: "e90e34656806" Warnings: [] 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such image" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such image: c2ada9df5af8" 409: description: "conflict" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Container"] /containers/{id}/json: get: summary: "Inspect a container" description: "Return low-level information about a container." operationId: "ContainerInspect" produces: - "application/json" responses: 200: description: "no error" schema: type: "object" title: "ContainerInspectResponse" properties: Id: description: "The ID of the container" type: "string" Created: description: "The time the container was created" type: "string" Path: description: "The path to the command being run" type: "string" Args: description: "The arguments to the command being run" type: "array" items: type: "string" State: x-nullable: true $ref: "#/definitions/ContainerState" Image: description: "The container's image ID" type: "string" ResolvConfPath: type: "string" HostnamePath: type: "string" HostsPath: type: "string" LogPath: type: "string" Name: type: "string" RestartCount: type: "integer" Driver: type: "string" Platform: type: "string" MountLabel: type: "string" ProcessLabel: type: "string" AppArmorProfile: type: "string" ExecIDs: description: "IDs of exec instances that are running in the container." type: "array" items: type: "string" x-nullable: true HostConfig: $ref: "#/definitions/HostConfig" GraphDriver: $ref: "#/definitions/GraphDriverData" SizeRw: description: | The size of files that have been created or changed by this container. type: "integer" format: "int64" SizeRootFs: description: "The total size of all the files in this container." 
type: "integer" format: "int64" Mounts: type: "array" items: $ref: "#/definitions/MountPoint" Config: $ref: "#/definitions/ContainerConfig" NetworkSettings: $ref: "#/definitions/NetworkSettings" examples: application/json: AppArmorProfile: "" Args: - "-c" - "exit 9" Config: AttachStderr: true AttachStdin: false AttachStdout: true Cmd: - "/bin/sh" - "-c" - "exit 9" Domainname: "" Env: - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" Healthcheck: Test: ["CMD-SHELL", "exit 0"] Hostname: "ba033ac44011" Image: "ubuntu" Labels: com.example.vendor: "Acme" com.example.license: "GPL" com.example.version: "1.0" MacAddress: "" NetworkDisabled: false OpenStdin: false StdinOnce: false Tty: false User: "" Volumes: /volumes/data: {} WorkingDir: "" StopSignal: "SIGTERM" StopTimeout: 10 Created: "2015-01-06T15:47:31.485331387Z" Driver: "devicemapper" ExecIDs: - "b35395de42bc8abd327f9dd65d913b9ba28c74d2f0734eeeae84fa1c616a0fca" - "3fc1232e5cd20c8de182ed81178503dc6437f4e7ef12b52cc5e8de020652f1c4" HostConfig: MaximumIOps: 0 MaximumIOBps: 0 BlkioWeight: 0 BlkioWeightDevice: - {} BlkioDeviceReadBps: - {} BlkioDeviceWriteBps: - {} BlkioDeviceReadIOps: - {} BlkioDeviceWriteIOps: - {} ContainerIDFile: "" CpusetCpus: "" CpusetMems: "" CpuPercent: 80 CpuShares: 0 CpuPeriod: 100000 CpuRealtimePeriod: 1000000 CpuRealtimeRuntime: 10000 Devices: [] DeviceRequests: - Driver: "nvidia" Count: -1 DeviceIDs": ["0", "1", "GPU-fef8089b-4820-abfc-e83e-94318197576e"] Capabilities: [["gpu", "nvidia", "compute"]] Options: property1: "string" property2: "string" IpcMode: "" LxcConf: [] Memory: 0 MemorySwap: 0 MemoryReservation: 0 KernelMemory: 0 OomKillDisable: false OomScoreAdj: 500 NetworkMode: "bridge" PidMode: "" PortBindings: {} Privileged: false ReadonlyRootfs: false PublishAllPorts: false RestartPolicy: MaximumRetryCount: 2 Name: "on-failure" LogConfig: Type: "json-file" Sysctls: net.ipv4.ip_forward: "1" Ulimits: - {} VolumeDriver: "" ShmSize: 67108864 HostnamePath: "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/hostname" HostsPath: "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/hosts" LogPath: "/var/lib/docker/containers/1eb5fabf5a03807136561b3c00adcd2992b535d624d5e18b6cdc6a6844d9767b/1eb5fabf5a03807136561b3c00adcd2992b535d624d5e18b6cdc6a6844d9767b-json.log" Id: "ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39" Image: "04c5d3b7b0656168630d3ba35d8889bd0e9caafcaeb3004d2bfbc47e7c5d35d2" MountLabel: "" Name: "/boring_euclid" NetworkSettings: Bridge: "" SandboxID: "" HairpinMode: false LinkLocalIPv6Address: "" LinkLocalIPv6PrefixLen: 0 SandboxKey: "" EndpointID: "" Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 IPAddress: "" IPPrefixLen: 0 IPv6Gateway: "" MacAddress: "" Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "7587b82f0dada3656fda26588aee72630c6fab1536d36e394b2bfbcf898c971d" Gateway: "172.17.0.1" IPAddress: "172.17.0.2" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:12:00:02" Path: "/bin/sh" ProcessLabel: "" ResolvConfPath: "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/resolv.conf" RestartCount: 1 State: Error: "" ExitCode: 9 FinishedAt: "2015-01-06T15:47:32.080254511Z" Health: Status: "healthy" FailingStreak: 0 Log: - Start: "2019-12-22T10:59:05.6385933Z" End: "2019-12-22T10:59:05.8078452Z" ExitCode: 0 Output: "" OOMKilled: 
false Dead: false Paused: false Pid: 0 Restarting: false Running: true StartedAt: "2015-01-06T15:47:32.072697474Z" Status: "running" Mounts: - Name: "fac362...80535" Source: "/data" Destination: "/data" Driver: "local" Mode: "ro,Z" RW: false Propagation: "" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "size" in: "query" type: "boolean" default: false description: "Return the size of container as fields `SizeRw` and `SizeRootFs`" tags: ["Container"] /containers/{id}/top: get: summary: "List processes running inside a container" description: | On Unix systems, this is done by running the `ps` command. This endpoint is not supported on Windows. operationId: "ContainerTop" responses: 200: description: "no error" schema: type: "object" title: "ContainerTopResponse" description: "OK response to ContainerTop operation" properties: Titles: description: "The ps column titles" type: "array" items: type: "string" Processes: description: | Each process running in the container, where each is process is an array of values corresponding to the titles. type: "array" items: type: "array" items: type: "string" examples: application/json: Titles: - "UID" - "PID" - "PPID" - "C" - "STIME" - "TTY" - "TIME" - "CMD" Processes: - - "root" - "13642" - "882" - "0" - "17:03" - "pts/0" - "00:00:00" - "/bin/bash" - - "root" - "13735" - "13642" - "0" - "17:06" - "pts/0" - "00:00:00" - "sleep 10" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "ps_args" in: "query" description: "The arguments to pass to `ps`. For example, `aux`" type: "string" default: "-ef" tags: ["Container"] /containers/{id}/logs: get: summary: "Get container logs" description: | Get `stdout` and `stderr` logs from a container. Note: This endpoint works only for containers with the `json-file` or `journald` logging driver. operationId: "ContainerLogs" responses: 200: description: | logs returned as a stream in response body. For the stream format, [see the documentation for the attach endpoint](#operation/ContainerAttach). Note that unlike the attach endpoint, the logs endpoint does not upgrade the connection and does not set Content-Type. schema: type: "string" format: "binary" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "follow" in: "query" description: "Keep connection after returning logs." 
type: "boolean" default: false - name: "stdout" in: "query" description: "Return logs from `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Return logs from `stderr`" type: "boolean" default: false - name: "since" in: "query" description: "Only return logs since this time, as a UNIX timestamp" type: "integer" default: 0 - name: "until" in: "query" description: "Only return logs before this time, as a UNIX timestamp" type: "integer" default: 0 - name: "timestamps" in: "query" description: "Add timestamps to every log line" type: "boolean" default: false - name: "tail" in: "query" description: | Only return this number of log lines from the end of the logs. Specify as an integer or `all` to output all log lines. type: "string" default: "all" tags: ["Container"] /containers/{id}/changes: get: summary: "Get changes on a container’s filesystem" description: | Returns which files in a container's filesystem have been added, deleted, or modified. The `Kind` of modification can be one of: - `0`: Modified - `1`: Added - `2`: Deleted operationId: "ContainerChanges" produces: ["application/json"] responses: 200: description: "The list of changes" schema: type: "array" items: type: "object" x-go-name: "ContainerChangeResponseItem" title: "ContainerChangeResponseItem" description: "change item in response to ContainerChanges operation" required: [Path, Kind] properties: Path: description: "Path to file that has changed" type: "string" x-nullable: false Kind: description: "Kind of change" type: "integer" format: "uint8" enum: [0, 1, 2] x-nullable: false examples: application/json: - Path: "/dev" Kind: 0 - Path: "/dev/kmsg" Kind: 1 - Path: "/test" Kind: 1 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/export: get: summary: "Export a container" description: "Export the contents of a container as a tarball." operationId: "ContainerExport" produces: - "application/octet-stream" responses: 200: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/stats: get: summary: "Get container stats based on resource usage" description: | This endpoint returns a live stream of a container’s resource usage statistics. The `precpu_stats` is the CPU statistic of the *previous* read, and is used to calculate the CPU usage percentage. It is not an exact copy of the `cpu_stats` field. If either `precpu_stats.online_cpus` or `cpu_stats.online_cpus` is nil then for compatibility with older daemons the length of the corresponding `cpu_usage.percpu_usage` array should be used. On a cgroup v2 host, the following fields are not set * `blkio_stats`: all fields other than `io_service_bytes_recursive` * `cpu_stats`: `cpu_usage.percpu_usage` * `memory_stats`: `max_usage` and `failcnt` Also, `memory_stats.stats` fields are incompatible with cgroup v1. 
To calculate the values shown by the `stats` command of the docker cli tool, the following formulas can be used: * used_memory = `memory_stats.usage - memory_stats.stats.cache` * available_memory = `memory_stats.limit` * Memory usage % = `(used_memory / available_memory) * 100.0` * cpu_delta = `cpu_stats.cpu_usage.total_usage - precpu_stats.cpu_usage.total_usage` * system_cpu_delta = `cpu_stats.system_cpu_usage - precpu_stats.system_cpu_usage` * number_cpus = `length(cpu_stats.cpu_usage.percpu_usage)` or `cpu_stats.online_cpus` * CPU usage % = `(cpu_delta / system_cpu_delta) * number_cpus * 100.0` operationId: "ContainerStats" produces: ["application/json"] responses: 200: description: "no error" schema: type: "object" examples: application/json: read: "2015-01-08T22:57:31.547920715Z" pids_stats: current: 3 networks: eth0: rx_bytes: 5338 rx_dropped: 0 rx_errors: 0 rx_packets: 36 tx_bytes: 648 tx_dropped: 0 tx_errors: 0 tx_packets: 8 eth5: rx_bytes: 4641 rx_dropped: 0 rx_errors: 0 rx_packets: 26 tx_bytes: 690 tx_dropped: 0 tx_errors: 0 tx_packets: 9 memory_stats: stats: total_pgmajfault: 0 cache: 0 mapped_file: 0 total_inactive_file: 0 pgpgout: 414 rss: 6537216 total_mapped_file: 0 writeback: 0 unevictable: 0 pgpgin: 477 total_unevictable: 0 pgmajfault: 0 total_rss: 6537216 total_rss_huge: 6291456 total_writeback: 0 total_inactive_anon: 0 rss_huge: 6291456 hierarchical_memory_limit: 67108864 total_pgfault: 964 total_active_file: 0 active_anon: 6537216 total_active_anon: 6537216 total_pgpgout: 414 total_cache: 0 inactive_anon: 0 active_file: 0 pgfault: 964 inactive_file: 0 total_pgpgin: 477 max_usage: 6651904 usage: 6537216 failcnt: 0 limit: 67108864 blkio_stats: {} cpu_stats: cpu_usage: percpu_usage: - 8646879 - 24472255 - 36438778 - 30657443 usage_in_usermode: 50000000 total_usage: 100215355 usage_in_kernelmode: 30000000 system_cpu_usage: 739306590000000 online_cpus: 4 throttling_data: periods: 0 throttled_periods: 0 throttled_time: 0 precpu_stats: cpu_usage: percpu_usage: - 8646879 - 24350896 - 36438778 - 30657443 usage_in_usermode: 50000000 total_usage: 100093996 usage_in_kernelmode: 30000000 system_cpu_usage: 9492140000000 online_cpus: 4 throttling_data: periods: 0 throttled_periods: 0 throttled_time: 0 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "stream" in: "query" description: | Stream the output. If false, the stats will be output once and then it will disconnect. type: "boolean" default: true - name: "one-shot" in: "query" description: | Only get a single stat instead of waiting for 2 cycles. Must be used with `stream=false`. type: "boolean" default: false tags: ["Container"] /containers/{id}/resize: post: summary: "Resize a container TTY" description: "Resize the TTY for a container."
operationId: "ContainerResize" consumes: - "application/octet-stream" produces: - "text/plain" responses: 200: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "cannot resize container" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "h" in: "query" description: "Height of the TTY session in characters" type: "integer" - name: "w" in: "query" description: "Width of the TTY session in characters" type: "integer" tags: ["Container"] /containers/{id}/start: post: summary: "Start a container" operationId: "ContainerStart" responses: 204: description: "no error" 304: description: "container already started" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "detachKeys" in: "query" description: | Override the key sequence for detaching a container. Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,` or `_`. type: "string" tags: ["Container"] /containers/{id}/stop: post: summary: "Stop a container" operationId: "ContainerStop" responses: 204: description: "no error" 304: description: "container already stopped" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "t" in: "query" description: "Number of seconds to wait before killing the container" type: "integer" tags: ["Container"] /containers/{id}/restart: post: summary: "Restart a container" operationId: "ContainerRestart" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "t" in: "query" description: "Number of seconds to wait before killing the container" type: "integer" tags: ["Container"] /containers/{id}/kill: post: summary: "Kill a container" description: | Send a POSIX signal to a container, defaulting to killing to the container. 
operationId: "ContainerKill" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "container is not running" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "Container d37cde0fe4ad63c3a7252023b2f9800282894247d145cb5933ddf6e52cc03a28 is not running" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "signal" in: "query" description: "Signal to send to the container as an integer or string (e.g. `SIGINT`)" type: "string" default: "SIGKILL" tags: ["Container"] /containers/{id}/update: post: summary: "Update a container" description: | Change various configuration options of a container without having to recreate it. operationId: "ContainerUpdate" consumes: ["application/json"] produces: ["application/json"] responses: 200: description: "The container has been updated." schema: type: "object" title: "ContainerUpdateResponse" description: "OK response to ContainerUpdate operation" properties: Warnings: type: "array" items: type: "string" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "update" in: "body" required: true schema: allOf: - $ref: "#/definitions/Resources" - type: "object" properties: RestartPolicy: $ref: "#/definitions/RestartPolicy" example: BlkioWeight: 300 CpuShares: 512 CpuPeriod: 100000 CpuQuota: 50000 CpuRealtimePeriod: 1000000 CpuRealtimeRuntime: 10000 CpusetCpus: "0,1" CpusetMems: "0" Memory: 314572800 MemorySwap: 514288000 MemoryReservation: 209715200 KernelMemory: 52428800 RestartPolicy: MaximumRetryCount: 4 Name: "on-failure" tags: ["Container"] /containers/{id}/rename: post: summary: "Rename a container" operationId: "ContainerRename" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "name already in use" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "name" in: "query" required: true description: "New name for the container" type: "string" tags: ["Container"] /containers/{id}/pause: post: summary: "Pause a container" description: | Use the freezer cgroup to suspend all processes in a container. Traditionally, when suspending a process the `SIGSTOP` signal is used, which is observable by the process being suspended. With the freezer cgroup the process is unaware, and unable to capture, that it is being suspended, and subsequently resumed. 
operationId: "ContainerPause" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/unpause: post: summary: "Unpause a container" description: "Resume a container which has been paused." operationId: "ContainerUnpause" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/attach: post: summary: "Attach to a container" description: | Attach to a container to read its output or send it input. You can attach to the same container multiple times and you can reattach to containers that have been detached. Either the `stream` or `logs` parameter must be `true` for this endpoint to do anything. See the [documentation for the `docker attach` command](https://docs.docker.com/engine/reference/commandline/attach/) for more details. ### Hijacking This endpoint hijacks the HTTP connection to transport `stdin`, `stdout`, and `stderr` on the same socket. This is the response from the daemon for an attach request: ``` HTTP/1.1 200 OK Content-Type: application/vnd.docker.raw-stream [STREAM] ``` After the headers and two new lines, the TCP connection can now be used for raw, bidirectional communication between the client and server. To hint potential proxies about connection hijacking, the Docker client can also optionally send connection upgrade headers. For example, the client sends this request to upgrade the connection: ``` POST /containers/16253994b7c4/attach?stream=1&stdout=1 HTTP/1.1 Upgrade: tcp Connection: Upgrade ``` The Docker daemon will respond with a `101 UPGRADED` response, and will similarly follow with the raw stream: ``` HTTP/1.1 101 UPGRADED Content-Type: application/vnd.docker.raw-stream Connection: Upgrade Upgrade: tcp [STREAM] ``` ### Stream format When the TTY setting is disabled in [`POST /containers/create`](#operation/ContainerCreate), the stream over the hijacked connected is multiplexed to separate out `stdout` and `stderr`. The stream consists of a series of frames, each containing a header and a payload. The header contains the information which the stream writes (`stdout` or `stderr`). It also contains the size of the associated frame encoded in the last four bytes (`uint32`). It is encoded on the first eight bytes like this: ```go header := [8]byte{STREAM_TYPE, 0, 0, 0, SIZE1, SIZE2, SIZE3, SIZE4} ``` `STREAM_TYPE` can be: - 0: `stdin` (is written on `stdout`) - 1: `stdout` - 2: `stderr` `SIZE1, SIZE2, SIZE3, SIZE4` are the four bytes of the `uint32` size encoded as big endian. Following the header is the payload, which is the specified number of bytes of `STREAM_TYPE`. The simplest way to implement this protocol is the following: 1. Read 8 bytes. 2. Choose `stdout` or `stderr` depending on the first byte. 3. Extract the frame size from the last four bytes. 4. Read the extracted size and output it on the correct output. 5. Goto 1. 
### Stream format when using a TTY When the TTY setting is enabled in [`POST /containers/create`](#operation/ContainerCreate), the stream is not multiplexed. The data exchanged over the hijacked connection is simply the raw data from the process PTY and client's `stdin`. operationId: "ContainerAttach" produces: - "application/vnd.docker.raw-stream" responses: 101: description: "no error, hints proxy about hijacking" 200: description: "no error, no upgrade header found" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "detachKeys" in: "query" description: | Override the key sequence for detaching a container.Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,` or `_`. type: "string" - name: "logs" in: "query" description: | Replay previous logs from the container. This is useful for attaching to a container that has started and you want to output everything since the container started. If `stream` is also enabled, once all the previous output has been returned, it will seamlessly transition into streaming current output. type: "boolean" default: false - name: "stream" in: "query" description: | Stream attached streams from the time the request was made onwards. type: "boolean" default: false - name: "stdin" in: "query" description: "Attach to `stdin`" type: "boolean" default: false - name: "stdout" in: "query" description: "Attach to `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Attach to `stderr`" type: "boolean" default: false tags: ["Container"] /containers/{id}/attach/ws: get: summary: "Attach to a container via a websocket" operationId: "ContainerAttachWebsocket" responses: 101: description: "no error, hints proxy about hijacking" 200: description: "no error, no upgrade header found" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "detachKeys" in: "query" description: | Override the key sequence for detaching a container.Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,`, or `_`. type: "string" - name: "logs" in: "query" description: "Return logs" type: "boolean" default: false - name: "stream" in: "query" description: "Return stream" type: "boolean" default: false - name: "stdin" in: "query" description: "Attach to `stdin`" type: "boolean" default: false - name: "stdout" in: "query" description: "Attach to `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Attach to `stderr`" type: "boolean" default: false tags: ["Container"] /containers/{id}/wait: post: summary: "Wait for a container" description: "Block until a container stops, then returns the exit code." 
operationId: "ContainerWait" produces: ["application/json"] responses: 200: description: "The container has exit." schema: type: "object" title: "ContainerWaitResponse" description: "OK response to ContainerWait operation" required: [StatusCode] properties: StatusCode: description: "Exit code of the container" type: "integer" x-nullable: false Error: description: "container waiting error, if any" type: "object" properties: Message: description: "Details of an error" type: "string" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "condition" in: "query" description: | Wait until a container state reaches the given condition, either 'not-running' (default), 'next-exit', or 'removed'. type: "string" default: "not-running" tags: ["Container"] /containers/{id}: delete: summary: "Remove a container" operationId: "ContainerDelete" responses: 204: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "conflict" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: | You cannot remove a running container: c2ada9df5af8. Stop the container before attempting removal or force remove 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "v" in: "query" description: "Remove anonymous volumes associated with the container." type: "boolean" default: false - name: "force" in: "query" description: "If the container is running, kill it before removing it." type: "boolean" default: false - name: "link" in: "query" description: "Remove the specified link associated with the container." type: "boolean" default: false tags: ["Container"] /containers/{id}/archive: head: summary: "Get information about files in a container" description: | A response header `X-Docker-Container-Path-Stat` is returned, containing a base64 - encoded JSON object with some filesystem header information about the path. operationId: "ContainerArchiveInfo" responses: 200: description: "no error" headers: X-Docker-Container-Path-Stat: type: "string" description: | A base64 - encoded JSON object with some filesystem header information about the path 400: description: "Bad parameter" schema: allOf: - $ref: "#/definitions/ErrorResponse" - type: "object" properties: message: description: | The error message. Either "must specify path parameter" (path cannot be empty) or "not a directory" (path was asserted to be a directory but exists as a file). type: "string" x-nullable: false 404: description: "Container or path does not exist" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "path" in: "query" required: true description: "Resource in the container’s filesystem to archive." 
type: "string" tags: ["Container"] get: summary: "Get an archive of a filesystem resource in a container" description: "Get a tar archive of a resource in the filesystem of container id." operationId: "ContainerArchive" produces: ["application/x-tar"] responses: 200: description: "no error" 400: description: "Bad parameter" schema: allOf: - $ref: "#/definitions/ErrorResponse" - type: "object" properties: message: description: | The error message. Either "must specify path parameter" (path cannot be empty) or "not a directory" (path was asserted to be a directory but exists as a file). type: "string" x-nullable: false 404: description: "Container or path does not exist" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "path" in: "query" required: true description: "Resource in the container’s filesystem to archive." type: "string" tags: ["Container"] put: summary: "Extract an archive of files or folders to a directory in a container" description: "Upload a tar archive to be extracted to a path in the filesystem of container id." operationId: "PutContainerArchive" consumes: ["application/x-tar", "application/octet-stream"] responses: 200: description: "The content was extracted successfully" 400: description: "Bad parameter" schema: $ref: "#/definitions/ErrorResponse" 403: description: "Permission denied, the volume or container rootfs is marked as read-only." schema: $ref: "#/definitions/ErrorResponse" 404: description: "No such container or path does not exist inside the container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "path" in: "query" required: true description: "Path to a directory in the container to extract the archive’s contents into. " type: "string" - name: "noOverwriteDirNonDir" in: "query" description: | If `1`, `true`, or `True` then it will be an error if unpacking the given content would cause an existing directory to be replaced with a non-directory and vice versa. type: "string" - name: "copyUIDGID" in: "query" description: | If `1`, `true`, then it will copy UID/GID maps to the dest file or dir type: "string" - name: "inputStream" in: "body" required: true description: | The input stream must be a tar archive compressed with one of the following algorithms: `identity` (no compression), `gzip`, `bzip2`, or `xz`. schema: type: "string" format: "binary" tags: ["Container"] /containers/prune: post: summary: "Delete stopped containers" produces: - "application/json" operationId: "ContainerPrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `until=<timestamp>` Prune containers created before this timestamp. The `<timestamp>` can be Unix timestamps, date formatted timestamps, or Go duration strings (e.g. `10m`, `1h30m`) computed relative to the daemon machine’s time. - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune containers with (or without, in case `label!=...` is used) the specified labels. 
type: "string" responses: 200: description: "No error" schema: type: "object" title: "ContainerPruneResponse" properties: ContainersDeleted: description: "Container IDs that were deleted" type: "array" items: type: "string" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Container"] /images/json: get: summary: "List Images" description: "Returns a list of images on the server. Note that it uses a different, smaller representation of an image than inspecting a single image." operationId: "ImageList" produces: - "application/json" responses: 200: description: "Summary image data for the images matching the query" schema: type: "array" items: $ref: "#/definitions/ImageSummary" examples: application/json: - Id: "sha256:e216a057b1cb1efc11f8a268f37ef62083e70b1b38323ba252e25ac88904a7e8" ParentId: "" RepoTags: - "ubuntu:12.04" - "ubuntu:precise" RepoDigests: - "ubuntu@sha256:992069aee4016783df6345315302fa59681aae51a8eeb2f889dea59290f21787" Created: 1474925151 Size: 103579269 VirtualSize: 103579269 SharedSize: 0 Labels: {} Containers: 2 - Id: "sha256:3e314f95dcace0f5e4fd37b10862fe8398e3c60ed36600bc0ca5fda78b087175" ParentId: "" RepoTags: - "ubuntu:12.10" - "ubuntu:quantal" RepoDigests: - "ubuntu@sha256:002fba3e3255af10be97ea26e476692a7ebed0bb074a9ab960b2e7a1526b15d7" - "ubuntu@sha256:68ea0200f0b90df725d99d823905b04cf844f6039ef60c60bf3e019915017bd3" Created: 1403128455 Size: 172064416 VirtualSize: 172064416 SharedSize: 0 Labels: {} Containers: 5 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "all" in: "query" description: "Show all images. Only images from a final layer (no children) are shown by default." type: "boolean" default: false - name: "filters" in: "query" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the images list. Available filters: - `before`=(`<image-name>[:<tag>]`, `<image id>` or `<image@digest>`) - `dangling=true` - `label=key` or `label="key=value"` of an image label - `reference`=(`<image-name>[:<tag>]`) - `since`=(`<image-name>[:<tag>]`, `<image id>` or `<image@digest>`) type: "string" - name: "shared-size" in: "query" description: "Compute and show shared size as a `SharedSize` field on each image." type: "boolean" default: false - name: "digests" in: "query" description: "Show digest information as a `RepoDigests` field on each image." type: "boolean" default: false tags: ["Image"] /build: post: summary: "Build an image" description: | Build an image from a tar archive with a `Dockerfile` in it. The `Dockerfile` specifies how the image is built from the tar archive. It is typically in the archive's root, but can be at a different path or have a different name by specifying the `dockerfile` parameter. [See the `Dockerfile` reference for more information](https://docs.docker.com/engine/reference/builder/). The Docker daemon performs a preliminary validation of the `Dockerfile` before starting the build, and returns an error if the syntax is incorrect. After that, each instruction is run one-by-one until the ID of the new image is output. The build is canceled if the client drops the connection by quitting or being killed. 
operationId: "ImageBuild" consumes: - "application/octet-stream" produces: - "application/json" parameters: - name: "inputStream" in: "body" description: "A tar archive compressed with one of the following algorithms: identity (no compression), gzip, bzip2, xz." schema: type: "string" format: "binary" - name: "dockerfile" in: "query" description: "Path within the build context to the `Dockerfile`. This is ignored if `remote` is specified and points to an external `Dockerfile`." type: "string" default: "Dockerfile" - name: "t" in: "query" description: "A name and optional tag to apply to the image in the `name:tag` format. If you omit the tag the default `latest` value is assumed. You can provide several `t` parameters." type: "string" - name: "extrahosts" in: "query" description: "Extra hosts to add to /etc/hosts" type: "string" - name: "remote" in: "query" description: "A Git repository URI or HTTP/HTTPS context URI. If the URI points to a single text file, the file’s contents are placed into a file called `Dockerfile` and the image is built from that file. If the URI points to a tarball, the file is downloaded by the daemon and the contents therein used as the context for the build. If the URI points to a tarball and the `dockerfile` parameter is also specified, there must be a file with the corresponding path inside the tarball." type: "string" - name: "q" in: "query" description: "Suppress verbose build output." type: "boolean" default: false - name: "nocache" in: "query" description: "Do not use the cache when building the image." type: "boolean" default: false - name: "cachefrom" in: "query" description: "JSON array of images used for build cache resolution." type: "string" - name: "pull" in: "query" description: "Attempt to pull the image even if an older image exists locally." type: "string" - name: "rm" in: "query" description: "Remove intermediate containers after a successful build." type: "boolean" default: true - name: "forcerm" in: "query" description: "Always remove intermediate containers, even upon failure." type: "boolean" default: false - name: "memory" in: "query" description: "Set memory limit for build." type: "integer" - name: "memswap" in: "query" description: "Total memory (memory + swap). Set as `-1` to disable swap." type: "integer" - name: "cpushares" in: "query" description: "CPU shares (relative weight)." type: "integer" - name: "cpusetcpus" in: "query" description: "CPUs in which to allow execution (e.g., `0-3`, `0,1`)." type: "string" - name: "cpuperiod" in: "query" description: "The length of a CPU period in microseconds." type: "integer" - name: "cpuquota" in: "query" description: "Microseconds of CPU time that the container can get in a CPU period." type: "integer" - name: "buildargs" in: "query" description: > JSON map of string pairs for build-time variables. Users pass these values at build-time. Docker uses the buildargs as the environment context for commands run via the `Dockerfile` RUN instruction, or for variable expansion in other `Dockerfile` instructions. This is not meant for passing secret values. For example, the build arg `FOO=bar` would become `{"FOO":"bar"}` in JSON. This would result in the query parameter `buildargs={"FOO":"bar"}`. Note that `{"FOO":"bar"}` should be URI component encoded. [Read more about the buildargs instruction.](https://docs.docker.com/engine/reference/builder/#arg) type: "string" - name: "shmsize" in: "query" description: "Size of `/dev/shm` in bytes. The size must be greater than 0. 
If omitted the system uses 64MB." type: "integer" - name: "squash" in: "query" description: "Squash the resulting images layers into a single layer. *(Experimental release only.)*" type: "boolean" - name: "labels" in: "query" description: "Arbitrary key/value labels to set on the image, as a JSON map of string pairs." type: "string" - name: "networkmode" in: "query" description: | Sets the networking mode for the run commands during build. Supported standard values are: `bridge`, `host`, `none`, and `container:<name|id>`. Any other value is taken as a custom network's name or ID to which this container should connect to. type: "string" - name: "Content-type" in: "header" type: "string" enum: - "application/x-tar" default: "application/x-tar" - name: "X-Registry-Config" in: "header" description: | This is a base64-encoded JSON object with auth configurations for multiple registries that a build may refer to. The key is a registry URL, and the value is an auth configuration object, [as described in the authentication section](#section/Authentication). For example: ``` { "docker.example.com": { "username": "janedoe", "password": "hunter2" }, "https://index.docker.io/v1/": { "username": "mobydock", "password": "conta1n3rize14" } } ``` Only the registry domain name (and port if not the default 443) are required. However, for legacy reasons, the Docker Hub registry must be specified with both a `https://` prefix and a `/v1/` suffix even though Docker will prefer to use the v2 registry API. type: "string" - name: "platform" in: "query" description: "Platform in the format os[/arch[/variant]]" type: "string" default: "" - name: "target" in: "query" description: "Target build stage" type: "string" default: "" - name: "outputs" in: "query" description: "BuildKit output configuration" type: "string" default: "" responses: 200: description: "no error" 400: description: "Bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Image"] /build/prune: post: summary: "Delete builder cache" produces: - "application/json" operationId: "BuildPrune" parameters: - name: "keep-storage" in: "query" description: "Amount of disk space in bytes to keep for cache" type: "integer" format: "int64" - name: "all" in: "query" type: "boolean" description: "Remove all types of build cache" - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the list of build cache objects. Available filters: - `until=<duration>`: duration relative to daemon's time, during which build cache was not used, in Go's duration format (e.g., '24h') - `id=<id>` - `parent=<id>` - `type=<string>` - `description=<string>` - `inuse` - `shared` - `private` responses: 200: description: "No error" schema: type: "object" title: "BuildPruneResponse" properties: CachesDeleted: type: "array" items: description: "ID of build cache object" type: "string" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Image"] /images/create: post: summary: "Create an image" description: "Create an image by either pulling it from a registry or importing it." 
operationId: "ImageCreate" consumes: - "text/plain" - "application/octet-stream" produces: - "application/json" responses: 200: description: "no error" 404: description: "repository does not exist or no read access" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "fromImage" in: "query" description: "Name of the image to pull. The name may include a tag or digest. This parameter may only be used when pulling an image. The pull is cancelled if the HTTP connection is closed." type: "string" - name: "fromSrc" in: "query" description: "Source to import. The value may be a URL from which the image can be retrieved or `-` to read the image from the request body. This parameter may only be used when importing an image." type: "string" - name: "repo" in: "query" description: "Repository name given to an image when it is imported. The repo may include a tag. This parameter may only be used when importing an image." type: "string" - name: "tag" in: "query" description: "Tag or digest. If empty when pulling an image, this causes all tags for the given image to be pulled." type: "string" - name: "message" in: "query" description: "Set commit message for imported image." type: "string" - name: "inputImage" in: "body" description: "Image content if the value `-` has been specified in fromSrc query parameter" schema: type: "string" required: false - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration. Refer to the [authentication section](#section/Authentication) for details. type: "string" - name: "changes" in: "query" description: | Apply `Dockerfile` instructions to the image that is created, for example: `changes=ENV DEBUG=true`. Note that `ENV DEBUG=true` should be URI component encoded. Supported `Dockerfile` instructions: `CMD`|`ENTRYPOINT`|`ENV`|`EXPOSE`|`ONBUILD`|`USER`|`VOLUME`|`WORKDIR` type: "array" items: type: "string" - name: "platform" in: "query" description: "Platform in the format os[/arch[/variant]]" type: "string" default: "" tags: ["Image"] /images/{name}/json: get: summary: "Inspect an image" description: "Return low-level information about an image." 
operationId: "ImageInspect" produces: - "application/json" responses: 200: description: "No error" schema: $ref: "#/definitions/Image" examples: application/json: Id: "sha256:85f05633ddc1c50679be2b16a0479ab6f7637f8884e0cfe0f4d20e1ebb3d6e7c" Container: "cb91e48a60d01f1e27028b4fc6819f4f290b3cf12496c8176ec714d0d390984a" Comment: "" Os: "linux" Architecture: "amd64" Parent: "sha256:91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c" ContainerConfig: Tty: false Hostname: "e611e15f9c9d" Domainname: "" AttachStdout: false PublishService: "" AttachStdin: false OpenStdin: false StdinOnce: false NetworkDisabled: false OnBuild: [] Image: "91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c" User: "" WorkingDir: "" MacAddress: "" AttachStderr: false Labels: com.example.license: "GPL" com.example.version: "1.0" com.example.vendor: "Acme" Env: - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" Cmd: - "/bin/sh" - "-c" - "#(nop) LABEL com.example.vendor=Acme com.example.license=GPL com.example.version=1.0" DockerVersion: "1.9.0-dev" VirtualSize: 188359297 Size: 0 Author: "" Created: "2015-09-10T08:30:53.26995814Z" GraphDriver: Name: "aufs" Data: {} RepoDigests: - "localhost:5000/test/busybox/example@sha256:cbbf2f9a99b47fc460d422812b6a5adff7dfee951d8fa2e4a98caa0382cfbdbf" RepoTags: - "example:1.0" - "example:latest" - "example:stable" Config: Image: "91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c" NetworkDisabled: false OnBuild: [] StdinOnce: false PublishService: "" AttachStdin: false OpenStdin: false Domainname: "" AttachStdout: false Tty: false Hostname: "e611e15f9c9d" Cmd: - "/bin/bash" Env: - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" Labels: com.example.vendor: "Acme" com.example.version: "1.0" com.example.license: "GPL" MacAddress: "" AttachStderr: false WorkingDir: "" User: "" RootFS: Type: "layers" Layers: - "sha256:1834950e52ce4d5a88a1bbd131c537f4d0e56d10ff0dd69e66be3b7dfa9df7e6" - "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such image: someimage (tag: latest)" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or id" type: "string" required: true tags: ["Image"] /images/{name}/history: get: summary: "Get the history of an image" description: "Return parent layers of an image." 
operationId: "ImageHistory" produces: ["application/json"] responses: 200: description: "List of image layers" schema: type: "array" items: type: "object" x-go-name: HistoryResponseItem title: "HistoryResponseItem" description: "individual image layer information in response to ImageHistory operation" required: [Id, Created, CreatedBy, Tags, Size, Comment] properties: Id: type: "string" x-nullable: false Created: type: "integer" format: "int64" x-nullable: false CreatedBy: type: "string" x-nullable: false Tags: type: "array" items: type: "string" Size: type: "integer" format: "int64" x-nullable: false Comment: type: "string" x-nullable: false examples: application/json: - Id: "3db9c44f45209632d6050b35958829c3a2aa256d81b9a7be45b362ff85c54710" Created: 1398108230 CreatedBy: "/bin/sh -c #(nop) ADD file:eb15dbd63394e063b805a3c32ca7bf0266ef64676d5a6fab4801f2e81e2a5148 in /" Tags: - "ubuntu:lucid" - "ubuntu:10.04" Size: 182964289 Comment: "" - Id: "6cfa4d1f33fb861d4d114f43b25abd0ac737509268065cdfd69d544a59c85ab8" Created: 1398108222 CreatedBy: "/bin/sh -c #(nop) MAINTAINER Tianon Gravi <[email protected]> - mkimage-debootstrap.sh -i iproute,iputils-ping,ubuntu-minimal -t lucid.tar.xz lucid http://archive.ubuntu.com/ubuntu/" Tags: [] Size: 0 Comment: "" - Id: "511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158" Created: 1371157430 CreatedBy: "" Tags: - "scratch12:latest" - "scratch:latest" Size: 0 Comment: "Imported from -" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID" type: "string" required: true tags: ["Image"] /images/{name}/push: post: summary: "Push an image" description: | Push an image to a registry. If you wish to push an image on to a private registry, that image must already have a tag which references the registry. For example, `registry.example.com/myimage:latest`. The push is cancelled if the HTTP connection is closed. operationId: "ImagePush" consumes: - "application/octet-stream" responses: 200: description: "No error" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID." type: "string" required: true - name: "tag" in: "query" description: "The tag to associate with the image on the registry." type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration. Refer to the [authentication section](#section/Authentication) for details. type: "string" required: true tags: ["Image"] /images/{name}/tag: post: summary: "Tag an image" description: "Tag an image so that it becomes part of a repository." operationId: "ImageTag" responses: 201: description: "No error" 400: description: "Bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Conflict" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID to tag." type: "string" required: true - name: "repo" in: "query" description: "The repository to tag in. For example, `someuser/someimage`." type: "string" - name: "tag" in: "query" description: "The name of the new tag." 
type: "string" tags: ["Image"] /images/{name}: delete: summary: "Remove an image" description: | Remove an image, along with any untagged parent images that were referenced by that image. Images can't be removed if they have descendant images, are being used by a running container or are being used by a build. operationId: "ImageDelete" produces: ["application/json"] responses: 200: description: "The image was deleted successfully" schema: type: "array" items: $ref: "#/definitions/ImageDeleteResponseItem" examples: application/json: - Untagged: "3e2f21a89f" - Deleted: "3e2f21a89f" - Deleted: "53b4f83ac9" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Conflict" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID" type: "string" required: true - name: "force" in: "query" description: "Remove the image even if it is being used by stopped containers or has other tags" type: "boolean" default: false - name: "noprune" in: "query" description: "Do not delete untagged parent images" type: "boolean" default: false tags: ["Image"] /images/search: get: summary: "Search images" description: "Search for an image on Docker Hub." operationId: "ImageSearch" produces: - "application/json" responses: 200: description: "No error" schema: type: "array" items: type: "object" title: "ImageSearchResponseItem" properties: description: type: "string" is_official: type: "boolean" is_automated: type: "boolean" name: type: "string" star_count: type: "integer" examples: application/json: - description: "" is_official: false is_automated: false name: "wma55/u1210sshd" star_count: 0 - description: "" is_official: false is_automated: false name: "jdswinbank/sshd" star_count: 0 - description: "" is_official: false is_automated: false name: "vgauthier/sshd" star_count: 0 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "term" in: "query" description: "Term to search" type: "string" required: true - name: "limit" in: "query" description: "Maximum number of results to return" type: "integer" - name: "filters" in: "query" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the images list. Available filters: - `is-automated=(true|false)` - `is-official=(true|false)` - `stars=<number>` Matches images that has at least 'number' stars. type: "string" tags: ["Image"] /images/prune: post: summary: "Delete unused images" produces: - "application/json" operationId: "ImagePrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `dangling=<boolean>` When set to `true` (or `1`), prune only unused *and* untagged images. When set to `false` (or `0`), all unused images are pruned. - `until=<string>` Prune images created before this timestamp. The `<timestamp>` can be Unix timestamps, date formatted timestamps, or Go duration strings (e.g. `10m`, `1h30m`) computed relative to the daemon machine’s time. - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune images with (or without, in case `label!=...` is used) the specified labels. 
type: "string" responses: 200: description: "No error" schema: type: "object" title: "ImagePruneResponse" properties: ImagesDeleted: description: "Images that were deleted" type: "array" items: $ref: "#/definitions/ImageDeleteResponseItem" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Image"] /auth: post: summary: "Check auth configuration" description: | Validate credentials for a registry and, if available, get an identity token for accessing the registry without password. operationId: "SystemAuth" consumes: ["application/json"] produces: ["application/json"] responses: 200: description: "An identity token was generated successfully." schema: type: "object" title: "SystemAuthResponse" required: [Status] properties: Status: description: "The status of the authentication" type: "string" x-nullable: false IdentityToken: description: "An opaque token used to authenticate a user after a successful login" type: "string" x-nullable: false examples: application/json: Status: "Login Succeeded" IdentityToken: "9cbaf023786cd7..." 204: description: "No error" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "authConfig" in: "body" description: "Authentication to check" schema: $ref: "#/definitions/AuthConfig" tags: ["System"] /info: get: summary: "Get system information" operationId: "SystemInfo" produces: - "application/json" responses: 200: description: "No error" schema: $ref: "#/definitions/SystemInfo" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /version: get: summary: "Get version" description: "Returns the version of Docker that is running and various information about the system that Docker is running on." operationId: "SystemVersion" produces: ["application/json"] responses: 200: description: "no error" schema: $ref: "#/definitions/SystemVersion" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /_ping: get: summary: "Ping" description: "This is a dummy endpoint you can use to test if the server is accessible." operationId: "SystemPing" produces: ["text/plain"] responses: 200: description: "no error" schema: type: "string" example: "OK" headers: API-Version: type: "string" description: "Max API Version the server supports" Builder-Version: type: "string" description: "Default version of docker image builder" Docker-Experimental: type: "boolean" description: "If the server is running with experimental mode enabled" Cache-Control: type: "string" default: "no-cache, no-store, must-revalidate" Pragma: type: "string" default: "no-cache" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" headers: Cache-Control: type: "string" default: "no-cache, no-store, must-revalidate" Pragma: type: "string" default: "no-cache" tags: ["System"] head: summary: "Ping" description: "This is a dummy endpoint you can use to test if the server is accessible." 
operationId: "SystemPingHead" produces: ["text/plain"] responses: 200: description: "no error" schema: type: "string" example: "(empty)" headers: API-Version: type: "string" description: "Max API Version the server supports" Builder-Version: type: "string" description: "Default version of docker image builder" Docker-Experimental: type: "boolean" description: "If the server is running with experimental mode enabled" Cache-Control: type: "string" default: "no-cache, no-store, must-revalidate" Pragma: type: "string" default: "no-cache" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /commit: post: summary: "Create a new image from a container" operationId: "ImageCommit" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "containerConfig" in: "body" description: "The container configuration" schema: $ref: "#/definitions/ContainerConfig" - name: "container" in: "query" description: "The ID or name of the container to commit" type: "string" - name: "repo" in: "query" description: "Repository name for the created image" type: "string" - name: "tag" in: "query" description: "Tag name for the create image" type: "string" - name: "comment" in: "query" description: "Commit message" type: "string" - name: "author" in: "query" description: "Author of the image (e.g., `John Hannibal Smith <[email protected]>`)" type: "string" - name: "pause" in: "query" description: "Whether to pause the container before committing" type: "boolean" default: true - name: "changes" in: "query" description: "`Dockerfile` instructions to apply while committing" type: "string" tags: ["Image"] /events: get: summary: "Monitor events" description: | Stream real-time events from the server. Various objects within Docker report events when something happens to them. 
Containers report these events: `attach`, `commit`, `copy`, `create`, `destroy`, `detach`, `die`, `exec_create`, `exec_detach`, `exec_start`, `exec_die`, `export`, `health_status`, `kill`, `oom`, `pause`, `rename`, `resize`, `restart`, `start`, `stop`, `top`, `unpause`, `update`, and `prune` Images report these events: `delete`, `import`, `load`, `pull`, `push`, `save`, `tag`, `untag`, and `prune` Volumes report these events: `create`, `mount`, `unmount`, `destroy`, and `prune` Networks report these events: `create`, `connect`, `disconnect`, `destroy`, `update`, `remove`, and `prune` The Docker daemon reports these events: `reload` Services report these events: `create`, `update`, and `remove` Nodes report these events: `create`, `update`, and `remove` Secrets report these events: `create`, `update`, and `remove` Configs report these events: `create`, `update`, and `remove` The Builder reports `prune` events operationId: "SystemEvents" produces: - "application/json" responses: 200: description: "no error" schema: type: "object" title: "SystemEventsResponse" properties: Type: description: "The type of object emitting the event" type: "string" Action: description: "The type of event" type: "string" Actor: type: "object" properties: ID: description: "The ID of the object emitting the event" type: "string" Attributes: description: "Various key/value attributes of the object, depending on its type" type: "object" additionalProperties: type: "string" time: description: "Timestamp of event" type: "integer" timeNano: description: "Timestamp of event, with nanosecond accuracy" type: "integer" format: "int64" examples: application/json: Type: "container" Action: "create" Actor: ID: "ede54ee1afda366ab42f824e8a5ffd195155d853ceaec74a927f249ea270c743" Attributes: com.example.some-label: "some-label-value" image: "alpine" name: "my-container" time: 1461943101 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "since" in: "query" description: "Show events created since this timestamp then stream new events." type: "string" - name: "until" in: "query" description: "Show events created until this timestamp then stop streaming." type: "string" - name: "filters" in: "query" description: | A JSON encoded value of filters (a `map[string][]string`) to process on the event list. 
Available filters: - `config=<string>` config name or ID - `container=<string>` container name or ID - `daemon=<string>` daemon name or ID - `event=<string>` event type - `image=<string>` image name or ID - `label=<string>` image or container label - `network=<string>` network name or ID - `node=<string>` node ID - `plugin`=<string> plugin name or ID - `scope`=<string> local or swarm - `secret=<string>` secret name or ID - `service=<string>` service name or ID - `type=<string>` object to filter by, one of `container`, `image`, `volume`, `network`, `daemon`, `plugin`, `node`, `service`, `secret` or `config` - `volume=<string>` volume name type: "string" tags: ["System"] /system/df: get: summary: "Get data usage information" operationId: "SystemDataUsage" responses: 200: description: "no error" schema: type: "object" title: "SystemDataUsageResponse" properties: LayersSize: type: "integer" format: "int64" Images: type: "array" items: $ref: "#/definitions/ImageSummary" Containers: type: "array" items: $ref: "#/definitions/ContainerSummary" Volumes: type: "array" items: $ref: "#/definitions/Volume" BuildCache: type: "array" items: $ref: "#/definitions/BuildCache" example: LayersSize: 1092588 Images: - Id: "sha256:2b8fd9751c4c0f5dd266fcae00707e67a2545ef34f9a29354585f93dac906749" ParentId: "" RepoTags: - "busybox:latest" RepoDigests: - "busybox@sha256:a59906e33509d14c036c8678d687bd4eec81ed7c4b8ce907b888c607f6a1e0e6" Created: 1466724217 Size: 1092588 SharedSize: 0 VirtualSize: 1092588 Labels: {} Containers: 1 Containers: - Id: "e575172ed11dc01bfce087fb27bee502db149e1a0fad7c296ad300bbff178148" Names: - "/top" Image: "busybox" ImageID: "sha256:2b8fd9751c4c0f5dd266fcae00707e67a2545ef34f9a29354585f93dac906749" Command: "top" Created: 1472592424 Ports: [] SizeRootFs: 1092588 Labels: {} State: "exited" Status: "Exited (0) 56 minutes ago" HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: IPAMConfig: null Links: null Aliases: null NetworkID: "d687bc59335f0e5c9ee8193e5612e8aee000c8c62ea170cfb99c098f95899d92" EndpointID: "8ed5115aeaad9abb174f68dcf135b49f11daf597678315231a32ca28441dec6a" Gateway: "172.18.0.1" IPAddress: "172.18.0.2" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:12:00:02" Mounts: [] Volumes: - Name: "my-volume" Driver: "local" Mountpoint: "/var/lib/docker/volumes/my-volume/_data" Labels: null Scope: "local" Options: null UsageData: Size: 10920104 RefCount: 2 BuildCache: - ID: "hw53o5aio51xtltp5xjp8v7fx" Parent: "" Type: "regular" Description: "pulled from docker.io/library/debian@sha256:234cb88d3020898631af0ccbbcca9a66ae7306ecd30c9720690858c1b007d2a0" InUse: false Shared: true Size: 0 CreatedAt: "2021-06-28T13:31:01.474619385Z" LastUsedAt: "2021-07-07T22:02:32.738075951Z" UsageCount: 26 - ID: "ndlpt0hhvkqcdfkputsk4cq9c" Parent: "hw53o5aio51xtltp5xjp8v7fx" Type: "regular" Description: "mount / from exec /bin/sh -c echo 'Binary::apt::APT::Keep-Downloaded-Packages \"true\";' > /etc/apt/apt.conf.d/keep-cache" InUse: false Shared: true Size: 51 CreatedAt: "2021-06-28T13:31:03.002625487Z" LastUsedAt: "2021-07-07T22:02:32.773909517Z" UsageCount: 26 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "type" in: "query" description: | Object types, for which to compute and return data. 
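
            For example, `type=container&type=volume` (the parameter may be repeated)
            limits the response to container and volume usage data.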
type: "array" collectionFormat: multi items: type: "string" enum: ["container", "image", "volume", "build-cache"] tags: ["System"] /images/{name}/get: get: summary: "Export an image" description: | Get a tarball containing all images and metadata for a repository. If `name` is a specific name and tag (e.g. `ubuntu:latest`), then only that image (and its parents) are returned. If `name` is an image ID, similarly only that image (and its parents) are returned, but with the exclusion of the `repositories` file in the tarball, as there were no image names referenced. ### Image tarball format An image tarball contains one directory per image layer (named using its long ID), each containing these files: - `VERSION`: currently `1.0` - the file format version - `json`: detailed layer information, similar to `docker inspect layer_id` - `layer.tar`: A tarfile containing the filesystem changes in this layer The `layer.tar` file contains `aufs` style `.wh..wh.aufs` files and directories for storing attribute changes and deletions. If the tarball defines a repository, the tarball should also include a `repositories` file at the root that contains a list of repository and tag names mapped to layer IDs. ```json { "hello-world": { "latest": "565a9d68a73f6706862bfe8409a7f659776d4d60a8d096eb4a3cbce6999cc2a1" } } ``` operationId: "ImageGet" produces: - "application/x-tar" responses: 200: description: "no error" schema: type: "string" format: "binary" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID" type: "string" required: true tags: ["Image"] /images/get: get: summary: "Export several images" description: | Get a tarball containing all images and metadata for several image repositories. For each value of the `names` parameter: if it is a specific name and tag (e.g. `ubuntu:latest`), then only that image (and its parents) are returned; if it is an image ID, similarly only that image (and its parents) are returned and there would be no names referenced in the 'repositories' file for this image ID. For details on the format, see the [export image endpoint](#operation/ImageGet). operationId: "ImageGetAll" produces: - "application/x-tar" responses: 200: description: "no error" schema: type: "string" format: "binary" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "names" in: "query" description: "Image names to filter by" type: "array" items: type: "string" tags: ["Image"] /images/load: post: summary: "Import images" description: | Load a set of images and tags into a repository. For details on the format, see the [export image endpoint](#operation/ImageGet). operationId: "ImageLoad" consumes: - "application/x-tar" produces: - "application/json" responses: 200: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "imagesTarball" in: "body" description: "Tar archive containing images" schema: type: "string" format: "binary" - name: "quiet" in: "query" description: "Suppress progress details during load." type: "boolean" default: false tags: ["Image"] /containers/{id}/exec: post: summary: "Create an exec instance" description: "Run a command inside a running container." 
operationId: "ContainerExec" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "container is paused" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "execConfig" in: "body" description: "Exec configuration" schema: type: "object" title: "ExecConfig" properties: AttachStdin: type: "boolean" description: "Attach to `stdin` of the exec command." AttachStdout: type: "boolean" description: "Attach to `stdout` of the exec command." AttachStderr: type: "boolean" description: "Attach to `stderr` of the exec command." DetachKeys: type: "string" description: | Override the key sequence for detaching a container. Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,` or `_`. Tty: type: "boolean" description: "Allocate a pseudo-TTY." Env: description: | A list of environment variables in the form `["VAR=value", ...]`. type: "array" items: type: "string" Cmd: type: "array" description: "Command to run, as a string or array of strings." items: type: "string" Privileged: type: "boolean" description: "Runs the exec process with extended privileges." default: false User: type: "string" description: | The user, and optionally, group to run the exec process inside the container. Format is one of: `user`, `user:group`, `uid`, or `uid:gid`. WorkingDir: type: "string" description: | The working directory for the exec process inside the container. example: AttachStdin: false AttachStdout: true AttachStderr: true DetachKeys: "ctrl-p,ctrl-q" Tty: false Cmd: - "date" Env: - "FOO=bar" - "BAZ=quux" required: true - name: "id" in: "path" description: "ID or name of container" type: "string" required: true tags: ["Exec"] /exec/{id}/start: post: summary: "Start an exec instance" description: | Starts a previously set up exec instance. If detach is true, this endpoint returns immediately after starting the command. Otherwise, it sets up an interactive session with the command. operationId: "ExecStart" consumes: - "application/json" produces: - "application/vnd.docker.raw-stream" responses: 200: description: "No error" 404: description: "No such exec instance" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Container is stopped or paused" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "execStartConfig" in: "body" schema: type: "object" title: "ExecStartConfig" properties: Detach: type: "boolean" description: "Detach from the command." Tty: type: "boolean" description: "Allocate a pseudo-TTY." example: Detach: false Tty: false - name: "id" in: "path" description: "Exec instance ID" required: true type: "string" tags: ["Exec"] /exec/{id}/resize: post: summary: "Resize an exec instance" description: | Resize the TTY session used by an exec instance. This endpoint only works if `tty` was specified as part of creating and starting the exec instance. 
operationId: "ExecResize" responses: 201: description: "No error" 404: description: "No such exec instance" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Exec instance ID" required: true type: "string" - name: "h" in: "query" description: "Height of the TTY session in characters" type: "integer" - name: "w" in: "query" description: "Width of the TTY session in characters" type: "integer" tags: ["Exec"] /exec/{id}/json: get: summary: "Inspect an exec instance" description: "Return low-level information about an exec instance." operationId: "ExecInspect" produces: - "application/json" responses: 200: description: "No error" schema: type: "object" title: "ExecInspectResponse" properties: CanRemove: type: "boolean" DetachKeys: type: "string" ID: type: "string" Running: type: "boolean" ExitCode: type: "integer" ProcessConfig: $ref: "#/definitions/ProcessConfig" OpenStdin: type: "boolean" OpenStderr: type: "boolean" OpenStdout: type: "boolean" ContainerID: type: "string" Pid: type: "integer" description: "The system process ID for the exec process." examples: application/json: CanRemove: false ContainerID: "b53ee82b53a40c7dca428523e34f741f3abc51d9f297a14ff874bf761b995126" DetachKeys: "" ExitCode: 2 ID: "f33bbfb39f5b142420f4759b2348913bd4a8d1a6d7fd56499cb41a1bb91d7b3b" OpenStderr: true OpenStdin: true OpenStdout: true ProcessConfig: arguments: - "-c" - "exit 2" entrypoint: "sh" privileged: false tty: true user: "1000" Running: false Pid: 42000 404: description: "No such exec instance" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Exec instance ID" required: true type: "string" tags: ["Exec"] /volumes: get: summary: "List volumes" operationId: "VolumeList" produces: ["application/json"] responses: 200: description: "Summary volume data that matches the query" schema: type: "object" title: "VolumeListResponse" description: "Volume list response" required: [Volumes, Warnings] properties: Volumes: type: "array" x-nullable: false description: "List of volumes" items: $ref: "#/definitions/Volume" Warnings: type: "array" x-nullable: false description: | Warnings that occurred when fetching the list of volumes. items: type: "string" examples: application/json: Volumes: - CreatedAt: "2017-07-19T12:00:26Z" Name: "tardis" Driver: "local" Mountpoint: "/var/lib/docker/volumes/tardis" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Scope: "local" Options: device: "tmpfs" o: "size=100m,uid=1000" type: "tmpfs" Warnings: [] 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" description: | JSON encoded value of the filters (a `map[string][]string`) to process on the volumes list. Available filters: - `dangling=<boolean>` When set to `true` (or `1`), returns all volumes that are not in use by a container. When set to `false` (or `0`), only volumes that are in use by one or more containers are returned. - `driver=<volume-driver-name>` Matches volumes based on their driver. - `label=<key>` or `label=<key>:<value>` Matches volumes based on the presence of a `label` alone or a `label` and a value. - `name=<volume-name>` Matches all or part of a volume name. 
type: "string" format: "json" tags: ["Volume"] /volumes/create: post: summary: "Create a volume" operationId: "VolumeCreate" consumes: ["application/json"] produces: ["application/json"] responses: 201: description: "The volume was created successfully" schema: $ref: "#/definitions/Volume" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "volumeConfig" in: "body" required: true description: "Volume configuration" schema: type: "object" description: "Volume configuration" title: "VolumeConfig" properties: Name: description: | The new volume's name. If not specified, Docker generates a name. type: "string" x-nullable: false Driver: description: "Name of the volume driver to use." type: "string" default: "local" x-nullable: false DriverOpts: description: | A mapping of driver options and values. These options are passed directly to the driver and are driver specific. type: "object" additionalProperties: type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" example: Name: "tardis" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Driver: "custom" tags: ["Volume"] /volumes/{name}: get: summary: "Inspect a volume" operationId: "VolumeInspect" produces: ["application/json"] responses: 200: description: "No error" schema: $ref: "#/definitions/Volume" 404: description: "No such volume" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" required: true description: "Volume name or ID" type: "string" tags: ["Volume"] delete: summary: "Remove a volume" description: "Instruct the driver to remove the volume." operationId: "VolumeDelete" responses: 204: description: "The volume was removed" 404: description: "No such volume or volume driver" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Volume is in use and cannot be removed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" required: true description: "Volume name or ID" type: "string" - name: "force" in: "query" description: "Force the removal of the volume" type: "boolean" default: false tags: ["Volume"] /volumes/prune: post: summary: "Delete unused volumes" produces: - "application/json" operationId: "VolumePrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune volumes with (or without, in case `label!=...` is used) the specified labels. type: "string" responses: 200: description: "No error" schema: type: "object" title: "VolumePruneResponse" properties: VolumesDeleted: description: "Volumes that were deleted" type: "array" items: type: "string" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Volume"] /networks: get: summary: "List networks" description: | Returns a list of networks. For details on the format, see the [network inspect endpoint](#operation/NetworkInspect). Note that it uses a different, smaller representation of a network than inspecting a single network. 
For example, the list of containers attached to the network is not propagated in API versions 1.28 and up. operationId: "NetworkList" produces: - "application/json" responses: 200: description: "No error" schema: type: "array" items: $ref: "#/definitions/Network" examples: application/json: - Name: "bridge" Id: "f2de39df4171b0dc801e8002d1d999b77256983dfc63041c0f34030aa3977566" Created: "2016-10-19T06:21:00.416543526Z" Scope: "local" Driver: "bridge" EnableIPv6: false Internal: false Attachable: false Ingress: false IPAM: Driver: "default" Config: - Subnet: "172.17.0.0/16" Options: com.docker.network.bridge.default_bridge: "true" com.docker.network.bridge.enable_icc: "true" com.docker.network.bridge.enable_ip_masquerade: "true" com.docker.network.bridge.host_binding_ipv4: "0.0.0.0" com.docker.network.bridge.name: "docker0" com.docker.network.driver.mtu: "1500" - Name: "none" Id: "e086a3893b05ab69242d3c44e49483a3bbbd3a26b46baa8f61ab797c1088d794" Created: "0001-01-01T00:00:00Z" Scope: "local" Driver: "null" EnableIPv6: false Internal: false Attachable: false Ingress: false IPAM: Driver: "default" Config: [] Containers: {} Options: {} - Name: "host" Id: "13e871235c677f196c4e1ecebb9dc733b9b2d2ab589e30c539efeda84a24215e" Created: "0001-01-01T00:00:00Z" Scope: "local" Driver: "host" EnableIPv6: false Internal: false Attachable: false Ingress: false IPAM: Driver: "default" Config: [] Containers: {} Options: {} 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" description: | JSON encoded value of the filters (a `map[string][]string`) to process on the networks list. Available filters: - `dangling=<boolean>` When set to `true` (or `1`), returns all networks that are not in use by a container. When set to `false` (or `0`), only networks that are in use by one or more containers are returned. - `driver=<driver-name>` Matches a network's driver. - `id=<network-id>` Matches all or part of a network ID. - `label=<key>` or `label=<key>=<value>` of a network label. - `name=<network-name>` Matches all or part of a network name. - `scope=["swarm"|"global"|"local"]` Filters networks by scope (`swarm`, `global`, or `local`). - `type=["custom"|"builtin"]` Filters networks by type. The `custom` keyword returns all user-defined networks. 
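
            For example, the value `{"type": ["custom"]}` (URL-encoded when sent)
            returns only user-defined networks.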
type: "string" tags: ["Network"] /networks/{id}: get: summary: "Inspect a network" operationId: "NetworkInspect" produces: - "application/json" responses: 200: description: "No error" schema: $ref: "#/definitions/Network" 404: description: "Network not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" - name: "verbose" in: "query" description: "Detailed inspect output for troubleshooting" type: "boolean" default: false - name: "scope" in: "query" description: "Filter the network by scope (swarm, global, or local)" type: "string" tags: ["Network"] delete: summary: "Remove a network" operationId: "NetworkDelete" responses: 204: description: "No error" 403: description: "operation not supported for pre-defined networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such network" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" tags: ["Network"] /networks/create: post: summary: "Create a network" operationId: "NetworkCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "No error" schema: type: "object" title: "NetworkCreateResponse" properties: Id: description: "The ID of the created network." type: "string" Warning: type: "string" example: Id: "22be93d5babb089c5aab8dbc369042fad48ff791584ca2da2100db837a1c7c30" Warning: "" 403: description: "operation not supported for pre-defined networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "plugin not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "networkConfig" in: "body" description: "Network configuration" required: true schema: type: "object" title: "NetworkCreateRequest" required: ["Name"] properties: Name: description: "The network's name." type: "string" CheckDuplicate: description: | Check for networks with duplicate names. Since Network is primarily keyed based on a random ID and not on the name, and network name is strictly a user-friendly alias to the network which is uniquely identified using ID, there is no guaranteed way to check for duplicates. CheckDuplicate is there to provide a best effort checking of any networks which has the same name but it is not guaranteed to catch all name collisions. type: "boolean" Driver: description: "Name of the network driver plugin to use." type: "string" default: "bridge" Internal: description: "Restrict external access to the network." type: "boolean" Attachable: description: | Globally scoped network is manually attachable by regular containers from workers in swarm mode. type: "boolean" Ingress: description: | Ingress network is the network which provides the routing-mesh in swarm mode. type: "boolean" IPAM: description: "Optional custom IP scheme for the network." $ref: "#/definitions/IPAM" EnableIPv6: description: "Enable IPv6 on the network." type: "boolean" Options: description: "Network specific options to be used by the drivers." type: "object" additionalProperties: type: "string" Labels: description: "User-defined key/value metadata." 
type: "object" additionalProperties: type: "string" example: Name: "isolated_nw" CheckDuplicate: false Driver: "bridge" EnableIPv6: true IPAM: Driver: "default" Config: - Subnet: "172.20.0.0/16" IPRange: "172.20.10.0/24" Gateway: "172.20.10.11" - Subnet: "2001:db8:abcd::/64" Gateway: "2001:db8:abcd::1011" Options: foo: "bar" Internal: true Attachable: false Ingress: false Options: com.docker.network.bridge.default_bridge: "true" com.docker.network.bridge.enable_icc: "true" com.docker.network.bridge.enable_ip_masquerade: "true" com.docker.network.bridge.host_binding_ipv4: "0.0.0.0" com.docker.network.bridge.name: "docker0" com.docker.network.driver.mtu: "1500" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" tags: ["Network"] /networks/{id}/connect: post: summary: "Connect a container to a network" operationId: "NetworkConnect" consumes: - "application/json" responses: 200: description: "No error" 403: description: "Operation not supported for swarm scoped networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "Network or container not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" - name: "container" in: "body" required: true schema: type: "object" title: "NetworkConnectRequest" properties: Container: type: "string" description: "The ID or name of the container to connect to the network." EndpointConfig: $ref: "#/definitions/EndpointSettings" example: Container: "3613f73ba0e4" EndpointConfig: IPAMConfig: IPv4Address: "172.24.56.89" IPv6Address: "2001:db8::5689" tags: ["Network"] /networks/{id}/disconnect: post: summary: "Disconnect a container from a network" operationId: "NetworkDisconnect" consumes: - "application/json" responses: 200: description: "No error" 403: description: "Operation not supported for swarm scoped networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "Network or container not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" - name: "container" in: "body" required: true schema: type: "object" title: "NetworkDisconnectRequest" properties: Container: type: "string" description: | The ID or name of the container to disconnect from the network. Force: type: "boolean" description: | Force the container to disconnect from the network. tags: ["Network"] /networks/prune: post: summary: "Delete unused networks" produces: - "application/json" operationId: "NetworkPrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `until=<timestamp>` Prune networks created before this timestamp. The `<timestamp>` can be Unix timestamps, date formatted timestamps, or Go duration strings (e.g. `10m`, `1h30m`) computed relative to the daemon machine’s time. - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune networks with (or without, in case `label!=...` is used) the specified labels. 
type: "string" responses: 200: description: "No error" schema: type: "object" title: "NetworkPruneResponse" properties: NetworksDeleted: description: "Networks that were deleted" type: "array" items: type: "string" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Network"] /plugins: get: summary: "List plugins" operationId: "PluginList" description: "Returns information about installed plugins." produces: ["application/json"] responses: 200: description: "No error" schema: type: "array" items: $ref: "#/definitions/Plugin" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the plugin list. Available filters: - `capability=<capability name>` - `enable=<true>|<false>` tags: ["Plugin"] /plugins/privileges: get: summary: "Get plugin privileges" operationId: "GetPluginPrivileges" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/PluginPrivilegeItem" example: - Name: "network" Description: "" Value: - "host" - Name: "mount" Description: "" Value: - "/data" - Name: "device" Description: "" Value: - "/dev/cpu_dma_latency" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "remote" in: "query" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" tags: - "Plugin" /plugins/pull: post: summary: "Install a plugin" operationId: "PluginPull" description: | Pulls and installs a plugin. After the plugin is installed, it can be enabled using the [`POST /plugins/{name}/enable` endpoint](#operation/PostPluginsEnable). produces: - "application/json" responses: 204: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "remote" in: "query" description: | Remote reference for plugin to install. The `:latest` tag is optional, and is used as the default if omitted. required: true type: "string" - name: "name" in: "query" description: | Local name for the pulled plugin. The `:latest` tag is optional, and is used as the default if omitted. required: false type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration to use when pulling a plugin from a registry. Refer to the [authentication section](#section/Authentication) for details. type: "string" - name: "body" in: "body" schema: type: "array" items: $ref: "#/definitions/PluginPrivilegeItem" example: - Name: "network" Description: "" Value: - "host" - Name: "mount" Description: "" Value: - "/data" - Name: "device" Description: "" Value: - "/dev/cpu_dma_latency" tags: ["Plugin"] /plugins/{name}/json: get: summary: "Inspect a plugin" operationId: "PluginInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Plugin" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. 
required: true type: "string" tags: ["Plugin"] /plugins/{name}: delete: summary: "Remove a plugin" operationId: "PluginDelete" responses: 200: description: "no error" schema: $ref: "#/definitions/Plugin" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "force" in: "query" description: | Disable the plugin before removing. This may result in issues if the plugin is in use by a container. type: "boolean" default: false tags: ["Plugin"] /plugins/{name}/enable: post: summary: "Enable a plugin" operationId: "PluginEnable" responses: 200: description: "no error" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "timeout" in: "query" description: "Set the HTTP client timeout (in seconds)" type: "integer" default: 0 tags: ["Plugin"] /plugins/{name}/disable: post: summary: "Disable a plugin" operationId: "PluginDisable" responses: 200: description: "no error" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" tags: ["Plugin"] /plugins/{name}/upgrade: post: summary: "Upgrade a plugin" operationId: "PluginUpgrade" responses: 204: description: "no error" 404: description: "plugin not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "remote" in: "query" description: | Remote reference to upgrade to. The `:latest` tag is optional, and is used as the default if omitted. required: true type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration to use when pulling a plugin from a registry. Refer to the [authentication section](#section/Authentication) for details. type: "string" - name: "body" in: "body" schema: type: "array" items: $ref: "#/definitions/PluginPrivilegeItem" example: - Name: "network" Description: "" Value: - "host" - Name: "mount" Description: "" Value: - "/data" - Name: "device" Description: "" Value: - "/dev/cpu_dma_latency" tags: ["Plugin"] /plugins/create: post: summary: "Create a plugin" operationId: "PluginCreate" consumes: - "application/x-tar" responses: 204: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "query" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. 
required: true type: "string" - name: "tarContext" in: "body" description: "Path to tar containing plugin rootfs and manifest" schema: type: "string" format: "binary" tags: ["Plugin"] /plugins/{name}/push: post: summary: "Push a plugin" operationId: "PluginPush" description: | Push a plugin to the registry. parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" responses: 200: description: "no error" 404: description: "plugin not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Plugin"] /plugins/{name}/set: post: summary: "Configure a plugin" operationId: "PluginSet" consumes: - "application/json" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "body" in: "body" schema: type: "array" items: type: "string" example: ["DEBUG=1"] responses: 204: description: "No error" 404: description: "Plugin not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Plugin"] /nodes: get: summary: "List nodes" operationId: "NodeList" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Node" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" description: | Filters to process on the nodes list, encoded as JSON (a `map[string][]string`). Available filters: - `id=<node id>` - `label=<engine label>` - `membership=`(`accepted`|`pending`)` - `name=<node name>` - `node.label=<node label>` - `role=`(`manager`|`worker`)` type: "string" tags: ["Node"] /nodes/{id}: get: summary: "Inspect a node" operationId: "NodeInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Node" 404: description: "no such node" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the node" type: "string" required: true tags: ["Node"] delete: summary: "Delete a node" operationId: "NodeDelete" responses: 200: description: "no error" 404: description: "no such node" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the node" type: "string" required: true - name: "force" in: "query" description: "Force remove a node from the swarm" default: false type: "boolean" tags: ["Node"] /nodes/{id}/update: post: summary: "Update a node" operationId: "NodeUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such node" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID of 
the node" type: "string" required: true - name: "body" in: "body" schema: $ref: "#/definitions/NodeSpec" - name: "version" in: "query" description: | The version number of the node object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true tags: ["Node"] /swarm: get: summary: "Inspect swarm" operationId: "SwarmInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Swarm" 404: description: "no such swarm" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" tags: ["Swarm"] /swarm/init: post: summary: "Initialize a new swarm" operationId: "SwarmInit" produces: - "application/json" - "text/plain" responses: 200: description: "no error" schema: description: "The node ID" type: "string" example: "7v2t30z9blmxuhnyo6s4cpenp" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is already part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: type: "object" title: "SwarmInitRequest" properties: ListenAddr: description: | Listen address used for inter-manager communication, as well as determining the networking interface used for the VXLAN Tunnel Endpoint (VTEP). This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the default swarm listening port is used. type: "string" AdvertiseAddr: description: | Externally reachable address advertised to other nodes. This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the port number from the listen address is used. If `AdvertiseAddr` is not specified, it will be automatically detected when possible. type: "string" DataPathAddr: description: | Address or interface to use for data path traffic (format: `<ip|interface>`), for example, `192.168.1.1`, or an interface, like `eth0`. If `DataPathAddr` is unspecified, the same address as `AdvertiseAddr` is used. The `DataPathAddr` specifies the address that global scope network drivers will publish towards other nodes in order to reach the containers running on this node. Using this parameter it is possible to separate the container data traffic from the management traffic of the cluster. type: "string" DataPathPort: description: | DataPathPort specifies the data path port number for data traffic. Acceptable port range is 1024 to 49151. if no port is set or is set to 0, default port 4789 will be used. type: "integer" format: "uint32" DefaultAddrPool: description: | Default Address Pool specifies default subnet pools for global scope networks. type: "array" items: type: "string" example: ["10.10.0.0/16", "20.20.0.0/16"] ForceNewCluster: description: "Force creation of a new swarm." type: "boolean" SubnetSize: description: | SubnetSize specifies the subnet size of the networks created from the default subnet pool. 
type: "integer" format: "uint32" Spec: $ref: "#/definitions/SwarmSpec" example: ListenAddr: "0.0.0.0:2377" AdvertiseAddr: "192.168.1.1:2377" DataPathPort: 4789 DefaultAddrPool: ["10.10.0.0/8", "20.20.0.0/8"] SubnetSize: 24 ForceNewCluster: false Spec: Orchestration: {} Raft: {} Dispatcher: {} CAConfig: {} EncryptionConfig: AutoLockManagers: false tags: ["Swarm"] /swarm/join: post: summary: "Join an existing swarm" operationId: "SwarmJoin" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is already part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: type: "object" title: "SwarmJoinRequest" properties: ListenAddr: description: | Listen address used for inter-manager communication if the node gets promoted to manager, as well as determining the networking interface used for the VXLAN Tunnel Endpoint (VTEP). type: "string" AdvertiseAddr: description: | Externally reachable address advertised to other nodes. This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the port number from the listen address is used. If `AdvertiseAddr` is not specified, it will be automatically detected when possible. type: "string" DataPathAddr: description: | Address or interface to use for data path traffic (format: `<ip|interface>`), for example, `192.168.1.1`, or an interface, like `eth0`. If `DataPathAddr` is unspecified, the same address as `AdvertiseAddr` is used. The `DataPathAddr` specifies the address that global scope network drivers will publish towards other nodes in order to reach the containers running on this node. Using this parameter it is possible to separate the container data traffic from the management traffic of the cluster. type: "string" RemoteAddrs: description: | Addresses of manager nodes already participating in the swarm. type: "array" items: type: "string" JoinToken: description: "Secret token for joining this swarm." type: "string" example: ListenAddr: "0.0.0.0:2377" AdvertiseAddr: "192.168.1.1:2377" RemoteAddrs: - "node1:2377" JoinToken: "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-7p73s1dx5in4tatdymyhg9hu2" tags: ["Swarm"] /swarm/leave: post: summary: "Leave a swarm" operationId: "SwarmLeave" responses: 200: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "force" description: | Force leave swarm, even if this is the last manager or that it will break the cluster. in: "query" type: "boolean" default: false tags: ["Swarm"] /swarm/update: post: summary: "Update a swarm" operationId: "SwarmUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: $ref: "#/definitions/SwarmSpec" - name: "version" in: "query" description: | The version number of the swarm object being updated. This is required to avoid conflicting writes. 
type: "integer" format: "int64" required: true - name: "rotateWorkerToken" in: "query" description: "Rotate the worker join token." type: "boolean" default: false - name: "rotateManagerToken" in: "query" description: "Rotate the manager join token." type: "boolean" default: false - name: "rotateManagerUnlockKey" in: "query" description: "Rotate the manager unlock key." type: "boolean" default: false tags: ["Swarm"] /swarm/unlockkey: get: summary: "Get the unlock key" operationId: "SwarmUnlockkey" consumes: - "application/json" responses: 200: description: "no error" schema: type: "object" title: "UnlockKeyResponse" properties: UnlockKey: description: "The swarm's unlock key." type: "string" example: UnlockKey: "SWMKEY-1-7c37Cc8654o6p38HnroywCi19pllOnGtbdZEgtKxZu8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" tags: ["Swarm"] /swarm/unlock: post: summary: "Unlock a locked manager" operationId: "SwarmUnlock" consumes: - "application/json" produces: - "application/json" parameters: - name: "body" in: "body" required: true schema: type: "object" title: "SwarmUnlockRequest" properties: UnlockKey: description: "The swarm's unlock key." type: "string" example: UnlockKey: "SWMKEY-1-7c37Cc8654o6p38HnroywCi19pllOnGtbdZEgtKxZu8" responses: 200: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" tags: ["Swarm"] /services: get: summary: "List services" operationId: "ServiceList" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Service" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the services list. Available filters: - `id=<service id>` - `label=<service label>` - `mode=["replicated"|"global"]` - `name=<service name>` - name: "status" in: "query" type: "boolean" description: | Include service status, with count of running and desired tasks. tags: ["Service"] /services/create: post: summary: "Create a service" operationId: "ServiceCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: type: "object" title: "ServiceCreateResponse" properties: ID: description: "The ID of the created service." 
type: "string" Warning: description: "Optional warning message" type: "string" example: ID: "ak7w3gjqoa3kuz8xcpnyy0pvl" Warning: "unable to pin image doesnotexist:latest to digest: image library/doesnotexist:latest not found" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 403: description: "network is not eligible for services" schema: $ref: "#/definitions/ErrorResponse" 409: description: "name conflicts with an existing service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: allOf: - $ref: "#/definitions/ServiceSpec" - type: "object" example: Name: "web" TaskTemplate: ContainerSpec: Image: "nginx:alpine" Mounts: - ReadOnly: true Source: "web-data" Target: "/usr/share/nginx/html" Type: "volume" VolumeOptions: DriverConfig: {} Labels: com.example.something: "something-value" Hosts: ["10.10.10.10 host1", "ABCD:EF01:2345:6789:ABCD:EF01:2345:6789 host2"] User: "33" DNSConfig: Nameservers: ["8.8.8.8"] Search: ["example.org"] Options: ["timeout:3"] Secrets: - File: Name: "www.example.org.key" UID: "33" GID: "33" Mode: 384 SecretID: "fpjqlhnwb19zds35k8wn80lq9" SecretName: "example_org_domain_key" LogDriver: Name: "json-file" Options: max-file: "3" max-size: "10M" Placement: {} Resources: Limits: MemoryBytes: 104857600 Reservations: {} RestartPolicy: Condition: "on-failure" Delay: 10000000000 MaxAttempts: 10 Mode: Replicated: Replicas: 4 UpdateConfig: Parallelism: 2 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 RollbackConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 EndpointSpec: Ports: - Protocol: "tcp" PublishedPort: 8080 TargetPort: 80 Labels: foo: "bar" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration for pulling from private registries. Refer to the [authentication section](#section/Authentication) for details. type: "string" tags: ["Service"] /services/{id}: get: summary: "Inspect a service" operationId: "ServiceInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Service" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID or name of service." required: true type: "string" - name: "insertDefaults" in: "query" description: "Fill empty fields with default values." type: "boolean" default: false tags: ["Service"] delete: summary: "Delete a service" operationId: "ServiceDelete" responses: 200: description: "no error" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID or name of service." 
required: true type: "string" tags: ["Service"] /services/{id}/update: post: summary: "Update a service" operationId: "ServiceUpdate" consumes: ["application/json"] produces: ["application/json"] responses: 200: description: "no error" schema: $ref: "#/definitions/ServiceUpdateResponse" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID or name of service." required: true type: "string" - name: "body" in: "body" required: true schema: allOf: - $ref: "#/definitions/ServiceSpec" - type: "object" example: Name: "top" TaskTemplate: ContainerSpec: Image: "busybox" Args: - "top" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ForceUpdate: 0 Mode: Replicated: Replicas: 1 UpdateConfig: Parallelism: 2 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 RollbackConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 EndpointSpec: Mode: "vip" - name: "version" in: "query" description: | The version number of the service object being updated. This is required to avoid conflicting writes. This version number should be the value as currently set on the service *before* the update. You can find the current version by calling `GET /services/{id}` required: true type: "integer" - name: "registryAuthFrom" in: "query" description: | If the `X-Registry-Auth` header is not specified, this parameter indicates where to find registry authorization credentials. type: "string" enum: ["spec", "previous-spec"] default: "spec" - name: "rollback" in: "query" description: | Set to this parameter to `previous` to cause a server-side rollback to the previous service spec. The supplied spec will be ignored in this case. type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration for pulling from private registries. Refer to the [authentication section](#section/Authentication) for details. type: "string" tags: ["Service"] /services/{id}/logs: get: summary: "Get service logs" description: | Get `stdout` and `stderr` logs from a service. See also [`/containers/{id}/logs`](#operation/ContainerLogs). **Note**: This endpoint works only for services with the `local`, `json-file` or `journald` logging drivers. operationId: "ServiceLogs" responses: 200: description: "logs returned as a stream in response body" schema: type: "string" format: "binary" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such service: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the service" type: "string" - name: "details" in: "query" description: "Show service context and extra details provided to logs." type: "boolean" default: false - name: "follow" in: "query" description: "Keep connection after returning logs." 
type: "boolean" default: false - name: "stdout" in: "query" description: "Return logs from `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Return logs from `stderr`" type: "boolean" default: false - name: "since" in: "query" description: "Only return logs since this time, as a UNIX timestamp" type: "integer" default: 0 - name: "timestamps" in: "query" description: "Add timestamps to every log line" type: "boolean" default: false - name: "tail" in: "query" description: | Only return this number of log lines from the end of the logs. Specify as an integer or `all` to output all log lines. type: "string" default: "all" tags: ["Service"] /tasks: get: summary: "List tasks" operationId: "TaskList" produces: - "application/json" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Task" example: - ID: "0kzzo1i0y4jz6027t0k7aezc7" Version: Index: 71 CreatedAt: "2016-06-07T21:07:31.171892745Z" UpdatedAt: "2016-06-07T21:07:31.376370513Z" Spec: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ServiceID: "9mnpnzenvg8p8tdbtq4wvbkcz" Slot: 1 NodeID: "60gvrl6tm78dmak4yl7srz94v" Status: Timestamp: "2016-06-07T21:07:31.290032978Z" State: "running" Message: "started" ContainerStatus: ContainerID: "e5d62702a1b48d01c3e02ca1e0212a250801fa8d67caca0b6f35919ebc12f035" PID: 677 DesiredState: "running" NetworksAttachments: - Network: ID: "4qvuz4ko70xaltuqbt8956gd1" Version: Index: 18 CreatedAt: "2016-06-07T20:31:11.912919752Z" UpdatedAt: "2016-06-07T21:07:29.955277358Z" Spec: Name: "ingress" Labels: com.docker.swarm.internal: "true" DriverConfiguration: {} IPAMOptions: Driver: {} Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" DriverState: Name: "overlay" Options: com.docker.network.driver.overlay.vxlanid_list: "256" IPAMOptions: Driver: Name: "default" Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" Addresses: - "10.255.0.10/16" - ID: "1yljwbmlr8er2waf8orvqpwms" Version: Index: 30 CreatedAt: "2016-06-07T21:07:30.019104782Z" UpdatedAt: "2016-06-07T21:07:30.231958098Z" Name: "hopeful_cori" Spec: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ServiceID: "9mnpnzenvg8p8tdbtq4wvbkcz" Slot: 1 NodeID: "60gvrl6tm78dmak4yl7srz94v" Status: Timestamp: "2016-06-07T21:07:30.202183143Z" State: "shutdown" Message: "shutdown" ContainerStatus: ContainerID: "1cf8d63d18e79668b0004a4be4c6ee58cddfad2dae29506d8781581d0688a213" DesiredState: "shutdown" NetworksAttachments: - Network: ID: "4qvuz4ko70xaltuqbt8956gd1" Version: Index: 18 CreatedAt: "2016-06-07T20:31:11.912919752Z" UpdatedAt: "2016-06-07T21:07:29.955277358Z" Spec: Name: "ingress" Labels: com.docker.swarm.internal: "true" DriverConfiguration: {} IPAMOptions: Driver: {} Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" DriverState: Name: "overlay" Options: com.docker.network.driver.overlay.vxlanid_list: "256" IPAMOptions: Driver: Name: "default" Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" Addresses: - "10.255.0.5/16" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the tasks list. 
Available filters: - `desired-state=(running | shutdown | accepted)` - `id=<task id>` - `label=key` or `label="key=value"` - `name=<task name>` - `node=<node id or name>` - `service=<service name>` tags: ["Task"] /tasks/{id}: get: summary: "Inspect a task" operationId: "TaskInspect" produces: - "application/json" responses: 200: description: "no error" schema: $ref: "#/definitions/Task" 404: description: "no such task" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID of the task" required: true type: "string" tags: ["Task"] /tasks/{id}/logs: get: summary: "Get task logs" description: | Get `stdout` and `stderr` logs from a task. See also [`/containers/{id}/logs`](#operation/ContainerLogs). **Note**: This endpoint works only for services with the `local`, `json-file` or `journald` logging drivers. operationId: "TaskLogs" responses: 200: description: "logs returned as a stream in response body" schema: type: "string" format: "binary" 404: description: "no such task" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such task: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID of the task" type: "string" - name: "details" in: "query" description: "Show task context and extra details provided to logs." type: "boolean" default: false - name: "follow" in: "query" description: "Keep connection after returning logs." type: "boolean" default: false - name: "stdout" in: "query" description: "Return logs from `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Return logs from `stderr`" type: "boolean" default: false - name: "since" in: "query" description: "Only return logs since this time, as a UNIX timestamp" type: "integer" default: 0 - name: "timestamps" in: "query" description: "Add timestamps to every log line" type: "boolean" default: false - name: "tail" in: "query" description: | Only return this number of log lines from the end of the logs. Specify as an integer or `all` to output all log lines. type: "string" default: "all" tags: ["Task"] /secrets: get: summary: "List secrets" operationId: "SecretList" produces: - "application/json" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Secret" example: - ID: "blt1owaxmitz71s9v5zh81zun" Version: Index: 85 CreatedAt: "2017-07-20T13:55:28.678958722Z" UpdatedAt: "2017-07-20T13:55:28.678958722Z" Spec: Name: "mysql-passwd" Labels: some.label: "some.value" Driver: Name: "secret-bucket" Options: OptionA: "value for driver option A" OptionB: "value for driver option B" - ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "app-dev.crt" Labels: foo: "bar" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the secrets list. 
Available filters: - `id=<secret id>` - `label=<key> or label=<key>=value` - `name=<secret name>` - `names=<secret name>` tags: ["Secret"] /secrets/create: post: summary: "Create a secret" operationId: "SecretCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 409: description: "name conflicts with an existing object" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" schema: allOf: - $ref: "#/definitions/SecretSpec" - type: "object" example: Name: "app-key.crt" Labels: foo: "bar" Data: "VEhJUyBJUyBOT1QgQSBSRUFMIENFUlRJRklDQVRFCg==" Driver: Name: "secret-bucket" Options: OptionA: "value for driver option A" OptionB: "value for driver option B" tags: ["Secret"] /secrets/{id}: get: summary: "Inspect a secret" operationId: "SecretInspect" produces: - "application/json" responses: 200: description: "no error" schema: $ref: "#/definitions/Secret" examples: application/json: ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "app-dev.crt" Labels: foo: "bar" Driver: Name: "secret-bucket" Options: OptionA: "value for driver option A" OptionB: "value for driver option B" 404: description: "secret not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the secret" tags: ["Secret"] delete: summary: "Delete a secret" operationId: "SecretDelete" produces: - "application/json" responses: 204: description: "no error" 404: description: "secret not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the secret" tags: ["Secret"] /secrets/{id}/update: post: summary: "Update a Secret" operationId: "SecretUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such secret" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the secret" type: "string" required: true - name: "body" in: "body" schema: $ref: "#/definitions/SecretSpec" description: | The spec of the secret to update. Currently, only the Labels field can be updated. All other fields must remain unchanged from the [SecretInspect endpoint](#operation/SecretInspect) response values. - name: "version" in: "query" description: | The version number of the secret object being updated. This is required to avoid conflicting writes. 
type: "integer" format: "int64" required: true tags: ["Secret"] /configs: get: summary: "List configs" operationId: "ConfigList" produces: - "application/json" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Config" example: - ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "server.conf" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the configs list. Available filters: - `id=<config id>` - `label=<key> or label=<key>=value` - `name=<config name>` - `names=<config name>` tags: ["Config"] /configs/create: post: summary: "Create a config" operationId: "ConfigCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 409: description: "name conflicts with an existing object" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" schema: allOf: - $ref: "#/definitions/ConfigSpec" - type: "object" example: Name: "server.conf" Labels: foo: "bar" Data: "VEhJUyBJUyBOT1QgQSBSRUFMIENFUlRJRklDQVRFCg==" tags: ["Config"] /configs/{id}: get: summary: "Inspect a config" operationId: "ConfigInspect" produces: - "application/json" responses: 200: description: "no error" schema: $ref: "#/definitions/Config" examples: application/json: ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "app-dev.crt" 404: description: "config not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the config" tags: ["Config"] delete: summary: "Delete a config" operationId: "ConfigDelete" produces: - "application/json" responses: 204: description: "no error" 404: description: "config not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the config" tags: ["Config"] /configs/{id}/update: post: summary: "Update a Config" operationId: "ConfigUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such config" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the config" type: "string" required: true - name: "body" in: "body" schema: $ref: "#/definitions/ConfigSpec" description: | The spec of the config to update. 
Currently, only the Labels field can be updated. All other fields must remain unchanged from the [ConfigInspect endpoint](#operation/ConfigInspect) response values. - name: "version" in: "query" description: | The version number of the config object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true tags: ["Config"] /distribution/{name}/json: get: summary: "Get image information from the registry" description: | Return image digest and platform information by contacting the registry. operationId: "DistributionInspect" produces: - "application/json" responses: 200: description: "descriptor and platform information" schema: type: "object" x-go-name: DistributionInspect title: "DistributionInspectResponse" required: [Descriptor, Platforms] properties: Descriptor: type: "object" description: | A descriptor struct containing digest, media type, and size. properties: mediaType: type: "string" size: type: "integer" format: "int64" digest: type: "string" urls: type: "array" items: type: "string" Platforms: type: "array" description: | An array containing all platforms supported by the image. items: type: "object" properties: architecture: type: "string" os: type: "string" os.version: type: "string" os.features: type: "array" items: type: "string" variant: type: "string" Features: type: "array" items: type: "string" examples: application/json: Descriptor: MediaType: "application/vnd.docker.distribution.manifest.v2+json" Digest: "sha256:c0537ff6a5218ef531ece93d4984efc99bbf3f7497c0a7726c88e2bb7584dc96" Size: 3987495 URLs: - "" Platforms: - architecture: "amd64" os: "linux" os.version: "" os.features: - "" variant: "" Features: - "" 401: description: "Failed authentication or no image found" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such image: someimage (tag: latest)" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or id" type: "string" required: true tags: ["Distribution"] /session: post: summary: "Initialize interactive session" description: | Start a new interactive session with a server. Session allows server to call back to the client for advanced capabilities. ### Hijacking This endpoint hijacks the HTTP connection to HTTP2 transport that allows the client to expose gPRC services on that connection. For example, the client sends this request to upgrade the connection: ``` POST /session HTTP/1.1 Upgrade: h2c Connection: Upgrade ``` The Docker daemon responds with a `101 UPGRADED` response follow with the raw stream: ``` HTTP/1.1 101 UPGRADED Connection: Upgrade Upgrade: h2c ``` operationId: "Session" produces: - "application/vnd.docker.raw-stream" responses: 101: description: "no error, hijacking successful" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Session"]
gesellix
41568dfc665ad4033d7fc860a5dac8102915008e
9bc0c4903f7f02ce287b9918f64795368e507f9d
Does it make sense to leave it for a separate PR? That could simplify the review a bit, at least.
rvolosatovs
4,560
moby/moby
42,621
Improve swagger.yaml to match the 1.41 api version
I recently tried to rely on the swagger.yaml for code generation of an API client. Some parts required manual fixes of the swagger docs to match the current API version 1.41. Each change described below lives in a dedicated commit, because they target different parts of the API description and aren't related per se. If you'd prefer to merge them all into a single commit, please leave a note.

**- What I did**

1. Add RestartPolicy "no" to swagger docs
2. Add "changes" query parameter for /image/create to swagger docs
3. Fix ContainerSummary swagger docs (flattened)
4. Use explicit object names for improved swagger-based code generation (otherwise generic names would be generated)
5. Fix swagger docs to match the [opencontainers image-spec](https://github.com/opencontainers/image-spec/blob/5ced465cc63831baf25330431a428a9d9444e192/specs-go/v1/descriptor.go)

**- How I did it**

Used the 1.41 swagger.yaml with swagger's codegen to generate Java code. Used that code to perform requests to a 1.41 Docker engine. Applied some fixes and ran `hack/validate/swagger` and `make swagger-docs` to validate the changes.

**- How to verify it**

Compare the actual 1.41 API with the swagger.yaml.

**- Description for the changelog**

Update the swagger.yaml to match the version 1.41 API.

**- A picture of a cute animal (not mandatory but encouraged)**

![062321_JC_mountain-cat_feat](https://user-images.githubusercontent.com/432791/125209896-77588e80-e29c-11eb-9a5f-439f5f04a69b.jpeg)
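To make item (4) above concrete, here is a minimal, hypothetical sketch of the kind of change meant by "explicit object names": giving an inline response schema an explicit `title`, so that swagger-based generators emit a named model (e.g. `VolumePruneResponse`) rather than a generic `InlineResponse200`-style class. The endpoint and field names below are only illustrative and are not necessarily the exact schemas this PR touches.

```yaml
# Illustrative only: an inline response schema with an explicit title.
# Code generators derive the model name from `title` instead of inventing one.
/volumes/prune:
  post:
    operationId: "VolumePrune"
    responses:
      200:
        description: "No error"
        schema:
          type: "object"
          title: "VolumePruneResponse"   # explicit name picked up by codegen
          properties:
            VolumesDeleted:
              description: "Volumes that were deleted"
              type: "array"
              items:
                type: "string"
            SpaceReclaimed:
              description: "Disk space reclaimed in bytes"
              type: "integer"
              format: "int64"
```

The same pattern already appears in the spec content above (for example `NetworkCreateResponse`, `UnlockKeyResponse`, and `ServiceCreateResponse`), which is what lets generated clients expose readable model classes.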
null
2021-07-11 21:09:21+00:00
2021-08-21 22:28:57+00:00
api/swagger.yaml
# A Swagger 2.0 (a.k.a. OpenAPI) definition of the Engine API. # # This is used for generating API documentation and the types used by the # client/server. See api/README.md for more information. # # Some style notes: # - This file is used by ReDoc, which allows GitHub Flavored Markdown in # descriptions. # - There is no maximum line length, for ease of editing and pretty diffs. # - operationIds are in the format "NounVerb", with a singular noun. swagger: "2.0" schemes: - "http" - "https" produces: - "application/json" - "text/plain" consumes: - "application/json" - "text/plain" basePath: "/v1.42" info: title: "Docker Engine API" version: "1.42" x-logo: url: "https://docs.docker.com/images/logo-docker-main.png" description: | The Engine API is an HTTP API served by Docker Engine. It is the API the Docker client uses to communicate with the Engine, so everything the Docker client can do can be done with the API. Most of the client's commands map directly to API endpoints (e.g. `docker ps` is `GET /containers/json`). The notable exception is running containers, which consists of several API calls. # Errors The API uses standard HTTP status codes to indicate the success or failure of the API call. The body of the response will be JSON in the following format: ``` { "message": "page not found" } ``` # Versioning The API is usually changed in each release, so API calls are versioned to ensure that clients don't break. To lock to a specific version of the API, you prefix the URL with its version, for example, call `/v1.30/info` to use the v1.30 version of the `/info` endpoint. If the API version specified in the URL is not supported by the daemon, a HTTP `400 Bad Request` error message is returned. If you omit the version-prefix, the current version of the API (v1.42) is used. For example, calling `/info` is the same as calling `/v1.42/info`. Using the API without a version-prefix is deprecated and will be removed in a future release. Engine releases in the near future should support this version of the API, so your client will continue to work even if it is talking to a newer Engine. The API uses an open schema model, which means server may add extra properties to responses. Likewise, the server will ignore any extra query parameters and request body properties. When you write clients, you need to ignore additional properties in responses to ensure they do not break when talking to newer daemons. # Authentication Authentication for registries is handled client side. The client has to send authentication details to various endpoints that need to communicate with registries, such as `POST /images/(name)/push`. These are sent as `X-Registry-Auth` header as a [base64url encoded](https://tools.ietf.org/html/rfc4648#section-5) (JSON) string with the following structure: ``` { "username": "string", "password": "string", "email": "string", "serveraddress": "string" } ``` The `serveraddress` is a domain/IP without a protocol. Throughout this structure, double quotes are required. If you have already got an identity token from the [`/auth` endpoint](#operation/SystemAuth), you can just pass this instead of credentials: ``` { "identitytoken": "9cbaf023786cd7..." } ``` # The tags on paths define the menu sections in the ReDoc documentation, so # the usage of tags must make sense for that: # - They should be singular, not plural. # - There should not be too many tags, or the menu becomes unwieldy. 
For # example, it is preferable to add a path to the "System" tag instead of # creating a tag with a single path in it. # - The order of tags in this list defines the order in the menu. tags: # Primary objects - name: "Container" x-displayName: "Containers" description: | Create and manage containers. - name: "Image" x-displayName: "Images" - name: "Network" x-displayName: "Networks" description: | Networks are user-defined networks that containers can be attached to. See the [networking documentation](https://docs.docker.com/network/) for more information. - name: "Volume" x-displayName: "Volumes" description: | Create and manage persistent storage that can be attached to containers. - name: "Exec" x-displayName: "Exec" description: | Run new commands inside running containers. Refer to the [command-line reference](https://docs.docker.com/engine/reference/commandline/exec/) for more information. To exec a command in a container, you first need to create an exec instance, then start it. These two API endpoints are wrapped up in a single command-line command, `docker exec`. # Swarm things - name: "Swarm" x-displayName: "Swarm" description: | Engines can be clustered together in a swarm. Refer to the [swarm mode documentation](https://docs.docker.com/engine/swarm/) for more information. - name: "Node" x-displayName: "Nodes" description: | Nodes are instances of the Engine participating in a swarm. Swarm mode must be enabled for these endpoints to work. - name: "Service" x-displayName: "Services" description: | Services are the definitions of tasks to run on a swarm. Swarm mode must be enabled for these endpoints to work. - name: "Task" x-displayName: "Tasks" description: | A task is a container running on a swarm. It is the atomic scheduling unit of swarm. Swarm mode must be enabled for these endpoints to work. - name: "Secret" x-displayName: "Secrets" description: | Secrets are sensitive data that can be used by services. Swarm mode must be enabled for these endpoints to work. - name: "Config" x-displayName: "Configs" description: | Configs are application configurations that can be used by services. Swarm mode must be enabled for these endpoints to work. 
# System things - name: "Plugin" x-displayName: "Plugins" - name: "System" x-displayName: "System" definitions: Port: type: "object" description: "An open port on a container" required: [PrivatePort, Type] properties: IP: type: "string" format: "ip-address" description: "Host IP address that the container's port is mapped to" PrivatePort: type: "integer" format: "uint16" x-nullable: false description: "Port on the container" PublicPort: type: "integer" format: "uint16" description: "Port exposed on the host" Type: type: "string" x-nullable: false enum: ["tcp", "udp", "sctp"] example: PrivatePort: 8080 PublicPort: 80 Type: "tcp" MountPoint: type: "object" description: "A mount point inside a container" properties: Type: type: "string" Name: type: "string" Source: type: "string" Destination: type: "string" Driver: type: "string" Mode: type: "string" RW: type: "boolean" Propagation: type: "string" DeviceMapping: type: "object" description: "A device mapping between the host and container" properties: PathOnHost: type: "string" PathInContainer: type: "string" CgroupPermissions: type: "string" example: PathOnHost: "/dev/deviceName" PathInContainer: "/dev/deviceName" CgroupPermissions: "mrw" DeviceRequest: type: "object" description: "A request for devices to be sent to device drivers" properties: Driver: type: "string" example: "nvidia" Count: type: "integer" example: -1 DeviceIDs: type: "array" items: type: "string" example: - "0" - "1" - "GPU-fef8089b-4820-abfc-e83e-94318197576e" Capabilities: description: | A list of capabilities; an OR list of AND lists of capabilities. type: "array" items: type: "array" items: type: "string" example: # gpu AND nvidia AND compute - ["gpu", "nvidia", "compute"] Options: description: | Driver-specific options, specified as a key/value pairs. These options are passed directly to the driver. type: "object" additionalProperties: type: "string" ThrottleDevice: type: "object" properties: Path: description: "Device path" type: "string" Rate: description: "Rate" type: "integer" format: "int64" minimum: 0 Mount: type: "object" properties: Target: description: "Container path." type: "string" Source: description: "Mount source (e.g. a volume name, a host path)." type: "string" Type: description: | The mount type. Available types: - `bind` Mounts a file or directory from the host into the container. Must exist prior to creating the container. - `volume` Creates a volume with the given name and options (or uses a pre-existing volume with the same name and options). These are **not** removed when the container is removed. - `tmpfs` Create a tmpfs with the given options. The mount source cannot be specified for tmpfs. - `npipe` Mounts a named pipe from the host into the container. Must exist prior to creating the container. type: "string" enum: - "bind" - "volume" - "tmpfs" - "npipe" ReadOnly: description: "Whether the mount should be read-only." type: "boolean" Consistency: description: "The consistency requirement for the mount: `default`, `consistent`, `cached`, or `delegated`." type: "string" BindOptions: description: "Optional configuration for the `bind` type." type: "object" properties: Propagation: description: "A propagation mode with the value `[r]private`, `[r]shared`, or `[r]slave`." type: "string" enum: - "private" - "rprivate" - "shared" - "rshared" - "slave" - "rslave" NonRecursive: description: "Disable recursive bind mount." type: "boolean" default: false VolumeOptions: description: "Optional configuration for the `volume` type." 
type: "object" properties: NoCopy: description: "Populate volume with data from the target." type: "boolean" default: false Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" DriverConfig: description: "Map of driver specific options" type: "object" properties: Name: description: "Name of the driver to use to create the volume." type: "string" Options: description: "key/value map of driver specific options." type: "object" additionalProperties: type: "string" TmpfsOptions: description: "Optional configuration for the `tmpfs` type." type: "object" properties: SizeBytes: description: "The size for the tmpfs mount in bytes." type: "integer" format: "int64" Mode: description: "The permission mode for the tmpfs mount in an integer." type: "integer" RestartPolicy: description: | The behavior to apply when the container exits. The default is not to restart. An ever increasing delay (double the previous delay, starting at 100ms) is added before each restart to prevent flooding the server. type: "object" properties: Name: type: "string" description: | - Empty string means not to restart - `always` Always restart - `unless-stopped` Restart always except when the user has manually stopped the container - `on-failure` Restart only when the container exit code is non-zero enum: - "" - "always" - "unless-stopped" - "on-failure" MaximumRetryCount: type: "integer" description: | If `on-failure` is used, the number of times to retry before giving up. Resources: description: "A container's resources (cgroups config, ulimits, etc)" type: "object" properties: # Applicable to all platforms CpuShares: description: | An integer value representing this container's relative CPU weight versus other containers. type: "integer" Memory: description: "Memory limit in bytes." type: "integer" format: "int64" default: 0 # Applicable to UNIX platforms CgroupParent: description: | Path to `cgroups` under which the container's `cgroup` is created. If the path is not absolute, the path is considered to be relative to the `cgroups` path of the init process. Cgroups are created if they do not already exist. type: "string" BlkioWeight: description: "Block IO weight (relative weight)." type: "integer" minimum: 0 maximum: 1000 BlkioWeightDevice: description: | Block IO weight (relative device weight) in the form: ``` [{"Path": "device_path", "Weight": weight}] ``` type: "array" items: type: "object" properties: Path: type: "string" Weight: type: "integer" minimum: 0 BlkioDeviceReadBps: description: | Limit read rate (bytes per second) from a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" BlkioDeviceWriteBps: description: | Limit write rate (bytes per second) to a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" BlkioDeviceReadIOps: description: | Limit read rate (IO per second) from a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" BlkioDeviceWriteIOps: description: | Limit write rate (IO per second) to a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" CpuPeriod: description: "The length of a CPU period in microseconds." type: "integer" format: "int64" CpuQuota: description: | Microseconds of CPU time that the container can get in a CPU period. 
type: "integer" format: "int64" CpuRealtimePeriod: description: | The length of a CPU real-time period in microseconds. Set to 0 to allocate no time allocated to real-time tasks. type: "integer" format: "int64" CpuRealtimeRuntime: description: | The length of a CPU real-time runtime in microseconds. Set to 0 to allocate no time allocated to real-time tasks. type: "integer" format: "int64" CpusetCpus: description: | CPUs in which to allow execution (e.g., `0-3`, `0,1`). type: "string" example: "0-3" CpusetMems: description: | Memory nodes (MEMs) in which to allow execution (0-3, 0,1). Only effective on NUMA systems. type: "string" Devices: description: "A list of devices to add to the container." type: "array" items: $ref: "#/definitions/DeviceMapping" DeviceCgroupRules: description: "a list of cgroup rules to apply to the container" type: "array" items: type: "string" example: "c 13:* rwm" DeviceRequests: description: | A list of requests for devices to be sent to device drivers. type: "array" items: $ref: "#/definitions/DeviceRequest" KernelMemory: description: | Kernel memory limit in bytes. <p><br /></p> > **Deprecated**: This field is deprecated as the kernel 5.4 deprecated > `kmem.limit_in_bytes`. type: "integer" format: "int64" example: 209715200 KernelMemoryTCP: description: "Hard limit for kernel TCP buffer memory (in bytes)." type: "integer" format: "int64" MemoryReservation: description: "Memory soft limit in bytes." type: "integer" format: "int64" MemorySwap: description: | Total memory limit (memory + swap). Set as `-1` to enable unlimited swap. type: "integer" format: "int64" MemorySwappiness: description: | Tune a container's memory swappiness behavior. Accepts an integer between 0 and 100. type: "integer" format: "int64" minimum: 0 maximum: 100 NanoCpus: description: "CPU quota in units of 10<sup>-9</sup> CPUs." type: "integer" format: "int64" OomKillDisable: description: "Disable OOM Killer for the container." type: "boolean" Init: description: | Run an init inside the container that forwards signals and reaps processes. This field is omitted if empty, and the default (as configured on the daemon) is used. type: "boolean" x-nullable: true PidsLimit: description: | Tune a container's PIDs limit. Set `0` or `-1` for unlimited, or `null` to not change. type: "integer" format: "int64" x-nullable: true Ulimits: description: | A list of resource limits to set in the container. For example: ``` {"Name": "nofile", "Soft": 1024, "Hard": 2048} ``` type: "array" items: type: "object" properties: Name: description: "Name of ulimit" type: "string" Soft: description: "Soft limit" type: "integer" Hard: description: "Hard limit" type: "integer" # Applicable to Windows CpuCount: description: | The number of usable CPUs (Windows only). On Windows Server containers, the processor resource controls are mutually exclusive. The order of precedence is `CPUCount` first, then `CPUShares`, and `CPUPercent` last. type: "integer" format: "int64" CpuPercent: description: | The usable percentage of the available CPUs (Windows only). On Windows Server containers, the processor resource controls are mutually exclusive. The order of precedence is `CPUCount` first, then `CPUShares`, and `CPUPercent` last. type: "integer" format: "int64" IOMaximumIOps: description: "Maximum IOps for the container system drive (Windows only)" type: "integer" format: "int64" IOMaximumBandwidth: description: | Maximum IO in bytes per second for the container system drive (Windows only). 
type: "integer" format: "int64" Limit: description: | An object describing a limit on resources which can be requested by a task. type: "object" properties: NanoCPUs: type: "integer" format: "int64" example: 4000000000 MemoryBytes: type: "integer" format: "int64" example: 8272408576 Pids: description: | Limits the maximum number of PIDs in the container. Set `0` for unlimited. type: "integer" format: "int64" default: 0 example: 100 ResourceObject: description: | An object describing the resources which can be advertised by a node and requested by a task. type: "object" properties: NanoCPUs: type: "integer" format: "int64" example: 4000000000 MemoryBytes: type: "integer" format: "int64" example: 8272408576 GenericResources: $ref: "#/definitions/GenericResources" GenericResources: description: | User-defined resources can be either Integer resources (e.g, `SSD=3`) or String resources (e.g, `GPU=UUID1`). type: "array" items: type: "object" properties: NamedResourceSpec: type: "object" properties: Kind: type: "string" Value: type: "string" DiscreteResourceSpec: type: "object" properties: Kind: type: "string" Value: type: "integer" format: "int64" example: - DiscreteResourceSpec: Kind: "SSD" Value: 3 - NamedResourceSpec: Kind: "GPU" Value: "UUID1" - NamedResourceSpec: Kind: "GPU" Value: "UUID2" HealthConfig: description: "A test to perform to check that the container is healthy." type: "object" properties: Test: description: | The test to perform. Possible values are: - `[]` inherit healthcheck from image or parent image - `["NONE"]` disable healthcheck - `["CMD", args...]` exec arguments directly - `["CMD-SHELL", command]` run command with system's default shell type: "array" items: type: "string" Interval: description: | The time to wait between checks in nanoseconds. It should be 0 or at least 1000000 (1 ms). 0 means inherit. type: "integer" Timeout: description: | The time to wait before considering the check to have hung. It should be 0 or at least 1000000 (1 ms). 0 means inherit. type: "integer" Retries: description: | The number of consecutive failures needed to consider a container as unhealthy. 0 means inherit. type: "integer" StartPeriod: description: | Start period for the container to initialize before starting health-retries countdown in nanoseconds. It should be 0 or at least 1000000 (1 ms). 0 means inherit. type: "integer" Health: description: | Health stores information about the container's healthcheck results. type: "object" properties: Status: description: | Status is one of `none`, `starting`, `healthy` or `unhealthy` - "none" Indicates there is no healthcheck - "starting" Starting indicates that the container is not yet ready - "healthy" Healthy indicates that the container is running correctly - "unhealthy" Unhealthy indicates that the container has a problem type: "string" enum: - "none" - "starting" - "healthy" - "unhealthy" example: "healthy" FailingStreak: description: "FailingStreak is the number of consecutive failures" type: "integer" example: 0 Log: type: "array" description: | Log contains the last few results (oldest first) items: x-nullable: true $ref: "#/definitions/HealthcheckResult" HealthcheckResult: description: | HealthcheckResult stores information about a single run of a healthcheck probe type: "object" properties: Start: description: | Date and time at which this check started in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. 
type: "string" format: "date-time" example: "2020-01-04T10:44:24.496525531Z" End: description: | Date and time at which this check ended in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2020-01-04T10:45:21.364524523Z" ExitCode: description: | ExitCode meanings: - `0` healthy - `1` unhealthy - `2` reserved (considered unhealthy) - other values: error running probe type: "integer" example: 0 Output: description: "Output from last check" type: "string" HostConfig: description: "Container configuration that depends on the host we are running on" allOf: - $ref: "#/definitions/Resources" - type: "object" properties: # Applicable to all platforms Binds: type: "array" description: | A list of volume bindings for this container. Each volume binding is a string in one of these forms: - `host-src:container-dest[:options]` to bind-mount a host path into the container. Both `host-src`, and `container-dest` must be an _absolute_ path. - `volume-name:container-dest[:options]` to bind-mount a volume managed by a volume driver into the container. `container-dest` must be an _absolute_ path. `options` is an optional, comma-delimited list of: - `nocopy` disables automatic copying of data from the container path to the volume. The `nocopy` flag only applies to named volumes. - `[ro|rw]` mounts a volume read-only or read-write, respectively. If omitted or set to `rw`, volumes are mounted read-write. - `[z|Z]` applies SELinux labels to allow or deny multiple containers to read and write to the same volume. - `z`: a _shared_ content label is applied to the content. This label indicates that multiple containers can share the volume content, for both reading and writing. - `Z`: a _private unshared_ label is applied to the content. This label indicates that only the current container can use a private volume. Labeling systems such as SELinux require proper labels to be placed on volume content that is mounted into a container. Without a label, the security system can prevent a container's processes from using the content. By default, the labels set by the host operating system are not modified. - `[[r]shared|[r]slave|[r]private]` specifies mount [propagation behavior](https://www.kernel.org/doc/Documentation/filesystems/sharedsubtree.txt). This only applies to bind-mounted volumes, not internal volumes or named volumes. Mount propagation requires the source mount point (the location where the source directory is mounted in the host operating system) to have the correct propagation properties. For shared volumes, the source mount point must be set to `shared`. For slave volumes, the mount must be set to either `shared` or `slave`. items: type: "string" ContainerIDFile: type: "string" description: "Path to a file where the container ID is written" LogConfig: type: "object" description: "The logging configuration for this container" properties: Type: type: "string" enum: - "json-file" - "syslog" - "journald" - "gelf" - "fluentd" - "awslogs" - "splunk" - "etwlogs" - "none" Config: type: "object" additionalProperties: type: "string" NetworkMode: type: "string" description: | Network mode to use for this container. Supported standard values are: `bridge`, `host`, `none`, and `container:<name|id>`. Any other value is taken as a custom network's name to which this container should connect to. 
PortBindings: $ref: "#/definitions/PortMap" RestartPolicy: $ref: "#/definitions/RestartPolicy" AutoRemove: type: "boolean" description: | Automatically remove the container when the container's process exits. This has no effect if `RestartPolicy` is set. VolumeDriver: type: "string" description: "Driver that this container uses to mount volumes." VolumesFrom: type: "array" description: | A list of volumes to inherit from another container, specified in the form `<container name>[:<ro|rw>]`. items: type: "string" Mounts: description: | Specification for mounts to be added to the container. type: "array" items: $ref: "#/definitions/Mount" # Applicable to UNIX platforms CapAdd: type: "array" description: | A list of kernel capabilities to add to the container. Conflicts with option 'Capabilities'. items: type: "string" CapDrop: type: "array" description: | A list of kernel capabilities to drop from the container. Conflicts with option 'Capabilities'. items: type: "string" CgroupnsMode: type: "string" enum: - "private" - "host" description: | cgroup namespace mode for the container. Possible values are: - `"private"`: the container runs in its own private cgroup namespace - `"host"`: use the host system's cgroup namespace If not specified, the daemon default is used, which can either be `"private"` or `"host"`, depending on daemon version, kernel support and configuration. Dns: type: "array" description: "A list of DNS servers for the container to use." items: type: "string" DnsOptions: type: "array" description: "A list of DNS options." items: type: "string" DnsSearch: type: "array" description: "A list of DNS search domains." items: type: "string" ExtraHosts: type: "array" description: | A list of hostnames/IP mappings to add to the container's `/etc/hosts` file. Specified in the form `["hostname:IP"]`. items: type: "string" GroupAdd: type: "array" description: | A list of additional groups that the container process will run as. items: type: "string" IpcMode: type: "string" description: | IPC sharing mode for the container. Possible values are: - `"none"`: own private IPC namespace, with /dev/shm not mounted - `"private"`: own private IPC namespace - `"shareable"`: own private IPC namespace, with a possibility to share it with other containers - `"container:<name|id>"`: join another (shareable) container's IPC namespace - `"host"`: use the host system's IPC namespace If not specified, daemon default is used, which can either be `"private"` or `"shareable"`, depending on daemon version and configuration. Cgroup: type: "string" description: "Cgroup to use for the container." Links: type: "array" description: | A list of links for the container in the form `container_name:alias`. items: type: "string" OomScoreAdj: type: "integer" description: | An integer value containing the score given to the container in order to tune OOM killer preferences. example: 500 PidMode: type: "string" description: | Set the PID (Process) Namespace mode for the container. It can be either: - `"container:<name|id>"`: joins another container's PID namespace - `"host"`: use the host's PID namespace inside the container Privileged: type: "boolean" description: "Gives the container full access to the host." PublishAllPorts: type: "boolean" description: | Allocates an ephemeral host port for all of a container's exposed ports. Ports are de-allocated when the container stops and allocated when the container starts. The allocated port might be changed when restarting the container. 
The port is selected from the ephemeral port range that depends on the kernel. For example, on Linux the range is defined by `/proc/sys/net/ipv4/ip_local_port_range`. ReadonlyRootfs: type: "boolean" description: "Mount the container's root filesystem as read only." SecurityOpt: type: "array" description: "A list of string values to customize labels for MLS systems, such as SELinux." items: type: "string" StorageOpt: type: "object" description: | Storage driver options for this container, in the form `{"size": "120G"}`. additionalProperties: type: "string" Tmpfs: type: "object" description: | A map of container directories which should be replaced by tmpfs mounts, and their corresponding mount options. For example: ``` { "/run": "rw,noexec,nosuid,size=65536k" } ``` additionalProperties: type: "string" UTSMode: type: "string" description: "UTS namespace to use for the container." UsernsMode: type: "string" description: | Sets the usernamespace mode for the container when usernamespace remapping option is enabled. ShmSize: type: "integer" description: | Size of `/dev/shm` in bytes. If omitted, the system uses 64MB. minimum: 0 Sysctls: type: "object" description: | A list of kernel parameters (sysctls) to set in the container. For example: ``` {"net.ipv4.ip_forward": "1"} ``` additionalProperties: type: "string" Runtime: type: "string" description: "Runtime to use with this container." # Applicable to Windows ConsoleSize: type: "array" description: | Initial console size, as an `[height, width]` array. (Windows only) minItems: 2 maxItems: 2 items: type: "integer" minimum: 0 Isolation: type: "string" description: | Isolation technology of the container. (Windows only) enum: - "default" - "process" - "hyperv" MaskedPaths: type: "array" description: | The list of paths to be masked inside the container (this overrides the default set of paths). items: type: "string" ReadonlyPaths: type: "array" description: | The list of paths to be set as read-only inside the container (this overrides the default set of paths). items: type: "string" ContainerConfig: description: "Configuration for a container that is portable between hosts" type: "object" properties: Hostname: description: "The hostname to use for the container, as a valid RFC 1123 hostname." type: "string" Domainname: description: "The domain name to use for the container." type: "string" User: description: "The user that commands are run as inside the container." type: "string" AttachStdin: description: "Whether to attach to `stdin`." type: "boolean" default: false AttachStdout: description: "Whether to attach to `stdout`." type: "boolean" default: true AttachStderr: description: "Whether to attach to `stderr`." type: "boolean" default: true ExposedPorts: description: | An object mapping ports to an empty object in the form: `{"<port>/<tcp|udp|sctp>": {}}` type: "object" additionalProperties: type: "object" enum: - {} default: {} Tty: description: | Attach standard streams to a TTY, including `stdin` if it is not closed. type: "boolean" default: false OpenStdin: description: "Open `stdin`" type: "boolean" default: false StdinOnce: description: "Close `stdin` after one attached client disconnects" type: "boolean" default: false Env: description: | A list of environment variables to set inside the container in the form `["VAR=value", ...]`. A variable without `=` is removed from the environment, rather than to have an empty value. type: "array" items: type: "string" Cmd: description: | Command to run specified as a string or an array of strings. 
type: "array" items: type: "string" Healthcheck: $ref: "#/definitions/HealthConfig" ArgsEscaped: description: "Command is already escaped (Windows only)" type: "boolean" Image: description: | The name of the image to use when creating the container/ type: "string" Volumes: description: | An object mapping mount point paths inside the container to empty objects. type: "object" additionalProperties: type: "object" enum: - {} default: {} WorkingDir: description: "The working directory for commands to run in." type: "string" Entrypoint: description: | The entry point for the container as a string or an array of strings. If the array consists of exactly one empty string (`[""]`) then the entry point is reset to system default (i.e., the entry point used by docker when there is no `ENTRYPOINT` instruction in the `Dockerfile`). type: "array" items: type: "string" NetworkDisabled: description: "Disable networking for the container." type: "boolean" MacAddress: description: "MAC address of the container." type: "string" OnBuild: description: | `ONBUILD` metadata that were defined in the image's `Dockerfile`. type: "array" items: type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" StopSignal: description: | Signal to stop a container as a string or unsigned integer. type: "string" default: "SIGTERM" StopTimeout: description: "Timeout to stop a container in seconds." type: "integer" default: 10 Shell: description: | Shell for when `RUN`, `CMD`, and `ENTRYPOINT` uses a shell. type: "array" items: type: "string" NetworkingConfig: description: | NetworkingConfig represents the container's networking configuration for each of its interfaces. It is used for the networking configs specified in the `docker create` and `docker network connect` commands. type: "object" properties: EndpointsConfig: description: | A mapping of network name to endpoint configuration for that network. type: "object" additionalProperties: $ref: "#/definitions/EndpointSettings" example: # putting an example here, instead of using the example values from # /definitions/EndpointSettings, because containers/create currently # does not support attaching to multiple networks, so the example request # would be confusing if it showed that multiple networks can be contained # in the EndpointsConfig. # TODO remove once we support multiple networks on container create (see https://github.com/moby/moby/blob/07e6b843594e061f82baa5fa23c2ff7d536c2a05/daemon/create.go#L323) EndpointsConfig: isolated_nw: IPAMConfig: IPv4Address: "172.20.30.33" IPv6Address: "2001:db8:abcd::3033" LinkLocalIPs: - "169.254.34.68" - "fe80::3468" Links: - "container_1" - "container_2" Aliases: - "server_x" - "server_y" NetworkSettings: description: "NetworkSettings exposes the network settings in the API" type: "object" properties: Bridge: description: Name of the network's bridge (for example, `docker0`). type: "string" example: "docker0" SandboxID: description: SandboxID uniquely represents a container's network stack. type: "string" example: "9d12daf2c33f5959c8bf90aa513e4f65b561738661003029ec84830cd503a0c3" HairpinMode: description: | Indicates if hairpin NAT should be enabled on the virtual interface. type: "boolean" example: false LinkLocalIPv6Address: description: IPv6 unicast address using the link-local prefix. type: "string" example: "fe80::42:acff:fe11:1" LinkLocalIPv6PrefixLen: description: Prefix length of the IPv6 unicast address. 
type: "integer" example: "64" Ports: $ref: "#/definitions/PortMap" SandboxKey: description: SandboxKey identifies the sandbox type: "string" example: "/var/run/docker/netns/8ab54b426c38" # TODO is SecondaryIPAddresses actually used? SecondaryIPAddresses: description: "" type: "array" items: $ref: "#/definitions/Address" x-nullable: true # TODO is SecondaryIPv6Addresses actually used? SecondaryIPv6Addresses: description: "" type: "array" items: $ref: "#/definitions/Address" x-nullable: true # TODO properties below are part of DefaultNetworkSettings, which is # marked as deprecated since Docker 1.9 and to be removed in Docker v17.12 EndpointID: description: | EndpointID uniquely represents a service endpoint in a Sandbox. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "b88f5b905aabf2893f3cbc4ee42d1ea7980bbc0a92e2c8922b1e1795298afb0b" Gateway: description: | Gateway address for the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "172.17.0.1" GlobalIPv6Address: description: | Global IPv6 address for the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "2001:db8::5689" GlobalIPv6PrefixLen: description: | Mask length of the global IPv6 address. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "integer" example: 64 IPAddress: description: | IPv4 address for the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "172.17.0.4" IPPrefixLen: description: | Mask length of the IPv4 address. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "integer" example: 16 IPv6Gateway: description: | IPv6 gateway address for this network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. 
Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "2001:db8:2::100" MacAddress: description: | MAC address for the container on the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "02:42:ac:11:00:04" Networks: description: | Information about all networks that the container is connected to. type: "object" additionalProperties: $ref: "#/definitions/EndpointSettings" Address: description: Address represents an IPv4 or IPv6 IP address. type: "object" properties: Addr: description: IP address. type: "string" PrefixLen: description: Mask length of the IP address. type: "integer" PortMap: description: | PortMap describes the mapping of container ports to host ports, using the container's port-number and protocol as key in the format `<port>/<protocol>`, for example, `80/udp`. If a container's port is mapped for multiple protocols, separate entries are added to the mapping table. type: "object" additionalProperties: type: "array" x-nullable: true items: $ref: "#/definitions/PortBinding" example: "443/tcp": - HostIp: "127.0.0.1" HostPort: "4443" "80/tcp": - HostIp: "0.0.0.0" HostPort: "80" - HostIp: "0.0.0.0" HostPort: "8080" "80/udp": - HostIp: "0.0.0.0" HostPort: "80" "53/udp": - HostIp: "0.0.0.0" HostPort: "53" "2377/tcp": null PortBinding: description: | PortBinding represents a binding between a host IP address and a host port. type: "object" properties: HostIp: description: "Host IP address that the container's port is mapped to." type: "string" example: "127.0.0.1" HostPort: description: "Host port number that the container's port is mapped to." type: "string" example: "4443" GraphDriverData: description: "Information about a container's graph driver." 
type: "object" required: [Name, Data] properties: Name: type: "string" x-nullable: false Data: type: "object" x-nullable: false additionalProperties: type: "string" Image: type: "object" required: - Id - Parent - Comment - Created - Container - DockerVersion - Author - Architecture - Os - Size - VirtualSize - GraphDriver - RootFS properties: Id: type: "string" x-nullable: false RepoTags: type: "array" items: type: "string" RepoDigests: type: "array" items: type: "string" Parent: type: "string" x-nullable: false Comment: type: "string" x-nullable: false Created: type: "string" x-nullable: false Container: type: "string" x-nullable: false ContainerConfig: $ref: "#/definitions/ContainerConfig" DockerVersion: type: "string" x-nullable: false Author: type: "string" x-nullable: false Config: $ref: "#/definitions/ContainerConfig" Architecture: type: "string" x-nullable: false Os: type: "string" x-nullable: false OsVersion: type: "string" Size: type: "integer" format: "int64" x-nullable: false VirtualSize: type: "integer" format: "int64" x-nullable: false GraphDriver: $ref: "#/definitions/GraphDriverData" RootFS: type: "object" required: [Type] properties: Type: type: "string" x-nullable: false Layers: type: "array" items: type: "string" BaseLayer: type: "string" Metadata: type: "object" properties: LastTagTime: type: "string" format: "dateTime" ImageSummary: type: "object" required: - Id - ParentId - RepoTags - RepoDigests - Created - Size - SharedSize - VirtualSize - Labels - Containers properties: Id: type: "string" x-nullable: false ParentId: type: "string" x-nullable: false RepoTags: type: "array" x-nullable: false items: type: "string" RepoDigests: type: "array" x-nullable: false items: type: "string" Created: type: "integer" x-nullable: false Size: type: "integer" x-nullable: false SharedSize: type: "integer" x-nullable: false VirtualSize: type: "integer" x-nullable: false Labels: type: "object" x-nullable: false additionalProperties: type: "string" Containers: x-nullable: false type: "integer" AuthConfig: type: "object" properties: username: type: "string" password: type: "string" email: type: "string" serveraddress: type: "string" example: username: "hannibal" password: "xxxx" serveraddress: "https://index.docker.io/v1/" ProcessConfig: type: "object" properties: privileged: type: "boolean" user: type: "string" tty: type: "boolean" entrypoint: type: "string" arguments: type: "array" items: type: "string" Volume: type: "object" required: [Name, Driver, Mountpoint, Labels, Scope, Options] properties: Name: type: "string" description: "Name of the volume." x-nullable: false Driver: type: "string" description: "Name of the volume driver used by the volume." x-nullable: false Mountpoint: type: "string" description: "Mount path of the volume on the host." x-nullable: false CreatedAt: type: "string" format: "dateTime" description: "Date/Time the volume was created." Status: type: "object" description: | Low-level details about the volume, provided by the volume driver. Details are returned as a map with key/value pairs: `{"key":"value","key2":"value2"}`. The `Status` field is optional, and is omitted if the volume driver does not support this feature. additionalProperties: type: "object" Labels: type: "object" description: "User-defined key/value metadata." x-nullable: false additionalProperties: type: "string" Scope: type: "string" description: | The level at which the volume exists. Either `global` for cluster-wide, or `local` for machine level. 
default: "local" x-nullable: false enum: ["local", "global"] Options: type: "object" description: | The driver specific options used when creating the volume. additionalProperties: type: "string" UsageData: type: "object" x-nullable: true required: [Size, RefCount] description: | Usage details about the volume. This information is used by the `GET /system/df` endpoint, and omitted in other endpoints. properties: Size: type: "integer" default: -1 description: | Amount of disk space used by the volume (in bytes). This information is only available for volumes created with the `"local"` volume driver. For volumes created with other volume drivers, this field is set to `-1` ("not available") x-nullable: false RefCount: type: "integer" default: -1 description: | The number of containers referencing this volume. This field is set to `-1` if the reference-count is not available. x-nullable: false example: Name: "tardis" Driver: "custom" Mountpoint: "/var/lib/docker/volumes/tardis" Status: hello: "world" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Scope: "local" CreatedAt: "2016-06-07T20:31:11.853781916Z" Network: type: "object" properties: Name: type: "string" Id: type: "string" Created: type: "string" format: "dateTime" Scope: type: "string" Driver: type: "string" EnableIPv6: type: "boolean" IPAM: $ref: "#/definitions/IPAM" Internal: type: "boolean" Attachable: type: "boolean" Ingress: type: "boolean" Containers: type: "object" additionalProperties: $ref: "#/definitions/NetworkContainer" Options: type: "object" additionalProperties: type: "string" Labels: type: "object" additionalProperties: type: "string" example: Name: "net01" Id: "7d86d31b1478e7cca9ebed7e73aa0fdeec46c5ca29497431d3007d2d9e15ed99" Created: "2016-10-19T04:33:30.360899459Z" Scope: "local" Driver: "bridge" EnableIPv6: false IPAM: Driver: "default" Config: - Subnet: "172.19.0.0/16" Gateway: "172.19.0.1" Options: foo: "bar" Internal: false Attachable: false Ingress: false Containers: 19a4d5d687db25203351ed79d478946f861258f018fe384f229f2efa4b23513c: Name: "test" EndpointID: "628cadb8bcb92de107b2a1e516cbffe463e321f548feb37697cce00ad694f21a" MacAddress: "02:42:ac:13:00:02" IPv4Address: "172.19.0.2/16" IPv6Address: "" Options: com.docker.network.bridge.default_bridge: "true" com.docker.network.bridge.enable_icc: "true" com.docker.network.bridge.enable_ip_masquerade: "true" com.docker.network.bridge.host_binding_ipv4: "0.0.0.0" com.docker.network.bridge.name: "docker0" com.docker.network.driver.mtu: "1500" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" IPAM: type: "object" properties: Driver: description: "Name of the IPAM driver to use." type: "string" default: "default" Config: description: | List of IPAM configuration options, specified as a map: ``` {"Subnet": <CIDR>, "IPRange": <CIDR>, "Gateway": <IP address>, "AuxAddress": <device_name:IP address>} ``` type: "array" items: type: "object" additionalProperties: type: "string" Options: description: "Driver-specific options, specified as a map." 
type: "object" additionalProperties: type: "string" NetworkContainer: type: "object" properties: Name: type: "string" EndpointID: type: "string" MacAddress: type: "string" IPv4Address: type: "string" IPv6Address: type: "string" BuildInfo: type: "object" properties: id: type: "string" stream: type: "string" error: type: "string" errorDetail: $ref: "#/definitions/ErrorDetail" status: type: "string" progress: type: "string" progressDetail: $ref: "#/definitions/ProgressDetail" aux: $ref: "#/definitions/ImageID" BuildCache: type: "object" properties: ID: type: "string" Parent: type: "string" Type: type: "string" Description: type: "string" InUse: type: "boolean" Shared: type: "boolean" Size: description: | Amount of disk space used by the build cache (in bytes). type: "integer" CreatedAt: description: | Date and time at which the build cache was created in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2016-08-18T10:44:24.496525531Z" LastUsedAt: description: | Date and time at which the build cache was last used in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" x-nullable: true example: "2017-08-09T07:09:37.632105588Z" UsageCount: type: "integer" ImageID: type: "object" description: "Image ID or Digest" properties: ID: type: "string" example: ID: "sha256:85f05633ddc1c50679be2b16a0479ab6f7637f8884e0cfe0f4d20e1ebb3d6e7c" CreateImageInfo: type: "object" properties: id: type: "string" error: type: "string" status: type: "string" progress: type: "string" progressDetail: $ref: "#/definitions/ProgressDetail" PushImageInfo: type: "object" properties: error: type: "string" status: type: "string" progress: type: "string" progressDetail: $ref: "#/definitions/ProgressDetail" ErrorDetail: type: "object" properties: code: type: "integer" message: type: "string" ProgressDetail: type: "object" properties: current: type: "integer" total: type: "integer" ErrorResponse: description: "Represents an error." type: "object" required: ["message"] properties: message: description: "The error message." type: "string" x-nullable: false example: message: "Something went wrong." IdResponse: description: "Response to an API call that returns just an Id" type: "object" required: ["Id"] properties: Id: description: "The id of the newly created object." type: "string" x-nullable: false EndpointSettings: description: "Configuration for a network endpoint." type: "object" properties: # Configurations IPAMConfig: $ref: "#/definitions/EndpointIPAMConfig" Links: type: "array" items: type: "string" example: - "container_1" - "container_2" Aliases: type: "array" items: type: "string" example: - "server_x" - "server_y" # Operational data NetworkID: description: | Unique ID of the network. type: "string" example: "08754567f1f40222263eab4102e1c733ae697e8e354aa9cd6e18d7402835292a" EndpointID: description: | Unique ID for the service endpoint in a Sandbox. type: "string" example: "b88f5b905aabf2893f3cbc4ee42d1ea7980bbc0a92e2c8922b1e1795298afb0b" Gateway: description: | Gateway address for this network. type: "string" example: "172.17.0.1" IPAddress: description: | IPv4 address. type: "string" example: "172.17.0.4" IPPrefixLen: description: | Mask length of the IPv4 address. type: "integer" example: 16 IPv6Gateway: description: | IPv6 gateway address. type: "string" example: "2001:db8:2::100" GlobalIPv6Address: description: | Global IPv6 address. 
type: "string" example: "2001:db8::5689" GlobalIPv6PrefixLen: description: | Mask length of the global IPv6 address. type: "integer" format: "int64" example: 64 MacAddress: description: | MAC address for the endpoint on this network. type: "string" example: "02:42:ac:11:00:04" DriverOpts: description: | DriverOpts is a mapping of driver options and values. These options are passed directly to the driver and are driver specific. type: "object" x-nullable: true additionalProperties: type: "string" example: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" EndpointIPAMConfig: description: | EndpointIPAMConfig represents an endpoint's IPAM configuration. type: "object" x-nullable: true properties: IPv4Address: type: "string" example: "172.20.30.33" IPv6Address: type: "string" example: "2001:db8:abcd::3033" LinkLocalIPs: type: "array" items: type: "string" example: - "169.254.34.68" - "fe80::3468" PluginMount: type: "object" x-nullable: false required: [Name, Description, Settable, Source, Destination, Type, Options] properties: Name: type: "string" x-nullable: false example: "some-mount" Description: type: "string" x-nullable: false example: "This is a mount that's used by the plugin." Settable: type: "array" items: type: "string" Source: type: "string" example: "/var/lib/docker/plugins/" Destination: type: "string" x-nullable: false example: "/mnt/state" Type: type: "string" x-nullable: false example: "bind" Options: type: "array" items: type: "string" example: - "rbind" - "rw" PluginDevice: type: "object" required: [Name, Description, Settable, Path] x-nullable: false properties: Name: type: "string" x-nullable: false Description: type: "string" x-nullable: false Settable: type: "array" items: type: "string" Path: type: "string" example: "/dev/fuse" PluginEnv: type: "object" x-nullable: false required: [Name, Description, Settable, Value] properties: Name: x-nullable: false type: "string" Description: x-nullable: false type: "string" Settable: type: "array" items: type: "string" Value: type: "string" PluginInterfaceType: type: "object" x-nullable: false required: [Prefix, Capability, Version] properties: Prefix: type: "string" x-nullable: false Capability: type: "string" x-nullable: false Version: type: "string" x-nullable: false Plugin: description: "A plugin for the Engine API" type: "object" required: [Settings, Enabled, Config, Name] properties: Id: type: "string" example: "5724e2c8652da337ab2eedd19fc6fc0ec908e4bd907c7421bf6a8dfc70c4c078" Name: type: "string" x-nullable: false example: "tiborvass/sample-volume-plugin" Enabled: description: True if the plugin is running. False if the plugin is not running, only installed. type: "boolean" x-nullable: false example: true Settings: description: "Settings that can be modified by users." type: "object" x-nullable: false required: [Args, Devices, Env, Mounts] properties: Mounts: type: "array" items: $ref: "#/definitions/PluginMount" Env: type: "array" items: type: "string" example: - "DEBUG=0" Args: type: "array" items: type: "string" Devices: type: "array" items: $ref: "#/definitions/PluginDevice" PluginReference: description: "plugin remote reference used to push/pull the plugin" type: "string" x-nullable: false example: "localhost:5000/tiborvass/sample-volume-plugin:latest" Config: description: "The config of a plugin." 
type: "object" x-nullable: false required: - Description - Documentation - Interface - Entrypoint - WorkDir - Network - Linux - PidHost - PropagatedMount - IpcHost - Mounts - Env - Args properties: DockerVersion: description: "Docker Version used to create the plugin" type: "string" x-nullable: false example: "17.06.0-ce" Description: type: "string" x-nullable: false example: "A sample volume plugin for Docker" Documentation: type: "string" x-nullable: false example: "https://docs.docker.com/engine/extend/plugins/" Interface: description: "The interface between Docker and the plugin" x-nullable: false type: "object" required: [Types, Socket] properties: Types: type: "array" items: $ref: "#/definitions/PluginInterfaceType" example: - "docker.volumedriver/1.0" Socket: type: "string" x-nullable: false example: "plugins.sock" ProtocolScheme: type: "string" example: "some.protocol/v1.0" description: "Protocol to use for clients connecting to the plugin." enum: - "" - "moby.plugins.http/v1" Entrypoint: type: "array" items: type: "string" example: - "/usr/bin/sample-volume-plugin" - "/data" WorkDir: type: "string" x-nullable: false example: "/bin/" User: type: "object" x-nullable: false properties: UID: type: "integer" format: "uint32" example: 1000 GID: type: "integer" format: "uint32" example: 1000 Network: type: "object" x-nullable: false required: [Type] properties: Type: x-nullable: false type: "string" example: "host" Linux: type: "object" x-nullable: false required: [Capabilities, AllowAllDevices, Devices] properties: Capabilities: type: "array" items: type: "string" example: - "CAP_SYS_ADMIN" - "CAP_SYSLOG" AllowAllDevices: type: "boolean" x-nullable: false example: false Devices: type: "array" items: $ref: "#/definitions/PluginDevice" PropagatedMount: type: "string" x-nullable: false example: "/mnt/volumes" IpcHost: type: "boolean" x-nullable: false example: false PidHost: type: "boolean" x-nullable: false example: false Mounts: type: "array" items: $ref: "#/definitions/PluginMount" Env: type: "array" items: $ref: "#/definitions/PluginEnv" example: - Name: "DEBUG" Description: "If set, prints debug messages" Settable: null Value: "0" Args: type: "object" x-nullable: false required: [Name, Description, Settable, Value] properties: Name: x-nullable: false type: "string" example: "args" Description: x-nullable: false type: "string" example: "command line arguments" Settable: type: "array" items: type: "string" Value: type: "array" items: type: "string" rootfs: type: "object" properties: type: type: "string" example: "layers" diff_ids: type: "array" items: type: "string" example: - "sha256:675532206fbf3030b8458f88d6e26d4eb1577688a25efec97154c94e8b6b4887" - "sha256:e216a057b1cb1efc11f8a268f37ef62083e70b1b38323ba252e25ac88904a7e8" ObjectVersion: description: | The version number of the object such as node, service, etc. This is needed to avoid conflicting writes. The client must send the version number along with the modified specification when updating these objects. This approach ensures safe concurrency and determinism in that the change on the object may not be applied if the version number has changed from the last read. In other words, if two update requests specify the same base version, only one of the requests can succeed. As a result, two separate update requests that happen at the same time will not unintentionally overwrite each other. 
type: "object" properties: Index: type: "integer" format: "uint64" example: 373531 NodeSpec: type: "object" properties: Name: description: "Name for the node." type: "string" example: "my-node" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" Role: description: "Role of the node." type: "string" enum: - "worker" - "manager" example: "manager" Availability: description: "Availability of the node." type: "string" enum: - "active" - "pause" - "drain" example: "active" example: Availability: "active" Name: "node-name" Role: "manager" Labels: foo: "bar" Node: type: "object" properties: ID: type: "string" example: "24ifsmvkjbyhk" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: description: | Date and time at which the node was added to the swarm in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2016-08-18T10:44:24.496525531Z" UpdatedAt: description: | Date and time at which the node was last updated in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2017-08-09T07:09:37.632105588Z" Spec: $ref: "#/definitions/NodeSpec" Description: $ref: "#/definitions/NodeDescription" Status: $ref: "#/definitions/NodeStatus" ManagerStatus: $ref: "#/definitions/ManagerStatus" NodeDescription: description: | NodeDescription encapsulates the properties of the Node as reported by the agent. type: "object" properties: Hostname: type: "string" example: "bf3067039e47" Platform: $ref: "#/definitions/Platform" Resources: $ref: "#/definitions/ResourceObject" Engine: $ref: "#/definitions/EngineDescription" TLSInfo: $ref: "#/definitions/TLSInfo" Platform: description: | Platform represents the platform (Arch/OS). type: "object" properties: Architecture: description: | Architecture represents the hardware architecture (for example, `x86_64`). type: "string" example: "x86_64" OS: description: | OS represents the Operating System (for example, `linux` or `windows`). type: "string" example: "linux" EngineDescription: description: "EngineDescription provides information about an engine." type: "object" properties: EngineVersion: type: "string" example: "17.06.0" Labels: type: "object" additionalProperties: type: "string" example: foo: "bar" Plugins: type: "array" items: type: "object" properties: Type: type: "string" Name: type: "string" example: - Type: "Log" Name: "awslogs" - Type: "Log" Name: "fluentd" - Type: "Log" Name: "gcplogs" - Type: "Log" Name: "gelf" - Type: "Log" Name: "journald" - Type: "Log" Name: "json-file" - Type: "Log" Name: "logentries" - Type: "Log" Name: "splunk" - Type: "Log" Name: "syslog" - Type: "Network" Name: "bridge" - Type: "Network" Name: "host" - Type: "Network" Name: "ipvlan" - Type: "Network" Name: "macvlan" - Type: "Network" Name: "null" - Type: "Network" Name: "overlay" - Type: "Volume" Name: "local" - Type: "Volume" Name: "localhost:5000/vieux/sshfs:latest" - Type: "Volume" Name: "vieux/sshfs:latest" TLSInfo: description: | Information about the issuer of leaf TLS certificates and the trusted root CA certificate. type: "object" properties: TrustRoot: description: | The root CA certificate(s) that are used to validate leaf TLS certificates. type: "string" CertIssuerSubject: description: The base64-url-safe-encoded raw subject bytes of the issuer. type: "string" CertIssuerPublicKey: description: | The base64-url-safe-encoded raw public key bytes of the issuer. 
type: "string" example: TrustRoot: | -----BEGIN CERTIFICATE----- MIIBajCCARCgAwIBAgIUbYqrLSOSQHoxD8CwG6Bi2PJi9c8wCgYIKoZIzj0EAwIw EzERMA8GA1UEAxMIc3dhcm0tY2EwHhcNMTcwNDI0MjE0MzAwWhcNMzcwNDE5MjE0 MzAwWjATMREwDwYDVQQDEwhzd2FybS1jYTBZMBMGByqGSM49AgEGCCqGSM49AwEH A0IABJk/VyMPYdaqDXJb/VXh5n/1Yuv7iNrxV3Qb3l06XD46seovcDWs3IZNV1lf 3Skyr0ofcchipoiHkXBODojJydSjQjBAMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMB Af8EBTADAQH/MB0GA1UdDgQWBBRUXxuRcnFjDfR/RIAUQab8ZV/n4jAKBggqhkjO PQQDAgNIADBFAiAy+JTe6Uc3KyLCMiqGl2GyWGQqQDEcO3/YG36x7om65AIhAJvz pxv6zFeVEkAEEkqIYi0omA9+CjanB/6Bz4n1uw8H -----END CERTIFICATE----- CertIssuerSubject: "MBMxETAPBgNVBAMTCHN3YXJtLWNh" CertIssuerPublicKey: "MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEmT9XIw9h1qoNclv9VeHmf/Vi6/uI2vFXdBveXTpcPjqx6i9wNazchk1XWV/dKTKvSh9xyGKmiIeRcE4OiMnJ1A==" NodeStatus: description: | NodeStatus represents the status of a node. It provides the current status of the node, as seen by the manager. type: "object" properties: State: $ref: "#/definitions/NodeState" Message: type: "string" example: "" Addr: description: "IP address of the node." type: "string" example: "172.17.0.2" NodeState: description: "NodeState represents the state of a node." type: "string" enum: - "unknown" - "down" - "ready" - "disconnected" example: "ready" ManagerStatus: description: | ManagerStatus represents the status of a manager. It provides the current status of a node's manager component, if the node is a manager. x-nullable: true type: "object" properties: Leader: type: "boolean" default: false example: true Reachability: $ref: "#/definitions/Reachability" Addr: description: | The IP address and port at which the manager is reachable. type: "string" example: "10.0.0.46:2377" Reachability: description: "Reachability represents the reachability of a node." type: "string" enum: - "unknown" - "unreachable" - "reachable" example: "reachable" SwarmSpec: description: "User modifiable swarm configuration." type: "object" properties: Name: description: "Name of the swarm." type: "string" example: "default" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" example: com.example.corp.type: "production" com.example.corp.department: "engineering" Orchestration: description: "Orchestration configuration." type: "object" x-nullable: true properties: TaskHistoryRetentionLimit: description: | The number of historic tasks to keep per instance or node. If negative, never remove completed or failed tasks. type: "integer" format: "int64" example: 10 Raft: description: "Raft configuration." type: "object" properties: SnapshotInterval: description: "The number of log entries between snapshots." type: "integer" format: "uint64" example: 10000 KeepOldSnapshots: description: | The number of snapshots to keep beyond the current snapshot. type: "integer" format: "uint64" LogEntriesForSlowFollowers: description: | The number of log entries to keep around to sync up slow followers after a snapshot is created. type: "integer" format: "uint64" example: 500 ElectionTick: description: | The number of ticks that a follower will wait for a message from the leader before becoming a candidate and starting an election. `ElectionTick` must be greater than `HeartbeatTick`. A tick currently defaults to one second, so these translate directly to seconds currently, but this is NOT guaranteed. type: "integer" example: 3 HeartbeatTick: description: | The number of ticks between heartbeats. Every HeartbeatTick ticks, the leader will send a heartbeat to the followers. 
A tick currently defaults to one second, so these translate directly to seconds currently, but this is NOT guaranteed. type: "integer" example: 1 Dispatcher: description: "Dispatcher configuration." type: "object" x-nullable: true properties: HeartbeatPeriod: description: | The delay for an agent to send a heartbeat to the dispatcher. type: "integer" format: "int64" example: 5000000000 CAConfig: description: "CA configuration." type: "object" x-nullable: true properties: NodeCertExpiry: description: "The duration node certificates are issued for." type: "integer" format: "int64" example: 7776000000000000 ExternalCAs: description: | Configuration for forwarding signing requests to an external certificate authority. type: "array" items: type: "object" properties: Protocol: description: | Protocol for communication with the external CA (currently only `cfssl` is supported). type: "string" enum: - "cfssl" default: "cfssl" URL: description: | URL where certificate signing requests should be sent. type: "string" Options: description: | An object with key/value pairs that are interpreted as protocol-specific options for the external CA driver. type: "object" additionalProperties: type: "string" CACert: description: | The root CA certificate (in PEM format) this external CA uses to issue TLS certificates (assumed to be to the current swarm root CA certificate if not provided). type: "string" SigningCACert: description: | The desired signing CA certificate for all swarm node TLS leaf certificates, in PEM format. type: "string" SigningCAKey: description: | The desired signing CA key for all swarm node TLS leaf certificates, in PEM format. type: "string" ForceRotate: description: | An integer whose purpose is to force swarm to generate a new signing CA certificate and key, if none have been specified in `SigningCACert` and `SigningCAKey` format: "uint64" type: "integer" EncryptionConfig: description: "Parameters related to encryption-at-rest." type: "object" properties: AutoLockManagers: description: | If set, generate a key and use it to lock data stored on the managers. type: "boolean" example: false TaskDefaults: description: "Defaults for creating tasks in this cluster." type: "object" properties: LogDriver: description: | The log driver to use for tasks created in the orchestrator if unspecified by a service. Updating this value only affects new tasks. Existing tasks continue to use their previously configured log driver until recreated. type: "object" properties: Name: description: | The log driver to use as a default for new tasks. type: "string" example: "json-file" Options: description: | Driver-specific options for the selectd log driver, specified as key/value pairs. type: "object" additionalProperties: type: "string" example: "max-file": "10" "max-size": "100m" # The Swarm information for `GET /info`. It is the same as `GET /swarm`, but # without `JoinTokens`. ClusterInfo: description: | ClusterInfo represents information about the swarm as is returned by the "/info" endpoint. Join-tokens are not included. x-nullable: true type: "object" properties: ID: description: "The ID of the swarm." type: "string" example: "abajmipo7b4xz5ip2nrla6b11" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: description: | Date and time at which the swarm was initialised in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. 
type: "string" format: "dateTime" example: "2016-08-18T10:44:24.496525531Z" UpdatedAt: description: | Date and time at which the swarm was last updated in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2017-08-09T07:09:37.632105588Z" Spec: $ref: "#/definitions/SwarmSpec" TLSInfo: $ref: "#/definitions/TLSInfo" RootRotationInProgress: description: | Whether there is currently a root CA rotation in progress for the swarm type: "boolean" example: false DataPathPort: description: | DataPathPort specifies the data path port number for data traffic. Acceptable port range is 1024 to 49151. If no port is set or is set to 0, the default port (4789) is used. type: "integer" format: "uint32" default: 4789 example: 4789 DefaultAddrPool: description: | Default Address Pool specifies default subnet pools for global scope networks. type: "array" items: type: "string" format: "CIDR" example: ["10.10.0.0/16", "20.20.0.0/16"] SubnetSize: description: | SubnetSize specifies the subnet size of the networks created from the default subnet pool. type: "integer" format: "uint32" maximum: 29 default: 24 example: 24 JoinTokens: description: | JoinTokens contains the tokens workers and managers need to join the swarm. type: "object" properties: Worker: description: | The token workers can use to join the swarm. type: "string" example: "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-1awxwuwd3z9j1z3puu7rcgdbx" Manager: description: | The token managers can use to join the swarm. type: "string" example: "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-7p73s1dx5in4tatdymyhg9hu2" Swarm: type: "object" allOf: - $ref: "#/definitions/ClusterInfo" - type: "object" properties: JoinTokens: $ref: "#/definitions/JoinTokens" TaskSpec: description: "User modifiable task configuration." type: "object" properties: PluginSpec: type: "object" description: | Plugin spec for the service. *(Experimental release only.)* <p><br /></p> > **Note**: ContainerSpec, NetworkAttachmentSpec, and PluginSpec are > mutually exclusive. PluginSpec is only used when the Runtime field > is set to `plugin`. NetworkAttachmentSpec is used when the Runtime > field is set to `attachment`. properties: Name: description: "The name or 'alias' to use for the plugin." type: "string" Remote: description: "The plugin image reference to use." type: "string" Disabled: description: "Disable the plugin once scheduled." type: "boolean" PluginPrivilege: type: "array" items: description: | Describes a permission accepted by the user upon installing the plugin. type: "object" properties: Name: type: "string" Description: type: "string" Value: type: "array" items: type: "string" ContainerSpec: type: "object" description: | Container spec for the service. <p><br /></p> > **Note**: ContainerSpec, NetworkAttachmentSpec, and PluginSpec are > mutually exclusive. PluginSpec is only used when the Runtime field > is set to `plugin`. NetworkAttachmentSpec is used when the Runtime > field is set to `attachment`. properties: Image: description: "The image name to use for the container" type: "string" Labels: description: "User-defined key/value data." type: "object" additionalProperties: type: "string" Command: description: "The command to be run in the image." type: "array" items: type: "string" Args: description: "Arguments to the command." 
type: "array" items: type: "string" Hostname: description: | The hostname to use for the container, as a valid [RFC 1123](https://tools.ietf.org/html/rfc1123) hostname. type: "string" Env: description: | A list of environment variables in the form `VAR=value`. type: "array" items: type: "string" Dir: description: "The working directory for commands to run in." type: "string" User: description: "The user inside the container." type: "string" Groups: type: "array" description: | A list of additional groups that the container process will run as. items: type: "string" Privileges: type: "object" description: "Security options for the container" properties: CredentialSpec: type: "object" description: "CredentialSpec for managed service account (Windows only)" properties: Config: type: "string" example: "0bt9dmxjvjiqermk6xrop3ekq" description: | Load credential spec from a Swarm Config with the given ID. The specified config must also be present in the Configs field with the Runtime property set. <p><br /></p> > **Note**: `CredentialSpec.File`, `CredentialSpec.Registry`, > and `CredentialSpec.Config` are mutually exclusive. File: type: "string" example: "spec.json" description: | Load credential spec from this file. The file is read by the daemon, and must be present in the `CredentialSpecs` subdirectory in the docker data directory, which defaults to `C:\ProgramData\Docker\` on Windows. For example, specifying `spec.json` loads `C:\ProgramData\Docker\CredentialSpecs\spec.json`. <p><br /></p> > **Note**: `CredentialSpec.File`, `CredentialSpec.Registry`, > and `CredentialSpec.Config` are mutually exclusive. Registry: type: "string" description: | Load credential spec from this value in the Windows registry. The specified registry value must be located in: `HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization\Containers\CredentialSpecs` <p><br /></p> > **Note**: `CredentialSpec.File`, `CredentialSpec.Registry`, > and `CredentialSpec.Config` are mutually exclusive. SELinuxContext: type: "object" description: "SELinux labels of the container" properties: Disable: type: "boolean" description: "Disable SELinux" User: type: "string" description: "SELinux user label" Role: type: "string" description: "SELinux role label" Type: type: "string" description: "SELinux type label" Level: type: "string" description: "SELinux level label" TTY: description: "Whether a pseudo-TTY should be allocated." type: "boolean" OpenStdin: description: "Open `stdin`" type: "boolean" ReadOnly: description: "Mount the container's root filesystem as read only." type: "boolean" Mounts: description: | Specification for mounts to be added to containers created as part of the service. type: "array" items: $ref: "#/definitions/Mount" StopSignal: description: "Signal to stop the container." type: "string" StopGracePeriod: description: | Amount of time to wait for the container to terminate before forcefully killing it. type: "integer" format: "int64" HealthCheck: $ref: "#/definitions/HealthConfig" Hosts: type: "array" description: | A list of hostname/IP mappings to add to the container's `hosts` file. The format of extra hosts is specified in the [hosts(5)](http://man7.org/linux/man-pages/man5/hosts.5.html) man page: IP_address canonical_hostname [aliases...] items: type: "string" DNSConfig: description: | Specification for DNS related configurations in resolver configuration file (`resolv.conf`). type: "object" properties: Nameservers: description: "The IP addresses of the name servers." 
type: "array" items: type: "string" Search: description: "A search list for host-name lookup." type: "array" items: type: "string" Options: description: | A list of internal resolver variables to be modified (e.g., `debug`, `ndots:3`, etc.). type: "array" items: type: "string" Secrets: description: | Secrets contains references to zero or more secrets that will be exposed to the service. type: "array" items: type: "object" properties: File: description: | File represents a specific target that is backed by a file. type: "object" properties: Name: description: | Name represents the final filename in the filesystem. type: "string" UID: description: "UID represents the file UID." type: "string" GID: description: "GID represents the file GID." type: "string" Mode: description: "Mode represents the FileMode of the file." type: "integer" format: "uint32" SecretID: description: | SecretID represents the ID of the specific secret that we're referencing. type: "string" SecretName: description: | SecretName is the name of the secret that this references, but this is just provided for lookup/display purposes. The secret in the reference will be identified by its ID. type: "string" Configs: description: | Configs contains references to zero or more configs that will be exposed to the service. type: "array" items: type: "object" properties: File: description: | File represents a specific target that is backed by a file. <p><br /><p> > **Note**: `Configs.File` and `Configs.Runtime` are mutually exclusive type: "object" properties: Name: description: | Name represents the final filename in the filesystem. type: "string" UID: description: "UID represents the file UID." type: "string" GID: description: "GID represents the file GID." type: "string" Mode: description: "Mode represents the FileMode of the file." type: "integer" format: "uint32" Runtime: description: | Runtime represents a target that is not mounted into the container but is used by the task <p><br /><p> > **Note**: `Configs.File` and `Configs.Runtime` are mutually > exclusive type: "object" ConfigID: description: | ConfigID represents the ID of the specific config that we're referencing. type: "string" ConfigName: description: | ConfigName is the name of the config that this references, but this is just provided for lookup/display purposes. The config in the reference will be identified by its ID. type: "string" Isolation: type: "string" description: | Isolation technology of the containers running the service. (Windows only) enum: - "default" - "process" - "hyperv" Init: description: | Run an init inside the container that forwards signals and reaps processes. This field is omitted if empty, and the default (as configured on the daemon) is used. type: "boolean" x-nullable: true Sysctls: description: | Set kernel namedspaced parameters (sysctls) in the container. The Sysctls option on services accepts the same sysctls as the are supported on containers. Note that while the same sysctls are supported, no guarantees or checks are made about their suitability for a clustered environment, and it's up to the user to determine whether a given sysctl will work properly in a Service. type: "object" additionalProperties: type: "string" # This option is not used by Windows containers CapabilityAdd: type: "array" description: | A list of kernel capabilities to add to the default set for the container. 
items: type: "string" example: - "CAP_NET_RAW" - "CAP_SYS_ADMIN" - "CAP_SYS_CHROOT" - "CAP_SYSLOG" CapabilityDrop: type: "array" description: | A list of kernel capabilities to drop from the default set for the container. items: type: "string" example: - "CAP_NET_RAW" Ulimits: description: | A list of resource limits to set in the container. For example: `{"Name": "nofile", "Soft": 1024, "Hard": 2048}`" type: "array" items: type: "object" properties: Name: description: "Name of ulimit" type: "string" Soft: description: "Soft limit" type: "integer" Hard: description: "Hard limit" type: "integer" NetworkAttachmentSpec: description: | Read-only spec type for non-swarm containers attached to swarm overlay networks. <p><br /></p> > **Note**: ContainerSpec, NetworkAttachmentSpec, and PluginSpec are > mutually exclusive. PluginSpec is only used when the Runtime field > is set to `plugin`. NetworkAttachmentSpec is used when the Runtime > field is set to `attachment`. type: "object" properties: ContainerID: description: "ID of the container represented by this task" type: "string" Resources: description: | Resource requirements which apply to each individual container created as part of the service. type: "object" properties: Limits: description: "Define resources limits." $ref: "#/definitions/Limit" Reservation: description: "Define resources reservation." $ref: "#/definitions/ResourceObject" RestartPolicy: description: | Specification for the restart policy which applies to containers created as part of this service. type: "object" properties: Condition: description: "Condition for restart." type: "string" enum: - "none" - "on-failure" - "any" Delay: description: "Delay between restart attempts." type: "integer" format: "int64" MaxAttempts: description: | Maximum attempts to restart a given container before giving up (default value is 0, which is ignored). type: "integer" format: "int64" default: 0 Window: description: | Windows is the time window used to evaluate the restart policy (default value is 0, which is unbounded). type: "integer" format: "int64" default: 0 Placement: type: "object" properties: Constraints: description: | An array of constraint expressions to limit the set of nodes where a task can be scheduled. Constraint expressions can either use a _match_ (`==`) or _exclude_ (`!=`) rule. Multiple constraints find nodes that satisfy every expression (AND match). Constraints can match node or Docker Engine labels as follows: node attribute | matches | example ---------------------|--------------------------------|----------------------------------------------- `node.id` | Node ID | `node.id==2ivku8v2gvtg4` `node.hostname` | Node hostname | `node.hostname!=node-2` `node.role` | Node role (`manager`/`worker`) | `node.role==manager` `node.platform.os` | Node operating system | `node.platform.os==windows` `node.platform.arch` | Node architecture | `node.platform.arch==x86_64` `node.labels` | User-defined node labels | `node.labels.security==high` `engine.labels` | Docker Engine's labels | `engine.labels.operatingsystem==ubuntu-14.04` `engine.labels` apply to Docker Engine labels like operating system, drivers, etc. Swarm administrators add `node.labels` for operational purposes by using the [`node update endpoint`](#operation/NodeUpdate). 
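# A minimal sketch (illustrative only, not part of the schema) of how such
# constraint expressions might be supplied in a `POST /services/create`
# request body; the label names and values below are hypothetical:
#
#   "TaskTemplate": {
#     "Placement": {
#       "Constraints": [
#         "node.role==worker",
#         "node.labels.region==us-east"
#       ]
#     }
#   }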
type: "array" items: type: "string" example: - "node.hostname!=node3.corp.example.com" - "node.role!=manager" - "node.labels.type==production" - "node.platform.os==linux" - "node.platform.arch==x86_64" Preferences: description: | Preferences provide a way to make the scheduler aware of factors such as topology. They are provided in order from highest to lowest precedence. type: "array" items: type: "object" properties: Spread: type: "object" properties: SpreadDescriptor: description: | label descriptor, such as `engine.labels.az`. type: "string" example: - Spread: SpreadDescriptor: "node.labels.datacenter" - Spread: SpreadDescriptor: "node.labels.rack" MaxReplicas: description: | Maximum number of replicas for per node (default value is 0, which is unlimited) type: "integer" format: "int64" default: 0 Platforms: description: | Platforms stores all the platforms that the service's image can run on. This field is used in the platform filter for scheduling. If empty, then the platform filter is off, meaning there are no scheduling restrictions. type: "array" items: $ref: "#/definitions/Platform" ForceUpdate: description: | A counter that triggers an update even if no relevant parameters have been changed. type: "integer" Runtime: description: | Runtime is the type of runtime specified for the task executor. type: "string" Networks: description: "Specifies which networks the service should attach to." type: "array" items: $ref: "#/definitions/NetworkAttachmentConfig" LogDriver: description: | Specifies the log driver to use for tasks created from this spec. If not present, the default one for the swarm will be used, finally falling back to the engine default if not specified. type: "object" properties: Name: type: "string" Options: type: "object" additionalProperties: type: "string" TaskState: type: "string" enum: - "new" - "allocated" - "pending" - "assigned" - "accepted" - "preparing" - "ready" - "starting" - "running" - "complete" - "shutdown" - "failed" - "rejected" - "remove" - "orphaned" Task: type: "object" properties: ID: description: "The ID of the task." type: "string" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" UpdatedAt: type: "string" format: "dateTime" Name: description: "Name of the task." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" Spec: $ref: "#/definitions/TaskSpec" ServiceID: description: "The ID of the service this task is part of." type: "string" Slot: type: "integer" NodeID: description: "The ID of the node that this task is on." type: "string" AssignedGenericResources: $ref: "#/definitions/GenericResources" Status: type: "object" properties: Timestamp: type: "string" format: "dateTime" State: $ref: "#/definitions/TaskState" Message: type: "string" Err: type: "string" ContainerStatus: type: "object" properties: ContainerID: type: "string" PID: type: "integer" ExitCode: type: "integer" DesiredState: $ref: "#/definitions/TaskState" JobIteration: description: | If the Service this Task belongs to is a job-mode service, contains the JobIteration of the Service this Task was created for. Absent if the Task was created for a Replicated or Global Service. 
$ref: "#/definitions/ObjectVersion" example: ID: "0kzzo1i0y4jz6027t0k7aezc7" Version: Index: 71 CreatedAt: "2016-06-07T21:07:31.171892745Z" UpdatedAt: "2016-06-07T21:07:31.376370513Z" Spec: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ServiceID: "9mnpnzenvg8p8tdbtq4wvbkcz" Slot: 1 NodeID: "60gvrl6tm78dmak4yl7srz94v" Status: Timestamp: "2016-06-07T21:07:31.290032978Z" State: "running" Message: "started" ContainerStatus: ContainerID: "e5d62702a1b48d01c3e02ca1e0212a250801fa8d67caca0b6f35919ebc12f035" PID: 677 DesiredState: "running" NetworksAttachments: - Network: ID: "4qvuz4ko70xaltuqbt8956gd1" Version: Index: 18 CreatedAt: "2016-06-07T20:31:11.912919752Z" UpdatedAt: "2016-06-07T21:07:29.955277358Z" Spec: Name: "ingress" Labels: com.docker.swarm.internal: "true" DriverConfiguration: {} IPAMOptions: Driver: {} Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" DriverState: Name: "overlay" Options: com.docker.network.driver.overlay.vxlanid_list: "256" IPAMOptions: Driver: Name: "default" Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" Addresses: - "10.255.0.10/16" AssignedGenericResources: - DiscreteResourceSpec: Kind: "SSD" Value: 3 - NamedResourceSpec: Kind: "GPU" Value: "UUID1" - NamedResourceSpec: Kind: "GPU" Value: "UUID2" ServiceSpec: description: "User modifiable configuration for a service." properties: Name: description: "Name of the service." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" TaskTemplate: $ref: "#/definitions/TaskSpec" Mode: description: "Scheduling mode for the service." type: "object" properties: Replicated: type: "object" properties: Replicas: type: "integer" format: "int64" Global: type: "object" ReplicatedJob: description: | The mode used for services with a finite number of tasks that run to a completed state. type: "object" properties: MaxConcurrent: description: | The maximum number of replicas to run simultaneously. type: "integer" format: "int64" default: 1 TotalCompletions: description: | The total number of replicas desired to reach the Completed state. If unset, will default to the value of `MaxConcurrent` type: "integer" format: "int64" GlobalJob: description: | The mode used for services which run a task to the completed state on each valid node. type: "object" UpdateConfig: description: "Specification for the update strategy of the service." type: "object" properties: Parallelism: description: | Maximum number of tasks to be updated in one iteration (0 means unlimited parallelism). type: "integer" format: "int64" Delay: description: "Amount of time between updates, in nanoseconds." type: "integer" format: "int64" FailureAction: description: | Action to take if an updated task fails to run, or stops running during the update. type: "string" enum: - "continue" - "pause" - "rollback" Monitor: description: | Amount of time to monitor each updated task for failures, in nanoseconds. type: "integer" format: "int64" MaxFailureRatio: description: | The fraction of tasks that may fail during an update before the failure action is invoked, specified as a floating point number between 0 and 1. type: "number" default: 0 Order: description: | The order of operations when rolling out an updated task. Either the old task is shut down before the new task is started, or the new task is started before the old task is shut down. 
type: "string" enum: - "stop-first" - "start-first" RollbackConfig: description: "Specification for the rollback strategy of the service." type: "object" properties: Parallelism: description: | Maximum number of tasks to be rolled back in one iteration (0 means unlimited parallelism). type: "integer" format: "int64" Delay: description: | Amount of time between rollback iterations, in nanoseconds. type: "integer" format: "int64" FailureAction: description: | Action to take if an rolled back task fails to run, or stops running during the rollback. type: "string" enum: - "continue" - "pause" Monitor: description: | Amount of time to monitor each rolled back task for failures, in nanoseconds. type: "integer" format: "int64" MaxFailureRatio: description: | The fraction of tasks that may fail during a rollback before the failure action is invoked, specified as a floating point number between 0 and 1. type: "number" default: 0 Order: description: | The order of operations when rolling back a task. Either the old task is shut down before the new task is started, or the new task is started before the old task is shut down. type: "string" enum: - "stop-first" - "start-first" Networks: description: "Specifies which networks the service should attach to." type: "array" items: $ref: "#/definitions/NetworkAttachmentConfig" EndpointSpec: $ref: "#/definitions/EndpointSpec" EndpointPortConfig: type: "object" properties: Name: type: "string" Protocol: type: "string" enum: - "tcp" - "udp" - "sctp" TargetPort: description: "The port inside the container." type: "integer" PublishedPort: description: "The port on the swarm hosts." type: "integer" PublishMode: description: | The mode in which port is published. <p><br /></p> - "ingress" makes the target port accessible on every node, regardless of whether there is a task for the service running on that node or not. - "host" bypasses the routing mesh and publish the port directly on the swarm node where that service is running. type: "string" enum: - "ingress" - "host" default: "ingress" example: "ingress" EndpointSpec: description: "Properties that can be configured to access and load balance a service." type: "object" properties: Mode: description: | The mode of resolution to use for internal load balancing between tasks. type: "string" enum: - "vip" - "dnsrr" default: "vip" Ports: description: | List of exposed ports that this service is accessible on from the outside. Ports can only be provided if `vip` resolution mode is used. type: "array" items: $ref: "#/definitions/EndpointPortConfig" Service: type: "object" properties: ID: type: "string" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" UpdatedAt: type: "string" format: "dateTime" Spec: $ref: "#/definitions/ServiceSpec" Endpoint: type: "object" properties: Spec: $ref: "#/definitions/EndpointSpec" Ports: type: "array" items: $ref: "#/definitions/EndpointPortConfig" VirtualIPs: type: "array" items: type: "object" properties: NetworkID: type: "string" Addr: type: "string" UpdateStatus: description: "The status of a service update." type: "object" properties: State: type: "string" enum: - "updating" - "paused" - "completed" StartedAt: type: "string" format: "dateTime" CompletedAt: type: "string" format: "dateTime" Message: type: "string" ServiceStatus: description: | The status of the service's tasks. Provided only when requested as part of a ServiceList operation. 
type: "object" properties: RunningTasks: description: | The number of tasks for the service currently in the Running state. type: "integer" format: "uint64" example: 7 DesiredTasks: description: | The number of tasks for the service desired to be running. For replicated services, this is the replica count from the service spec. For global services, this is computed by taking count of all tasks for the service with a Desired State other than Shutdown. type: "integer" format: "uint64" example: 10 CompletedTasks: description: | The number of tasks for a job that are in the Completed state. This field must be cross-referenced with the service type, as the value of 0 may mean the service is not in a job mode, or it may mean the job-mode service has no tasks yet Completed. type: "integer" format: "uint64" JobStatus: description: | The status of the service when it is in one of ReplicatedJob or GlobalJob modes. Absent on Replicated and Global mode services. The JobIteration is an ObjectVersion, but unlike the Service's version, does not need to be sent with an update request. type: "object" properties: JobIteration: description: | JobIteration is a value increased each time a Job is executed, successfully or otherwise. "Executed", in this case, means the job as a whole has been started, not that an individual Task has been launched. A job is "Executed" when its ServiceSpec is updated. JobIteration can be used to disambiguate Tasks belonging to different executions of a job. Though JobIteration will increase with each subsequent execution, it may not necessarily increase by 1, and so JobIteration should not be used to $ref: "#/definitions/ObjectVersion" LastExecution: description: | The last time, as observed by the server, that this job was started. type: "string" format: "dateTime" example: ID: "9mnpnzenvg8p8tdbtq4wvbkcz" Version: Index: 19 CreatedAt: "2016-06-07T21:05:51.880065305Z" UpdatedAt: "2016-06-07T21:07:29.962229872Z" Spec: Name: "hopeful_cori" TaskTemplate: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ForceUpdate: 0 Mode: Replicated: Replicas: 1 UpdateConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 RollbackConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 EndpointSpec: Mode: "vip" Ports: - Protocol: "tcp" TargetPort: 6379 PublishedPort: 30001 Endpoint: Spec: Mode: "vip" Ports: - Protocol: "tcp" TargetPort: 6379 PublishedPort: 30001 Ports: - Protocol: "tcp" TargetPort: 6379 PublishedPort: 30001 VirtualIPs: - NetworkID: "4qvuz4ko70xaltuqbt8956gd1" Addr: "10.255.0.2/16" - NetworkID: "4qvuz4ko70xaltuqbt8956gd1" Addr: "10.255.0.3/16" ImageDeleteResponseItem: type: "object" properties: Untagged: description: "The image ID of an image that was untagged" type: "string" Deleted: description: "The image ID of an image that was deleted" type: "string" ServiceUpdateResponse: type: "object" properties: Warnings: description: "Optional warning messages" type: "array" items: type: "string" example: Warning: "unable to pin image doesnotexist:latest to digest: image library/doesnotexist:latest not found" ContainerSummary: type: "array" items: type: "object" properties: Id: description: "The ID of this container" type: "string" x-go-name: "ID" Names: description: "The names that this container has been given" type: "array" items: type: "string" Image: description: "The name of the image used when creating 
this container" type: "string" ImageID: description: "The ID of the image that this container was created from" type: "string" Command: description: "Command to run when starting the container" type: "string" Created: description: "When the container was created" type: "integer" format: "int64" Ports: description: "The ports exposed by this container" type: "array" items: $ref: "#/definitions/Port" SizeRw: description: "The size of files that have been created or changed by this container" type: "integer" format: "int64" SizeRootFs: description: "The total size of all the files in this container" type: "integer" format: "int64" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" State: description: "The state of this container (e.g. `Exited`)" type: "string" Status: description: "Additional human-readable status of this container (e.g. `Exit 0`)" type: "string" HostConfig: type: "object" properties: NetworkMode: type: "string" NetworkSettings: description: "A summary of the container's network settings" type: "object" properties: Networks: type: "object" additionalProperties: $ref: "#/definitions/EndpointSettings" Mounts: type: "array" items: $ref: "#/definitions/Mount" Driver: description: "Driver represents a driver (network, logging, secrets)." type: "object" required: [Name] properties: Name: description: "Name of the driver." type: "string" x-nullable: false example: "some-driver" Options: description: "Key/value map of driver-specific options." type: "object" x-nullable: false additionalProperties: type: "string" example: OptionA: "value for driver-specific option A" OptionB: "value for driver-specific option B" SecretSpec: type: "object" properties: Name: description: "User-defined name of the secret." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" example: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Data: description: | Base64-url-safe-encoded ([RFC 4648](https://tools.ietf.org/html/rfc4648#section-5)) data to store as secret. This field is only used to _create_ a secret, and is not returned by other endpoints. type: "string" example: "" Driver: description: | Name of the secrets driver used to fetch the secret's value from an external secret store. $ref: "#/definitions/Driver" Templating: description: | Templating driver, if applicable Templating controls whether and how to evaluate the config payload as a template. If no driver is set, no templating is used. $ref: "#/definitions/Driver" Secret: type: "object" properties: ID: type: "string" example: "blt1owaxmitz71s9v5zh81zun" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" example: "2017-07-20T13:55:28.678958722Z" UpdatedAt: type: "string" format: "dateTime" example: "2017-07-20T13:55:28.678958722Z" Spec: $ref: "#/definitions/SecretSpec" ConfigSpec: type: "object" properties: Name: description: "User-defined name of the config." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" Data: description: | Base64-url-safe-encoded ([RFC 4648](https://tools.ietf.org/html/rfc4648#section-5)) config data. type: "string" Templating: description: | Templating driver, if applicable Templating controls whether and how to evaluate the config payload as a template. If no driver is set, no templating is used. 
$ref: "#/definitions/Driver" Config: type: "object" properties: ID: type: "string" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" UpdatedAt: type: "string" format: "dateTime" Spec: $ref: "#/definitions/ConfigSpec" ContainerState: description: | ContainerState stores container's running state. It's part of ContainerJSONBase and will be returned by the "inspect" command. type: "object" properties: Status: description: | String representation of the container state. Can be one of "created", "running", "paused", "restarting", "removing", "exited", or "dead". type: "string" enum: ["created", "running", "paused", "restarting", "removing", "exited", "dead"] example: "running" Running: description: | Whether this container is running. Note that a running container can be _paused_. The `Running` and `Paused` booleans are not mutually exclusive: When pausing a container (on Linux), the freezer cgroup is used to suspend all processes in the container. Freezing the process requires the process to be running. As a result, paused containers are both `Running` _and_ `Paused`. Use the `Status` field instead to determine if a container's state is "running". type: "boolean" example: true Paused: description: "Whether this container is paused." type: "boolean" example: false Restarting: description: "Whether this container is restarting." type: "boolean" example: false OOMKilled: description: | Whether this container has been killed because it ran out of memory. type: "boolean" example: false Dead: type: "boolean" example: false Pid: description: "The process ID of this container" type: "integer" example: 1234 ExitCode: description: "The last exit code of this container" type: "integer" example: 0 Error: type: "string" StartedAt: description: "The time when this container was last started." type: "string" example: "2020-01-06T09:06:59.461876391Z" FinishedAt: description: "The time when this container last exited." type: "string" example: "2020-01-06T09:07:59.461876391Z" Health: x-nullable: true $ref: "#/definitions/Health" SystemVersion: type: "object" description: | Response of Engine API: GET "/version" properties: Platform: type: "object" required: [Name] properties: Name: type: "string" Components: type: "array" description: | Information about system components items: type: "object" x-go-name: ComponentVersion required: [Name, Version] properties: Name: description: | Name of the component type: "string" example: "Engine" Version: description: | Version of the component type: "string" x-nullable: false example: "19.03.12" Details: description: | Key/value pairs of strings with additional information about the component. These values are intended for informational purposes only, and their content is not defined, and not part of the API specification. These messages can be printed by the client as information to the user. type: "object" x-nullable: true Version: description: "The version of the daemon" type: "string" example: "19.03.12" ApiVersion: description: | The default (and highest) API version that is supported by the daemon type: "string" example: "1.40" MinAPIVersion: description: | The minimum API version that is supported by the daemon type: "string" example: "1.12" GitCommit: description: | The Git commit of the source code that was used to build the daemon type: "string" example: "48a66213fe" GoVersion: description: | The version Go used to compile the daemon, and the version of the Go runtime in use. 
type: "string" example: "go1.13.14" Os: description: | The operating system that the daemon is running on ("linux" or "windows") type: "string" example: "linux" Arch: description: | The architecture that the daemon is running on type: "string" example: "amd64" KernelVersion: description: | The kernel version (`uname -r`) that the daemon is running on. This field is omitted when empty. type: "string" example: "4.19.76-linuxkit" Experimental: description: | Indicates if the daemon is started with experimental features enabled. This field is omitted when empty / false. type: "boolean" example: true BuildTime: description: | The date and time that the daemon was compiled. type: "string" example: "2020-06-22T15:49:27.000000000+00:00" SystemInfo: type: "object" properties: ID: description: | Unique identifier of the daemon. <p><br /></p> > **Note**: The format of the ID itself is not part of the API, and > should not be considered stable. type: "string" example: "7TRN:IPZB:QYBB:VPBQ:UMPP:KARE:6ZNR:XE6T:7EWV:PKF4:ZOJD:TPYS" Containers: description: "Total number of containers on the host." type: "integer" example: 14 ContainersRunning: description: | Number of containers with status `"running"`. type: "integer" example: 3 ContainersPaused: description: | Number of containers with status `"paused"`. type: "integer" example: 1 ContainersStopped: description: | Number of containers with status `"stopped"`. type: "integer" example: 10 Images: description: | Total number of images on the host. Both _tagged_ and _untagged_ (dangling) images are counted. type: "integer" example: 508 Driver: description: "Name of the storage driver in use." type: "string" example: "overlay2" DriverStatus: description: | Information specific to the storage driver, provided as "label" / "value" pairs. This information is provided by the storage driver, and formatted in a way consistent with the output of `docker info` on the command line. <p><br /></p> > **Note**: The information returned in this field, including the > formatting of values and labels, should not be considered stable, > and may change without notice. type: "array" items: type: "array" items: type: "string" example: - ["Backing Filesystem", "extfs"] - ["Supports d_type", "true"] - ["Native Overlay Diff", "true"] DockerRootDir: description: | Root directory of persistent Docker state. Defaults to `/var/lib/docker` on Linux, and `C:\ProgramData\docker` on Windows. type: "string" example: "/var/lib/docker" Plugins: $ref: "#/definitions/PluginsInfo" MemoryLimit: description: "Indicates if the host has memory limit support enabled." type: "boolean" example: true SwapLimit: description: "Indicates if the host has memory swap limit support enabled." type: "boolean" example: true KernelMemory: description: | Indicates if the host has kernel memory limit support enabled. <p><br /></p> > **Deprecated**: This field is deprecated as the kernel 5.4 deprecated > `kmem.limit_in_bytes`. type: "boolean" example: true CpuCfsPeriod: description: | Indicates if CPU CFS(Completely Fair Scheduler) period is supported by the host. type: "boolean" example: true CpuCfsQuota: description: | Indicates if CPU CFS(Completely Fair Scheduler) quota is supported by the host. type: "boolean" example: true CPUShares: description: | Indicates if CPU Shares limiting is supported by the host. type: "boolean" example: true CPUSet: description: | Indicates if CPUsets (cpuset.cpus, cpuset.mems) are supported by the host. 
See [cpuset(7)](https://www.kernel.org/doc/Documentation/cgroup-v1/cpusets.txt) type: "boolean" example: true PidsLimit: description: "Indicates if the host kernel has PID limit support enabled." type: "boolean" example: true OomKillDisable: description: "Indicates if OOM killer disable is supported on the host." type: "boolean" IPv4Forwarding: description: "Indicates IPv4 forwarding is enabled." type: "boolean" example: true BridgeNfIptables: description: "Indicates if `bridge-nf-call-iptables` is available on the host." type: "boolean" example: true BridgeNfIp6tables: description: "Indicates if `bridge-nf-call-ip6tables` is available on the host." type: "boolean" example: true Debug: description: | Indicates if the daemon is running in debug-mode / with debug-level logging enabled. type: "boolean" example: true NFd: description: | The total number of file Descriptors in use by the daemon process. This information is only returned if debug-mode is enabled. type: "integer" example: 64 NGoroutines: description: | The number of goroutines that currently exist. This information is only returned if debug-mode is enabled. type: "integer" example: 174 SystemTime: description: | Current system-time in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" example: "2017-08-08T20:28:29.06202363Z" LoggingDriver: description: | The logging driver to use as a default for new containers. type: "string" CgroupDriver: description: | The driver to use for managing cgroups. type: "string" enum: ["cgroupfs", "systemd", "none"] default: "cgroupfs" example: "cgroupfs" CgroupVersion: description: | The version of the cgroup. type: "string" enum: ["1", "2"] default: "1" example: "1" NEventsListener: description: "Number of event listeners subscribed." type: "integer" example: 30 KernelVersion: description: | Kernel version of the host. On Linux, this information obtained from `uname`. On Windows this information is queried from the <kbd>HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\</kbd> registry value, for example _"10.0 14393 (14393.1198.amd64fre.rs1_release_sec.170427-1353)"_. type: "string" example: "4.9.38-moby" OperatingSystem: description: | Name of the host's operating system, for example: "Ubuntu 16.04.2 LTS" or "Windows Server 2016 Datacenter" type: "string" example: "Alpine Linux v3.5" OSVersion: description: | Version of the host's operating system <p><br /></p> > **Note**: The information returned in this field, including its > very existence, and the formatting of values, should not be considered > stable, and may change without notice. type: "string" example: "16.04" OSType: description: | Generic type of the operating system of the host, as returned by the Go runtime (`GOOS`). Currently returned values are "linux" and "windows". A full list of possible values can be found in the [Go documentation](https://golang.org/doc/install/source#environment). type: "string" example: "linux" Architecture: description: | Hardware architecture of the host, as returned by the Go runtime (`GOARCH`). A full list of possible values can be found in the [Go documentation](https://golang.org/doc/install/source#environment). type: "string" example: "x86_64" NCPU: description: | The number of logical CPUs usable by the daemon. The number of available CPUs is checked by querying the operating system when the daemon starts. Changes to operating system CPU allocation after the daemon is started are not reflected. 
type: "integer" example: 4 MemTotal: description: | Total amount of physical memory available on the host, in bytes. type: "integer" format: "int64" example: 2095882240 IndexServerAddress: description: | Address / URL of the index server that is used for image search, and as a default for user authentication for Docker Hub and Docker Cloud. default: "https://index.docker.io/v1/" type: "string" example: "https://index.docker.io/v1/" RegistryConfig: $ref: "#/definitions/RegistryServiceConfig" GenericResources: $ref: "#/definitions/GenericResources" HttpProxy: description: | HTTP-proxy configured for the daemon. This value is obtained from the [`HTTP_PROXY`](https://www.gnu.org/software/wget/manual/html_node/Proxies.html) environment variable. Credentials ([user info component](https://tools.ietf.org/html/rfc3986#section-3.2.1)) in the proxy URL are masked in the API response. Containers do not automatically inherit this configuration. type: "string" example: "http://xxxxx:[email protected]:8080" HttpsProxy: description: | HTTPS-proxy configured for the daemon. This value is obtained from the [`HTTPS_PROXY`](https://www.gnu.org/software/wget/manual/html_node/Proxies.html) environment variable. Credentials ([user info component](https://tools.ietf.org/html/rfc3986#section-3.2.1)) in the proxy URL are masked in the API response. Containers do not automatically inherit this configuration. type: "string" example: "https://xxxxx:[email protected]:4443" NoProxy: description: | Comma-separated list of domain extensions for which no proxy should be used. This value is obtained from the [`NO_PROXY`](https://www.gnu.org/software/wget/manual/html_node/Proxies.html) environment variable. Containers do not automatically inherit this configuration. type: "string" example: "*.local, 169.254/16" Name: description: "Hostname of the host." type: "string" example: "node5.corp.example.com" Labels: description: | User-defined labels (key/value metadata) as set on the daemon. <p><br /></p> > **Note**: When part of a Swarm, nodes can both have _daemon_ labels, > set through the daemon configuration, and _node_ labels, set from a > manager node in the Swarm. Node labels are not included in this > field. Node labels can be retrieved using the `/nodes/(id)` endpoint > on a manager node in the Swarm. type: "array" items: type: "string" example: ["storage=ssd", "production"] ExperimentalBuild: description: | Indicates if experimental features are enabled on the daemon. type: "boolean" example: true ServerVersion: description: | Version string of the daemon. > **Note**: the [standalone Swarm API](https://docs.docker.com/swarm/swarm-api/) > returns the Swarm version instead of the daemon version, for example > `swarm/1.2.8`. type: "string" example: "17.06.0-ce" ClusterStore: description: | URL of the distributed storage backend. The storage backend is used for multihost networking (to store network and endpoint information) and by the node discovery mechanism. <p><br /></p> > **Deprecated**: This field is only propagated when using standalone Swarm > mode, and overlay networking using an external k/v store. Overlay > networks with Swarm mode enabled use the built-in raft store, and > this field will be empty. type: "string" example: "consul://consul.corp.example.com:8600/some/path" ClusterAdvertise: description: | The network endpoint that the Engine advertises for the purpose of node discovery. ClusterAdvertise is a `host:port` combination on which the daemon is reachable by other hosts. 
<p><br /></p> > **Deprecated**: This field is only propagated when using standalone Swarm > mode, and overlay networking using an external k/v store. Overlay > networks with Swarm mode enabled use the built-in raft store, and > this field will be empty. type: "string" example: "node5.corp.example.com:8000" Runtimes: description: | List of [OCI compliant](https://github.com/opencontainers/runtime-spec) runtimes configured on the daemon. Keys hold the "name" used to reference the runtime. The Docker daemon relies on an OCI compliant runtime (invoked via the `containerd` daemon) as its interface to the Linux kernel namespaces, cgroups, and SELinux. The default runtime is `runc`, and automatically configured. Additional runtimes can be configured by the user and will be listed here. type: "object" additionalProperties: $ref: "#/definitions/Runtime" default: runc: path: "runc" example: runc: path: "runc" runc-master: path: "/go/bin/runc" custom: path: "/usr/local/bin/my-oci-runtime" runtimeArgs: ["--debug", "--systemd-cgroup=false"] DefaultRuntime: description: | Name of the default OCI runtime that is used when starting containers. The default can be overridden per-container at create time. type: "string" default: "runc" example: "runc" Swarm: $ref: "#/definitions/SwarmInfo" LiveRestoreEnabled: description: | Indicates if live restore is enabled. If enabled, containers are kept running when the daemon is shutdown or upon daemon start if running containers are detected. type: "boolean" default: false example: false Isolation: description: | Represents the isolation technology to use as a default for containers. The supported values are platform-specific. If no isolation value is specified on daemon start, on Windows client, the default is `hyperv`, and on Windows server, the default is `process`. This option is currently not used on other platforms. default: "default" type: "string" enum: - "default" - "hyperv" - "process" InitBinary: description: | Name and, optional, path of the `docker-init` binary. If the path is omitted, the daemon searches the host's `$PATH` for the binary and uses the first result. type: "string" example: "docker-init" ContainerdCommit: $ref: "#/definitions/Commit" RuncCommit: $ref: "#/definitions/Commit" InitCommit: $ref: "#/definitions/Commit" SecurityOptions: description: | List of security features that are enabled on the daemon, such as apparmor, seccomp, SELinux, user-namespaces (userns), and rootless. Additional configuration options for each security feature may be present, and are included as a comma-separated list of key/value pairs. type: "array" items: type: "string" example: - "name=apparmor" - "name=seccomp,profile=default" - "name=selinux" - "name=userns" - "name=rootless" ProductLicense: description: | Reports a summary of the product license on the daemon. If a commercial license has been applied to the daemon, information such as number of nodes, and expiration are included. type: "string" example: "Community Engine" DefaultAddressPools: description: | List of custom default address pools for local networks, which can be specified in the daemon.json file or dockerd option. Example: a Base "10.10.0.0/16" with Size 24 will define the set of 256 10.10.[0-255].0/24 address pools. 
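# A minimal sketch (illustrative only, not part of the schema) of how such
# pools are typically configured in the daemon's `daemon.json`; the subnets
# shown are examples:
#
#   {
#     "default-address-pools": [
#       { "base": "10.10.0.0/16", "size": 24 },
#       { "base": "10.20.0.0/16", "size": 24 }
#     ]
#   }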
type: "array" items: type: "object" properties: Base: description: "The network address in CIDR format" type: "string" example: "10.10.0.0/16" Size: description: "The network pool size" type: "integer" example: "24" Warnings: description: | List of warnings / informational messages about missing features, or issues related to the daemon configuration. These messages can be printed by the client as information to the user. type: "array" items: type: "string" example: - "WARNING: No memory limit support" - "WARNING: bridge-nf-call-iptables is disabled" - "WARNING: bridge-nf-call-ip6tables is disabled" # PluginsInfo is a temp struct holding Plugins name # registered with docker daemon. It is used by Info struct PluginsInfo: description: | Available plugins per type. <p><br /></p> > **Note**: Only unmanaged (V1) plugins are included in this list. > V1 plugins are "lazily" loaded, and are not returned in this list > if there is no resource using the plugin. type: "object" properties: Volume: description: "Names of available volume-drivers, and network-driver plugins." type: "array" items: type: "string" example: ["local"] Network: description: "Names of available network-drivers, and network-driver plugins." type: "array" items: type: "string" example: ["bridge", "host", "ipvlan", "macvlan", "null", "overlay"] Authorization: description: "Names of available authorization plugins." type: "array" items: type: "string" example: ["img-authz-plugin", "hbm"] Log: description: "Names of available logging-drivers, and logging-driver plugins." type: "array" items: type: "string" example: ["awslogs", "fluentd", "gcplogs", "gelf", "journald", "json-file", "logentries", "splunk", "syslog"] RegistryServiceConfig: description: | RegistryServiceConfig stores daemon registry services configuration. type: "object" x-nullable: true properties: AllowNondistributableArtifactsCIDRs: description: | List of IP ranges to which nondistributable artifacts can be pushed, using the CIDR syntax [RFC 4632](https://tools.ietf.org/html/4632). Some images (for example, Windows base images) contain artifacts whose distribution is restricted by license. When these images are pushed to a registry, restricted artifacts are not included. This configuration override this behavior, and enables the daemon to push nondistributable artifacts to all registries whose resolved IP address is within the subnet described by the CIDR syntax. This option is useful when pushing images containing nondistributable artifacts to a registry on an air-gapped network so hosts on that network can pull the images without connecting to another server. > **Warning**: Nondistributable artifacts typically have restrictions > on how and where they can be distributed and shared. Only use this > feature to push artifacts to private registries and ensure that you > are in compliance with any terms that cover redistributing > nondistributable artifacts. type: "array" items: type: "string" example: ["::1/128", "127.0.0.0/8"] AllowNondistributableArtifactsHostnames: description: | List of registry hostnames to which nondistributable artifacts can be pushed, using the format `<hostname>[:<port>]` or `<IP address>[:<port>]`. Some images (for example, Windows base images) contain artifacts whose distribution is restricted by license. When these images are pushed to a registry, restricted artifacts are not included. This configuration override this behavior for the specified registries. 
This option is useful when pushing images containing nondistributable artifacts to a registry on an air-gapped network so hosts on that network can pull the images without connecting to another server. > **Warning**: Nondistributable artifacts typically have restrictions > on how and where they can be distributed and shared. Only use this > feature to push artifacts to private registries and ensure that you > are in compliance with any terms that cover redistributing > nondistributable artifacts. type: "array" items: type: "string" example: ["registry.internal.corp.example.com:3000", "[2001:db8:a0b:12f0::1]:443"] InsecureRegistryCIDRs: description: | List of IP ranges of insecure registries, using the CIDR syntax ([RFC 4632](https://tools.ietf.org/html/4632)). Insecure registries accept un-encrypted (HTTP) and/or untrusted (HTTPS with certificates from unknown CAs) communication. By default, local registries (`127.0.0.0/8`) are configured as insecure. All other registries are secure. Communicating with an insecure registry is not possible if the daemon assumes that registry is secure. This configuration override this behavior, insecure communication with registries whose resolved IP address is within the subnet described by the CIDR syntax. Registries can also be marked insecure by hostname. Those registries are listed under `IndexConfigs` and have their `Secure` field set to `false`. > **Warning**: Using this option can be useful when running a local > registry, but introduces security vulnerabilities. This option > should therefore ONLY be used for testing purposes. For increased > security, users should add their CA to their system's list of trusted > CAs instead of enabling this option. type: "array" items: type: "string" example: ["::1/128", "127.0.0.0/8"] IndexConfigs: type: "object" additionalProperties: $ref: "#/definitions/IndexInfo" example: "127.0.0.1:5000": "Name": "127.0.0.1:5000" "Mirrors": [] "Secure": false "Official": false "[2001:db8:a0b:12f0::1]:80": "Name": "[2001:db8:a0b:12f0::1]:80" "Mirrors": [] "Secure": false "Official": false "docker.io": Name: "docker.io" Mirrors: ["https://hub-mirror.corp.example.com:5000/"] Secure: true Official: true "registry.internal.corp.example.com:3000": Name: "registry.internal.corp.example.com:3000" Mirrors: [] Secure: false Official: false Mirrors: description: | List of registry URLs that act as a mirror for the official (`docker.io`) registry. type: "array" items: type: "string" example: - "https://hub-mirror.corp.example.com:5000/" - "https://[2001:db8:a0b:12f0::1]/" IndexInfo: description: IndexInfo contains information about a registry. type: "object" x-nullable: true properties: Name: description: | Name of the registry, such as "docker.io". type: "string" example: "docker.io" Mirrors: description: | List of mirrors, expressed as URIs. type: "array" items: type: "string" example: - "https://hub-mirror.corp.example.com:5000/" - "https://registry-2.docker.io/" - "https://registry-3.docker.io/" Secure: description: | Indicates if the registry is part of the list of insecure registries. If `false`, the registry is insecure. Insecure registries accept un-encrypted (HTTP) and/or untrusted (HTTPS with certificates from unknown CAs) communication. > **Warning**: Insecure registries can be useful when running a local > registry. However, because its use creates security vulnerabilities > it should ONLY be enabled for testing purposes. 
For increased > security, users should add their CA to their system's list of > trusted CAs instead of enabling this option. type: "boolean" example: true Official: description: | Indicates whether this is an official registry (i.e., Docker Hub / docker.io) type: "boolean" example: true Runtime: description: | Runtime describes an [OCI compliant](https://github.com/opencontainers/runtime-spec) runtime. The runtime is invoked by the daemon via the `containerd` daemon. OCI runtimes act as an interface to the Linux kernel namespaces, cgroups, and SELinux. type: "object" properties: path: description: | Name and, optional, path, of the OCI executable binary. If the path is omitted, the daemon searches the host's `$PATH` for the binary and uses the first result. type: "string" example: "/usr/local/bin/my-oci-runtime" runtimeArgs: description: | List of command-line arguments to pass to the runtime when invoked. type: "array" x-nullable: true items: type: "string" example: ["--debug", "--systemd-cgroup=false"] Commit: description: | Commit holds the Git-commit (SHA1) that a binary was built from, as reported in the version-string of external tools, such as `containerd`, or `runC`. type: "object" properties: ID: description: "Actual commit ID of external tool." type: "string" example: "cfb82a876ecc11b5ca0977d1733adbe58599088a" Expected: description: | Commit ID of external tool expected by dockerd as set at build time. type: "string" example: "2d41c047c83e09a6d61d464906feb2a2f3c52aa4" SwarmInfo: description: | Represents generic information about swarm. type: "object" properties: NodeID: description: "Unique identifier of for this node in the swarm." type: "string" default: "" example: "k67qz4598weg5unwwffg6z1m1" NodeAddr: description: | IP address at which this node can be reached by other nodes in the swarm. type: "string" default: "" example: "10.0.0.46" LocalNodeState: $ref: "#/definitions/LocalNodeState" ControlAvailable: type: "boolean" default: false example: true Error: type: "string" default: "" RemoteManagers: description: | List of ID's and addresses of other managers in the swarm. type: "array" default: null x-nullable: true items: $ref: "#/definitions/PeerNode" example: - NodeID: "71izy0goik036k48jg985xnds" Addr: "10.0.0.158:2377" - NodeID: "79y6h1o4gv8n120drcprv5nmc" Addr: "10.0.0.159:2377" - NodeID: "k67qz4598weg5unwwffg6z1m1" Addr: "10.0.0.46:2377" Nodes: description: "Total number of nodes in the swarm." type: "integer" x-nullable: true example: 4 Managers: description: "Total number of managers in the swarm." type: "integer" x-nullable: true example: 3 Cluster: $ref: "#/definitions/ClusterInfo" LocalNodeState: description: "Current local status of this node." type: "string" default: "" enum: - "" - "inactive" - "pending" - "active" - "error" - "locked" example: "active" PeerNode: description: "Represents a peer-node in the swarm" properties: NodeID: description: "Unique identifier of for this node in the swarm." type: "string" Addr: description: | IP address and ports at which this node can be reached. type: "string" NetworkAttachmentConfig: description: | Specifies how a service should be attached to a particular network. type: "object" properties: Target: description: | The target network for attachment. Must be a network name or ID. type: "string" Aliases: description: | Discoverable alternate names for the service on this network. type: "array" items: type: "string" DriverOpts: description: | Driver attachment options for the network target. 
type: "object" additionalProperties: type: "string" paths: /containers/json: get: summary: "List containers" description: | Returns a list of containers. For details on the format, see the [inspect endpoint](#operation/ContainerInspect). Note that it uses a different, smaller representation of a container than inspecting a single container. For example, the list of linked containers is not propagated . operationId: "ContainerList" produces: - "application/json" parameters: - name: "all" in: "query" description: | Return all containers. By default, only running containers are shown. type: "boolean" default: false - name: "limit" in: "query" description: | Return this number of most recently created containers, including non-running ones. type: "integer" - name: "size" in: "query" description: | Return the size of container as fields `SizeRw` and `SizeRootFs`. type: "boolean" default: false - name: "filters" in: "query" description: | Filters to process on the container list, encoded as JSON (a `map[string][]string`). For example, `{"status": ["paused"]}` will only return paused containers. Available filters: - `ancestor`=(`<image-name>[:<tag>]`, `<image id>`, or `<image@digest>`) - `before`=(`<container id>` or `<container name>`) - `expose`=(`<port>[/<proto>]`|`<startport-endport>/[<proto>]`) - `exited=<int>` containers with exit code of `<int>` - `health`=(`starting`|`healthy`|`unhealthy`|`none`) - `id=<ID>` a container's ID - `isolation=`(`default`|`process`|`hyperv`) (Windows daemon only) - `is-task=`(`true`|`false`) - `label=key` or `label="key=value"` of a container label - `name=<name>` a container's name - `network`=(`<network id>` or `<network name>`) - `publish`=(`<port>[/<proto>]`|`<startport-endport>/[<proto>]`) - `since`=(`<container id>` or `<container name>`) - `status=`(`created`|`restarting`|`running`|`removing`|`paused`|`exited`|`dead`) - `volume`=(`<volume name>` or `<mount point destination>`) type: "string" responses: 200: description: "no error" schema: $ref: "#/definitions/ContainerSummary" examples: application/json: - Id: "8dfafdbc3a40" Names: - "/boring_feynman" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 1" Created: 1367854155 State: "Exited" Status: "Exit 0" Ports: - PrivatePort: 2222 PublicPort: 3333 Type: "tcp" Labels: com.example.vendor: "Acme" com.example.license: "GPL" com.example.version: "1.0" SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "2cdc4edb1ded3631c81f57966563e5c8525b81121bb3706a9a9a3ae102711f3f" Gateway: "172.17.0.1" IPAddress: "172.17.0.2" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:02" Mounts: - Name: "fac362...80535" Source: "/data" Destination: "/data" Driver: "local" Mode: "ro,Z" RW: false Propagation: "" - Id: "9cd87474be90" Names: - "/coolName" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 222222" Created: 1367854155 State: "Exited" Status: "Exit 0" Ports: [] Labels: {} SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "88eaed7b37b38c2a3f0c4bc796494fdf51b270c2d22656412a2ca5d559a64d7a" Gateway: "172.17.0.1" IPAddress: "172.17.0.8" IPPrefixLen: 16 IPv6Gateway: "" 
GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:08" Mounts: [] - Id: "3176a2479c92" Names: - "/sleepy_dog" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 3333333333333333" Created: 1367854154 State: "Exited" Status: "Exit 0" Ports: [] Labels: {} SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "8b27c041c30326d59cd6e6f510d4f8d1d570a228466f956edf7815508f78e30d" Gateway: "172.17.0.1" IPAddress: "172.17.0.6" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:06" Mounts: [] - Id: "4cb07b47f9fb" Names: - "/running_cat" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 444444444444444444444444444444444" Created: 1367854152 State: "Exited" Status: "Exit 0" Ports: [] Labels: {} SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "d91c7b2f0644403d7ef3095985ea0e2370325cd2332ff3a3225c4247328e66e9" Gateway: "172.17.0.1" IPAddress: "172.17.0.5" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:05" Mounts: [] 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Container"] /containers/create: post: summary: "Create a container" operationId: "ContainerCreate" consumes: - "application/json" - "application/octet-stream" produces: - "application/json" parameters: - name: "name" in: "query" description: | Assign the specified name to the container. Must match `/?[a-zA-Z0-9][a-zA-Z0-9_.-]+`. 
type: "string" pattern: "^/?[a-zA-Z0-9][a-zA-Z0-9_.-]+$" - name: "body" in: "body" description: "Container to create" schema: allOf: - $ref: "#/definitions/ContainerConfig" - type: "object" properties: HostConfig: $ref: "#/definitions/HostConfig" NetworkingConfig: $ref: "#/definitions/NetworkingConfig" example: Hostname: "" Domainname: "" User: "" AttachStdin: false AttachStdout: true AttachStderr: true Tty: false OpenStdin: false StdinOnce: false Env: - "FOO=bar" - "BAZ=quux" Cmd: - "date" Entrypoint: "" Image: "ubuntu" Labels: com.example.vendor: "Acme" com.example.license: "GPL" com.example.version: "1.0" Volumes: /volumes/data: {} WorkingDir: "" NetworkDisabled: false MacAddress: "12:34:56:78:9a:bc" ExposedPorts: 22/tcp: {} StopSignal: "SIGTERM" StopTimeout: 10 HostConfig: Binds: - "/tmp:/tmp" Links: - "redis3:redis" Memory: 0 MemorySwap: 0 MemoryReservation: 0 KernelMemory: 0 NanoCpus: 500000 CpuPercent: 80 CpuShares: 512 CpuPeriod: 100000 CpuRealtimePeriod: 1000000 CpuRealtimeRuntime: 10000 CpuQuota: 50000 CpusetCpus: "0,1" CpusetMems: "0,1" MaximumIOps: 0 MaximumIOBps: 0 BlkioWeight: 300 BlkioWeightDevice: - {} BlkioDeviceReadBps: - {} BlkioDeviceReadIOps: - {} BlkioDeviceWriteBps: - {} BlkioDeviceWriteIOps: - {} DeviceRequests: - Driver: "nvidia" Count: -1 DeviceIDs": ["0", "1", "GPU-fef8089b-4820-abfc-e83e-94318197576e"] Capabilities: [["gpu", "nvidia", "compute"]] Options: property1: "string" property2: "string" MemorySwappiness: 60 OomKillDisable: false OomScoreAdj: 500 PidMode: "" PidsLimit: 0 PortBindings: 22/tcp: - HostPort: "11022" PublishAllPorts: false Privileged: false ReadonlyRootfs: false Dns: - "8.8.8.8" DnsOptions: - "" DnsSearch: - "" VolumesFrom: - "parent" - "other:ro" CapAdd: - "NET_ADMIN" CapDrop: - "MKNOD" GroupAdd: - "newgroup" RestartPolicy: Name: "" MaximumRetryCount: 0 AutoRemove: true NetworkMode: "bridge" Devices: [] Ulimits: - {} LogConfig: Type: "json-file" Config: {} SecurityOpt: [] StorageOpt: {} CgroupParent: "" VolumeDriver: "" ShmSize: 67108864 NetworkingConfig: EndpointsConfig: isolated_nw: IPAMConfig: IPv4Address: "172.20.30.33" IPv6Address: "2001:db8:abcd::3033" LinkLocalIPs: - "169.254.34.68" - "fe80::3468" Links: - "container_1" - "container_2" Aliases: - "server_x" - "server_y" required: true responses: 201: description: "Container created successfully" schema: type: "object" title: "ContainerCreateResponse" description: "OK response to ContainerCreate operation" required: [Id, Warnings] properties: Id: description: "The ID of the created container" type: "string" x-nullable: false Warnings: description: "Warnings encountered when creating the container" type: "array" x-nullable: false items: type: "string" examples: application/json: Id: "e90e34656806" Warnings: [] 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such image" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such image: c2ada9df5af8" 409: description: "conflict" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Container"] /containers/{id}/json: get: summary: "Inspect a container" description: "Return low-level information about a container." 
operationId: "ContainerInspect" produces: - "application/json" responses: 200: description: "no error" schema: type: "object" title: "ContainerInspectResponse" properties: Id: description: "The ID of the container" type: "string" Created: description: "The time the container was created" type: "string" Path: description: "The path to the command being run" type: "string" Args: description: "The arguments to the command being run" type: "array" items: type: "string" State: x-nullable: true $ref: "#/definitions/ContainerState" Image: description: "The container's image ID" type: "string" ResolvConfPath: type: "string" HostnamePath: type: "string" HostsPath: type: "string" LogPath: type: "string" Name: type: "string" RestartCount: type: "integer" Driver: type: "string" Platform: type: "string" MountLabel: type: "string" ProcessLabel: type: "string" AppArmorProfile: type: "string" ExecIDs: description: "IDs of exec instances that are running in the container." type: "array" items: type: "string" x-nullable: true HostConfig: $ref: "#/definitions/HostConfig" GraphDriver: $ref: "#/definitions/GraphDriverData" SizeRw: description: | The size of files that have been created or changed by this container. type: "integer" format: "int64" SizeRootFs: description: "The total size of all the files in this container." type: "integer" format: "int64" Mounts: type: "array" items: $ref: "#/definitions/MountPoint" Config: $ref: "#/definitions/ContainerConfig" NetworkSettings: $ref: "#/definitions/NetworkSettings" examples: application/json: AppArmorProfile: "" Args: - "-c" - "exit 9" Config: AttachStderr: true AttachStdin: false AttachStdout: true Cmd: - "/bin/sh" - "-c" - "exit 9" Domainname: "" Env: - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" Healthcheck: Test: ["CMD-SHELL", "exit 0"] Hostname: "ba033ac44011" Image: "ubuntu" Labels: com.example.vendor: "Acme" com.example.license: "GPL" com.example.version: "1.0" MacAddress: "" NetworkDisabled: false OpenStdin: false StdinOnce: false Tty: false User: "" Volumes: /volumes/data: {} WorkingDir: "" StopSignal: "SIGTERM" StopTimeout: 10 Created: "2015-01-06T15:47:31.485331387Z" Driver: "devicemapper" ExecIDs: - "b35395de42bc8abd327f9dd65d913b9ba28c74d2f0734eeeae84fa1c616a0fca" - "3fc1232e5cd20c8de182ed81178503dc6437f4e7ef12b52cc5e8de020652f1c4" HostConfig: MaximumIOps: 0 MaximumIOBps: 0 BlkioWeight: 0 BlkioWeightDevice: - {} BlkioDeviceReadBps: - {} BlkioDeviceWriteBps: - {} BlkioDeviceReadIOps: - {} BlkioDeviceWriteIOps: - {} ContainerIDFile: "" CpusetCpus: "" CpusetMems: "" CpuPercent: 80 CpuShares: 0 CpuPeriod: 100000 CpuRealtimePeriod: 1000000 CpuRealtimeRuntime: 10000 Devices: [] DeviceRequests: - Driver: "nvidia" Count: -1 DeviceIDs": ["0", "1", "GPU-fef8089b-4820-abfc-e83e-94318197576e"] Capabilities: [["gpu", "nvidia", "compute"]] Options: property1: "string" property2: "string" IpcMode: "" LxcConf: [] Memory: 0 MemorySwap: 0 MemoryReservation: 0 KernelMemory: 0 OomKillDisable: false OomScoreAdj: 500 NetworkMode: "bridge" PidMode: "" PortBindings: {} Privileged: false ReadonlyRootfs: false PublishAllPorts: false RestartPolicy: MaximumRetryCount: 2 Name: "on-failure" LogConfig: Type: "json-file" Sysctls: net.ipv4.ip_forward: "1" Ulimits: - {} VolumeDriver: "" ShmSize: 67108864 HostnamePath: "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/hostname" HostsPath: "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/hosts" LogPath: 
"/var/lib/docker/containers/1eb5fabf5a03807136561b3c00adcd2992b535d624d5e18b6cdc6a6844d9767b/1eb5fabf5a03807136561b3c00adcd2992b535d624d5e18b6cdc6a6844d9767b-json.log" Id: "ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39" Image: "04c5d3b7b0656168630d3ba35d8889bd0e9caafcaeb3004d2bfbc47e7c5d35d2" MountLabel: "" Name: "/boring_euclid" NetworkSettings: Bridge: "" SandboxID: "" HairpinMode: false LinkLocalIPv6Address: "" LinkLocalIPv6PrefixLen: 0 SandboxKey: "" EndpointID: "" Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 IPAddress: "" IPPrefixLen: 0 IPv6Gateway: "" MacAddress: "" Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "7587b82f0dada3656fda26588aee72630c6fab1536d36e394b2bfbcf898c971d" Gateway: "172.17.0.1" IPAddress: "172.17.0.2" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:12:00:02" Path: "/bin/sh" ProcessLabel: "" ResolvConfPath: "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/resolv.conf" RestartCount: 1 State: Error: "" ExitCode: 9 FinishedAt: "2015-01-06T15:47:32.080254511Z" Health: Status: "healthy" FailingStreak: 0 Log: - Start: "2019-12-22T10:59:05.6385933Z" End: "2019-12-22T10:59:05.8078452Z" ExitCode: 0 Output: "" OOMKilled: false Dead: false Paused: false Pid: 0 Restarting: false Running: true StartedAt: "2015-01-06T15:47:32.072697474Z" Status: "running" Mounts: - Name: "fac362...80535" Source: "/data" Destination: "/data" Driver: "local" Mode: "ro,Z" RW: false Propagation: "" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "size" in: "query" type: "boolean" default: false description: "Return the size of container as fields `SizeRw` and `SizeRootFs`" tags: ["Container"] /containers/{id}/top: get: summary: "List processes running inside a container" description: | On Unix systems, this is done by running the `ps` command. This endpoint is not supported on Windows. operationId: "ContainerTop" responses: 200: description: "no error" schema: type: "object" title: "ContainerTopResponse" description: "OK response to ContainerTop operation" properties: Titles: description: "The ps column titles" type: "array" items: type: "string" Processes: description: | Each process running in the container, where each is process is an array of values corresponding to the titles. type: "array" items: type: "array" items: type: "string" examples: application/json: Titles: - "UID" - "PID" - "PPID" - "C" - "STIME" - "TTY" - "TIME" - "CMD" Processes: - - "root" - "13642" - "882" - "0" - "17:03" - "pts/0" - "00:00:00" - "/bin/bash" - - "root" - "13735" - "13642" - "0" - "17:06" - "pts/0" - "00:00:00" - "sleep 10" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "ps_args" in: "query" description: "The arguments to pass to `ps`. 
For example, `aux`" type: "string" default: "-ef" tags: ["Container"] /containers/{id}/logs: get: summary: "Get container logs" description: | Get `stdout` and `stderr` logs from a container. Note: This endpoint works only for containers with the `json-file` or `journald` logging driver. operationId: "ContainerLogs" responses: 200: description: | logs returned as a stream in response body. For the stream format, [see the documentation for the attach endpoint](#operation/ContainerAttach). Note that unlike the attach endpoint, the logs endpoint does not upgrade the connection and does not set Content-Type. schema: type: "string" format: "binary" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "follow" in: "query" description: "Keep connection after returning logs." type: "boolean" default: false - name: "stdout" in: "query" description: "Return logs from `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Return logs from `stderr`" type: "boolean" default: false - name: "since" in: "query" description: "Only return logs since this time, as a UNIX timestamp" type: "integer" default: 0 - name: "until" in: "query" description: "Only return logs before this time, as a UNIX timestamp" type: "integer" default: 0 - name: "timestamps" in: "query" description: "Add timestamps to every log line" type: "boolean" default: false - name: "tail" in: "query" description: | Only return this number of log lines from the end of the logs. Specify as an integer or `all` to output all log lines. type: "string" default: "all" tags: ["Container"] /containers/{id}/changes: get: summary: "Get changes on a container’s filesystem" description: | Returns which files in a container's filesystem have been added, deleted, or modified. The `Kind` of modification can be one of: - `0`: Modified - `1`: Added - `2`: Deleted operationId: "ContainerChanges" produces: ["application/json"] responses: 200: description: "The list of changes" schema: type: "array" items: type: "object" x-go-name: "ContainerChangeResponseItem" title: "ContainerChangeResponseItem" description: "change item in response to ContainerChanges operation" required: [Path, Kind] properties: Path: description: "Path to file that has changed" type: "string" x-nullable: false Kind: description: "Kind of change" type: "integer" format: "uint8" enum: [0, 1, 2] x-nullable: false examples: application/json: - Path: "/dev" Kind: 0 - Path: "/dev/kmsg" Kind: 1 - Path: "/test" Kind: 1 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/export: get: summary: "Export a container" description: "Export the contents of a container as a tarball." 
operationId: "ContainerExport" produces: - "application/octet-stream" responses: 200: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/stats: get: summary: "Get container stats based on resource usage" description: | This endpoint returns a live stream of a container’s resource usage statistics. The `precpu_stats` is the CPU statistic of the *previous* read, and is used to calculate the CPU usage percentage. It is not an exact copy of the `cpu_stats` field. If either `precpu_stats.online_cpus` or `cpu_stats.online_cpus` is nil then for compatibility with older daemons the length of the corresponding `cpu_usage.percpu_usage` array should be used. On a cgroup v2 host, the following fields are not set * `blkio_stats`: all fields other than `io_service_bytes_recursive` * `cpu_stats`: `cpu_usage.percpu_usage` * `memory_stats`: `max_usage` and `failcnt` Also, `memory_stats.stats` fields are incompatible with cgroup v1. To calculate the values shown by the `stats` command of the docker cli tool the following formulas can be used: * used_memory = `memory_stats.usage - memory_stats.stats.cache` * available_memory = `memory_stats.limit` * Memory usage % = `(used_memory / available_memory) * 100.0` * cpu_delta = `cpu_stats.cpu_usage.total_usage - precpu_stats.cpu_usage.total_usage` * system_cpu_delta = `cpu_stats.system_cpu_usage - precpu_stats.system_cpu_usage` * number_cpus = `lenght(cpu_stats.cpu_usage.percpu_usage)` or `cpu_stats.online_cpus` * CPU usage % = `(cpu_delta / system_cpu_delta) * number_cpus * 100.0` operationId: "ContainerStats" produces: ["application/json"] responses: 200: description: "no error" schema: type: "object" examples: application/json: read: "2015-01-08T22:57:31.547920715Z" pids_stats: current: 3 networks: eth0: rx_bytes: 5338 rx_dropped: 0 rx_errors: 0 rx_packets: 36 tx_bytes: 648 tx_dropped: 0 tx_errors: 0 tx_packets: 8 eth5: rx_bytes: 4641 rx_dropped: 0 rx_errors: 0 rx_packets: 26 tx_bytes: 690 tx_dropped: 0 tx_errors: 0 tx_packets: 9 memory_stats: stats: total_pgmajfault: 0 cache: 0 mapped_file: 0 total_inactive_file: 0 pgpgout: 414 rss: 6537216 total_mapped_file: 0 writeback: 0 unevictable: 0 pgpgin: 477 total_unevictable: 0 pgmajfault: 0 total_rss: 6537216 total_rss_huge: 6291456 total_writeback: 0 total_inactive_anon: 0 rss_huge: 6291456 hierarchical_memory_limit: 67108864 total_pgfault: 964 total_active_file: 0 active_anon: 6537216 total_active_anon: 6537216 total_pgpgout: 414 total_cache: 0 inactive_anon: 0 active_file: 0 pgfault: 964 inactive_file: 0 total_pgpgin: 477 max_usage: 6651904 usage: 6537216 failcnt: 0 limit: 67108864 blkio_stats: {} cpu_stats: cpu_usage: percpu_usage: - 8646879 - 24472255 - 36438778 - 30657443 usage_in_usermode: 50000000 total_usage: 100215355 usage_in_kernelmode: 30000000 system_cpu_usage: 739306590000000 online_cpus: 4 throttling_data: periods: 0 throttled_periods: 0 throttled_time: 0 precpu_stats: cpu_usage: percpu_usage: - 8646879 - 24350896 - 36438778 - 30657443 usage_in_usermode: 50000000 total_usage: 100093996 usage_in_kernelmode: 30000000 system_cpu_usage: 9492140000000 online_cpus: 4 throttling_data: periods: 0 throttled_periods: 0 throttled_time: 0 404: description: "no such 
container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "stream" in: "query" description: | Stream the output. If false, the stats will be output once and then it will disconnect. type: "boolean" default: true - name: "one-shot" in: "query" description: | Only get a single stat instead of waiting for 2 cycles. Must be used with `stream=false`. type: "boolean" default: false tags: ["Container"] /containers/{id}/resize: post: summary: "Resize a container TTY" description: "Resize the TTY for a container." operationId: "ContainerResize" consumes: - "application/octet-stream" produces: - "text/plain" responses: 200: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "cannot resize container" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "h" in: "query" description: "Height of the TTY session in characters" type: "integer" - name: "w" in: "query" description: "Width of the TTY session in characters" type: "integer" tags: ["Container"] /containers/{id}/start: post: summary: "Start a container" operationId: "ContainerStart" responses: 204: description: "no error" 304: description: "container already started" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "detachKeys" in: "query" description: | Override the key sequence for detaching a container. Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,` or `_`. 
type: "string" tags: ["Container"] /containers/{id}/stop: post: summary: "Stop a container" operationId: "ContainerStop" responses: 204: description: "no error" 304: description: "container already stopped" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "t" in: "query" description: "Number of seconds to wait before killing the container" type: "integer" tags: ["Container"] /containers/{id}/restart: post: summary: "Restart a container" operationId: "ContainerRestart" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "t" in: "query" description: "Number of seconds to wait before killing the container" type: "integer" tags: ["Container"] /containers/{id}/kill: post: summary: "Kill a container" description: | Send a POSIX signal to a container, defaulting to killing to the container. operationId: "ContainerKill" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "container is not running" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "Container d37cde0fe4ad63c3a7252023b2f9800282894247d145cb5933ddf6e52cc03a28 is not running" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "signal" in: "query" description: "Signal to send to the container as an integer or string (e.g. `SIGINT`)" type: "string" default: "SIGKILL" tags: ["Container"] /containers/{id}/update: post: summary: "Update a container" description: | Change various configuration options of a container without having to recreate it. operationId: "ContainerUpdate" consumes: ["application/json"] produces: ["application/json"] responses: 200: description: "The container has been updated." 
schema: type: "object" title: "ContainerUpdateResponse" description: "OK response to ContainerUpdate operation" properties: Warnings: type: "array" items: type: "string" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "update" in: "body" required: true schema: allOf: - $ref: "#/definitions/Resources" - type: "object" properties: RestartPolicy: $ref: "#/definitions/RestartPolicy" example: BlkioWeight: 300 CpuShares: 512 CpuPeriod: 100000 CpuQuota: 50000 CpuRealtimePeriod: 1000000 CpuRealtimeRuntime: 10000 CpusetCpus: "0,1" CpusetMems: "0" Memory: 314572800 MemorySwap: 514288000 MemoryReservation: 209715200 KernelMemory: 52428800 RestartPolicy: MaximumRetryCount: 4 Name: "on-failure" tags: ["Container"] /containers/{id}/rename: post: summary: "Rename a container" operationId: "ContainerRename" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "name already in use" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "name" in: "query" required: true description: "New name for the container" type: "string" tags: ["Container"] /containers/{id}/pause: post: summary: "Pause a container" description: | Use the freezer cgroup to suspend all processes in a container. Traditionally, when suspending a process the `SIGSTOP` signal is used, which is observable by the process being suspended. With the freezer cgroup the process is unaware, and unable to capture, that it is being suspended, and subsequently resumed. operationId: "ContainerPause" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/unpause: post: summary: "Unpause a container" description: "Resume a container which has been paused." operationId: "ContainerUnpause" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/attach: post: summary: "Attach to a container" description: | Attach to a container to read its output or send it input. You can attach to the same container multiple times and you can reattach to containers that have been detached. Either the `stream` or `logs` parameter must be `true` for this endpoint to do anything. See the [documentation for the `docker attach` command](https://docs.docker.com/engine/reference/commandline/attach/) for more details. 
### Hijacking This endpoint hijacks the HTTP connection to transport `stdin`, `stdout`, and `stderr` on the same socket. This is the response from the daemon for an attach request: ``` HTTP/1.1 200 OK Content-Type: application/vnd.docker.raw-stream [STREAM] ``` After the headers and two new lines, the TCP connection can now be used for raw, bidirectional communication between the client and server. To hint potential proxies about connection hijacking, the Docker client can also optionally send connection upgrade headers. For example, the client sends this request to upgrade the connection: ``` POST /containers/16253994b7c4/attach?stream=1&stdout=1 HTTP/1.1 Upgrade: tcp Connection: Upgrade ``` The Docker daemon will respond with a `101 UPGRADED` response, and will similarly follow with the raw stream: ``` HTTP/1.1 101 UPGRADED Content-Type: application/vnd.docker.raw-stream Connection: Upgrade Upgrade: tcp [STREAM] ``` ### Stream format When the TTY setting is disabled in [`POST /containers/create`](#operation/ContainerCreate), the stream over the hijacked connected is multiplexed to separate out `stdout` and `stderr`. The stream consists of a series of frames, each containing a header and a payload. The header contains the information which the stream writes (`stdout` or `stderr`). It also contains the size of the associated frame encoded in the last four bytes (`uint32`). It is encoded on the first eight bytes like this: ```go header := [8]byte{STREAM_TYPE, 0, 0, 0, SIZE1, SIZE2, SIZE3, SIZE4} ``` `STREAM_TYPE` can be: - 0: `stdin` (is written on `stdout`) - 1: `stdout` - 2: `stderr` `SIZE1, SIZE2, SIZE3, SIZE4` are the four bytes of the `uint32` size encoded as big endian. Following the header is the payload, which is the specified number of bytes of `STREAM_TYPE`. The simplest way to implement this protocol is the following: 1. Read 8 bytes. 2. Choose `stdout` or `stderr` depending on the first byte. 3. Extract the frame size from the last four bytes. 4. Read the extracted size and output it on the correct output. 5. Goto 1. ### Stream format when using a TTY When the TTY setting is enabled in [`POST /containers/create`](#operation/ContainerCreate), the stream is not multiplexed. The data exchanged over the hijacked connection is simply the raw data from the process PTY and client's `stdin`. operationId: "ContainerAttach" produces: - "application/vnd.docker.raw-stream" responses: 101: description: "no error, hints proxy about hijacking" 200: description: "no error, no upgrade header found" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "detachKeys" in: "query" description: | Override the key sequence for detaching a container.Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,` or `_`. type: "string" - name: "logs" in: "query" description: | Replay previous logs from the container. This is useful for attaching to a container that has started and you want to output everything since the container started. If `stream` is also enabled, once all the previous output has been returned, it will seamlessly transition into streaming current output. 
type: "boolean" default: false - name: "stream" in: "query" description: | Stream attached streams from the time the request was made onwards. type: "boolean" default: false - name: "stdin" in: "query" description: "Attach to `stdin`" type: "boolean" default: false - name: "stdout" in: "query" description: "Attach to `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Attach to `stderr`" type: "boolean" default: false tags: ["Container"] /containers/{id}/attach/ws: get: summary: "Attach to a container via a websocket" operationId: "ContainerAttachWebsocket" responses: 101: description: "no error, hints proxy about hijacking" 200: description: "no error, no upgrade header found" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "detachKeys" in: "query" description: | Override the key sequence for detaching a container.Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,`, or `_`. type: "string" - name: "logs" in: "query" description: "Return logs" type: "boolean" default: false - name: "stream" in: "query" description: "Return stream" type: "boolean" default: false - name: "stdin" in: "query" description: "Attach to `stdin`" type: "boolean" default: false - name: "stdout" in: "query" description: "Attach to `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Attach to `stderr`" type: "boolean" default: false tags: ["Container"] /containers/{id}/wait: post: summary: "Wait for a container" description: "Block until a container stops, then returns the exit code." operationId: "ContainerWait" produces: ["application/json"] responses: 200: description: "The container has exit." schema: type: "object" title: "ContainerWaitResponse" description: "OK response to ContainerWait operation" required: [StatusCode] properties: StatusCode: description: "Exit code of the container" type: "integer" x-nullable: false Error: description: "container waiting error, if any" type: "object" properties: Message: description: "Details of an error" type: "string" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "condition" in: "query" description: | Wait until a container state reaches the given condition, either 'not-running' (default), 'next-exit', or 'removed'. type: "string" default: "not-running" tags: ["Container"] /containers/{id}: delete: summary: "Remove a container" operationId: "ContainerDelete" responses: 204: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "conflict" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: | You cannot remove a running container: c2ada9df5af8. 
Stop the container before attempting removal or force remove 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "v" in: "query" description: "Remove anonymous volumes associated with the container." type: "boolean" default: false - name: "force" in: "query" description: "If the container is running, kill it before removing it." type: "boolean" default: false - name: "link" in: "query" description: "Remove the specified link associated with the container." type: "boolean" default: false tags: ["Container"] /containers/{id}/archive: head: summary: "Get information about files in a container" description: | A response header `X-Docker-Container-Path-Stat` is returned, containing a base64 - encoded JSON object with some filesystem header information about the path. operationId: "ContainerArchiveInfo" responses: 200: description: "no error" headers: X-Docker-Container-Path-Stat: type: "string" description: | A base64 - encoded JSON object with some filesystem header information about the path 400: description: "Bad parameter" schema: allOf: - $ref: "#/definitions/ErrorResponse" - type: "object" properties: message: description: | The error message. Either "must specify path parameter" (path cannot be empty) or "not a directory" (path was asserted to be a directory but exists as a file). type: "string" x-nullable: false 404: description: "Container or path does not exist" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "path" in: "query" required: true description: "Resource in the container’s filesystem to archive." type: "string" tags: ["Container"] get: summary: "Get an archive of a filesystem resource in a container" description: "Get a tar archive of a resource in the filesystem of container id." operationId: "ContainerArchive" produces: ["application/x-tar"] responses: 200: description: "no error" 400: description: "Bad parameter" schema: allOf: - $ref: "#/definitions/ErrorResponse" - type: "object" properties: message: description: | The error message. Either "must specify path parameter" (path cannot be empty) or "not a directory" (path was asserted to be a directory but exists as a file). type: "string" x-nullable: false 404: description: "Container or path does not exist" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "path" in: "query" required: true description: "Resource in the container’s filesystem to archive." type: "string" tags: ["Container"] put: summary: "Extract an archive of files or folders to a directory in a container" description: "Upload a tar archive to be extracted to a path in the filesystem of container id." 
operationId: "PutContainerArchive" consumes: ["application/x-tar", "application/octet-stream"] responses: 200: description: "The content was extracted successfully" 400: description: "Bad parameter" schema: $ref: "#/definitions/ErrorResponse" 403: description: "Permission denied, the volume or container rootfs is marked as read-only." schema: $ref: "#/definitions/ErrorResponse" 404: description: "No such container or path does not exist inside the container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "path" in: "query" required: true description: "Path to a directory in the container to extract the archive’s contents into. " type: "string" - name: "noOverwriteDirNonDir" in: "query" description: | If `1`, `true`, or `True` then it will be an error if unpacking the given content would cause an existing directory to be replaced with a non-directory and vice versa. type: "string" - name: "copyUIDGID" in: "query" description: | If `1`, `true`, then it will copy UID/GID maps to the dest file or dir type: "string" - name: "inputStream" in: "body" required: true description: | The input stream must be a tar archive compressed with one of the following algorithms: `identity` (no compression), `gzip`, `bzip2`, or `xz`. schema: type: "string" format: "binary" tags: ["Container"] /containers/prune: post: summary: "Delete stopped containers" produces: - "application/json" operationId: "ContainerPrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `until=<timestamp>` Prune containers created before this timestamp. The `<timestamp>` can be Unix timestamps, date formatted timestamps, or Go duration strings (e.g. `10m`, `1h30m`) computed relative to the daemon machine’s time. - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune containers with (or without, in case `label!=...` is used) the specified labels. type: "string" responses: 200: description: "No error" schema: type: "object" title: "ContainerPruneResponse" properties: ContainersDeleted: description: "Container IDs that were deleted" type: "array" items: type: "string" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Container"] /images/json: get: summary: "List Images" description: "Returns a list of images on the server. Note that it uses a different, smaller representation of an image than inspecting a single image." 
operationId: "ImageList" produces: - "application/json" responses: 200: description: "Summary image data for the images matching the query" schema: type: "array" items: $ref: "#/definitions/ImageSummary" examples: application/json: - Id: "sha256:e216a057b1cb1efc11f8a268f37ef62083e70b1b38323ba252e25ac88904a7e8" ParentId: "" RepoTags: - "ubuntu:12.04" - "ubuntu:precise" RepoDigests: - "ubuntu@sha256:992069aee4016783df6345315302fa59681aae51a8eeb2f889dea59290f21787" Created: 1474925151 Size: 103579269 VirtualSize: 103579269 SharedSize: 0 Labels: {} Containers: 2 - Id: "sha256:3e314f95dcace0f5e4fd37b10862fe8398e3c60ed36600bc0ca5fda78b087175" ParentId: "" RepoTags: - "ubuntu:12.10" - "ubuntu:quantal" RepoDigests: - "ubuntu@sha256:002fba3e3255af10be97ea26e476692a7ebed0bb074a9ab960b2e7a1526b15d7" - "ubuntu@sha256:68ea0200f0b90df725d99d823905b04cf844f6039ef60c60bf3e019915017bd3" Created: 1403128455 Size: 172064416 VirtualSize: 172064416 SharedSize: 0 Labels: {} Containers: 5 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "all" in: "query" description: "Show all images. Only images from a final layer (no children) are shown by default." type: "boolean" default: false - name: "filters" in: "query" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the images list. Available filters: - `before`=(`<image-name>[:<tag>]`, `<image id>` or `<image@digest>`) - `dangling=true` - `label=key` or `label="key=value"` of an image label - `reference`=(`<image-name>[:<tag>]`) - `since`=(`<image-name>[:<tag>]`, `<image id>` or `<image@digest>`) type: "string" - name: "shared-size" in: "query" description: "Compute and show shared size as a `SharedSize` field on each image." type: "boolean" default: false - name: "digests" in: "query" description: "Show digest information as a `RepoDigests` field on each image." type: "boolean" default: false tags: ["Image"] /build: post: summary: "Build an image" description: | Build an image from a tar archive with a `Dockerfile` in it. The `Dockerfile` specifies how the image is built from the tar archive. It is typically in the archive's root, but can be at a different path or have a different name by specifying the `dockerfile` parameter. [See the `Dockerfile` reference for more information](https://docs.docker.com/engine/reference/builder/). The Docker daemon performs a preliminary validation of the `Dockerfile` before starting the build, and returns an error if the syntax is incorrect. After that, each instruction is run one-by-one until the ID of the new image is output. The build is canceled if the client drops the connection by quitting or being killed. operationId: "ImageBuild" consumes: - "application/octet-stream" produces: - "application/json" parameters: - name: "inputStream" in: "body" description: "A tar archive compressed with one of the following algorithms: identity (no compression), gzip, bzip2, xz." schema: type: "string" format: "binary" - name: "dockerfile" in: "query" description: "Path within the build context to the `Dockerfile`. This is ignored if `remote` is specified and points to an external `Dockerfile`." type: "string" default: "Dockerfile" - name: "t" in: "query" description: "A name and optional tag to apply to the image in the `name:tag` format. If you omit the tag the default `latest` value is assumed. You can provide several `t` parameters." 
type: "string" - name: "extrahosts" in: "query" description: "Extra hosts to add to /etc/hosts" type: "string" - name: "remote" in: "query" description: "A Git repository URI or HTTP/HTTPS context URI. If the URI points to a single text file, the file’s contents are placed into a file called `Dockerfile` and the image is built from that file. If the URI points to a tarball, the file is downloaded by the daemon and the contents therein used as the context for the build. If the URI points to a tarball and the `dockerfile` parameter is also specified, there must be a file with the corresponding path inside the tarball." type: "string" - name: "q" in: "query" description: "Suppress verbose build output." type: "boolean" default: false - name: "nocache" in: "query" description: "Do not use the cache when building the image." type: "boolean" default: false - name: "cachefrom" in: "query" description: "JSON array of images used for build cache resolution." type: "string" - name: "pull" in: "query" description: "Attempt to pull the image even if an older image exists locally." type: "string" - name: "rm" in: "query" description: "Remove intermediate containers after a successful build." type: "boolean" default: true - name: "forcerm" in: "query" description: "Always remove intermediate containers, even upon failure." type: "boolean" default: false - name: "memory" in: "query" description: "Set memory limit for build." type: "integer" - name: "memswap" in: "query" description: "Total memory (memory + swap). Set as `-1` to disable swap." type: "integer" - name: "cpushares" in: "query" description: "CPU shares (relative weight)." type: "integer" - name: "cpusetcpus" in: "query" description: "CPUs in which to allow execution (e.g., `0-3`, `0,1`)." type: "string" - name: "cpuperiod" in: "query" description: "The length of a CPU period in microseconds." type: "integer" - name: "cpuquota" in: "query" description: "Microseconds of CPU time that the container can get in a CPU period." type: "integer" - name: "buildargs" in: "query" description: > JSON map of string pairs for build-time variables. Users pass these values at build-time. Docker uses the buildargs as the environment context for commands run via the `Dockerfile` RUN instruction, or for variable expansion in other `Dockerfile` instructions. This is not meant for passing secret values. For example, the build arg `FOO=bar` would become `{"FOO":"bar"}` in JSON. This would result in the query parameter `buildargs={"FOO":"bar"}`. Note that `{"FOO":"bar"}` should be URI component encoded. [Read more about the buildargs instruction.](https://docs.docker.com/engine/reference/builder/#arg) type: "string" - name: "shmsize" in: "query" description: "Size of `/dev/shm` in bytes. The size must be greater than 0. If omitted the system uses 64MB." type: "integer" - name: "squash" in: "query" description: "Squash the resulting images layers into a single layer. *(Experimental release only.)*" type: "boolean" - name: "labels" in: "query" description: "Arbitrary key/value labels to set on the image, as a JSON map of string pairs." type: "string" - name: "networkmode" in: "query" description: | Sets the networking mode for the run commands during build. Supported standard values are: `bridge`, `host`, `none`, and `container:<name|id>`. Any other value is taken as a custom network's name or ID to which this container should connect to. 
type: "string" - name: "Content-type" in: "header" type: "string" enum: - "application/x-tar" default: "application/x-tar" - name: "X-Registry-Config" in: "header" description: | This is a base64-encoded JSON object with auth configurations for multiple registries that a build may refer to. The key is a registry URL, and the value is an auth configuration object, [as described in the authentication section](#section/Authentication). For example: ``` { "docker.example.com": { "username": "janedoe", "password": "hunter2" }, "https://index.docker.io/v1/": { "username": "mobydock", "password": "conta1n3rize14" } } ``` Only the registry domain name (and port if not the default 443) are required. However, for legacy reasons, the Docker Hub registry must be specified with both a `https://` prefix and a `/v1/` suffix even though Docker will prefer to use the v2 registry API. type: "string" - name: "platform" in: "query" description: "Platform in the format os[/arch[/variant]]" type: "string" default: "" - name: "target" in: "query" description: "Target build stage" type: "string" default: "" - name: "outputs" in: "query" description: "BuildKit output configuration" type: "string" default: "" responses: 200: description: "no error" 400: description: "Bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Image"] /build/prune: post: summary: "Delete builder cache" produces: - "application/json" operationId: "BuildPrune" parameters: - name: "keep-storage" in: "query" description: "Amount of disk space in bytes to keep for cache" type: "integer" format: "int64" - name: "all" in: "query" type: "boolean" description: "Remove all types of build cache" - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the list of build cache objects. Available filters: - `until=<duration>`: duration relative to daemon's time, during which build cache was not used, in Go's duration format (e.g., '24h') - `id=<id>` - `parent=<id>` - `type=<string>` - `description=<string>` - `inuse` - `shared` - `private` responses: 200: description: "No error" schema: type: "object" title: "BuildPruneResponse" properties: CachesDeleted: type: "array" items: description: "ID of build cache object" type: "string" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Image"] /images/create: post: summary: "Create an image" description: "Create an image by either pulling it from a registry or importing it." operationId: "ImageCreate" consumes: - "text/plain" - "application/octet-stream" produces: - "application/json" responses: 200: description: "no error" 404: description: "repository does not exist or no read access" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "fromImage" in: "query" description: "Name of the image to pull. The name may include a tag or digest. This parameter may only be used when pulling an image. The pull is cancelled if the HTTP connection is closed." type: "string" - name: "fromSrc" in: "query" description: "Source to import. The value may be a URL from which the image can be retrieved or `-` to read the image from the request body. This parameter may only be used when importing an image." 
type: "string" - name: "repo" in: "query" description: "Repository name given to an image when it is imported. The repo may include a tag. This parameter may only be used when importing an image." type: "string" - name: "tag" in: "query" description: "Tag or digest. If empty when pulling an image, this causes all tags for the given image to be pulled." type: "string" - name: "message" in: "query" description: "Set commit message for imported image." type: "string" - name: "inputImage" in: "body" description: "Image content if the value `-` has been specified in fromSrc query parameter" schema: type: "string" required: false - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration. Refer to the [authentication section](#section/Authentication) for details. type: "string" - name: "platform" in: "query" description: "Platform in the format os[/arch[/variant]]" type: "string" default: "" tags: ["Image"] /images/{name}/json: get: summary: "Inspect an image" description: "Return low-level information about an image." operationId: "ImageInspect" produces: - "application/json" responses: 200: description: "No error" schema: $ref: "#/definitions/Image" examples: application/json: Id: "sha256:85f05633ddc1c50679be2b16a0479ab6f7637f8884e0cfe0f4d20e1ebb3d6e7c" Container: "cb91e48a60d01f1e27028b4fc6819f4f290b3cf12496c8176ec714d0d390984a" Comment: "" Os: "linux" Architecture: "amd64" Parent: "sha256:91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c" ContainerConfig: Tty: false Hostname: "e611e15f9c9d" Domainname: "" AttachStdout: false PublishService: "" AttachStdin: false OpenStdin: false StdinOnce: false NetworkDisabled: false OnBuild: [] Image: "91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c" User: "" WorkingDir: "" MacAddress: "" AttachStderr: false Labels: com.example.license: "GPL" com.example.version: "1.0" com.example.vendor: "Acme" Env: - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" Cmd: - "/bin/sh" - "-c" - "#(nop) LABEL com.example.vendor=Acme com.example.license=GPL com.example.version=1.0" DockerVersion: "1.9.0-dev" VirtualSize: 188359297 Size: 0 Author: "" Created: "2015-09-10T08:30:53.26995814Z" GraphDriver: Name: "aufs" Data: {} RepoDigests: - "localhost:5000/test/busybox/example@sha256:cbbf2f9a99b47fc460d422812b6a5adff7dfee951d8fa2e4a98caa0382cfbdbf" RepoTags: - "example:1.0" - "example:latest" - "example:stable" Config: Image: "91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c" NetworkDisabled: false OnBuild: [] StdinOnce: false PublishService: "" AttachStdin: false OpenStdin: false Domainname: "" AttachStdout: false Tty: false Hostname: "e611e15f9c9d" Cmd: - "/bin/bash" Env: - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" Labels: com.example.vendor: "Acme" com.example.version: "1.0" com.example.license: "GPL" MacAddress: "" AttachStderr: false WorkingDir: "" User: "" RootFS: Type: "layers" Layers: - "sha256:1834950e52ce4d5a88a1bbd131c537f4d0e56d10ff0dd69e66be3b7dfa9df7e6" - "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such image: someimage (tag: latest)" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or id" type: "string" required: true tags: ["Image"] /images/{name}/history: get: summary: "Get the history of an image" 
description: "Return parent layers of an image." operationId: "ImageHistory" produces: ["application/json"] responses: 200: description: "List of image layers" schema: type: "array" items: type: "object" x-go-name: HistoryResponseItem title: "HistoryResponseItem" description: "individual image layer information in response to ImageHistory operation" required: [Id, Created, CreatedBy, Tags, Size, Comment] properties: Id: type: "string" x-nullable: false Created: type: "integer" format: "int64" x-nullable: false CreatedBy: type: "string" x-nullable: false Tags: type: "array" items: type: "string" Size: type: "integer" format: "int64" x-nullable: false Comment: type: "string" x-nullable: false examples: application/json: - Id: "3db9c44f45209632d6050b35958829c3a2aa256d81b9a7be45b362ff85c54710" Created: 1398108230 CreatedBy: "/bin/sh -c #(nop) ADD file:eb15dbd63394e063b805a3c32ca7bf0266ef64676d5a6fab4801f2e81e2a5148 in /" Tags: - "ubuntu:lucid" - "ubuntu:10.04" Size: 182964289 Comment: "" - Id: "6cfa4d1f33fb861d4d114f43b25abd0ac737509268065cdfd69d544a59c85ab8" Created: 1398108222 CreatedBy: "/bin/sh -c #(nop) MAINTAINER Tianon Gravi <[email protected]> - mkimage-debootstrap.sh -i iproute,iputils-ping,ubuntu-minimal -t lucid.tar.xz lucid http://archive.ubuntu.com/ubuntu/" Tags: [] Size: 0 Comment: "" - Id: "511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158" Created: 1371157430 CreatedBy: "" Tags: - "scratch12:latest" - "scratch:latest" Size: 0 Comment: "Imported from -" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID" type: "string" required: true tags: ["Image"] /images/{name}/push: post: summary: "Push an image" description: | Push an image to a registry. If you wish to push an image on to a private registry, that image must already have a tag which references the registry. For example, `registry.example.com/myimage:latest`. The push is cancelled if the HTTP connection is closed. operationId: "ImagePush" consumes: - "application/octet-stream" responses: 200: description: "No error" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID." type: "string" required: true - name: "tag" in: "query" description: "The tag to associate with the image on the registry." type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration. Refer to the [authentication section](#section/Authentication) for details. type: "string" required: true tags: ["Image"] /images/{name}/tag: post: summary: "Tag an image" description: "Tag an image so that it becomes part of a repository." operationId: "ImageTag" responses: 201: description: "No error" 400: description: "Bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Conflict" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID to tag." type: "string" required: true - name: "repo" in: "query" description: "The repository to tag in. For example, `someuser/someimage`." 
type: "string" - name: "tag" in: "query" description: "The name of the new tag." type: "string" tags: ["Image"] /images/{name}: delete: summary: "Remove an image" description: | Remove an image, along with any untagged parent images that were referenced by that image. Images can't be removed if they have descendant images, are being used by a running container or are being used by a build. operationId: "ImageDelete" produces: ["application/json"] responses: 200: description: "The image was deleted successfully" schema: type: "array" items: $ref: "#/definitions/ImageDeleteResponseItem" examples: application/json: - Untagged: "3e2f21a89f" - Deleted: "3e2f21a89f" - Deleted: "53b4f83ac9" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Conflict" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID" type: "string" required: true - name: "force" in: "query" description: "Remove the image even if it is being used by stopped containers or has other tags" type: "boolean" default: false - name: "noprune" in: "query" description: "Do not delete untagged parent images" type: "boolean" default: false tags: ["Image"] /images/search: get: summary: "Search images" description: "Search for an image on Docker Hub." operationId: "ImageSearch" produces: - "application/json" responses: 200: description: "No error" schema: type: "array" items: type: "object" title: "ImageSearchResponseItem" properties: description: type: "string" is_official: type: "boolean" is_automated: type: "boolean" name: type: "string" star_count: type: "integer" examples: application/json: - description: "" is_official: false is_automated: false name: "wma55/u1210sshd" star_count: 0 - description: "" is_official: false is_automated: false name: "jdswinbank/sshd" star_count: 0 - description: "" is_official: false is_automated: false name: "vgauthier/sshd" star_count: 0 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "term" in: "query" description: "Term to search" type: "string" required: true - name: "limit" in: "query" description: "Maximum number of results to return" type: "integer" - name: "filters" in: "query" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the images list. Available filters: - `is-automated=(true|false)` - `is-official=(true|false)` - `stars=<number>` Matches images that has at least 'number' stars. type: "string" tags: ["Image"] /images/prune: post: summary: "Delete unused images" produces: - "application/json" operationId: "ImagePrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `dangling=<boolean>` When set to `true` (or `1`), prune only unused *and* untagged images. When set to `false` (or `0`), all unused images are pruned. - `until=<string>` Prune images created before this timestamp. The `<timestamp>` can be Unix timestamps, date formatted timestamps, or Go duration strings (e.g. `10m`, `1h30m`) computed relative to the daemon machine’s time. - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune images with (or without, in case `label!=...` is used) the specified labels. 
type: "string" responses: 200: description: "No error" schema: type: "object" title: "ImagePruneResponse" properties: ImagesDeleted: description: "Images that were deleted" type: "array" items: $ref: "#/definitions/ImageDeleteResponseItem" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Image"] /auth: post: summary: "Check auth configuration" description: | Validate credentials for a registry and, if available, get an identity token for accessing the registry without password. operationId: "SystemAuth" consumes: ["application/json"] produces: ["application/json"] responses: 200: description: "An identity token was generated successfully." schema: type: "object" title: "SystemAuthResponse" required: [Status] properties: Status: description: "The status of the authentication" type: "string" x-nullable: false IdentityToken: description: "An opaque token used to authenticate a user after a successful login" type: "string" x-nullable: false examples: application/json: Status: "Login Succeeded" IdentityToken: "9cbaf023786cd7..." 204: description: "No error" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "authConfig" in: "body" description: "Authentication to check" schema: $ref: "#/definitions/AuthConfig" tags: ["System"] /info: get: summary: "Get system information" operationId: "SystemInfo" produces: - "application/json" responses: 200: description: "No error" schema: $ref: "#/definitions/SystemInfo" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /version: get: summary: "Get version" description: "Returns the version of Docker that is running and various information about the system that Docker is running on." operationId: "SystemVersion" produces: ["application/json"] responses: 200: description: "no error" schema: $ref: "#/definitions/SystemVersion" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /_ping: get: summary: "Ping" description: "This is a dummy endpoint you can use to test if the server is accessible." operationId: "SystemPing" produces: ["text/plain"] responses: 200: description: "no error" schema: type: "string" example: "OK" headers: API-Version: type: "string" description: "Max API Version the server supports" Builder-Version: type: "string" description: "Default version of docker image builder" Docker-Experimental: type: "boolean" description: "If the server is running with experimental mode enabled" Cache-Control: type: "string" default: "no-cache, no-store, must-revalidate" Pragma: type: "string" default: "no-cache" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" headers: Cache-Control: type: "string" default: "no-cache, no-store, must-revalidate" Pragma: type: "string" default: "no-cache" tags: ["System"] head: summary: "Ping" description: "This is a dummy endpoint you can use to test if the server is accessible." 
operationId: "SystemPingHead" produces: ["text/plain"] responses: 200: description: "no error" schema: type: "string" example: "(empty)" headers: API-Version: type: "string" description: "Max API Version the server supports" Builder-Version: type: "string" description: "Default version of docker image builder" Docker-Experimental: type: "boolean" description: "If the server is running with experimental mode enabled" Cache-Control: type: "string" default: "no-cache, no-store, must-revalidate" Pragma: type: "string" default: "no-cache" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /commit: post: summary: "Create a new image from a container" operationId: "ImageCommit" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "containerConfig" in: "body" description: "The container configuration" schema: $ref: "#/definitions/ContainerConfig" - name: "container" in: "query" description: "The ID or name of the container to commit" type: "string" - name: "repo" in: "query" description: "Repository name for the created image" type: "string" - name: "tag" in: "query" description: "Tag name for the create image" type: "string" - name: "comment" in: "query" description: "Commit message" type: "string" - name: "author" in: "query" description: "Author of the image (e.g., `John Hannibal Smith <[email protected]>`)" type: "string" - name: "pause" in: "query" description: "Whether to pause the container before committing" type: "boolean" default: true - name: "changes" in: "query" description: "`Dockerfile` instructions to apply while committing" type: "string" tags: ["Image"] /events: get: summary: "Monitor events" description: | Stream real-time events from the server. Various objects within Docker report events when something happens to them. 
Containers report these events: `attach`, `commit`, `copy`, `create`, `destroy`, `detach`, `die`, `exec_create`, `exec_detach`, `exec_start`, `exec_die`, `export`, `health_status`, `kill`, `oom`, `pause`, `rename`, `resize`, `restart`, `start`, `stop`, `top`, `unpause`, `update`, and `prune` Images report these events: `delete`, `import`, `load`, `pull`, `push`, `save`, `tag`, `untag`, and `prune` Volumes report these events: `create`, `mount`, `unmount`, `destroy`, and `prune` Networks report these events: `create`, `connect`, `disconnect`, `destroy`, `update`, `remove`, and `prune` The Docker daemon reports these events: `reload` Services report these events: `create`, `update`, and `remove` Nodes report these events: `create`, `update`, and `remove` Secrets report these events: `create`, `update`, and `remove` Configs report these events: `create`, `update`, and `remove` The Builder reports `prune` events operationId: "SystemEvents" produces: - "application/json" responses: 200: description: "no error" schema: type: "object" title: "SystemEventsResponse" properties: Type: description: "The type of object emitting the event" type: "string" Action: description: "The type of event" type: "string" Actor: type: "object" properties: ID: description: "The ID of the object emitting the event" type: "string" Attributes: description: "Various key/value attributes of the object, depending on its type" type: "object" additionalProperties: type: "string" time: description: "Timestamp of event" type: "integer" timeNano: description: "Timestamp of event, with nanosecond accuracy" type: "integer" format: "int64" examples: application/json: Type: "container" Action: "create" Actor: ID: "ede54ee1afda366ab42f824e8a5ffd195155d853ceaec74a927f249ea270c743" Attributes: com.example.some-label: "some-label-value" image: "alpine" name: "my-container" time: 1461943101 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "since" in: "query" description: "Show events created since this timestamp then stream new events." type: "string" - name: "until" in: "query" description: "Show events created until this timestamp then stop streaming." type: "string" - name: "filters" in: "query" description: | A JSON encoded value of filters (a `map[string][]string`) to process on the event list. 
Available filters: - `config=<string>` config name or ID - `container=<string>` container name or ID - `daemon=<string>` daemon name or ID - `event=<string>` event type - `image=<string>` image name or ID - `label=<string>` image or container label - `network=<string>` network name or ID - `node=<string>` node ID - `plugin`=<string> plugin name or ID - `scope`=<string> local or swarm - `secret=<string>` secret name or ID - `service=<string>` service name or ID - `type=<string>` object to filter by, one of `container`, `image`, `volume`, `network`, `daemon`, `plugin`, `node`, `service`, `secret` or `config` - `volume=<string>` volume name type: "string" tags: ["System"] /system/df: get: summary: "Get data usage information" operationId: "SystemDataUsage" responses: 200: description: "no error" schema: type: "object" title: "SystemDataUsageResponse" properties: LayersSize: type: "integer" format: "int64" Images: type: "array" items: $ref: "#/definitions/ImageSummary" Containers: type: "array" items: $ref: "#/definitions/ContainerSummary" Volumes: type: "array" items: $ref: "#/definitions/Volume" BuildCache: type: "array" items: $ref: "#/definitions/BuildCache" example: LayersSize: 1092588 Images: - Id: "sha256:2b8fd9751c4c0f5dd266fcae00707e67a2545ef34f9a29354585f93dac906749" ParentId: "" RepoTags: - "busybox:latest" RepoDigests: - "busybox@sha256:a59906e33509d14c036c8678d687bd4eec81ed7c4b8ce907b888c607f6a1e0e6" Created: 1466724217 Size: 1092588 SharedSize: 0 VirtualSize: 1092588 Labels: {} Containers: 1 Containers: - Id: "e575172ed11dc01bfce087fb27bee502db149e1a0fad7c296ad300bbff178148" Names: - "/top" Image: "busybox" ImageID: "sha256:2b8fd9751c4c0f5dd266fcae00707e67a2545ef34f9a29354585f93dac906749" Command: "top" Created: 1472592424 Ports: [] SizeRootFs: 1092588 Labels: {} State: "exited" Status: "Exited (0) 56 minutes ago" HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: IPAMConfig: null Links: null Aliases: null NetworkID: "d687bc59335f0e5c9ee8193e5612e8aee000c8c62ea170cfb99c098f95899d92" EndpointID: "8ed5115aeaad9abb174f68dcf135b49f11daf597678315231a32ca28441dec6a" Gateway: "172.18.0.1" IPAddress: "172.18.0.2" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:12:00:02" Mounts: [] Volumes: - Name: "my-volume" Driver: "local" Mountpoint: "/var/lib/docker/volumes/my-volume/_data" Labels: null Scope: "local" Options: null UsageData: Size: 10920104 RefCount: 2 BuildCache: - ID: "hw53o5aio51xtltp5xjp8v7fx" Parent: "" Type: "regular" Description: "pulled from docker.io/library/debian@sha256:234cb88d3020898631af0ccbbcca9a66ae7306ecd30c9720690858c1b007d2a0" InUse: false Shared: true Size: 0 CreatedAt: "2021-06-28T13:31:01.474619385Z" LastUsedAt: "2021-07-07T22:02:32.738075951Z" UsageCount: 26 - ID: "ndlpt0hhvkqcdfkputsk4cq9c" Parent: "hw53o5aio51xtltp5xjp8v7fx" Type: "regular" Description: "mount / from exec /bin/sh -c echo 'Binary::apt::APT::Keep-Downloaded-Packages \"true\";' > /etc/apt/apt.conf.d/keep-cache" InUse: false Shared: true Size: 51 CreatedAt: "2021-06-28T13:31:03.002625487Z" LastUsedAt: "2021-07-07T22:02:32.773909517Z" UsageCount: 26 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "type" in: "query" description: | Object types, for which to compute and return data. 
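The event filters listed above can be exercised with a streaming client: the daemon keeps the connection open and writes one JSON object per event. A sketch, assuming the default unix socket and API `v1.41`, that watches container `start` and `stop` events:

```go
package main

import (
	"context"
	"encoding/json"
	"fmt"
	"net"
	"net/http"
	"net/url"
)

func main() {
	cli := &http.Client{Transport: &http.Transport{
		DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
			return (&net.Dialer{}).DialContext(ctx, "unix", "/var/run/docker.sock")
		},
	}}

	// Only watch container start/stop events.
	filters, _ := json.Marshal(map[string][]string{
		"type":  {"container"},
		"event": {"start", "stop"},
	})
	q := url.Values{"filters": {string(filters)}}

	resp, err := cli.Get("http://docker/v1.41/events?" + q.Encode())
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// The daemon streams one JSON object per event until the connection closes.
	dec := json.NewDecoder(resp.Body)
	for {
		var ev struct {
			Type   string
			Action string
			Actor  struct {
				ID         string
				Attributes map[string]string
			}
		}
		if err := dec.Decode(&ev); err != nil {
			break
		}
		fmt.Printf("%s %s %s (%s)\n", ev.Type, ev.Action, ev.Actor.ID, ev.Actor.Attributes["name"])
	}
}
```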
type: "array" collectionFormat: multi items: type: "string" enum: ["container", "image", "volume", "build-cache"] tags: ["System"] /images/{name}/get: get: summary: "Export an image" description: | Get a tarball containing all images and metadata for a repository. If `name` is a specific name and tag (e.g. `ubuntu:latest`), then only that image (and its parents) are returned. If `name` is an image ID, similarly only that image (and its parents) are returned, but with the exclusion of the `repositories` file in the tarball, as there were no image names referenced. ### Image tarball format An image tarball contains one directory per image layer (named using its long ID), each containing these files: - `VERSION`: currently `1.0` - the file format version - `json`: detailed layer information, similar to `docker inspect layer_id` - `layer.tar`: A tarfile containing the filesystem changes in this layer The `layer.tar` file contains `aufs` style `.wh..wh.aufs` files and directories for storing attribute changes and deletions. If the tarball defines a repository, the tarball should also include a `repositories` file at the root that contains a list of repository and tag names mapped to layer IDs. ```json { "hello-world": { "latest": "565a9d68a73f6706862bfe8409a7f659776d4d60a8d096eb4a3cbce6999cc2a1" } } ``` operationId: "ImageGet" produces: - "application/x-tar" responses: 200: description: "no error" schema: type: "string" format: "binary" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID" type: "string" required: true tags: ["Image"] /images/get: get: summary: "Export several images" description: | Get a tarball containing all images and metadata for several image repositories. For each value of the `names` parameter: if it is a specific name and tag (e.g. `ubuntu:latest`), then only that image (and its parents) are returned; if it is an image ID, similarly only that image (and its parents) are returned and there would be no names referenced in the 'repositories' file for this image ID. For details on the format, see the [export image endpoint](#operation/ImageGet). operationId: "ImageGetAll" produces: - "application/x-tar" responses: 200: description: "no error" schema: type: "string" format: "binary" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "names" in: "query" description: "Image names to filter by" type: "array" items: type: "string" tags: ["Image"] /images/load: post: summary: "Import images" description: | Load a set of images and tags into a repository. For details on the format, see the [export image endpoint](#operation/ImageGet). operationId: "ImageLoad" consumes: - "application/x-tar" produces: - "application/json" responses: 200: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "imagesTarball" in: "body" description: "Tar archive containing images" schema: type: "string" format: "binary" - name: "quiet" in: "query" description: "Suppress progress details during load." type: "boolean" default: false tags: ["Image"] /containers/{id}/exec: post: summary: "Create an exec instance" description: "Run a command inside a running container." 
operationId: "ContainerExec" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "container is paused" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "execConfig" in: "body" description: "Exec configuration" schema: type: "object" properties: AttachStdin: type: "boolean" description: "Attach to `stdin` of the exec command." AttachStdout: type: "boolean" description: "Attach to `stdout` of the exec command." AttachStderr: type: "boolean" description: "Attach to `stderr` of the exec command." DetachKeys: type: "string" description: | Override the key sequence for detaching a container. Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,` or `_`. Tty: type: "boolean" description: "Allocate a pseudo-TTY." Env: description: | A list of environment variables in the form `["VAR=value", ...]`. type: "array" items: type: "string" Cmd: type: "array" description: "Command to run, as a string or array of strings." items: type: "string" Privileged: type: "boolean" description: "Runs the exec process with extended privileges." default: false User: type: "string" description: | The user, and optionally, group to run the exec process inside the container. Format is one of: `user`, `user:group`, `uid`, or `uid:gid`. WorkingDir: type: "string" description: | The working directory for the exec process inside the container. example: AttachStdin: false AttachStdout: true AttachStderr: true DetachKeys: "ctrl-p,ctrl-q" Tty: false Cmd: - "date" Env: - "FOO=bar" - "BAZ=quux" required: true - name: "id" in: "path" description: "ID or name of container" type: "string" required: true tags: ["Exec"] /exec/{id}/start: post: summary: "Start an exec instance" description: | Starts a previously set up exec instance. If detach is true, this endpoint returns immediately after starting the command. Otherwise, it sets up an interactive session with the command. operationId: "ExecStart" consumes: - "application/json" produces: - "application/vnd.docker.raw-stream" responses: 200: description: "No error" 404: description: "No such exec instance" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Container is stopped or paused" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "execStartConfig" in: "body" schema: type: "object" properties: Detach: type: "boolean" description: "Detach from the command." Tty: type: "boolean" description: "Allocate a pseudo-TTY." example: Detach: false Tty: false - name: "id" in: "path" description: "Exec instance ID" required: true type: "string" tags: ["Exec"] /exec/{id}/resize: post: summary: "Resize an exec instance" description: | Resize the TTY session used by an exec instance. This endpoint only works if `tty` was specified as part of creating and starting the exec instance. 
operationId: "ExecResize" responses: 201: description: "No error" 404: description: "No such exec instance" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Exec instance ID" required: true type: "string" - name: "h" in: "query" description: "Height of the TTY session in characters" type: "integer" - name: "w" in: "query" description: "Width of the TTY session in characters" type: "integer" tags: ["Exec"] /exec/{id}/json: get: summary: "Inspect an exec instance" description: "Return low-level information about an exec instance." operationId: "ExecInspect" produces: - "application/json" responses: 200: description: "No error" schema: type: "object" title: "ExecInspectResponse" properties: CanRemove: type: "boolean" DetachKeys: type: "string" ID: type: "string" Running: type: "boolean" ExitCode: type: "integer" ProcessConfig: $ref: "#/definitions/ProcessConfig" OpenStdin: type: "boolean" OpenStderr: type: "boolean" OpenStdout: type: "boolean" ContainerID: type: "string" Pid: type: "integer" description: "The system process ID for the exec process." examples: application/json: CanRemove: false ContainerID: "b53ee82b53a40c7dca428523e34f741f3abc51d9f297a14ff874bf761b995126" DetachKeys: "" ExitCode: 2 ID: "f33bbfb39f5b142420f4759b2348913bd4a8d1a6d7fd56499cb41a1bb91d7b3b" OpenStderr: true OpenStdin: true OpenStdout: true ProcessConfig: arguments: - "-c" - "exit 2" entrypoint: "sh" privileged: false tty: true user: "1000" Running: false Pid: 42000 404: description: "No such exec instance" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Exec instance ID" required: true type: "string" tags: ["Exec"] /volumes: get: summary: "List volumes" operationId: "VolumeList" produces: ["application/json"] responses: 200: description: "Summary volume data that matches the query" schema: type: "object" title: "VolumeListResponse" description: "Volume list response" required: [Volumes, Warnings] properties: Volumes: type: "array" x-nullable: false description: "List of volumes" items: $ref: "#/definitions/Volume" Warnings: type: "array" x-nullable: false description: | Warnings that occurred when fetching the list of volumes. items: type: "string" examples: application/json: Volumes: - CreatedAt: "2017-07-19T12:00:26Z" Name: "tardis" Driver: "local" Mountpoint: "/var/lib/docker/volumes/tardis" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Scope: "local" Options: device: "tmpfs" o: "size=100m,uid=1000" type: "tmpfs" Warnings: [] 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" description: | JSON encoded value of the filters (a `map[string][]string`) to process on the volumes list. Available filters: - `dangling=<boolean>` When set to `true` (or `1`), returns all volumes that are not in use by a container. When set to `false` (or `0`), only volumes that are in use by one or more containers are returned. - `driver=<volume-driver-name>` Matches volumes based on their driver. - `label=<key>` or `label=<key>:<value>` Matches volumes based on the presence of a `label` alone or a `label` and a value. - `name=<volume-name>` Matches all or part of a volume name. 
type: "string" format: "json" tags: ["Volume"] /volumes/create: post: summary: "Create a volume" operationId: "VolumeCreate" consumes: ["application/json"] produces: ["application/json"] responses: 201: description: "The volume was created successfully" schema: $ref: "#/definitions/Volume" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "volumeConfig" in: "body" required: true description: "Volume configuration" schema: type: "object" description: "Volume configuration" title: "VolumeConfig" properties: Name: description: | The new volume's name. If not specified, Docker generates a name. type: "string" x-nullable: false Driver: description: "Name of the volume driver to use." type: "string" default: "local" x-nullable: false DriverOpts: description: | A mapping of driver options and values. These options are passed directly to the driver and are driver specific. type: "object" additionalProperties: type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" example: Name: "tardis" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Driver: "custom" tags: ["Volume"] /volumes/{name}: get: summary: "Inspect a volume" operationId: "VolumeInspect" produces: ["application/json"] responses: 200: description: "No error" schema: $ref: "#/definitions/Volume" 404: description: "No such volume" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" required: true description: "Volume name or ID" type: "string" tags: ["Volume"] delete: summary: "Remove a volume" description: "Instruct the driver to remove the volume." operationId: "VolumeDelete" responses: 204: description: "The volume was removed" 404: description: "No such volume or volume driver" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Volume is in use and cannot be removed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" required: true description: "Volume name or ID" type: "string" - name: "force" in: "query" description: "Force the removal of the volume" type: "boolean" default: false tags: ["Volume"] /volumes/prune: post: summary: "Delete unused volumes" produces: - "application/json" operationId: "VolumePrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune volumes with (or without, in case `label!=...` is used) the specified labels. type: "string" responses: 200: description: "No error" schema: type: "object" title: "VolumePruneResponse" properties: VolumesDeleted: description: "Volumes that were deleted" type: "array" items: type: "string" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Volume"] /networks: get: summary: "List networks" description: | Returns a list of networks. For details on the format, see the [network inspect endpoint](#operation/NetworkInspect). Note that it uses a different, smaller representation of a network than inspecting a single network. 
For example, the list of containers attached to the network is not propagated in API versions 1.28 and up. operationId: "NetworkList" produces: - "application/json" responses: 200: description: "No error" schema: type: "array" items: $ref: "#/definitions/Network" examples: application/json: - Name: "bridge" Id: "f2de39df4171b0dc801e8002d1d999b77256983dfc63041c0f34030aa3977566" Created: "2016-10-19T06:21:00.416543526Z" Scope: "local" Driver: "bridge" EnableIPv6: false Internal: false Attachable: false Ingress: false IPAM: Driver: "default" Config: - Subnet: "172.17.0.0/16" Options: com.docker.network.bridge.default_bridge: "true" com.docker.network.bridge.enable_icc: "true" com.docker.network.bridge.enable_ip_masquerade: "true" com.docker.network.bridge.host_binding_ipv4: "0.0.0.0" com.docker.network.bridge.name: "docker0" com.docker.network.driver.mtu: "1500" - Name: "none" Id: "e086a3893b05ab69242d3c44e49483a3bbbd3a26b46baa8f61ab797c1088d794" Created: "0001-01-01T00:00:00Z" Scope: "local" Driver: "null" EnableIPv6: false Internal: false Attachable: false Ingress: false IPAM: Driver: "default" Config: [] Containers: {} Options: {} - Name: "host" Id: "13e871235c677f196c4e1ecebb9dc733b9b2d2ab589e30c539efeda84a24215e" Created: "0001-01-01T00:00:00Z" Scope: "local" Driver: "host" EnableIPv6: false Internal: false Attachable: false Ingress: false IPAM: Driver: "default" Config: [] Containers: {} Options: {} 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" description: | JSON encoded value of the filters (a `map[string][]string`) to process on the networks list. Available filters: - `dangling=<boolean>` When set to `true` (or `1`), returns all networks that are not in use by a container. When set to `false` (or `0`), only networks that are in use by one or more containers are returned. - `driver=<driver-name>` Matches a network's driver. - `id=<network-id>` Matches all or part of a network ID. - `label=<key>` or `label=<key>=<value>` of a network label. - `name=<network-name>` Matches all or part of a network name. - `scope=["swarm"|"global"|"local"]` Filters networks by scope (`swarm`, `global`, or `local`). - `type=["custom"|"builtin"]` Filters networks by type. The `custom` keyword returns all user-defined networks. 
type: "string" tags: ["Network"] /networks/{id}: get: summary: "Inspect a network" operationId: "NetworkInspect" produces: - "application/json" responses: 200: description: "No error" schema: $ref: "#/definitions/Network" 404: description: "Network not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" - name: "verbose" in: "query" description: "Detailed inspect output for troubleshooting" type: "boolean" default: false - name: "scope" in: "query" description: "Filter the network by scope (swarm, global, or local)" type: "string" tags: ["Network"] delete: summary: "Remove a network" operationId: "NetworkDelete" responses: 204: description: "No error" 403: description: "operation not supported for pre-defined networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such network" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" tags: ["Network"] /networks/create: post: summary: "Create a network" operationId: "NetworkCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "No error" schema: type: "object" title: "NetworkCreateResponse" properties: Id: description: "The ID of the created network." type: "string" Warning: type: "string" example: Id: "22be93d5babb089c5aab8dbc369042fad48ff791584ca2da2100db837a1c7c30" Warning: "" 403: description: "operation not supported for pre-defined networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "plugin not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "networkConfig" in: "body" description: "Network configuration" required: true schema: type: "object" required: ["Name"] properties: Name: description: "The network's name." type: "string" CheckDuplicate: description: | Check for networks with duplicate names. Since Network is primarily keyed based on a random ID and not on the name, and network name is strictly a user-friendly alias to the network which is uniquely identified using ID, there is no guaranteed way to check for duplicates. CheckDuplicate is there to provide a best effort checking of any networks which has the same name but it is not guaranteed to catch all name collisions. type: "boolean" Driver: description: "Name of the network driver plugin to use." type: "string" default: "bridge" Internal: description: "Restrict external access to the network." type: "boolean" Attachable: description: | Globally scoped network is manually attachable by regular containers from workers in swarm mode. type: "boolean" Ingress: description: | Ingress network is the network which provides the routing-mesh in swarm mode. type: "boolean" IPAM: description: "Optional custom IP scheme for the network." $ref: "#/definitions/IPAM" EnableIPv6: description: "Enable IPv6 on the network." type: "boolean" Options: description: "Network specific options to be used by the drivers." type: "object" additionalProperties: type: "string" Labels: description: "User-defined key/value metadata." 
type: "object" additionalProperties: type: "string" example: Name: "isolated_nw" CheckDuplicate: false Driver: "bridge" EnableIPv6: true IPAM: Driver: "default" Config: - Subnet: "172.20.0.0/16" IPRange: "172.20.10.0/24" Gateway: "172.20.10.11" - Subnet: "2001:db8:abcd::/64" Gateway: "2001:db8:abcd::1011" Options: foo: "bar" Internal: true Attachable: false Ingress: false Options: com.docker.network.bridge.default_bridge: "true" com.docker.network.bridge.enable_icc: "true" com.docker.network.bridge.enable_ip_masquerade: "true" com.docker.network.bridge.host_binding_ipv4: "0.0.0.0" com.docker.network.bridge.name: "docker0" com.docker.network.driver.mtu: "1500" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" tags: ["Network"] /networks/{id}/connect: post: summary: "Connect a container to a network" operationId: "NetworkConnect" consumes: - "application/json" responses: 200: description: "No error" 403: description: "Operation not supported for swarm scoped networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "Network or container not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" - name: "container" in: "body" required: true schema: type: "object" properties: Container: type: "string" description: "The ID or name of the container to connect to the network." EndpointConfig: $ref: "#/definitions/EndpointSettings" example: Container: "3613f73ba0e4" EndpointConfig: IPAMConfig: IPv4Address: "172.24.56.89" IPv6Address: "2001:db8::5689" tags: ["Network"] /networks/{id}/disconnect: post: summary: "Disconnect a container from a network" operationId: "NetworkDisconnect" consumes: - "application/json" responses: 200: description: "No error" 403: description: "Operation not supported for swarm scoped networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "Network or container not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" - name: "container" in: "body" required: true schema: type: "object" properties: Container: type: "string" description: | The ID or name of the container to disconnect from the network. Force: type: "boolean" description: | Force the container to disconnect from the network. tags: ["Network"] /networks/prune: post: summary: "Delete unused networks" produces: - "application/json" operationId: "NetworkPrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `until=<timestamp>` Prune networks created before this timestamp. The `<timestamp>` can be Unix timestamps, date formatted timestamps, or Go duration strings (e.g. `10m`, `1h30m`) computed relative to the daemon machine’s time. - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune networks with (or without, in case `label!=...` is used) the specified labels. 
type: "string" responses: 200: description: "No error" schema: type: "object" title: "NetworkPruneResponse" properties: NetworksDeleted: description: "Networks that were deleted" type: "array" items: type: "string" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Network"] /plugins: get: summary: "List plugins" operationId: "PluginList" description: "Returns information about installed plugins." produces: ["application/json"] responses: 200: description: "No error" schema: type: "array" items: $ref: "#/definitions/Plugin" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the plugin list. Available filters: - `capability=<capability name>` - `enable=<true>|<false>` tags: ["Plugin"] /plugins/privileges: get: summary: "Get plugin privileges" operationId: "GetPluginPrivileges" responses: 200: description: "no error" schema: type: "array" items: description: | Describes a permission the user has to accept upon installing the plugin. type: "object" title: "PluginPrivilegeItem" properties: Name: type: "string" Description: type: "string" Value: type: "array" items: type: "string" example: - Name: "network" Description: "" Value: - "host" - Name: "mount" Description: "" Value: - "/data" - Name: "device" Description: "" Value: - "/dev/cpu_dma_latency" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "remote" in: "query" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" tags: - "Plugin" /plugins/pull: post: summary: "Install a plugin" operationId: "PluginPull" description: | Pulls and installs a plugin. After the plugin is installed, it can be enabled using the [`POST /plugins/{name}/enable` endpoint](#operation/PostPluginsEnable). produces: - "application/json" responses: 204: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "remote" in: "query" description: | Remote reference for plugin to install. The `:latest` tag is optional, and is used as the default if omitted. required: true type: "string" - name: "name" in: "query" description: | Local name for the pulled plugin. The `:latest` tag is optional, and is used as the default if omitted. required: false type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration to use when pulling a plugin from a registry. Refer to the [authentication section](#section/Authentication) for details. type: "string" - name: "body" in: "body" schema: type: "array" items: description: | Describes a permission accepted by the user upon installing the plugin. 
type: "object" properties: Name: type: "string" Description: type: "string" Value: type: "array" items: type: "string" example: - Name: "network" Description: "" Value: - "host" - Name: "mount" Description: "" Value: - "/data" - Name: "device" Description: "" Value: - "/dev/cpu_dma_latency" tags: ["Plugin"] /plugins/{name}/json: get: summary: "Inspect a plugin" operationId: "PluginInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Plugin" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" tags: ["Plugin"] /plugins/{name}: delete: summary: "Remove a plugin" operationId: "PluginDelete" responses: 200: description: "no error" schema: $ref: "#/definitions/Plugin" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "force" in: "query" description: | Disable the plugin before removing. This may result in issues if the plugin is in use by a container. type: "boolean" default: false tags: ["Plugin"] /plugins/{name}/enable: post: summary: "Enable a plugin" operationId: "PluginEnable" responses: 200: description: "no error" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "timeout" in: "query" description: "Set the HTTP client timeout (in seconds)" type: "integer" default: 0 tags: ["Plugin"] /plugins/{name}/disable: post: summary: "Disable a plugin" operationId: "PluginDisable" responses: 200: description: "no error" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" tags: ["Plugin"] /plugins/{name}/upgrade: post: summary: "Upgrade a plugin" operationId: "PluginUpgrade" responses: 204: description: "no error" 404: description: "plugin not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "remote" in: "query" description: | Remote reference to upgrade to. The `:latest` tag is optional, and is used as the default if omitted. required: true type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration to use when pulling a plugin from a registry. Refer to the [authentication section](#section/Authentication) for details. 
type: "string" - name: "body" in: "body" schema: type: "array" items: description: | Describes a permission accepted by the user upon installing the plugin. type: "object" properties: Name: type: "string" Description: type: "string" Value: type: "array" items: type: "string" example: - Name: "network" Description: "" Value: - "host" - Name: "mount" Description: "" Value: - "/data" - Name: "device" Description: "" Value: - "/dev/cpu_dma_latency" tags: ["Plugin"] /plugins/create: post: summary: "Create a plugin" operationId: "PluginCreate" consumes: - "application/x-tar" responses: 204: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "query" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "tarContext" in: "body" description: "Path to tar containing plugin rootfs and manifest" schema: type: "string" format: "binary" tags: ["Plugin"] /plugins/{name}/push: post: summary: "Push a plugin" operationId: "PluginPush" description: | Push a plugin to the registry. parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" responses: 200: description: "no error" 404: description: "plugin not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Plugin"] /plugins/{name}/set: post: summary: "Configure a plugin" operationId: "PluginSet" consumes: - "application/json" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "body" in: "body" schema: type: "array" items: type: "string" example: ["DEBUG=1"] responses: 204: description: "No error" 404: description: "Plugin not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Plugin"] /nodes: get: summary: "List nodes" operationId: "NodeList" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Node" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" description: | Filters to process on the nodes list, encoded as JSON (a `map[string][]string`). 
Available filters: - `id=<node id>` - `label=<engine label>` - `membership=`(`accepted`|`pending`)` - `name=<node name>` - `node.label=<node label>` - `role=`(`manager`|`worker`)` type: "string" tags: ["Node"] /nodes/{id}: get: summary: "Inspect a node" operationId: "NodeInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Node" 404: description: "no such node" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the node" type: "string" required: true tags: ["Node"] delete: summary: "Delete a node" operationId: "NodeDelete" responses: 200: description: "no error" 404: description: "no such node" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the node" type: "string" required: true - name: "force" in: "query" description: "Force remove a node from the swarm" default: false type: "boolean" tags: ["Node"] /nodes/{id}/update: post: summary: "Update a node" operationId: "NodeUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such node" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID of the node" type: "string" required: true - name: "body" in: "body" schema: $ref: "#/definitions/NodeSpec" - name: "version" in: "query" description: | The version number of the node object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true tags: ["Node"] /swarm: get: summary: "Inspect swarm" operationId: "SwarmInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Swarm" 404: description: "no such swarm" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" tags: ["Swarm"] /swarm/init: post: summary: "Initialize a new swarm" operationId: "SwarmInit" produces: - "application/json" - "text/plain" responses: 200: description: "no error" schema: description: "The node ID" type: "string" example: "7v2t30z9blmxuhnyo6s4cpenp" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is already part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: type: "object" properties: ListenAddr: description: | Listen address used for inter-manager communication, as well as determining the networking interface used for the VXLAN Tunnel Endpoint (VTEP). This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the default swarm listening port is used. 
type: "string" AdvertiseAddr: description: | Externally reachable address advertised to other nodes. This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the port number from the listen address is used. If `AdvertiseAddr` is not specified, it will be automatically detected when possible. type: "string" DataPathAddr: description: | Address or interface to use for data path traffic (format: `<ip|interface>`), for example, `192.168.1.1`, or an interface, like `eth0`. If `DataPathAddr` is unspecified, the same address as `AdvertiseAddr` is used. The `DataPathAddr` specifies the address that global scope network drivers will publish towards other nodes in order to reach the containers running on this node. Using this parameter it is possible to separate the container data traffic from the management traffic of the cluster. type: "string" DataPathPort: description: | DataPathPort specifies the data path port number for data traffic. Acceptable port range is 1024 to 49151. if no port is set or is set to 0, default port 4789 will be used. type: "integer" format: "uint32" DefaultAddrPool: description: | Default Address Pool specifies default subnet pools for global scope networks. type: "array" items: type: "string" example: ["10.10.0.0/16", "20.20.0.0/16"] ForceNewCluster: description: "Force creation of a new swarm." type: "boolean" SubnetSize: description: | SubnetSize specifies the subnet size of the networks created from the default subnet pool. type: "integer" format: "uint32" Spec: $ref: "#/definitions/SwarmSpec" example: ListenAddr: "0.0.0.0:2377" AdvertiseAddr: "192.168.1.1:2377" DataPathPort: 4789 DefaultAddrPool: ["10.10.0.0/8", "20.20.0.0/8"] SubnetSize: 24 ForceNewCluster: false Spec: Orchestration: {} Raft: {} Dispatcher: {} CAConfig: {} EncryptionConfig: AutoLockManagers: false tags: ["Swarm"] /swarm/join: post: summary: "Join an existing swarm" operationId: "SwarmJoin" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is already part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: type: "object" properties: ListenAddr: description: | Listen address used for inter-manager communication if the node gets promoted to manager, as well as determining the networking interface used for the VXLAN Tunnel Endpoint (VTEP). type: "string" AdvertiseAddr: description: | Externally reachable address advertised to other nodes. This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the port number from the listen address is used. If `AdvertiseAddr` is not specified, it will be automatically detected when possible. type: "string" DataPathAddr: description: | Address or interface to use for data path traffic (format: `<ip|interface>`), for example, `192.168.1.1`, or an interface, like `eth0`. If `DataPathAddr` is unspecified, the same address as `AdvertiseAddr` is used. The `DataPathAddr` specifies the address that global scope network drivers will publish towards other nodes in order to reach the containers running on this node. 
Using this parameter it is possible to separate the container data traffic from the management traffic of the cluster. type: "string" RemoteAddrs: description: | Addresses of manager nodes already participating in the swarm. type: "array" items: type: "string" JoinToken: description: "Secret token for joining this swarm." type: "string" example: ListenAddr: "0.0.0.0:2377" AdvertiseAddr: "192.168.1.1:2377" RemoteAddrs: - "node1:2377" JoinToken: "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-7p73s1dx5in4tatdymyhg9hu2" tags: ["Swarm"] /swarm/leave: post: summary: "Leave a swarm" operationId: "SwarmLeave" responses: 200: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "force" description: | Force leave swarm, even if this is the last manager or that it will break the cluster. in: "query" type: "boolean" default: false tags: ["Swarm"] /swarm/update: post: summary: "Update a swarm" operationId: "SwarmUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: $ref: "#/definitions/SwarmSpec" - name: "version" in: "query" description: | The version number of the swarm object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true - name: "rotateWorkerToken" in: "query" description: "Rotate the worker join token." type: "boolean" default: false - name: "rotateManagerToken" in: "query" description: "Rotate the manager join token." type: "boolean" default: false - name: "rotateManagerUnlockKey" in: "query" description: "Rotate the manager unlock key." type: "boolean" default: false tags: ["Swarm"] /swarm/unlockkey: get: summary: "Get the unlock key" operationId: "SwarmUnlockkey" consumes: - "application/json" responses: 200: description: "no error" schema: type: "object" title: "UnlockKeyResponse" properties: UnlockKey: description: "The swarm's unlock key." type: "string" example: UnlockKey: "SWMKEY-1-7c37Cc8654o6p38HnroywCi19pllOnGtbdZEgtKxZu8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" tags: ["Swarm"] /swarm/unlock: post: summary: "Unlock a locked manager" operationId: "SwarmUnlock" consumes: - "application/json" produces: - "application/json" parameters: - name: "body" in: "body" required: true schema: type: "object" properties: UnlockKey: description: "The swarm's unlock key." 
type: "string" example: UnlockKey: "SWMKEY-1-7c37Cc8654o6p38HnroywCi19pllOnGtbdZEgtKxZu8" responses: 200: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" tags: ["Swarm"] /services: get: summary: "List services" operationId: "ServiceList" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Service" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the services list. Available filters: - `id=<service id>` - `label=<service label>` - `mode=["replicated"|"global"]` - `name=<service name>` - name: "status" in: "query" type: "boolean" description: | Include service status, with count of running and desired tasks. tags: ["Service"] /services/create: post: summary: "Create a service" operationId: "ServiceCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: type: "object" title: "ServiceCreateResponse" properties: ID: description: "The ID of the created service." type: "string" Warning: description: "Optional warning message" type: "string" example: ID: "ak7w3gjqoa3kuz8xcpnyy0pvl" Warning: "unable to pin image doesnotexist:latest to digest: image library/doesnotexist:latest not found" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 403: description: "network is not eligible for services" schema: $ref: "#/definitions/ErrorResponse" 409: description: "name conflicts with an existing service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: allOf: - $ref: "#/definitions/ServiceSpec" - type: "object" example: Name: "web" TaskTemplate: ContainerSpec: Image: "nginx:alpine" Mounts: - ReadOnly: true Source: "web-data" Target: "/usr/share/nginx/html" Type: "volume" VolumeOptions: DriverConfig: {} Labels: com.example.something: "something-value" Hosts: ["10.10.10.10 host1", "ABCD:EF01:2345:6789:ABCD:EF01:2345:6789 host2"] User: "33" DNSConfig: Nameservers: ["8.8.8.8"] Search: ["example.org"] Options: ["timeout:3"] Secrets: - File: Name: "www.example.org.key" UID: "33" GID: "33" Mode: 384 SecretID: "fpjqlhnwb19zds35k8wn80lq9" SecretName: "example_org_domain_key" LogDriver: Name: "json-file" Options: max-file: "3" max-size: "10M" Placement: {} Resources: Limits: MemoryBytes: 104857600 Reservations: {} RestartPolicy: Condition: "on-failure" Delay: 10000000000 MaxAttempts: 10 Mode: Replicated: Replicas: 4 UpdateConfig: Parallelism: 2 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 RollbackConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 EndpointSpec: Ports: - Protocol: "tcp" PublishedPort: 8080 TargetPort: 80 Labels: foo: "bar" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration for pulling from private registries. Refer to the [authentication section](#section/Authentication) for details. 
type: "string" tags: ["Service"] /services/{id}: get: summary: "Inspect a service" operationId: "ServiceInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Service" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID or name of service." required: true type: "string" - name: "insertDefaults" in: "query" description: "Fill empty fields with default values." type: "boolean" default: false tags: ["Service"] delete: summary: "Delete a service" operationId: "ServiceDelete" responses: 200: description: "no error" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID or name of service." required: true type: "string" tags: ["Service"] /services/{id}/update: post: summary: "Update a service" operationId: "ServiceUpdate" consumes: ["application/json"] produces: ["application/json"] responses: 200: description: "no error" schema: $ref: "#/definitions/ServiceUpdateResponse" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID or name of service." required: true type: "string" - name: "body" in: "body" required: true schema: allOf: - $ref: "#/definitions/ServiceSpec" - type: "object" example: Name: "top" TaskTemplate: ContainerSpec: Image: "busybox" Args: - "top" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ForceUpdate: 0 Mode: Replicated: Replicas: 1 UpdateConfig: Parallelism: 2 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 RollbackConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 EndpointSpec: Mode: "vip" - name: "version" in: "query" description: | The version number of the service object being updated. This is required to avoid conflicting writes. This version number should be the value as currently set on the service *before* the update. You can find the current version by calling `GET /services/{id}` required: true type: "integer" - name: "registryAuthFrom" in: "query" description: | If the `X-Registry-Auth` header is not specified, this parameter indicates where to find registry authorization credentials. type: "string" enum: ["spec", "previous-spec"] default: "spec" - name: "rollback" in: "query" description: | Set to this parameter to `previous` to cause a server-side rollback to the previous service spec. The supplied spec will be ignored in this case. type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration for pulling from private registries. Refer to the [authentication section](#section/Authentication) for details. 
type: "string" tags: ["Service"] /services/{id}/logs: get: summary: "Get service logs" description: | Get `stdout` and `stderr` logs from a service. See also [`/containers/{id}/logs`](#operation/ContainerLogs). **Note**: This endpoint works only for services with the `local`, `json-file` or `journald` logging drivers. operationId: "ServiceLogs" responses: 200: description: "logs returned as a stream in response body" schema: type: "string" format: "binary" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such service: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the service" type: "string" - name: "details" in: "query" description: "Show service context and extra details provided to logs." type: "boolean" default: false - name: "follow" in: "query" description: "Keep connection after returning logs." type: "boolean" default: false - name: "stdout" in: "query" description: "Return logs from `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Return logs from `stderr`" type: "boolean" default: false - name: "since" in: "query" description: "Only return logs since this time, as a UNIX timestamp" type: "integer" default: 0 - name: "timestamps" in: "query" description: "Add timestamps to every log line" type: "boolean" default: false - name: "tail" in: "query" description: | Only return this number of log lines from the end of the logs. Specify as an integer or `all` to output all log lines. type: "string" default: "all" tags: ["Service"] /tasks: get: summary: "List tasks" operationId: "TaskList" produces: - "application/json" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Task" example: - ID: "0kzzo1i0y4jz6027t0k7aezc7" Version: Index: 71 CreatedAt: "2016-06-07T21:07:31.171892745Z" UpdatedAt: "2016-06-07T21:07:31.376370513Z" Spec: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ServiceID: "9mnpnzenvg8p8tdbtq4wvbkcz" Slot: 1 NodeID: "60gvrl6tm78dmak4yl7srz94v" Status: Timestamp: "2016-06-07T21:07:31.290032978Z" State: "running" Message: "started" ContainerStatus: ContainerID: "e5d62702a1b48d01c3e02ca1e0212a250801fa8d67caca0b6f35919ebc12f035" PID: 677 DesiredState: "running" NetworksAttachments: - Network: ID: "4qvuz4ko70xaltuqbt8956gd1" Version: Index: 18 CreatedAt: "2016-06-07T20:31:11.912919752Z" UpdatedAt: "2016-06-07T21:07:29.955277358Z" Spec: Name: "ingress" Labels: com.docker.swarm.internal: "true" DriverConfiguration: {} IPAMOptions: Driver: {} Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" DriverState: Name: "overlay" Options: com.docker.network.driver.overlay.vxlanid_list: "256" IPAMOptions: Driver: Name: "default" Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" Addresses: - "10.255.0.10/16" - ID: "1yljwbmlr8er2waf8orvqpwms" Version: Index: 30 CreatedAt: "2016-06-07T21:07:30.019104782Z" UpdatedAt: "2016-06-07T21:07:30.231958098Z" Name: "hopeful_cori" Spec: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ServiceID: "9mnpnzenvg8p8tdbtq4wvbkcz" Slot: 1 NodeID: "60gvrl6tm78dmak4yl7srz94v" Status: Timestamp: "2016-06-07T21:07:30.202183143Z" State: 
"shutdown" Message: "shutdown" ContainerStatus: ContainerID: "1cf8d63d18e79668b0004a4be4c6ee58cddfad2dae29506d8781581d0688a213" DesiredState: "shutdown" NetworksAttachments: - Network: ID: "4qvuz4ko70xaltuqbt8956gd1" Version: Index: 18 CreatedAt: "2016-06-07T20:31:11.912919752Z" UpdatedAt: "2016-06-07T21:07:29.955277358Z" Spec: Name: "ingress" Labels: com.docker.swarm.internal: "true" DriverConfiguration: {} IPAMOptions: Driver: {} Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" DriverState: Name: "overlay" Options: com.docker.network.driver.overlay.vxlanid_list: "256" IPAMOptions: Driver: Name: "default" Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" Addresses: - "10.255.0.5/16" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the tasks list. Available filters: - `desired-state=(running | shutdown | accepted)` - `id=<task id>` - `label=key` or `label="key=value"` - `name=<task name>` - `node=<node id or name>` - `service=<service name>` tags: ["Task"] /tasks/{id}: get: summary: "Inspect a task" operationId: "TaskInspect" produces: - "application/json" responses: 200: description: "no error" schema: $ref: "#/definitions/Task" 404: description: "no such task" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID of the task" required: true type: "string" tags: ["Task"] /tasks/{id}/logs: get: summary: "Get task logs" description: | Get `stdout` and `stderr` logs from a task. See also [`/containers/{id}/logs`](#operation/ContainerLogs). **Note**: This endpoint works only for services with the `local`, `json-file` or `journald` logging drivers. operationId: "TaskLogs" responses: 200: description: "logs returned as a stream in response body" schema: type: "string" format: "binary" 404: description: "no such task" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such task: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID of the task" type: "string" - name: "details" in: "query" description: "Show task context and extra details provided to logs." type: "boolean" default: false - name: "follow" in: "query" description: "Keep connection after returning logs." type: "boolean" default: false - name: "stdout" in: "query" description: "Return logs from `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Return logs from `stderr`" type: "boolean" default: false - name: "since" in: "query" description: "Only return logs since this time, as a UNIX timestamp" type: "integer" default: 0 - name: "timestamps" in: "query" description: "Add timestamps to every log line" type: "boolean" default: false - name: "tail" in: "query" description: | Only return this number of log lines from the end of the logs. Specify as an integer or `all` to output all log lines. 
type: "string" default: "all" tags: ["Task"] /secrets: get: summary: "List secrets" operationId: "SecretList" produces: - "application/json" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Secret" example: - ID: "blt1owaxmitz71s9v5zh81zun" Version: Index: 85 CreatedAt: "2017-07-20T13:55:28.678958722Z" UpdatedAt: "2017-07-20T13:55:28.678958722Z" Spec: Name: "mysql-passwd" Labels: some.label: "some.value" Driver: Name: "secret-bucket" Options: OptionA: "value for driver option A" OptionB: "value for driver option B" - ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "app-dev.crt" Labels: foo: "bar" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the secrets list. Available filters: - `id=<secret id>` - `label=<key> or label=<key>=value` - `name=<secret name>` - `names=<secret name>` tags: ["Secret"] /secrets/create: post: summary: "Create a secret" operationId: "SecretCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 409: description: "name conflicts with an existing object" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" schema: allOf: - $ref: "#/definitions/SecretSpec" - type: "object" example: Name: "app-key.crt" Labels: foo: "bar" Data: "VEhJUyBJUyBOT1QgQSBSRUFMIENFUlRJRklDQVRFCg==" Driver: Name: "secret-bucket" Options: OptionA: "value for driver option A" OptionB: "value for driver option B" tags: ["Secret"] /secrets/{id}: get: summary: "Inspect a secret" operationId: "SecretInspect" produces: - "application/json" responses: 200: description: "no error" schema: $ref: "#/definitions/Secret" examples: application/json: ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "app-dev.crt" Labels: foo: "bar" Driver: Name: "secret-bucket" Options: OptionA: "value for driver option A" OptionB: "value for driver option B" 404: description: "secret not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the secret" tags: ["Secret"] delete: summary: "Delete a secret" operationId: "SecretDelete" produces: - "application/json" responses: 204: description: "no error" 404: description: "secret not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the secret" tags: ["Secret"] /secrets/{id}/update: post: summary: "Update a Secret" operationId: "SecretUpdate" responses: 200: description: "no error" 400: 
description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such secret" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the secret" type: "string" required: true - name: "body" in: "body" schema: $ref: "#/definitions/SecretSpec" description: | The spec of the secret to update. Currently, only the Labels field can be updated. All other fields must remain unchanged from the [SecretInspect endpoint](#operation/SecretInspect) response values. - name: "version" in: "query" description: | The version number of the secret object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true tags: ["Secret"] /configs: get: summary: "List configs" operationId: "ConfigList" produces: - "application/json" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Config" example: - ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "server.conf" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the configs list. Available filters: - `id=<config id>` - `label=<key> or label=<key>=value` - `name=<config name>` - `names=<config name>` tags: ["Config"] /configs/create: post: summary: "Create a config" operationId: "ConfigCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 409: description: "name conflicts with an existing object" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" schema: allOf: - $ref: "#/definitions/ConfigSpec" - type: "object" example: Name: "server.conf" Labels: foo: "bar" Data: "VEhJUyBJUyBOT1QgQSBSRUFMIENFUlRJRklDQVRFCg==" tags: ["Config"] /configs/{id}: get: summary: "Inspect a config" operationId: "ConfigInspect" produces: - "application/json" responses: 200: description: "no error" schema: $ref: "#/definitions/Config" examples: application/json: ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "app-dev.crt" 404: description: "config not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the config" tags: ["Config"] delete: summary: "Delete a config" operationId: "ConfigDelete" produces: - "application/json" responses: 204: description: "no error" 404: description: "config not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part 
of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the config" tags: ["Config"] /configs/{id}/update: post: summary: "Update a Config" operationId: "ConfigUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such config" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the config" type: "string" required: true - name: "body" in: "body" schema: $ref: "#/definitions/ConfigSpec" description: | The spec of the config to update. Currently, only the Labels field can be updated. All other fields must remain unchanged from the [ConfigInspect endpoint](#operation/ConfigInspect) response values. - name: "version" in: "query" description: | The version number of the config object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true tags: ["Config"] /distribution/{name}/json: get: summary: "Get image information from the registry" description: | Return image digest and platform information by contacting the registry. operationId: "DistributionInspect" produces: - "application/json" responses: 200: description: "descriptor and platform information" schema: type: "object" x-go-name: DistributionInspect title: "DistributionInspectResponse" required: [Descriptor, Platforms] properties: Descriptor: type: "object" description: | A descriptor struct containing digest, media type, and size. properties: MediaType: type: "string" Size: type: "integer" format: "int64" Digest: type: "string" URLs: type: "array" items: type: "string" Platforms: type: "array" description: | An array containing all platforms supported by the image. items: type: "object" properties: Architecture: type: "string" OS: type: "string" OSVersion: type: "string" OSFeatures: type: "array" items: type: "string" Variant: type: "string" Features: type: "array" items: type: "string" examples: application/json: Descriptor: MediaType: "application/vnd.docker.distribution.manifest.v2+json" Digest: "sha256:c0537ff6a5218ef531ece93d4984efc99bbf3f7497c0a7726c88e2bb7584dc96" Size: 3987495 URLs: - "" Platforms: - Architecture: "amd64" OS: "linux" OSVersion: "" OSFeatures: - "" Variant: "" Features: - "" 401: description: "Failed authentication or no image found" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such image: someimage (tag: latest)" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or id" type: "string" required: true tags: ["Distribution"] /session: post: summary: "Initialize interactive session" description: | Start a new interactive session with a server. Session allows server to call back to the client for advanced capabilities. ### Hijacking This endpoint hijacks the HTTP connection to HTTP2 transport that allows the client to expose gPRC services on that connection. 
        For example, the client sends this request to upgrade the connection:

        ```
        POST /session HTTP/1.1
        Upgrade: h2c
        Connection: Upgrade
        ```

        The Docker daemon responds with a `101 UPGRADED` response followed by
        the raw stream:

        ```
        HTTP/1.1 101 UPGRADED
        Connection: Upgrade
        Upgrade: h2c
        ```
      operationId: "Session"
      produces:
        - "application/vnd.docker.raw-stream"
      responses:
        101:
          description: "no error, hijacking successful"
        400:
          description: "bad parameter"
          schema:
            $ref: "#/definitions/ErrorResponse"
        500:
          description: "server error"
          schema:
            $ref: "#/definitions/ErrorResponse"
      tags: ["Session"]
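# The comment block below is an illustrative, non-normative sketch of the
# hijacking handshake described by the /session endpoint above: a Go program
# dials the daemon socket, sends the upgrade request, and keeps the raw
# connection once a 101 response is read. The socket path, API version prefix,
# and error handling are assumptions and not part of the API definition.
#
#   package main
#
#   import (
#       "bufio"
#       "log"
#       "net"
#       "net/http"
#   )
#
#   func main() {
#       // Dial the daemon's default Unix socket (adjust for your setup).
#       conn, err := net.Dial("unix", "/var/run/docker.sock")
#       if err != nil {
#           log.Fatal(err)
#       }
#       defer conn.Close()
#
#       // Send the upgrade request shown in the endpoint description.
#       req, _ := http.NewRequest("POST", "http://localhost/v1.42/session", nil)
#       req.Header.Set("Upgrade", "h2c")
#       req.Header.Set("Connection", "Upgrade")
#       if err := req.Write(conn); err != nil {
#           log.Fatal(err)
#       }
#
#       // Expect "101 UPGRADED"; afterwards conn carries the raw stream.
#       resp, err := http.ReadResponse(bufio.NewReader(conn), req)
#       if err != nil || resp.StatusCode != http.StatusSwitchingProtocols {
#           log.Fatalf("upgrade failed: %v", err)
#       }
#   }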
# A Swagger 2.0 (a.k.a. OpenAPI) definition of the Engine API.
#
# This is used for generating API documentation and the types used by the
# client/server. See api/README.md for more information.
#
# Some style notes:
# - This file is used by ReDoc, which allows GitHub Flavored Markdown in
#   descriptions.
# - There is no maximum line length, for ease of editing and pretty diffs.
# - operationIds are in the format "NounVerb", with a singular noun.

swagger: "2.0"
schemes:
  - "http"
  - "https"
produces:
  - "application/json"
  - "text/plain"
consumes:
  - "application/json"
  - "text/plain"
basePath: "/v1.42"
info:
  title: "Docker Engine API"
  version: "1.42"
  x-logo:
    url: "https://docs.docker.com/images/logo-docker-main.png"
  description: |
    The Engine API is an HTTP API served by Docker Engine. It is the API the
    Docker client uses to communicate with the Engine, so everything the Docker
    client can do can be done with the API.

    Most of the client's commands map directly to API endpoints (e.g. `docker ps`
    is `GET /containers/json`). The notable exception is running containers,
    which consists of several API calls.

    # Errors

    The API uses standard HTTP status codes to indicate the success or failure
    of the API call. The body of the response will be JSON in the following
    format:

    ```
    {
      "message": "page not found"
    }
    ```

    # Versioning

    The API is usually changed in each release, so API calls are versioned to
    ensure that clients don't break. To lock to a specific version of the API,
    you prefix the URL with its version, for example, call `/v1.30/info` to use
    the v1.30 version of the `/info` endpoint. If the API version specified in
    the URL is not supported by the daemon, an HTTP `400 Bad Request` error
    message is returned.

    If you omit the version-prefix, the current version of the API (v1.42) is
    used. For example, calling `/info` is the same as calling `/v1.42/info`.
    Using the API without a version-prefix is deprecated and will be removed in
    a future release.

    Engine releases in the near future should support this version of the API,
    so your client will continue to work even if it is talking to a newer
    Engine.

    The API uses an open schema model, which means the server may add extra
    properties to responses. Likewise, the server will ignore any extra query
    parameters and request body properties. When you write clients, you need to
    ignore additional properties in responses to ensure they do not break when
    talking to newer daemons.


    # Authentication

    Authentication for registries is handled client side. The client has to
    send authentication details to various endpoints that need to communicate
    with registries, such as `POST /images/(name)/push`. These are sent as
    `X-Registry-Auth` header as a [base64url encoded](https://tools.ietf.org/html/rfc4648#section-5)
    (JSON) string with the following structure:

    ```
    {
      "username": "string",
      "password": "string",
      "email": "string",
      "serveraddress": "string"
    }
    ```

    The `serveraddress` is a domain/IP without a protocol. Throughout this
    structure, double quotes are required.

    If you have already got an identity token from the [`/auth` endpoint](#operation/SystemAuth),
    you can just pass this instead of credentials:

    ```
    {
      "identitytoken": "9cbaf023786cd7..."
    }
    ```

# The tags on paths define the menu sections in the ReDoc documentation, so
# the usage of tags must make sense for that:
# - They should be singular, not plural.
# - There should not be too many tags, or the menu becomes unwieldy.
# For example, it is preferable to add a path to the "System" tag instead of
# creating a tag with a single path in it.
# - The order of tags in this list defines the order in the menu.
tags:
  # Primary objects
  - name: "Container"
    x-displayName: "Containers"
    description: |
      Create and manage containers.
  - name: "Image"
    x-displayName: "Images"
  - name: "Network"
    x-displayName: "Networks"
    description: |
      Networks are user-defined networks that containers can be attached to.
      See the [networking documentation](https://docs.docker.com/network/)
      for more information.
  - name: "Volume"
    x-displayName: "Volumes"
    description: |
      Create and manage persistent storage that can be attached to containers.
  - name: "Exec"
    x-displayName: "Exec"
    description: |
      Run new commands inside running containers. Refer to the
      [command-line reference](https://docs.docker.com/engine/reference/commandline/exec/)
      for more information.

      To exec a command in a container, you first need to create an exec
      instance, then start it. These two API endpoints are wrapped up in a
      single command-line command, `docker exec`.

  # Swarm things
  - name: "Swarm"
    x-displayName: "Swarm"
    description: |
      Engines can be clustered together in a swarm. Refer to the
      [swarm mode documentation](https://docs.docker.com/engine/swarm/)
      for more information.
  - name: "Node"
    x-displayName: "Nodes"
    description: |
      Nodes are instances of the Engine participating in a swarm. Swarm mode
      must be enabled for these endpoints to work.
  - name: "Service"
    x-displayName: "Services"
    description: |
      Services are the definitions of tasks to run on a swarm. Swarm mode must
      be enabled for these endpoints to work.
  - name: "Task"
    x-displayName: "Tasks"
    description: |
      A task is a container running on a swarm. It is the atomic scheduling
      unit of swarm. Swarm mode must be enabled for these endpoints to work.
  - name: "Secret"
    x-displayName: "Secrets"
    description: |
      Secrets are sensitive data that can be used by services. Swarm mode must
      be enabled for these endpoints to work.
  - name: "Config"
    x-displayName: "Configs"
    description: |
      Configs are application configurations that can be used by services.
      Swarm mode must be enabled for these endpoints to work.
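# The comment block below is an illustrative, non-normative sketch of building
# the `X-Registry-Auth` header described in the Authentication section above,
# using only the Go standard library. The credential values are placeholders,
# not real accounts or registries.
#
#   package main
#
#   import (
#       "encoding/base64"
#       "encoding/json"
#       "fmt"
#   )
#
#   func main() {
#       // JSON structure from the Authentication section.
#       authConfig := map[string]string{
#           "username":      "janedoe",
#           "password":      "example-password",
#           "email":         "jane@example.com",
#           "serveraddress": "registry.example.com",
#       }
#       buf, err := json.Marshal(authConfig)
#       if err != nil {
#           panic(err)
#       }
#       // base64url-encode the JSON payload (RFC 4648, section 5).
#       header := base64.URLEncoding.EncodeToString(buf)
#       fmt.Println("X-Registry-Auth:", header)
#   }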
# System things - name: "Plugin" x-displayName: "Plugins" - name: "System" x-displayName: "System" definitions: Port: type: "object" description: "An open port on a container" required: [PrivatePort, Type] properties: IP: type: "string" format: "ip-address" description: "Host IP address that the container's port is mapped to" PrivatePort: type: "integer" format: "uint16" x-nullable: false description: "Port on the container" PublicPort: type: "integer" format: "uint16" description: "Port exposed on the host" Type: type: "string" x-nullable: false enum: ["tcp", "udp", "sctp"] example: PrivatePort: 8080 PublicPort: 80 Type: "tcp" MountPoint: type: "object" description: "A mount point inside a container" properties: Type: type: "string" Name: type: "string" Source: type: "string" Destination: type: "string" Driver: type: "string" Mode: type: "string" RW: type: "boolean" Propagation: type: "string" DeviceMapping: type: "object" description: "A device mapping between the host and container" properties: PathOnHost: type: "string" PathInContainer: type: "string" CgroupPermissions: type: "string" example: PathOnHost: "/dev/deviceName" PathInContainer: "/dev/deviceName" CgroupPermissions: "mrw" DeviceRequest: type: "object" description: "A request for devices to be sent to device drivers" properties: Driver: type: "string" example: "nvidia" Count: type: "integer" example: -1 DeviceIDs: type: "array" items: type: "string" example: - "0" - "1" - "GPU-fef8089b-4820-abfc-e83e-94318197576e" Capabilities: description: | A list of capabilities; an OR list of AND lists of capabilities. type: "array" items: type: "array" items: type: "string" example: # gpu AND nvidia AND compute - ["gpu", "nvidia", "compute"] Options: description: | Driver-specific options, specified as a key/value pairs. These options are passed directly to the driver. type: "object" additionalProperties: type: "string" ThrottleDevice: type: "object" properties: Path: description: "Device path" type: "string" Rate: description: "Rate" type: "integer" format: "int64" minimum: 0 Mount: type: "object" properties: Target: description: "Container path." type: "string" Source: description: "Mount source (e.g. a volume name, a host path)." type: "string" Type: description: | The mount type. Available types: - `bind` Mounts a file or directory from the host into the container. Must exist prior to creating the container. - `volume` Creates a volume with the given name and options (or uses a pre-existing volume with the same name and options). These are **not** removed when the container is removed. - `tmpfs` Create a tmpfs with the given options. The mount source cannot be specified for tmpfs. - `npipe` Mounts a named pipe from the host into the container. Must exist prior to creating the container. type: "string" enum: - "bind" - "volume" - "tmpfs" - "npipe" ReadOnly: description: "Whether the mount should be read-only." type: "boolean" Consistency: description: "The consistency requirement for the mount: `default`, `consistent`, `cached`, or `delegated`." type: "string" BindOptions: description: "Optional configuration for the `bind` type." type: "object" properties: Propagation: description: "A propagation mode with the value `[r]private`, `[r]shared`, or `[r]slave`." type: "string" enum: - "private" - "rprivate" - "shared" - "rshared" - "slave" - "rslave" NonRecursive: description: "Disable recursive bind mount." type: "boolean" default: false VolumeOptions: description: "Optional configuration for the `volume` type." 
type: "object" properties: NoCopy: description: "Populate volume with data from the target." type: "boolean" default: false Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" DriverConfig: description: "Map of driver specific options" type: "object" properties: Name: description: "Name of the driver to use to create the volume." type: "string" Options: description: "key/value map of driver specific options." type: "object" additionalProperties: type: "string" TmpfsOptions: description: "Optional configuration for the `tmpfs` type." type: "object" properties: SizeBytes: description: "The size for the tmpfs mount in bytes." type: "integer" format: "int64" Mode: description: "The permission mode for the tmpfs mount in an integer." type: "integer" RestartPolicy: description: | The behavior to apply when the container exits. The default is not to restart. An ever increasing delay (double the previous delay, starting at 100ms) is added before each restart to prevent flooding the server. type: "object" properties: Name: type: "string" description: | - Empty string means not to restart - `no` Do not automatically restart - `always` Always restart - `unless-stopped` Restart always except when the user has manually stopped the container - `on-failure` Restart only when the container exit code is non-zero enum: - "" - "no" - "always" - "unless-stopped" - "on-failure" MaximumRetryCount: type: "integer" description: | If `on-failure` is used, the number of times to retry before giving up. Resources: description: "A container's resources (cgroups config, ulimits, etc)" type: "object" properties: # Applicable to all platforms CpuShares: description: | An integer value representing this container's relative CPU weight versus other containers. type: "integer" Memory: description: "Memory limit in bytes." type: "integer" format: "int64" default: 0 # Applicable to UNIX platforms CgroupParent: description: | Path to `cgroups` under which the container's `cgroup` is created. If the path is not absolute, the path is considered to be relative to the `cgroups` path of the init process. Cgroups are created if they do not already exist. type: "string" BlkioWeight: description: "Block IO weight (relative weight)." type: "integer" minimum: 0 maximum: 1000 BlkioWeightDevice: description: | Block IO weight (relative device weight) in the form: ``` [{"Path": "device_path", "Weight": weight}] ``` type: "array" items: type: "object" properties: Path: type: "string" Weight: type: "integer" minimum: 0 BlkioDeviceReadBps: description: | Limit read rate (bytes per second) from a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" BlkioDeviceWriteBps: description: | Limit write rate (bytes per second) to a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" BlkioDeviceReadIOps: description: | Limit read rate (IO per second) from a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" BlkioDeviceWriteIOps: description: | Limit write rate (IO per second) to a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" CpuPeriod: description: "The length of a CPU period in microseconds." 
type: "integer" format: "int64" CpuQuota: description: | Microseconds of CPU time that the container can get in a CPU period. type: "integer" format: "int64" CpuRealtimePeriod: description: | The length of a CPU real-time period in microseconds. Set to 0 to allocate no time allocated to real-time tasks. type: "integer" format: "int64" CpuRealtimeRuntime: description: | The length of a CPU real-time runtime in microseconds. Set to 0 to allocate no time allocated to real-time tasks. type: "integer" format: "int64" CpusetCpus: description: | CPUs in which to allow execution (e.g., `0-3`, `0,1`). type: "string" example: "0-3" CpusetMems: description: | Memory nodes (MEMs) in which to allow execution (0-3, 0,1). Only effective on NUMA systems. type: "string" Devices: description: "A list of devices to add to the container." type: "array" items: $ref: "#/definitions/DeviceMapping" DeviceCgroupRules: description: "a list of cgroup rules to apply to the container" type: "array" items: type: "string" example: "c 13:* rwm" DeviceRequests: description: | A list of requests for devices to be sent to device drivers. type: "array" items: $ref: "#/definitions/DeviceRequest" KernelMemory: description: | Kernel memory limit in bytes. <p><br /></p> > **Deprecated**: This field is deprecated as the kernel 5.4 deprecated > `kmem.limit_in_bytes`. type: "integer" format: "int64" example: 209715200 KernelMemoryTCP: description: "Hard limit for kernel TCP buffer memory (in bytes)." type: "integer" format: "int64" MemoryReservation: description: "Memory soft limit in bytes." type: "integer" format: "int64" MemorySwap: description: | Total memory limit (memory + swap). Set as `-1` to enable unlimited swap. type: "integer" format: "int64" MemorySwappiness: description: | Tune a container's memory swappiness behavior. Accepts an integer between 0 and 100. type: "integer" format: "int64" minimum: 0 maximum: 100 NanoCpus: description: "CPU quota in units of 10<sup>-9</sup> CPUs." type: "integer" format: "int64" OomKillDisable: description: "Disable OOM Killer for the container." type: "boolean" Init: description: | Run an init inside the container that forwards signals and reaps processes. This field is omitted if empty, and the default (as configured on the daemon) is used. type: "boolean" x-nullable: true PidsLimit: description: | Tune a container's PIDs limit. Set `0` or `-1` for unlimited, or `null` to not change. type: "integer" format: "int64" x-nullable: true Ulimits: description: | A list of resource limits to set in the container. For example: ``` {"Name": "nofile", "Soft": 1024, "Hard": 2048} ``` type: "array" items: type: "object" properties: Name: description: "Name of ulimit" type: "string" Soft: description: "Soft limit" type: "integer" Hard: description: "Hard limit" type: "integer" # Applicable to Windows CpuCount: description: | The number of usable CPUs (Windows only). On Windows Server containers, the processor resource controls are mutually exclusive. The order of precedence is `CPUCount` first, then `CPUShares`, and `CPUPercent` last. type: "integer" format: "int64" CpuPercent: description: | The usable percentage of the available CPUs (Windows only). On Windows Server containers, the processor resource controls are mutually exclusive. The order of precedence is `CPUCount` first, then `CPUShares`, and `CPUPercent` last. 
type: "integer" format: "int64" IOMaximumIOps: description: "Maximum IOps for the container system drive (Windows only)" type: "integer" format: "int64" IOMaximumBandwidth: description: | Maximum IO in bytes per second for the container system drive (Windows only). type: "integer" format: "int64" Limit: description: | An object describing a limit on resources which can be requested by a task. type: "object" properties: NanoCPUs: type: "integer" format: "int64" example: 4000000000 MemoryBytes: type: "integer" format: "int64" example: 8272408576 Pids: description: | Limits the maximum number of PIDs in the container. Set `0` for unlimited. type: "integer" format: "int64" default: 0 example: 100 ResourceObject: description: | An object describing the resources which can be advertised by a node and requested by a task. type: "object" properties: NanoCPUs: type: "integer" format: "int64" example: 4000000000 MemoryBytes: type: "integer" format: "int64" example: 8272408576 GenericResources: $ref: "#/definitions/GenericResources" GenericResources: description: | User-defined resources can be either Integer resources (e.g, `SSD=3`) or String resources (e.g, `GPU=UUID1`). type: "array" items: type: "object" properties: NamedResourceSpec: type: "object" properties: Kind: type: "string" Value: type: "string" DiscreteResourceSpec: type: "object" properties: Kind: type: "string" Value: type: "integer" format: "int64" example: - DiscreteResourceSpec: Kind: "SSD" Value: 3 - NamedResourceSpec: Kind: "GPU" Value: "UUID1" - NamedResourceSpec: Kind: "GPU" Value: "UUID2" HealthConfig: description: "A test to perform to check that the container is healthy." type: "object" properties: Test: description: | The test to perform. Possible values are: - `[]` inherit healthcheck from image or parent image - `["NONE"]` disable healthcheck - `["CMD", args...]` exec arguments directly - `["CMD-SHELL", command]` run command with system's default shell type: "array" items: type: "string" Interval: description: | The time to wait between checks in nanoseconds. It should be 0 or at least 1000000 (1 ms). 0 means inherit. type: "integer" Timeout: description: | The time to wait before considering the check to have hung. It should be 0 or at least 1000000 (1 ms). 0 means inherit. type: "integer" Retries: description: | The number of consecutive failures needed to consider a container as unhealthy. 0 means inherit. type: "integer" StartPeriod: description: | Start period for the container to initialize before starting health-retries countdown in nanoseconds. It should be 0 or at least 1000000 (1 ms). 0 means inherit. type: "integer" Health: description: | Health stores information about the container's healthcheck results. 
type: "object" properties: Status: description: | Status is one of `none`, `starting`, `healthy` or `unhealthy` - "none" Indicates there is no healthcheck - "starting" Starting indicates that the container is not yet ready - "healthy" Healthy indicates that the container is running correctly - "unhealthy" Unhealthy indicates that the container has a problem type: "string" enum: - "none" - "starting" - "healthy" - "unhealthy" example: "healthy" FailingStreak: description: "FailingStreak is the number of consecutive failures" type: "integer" example: 0 Log: type: "array" description: | Log contains the last few results (oldest first) items: x-nullable: true $ref: "#/definitions/HealthcheckResult" HealthcheckResult: description: | HealthcheckResult stores information about a single run of a healthcheck probe type: "object" properties: Start: description: | Date and time at which this check started in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "date-time" example: "2020-01-04T10:44:24.496525531Z" End: description: | Date and time at which this check ended in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2020-01-04T10:45:21.364524523Z" ExitCode: description: | ExitCode meanings: - `0` healthy - `1` unhealthy - `2` reserved (considered unhealthy) - other values: error running probe type: "integer" example: 0 Output: description: "Output from last check" type: "string" HostConfig: description: "Container configuration that depends on the host we are running on" allOf: - $ref: "#/definitions/Resources" - type: "object" properties: # Applicable to all platforms Binds: type: "array" description: | A list of volume bindings for this container. Each volume binding is a string in one of these forms: - `host-src:container-dest[:options]` to bind-mount a host path into the container. Both `host-src`, and `container-dest` must be an _absolute_ path. - `volume-name:container-dest[:options]` to bind-mount a volume managed by a volume driver into the container. `container-dest` must be an _absolute_ path. `options` is an optional, comma-delimited list of: - `nocopy` disables automatic copying of data from the container path to the volume. The `nocopy` flag only applies to named volumes. - `[ro|rw]` mounts a volume read-only or read-write, respectively. If omitted or set to `rw`, volumes are mounted read-write. - `[z|Z]` applies SELinux labels to allow or deny multiple containers to read and write to the same volume. - `z`: a _shared_ content label is applied to the content. This label indicates that multiple containers can share the volume content, for both reading and writing. - `Z`: a _private unshared_ label is applied to the content. This label indicates that only the current container can use a private volume. Labeling systems such as SELinux require proper labels to be placed on volume content that is mounted into a container. Without a label, the security system can prevent a container's processes from using the content. By default, the labels set by the host operating system are not modified. - `[[r]shared|[r]slave|[r]private]` specifies mount [propagation behavior](https://www.kernel.org/doc/Documentation/filesystems/sharedsubtree.txt). This only applies to bind-mounted volumes, not internal volumes or named volumes. 
Mount propagation requires the source mount point (the location where the source directory is mounted in the host operating system) to have the correct propagation properties. For shared volumes, the source mount point must be set to `shared`. For slave volumes, the mount must be set to either `shared` or `slave`. items: type: "string" ContainerIDFile: type: "string" description: "Path to a file where the container ID is written" LogConfig: type: "object" description: "The logging configuration for this container" properties: Type: type: "string" enum: - "json-file" - "syslog" - "journald" - "gelf" - "fluentd" - "awslogs" - "splunk" - "etwlogs" - "none" Config: type: "object" additionalProperties: type: "string" NetworkMode: type: "string" description: | Network mode to use for this container. Supported standard values are: `bridge`, `host`, `none`, and `container:<name|id>`. Any other value is taken as a custom network's name to which this container should connect to. PortBindings: $ref: "#/definitions/PortMap" RestartPolicy: $ref: "#/definitions/RestartPolicy" AutoRemove: type: "boolean" description: | Automatically remove the container when the container's process exits. This has no effect if `RestartPolicy` is set. VolumeDriver: type: "string" description: "Driver that this container uses to mount volumes." VolumesFrom: type: "array" description: | A list of volumes to inherit from another container, specified in the form `<container name>[:<ro|rw>]`. items: type: "string" Mounts: description: | Specification for mounts to be added to the container. type: "array" items: $ref: "#/definitions/Mount" # Applicable to UNIX platforms CapAdd: type: "array" description: | A list of kernel capabilities to add to the container. Conflicts with option 'Capabilities'. items: type: "string" CapDrop: type: "array" description: | A list of kernel capabilities to drop from the container. Conflicts with option 'Capabilities'. items: type: "string" CgroupnsMode: type: "string" enum: - "private" - "host" description: | cgroup namespace mode for the container. Possible values are: - `"private"`: the container runs in its own private cgroup namespace - `"host"`: use the host system's cgroup namespace If not specified, the daemon default is used, which can either be `"private"` or `"host"`, depending on daemon version, kernel support and configuration. Dns: type: "array" description: "A list of DNS servers for the container to use." items: type: "string" DnsOptions: type: "array" description: "A list of DNS options." items: type: "string" DnsSearch: type: "array" description: "A list of DNS search domains." items: type: "string" ExtraHosts: type: "array" description: | A list of hostnames/IP mappings to add to the container's `/etc/hosts` file. Specified in the form `["hostname:IP"]`. items: type: "string" GroupAdd: type: "array" description: | A list of additional groups that the container process will run as. items: type: "string" IpcMode: type: "string" description: | IPC sharing mode for the container. Possible values are: - `"none"`: own private IPC namespace, with /dev/shm not mounted - `"private"`: own private IPC namespace - `"shareable"`: own private IPC namespace, with a possibility to share it with other containers - `"container:<name|id>"`: join another (shareable) container's IPC namespace - `"host"`: use the host system's IPC namespace If not specified, daemon default is used, which can either be `"private"` or `"shareable"`, depending on daemon version and configuration. 
Cgroup: type: "string" description: "Cgroup to use for the container." Links: type: "array" description: | A list of links for the container in the form `container_name:alias`. items: type: "string" OomScoreAdj: type: "integer" description: | An integer value containing the score given to the container in order to tune OOM killer preferences. example: 500 PidMode: type: "string" description: | Set the PID (Process) Namespace mode for the container. It can be either: - `"container:<name|id>"`: joins another container's PID namespace - `"host"`: use the host's PID namespace inside the container Privileged: type: "boolean" description: "Gives the container full access to the host." PublishAllPorts: type: "boolean" description: | Allocates an ephemeral host port for all of a container's exposed ports. Ports are de-allocated when the container stops and allocated when the container starts. The allocated port might be changed when restarting the container. The port is selected from the ephemeral port range that depends on the kernel. For example, on Linux the range is defined by `/proc/sys/net/ipv4/ip_local_port_range`. ReadonlyRootfs: type: "boolean" description: "Mount the container's root filesystem as read only." SecurityOpt: type: "array" description: "A list of string values to customize labels for MLS systems, such as SELinux." items: type: "string" StorageOpt: type: "object" description: | Storage driver options for this container, in the form `{"size": "120G"}`. additionalProperties: type: "string" Tmpfs: type: "object" description: | A map of container directories which should be replaced by tmpfs mounts, and their corresponding mount options. For example: ``` { "/run": "rw,noexec,nosuid,size=65536k" } ``` additionalProperties: type: "string" UTSMode: type: "string" description: "UTS namespace to use for the container." UsernsMode: type: "string" description: | Sets the usernamespace mode for the container when usernamespace remapping option is enabled. ShmSize: type: "integer" description: | Size of `/dev/shm` in bytes. If omitted, the system uses 64MB. minimum: 0 Sysctls: type: "object" description: | A list of kernel parameters (sysctls) to set in the container. For example: ``` {"net.ipv4.ip_forward": "1"} ``` additionalProperties: type: "string" Runtime: type: "string" description: "Runtime to use with this container." # Applicable to Windows ConsoleSize: type: "array" description: | Initial console size, as an `[height, width]` array. (Windows only) minItems: 2 maxItems: 2 items: type: "integer" minimum: 0 Isolation: type: "string" description: | Isolation technology of the container. (Windows only) enum: - "default" - "process" - "hyperv" MaskedPaths: type: "array" description: | The list of paths to be masked inside the container (this overrides the default set of paths). items: type: "string" ReadonlyPaths: type: "array" description: | The list of paths to be set as read-only inside the container (this overrides the default set of paths). items: type: "string" ContainerConfig: description: "Configuration for a container that is portable between hosts" type: "object" properties: Hostname: description: "The hostname to use for the container, as a valid RFC 1123 hostname." type: "string" Domainname: description: "The domain name to use for the container." type: "string" User: description: "The user that commands are run as inside the container." type: "string" AttachStdin: description: "Whether to attach to `stdin`." 
type: "boolean" default: false AttachStdout: description: "Whether to attach to `stdout`." type: "boolean" default: true AttachStderr: description: "Whether to attach to `stderr`." type: "boolean" default: true ExposedPorts: description: | An object mapping ports to an empty object in the form: `{"<port>/<tcp|udp|sctp>": {}}` type: "object" additionalProperties: type: "object" enum: - {} default: {} Tty: description: | Attach standard streams to a TTY, including `stdin` if it is not closed. type: "boolean" default: false OpenStdin: description: "Open `stdin`" type: "boolean" default: false StdinOnce: description: "Close `stdin` after one attached client disconnects" type: "boolean" default: false Env: description: | A list of environment variables to set inside the container in the form `["VAR=value", ...]`. A variable without `=` is removed from the environment, rather than to have an empty value. type: "array" items: type: "string" Cmd: description: | Command to run specified as a string or an array of strings. type: "array" items: type: "string" Healthcheck: $ref: "#/definitions/HealthConfig" ArgsEscaped: description: "Command is already escaped (Windows only)" type: "boolean" Image: description: | The name of the image to use when creating the container/ type: "string" Volumes: description: | An object mapping mount point paths inside the container to empty objects. type: "object" additionalProperties: type: "object" enum: - {} default: {} WorkingDir: description: "The working directory for commands to run in." type: "string" Entrypoint: description: | The entry point for the container as a string or an array of strings. If the array consists of exactly one empty string (`[""]`) then the entry point is reset to system default (i.e., the entry point used by docker when there is no `ENTRYPOINT` instruction in the `Dockerfile`). type: "array" items: type: "string" NetworkDisabled: description: "Disable networking for the container." type: "boolean" MacAddress: description: "MAC address of the container." type: "string" OnBuild: description: | `ONBUILD` metadata that were defined in the image's `Dockerfile`. type: "array" items: type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" StopSignal: description: | Signal to stop a container as a string or unsigned integer. type: "string" default: "SIGTERM" StopTimeout: description: "Timeout to stop a container in seconds." type: "integer" default: 10 Shell: description: | Shell for when `RUN`, `CMD`, and `ENTRYPOINT` uses a shell. type: "array" items: type: "string" NetworkingConfig: description: | NetworkingConfig represents the container's networking configuration for each of its interfaces. It is used for the networking configs specified in the `docker create` and `docker network connect` commands. type: "object" properties: EndpointsConfig: description: | A mapping of network name to endpoint configuration for that network. type: "object" additionalProperties: $ref: "#/definitions/EndpointSettings" example: # putting an example here, instead of using the example values from # /definitions/EndpointSettings, because containers/create currently # does not support attaching to multiple networks, so the example request # would be confusing if it showed that multiple networks can be contained # in the EndpointsConfig. 
# TODO remove once we support multiple networks on container create (see https://github.com/moby/moby/blob/07e6b843594e061f82baa5fa23c2ff7d536c2a05/daemon/create.go#L323) EndpointsConfig: isolated_nw: IPAMConfig: IPv4Address: "172.20.30.33" IPv6Address: "2001:db8:abcd::3033" LinkLocalIPs: - "169.254.34.68" - "fe80::3468" Links: - "container_1" - "container_2" Aliases: - "server_x" - "server_y" NetworkSettings: description: "NetworkSettings exposes the network settings in the API" type: "object" properties: Bridge: description: Name of the network's bridge (for example, `docker0`). type: "string" example: "docker0" SandboxID: description: SandboxID uniquely represents a container's network stack. type: "string" example: "9d12daf2c33f5959c8bf90aa513e4f65b561738661003029ec84830cd503a0c3" HairpinMode: description: | Indicates if hairpin NAT should be enabled on the virtual interface. type: "boolean" example: false LinkLocalIPv6Address: description: IPv6 unicast address using the link-local prefix. type: "string" example: "fe80::42:acff:fe11:1" LinkLocalIPv6PrefixLen: description: Prefix length of the IPv6 unicast address. type: "integer" example: "64" Ports: $ref: "#/definitions/PortMap" SandboxKey: description: SandboxKey identifies the sandbox type: "string" example: "/var/run/docker/netns/8ab54b426c38" # TODO is SecondaryIPAddresses actually used? SecondaryIPAddresses: description: "" type: "array" items: $ref: "#/definitions/Address" x-nullable: true # TODO is SecondaryIPv6Addresses actually used? SecondaryIPv6Addresses: description: "" type: "array" items: $ref: "#/definitions/Address" x-nullable: true # TODO properties below are part of DefaultNetworkSettings, which is # marked as deprecated since Docker 1.9 and to be removed in Docker v17.12 EndpointID: description: | EndpointID uniquely represents a service endpoint in a Sandbox. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "b88f5b905aabf2893f3cbc4ee42d1ea7980bbc0a92e2c8922b1e1795298afb0b" Gateway: description: | Gateway address for the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "172.17.0.1" GlobalIPv6Address: description: | Global IPv6 address for the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "2001:db8::5689" GlobalIPv6PrefixLen: description: | Mask length of the global IPv6 address. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. 
This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "integer" example: 64 IPAddress: description: | IPv4 address for the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "172.17.0.4" IPPrefixLen: description: | Mask length of the IPv4 address. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "integer" example: 16 IPv6Gateway: description: | IPv6 gateway address for this network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "2001:db8:2::100" MacAddress: description: | MAC address for the container on the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "02:42:ac:11:00:04" Networks: description: | Information about all networks that the container is connected to. type: "object" additionalProperties: $ref: "#/definitions/EndpointSettings" Address: description: Address represents an IPv4 or IPv6 IP address. type: "object" properties: Addr: description: IP address. type: "string" PrefixLen: description: Mask length of the IP address. type: "integer" PortMap: description: | PortMap describes the mapping of container ports to host ports, using the container's port-number and protocol as key in the format `<port>/<protocol>`, for example, `80/udp`. If a container's port is mapped for multiple protocols, separate entries are added to the mapping table. type: "object" additionalProperties: type: "array" x-nullable: true items: $ref: "#/definitions/PortBinding" example: "443/tcp": - HostIp: "127.0.0.1" HostPort: "4443" "80/tcp": - HostIp: "0.0.0.0" HostPort: "80" - HostIp: "0.0.0.0" HostPort: "8080" "80/udp": - HostIp: "0.0.0.0" HostPort: "80" "53/udp": - HostIp: "0.0.0.0" HostPort: "53" "2377/tcp": null PortBinding: description: | PortBinding represents a binding between a host IP address and a host port. type: "object" properties: HostIp: description: "Host IP address that the container's port is mapped to." type: "string" example: "127.0.0.1" HostPort: description: "Host port number that the container's port is mapped to." type: "string" example: "4443" GraphDriverData: description: "Information about a container's graph driver." 
type: "object" required: [Name, Data] properties: Name: type: "string" x-nullable: false Data: type: "object" x-nullable: false additionalProperties: type: "string" Image: type: "object" required: - Id - Parent - Comment - Created - Container - DockerVersion - Author - Architecture - Os - Size - VirtualSize - GraphDriver - RootFS properties: Id: type: "string" x-nullable: false RepoTags: type: "array" items: type: "string" RepoDigests: type: "array" items: type: "string" Parent: type: "string" x-nullable: false Comment: type: "string" x-nullable: false Created: type: "string" x-nullable: false Container: type: "string" x-nullable: false ContainerConfig: $ref: "#/definitions/ContainerConfig" DockerVersion: type: "string" x-nullable: false Author: type: "string" x-nullable: false Config: $ref: "#/definitions/ContainerConfig" Architecture: type: "string" x-nullable: false Os: type: "string" x-nullable: false OsVersion: type: "string" Size: type: "integer" format: "int64" x-nullable: false VirtualSize: type: "integer" format: "int64" x-nullable: false GraphDriver: $ref: "#/definitions/GraphDriverData" RootFS: type: "object" required: [Type] properties: Type: type: "string" x-nullable: false Layers: type: "array" items: type: "string" BaseLayer: type: "string" Metadata: type: "object" properties: LastTagTime: type: "string" format: "dateTime" ImageSummary: type: "object" required: - Id - ParentId - RepoTags - RepoDigests - Created - Size - SharedSize - VirtualSize - Labels - Containers properties: Id: type: "string" x-nullable: false ParentId: type: "string" x-nullable: false RepoTags: type: "array" x-nullable: false items: type: "string" RepoDigests: type: "array" x-nullable: false items: type: "string" Created: type: "integer" x-nullable: false Size: type: "integer" x-nullable: false SharedSize: type: "integer" x-nullable: false VirtualSize: type: "integer" x-nullable: false Labels: type: "object" x-nullable: false additionalProperties: type: "string" Containers: x-nullable: false type: "integer" AuthConfig: type: "object" properties: username: type: "string" password: type: "string" email: type: "string" serveraddress: type: "string" example: username: "hannibal" password: "xxxx" serveraddress: "https://index.docker.io/v1/" ProcessConfig: type: "object" properties: privileged: type: "boolean" user: type: "string" tty: type: "boolean" entrypoint: type: "string" arguments: type: "array" items: type: "string" Volume: type: "object" required: [Name, Driver, Mountpoint, Labels, Scope, Options] properties: Name: type: "string" description: "Name of the volume." x-nullable: false Driver: type: "string" description: "Name of the volume driver used by the volume." x-nullable: false Mountpoint: type: "string" description: "Mount path of the volume on the host." x-nullable: false CreatedAt: type: "string" format: "dateTime" description: "Date/Time the volume was created." Status: type: "object" description: | Low-level details about the volume, provided by the volume driver. Details are returned as a map with key/value pairs: `{"key":"value","key2":"value2"}`. The `Status` field is optional, and is omitted if the volume driver does not support this feature. additionalProperties: type: "object" Labels: type: "object" description: "User-defined key/value metadata." x-nullable: false additionalProperties: type: "string" Scope: type: "string" description: | The level at which the volume exists. Either `global` for cluster-wide, or `local` for machine level. 
default: "local" x-nullable: false enum: ["local", "global"] Options: type: "object" description: | The driver specific options used when creating the volume. additionalProperties: type: "string" UsageData: type: "object" x-nullable: true required: [Size, RefCount] description: | Usage details about the volume. This information is used by the `GET /system/df` endpoint, and omitted in other endpoints. properties: Size: type: "integer" default: -1 description: | Amount of disk space used by the volume (in bytes). This information is only available for volumes created with the `"local"` volume driver. For volumes created with other volume drivers, this field is set to `-1` ("not available") x-nullable: false RefCount: type: "integer" default: -1 description: | The number of containers referencing this volume. This field is set to `-1` if the reference-count is not available. x-nullable: false example: Name: "tardis" Driver: "custom" Mountpoint: "/var/lib/docker/volumes/tardis" Status: hello: "world" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Scope: "local" CreatedAt: "2016-06-07T20:31:11.853781916Z" Network: type: "object" properties: Name: type: "string" Id: type: "string" Created: type: "string" format: "dateTime" Scope: type: "string" Driver: type: "string" EnableIPv6: type: "boolean" IPAM: $ref: "#/definitions/IPAM" Internal: type: "boolean" Attachable: type: "boolean" Ingress: type: "boolean" Containers: type: "object" additionalProperties: $ref: "#/definitions/NetworkContainer" Options: type: "object" additionalProperties: type: "string" Labels: type: "object" additionalProperties: type: "string" example: Name: "net01" Id: "7d86d31b1478e7cca9ebed7e73aa0fdeec46c5ca29497431d3007d2d9e15ed99" Created: "2016-10-19T04:33:30.360899459Z" Scope: "local" Driver: "bridge" EnableIPv6: false IPAM: Driver: "default" Config: - Subnet: "172.19.0.0/16" Gateway: "172.19.0.1" Options: foo: "bar" Internal: false Attachable: false Ingress: false Containers: 19a4d5d687db25203351ed79d478946f861258f018fe384f229f2efa4b23513c: Name: "test" EndpointID: "628cadb8bcb92de107b2a1e516cbffe463e321f548feb37697cce00ad694f21a" MacAddress: "02:42:ac:13:00:02" IPv4Address: "172.19.0.2/16" IPv6Address: "" Options: com.docker.network.bridge.default_bridge: "true" com.docker.network.bridge.enable_icc: "true" com.docker.network.bridge.enable_ip_masquerade: "true" com.docker.network.bridge.host_binding_ipv4: "0.0.0.0" com.docker.network.bridge.name: "docker0" com.docker.network.driver.mtu: "1500" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" IPAM: type: "object" properties: Driver: description: "Name of the IPAM driver to use." type: "string" default: "default" Config: description: | List of IPAM configuration options, specified as a map: ``` {"Subnet": <CIDR>, "IPRange": <CIDR>, "Gateway": <IP address>, "AuxAddress": <device_name:IP address>} ``` type: "array" items: type: "object" additionalProperties: type: "string" Options: description: "Driver-specific options, specified as a map." 
type: "object" additionalProperties: type: "string" NetworkContainer: type: "object" properties: Name: type: "string" EndpointID: type: "string" MacAddress: type: "string" IPv4Address: type: "string" IPv6Address: type: "string" BuildInfo: type: "object" properties: id: type: "string" stream: type: "string" error: type: "string" errorDetail: $ref: "#/definitions/ErrorDetail" status: type: "string" progress: type: "string" progressDetail: $ref: "#/definitions/ProgressDetail" aux: $ref: "#/definitions/ImageID" BuildCache: type: "object" properties: ID: type: "string" Parent: type: "string" Type: type: "string" Description: type: "string" InUse: type: "boolean" Shared: type: "boolean" Size: description: | Amount of disk space used by the build cache (in bytes). type: "integer" CreatedAt: description: | Date and time at which the build cache was created in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2016-08-18T10:44:24.496525531Z" LastUsedAt: description: | Date and time at which the build cache was last used in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" x-nullable: true example: "2017-08-09T07:09:37.632105588Z" UsageCount: type: "integer" ImageID: type: "object" description: "Image ID or Digest" properties: ID: type: "string" example: ID: "sha256:85f05633ddc1c50679be2b16a0479ab6f7637f8884e0cfe0f4d20e1ebb3d6e7c" CreateImageInfo: type: "object" properties: id: type: "string" error: type: "string" status: type: "string" progress: type: "string" progressDetail: $ref: "#/definitions/ProgressDetail" PushImageInfo: type: "object" properties: error: type: "string" status: type: "string" progress: type: "string" progressDetail: $ref: "#/definitions/ProgressDetail" ErrorDetail: type: "object" properties: code: type: "integer" message: type: "string" ProgressDetail: type: "object" properties: current: type: "integer" total: type: "integer" ErrorResponse: description: "Represents an error." type: "object" required: ["message"] properties: message: description: "The error message." type: "string" x-nullable: false example: message: "Something went wrong." IdResponse: description: "Response to an API call that returns just an Id" type: "object" required: ["Id"] properties: Id: description: "The id of the newly created object." type: "string" x-nullable: false EndpointSettings: description: "Configuration for a network endpoint." type: "object" properties: # Configurations IPAMConfig: $ref: "#/definitions/EndpointIPAMConfig" Links: type: "array" items: type: "string" example: - "container_1" - "container_2" Aliases: type: "array" items: type: "string" example: - "server_x" - "server_y" # Operational data NetworkID: description: | Unique ID of the network. type: "string" example: "08754567f1f40222263eab4102e1c733ae697e8e354aa9cd6e18d7402835292a" EndpointID: description: | Unique ID for the service endpoint in a Sandbox. type: "string" example: "b88f5b905aabf2893f3cbc4ee42d1ea7980bbc0a92e2c8922b1e1795298afb0b" Gateway: description: | Gateway address for this network. type: "string" example: "172.17.0.1" IPAddress: description: | IPv4 address. type: "string" example: "172.17.0.4" IPPrefixLen: description: | Mask length of the IPv4 address. type: "integer" example: 16 IPv6Gateway: description: | IPv6 gateway address. type: "string" example: "2001:db8:2::100" GlobalIPv6Address: description: | Global IPv6 address. 
type: "string" example: "2001:db8::5689" GlobalIPv6PrefixLen: description: | Mask length of the global IPv6 address. type: "integer" format: "int64" example: 64 MacAddress: description: | MAC address for the endpoint on this network. type: "string" example: "02:42:ac:11:00:04" DriverOpts: description: | DriverOpts is a mapping of driver options and values. These options are passed directly to the driver and are driver specific. type: "object" x-nullable: true additionalProperties: type: "string" example: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" EndpointIPAMConfig: description: | EndpointIPAMConfig represents an endpoint's IPAM configuration. type: "object" x-nullable: true properties: IPv4Address: type: "string" example: "172.20.30.33" IPv6Address: type: "string" example: "2001:db8:abcd::3033" LinkLocalIPs: type: "array" items: type: "string" example: - "169.254.34.68" - "fe80::3468" PluginMount: type: "object" x-nullable: false required: [Name, Description, Settable, Source, Destination, Type, Options] properties: Name: type: "string" x-nullable: false example: "some-mount" Description: type: "string" x-nullable: false example: "This is a mount that's used by the plugin." Settable: type: "array" items: type: "string" Source: type: "string" example: "/var/lib/docker/plugins/" Destination: type: "string" x-nullable: false example: "/mnt/state" Type: type: "string" x-nullable: false example: "bind" Options: type: "array" items: type: "string" example: - "rbind" - "rw" PluginDevice: type: "object" required: [Name, Description, Settable, Path] x-nullable: false properties: Name: type: "string" x-nullable: false Description: type: "string" x-nullable: false Settable: type: "array" items: type: "string" Path: type: "string" example: "/dev/fuse" PluginEnv: type: "object" x-nullable: false required: [Name, Description, Settable, Value] properties: Name: x-nullable: false type: "string" Description: x-nullable: false type: "string" Settable: type: "array" items: type: "string" Value: type: "string" PluginInterfaceType: type: "object" x-nullable: false required: [Prefix, Capability, Version] properties: Prefix: type: "string" x-nullable: false Capability: type: "string" x-nullable: false Version: type: "string" x-nullable: false PluginPrivilegeItem: description: | Describes a permission the user has to accept upon installing the plugin. type: "object" properties: Name: type: "string" example: "network" Description: type: "string" Value: type: "array" items: type: "string" example: - "host" Plugin: description: "A plugin for the Engine API" type: "object" required: [Settings, Enabled, Config, Name] properties: Id: type: "string" example: "5724e2c8652da337ab2eedd19fc6fc0ec908e4bd907c7421bf6a8dfc70c4c078" Name: type: "string" x-nullable: false example: "tiborvass/sample-volume-plugin" Enabled: description: True if the plugin is running. False if the plugin is not running, only installed. type: "boolean" x-nullable: false example: true Settings: description: "Settings that can be modified by users." 
type: "object" x-nullable: false required: [Args, Devices, Env, Mounts] properties: Mounts: type: "array" items: $ref: "#/definitions/PluginMount" Env: type: "array" items: type: "string" example: - "DEBUG=0" Args: type: "array" items: type: "string" Devices: type: "array" items: $ref: "#/definitions/PluginDevice" PluginReference: description: "plugin remote reference used to push/pull the plugin" type: "string" x-nullable: false example: "localhost:5000/tiborvass/sample-volume-plugin:latest" Config: description: "The config of a plugin." type: "object" x-nullable: false required: - Description - Documentation - Interface - Entrypoint - WorkDir - Network - Linux - PidHost - PropagatedMount - IpcHost - Mounts - Env - Args properties: DockerVersion: description: "Docker Version used to create the plugin" type: "string" x-nullable: false example: "17.06.0-ce" Description: type: "string" x-nullable: false example: "A sample volume plugin for Docker" Documentation: type: "string" x-nullable: false example: "https://docs.docker.com/engine/extend/plugins/" Interface: description: "The interface between Docker and the plugin" x-nullable: false type: "object" required: [Types, Socket] properties: Types: type: "array" items: $ref: "#/definitions/PluginInterfaceType" example: - "docker.volumedriver/1.0" Socket: type: "string" x-nullable: false example: "plugins.sock" ProtocolScheme: type: "string" example: "some.protocol/v1.0" description: "Protocol to use for clients connecting to the plugin." enum: - "" - "moby.plugins.http/v1" Entrypoint: type: "array" items: type: "string" example: - "/usr/bin/sample-volume-plugin" - "/data" WorkDir: type: "string" x-nullable: false example: "/bin/" User: type: "object" x-nullable: false properties: UID: type: "integer" format: "uint32" example: 1000 GID: type: "integer" format: "uint32" example: 1000 Network: type: "object" x-nullable: false required: [Type] properties: Type: x-nullable: false type: "string" example: "host" Linux: type: "object" x-nullable: false required: [Capabilities, AllowAllDevices, Devices] properties: Capabilities: type: "array" items: type: "string" example: - "CAP_SYS_ADMIN" - "CAP_SYSLOG" AllowAllDevices: type: "boolean" x-nullable: false example: false Devices: type: "array" items: $ref: "#/definitions/PluginDevice" PropagatedMount: type: "string" x-nullable: false example: "/mnt/volumes" IpcHost: type: "boolean" x-nullable: false example: false PidHost: type: "boolean" x-nullable: false example: false Mounts: type: "array" items: $ref: "#/definitions/PluginMount" Env: type: "array" items: $ref: "#/definitions/PluginEnv" example: - Name: "DEBUG" Description: "If set, prints debug messages" Settable: null Value: "0" Args: type: "object" x-nullable: false required: [Name, Description, Settable, Value] properties: Name: x-nullable: false type: "string" example: "args" Description: x-nullable: false type: "string" example: "command line arguments" Settable: type: "array" items: type: "string" Value: type: "array" items: type: "string" rootfs: type: "object" properties: type: type: "string" example: "layers" diff_ids: type: "array" items: type: "string" example: - "sha256:675532206fbf3030b8458f88d6e26d4eb1577688a25efec97154c94e8b6b4887" - "sha256:e216a057b1cb1efc11f8a268f37ef62083e70b1b38323ba252e25ac88904a7e8" ObjectVersion: description: | The version number of the object such as node, service, etc. This is needed to avoid conflicting writes. 
The client must send the version number along with the modified specification when updating these objects. This approach ensures safe concurrency and determinism in that the change on the object may not be applied if the version number has changed from the last read. In other words, if two update requests specify the same base version, only one of the requests can succeed. As a result, two separate update requests that happen at the same time will not unintentionally overwrite each other. type: "object" properties: Index: type: "integer" format: "uint64" example: 373531 NodeSpec: type: "object" properties: Name: description: "Name for the node." type: "string" example: "my-node" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" Role: description: "Role of the node." type: "string" enum: - "worker" - "manager" example: "manager" Availability: description: "Availability of the node." type: "string" enum: - "active" - "pause" - "drain" example: "active" example: Availability: "active" Name: "node-name" Role: "manager" Labels: foo: "bar" Node: type: "object" properties: ID: type: "string" example: "24ifsmvkjbyhk" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: description: | Date and time at which the node was added to the swarm in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2016-08-18T10:44:24.496525531Z" UpdatedAt: description: | Date and time at which the node was last updated in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2017-08-09T07:09:37.632105588Z" Spec: $ref: "#/definitions/NodeSpec" Description: $ref: "#/definitions/NodeDescription" Status: $ref: "#/definitions/NodeStatus" ManagerStatus: $ref: "#/definitions/ManagerStatus" NodeDescription: description: | NodeDescription encapsulates the properties of the Node as reported by the agent. type: "object" properties: Hostname: type: "string" example: "bf3067039e47" Platform: $ref: "#/definitions/Platform" Resources: $ref: "#/definitions/ResourceObject" Engine: $ref: "#/definitions/EngineDescription" TLSInfo: $ref: "#/definitions/TLSInfo" Platform: description: | Platform represents the platform (Arch/OS). type: "object" properties: Architecture: description: | Architecture represents the hardware architecture (for example, `x86_64`). type: "string" example: "x86_64" OS: description: | OS represents the Operating System (for example, `linux` or `windows`). type: "string" example: "linux" EngineDescription: description: "EngineDescription provides information about an engine." 
type: "object" properties: EngineVersion: type: "string" example: "17.06.0" Labels: type: "object" additionalProperties: type: "string" example: foo: "bar" Plugins: type: "array" items: type: "object" properties: Type: type: "string" Name: type: "string" example: - Type: "Log" Name: "awslogs" - Type: "Log" Name: "fluentd" - Type: "Log" Name: "gcplogs" - Type: "Log" Name: "gelf" - Type: "Log" Name: "journald" - Type: "Log" Name: "json-file" - Type: "Log" Name: "logentries" - Type: "Log" Name: "splunk" - Type: "Log" Name: "syslog" - Type: "Network" Name: "bridge" - Type: "Network" Name: "host" - Type: "Network" Name: "ipvlan" - Type: "Network" Name: "macvlan" - Type: "Network" Name: "null" - Type: "Network" Name: "overlay" - Type: "Volume" Name: "local" - Type: "Volume" Name: "localhost:5000/vieux/sshfs:latest" - Type: "Volume" Name: "vieux/sshfs:latest" TLSInfo: description: | Information about the issuer of leaf TLS certificates and the trusted root CA certificate. type: "object" properties: TrustRoot: description: | The root CA certificate(s) that are used to validate leaf TLS certificates. type: "string" CertIssuerSubject: description: The base64-url-safe-encoded raw subject bytes of the issuer. type: "string" CertIssuerPublicKey: description: | The base64-url-safe-encoded raw public key bytes of the issuer. type: "string" example: TrustRoot: | -----BEGIN CERTIFICATE----- MIIBajCCARCgAwIBAgIUbYqrLSOSQHoxD8CwG6Bi2PJi9c8wCgYIKoZIzj0EAwIw EzERMA8GA1UEAxMIc3dhcm0tY2EwHhcNMTcwNDI0MjE0MzAwWhcNMzcwNDE5MjE0 MzAwWjATMREwDwYDVQQDEwhzd2FybS1jYTBZMBMGByqGSM49AgEGCCqGSM49AwEH A0IABJk/VyMPYdaqDXJb/VXh5n/1Yuv7iNrxV3Qb3l06XD46seovcDWs3IZNV1lf 3Skyr0ofcchipoiHkXBODojJydSjQjBAMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMB Af8EBTADAQH/MB0GA1UdDgQWBBRUXxuRcnFjDfR/RIAUQab8ZV/n4jAKBggqhkjO PQQDAgNIADBFAiAy+JTe6Uc3KyLCMiqGl2GyWGQqQDEcO3/YG36x7om65AIhAJvz pxv6zFeVEkAEEkqIYi0omA9+CjanB/6Bz4n1uw8H -----END CERTIFICATE----- CertIssuerSubject: "MBMxETAPBgNVBAMTCHN3YXJtLWNh" CertIssuerPublicKey: "MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEmT9XIw9h1qoNclv9VeHmf/Vi6/uI2vFXdBveXTpcPjqx6i9wNazchk1XWV/dKTKvSh9xyGKmiIeRcE4OiMnJ1A==" NodeStatus: description: | NodeStatus represents the status of a node. It provides the current status of the node, as seen by the manager. type: "object" properties: State: $ref: "#/definitions/NodeState" Message: type: "string" example: "" Addr: description: "IP address of the node." type: "string" example: "172.17.0.2" NodeState: description: "NodeState represents the state of a node." type: "string" enum: - "unknown" - "down" - "ready" - "disconnected" example: "ready" ManagerStatus: description: | ManagerStatus represents the status of a manager. It provides the current status of a node's manager component, if the node is a manager. x-nullable: true type: "object" properties: Leader: type: "boolean" default: false example: true Reachability: $ref: "#/definitions/Reachability" Addr: description: | The IP address and port at which the manager is reachable. type: "string" example: "10.0.0.46:2377" Reachability: description: "Reachability represents the reachability of a node." type: "string" enum: - "unknown" - "unreachable" - "reachable" example: "reachable" SwarmSpec: description: "User modifiable swarm configuration." type: "object" properties: Name: description: "Name of the swarm." type: "string" example: "default" Labels: description: "User-defined key/value metadata." 
type: "object" additionalProperties: type: "string" example: com.example.corp.type: "production" com.example.corp.department: "engineering" Orchestration: description: "Orchestration configuration." type: "object" x-nullable: true properties: TaskHistoryRetentionLimit: description: | The number of historic tasks to keep per instance or node. If negative, never remove completed or failed tasks. type: "integer" format: "int64" example: 10 Raft: description: "Raft configuration." type: "object" properties: SnapshotInterval: description: "The number of log entries between snapshots." type: "integer" format: "uint64" example: 10000 KeepOldSnapshots: description: | The number of snapshots to keep beyond the current snapshot. type: "integer" format: "uint64" LogEntriesForSlowFollowers: description: | The number of log entries to keep around to sync up slow followers after a snapshot is created. type: "integer" format: "uint64" example: 500 ElectionTick: description: | The number of ticks that a follower will wait for a message from the leader before becoming a candidate and starting an election. `ElectionTick` must be greater than `HeartbeatTick`. A tick currently defaults to one second, so these translate directly to seconds currently, but this is NOT guaranteed. type: "integer" example: 3 HeartbeatTick: description: | The number of ticks between heartbeats. Every HeartbeatTick ticks, the leader will send a heartbeat to the followers. A tick currently defaults to one second, so these translate directly to seconds currently, but this is NOT guaranteed. type: "integer" example: 1 Dispatcher: description: "Dispatcher configuration." type: "object" x-nullable: true properties: HeartbeatPeriod: description: | The delay for an agent to send a heartbeat to the dispatcher. type: "integer" format: "int64" example: 5000000000 CAConfig: description: "CA configuration." type: "object" x-nullable: true properties: NodeCertExpiry: description: "The duration node certificates are issued for." type: "integer" format: "int64" example: 7776000000000000 ExternalCAs: description: | Configuration for forwarding signing requests to an external certificate authority. type: "array" items: type: "object" properties: Protocol: description: | Protocol for communication with the external CA (currently only `cfssl` is supported). type: "string" enum: - "cfssl" default: "cfssl" URL: description: | URL where certificate signing requests should be sent. type: "string" Options: description: | An object with key/value pairs that are interpreted as protocol-specific options for the external CA driver. type: "object" additionalProperties: type: "string" CACert: description: | The root CA certificate (in PEM format) this external CA uses to issue TLS certificates (assumed to be to the current swarm root CA certificate if not provided). type: "string" SigningCACert: description: | The desired signing CA certificate for all swarm node TLS leaf certificates, in PEM format. type: "string" SigningCAKey: description: | The desired signing CA key for all swarm node TLS leaf certificates, in PEM format. type: "string" ForceRotate: description: | An integer whose purpose is to force swarm to generate a new signing CA certificate and key, if none have been specified in `SigningCACert` and `SigningCAKey` format: "uint64" type: "integer" EncryptionConfig: description: "Parameters related to encryption-at-rest." type: "object" properties: AutoLockManagers: description: | If set, generate a key and use it to lock data stored on the managers. 
type: "boolean" example: false TaskDefaults: description: "Defaults for creating tasks in this cluster." type: "object" properties: LogDriver: description: | The log driver to use for tasks created in the orchestrator if unspecified by a service. Updating this value only affects new tasks. Existing tasks continue to use their previously configured log driver until recreated. type: "object" properties: Name: description: | The log driver to use as a default for new tasks. type: "string" example: "json-file" Options: description: | Driver-specific options for the selectd log driver, specified as key/value pairs. type: "object" additionalProperties: type: "string" example: "max-file": "10" "max-size": "100m" # The Swarm information for `GET /info`. It is the same as `GET /swarm`, but # without `JoinTokens`. ClusterInfo: description: | ClusterInfo represents information about the swarm as is returned by the "/info" endpoint. Join-tokens are not included. x-nullable: true type: "object" properties: ID: description: "The ID of the swarm." type: "string" example: "abajmipo7b4xz5ip2nrla6b11" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: description: | Date and time at which the swarm was initialised in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2016-08-18T10:44:24.496525531Z" UpdatedAt: description: | Date and time at which the swarm was last updated in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2017-08-09T07:09:37.632105588Z" Spec: $ref: "#/definitions/SwarmSpec" TLSInfo: $ref: "#/definitions/TLSInfo" RootRotationInProgress: description: | Whether there is currently a root CA rotation in progress for the swarm type: "boolean" example: false DataPathPort: description: | DataPathPort specifies the data path port number for data traffic. Acceptable port range is 1024 to 49151. If no port is set or is set to 0, the default port (4789) is used. type: "integer" format: "uint32" default: 4789 example: 4789 DefaultAddrPool: description: | Default Address Pool specifies default subnet pools for global scope networks. type: "array" items: type: "string" format: "CIDR" example: ["10.10.0.0/16", "20.20.0.0/16"] SubnetSize: description: | SubnetSize specifies the subnet size of the networks created from the default subnet pool. type: "integer" format: "uint32" maximum: 29 default: 24 example: 24 JoinTokens: description: | JoinTokens contains the tokens workers and managers need to join the swarm. type: "object" properties: Worker: description: | The token workers can use to join the swarm. type: "string" example: "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-1awxwuwd3z9j1z3puu7rcgdbx" Manager: description: | The token managers can use to join the swarm. type: "string" example: "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-7p73s1dx5in4tatdymyhg9hu2" Swarm: type: "object" allOf: - $ref: "#/definitions/ClusterInfo" - type: "object" properties: JoinTokens: $ref: "#/definitions/JoinTokens" TaskSpec: description: "User modifiable task configuration." type: "object" properties: PluginSpec: type: "object" description: | Plugin spec for the service. *(Experimental release only.)* <p><br /></p> > **Note**: ContainerSpec, NetworkAttachmentSpec, and PluginSpec are > mutually exclusive. PluginSpec is only used when the Runtime field > is set to `plugin`. NetworkAttachmentSpec is used when the Runtime > field is set to `attachment`. 
properties: Name: description: "The name or 'alias' to use for the plugin." type: "string" Remote: description: "The plugin image reference to use." type: "string" Disabled: description: "Disable the plugin once scheduled." type: "boolean" PluginPrivilege: type: "array" items: $ref: "#/definitions/PluginPrivilegeItem" ContainerSpec: type: "object" description: | Container spec for the service. <p><br /></p> > **Note**: ContainerSpec, NetworkAttachmentSpec, and PluginSpec are > mutually exclusive. PluginSpec is only used when the Runtime field > is set to `plugin`. NetworkAttachmentSpec is used when the Runtime > field is set to `attachment`. properties: Image: description: "The image name to use for the container" type: "string" Labels: description: "User-defined key/value data." type: "object" additionalProperties: type: "string" Command: description: "The command to be run in the image." type: "array" items: type: "string" Args: description: "Arguments to the command." type: "array" items: type: "string" Hostname: description: | The hostname to use for the container, as a valid [RFC 1123](https://tools.ietf.org/html/rfc1123) hostname. type: "string" Env: description: | A list of environment variables in the form `VAR=value`. type: "array" items: type: "string" Dir: description: "The working directory for commands to run in." type: "string" User: description: "The user inside the container." type: "string" Groups: type: "array" description: | A list of additional groups that the container process will run as. items: type: "string" Privileges: type: "object" description: "Security options for the container" properties: CredentialSpec: type: "object" description: "CredentialSpec for managed service account (Windows only)" properties: Config: type: "string" example: "0bt9dmxjvjiqermk6xrop3ekq" description: | Load credential spec from a Swarm Config with the given ID. The specified config must also be present in the Configs field with the Runtime property set. <p><br /></p> > **Note**: `CredentialSpec.File`, `CredentialSpec.Registry`, > and `CredentialSpec.Config` are mutually exclusive. File: type: "string" example: "spec.json" description: | Load credential spec from this file. The file is read by the daemon, and must be present in the `CredentialSpecs` subdirectory in the docker data directory, which defaults to `C:\ProgramData\Docker\` on Windows. For example, specifying `spec.json` loads `C:\ProgramData\Docker\CredentialSpecs\spec.json`. <p><br /></p> > **Note**: `CredentialSpec.File`, `CredentialSpec.Registry`, > and `CredentialSpec.Config` are mutually exclusive. Registry: type: "string" description: | Load credential spec from this value in the Windows registry. The specified registry value must be located in: `HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization\Containers\CredentialSpecs` <p><br /></p> > **Note**: `CredentialSpec.File`, `CredentialSpec.Registry`, > and `CredentialSpec.Config` are mutually exclusive. SELinuxContext: type: "object" description: "SELinux labels of the container" properties: Disable: type: "boolean" description: "Disable SELinux" User: type: "string" description: "SELinux user label" Role: type: "string" description: "SELinux role label" Type: type: "string" description: "SELinux type label" Level: type: "string" description: "SELinux level label" TTY: description: "Whether a pseudo-TTY should be allocated." 
type: "boolean" OpenStdin: description: "Open `stdin`" type: "boolean" ReadOnly: description: "Mount the container's root filesystem as read only." type: "boolean" Mounts: description: | Specification for mounts to be added to containers created as part of the service. type: "array" items: $ref: "#/definitions/Mount" StopSignal: description: "Signal to stop the container." type: "string" StopGracePeriod: description: | Amount of time to wait for the container to terminate before forcefully killing it. type: "integer" format: "int64" HealthCheck: $ref: "#/definitions/HealthConfig" Hosts: type: "array" description: | A list of hostname/IP mappings to add to the container's `hosts` file. The format of extra hosts is specified in the [hosts(5)](http://man7.org/linux/man-pages/man5/hosts.5.html) man page: IP_address canonical_hostname [aliases...] items: type: "string" DNSConfig: description: | Specification for DNS related configurations in resolver configuration file (`resolv.conf`). type: "object" properties: Nameservers: description: "The IP addresses of the name servers." type: "array" items: type: "string" Search: description: "A search list for host-name lookup." type: "array" items: type: "string" Options: description: | A list of internal resolver variables to be modified (e.g., `debug`, `ndots:3`, etc.). type: "array" items: type: "string" Secrets: description: | Secrets contains references to zero or more secrets that will be exposed to the service. type: "array" items: type: "object" properties: File: description: | File represents a specific target that is backed by a file. type: "object" properties: Name: description: | Name represents the final filename in the filesystem. type: "string" UID: description: "UID represents the file UID." type: "string" GID: description: "GID represents the file GID." type: "string" Mode: description: "Mode represents the FileMode of the file." type: "integer" format: "uint32" SecretID: description: | SecretID represents the ID of the specific secret that we're referencing. type: "string" SecretName: description: | SecretName is the name of the secret that this references, but this is just provided for lookup/display purposes. The secret in the reference will be identified by its ID. type: "string" Configs: description: | Configs contains references to zero or more configs that will be exposed to the service. type: "array" items: type: "object" properties: File: description: | File represents a specific target that is backed by a file. <p><br /><p> > **Note**: `Configs.File` and `Configs.Runtime` are mutually exclusive type: "object" properties: Name: description: | Name represents the final filename in the filesystem. type: "string" UID: description: "UID represents the file UID." type: "string" GID: description: "GID represents the file GID." type: "string" Mode: description: "Mode represents the FileMode of the file." type: "integer" format: "uint32" Runtime: description: | Runtime represents a target that is not mounted into the container but is used by the task <p><br /><p> > **Note**: `Configs.File` and `Configs.Runtime` are mutually > exclusive type: "object" ConfigID: description: | ConfigID represents the ID of the specific config that we're referencing. type: "string" ConfigName: description: | ConfigName is the name of the config that this references, but this is just provided for lookup/display purposes. The config in the reference will be identified by its ID. 
type: "string" Isolation: type: "string" description: | Isolation technology of the containers running the service. (Windows only) enum: - "default" - "process" - "hyperv" Init: description: | Run an init inside the container that forwards signals and reaps processes. This field is omitted if empty, and the default (as configured on the daemon) is used. type: "boolean" x-nullable: true Sysctls: description: | Set kernel namedspaced parameters (sysctls) in the container. The Sysctls option on services accepts the same sysctls as the are supported on containers. Note that while the same sysctls are supported, no guarantees or checks are made about their suitability for a clustered environment, and it's up to the user to determine whether a given sysctl will work properly in a Service. type: "object" additionalProperties: type: "string" # This option is not used by Windows containers CapabilityAdd: type: "array" description: | A list of kernel capabilities to add to the default set for the container. items: type: "string" example: - "CAP_NET_RAW" - "CAP_SYS_ADMIN" - "CAP_SYS_CHROOT" - "CAP_SYSLOG" CapabilityDrop: type: "array" description: | A list of kernel capabilities to drop from the default set for the container. items: type: "string" example: - "CAP_NET_RAW" Ulimits: description: | A list of resource limits to set in the container. For example: `{"Name": "nofile", "Soft": 1024, "Hard": 2048}`" type: "array" items: type: "object" properties: Name: description: "Name of ulimit" type: "string" Soft: description: "Soft limit" type: "integer" Hard: description: "Hard limit" type: "integer" NetworkAttachmentSpec: description: | Read-only spec type for non-swarm containers attached to swarm overlay networks. <p><br /></p> > **Note**: ContainerSpec, NetworkAttachmentSpec, and PluginSpec are > mutually exclusive. PluginSpec is only used when the Runtime field > is set to `plugin`. NetworkAttachmentSpec is used when the Runtime > field is set to `attachment`. type: "object" properties: ContainerID: description: "ID of the container represented by this task" type: "string" Resources: description: | Resource requirements which apply to each individual container created as part of the service. type: "object" properties: Limits: description: "Define resources limits." $ref: "#/definitions/Limit" Reservation: description: "Define resources reservation." $ref: "#/definitions/ResourceObject" RestartPolicy: description: | Specification for the restart policy which applies to containers created as part of this service. type: "object" properties: Condition: description: "Condition for restart." type: "string" enum: - "none" - "on-failure" - "any" Delay: description: "Delay between restart attempts." type: "integer" format: "int64" MaxAttempts: description: | Maximum attempts to restart a given container before giving up (default value is 0, which is ignored). type: "integer" format: "int64" default: 0 Window: description: | Windows is the time window used to evaluate the restart policy (default value is 0, which is unbounded). type: "integer" format: "int64" default: 0 Placement: type: "object" properties: Constraints: description: | An array of constraint expressions to limit the set of nodes where a task can be scheduled. Constraint expressions can either use a _match_ (`==`) or _exclude_ (`!=`) rule. Multiple constraints find nodes that satisfy every expression (AND match). 
Constraints can match node or Docker Engine labels as follows: node attribute | matches | example ---------------------|--------------------------------|----------------------------------------------- `node.id` | Node ID | `node.id==2ivku8v2gvtg4` `node.hostname` | Node hostname | `node.hostname!=node-2` `node.role` | Node role (`manager`/`worker`) | `node.role==manager` `node.platform.os` | Node operating system | `node.platform.os==windows` `node.platform.arch` | Node architecture | `node.platform.arch==x86_64` `node.labels` | User-defined node labels | `node.labels.security==high` `engine.labels` | Docker Engine's labels | `engine.labels.operatingsystem==ubuntu-14.04` `engine.labels` apply to Docker Engine labels like operating system, drivers, etc. Swarm administrators add `node.labels` for operational purposes by using the [`node update endpoint`](#operation/NodeUpdate). type: "array" items: type: "string" example: - "node.hostname!=node3.corp.example.com" - "node.role!=manager" - "node.labels.type==production" - "node.platform.os==linux" - "node.platform.arch==x86_64" Preferences: description: | Preferences provide a way to make the scheduler aware of factors such as topology. They are provided in order from highest to lowest precedence. type: "array" items: type: "object" properties: Spread: type: "object" properties: SpreadDescriptor: description: | label descriptor, such as `engine.labels.az`. type: "string" example: - Spread: SpreadDescriptor: "node.labels.datacenter" - Spread: SpreadDescriptor: "node.labels.rack" MaxReplicas: description: | Maximum number of replicas for per node (default value is 0, which is unlimited) type: "integer" format: "int64" default: 0 Platforms: description: | Platforms stores all the platforms that the service's image can run on. This field is used in the platform filter for scheduling. If empty, then the platform filter is off, meaning there are no scheduling restrictions. type: "array" items: $ref: "#/definitions/Platform" ForceUpdate: description: | A counter that triggers an update even if no relevant parameters have been changed. type: "integer" Runtime: description: | Runtime is the type of runtime specified for the task executor. type: "string" Networks: description: "Specifies which networks the service should attach to." type: "array" items: $ref: "#/definitions/NetworkAttachmentConfig" LogDriver: description: | Specifies the log driver to use for tasks created from this spec. If not present, the default one for the swarm will be used, finally falling back to the engine default if not specified. type: "object" properties: Name: type: "string" Options: type: "object" additionalProperties: type: "string" TaskState: type: "string" enum: - "new" - "allocated" - "pending" - "assigned" - "accepted" - "preparing" - "ready" - "starting" - "running" - "complete" - "shutdown" - "failed" - "rejected" - "remove" - "orphaned" Task: type: "object" properties: ID: description: "The ID of the task." type: "string" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" UpdatedAt: type: "string" format: "dateTime" Name: description: "Name of the task." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" Spec: $ref: "#/definitions/TaskSpec" ServiceID: description: "The ID of the service this task is part of." type: "string" Slot: type: "integer" NodeID: description: "The ID of the node that this task is on." 
type: "string" AssignedGenericResources: $ref: "#/definitions/GenericResources" Status: type: "object" properties: Timestamp: type: "string" format: "dateTime" State: $ref: "#/definitions/TaskState" Message: type: "string" Err: type: "string" ContainerStatus: type: "object" properties: ContainerID: type: "string" PID: type: "integer" ExitCode: type: "integer" DesiredState: $ref: "#/definitions/TaskState" JobIteration: description: | If the Service this Task belongs to is a job-mode service, contains the JobIteration of the Service this Task was created for. Absent if the Task was created for a Replicated or Global Service. $ref: "#/definitions/ObjectVersion" example: ID: "0kzzo1i0y4jz6027t0k7aezc7" Version: Index: 71 CreatedAt: "2016-06-07T21:07:31.171892745Z" UpdatedAt: "2016-06-07T21:07:31.376370513Z" Spec: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ServiceID: "9mnpnzenvg8p8tdbtq4wvbkcz" Slot: 1 NodeID: "60gvrl6tm78dmak4yl7srz94v" Status: Timestamp: "2016-06-07T21:07:31.290032978Z" State: "running" Message: "started" ContainerStatus: ContainerID: "e5d62702a1b48d01c3e02ca1e0212a250801fa8d67caca0b6f35919ebc12f035" PID: 677 DesiredState: "running" NetworksAttachments: - Network: ID: "4qvuz4ko70xaltuqbt8956gd1" Version: Index: 18 CreatedAt: "2016-06-07T20:31:11.912919752Z" UpdatedAt: "2016-06-07T21:07:29.955277358Z" Spec: Name: "ingress" Labels: com.docker.swarm.internal: "true" DriverConfiguration: {} IPAMOptions: Driver: {} Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" DriverState: Name: "overlay" Options: com.docker.network.driver.overlay.vxlanid_list: "256" IPAMOptions: Driver: Name: "default" Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" Addresses: - "10.255.0.10/16" AssignedGenericResources: - DiscreteResourceSpec: Kind: "SSD" Value: 3 - NamedResourceSpec: Kind: "GPU" Value: "UUID1" - NamedResourceSpec: Kind: "GPU" Value: "UUID2" ServiceSpec: description: "User modifiable configuration for a service." properties: Name: description: "Name of the service." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" TaskTemplate: $ref: "#/definitions/TaskSpec" Mode: description: "Scheduling mode for the service." type: "object" properties: Replicated: type: "object" properties: Replicas: type: "integer" format: "int64" Global: type: "object" ReplicatedJob: description: | The mode used for services with a finite number of tasks that run to a completed state. type: "object" properties: MaxConcurrent: description: | The maximum number of replicas to run simultaneously. type: "integer" format: "int64" default: 1 TotalCompletions: description: | The total number of replicas desired to reach the Completed state. If unset, will default to the value of `MaxConcurrent` type: "integer" format: "int64" GlobalJob: description: | The mode used for services which run a task to the completed state on each valid node. type: "object" UpdateConfig: description: "Specification for the update strategy of the service." type: "object" properties: Parallelism: description: | Maximum number of tasks to be updated in one iteration (0 means unlimited parallelism). type: "integer" format: "int64" Delay: description: "Amount of time between updates, in nanoseconds." type: "integer" format: "int64" FailureAction: description: | Action to take if an updated task fails to run, or stops running during the update. 
            type: "string"
            enum:
              - "continue"
              - "pause"
              - "rollback"
          Monitor:
            description: |
              Amount of time to monitor each updated task for failures, in
              nanoseconds.
            type: "integer"
            format: "int64"
          MaxFailureRatio:
            description: |
              The fraction of tasks that may fail during an update before the
              failure action is invoked, specified as a floating point number
              between 0 and 1.
            type: "number"
            default: 0
          Order:
            description: |
              The order of operations when rolling out an updated task. Either
              the old task is shut down before the new task is started, or the
              new task is started before the old task is shut down.
            type: "string"
            enum:
              - "stop-first"
              - "start-first"
      RollbackConfig:
        description: "Specification for the rollback strategy of the service."
        type: "object"
        properties:
          Parallelism:
            description: |
              Maximum number of tasks to be rolled back in one iteration (0
              means unlimited parallelism).
            type: "integer"
            format: "int64"
          Delay:
            description: |
              Amount of time between rollback iterations, in nanoseconds.
            type: "integer"
            format: "int64"
          FailureAction:
            description: |
              Action to take if a rolled back task fails to run, or stops
              running during the rollback.
            type: "string"
            enum:
              - "continue"
              - "pause"
          Monitor:
            description: |
              Amount of time to monitor each rolled back task for failures, in
              nanoseconds.
            type: "integer"
            format: "int64"
          MaxFailureRatio:
            description: |
              The fraction of tasks that may fail during a rollback before the
              failure action is invoked, specified as a floating point number
              between 0 and 1.
            type: "number"
            default: 0
          Order:
            description: |
              The order of operations when rolling back a task. Either the old
              task is shut down before the new task is started, or the new task
              is started before the old task is shut down.
            type: "string"
            enum:
              - "stop-first"
              - "start-first"
      Networks:
        description: "Specifies which networks the service should attach to."
        type: "array"
        items:
          $ref: "#/definitions/NetworkAttachmentConfig"
      EndpointSpec:
        $ref: "#/definitions/EndpointSpec"

  EndpointPortConfig:
    type: "object"
    properties:
      Name:
        type: "string"
      Protocol:
        type: "string"
        enum:
          - "tcp"
          - "udp"
          - "sctp"
      TargetPort:
        description: "The port inside the container."
        type: "integer"
      PublishedPort:
        description: "The port on the swarm hosts."
        type: "integer"
      PublishMode:
        description: |
          The mode in which port is published.

          <p><br /></p>

          - "ingress" makes the target port accessible on every node,
            regardless of whether there is a task for the service running on
            that node or not.
          - "host" bypasses the routing mesh and publishes the port directly on
            the swarm node where that service is running.
        type: "string"
        enum:
          - "ingress"
          - "host"
        default: "ingress"
        example: "ingress"

  EndpointSpec:
    description: "Properties that can be configured to access and load balance a service."
    type: "object"
    properties:
      Mode:
        description: |
          The mode of resolution to use for internal load balancing between tasks.
        type: "string"
        enum:
          - "vip"
          - "dnsrr"
        default: "vip"
      Ports:
        description: |
          List of exposed ports that this service is accessible on from the
          outside. Ports can only be provided if `vip` resolution mode is used.
type: "array" items: $ref: "#/definitions/EndpointPortConfig" Service: type: "object" properties: ID: type: "string" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" UpdatedAt: type: "string" format: "dateTime" Spec: $ref: "#/definitions/ServiceSpec" Endpoint: type: "object" properties: Spec: $ref: "#/definitions/EndpointSpec" Ports: type: "array" items: $ref: "#/definitions/EndpointPortConfig" VirtualIPs: type: "array" items: type: "object" properties: NetworkID: type: "string" Addr: type: "string" UpdateStatus: description: "The status of a service update." type: "object" properties: State: type: "string" enum: - "updating" - "paused" - "completed" StartedAt: type: "string" format: "dateTime" CompletedAt: type: "string" format: "dateTime" Message: type: "string" ServiceStatus: description: | The status of the service's tasks. Provided only when requested as part of a ServiceList operation. type: "object" properties: RunningTasks: description: | The number of tasks for the service currently in the Running state. type: "integer" format: "uint64" example: 7 DesiredTasks: description: | The number of tasks for the service desired to be running. For replicated services, this is the replica count from the service spec. For global services, this is computed by taking count of all tasks for the service with a Desired State other than Shutdown. type: "integer" format: "uint64" example: 10 CompletedTasks: description: | The number of tasks for a job that are in the Completed state. This field must be cross-referenced with the service type, as the value of 0 may mean the service is not in a job mode, or it may mean the job-mode service has no tasks yet Completed. type: "integer" format: "uint64" JobStatus: description: | The status of the service when it is in one of ReplicatedJob or GlobalJob modes. Absent on Replicated and Global mode services. The JobIteration is an ObjectVersion, but unlike the Service's version, does not need to be sent with an update request. type: "object" properties: JobIteration: description: | JobIteration is a value increased each time a Job is executed, successfully or otherwise. "Executed", in this case, means the job as a whole has been started, not that an individual Task has been launched. A job is "Executed" when its ServiceSpec is updated. JobIteration can be used to disambiguate Tasks belonging to different executions of a job. Though JobIteration will increase with each subsequent execution, it may not necessarily increase by 1, and so JobIteration should not be used to $ref: "#/definitions/ObjectVersion" LastExecution: description: | The last time, as observed by the server, that this job was started. 
type: "string" format: "dateTime" example: ID: "9mnpnzenvg8p8tdbtq4wvbkcz" Version: Index: 19 CreatedAt: "2016-06-07T21:05:51.880065305Z" UpdatedAt: "2016-06-07T21:07:29.962229872Z" Spec: Name: "hopeful_cori" TaskTemplate: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ForceUpdate: 0 Mode: Replicated: Replicas: 1 UpdateConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 RollbackConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 EndpointSpec: Mode: "vip" Ports: - Protocol: "tcp" TargetPort: 6379 PublishedPort: 30001 Endpoint: Spec: Mode: "vip" Ports: - Protocol: "tcp" TargetPort: 6379 PublishedPort: 30001 Ports: - Protocol: "tcp" TargetPort: 6379 PublishedPort: 30001 VirtualIPs: - NetworkID: "4qvuz4ko70xaltuqbt8956gd1" Addr: "10.255.0.2/16" - NetworkID: "4qvuz4ko70xaltuqbt8956gd1" Addr: "10.255.0.3/16" ImageDeleteResponseItem: type: "object" properties: Untagged: description: "The image ID of an image that was untagged" type: "string" Deleted: description: "The image ID of an image that was deleted" type: "string" ServiceUpdateResponse: type: "object" properties: Warnings: description: "Optional warning messages" type: "array" items: type: "string" example: Warning: "unable to pin image doesnotexist:latest to digest: image library/doesnotexist:latest not found" ContainerSummary: type: "object" properties: Id: description: "The ID of this container" type: "string" x-go-name: "ID" Names: description: "The names that this container has been given" type: "array" items: type: "string" Image: description: "The name of the image used when creating this container" type: "string" ImageID: description: "The ID of the image that this container was created from" type: "string" Command: description: "Command to run when starting the container" type: "string" Created: description: "When the container was created" type: "integer" format: "int64" Ports: description: "The ports exposed by this container" type: "array" items: $ref: "#/definitions/Port" SizeRw: description: "The size of files that have been created or changed by this container" type: "integer" format: "int64" SizeRootFs: description: "The total size of all the files in this container" type: "integer" format: "int64" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" State: description: "The state of this container (e.g. `Exited`)" type: "string" Status: description: "Additional human-readable status of this container (e.g. `Exit 0`)" type: "string" HostConfig: type: "object" properties: NetworkMode: type: "string" NetworkSettings: description: "A summary of the container's network settings" type: "object" properties: Networks: type: "object" additionalProperties: $ref: "#/definitions/EndpointSettings" Mounts: type: "array" items: $ref: "#/definitions/Mount" Driver: description: "Driver represents a driver (network, logging, secrets)." type: "object" required: [Name] properties: Name: description: "Name of the driver." type: "string" x-nullable: false example: "some-driver" Options: description: "Key/value map of driver-specific options." type: "object" x-nullable: false additionalProperties: type: "string" example: OptionA: "value for driver-specific option A" OptionB: "value for driver-specific option B" SecretSpec: type: "object" properties: Name: description: "User-defined name of the secret." 
type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" example: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Data: description: | Base64-url-safe-encoded ([RFC 4648](https://tools.ietf.org/html/rfc4648#section-5)) data to store as secret. This field is only used to _create_ a secret, and is not returned by other endpoints. type: "string" example: "" Driver: description: | Name of the secrets driver used to fetch the secret's value from an external secret store. $ref: "#/definitions/Driver" Templating: description: | Templating driver, if applicable Templating controls whether and how to evaluate the config payload as a template. If no driver is set, no templating is used. $ref: "#/definitions/Driver" Secret: type: "object" properties: ID: type: "string" example: "blt1owaxmitz71s9v5zh81zun" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" example: "2017-07-20T13:55:28.678958722Z" UpdatedAt: type: "string" format: "dateTime" example: "2017-07-20T13:55:28.678958722Z" Spec: $ref: "#/definitions/SecretSpec" ConfigSpec: type: "object" properties: Name: description: "User-defined name of the config." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" Data: description: | Base64-url-safe-encoded ([RFC 4648](https://tools.ietf.org/html/rfc4648#section-5)) config data. type: "string" Templating: description: | Templating driver, if applicable Templating controls whether and how to evaluate the config payload as a template. If no driver is set, no templating is used. $ref: "#/definitions/Driver" Config: type: "object" properties: ID: type: "string" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" UpdatedAt: type: "string" format: "dateTime" Spec: $ref: "#/definitions/ConfigSpec" ContainerState: description: | ContainerState stores container's running state. It's part of ContainerJSONBase and will be returned by the "inspect" command. type: "object" properties: Status: description: | String representation of the container state. Can be one of "created", "running", "paused", "restarting", "removing", "exited", or "dead". type: "string" enum: ["created", "running", "paused", "restarting", "removing", "exited", "dead"] example: "running" Running: description: | Whether this container is running. Note that a running container can be _paused_. The `Running` and `Paused` booleans are not mutually exclusive: When pausing a container (on Linux), the freezer cgroup is used to suspend all processes in the container. Freezing the process requires the process to be running. As a result, paused containers are both `Running` _and_ `Paused`. Use the `Status` field instead to determine if a container's state is "running". type: "boolean" example: true Paused: description: "Whether this container is paused." type: "boolean" example: false Restarting: description: "Whether this container is restarting." type: "boolean" example: false OOMKilled: description: | Whether this container has been killed because it ran out of memory. type: "boolean" example: false Dead: type: "boolean" example: false Pid: description: "The process ID of this container" type: "integer" example: 1234 ExitCode: description: "The last exit code of this container" type: "integer" example: 0 Error: type: "string" StartedAt: description: "The time when this container was last started." 
type: "string" example: "2020-01-06T09:06:59.461876391Z" FinishedAt: description: "The time when this container last exited." type: "string" example: "2020-01-06T09:07:59.461876391Z" Health: x-nullable: true $ref: "#/definitions/Health" SystemVersion: type: "object" description: | Response of Engine API: GET "/version" properties: Platform: type: "object" required: [Name] properties: Name: type: "string" Components: type: "array" description: | Information about system components items: type: "object" x-go-name: ComponentVersion required: [Name, Version] properties: Name: description: | Name of the component type: "string" example: "Engine" Version: description: | Version of the component type: "string" x-nullable: false example: "19.03.12" Details: description: | Key/value pairs of strings with additional information about the component. These values are intended for informational purposes only, and their content is not defined, and not part of the API specification. These messages can be printed by the client as information to the user. type: "object" x-nullable: true Version: description: "The version of the daemon" type: "string" example: "19.03.12" ApiVersion: description: | The default (and highest) API version that is supported by the daemon type: "string" example: "1.40" MinAPIVersion: description: | The minimum API version that is supported by the daemon type: "string" example: "1.12" GitCommit: description: | The Git commit of the source code that was used to build the daemon type: "string" example: "48a66213fe" GoVersion: description: | The version Go used to compile the daemon, and the version of the Go runtime in use. type: "string" example: "go1.13.14" Os: description: | The operating system that the daemon is running on ("linux" or "windows") type: "string" example: "linux" Arch: description: | The architecture that the daemon is running on type: "string" example: "amd64" KernelVersion: description: | The kernel version (`uname -r`) that the daemon is running on. This field is omitted when empty. type: "string" example: "4.19.76-linuxkit" Experimental: description: | Indicates if the daemon is started with experimental features enabled. This field is omitted when empty / false. type: "boolean" example: true BuildTime: description: | The date and time that the daemon was compiled. type: "string" example: "2020-06-22T15:49:27.000000000+00:00" SystemInfo: type: "object" properties: ID: description: | Unique identifier of the daemon. <p><br /></p> > **Note**: The format of the ID itself is not part of the API, and > should not be considered stable. type: "string" example: "7TRN:IPZB:QYBB:VPBQ:UMPP:KARE:6ZNR:XE6T:7EWV:PKF4:ZOJD:TPYS" Containers: description: "Total number of containers on the host." type: "integer" example: 14 ContainersRunning: description: | Number of containers with status `"running"`. type: "integer" example: 3 ContainersPaused: description: | Number of containers with status `"paused"`. type: "integer" example: 1 ContainersStopped: description: | Number of containers with status `"stopped"`. type: "integer" example: 10 Images: description: | Total number of images on the host. Both _tagged_ and _untagged_ (dangling) images are counted. type: "integer" example: 508 Driver: description: "Name of the storage driver in use." type: "string" example: "overlay2" DriverStatus: description: | Information specific to the storage driver, provided as "label" / "value" pairs. 
This information is provided by the storage driver, and formatted in a way consistent with the output of `docker info` on the command line. <p><br /></p> > **Note**: The information returned in this field, including the > formatting of values and labels, should not be considered stable, > and may change without notice. type: "array" items: type: "array" items: type: "string" example: - ["Backing Filesystem", "extfs"] - ["Supports d_type", "true"] - ["Native Overlay Diff", "true"] DockerRootDir: description: | Root directory of persistent Docker state. Defaults to `/var/lib/docker` on Linux, and `C:\ProgramData\docker` on Windows. type: "string" example: "/var/lib/docker" Plugins: $ref: "#/definitions/PluginsInfo" MemoryLimit: description: "Indicates if the host has memory limit support enabled." type: "boolean" example: true SwapLimit: description: "Indicates if the host has memory swap limit support enabled." type: "boolean" example: true KernelMemory: description: | Indicates if the host has kernel memory limit support enabled. <p><br /></p> > **Deprecated**: This field is deprecated as the kernel 5.4 deprecated > `kmem.limit_in_bytes`. type: "boolean" example: true CpuCfsPeriod: description: | Indicates if CPU CFS(Completely Fair Scheduler) period is supported by the host. type: "boolean" example: true CpuCfsQuota: description: | Indicates if CPU CFS(Completely Fair Scheduler) quota is supported by the host. type: "boolean" example: true CPUShares: description: | Indicates if CPU Shares limiting is supported by the host. type: "boolean" example: true CPUSet: description: | Indicates if CPUsets (cpuset.cpus, cpuset.mems) are supported by the host. See [cpuset(7)](https://www.kernel.org/doc/Documentation/cgroup-v1/cpusets.txt) type: "boolean" example: true PidsLimit: description: "Indicates if the host kernel has PID limit support enabled." type: "boolean" example: true OomKillDisable: description: "Indicates if OOM killer disable is supported on the host." type: "boolean" IPv4Forwarding: description: "Indicates IPv4 forwarding is enabled." type: "boolean" example: true BridgeNfIptables: description: "Indicates if `bridge-nf-call-iptables` is available on the host." type: "boolean" example: true BridgeNfIp6tables: description: "Indicates if `bridge-nf-call-ip6tables` is available on the host." type: "boolean" example: true Debug: description: | Indicates if the daemon is running in debug-mode / with debug-level logging enabled. type: "boolean" example: true NFd: description: | The total number of file Descriptors in use by the daemon process. This information is only returned if debug-mode is enabled. type: "integer" example: 64 NGoroutines: description: | The number of goroutines that currently exist. This information is only returned if debug-mode is enabled. type: "integer" example: 174 SystemTime: description: | Current system-time in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" example: "2017-08-08T20:28:29.06202363Z" LoggingDriver: description: | The logging driver to use as a default for new containers. type: "string" CgroupDriver: description: | The driver to use for managing cgroups. type: "string" enum: ["cgroupfs", "systemd", "none"] default: "cgroupfs" example: "cgroupfs" CgroupVersion: description: | The version of the cgroup. type: "string" enum: ["1", "2"] default: "1" example: "1" NEventsListener: description: "Number of event listeners subscribed." 
type: "integer" example: 30 KernelVersion: description: | Kernel version of the host. On Linux, this information obtained from `uname`. On Windows this information is queried from the <kbd>HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\</kbd> registry value, for example _"10.0 14393 (14393.1198.amd64fre.rs1_release_sec.170427-1353)"_. type: "string" example: "4.9.38-moby" OperatingSystem: description: | Name of the host's operating system, for example: "Ubuntu 16.04.2 LTS" or "Windows Server 2016 Datacenter" type: "string" example: "Alpine Linux v3.5" OSVersion: description: | Version of the host's operating system <p><br /></p> > **Note**: The information returned in this field, including its > very existence, and the formatting of values, should not be considered > stable, and may change without notice. type: "string" example: "16.04" OSType: description: | Generic type of the operating system of the host, as returned by the Go runtime (`GOOS`). Currently returned values are "linux" and "windows". A full list of possible values can be found in the [Go documentation](https://golang.org/doc/install/source#environment). type: "string" example: "linux" Architecture: description: | Hardware architecture of the host, as returned by the Go runtime (`GOARCH`). A full list of possible values can be found in the [Go documentation](https://golang.org/doc/install/source#environment). type: "string" example: "x86_64" NCPU: description: | The number of logical CPUs usable by the daemon. The number of available CPUs is checked by querying the operating system when the daemon starts. Changes to operating system CPU allocation after the daemon is started are not reflected. type: "integer" example: 4 MemTotal: description: | Total amount of physical memory available on the host, in bytes. type: "integer" format: "int64" example: 2095882240 IndexServerAddress: description: | Address / URL of the index server that is used for image search, and as a default for user authentication for Docker Hub and Docker Cloud. default: "https://index.docker.io/v1/" type: "string" example: "https://index.docker.io/v1/" RegistryConfig: $ref: "#/definitions/RegistryServiceConfig" GenericResources: $ref: "#/definitions/GenericResources" HttpProxy: description: | HTTP-proxy configured for the daemon. This value is obtained from the [`HTTP_PROXY`](https://www.gnu.org/software/wget/manual/html_node/Proxies.html) environment variable. Credentials ([user info component](https://tools.ietf.org/html/rfc3986#section-3.2.1)) in the proxy URL are masked in the API response. Containers do not automatically inherit this configuration. type: "string" example: "http://xxxxx:[email protected]:8080" HttpsProxy: description: | HTTPS-proxy configured for the daemon. This value is obtained from the [`HTTPS_PROXY`](https://www.gnu.org/software/wget/manual/html_node/Proxies.html) environment variable. Credentials ([user info component](https://tools.ietf.org/html/rfc3986#section-3.2.1)) in the proxy URL are masked in the API response. Containers do not automatically inherit this configuration. type: "string" example: "https://xxxxx:[email protected]:4443" NoProxy: description: | Comma-separated list of domain extensions for which no proxy should be used. This value is obtained from the [`NO_PROXY`](https://www.gnu.org/software/wget/manual/html_node/Proxies.html) environment variable. Containers do not automatically inherit this configuration. 
type: "string" example: "*.local, 169.254/16" Name: description: "Hostname of the host." type: "string" example: "node5.corp.example.com" Labels: description: | User-defined labels (key/value metadata) as set on the daemon. <p><br /></p> > **Note**: When part of a Swarm, nodes can both have _daemon_ labels, > set through the daemon configuration, and _node_ labels, set from a > manager node in the Swarm. Node labels are not included in this > field. Node labels can be retrieved using the `/nodes/(id)` endpoint > on a manager node in the Swarm. type: "array" items: type: "string" example: ["storage=ssd", "production"] ExperimentalBuild: description: | Indicates if experimental features are enabled on the daemon. type: "boolean" example: true ServerVersion: description: | Version string of the daemon. > **Note**: the [standalone Swarm API](https://docs.docker.com/swarm/swarm-api/) > returns the Swarm version instead of the daemon version, for example > `swarm/1.2.8`. type: "string" example: "17.06.0-ce" ClusterStore: description: | URL of the distributed storage backend. The storage backend is used for multihost networking (to store network and endpoint information) and by the node discovery mechanism. <p><br /></p> > **Deprecated**: This field is only propagated when using standalone Swarm > mode, and overlay networking using an external k/v store. Overlay > networks with Swarm mode enabled use the built-in raft store, and > this field will be empty. type: "string" example: "consul://consul.corp.example.com:8600/some/path" ClusterAdvertise: description: | The network endpoint that the Engine advertises for the purpose of node discovery. ClusterAdvertise is a `host:port` combination on which the daemon is reachable by other hosts. <p><br /></p> > **Deprecated**: This field is only propagated when using standalone Swarm > mode, and overlay networking using an external k/v store. Overlay > networks with Swarm mode enabled use the built-in raft store, and > this field will be empty. type: "string" example: "node5.corp.example.com:8000" Runtimes: description: | List of [OCI compliant](https://github.com/opencontainers/runtime-spec) runtimes configured on the daemon. Keys hold the "name" used to reference the runtime. The Docker daemon relies on an OCI compliant runtime (invoked via the `containerd` daemon) as its interface to the Linux kernel namespaces, cgroups, and SELinux. The default runtime is `runc`, and automatically configured. Additional runtimes can be configured by the user and will be listed here. type: "object" additionalProperties: $ref: "#/definitions/Runtime" default: runc: path: "runc" example: runc: path: "runc" runc-master: path: "/go/bin/runc" custom: path: "/usr/local/bin/my-oci-runtime" runtimeArgs: ["--debug", "--systemd-cgroup=false"] DefaultRuntime: description: | Name of the default OCI runtime that is used when starting containers. The default can be overridden per-container at create time. type: "string" default: "runc" example: "runc" Swarm: $ref: "#/definitions/SwarmInfo" LiveRestoreEnabled: description: | Indicates if live restore is enabled. If enabled, containers are kept running when the daemon is shutdown or upon daemon start if running containers are detected. type: "boolean" default: false example: false Isolation: description: | Represents the isolation technology to use as a default for containers. The supported values are platform-specific. 
          If no isolation value is specified on daemon start, on Windows client,
          the default is `hyperv`, and on Windows server, the default is
          `process`.

          This option is currently not used on other platforms.
        default: "default"
        type: "string"
        enum:
          - "default"
          - "hyperv"
          - "process"
      InitBinary:
        description: |
          Name and, optionally, path of the `docker-init` binary.

          If the path is omitted, the daemon searches the host's `$PATH` for
          the binary and uses the first result.
        type: "string"
        example: "docker-init"
      ContainerdCommit:
        $ref: "#/definitions/Commit"
      RuncCommit:
        $ref: "#/definitions/Commit"
      InitCommit:
        $ref: "#/definitions/Commit"
      SecurityOptions:
        description: |
          List of security features that are enabled on the daemon, such as
          apparmor, seccomp, SELinux, user-namespaces (userns), and rootless.

          Additional configuration options for each security feature may
          be present, and are included as a comma-separated list of key/value
          pairs.
        type: "array"
        items:
          type: "string"
        example:
          - "name=apparmor"
          - "name=seccomp,profile=default"
          - "name=selinux"
          - "name=userns"
          - "name=rootless"
      ProductLicense:
        description: |
          Reports a summary of the product license on the daemon.

          If a commercial license has been applied to the daemon, information
          such as number of nodes, and expiration are included.
        type: "string"
        example: "Community Engine"
      DefaultAddressPools:
        description: |
          List of custom default address pools for local networks, which can be
          specified in the daemon.json file or dockerd option.

          Example: a Base "10.10.0.0/16" with Size 24 will define the set of 256
          10.10.[0-255].0/24 address pools.
        type: "array"
        items:
          type: "object"
          properties:
            Base:
              description: "The network address in CIDR format"
              type: "string"
              example: "10.10.0.0/16"
            Size:
              description: "The network pool size"
              type: "integer"
              example: "24"
      Warnings:
        description: |
          List of warnings / informational messages about missing features, or
          issues related to the daemon configuration.

          These messages can be printed by the client as information to the user.
        type: "array"
        items:
          type: "string"
        example:
          - "WARNING: No memory limit support"
          - "WARNING: bridge-nf-call-iptables is disabled"
          - "WARNING: bridge-nf-call-ip6tables is disabled"


  # PluginsInfo is a temp struct holding Plugins name
  # registered with docker daemon. It is used by Info struct
  PluginsInfo:
    description: |
      Available plugins per type.

      <p><br /></p>

      > **Note**: Only unmanaged (V1) plugins are included in this list.
      > V1 plugins are "lazily" loaded, and are not returned in this list
      > if there is no resource using the plugin.
    type: "object"
    properties:
      Volume:
        description: "Names of available volume-drivers, and volume-driver plugins."
        type: "array"
        items:
          type: "string"
        example: ["local"]
      Network:
        description: "Names of available network-drivers, and network-driver plugins."
        type: "array"
        items:
          type: "string"
        example: ["bridge", "host", "ipvlan", "macvlan", "null", "overlay"]
      Authorization:
        description: "Names of available authorization plugins."
        type: "array"
        items:
          type: "string"
        example: ["img-authz-plugin", "hbm"]
      Log:
        description: "Names of available logging-drivers, and logging-driver plugins."
        type: "array"
        items:
          type: "string"
        example: ["awslogs", "fluentd", "gcplogs", "gelf", "journald", "json-file", "logentries", "splunk", "syslog"]


  RegistryServiceConfig:
    description: |
      RegistryServiceConfig stores daemon registry services configuration.
    type: "object"
    x-nullable: true
    properties:
      AllowNondistributableArtifactsCIDRs:
        description: |
          List of IP ranges to which nondistributable artifacts can be pushed,
          using the CIDR syntax [RFC 4632](https://tools.ietf.org/html/rfc4632).

          Some images (for example, Windows base images) contain artifacts
          whose distribution is restricted by license. When these images are
          pushed to a registry, restricted artifacts are not included.

          This configuration overrides this behavior, and enables the daemon to
          push nondistributable artifacts to all registries whose resolved IP
          address is within the subnet described by the CIDR syntax.

          This option is useful when pushing images containing
          nondistributable artifacts to a registry on an air-gapped network so
          hosts on that network can pull the images without connecting to
          another server.

          > **Warning**: Nondistributable artifacts typically have restrictions
          > on how and where they can be distributed and shared. Only use this
          > feature to push artifacts to private registries and ensure that you
          > are in compliance with any terms that cover redistributing
          > nondistributable artifacts.
        type: "array"
        items:
          type: "string"
        example: ["::1/128", "127.0.0.0/8"]
      AllowNondistributableArtifactsHostnames:
        description: |
          List of registry hostnames to which nondistributable artifacts can be
          pushed, using the format `<hostname>[:<port>]` or `<IP address>[:<port>]`.

          Some images (for example, Windows base images) contain artifacts
          whose distribution is restricted by license. When these images are
          pushed to a registry, restricted artifacts are not included.

          This configuration overrides this behavior for the specified
          registries.

          This option is useful when pushing images containing
          nondistributable artifacts to a registry on an air-gapped network so
          hosts on that network can pull the images without connecting to
          another server.

          > **Warning**: Nondistributable artifacts typically have restrictions
          > on how and where they can be distributed and shared. Only use this
          > feature to push artifacts to private registries and ensure that you
          > are in compliance with any terms that cover redistributing
          > nondistributable artifacts.
        type: "array"
        items:
          type: "string"
        example: ["registry.internal.corp.example.com:3000", "[2001:db8:a0b:12f0::1]:443"]
      InsecureRegistryCIDRs:
        description: |
          List of IP ranges of insecure registries, using the CIDR syntax
          ([RFC 4632](https://tools.ietf.org/html/rfc4632)). Insecure registries
          accept un-encrypted (HTTP) and/or untrusted (HTTPS with certificates
          from unknown CAs) communication.

          By default, local registries (`127.0.0.0/8`) are configured as
          insecure. All other registries are secure. Communicating with an
          insecure registry is not possible if the daemon assumes that registry
          is secure.

          This configuration overrides this behavior, and enables insecure
          communication with registries whose resolved IP address is within the
          subnet described by the CIDR syntax.

          Registries can also be marked insecure by hostname. Those registries
          are listed under `IndexConfigs` and have their `Secure` field set to
          `false`.

          > **Warning**: Using this option can be useful when running a local
          > registry, but introduces security vulnerabilities. This option
          > should therefore ONLY be used for testing purposes. For increased
          > security, users should add their CA to their system's list of trusted
          > CAs instead of enabling this option.
type: "array" items: type: "string" example: ["::1/128", "127.0.0.0/8"] IndexConfigs: type: "object" additionalProperties: $ref: "#/definitions/IndexInfo" example: "127.0.0.1:5000": "Name": "127.0.0.1:5000" "Mirrors": [] "Secure": false "Official": false "[2001:db8:a0b:12f0::1]:80": "Name": "[2001:db8:a0b:12f0::1]:80" "Mirrors": [] "Secure": false "Official": false "docker.io": Name: "docker.io" Mirrors: ["https://hub-mirror.corp.example.com:5000/"] Secure: true Official: true "registry.internal.corp.example.com:3000": Name: "registry.internal.corp.example.com:3000" Mirrors: [] Secure: false Official: false Mirrors: description: | List of registry URLs that act as a mirror for the official (`docker.io`) registry. type: "array" items: type: "string" example: - "https://hub-mirror.corp.example.com:5000/" - "https://[2001:db8:a0b:12f0::1]/" IndexInfo: description: IndexInfo contains information about a registry. type: "object" x-nullable: true properties: Name: description: | Name of the registry, such as "docker.io". type: "string" example: "docker.io" Mirrors: description: | List of mirrors, expressed as URIs. type: "array" items: type: "string" example: - "https://hub-mirror.corp.example.com:5000/" - "https://registry-2.docker.io/" - "https://registry-3.docker.io/" Secure: description: | Indicates if the registry is part of the list of insecure registries. If `false`, the registry is insecure. Insecure registries accept un-encrypted (HTTP) and/or untrusted (HTTPS with certificates from unknown CAs) communication. > **Warning**: Insecure registries can be useful when running a local > registry. However, because its use creates security vulnerabilities > it should ONLY be enabled for testing purposes. For increased > security, users should add their CA to their system's list of > trusted CAs instead of enabling this option. type: "boolean" example: true Official: description: | Indicates whether this is an official registry (i.e., Docker Hub / docker.io) type: "boolean" example: true Runtime: description: | Runtime describes an [OCI compliant](https://github.com/opencontainers/runtime-spec) runtime. The runtime is invoked by the daemon via the `containerd` daemon. OCI runtimes act as an interface to the Linux kernel namespaces, cgroups, and SELinux. type: "object" properties: path: description: | Name and, optional, path, of the OCI executable binary. If the path is omitted, the daemon searches the host's `$PATH` for the binary and uses the first result. type: "string" example: "/usr/local/bin/my-oci-runtime" runtimeArgs: description: | List of command-line arguments to pass to the runtime when invoked. type: "array" x-nullable: true items: type: "string" example: ["--debug", "--systemd-cgroup=false"] Commit: description: | Commit holds the Git-commit (SHA1) that a binary was built from, as reported in the version-string of external tools, such as `containerd`, or `runC`. type: "object" properties: ID: description: "Actual commit ID of external tool." type: "string" example: "cfb82a876ecc11b5ca0977d1733adbe58599088a" Expected: description: | Commit ID of external tool expected by dockerd as set at build time. type: "string" example: "2d41c047c83e09a6d61d464906feb2a2f3c52aa4" SwarmInfo: description: | Represents generic information about swarm. type: "object" properties: NodeID: description: "Unique identifier of for this node in the swarm." 
type: "string" default: "" example: "k67qz4598weg5unwwffg6z1m1" NodeAddr: description: | IP address at which this node can be reached by other nodes in the swarm. type: "string" default: "" example: "10.0.0.46" LocalNodeState: $ref: "#/definitions/LocalNodeState" ControlAvailable: type: "boolean" default: false example: true Error: type: "string" default: "" RemoteManagers: description: | List of ID's and addresses of other managers in the swarm. type: "array" default: null x-nullable: true items: $ref: "#/definitions/PeerNode" example: - NodeID: "71izy0goik036k48jg985xnds" Addr: "10.0.0.158:2377" - NodeID: "79y6h1o4gv8n120drcprv5nmc" Addr: "10.0.0.159:2377" - NodeID: "k67qz4598weg5unwwffg6z1m1" Addr: "10.0.0.46:2377" Nodes: description: "Total number of nodes in the swarm." type: "integer" x-nullable: true example: 4 Managers: description: "Total number of managers in the swarm." type: "integer" x-nullable: true example: 3 Cluster: $ref: "#/definitions/ClusterInfo" LocalNodeState: description: "Current local status of this node." type: "string" default: "" enum: - "" - "inactive" - "pending" - "active" - "error" - "locked" example: "active" PeerNode: description: "Represents a peer-node in the swarm" properties: NodeID: description: "Unique identifier of for this node in the swarm." type: "string" Addr: description: | IP address and ports at which this node can be reached. type: "string" NetworkAttachmentConfig: description: | Specifies how a service should be attached to a particular network. type: "object" properties: Target: description: | The target network for attachment. Must be a network name or ID. type: "string" Aliases: description: | Discoverable alternate names for the service on this network. type: "array" items: type: "string" DriverOpts: description: | Driver attachment options for the network target. type: "object" additionalProperties: type: "string" paths: /containers/json: get: summary: "List containers" description: | Returns a list of containers. For details on the format, see the [inspect endpoint](#operation/ContainerInspect). Note that it uses a different, smaller representation of a container than inspecting a single container. For example, the list of linked containers is not propagated . operationId: "ContainerList" produces: - "application/json" parameters: - name: "all" in: "query" description: | Return all containers. By default, only running containers are shown. type: "boolean" default: false - name: "limit" in: "query" description: | Return this number of most recently created containers, including non-running ones. type: "integer" - name: "size" in: "query" description: | Return the size of container as fields `SizeRw` and `SizeRootFs`. type: "boolean" default: false - name: "filters" in: "query" description: | Filters to process on the container list, encoded as JSON (a `map[string][]string`). For example, `{"status": ["paused"]}` will only return paused containers. 
Available filters: - `ancestor`=(`<image-name>[:<tag>]`, `<image id>`, or `<image@digest>`) - `before`=(`<container id>` or `<container name>`) - `expose`=(`<port>[/<proto>]`|`<startport-endport>/[<proto>]`) - `exited=<int>` containers with exit code of `<int>` - `health`=(`starting`|`healthy`|`unhealthy`|`none`) - `id=<ID>` a container's ID - `isolation=`(`default`|`process`|`hyperv`) (Windows daemon only) - `is-task=`(`true`|`false`) - `label=key` or `label="key=value"` of a container label - `name=<name>` a container's name - `network`=(`<network id>` or `<network name>`) - `publish`=(`<port>[/<proto>]`|`<startport-endport>/[<proto>]`) - `since`=(`<container id>` or `<container name>`) - `status=`(`created`|`restarting`|`running`|`removing`|`paused`|`exited`|`dead`) - `volume`=(`<volume name>` or `<mount point destination>`) type: "string" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/ContainerSummary" examples: application/json: - Id: "8dfafdbc3a40" Names: - "/boring_feynman" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 1" Created: 1367854155 State: "Exited" Status: "Exit 0" Ports: - PrivatePort: 2222 PublicPort: 3333 Type: "tcp" Labels: com.example.vendor: "Acme" com.example.license: "GPL" com.example.version: "1.0" SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "2cdc4edb1ded3631c81f57966563e5c8525b81121bb3706a9a9a3ae102711f3f" Gateway: "172.17.0.1" IPAddress: "172.17.0.2" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:02" Mounts: - Name: "fac362...80535" Source: "/data" Destination: "/data" Driver: "local" Mode: "ro,Z" RW: false Propagation: "" - Id: "9cd87474be90" Names: - "/coolName" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 222222" Created: 1367854155 State: "Exited" Status: "Exit 0" Ports: [] Labels: {} SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "88eaed7b37b38c2a3f0c4bc796494fdf51b270c2d22656412a2ca5d559a64d7a" Gateway: "172.17.0.1" IPAddress: "172.17.0.8" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:08" Mounts: [] - Id: "3176a2479c92" Names: - "/sleepy_dog" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 3333333333333333" Created: 1367854154 State: "Exited" Status: "Exit 0" Ports: [] Labels: {} SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "8b27c041c30326d59cd6e6f510d4f8d1d570a228466f956edf7815508f78e30d" Gateway: "172.17.0.1" IPAddress: "172.17.0.6" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:06" Mounts: [] - Id: "4cb07b47f9fb" Names: - "/running_cat" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 444444444444444444444444444444444" Created: 1367854152 State: "Exited" Status: "Exit 0" Ports: [] Labels: {} SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" 
NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "d91c7b2f0644403d7ef3095985ea0e2370325cd2332ff3a3225c4247328e66e9" Gateway: "172.17.0.1" IPAddress: "172.17.0.5" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:05" Mounts: [] 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Container"] /containers/create: post: summary: "Create a container" operationId: "ContainerCreate" consumes: - "application/json" - "application/octet-stream" produces: - "application/json" parameters: - name: "name" in: "query" description: | Assign the specified name to the container. Must match `/?[a-zA-Z0-9][a-zA-Z0-9_.-]+`. type: "string" pattern: "^/?[a-zA-Z0-9][a-zA-Z0-9_.-]+$" - name: "body" in: "body" description: "Container to create" schema: allOf: - $ref: "#/definitions/ContainerConfig" - type: "object" properties: HostConfig: $ref: "#/definitions/HostConfig" NetworkingConfig: $ref: "#/definitions/NetworkingConfig" example: Hostname: "" Domainname: "" User: "" AttachStdin: false AttachStdout: true AttachStderr: true Tty: false OpenStdin: false StdinOnce: false Env: - "FOO=bar" - "BAZ=quux" Cmd: - "date" Entrypoint: "" Image: "ubuntu" Labels: com.example.vendor: "Acme" com.example.license: "GPL" com.example.version: "1.0" Volumes: /volumes/data: {} WorkingDir: "" NetworkDisabled: false MacAddress: "12:34:56:78:9a:bc" ExposedPorts: 22/tcp: {} StopSignal: "SIGTERM" StopTimeout: 10 HostConfig: Binds: - "/tmp:/tmp" Links: - "redis3:redis" Memory: 0 MemorySwap: 0 MemoryReservation: 0 KernelMemory: 0 NanoCpus: 500000 CpuPercent: 80 CpuShares: 512 CpuPeriod: 100000 CpuRealtimePeriod: 1000000 CpuRealtimeRuntime: 10000 CpuQuota: 50000 CpusetCpus: "0,1" CpusetMems: "0,1" MaximumIOps: 0 MaximumIOBps: 0 BlkioWeight: 300 BlkioWeightDevice: - {} BlkioDeviceReadBps: - {} BlkioDeviceReadIOps: - {} BlkioDeviceWriteBps: - {} BlkioDeviceWriteIOps: - {} DeviceRequests: - Driver: "nvidia" Count: -1 DeviceIDs": ["0", "1", "GPU-fef8089b-4820-abfc-e83e-94318197576e"] Capabilities: [["gpu", "nvidia", "compute"]] Options: property1: "string" property2: "string" MemorySwappiness: 60 OomKillDisable: false OomScoreAdj: 500 PidMode: "" PidsLimit: 0 PortBindings: 22/tcp: - HostPort: "11022" PublishAllPorts: false Privileged: false ReadonlyRootfs: false Dns: - "8.8.8.8" DnsOptions: - "" DnsSearch: - "" VolumesFrom: - "parent" - "other:ro" CapAdd: - "NET_ADMIN" CapDrop: - "MKNOD" GroupAdd: - "newgroup" RestartPolicy: Name: "" MaximumRetryCount: 0 AutoRemove: true NetworkMode: "bridge" Devices: [] Ulimits: - {} LogConfig: Type: "json-file" Config: {} SecurityOpt: [] StorageOpt: {} CgroupParent: "" VolumeDriver: "" ShmSize: 67108864 NetworkingConfig: EndpointsConfig: isolated_nw: IPAMConfig: IPv4Address: "172.20.30.33" IPv6Address: "2001:db8:abcd::3033" LinkLocalIPs: - "169.254.34.68" - "fe80::3468" Links: - "container_1" - "container_2" Aliases: - "server_x" - "server_y" required: true responses: 201: description: "Container created successfully" schema: type: "object" title: "ContainerCreateResponse" description: "OK response to ContainerCreate operation" required: [Id, Warnings] properties: Id: description: "The ID of the created container" type: "string" x-nullable: false Warnings: description: "Warnings encountered when creating the container" type: "array" x-nullable: false items: type: "string" 
examples: application/json: Id: "e90e34656806" Warnings: [] 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such image" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such image: c2ada9df5af8" 409: description: "conflict" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Container"] /containers/{id}/json: get: summary: "Inspect a container" description: "Return low-level information about a container." operationId: "ContainerInspect" produces: - "application/json" responses: 200: description: "no error" schema: type: "object" title: "ContainerInspectResponse" properties: Id: description: "The ID of the container" type: "string" Created: description: "The time the container was created" type: "string" Path: description: "The path to the command being run" type: "string" Args: description: "The arguments to the command being run" type: "array" items: type: "string" State: x-nullable: true $ref: "#/definitions/ContainerState" Image: description: "The container's image ID" type: "string" ResolvConfPath: type: "string" HostnamePath: type: "string" HostsPath: type: "string" LogPath: type: "string" Name: type: "string" RestartCount: type: "integer" Driver: type: "string" Platform: type: "string" MountLabel: type: "string" ProcessLabel: type: "string" AppArmorProfile: type: "string" ExecIDs: description: "IDs of exec instances that are running in the container." type: "array" items: type: "string" x-nullable: true HostConfig: $ref: "#/definitions/HostConfig" GraphDriver: $ref: "#/definitions/GraphDriverData" SizeRw: description: | The size of files that have been created or changed by this container. type: "integer" format: "int64" SizeRootFs: description: "The total size of all the files in this container." 
type: "integer" format: "int64" Mounts: type: "array" items: $ref: "#/definitions/MountPoint" Config: $ref: "#/definitions/ContainerConfig" NetworkSettings: $ref: "#/definitions/NetworkSettings" examples: application/json: AppArmorProfile: "" Args: - "-c" - "exit 9" Config: AttachStderr: true AttachStdin: false AttachStdout: true Cmd: - "/bin/sh" - "-c" - "exit 9" Domainname: "" Env: - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" Healthcheck: Test: ["CMD-SHELL", "exit 0"] Hostname: "ba033ac44011" Image: "ubuntu" Labels: com.example.vendor: "Acme" com.example.license: "GPL" com.example.version: "1.0" MacAddress: "" NetworkDisabled: false OpenStdin: false StdinOnce: false Tty: false User: "" Volumes: /volumes/data: {} WorkingDir: "" StopSignal: "SIGTERM" StopTimeout: 10 Created: "2015-01-06T15:47:31.485331387Z" Driver: "devicemapper" ExecIDs: - "b35395de42bc8abd327f9dd65d913b9ba28c74d2f0734eeeae84fa1c616a0fca" - "3fc1232e5cd20c8de182ed81178503dc6437f4e7ef12b52cc5e8de020652f1c4" HostConfig: MaximumIOps: 0 MaximumIOBps: 0 BlkioWeight: 0 BlkioWeightDevice: - {} BlkioDeviceReadBps: - {} BlkioDeviceWriteBps: - {} BlkioDeviceReadIOps: - {} BlkioDeviceWriteIOps: - {} ContainerIDFile: "" CpusetCpus: "" CpusetMems: "" CpuPercent: 80 CpuShares: 0 CpuPeriod: 100000 CpuRealtimePeriod: 1000000 CpuRealtimeRuntime: 10000 Devices: [] DeviceRequests: - Driver: "nvidia" Count: -1 DeviceIDs": ["0", "1", "GPU-fef8089b-4820-abfc-e83e-94318197576e"] Capabilities: [["gpu", "nvidia", "compute"]] Options: property1: "string" property2: "string" IpcMode: "" LxcConf: [] Memory: 0 MemorySwap: 0 MemoryReservation: 0 KernelMemory: 0 OomKillDisable: false OomScoreAdj: 500 NetworkMode: "bridge" PidMode: "" PortBindings: {} Privileged: false ReadonlyRootfs: false PublishAllPorts: false RestartPolicy: MaximumRetryCount: 2 Name: "on-failure" LogConfig: Type: "json-file" Sysctls: net.ipv4.ip_forward: "1" Ulimits: - {} VolumeDriver: "" ShmSize: 67108864 HostnamePath: "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/hostname" HostsPath: "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/hosts" LogPath: "/var/lib/docker/containers/1eb5fabf5a03807136561b3c00adcd2992b535d624d5e18b6cdc6a6844d9767b/1eb5fabf5a03807136561b3c00adcd2992b535d624d5e18b6cdc6a6844d9767b-json.log" Id: "ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39" Image: "04c5d3b7b0656168630d3ba35d8889bd0e9caafcaeb3004d2bfbc47e7c5d35d2" MountLabel: "" Name: "/boring_euclid" NetworkSettings: Bridge: "" SandboxID: "" HairpinMode: false LinkLocalIPv6Address: "" LinkLocalIPv6PrefixLen: 0 SandboxKey: "" EndpointID: "" Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 IPAddress: "" IPPrefixLen: 0 IPv6Gateway: "" MacAddress: "" Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "7587b82f0dada3656fda26588aee72630c6fab1536d36e394b2bfbcf898c971d" Gateway: "172.17.0.1" IPAddress: "172.17.0.2" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:12:00:02" Path: "/bin/sh" ProcessLabel: "" ResolvConfPath: "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/resolv.conf" RestartCount: 1 State: Error: "" ExitCode: 9 FinishedAt: "2015-01-06T15:47:32.080254511Z" Health: Status: "healthy" FailingStreak: 0 Log: - Start: "2019-12-22T10:59:05.6385933Z" End: "2019-12-22T10:59:05.8078452Z" ExitCode: 0 Output: "" OOMKilled: 
false Dead: false Paused: false Pid: 0 Restarting: false Running: true StartedAt: "2015-01-06T15:47:32.072697474Z" Status: "running" Mounts: - Name: "fac362...80535" Source: "/data" Destination: "/data" Driver: "local" Mode: "ro,Z" RW: false Propagation: "" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "size" in: "query" type: "boolean" default: false description: "Return the size of container as fields `SizeRw` and `SizeRootFs`" tags: ["Container"] /containers/{id}/top: get: summary: "List processes running inside a container" description: | On Unix systems, this is done by running the `ps` command. This endpoint is not supported on Windows. operationId: "ContainerTop" responses: 200: description: "no error" schema: type: "object" title: "ContainerTopResponse" description: "OK response to ContainerTop operation" properties: Titles: description: "The ps column titles" type: "array" items: type: "string" Processes: description: | Each process running in the container, where each is process is an array of values corresponding to the titles. type: "array" items: type: "array" items: type: "string" examples: application/json: Titles: - "UID" - "PID" - "PPID" - "C" - "STIME" - "TTY" - "TIME" - "CMD" Processes: - - "root" - "13642" - "882" - "0" - "17:03" - "pts/0" - "00:00:00" - "/bin/bash" - - "root" - "13735" - "13642" - "0" - "17:06" - "pts/0" - "00:00:00" - "sleep 10" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "ps_args" in: "query" description: "The arguments to pass to `ps`. For example, `aux`" type: "string" default: "-ef" tags: ["Container"] /containers/{id}/logs: get: summary: "Get container logs" description: | Get `stdout` and `stderr` logs from a container. Note: This endpoint works only for containers with the `json-file` or `journald` logging driver. operationId: "ContainerLogs" responses: 200: description: | logs returned as a stream in response body. For the stream format, [see the documentation for the attach endpoint](#operation/ContainerAttach). Note that unlike the attach endpoint, the logs endpoint does not upgrade the connection and does not set Content-Type. schema: type: "string" format: "binary" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "follow" in: "query" description: "Keep connection after returning logs." 
type: "boolean" default: false - name: "stdout" in: "query" description: "Return logs from `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Return logs from `stderr`" type: "boolean" default: false - name: "since" in: "query" description: "Only return logs since this time, as a UNIX timestamp" type: "integer" default: 0 - name: "until" in: "query" description: "Only return logs before this time, as a UNIX timestamp" type: "integer" default: 0 - name: "timestamps" in: "query" description: "Add timestamps to every log line" type: "boolean" default: false - name: "tail" in: "query" description: | Only return this number of log lines from the end of the logs. Specify as an integer or `all` to output all log lines. type: "string" default: "all" tags: ["Container"] /containers/{id}/changes: get: summary: "Get changes on a container’s filesystem" description: | Returns which files in a container's filesystem have been added, deleted, or modified. The `Kind` of modification can be one of: - `0`: Modified - `1`: Added - `2`: Deleted operationId: "ContainerChanges" produces: ["application/json"] responses: 200: description: "The list of changes" schema: type: "array" items: type: "object" x-go-name: "ContainerChangeResponseItem" title: "ContainerChangeResponseItem" description: "change item in response to ContainerChanges operation" required: [Path, Kind] properties: Path: description: "Path to file that has changed" type: "string" x-nullable: false Kind: description: "Kind of change" type: "integer" format: "uint8" enum: [0, 1, 2] x-nullable: false examples: application/json: - Path: "/dev" Kind: 0 - Path: "/dev/kmsg" Kind: 1 - Path: "/test" Kind: 1 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/export: get: summary: "Export a container" description: "Export the contents of a container as a tarball." operationId: "ContainerExport" produces: - "application/octet-stream" responses: 200: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/stats: get: summary: "Get container stats based on resource usage" description: | This endpoint returns a live stream of a container’s resource usage statistics. The `precpu_stats` is the CPU statistic of the *previous* read, and is used to calculate the CPU usage percentage. It is not an exact copy of the `cpu_stats` field. If either `precpu_stats.online_cpus` or `cpu_stats.online_cpus` is nil then for compatibility with older daemons the length of the corresponding `cpu_usage.percpu_usage` array should be used. On a cgroup v2 host, the following fields are not set * `blkio_stats`: all fields other than `io_service_bytes_recursive` * `cpu_stats`: `cpu_usage.percpu_usage` * `memory_stats`: `max_usage` and `failcnt` Also, `memory_stats.stats` fields are incompatible with cgroup v1. 
To calculate the values shown by the `stats` command of the docker cli tool the following formulas can be used: * used_memory = `memory_stats.usage - memory_stats.stats.cache` * available_memory = `memory_stats.limit` * Memory usage % = `(used_memory / available_memory) * 100.0` * cpu_delta = `cpu_stats.cpu_usage.total_usage - precpu_stats.cpu_usage.total_usage` * system_cpu_delta = `cpu_stats.system_cpu_usage - precpu_stats.system_cpu_usage` * number_cpus = `length(cpu_stats.cpu_usage.percpu_usage)` or `cpu_stats.online_cpus` * CPU usage % = `(cpu_delta / system_cpu_delta) * number_cpus * 100.0` operationId: "ContainerStats" produces: ["application/json"] responses: 200: description: "no error" schema: type: "object" examples: application/json: read: "2015-01-08T22:57:31.547920715Z" pids_stats: current: 3 networks: eth0: rx_bytes: 5338 rx_dropped: 0 rx_errors: 0 rx_packets: 36 tx_bytes: 648 tx_dropped: 0 tx_errors: 0 tx_packets: 8 eth5: rx_bytes: 4641 rx_dropped: 0 rx_errors: 0 rx_packets: 26 tx_bytes: 690 tx_dropped: 0 tx_errors: 0 tx_packets: 9 memory_stats: stats: total_pgmajfault: 0 cache: 0 mapped_file: 0 total_inactive_file: 0 pgpgout: 414 rss: 6537216 total_mapped_file: 0 writeback: 0 unevictable: 0 pgpgin: 477 total_unevictable: 0 pgmajfault: 0 total_rss: 6537216 total_rss_huge: 6291456 total_writeback: 0 total_inactive_anon: 0 rss_huge: 6291456 hierarchical_memory_limit: 67108864 total_pgfault: 964 total_active_file: 0 active_anon: 6537216 total_active_anon: 6537216 total_pgpgout: 414 total_cache: 0 inactive_anon: 0 active_file: 0 pgfault: 964 inactive_file: 0 total_pgpgin: 477 max_usage: 6651904 usage: 6537216 failcnt: 0 limit: 67108864 blkio_stats: {} cpu_stats: cpu_usage: percpu_usage: - 8646879 - 24472255 - 36438778 - 30657443 usage_in_usermode: 50000000 total_usage: 100215355 usage_in_kernelmode: 30000000 system_cpu_usage: 739306590000000 online_cpus: 4 throttling_data: periods: 0 throttled_periods: 0 throttled_time: 0 precpu_stats: cpu_usage: percpu_usage: - 8646879 - 24350896 - 36438778 - 30657443 usage_in_usermode: 50000000 total_usage: 100093996 usage_in_kernelmode: 30000000 system_cpu_usage: 9492140000000 online_cpus: 4 throttling_data: periods: 0 throttled_periods: 0 throttled_time: 0 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "stream" in: "query" description: | Stream the output. If false, the stats will be output once and then it will disconnect. type: "boolean" default: true - name: "one-shot" in: "query" description: | Only get a single stat instead of waiting for 2 cycles. Must be used with `stream=false`. type: "boolean" default: false tags: ["Container"] /containers/{id}/resize: post: summary: "Resize a container TTY" description: "Resize the TTY for a container."
operationId: "ContainerResize" consumes: - "application/octet-stream" produces: - "text/plain" responses: 200: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "cannot resize container" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "h" in: "query" description: "Height of the TTY session in characters" type: "integer" - name: "w" in: "query" description: "Width of the TTY session in characters" type: "integer" tags: ["Container"] /containers/{id}/start: post: summary: "Start a container" operationId: "ContainerStart" responses: 204: description: "no error" 304: description: "container already started" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "detachKeys" in: "query" description: | Override the key sequence for detaching a container. Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,` or `_`. type: "string" tags: ["Container"] /containers/{id}/stop: post: summary: "Stop a container" operationId: "ContainerStop" responses: 204: description: "no error" 304: description: "container already stopped" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "t" in: "query" description: "Number of seconds to wait before killing the container" type: "integer" tags: ["Container"] /containers/{id}/restart: post: summary: "Restart a container" operationId: "ContainerRestart" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "t" in: "query" description: "Number of seconds to wait before killing the container" type: "integer" tags: ["Container"] /containers/{id}/kill: post: summary: "Kill a container" description: | Send a POSIX signal to a container, defaulting to killing the container.
operationId: "ContainerKill" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "container is not running" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "Container d37cde0fe4ad63c3a7252023b2f9800282894247d145cb5933ddf6e52cc03a28 is not running" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "signal" in: "query" description: "Signal to send to the container as an integer or string (e.g. `SIGINT`)" type: "string" default: "SIGKILL" tags: ["Container"] /containers/{id}/update: post: summary: "Update a container" description: | Change various configuration options of a container without having to recreate it. operationId: "ContainerUpdate" consumes: ["application/json"] produces: ["application/json"] responses: 200: description: "The container has been updated." schema: type: "object" title: "ContainerUpdateResponse" description: "OK response to ContainerUpdate operation" properties: Warnings: type: "array" items: type: "string" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "update" in: "body" required: true schema: allOf: - $ref: "#/definitions/Resources" - type: "object" properties: RestartPolicy: $ref: "#/definitions/RestartPolicy" example: BlkioWeight: 300 CpuShares: 512 CpuPeriod: 100000 CpuQuota: 50000 CpuRealtimePeriod: 1000000 CpuRealtimeRuntime: 10000 CpusetCpus: "0,1" CpusetMems: "0" Memory: 314572800 MemorySwap: 514288000 MemoryReservation: 209715200 KernelMemory: 52428800 RestartPolicy: MaximumRetryCount: 4 Name: "on-failure" tags: ["Container"] /containers/{id}/rename: post: summary: "Rename a container" operationId: "ContainerRename" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "name already in use" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "name" in: "query" required: true description: "New name for the container" type: "string" tags: ["Container"] /containers/{id}/pause: post: summary: "Pause a container" description: | Use the freezer cgroup to suspend all processes in a container. Traditionally, when suspending a process the `SIGSTOP` signal is used, which is observable by the process being suspended. With the freezer cgroup the process is unaware, and unable to capture, that it is being suspended, and subsequently resumed. 
operationId: "ContainerPause" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/unpause: post: summary: "Unpause a container" description: "Resume a container which has been paused." operationId: "ContainerUnpause" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/attach: post: summary: "Attach to a container" description: | Attach to a container to read its output or send it input. You can attach to the same container multiple times and you can reattach to containers that have been detached. Either the `stream` or `logs` parameter must be `true` for this endpoint to do anything. See the [documentation for the `docker attach` command](https://docs.docker.com/engine/reference/commandline/attach/) for more details. ### Hijacking This endpoint hijacks the HTTP connection to transport `stdin`, `stdout`, and `stderr` on the same socket. This is the response from the daemon for an attach request: ``` HTTP/1.1 200 OK Content-Type: application/vnd.docker.raw-stream [STREAM] ``` After the headers and two new lines, the TCP connection can now be used for raw, bidirectional communication between the client and server. To hint potential proxies about connection hijacking, the Docker client can also optionally send connection upgrade headers. For example, the client sends this request to upgrade the connection: ``` POST /containers/16253994b7c4/attach?stream=1&stdout=1 HTTP/1.1 Upgrade: tcp Connection: Upgrade ``` The Docker daemon will respond with a `101 UPGRADED` response, and will similarly follow with the raw stream: ``` HTTP/1.1 101 UPGRADED Content-Type: application/vnd.docker.raw-stream Connection: Upgrade Upgrade: tcp [STREAM] ``` ### Stream format When the TTY setting is disabled in [`POST /containers/create`](#operation/ContainerCreate), the stream over the hijacked connection is multiplexed to separate out `stdout` and `stderr`. The stream consists of a series of frames, each containing a header and a payload. The header contains the information which the stream writes (`stdout` or `stderr`). It also contains the size of the associated frame encoded in the last four bytes (`uint32`). It is encoded on the first eight bytes like this: ```go header := [8]byte{STREAM_TYPE, 0, 0, 0, SIZE1, SIZE2, SIZE3, SIZE4} ``` `STREAM_TYPE` can be: - 0: `stdin` (is written on `stdout`) - 1: `stdout` - 2: `stderr` `SIZE1, SIZE2, SIZE3, SIZE4` are the four bytes of the `uint32` size encoded as big endian. Following the header is the payload, which is the specified number of bytes of `STREAM_TYPE`. The simplest way to implement this protocol is the following: 1. Read 8 bytes. 2. Choose `stdout` or `stderr` depending on the first byte. 3. Extract the frame size from the last four bytes. 4. Read the extracted size and output it on the correct output. 5. Goto 1.
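As an illustration only (not part of the API contract), a minimal Go sketch of these five steps could look like the following; it assumes the raw stream is available as an `io.Reader` and simply copies each frame to a local writer:

```go
package main

import (
	"encoding/binary"
	"io"
	"os"
)

// demux reads multiplexed frames from src and copies each payload to the
// writer selected by the frame's STREAM_TYPE byte.
func demux(src io.Reader, stdout, stderr io.Writer) error {
	header := make([]byte, 8)
	for {
		// 1. Read 8 bytes.
		if _, err := io.ReadFull(src, header); err != nil {
			if err == io.EOF {
				return nil // clean end of stream
			}
			return err
		}
		// 2. Choose stdout or stderr depending on the first byte.
		dst := stdout
		if header[0] == 2 {
			dst = stderr
		}
		// 3. Extract the frame size from the last four bytes (big endian).
		size := binary.BigEndian.Uint32(header[4:8])
		// 4. Read the extracted size and output it on the correct output.
		if _, err := io.CopyN(dst, src, int64(size)); err != nil {
			return err
		}
		// 5. Goto 1.
	}
}

func main() {
	// Example: demultiplex a raw stream supplied on stdin.
	_ = demux(os.Stdin, os.Stdout, os.Stderr)
}
```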
### Stream format when using a TTY When the TTY setting is enabled in [`POST /containers/create`](#operation/ContainerCreate), the stream is not multiplexed. The data exchanged over the hijacked connection is simply the raw data from the process PTY and client's `stdin`. operationId: "ContainerAttach" produces: - "application/vnd.docker.raw-stream" responses: 101: description: "no error, hints proxy about hijacking" 200: description: "no error, no upgrade header found" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "detachKeys" in: "query" description: | Override the key sequence for detaching a container. Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,` or `_`. type: "string" - name: "logs" in: "query" description: | Replay previous logs from the container. This is useful for attaching to a container that has started and you want to output everything since the container started. If `stream` is also enabled, once all the previous output has been returned, it will seamlessly transition into streaming current output. type: "boolean" default: false - name: "stream" in: "query" description: | Stream attached streams from the time the request was made onwards. type: "boolean" default: false - name: "stdin" in: "query" description: "Attach to `stdin`" type: "boolean" default: false - name: "stdout" in: "query" description: "Attach to `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Attach to `stderr`" type: "boolean" default: false tags: ["Container"] /containers/{id}/attach/ws: get: summary: "Attach to a container via a websocket" operationId: "ContainerAttachWebsocket" responses: 101: description: "no error, hints proxy about hijacking" 200: description: "no error, no upgrade header found" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "detachKeys" in: "query" description: | Override the key sequence for detaching a container. Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,`, or `_`. type: "string" - name: "logs" in: "query" description: "Return logs" type: "boolean" default: false - name: "stream" in: "query" description: "Return stream" type: "boolean" default: false - name: "stdin" in: "query" description: "Attach to `stdin`" type: "boolean" default: false - name: "stdout" in: "query" description: "Attach to `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Attach to `stderr`" type: "boolean" default: false tags: ["Container"] /containers/{id}/wait: post: summary: "Wait for a container" description: "Block until a container stops, then returns the exit code."
operationId: "ContainerWait" produces: ["application/json"] responses: 200: description: "The container has exited." schema: type: "object" title: "ContainerWaitResponse" description: "OK response to ContainerWait operation" required: [StatusCode] properties: StatusCode: description: "Exit code of the container" type: "integer" x-nullable: false Error: description: "container waiting error, if any" type: "object" properties: Message: description: "Details of an error" type: "string" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "condition" in: "query" description: | Wait until a container state reaches the given condition, either 'not-running' (default), 'next-exit', or 'removed'. type: "string" default: "not-running" tags: ["Container"] /containers/{id}: delete: summary: "Remove a container" operationId: "ContainerDelete" responses: 204: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "conflict" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: | You cannot remove a running container: c2ada9df5af8. Stop the container before attempting removal or force remove 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "v" in: "query" description: "Remove anonymous volumes associated with the container." type: "boolean" default: false - name: "force" in: "query" description: "If the container is running, kill it before removing it." type: "boolean" default: false - name: "link" in: "query" description: "Remove the specified link associated with the container." type: "boolean" default: false tags: ["Container"] /containers/{id}/archive: head: summary: "Get information about files in a container" description: | A response header `X-Docker-Container-Path-Stat` is returned, containing a base64-encoded JSON object with some filesystem header information about the path. operationId: "ContainerArchiveInfo" responses: 200: description: "no error" headers: X-Docker-Container-Path-Stat: type: "string" description: | A base64-encoded JSON object with some filesystem header information about the path 400: description: "Bad parameter" schema: allOf: - $ref: "#/definitions/ErrorResponse" - type: "object" properties: message: description: | The error message. Either "must specify path parameter" (path cannot be empty) or "not a directory" (path was asserted to be a directory but exists as a file). type: "string" x-nullable: false 404: description: "Container or path does not exist" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "path" in: "query" required: true description: "Resource in the container’s filesystem to archive."
type: "string" tags: ["Container"] get: summary: "Get an archive of a filesystem resource in a container" description: "Get a tar archive of a resource in the filesystem of container id." operationId: "ContainerArchive" produces: ["application/x-tar"] responses: 200: description: "no error" 400: description: "Bad parameter" schema: allOf: - $ref: "#/definitions/ErrorResponse" - type: "object" properties: message: description: | The error message. Either "must specify path parameter" (path cannot be empty) or "not a directory" (path was asserted to be a directory but exists as a file). type: "string" x-nullable: false 404: description: "Container or path does not exist" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "path" in: "query" required: true description: "Resource in the container’s filesystem to archive." type: "string" tags: ["Container"] put: summary: "Extract an archive of files or folders to a directory in a container" description: "Upload a tar archive to be extracted to a path in the filesystem of container id." operationId: "PutContainerArchive" consumes: ["application/x-tar", "application/octet-stream"] responses: 200: description: "The content was extracted successfully" 400: description: "Bad parameter" schema: $ref: "#/definitions/ErrorResponse" 403: description: "Permission denied, the volume or container rootfs is marked as read-only." schema: $ref: "#/definitions/ErrorResponse" 404: description: "No such container or path does not exist inside the container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "path" in: "query" required: true description: "Path to a directory in the container to extract the archive’s contents into. " type: "string" - name: "noOverwriteDirNonDir" in: "query" description: | If `1`, `true`, or `True` then it will be an error if unpacking the given content would cause an existing directory to be replaced with a non-directory and vice versa. type: "string" - name: "copyUIDGID" in: "query" description: | If `1`, `true`, then it will copy UID/GID maps to the dest file or dir type: "string" - name: "inputStream" in: "body" required: true description: | The input stream must be a tar archive compressed with one of the following algorithms: `identity` (no compression), `gzip`, `bzip2`, or `xz`. schema: type: "string" format: "binary" tags: ["Container"] /containers/prune: post: summary: "Delete stopped containers" produces: - "application/json" operationId: "ContainerPrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `until=<timestamp>` Prune containers created before this timestamp. The `<timestamp>` can be Unix timestamps, date formatted timestamps, or Go duration strings (e.g. `10m`, `1h30m`) computed relative to the daemon machine’s time. - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune containers with (or without, in case `label!=...` is used) the specified labels. 
type: "string" responses: 200: description: "No error" schema: type: "object" title: "ContainerPruneResponse" properties: ContainersDeleted: description: "Container IDs that were deleted" type: "array" items: type: "string" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Container"] /images/json: get: summary: "List Images" description: "Returns a list of images on the server. Note that it uses a different, smaller representation of an image than inspecting a single image." operationId: "ImageList" produces: - "application/json" responses: 200: description: "Summary image data for the images matching the query" schema: type: "array" items: $ref: "#/definitions/ImageSummary" examples: application/json: - Id: "sha256:e216a057b1cb1efc11f8a268f37ef62083e70b1b38323ba252e25ac88904a7e8" ParentId: "" RepoTags: - "ubuntu:12.04" - "ubuntu:precise" RepoDigests: - "ubuntu@sha256:992069aee4016783df6345315302fa59681aae51a8eeb2f889dea59290f21787" Created: 1474925151 Size: 103579269 VirtualSize: 103579269 SharedSize: 0 Labels: {} Containers: 2 - Id: "sha256:3e314f95dcace0f5e4fd37b10862fe8398e3c60ed36600bc0ca5fda78b087175" ParentId: "" RepoTags: - "ubuntu:12.10" - "ubuntu:quantal" RepoDigests: - "ubuntu@sha256:002fba3e3255af10be97ea26e476692a7ebed0bb074a9ab960b2e7a1526b15d7" - "ubuntu@sha256:68ea0200f0b90df725d99d823905b04cf844f6039ef60c60bf3e019915017bd3" Created: 1403128455 Size: 172064416 VirtualSize: 172064416 SharedSize: 0 Labels: {} Containers: 5 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "all" in: "query" description: "Show all images. Only images from a final layer (no children) are shown by default." type: "boolean" default: false - name: "filters" in: "query" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the images list. Available filters: - `before`=(`<image-name>[:<tag>]`, `<image id>` or `<image@digest>`) - `dangling=true` - `label=key` or `label="key=value"` of an image label - `reference`=(`<image-name>[:<tag>]`) - `since`=(`<image-name>[:<tag>]`, `<image id>` or `<image@digest>`) type: "string" - name: "shared-size" in: "query" description: "Compute and show shared size as a `SharedSize` field on each image." type: "boolean" default: false - name: "digests" in: "query" description: "Show digest information as a `RepoDigests` field on each image." type: "boolean" default: false tags: ["Image"] /build: post: summary: "Build an image" description: | Build an image from a tar archive with a `Dockerfile` in it. The `Dockerfile` specifies how the image is built from the tar archive. It is typically in the archive's root, but can be at a different path or have a different name by specifying the `dockerfile` parameter. [See the `Dockerfile` reference for more information](https://docs.docker.com/engine/reference/builder/). The Docker daemon performs a preliminary validation of the `Dockerfile` before starting the build, and returns an error if the syntax is incorrect. After that, each instruction is run one-by-one until the ID of the new image is output. The build is canceled if the client drops the connection by quitting or being killed. 
operationId: "ImageBuild" consumes: - "application/octet-stream" produces: - "application/json" parameters: - name: "inputStream" in: "body" description: "A tar archive compressed with one of the following algorithms: identity (no compression), gzip, bzip2, xz." schema: type: "string" format: "binary" - name: "dockerfile" in: "query" description: "Path within the build context to the `Dockerfile`. This is ignored if `remote` is specified and points to an external `Dockerfile`." type: "string" default: "Dockerfile" - name: "t" in: "query" description: "A name and optional tag to apply to the image in the `name:tag` format. If you omit the tag the default `latest` value is assumed. You can provide several `t` parameters." type: "string" - name: "extrahosts" in: "query" description: "Extra hosts to add to /etc/hosts" type: "string" - name: "remote" in: "query" description: "A Git repository URI or HTTP/HTTPS context URI. If the URI points to a single text file, the file’s contents are placed into a file called `Dockerfile` and the image is built from that file. If the URI points to a tarball, the file is downloaded by the daemon and the contents therein used as the context for the build. If the URI points to a tarball and the `dockerfile` parameter is also specified, there must be a file with the corresponding path inside the tarball." type: "string" - name: "q" in: "query" description: "Suppress verbose build output." type: "boolean" default: false - name: "nocache" in: "query" description: "Do not use the cache when building the image." type: "boolean" default: false - name: "cachefrom" in: "query" description: "JSON array of images used for build cache resolution." type: "string" - name: "pull" in: "query" description: "Attempt to pull the image even if an older image exists locally." type: "string" - name: "rm" in: "query" description: "Remove intermediate containers after a successful build." type: "boolean" default: true - name: "forcerm" in: "query" description: "Always remove intermediate containers, even upon failure." type: "boolean" default: false - name: "memory" in: "query" description: "Set memory limit for build." type: "integer" - name: "memswap" in: "query" description: "Total memory (memory + swap). Set as `-1` to disable swap." type: "integer" - name: "cpushares" in: "query" description: "CPU shares (relative weight)." type: "integer" - name: "cpusetcpus" in: "query" description: "CPUs in which to allow execution (e.g., `0-3`, `0,1`)." type: "string" - name: "cpuperiod" in: "query" description: "The length of a CPU period in microseconds." type: "integer" - name: "cpuquota" in: "query" description: "Microseconds of CPU time that the container can get in a CPU period." type: "integer" - name: "buildargs" in: "query" description: > JSON map of string pairs for build-time variables. Users pass these values at build-time. Docker uses the buildargs as the environment context for commands run via the `Dockerfile` RUN instruction, or for variable expansion in other `Dockerfile` instructions. This is not meant for passing secret values. For example, the build arg `FOO=bar` would become `{"FOO":"bar"}` in JSON. This would result in the query parameter `buildargs={"FOO":"bar"}`. Note that `{"FOO":"bar"}` should be URI component encoded. [Read more about the buildargs instruction.](https://docs.docker.com/engine/reference/builder/#arg) type: "string" - name: "shmsize" in: "query" description: "Size of `/dev/shm` in bytes. The size must be greater than 0. 
If omitted the system uses 64MB." type: "integer" - name: "squash" in: "query" description: "Squash the resulting images layers into a single layer. *(Experimental release only.)*" type: "boolean" - name: "labels" in: "query" description: "Arbitrary key/value labels to set on the image, as a JSON map of string pairs." type: "string" - name: "networkmode" in: "query" description: | Sets the networking mode for the run commands during build. Supported standard values are: `bridge`, `host`, `none`, and `container:<name|id>`. Any other value is taken as a custom network's name or ID to which this container should connect to. type: "string" - name: "Content-type" in: "header" type: "string" enum: - "application/x-tar" default: "application/x-tar" - name: "X-Registry-Config" in: "header" description: | This is a base64-encoded JSON object with auth configurations for multiple registries that a build may refer to. The key is a registry URL, and the value is an auth configuration object, [as described in the authentication section](#section/Authentication). For example: ``` { "docker.example.com": { "username": "janedoe", "password": "hunter2" }, "https://index.docker.io/v1/": { "username": "mobydock", "password": "conta1n3rize14" } } ``` Only the registry domain name (and port if not the default 443) are required. However, for legacy reasons, the Docker Hub registry must be specified with both a `https://` prefix and a `/v1/` suffix even though Docker will prefer to use the v2 registry API. type: "string" - name: "platform" in: "query" description: "Platform in the format os[/arch[/variant]]" type: "string" default: "" - name: "target" in: "query" description: "Target build stage" type: "string" default: "" - name: "outputs" in: "query" description: "BuildKit output configuration" type: "string" default: "" responses: 200: description: "no error" 400: description: "Bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Image"] /build/prune: post: summary: "Delete builder cache" produces: - "application/json" operationId: "BuildPrune" parameters: - name: "keep-storage" in: "query" description: "Amount of disk space in bytes to keep for cache" type: "integer" format: "int64" - name: "all" in: "query" type: "boolean" description: "Remove all types of build cache" - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the list of build cache objects. Available filters: - `until=<duration>`: duration relative to daemon's time, during which build cache was not used, in Go's duration format (e.g., '24h') - `id=<id>` - `parent=<id>` - `type=<string>` - `description=<string>` - `inuse` - `shared` - `private` responses: 200: description: "No error" schema: type: "object" title: "BuildPruneResponse" properties: CachesDeleted: type: "array" items: description: "ID of build cache object" type: "string" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Image"] /images/create: post: summary: "Create an image" description: "Create an image by either pulling it from a registry or importing it." 
operationId: "ImageCreate" consumes: - "text/plain" - "application/octet-stream" produces: - "application/json" responses: 200: description: "no error" 404: description: "repository does not exist or no read access" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "fromImage" in: "query" description: "Name of the image to pull. The name may include a tag or digest. This parameter may only be used when pulling an image. The pull is cancelled if the HTTP connection is closed." type: "string" - name: "fromSrc" in: "query" description: "Source to import. The value may be a URL from which the image can be retrieved or `-` to read the image from the request body. This parameter may only be used when importing an image." type: "string" - name: "repo" in: "query" description: "Repository name given to an image when it is imported. The repo may include a tag. This parameter may only be used when importing an image." type: "string" - name: "tag" in: "query" description: "Tag or digest. If empty when pulling an image, this causes all tags for the given image to be pulled." type: "string" - name: "message" in: "query" description: "Set commit message for imported image." type: "string" - name: "inputImage" in: "body" description: "Image content if the value `-` has been specified in fromSrc query parameter" schema: type: "string" required: false - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration. Refer to the [authentication section](#section/Authentication) for details. type: "string" - name: "changes" in: "query" description: | Apply `Dockerfile` instructions to the image that is created, for example: `changes=ENV DEBUG=true`. Note that `ENV DEBUG=true` should be URI component encoded. Supported `Dockerfile` instructions: `CMD`|`ENTRYPOINT`|`ENV`|`EXPOSE`|`ONBUILD`|`USER`|`VOLUME`|`WORKDIR` type: "array" items: type: "string" - name: "platform" in: "query" description: "Platform in the format os[/arch[/variant]]" type: "string" default: "" tags: ["Image"] /images/{name}/json: get: summary: "Inspect an image" description: "Return low-level information about an image." 
operationId: "ImageInspect" produces: - "application/json" responses: 200: description: "No error" schema: $ref: "#/definitions/Image" examples: application/json: Id: "sha256:85f05633ddc1c50679be2b16a0479ab6f7637f8884e0cfe0f4d20e1ebb3d6e7c" Container: "cb91e48a60d01f1e27028b4fc6819f4f290b3cf12496c8176ec714d0d390984a" Comment: "" Os: "linux" Architecture: "amd64" Parent: "sha256:91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c" ContainerConfig: Tty: false Hostname: "e611e15f9c9d" Domainname: "" AttachStdout: false PublishService: "" AttachStdin: false OpenStdin: false StdinOnce: false NetworkDisabled: false OnBuild: [] Image: "91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c" User: "" WorkingDir: "" MacAddress: "" AttachStderr: false Labels: com.example.license: "GPL" com.example.version: "1.0" com.example.vendor: "Acme" Env: - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" Cmd: - "/bin/sh" - "-c" - "#(nop) LABEL com.example.vendor=Acme com.example.license=GPL com.example.version=1.0" DockerVersion: "1.9.0-dev" VirtualSize: 188359297 Size: 0 Author: "" Created: "2015-09-10T08:30:53.26995814Z" GraphDriver: Name: "aufs" Data: {} RepoDigests: - "localhost:5000/test/busybox/example@sha256:cbbf2f9a99b47fc460d422812b6a5adff7dfee951d8fa2e4a98caa0382cfbdbf" RepoTags: - "example:1.0" - "example:latest" - "example:stable" Config: Image: "91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c" NetworkDisabled: false OnBuild: [] StdinOnce: false PublishService: "" AttachStdin: false OpenStdin: false Domainname: "" AttachStdout: false Tty: false Hostname: "e611e15f9c9d" Cmd: - "/bin/bash" Env: - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" Labels: com.example.vendor: "Acme" com.example.version: "1.0" com.example.license: "GPL" MacAddress: "" AttachStderr: false WorkingDir: "" User: "" RootFS: Type: "layers" Layers: - "sha256:1834950e52ce4d5a88a1bbd131c537f4d0e56d10ff0dd69e66be3b7dfa9df7e6" - "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such image: someimage (tag: latest)" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or id" type: "string" required: true tags: ["Image"] /images/{name}/history: get: summary: "Get the history of an image" description: "Return parent layers of an image." 
operationId: "ImageHistory" produces: ["application/json"] responses: 200: description: "List of image layers" schema: type: "array" items: type: "object" x-go-name: HistoryResponseItem title: "HistoryResponseItem" description: "individual image layer information in response to ImageHistory operation" required: [Id, Created, CreatedBy, Tags, Size, Comment] properties: Id: type: "string" x-nullable: false Created: type: "integer" format: "int64" x-nullable: false CreatedBy: type: "string" x-nullable: false Tags: type: "array" items: type: "string" Size: type: "integer" format: "int64" x-nullable: false Comment: type: "string" x-nullable: false examples: application/json: - Id: "3db9c44f45209632d6050b35958829c3a2aa256d81b9a7be45b362ff85c54710" Created: 1398108230 CreatedBy: "/bin/sh -c #(nop) ADD file:eb15dbd63394e063b805a3c32ca7bf0266ef64676d5a6fab4801f2e81e2a5148 in /" Tags: - "ubuntu:lucid" - "ubuntu:10.04" Size: 182964289 Comment: "" - Id: "6cfa4d1f33fb861d4d114f43b25abd0ac737509268065cdfd69d544a59c85ab8" Created: 1398108222 CreatedBy: "/bin/sh -c #(nop) MAINTAINER Tianon Gravi <[email protected]> - mkimage-debootstrap.sh -i iproute,iputils-ping,ubuntu-minimal -t lucid.tar.xz lucid http://archive.ubuntu.com/ubuntu/" Tags: [] Size: 0 Comment: "" - Id: "511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158" Created: 1371157430 CreatedBy: "" Tags: - "scratch12:latest" - "scratch:latest" Size: 0 Comment: "Imported from -" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID" type: "string" required: true tags: ["Image"] /images/{name}/push: post: summary: "Push an image" description: | Push an image to a registry. If you wish to push an image on to a private registry, that image must already have a tag which references the registry. For example, `registry.example.com/myimage:latest`. The push is cancelled if the HTTP connection is closed. operationId: "ImagePush" consumes: - "application/octet-stream" responses: 200: description: "No error" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID." type: "string" required: true - name: "tag" in: "query" description: "The tag to associate with the image on the registry." type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration. Refer to the [authentication section](#section/Authentication) for details. type: "string" required: true tags: ["Image"] /images/{name}/tag: post: summary: "Tag an image" description: "Tag an image so that it becomes part of a repository." operationId: "ImageTag" responses: 201: description: "No error" 400: description: "Bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Conflict" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID to tag." type: "string" required: true - name: "repo" in: "query" description: "The repository to tag in. For example, `someuser/someimage`." type: "string" - name: "tag" in: "query" description: "The name of the new tag." 
type: "string" tags: ["Image"] /images/{name}: delete: summary: "Remove an image" description: | Remove an image, along with any untagged parent images that were referenced by that image. Images can't be removed if they have descendant images, are being used by a running container or are being used by a build. operationId: "ImageDelete" produces: ["application/json"] responses: 200: description: "The image was deleted successfully" schema: type: "array" items: $ref: "#/definitions/ImageDeleteResponseItem" examples: application/json: - Untagged: "3e2f21a89f" - Deleted: "3e2f21a89f" - Deleted: "53b4f83ac9" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Conflict" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID" type: "string" required: true - name: "force" in: "query" description: "Remove the image even if it is being used by stopped containers or has other tags" type: "boolean" default: false - name: "noprune" in: "query" description: "Do not delete untagged parent images" type: "boolean" default: false tags: ["Image"] /images/search: get: summary: "Search images" description: "Search for an image on Docker Hub." operationId: "ImageSearch" produces: - "application/json" responses: 200: description: "No error" schema: type: "array" items: type: "object" title: "ImageSearchResponseItem" properties: description: type: "string" is_official: type: "boolean" is_automated: type: "boolean" name: type: "string" star_count: type: "integer" examples: application/json: - description: "" is_official: false is_automated: false name: "wma55/u1210sshd" star_count: 0 - description: "" is_official: false is_automated: false name: "jdswinbank/sshd" star_count: 0 - description: "" is_official: false is_automated: false name: "vgauthier/sshd" star_count: 0 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "term" in: "query" description: "Term to search" type: "string" required: true - name: "limit" in: "query" description: "Maximum number of results to return" type: "integer" - name: "filters" in: "query" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the images list. Available filters: - `is-automated=(true|false)` - `is-official=(true|false)` - `stars=<number>` Matches images that has at least 'number' stars. type: "string" tags: ["Image"] /images/prune: post: summary: "Delete unused images" produces: - "application/json" operationId: "ImagePrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `dangling=<boolean>` When set to `true` (or `1`), prune only unused *and* untagged images. When set to `false` (or `0`), all unused images are pruned. - `until=<string>` Prune images created before this timestamp. The `<timestamp>` can be Unix timestamps, date formatted timestamps, or Go duration strings (e.g. `10m`, `1h30m`) computed relative to the daemon machine’s time. - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune images with (or without, in case `label!=...` is used) the specified labels. 
type: "string" responses: 200: description: "No error" schema: type: "object" title: "ImagePruneResponse" properties: ImagesDeleted: description: "Images that were deleted" type: "array" items: $ref: "#/definitions/ImageDeleteResponseItem" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Image"] /auth: post: summary: "Check auth configuration" description: | Validate credentials for a registry and, if available, get an identity token for accessing the registry without password. operationId: "SystemAuth" consumes: ["application/json"] produces: ["application/json"] responses: 200: description: "An identity token was generated successfully." schema: type: "object" title: "SystemAuthResponse" required: [Status] properties: Status: description: "The status of the authentication" type: "string" x-nullable: false IdentityToken: description: "An opaque token used to authenticate a user after a successful login" type: "string" x-nullable: false examples: application/json: Status: "Login Succeeded" IdentityToken: "9cbaf023786cd7..." 204: description: "No error" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "authConfig" in: "body" description: "Authentication to check" schema: $ref: "#/definitions/AuthConfig" tags: ["System"] /info: get: summary: "Get system information" operationId: "SystemInfo" produces: - "application/json" responses: 200: description: "No error" schema: $ref: "#/definitions/SystemInfo" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /version: get: summary: "Get version" description: "Returns the version of Docker that is running and various information about the system that Docker is running on." operationId: "SystemVersion" produces: ["application/json"] responses: 200: description: "no error" schema: $ref: "#/definitions/SystemVersion" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /_ping: get: summary: "Ping" description: "This is a dummy endpoint you can use to test if the server is accessible." operationId: "SystemPing" produces: ["text/plain"] responses: 200: description: "no error" schema: type: "string" example: "OK" headers: API-Version: type: "string" description: "Max API Version the server supports" Builder-Version: type: "string" description: "Default version of docker image builder" Docker-Experimental: type: "boolean" description: "If the server is running with experimental mode enabled" Cache-Control: type: "string" default: "no-cache, no-store, must-revalidate" Pragma: type: "string" default: "no-cache" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" headers: Cache-Control: type: "string" default: "no-cache, no-store, must-revalidate" Pragma: type: "string" default: "no-cache" tags: ["System"] head: summary: "Ping" description: "This is a dummy endpoint you can use to test if the server is accessible." 
operationId: "SystemPingHead" produces: ["text/plain"] responses: 200: description: "no error" schema: type: "string" example: "(empty)" headers: API-Version: type: "string" description: "Max API Version the server supports" Builder-Version: type: "string" description: "Default version of docker image builder" Docker-Experimental: type: "boolean" description: "If the server is running with experimental mode enabled" Cache-Control: type: "string" default: "no-cache, no-store, must-revalidate" Pragma: type: "string" default: "no-cache" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /commit: post: summary: "Create a new image from a container" operationId: "ImageCommit" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "containerConfig" in: "body" description: "The container configuration" schema: $ref: "#/definitions/ContainerConfig" - name: "container" in: "query" description: "The ID or name of the container to commit" type: "string" - name: "repo" in: "query" description: "Repository name for the created image" type: "string" - name: "tag" in: "query" description: "Tag name for the create image" type: "string" - name: "comment" in: "query" description: "Commit message" type: "string" - name: "author" in: "query" description: "Author of the image (e.g., `John Hannibal Smith <[email protected]>`)" type: "string" - name: "pause" in: "query" description: "Whether to pause the container before committing" type: "boolean" default: true - name: "changes" in: "query" description: "`Dockerfile` instructions to apply while committing" type: "string" tags: ["Image"] /events: get: summary: "Monitor events" description: | Stream real-time events from the server. Various objects within Docker report events when something happens to them. 
Containers report these events: `attach`, `commit`, `copy`, `create`, `destroy`, `detach`, `die`, `exec_create`, `exec_detach`, `exec_start`, `exec_die`, `export`, `health_status`, `kill`, `oom`, `pause`, `rename`, `resize`, `restart`, `start`, `stop`, `top`, `unpause`, `update`, and `prune` Images report these events: `delete`, `import`, `load`, `pull`, `push`, `save`, `tag`, `untag`, and `prune` Volumes report these events: `create`, `mount`, `unmount`, `destroy`, and `prune` Networks report these events: `create`, `connect`, `disconnect`, `destroy`, `update`, `remove`, and `prune` The Docker daemon reports these events: `reload` Services report these events: `create`, `update`, and `remove` Nodes report these events: `create`, `update`, and `remove` Secrets report these events: `create`, `update`, and `remove` Configs report these events: `create`, `update`, and `remove` The Builder reports `prune` events operationId: "SystemEvents" produces: - "application/json" responses: 200: description: "no error" schema: type: "object" title: "SystemEventsResponse" properties: Type: description: "The type of object emitting the event" type: "string" Action: description: "The type of event" type: "string" Actor: type: "object" properties: ID: description: "The ID of the object emitting the event" type: "string" Attributes: description: "Various key/value attributes of the object, depending on its type" type: "object" additionalProperties: type: "string" time: description: "Timestamp of event" type: "integer" timeNano: description: "Timestamp of event, with nanosecond accuracy" type: "integer" format: "int64" examples: application/json: Type: "container" Action: "create" Actor: ID: "ede54ee1afda366ab42f824e8a5ffd195155d853ceaec74a927f249ea270c743" Attributes: com.example.some-label: "some-label-value" image: "alpine" name: "my-container" time: 1461943101 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "since" in: "query" description: "Show events created since this timestamp then stream new events." type: "string" - name: "until" in: "query" description: "Show events created until this timestamp then stop streaming." type: "string" - name: "filters" in: "query" description: | A JSON encoded value of filters (a `map[string][]string`) to process on the event list. 
Available filters: - `config=<string>` config name or ID - `container=<string>` container name or ID - `daemon=<string>` daemon name or ID - `event=<string>` event type - `image=<string>` image name or ID - `label=<string>` image or container label - `network=<string>` network name or ID - `node=<string>` node ID - `plugin`=<string> plugin name or ID - `scope`=<string> local or swarm - `secret=<string>` secret name or ID - `service=<string>` service name or ID - `type=<string>` object to filter by, one of `container`, `image`, `volume`, `network`, `daemon`, `plugin`, `node`, `service`, `secret` or `config` - `volume=<string>` volume name type: "string" tags: ["System"] /system/df: get: summary: "Get data usage information" operationId: "SystemDataUsage" responses: 200: description: "no error" schema: type: "object" title: "SystemDataUsageResponse" properties: LayersSize: type: "integer" format: "int64" Images: type: "array" items: $ref: "#/definitions/ImageSummary" Containers: type: "array" items: $ref: "#/definitions/ContainerSummary" Volumes: type: "array" items: $ref: "#/definitions/Volume" BuildCache: type: "array" items: $ref: "#/definitions/BuildCache" example: LayersSize: 1092588 Images: - Id: "sha256:2b8fd9751c4c0f5dd266fcae00707e67a2545ef34f9a29354585f93dac906749" ParentId: "" RepoTags: - "busybox:latest" RepoDigests: - "busybox@sha256:a59906e33509d14c036c8678d687bd4eec81ed7c4b8ce907b888c607f6a1e0e6" Created: 1466724217 Size: 1092588 SharedSize: 0 VirtualSize: 1092588 Labels: {} Containers: 1 Containers: - Id: "e575172ed11dc01bfce087fb27bee502db149e1a0fad7c296ad300bbff178148" Names: - "/top" Image: "busybox" ImageID: "sha256:2b8fd9751c4c0f5dd266fcae00707e67a2545ef34f9a29354585f93dac906749" Command: "top" Created: 1472592424 Ports: [] SizeRootFs: 1092588 Labels: {} State: "exited" Status: "Exited (0) 56 minutes ago" HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: IPAMConfig: null Links: null Aliases: null NetworkID: "d687bc59335f0e5c9ee8193e5612e8aee000c8c62ea170cfb99c098f95899d92" EndpointID: "8ed5115aeaad9abb174f68dcf135b49f11daf597678315231a32ca28441dec6a" Gateway: "172.18.0.1" IPAddress: "172.18.0.2" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:12:00:02" Mounts: [] Volumes: - Name: "my-volume" Driver: "local" Mountpoint: "/var/lib/docker/volumes/my-volume/_data" Labels: null Scope: "local" Options: null UsageData: Size: 10920104 RefCount: 2 BuildCache: - ID: "hw53o5aio51xtltp5xjp8v7fx" Parent: "" Type: "regular" Description: "pulled from docker.io/library/debian@sha256:234cb88d3020898631af0ccbbcca9a66ae7306ecd30c9720690858c1b007d2a0" InUse: false Shared: true Size: 0 CreatedAt: "2021-06-28T13:31:01.474619385Z" LastUsedAt: "2021-07-07T22:02:32.738075951Z" UsageCount: 26 - ID: "ndlpt0hhvkqcdfkputsk4cq9c" Parent: "hw53o5aio51xtltp5xjp8v7fx" Type: "regular" Description: "mount / from exec /bin/sh -c echo 'Binary::apt::APT::Keep-Downloaded-Packages \"true\";' > /etc/apt/apt.conf.d/keep-cache" InUse: false Shared: true Size: 51 CreatedAt: "2021-06-28T13:31:03.002625487Z" LastUsedAt: "2021-07-07T22:02:32.773909517Z" UsageCount: 26 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "type" in: "query" description: | Object types, for which to compute and return data. 
type: "array" collectionFormat: multi items: type: "string" enum: ["container", "image", "volume", "build-cache"] tags: ["System"] /images/{name}/get: get: summary: "Export an image" description: | Get a tarball containing all images and metadata for a repository. If `name` is a specific name and tag (e.g. `ubuntu:latest`), then only that image (and its parents) are returned. If `name` is an image ID, similarly only that image (and its parents) are returned, but with the exclusion of the `repositories` file in the tarball, as there were no image names referenced. ### Image tarball format An image tarball contains one directory per image layer (named using its long ID), each containing these files: - `VERSION`: currently `1.0` - the file format version - `json`: detailed layer information, similar to `docker inspect layer_id` - `layer.tar`: A tarfile containing the filesystem changes in this layer The `layer.tar` file contains `aufs` style `.wh..wh.aufs` files and directories for storing attribute changes and deletions. If the tarball defines a repository, the tarball should also include a `repositories` file at the root that contains a list of repository and tag names mapped to layer IDs. ```json { "hello-world": { "latest": "565a9d68a73f6706862bfe8409a7f659776d4d60a8d096eb4a3cbce6999cc2a1" } } ``` operationId: "ImageGet" produces: - "application/x-tar" responses: 200: description: "no error" schema: type: "string" format: "binary" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID" type: "string" required: true tags: ["Image"] /images/get: get: summary: "Export several images" description: | Get a tarball containing all images and metadata for several image repositories. For each value of the `names` parameter: if it is a specific name and tag (e.g. `ubuntu:latest`), then only that image (and its parents) are returned; if it is an image ID, similarly only that image (and its parents) are returned and there would be no names referenced in the 'repositories' file for this image ID. For details on the format, see the [export image endpoint](#operation/ImageGet). operationId: "ImageGetAll" produces: - "application/x-tar" responses: 200: description: "no error" schema: type: "string" format: "binary" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "names" in: "query" description: "Image names to filter by" type: "array" items: type: "string" tags: ["Image"] /images/load: post: summary: "Import images" description: | Load a set of images and tags into a repository. For details on the format, see the [export image endpoint](#operation/ImageGet). operationId: "ImageLoad" consumes: - "application/x-tar" produces: - "application/json" responses: 200: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "imagesTarball" in: "body" description: "Tar archive containing images" schema: type: "string" format: "binary" - name: "quiet" in: "query" description: "Suppress progress details during load." type: "boolean" default: false tags: ["Image"] /containers/{id}/exec: post: summary: "Create an exec instance" description: "Run a command inside a running container." 
operationId: "ContainerExec" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "container is paused" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "execConfig" in: "body" description: "Exec configuration" schema: type: "object" title: "ExecConfig" properties: AttachStdin: type: "boolean" description: "Attach to `stdin` of the exec command." AttachStdout: type: "boolean" description: "Attach to `stdout` of the exec command." AttachStderr: type: "boolean" description: "Attach to `stderr` of the exec command." DetachKeys: type: "string" description: | Override the key sequence for detaching a container. Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,` or `_`. Tty: type: "boolean" description: "Allocate a pseudo-TTY." Env: description: | A list of environment variables in the form `["VAR=value", ...]`. type: "array" items: type: "string" Cmd: type: "array" description: "Command to run, as a string or array of strings." items: type: "string" Privileged: type: "boolean" description: "Runs the exec process with extended privileges." default: false User: type: "string" description: | The user, and optionally, group to run the exec process inside the container. Format is one of: `user`, `user:group`, `uid`, or `uid:gid`. WorkingDir: type: "string" description: | The working directory for the exec process inside the container. example: AttachStdin: false AttachStdout: true AttachStderr: true DetachKeys: "ctrl-p,ctrl-q" Tty: false Cmd: - "date" Env: - "FOO=bar" - "BAZ=quux" required: true - name: "id" in: "path" description: "ID or name of container" type: "string" required: true tags: ["Exec"] /exec/{id}/start: post: summary: "Start an exec instance" description: | Starts a previously set up exec instance. If detach is true, this endpoint returns immediately after starting the command. Otherwise, it sets up an interactive session with the command. operationId: "ExecStart" consumes: - "application/json" produces: - "application/vnd.docker.raw-stream" responses: 200: description: "No error" 404: description: "No such exec instance" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Container is stopped or paused" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "execStartConfig" in: "body" schema: type: "object" title: "ExecStartConfig" properties: Detach: type: "boolean" description: "Detach from the command." Tty: type: "boolean" description: "Allocate a pseudo-TTY." example: Detach: false Tty: false - name: "id" in: "path" description: "Exec instance ID" required: true type: "string" tags: ["Exec"] /exec/{id}/resize: post: summary: "Resize an exec instance" description: | Resize the TTY session used by an exec instance. This endpoint only works if `tty` was specified as part of creating and starting the exec instance. 
operationId: "ExecResize" responses: 201: description: "No error" 404: description: "No such exec instance" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Exec instance ID" required: true type: "string" - name: "h" in: "query" description: "Height of the TTY session in characters" type: "integer" - name: "w" in: "query" description: "Width of the TTY session in characters" type: "integer" tags: ["Exec"] /exec/{id}/json: get: summary: "Inspect an exec instance" description: "Return low-level information about an exec instance." operationId: "ExecInspect" produces: - "application/json" responses: 200: description: "No error" schema: type: "object" title: "ExecInspectResponse" properties: CanRemove: type: "boolean" DetachKeys: type: "string" ID: type: "string" Running: type: "boolean" ExitCode: type: "integer" ProcessConfig: $ref: "#/definitions/ProcessConfig" OpenStdin: type: "boolean" OpenStderr: type: "boolean" OpenStdout: type: "boolean" ContainerID: type: "string" Pid: type: "integer" description: "The system process ID for the exec process." examples: application/json: CanRemove: false ContainerID: "b53ee82b53a40c7dca428523e34f741f3abc51d9f297a14ff874bf761b995126" DetachKeys: "" ExitCode: 2 ID: "f33bbfb39f5b142420f4759b2348913bd4a8d1a6d7fd56499cb41a1bb91d7b3b" OpenStderr: true OpenStdin: true OpenStdout: true ProcessConfig: arguments: - "-c" - "exit 2" entrypoint: "sh" privileged: false tty: true user: "1000" Running: false Pid: 42000 404: description: "No such exec instance" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Exec instance ID" required: true type: "string" tags: ["Exec"] /volumes: get: summary: "List volumes" operationId: "VolumeList" produces: ["application/json"] responses: 200: description: "Summary volume data that matches the query" schema: type: "object" title: "VolumeListResponse" description: "Volume list response" required: [Volumes, Warnings] properties: Volumes: type: "array" x-nullable: false description: "List of volumes" items: $ref: "#/definitions/Volume" Warnings: type: "array" x-nullable: false description: | Warnings that occurred when fetching the list of volumes. items: type: "string" examples: application/json: Volumes: - CreatedAt: "2017-07-19T12:00:26Z" Name: "tardis" Driver: "local" Mountpoint: "/var/lib/docker/volumes/tardis" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Scope: "local" Options: device: "tmpfs" o: "size=100m,uid=1000" type: "tmpfs" Warnings: [] 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" description: | JSON encoded value of the filters (a `map[string][]string`) to process on the volumes list. Available filters: - `dangling=<boolean>` When set to `true` (or `1`), returns all volumes that are not in use by a container. When set to `false` (or `0`), only volumes that are in use by one or more containers are returned. - `driver=<volume-driver-name>` Matches volumes based on their driver. - `label=<key>` or `label=<key>:<value>` Matches volumes based on the presence of a `label` alone or a `label` and a value. - `name=<volume-name>` Matches all or part of a volume name. 
type: "string" format: "json" tags: ["Volume"] /volumes/create: post: summary: "Create a volume" operationId: "VolumeCreate" consumes: ["application/json"] produces: ["application/json"] responses: 201: description: "The volume was created successfully" schema: $ref: "#/definitions/Volume" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "volumeConfig" in: "body" required: true description: "Volume configuration" schema: type: "object" description: "Volume configuration" title: "VolumeConfig" properties: Name: description: | The new volume's name. If not specified, Docker generates a name. type: "string" x-nullable: false Driver: description: "Name of the volume driver to use." type: "string" default: "local" x-nullable: false DriverOpts: description: | A mapping of driver options and values. These options are passed directly to the driver and are driver specific. type: "object" additionalProperties: type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" example: Name: "tardis" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Driver: "custom" tags: ["Volume"] /volumes/{name}: get: summary: "Inspect a volume" operationId: "VolumeInspect" produces: ["application/json"] responses: 200: description: "No error" schema: $ref: "#/definitions/Volume" 404: description: "No such volume" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" required: true description: "Volume name or ID" type: "string" tags: ["Volume"] delete: summary: "Remove a volume" description: "Instruct the driver to remove the volume." operationId: "VolumeDelete" responses: 204: description: "The volume was removed" 404: description: "No such volume or volume driver" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Volume is in use and cannot be removed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" required: true description: "Volume name or ID" type: "string" - name: "force" in: "query" description: "Force the removal of the volume" type: "boolean" default: false tags: ["Volume"] /volumes/prune: post: summary: "Delete unused volumes" produces: - "application/json" operationId: "VolumePrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune volumes with (or without, in case `label!=...` is used) the specified labels. type: "string" responses: 200: description: "No error" schema: type: "object" title: "VolumePruneResponse" properties: VolumesDeleted: description: "Volumes that were deleted" type: "array" items: type: "string" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Volume"] /networks: get: summary: "List networks" description: | Returns a list of networks. For details on the format, see the [network inspect endpoint](#operation/NetworkInspect). Note that it uses a different, smaller representation of a network than inspecting a single network. 
For example, the list of containers attached to the network is not propagated in API versions 1.28 and up. operationId: "NetworkList" produces: - "application/json" responses: 200: description: "No error" schema: type: "array" items: $ref: "#/definitions/Network" examples: application/json: - Name: "bridge" Id: "f2de39df4171b0dc801e8002d1d999b77256983dfc63041c0f34030aa3977566" Created: "2016-10-19T06:21:00.416543526Z" Scope: "local" Driver: "bridge" EnableIPv6: false Internal: false Attachable: false Ingress: false IPAM: Driver: "default" Config: - Subnet: "172.17.0.0/16" Options: com.docker.network.bridge.default_bridge: "true" com.docker.network.bridge.enable_icc: "true" com.docker.network.bridge.enable_ip_masquerade: "true" com.docker.network.bridge.host_binding_ipv4: "0.0.0.0" com.docker.network.bridge.name: "docker0" com.docker.network.driver.mtu: "1500" - Name: "none" Id: "e086a3893b05ab69242d3c44e49483a3bbbd3a26b46baa8f61ab797c1088d794" Created: "0001-01-01T00:00:00Z" Scope: "local" Driver: "null" EnableIPv6: false Internal: false Attachable: false Ingress: false IPAM: Driver: "default" Config: [] Containers: {} Options: {} - Name: "host" Id: "13e871235c677f196c4e1ecebb9dc733b9b2d2ab589e30c539efeda84a24215e" Created: "0001-01-01T00:00:00Z" Scope: "local" Driver: "host" EnableIPv6: false Internal: false Attachable: false Ingress: false IPAM: Driver: "default" Config: [] Containers: {} Options: {} 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" description: | JSON encoded value of the filters (a `map[string][]string`) to process on the networks list. Available filters: - `dangling=<boolean>` When set to `true` (or `1`), returns all networks that are not in use by a container. When set to `false` (or `0`), only networks that are in use by one or more containers are returned. - `driver=<driver-name>` Matches a network's driver. - `id=<network-id>` Matches all or part of a network ID. - `label=<key>` or `label=<key>=<value>` of a network label. - `name=<network-name>` Matches all or part of a network name. - `scope=["swarm"|"global"|"local"]` Filters networks by scope (`swarm`, `global`, or `local`). - `type=["custom"|"builtin"]` Filters networks by type. The `custom` keyword returns all user-defined networks. 
type: "string" tags: ["Network"] /networks/{id}: get: summary: "Inspect a network" operationId: "NetworkInspect" produces: - "application/json" responses: 200: description: "No error" schema: $ref: "#/definitions/Network" 404: description: "Network not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" - name: "verbose" in: "query" description: "Detailed inspect output for troubleshooting" type: "boolean" default: false - name: "scope" in: "query" description: "Filter the network by scope (swarm, global, or local)" type: "string" tags: ["Network"] delete: summary: "Remove a network" operationId: "NetworkDelete" responses: 204: description: "No error" 403: description: "operation not supported for pre-defined networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such network" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" tags: ["Network"] /networks/create: post: summary: "Create a network" operationId: "NetworkCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "No error" schema: type: "object" title: "NetworkCreateResponse" properties: Id: description: "The ID of the created network." type: "string" Warning: type: "string" example: Id: "22be93d5babb089c5aab8dbc369042fad48ff791584ca2da2100db837a1c7c30" Warning: "" 403: description: "operation not supported for pre-defined networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "plugin not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "networkConfig" in: "body" description: "Network configuration" required: true schema: type: "object" title: "NetworkCreateRequest" required: ["Name"] properties: Name: description: "The network's name." type: "string" CheckDuplicate: description: | Check for networks with duplicate names. Since Network is primarily keyed based on a random ID and not on the name, and network name is strictly a user-friendly alias to the network which is uniquely identified using ID, there is no guaranteed way to check for duplicates. CheckDuplicate is there to provide a best effort checking of any networks which has the same name but it is not guaranteed to catch all name collisions. type: "boolean" Driver: description: "Name of the network driver plugin to use." type: "string" default: "bridge" Internal: description: "Restrict external access to the network." type: "boolean" Attachable: description: | Globally scoped network is manually attachable by regular containers from workers in swarm mode. type: "boolean" Ingress: description: | Ingress network is the network which provides the routing-mesh in swarm mode. type: "boolean" IPAM: description: "Optional custom IP scheme for the network." $ref: "#/definitions/IPAM" EnableIPv6: description: "Enable IPv6 on the network." type: "boolean" Options: description: "Network specific options to be used by the drivers." type: "object" additionalProperties: type: "string" Labels: description: "User-defined key/value metadata." 
type: "object" additionalProperties: type: "string" example: Name: "isolated_nw" CheckDuplicate: false Driver: "bridge" EnableIPv6: true IPAM: Driver: "default" Config: - Subnet: "172.20.0.0/16" IPRange: "172.20.10.0/24" Gateway: "172.20.10.11" - Subnet: "2001:db8:abcd::/64" Gateway: "2001:db8:abcd::1011" Options: foo: "bar" Internal: true Attachable: false Ingress: false Options: com.docker.network.bridge.default_bridge: "true" com.docker.network.bridge.enable_icc: "true" com.docker.network.bridge.enable_ip_masquerade: "true" com.docker.network.bridge.host_binding_ipv4: "0.0.0.0" com.docker.network.bridge.name: "docker0" com.docker.network.driver.mtu: "1500" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" tags: ["Network"] /networks/{id}/connect: post: summary: "Connect a container to a network" operationId: "NetworkConnect" consumes: - "application/json" responses: 200: description: "No error" 403: description: "Operation not supported for swarm scoped networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "Network or container not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" - name: "container" in: "body" required: true schema: type: "object" title: "NetworkConnectRequest" properties: Container: type: "string" description: "The ID or name of the container to connect to the network." EndpointConfig: $ref: "#/definitions/EndpointSettings" example: Container: "3613f73ba0e4" EndpointConfig: IPAMConfig: IPv4Address: "172.24.56.89" IPv6Address: "2001:db8::5689" tags: ["Network"] /networks/{id}/disconnect: post: summary: "Disconnect a container from a network" operationId: "NetworkDisconnect" consumes: - "application/json" responses: 200: description: "No error" 403: description: "Operation not supported for swarm scoped networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "Network or container not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" - name: "container" in: "body" required: true schema: type: "object" title: "NetworkDisconnectRequest" properties: Container: type: "string" description: | The ID or name of the container to disconnect from the network. Force: type: "boolean" description: | Force the container to disconnect from the network. tags: ["Network"] /networks/prune: post: summary: "Delete unused networks" produces: - "application/json" operationId: "NetworkPrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `until=<timestamp>` Prune networks created before this timestamp. The `<timestamp>` can be Unix timestamps, date formatted timestamps, or Go duration strings (e.g. `10m`, `1h30m`) computed relative to the daemon machine’s time. - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune networks with (or without, in case `label!=...` is used) the specified labels. 
type: "string" responses: 200: description: "No error" schema: type: "object" title: "NetworkPruneResponse" properties: NetworksDeleted: description: "Networks that were deleted" type: "array" items: type: "string" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Network"] /plugins: get: summary: "List plugins" operationId: "PluginList" description: "Returns information about installed plugins." produces: ["application/json"] responses: 200: description: "No error" schema: type: "array" items: $ref: "#/definitions/Plugin" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the plugin list. Available filters: - `capability=<capability name>` - `enable=<true>|<false>` tags: ["Plugin"] /plugins/privileges: get: summary: "Get plugin privileges" operationId: "GetPluginPrivileges" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/PluginPrivilegeItem" example: - Name: "network" Description: "" Value: - "host" - Name: "mount" Description: "" Value: - "/data" - Name: "device" Description: "" Value: - "/dev/cpu_dma_latency" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "remote" in: "query" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" tags: - "Plugin" /plugins/pull: post: summary: "Install a plugin" operationId: "PluginPull" description: | Pulls and installs a plugin. After the plugin is installed, it can be enabled using the [`POST /plugins/{name}/enable` endpoint](#operation/PostPluginsEnable). produces: - "application/json" responses: 204: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "remote" in: "query" description: | Remote reference for plugin to install. The `:latest` tag is optional, and is used as the default if omitted. required: true type: "string" - name: "name" in: "query" description: | Local name for the pulled plugin. The `:latest` tag is optional, and is used as the default if omitted. required: false type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration to use when pulling a plugin from a registry. Refer to the [authentication section](#section/Authentication) for details. type: "string" - name: "body" in: "body" schema: type: "array" items: $ref: "#/definitions/PluginPrivilegeItem" example: - Name: "network" Description: "" Value: - "host" - Name: "mount" Description: "" Value: - "/data" - Name: "device" Description: "" Value: - "/dev/cpu_dma_latency" tags: ["Plugin"] /plugins/{name}/json: get: summary: "Inspect a plugin" operationId: "PluginInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Plugin" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. 
required: true type: "string" tags: ["Plugin"] /plugins/{name}: delete: summary: "Remove a plugin" operationId: "PluginDelete" responses: 200: description: "no error" schema: $ref: "#/definitions/Plugin" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "force" in: "query" description: | Disable the plugin before removing. This may result in issues if the plugin is in use by a container. type: "boolean" default: false tags: ["Plugin"] /plugins/{name}/enable: post: summary: "Enable a plugin" operationId: "PluginEnable" responses: 200: description: "no error" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "timeout" in: "query" description: "Set the HTTP client timeout (in seconds)" type: "integer" default: 0 tags: ["Plugin"] /plugins/{name}/disable: post: summary: "Disable a plugin" operationId: "PluginDisable" responses: 200: description: "no error" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" tags: ["Plugin"] /plugins/{name}/upgrade: post: summary: "Upgrade a plugin" operationId: "PluginUpgrade" responses: 204: description: "no error" 404: description: "plugin not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "remote" in: "query" description: | Remote reference to upgrade to. The `:latest` tag is optional, and is used as the default if omitted. required: true type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration to use when pulling a plugin from a registry. Refer to the [authentication section](#section/Authentication) for details. type: "string" - name: "body" in: "body" schema: type: "array" items: $ref: "#/definitions/PluginPrivilegeItem" example: - Name: "network" Description: "" Value: - "host" - Name: "mount" Description: "" Value: - "/data" - Name: "device" Description: "" Value: - "/dev/cpu_dma_latency" tags: ["Plugin"] /plugins/create: post: summary: "Create a plugin" operationId: "PluginCreate" consumes: - "application/x-tar" responses: 204: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "query" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. 
required: true type: "string" - name: "tarContext" in: "body" description: "Path to tar containing plugin rootfs and manifest" schema: type: "string" format: "binary" tags: ["Plugin"] /plugins/{name}/push: post: summary: "Push a plugin" operationId: "PluginPush" description: | Push a plugin to the registry. parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" responses: 200: description: "no error" 404: description: "plugin not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Plugin"] /plugins/{name}/set: post: summary: "Configure a plugin" operationId: "PluginSet" consumes: - "application/json" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "body" in: "body" schema: type: "array" items: type: "string" example: ["DEBUG=1"] responses: 204: description: "No error" 404: description: "Plugin not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Plugin"] /nodes: get: summary: "List nodes" operationId: "NodeList" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Node" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" description: | Filters to process on the nodes list, encoded as JSON (a `map[string][]string`). Available filters: - `id=<node id>` - `label=<engine label>` - `membership=`(`accepted`|`pending`)` - `name=<node name>` - `node.label=<node label>` - `role=`(`manager`|`worker`)` type: "string" tags: ["Node"] /nodes/{id}: get: summary: "Inspect a node" operationId: "NodeInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Node" 404: description: "no such node" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the node" type: "string" required: true tags: ["Node"] delete: summary: "Delete a node" operationId: "NodeDelete" responses: 200: description: "no error" 404: description: "no such node" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the node" type: "string" required: true - name: "force" in: "query" description: "Force remove a node from the swarm" default: false type: "boolean" tags: ["Node"] /nodes/{id}/update: post: summary: "Update a node" operationId: "NodeUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such node" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID of 
the node" type: "string" required: true - name: "body" in: "body" schema: $ref: "#/definitions/NodeSpec" - name: "version" in: "query" description: | The version number of the node object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true tags: ["Node"] /swarm: get: summary: "Inspect swarm" operationId: "SwarmInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Swarm" 404: description: "no such swarm" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" tags: ["Swarm"] /swarm/init: post: summary: "Initialize a new swarm" operationId: "SwarmInit" produces: - "application/json" - "text/plain" responses: 200: description: "no error" schema: description: "The node ID" type: "string" example: "7v2t30z9blmxuhnyo6s4cpenp" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is already part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: type: "object" title: "SwarmInitRequest" properties: ListenAddr: description: | Listen address used for inter-manager communication, as well as determining the networking interface used for the VXLAN Tunnel Endpoint (VTEP). This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the default swarm listening port is used. type: "string" AdvertiseAddr: description: | Externally reachable address advertised to other nodes. This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the port number from the listen address is used. If `AdvertiseAddr` is not specified, it will be automatically detected when possible. type: "string" DataPathAddr: description: | Address or interface to use for data path traffic (format: `<ip|interface>`), for example, `192.168.1.1`, or an interface, like `eth0`. If `DataPathAddr` is unspecified, the same address as `AdvertiseAddr` is used. The `DataPathAddr` specifies the address that global scope network drivers will publish towards other nodes in order to reach the containers running on this node. Using this parameter it is possible to separate the container data traffic from the management traffic of the cluster. type: "string" DataPathPort: description: | DataPathPort specifies the data path port number for data traffic. Acceptable port range is 1024 to 49151. if no port is set or is set to 0, default port 4789 will be used. type: "integer" format: "uint32" DefaultAddrPool: description: | Default Address Pool specifies default subnet pools for global scope networks. type: "array" items: type: "string" example: ["10.10.0.0/16", "20.20.0.0/16"] ForceNewCluster: description: "Force creation of a new swarm." type: "boolean" SubnetSize: description: | SubnetSize specifies the subnet size of the networks created from the default subnet pool. 
type: "integer" format: "uint32" Spec: $ref: "#/definitions/SwarmSpec" example: ListenAddr: "0.0.0.0:2377" AdvertiseAddr: "192.168.1.1:2377" DataPathPort: 4789 DefaultAddrPool: ["10.10.0.0/8", "20.20.0.0/8"] SubnetSize: 24 ForceNewCluster: false Spec: Orchestration: {} Raft: {} Dispatcher: {} CAConfig: {} EncryptionConfig: AutoLockManagers: false tags: ["Swarm"] /swarm/join: post: summary: "Join an existing swarm" operationId: "SwarmJoin" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is already part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: type: "object" title: "SwarmJoinRequest" properties: ListenAddr: description: | Listen address used for inter-manager communication if the node gets promoted to manager, as well as determining the networking interface used for the VXLAN Tunnel Endpoint (VTEP). type: "string" AdvertiseAddr: description: | Externally reachable address advertised to other nodes. This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the port number from the listen address is used. If `AdvertiseAddr` is not specified, it will be automatically detected when possible. type: "string" DataPathAddr: description: | Address or interface to use for data path traffic (format: `<ip|interface>`), for example, `192.168.1.1`, or an interface, like `eth0`. If `DataPathAddr` is unspecified, the same address as `AdvertiseAddr` is used. The `DataPathAddr` specifies the address that global scope network drivers will publish towards other nodes in order to reach the containers running on this node. Using this parameter it is possible to separate the container data traffic from the management traffic of the cluster. type: "string" RemoteAddrs: description: | Addresses of manager nodes already participating in the swarm. type: "array" items: type: "string" JoinToken: description: "Secret token for joining this swarm." type: "string" example: ListenAddr: "0.0.0.0:2377" AdvertiseAddr: "192.168.1.1:2377" RemoteAddrs: - "node1:2377" JoinToken: "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-7p73s1dx5in4tatdymyhg9hu2" tags: ["Swarm"] /swarm/leave: post: summary: "Leave a swarm" operationId: "SwarmLeave" responses: 200: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "force" description: | Force leave swarm, even if this is the last manager or that it will break the cluster. in: "query" type: "boolean" default: false tags: ["Swarm"] /swarm/update: post: summary: "Update a swarm" operationId: "SwarmUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: $ref: "#/definitions/SwarmSpec" - name: "version" in: "query" description: | The version number of the swarm object being updated. This is required to avoid conflicting writes. 
type: "integer" format: "int64" required: true - name: "rotateWorkerToken" in: "query" description: "Rotate the worker join token." type: "boolean" default: false - name: "rotateManagerToken" in: "query" description: "Rotate the manager join token." type: "boolean" default: false - name: "rotateManagerUnlockKey" in: "query" description: "Rotate the manager unlock key." type: "boolean" default: false tags: ["Swarm"] /swarm/unlockkey: get: summary: "Get the unlock key" operationId: "SwarmUnlockkey" consumes: - "application/json" responses: 200: description: "no error" schema: type: "object" title: "UnlockKeyResponse" properties: UnlockKey: description: "The swarm's unlock key." type: "string" example: UnlockKey: "SWMKEY-1-7c37Cc8654o6p38HnroywCi19pllOnGtbdZEgtKxZu8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" tags: ["Swarm"] /swarm/unlock: post: summary: "Unlock a locked manager" operationId: "SwarmUnlock" consumes: - "application/json" produces: - "application/json" parameters: - name: "body" in: "body" required: true schema: type: "object" title: "SwarmUnlockRequest" properties: UnlockKey: description: "The swarm's unlock key." type: "string" example: UnlockKey: "SWMKEY-1-7c37Cc8654o6p38HnroywCi19pllOnGtbdZEgtKxZu8" responses: 200: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" tags: ["Swarm"] /services: get: summary: "List services" operationId: "ServiceList" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Service" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the services list. Available filters: - `id=<service id>` - `label=<service label>` - `mode=["replicated"|"global"]` - `name=<service name>` - name: "status" in: "query" type: "boolean" description: | Include service status, with count of running and desired tasks. tags: ["Service"] /services/create: post: summary: "Create a service" operationId: "ServiceCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: type: "object" title: "ServiceCreateResponse" properties: ID: description: "The ID of the created service." 
type: "string" Warning: description: "Optional warning message" type: "string" example: ID: "ak7w3gjqoa3kuz8xcpnyy0pvl" Warning: "unable to pin image doesnotexist:latest to digest: image library/doesnotexist:latest not found" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 403: description: "network is not eligible for services" schema: $ref: "#/definitions/ErrorResponse" 409: description: "name conflicts with an existing service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: allOf: - $ref: "#/definitions/ServiceSpec" - type: "object" example: Name: "web" TaskTemplate: ContainerSpec: Image: "nginx:alpine" Mounts: - ReadOnly: true Source: "web-data" Target: "/usr/share/nginx/html" Type: "volume" VolumeOptions: DriverConfig: {} Labels: com.example.something: "something-value" Hosts: ["10.10.10.10 host1", "ABCD:EF01:2345:6789:ABCD:EF01:2345:6789 host2"] User: "33" DNSConfig: Nameservers: ["8.8.8.8"] Search: ["example.org"] Options: ["timeout:3"] Secrets: - File: Name: "www.example.org.key" UID: "33" GID: "33" Mode: 384 SecretID: "fpjqlhnwb19zds35k8wn80lq9" SecretName: "example_org_domain_key" LogDriver: Name: "json-file" Options: max-file: "3" max-size: "10M" Placement: {} Resources: Limits: MemoryBytes: 104857600 Reservations: {} RestartPolicy: Condition: "on-failure" Delay: 10000000000 MaxAttempts: 10 Mode: Replicated: Replicas: 4 UpdateConfig: Parallelism: 2 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 RollbackConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 EndpointSpec: Ports: - Protocol: "tcp" PublishedPort: 8080 TargetPort: 80 Labels: foo: "bar" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration for pulling from private registries. Refer to the [authentication section](#section/Authentication) for details. type: "string" tags: ["Service"] /services/{id}: get: summary: "Inspect a service" operationId: "ServiceInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Service" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID or name of service." required: true type: "string" - name: "insertDefaults" in: "query" description: "Fill empty fields with default values." type: "boolean" default: false tags: ["Service"] delete: summary: "Delete a service" operationId: "ServiceDelete" responses: 200: description: "no error" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID or name of service." 
required: true type: "string" tags: ["Service"] /services/{id}/update: post: summary: "Update a service" operationId: "ServiceUpdate" consumes: ["application/json"] produces: ["application/json"] responses: 200: description: "no error" schema: $ref: "#/definitions/ServiceUpdateResponse" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID or name of service." required: true type: "string" - name: "body" in: "body" required: true schema: allOf: - $ref: "#/definitions/ServiceSpec" - type: "object" example: Name: "top" TaskTemplate: ContainerSpec: Image: "busybox" Args: - "top" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ForceUpdate: 0 Mode: Replicated: Replicas: 1 UpdateConfig: Parallelism: 2 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 RollbackConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 EndpointSpec: Mode: "vip" - name: "version" in: "query" description: | The version number of the service object being updated. This is required to avoid conflicting writes. This version number should be the value as currently set on the service *before* the update. You can find the current version by calling `GET /services/{id}` required: true type: "integer" - name: "registryAuthFrom" in: "query" description: | If the `X-Registry-Auth` header is not specified, this parameter indicates where to find registry authorization credentials. type: "string" enum: ["spec", "previous-spec"] default: "spec" - name: "rollback" in: "query" description: | Set to this parameter to `previous` to cause a server-side rollback to the previous service spec. The supplied spec will be ignored in this case. type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration for pulling from private registries. Refer to the [authentication section](#section/Authentication) for details. type: "string" tags: ["Service"] /services/{id}/logs: get: summary: "Get service logs" description: | Get `stdout` and `stderr` logs from a service. See also [`/containers/{id}/logs`](#operation/ContainerLogs). **Note**: This endpoint works only for services with the `local`, `json-file` or `journald` logging drivers. operationId: "ServiceLogs" responses: 200: description: "logs returned as a stream in response body" schema: type: "string" format: "binary" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such service: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the service" type: "string" - name: "details" in: "query" description: "Show service context and extra details provided to logs." type: "boolean" default: false - name: "follow" in: "query" description: "Keep connection after returning logs." 
type: "boolean" default: false - name: "stdout" in: "query" description: "Return logs from `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Return logs from `stderr`" type: "boolean" default: false - name: "since" in: "query" description: "Only return logs since this time, as a UNIX timestamp" type: "integer" default: 0 - name: "timestamps" in: "query" description: "Add timestamps to every log line" type: "boolean" default: false - name: "tail" in: "query" description: | Only return this number of log lines from the end of the logs. Specify as an integer or `all` to output all log lines. type: "string" default: "all" tags: ["Service"] /tasks: get: summary: "List tasks" operationId: "TaskList" produces: - "application/json" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Task" example: - ID: "0kzzo1i0y4jz6027t0k7aezc7" Version: Index: 71 CreatedAt: "2016-06-07T21:07:31.171892745Z" UpdatedAt: "2016-06-07T21:07:31.376370513Z" Spec: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ServiceID: "9mnpnzenvg8p8tdbtq4wvbkcz" Slot: 1 NodeID: "60gvrl6tm78dmak4yl7srz94v" Status: Timestamp: "2016-06-07T21:07:31.290032978Z" State: "running" Message: "started" ContainerStatus: ContainerID: "e5d62702a1b48d01c3e02ca1e0212a250801fa8d67caca0b6f35919ebc12f035" PID: 677 DesiredState: "running" NetworksAttachments: - Network: ID: "4qvuz4ko70xaltuqbt8956gd1" Version: Index: 18 CreatedAt: "2016-06-07T20:31:11.912919752Z" UpdatedAt: "2016-06-07T21:07:29.955277358Z" Spec: Name: "ingress" Labels: com.docker.swarm.internal: "true" DriverConfiguration: {} IPAMOptions: Driver: {} Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" DriverState: Name: "overlay" Options: com.docker.network.driver.overlay.vxlanid_list: "256" IPAMOptions: Driver: Name: "default" Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" Addresses: - "10.255.0.10/16" - ID: "1yljwbmlr8er2waf8orvqpwms" Version: Index: 30 CreatedAt: "2016-06-07T21:07:30.019104782Z" UpdatedAt: "2016-06-07T21:07:30.231958098Z" Name: "hopeful_cori" Spec: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ServiceID: "9mnpnzenvg8p8tdbtq4wvbkcz" Slot: 1 NodeID: "60gvrl6tm78dmak4yl7srz94v" Status: Timestamp: "2016-06-07T21:07:30.202183143Z" State: "shutdown" Message: "shutdown" ContainerStatus: ContainerID: "1cf8d63d18e79668b0004a4be4c6ee58cddfad2dae29506d8781581d0688a213" DesiredState: "shutdown" NetworksAttachments: - Network: ID: "4qvuz4ko70xaltuqbt8956gd1" Version: Index: 18 CreatedAt: "2016-06-07T20:31:11.912919752Z" UpdatedAt: "2016-06-07T21:07:29.955277358Z" Spec: Name: "ingress" Labels: com.docker.swarm.internal: "true" DriverConfiguration: {} IPAMOptions: Driver: {} Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" DriverState: Name: "overlay" Options: com.docker.network.driver.overlay.vxlanid_list: "256" IPAMOptions: Driver: Name: "default" Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" Addresses: - "10.255.0.5/16" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the tasks list. 
Available filters: - `desired-state=(running | shutdown | accepted)` - `id=<task id>` - `label=key` or `label="key=value"` - `name=<task name>` - `node=<node id or name>` - `service=<service name>` tags: ["Task"] /tasks/{id}: get: summary: "Inspect a task" operationId: "TaskInspect" produces: - "application/json" responses: 200: description: "no error" schema: $ref: "#/definitions/Task" 404: description: "no such task" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID of the task" required: true type: "string" tags: ["Task"] /tasks/{id}/logs: get: summary: "Get task logs" description: | Get `stdout` and `stderr` logs from a task. See also [`/containers/{id}/logs`](#operation/ContainerLogs). **Note**: This endpoint works only for services with the `local`, `json-file` or `journald` logging drivers. operationId: "TaskLogs" responses: 200: description: "logs returned as a stream in response body" schema: type: "string" format: "binary" 404: description: "no such task" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such task: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID of the task" type: "string" - name: "details" in: "query" description: "Show task context and extra details provided to logs." type: "boolean" default: false - name: "follow" in: "query" description: "Keep connection after returning logs." type: "boolean" default: false - name: "stdout" in: "query" description: "Return logs from `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Return logs from `stderr`" type: "boolean" default: false - name: "since" in: "query" description: "Only return logs since this time, as a UNIX timestamp" type: "integer" default: 0 - name: "timestamps" in: "query" description: "Add timestamps to every log line" type: "boolean" default: false - name: "tail" in: "query" description: | Only return this number of log lines from the end of the logs. Specify as an integer or `all` to output all log lines. type: "string" default: "all" tags: ["Task"] /secrets: get: summary: "List secrets" operationId: "SecretList" produces: - "application/json" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Secret" example: - ID: "blt1owaxmitz71s9v5zh81zun" Version: Index: 85 CreatedAt: "2017-07-20T13:55:28.678958722Z" UpdatedAt: "2017-07-20T13:55:28.678958722Z" Spec: Name: "mysql-passwd" Labels: some.label: "some.value" Driver: Name: "secret-bucket" Options: OptionA: "value for driver option A" OptionB: "value for driver option B" - ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "app-dev.crt" Labels: foo: "bar" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the secrets list. 
Available filters: - `id=<secret id>` - `label=<key> or label=<key>=value` - `name=<secret name>` - `names=<secret name>` tags: ["Secret"] /secrets/create: post: summary: "Create a secret" operationId: "SecretCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 409: description: "name conflicts with an existing object" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" schema: allOf: - $ref: "#/definitions/SecretSpec" - type: "object" example: Name: "app-key.crt" Labels: foo: "bar" Data: "VEhJUyBJUyBOT1QgQSBSRUFMIENFUlRJRklDQVRFCg==" Driver: Name: "secret-bucket" Options: OptionA: "value for driver option A" OptionB: "value for driver option B" tags: ["Secret"] /secrets/{id}: get: summary: "Inspect a secret" operationId: "SecretInspect" produces: - "application/json" responses: 200: description: "no error" schema: $ref: "#/definitions/Secret" examples: application/json: ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "app-dev.crt" Labels: foo: "bar" Driver: Name: "secret-bucket" Options: OptionA: "value for driver option A" OptionB: "value for driver option B" 404: description: "secret not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the secret" tags: ["Secret"] delete: summary: "Delete a secret" operationId: "SecretDelete" produces: - "application/json" responses: 204: description: "no error" 404: description: "secret not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the secret" tags: ["Secret"] /secrets/{id}/update: post: summary: "Update a Secret" operationId: "SecretUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such secret" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the secret" type: "string" required: true - name: "body" in: "body" schema: $ref: "#/definitions/SecretSpec" description: | The spec of the secret to update. Currently, only the Labels field can be updated. All other fields must remain unchanged from the [SecretInspect endpoint](#operation/SecretInspect) response values. - name: "version" in: "query" description: | The version number of the secret object being updated. This is required to avoid conflicting writes. 
type: "integer" format: "int64" required: true tags: ["Secret"] /configs: get: summary: "List configs" operationId: "ConfigList" produces: - "application/json" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Config" example: - ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "server.conf" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the configs list. Available filters: - `id=<config id>` - `label=<key> or label=<key>=value` - `name=<config name>` - `names=<config name>` tags: ["Config"] /configs/create: post: summary: "Create a config" operationId: "ConfigCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 409: description: "name conflicts with an existing object" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" schema: allOf: - $ref: "#/definitions/ConfigSpec" - type: "object" example: Name: "server.conf" Labels: foo: "bar" Data: "VEhJUyBJUyBOT1QgQSBSRUFMIENFUlRJRklDQVRFCg==" tags: ["Config"] /configs/{id}: get: summary: "Inspect a config" operationId: "ConfigInspect" produces: - "application/json" responses: 200: description: "no error" schema: $ref: "#/definitions/Config" examples: application/json: ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "app-dev.crt" 404: description: "config not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the config" tags: ["Config"] delete: summary: "Delete a config" operationId: "ConfigDelete" produces: - "application/json" responses: 204: description: "no error" 404: description: "config not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the config" tags: ["Config"] /configs/{id}/update: post: summary: "Update a Config" operationId: "ConfigUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such config" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the config" type: "string" required: true - name: "body" in: "body" schema: $ref: "#/definitions/ConfigSpec" description: | The spec of the config to update. 
Currently, only the Labels field can be updated. All other fields must remain unchanged from the [ConfigInspect endpoint](#operation/ConfigInspect) response values. - name: "version" in: "query" description: | The version number of the config object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true tags: ["Config"] /distribution/{name}/json: get: summary: "Get image information from the registry" description: | Return image digest and platform information by contacting the registry. operationId: "DistributionInspect" produces: - "application/json" responses: 200: description: "descriptor and platform information" schema: type: "object" x-go-name: DistributionInspect title: "DistributionInspectResponse" required: [Descriptor, Platforms] properties: Descriptor: type: "object" description: | A descriptor struct containing digest, media type, and size. properties: mediaType: type: "string" size: type: "integer" format: "int64" digest: type: "string" urls: type: "array" items: type: "string" Platforms: type: "array" description: | An array containing all platforms supported by the image. items: type: "object" properties: architecture: type: "string" os: type: "string" os.version: type: "string" os.features: type: "array" items: type: "string" variant: type: "string" Features: type: "array" items: type: "string" examples: application/json: Descriptor: MediaType: "application/vnd.docker.distribution.manifest.v2+json" Digest: "sha256:c0537ff6a5218ef531ece93d4984efc99bbf3f7497c0a7726c88e2bb7584dc96" Size: 3987495 URLs: - "" Platforms: - architecture: "amd64" os: "linux" os.version: "" os.features: - "" variant: "" Features: - "" 401: description: "Failed authentication or no image found" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such image: someimage (tag: latest)" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or id" type: "string" required: true tags: ["Distribution"] /session: post: summary: "Initialize interactive session" description: | Start a new interactive session with a server. Session allows server to call back to the client for advanced capabilities. ### Hijacking This endpoint hijacks the HTTP connection to HTTP2 transport that allows the client to expose gPRC services on that connection. For example, the client sends this request to upgrade the connection: ``` POST /session HTTP/1.1 Upgrade: h2c Connection: Upgrade ``` The Docker daemon responds with a `101 UPGRADED` response follow with the raw stream: ``` HTTP/1.1 101 UPGRADED Connection: Upgrade Upgrade: h2c ``` operationId: "Session" produces: - "application/vnd.docker.raw-stream" responses: 101: description: "no error, hijacking successful" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Session"]
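The endpoint fragments above (SecretCreate/SecretInspect, their Config equivalents, DistributionInspect and Session) have direct counterparts in the Docker Go SDK. As a minimal sketch, assuming a swarm-enabled daemon reachable through the usual `DOCKER_HOST` environment settings and the SDK's `client` and `swarm` packages (none of which are part of this spec excerpt), creating and then inspecting a secret could look like this; the secret name, labels and payload simply echo the example values shown in the spec:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/api/types/swarm"
	"github.com/docker/docker/client"
)

func main() {
	ctx := context.Background()

	// Connect using DOCKER_HOST / DOCKER_API_VERSION from the environment.
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	// POST /secrets/create -- the daemon must be part of a swarm,
	// otherwise the 503 documented above is returned.
	created, err := cli.SecretCreate(ctx, swarm.SecretSpec{
		Annotations: swarm.Annotations{
			Name:   "app-key.crt", // illustrative name, taken from the example above
			Labels: map[string]string{"foo": "bar"},
		},
		Data: []byte("THIS IS NOT A REAL CERTIFICATE\n"),
	})
	if err != nil {
		log.Fatal(err)
	}

	// GET /secrets/{id} -- mirrors the Secret schema shown above.
	secret, _, err := cli.SecretInspectWithRaw(ctx, created.ID)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(secret.ID, secret.Spec.Name, secret.CreatedAt)
}
```

As in the example inspect response above, only the spec, timestamps and version index come back; the secret payload itself is never returned by `GET /secrets/{id}`.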
gesellix
41568dfc665ad4033d7fc860a5dac8102915008e
9bc0c4903f7f02ce287b9918f64795368e507f9d
This one was taken from the CLI docs (see https://github.com/docker/cli/blob/c758c3e4a5a980cf0ea3292c958fd537822ba0d5/docs/reference/commandline/import.md#description), trying to keep some consistency. Yet, the CLI docs are more specific about the supported Dockerfile commands, so I changed the description to be more detailed.
gesellix
4,561
moby/moby
42,621
Improve swagger.yaml to match the 1.41 api version
I recently tried to rely on the swagger.yaml for code generation of an API client. Some parts required manual fixes of the swagger docs to match the current API version 1.41. Each of the changes described below lives in a dedicated commit, because they aim at different parts of the API description and aren't related per se. If you'd prefer to merge them all into a single commit, please leave a note.

**- What I did**

1. Add RestartPolicy "no" to swagger docs
2. Add "changes" query parameter for /image/create to swagger docs
3. Fix ContainerSummary swagger docs (flattened)
4. Use explicit object names for improved swagger-based code generation (otherwise generic names had been generated)
5. Fix swagger docs to match the [opencontainers image-spec](https://github.com/opencontainers/image-spec/blob/5ced465cc63831baf25330431a428a9d9444e192/specs-go/v1/descriptor.go)

**- How I did it**

Used the 1.41 swagger.yaml with swagger's codegen to generate Java code. Used that code to perform requests to a 1.41 Docker engine. Applied some fixes and ran `hack/validate/swagger` and `make swagger-docs` to validate the changes.

**- How to verify it**

Compare the actual 1.41 API with the swagger.yaml.

**- Description for the changelog**

Update the swagger.yaml to match the version 1.41 API.

**- A picture of a cute animal (not mandatory but encouraged)**

![062321_JC_mountain-cat_feat](https://user-images.githubusercontent.com/432791/125209896-77588e80-e29c-11eb-9a5f-439f5f04a69b.jpeg)
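For reference, a programmatic sketch of a validation comparable to the `hack/validate/swagger` step mentioned above could look like the following. The `go-openapi` packages and the `api/swagger.yaml` path relative to a repository checkout are assumptions here, not something taken from this PR:

```go
package main

import (
	"log"

	"github.com/go-openapi/loads"
	"github.com/go-openapi/strfmt"
	"github.com/go-openapi/validate"
)

func main() {
	// Load the spec; the path is relative to the repository root.
	doc, err := loads.Spec("api/swagger.yaml")
	if err != nil {
		log.Fatalf("loading spec: %v", err)
	}

	// Structural validation comparable to what `swagger validate` performs.
	if err := validate.Spec(doc, strfmt.Default); err != nil {
		log.Fatalf("spec is invalid: %v", err)
	}
	log.Println("api/swagger.yaml is a valid Swagger 2.0 document")
}
```

This only checks the document itself; verifying that the spec matches the 1.41 API still requires generating a client and exercising a real daemon, as described above.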
null
2021-07-11 21:09:21+00:00
2021-08-21 22:28:57+00:00
api/swagger.yaml
# A Swagger 2.0 (a.k.a. OpenAPI) definition of the Engine API. # # This is used for generating API documentation and the types used by the # client/server. See api/README.md for more information. # # Some style notes: # - This file is used by ReDoc, which allows GitHub Flavored Markdown in # descriptions. # - There is no maximum line length, for ease of editing and pretty diffs. # - operationIds are in the format "NounVerb", with a singular noun. swagger: "2.0" schemes: - "http" - "https" produces: - "application/json" - "text/plain" consumes: - "application/json" - "text/plain" basePath: "/v1.42" info: title: "Docker Engine API" version: "1.42" x-logo: url: "https://docs.docker.com/images/logo-docker-main.png" description: | The Engine API is an HTTP API served by Docker Engine. It is the API the Docker client uses to communicate with the Engine, so everything the Docker client can do can be done with the API. Most of the client's commands map directly to API endpoints (e.g. `docker ps` is `GET /containers/json`). The notable exception is running containers, which consists of several API calls. # Errors The API uses standard HTTP status codes to indicate the success or failure of the API call. The body of the response will be JSON in the following format: ``` { "message": "page not found" } ``` # Versioning The API is usually changed in each release, so API calls are versioned to ensure that clients don't break. To lock to a specific version of the API, you prefix the URL with its version, for example, call `/v1.30/info` to use the v1.30 version of the `/info` endpoint. If the API version specified in the URL is not supported by the daemon, a HTTP `400 Bad Request` error message is returned. If you omit the version-prefix, the current version of the API (v1.42) is used. For example, calling `/info` is the same as calling `/v1.42/info`. Using the API without a version-prefix is deprecated and will be removed in a future release. Engine releases in the near future should support this version of the API, so your client will continue to work even if it is talking to a newer Engine. The API uses an open schema model, which means server may add extra properties to responses. Likewise, the server will ignore any extra query parameters and request body properties. When you write clients, you need to ignore additional properties in responses to ensure they do not break when talking to newer daemons. # Authentication Authentication for registries is handled client side. The client has to send authentication details to various endpoints that need to communicate with registries, such as `POST /images/(name)/push`. These are sent as `X-Registry-Auth` header as a [base64url encoded](https://tools.ietf.org/html/rfc4648#section-5) (JSON) string with the following structure: ``` { "username": "string", "password": "string", "email": "string", "serveraddress": "string" } ``` The `serveraddress` is a domain/IP without a protocol. Throughout this structure, double quotes are required. If you have already got an identity token from the [`/auth` endpoint](#operation/SystemAuth), you can just pass this instead of credentials: ``` { "identitytoken": "9cbaf023786cd7..." } ``` # The tags on paths define the menu sections in the ReDoc documentation, so # the usage of tags must make sense for that: # - They should be singular, not plural. # - There should not be too many tags, or the menu becomes unwieldy. 
For # example, it is preferable to add a path to the "System" tag instead of # creating a tag with a single path in it. # - The order of tags in this list defines the order in the menu. tags: # Primary objects - name: "Container" x-displayName: "Containers" description: | Create and manage containers. - name: "Image" x-displayName: "Images" - name: "Network" x-displayName: "Networks" description: | Networks are user-defined networks that containers can be attached to. See the [networking documentation](https://docs.docker.com/network/) for more information. - name: "Volume" x-displayName: "Volumes" description: | Create and manage persistent storage that can be attached to containers. - name: "Exec" x-displayName: "Exec" description: | Run new commands inside running containers. Refer to the [command-line reference](https://docs.docker.com/engine/reference/commandline/exec/) for more information. To exec a command in a container, you first need to create an exec instance, then start it. These two API endpoints are wrapped up in a single command-line command, `docker exec`. # Swarm things - name: "Swarm" x-displayName: "Swarm" description: | Engines can be clustered together in a swarm. Refer to the [swarm mode documentation](https://docs.docker.com/engine/swarm/) for more information. - name: "Node" x-displayName: "Nodes" description: | Nodes are instances of the Engine participating in a swarm. Swarm mode must be enabled for these endpoints to work. - name: "Service" x-displayName: "Services" description: | Services are the definitions of tasks to run on a swarm. Swarm mode must be enabled for these endpoints to work. - name: "Task" x-displayName: "Tasks" description: | A task is a container running on a swarm. It is the atomic scheduling unit of swarm. Swarm mode must be enabled for these endpoints to work. - name: "Secret" x-displayName: "Secrets" description: | Secrets are sensitive data that can be used by services. Swarm mode must be enabled for these endpoints to work. - name: "Config" x-displayName: "Configs" description: | Configs are application configurations that can be used by services. Swarm mode must be enabled for these endpoints to work. 
# System things - name: "Plugin" x-displayName: "Plugins" - name: "System" x-displayName: "System" definitions: Port: type: "object" description: "An open port on a container" required: [PrivatePort, Type] properties: IP: type: "string" format: "ip-address" description: "Host IP address that the container's port is mapped to" PrivatePort: type: "integer" format: "uint16" x-nullable: false description: "Port on the container" PublicPort: type: "integer" format: "uint16" description: "Port exposed on the host" Type: type: "string" x-nullable: false enum: ["tcp", "udp", "sctp"] example: PrivatePort: 8080 PublicPort: 80 Type: "tcp" MountPoint: type: "object" description: "A mount point inside a container" properties: Type: type: "string" Name: type: "string" Source: type: "string" Destination: type: "string" Driver: type: "string" Mode: type: "string" RW: type: "boolean" Propagation: type: "string" DeviceMapping: type: "object" description: "A device mapping between the host and container" properties: PathOnHost: type: "string" PathInContainer: type: "string" CgroupPermissions: type: "string" example: PathOnHost: "/dev/deviceName" PathInContainer: "/dev/deviceName" CgroupPermissions: "mrw" DeviceRequest: type: "object" description: "A request for devices to be sent to device drivers" properties: Driver: type: "string" example: "nvidia" Count: type: "integer" example: -1 DeviceIDs: type: "array" items: type: "string" example: - "0" - "1" - "GPU-fef8089b-4820-abfc-e83e-94318197576e" Capabilities: description: | A list of capabilities; an OR list of AND lists of capabilities. type: "array" items: type: "array" items: type: "string" example: # gpu AND nvidia AND compute - ["gpu", "nvidia", "compute"] Options: description: | Driver-specific options, specified as a key/value pairs. These options are passed directly to the driver. type: "object" additionalProperties: type: "string" ThrottleDevice: type: "object" properties: Path: description: "Device path" type: "string" Rate: description: "Rate" type: "integer" format: "int64" minimum: 0 Mount: type: "object" properties: Target: description: "Container path." type: "string" Source: description: "Mount source (e.g. a volume name, a host path)." type: "string" Type: description: | The mount type. Available types: - `bind` Mounts a file or directory from the host into the container. Must exist prior to creating the container. - `volume` Creates a volume with the given name and options (or uses a pre-existing volume with the same name and options). These are **not** removed when the container is removed. - `tmpfs` Create a tmpfs with the given options. The mount source cannot be specified for tmpfs. - `npipe` Mounts a named pipe from the host into the container. Must exist prior to creating the container. type: "string" enum: - "bind" - "volume" - "tmpfs" - "npipe" ReadOnly: description: "Whether the mount should be read-only." type: "boolean" Consistency: description: "The consistency requirement for the mount: `default`, `consistent`, `cached`, or `delegated`." type: "string" BindOptions: description: "Optional configuration for the `bind` type." type: "object" properties: Propagation: description: "A propagation mode with the value `[r]private`, `[r]shared`, or `[r]slave`." type: "string" enum: - "private" - "rprivate" - "shared" - "rshared" - "slave" - "rslave" NonRecursive: description: "Disable recursive bind mount." type: "boolean" default: false VolumeOptions: description: "Optional configuration for the `volume` type." 
type: "object" properties: NoCopy: description: "Populate volume with data from the target." type: "boolean" default: false Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" DriverConfig: description: "Map of driver specific options" type: "object" properties: Name: description: "Name of the driver to use to create the volume." type: "string" Options: description: "key/value map of driver specific options." type: "object" additionalProperties: type: "string" TmpfsOptions: description: "Optional configuration for the `tmpfs` type." type: "object" properties: SizeBytes: description: "The size for the tmpfs mount in bytes." type: "integer" format: "int64" Mode: description: "The permission mode for the tmpfs mount in an integer." type: "integer" RestartPolicy: description: | The behavior to apply when the container exits. The default is not to restart. An ever increasing delay (double the previous delay, starting at 100ms) is added before each restart to prevent flooding the server. type: "object" properties: Name: type: "string" description: | - Empty string means not to restart - `always` Always restart - `unless-stopped` Restart always except when the user has manually stopped the container - `on-failure` Restart only when the container exit code is non-zero enum: - "" - "always" - "unless-stopped" - "on-failure" MaximumRetryCount: type: "integer" description: | If `on-failure` is used, the number of times to retry before giving up. Resources: description: "A container's resources (cgroups config, ulimits, etc)" type: "object" properties: # Applicable to all platforms CpuShares: description: | An integer value representing this container's relative CPU weight versus other containers. type: "integer" Memory: description: "Memory limit in bytes." type: "integer" format: "int64" default: 0 # Applicable to UNIX platforms CgroupParent: description: | Path to `cgroups` under which the container's `cgroup` is created. If the path is not absolute, the path is considered to be relative to the `cgroups` path of the init process. Cgroups are created if they do not already exist. type: "string" BlkioWeight: description: "Block IO weight (relative weight)." type: "integer" minimum: 0 maximum: 1000 BlkioWeightDevice: description: | Block IO weight (relative device weight) in the form: ``` [{"Path": "device_path", "Weight": weight}] ``` type: "array" items: type: "object" properties: Path: type: "string" Weight: type: "integer" minimum: 0 BlkioDeviceReadBps: description: | Limit read rate (bytes per second) from a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" BlkioDeviceWriteBps: description: | Limit write rate (bytes per second) to a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" BlkioDeviceReadIOps: description: | Limit read rate (IO per second) from a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" BlkioDeviceWriteIOps: description: | Limit write rate (IO per second) to a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" CpuPeriod: description: "The length of a CPU period in microseconds." type: "integer" format: "int64" CpuQuota: description: | Microseconds of CPU time that the container can get in a CPU period. 
type: "integer" format: "int64" CpuRealtimePeriod: description: | The length of a CPU real-time period in microseconds. Set to 0 to allocate no time allocated to real-time tasks. type: "integer" format: "int64" CpuRealtimeRuntime: description: | The length of a CPU real-time runtime in microseconds. Set to 0 to allocate no time allocated to real-time tasks. type: "integer" format: "int64" CpusetCpus: description: | CPUs in which to allow execution (e.g., `0-3`, `0,1`). type: "string" example: "0-3" CpusetMems: description: | Memory nodes (MEMs) in which to allow execution (0-3, 0,1). Only effective on NUMA systems. type: "string" Devices: description: "A list of devices to add to the container." type: "array" items: $ref: "#/definitions/DeviceMapping" DeviceCgroupRules: description: "a list of cgroup rules to apply to the container" type: "array" items: type: "string" example: "c 13:* rwm" DeviceRequests: description: | A list of requests for devices to be sent to device drivers. type: "array" items: $ref: "#/definitions/DeviceRequest" KernelMemory: description: | Kernel memory limit in bytes. <p><br /></p> > **Deprecated**: This field is deprecated as the kernel 5.4 deprecated > `kmem.limit_in_bytes`. type: "integer" format: "int64" example: 209715200 KernelMemoryTCP: description: "Hard limit for kernel TCP buffer memory (in bytes)." type: "integer" format: "int64" MemoryReservation: description: "Memory soft limit in bytes." type: "integer" format: "int64" MemorySwap: description: | Total memory limit (memory + swap). Set as `-1` to enable unlimited swap. type: "integer" format: "int64" MemorySwappiness: description: | Tune a container's memory swappiness behavior. Accepts an integer between 0 and 100. type: "integer" format: "int64" minimum: 0 maximum: 100 NanoCpus: description: "CPU quota in units of 10<sup>-9</sup> CPUs." type: "integer" format: "int64" OomKillDisable: description: "Disable OOM Killer for the container." type: "boolean" Init: description: | Run an init inside the container that forwards signals and reaps processes. This field is omitted if empty, and the default (as configured on the daemon) is used. type: "boolean" x-nullable: true PidsLimit: description: | Tune a container's PIDs limit. Set `0` or `-1` for unlimited, or `null` to not change. type: "integer" format: "int64" x-nullable: true Ulimits: description: | A list of resource limits to set in the container. For example: ``` {"Name": "nofile", "Soft": 1024, "Hard": 2048} ``` type: "array" items: type: "object" properties: Name: description: "Name of ulimit" type: "string" Soft: description: "Soft limit" type: "integer" Hard: description: "Hard limit" type: "integer" # Applicable to Windows CpuCount: description: | The number of usable CPUs (Windows only). On Windows Server containers, the processor resource controls are mutually exclusive. The order of precedence is `CPUCount` first, then `CPUShares`, and `CPUPercent` last. type: "integer" format: "int64" CpuPercent: description: | The usable percentage of the available CPUs (Windows only). On Windows Server containers, the processor resource controls are mutually exclusive. The order of precedence is `CPUCount` first, then `CPUShares`, and `CPUPercent` last. type: "integer" format: "int64" IOMaximumIOps: description: "Maximum IOps for the container system drive (Windows only)" type: "integer" format: "int64" IOMaximumBandwidth: description: | Maximum IO in bytes per second for the container system drive (Windows only). 
type: "integer" format: "int64" Limit: description: | An object describing a limit on resources which can be requested by a task. type: "object" properties: NanoCPUs: type: "integer" format: "int64" example: 4000000000 MemoryBytes: type: "integer" format: "int64" example: 8272408576 Pids: description: | Limits the maximum number of PIDs in the container. Set `0` for unlimited. type: "integer" format: "int64" default: 0 example: 100 ResourceObject: description: | An object describing the resources which can be advertised by a node and requested by a task. type: "object" properties: NanoCPUs: type: "integer" format: "int64" example: 4000000000 MemoryBytes: type: "integer" format: "int64" example: 8272408576 GenericResources: $ref: "#/definitions/GenericResources" GenericResources: description: | User-defined resources can be either Integer resources (e.g, `SSD=3`) or String resources (e.g, `GPU=UUID1`). type: "array" items: type: "object" properties: NamedResourceSpec: type: "object" properties: Kind: type: "string" Value: type: "string" DiscreteResourceSpec: type: "object" properties: Kind: type: "string" Value: type: "integer" format: "int64" example: - DiscreteResourceSpec: Kind: "SSD" Value: 3 - NamedResourceSpec: Kind: "GPU" Value: "UUID1" - NamedResourceSpec: Kind: "GPU" Value: "UUID2" HealthConfig: description: "A test to perform to check that the container is healthy." type: "object" properties: Test: description: | The test to perform. Possible values are: - `[]` inherit healthcheck from image or parent image - `["NONE"]` disable healthcheck - `["CMD", args...]` exec arguments directly - `["CMD-SHELL", command]` run command with system's default shell type: "array" items: type: "string" Interval: description: | The time to wait between checks in nanoseconds. It should be 0 or at least 1000000 (1 ms). 0 means inherit. type: "integer" Timeout: description: | The time to wait before considering the check to have hung. It should be 0 or at least 1000000 (1 ms). 0 means inherit. type: "integer" Retries: description: | The number of consecutive failures needed to consider a container as unhealthy. 0 means inherit. type: "integer" StartPeriod: description: | Start period for the container to initialize before starting health-retries countdown in nanoseconds. It should be 0 or at least 1000000 (1 ms). 0 means inherit. type: "integer" Health: description: | Health stores information about the container's healthcheck results. type: "object" properties: Status: description: | Status is one of `none`, `starting`, `healthy` or `unhealthy` - "none" Indicates there is no healthcheck - "starting" Starting indicates that the container is not yet ready - "healthy" Healthy indicates that the container is running correctly - "unhealthy" Unhealthy indicates that the container has a problem type: "string" enum: - "none" - "starting" - "healthy" - "unhealthy" example: "healthy" FailingStreak: description: "FailingStreak is the number of consecutive failures" type: "integer" example: 0 Log: type: "array" description: | Log contains the last few results (oldest first) items: x-nullable: true $ref: "#/definitions/HealthcheckResult" HealthcheckResult: description: | HealthcheckResult stores information about a single run of a healthcheck probe type: "object" properties: Start: description: | Date and time at which this check started in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. 
type: "string" format: "date-time" example: "2020-01-04T10:44:24.496525531Z" End: description: | Date and time at which this check ended in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2020-01-04T10:45:21.364524523Z" ExitCode: description: | ExitCode meanings: - `0` healthy - `1` unhealthy - `2` reserved (considered unhealthy) - other values: error running probe type: "integer" example: 0 Output: description: "Output from last check" type: "string" HostConfig: description: "Container configuration that depends on the host we are running on" allOf: - $ref: "#/definitions/Resources" - type: "object" properties: # Applicable to all platforms Binds: type: "array" description: | A list of volume bindings for this container. Each volume binding is a string in one of these forms: - `host-src:container-dest[:options]` to bind-mount a host path into the container. Both `host-src`, and `container-dest` must be an _absolute_ path. - `volume-name:container-dest[:options]` to bind-mount a volume managed by a volume driver into the container. `container-dest` must be an _absolute_ path. `options` is an optional, comma-delimited list of: - `nocopy` disables automatic copying of data from the container path to the volume. The `nocopy` flag only applies to named volumes. - `[ro|rw]` mounts a volume read-only or read-write, respectively. If omitted or set to `rw`, volumes are mounted read-write. - `[z|Z]` applies SELinux labels to allow or deny multiple containers to read and write to the same volume. - `z`: a _shared_ content label is applied to the content. This label indicates that multiple containers can share the volume content, for both reading and writing. - `Z`: a _private unshared_ label is applied to the content. This label indicates that only the current container can use a private volume. Labeling systems such as SELinux require proper labels to be placed on volume content that is mounted into a container. Without a label, the security system can prevent a container's processes from using the content. By default, the labels set by the host operating system are not modified. - `[[r]shared|[r]slave|[r]private]` specifies mount [propagation behavior](https://www.kernel.org/doc/Documentation/filesystems/sharedsubtree.txt). This only applies to bind-mounted volumes, not internal volumes or named volumes. Mount propagation requires the source mount point (the location where the source directory is mounted in the host operating system) to have the correct propagation properties. For shared volumes, the source mount point must be set to `shared`. For slave volumes, the mount must be set to either `shared` or `slave`. items: type: "string" ContainerIDFile: type: "string" description: "Path to a file where the container ID is written" LogConfig: type: "object" description: "The logging configuration for this container" properties: Type: type: "string" enum: - "json-file" - "syslog" - "journald" - "gelf" - "fluentd" - "awslogs" - "splunk" - "etwlogs" - "none" Config: type: "object" additionalProperties: type: "string" NetworkMode: type: "string" description: | Network mode to use for this container. Supported standard values are: `bridge`, `host`, `none`, and `container:<name|id>`. Any other value is taken as a custom network's name to which this container should connect to. 
PortBindings: $ref: "#/definitions/PortMap" RestartPolicy: $ref: "#/definitions/RestartPolicy" AutoRemove: type: "boolean" description: | Automatically remove the container when the container's process exits. This has no effect if `RestartPolicy` is set. VolumeDriver: type: "string" description: "Driver that this container uses to mount volumes." VolumesFrom: type: "array" description: | A list of volumes to inherit from another container, specified in the form `<container name>[:<ro|rw>]`. items: type: "string" Mounts: description: | Specification for mounts to be added to the container. type: "array" items: $ref: "#/definitions/Mount" # Applicable to UNIX platforms CapAdd: type: "array" description: | A list of kernel capabilities to add to the container. Conflicts with option 'Capabilities'. items: type: "string" CapDrop: type: "array" description: | A list of kernel capabilities to drop from the container. Conflicts with option 'Capabilities'. items: type: "string" CgroupnsMode: type: "string" enum: - "private" - "host" description: | cgroup namespace mode for the container. Possible values are: - `"private"`: the container runs in its own private cgroup namespace - `"host"`: use the host system's cgroup namespace If not specified, the daemon default is used, which can either be `"private"` or `"host"`, depending on daemon version, kernel support and configuration. Dns: type: "array" description: "A list of DNS servers for the container to use." items: type: "string" DnsOptions: type: "array" description: "A list of DNS options." items: type: "string" DnsSearch: type: "array" description: "A list of DNS search domains." items: type: "string" ExtraHosts: type: "array" description: | A list of hostnames/IP mappings to add to the container's `/etc/hosts` file. Specified in the form `["hostname:IP"]`. items: type: "string" GroupAdd: type: "array" description: | A list of additional groups that the container process will run as. items: type: "string" IpcMode: type: "string" description: | IPC sharing mode for the container. Possible values are: - `"none"`: own private IPC namespace, with /dev/shm not mounted - `"private"`: own private IPC namespace - `"shareable"`: own private IPC namespace, with a possibility to share it with other containers - `"container:<name|id>"`: join another (shareable) container's IPC namespace - `"host"`: use the host system's IPC namespace If not specified, daemon default is used, which can either be `"private"` or `"shareable"`, depending on daemon version and configuration. Cgroup: type: "string" description: "Cgroup to use for the container." Links: type: "array" description: | A list of links for the container in the form `container_name:alias`. items: type: "string" OomScoreAdj: type: "integer" description: | An integer value containing the score given to the container in order to tune OOM killer preferences. example: 500 PidMode: type: "string" description: | Set the PID (Process) Namespace mode for the container. It can be either: - `"container:<name|id>"`: joins another container's PID namespace - `"host"`: use the host's PID namespace inside the container Privileged: type: "boolean" description: "Gives the container full access to the host." PublishAllPorts: type: "boolean" description: | Allocates an ephemeral host port for all of a container's exposed ports. Ports are de-allocated when the container stops and allocated when the container starts. The allocated port might be changed when restarting the container. 
The port is selected from the ephemeral port range that depends on the kernel. For example, on Linux the range is defined by `/proc/sys/net/ipv4/ip_local_port_range`. ReadonlyRootfs: type: "boolean" description: "Mount the container's root filesystem as read only." SecurityOpt: type: "array" description: "A list of string values to customize labels for MLS systems, such as SELinux." items: type: "string" StorageOpt: type: "object" description: | Storage driver options for this container, in the form `{"size": "120G"}`. additionalProperties: type: "string" Tmpfs: type: "object" description: | A map of container directories which should be replaced by tmpfs mounts, and their corresponding mount options. For example: ``` { "/run": "rw,noexec,nosuid,size=65536k" } ``` additionalProperties: type: "string" UTSMode: type: "string" description: "UTS namespace to use for the container." UsernsMode: type: "string" description: | Sets the usernamespace mode for the container when usernamespace remapping option is enabled. ShmSize: type: "integer" description: | Size of `/dev/shm` in bytes. If omitted, the system uses 64MB. minimum: 0 Sysctls: type: "object" description: | A list of kernel parameters (sysctls) to set in the container. For example: ``` {"net.ipv4.ip_forward": "1"} ``` additionalProperties: type: "string" Runtime: type: "string" description: "Runtime to use with this container." # Applicable to Windows ConsoleSize: type: "array" description: | Initial console size, as an `[height, width]` array. (Windows only) minItems: 2 maxItems: 2 items: type: "integer" minimum: 0 Isolation: type: "string" description: | Isolation technology of the container. (Windows only) enum: - "default" - "process" - "hyperv" MaskedPaths: type: "array" description: | The list of paths to be masked inside the container (this overrides the default set of paths). items: type: "string" ReadonlyPaths: type: "array" description: | The list of paths to be set as read-only inside the container (this overrides the default set of paths). items: type: "string" ContainerConfig: description: "Configuration for a container that is portable between hosts" type: "object" properties: Hostname: description: "The hostname to use for the container, as a valid RFC 1123 hostname." type: "string" Domainname: description: "The domain name to use for the container." type: "string" User: description: "The user that commands are run as inside the container." type: "string" AttachStdin: description: "Whether to attach to `stdin`." type: "boolean" default: false AttachStdout: description: "Whether to attach to `stdout`." type: "boolean" default: true AttachStderr: description: "Whether to attach to `stderr`." type: "boolean" default: true ExposedPorts: description: | An object mapping ports to an empty object in the form: `{"<port>/<tcp|udp|sctp>": {}}` type: "object" additionalProperties: type: "object" enum: - {} default: {} Tty: description: | Attach standard streams to a TTY, including `stdin` if it is not closed. type: "boolean" default: false OpenStdin: description: "Open `stdin`" type: "boolean" default: false StdinOnce: description: "Close `stdin` after one attached client disconnects" type: "boolean" default: false Env: description: | A list of environment variables to set inside the container in the form `["VAR=value", ...]`. A variable without `=` is removed from the environment, rather than to have an empty value. type: "array" items: type: "string" Cmd: description: | Command to run specified as a string or an array of strings. 
type: "array" items: type: "string" Healthcheck: $ref: "#/definitions/HealthConfig" ArgsEscaped: description: "Command is already escaped (Windows only)" type: "boolean" Image: description: | The name of the image to use when creating the container/ type: "string" Volumes: description: | An object mapping mount point paths inside the container to empty objects. type: "object" additionalProperties: type: "object" enum: - {} default: {} WorkingDir: description: "The working directory for commands to run in." type: "string" Entrypoint: description: | The entry point for the container as a string or an array of strings. If the array consists of exactly one empty string (`[""]`) then the entry point is reset to system default (i.e., the entry point used by docker when there is no `ENTRYPOINT` instruction in the `Dockerfile`). type: "array" items: type: "string" NetworkDisabled: description: "Disable networking for the container." type: "boolean" MacAddress: description: "MAC address of the container." type: "string" OnBuild: description: | `ONBUILD` metadata that were defined in the image's `Dockerfile`. type: "array" items: type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" StopSignal: description: | Signal to stop a container as a string or unsigned integer. type: "string" default: "SIGTERM" StopTimeout: description: "Timeout to stop a container in seconds." type: "integer" default: 10 Shell: description: | Shell for when `RUN`, `CMD`, and `ENTRYPOINT` uses a shell. type: "array" items: type: "string" NetworkingConfig: description: | NetworkingConfig represents the container's networking configuration for each of its interfaces. It is used for the networking configs specified in the `docker create` and `docker network connect` commands. type: "object" properties: EndpointsConfig: description: | A mapping of network name to endpoint configuration for that network. type: "object" additionalProperties: $ref: "#/definitions/EndpointSettings" example: # putting an example here, instead of using the example values from # /definitions/EndpointSettings, because containers/create currently # does not support attaching to multiple networks, so the example request # would be confusing if it showed that multiple networks can be contained # in the EndpointsConfig. # TODO remove once we support multiple networks on container create (see https://github.com/moby/moby/blob/07e6b843594e061f82baa5fa23c2ff7d536c2a05/daemon/create.go#L323) EndpointsConfig: isolated_nw: IPAMConfig: IPv4Address: "172.20.30.33" IPv6Address: "2001:db8:abcd::3033" LinkLocalIPs: - "169.254.34.68" - "fe80::3468" Links: - "container_1" - "container_2" Aliases: - "server_x" - "server_y" NetworkSettings: description: "NetworkSettings exposes the network settings in the API" type: "object" properties: Bridge: description: Name of the network's bridge (for example, `docker0`). type: "string" example: "docker0" SandboxID: description: SandboxID uniquely represents a container's network stack. type: "string" example: "9d12daf2c33f5959c8bf90aa513e4f65b561738661003029ec84830cd503a0c3" HairpinMode: description: | Indicates if hairpin NAT should be enabled on the virtual interface. type: "boolean" example: false LinkLocalIPv6Address: description: IPv6 unicast address using the link-local prefix. type: "string" example: "fe80::42:acff:fe11:1" LinkLocalIPv6PrefixLen: description: Prefix length of the IPv6 unicast address. 
type: "integer" example: "64" Ports: $ref: "#/definitions/PortMap" SandboxKey: description: SandboxKey identifies the sandbox type: "string" example: "/var/run/docker/netns/8ab54b426c38" # TODO is SecondaryIPAddresses actually used? SecondaryIPAddresses: description: "" type: "array" items: $ref: "#/definitions/Address" x-nullable: true # TODO is SecondaryIPv6Addresses actually used? SecondaryIPv6Addresses: description: "" type: "array" items: $ref: "#/definitions/Address" x-nullable: true # TODO properties below are part of DefaultNetworkSettings, which is # marked as deprecated since Docker 1.9 and to be removed in Docker v17.12 EndpointID: description: | EndpointID uniquely represents a service endpoint in a Sandbox. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "b88f5b905aabf2893f3cbc4ee42d1ea7980bbc0a92e2c8922b1e1795298afb0b" Gateway: description: | Gateway address for the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "172.17.0.1" GlobalIPv6Address: description: | Global IPv6 address for the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "2001:db8::5689" GlobalIPv6PrefixLen: description: | Mask length of the global IPv6 address. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "integer" example: 64 IPAddress: description: | IPv4 address for the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "172.17.0.4" IPPrefixLen: description: | Mask length of the IPv4 address. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "integer" example: 16 IPv6Gateway: description: | IPv6 gateway address for this network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. 
Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "2001:db8:2::100" MacAddress: description: | MAC address for the container on the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "02:42:ac:11:00:04" Networks: description: | Information about all networks that the container is connected to. type: "object" additionalProperties: $ref: "#/definitions/EndpointSettings" Address: description: Address represents an IPv4 or IPv6 IP address. type: "object" properties: Addr: description: IP address. type: "string" PrefixLen: description: Mask length of the IP address. type: "integer" PortMap: description: | PortMap describes the mapping of container ports to host ports, using the container's port-number and protocol as key in the format `<port>/<protocol>`, for example, `80/udp`. If a container's port is mapped for multiple protocols, separate entries are added to the mapping table. type: "object" additionalProperties: type: "array" x-nullable: true items: $ref: "#/definitions/PortBinding" example: "443/tcp": - HostIp: "127.0.0.1" HostPort: "4443" "80/tcp": - HostIp: "0.0.0.0" HostPort: "80" - HostIp: "0.0.0.0" HostPort: "8080" "80/udp": - HostIp: "0.0.0.0" HostPort: "80" "53/udp": - HostIp: "0.0.0.0" HostPort: "53" "2377/tcp": null PortBinding: description: | PortBinding represents a binding between a host IP address and a host port. type: "object" properties: HostIp: description: "Host IP address that the container's port is mapped to." type: "string" example: "127.0.0.1" HostPort: description: "Host port number that the container's port is mapped to." type: "string" example: "4443" GraphDriverData: description: "Information about a container's graph driver." 
type: "object" required: [Name, Data] properties: Name: type: "string" x-nullable: false Data: type: "object" x-nullable: false additionalProperties: type: "string" Image: type: "object" required: - Id - Parent - Comment - Created - Container - DockerVersion - Author - Architecture - Os - Size - VirtualSize - GraphDriver - RootFS properties: Id: type: "string" x-nullable: false RepoTags: type: "array" items: type: "string" RepoDigests: type: "array" items: type: "string" Parent: type: "string" x-nullable: false Comment: type: "string" x-nullable: false Created: type: "string" x-nullable: false Container: type: "string" x-nullable: false ContainerConfig: $ref: "#/definitions/ContainerConfig" DockerVersion: type: "string" x-nullable: false Author: type: "string" x-nullable: false Config: $ref: "#/definitions/ContainerConfig" Architecture: type: "string" x-nullable: false Os: type: "string" x-nullable: false OsVersion: type: "string" Size: type: "integer" format: "int64" x-nullable: false VirtualSize: type: "integer" format: "int64" x-nullable: false GraphDriver: $ref: "#/definitions/GraphDriverData" RootFS: type: "object" required: [Type] properties: Type: type: "string" x-nullable: false Layers: type: "array" items: type: "string" BaseLayer: type: "string" Metadata: type: "object" properties: LastTagTime: type: "string" format: "dateTime" ImageSummary: type: "object" required: - Id - ParentId - RepoTags - RepoDigests - Created - Size - SharedSize - VirtualSize - Labels - Containers properties: Id: type: "string" x-nullable: false ParentId: type: "string" x-nullable: false RepoTags: type: "array" x-nullable: false items: type: "string" RepoDigests: type: "array" x-nullable: false items: type: "string" Created: type: "integer" x-nullable: false Size: type: "integer" x-nullable: false SharedSize: type: "integer" x-nullable: false VirtualSize: type: "integer" x-nullable: false Labels: type: "object" x-nullable: false additionalProperties: type: "string" Containers: x-nullable: false type: "integer" AuthConfig: type: "object" properties: username: type: "string" password: type: "string" email: type: "string" serveraddress: type: "string" example: username: "hannibal" password: "xxxx" serveraddress: "https://index.docker.io/v1/" ProcessConfig: type: "object" properties: privileged: type: "boolean" user: type: "string" tty: type: "boolean" entrypoint: type: "string" arguments: type: "array" items: type: "string" Volume: type: "object" required: [Name, Driver, Mountpoint, Labels, Scope, Options] properties: Name: type: "string" description: "Name of the volume." x-nullable: false Driver: type: "string" description: "Name of the volume driver used by the volume." x-nullable: false Mountpoint: type: "string" description: "Mount path of the volume on the host." x-nullable: false CreatedAt: type: "string" format: "dateTime" description: "Date/Time the volume was created." Status: type: "object" description: | Low-level details about the volume, provided by the volume driver. Details are returned as a map with key/value pairs: `{"key":"value","key2":"value2"}`. The `Status` field is optional, and is omitted if the volume driver does not support this feature. additionalProperties: type: "object" Labels: type: "object" description: "User-defined key/value metadata." x-nullable: false additionalProperties: type: "string" Scope: type: "string" description: | The level at which the volume exists. Either `global` for cluster-wide, or `local` for machine level. 
default: "local" x-nullable: false enum: ["local", "global"] Options: type: "object" description: | The driver specific options used when creating the volume. additionalProperties: type: "string" UsageData: type: "object" x-nullable: true required: [Size, RefCount] description: | Usage details about the volume. This information is used by the `GET /system/df` endpoint, and omitted in other endpoints. properties: Size: type: "integer" default: -1 description: | Amount of disk space used by the volume (in bytes). This information is only available for volumes created with the `"local"` volume driver. For volumes created with other volume drivers, this field is set to `-1` ("not available") x-nullable: false RefCount: type: "integer" default: -1 description: | The number of containers referencing this volume. This field is set to `-1` if the reference-count is not available. x-nullable: false example: Name: "tardis" Driver: "custom" Mountpoint: "/var/lib/docker/volumes/tardis" Status: hello: "world" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Scope: "local" CreatedAt: "2016-06-07T20:31:11.853781916Z" Network: type: "object" properties: Name: type: "string" Id: type: "string" Created: type: "string" format: "dateTime" Scope: type: "string" Driver: type: "string" EnableIPv6: type: "boolean" IPAM: $ref: "#/definitions/IPAM" Internal: type: "boolean" Attachable: type: "boolean" Ingress: type: "boolean" Containers: type: "object" additionalProperties: $ref: "#/definitions/NetworkContainer" Options: type: "object" additionalProperties: type: "string" Labels: type: "object" additionalProperties: type: "string" example: Name: "net01" Id: "7d86d31b1478e7cca9ebed7e73aa0fdeec46c5ca29497431d3007d2d9e15ed99" Created: "2016-10-19T04:33:30.360899459Z" Scope: "local" Driver: "bridge" EnableIPv6: false IPAM: Driver: "default" Config: - Subnet: "172.19.0.0/16" Gateway: "172.19.0.1" Options: foo: "bar" Internal: false Attachable: false Ingress: false Containers: 19a4d5d687db25203351ed79d478946f861258f018fe384f229f2efa4b23513c: Name: "test" EndpointID: "628cadb8bcb92de107b2a1e516cbffe463e321f548feb37697cce00ad694f21a" MacAddress: "02:42:ac:13:00:02" IPv4Address: "172.19.0.2/16" IPv6Address: "" Options: com.docker.network.bridge.default_bridge: "true" com.docker.network.bridge.enable_icc: "true" com.docker.network.bridge.enable_ip_masquerade: "true" com.docker.network.bridge.host_binding_ipv4: "0.0.0.0" com.docker.network.bridge.name: "docker0" com.docker.network.driver.mtu: "1500" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" IPAM: type: "object" properties: Driver: description: "Name of the IPAM driver to use." type: "string" default: "default" Config: description: | List of IPAM configuration options, specified as a map: ``` {"Subnet": <CIDR>, "IPRange": <CIDR>, "Gateway": <IP address>, "AuxAddress": <device_name:IP address>} ``` type: "array" items: type: "object" additionalProperties: type: "string" Options: description: "Driver-specific options, specified as a map." 
type: "object" additionalProperties: type: "string" NetworkContainer: type: "object" properties: Name: type: "string" EndpointID: type: "string" MacAddress: type: "string" IPv4Address: type: "string" IPv6Address: type: "string" BuildInfo: type: "object" properties: id: type: "string" stream: type: "string" error: type: "string" errorDetail: $ref: "#/definitions/ErrorDetail" status: type: "string" progress: type: "string" progressDetail: $ref: "#/definitions/ProgressDetail" aux: $ref: "#/definitions/ImageID" BuildCache: type: "object" properties: ID: type: "string" Parent: type: "string" Type: type: "string" Description: type: "string" InUse: type: "boolean" Shared: type: "boolean" Size: description: | Amount of disk space used by the build cache (in bytes). type: "integer" CreatedAt: description: | Date and time at which the build cache was created in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2016-08-18T10:44:24.496525531Z" LastUsedAt: description: | Date and time at which the build cache was last used in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" x-nullable: true example: "2017-08-09T07:09:37.632105588Z" UsageCount: type: "integer" ImageID: type: "object" description: "Image ID or Digest" properties: ID: type: "string" example: ID: "sha256:85f05633ddc1c50679be2b16a0479ab6f7637f8884e0cfe0f4d20e1ebb3d6e7c" CreateImageInfo: type: "object" properties: id: type: "string" error: type: "string" status: type: "string" progress: type: "string" progressDetail: $ref: "#/definitions/ProgressDetail" PushImageInfo: type: "object" properties: error: type: "string" status: type: "string" progress: type: "string" progressDetail: $ref: "#/definitions/ProgressDetail" ErrorDetail: type: "object" properties: code: type: "integer" message: type: "string" ProgressDetail: type: "object" properties: current: type: "integer" total: type: "integer" ErrorResponse: description: "Represents an error." type: "object" required: ["message"] properties: message: description: "The error message." type: "string" x-nullable: false example: message: "Something went wrong." IdResponse: description: "Response to an API call that returns just an Id" type: "object" required: ["Id"] properties: Id: description: "The id of the newly created object." type: "string" x-nullable: false EndpointSettings: description: "Configuration for a network endpoint." type: "object" properties: # Configurations IPAMConfig: $ref: "#/definitions/EndpointIPAMConfig" Links: type: "array" items: type: "string" example: - "container_1" - "container_2" Aliases: type: "array" items: type: "string" example: - "server_x" - "server_y" # Operational data NetworkID: description: | Unique ID of the network. type: "string" example: "08754567f1f40222263eab4102e1c733ae697e8e354aa9cd6e18d7402835292a" EndpointID: description: | Unique ID for the service endpoint in a Sandbox. type: "string" example: "b88f5b905aabf2893f3cbc4ee42d1ea7980bbc0a92e2c8922b1e1795298afb0b" Gateway: description: | Gateway address for this network. type: "string" example: "172.17.0.1" IPAddress: description: | IPv4 address. type: "string" example: "172.17.0.4" IPPrefixLen: description: | Mask length of the IPv4 address. type: "integer" example: 16 IPv6Gateway: description: | IPv6 gateway address. type: "string" example: "2001:db8:2::100" GlobalIPv6Address: description: | Global IPv6 address. 
type: "string" example: "2001:db8::5689" GlobalIPv6PrefixLen: description: | Mask length of the global IPv6 address. type: "integer" format: "int64" example: 64 MacAddress: description: | MAC address for the endpoint on this network. type: "string" example: "02:42:ac:11:00:04" DriverOpts: description: | DriverOpts is a mapping of driver options and values. These options are passed directly to the driver and are driver specific. type: "object" x-nullable: true additionalProperties: type: "string" example: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" EndpointIPAMConfig: description: | EndpointIPAMConfig represents an endpoint's IPAM configuration. type: "object" x-nullable: true properties: IPv4Address: type: "string" example: "172.20.30.33" IPv6Address: type: "string" example: "2001:db8:abcd::3033" LinkLocalIPs: type: "array" items: type: "string" example: - "169.254.34.68" - "fe80::3468" PluginMount: type: "object" x-nullable: false required: [Name, Description, Settable, Source, Destination, Type, Options] properties: Name: type: "string" x-nullable: false example: "some-mount" Description: type: "string" x-nullable: false example: "This is a mount that's used by the plugin." Settable: type: "array" items: type: "string" Source: type: "string" example: "/var/lib/docker/plugins/" Destination: type: "string" x-nullable: false example: "/mnt/state" Type: type: "string" x-nullable: false example: "bind" Options: type: "array" items: type: "string" example: - "rbind" - "rw" PluginDevice: type: "object" required: [Name, Description, Settable, Path] x-nullable: false properties: Name: type: "string" x-nullable: false Description: type: "string" x-nullable: false Settable: type: "array" items: type: "string" Path: type: "string" example: "/dev/fuse" PluginEnv: type: "object" x-nullable: false required: [Name, Description, Settable, Value] properties: Name: x-nullable: false type: "string" Description: x-nullable: false type: "string" Settable: type: "array" items: type: "string" Value: type: "string" PluginInterfaceType: type: "object" x-nullable: false required: [Prefix, Capability, Version] properties: Prefix: type: "string" x-nullable: false Capability: type: "string" x-nullable: false Version: type: "string" x-nullable: false Plugin: description: "A plugin for the Engine API" type: "object" required: [Settings, Enabled, Config, Name] properties: Id: type: "string" example: "5724e2c8652da337ab2eedd19fc6fc0ec908e4bd907c7421bf6a8dfc70c4c078" Name: type: "string" x-nullable: false example: "tiborvass/sample-volume-plugin" Enabled: description: True if the plugin is running. False if the plugin is not running, only installed. type: "boolean" x-nullable: false example: true Settings: description: "Settings that can be modified by users." type: "object" x-nullable: false required: [Args, Devices, Env, Mounts] properties: Mounts: type: "array" items: $ref: "#/definitions/PluginMount" Env: type: "array" items: type: "string" example: - "DEBUG=0" Args: type: "array" items: type: "string" Devices: type: "array" items: $ref: "#/definitions/PluginDevice" PluginReference: description: "plugin remote reference used to push/pull the plugin" type: "string" x-nullable: false example: "localhost:5000/tiborvass/sample-volume-plugin:latest" Config: description: "The config of a plugin." 
type: "object" x-nullable: false required: - Description - Documentation - Interface - Entrypoint - WorkDir - Network - Linux - PidHost - PropagatedMount - IpcHost - Mounts - Env - Args properties: DockerVersion: description: "Docker Version used to create the plugin" type: "string" x-nullable: false example: "17.06.0-ce" Description: type: "string" x-nullable: false example: "A sample volume plugin for Docker" Documentation: type: "string" x-nullable: false example: "https://docs.docker.com/engine/extend/plugins/" Interface: description: "The interface between Docker and the plugin" x-nullable: false type: "object" required: [Types, Socket] properties: Types: type: "array" items: $ref: "#/definitions/PluginInterfaceType" example: - "docker.volumedriver/1.0" Socket: type: "string" x-nullable: false example: "plugins.sock" ProtocolScheme: type: "string" example: "some.protocol/v1.0" description: "Protocol to use for clients connecting to the plugin." enum: - "" - "moby.plugins.http/v1" Entrypoint: type: "array" items: type: "string" example: - "/usr/bin/sample-volume-plugin" - "/data" WorkDir: type: "string" x-nullable: false example: "/bin/" User: type: "object" x-nullable: false properties: UID: type: "integer" format: "uint32" example: 1000 GID: type: "integer" format: "uint32" example: 1000 Network: type: "object" x-nullable: false required: [Type] properties: Type: x-nullable: false type: "string" example: "host" Linux: type: "object" x-nullable: false required: [Capabilities, AllowAllDevices, Devices] properties: Capabilities: type: "array" items: type: "string" example: - "CAP_SYS_ADMIN" - "CAP_SYSLOG" AllowAllDevices: type: "boolean" x-nullable: false example: false Devices: type: "array" items: $ref: "#/definitions/PluginDevice" PropagatedMount: type: "string" x-nullable: false example: "/mnt/volumes" IpcHost: type: "boolean" x-nullable: false example: false PidHost: type: "boolean" x-nullable: false example: false Mounts: type: "array" items: $ref: "#/definitions/PluginMount" Env: type: "array" items: $ref: "#/definitions/PluginEnv" example: - Name: "DEBUG" Description: "If set, prints debug messages" Settable: null Value: "0" Args: type: "object" x-nullable: false required: [Name, Description, Settable, Value] properties: Name: x-nullable: false type: "string" example: "args" Description: x-nullable: false type: "string" example: "command line arguments" Settable: type: "array" items: type: "string" Value: type: "array" items: type: "string" rootfs: type: "object" properties: type: type: "string" example: "layers" diff_ids: type: "array" items: type: "string" example: - "sha256:675532206fbf3030b8458f88d6e26d4eb1577688a25efec97154c94e8b6b4887" - "sha256:e216a057b1cb1efc11f8a268f37ef62083e70b1b38323ba252e25ac88904a7e8" ObjectVersion: description: | The version number of the object such as node, service, etc. This is needed to avoid conflicting writes. The client must send the version number along with the modified specification when updating these objects. This approach ensures safe concurrency and determinism in that the change on the object may not be applied if the version number has changed from the last read. In other words, if two update requests specify the same base version, only one of the requests can succeed. As a result, two separate update requests that happen at the same time will not unintentionally overwrite each other. 
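  # Hypothetical usage sketch (comment only, not part of the schema): the
  # version Index read from an object is echoed back on update, e.g.
  #
  #   GET  /services/{id}                         -> { ..., "Version": { "Index": 373531 }, ... }
  #   POST /services/{id}/update?version=373531      (body: the modified ServiceSpec)
  #
  # If the Index changed in the meantime, the update is rejected and the
  # client should re-read the object and retry. See the ServiceUpdate and
  # NodeUpdate operations for the authoritative parameter definitions.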
type: "object" properties: Index: type: "integer" format: "uint64" example: 373531 NodeSpec: type: "object" properties: Name: description: "Name for the node." type: "string" example: "my-node" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" Role: description: "Role of the node." type: "string" enum: - "worker" - "manager" example: "manager" Availability: description: "Availability of the node." type: "string" enum: - "active" - "pause" - "drain" example: "active" example: Availability: "active" Name: "node-name" Role: "manager" Labels: foo: "bar" Node: type: "object" properties: ID: type: "string" example: "24ifsmvkjbyhk" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: description: | Date and time at which the node was added to the swarm in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2016-08-18T10:44:24.496525531Z" UpdatedAt: description: | Date and time at which the node was last updated in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2017-08-09T07:09:37.632105588Z" Spec: $ref: "#/definitions/NodeSpec" Description: $ref: "#/definitions/NodeDescription" Status: $ref: "#/definitions/NodeStatus" ManagerStatus: $ref: "#/definitions/ManagerStatus" NodeDescription: description: | NodeDescription encapsulates the properties of the Node as reported by the agent. type: "object" properties: Hostname: type: "string" example: "bf3067039e47" Platform: $ref: "#/definitions/Platform" Resources: $ref: "#/definitions/ResourceObject" Engine: $ref: "#/definitions/EngineDescription" TLSInfo: $ref: "#/definitions/TLSInfo" Platform: description: | Platform represents the platform (Arch/OS). type: "object" properties: Architecture: description: | Architecture represents the hardware architecture (for example, `x86_64`). type: "string" example: "x86_64" OS: description: | OS represents the Operating System (for example, `linux` or `windows`). type: "string" example: "linux" EngineDescription: description: "EngineDescription provides information about an engine." type: "object" properties: EngineVersion: type: "string" example: "17.06.0" Labels: type: "object" additionalProperties: type: "string" example: foo: "bar" Plugins: type: "array" items: type: "object" properties: Type: type: "string" Name: type: "string" example: - Type: "Log" Name: "awslogs" - Type: "Log" Name: "fluentd" - Type: "Log" Name: "gcplogs" - Type: "Log" Name: "gelf" - Type: "Log" Name: "journald" - Type: "Log" Name: "json-file" - Type: "Log" Name: "logentries" - Type: "Log" Name: "splunk" - Type: "Log" Name: "syslog" - Type: "Network" Name: "bridge" - Type: "Network" Name: "host" - Type: "Network" Name: "ipvlan" - Type: "Network" Name: "macvlan" - Type: "Network" Name: "null" - Type: "Network" Name: "overlay" - Type: "Volume" Name: "local" - Type: "Volume" Name: "localhost:5000/vieux/sshfs:latest" - Type: "Volume" Name: "vieux/sshfs:latest" TLSInfo: description: | Information about the issuer of leaf TLS certificates and the trusted root CA certificate. type: "object" properties: TrustRoot: description: | The root CA certificate(s) that are used to validate leaf TLS certificates. type: "string" CertIssuerSubject: description: The base64-url-safe-encoded raw subject bytes of the issuer. type: "string" CertIssuerPublicKey: description: | The base64-url-safe-encoded raw public key bytes of the issuer. 
type: "string" example: TrustRoot: | -----BEGIN CERTIFICATE----- MIIBajCCARCgAwIBAgIUbYqrLSOSQHoxD8CwG6Bi2PJi9c8wCgYIKoZIzj0EAwIw EzERMA8GA1UEAxMIc3dhcm0tY2EwHhcNMTcwNDI0MjE0MzAwWhcNMzcwNDE5MjE0 MzAwWjATMREwDwYDVQQDEwhzd2FybS1jYTBZMBMGByqGSM49AgEGCCqGSM49AwEH A0IABJk/VyMPYdaqDXJb/VXh5n/1Yuv7iNrxV3Qb3l06XD46seovcDWs3IZNV1lf 3Skyr0ofcchipoiHkXBODojJydSjQjBAMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMB Af8EBTADAQH/MB0GA1UdDgQWBBRUXxuRcnFjDfR/RIAUQab8ZV/n4jAKBggqhkjO PQQDAgNIADBFAiAy+JTe6Uc3KyLCMiqGl2GyWGQqQDEcO3/YG36x7om65AIhAJvz pxv6zFeVEkAEEkqIYi0omA9+CjanB/6Bz4n1uw8H -----END CERTIFICATE----- CertIssuerSubject: "MBMxETAPBgNVBAMTCHN3YXJtLWNh" CertIssuerPublicKey: "MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEmT9XIw9h1qoNclv9VeHmf/Vi6/uI2vFXdBveXTpcPjqx6i9wNazchk1XWV/dKTKvSh9xyGKmiIeRcE4OiMnJ1A==" NodeStatus: description: | NodeStatus represents the status of a node. It provides the current status of the node, as seen by the manager. type: "object" properties: State: $ref: "#/definitions/NodeState" Message: type: "string" example: "" Addr: description: "IP address of the node." type: "string" example: "172.17.0.2" NodeState: description: "NodeState represents the state of a node." type: "string" enum: - "unknown" - "down" - "ready" - "disconnected" example: "ready" ManagerStatus: description: | ManagerStatus represents the status of a manager. It provides the current status of a node's manager component, if the node is a manager. x-nullable: true type: "object" properties: Leader: type: "boolean" default: false example: true Reachability: $ref: "#/definitions/Reachability" Addr: description: | The IP address and port at which the manager is reachable. type: "string" example: "10.0.0.46:2377" Reachability: description: "Reachability represents the reachability of a node." type: "string" enum: - "unknown" - "unreachable" - "reachable" example: "reachable" SwarmSpec: description: "User modifiable swarm configuration." type: "object" properties: Name: description: "Name of the swarm." type: "string" example: "default" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" example: com.example.corp.type: "production" com.example.corp.department: "engineering" Orchestration: description: "Orchestration configuration." type: "object" x-nullable: true properties: TaskHistoryRetentionLimit: description: | The number of historic tasks to keep per instance or node. If negative, never remove completed or failed tasks. type: "integer" format: "int64" example: 10 Raft: description: "Raft configuration." type: "object" properties: SnapshotInterval: description: "The number of log entries between snapshots." type: "integer" format: "uint64" example: 10000 KeepOldSnapshots: description: | The number of snapshots to keep beyond the current snapshot. type: "integer" format: "uint64" LogEntriesForSlowFollowers: description: | The number of log entries to keep around to sync up slow followers after a snapshot is created. type: "integer" format: "uint64" example: 500 ElectionTick: description: | The number of ticks that a follower will wait for a message from the leader before becoming a candidate and starting an election. `ElectionTick` must be greater than `HeartbeatTick`. A tick currently defaults to one second, so these translate directly to seconds currently, but this is NOT guaranteed. type: "integer" example: 3 HeartbeatTick: description: | The number of ticks between heartbeats. Every HeartbeatTick ticks, the leader will send a heartbeat to the followers. 
A tick currently defaults to one second, so these translate directly to seconds currently, but this is NOT guaranteed. type: "integer" example: 1 Dispatcher: description: "Dispatcher configuration." type: "object" x-nullable: true properties: HeartbeatPeriod: description: | The delay for an agent to send a heartbeat to the dispatcher. type: "integer" format: "int64" example: 5000000000 CAConfig: description: "CA configuration." type: "object" x-nullable: true properties: NodeCertExpiry: description: "The duration node certificates are issued for." type: "integer" format: "int64" example: 7776000000000000 ExternalCAs: description: | Configuration for forwarding signing requests to an external certificate authority. type: "array" items: type: "object" properties: Protocol: description: | Protocol for communication with the external CA (currently only `cfssl` is supported). type: "string" enum: - "cfssl" default: "cfssl" URL: description: | URL where certificate signing requests should be sent. type: "string" Options: description: | An object with key/value pairs that are interpreted as protocol-specific options for the external CA driver. type: "object" additionalProperties: type: "string" CACert: description: | The root CA certificate (in PEM format) this external CA uses to issue TLS certificates (assumed to be to the current swarm root CA certificate if not provided). type: "string" SigningCACert: description: | The desired signing CA certificate for all swarm node TLS leaf certificates, in PEM format. type: "string" SigningCAKey: description: | The desired signing CA key for all swarm node TLS leaf certificates, in PEM format. type: "string" ForceRotate: description: | An integer whose purpose is to force swarm to generate a new signing CA certificate and key, if none have been specified in `SigningCACert` and `SigningCAKey` format: "uint64" type: "integer" EncryptionConfig: description: "Parameters related to encryption-at-rest." type: "object" properties: AutoLockManagers: description: | If set, generate a key and use it to lock data stored on the managers. type: "boolean" example: false TaskDefaults: description: "Defaults for creating tasks in this cluster." type: "object" properties: LogDriver: description: | The log driver to use for tasks created in the orchestrator if unspecified by a service. Updating this value only affects new tasks. Existing tasks continue to use their previously configured log driver until recreated. type: "object" properties: Name: description: | The log driver to use as a default for new tasks. type: "string" example: "json-file" Options: description: | Driver-specific options for the selected log driver, specified as key/value pairs. type: "object" additionalProperties: type: "string" example: "max-file": "10" "max-size": "100m" # The Swarm information for `GET /info`. It is the same as `GET /swarm`, but # without `JoinTokens`. ClusterInfo: description: | ClusterInfo represents information about the swarm as is returned by the "/info" endpoint. Join-tokens are not included. x-nullable: true type: "object" properties: ID: description: "The ID of the swarm." type: "string" example: "abajmipo7b4xz5ip2nrla6b11" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: description: | Date and time at which the swarm was initialised in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds.
type: "string" format: "dateTime" example: "2016-08-18T10:44:24.496525531Z" UpdatedAt: description: | Date and time at which the swarm was last updated in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2017-08-09T07:09:37.632105588Z" Spec: $ref: "#/definitions/SwarmSpec" TLSInfo: $ref: "#/definitions/TLSInfo" RootRotationInProgress: description: | Whether there is currently a root CA rotation in progress for the swarm type: "boolean" example: false DataPathPort: description: | DataPathPort specifies the data path port number for data traffic. Acceptable port range is 1024 to 49151. If no port is set or is set to 0, the default port (4789) is used. type: "integer" format: "uint32" default: 4789 example: 4789 DefaultAddrPool: description: | Default Address Pool specifies default subnet pools for global scope networks. type: "array" items: type: "string" format: "CIDR" example: ["10.10.0.0/16", "20.20.0.0/16"] SubnetSize: description: | SubnetSize specifies the subnet size of the networks created from the default subnet pool. type: "integer" format: "uint32" maximum: 29 default: 24 example: 24 JoinTokens: description: | JoinTokens contains the tokens workers and managers need to join the swarm. type: "object" properties: Worker: description: | The token workers can use to join the swarm. type: "string" example: "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-1awxwuwd3z9j1z3puu7rcgdbx" Manager: description: | The token managers can use to join the swarm. type: "string" example: "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-7p73s1dx5in4tatdymyhg9hu2" Swarm: type: "object" allOf: - $ref: "#/definitions/ClusterInfo" - type: "object" properties: JoinTokens: $ref: "#/definitions/JoinTokens" TaskSpec: description: "User modifiable task configuration." type: "object" properties: PluginSpec: type: "object" description: | Plugin spec for the service. *(Experimental release only.)* <p><br /></p> > **Note**: ContainerSpec, NetworkAttachmentSpec, and PluginSpec are > mutually exclusive. PluginSpec is only used when the Runtime field > is set to `plugin`. NetworkAttachmentSpec is used when the Runtime > field is set to `attachment`. properties: Name: description: "The name or 'alias' to use for the plugin." type: "string" Remote: description: "The plugin image reference to use." type: "string" Disabled: description: "Disable the plugin once scheduled." type: "boolean" PluginPrivilege: type: "array" items: description: | Describes a permission accepted by the user upon installing the plugin. type: "object" properties: Name: type: "string" Description: type: "string" Value: type: "array" items: type: "string" ContainerSpec: type: "object" description: | Container spec for the service. <p><br /></p> > **Note**: ContainerSpec, NetworkAttachmentSpec, and PluginSpec are > mutually exclusive. PluginSpec is only used when the Runtime field > is set to `plugin`. NetworkAttachmentSpec is used when the Runtime > field is set to `attachment`. properties: Image: description: "The image name to use for the container" type: "string" Labels: description: "User-defined key/value data." type: "object" additionalProperties: type: "string" Command: description: "The command to be run in the image." type: "array" items: type: "string" Args: description: "Arguments to the command." 
type: "array" items: type: "string" Hostname: description: | The hostname to use for the container, as a valid [RFC 1123](https://tools.ietf.org/html/rfc1123) hostname. type: "string" Env: description: | A list of environment variables in the form `VAR=value`. type: "array" items: type: "string" Dir: description: "The working directory for commands to run in." type: "string" User: description: "The user inside the container." type: "string" Groups: type: "array" description: | A list of additional groups that the container process will run as. items: type: "string" Privileges: type: "object" description: "Security options for the container" properties: CredentialSpec: type: "object" description: "CredentialSpec for managed service account (Windows only)" properties: Config: type: "string" example: "0bt9dmxjvjiqermk6xrop3ekq" description: | Load credential spec from a Swarm Config with the given ID. The specified config must also be present in the Configs field with the Runtime property set. <p><br /></p> > **Note**: `CredentialSpec.File`, `CredentialSpec.Registry`, > and `CredentialSpec.Config` are mutually exclusive. File: type: "string" example: "spec.json" description: | Load credential spec from this file. The file is read by the daemon, and must be present in the `CredentialSpecs` subdirectory in the docker data directory, which defaults to `C:\ProgramData\Docker\` on Windows. For example, specifying `spec.json` loads `C:\ProgramData\Docker\CredentialSpecs\spec.json`. <p><br /></p> > **Note**: `CredentialSpec.File`, `CredentialSpec.Registry`, > and `CredentialSpec.Config` are mutually exclusive. Registry: type: "string" description: | Load credential spec from this value in the Windows registry. The specified registry value must be located in: `HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization\Containers\CredentialSpecs` <p><br /></p> > **Note**: `CredentialSpec.File`, `CredentialSpec.Registry`, > and `CredentialSpec.Config` are mutually exclusive. SELinuxContext: type: "object" description: "SELinux labels of the container" properties: Disable: type: "boolean" description: "Disable SELinux" User: type: "string" description: "SELinux user label" Role: type: "string" description: "SELinux role label" Type: type: "string" description: "SELinux type label" Level: type: "string" description: "SELinux level label" TTY: description: "Whether a pseudo-TTY should be allocated." type: "boolean" OpenStdin: description: "Open `stdin`" type: "boolean" ReadOnly: description: "Mount the container's root filesystem as read only." type: "boolean" Mounts: description: | Specification for mounts to be added to containers created as part of the service. type: "array" items: $ref: "#/definitions/Mount" StopSignal: description: "Signal to stop the container." type: "string" StopGracePeriod: description: | Amount of time to wait for the container to terminate before forcefully killing it. type: "integer" format: "int64" HealthCheck: $ref: "#/definitions/HealthConfig" Hosts: type: "array" description: | A list of hostname/IP mappings to add to the container's `hosts` file. The format of extra hosts is specified in the [hosts(5)](http://man7.org/linux/man-pages/man5/hosts.5.html) man page: IP_address canonical_hostname [aliases...] items: type: "string" DNSConfig: description: | Specification for DNS related configurations in resolver configuration file (`resolv.conf`). type: "object" properties: Nameservers: description: "The IP addresses of the name servers." 
type: "array" items: type: "string" Search: description: "A search list for host-name lookup." type: "array" items: type: "string" Options: description: | A list of internal resolver variables to be modified (e.g., `debug`, `ndots:3`, etc.). type: "array" items: type: "string" Secrets: description: | Secrets contains references to zero or more secrets that will be exposed to the service. type: "array" items: type: "object" properties: File: description: | File represents a specific target that is backed by a file. type: "object" properties: Name: description: | Name represents the final filename in the filesystem. type: "string" UID: description: "UID represents the file UID." type: "string" GID: description: "GID represents the file GID." type: "string" Mode: description: "Mode represents the FileMode of the file." type: "integer" format: "uint32" SecretID: description: | SecretID represents the ID of the specific secret that we're referencing. type: "string" SecretName: description: | SecretName is the name of the secret that this references, but this is just provided for lookup/display purposes. The secret in the reference will be identified by its ID. type: "string" Configs: description: | Configs contains references to zero or more configs that will be exposed to the service. type: "array" items: type: "object" properties: File: description: | File represents a specific target that is backed by a file. <p><br /><p> > **Note**: `Configs.File` and `Configs.Runtime` are mutually exclusive type: "object" properties: Name: description: | Name represents the final filename in the filesystem. type: "string" UID: description: "UID represents the file UID." type: "string" GID: description: "GID represents the file GID." type: "string" Mode: description: "Mode represents the FileMode of the file." type: "integer" format: "uint32" Runtime: description: | Runtime represents a target that is not mounted into the container but is used by the task <p><br /><p> > **Note**: `Configs.File` and `Configs.Runtime` are mutually > exclusive type: "object" ConfigID: description: | ConfigID represents the ID of the specific config that we're referencing. type: "string" ConfigName: description: | ConfigName is the name of the config that this references, but this is just provided for lookup/display purposes. The config in the reference will be identified by its ID. type: "string" Isolation: type: "string" description: | Isolation technology of the containers running the service. (Windows only) enum: - "default" - "process" - "hyperv" Init: description: | Run an init inside the container that forwards signals and reaps processes. This field is omitted if empty, and the default (as configured on the daemon) is used. type: "boolean" x-nullable: true Sysctls: description: | Set kernel namespaced parameters (sysctls) in the container. The Sysctls option on services accepts the same sysctls as are supported on containers. Note that while the same sysctls are supported, no guarantees or checks are made about their suitability for a clustered environment, and it's up to the user to determine whether a given sysctl will work properly in a Service. type: "object" additionalProperties: type: "string" # This option is not used by Windows containers CapabilityAdd: type: "array" description: | A list of kernel capabilities to add to the default set for the container.
items: type: "string" example: - "CAP_NET_RAW" - "CAP_SYS_ADMIN" - "CAP_SYS_CHROOT" - "CAP_SYSLOG" CapabilityDrop: type: "array" description: | A list of kernel capabilities to drop from the default set for the container. items: type: "string" example: - "CAP_NET_RAW" Ulimits: description: | A list of resource limits to set in the container. For example: `{"Name": "nofile", "Soft": 1024, "Hard": 2048}` type: "array" items: type: "object" properties: Name: description: "Name of ulimit" type: "string" Soft: description: "Soft limit" type: "integer" Hard: description: "Hard limit" type: "integer" NetworkAttachmentSpec: description: | Read-only spec type for non-swarm containers attached to swarm overlay networks. <p><br /></p> > **Note**: ContainerSpec, NetworkAttachmentSpec, and PluginSpec are > mutually exclusive. PluginSpec is only used when the Runtime field > is set to `plugin`. NetworkAttachmentSpec is used when the Runtime > field is set to `attachment`. type: "object" properties: ContainerID: description: "ID of the container represented by this task" type: "string" Resources: description: | Resource requirements which apply to each individual container created as part of the service. type: "object" properties: Limits: description: "Define resources limits." $ref: "#/definitions/Limit" Reservation: description: "Define resources reservation." $ref: "#/definitions/ResourceObject" RestartPolicy: description: | Specification for the restart policy which applies to containers created as part of this service. type: "object" properties: Condition: description: "Condition for restart." type: "string" enum: - "none" - "on-failure" - "any" Delay: description: "Delay between restart attempts." type: "integer" format: "int64" MaxAttempts: description: | Maximum attempts to restart a given container before giving up (default value is 0, which is ignored). type: "integer" format: "int64" default: 0 Window: description: | Window is the time window used to evaluate the restart policy (default value is 0, which is unbounded). type: "integer" format: "int64" default: 0 Placement: type: "object" properties: Constraints: description: | An array of constraint expressions to limit the set of nodes where a task can be scheduled. Constraint expressions can either use a _match_ (`==`) or _exclude_ (`!=`) rule. Multiple constraints find nodes that satisfy every expression (AND match). Constraints can match node or Docker Engine labels as follows: node attribute | matches | example ---------------------|--------------------------------|----------------------------------------------- `node.id` | Node ID | `node.id==2ivku8v2gvtg4` `node.hostname` | Node hostname | `node.hostname!=node-2` `node.role` | Node role (`manager`/`worker`) | `node.role==manager` `node.platform.os` | Node operating system | `node.platform.os==windows` `node.platform.arch` | Node architecture | `node.platform.arch==x86_64` `node.labels` | User-defined node labels | `node.labels.security==high` `engine.labels` | Docker Engine's labels | `engine.labels.operatingsystem==ubuntu-14.04` `engine.labels` apply to Docker Engine labels like operating system, drivers, etc. Swarm administrators add `node.labels` for operational purposes by using the [`node update endpoint`](#operation/NodeUpdate).
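  # Hypothetical usage sketch (comment only, not part of the schema): node
  # labels referenced by `node.labels.*` constraints are set through the
  # NodeUpdate operation, e.g. (body abbreviated):
  #
  #   POST /nodes/{id}/update?version={index}
  #        { "Role": "worker", "Availability": "active",
  #          "Labels": { "datacenter": "us-east-1" } }
  #
  # A service constrained with "node.labels.datacenter==us-east-1" is then
  # only scheduled on nodes carrying that label.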
type: "array" items: type: "string" example: - "node.hostname!=node3.corp.example.com" - "node.role!=manager" - "node.labels.type==production" - "node.platform.os==linux" - "node.platform.arch==x86_64" Preferences: description: | Preferences provide a way to make the scheduler aware of factors such as topology. They are provided in order from highest to lowest precedence. type: "array" items: type: "object" properties: Spread: type: "object" properties: SpreadDescriptor: description: | label descriptor, such as `engine.labels.az`. type: "string" example: - Spread: SpreadDescriptor: "node.labels.datacenter" - Spread: SpreadDescriptor: "node.labels.rack" MaxReplicas: description: | Maximum number of replicas per node (default value is 0, which is unlimited) type: "integer" format: "int64" default: 0 Platforms: description: | Platforms stores all the platforms that the service's image can run on. This field is used in the platform filter for scheduling. If empty, then the platform filter is off, meaning there are no scheduling restrictions. type: "array" items: $ref: "#/definitions/Platform" ForceUpdate: description: | A counter that triggers an update even if no relevant parameters have been changed. type: "integer" Runtime: description: | Runtime is the type of runtime specified for the task executor. type: "string" Networks: description: "Specifies which networks the service should attach to." type: "array" items: $ref: "#/definitions/NetworkAttachmentConfig" LogDriver: description: | Specifies the log driver to use for tasks created from this spec. If not present, the default one for the swarm will be used, finally falling back to the engine default if not specified. type: "object" properties: Name: type: "string" Options: type: "object" additionalProperties: type: "string" TaskState: type: "string" enum: - "new" - "allocated" - "pending" - "assigned" - "accepted" - "preparing" - "ready" - "starting" - "running" - "complete" - "shutdown" - "failed" - "rejected" - "remove" - "orphaned" Task: type: "object" properties: ID: description: "The ID of the task." type: "string" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" UpdatedAt: type: "string" format: "dateTime" Name: description: "Name of the task." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" Spec: $ref: "#/definitions/TaskSpec" ServiceID: description: "The ID of the service this task is part of." type: "string" Slot: type: "integer" NodeID: description: "The ID of the node that this task is on." type: "string" AssignedGenericResources: $ref: "#/definitions/GenericResources" Status: type: "object" properties: Timestamp: type: "string" format: "dateTime" State: $ref: "#/definitions/TaskState" Message: type: "string" Err: type: "string" ContainerStatus: type: "object" properties: ContainerID: type: "string" PID: type: "integer" ExitCode: type: "integer" DesiredState: $ref: "#/definitions/TaskState" JobIteration: description: | If the Service this Task belongs to is a job-mode service, contains the JobIteration of the Service this Task was created for. Absent if the Task was created for a Replicated or Global Service.
$ref: "#/definitions/ObjectVersion" example: ID: "0kzzo1i0y4jz6027t0k7aezc7" Version: Index: 71 CreatedAt: "2016-06-07T21:07:31.171892745Z" UpdatedAt: "2016-06-07T21:07:31.376370513Z" Spec: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ServiceID: "9mnpnzenvg8p8tdbtq4wvbkcz" Slot: 1 NodeID: "60gvrl6tm78dmak4yl7srz94v" Status: Timestamp: "2016-06-07T21:07:31.290032978Z" State: "running" Message: "started" ContainerStatus: ContainerID: "e5d62702a1b48d01c3e02ca1e0212a250801fa8d67caca0b6f35919ebc12f035" PID: 677 DesiredState: "running" NetworksAttachments: - Network: ID: "4qvuz4ko70xaltuqbt8956gd1" Version: Index: 18 CreatedAt: "2016-06-07T20:31:11.912919752Z" UpdatedAt: "2016-06-07T21:07:29.955277358Z" Spec: Name: "ingress" Labels: com.docker.swarm.internal: "true" DriverConfiguration: {} IPAMOptions: Driver: {} Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" DriverState: Name: "overlay" Options: com.docker.network.driver.overlay.vxlanid_list: "256" IPAMOptions: Driver: Name: "default" Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" Addresses: - "10.255.0.10/16" AssignedGenericResources: - DiscreteResourceSpec: Kind: "SSD" Value: 3 - NamedResourceSpec: Kind: "GPU" Value: "UUID1" - NamedResourceSpec: Kind: "GPU" Value: "UUID2" ServiceSpec: description: "User modifiable configuration for a service." properties: Name: description: "Name of the service." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" TaskTemplate: $ref: "#/definitions/TaskSpec" Mode: description: "Scheduling mode for the service." type: "object" properties: Replicated: type: "object" properties: Replicas: type: "integer" format: "int64" Global: type: "object" ReplicatedJob: description: | The mode used for services with a finite number of tasks that run to a completed state. type: "object" properties: MaxConcurrent: description: | The maximum number of replicas to run simultaneously. type: "integer" format: "int64" default: 1 TotalCompletions: description: | The total number of replicas desired to reach the Completed state. If unset, will default to the value of `MaxConcurrent` type: "integer" format: "int64" GlobalJob: description: | The mode used for services which run a task to the completed state on each valid node. type: "object" UpdateConfig: description: "Specification for the update strategy of the service." type: "object" properties: Parallelism: description: | Maximum number of tasks to be updated in one iteration (0 means unlimited parallelism). type: "integer" format: "int64" Delay: description: "Amount of time between updates, in nanoseconds." type: "integer" format: "int64" FailureAction: description: | Action to take if an updated task fails to run, or stops running during the update. type: "string" enum: - "continue" - "pause" - "rollback" Monitor: description: | Amount of time to monitor each updated task for failures, in nanoseconds. type: "integer" format: "int64" MaxFailureRatio: description: | The fraction of tasks that may fail during an update before the failure action is invoked, specified as a floating point number between 0 and 1. type: "number" default: 0 Order: description: | The order of operations when rolling out an updated task. Either the old task is shut down before the new task is started, or the new task is started before the old task is shut down. 
type: "string" enum: - "stop-first" - "start-first" RollbackConfig: description: "Specification for the rollback strategy of the service." type: "object" properties: Parallelism: description: | Maximum number of tasks to be rolled back in one iteration (0 means unlimited parallelism). type: "integer" format: "int64" Delay: description: | Amount of time between rollback iterations, in nanoseconds. type: "integer" format: "int64" FailureAction: description: | Action to take if a rolled back task fails to run, or stops running during the rollback. type: "string" enum: - "continue" - "pause" Monitor: description: | Amount of time to monitor each rolled back task for failures, in nanoseconds. type: "integer" format: "int64" MaxFailureRatio: description: | The fraction of tasks that may fail during a rollback before the failure action is invoked, specified as a floating point number between 0 and 1. type: "number" default: 0 Order: description: | The order of operations when rolling back a task. Either the old task is shut down before the new task is started, or the new task is started before the old task is shut down. type: "string" enum: - "stop-first" - "start-first" Networks: description: "Specifies which networks the service should attach to." type: "array" items: $ref: "#/definitions/NetworkAttachmentConfig" EndpointSpec: $ref: "#/definitions/EndpointSpec" EndpointPortConfig: type: "object" properties: Name: type: "string" Protocol: type: "string" enum: - "tcp" - "udp" - "sctp" TargetPort: description: "The port inside the container." type: "integer" PublishedPort: description: "The port on the swarm hosts." type: "integer" PublishMode: description: | The mode in which port is published. <p><br /></p> - "ingress" makes the target port accessible on every node, regardless of whether there is a task for the service running on that node or not. - "host" bypasses the routing mesh and publishes the port directly on the swarm node where that service is running. type: "string" enum: - "ingress" - "host" default: "ingress" example: "ingress" EndpointSpec: description: "Properties that can be configured to access and load balance a service." type: "object" properties: Mode: description: | The mode of resolution to use for internal load balancing between tasks. type: "string" enum: - "vip" - "dnsrr" default: "vip" Ports: description: | List of exposed ports that this service is accessible on from the outside. Ports can only be provided if `vip` resolution mode is used. type: "array" items: $ref: "#/definitions/EndpointPortConfig" Service: type: "object" properties: ID: type: "string" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" UpdatedAt: type: "string" format: "dateTime" Spec: $ref: "#/definitions/ServiceSpec" Endpoint: type: "object" properties: Spec: $ref: "#/definitions/EndpointSpec" Ports: type: "array" items: $ref: "#/definitions/EndpointPortConfig" VirtualIPs: type: "array" items: type: "object" properties: NetworkID: type: "string" Addr: type: "string" UpdateStatus: description: "The status of a service update." type: "object" properties: State: type: "string" enum: - "updating" - "paused" - "completed" StartedAt: type: "string" format: "dateTime" CompletedAt: type: "string" format: "dateTime" Message: type: "string" ServiceStatus: description: | The status of the service's tasks. Provided only when requested as part of a ServiceList operation.
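  # Hypothetical usage sketch (comment only, not part of the schema): the
  # ServiceStatus block is only filled in when the list endpoint is asked
  # for it, e.g.
  #
  #   GET /services?status=true
  #
  # Each returned Service then carries counts such as
  #   "ServiceStatus": { "RunningTasks": 7, "DesiredTasks": 10 }
  # See the ServiceList operation for the authoritative query parameters.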
type: "object" properties: RunningTasks: description: | The number of tasks for the service currently in the Running state. type: "integer" format: "uint64" example: 7 DesiredTasks: description: | The number of tasks for the service desired to be running. For replicated services, this is the replica count from the service spec. For global services, this is computed by taking count of all tasks for the service with a Desired State other than Shutdown. type: "integer" format: "uint64" example: 10 CompletedTasks: description: | The number of tasks for a job that are in the Completed state. This field must be cross-referenced with the service type, as the value of 0 may mean the service is not in a job mode, or it may mean the job-mode service has no tasks yet Completed. type: "integer" format: "uint64" JobStatus: description: | The status of the service when it is in one of ReplicatedJob or GlobalJob modes. Absent on Replicated and Global mode services. The JobIteration is an ObjectVersion, but unlike the Service's version, does not need to be sent with an update request. type: "object" properties: JobIteration: description: | JobIteration is a value increased each time a Job is executed, successfully or otherwise. "Executed", in this case, means the job as a whole has been started, not that an individual Task has been launched. A job is "Executed" when its ServiceSpec is updated. JobIteration can be used to disambiguate Tasks belonging to different executions of a job. Though JobIteration will increase with each subsequent execution, it may not necessarily increase by 1, and so JobIteration should not be used to $ref: "#/definitions/ObjectVersion" LastExecution: description: | The last time, as observed by the server, that this job was started. type: "string" format: "dateTime" example: ID: "9mnpnzenvg8p8tdbtq4wvbkcz" Version: Index: 19 CreatedAt: "2016-06-07T21:05:51.880065305Z" UpdatedAt: "2016-06-07T21:07:29.962229872Z" Spec: Name: "hopeful_cori" TaskTemplate: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ForceUpdate: 0 Mode: Replicated: Replicas: 1 UpdateConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 RollbackConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 EndpointSpec: Mode: "vip" Ports: - Protocol: "tcp" TargetPort: 6379 PublishedPort: 30001 Endpoint: Spec: Mode: "vip" Ports: - Protocol: "tcp" TargetPort: 6379 PublishedPort: 30001 Ports: - Protocol: "tcp" TargetPort: 6379 PublishedPort: 30001 VirtualIPs: - NetworkID: "4qvuz4ko70xaltuqbt8956gd1" Addr: "10.255.0.2/16" - NetworkID: "4qvuz4ko70xaltuqbt8956gd1" Addr: "10.255.0.3/16" ImageDeleteResponseItem: type: "object" properties: Untagged: description: "The image ID of an image that was untagged" type: "string" Deleted: description: "The image ID of an image that was deleted" type: "string" ServiceUpdateResponse: type: "object" properties: Warnings: description: "Optional warning messages" type: "array" items: type: "string" example: Warning: "unable to pin image doesnotexist:latest to digest: image library/doesnotexist:latest not found" ContainerSummary: type: "array" items: type: "object" properties: Id: description: "The ID of this container" type: "string" x-go-name: "ID" Names: description: "The names that this container has been given" type: "array" items: type: "string" Image: description: "The name of the image used when creating 
this container" type: "string" ImageID: description: "The ID of the image that this container was created from" type: "string" Command: description: "Command to run when starting the container" type: "string" Created: description: "When the container was created" type: "integer" format: "int64" Ports: description: "The ports exposed by this container" type: "array" items: $ref: "#/definitions/Port" SizeRw: description: "The size of files that have been created or changed by this container" type: "integer" format: "int64" SizeRootFs: description: "The total size of all the files in this container" type: "integer" format: "int64" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" State: description: "The state of this container (e.g. `Exited`)" type: "string" Status: description: "Additional human-readable status of this container (e.g. `Exit 0`)" type: "string" HostConfig: type: "object" properties: NetworkMode: type: "string" NetworkSettings: description: "A summary of the container's network settings" type: "object" properties: Networks: type: "object" additionalProperties: $ref: "#/definitions/EndpointSettings" Mounts: type: "array" items: $ref: "#/definitions/Mount" Driver: description: "Driver represents a driver (network, logging, secrets)." type: "object" required: [Name] properties: Name: description: "Name of the driver." type: "string" x-nullable: false example: "some-driver" Options: description: "Key/value map of driver-specific options." type: "object" x-nullable: false additionalProperties: type: "string" example: OptionA: "value for driver-specific option A" OptionB: "value for driver-specific option B" SecretSpec: type: "object" properties: Name: description: "User-defined name of the secret." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" example: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Data: description: | Base64-url-safe-encoded ([RFC 4648](https://tools.ietf.org/html/rfc4648#section-5)) data to store as secret. This field is only used to _create_ a secret, and is not returned by other endpoints. type: "string" example: "" Driver: description: | Name of the secrets driver used to fetch the secret's value from an external secret store. $ref: "#/definitions/Driver" Templating: description: | Templating driver, if applicable Templating controls whether and how to evaluate the config payload as a template. If no driver is set, no templating is used. $ref: "#/definitions/Driver" Secret: type: "object" properties: ID: type: "string" example: "blt1owaxmitz71s9v5zh81zun" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" example: "2017-07-20T13:55:28.678958722Z" UpdatedAt: type: "string" format: "dateTime" example: "2017-07-20T13:55:28.678958722Z" Spec: $ref: "#/definitions/SecretSpec" ConfigSpec: type: "object" properties: Name: description: "User-defined name of the config." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" Data: description: | Base64-url-safe-encoded ([RFC 4648](https://tools.ietf.org/html/rfc4648#section-5)) config data. type: "string" Templating: description: | Templating driver, if applicable Templating controls whether and how to evaluate the config payload as a template. If no driver is set, no templating is used. 
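  # Hypothetical usage sketch (comment only, not part of the schema): the
  # Data field must be base64-url-safe encoded by the client before the
  # create call, e.g. (value is a throwaway example):
  #
  #   POST /secrets/create
  #        { "Name": "app-key.pem",
  #          "Data": "VEhJUyBJUyBOT1QgQSBSRUFMIENFUlRJRklDQVRFCg==" }
  #
  # The decoded payload is never returned by later endpoints; only the
  # SecretSpec / ConfigSpec metadata is. Configs are created the same way
  # via POST /configs/create.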
$ref: "#/definitions/Driver" Config: type: "object" properties: ID: type: "string" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" UpdatedAt: type: "string" format: "dateTime" Spec: $ref: "#/definitions/ConfigSpec" ContainerState: description: | ContainerState stores container's running state. It's part of ContainerJSONBase and will be returned by the "inspect" command. type: "object" properties: Status: description: | String representation of the container state. Can be one of "created", "running", "paused", "restarting", "removing", "exited", or "dead". type: "string" enum: ["created", "running", "paused", "restarting", "removing", "exited", "dead"] example: "running" Running: description: | Whether this container is running. Note that a running container can be _paused_. The `Running` and `Paused` booleans are not mutually exclusive: When pausing a container (on Linux), the freezer cgroup is used to suspend all processes in the container. Freezing the process requires the process to be running. As a result, paused containers are both `Running` _and_ `Paused`. Use the `Status` field instead to determine if a container's state is "running". type: "boolean" example: true Paused: description: "Whether this container is paused." type: "boolean" example: false Restarting: description: "Whether this container is restarting." type: "boolean" example: false OOMKilled: description: | Whether this container has been killed because it ran out of memory. type: "boolean" example: false Dead: type: "boolean" example: false Pid: description: "The process ID of this container" type: "integer" example: 1234 ExitCode: description: "The last exit code of this container" type: "integer" example: 0 Error: type: "string" StartedAt: description: "The time when this container was last started." type: "string" example: "2020-01-06T09:06:59.461876391Z" FinishedAt: description: "The time when this container last exited." type: "string" example: "2020-01-06T09:07:59.461876391Z" Health: x-nullable: true $ref: "#/definitions/Health" SystemVersion: type: "object" description: | Response of Engine API: GET "/version" properties: Platform: type: "object" required: [Name] properties: Name: type: "string" Components: type: "array" description: | Information about system components items: type: "object" x-go-name: ComponentVersion required: [Name, Version] properties: Name: description: | Name of the component type: "string" example: "Engine" Version: description: | Version of the component type: "string" x-nullable: false example: "19.03.12" Details: description: | Key/value pairs of strings with additional information about the component. These values are intended for informational purposes only, and their content is not defined, and not part of the API specification. These messages can be printed by the client as information to the user. type: "object" x-nullable: true Version: description: "The version of the daemon" type: "string" example: "19.03.12" ApiVersion: description: | The default (and highest) API version that is supported by the daemon type: "string" example: "1.40" MinAPIVersion: description: | The minimum API version that is supported by the daemon type: "string" example: "1.12" GitCommit: description: | The Git commit of the source code that was used to build the daemon type: "string" example: "48a66213fe" GoVersion: description: | The version Go used to compile the daemon, and the version of the Go runtime in use. 
type: "string" example: "go1.13.14" Os: description: | The operating system that the daemon is running on ("linux" or "windows") type: "string" example: "linux" Arch: description: | The architecture that the daemon is running on type: "string" example: "amd64" KernelVersion: description: | The kernel version (`uname -r`) that the daemon is running on. This field is omitted when empty. type: "string" example: "4.19.76-linuxkit" Experimental: description: | Indicates if the daemon is started with experimental features enabled. This field is omitted when empty / false. type: "boolean" example: true BuildTime: description: | The date and time that the daemon was compiled. type: "string" example: "2020-06-22T15:49:27.000000000+00:00" SystemInfo: type: "object" properties: ID: description: | Unique identifier of the daemon. <p><br /></p> > **Note**: The format of the ID itself is not part of the API, and > should not be considered stable. type: "string" example: "7TRN:IPZB:QYBB:VPBQ:UMPP:KARE:6ZNR:XE6T:7EWV:PKF4:ZOJD:TPYS" Containers: description: "Total number of containers on the host." type: "integer" example: 14 ContainersRunning: description: | Number of containers with status `"running"`. type: "integer" example: 3 ContainersPaused: description: | Number of containers with status `"paused"`. type: "integer" example: 1 ContainersStopped: description: | Number of containers with status `"stopped"`. type: "integer" example: 10 Images: description: | Total number of images on the host. Both _tagged_ and _untagged_ (dangling) images are counted. type: "integer" example: 508 Driver: description: "Name of the storage driver in use." type: "string" example: "overlay2" DriverStatus: description: | Information specific to the storage driver, provided as "label" / "value" pairs. This information is provided by the storage driver, and formatted in a way consistent with the output of `docker info` on the command line. <p><br /></p> > **Note**: The information returned in this field, including the > formatting of values and labels, should not be considered stable, > and may change without notice. type: "array" items: type: "array" items: type: "string" example: - ["Backing Filesystem", "extfs"] - ["Supports d_type", "true"] - ["Native Overlay Diff", "true"] DockerRootDir: description: | Root directory of persistent Docker state. Defaults to `/var/lib/docker` on Linux, and `C:\ProgramData\docker` on Windows. type: "string" example: "/var/lib/docker" Plugins: $ref: "#/definitions/PluginsInfo" MemoryLimit: description: "Indicates if the host has memory limit support enabled." type: "boolean" example: true SwapLimit: description: "Indicates if the host has memory swap limit support enabled." type: "boolean" example: true KernelMemory: description: | Indicates if the host has kernel memory limit support enabled. <p><br /></p> > **Deprecated**: This field is deprecated as the kernel 5.4 deprecated > `kmem.limit_in_bytes`. type: "boolean" example: true CpuCfsPeriod: description: | Indicates if CPU CFS(Completely Fair Scheduler) period is supported by the host. type: "boolean" example: true CpuCfsQuota: description: | Indicates if CPU CFS(Completely Fair Scheduler) quota is supported by the host. type: "boolean" example: true CPUShares: description: | Indicates if CPU Shares limiting is supported by the host. type: "boolean" example: true CPUSet: description: | Indicates if CPUsets (cpuset.cpus, cpuset.mems) are supported by the host. 
See [cpuset(7)](https://www.kernel.org/doc/Documentation/cgroup-v1/cpusets.txt) type: "boolean" example: true PidsLimit: description: "Indicates if the host kernel has PID limit support enabled." type: "boolean" example: true OomKillDisable: description: "Indicates if OOM killer disable is supported on the host." type: "boolean" IPv4Forwarding: description: "Indicates if IPv4 forwarding is enabled." type: "boolean" example: true BridgeNfIptables: description: "Indicates if `bridge-nf-call-iptables` is available on the host." type: "boolean" example: true BridgeNfIp6tables: description: "Indicates if `bridge-nf-call-ip6tables` is available on the host." type: "boolean" example: true Debug: description: | Indicates if the daemon is running in debug-mode / with debug-level logging enabled. type: "boolean" example: true NFd: description: | The total number of file Descriptors in use by the daemon process. This information is only returned if debug-mode is enabled. type: "integer" example: 64 NGoroutines: description: | The number of goroutines that currently exist. This information is only returned if debug-mode is enabled. type: "integer" example: 174 SystemTime: description: | Current system-time in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" example: "2017-08-08T20:28:29.06202363Z" LoggingDriver: description: | The logging driver to use as a default for new containers. type: "string" CgroupDriver: description: | The driver to use for managing cgroups. type: "string" enum: ["cgroupfs", "systemd", "none"] default: "cgroupfs" example: "cgroupfs" CgroupVersion: description: | The version of the cgroup. type: "string" enum: ["1", "2"] default: "1" example: "1" NEventsListener: description: "Number of event listeners subscribed." type: "integer" example: 30 KernelVersion: description: | Kernel version of the host. On Linux, this information is obtained from `uname`. On Windows this information is queried from the <kbd>HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\</kbd> registry value, for example _"10.0 14393 (14393.1198.amd64fre.rs1_release_sec.170427-1353)"_. type: "string" example: "4.9.38-moby" OperatingSystem: description: | Name of the host's operating system, for example: "Ubuntu 16.04.2 LTS" or "Windows Server 2016 Datacenter" type: "string" example: "Alpine Linux v3.5" OSVersion: description: | Version of the host's operating system <p><br /></p> > **Note**: The information returned in this field, including its > very existence, and the formatting of values, should not be considered > stable, and may change without notice. type: "string" example: "16.04" OSType: description: | Generic type of the operating system of the host, as returned by the Go runtime (`GOOS`). Currently returned values are "linux" and "windows". A full list of possible values can be found in the [Go documentation](https://golang.org/doc/install/source#environment). type: "string" example: "linux" Architecture: description: | Hardware architecture of the host, as returned by the Go runtime (`GOARCH`). A full list of possible values can be found in the [Go documentation](https://golang.org/doc/install/source#environment). type: "string" example: "x86_64" NCPU: description: | The number of logical CPUs usable by the daemon. The number of available CPUs is checked by querying the operating system when the daemon starts. Changes to operating system CPU allocation after the daemon is started are not reflected.
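  # Hypothetical usage sketch (comment only, not part of the schema): all of
  # the SystemInfo fields described here are returned by a single call,
  #
  #   GET /info
  #
  # which is also what `docker info` renders. Debug-only fields (NFd,
  # NGoroutines) are included only when the daemon runs with debugging
  # enabled, as noted on the individual properties.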
type: "integer" example: 4 MemTotal: description: | Total amount of physical memory available on the host, in bytes. type: "integer" format: "int64" example: 2095882240 IndexServerAddress: description: | Address / URL of the index server that is used for image search, and as a default for user authentication for Docker Hub and Docker Cloud. default: "https://index.docker.io/v1/" type: "string" example: "https://index.docker.io/v1/" RegistryConfig: $ref: "#/definitions/RegistryServiceConfig" GenericResources: $ref: "#/definitions/GenericResources" HttpProxy: description: | HTTP-proxy configured for the daemon. This value is obtained from the [`HTTP_PROXY`](https://www.gnu.org/software/wget/manual/html_node/Proxies.html) environment variable. Credentials ([user info component](https://tools.ietf.org/html/rfc3986#section-3.2.1)) in the proxy URL are masked in the API response. Containers do not automatically inherit this configuration. type: "string" example: "http://xxxxx:[email protected]:8080" HttpsProxy: description: | HTTPS-proxy configured for the daemon. This value is obtained from the [`HTTPS_PROXY`](https://www.gnu.org/software/wget/manual/html_node/Proxies.html) environment variable. Credentials ([user info component](https://tools.ietf.org/html/rfc3986#section-3.2.1)) in the proxy URL are masked in the API response. Containers do not automatically inherit this configuration. type: "string" example: "https://xxxxx:[email protected]:4443" NoProxy: description: | Comma-separated list of domain extensions for which no proxy should be used. This value is obtained from the [`NO_PROXY`](https://www.gnu.org/software/wget/manual/html_node/Proxies.html) environment variable. Containers do not automatically inherit this configuration. type: "string" example: "*.local, 169.254/16" Name: description: "Hostname of the host." type: "string" example: "node5.corp.example.com" Labels: description: | User-defined labels (key/value metadata) as set on the daemon. <p><br /></p> > **Note**: When part of a Swarm, nodes can both have _daemon_ labels, > set through the daemon configuration, and _node_ labels, set from a > manager node in the Swarm. Node labels are not included in this > field. Node labels can be retrieved using the `/nodes/(id)` endpoint > on a manager node in the Swarm. type: "array" items: type: "string" example: ["storage=ssd", "production"] ExperimentalBuild: description: | Indicates if experimental features are enabled on the daemon. type: "boolean" example: true ServerVersion: description: | Version string of the daemon. > **Note**: the [standalone Swarm API](https://docs.docker.com/swarm/swarm-api/) > returns the Swarm version instead of the daemon version, for example > `swarm/1.2.8`. type: "string" example: "17.06.0-ce" ClusterStore: description: | URL of the distributed storage backend. The storage backend is used for multihost networking (to store network and endpoint information) and by the node discovery mechanism. <p><br /></p> > **Deprecated**: This field is only propagated when using standalone Swarm > mode, and overlay networking using an external k/v store. Overlay > networks with Swarm mode enabled use the built-in raft store, and > this field will be empty. type: "string" example: "consul://consul.corp.example.com:8600/some/path" ClusterAdvertise: description: | The network endpoint that the Engine advertises for the purpose of node discovery. ClusterAdvertise is a `host:port` combination on which the daemon is reachable by other hosts. 
<p><br /></p> > **Deprecated**: This field is only propagated when using standalone Swarm > mode, and overlay networking using an external k/v store. Overlay > networks with Swarm mode enabled use the built-in raft store, and > this field will be empty. type: "string" example: "node5.corp.example.com:8000" Runtimes: description: | List of [OCI compliant](https://github.com/opencontainers/runtime-spec) runtimes configured on the daemon. Keys hold the "name" used to reference the runtime. The Docker daemon relies on an OCI compliant runtime (invoked via the `containerd` daemon) as its interface to the Linux kernel namespaces, cgroups, and SELinux. The default runtime is `runc`, and automatically configured. Additional runtimes can be configured by the user and will be listed here. type: "object" additionalProperties: $ref: "#/definitions/Runtime" default: runc: path: "runc" example: runc: path: "runc" runc-master: path: "/go/bin/runc" custom: path: "/usr/local/bin/my-oci-runtime" runtimeArgs: ["--debug", "--systemd-cgroup=false"] DefaultRuntime: description: | Name of the default OCI runtime that is used when starting containers. The default can be overridden per-container at create time. type: "string" default: "runc" example: "runc" Swarm: $ref: "#/definitions/SwarmInfo" LiveRestoreEnabled: description: | Indicates if live restore is enabled. If enabled, containers are kept running when the daemon is shutdown or upon daemon start if running containers are detected. type: "boolean" default: false example: false Isolation: description: | Represents the isolation technology to use as a default for containers. The supported values are platform-specific. If no isolation value is specified on daemon start, on Windows client, the default is `hyperv`, and on Windows server, the default is `process`. This option is currently not used on other platforms. default: "default" type: "string" enum: - "default" - "hyperv" - "process" InitBinary: description: | Name and, optional, path of the `docker-init` binary. If the path is omitted, the daemon searches the host's `$PATH` for the binary and uses the first result. type: "string" example: "docker-init" ContainerdCommit: $ref: "#/definitions/Commit" RuncCommit: $ref: "#/definitions/Commit" InitCommit: $ref: "#/definitions/Commit" SecurityOptions: description: | List of security features that are enabled on the daemon, such as apparmor, seccomp, SELinux, user-namespaces (userns), and rootless. Additional configuration options for each security feature may be present, and are included as a comma-separated list of key/value pairs. type: "array" items: type: "string" example: - "name=apparmor" - "name=seccomp,profile=default" - "name=selinux" - "name=userns" - "name=rootless" ProductLicense: description: | Reports a summary of the product license on the daemon. If a commercial license has been applied to the daemon, information such as number of nodes, and expiration are included. type: "string" example: "Community Engine" DefaultAddressPools: description: | List of custom default address pools for local networks, which can be specified in the daemon.json file or dockerd option. Example: a Base "10.10.0.0/16" with Size 24 will define the set of 256 10.10.[0-255].0/24 address pools. 
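A small standard-library sketch of the Base/Size arithmetic described for `DefaultAddressPools`; the base network and size values are the ones from the example above and are purely illustrative:

```go
package main

import (
	"encoding/binary"
	"fmt"
	"log"
	"net"
)

func main() {
	base, size := "10.10.0.0/16", 24

	_, ipnet, err := net.ParseCIDR(base)
	if err != nil {
		log.Fatal(err)
	}
	ones, bits := ipnet.Mask.Size()
	if bits != 32 || size < ones {
		log.Fatalf("size %d does not fit inside %s", size, base)
	}

	// A Base of 10.10.0.0/16 with Size 24 yields 2^(24-16) = 256 pools.
	count := 1 << (size - ones)
	fmt.Printf("%s with size %d defines %d pools\n", base, size, count)

	// Print the first few pools by stepping through the base network.
	start := binary.BigEndian.Uint32(ipnet.IP.To4())
	step := uint32(1) << (bits - size)
	for i := 0; i < 4; i++ {
		ip := make(net.IP, 4)
		binary.BigEndian.PutUint32(ip, start+uint32(i)*step)
		fmt.Printf("  %s/%d\n", ip, size)
	}
}
```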
type: "array" items: type: "object" properties: Base: description: "The network address in CIDR format" type: "string" example: "10.10.0.0/16" Size: description: "The network pool size" type: "integer" example: "24" Warnings: description: | List of warnings / informational messages about missing features, or issues related to the daemon configuration. These messages can be printed by the client as information to the user. type: "array" items: type: "string" example: - "WARNING: No memory limit support" - "WARNING: bridge-nf-call-iptables is disabled" - "WARNING: bridge-nf-call-ip6tables is disabled" # PluginsInfo is a temp struct holding Plugins name # registered with docker daemon. It is used by Info struct PluginsInfo: description: | Available plugins per type. <p><br /></p> > **Note**: Only unmanaged (V1) plugins are included in this list. > V1 plugins are "lazily" loaded, and are not returned in this list > if there is no resource using the plugin. type: "object" properties: Volume: description: "Names of available volume-drivers, and network-driver plugins." type: "array" items: type: "string" example: ["local"] Network: description: "Names of available network-drivers, and network-driver plugins." type: "array" items: type: "string" example: ["bridge", "host", "ipvlan", "macvlan", "null", "overlay"] Authorization: description: "Names of available authorization plugins." type: "array" items: type: "string" example: ["img-authz-plugin", "hbm"] Log: description: "Names of available logging-drivers, and logging-driver plugins." type: "array" items: type: "string" example: ["awslogs", "fluentd", "gcplogs", "gelf", "journald", "json-file", "logentries", "splunk", "syslog"] RegistryServiceConfig: description: | RegistryServiceConfig stores daemon registry services configuration. type: "object" x-nullable: true properties: AllowNondistributableArtifactsCIDRs: description: | List of IP ranges to which nondistributable artifacts can be pushed, using the CIDR syntax [RFC 4632](https://tools.ietf.org/html/4632). Some images (for example, Windows base images) contain artifacts whose distribution is restricted by license. When these images are pushed to a registry, restricted artifacts are not included. This configuration override this behavior, and enables the daemon to push nondistributable artifacts to all registries whose resolved IP address is within the subnet described by the CIDR syntax. This option is useful when pushing images containing nondistributable artifacts to a registry on an air-gapped network so hosts on that network can pull the images without connecting to another server. > **Warning**: Nondistributable artifacts typically have restrictions > on how and where they can be distributed and shared. Only use this > feature to push artifacts to private registries and ensure that you > are in compliance with any terms that cover redistributing > nondistributable artifacts. type: "array" items: type: "string" example: ["::1/128", "127.0.0.0/8"] AllowNondistributableArtifactsHostnames: description: | List of registry hostnames to which nondistributable artifacts can be pushed, using the format `<hostname>[:<port>]` or `<IP address>[:<port>]`. Some images (for example, Windows base images) contain artifacts whose distribution is restricted by license. When these images are pushed to a registry, restricted artifacts are not included. This configuration override this behavior for the specified registries. 
This option is useful when pushing images containing nondistributable artifacts to a registry on an air-gapped network so hosts on that network can pull the images without connecting to another server. > **Warning**: Nondistributable artifacts typically have restrictions > on how and where they can be distributed and shared. Only use this > feature to push artifacts to private registries and ensure that you > are in compliance with any terms that cover redistributing > nondistributable artifacts. type: "array" items: type: "string" example: ["registry.internal.corp.example.com:3000", "[2001:db8:a0b:12f0::1]:443"] InsecureRegistryCIDRs: description: | List of IP ranges of insecure registries, using the CIDR syntax ([RFC 4632](https://tools.ietf.org/html/4632)). Insecure registries accept un-encrypted (HTTP) and/or untrusted (HTTPS with certificates from unknown CAs) communication. By default, local registries (`127.0.0.0/8`) are configured as insecure. All other registries are secure. Communicating with an insecure registry is not possible if the daemon assumes that registry is secure. This configuration override this behavior, insecure communication with registries whose resolved IP address is within the subnet described by the CIDR syntax. Registries can also be marked insecure by hostname. Those registries are listed under `IndexConfigs` and have their `Secure` field set to `false`. > **Warning**: Using this option can be useful when running a local > registry, but introduces security vulnerabilities. This option > should therefore ONLY be used for testing purposes. For increased > security, users should add their CA to their system's list of trusted > CAs instead of enabling this option. type: "array" items: type: "string" example: ["::1/128", "127.0.0.0/8"] IndexConfigs: type: "object" additionalProperties: $ref: "#/definitions/IndexInfo" example: "127.0.0.1:5000": "Name": "127.0.0.1:5000" "Mirrors": [] "Secure": false "Official": false "[2001:db8:a0b:12f0::1]:80": "Name": "[2001:db8:a0b:12f0::1]:80" "Mirrors": [] "Secure": false "Official": false "docker.io": Name: "docker.io" Mirrors: ["https://hub-mirror.corp.example.com:5000/"] Secure: true Official: true "registry.internal.corp.example.com:3000": Name: "registry.internal.corp.example.com:3000" Mirrors: [] Secure: false Official: false Mirrors: description: | List of registry URLs that act as a mirror for the official (`docker.io`) registry. type: "array" items: type: "string" example: - "https://hub-mirror.corp.example.com:5000/" - "https://[2001:db8:a0b:12f0::1]/" IndexInfo: description: IndexInfo contains information about a registry. type: "object" x-nullable: true properties: Name: description: | Name of the registry, such as "docker.io". type: "string" example: "docker.io" Mirrors: description: | List of mirrors, expressed as URIs. type: "array" items: type: "string" example: - "https://hub-mirror.corp.example.com:5000/" - "https://registry-2.docker.io/" - "https://registry-3.docker.io/" Secure: description: | Indicates if the registry is part of the list of insecure registries. If `false`, the registry is insecure. Insecure registries accept un-encrypted (HTTP) and/or untrusted (HTTPS with certificates from unknown CAs) communication. > **Warning**: Insecure registries can be useful when running a local > registry. However, because its use creates security vulnerabilities > it should ONLY be enabled for testing purposes. 
For increased > security, users should add their CA to their system's list of > trusted CAs instead of enabling this option. type: "boolean" example: true Official: description: | Indicates whether this is an official registry (i.e., Docker Hub / docker.io) type: "boolean" example: true Runtime: description: | Runtime describes an [OCI compliant](https://github.com/opencontainers/runtime-spec) runtime. The runtime is invoked by the daemon via the `containerd` daemon. OCI runtimes act as an interface to the Linux kernel namespaces, cgroups, and SELinux. type: "object" properties: path: description: | Name and, optional, path, of the OCI executable binary. If the path is omitted, the daemon searches the host's `$PATH` for the binary and uses the first result. type: "string" example: "/usr/local/bin/my-oci-runtime" runtimeArgs: description: | List of command-line arguments to pass to the runtime when invoked. type: "array" x-nullable: true items: type: "string" example: ["--debug", "--systemd-cgroup=false"] Commit: description: | Commit holds the Git-commit (SHA1) that a binary was built from, as reported in the version-string of external tools, such as `containerd`, or `runC`. type: "object" properties: ID: description: "Actual commit ID of external tool." type: "string" example: "cfb82a876ecc11b5ca0977d1733adbe58599088a" Expected: description: | Commit ID of external tool expected by dockerd as set at build time. type: "string" example: "2d41c047c83e09a6d61d464906feb2a2f3c52aa4" SwarmInfo: description: | Represents generic information about swarm. type: "object" properties: NodeID: description: "Unique identifier of for this node in the swarm." type: "string" default: "" example: "k67qz4598weg5unwwffg6z1m1" NodeAddr: description: | IP address at which this node can be reached by other nodes in the swarm. type: "string" default: "" example: "10.0.0.46" LocalNodeState: $ref: "#/definitions/LocalNodeState" ControlAvailable: type: "boolean" default: false example: true Error: type: "string" default: "" RemoteManagers: description: | List of ID's and addresses of other managers in the swarm. type: "array" default: null x-nullable: true items: $ref: "#/definitions/PeerNode" example: - NodeID: "71izy0goik036k48jg985xnds" Addr: "10.0.0.158:2377" - NodeID: "79y6h1o4gv8n120drcprv5nmc" Addr: "10.0.0.159:2377" - NodeID: "k67qz4598weg5unwwffg6z1m1" Addr: "10.0.0.46:2377" Nodes: description: "Total number of nodes in the swarm." type: "integer" x-nullable: true example: 4 Managers: description: "Total number of managers in the swarm." type: "integer" x-nullable: true example: 3 Cluster: $ref: "#/definitions/ClusterInfo" LocalNodeState: description: "Current local status of this node." type: "string" default: "" enum: - "" - "inactive" - "pending" - "active" - "error" - "locked" example: "active" PeerNode: description: "Represents a peer-node in the swarm" properties: NodeID: description: "Unique identifier of for this node in the swarm." type: "string" Addr: description: | IP address and ports at which this node can be reached. type: "string" NetworkAttachmentConfig: description: | Specifies how a service should be attached to a particular network. type: "object" properties: Target: description: | The target network for attachment. Must be a network name or ID. type: "string" Aliases: description: | Discoverable alternate names for the service on this network. type: "array" items: type: "string" DriverOpts: description: | Driver attachment options for the network target. 
type: "object" additionalProperties: type: "string" paths: /containers/json: get: summary: "List containers" description: | Returns a list of containers. For details on the format, see the [inspect endpoint](#operation/ContainerInspect). Note that it uses a different, smaller representation of a container than inspecting a single container. For example, the list of linked containers is not propagated . operationId: "ContainerList" produces: - "application/json" parameters: - name: "all" in: "query" description: | Return all containers. By default, only running containers are shown. type: "boolean" default: false - name: "limit" in: "query" description: | Return this number of most recently created containers, including non-running ones. type: "integer" - name: "size" in: "query" description: | Return the size of container as fields `SizeRw` and `SizeRootFs`. type: "boolean" default: false - name: "filters" in: "query" description: | Filters to process on the container list, encoded as JSON (a `map[string][]string`). For example, `{"status": ["paused"]}` will only return paused containers. Available filters: - `ancestor`=(`<image-name>[:<tag>]`, `<image id>`, or `<image@digest>`) - `before`=(`<container id>` or `<container name>`) - `expose`=(`<port>[/<proto>]`|`<startport-endport>/[<proto>]`) - `exited=<int>` containers with exit code of `<int>` - `health`=(`starting`|`healthy`|`unhealthy`|`none`) - `id=<ID>` a container's ID - `isolation=`(`default`|`process`|`hyperv`) (Windows daemon only) - `is-task=`(`true`|`false`) - `label=key` or `label="key=value"` of a container label - `name=<name>` a container's name - `network`=(`<network id>` or `<network name>`) - `publish`=(`<port>[/<proto>]`|`<startport-endport>/[<proto>]`) - `since`=(`<container id>` or `<container name>`) - `status=`(`created`|`restarting`|`running`|`removing`|`paused`|`exited`|`dead`) - `volume`=(`<volume name>` or `<mount point destination>`) type: "string" responses: 200: description: "no error" schema: $ref: "#/definitions/ContainerSummary" examples: application/json: - Id: "8dfafdbc3a40" Names: - "/boring_feynman" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 1" Created: 1367854155 State: "Exited" Status: "Exit 0" Ports: - PrivatePort: 2222 PublicPort: 3333 Type: "tcp" Labels: com.example.vendor: "Acme" com.example.license: "GPL" com.example.version: "1.0" SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "2cdc4edb1ded3631c81f57966563e5c8525b81121bb3706a9a9a3ae102711f3f" Gateway: "172.17.0.1" IPAddress: "172.17.0.2" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:02" Mounts: - Name: "fac362...80535" Source: "/data" Destination: "/data" Driver: "local" Mode: "ro,Z" RW: false Propagation: "" - Id: "9cd87474be90" Names: - "/coolName" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 222222" Created: 1367854155 State: "Exited" Status: "Exit 0" Ports: [] Labels: {} SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "88eaed7b37b38c2a3f0c4bc796494fdf51b270c2d22656412a2ca5d559a64d7a" Gateway: "172.17.0.1" IPAddress: "172.17.0.8" IPPrefixLen: 16 IPv6Gateway: "" 
GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:08" Mounts: [] - Id: "3176a2479c92" Names: - "/sleepy_dog" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 3333333333333333" Created: 1367854154 State: "Exited" Status: "Exit 0" Ports: [] Labels: {} SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "8b27c041c30326d59cd6e6f510d4f8d1d570a228466f956edf7815508f78e30d" Gateway: "172.17.0.1" IPAddress: "172.17.0.6" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:06" Mounts: [] - Id: "4cb07b47f9fb" Names: - "/running_cat" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 444444444444444444444444444444444" Created: 1367854152 State: "Exited" Status: "Exit 0" Ports: [] Labels: {} SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "d91c7b2f0644403d7ef3095985ea0e2370325cd2332ff3a3225c4247328e66e9" Gateway: "172.17.0.1" IPAddress: "172.17.0.5" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:05" Mounts: [] 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Container"] /containers/create: post: summary: "Create a container" operationId: "ContainerCreate" consumes: - "application/json" - "application/octet-stream" produces: - "application/json" parameters: - name: "name" in: "query" description: | Assign the specified name to the container. Must match `/?[a-zA-Z0-9][a-zA-Z0-9_.-]+`. 
type: "string" pattern: "^/?[a-zA-Z0-9][a-zA-Z0-9_.-]+$" - name: "body" in: "body" description: "Container to create" schema: allOf: - $ref: "#/definitions/ContainerConfig" - type: "object" properties: HostConfig: $ref: "#/definitions/HostConfig" NetworkingConfig: $ref: "#/definitions/NetworkingConfig" example: Hostname: "" Domainname: "" User: "" AttachStdin: false AttachStdout: true AttachStderr: true Tty: false OpenStdin: false StdinOnce: false Env: - "FOO=bar" - "BAZ=quux" Cmd: - "date" Entrypoint: "" Image: "ubuntu" Labels: com.example.vendor: "Acme" com.example.license: "GPL" com.example.version: "1.0" Volumes: /volumes/data: {} WorkingDir: "" NetworkDisabled: false MacAddress: "12:34:56:78:9a:bc" ExposedPorts: 22/tcp: {} StopSignal: "SIGTERM" StopTimeout: 10 HostConfig: Binds: - "/tmp:/tmp" Links: - "redis3:redis" Memory: 0 MemorySwap: 0 MemoryReservation: 0 KernelMemory: 0 NanoCpus: 500000 CpuPercent: 80 CpuShares: 512 CpuPeriod: 100000 CpuRealtimePeriod: 1000000 CpuRealtimeRuntime: 10000 CpuQuota: 50000 CpusetCpus: "0,1" CpusetMems: "0,1" MaximumIOps: 0 MaximumIOBps: 0 BlkioWeight: 300 BlkioWeightDevice: - {} BlkioDeviceReadBps: - {} BlkioDeviceReadIOps: - {} BlkioDeviceWriteBps: - {} BlkioDeviceWriteIOps: - {} DeviceRequests: - Driver: "nvidia" Count: -1 DeviceIDs": ["0", "1", "GPU-fef8089b-4820-abfc-e83e-94318197576e"] Capabilities: [["gpu", "nvidia", "compute"]] Options: property1: "string" property2: "string" MemorySwappiness: 60 OomKillDisable: false OomScoreAdj: 500 PidMode: "" PidsLimit: 0 PortBindings: 22/tcp: - HostPort: "11022" PublishAllPorts: false Privileged: false ReadonlyRootfs: false Dns: - "8.8.8.8" DnsOptions: - "" DnsSearch: - "" VolumesFrom: - "parent" - "other:ro" CapAdd: - "NET_ADMIN" CapDrop: - "MKNOD" GroupAdd: - "newgroup" RestartPolicy: Name: "" MaximumRetryCount: 0 AutoRemove: true NetworkMode: "bridge" Devices: [] Ulimits: - {} LogConfig: Type: "json-file" Config: {} SecurityOpt: [] StorageOpt: {} CgroupParent: "" VolumeDriver: "" ShmSize: 67108864 NetworkingConfig: EndpointsConfig: isolated_nw: IPAMConfig: IPv4Address: "172.20.30.33" IPv6Address: "2001:db8:abcd::3033" LinkLocalIPs: - "169.254.34.68" - "fe80::3468" Links: - "container_1" - "container_2" Aliases: - "server_x" - "server_y" required: true responses: 201: description: "Container created successfully" schema: type: "object" title: "ContainerCreateResponse" description: "OK response to ContainerCreate operation" required: [Id, Warnings] properties: Id: description: "The ID of the created container" type: "string" x-nullable: false Warnings: description: "Warnings encountered when creating the container" type: "array" x-nullable: false items: type: "string" examples: application/json: Id: "e90e34656806" Warnings: [] 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such image" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such image: c2ada9df5af8" 409: description: "conflict" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Container"] /containers/{id}/json: get: summary: "Inspect a container" description: "Return low-level information about a container." 
operationId: "ContainerInspect" produces: - "application/json" responses: 200: description: "no error" schema: type: "object" title: "ContainerInspectResponse" properties: Id: description: "The ID of the container" type: "string" Created: description: "The time the container was created" type: "string" Path: description: "The path to the command being run" type: "string" Args: description: "The arguments to the command being run" type: "array" items: type: "string" State: x-nullable: true $ref: "#/definitions/ContainerState" Image: description: "The container's image ID" type: "string" ResolvConfPath: type: "string" HostnamePath: type: "string" HostsPath: type: "string" LogPath: type: "string" Name: type: "string" RestartCount: type: "integer" Driver: type: "string" Platform: type: "string" MountLabel: type: "string" ProcessLabel: type: "string" AppArmorProfile: type: "string" ExecIDs: description: "IDs of exec instances that are running in the container." type: "array" items: type: "string" x-nullable: true HostConfig: $ref: "#/definitions/HostConfig" GraphDriver: $ref: "#/definitions/GraphDriverData" SizeRw: description: | The size of files that have been created or changed by this container. type: "integer" format: "int64" SizeRootFs: description: "The total size of all the files in this container." type: "integer" format: "int64" Mounts: type: "array" items: $ref: "#/definitions/MountPoint" Config: $ref: "#/definitions/ContainerConfig" NetworkSettings: $ref: "#/definitions/NetworkSettings" examples: application/json: AppArmorProfile: "" Args: - "-c" - "exit 9" Config: AttachStderr: true AttachStdin: false AttachStdout: true Cmd: - "/bin/sh" - "-c" - "exit 9" Domainname: "" Env: - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" Healthcheck: Test: ["CMD-SHELL", "exit 0"] Hostname: "ba033ac44011" Image: "ubuntu" Labels: com.example.vendor: "Acme" com.example.license: "GPL" com.example.version: "1.0" MacAddress: "" NetworkDisabled: false OpenStdin: false StdinOnce: false Tty: false User: "" Volumes: /volumes/data: {} WorkingDir: "" StopSignal: "SIGTERM" StopTimeout: 10 Created: "2015-01-06T15:47:31.485331387Z" Driver: "devicemapper" ExecIDs: - "b35395de42bc8abd327f9dd65d913b9ba28c74d2f0734eeeae84fa1c616a0fca" - "3fc1232e5cd20c8de182ed81178503dc6437f4e7ef12b52cc5e8de020652f1c4" HostConfig: MaximumIOps: 0 MaximumIOBps: 0 BlkioWeight: 0 BlkioWeightDevice: - {} BlkioDeviceReadBps: - {} BlkioDeviceWriteBps: - {} BlkioDeviceReadIOps: - {} BlkioDeviceWriteIOps: - {} ContainerIDFile: "" CpusetCpus: "" CpusetMems: "" CpuPercent: 80 CpuShares: 0 CpuPeriod: 100000 CpuRealtimePeriod: 1000000 CpuRealtimeRuntime: 10000 Devices: [] DeviceRequests: - Driver: "nvidia" Count: -1 DeviceIDs": ["0", "1", "GPU-fef8089b-4820-abfc-e83e-94318197576e"] Capabilities: [["gpu", "nvidia", "compute"]] Options: property1: "string" property2: "string" IpcMode: "" LxcConf: [] Memory: 0 MemorySwap: 0 MemoryReservation: 0 KernelMemory: 0 OomKillDisable: false OomScoreAdj: 500 NetworkMode: "bridge" PidMode: "" PortBindings: {} Privileged: false ReadonlyRootfs: false PublishAllPorts: false RestartPolicy: MaximumRetryCount: 2 Name: "on-failure" LogConfig: Type: "json-file" Sysctls: net.ipv4.ip_forward: "1" Ulimits: - {} VolumeDriver: "" ShmSize: 67108864 HostnamePath: "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/hostname" HostsPath: "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/hosts" LogPath: 
"/var/lib/docker/containers/1eb5fabf5a03807136561b3c00adcd2992b535d624d5e18b6cdc6a6844d9767b/1eb5fabf5a03807136561b3c00adcd2992b535d624d5e18b6cdc6a6844d9767b-json.log" Id: "ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39" Image: "04c5d3b7b0656168630d3ba35d8889bd0e9caafcaeb3004d2bfbc47e7c5d35d2" MountLabel: "" Name: "/boring_euclid" NetworkSettings: Bridge: "" SandboxID: "" HairpinMode: false LinkLocalIPv6Address: "" LinkLocalIPv6PrefixLen: 0 SandboxKey: "" EndpointID: "" Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 IPAddress: "" IPPrefixLen: 0 IPv6Gateway: "" MacAddress: "" Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "7587b82f0dada3656fda26588aee72630c6fab1536d36e394b2bfbcf898c971d" Gateway: "172.17.0.1" IPAddress: "172.17.0.2" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:12:00:02" Path: "/bin/sh" ProcessLabel: "" ResolvConfPath: "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/resolv.conf" RestartCount: 1 State: Error: "" ExitCode: 9 FinishedAt: "2015-01-06T15:47:32.080254511Z" Health: Status: "healthy" FailingStreak: 0 Log: - Start: "2019-12-22T10:59:05.6385933Z" End: "2019-12-22T10:59:05.8078452Z" ExitCode: 0 Output: "" OOMKilled: false Dead: false Paused: false Pid: 0 Restarting: false Running: true StartedAt: "2015-01-06T15:47:32.072697474Z" Status: "running" Mounts: - Name: "fac362...80535" Source: "/data" Destination: "/data" Driver: "local" Mode: "ro,Z" RW: false Propagation: "" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "size" in: "query" type: "boolean" default: false description: "Return the size of container as fields `SizeRw` and `SizeRootFs`" tags: ["Container"] /containers/{id}/top: get: summary: "List processes running inside a container" description: | On Unix systems, this is done by running the `ps` command. This endpoint is not supported on Windows. operationId: "ContainerTop" responses: 200: description: "no error" schema: type: "object" title: "ContainerTopResponse" description: "OK response to ContainerTop operation" properties: Titles: description: "The ps column titles" type: "array" items: type: "string" Processes: description: | Each process running in the container, where each is process is an array of values corresponding to the titles. type: "array" items: type: "array" items: type: "string" examples: application/json: Titles: - "UID" - "PID" - "PPID" - "C" - "STIME" - "TTY" - "TIME" - "CMD" Processes: - - "root" - "13642" - "882" - "0" - "17:03" - "pts/0" - "00:00:00" - "/bin/bash" - - "root" - "13735" - "13642" - "0" - "17:06" - "pts/0" - "00:00:00" - "sleep 10" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "ps_args" in: "query" description: "The arguments to pass to `ps`. 
For example, `aux`" type: "string" default: "-ef" tags: ["Container"] /containers/{id}/logs: get: summary: "Get container logs" description: | Get `stdout` and `stderr` logs from a container. Note: This endpoint works only for containers with the `json-file` or `journald` logging driver. operationId: "ContainerLogs" responses: 200: description: | logs returned as a stream in response body. For the stream format, [see the documentation for the attach endpoint](#operation/ContainerAttach). Note that unlike the attach endpoint, the logs endpoint does not upgrade the connection and does not set Content-Type. schema: type: "string" format: "binary" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "follow" in: "query" description: "Keep connection after returning logs." type: "boolean" default: false - name: "stdout" in: "query" description: "Return logs from `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Return logs from `stderr`" type: "boolean" default: false - name: "since" in: "query" description: "Only return logs since this time, as a UNIX timestamp" type: "integer" default: 0 - name: "until" in: "query" description: "Only return logs before this time, as a UNIX timestamp" type: "integer" default: 0 - name: "timestamps" in: "query" description: "Add timestamps to every log line" type: "boolean" default: false - name: "tail" in: "query" description: | Only return this number of log lines from the end of the logs. Specify as an integer or `all` to output all log lines. type: "string" default: "all" tags: ["Container"] /containers/{id}/changes: get: summary: "Get changes on a container’s filesystem" description: | Returns which files in a container's filesystem have been added, deleted, or modified. The `Kind` of modification can be one of: - `0`: Modified - `1`: Added - `2`: Deleted operationId: "ContainerChanges" produces: ["application/json"] responses: 200: description: "The list of changes" schema: type: "array" items: type: "object" x-go-name: "ContainerChangeResponseItem" title: "ContainerChangeResponseItem" description: "change item in response to ContainerChanges operation" required: [Path, Kind] properties: Path: description: "Path to file that has changed" type: "string" x-nullable: false Kind: description: "Kind of change" type: "integer" format: "uint8" enum: [0, 1, 2] x-nullable: false examples: application/json: - Path: "/dev" Kind: 0 - Path: "/dev/kmsg" Kind: 1 - Path: "/test" Kind: 1 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/export: get: summary: "Export a container" description: "Export the contents of a container as a tarball." 
operationId: "ContainerExport" produces: - "application/octet-stream" responses: 200: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/stats: get: summary: "Get container stats based on resource usage" description: | This endpoint returns a live stream of a container’s resource usage statistics. The `precpu_stats` is the CPU statistic of the *previous* read, and is used to calculate the CPU usage percentage. It is not an exact copy of the `cpu_stats` field. If either `precpu_stats.online_cpus` or `cpu_stats.online_cpus` is nil then for compatibility with older daemons the length of the corresponding `cpu_usage.percpu_usage` array should be used. On a cgroup v2 host, the following fields are not set * `blkio_stats`: all fields other than `io_service_bytes_recursive` * `cpu_stats`: `cpu_usage.percpu_usage` * `memory_stats`: `max_usage` and `failcnt` Also, `memory_stats.stats` fields are incompatible with cgroup v1. To calculate the values shown by the `stats` command of the docker cli tool the following formulas can be used: * used_memory = `memory_stats.usage - memory_stats.stats.cache` * available_memory = `memory_stats.limit` * Memory usage % = `(used_memory / available_memory) * 100.0` * cpu_delta = `cpu_stats.cpu_usage.total_usage - precpu_stats.cpu_usage.total_usage` * system_cpu_delta = `cpu_stats.system_cpu_usage - precpu_stats.system_cpu_usage` * number_cpus = `lenght(cpu_stats.cpu_usage.percpu_usage)` or `cpu_stats.online_cpus` * CPU usage % = `(cpu_delta / system_cpu_delta) * number_cpus * 100.0` operationId: "ContainerStats" produces: ["application/json"] responses: 200: description: "no error" schema: type: "object" examples: application/json: read: "2015-01-08T22:57:31.547920715Z" pids_stats: current: 3 networks: eth0: rx_bytes: 5338 rx_dropped: 0 rx_errors: 0 rx_packets: 36 tx_bytes: 648 tx_dropped: 0 tx_errors: 0 tx_packets: 8 eth5: rx_bytes: 4641 rx_dropped: 0 rx_errors: 0 rx_packets: 26 tx_bytes: 690 tx_dropped: 0 tx_errors: 0 tx_packets: 9 memory_stats: stats: total_pgmajfault: 0 cache: 0 mapped_file: 0 total_inactive_file: 0 pgpgout: 414 rss: 6537216 total_mapped_file: 0 writeback: 0 unevictable: 0 pgpgin: 477 total_unevictable: 0 pgmajfault: 0 total_rss: 6537216 total_rss_huge: 6291456 total_writeback: 0 total_inactive_anon: 0 rss_huge: 6291456 hierarchical_memory_limit: 67108864 total_pgfault: 964 total_active_file: 0 active_anon: 6537216 total_active_anon: 6537216 total_pgpgout: 414 total_cache: 0 inactive_anon: 0 active_file: 0 pgfault: 964 inactive_file: 0 total_pgpgin: 477 max_usage: 6651904 usage: 6537216 failcnt: 0 limit: 67108864 blkio_stats: {} cpu_stats: cpu_usage: percpu_usage: - 8646879 - 24472255 - 36438778 - 30657443 usage_in_usermode: 50000000 total_usage: 100215355 usage_in_kernelmode: 30000000 system_cpu_usage: 739306590000000 online_cpus: 4 throttling_data: periods: 0 throttled_periods: 0 throttled_time: 0 precpu_stats: cpu_usage: percpu_usage: - 8646879 - 24350896 - 36438778 - 30657443 usage_in_usermode: 50000000 total_usage: 100093996 usage_in_kernelmode: 30000000 system_cpu_usage: 9492140000000 online_cpus: 4 throttling_data: periods: 0 throttled_periods: 0 throttled_time: 0 404: description: "no such 
container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "stream" in: "query" description: | Stream the output. If false, the stats will be output once and then it will disconnect. type: "boolean" default: true - name: "one-shot" in: "query" description: | Only get a single stat instead of waiting for 2 cycles. Must be used with `stream=false`. type: "boolean" default: false tags: ["Container"] /containers/{id}/resize: post: summary: "Resize a container TTY" description: "Resize the TTY for a container." operationId: "ContainerResize" consumes: - "application/octet-stream" produces: - "text/plain" responses: 200: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "cannot resize container" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "h" in: "query" description: "Height of the TTY session in characters" type: "integer" - name: "w" in: "query" description: "Width of the TTY session in characters" type: "integer" tags: ["Container"] /containers/{id}/start: post: summary: "Start a container" operationId: "ContainerStart" responses: 204: description: "no error" 304: description: "container already started" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "detachKeys" in: "query" description: | Override the key sequence for detaching a container. Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,` or `_`. 
type: "string" tags: ["Container"] /containers/{id}/stop: post: summary: "Stop a container" operationId: "ContainerStop" responses: 204: description: "no error" 304: description: "container already stopped" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "t" in: "query" description: "Number of seconds to wait before killing the container" type: "integer" tags: ["Container"] /containers/{id}/restart: post: summary: "Restart a container" operationId: "ContainerRestart" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "t" in: "query" description: "Number of seconds to wait before killing the container" type: "integer" tags: ["Container"] /containers/{id}/kill: post: summary: "Kill a container" description: | Send a POSIX signal to a container, defaulting to killing to the container. operationId: "ContainerKill" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "container is not running" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "Container d37cde0fe4ad63c3a7252023b2f9800282894247d145cb5933ddf6e52cc03a28 is not running" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "signal" in: "query" description: "Signal to send to the container as an integer or string (e.g. `SIGINT`)" type: "string" default: "SIGKILL" tags: ["Container"] /containers/{id}/update: post: summary: "Update a container" description: | Change various configuration options of a container without having to recreate it. operationId: "ContainerUpdate" consumes: ["application/json"] produces: ["application/json"] responses: 200: description: "The container has been updated." 
schema: type: "object" title: "ContainerUpdateResponse" description: "OK response to ContainerUpdate operation" properties: Warnings: type: "array" items: type: "string" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "update" in: "body" required: true schema: allOf: - $ref: "#/definitions/Resources" - type: "object" properties: RestartPolicy: $ref: "#/definitions/RestartPolicy" example: BlkioWeight: 300 CpuShares: 512 CpuPeriod: 100000 CpuQuota: 50000 CpuRealtimePeriod: 1000000 CpuRealtimeRuntime: 10000 CpusetCpus: "0,1" CpusetMems: "0" Memory: 314572800 MemorySwap: 514288000 MemoryReservation: 209715200 KernelMemory: 52428800 RestartPolicy: MaximumRetryCount: 4 Name: "on-failure" tags: ["Container"] /containers/{id}/rename: post: summary: "Rename a container" operationId: "ContainerRename" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "name already in use" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "name" in: "query" required: true description: "New name for the container" type: "string" tags: ["Container"] /containers/{id}/pause: post: summary: "Pause a container" description: | Use the freezer cgroup to suspend all processes in a container. Traditionally, when suspending a process the `SIGSTOP` signal is used, which is observable by the process being suspended. With the freezer cgroup the process is unaware, and unable to capture, that it is being suspended, and subsequently resumed. operationId: "ContainerPause" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/unpause: post: summary: "Unpause a container" description: "Resume a container which has been paused." operationId: "ContainerUnpause" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/attach: post: summary: "Attach to a container" description: | Attach to a container to read its output or send it input. You can attach to the same container multiple times and you can reattach to containers that have been detached. Either the `stream` or `logs` parameter must be `true` for this endpoint to do anything. See the [documentation for the `docker attach` command](https://docs.docker.com/engine/reference/commandline/attach/) for more details. 
### Hijacking This endpoint hijacks the HTTP connection to transport `stdin`, `stdout`, and `stderr` on the same socket. This is the response from the daemon for an attach request: ``` HTTP/1.1 200 OK Content-Type: application/vnd.docker.raw-stream [STREAM] ``` After the headers and two new lines, the TCP connection can now be used for raw, bidirectional communication between the client and server. To hint potential proxies about connection hijacking, the Docker client can also optionally send connection upgrade headers. For example, the client sends this request to upgrade the connection: ``` POST /containers/16253994b7c4/attach?stream=1&stdout=1 HTTP/1.1 Upgrade: tcp Connection: Upgrade ``` The Docker daemon will respond with a `101 UPGRADED` response, and will similarly follow with the raw stream: ``` HTTP/1.1 101 UPGRADED Content-Type: application/vnd.docker.raw-stream Connection: Upgrade Upgrade: tcp [STREAM] ``` ### Stream format When the TTY setting is disabled in [`POST /containers/create`](#operation/ContainerCreate), the stream over the hijacked connected is multiplexed to separate out `stdout` and `stderr`. The stream consists of a series of frames, each containing a header and a payload. The header contains the information which the stream writes (`stdout` or `stderr`). It also contains the size of the associated frame encoded in the last four bytes (`uint32`). It is encoded on the first eight bytes like this: ```go header := [8]byte{STREAM_TYPE, 0, 0, 0, SIZE1, SIZE2, SIZE3, SIZE4} ``` `STREAM_TYPE` can be: - 0: `stdin` (is written on `stdout`) - 1: `stdout` - 2: `stderr` `SIZE1, SIZE2, SIZE3, SIZE4` are the four bytes of the `uint32` size encoded as big endian. Following the header is the payload, which is the specified number of bytes of `STREAM_TYPE`. The simplest way to implement this protocol is the following: 1. Read 8 bytes. 2. Choose `stdout` or `stderr` depending on the first byte. 3. Extract the frame size from the last four bytes. 4. Read the extracted size and output it on the correct output. 5. Goto 1. ### Stream format when using a TTY When the TTY setting is enabled in [`POST /containers/create`](#operation/ContainerCreate), the stream is not multiplexed. The data exchanged over the hijacked connection is simply the raw data from the process PTY and client's `stdin`. operationId: "ContainerAttach" produces: - "application/vnd.docker.raw-stream" responses: 101: description: "no error, hints proxy about hijacking" 200: description: "no error, no upgrade header found" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "detachKeys" in: "query" description: | Override the key sequence for detaching a container.Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,` or `_`. type: "string" - name: "logs" in: "query" description: | Replay previous logs from the container. This is useful for attaching to a container that has started and you want to output everything since the container started. If `stream` is also enabled, once all the previous output has been returned, it will seamlessly transition into streaming current output. 
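A standard-library sketch of the demultiplexing algorithm just described; in practice `src` would be the hijacked connection returned by this endpoint, and the sample frame below is fabricated purely for illustration:

```go
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
	"io"
	"os"
)

// demux follows the documented algorithm: read the 8-byte header, pick a
// destination from the first byte, read the big-endian size from the last
// four bytes, then copy that many payload bytes, and repeat.
func demux(src io.Reader, stdout, stderr io.Writer) error {
	var hdr [8]byte
	for {
		if _, err := io.ReadFull(src, hdr[:]); err != nil {
			if err == io.EOF {
				return nil
			}
			return err
		}
		var dst io.Writer
		switch hdr[0] {
		case 0, 1: // stdin (written on stdout), stdout
			dst = stdout
		case 2: // stderr
			dst = stderr
		default:
			return fmt.Errorf("unknown stream type %d", hdr[0])
		}
		size := int64(binary.BigEndian.Uint32(hdr[4:8]))
		if _, err := io.CopyN(dst, src, size); err != nil {
			return err
		}
	}
}

func main() {
	// A fabricated frame: stream type 1 (stdout), payload size 5, payload "hello".
	frame := append([]byte{1, 0, 0, 0, 0, 0, 0, 5}, []byte("hello")...)
	if err := demux(bytes.NewReader(frame), os.Stdout, os.Stderr); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```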
type: "boolean" default: false - name: "stream" in: "query" description: | Stream attached streams from the time the request was made onwards. type: "boolean" default: false - name: "stdin" in: "query" description: "Attach to `stdin`" type: "boolean" default: false - name: "stdout" in: "query" description: "Attach to `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Attach to `stderr`" type: "boolean" default: false tags: ["Container"] /containers/{id}/attach/ws: get: summary: "Attach to a container via a websocket" operationId: "ContainerAttachWebsocket" responses: 101: description: "no error, hints proxy about hijacking" 200: description: "no error, no upgrade header found" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "detachKeys" in: "query" description: | Override the key sequence for detaching a container.Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,`, or `_`. type: "string" - name: "logs" in: "query" description: "Return logs" type: "boolean" default: false - name: "stream" in: "query" description: "Return stream" type: "boolean" default: false - name: "stdin" in: "query" description: "Attach to `stdin`" type: "boolean" default: false - name: "stdout" in: "query" description: "Attach to `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Attach to `stderr`" type: "boolean" default: false tags: ["Container"] /containers/{id}/wait: post: summary: "Wait for a container" description: "Block until a container stops, then returns the exit code." operationId: "ContainerWait" produces: ["application/json"] responses: 200: description: "The container has exit." schema: type: "object" title: "ContainerWaitResponse" description: "OK response to ContainerWait operation" required: [StatusCode] properties: StatusCode: description: "Exit code of the container" type: "integer" x-nullable: false Error: description: "container waiting error, if any" type: "object" properties: Message: description: "Details of an error" type: "string" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "condition" in: "query" description: | Wait until a container state reaches the given condition, either 'not-running' (default), 'next-exit', or 'removed'. type: "string" default: "not-running" tags: ["Container"] /containers/{id}: delete: summary: "Remove a container" operationId: "ContainerDelete" responses: 204: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "conflict" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: | You cannot remove a running container: c2ada9df5af8. 
Stop the container before attempting removal or force remove 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "v" in: "query" description: "Remove anonymous volumes associated with the container." type: "boolean" default: false - name: "force" in: "query" description: "If the container is running, kill it before removing it." type: "boolean" default: false - name: "link" in: "query" description: "Remove the specified link associated with the container." type: "boolean" default: false tags: ["Container"] /containers/{id}/archive: head: summary: "Get information about files in a container" description: | A response header `X-Docker-Container-Path-Stat` is returned, containing a base64 - encoded JSON object with some filesystem header information about the path. operationId: "ContainerArchiveInfo" responses: 200: description: "no error" headers: X-Docker-Container-Path-Stat: type: "string" description: | A base64 - encoded JSON object with some filesystem header information about the path 400: description: "Bad parameter" schema: allOf: - $ref: "#/definitions/ErrorResponse" - type: "object" properties: message: description: | The error message. Either "must specify path parameter" (path cannot be empty) or "not a directory" (path was asserted to be a directory but exists as a file). type: "string" x-nullable: false 404: description: "Container or path does not exist" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "path" in: "query" required: true description: "Resource in the container’s filesystem to archive." type: "string" tags: ["Container"] get: summary: "Get an archive of a filesystem resource in a container" description: "Get a tar archive of a resource in the filesystem of container id." operationId: "ContainerArchive" produces: ["application/x-tar"] responses: 200: description: "no error" 400: description: "Bad parameter" schema: allOf: - $ref: "#/definitions/ErrorResponse" - type: "object" properties: message: description: | The error message. Either "must specify path parameter" (path cannot be empty) or "not a directory" (path was asserted to be a directory but exists as a file). type: "string" x-nullable: false 404: description: "Container or path does not exist" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "path" in: "query" required: true description: "Resource in the container’s filesystem to archive." type: "string" tags: ["Container"] put: summary: "Extract an archive of files or folders to a directory in a container" description: "Upload a tar archive to be extracted to a path in the filesystem of container id." 
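A sketch of uploading a single-file tar archive to the archive endpoint described above, assuming the official Go client; the container name and destination path are placeholders:

```go
package main

import (
	"archive/tar"
	"bytes"
	"context"
	"log"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/client"
)

func main() {
	ctx := context.Background()
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	// Build a one-file tar archive in memory, as required by the endpoint.
	var buf bytes.Buffer
	tw := tar.NewWriter(&buf)
	data := []byte("hello from the API docs\n")
	if err := tw.WriteHeader(&tar.Header{Name: "hello.txt", Mode: 0o644, Size: int64(len(data))}); err != nil {
		log.Fatal(err)
	}
	if _, err := tw.Write(data); err != nil {
		log.Fatal(err)
	}
	if err := tw.Close(); err != nil {
		log.Fatal(err)
	}

	// PUT /containers/{id}/archive?path=/tmp
	if err := cli.CopyToContainer(ctx, "my-container", "/tmp", &buf, types.CopyToContainerOptions{}); err != nil {
		log.Fatal(err)
	}
}
```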
operationId: "PutContainerArchive" consumes: ["application/x-tar", "application/octet-stream"] responses: 200: description: "The content was extracted successfully" 400: description: "Bad parameter" schema: $ref: "#/definitions/ErrorResponse" 403: description: "Permission denied, the volume or container rootfs is marked as read-only." schema: $ref: "#/definitions/ErrorResponse" 404: description: "No such container or path does not exist inside the container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "path" in: "query" required: true description: "Path to a directory in the container to extract the archive’s contents into. " type: "string" - name: "noOverwriteDirNonDir" in: "query" description: | If `1`, `true`, or `True` then it will be an error if unpacking the given content would cause an existing directory to be replaced with a non-directory and vice versa. type: "string" - name: "copyUIDGID" in: "query" description: | If `1`, `true`, then it will copy UID/GID maps to the dest file or dir type: "string" - name: "inputStream" in: "body" required: true description: | The input stream must be a tar archive compressed with one of the following algorithms: `identity` (no compression), `gzip`, `bzip2`, or `xz`. schema: type: "string" format: "binary" tags: ["Container"] /containers/prune: post: summary: "Delete stopped containers" produces: - "application/json" operationId: "ContainerPrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `until=<timestamp>` Prune containers created before this timestamp. The `<timestamp>` can be Unix timestamps, date formatted timestamps, or Go duration strings (e.g. `10m`, `1h30m`) computed relative to the daemon machine’s time. - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune containers with (or without, in case `label!=...` is used) the specified labels. type: "string" responses: 200: description: "No error" schema: type: "object" title: "ContainerPruneResponse" properties: ContainersDeleted: description: "Container IDs that were deleted" type: "array" items: type: "string" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Container"] /images/json: get: summary: "List Images" description: "Returns a list of images on the server. Note that it uses a different, smaller representation of an image than inspecting a single image." 
operationId: "ImageList" produces: - "application/json" responses: 200: description: "Summary image data for the images matching the query" schema: type: "array" items: $ref: "#/definitions/ImageSummary" examples: application/json: - Id: "sha256:e216a057b1cb1efc11f8a268f37ef62083e70b1b38323ba252e25ac88904a7e8" ParentId: "" RepoTags: - "ubuntu:12.04" - "ubuntu:precise" RepoDigests: - "ubuntu@sha256:992069aee4016783df6345315302fa59681aae51a8eeb2f889dea59290f21787" Created: 1474925151 Size: 103579269 VirtualSize: 103579269 SharedSize: 0 Labels: {} Containers: 2 - Id: "sha256:3e314f95dcace0f5e4fd37b10862fe8398e3c60ed36600bc0ca5fda78b087175" ParentId: "" RepoTags: - "ubuntu:12.10" - "ubuntu:quantal" RepoDigests: - "ubuntu@sha256:002fba3e3255af10be97ea26e476692a7ebed0bb074a9ab960b2e7a1526b15d7" - "ubuntu@sha256:68ea0200f0b90df725d99d823905b04cf844f6039ef60c60bf3e019915017bd3" Created: 1403128455 Size: 172064416 VirtualSize: 172064416 SharedSize: 0 Labels: {} Containers: 5 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "all" in: "query" description: "Show all images. Only images from a final layer (no children) are shown by default." type: "boolean" default: false - name: "filters" in: "query" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the images list. Available filters: - `before`=(`<image-name>[:<tag>]`, `<image id>` or `<image@digest>`) - `dangling=true` - `label=key` or `label="key=value"` of an image label - `reference`=(`<image-name>[:<tag>]`) - `since`=(`<image-name>[:<tag>]`, `<image id>` or `<image@digest>`) type: "string" - name: "shared-size" in: "query" description: "Compute and show shared size as a `SharedSize` field on each image." type: "boolean" default: false - name: "digests" in: "query" description: "Show digest information as a `RepoDigests` field on each image." type: "boolean" default: false tags: ["Image"] /build: post: summary: "Build an image" description: | Build an image from a tar archive with a `Dockerfile` in it. The `Dockerfile` specifies how the image is built from the tar archive. It is typically in the archive's root, but can be at a different path or have a different name by specifying the `dockerfile` parameter. [See the `Dockerfile` reference for more information](https://docs.docker.com/engine/reference/builder/). The Docker daemon performs a preliminary validation of the `Dockerfile` before starting the build, and returns an error if the syntax is incorrect. After that, each instruction is run one-by-one until the ID of the new image is output. The build is canceled if the client drops the connection by quitting or being killed. operationId: "ImageBuild" consumes: - "application/octet-stream" produces: - "application/json" parameters: - name: "inputStream" in: "body" description: "A tar archive compressed with one of the following algorithms: identity (no compression), gzip, bzip2, xz." schema: type: "string" format: "binary" - name: "dockerfile" in: "query" description: "Path within the build context to the `Dockerfile`. This is ignored if `remote` is specified and points to an external `Dockerfile`." type: "string" default: "Dockerfile" - name: "t" in: "query" description: "A name and optional tag to apply to the image in the `name:tag` format. If you omit the tag the default `latest` value is assumed. You can provide several `t` parameters." 
type: "string" - name: "extrahosts" in: "query" description: "Extra hosts to add to /etc/hosts" type: "string" - name: "remote" in: "query" description: "A Git repository URI or HTTP/HTTPS context URI. If the URI points to a single text file, the file’s contents are placed into a file called `Dockerfile` and the image is built from that file. If the URI points to a tarball, the file is downloaded by the daemon and the contents therein used as the context for the build. If the URI points to a tarball and the `dockerfile` parameter is also specified, there must be a file with the corresponding path inside the tarball." type: "string" - name: "q" in: "query" description: "Suppress verbose build output." type: "boolean" default: false - name: "nocache" in: "query" description: "Do not use the cache when building the image." type: "boolean" default: false - name: "cachefrom" in: "query" description: "JSON array of images used for build cache resolution." type: "string" - name: "pull" in: "query" description: "Attempt to pull the image even if an older image exists locally." type: "string" - name: "rm" in: "query" description: "Remove intermediate containers after a successful build." type: "boolean" default: true - name: "forcerm" in: "query" description: "Always remove intermediate containers, even upon failure." type: "boolean" default: false - name: "memory" in: "query" description: "Set memory limit for build." type: "integer" - name: "memswap" in: "query" description: "Total memory (memory + swap). Set as `-1` to disable swap." type: "integer" - name: "cpushares" in: "query" description: "CPU shares (relative weight)." type: "integer" - name: "cpusetcpus" in: "query" description: "CPUs in which to allow execution (e.g., `0-3`, `0,1`)." type: "string" - name: "cpuperiod" in: "query" description: "The length of a CPU period in microseconds." type: "integer" - name: "cpuquota" in: "query" description: "Microseconds of CPU time that the container can get in a CPU period." type: "integer" - name: "buildargs" in: "query" description: > JSON map of string pairs for build-time variables. Users pass these values at build-time. Docker uses the buildargs as the environment context for commands run via the `Dockerfile` RUN instruction, or for variable expansion in other `Dockerfile` instructions. This is not meant for passing secret values. For example, the build arg `FOO=bar` would become `{"FOO":"bar"}` in JSON. This would result in the query parameter `buildargs={"FOO":"bar"}`. Note that `{"FOO":"bar"}` should be URI component encoded. [Read more about the buildargs instruction.](https://docs.docker.com/engine/reference/builder/#arg) type: "string" - name: "shmsize" in: "query" description: "Size of `/dev/shm` in bytes. The size must be greater than 0. If omitted the system uses 64MB." type: "integer" - name: "squash" in: "query" description: "Squash the resulting images layers into a single layer. *(Experimental release only.)*" type: "boolean" - name: "labels" in: "query" description: "Arbitrary key/value labels to set on the image, as a JSON map of string pairs." type: "string" - name: "networkmode" in: "query" description: | Sets the networking mode for the run commands during build. Supported standard values are: `bridge`, `host`, `none`, and `container:<name|id>`. Any other value is taken as a custom network's name or ID to which this container should connect to. 
type: "string" - name: "Content-type" in: "header" type: "string" enum: - "application/x-tar" default: "application/x-tar" - name: "X-Registry-Config" in: "header" description: | This is a base64-encoded JSON object with auth configurations for multiple registries that a build may refer to. The key is a registry URL, and the value is an auth configuration object, [as described in the authentication section](#section/Authentication). For example: ``` { "docker.example.com": { "username": "janedoe", "password": "hunter2" }, "https://index.docker.io/v1/": { "username": "mobydock", "password": "conta1n3rize14" } } ``` Only the registry domain name (and port if not the default 443) are required. However, for legacy reasons, the Docker Hub registry must be specified with both a `https://` prefix and a `/v1/` suffix even though Docker will prefer to use the v2 registry API. type: "string" - name: "platform" in: "query" description: "Platform in the format os[/arch[/variant]]" type: "string" default: "" - name: "target" in: "query" description: "Target build stage" type: "string" default: "" - name: "outputs" in: "query" description: "BuildKit output configuration" type: "string" default: "" responses: 200: description: "no error" 400: description: "Bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Image"] /build/prune: post: summary: "Delete builder cache" produces: - "application/json" operationId: "BuildPrune" parameters: - name: "keep-storage" in: "query" description: "Amount of disk space in bytes to keep for cache" type: "integer" format: "int64" - name: "all" in: "query" type: "boolean" description: "Remove all types of build cache" - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the list of build cache objects. Available filters: - `until=<duration>`: duration relative to daemon's time, during which build cache was not used, in Go's duration format (e.g., '24h') - `id=<id>` - `parent=<id>` - `type=<string>` - `description=<string>` - `inuse` - `shared` - `private` responses: 200: description: "No error" schema: type: "object" title: "BuildPruneResponse" properties: CachesDeleted: type: "array" items: description: "ID of build cache object" type: "string" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Image"] /images/create: post: summary: "Create an image" description: "Create an image by either pulling it from a registry or importing it." operationId: "ImageCreate" consumes: - "text/plain" - "application/octet-stream" produces: - "application/json" responses: 200: description: "no error" 404: description: "repository does not exist or no read access" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "fromImage" in: "query" description: "Name of the image to pull. The name may include a tag or digest. This parameter may only be used when pulling an image. The pull is cancelled if the HTTP connection is closed." type: "string" - name: "fromSrc" in: "query" description: "Source to import. The value may be a URL from which the image can be retrieved or `-` to read the image from the request body. This parameter may only be used when importing an image." 
type: "string" - name: "repo" in: "query" description: "Repository name given to an image when it is imported. The repo may include a tag. This parameter may only be used when importing an image." type: "string" - name: "tag" in: "query" description: "Tag or digest. If empty when pulling an image, this causes all tags for the given image to be pulled." type: "string" - name: "message" in: "query" description: "Set commit message for imported image." type: "string" - name: "inputImage" in: "body" description: "Image content if the value `-` has been specified in fromSrc query parameter" schema: type: "string" required: false - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration. Refer to the [authentication section](#section/Authentication) for details. type: "string" - name: "platform" in: "query" description: "Platform in the format os[/arch[/variant]]" type: "string" default: "" tags: ["Image"] /images/{name}/json: get: summary: "Inspect an image" description: "Return low-level information about an image." operationId: "ImageInspect" produces: - "application/json" responses: 200: description: "No error" schema: $ref: "#/definitions/Image" examples: application/json: Id: "sha256:85f05633ddc1c50679be2b16a0479ab6f7637f8884e0cfe0f4d20e1ebb3d6e7c" Container: "cb91e48a60d01f1e27028b4fc6819f4f290b3cf12496c8176ec714d0d390984a" Comment: "" Os: "linux" Architecture: "amd64" Parent: "sha256:91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c" ContainerConfig: Tty: false Hostname: "e611e15f9c9d" Domainname: "" AttachStdout: false PublishService: "" AttachStdin: false OpenStdin: false StdinOnce: false NetworkDisabled: false OnBuild: [] Image: "91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c" User: "" WorkingDir: "" MacAddress: "" AttachStderr: false Labels: com.example.license: "GPL" com.example.version: "1.0" com.example.vendor: "Acme" Env: - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" Cmd: - "/bin/sh" - "-c" - "#(nop) LABEL com.example.vendor=Acme com.example.license=GPL com.example.version=1.0" DockerVersion: "1.9.0-dev" VirtualSize: 188359297 Size: 0 Author: "" Created: "2015-09-10T08:30:53.26995814Z" GraphDriver: Name: "aufs" Data: {} RepoDigests: - "localhost:5000/test/busybox/example@sha256:cbbf2f9a99b47fc460d422812b6a5adff7dfee951d8fa2e4a98caa0382cfbdbf" RepoTags: - "example:1.0" - "example:latest" - "example:stable" Config: Image: "91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c" NetworkDisabled: false OnBuild: [] StdinOnce: false PublishService: "" AttachStdin: false OpenStdin: false Domainname: "" AttachStdout: false Tty: false Hostname: "e611e15f9c9d" Cmd: - "/bin/bash" Env: - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" Labels: com.example.vendor: "Acme" com.example.version: "1.0" com.example.license: "GPL" MacAddress: "" AttachStderr: false WorkingDir: "" User: "" RootFS: Type: "layers" Layers: - "sha256:1834950e52ce4d5a88a1bbd131c537f4d0e56d10ff0dd69e66be3b7dfa9df7e6" - "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such image: someimage (tag: latest)" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or id" type: "string" required: true tags: ["Image"] /images/{name}/history: get: summary: "Get the history of an image" 
description: "Return parent layers of an image." operationId: "ImageHistory" produces: ["application/json"] responses: 200: description: "List of image layers" schema: type: "array" items: type: "object" x-go-name: HistoryResponseItem title: "HistoryResponseItem" description: "individual image layer information in response to ImageHistory operation" required: [Id, Created, CreatedBy, Tags, Size, Comment] properties: Id: type: "string" x-nullable: false Created: type: "integer" format: "int64" x-nullable: false CreatedBy: type: "string" x-nullable: false Tags: type: "array" items: type: "string" Size: type: "integer" format: "int64" x-nullable: false Comment: type: "string" x-nullable: false examples: application/json: - Id: "3db9c44f45209632d6050b35958829c3a2aa256d81b9a7be45b362ff85c54710" Created: 1398108230 CreatedBy: "/bin/sh -c #(nop) ADD file:eb15dbd63394e063b805a3c32ca7bf0266ef64676d5a6fab4801f2e81e2a5148 in /" Tags: - "ubuntu:lucid" - "ubuntu:10.04" Size: 182964289 Comment: "" - Id: "6cfa4d1f33fb861d4d114f43b25abd0ac737509268065cdfd69d544a59c85ab8" Created: 1398108222 CreatedBy: "/bin/sh -c #(nop) MAINTAINER Tianon Gravi <[email protected]> - mkimage-debootstrap.sh -i iproute,iputils-ping,ubuntu-minimal -t lucid.tar.xz lucid http://archive.ubuntu.com/ubuntu/" Tags: [] Size: 0 Comment: "" - Id: "511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158" Created: 1371157430 CreatedBy: "" Tags: - "scratch12:latest" - "scratch:latest" Size: 0 Comment: "Imported from -" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID" type: "string" required: true tags: ["Image"] /images/{name}/push: post: summary: "Push an image" description: | Push an image to a registry. If you wish to push an image on to a private registry, that image must already have a tag which references the registry. For example, `registry.example.com/myimage:latest`. The push is cancelled if the HTTP connection is closed. operationId: "ImagePush" consumes: - "application/octet-stream" responses: 200: description: "No error" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID." type: "string" required: true - name: "tag" in: "query" description: "The tag to associate with the image on the registry." type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration. Refer to the [authentication section](#section/Authentication) for details. type: "string" required: true tags: ["Image"] /images/{name}/tag: post: summary: "Tag an image" description: "Tag an image so that it becomes part of a repository." operationId: "ImageTag" responses: 201: description: "No error" 400: description: "Bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Conflict" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID to tag." type: "string" required: true - name: "repo" in: "query" description: "The repository to tag in. For example, `someuser/someimage`." 
type: "string" - name: "tag" in: "query" description: "The name of the new tag." type: "string" tags: ["Image"] /images/{name}: delete: summary: "Remove an image" description: | Remove an image, along with any untagged parent images that were referenced by that image. Images can't be removed if they have descendant images, are being used by a running container or are being used by a build. operationId: "ImageDelete" produces: ["application/json"] responses: 200: description: "The image was deleted successfully" schema: type: "array" items: $ref: "#/definitions/ImageDeleteResponseItem" examples: application/json: - Untagged: "3e2f21a89f" - Deleted: "3e2f21a89f" - Deleted: "53b4f83ac9" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Conflict" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID" type: "string" required: true - name: "force" in: "query" description: "Remove the image even if it is being used by stopped containers or has other tags" type: "boolean" default: false - name: "noprune" in: "query" description: "Do not delete untagged parent images" type: "boolean" default: false tags: ["Image"] /images/search: get: summary: "Search images" description: "Search for an image on Docker Hub." operationId: "ImageSearch" produces: - "application/json" responses: 200: description: "No error" schema: type: "array" items: type: "object" title: "ImageSearchResponseItem" properties: description: type: "string" is_official: type: "boolean" is_automated: type: "boolean" name: type: "string" star_count: type: "integer" examples: application/json: - description: "" is_official: false is_automated: false name: "wma55/u1210sshd" star_count: 0 - description: "" is_official: false is_automated: false name: "jdswinbank/sshd" star_count: 0 - description: "" is_official: false is_automated: false name: "vgauthier/sshd" star_count: 0 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "term" in: "query" description: "Term to search" type: "string" required: true - name: "limit" in: "query" description: "Maximum number of results to return" type: "integer" - name: "filters" in: "query" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the images list. Available filters: - `is-automated=(true|false)` - `is-official=(true|false)` - `stars=<number>` Matches images that has at least 'number' stars. type: "string" tags: ["Image"] /images/prune: post: summary: "Delete unused images" produces: - "application/json" operationId: "ImagePrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `dangling=<boolean>` When set to `true` (or `1`), prune only unused *and* untagged images. When set to `false` (or `0`), all unused images are pruned. - `until=<string>` Prune images created before this timestamp. The `<timestamp>` can be Unix timestamps, date formatted timestamps, or Go duration strings (e.g. `10m`, `1h30m`) computed relative to the daemon machine’s time. - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune images with (or without, in case `label!=...` is used) the specified labels. 
type: "string" responses: 200: description: "No error" schema: type: "object" title: "ImagePruneResponse" properties: ImagesDeleted: description: "Images that were deleted" type: "array" items: $ref: "#/definitions/ImageDeleteResponseItem" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Image"] /auth: post: summary: "Check auth configuration" description: | Validate credentials for a registry and, if available, get an identity token for accessing the registry without password. operationId: "SystemAuth" consumes: ["application/json"] produces: ["application/json"] responses: 200: description: "An identity token was generated successfully." schema: type: "object" title: "SystemAuthResponse" required: [Status] properties: Status: description: "The status of the authentication" type: "string" x-nullable: false IdentityToken: description: "An opaque token used to authenticate a user after a successful login" type: "string" x-nullable: false examples: application/json: Status: "Login Succeeded" IdentityToken: "9cbaf023786cd7..." 204: description: "No error" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "authConfig" in: "body" description: "Authentication to check" schema: $ref: "#/definitions/AuthConfig" tags: ["System"] /info: get: summary: "Get system information" operationId: "SystemInfo" produces: - "application/json" responses: 200: description: "No error" schema: $ref: "#/definitions/SystemInfo" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /version: get: summary: "Get version" description: "Returns the version of Docker that is running and various information about the system that Docker is running on." operationId: "SystemVersion" produces: ["application/json"] responses: 200: description: "no error" schema: $ref: "#/definitions/SystemVersion" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /_ping: get: summary: "Ping" description: "This is a dummy endpoint you can use to test if the server is accessible." operationId: "SystemPing" produces: ["text/plain"] responses: 200: description: "no error" schema: type: "string" example: "OK" headers: API-Version: type: "string" description: "Max API Version the server supports" Builder-Version: type: "string" description: "Default version of docker image builder" Docker-Experimental: type: "boolean" description: "If the server is running with experimental mode enabled" Cache-Control: type: "string" default: "no-cache, no-store, must-revalidate" Pragma: type: "string" default: "no-cache" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" headers: Cache-Control: type: "string" default: "no-cache, no-store, must-revalidate" Pragma: type: "string" default: "no-cache" tags: ["System"] head: summary: "Ping" description: "This is a dummy endpoint you can use to test if the server is accessible." 
operationId: "SystemPingHead" produces: ["text/plain"] responses: 200: description: "no error" schema: type: "string" example: "(empty)" headers: API-Version: type: "string" description: "Max API Version the server supports" Builder-Version: type: "string" description: "Default version of docker image builder" Docker-Experimental: type: "boolean" description: "If the server is running with experimental mode enabled" Cache-Control: type: "string" default: "no-cache, no-store, must-revalidate" Pragma: type: "string" default: "no-cache" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /commit: post: summary: "Create a new image from a container" operationId: "ImageCommit" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "containerConfig" in: "body" description: "The container configuration" schema: $ref: "#/definitions/ContainerConfig" - name: "container" in: "query" description: "The ID or name of the container to commit" type: "string" - name: "repo" in: "query" description: "Repository name for the created image" type: "string" - name: "tag" in: "query" description: "Tag name for the create image" type: "string" - name: "comment" in: "query" description: "Commit message" type: "string" - name: "author" in: "query" description: "Author of the image (e.g., `John Hannibal Smith <[email protected]>`)" type: "string" - name: "pause" in: "query" description: "Whether to pause the container before committing" type: "boolean" default: true - name: "changes" in: "query" description: "`Dockerfile` instructions to apply while committing" type: "string" tags: ["Image"] /events: get: summary: "Monitor events" description: | Stream real-time events from the server. Various objects within Docker report events when something happens to them. 
Containers report these events: `attach`, `commit`, `copy`, `create`, `destroy`, `detach`, `die`, `exec_create`, `exec_detach`, `exec_start`, `exec_die`, `export`, `health_status`, `kill`, `oom`, `pause`, `rename`, `resize`, `restart`, `start`, `stop`, `top`, `unpause`, `update`, and `prune` Images report these events: `delete`, `import`, `load`, `pull`, `push`, `save`, `tag`, `untag`, and `prune` Volumes report these events: `create`, `mount`, `unmount`, `destroy`, and `prune` Networks report these events: `create`, `connect`, `disconnect`, `destroy`, `update`, `remove`, and `prune` The Docker daemon reports these events: `reload` Services report these events: `create`, `update`, and `remove` Nodes report these events: `create`, `update`, and `remove` Secrets report these events: `create`, `update`, and `remove` Configs report these events: `create`, `update`, and `remove` The Builder reports `prune` events operationId: "SystemEvents" produces: - "application/json" responses: 200: description: "no error" schema: type: "object" title: "SystemEventsResponse" properties: Type: description: "The type of object emitting the event" type: "string" Action: description: "The type of event" type: "string" Actor: type: "object" properties: ID: description: "The ID of the object emitting the event" type: "string" Attributes: description: "Various key/value attributes of the object, depending on its type" type: "object" additionalProperties: type: "string" time: description: "Timestamp of event" type: "integer" timeNano: description: "Timestamp of event, with nanosecond accuracy" type: "integer" format: "int64" examples: application/json: Type: "container" Action: "create" Actor: ID: "ede54ee1afda366ab42f824e8a5ffd195155d853ceaec74a927f249ea270c743" Attributes: com.example.some-label: "some-label-value" image: "alpine" name: "my-container" time: 1461943101 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "since" in: "query" description: "Show events created since this timestamp then stream new events." type: "string" - name: "until" in: "query" description: "Show events created until this timestamp then stop streaming." type: "string" - name: "filters" in: "query" description: | A JSON encoded value of filters (a `map[string][]string`) to process on the event list. 
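Editor's illustrative aside (not part of the OpenAPI document): a sketch of streaming `GET /events` with the Go client, filtered to container `die` events. The two returned channels deliver decoded events and a terminal error; filter names match the list above.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/api/types/filters"
	"github.com/docker/docker/client"
)

func main() {
	ctx := context.Background()
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}

	// GET /events?filters={"type":["container"],"event":["die"]}
	msgs, errs := cli.Events(ctx, types.EventsOptions{
		Filters: filters.NewArgs(
			filters.Arg("type", "container"),
			filters.Arg("event", "die"),
		),
	})
	for {
		select {
		case m := <-msgs:
			fmt.Printf("%s %s %s %v\n", m.Type, m.Action, m.Actor.ID, m.Actor.Attributes)
		case err := <-errs:
			// The stream ends with an error (io.EOF on a clean daemon shutdown).
			log.Fatal(err)
		}
	}
}
```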
Available filters: - `config=<string>` config name or ID - `container=<string>` container name or ID - `daemon=<string>` daemon name or ID - `event=<string>` event type - `image=<string>` image name or ID - `label=<string>` image or container label - `network=<string>` network name or ID - `node=<string>` node ID - `plugin`=<string> plugin name or ID - `scope`=<string> local or swarm - `secret=<string>` secret name or ID - `service=<string>` service name or ID - `type=<string>` object to filter by, one of `container`, `image`, `volume`, `network`, `daemon`, `plugin`, `node`, `service`, `secret` or `config` - `volume=<string>` volume name type: "string" tags: ["System"] /system/df: get: summary: "Get data usage information" operationId: "SystemDataUsage" responses: 200: description: "no error" schema: type: "object" title: "SystemDataUsageResponse" properties: LayersSize: type: "integer" format: "int64" Images: type: "array" items: $ref: "#/definitions/ImageSummary" Containers: type: "array" items: $ref: "#/definitions/ContainerSummary" Volumes: type: "array" items: $ref: "#/definitions/Volume" BuildCache: type: "array" items: $ref: "#/definitions/BuildCache" example: LayersSize: 1092588 Images: - Id: "sha256:2b8fd9751c4c0f5dd266fcae00707e67a2545ef34f9a29354585f93dac906749" ParentId: "" RepoTags: - "busybox:latest" RepoDigests: - "busybox@sha256:a59906e33509d14c036c8678d687bd4eec81ed7c4b8ce907b888c607f6a1e0e6" Created: 1466724217 Size: 1092588 SharedSize: 0 VirtualSize: 1092588 Labels: {} Containers: 1 Containers: - Id: "e575172ed11dc01bfce087fb27bee502db149e1a0fad7c296ad300bbff178148" Names: - "/top" Image: "busybox" ImageID: "sha256:2b8fd9751c4c0f5dd266fcae00707e67a2545ef34f9a29354585f93dac906749" Command: "top" Created: 1472592424 Ports: [] SizeRootFs: 1092588 Labels: {} State: "exited" Status: "Exited (0) 56 minutes ago" HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: IPAMConfig: null Links: null Aliases: null NetworkID: "d687bc59335f0e5c9ee8193e5612e8aee000c8c62ea170cfb99c098f95899d92" EndpointID: "8ed5115aeaad9abb174f68dcf135b49f11daf597678315231a32ca28441dec6a" Gateway: "172.18.0.1" IPAddress: "172.18.0.2" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:12:00:02" Mounts: [] Volumes: - Name: "my-volume" Driver: "local" Mountpoint: "/var/lib/docker/volumes/my-volume/_data" Labels: null Scope: "local" Options: null UsageData: Size: 10920104 RefCount: 2 BuildCache: - ID: "hw53o5aio51xtltp5xjp8v7fx" Parent: "" Type: "regular" Description: "pulled from docker.io/library/debian@sha256:234cb88d3020898631af0ccbbcca9a66ae7306ecd30c9720690858c1b007d2a0" InUse: false Shared: true Size: 0 CreatedAt: "2021-06-28T13:31:01.474619385Z" LastUsedAt: "2021-07-07T22:02:32.738075951Z" UsageCount: 26 - ID: "ndlpt0hhvkqcdfkputsk4cq9c" Parent: "hw53o5aio51xtltp5xjp8v7fx" Type: "regular" Description: "mount / from exec /bin/sh -c echo 'Binary::apt::APT::Keep-Downloaded-Packages \"true\";' > /etc/apt/apt.conf.d/keep-cache" InUse: false Shared: true Size: 51 CreatedAt: "2021-06-28T13:31:03.002625487Z" LastUsedAt: "2021-07-07T22:02:32.773909517Z" UsageCount: 26 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "type" in: "query" description: | Object types, for which to compute and return data. 
type: "array" collectionFormat: multi items: type: "string" enum: ["container", "image", "volume", "build-cache"] tags: ["System"] /images/{name}/get: get: summary: "Export an image" description: | Get a tarball containing all images and metadata for a repository. If `name` is a specific name and tag (e.g. `ubuntu:latest`), then only that image (and its parents) are returned. If `name` is an image ID, similarly only that image (and its parents) are returned, but with the exclusion of the `repositories` file in the tarball, as there were no image names referenced. ### Image tarball format An image tarball contains one directory per image layer (named using its long ID), each containing these files: - `VERSION`: currently `1.0` - the file format version - `json`: detailed layer information, similar to `docker inspect layer_id` - `layer.tar`: A tarfile containing the filesystem changes in this layer The `layer.tar` file contains `aufs` style `.wh..wh.aufs` files and directories for storing attribute changes and deletions. If the tarball defines a repository, the tarball should also include a `repositories` file at the root that contains a list of repository and tag names mapped to layer IDs. ```json { "hello-world": { "latest": "565a9d68a73f6706862bfe8409a7f659776d4d60a8d096eb4a3cbce6999cc2a1" } } ``` operationId: "ImageGet" produces: - "application/x-tar" responses: 200: description: "no error" schema: type: "string" format: "binary" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID" type: "string" required: true tags: ["Image"] /images/get: get: summary: "Export several images" description: | Get a tarball containing all images and metadata for several image repositories. For each value of the `names` parameter: if it is a specific name and tag (e.g. `ubuntu:latest`), then only that image (and its parents) are returned; if it is an image ID, similarly only that image (and its parents) are returned and there would be no names referenced in the 'repositories' file for this image ID. For details on the format, see the [export image endpoint](#operation/ImageGet). operationId: "ImageGetAll" produces: - "application/x-tar" responses: 200: description: "no error" schema: type: "string" format: "binary" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "names" in: "query" description: "Image names to filter by" type: "array" items: type: "string" tags: ["Image"] /images/load: post: summary: "Import images" description: | Load a set of images and tags into a repository. For details on the format, see the [export image endpoint](#operation/ImageGet). operationId: "ImageLoad" consumes: - "application/x-tar" produces: - "application/json" responses: 200: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "imagesTarball" in: "body" description: "Tar archive containing images" schema: type: "string" format: "binary" - name: "quiet" in: "query" description: "Suppress progress details during load." type: "boolean" default: false tags: ["Image"] /containers/{id}/exec: post: summary: "Create an exec instance" description: "Run a command inside a running container." 
operationId: "ContainerExec" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "container is paused" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "execConfig" in: "body" description: "Exec configuration" schema: type: "object" properties: AttachStdin: type: "boolean" description: "Attach to `stdin` of the exec command." AttachStdout: type: "boolean" description: "Attach to `stdout` of the exec command." AttachStderr: type: "boolean" description: "Attach to `stderr` of the exec command." DetachKeys: type: "string" description: | Override the key sequence for detaching a container. Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,` or `_`. Tty: type: "boolean" description: "Allocate a pseudo-TTY." Env: description: | A list of environment variables in the form `["VAR=value", ...]`. type: "array" items: type: "string" Cmd: type: "array" description: "Command to run, as a string or array of strings." items: type: "string" Privileged: type: "boolean" description: "Runs the exec process with extended privileges." default: false User: type: "string" description: | The user, and optionally, group to run the exec process inside the container. Format is one of: `user`, `user:group`, `uid`, or `uid:gid`. WorkingDir: type: "string" description: | The working directory for the exec process inside the container. example: AttachStdin: false AttachStdout: true AttachStderr: true DetachKeys: "ctrl-p,ctrl-q" Tty: false Cmd: - "date" Env: - "FOO=bar" - "BAZ=quux" required: true - name: "id" in: "path" description: "ID or name of container" type: "string" required: true tags: ["Exec"] /exec/{id}/start: post: summary: "Start an exec instance" description: | Starts a previously set up exec instance. If detach is true, this endpoint returns immediately after starting the command. Otherwise, it sets up an interactive session with the command. operationId: "ExecStart" consumes: - "application/json" produces: - "application/vnd.docker.raw-stream" responses: 200: description: "No error" 404: description: "No such exec instance" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Container is stopped or paused" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "execStartConfig" in: "body" schema: type: "object" properties: Detach: type: "boolean" description: "Detach from the command." Tty: type: "boolean" description: "Allocate a pseudo-TTY." example: Detach: false Tty: false - name: "id" in: "path" description: "Exec instance ID" required: true type: "string" tags: ["Exec"] /exec/{id}/resize: post: summary: "Resize an exec instance" description: | Resize the TTY session used by an exec instance. This endpoint only works if `tty` was specified as part of creating and starting the exec instance. 
operationId: "ExecResize" responses: 201: description: "No error" 404: description: "No such exec instance" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Exec instance ID" required: true type: "string" - name: "h" in: "query" description: "Height of the TTY session in characters" type: "integer" - name: "w" in: "query" description: "Width of the TTY session in characters" type: "integer" tags: ["Exec"] /exec/{id}/json: get: summary: "Inspect an exec instance" description: "Return low-level information about an exec instance." operationId: "ExecInspect" produces: - "application/json" responses: 200: description: "No error" schema: type: "object" title: "ExecInspectResponse" properties: CanRemove: type: "boolean" DetachKeys: type: "string" ID: type: "string" Running: type: "boolean" ExitCode: type: "integer" ProcessConfig: $ref: "#/definitions/ProcessConfig" OpenStdin: type: "boolean" OpenStderr: type: "boolean" OpenStdout: type: "boolean" ContainerID: type: "string" Pid: type: "integer" description: "The system process ID for the exec process." examples: application/json: CanRemove: false ContainerID: "b53ee82b53a40c7dca428523e34f741f3abc51d9f297a14ff874bf761b995126" DetachKeys: "" ExitCode: 2 ID: "f33bbfb39f5b142420f4759b2348913bd4a8d1a6d7fd56499cb41a1bb91d7b3b" OpenStderr: true OpenStdin: true OpenStdout: true ProcessConfig: arguments: - "-c" - "exit 2" entrypoint: "sh" privileged: false tty: true user: "1000" Running: false Pid: 42000 404: description: "No such exec instance" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Exec instance ID" required: true type: "string" tags: ["Exec"] /volumes: get: summary: "List volumes" operationId: "VolumeList" produces: ["application/json"] responses: 200: description: "Summary volume data that matches the query" schema: type: "object" title: "VolumeListResponse" description: "Volume list response" required: [Volumes, Warnings] properties: Volumes: type: "array" x-nullable: false description: "List of volumes" items: $ref: "#/definitions/Volume" Warnings: type: "array" x-nullable: false description: | Warnings that occurred when fetching the list of volumes. items: type: "string" examples: application/json: Volumes: - CreatedAt: "2017-07-19T12:00:26Z" Name: "tardis" Driver: "local" Mountpoint: "/var/lib/docker/volumes/tardis" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Scope: "local" Options: device: "tmpfs" o: "size=100m,uid=1000" type: "tmpfs" Warnings: [] 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" description: | JSON encoded value of the filters (a `map[string][]string`) to process on the volumes list. Available filters: - `dangling=<boolean>` When set to `true` (or `1`), returns all volumes that are not in use by a container. When set to `false` (or `0`), only volumes that are in use by one or more containers are returned. - `driver=<volume-driver-name>` Matches volumes based on their driver. - `label=<key>` or `label=<key>:<value>` Matches volumes based on the presence of a `label` alone or a `label` and a value. - `name=<volume-name>` Matches all or part of a volume name. 
type: "string" format: "json" tags: ["Volume"] /volumes/create: post: summary: "Create a volume" operationId: "VolumeCreate" consumes: ["application/json"] produces: ["application/json"] responses: 201: description: "The volume was created successfully" schema: $ref: "#/definitions/Volume" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "volumeConfig" in: "body" required: true description: "Volume configuration" schema: type: "object" description: "Volume configuration" title: "VolumeConfig" properties: Name: description: | The new volume's name. If not specified, Docker generates a name. type: "string" x-nullable: false Driver: description: "Name of the volume driver to use." type: "string" default: "local" x-nullable: false DriverOpts: description: | A mapping of driver options and values. These options are passed directly to the driver and are driver specific. type: "object" additionalProperties: type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" example: Name: "tardis" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Driver: "custom" tags: ["Volume"] /volumes/{name}: get: summary: "Inspect a volume" operationId: "VolumeInspect" produces: ["application/json"] responses: 200: description: "No error" schema: $ref: "#/definitions/Volume" 404: description: "No such volume" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" required: true description: "Volume name or ID" type: "string" tags: ["Volume"] delete: summary: "Remove a volume" description: "Instruct the driver to remove the volume." operationId: "VolumeDelete" responses: 204: description: "The volume was removed" 404: description: "No such volume or volume driver" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Volume is in use and cannot be removed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" required: true description: "Volume name or ID" type: "string" - name: "force" in: "query" description: "Force the removal of the volume" type: "boolean" default: false tags: ["Volume"] /volumes/prune: post: summary: "Delete unused volumes" produces: - "application/json" operationId: "VolumePrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune volumes with (or without, in case `label!=...` is used) the specified labels. type: "string" responses: 200: description: "No error" schema: type: "object" title: "VolumePruneResponse" properties: VolumesDeleted: description: "Volumes that were deleted" type: "array" items: type: "string" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Volume"] /networks: get: summary: "List networks" description: | Returns a list of networks. For details on the format, see the [network inspect endpoint](#operation/NetworkInspect). Note that it uses a different, smaller representation of a network than inspecting a single network. 
For example, the list of containers attached to the network is not propagated in API versions 1.28 and up. operationId: "NetworkList" produces: - "application/json" responses: 200: description: "No error" schema: type: "array" items: $ref: "#/definitions/Network" examples: application/json: - Name: "bridge" Id: "f2de39df4171b0dc801e8002d1d999b77256983dfc63041c0f34030aa3977566" Created: "2016-10-19T06:21:00.416543526Z" Scope: "local" Driver: "bridge" EnableIPv6: false Internal: false Attachable: false Ingress: false IPAM: Driver: "default" Config: - Subnet: "172.17.0.0/16" Options: com.docker.network.bridge.default_bridge: "true" com.docker.network.bridge.enable_icc: "true" com.docker.network.bridge.enable_ip_masquerade: "true" com.docker.network.bridge.host_binding_ipv4: "0.0.0.0" com.docker.network.bridge.name: "docker0" com.docker.network.driver.mtu: "1500" - Name: "none" Id: "e086a3893b05ab69242d3c44e49483a3bbbd3a26b46baa8f61ab797c1088d794" Created: "0001-01-01T00:00:00Z" Scope: "local" Driver: "null" EnableIPv6: false Internal: false Attachable: false Ingress: false IPAM: Driver: "default" Config: [] Containers: {} Options: {} - Name: "host" Id: "13e871235c677f196c4e1ecebb9dc733b9b2d2ab589e30c539efeda84a24215e" Created: "0001-01-01T00:00:00Z" Scope: "local" Driver: "host" EnableIPv6: false Internal: false Attachable: false Ingress: false IPAM: Driver: "default" Config: [] Containers: {} Options: {} 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" description: | JSON encoded value of the filters (a `map[string][]string`) to process on the networks list. Available filters: - `dangling=<boolean>` When set to `true` (or `1`), returns all networks that are not in use by a container. When set to `false` (or `0`), only networks that are in use by one or more containers are returned. - `driver=<driver-name>` Matches a network's driver. - `id=<network-id>` Matches all or part of a network ID. - `label=<key>` or `label=<key>=<value>` of a network label. - `name=<network-name>` Matches all or part of a network name. - `scope=["swarm"|"global"|"local"]` Filters networks by scope (`swarm`, `global`, or `local`). - `type=["custom"|"builtin"]` Filters networks by type. The `custom` keyword returns all user-defined networks. 
type: "string" tags: ["Network"] /networks/{id}: get: summary: "Inspect a network" operationId: "NetworkInspect" produces: - "application/json" responses: 200: description: "No error" schema: $ref: "#/definitions/Network" 404: description: "Network not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" - name: "verbose" in: "query" description: "Detailed inspect output for troubleshooting" type: "boolean" default: false - name: "scope" in: "query" description: "Filter the network by scope (swarm, global, or local)" type: "string" tags: ["Network"] delete: summary: "Remove a network" operationId: "NetworkDelete" responses: 204: description: "No error" 403: description: "operation not supported for pre-defined networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such network" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" tags: ["Network"] /networks/create: post: summary: "Create a network" operationId: "NetworkCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "No error" schema: type: "object" title: "NetworkCreateResponse" properties: Id: description: "The ID of the created network." type: "string" Warning: type: "string" example: Id: "22be93d5babb089c5aab8dbc369042fad48ff791584ca2da2100db837a1c7c30" Warning: "" 403: description: "operation not supported for pre-defined networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "plugin not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "networkConfig" in: "body" description: "Network configuration" required: true schema: type: "object" required: ["Name"] properties: Name: description: "The network's name." type: "string" CheckDuplicate: description: | Check for networks with duplicate names. Since Network is primarily keyed based on a random ID and not on the name, and network name is strictly a user-friendly alias to the network which is uniquely identified using ID, there is no guaranteed way to check for duplicates. CheckDuplicate is there to provide a best effort checking of any networks which has the same name but it is not guaranteed to catch all name collisions. type: "boolean" Driver: description: "Name of the network driver plugin to use." type: "string" default: "bridge" Internal: description: "Restrict external access to the network." type: "boolean" Attachable: description: | Globally scoped network is manually attachable by regular containers from workers in swarm mode. type: "boolean" Ingress: description: | Ingress network is the network which provides the routing-mesh in swarm mode. type: "boolean" IPAM: description: "Optional custom IP scheme for the network." $ref: "#/definitions/IPAM" EnableIPv6: description: "Enable IPv6 on the network." type: "boolean" Options: description: "Network specific options to be used by the drivers." type: "object" additionalProperties: type: "string" Labels: description: "User-defined key/value metadata." 
type: "object" additionalProperties: type: "string" example: Name: "isolated_nw" CheckDuplicate: false Driver: "bridge" EnableIPv6: true IPAM: Driver: "default" Config: - Subnet: "172.20.0.0/16" IPRange: "172.20.10.0/24" Gateway: "172.20.10.11" - Subnet: "2001:db8:abcd::/64" Gateway: "2001:db8:abcd::1011" Options: foo: "bar" Internal: true Attachable: false Ingress: false Options: com.docker.network.bridge.default_bridge: "true" com.docker.network.bridge.enable_icc: "true" com.docker.network.bridge.enable_ip_masquerade: "true" com.docker.network.bridge.host_binding_ipv4: "0.0.0.0" com.docker.network.bridge.name: "docker0" com.docker.network.driver.mtu: "1500" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" tags: ["Network"] /networks/{id}/connect: post: summary: "Connect a container to a network" operationId: "NetworkConnect" consumes: - "application/json" responses: 200: description: "No error" 403: description: "Operation not supported for swarm scoped networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "Network or container not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" - name: "container" in: "body" required: true schema: type: "object" properties: Container: type: "string" description: "The ID or name of the container to connect to the network." EndpointConfig: $ref: "#/definitions/EndpointSettings" example: Container: "3613f73ba0e4" EndpointConfig: IPAMConfig: IPv4Address: "172.24.56.89" IPv6Address: "2001:db8::5689" tags: ["Network"] /networks/{id}/disconnect: post: summary: "Disconnect a container from a network" operationId: "NetworkDisconnect" consumes: - "application/json" responses: 200: description: "No error" 403: description: "Operation not supported for swarm scoped networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "Network or container not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" - name: "container" in: "body" required: true schema: type: "object" properties: Container: type: "string" description: | The ID or name of the container to disconnect from the network. Force: type: "boolean" description: | Force the container to disconnect from the network. tags: ["Network"] /networks/prune: post: summary: "Delete unused networks" produces: - "application/json" operationId: "NetworkPrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `until=<timestamp>` Prune networks created before this timestamp. The `<timestamp>` can be Unix timestamps, date formatted timestamps, or Go duration strings (e.g. `10m`, `1h30m`) computed relative to the daemon machine’s time. - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune networks with (or without, in case `label!=...` is used) the specified labels. 
type: "string" responses: 200: description: "No error" schema: type: "object" title: "NetworkPruneResponse" properties: NetworksDeleted: description: "Networks that were deleted" type: "array" items: type: "string" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Network"] /plugins: get: summary: "List plugins" operationId: "PluginList" description: "Returns information about installed plugins." produces: ["application/json"] responses: 200: description: "No error" schema: type: "array" items: $ref: "#/definitions/Plugin" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the plugin list. Available filters: - `capability=<capability name>` - `enable=<true>|<false>` tags: ["Plugin"] /plugins/privileges: get: summary: "Get plugin privileges" operationId: "GetPluginPrivileges" responses: 200: description: "no error" schema: type: "array" items: description: | Describes a permission the user has to accept upon installing the plugin. type: "object" title: "PluginPrivilegeItem" properties: Name: type: "string" Description: type: "string" Value: type: "array" items: type: "string" example: - Name: "network" Description: "" Value: - "host" - Name: "mount" Description: "" Value: - "/data" - Name: "device" Description: "" Value: - "/dev/cpu_dma_latency" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "remote" in: "query" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" tags: - "Plugin" /plugins/pull: post: summary: "Install a plugin" operationId: "PluginPull" description: | Pulls and installs a plugin. After the plugin is installed, it can be enabled using the [`POST /plugins/{name}/enable` endpoint](#operation/PostPluginsEnable). produces: - "application/json" responses: 204: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "remote" in: "query" description: | Remote reference for plugin to install. The `:latest` tag is optional, and is used as the default if omitted. required: true type: "string" - name: "name" in: "query" description: | Local name for the pulled plugin. The `:latest` tag is optional, and is used as the default if omitted. required: false type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration to use when pulling a plugin from a registry. Refer to the [authentication section](#section/Authentication) for details. type: "string" - name: "body" in: "body" schema: type: "array" items: description: | Describes a permission accepted by the user upon installing the plugin. 
type: "object" properties: Name: type: "string" Description: type: "string" Value: type: "array" items: type: "string" example: - Name: "network" Description: "" Value: - "host" - Name: "mount" Description: "" Value: - "/data" - Name: "device" Description: "" Value: - "/dev/cpu_dma_latency" tags: ["Plugin"] /plugins/{name}/json: get: summary: "Inspect a plugin" operationId: "PluginInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Plugin" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" tags: ["Plugin"] /plugins/{name}: delete: summary: "Remove a plugin" operationId: "PluginDelete" responses: 200: description: "no error" schema: $ref: "#/definitions/Plugin" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "force" in: "query" description: | Disable the plugin before removing. This may result in issues if the plugin is in use by a container. type: "boolean" default: false tags: ["Plugin"] /plugins/{name}/enable: post: summary: "Enable a plugin" operationId: "PluginEnable" responses: 200: description: "no error" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "timeout" in: "query" description: "Set the HTTP client timeout (in seconds)" type: "integer" default: 0 tags: ["Plugin"] /plugins/{name}/disable: post: summary: "Disable a plugin" operationId: "PluginDisable" responses: 200: description: "no error" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" tags: ["Plugin"] /plugins/{name}/upgrade: post: summary: "Upgrade a plugin" operationId: "PluginUpgrade" responses: 204: description: "no error" 404: description: "plugin not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "remote" in: "query" description: | Remote reference to upgrade to. The `:latest` tag is optional, and is used as the default if omitted. required: true type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration to use when pulling a plugin from a registry. Refer to the [authentication section](#section/Authentication) for details. 
type: "string" - name: "body" in: "body" schema: type: "array" items: description: | Describes a permission accepted by the user upon installing the plugin. type: "object" properties: Name: type: "string" Description: type: "string" Value: type: "array" items: type: "string" example: - Name: "network" Description: "" Value: - "host" - Name: "mount" Description: "" Value: - "/data" - Name: "device" Description: "" Value: - "/dev/cpu_dma_latency" tags: ["Plugin"] /plugins/create: post: summary: "Create a plugin" operationId: "PluginCreate" consumes: - "application/x-tar" responses: 204: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "query" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "tarContext" in: "body" description: "Path to tar containing plugin rootfs and manifest" schema: type: "string" format: "binary" tags: ["Plugin"] /plugins/{name}/push: post: summary: "Push a plugin" operationId: "PluginPush" description: | Push a plugin to the registry. parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" responses: 200: description: "no error" 404: description: "plugin not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Plugin"] /plugins/{name}/set: post: summary: "Configure a plugin" operationId: "PluginSet" consumes: - "application/json" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "body" in: "body" schema: type: "array" items: type: "string" example: ["DEBUG=1"] responses: 204: description: "No error" 404: description: "Plugin not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Plugin"] /nodes: get: summary: "List nodes" operationId: "NodeList" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Node" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" description: | Filters to process on the nodes list, encoded as JSON (a `map[string][]string`). 
Available filters: - `id=<node id>` - `label=<engine label>` - `membership=`(`accepted`|`pending`)` - `name=<node name>` - `node.label=<node label>` - `role=`(`manager`|`worker`)` type: "string" tags: ["Node"] /nodes/{id}: get: summary: "Inspect a node" operationId: "NodeInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Node" 404: description: "no such node" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the node" type: "string" required: true tags: ["Node"] delete: summary: "Delete a node" operationId: "NodeDelete" responses: 200: description: "no error" 404: description: "no such node" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the node" type: "string" required: true - name: "force" in: "query" description: "Force remove a node from the swarm" default: false type: "boolean" tags: ["Node"] /nodes/{id}/update: post: summary: "Update a node" operationId: "NodeUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such node" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID of the node" type: "string" required: true - name: "body" in: "body" schema: $ref: "#/definitions/NodeSpec" - name: "version" in: "query" description: | The version number of the node object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true tags: ["Node"] /swarm: get: summary: "Inspect swarm" operationId: "SwarmInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Swarm" 404: description: "no such swarm" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" tags: ["Swarm"] /swarm/init: post: summary: "Initialize a new swarm" operationId: "SwarmInit" produces: - "application/json" - "text/plain" responses: 200: description: "no error" schema: description: "The node ID" type: "string" example: "7v2t30z9blmxuhnyo6s4cpenp" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is already part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: type: "object" properties: ListenAddr: description: | Listen address used for inter-manager communication, as well as determining the networking interface used for the VXLAN Tunnel Endpoint (VTEP). This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the default swarm listening port is used. 
type: "string" AdvertiseAddr: description: | Externally reachable address advertised to other nodes. This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the port number from the listen address is used. If `AdvertiseAddr` is not specified, it will be automatically detected when possible. type: "string" DataPathAddr: description: | Address or interface to use for data path traffic (format: `<ip|interface>`), for example, `192.168.1.1`, or an interface, like `eth0`. If `DataPathAddr` is unspecified, the same address as `AdvertiseAddr` is used. The `DataPathAddr` specifies the address that global scope network drivers will publish towards other nodes in order to reach the containers running on this node. Using this parameter it is possible to separate the container data traffic from the management traffic of the cluster. type: "string" DataPathPort: description: | DataPathPort specifies the data path port number for data traffic. Acceptable port range is 1024 to 49151. if no port is set or is set to 0, default port 4789 will be used. type: "integer" format: "uint32" DefaultAddrPool: description: | Default Address Pool specifies default subnet pools for global scope networks. type: "array" items: type: "string" example: ["10.10.0.0/16", "20.20.0.0/16"] ForceNewCluster: description: "Force creation of a new swarm." type: "boolean" SubnetSize: description: | SubnetSize specifies the subnet size of the networks created from the default subnet pool. type: "integer" format: "uint32" Spec: $ref: "#/definitions/SwarmSpec" example: ListenAddr: "0.0.0.0:2377" AdvertiseAddr: "192.168.1.1:2377" DataPathPort: 4789 DefaultAddrPool: ["10.10.0.0/8", "20.20.0.0/8"] SubnetSize: 24 ForceNewCluster: false Spec: Orchestration: {} Raft: {} Dispatcher: {} CAConfig: {} EncryptionConfig: AutoLockManagers: false tags: ["Swarm"] /swarm/join: post: summary: "Join an existing swarm" operationId: "SwarmJoin" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is already part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: type: "object" properties: ListenAddr: description: | Listen address used for inter-manager communication if the node gets promoted to manager, as well as determining the networking interface used for the VXLAN Tunnel Endpoint (VTEP). type: "string" AdvertiseAddr: description: | Externally reachable address advertised to other nodes. This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the port number from the listen address is used. If `AdvertiseAddr` is not specified, it will be automatically detected when possible. type: "string" DataPathAddr: description: | Address or interface to use for data path traffic (format: `<ip|interface>`), for example, `192.168.1.1`, or an interface, like `eth0`. If `DataPathAddr` is unspecified, the same address as `AdvertiseAddr` is used. The `DataPathAddr` specifies the address that global scope network drivers will publish towards other nodes in order to reach the containers running on this node. 
Using this parameter it is possible to separate the container data traffic from the management traffic of the cluster. type: "string" RemoteAddrs: description: | Addresses of manager nodes already participating in the swarm. type: "array" items: type: "string" JoinToken: description: "Secret token for joining this swarm." type: "string" example: ListenAddr: "0.0.0.0:2377" AdvertiseAddr: "192.168.1.1:2377" RemoteAddrs: - "node1:2377" JoinToken: "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-7p73s1dx5in4tatdymyhg9hu2" tags: ["Swarm"] /swarm/leave: post: summary: "Leave a swarm" operationId: "SwarmLeave" responses: 200: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "force" description: | Force leave swarm, even if this is the last manager or that it will break the cluster. in: "query" type: "boolean" default: false tags: ["Swarm"] /swarm/update: post: summary: "Update a swarm" operationId: "SwarmUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: $ref: "#/definitions/SwarmSpec" - name: "version" in: "query" description: | The version number of the swarm object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true - name: "rotateWorkerToken" in: "query" description: "Rotate the worker join token." type: "boolean" default: false - name: "rotateManagerToken" in: "query" description: "Rotate the manager join token." type: "boolean" default: false - name: "rotateManagerUnlockKey" in: "query" description: "Rotate the manager unlock key." type: "boolean" default: false tags: ["Swarm"] /swarm/unlockkey: get: summary: "Get the unlock key" operationId: "SwarmUnlockkey" consumes: - "application/json" responses: 200: description: "no error" schema: type: "object" title: "UnlockKeyResponse" properties: UnlockKey: description: "The swarm's unlock key." type: "string" example: UnlockKey: "SWMKEY-1-7c37Cc8654o6p38HnroywCi19pllOnGtbdZEgtKxZu8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" tags: ["Swarm"] /swarm/unlock: post: summary: "Unlock a locked manager" operationId: "SwarmUnlock" consumes: - "application/json" produces: - "application/json" parameters: - name: "body" in: "body" required: true schema: type: "object" properties: UnlockKey: description: "The swarm's unlock key." 
type: "string" example: UnlockKey: "SWMKEY-1-7c37Cc8654o6p38HnroywCi19pllOnGtbdZEgtKxZu8" responses: 200: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" tags: ["Swarm"] /services: get: summary: "List services" operationId: "ServiceList" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Service" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the services list. Available filters: - `id=<service id>` - `label=<service label>` - `mode=["replicated"|"global"]` - `name=<service name>` - name: "status" in: "query" type: "boolean" description: | Include service status, with count of running and desired tasks. tags: ["Service"] /services/create: post: summary: "Create a service" operationId: "ServiceCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: type: "object" title: "ServiceCreateResponse" properties: ID: description: "The ID of the created service." type: "string" Warning: description: "Optional warning message" type: "string" example: ID: "ak7w3gjqoa3kuz8xcpnyy0pvl" Warning: "unable to pin image doesnotexist:latest to digest: image library/doesnotexist:latest not found" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 403: description: "network is not eligible for services" schema: $ref: "#/definitions/ErrorResponse" 409: description: "name conflicts with an existing service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: allOf: - $ref: "#/definitions/ServiceSpec" - type: "object" example: Name: "web" TaskTemplate: ContainerSpec: Image: "nginx:alpine" Mounts: - ReadOnly: true Source: "web-data" Target: "/usr/share/nginx/html" Type: "volume" VolumeOptions: DriverConfig: {} Labels: com.example.something: "something-value" Hosts: ["10.10.10.10 host1", "ABCD:EF01:2345:6789:ABCD:EF01:2345:6789 host2"] User: "33" DNSConfig: Nameservers: ["8.8.8.8"] Search: ["example.org"] Options: ["timeout:3"] Secrets: - File: Name: "www.example.org.key" UID: "33" GID: "33" Mode: 384 SecretID: "fpjqlhnwb19zds35k8wn80lq9" SecretName: "example_org_domain_key" LogDriver: Name: "json-file" Options: max-file: "3" max-size: "10M" Placement: {} Resources: Limits: MemoryBytes: 104857600 Reservations: {} RestartPolicy: Condition: "on-failure" Delay: 10000000000 MaxAttempts: 10 Mode: Replicated: Replicas: 4 UpdateConfig: Parallelism: 2 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 RollbackConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 EndpointSpec: Ports: - Protocol: "tcp" PublishedPort: 8080 TargetPort: 80 Labels: foo: "bar" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration for pulling from private registries. Refer to the [authentication section](#section/Authentication) for details. 
type: "string" tags: ["Service"] /services/{id}: get: summary: "Inspect a service" operationId: "ServiceInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Service" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID or name of service." required: true type: "string" - name: "insertDefaults" in: "query" description: "Fill empty fields with default values." type: "boolean" default: false tags: ["Service"] delete: summary: "Delete a service" operationId: "ServiceDelete" responses: 200: description: "no error" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID or name of service." required: true type: "string" tags: ["Service"] /services/{id}/update: post: summary: "Update a service" operationId: "ServiceUpdate" consumes: ["application/json"] produces: ["application/json"] responses: 200: description: "no error" schema: $ref: "#/definitions/ServiceUpdateResponse" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID or name of service." required: true type: "string" - name: "body" in: "body" required: true schema: allOf: - $ref: "#/definitions/ServiceSpec" - type: "object" example: Name: "top" TaskTemplate: ContainerSpec: Image: "busybox" Args: - "top" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ForceUpdate: 0 Mode: Replicated: Replicas: 1 UpdateConfig: Parallelism: 2 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 RollbackConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 EndpointSpec: Mode: "vip" - name: "version" in: "query" description: | The version number of the service object being updated. This is required to avoid conflicting writes. This version number should be the value as currently set on the service *before* the update. You can find the current version by calling `GET /services/{id}` required: true type: "integer" - name: "registryAuthFrom" in: "query" description: | If the `X-Registry-Auth` header is not specified, this parameter indicates where to find registry authorization credentials. type: "string" enum: ["spec", "previous-spec"] default: "spec" - name: "rollback" in: "query" description: | Set to this parameter to `previous` to cause a server-side rollback to the previous service spec. The supplied spec will be ignored in this case. type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration for pulling from private registries. Refer to the [authentication section](#section/Authentication) for details. 
type: "string" tags: ["Service"] /services/{id}/logs: get: summary: "Get service logs" description: | Get `stdout` and `stderr` logs from a service. See also [`/containers/{id}/logs`](#operation/ContainerLogs). **Note**: This endpoint works only for services with the `local`, `json-file` or `journald` logging drivers. operationId: "ServiceLogs" responses: 200: description: "logs returned as a stream in response body" schema: type: "string" format: "binary" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such service: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the service" type: "string" - name: "details" in: "query" description: "Show service context and extra details provided to logs." type: "boolean" default: false - name: "follow" in: "query" description: "Keep connection after returning logs." type: "boolean" default: false - name: "stdout" in: "query" description: "Return logs from `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Return logs from `stderr`" type: "boolean" default: false - name: "since" in: "query" description: "Only return logs since this time, as a UNIX timestamp" type: "integer" default: 0 - name: "timestamps" in: "query" description: "Add timestamps to every log line" type: "boolean" default: false - name: "tail" in: "query" description: | Only return this number of log lines from the end of the logs. Specify as an integer or `all` to output all log lines. type: "string" default: "all" tags: ["Service"] /tasks: get: summary: "List tasks" operationId: "TaskList" produces: - "application/json" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Task" example: - ID: "0kzzo1i0y4jz6027t0k7aezc7" Version: Index: 71 CreatedAt: "2016-06-07T21:07:31.171892745Z" UpdatedAt: "2016-06-07T21:07:31.376370513Z" Spec: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ServiceID: "9mnpnzenvg8p8tdbtq4wvbkcz" Slot: 1 NodeID: "60gvrl6tm78dmak4yl7srz94v" Status: Timestamp: "2016-06-07T21:07:31.290032978Z" State: "running" Message: "started" ContainerStatus: ContainerID: "e5d62702a1b48d01c3e02ca1e0212a250801fa8d67caca0b6f35919ebc12f035" PID: 677 DesiredState: "running" NetworksAttachments: - Network: ID: "4qvuz4ko70xaltuqbt8956gd1" Version: Index: 18 CreatedAt: "2016-06-07T20:31:11.912919752Z" UpdatedAt: "2016-06-07T21:07:29.955277358Z" Spec: Name: "ingress" Labels: com.docker.swarm.internal: "true" DriverConfiguration: {} IPAMOptions: Driver: {} Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" DriverState: Name: "overlay" Options: com.docker.network.driver.overlay.vxlanid_list: "256" IPAMOptions: Driver: Name: "default" Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" Addresses: - "10.255.0.10/16" - ID: "1yljwbmlr8er2waf8orvqpwms" Version: Index: 30 CreatedAt: "2016-06-07T21:07:30.019104782Z" UpdatedAt: "2016-06-07T21:07:30.231958098Z" Name: "hopeful_cori" Spec: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ServiceID: "9mnpnzenvg8p8tdbtq4wvbkcz" Slot: 1 NodeID: "60gvrl6tm78dmak4yl7srz94v" Status: Timestamp: "2016-06-07T21:07:30.202183143Z" State: 
"shutdown" Message: "shutdown" ContainerStatus: ContainerID: "1cf8d63d18e79668b0004a4be4c6ee58cddfad2dae29506d8781581d0688a213" DesiredState: "shutdown" NetworksAttachments: - Network: ID: "4qvuz4ko70xaltuqbt8956gd1" Version: Index: 18 CreatedAt: "2016-06-07T20:31:11.912919752Z" UpdatedAt: "2016-06-07T21:07:29.955277358Z" Spec: Name: "ingress" Labels: com.docker.swarm.internal: "true" DriverConfiguration: {} IPAMOptions: Driver: {} Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" DriverState: Name: "overlay" Options: com.docker.network.driver.overlay.vxlanid_list: "256" IPAMOptions: Driver: Name: "default" Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" Addresses: - "10.255.0.5/16" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the tasks list. Available filters: - `desired-state=(running | shutdown | accepted)` - `id=<task id>` - `label=key` or `label="key=value"` - `name=<task name>` - `node=<node id or name>` - `service=<service name>` tags: ["Task"] /tasks/{id}: get: summary: "Inspect a task" operationId: "TaskInspect" produces: - "application/json" responses: 200: description: "no error" schema: $ref: "#/definitions/Task" 404: description: "no such task" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID of the task" required: true type: "string" tags: ["Task"] /tasks/{id}/logs: get: summary: "Get task logs" description: | Get `stdout` and `stderr` logs from a task. See also [`/containers/{id}/logs`](#operation/ContainerLogs). **Note**: This endpoint works only for services with the `local`, `json-file` or `journald` logging drivers. operationId: "TaskLogs" responses: 200: description: "logs returned as a stream in response body" schema: type: "string" format: "binary" 404: description: "no such task" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such task: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID of the task" type: "string" - name: "details" in: "query" description: "Show task context and extra details provided to logs." type: "boolean" default: false - name: "follow" in: "query" description: "Keep connection after returning logs." type: "boolean" default: false - name: "stdout" in: "query" description: "Return logs from `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Return logs from `stderr`" type: "boolean" default: false - name: "since" in: "query" description: "Only return logs since this time, as a UNIX timestamp" type: "integer" default: 0 - name: "timestamps" in: "query" description: "Add timestamps to every log line" type: "boolean" default: false - name: "tail" in: "query" description: | Only return this number of log lines from the end of the logs. Specify as an integer or `all` to output all log lines. 
type: "string" default: "all" tags: ["Task"] /secrets: get: summary: "List secrets" operationId: "SecretList" produces: - "application/json" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Secret" example: - ID: "blt1owaxmitz71s9v5zh81zun" Version: Index: 85 CreatedAt: "2017-07-20T13:55:28.678958722Z" UpdatedAt: "2017-07-20T13:55:28.678958722Z" Spec: Name: "mysql-passwd" Labels: some.label: "some.value" Driver: Name: "secret-bucket" Options: OptionA: "value for driver option A" OptionB: "value for driver option B" - ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "app-dev.crt" Labels: foo: "bar" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the secrets list. Available filters: - `id=<secret id>` - `label=<key> or label=<key>=value` - `name=<secret name>` - `names=<secret name>` tags: ["Secret"] /secrets/create: post: summary: "Create a secret" operationId: "SecretCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 409: description: "name conflicts with an existing object" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" schema: allOf: - $ref: "#/definitions/SecretSpec" - type: "object" example: Name: "app-key.crt" Labels: foo: "bar" Data: "VEhJUyBJUyBOT1QgQSBSRUFMIENFUlRJRklDQVRFCg==" Driver: Name: "secret-bucket" Options: OptionA: "value for driver option A" OptionB: "value for driver option B" tags: ["Secret"] /secrets/{id}: get: summary: "Inspect a secret" operationId: "SecretInspect" produces: - "application/json" responses: 200: description: "no error" schema: $ref: "#/definitions/Secret" examples: application/json: ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "app-dev.crt" Labels: foo: "bar" Driver: Name: "secret-bucket" Options: OptionA: "value for driver option A" OptionB: "value for driver option B" 404: description: "secret not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the secret" tags: ["Secret"] delete: summary: "Delete a secret" operationId: "SecretDelete" produces: - "application/json" responses: 204: description: "no error" 404: description: "secret not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the secret" tags: ["Secret"] /secrets/{id}/update: post: summary: "Update a Secret" operationId: "SecretUpdate" responses: 200: description: "no error" 400: 
description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such secret" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the secret" type: "string" required: true - name: "body" in: "body" schema: $ref: "#/definitions/SecretSpec" description: | The spec of the secret to update. Currently, only the Labels field can be updated. All other fields must remain unchanged from the [SecretInspect endpoint](#operation/SecretInspect) response values. - name: "version" in: "query" description: | The version number of the secret object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true tags: ["Secret"] /configs: get: summary: "List configs" operationId: "ConfigList" produces: - "application/json" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Config" example: - ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "server.conf" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the configs list. Available filters: - `id=<config id>` - `label=<key> or label=<key>=value` - `name=<config name>` - `names=<config name>` tags: ["Config"] /configs/create: post: summary: "Create a config" operationId: "ConfigCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 409: description: "name conflicts with an existing object" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" schema: allOf: - $ref: "#/definitions/ConfigSpec" - type: "object" example: Name: "server.conf" Labels: foo: "bar" Data: "VEhJUyBJUyBOT1QgQSBSRUFMIENFUlRJRklDQVRFCg==" tags: ["Config"] /configs/{id}: get: summary: "Inspect a config" operationId: "ConfigInspect" produces: - "application/json" responses: 200: description: "no error" schema: $ref: "#/definitions/Config" examples: application/json: ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "app-dev.crt" 404: description: "config not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the config" tags: ["Config"] delete: summary: "Delete a config" operationId: "ConfigDelete" produces: - "application/json" responses: 204: description: "no error" 404: description: "config not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part 
of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the config" tags: ["Config"] /configs/{id}/update: post: summary: "Update a Config" operationId: "ConfigUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such config" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the config" type: "string" required: true - name: "body" in: "body" schema: $ref: "#/definitions/ConfigSpec" description: | The spec of the config to update. Currently, only the Labels field can be updated. All other fields must remain unchanged from the [ConfigInspect endpoint](#operation/ConfigInspect) response values. - name: "version" in: "query" description: | The version number of the config object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true tags: ["Config"] /distribution/{name}/json: get: summary: "Get image information from the registry" description: | Return image digest and platform information by contacting the registry. operationId: "DistributionInspect" produces: - "application/json" responses: 200: description: "descriptor and platform information" schema: type: "object" x-go-name: DistributionInspect title: "DistributionInspectResponse" required: [Descriptor, Platforms] properties: Descriptor: type: "object" description: | A descriptor struct containing digest, media type, and size. properties: MediaType: type: "string" Size: type: "integer" format: "int64" Digest: type: "string" URLs: type: "array" items: type: "string" Platforms: type: "array" description: | An array containing all platforms supported by the image. items: type: "object" properties: Architecture: type: "string" OS: type: "string" OSVersion: type: "string" OSFeatures: type: "array" items: type: "string" Variant: type: "string" Features: type: "array" items: type: "string" examples: application/json: Descriptor: MediaType: "application/vnd.docker.distribution.manifest.v2+json" Digest: "sha256:c0537ff6a5218ef531ece93d4984efc99bbf3f7497c0a7726c88e2bb7584dc96" Size: 3987495 URLs: - "" Platforms: - Architecture: "amd64" OS: "linux" OSVersion: "" OSFeatures: - "" Variant: "" Features: - "" 401: description: "Failed authentication or no image found" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such image: someimage (tag: latest)" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or id" type: "string" required: true tags: ["Distribution"] /session: post: summary: "Initialize interactive session" description: | Start a new interactive session with a server. A session allows the server to call back to the client for advanced capabilities. ### Hijacking This endpoint hijacks the HTTP connection to an HTTP2 transport that allows the client to expose gRPC services on that connection. 
For example, the client sends this request to upgrade the connection: ``` POST /session HTTP/1.1 Upgrade: h2c Connection: Upgrade ``` The Docker daemon responds with a `101 UPGRADED` response, followed by the raw stream: ``` HTTP/1.1 101 UPGRADED Connection: Upgrade Upgrade: h2c ``` operationId: "Session" produces: - "application/vnd.docker.raw-stream" responses: 101: description: "no error, hijacking successful" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Session"]
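The paths above are plain HTTP endpoints, so any HTTP client that can reach the daemon's socket can exercise them. The following is a minimal sketch, not part of the API definition itself, showing how a client might list services via `GET /services` over the default Unix socket; the socket path `/var/run/docker.sock`, the placeholder host name `docker`, and the `v1.42` version prefix are assumptions to adjust for your environment.

```go
package main

import (
	"context"
	"fmt"
	"io"
	"net"
	"net/http"
)

func main() {
	// Route all requests over the daemon's Unix socket instead of TCP.
	tr := &http.Transport{
		DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
			return (&net.Dialer{}).DialContext(ctx, "unix", "/var/run/docker.sock")
		},
	}
	client := &http.Client{Transport: tr}

	// The host portion of the URL is a placeholder; only the path is significant.
	resp, err := client.Get("http://docker/v1.42/services")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}
```

The same pattern applies to the other endpoints in this section; only `/session` differs, since it upgrades the connection rather than returning a regular response body.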
# A Swagger 2.0 (a.k.a. OpenAPI) definition of the Engine API. # # This is used for generating API documentation and the types used by the # client/server. See api/README.md for more information. # # Some style notes: # - This file is used by ReDoc, which allows GitHub Flavored Markdown in # descriptions. # - There is no maximum line length, for ease of editing and pretty diffs. # - operationIds are in the format "NounVerb", with a singular noun. swagger: "2.0" schemes: - "http" - "https" produces: - "application/json" - "text/plain" consumes: - "application/json" - "text/plain" basePath: "/v1.42" info: title: "Docker Engine API" version: "1.42" x-logo: url: "https://docs.docker.com/images/logo-docker-main.png" description: | The Engine API is an HTTP API served by Docker Engine. It is the API the Docker client uses to communicate with the Engine, so everything the Docker client can do can be done with the API. Most of the client's commands map directly to API endpoints (e.g. `docker ps` is `GET /containers/json`). The notable exception is running containers, which consists of several API calls. # Errors The API uses standard HTTP status codes to indicate the success or failure of the API call. The body of the response will be JSON in the following format: ``` { "message": "page not found" } ``` # Versioning The API is usually changed in each release, so API calls are versioned to ensure that clients don't break. To lock to a specific version of the API, you prefix the URL with its version, for example, call `/v1.30/info` to use the v1.30 version of the `/info` endpoint. If the API version specified in the URL is not supported by the daemon, a HTTP `400 Bad Request` error message is returned. If you omit the version-prefix, the current version of the API (v1.42) is used. For example, calling `/info` is the same as calling `/v1.42/info`. Using the API without a version-prefix is deprecated and will be removed in a future release. Engine releases in the near future should support this version of the API, so your client will continue to work even if it is talking to a newer Engine. The API uses an open schema model, which means server may add extra properties to responses. Likewise, the server will ignore any extra query parameters and request body properties. When you write clients, you need to ignore additional properties in responses to ensure they do not break when talking to newer daemons. # Authentication Authentication for registries is handled client side. The client has to send authentication details to various endpoints that need to communicate with registries, such as `POST /images/(name)/push`. These are sent as `X-Registry-Auth` header as a [base64url encoded](https://tools.ietf.org/html/rfc4648#section-5) (JSON) string with the following structure: ``` { "username": "string", "password": "string", "email": "string", "serveraddress": "string" } ``` The `serveraddress` is a domain/IP without a protocol. Throughout this structure, double quotes are required. If you have already got an identity token from the [`/auth` endpoint](#operation/SystemAuth), you can just pass this instead of credentials: ``` { "identitytoken": "9cbaf023786cd7..." } ``` # The tags on paths define the menu sections in the ReDoc documentation, so # the usage of tags must make sense for that: # - They should be singular, not plural. # - There should not be too many tags, or the menu becomes unwieldy. 
For # example, it is preferable to add a path to the "System" tag instead of # creating a tag with a single path in it. # - The order of tags in this list defines the order in the menu. tags: # Primary objects - name: "Container" x-displayName: "Containers" description: | Create and manage containers. - name: "Image" x-displayName: "Images" - name: "Network" x-displayName: "Networks" description: | Networks are user-defined networks that containers can be attached to. See the [networking documentation](https://docs.docker.com/network/) for more information. - name: "Volume" x-displayName: "Volumes" description: | Create and manage persistent storage that can be attached to containers. - name: "Exec" x-displayName: "Exec" description: | Run new commands inside running containers. Refer to the [command-line reference](https://docs.docker.com/engine/reference/commandline/exec/) for more information. To exec a command in a container, you first need to create an exec instance, then start it. These two API endpoints are wrapped up in a single command-line command, `docker exec`. # Swarm things - name: "Swarm" x-displayName: "Swarm" description: | Engines can be clustered together in a swarm. Refer to the [swarm mode documentation](https://docs.docker.com/engine/swarm/) for more information. - name: "Node" x-displayName: "Nodes" description: | Nodes are instances of the Engine participating in a swarm. Swarm mode must be enabled for these endpoints to work. - name: "Service" x-displayName: "Services" description: | Services are the definitions of tasks to run on a swarm. Swarm mode must be enabled for these endpoints to work. - name: "Task" x-displayName: "Tasks" description: | A task is a container running on a swarm. It is the atomic scheduling unit of swarm. Swarm mode must be enabled for these endpoints to work. - name: "Secret" x-displayName: "Secrets" description: | Secrets are sensitive data that can be used by services. Swarm mode must be enabled for these endpoints to work. - name: "Config" x-displayName: "Configs" description: | Configs are application configurations that can be used by services. Swarm mode must be enabled for these endpoints to work. 
# System things - name: "Plugin" x-displayName: "Plugins" - name: "System" x-displayName: "System" definitions: Port: type: "object" description: "An open port on a container" required: [PrivatePort, Type] properties: IP: type: "string" format: "ip-address" description: "Host IP address that the container's port is mapped to" PrivatePort: type: "integer" format: "uint16" x-nullable: false description: "Port on the container" PublicPort: type: "integer" format: "uint16" description: "Port exposed on the host" Type: type: "string" x-nullable: false enum: ["tcp", "udp", "sctp"] example: PrivatePort: 8080 PublicPort: 80 Type: "tcp" MountPoint: type: "object" description: "A mount point inside a container" properties: Type: type: "string" Name: type: "string" Source: type: "string" Destination: type: "string" Driver: type: "string" Mode: type: "string" RW: type: "boolean" Propagation: type: "string" DeviceMapping: type: "object" description: "A device mapping between the host and container" properties: PathOnHost: type: "string" PathInContainer: type: "string" CgroupPermissions: type: "string" example: PathOnHost: "/dev/deviceName" PathInContainer: "/dev/deviceName" CgroupPermissions: "mrw" DeviceRequest: type: "object" description: "A request for devices to be sent to device drivers" properties: Driver: type: "string" example: "nvidia" Count: type: "integer" example: -1 DeviceIDs: type: "array" items: type: "string" example: - "0" - "1" - "GPU-fef8089b-4820-abfc-e83e-94318197576e" Capabilities: description: | A list of capabilities; an OR list of AND lists of capabilities. type: "array" items: type: "array" items: type: "string" example: # gpu AND nvidia AND compute - ["gpu", "nvidia", "compute"] Options: description: | Driver-specific options, specified as a key/value pairs. These options are passed directly to the driver. type: "object" additionalProperties: type: "string" ThrottleDevice: type: "object" properties: Path: description: "Device path" type: "string" Rate: description: "Rate" type: "integer" format: "int64" minimum: 0 Mount: type: "object" properties: Target: description: "Container path." type: "string" Source: description: "Mount source (e.g. a volume name, a host path)." type: "string" Type: description: | The mount type. Available types: - `bind` Mounts a file or directory from the host into the container. Must exist prior to creating the container. - `volume` Creates a volume with the given name and options (or uses a pre-existing volume with the same name and options). These are **not** removed when the container is removed. - `tmpfs` Create a tmpfs with the given options. The mount source cannot be specified for tmpfs. - `npipe` Mounts a named pipe from the host into the container. Must exist prior to creating the container. type: "string" enum: - "bind" - "volume" - "tmpfs" - "npipe" ReadOnly: description: "Whether the mount should be read-only." type: "boolean" Consistency: description: "The consistency requirement for the mount: `default`, `consistent`, `cached`, or `delegated`." type: "string" BindOptions: description: "Optional configuration for the `bind` type." type: "object" properties: Propagation: description: "A propagation mode with the value `[r]private`, `[r]shared`, or `[r]slave`." type: "string" enum: - "private" - "rprivate" - "shared" - "rshared" - "slave" - "rslave" NonRecursive: description: "Disable recursive bind mount." type: "boolean" default: false VolumeOptions: description: "Optional configuration for the `volume` type." 
type: "object" properties: NoCopy: description: "Populate volume with data from the target." type: "boolean" default: false Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" DriverConfig: description: "Map of driver specific options" type: "object" properties: Name: description: "Name of the driver to use to create the volume." type: "string" Options: description: "key/value map of driver specific options." type: "object" additionalProperties: type: "string" TmpfsOptions: description: "Optional configuration for the `tmpfs` type." type: "object" properties: SizeBytes: description: "The size for the tmpfs mount in bytes." type: "integer" format: "int64" Mode: description: "The permission mode for the tmpfs mount in an integer." type: "integer" RestartPolicy: description: | The behavior to apply when the container exits. The default is not to restart. An ever increasing delay (double the previous delay, starting at 100ms) is added before each restart to prevent flooding the server. type: "object" properties: Name: type: "string" description: | - Empty string means not to restart - `no` Do not automatically restart - `always` Always restart - `unless-stopped` Restart always except when the user has manually stopped the container - `on-failure` Restart only when the container exit code is non-zero enum: - "" - "no" - "always" - "unless-stopped" - "on-failure" MaximumRetryCount: type: "integer" description: | If `on-failure` is used, the number of times to retry before giving up. Resources: description: "A container's resources (cgroups config, ulimits, etc)" type: "object" properties: # Applicable to all platforms CpuShares: description: | An integer value representing this container's relative CPU weight versus other containers. type: "integer" Memory: description: "Memory limit in bytes." type: "integer" format: "int64" default: 0 # Applicable to UNIX platforms CgroupParent: description: | Path to `cgroups` under which the container's `cgroup` is created. If the path is not absolute, the path is considered to be relative to the `cgroups` path of the init process. Cgroups are created if they do not already exist. type: "string" BlkioWeight: description: "Block IO weight (relative weight)." type: "integer" minimum: 0 maximum: 1000 BlkioWeightDevice: description: | Block IO weight (relative device weight) in the form: ``` [{"Path": "device_path", "Weight": weight}] ``` type: "array" items: type: "object" properties: Path: type: "string" Weight: type: "integer" minimum: 0 BlkioDeviceReadBps: description: | Limit read rate (bytes per second) from a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" BlkioDeviceWriteBps: description: | Limit write rate (bytes per second) to a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" BlkioDeviceReadIOps: description: | Limit read rate (IO per second) from a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" BlkioDeviceWriteIOps: description: | Limit write rate (IO per second) to a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" CpuPeriod: description: "The length of a CPU period in microseconds." 
type: "integer" format: "int64" CpuQuota: description: | Microseconds of CPU time that the container can get in a CPU period. type: "integer" format: "int64" CpuRealtimePeriod: description: | The length of a CPU real-time period in microseconds. Set to 0 to allocate no time allocated to real-time tasks. type: "integer" format: "int64" CpuRealtimeRuntime: description: | The length of a CPU real-time runtime in microseconds. Set to 0 to allocate no time allocated to real-time tasks. type: "integer" format: "int64" CpusetCpus: description: | CPUs in which to allow execution (e.g., `0-3`, `0,1`). type: "string" example: "0-3" CpusetMems: description: | Memory nodes (MEMs) in which to allow execution (0-3, 0,1). Only effective on NUMA systems. type: "string" Devices: description: "A list of devices to add to the container." type: "array" items: $ref: "#/definitions/DeviceMapping" DeviceCgroupRules: description: "a list of cgroup rules to apply to the container" type: "array" items: type: "string" example: "c 13:* rwm" DeviceRequests: description: | A list of requests for devices to be sent to device drivers. type: "array" items: $ref: "#/definitions/DeviceRequest" KernelMemory: description: | Kernel memory limit in bytes. <p><br /></p> > **Deprecated**: This field is deprecated as the kernel 5.4 deprecated > `kmem.limit_in_bytes`. type: "integer" format: "int64" example: 209715200 KernelMemoryTCP: description: "Hard limit for kernel TCP buffer memory (in bytes)." type: "integer" format: "int64" MemoryReservation: description: "Memory soft limit in bytes." type: "integer" format: "int64" MemorySwap: description: | Total memory limit (memory + swap). Set as `-1` to enable unlimited swap. type: "integer" format: "int64" MemorySwappiness: description: | Tune a container's memory swappiness behavior. Accepts an integer between 0 and 100. type: "integer" format: "int64" minimum: 0 maximum: 100 NanoCpus: description: "CPU quota in units of 10<sup>-9</sup> CPUs." type: "integer" format: "int64" OomKillDisable: description: "Disable OOM Killer for the container." type: "boolean" Init: description: | Run an init inside the container that forwards signals and reaps processes. This field is omitted if empty, and the default (as configured on the daemon) is used. type: "boolean" x-nullable: true PidsLimit: description: | Tune a container's PIDs limit. Set `0` or `-1` for unlimited, or `null` to not change. type: "integer" format: "int64" x-nullable: true Ulimits: description: | A list of resource limits to set in the container. For example: ``` {"Name": "nofile", "Soft": 1024, "Hard": 2048} ``` type: "array" items: type: "object" properties: Name: description: "Name of ulimit" type: "string" Soft: description: "Soft limit" type: "integer" Hard: description: "Hard limit" type: "integer" # Applicable to Windows CpuCount: description: | The number of usable CPUs (Windows only). On Windows Server containers, the processor resource controls are mutually exclusive. The order of precedence is `CPUCount` first, then `CPUShares`, and `CPUPercent` last. type: "integer" format: "int64" CpuPercent: description: | The usable percentage of the available CPUs (Windows only). On Windows Server containers, the processor resource controls are mutually exclusive. The order of precedence is `CPUCount` first, then `CPUShares`, and `CPUPercent` last. 
type: "integer" format: "int64" IOMaximumIOps: description: "Maximum IOps for the container system drive (Windows only)" type: "integer" format: "int64" IOMaximumBandwidth: description: | Maximum IO in bytes per second for the container system drive (Windows only). type: "integer" format: "int64" Limit: description: | An object describing a limit on resources which can be requested by a task. type: "object" properties: NanoCPUs: type: "integer" format: "int64" example: 4000000000 MemoryBytes: type: "integer" format: "int64" example: 8272408576 Pids: description: | Limits the maximum number of PIDs in the container. Set `0` for unlimited. type: "integer" format: "int64" default: 0 example: 100 ResourceObject: description: | An object describing the resources which can be advertised by a node and requested by a task. type: "object" properties: NanoCPUs: type: "integer" format: "int64" example: 4000000000 MemoryBytes: type: "integer" format: "int64" example: 8272408576 GenericResources: $ref: "#/definitions/GenericResources" GenericResources: description: | User-defined resources can be either Integer resources (e.g, `SSD=3`) or String resources (e.g, `GPU=UUID1`). type: "array" items: type: "object" properties: NamedResourceSpec: type: "object" properties: Kind: type: "string" Value: type: "string" DiscreteResourceSpec: type: "object" properties: Kind: type: "string" Value: type: "integer" format: "int64" example: - DiscreteResourceSpec: Kind: "SSD" Value: 3 - NamedResourceSpec: Kind: "GPU" Value: "UUID1" - NamedResourceSpec: Kind: "GPU" Value: "UUID2" HealthConfig: description: "A test to perform to check that the container is healthy." type: "object" properties: Test: description: | The test to perform. Possible values are: - `[]` inherit healthcheck from image or parent image - `["NONE"]` disable healthcheck - `["CMD", args...]` exec arguments directly - `["CMD-SHELL", command]` run command with system's default shell type: "array" items: type: "string" Interval: description: | The time to wait between checks in nanoseconds. It should be 0 or at least 1000000 (1 ms). 0 means inherit. type: "integer" Timeout: description: | The time to wait before considering the check to have hung. It should be 0 or at least 1000000 (1 ms). 0 means inherit. type: "integer" Retries: description: | The number of consecutive failures needed to consider a container as unhealthy. 0 means inherit. type: "integer" StartPeriod: description: | Start period for the container to initialize before starting health-retries countdown in nanoseconds. It should be 0 or at least 1000000 (1 ms). 0 means inherit. type: "integer" Health: description: | Health stores information about the container's healthcheck results. 
type: "object" properties: Status: description: | Status is one of `none`, `starting`, `healthy` or `unhealthy` - "none" Indicates there is no healthcheck - "starting" Starting indicates that the container is not yet ready - "healthy" Healthy indicates that the container is running correctly - "unhealthy" Unhealthy indicates that the container has a problem type: "string" enum: - "none" - "starting" - "healthy" - "unhealthy" example: "healthy" FailingStreak: description: "FailingStreak is the number of consecutive failures" type: "integer" example: 0 Log: type: "array" description: | Log contains the last few results (oldest first) items: x-nullable: true $ref: "#/definitions/HealthcheckResult" HealthcheckResult: description: | HealthcheckResult stores information about a single run of a healthcheck probe type: "object" properties: Start: description: | Date and time at which this check started in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "date-time" example: "2020-01-04T10:44:24.496525531Z" End: description: | Date and time at which this check ended in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2020-01-04T10:45:21.364524523Z" ExitCode: description: | ExitCode meanings: - `0` healthy - `1` unhealthy - `2` reserved (considered unhealthy) - other values: error running probe type: "integer" example: 0 Output: description: "Output from last check" type: "string" HostConfig: description: "Container configuration that depends on the host we are running on" allOf: - $ref: "#/definitions/Resources" - type: "object" properties: # Applicable to all platforms Binds: type: "array" description: | A list of volume bindings for this container. Each volume binding is a string in one of these forms: - `host-src:container-dest[:options]` to bind-mount a host path into the container. Both `host-src`, and `container-dest` must be an _absolute_ path. - `volume-name:container-dest[:options]` to bind-mount a volume managed by a volume driver into the container. `container-dest` must be an _absolute_ path. `options` is an optional, comma-delimited list of: - `nocopy` disables automatic copying of data from the container path to the volume. The `nocopy` flag only applies to named volumes. - `[ro|rw]` mounts a volume read-only or read-write, respectively. If omitted or set to `rw`, volumes are mounted read-write. - `[z|Z]` applies SELinux labels to allow or deny multiple containers to read and write to the same volume. - `z`: a _shared_ content label is applied to the content. This label indicates that multiple containers can share the volume content, for both reading and writing. - `Z`: a _private unshared_ label is applied to the content. This label indicates that only the current container can use a private volume. Labeling systems such as SELinux require proper labels to be placed on volume content that is mounted into a container. Without a label, the security system can prevent a container's processes from using the content. By default, the labels set by the host operating system are not modified. - `[[r]shared|[r]slave|[r]private]` specifies mount [propagation behavior](https://www.kernel.org/doc/Documentation/filesystems/sharedsubtree.txt). This only applies to bind-mounted volumes, not internal volumes or named volumes. 
Mount propagation requires the source mount point (the location where the source directory is mounted in the host operating system) to have the correct propagation properties. For shared volumes, the source mount point must be set to `shared`. For slave volumes, the mount must be set to either `shared` or `slave`. items: type: "string" ContainerIDFile: type: "string" description: "Path to a file where the container ID is written" LogConfig: type: "object" description: "The logging configuration for this container" properties: Type: type: "string" enum: - "json-file" - "syslog" - "journald" - "gelf" - "fluentd" - "awslogs" - "splunk" - "etwlogs" - "none" Config: type: "object" additionalProperties: type: "string" NetworkMode: type: "string" description: | Network mode to use for this container. Supported standard values are: `bridge`, `host`, `none`, and `container:<name|id>`. Any other value is taken as a custom network's name to which this container should connect to. PortBindings: $ref: "#/definitions/PortMap" RestartPolicy: $ref: "#/definitions/RestartPolicy" AutoRemove: type: "boolean" description: | Automatically remove the container when the container's process exits. This has no effect if `RestartPolicy` is set. VolumeDriver: type: "string" description: "Driver that this container uses to mount volumes." VolumesFrom: type: "array" description: | A list of volumes to inherit from another container, specified in the form `<container name>[:<ro|rw>]`. items: type: "string" Mounts: description: | Specification for mounts to be added to the container. type: "array" items: $ref: "#/definitions/Mount" # Applicable to UNIX platforms CapAdd: type: "array" description: | A list of kernel capabilities to add to the container. Conflicts with option 'Capabilities'. items: type: "string" CapDrop: type: "array" description: | A list of kernel capabilities to drop from the container. Conflicts with option 'Capabilities'. items: type: "string" CgroupnsMode: type: "string" enum: - "private" - "host" description: | cgroup namespace mode for the container. Possible values are: - `"private"`: the container runs in its own private cgroup namespace - `"host"`: use the host system's cgroup namespace If not specified, the daemon default is used, which can either be `"private"` or `"host"`, depending on daemon version, kernel support and configuration. Dns: type: "array" description: "A list of DNS servers for the container to use." items: type: "string" DnsOptions: type: "array" description: "A list of DNS options." items: type: "string" DnsSearch: type: "array" description: "A list of DNS search domains." items: type: "string" ExtraHosts: type: "array" description: | A list of hostnames/IP mappings to add to the container's `/etc/hosts` file. Specified in the form `["hostname:IP"]`. items: type: "string" GroupAdd: type: "array" description: | A list of additional groups that the container process will run as. items: type: "string" IpcMode: type: "string" description: | IPC sharing mode for the container. Possible values are: - `"none"`: own private IPC namespace, with /dev/shm not mounted - `"private"`: own private IPC namespace - `"shareable"`: own private IPC namespace, with a possibility to share it with other containers - `"container:<name|id>"`: join another (shareable) container's IPC namespace - `"host"`: use the host system's IPC namespace If not specified, daemon default is used, which can either be `"private"` or `"shareable"`, depending on daemon version and configuration. 
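      # Illustrative only, not part of the schema: a sketch of a HostConfig
      # fragment combining several of the fields defined above, as it might
      # appear in a container-create request. Paths, addresses, and values are
      # hypothetical.
      #
      #   {
      #     "Binds": ["/host/data:/data:ro"],
      #     "PortBindings": {"80/tcp": [{"HostIp": "0.0.0.0", "HostPort": "8080"}]},
      #     "RestartPolicy": {"Name": "on-failure", "MaximumRetryCount": 3},
      #     "Dns": ["8.8.8.8"],
      #     "ExtraHosts": ["registry.local:10.0.0.5"]
      #   }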
Cgroup: type: "string" description: "Cgroup to use for the container." Links: type: "array" description: | A list of links for the container in the form `container_name:alias`. items: type: "string" OomScoreAdj: type: "integer" description: | An integer value containing the score given to the container in order to tune OOM killer preferences. example: 500 PidMode: type: "string" description: | Set the PID (Process) Namespace mode for the container. It can be either: - `"container:<name|id>"`: joins another container's PID namespace - `"host"`: use the host's PID namespace inside the container Privileged: type: "boolean" description: "Gives the container full access to the host." PublishAllPorts: type: "boolean" description: | Allocates an ephemeral host port for all of a container's exposed ports. Ports are de-allocated when the container stops and allocated when the container starts. The allocated port might be changed when restarting the container. The port is selected from the ephemeral port range that depends on the kernel. For example, on Linux the range is defined by `/proc/sys/net/ipv4/ip_local_port_range`. ReadonlyRootfs: type: "boolean" description: "Mount the container's root filesystem as read only." SecurityOpt: type: "array" description: "A list of string values to customize labels for MLS systems, such as SELinux." items: type: "string" StorageOpt: type: "object" description: | Storage driver options for this container, in the form `{"size": "120G"}`. additionalProperties: type: "string" Tmpfs: type: "object" description: | A map of container directories which should be replaced by tmpfs mounts, and their corresponding mount options. For example: ``` { "/run": "rw,noexec,nosuid,size=65536k" } ``` additionalProperties: type: "string" UTSMode: type: "string" description: "UTS namespace to use for the container." UsernsMode: type: "string" description: | Sets the usernamespace mode for the container when usernamespace remapping option is enabled. ShmSize: type: "integer" description: | Size of `/dev/shm` in bytes. If omitted, the system uses 64MB. minimum: 0 Sysctls: type: "object" description: | A list of kernel parameters (sysctls) to set in the container. For example: ``` {"net.ipv4.ip_forward": "1"} ``` additionalProperties: type: "string" Runtime: type: "string" description: "Runtime to use with this container." # Applicable to Windows ConsoleSize: type: "array" description: | Initial console size, as an `[height, width]` array. (Windows only) minItems: 2 maxItems: 2 items: type: "integer" minimum: 0 Isolation: type: "string" description: | Isolation technology of the container. (Windows only) enum: - "default" - "process" - "hyperv" MaskedPaths: type: "array" description: | The list of paths to be masked inside the container (this overrides the default set of paths). items: type: "string" ReadonlyPaths: type: "array" description: | The list of paths to be set as read-only inside the container (this overrides the default set of paths). items: type: "string" ContainerConfig: description: "Configuration for a container that is portable between hosts" type: "object" properties: Hostname: description: "The hostname to use for the container, as a valid RFC 1123 hostname." type: "string" Domainname: description: "The domain name to use for the container." type: "string" User: description: "The user that commands are run as inside the container." type: "string" AttachStdin: description: "Whether to attach to `stdin`." 
type: "boolean" default: false AttachStdout: description: "Whether to attach to `stdout`." type: "boolean" default: true AttachStderr: description: "Whether to attach to `stderr`." type: "boolean" default: true ExposedPorts: description: | An object mapping ports to an empty object in the form: `{"<port>/<tcp|udp|sctp>": {}}` type: "object" additionalProperties: type: "object" enum: - {} default: {} Tty: description: | Attach standard streams to a TTY, including `stdin` if it is not closed. type: "boolean" default: false OpenStdin: description: "Open `stdin`" type: "boolean" default: false StdinOnce: description: "Close `stdin` after one attached client disconnects" type: "boolean" default: false Env: description: | A list of environment variables to set inside the container in the form `["VAR=value", ...]`. A variable without `=` is removed from the environment, rather than to have an empty value. type: "array" items: type: "string" Cmd: description: | Command to run specified as a string or an array of strings. type: "array" items: type: "string" Healthcheck: $ref: "#/definitions/HealthConfig" ArgsEscaped: description: "Command is already escaped (Windows only)" type: "boolean" Image: description: | The name of the image to use when creating the container/ type: "string" Volumes: description: | An object mapping mount point paths inside the container to empty objects. type: "object" additionalProperties: type: "object" enum: - {} default: {} WorkingDir: description: "The working directory for commands to run in." type: "string" Entrypoint: description: | The entry point for the container as a string or an array of strings. If the array consists of exactly one empty string (`[""]`) then the entry point is reset to system default (i.e., the entry point used by docker when there is no `ENTRYPOINT` instruction in the `Dockerfile`). type: "array" items: type: "string" NetworkDisabled: description: "Disable networking for the container." type: "boolean" MacAddress: description: "MAC address of the container." type: "string" OnBuild: description: | `ONBUILD` metadata that were defined in the image's `Dockerfile`. type: "array" items: type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" StopSignal: description: | Signal to stop a container as a string or unsigned integer. type: "string" default: "SIGTERM" StopTimeout: description: "Timeout to stop a container in seconds." type: "integer" default: 10 Shell: description: | Shell for when `RUN`, `CMD`, and `ENTRYPOINT` uses a shell. type: "array" items: type: "string" NetworkingConfig: description: | NetworkingConfig represents the container's networking configuration for each of its interfaces. It is used for the networking configs specified in the `docker create` and `docker network connect` commands. type: "object" properties: EndpointsConfig: description: | A mapping of network name to endpoint configuration for that network. type: "object" additionalProperties: $ref: "#/definitions/EndpointSettings" example: # putting an example here, instead of using the example values from # /definitions/EndpointSettings, because containers/create currently # does not support attaching to multiple networks, so the example request # would be confusing if it showed that multiple networks can be contained # in the EndpointsConfig. 
# TODO remove once we support multiple networks on container create (see https://github.com/moby/moby/blob/07e6b843594e061f82baa5fa23c2ff7d536c2a05/daemon/create.go#L323) EndpointsConfig: isolated_nw: IPAMConfig: IPv4Address: "172.20.30.33" IPv6Address: "2001:db8:abcd::3033" LinkLocalIPs: - "169.254.34.68" - "fe80::3468" Links: - "container_1" - "container_2" Aliases: - "server_x" - "server_y" NetworkSettings: description: "NetworkSettings exposes the network settings in the API" type: "object" properties: Bridge: description: Name of the network's bridge (for example, `docker0`). type: "string" example: "docker0" SandboxID: description: SandboxID uniquely represents a container's network stack. type: "string" example: "9d12daf2c33f5959c8bf90aa513e4f65b561738661003029ec84830cd503a0c3" HairpinMode: description: | Indicates if hairpin NAT should be enabled on the virtual interface. type: "boolean" example: false LinkLocalIPv6Address: description: IPv6 unicast address using the link-local prefix. type: "string" example: "fe80::42:acff:fe11:1" LinkLocalIPv6PrefixLen: description: Prefix length of the IPv6 unicast address. type: "integer" example: "64" Ports: $ref: "#/definitions/PortMap" SandboxKey: description: SandboxKey identifies the sandbox type: "string" example: "/var/run/docker/netns/8ab54b426c38" # TODO is SecondaryIPAddresses actually used? SecondaryIPAddresses: description: "" type: "array" items: $ref: "#/definitions/Address" x-nullable: true # TODO is SecondaryIPv6Addresses actually used? SecondaryIPv6Addresses: description: "" type: "array" items: $ref: "#/definitions/Address" x-nullable: true # TODO properties below are part of DefaultNetworkSettings, which is # marked as deprecated since Docker 1.9 and to be removed in Docker v17.12 EndpointID: description: | EndpointID uniquely represents a service endpoint in a Sandbox. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "b88f5b905aabf2893f3cbc4ee42d1ea7980bbc0a92e2c8922b1e1795298afb0b" Gateway: description: | Gateway address for the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "172.17.0.1" GlobalIPv6Address: description: | Global IPv6 address for the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "2001:db8::5689" GlobalIPv6PrefixLen: description: | Mask length of the global IPv6 address. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. 
This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "integer" example: 64 IPAddress: description: | IPv4 address for the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "172.17.0.4" IPPrefixLen: description: | Mask length of the IPv4 address. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "integer" example: 16 IPv6Gateway: description: | IPv6 gateway address for this network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "2001:db8:2::100" MacAddress: description: | MAC address for the container on the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "02:42:ac:11:00:04" Networks: description: | Information about all networks that the container is connected to. type: "object" additionalProperties: $ref: "#/definitions/EndpointSettings" Address: description: Address represents an IPv4 or IPv6 IP address. type: "object" properties: Addr: description: IP address. type: "string" PrefixLen: description: Mask length of the IP address. type: "integer" PortMap: description: | PortMap describes the mapping of container ports to host ports, using the container's port-number and protocol as key in the format `<port>/<protocol>`, for example, `80/udp`. If a container's port is mapped for multiple protocols, separate entries are added to the mapping table. type: "object" additionalProperties: type: "array" x-nullable: true items: $ref: "#/definitions/PortBinding" example: "443/tcp": - HostIp: "127.0.0.1" HostPort: "4443" "80/tcp": - HostIp: "0.0.0.0" HostPort: "80" - HostIp: "0.0.0.0" HostPort: "8080" "80/udp": - HostIp: "0.0.0.0" HostPort: "80" "53/udp": - HostIp: "0.0.0.0" HostPort: "53" "2377/tcp": null PortBinding: description: | PortBinding represents a binding between a host IP address and a host port. type: "object" properties: HostIp: description: "Host IP address that the container's port is mapped to." type: "string" example: "127.0.0.1" HostPort: description: "Host port number that the container's port is mapped to." type: "string" example: "4443" GraphDriverData: description: "Information about a container's graph driver." 
type: "object" required: [Name, Data] properties: Name: type: "string" x-nullable: false Data: type: "object" x-nullable: false additionalProperties: type: "string" Image: type: "object" required: - Id - Parent - Comment - Created - Container - DockerVersion - Author - Architecture - Os - Size - VirtualSize - GraphDriver - RootFS properties: Id: type: "string" x-nullable: false RepoTags: type: "array" items: type: "string" RepoDigests: type: "array" items: type: "string" Parent: type: "string" x-nullable: false Comment: type: "string" x-nullable: false Created: type: "string" x-nullable: false Container: type: "string" x-nullable: false ContainerConfig: $ref: "#/definitions/ContainerConfig" DockerVersion: type: "string" x-nullable: false Author: type: "string" x-nullable: false Config: $ref: "#/definitions/ContainerConfig" Architecture: type: "string" x-nullable: false Os: type: "string" x-nullable: false OsVersion: type: "string" Size: type: "integer" format: "int64" x-nullable: false VirtualSize: type: "integer" format: "int64" x-nullable: false GraphDriver: $ref: "#/definitions/GraphDriverData" RootFS: type: "object" required: [Type] properties: Type: type: "string" x-nullable: false Layers: type: "array" items: type: "string" BaseLayer: type: "string" Metadata: type: "object" properties: LastTagTime: type: "string" format: "dateTime" ImageSummary: type: "object" required: - Id - ParentId - RepoTags - RepoDigests - Created - Size - SharedSize - VirtualSize - Labels - Containers properties: Id: type: "string" x-nullable: false ParentId: type: "string" x-nullable: false RepoTags: type: "array" x-nullable: false items: type: "string" RepoDigests: type: "array" x-nullable: false items: type: "string" Created: type: "integer" x-nullable: false Size: type: "integer" x-nullable: false SharedSize: type: "integer" x-nullable: false VirtualSize: type: "integer" x-nullable: false Labels: type: "object" x-nullable: false additionalProperties: type: "string" Containers: x-nullable: false type: "integer" AuthConfig: type: "object" properties: username: type: "string" password: type: "string" email: type: "string" serveraddress: type: "string" example: username: "hannibal" password: "xxxx" serveraddress: "https://index.docker.io/v1/" ProcessConfig: type: "object" properties: privileged: type: "boolean" user: type: "string" tty: type: "boolean" entrypoint: type: "string" arguments: type: "array" items: type: "string" Volume: type: "object" required: [Name, Driver, Mountpoint, Labels, Scope, Options] properties: Name: type: "string" description: "Name of the volume." x-nullable: false Driver: type: "string" description: "Name of the volume driver used by the volume." x-nullable: false Mountpoint: type: "string" description: "Mount path of the volume on the host." x-nullable: false CreatedAt: type: "string" format: "dateTime" description: "Date/Time the volume was created." Status: type: "object" description: | Low-level details about the volume, provided by the volume driver. Details are returned as a map with key/value pairs: `{"key":"value","key2":"value2"}`. The `Status` field is optional, and is omitted if the volume driver does not support this feature. additionalProperties: type: "object" Labels: type: "object" description: "User-defined key/value metadata." x-nullable: false additionalProperties: type: "string" Scope: type: "string" description: | The level at which the volume exists. Either `global` for cluster-wide, or `local` for machine level. 
default: "local" x-nullable: false enum: ["local", "global"] Options: type: "object" description: | The driver specific options used when creating the volume. additionalProperties: type: "string" UsageData: type: "object" x-nullable: true required: [Size, RefCount] description: | Usage details about the volume. This information is used by the `GET /system/df` endpoint, and omitted in other endpoints. properties: Size: type: "integer" default: -1 description: | Amount of disk space used by the volume (in bytes). This information is only available for volumes created with the `"local"` volume driver. For volumes created with other volume drivers, this field is set to `-1` ("not available") x-nullable: false RefCount: type: "integer" default: -1 description: | The number of containers referencing this volume. This field is set to `-1` if the reference-count is not available. x-nullable: false example: Name: "tardis" Driver: "custom" Mountpoint: "/var/lib/docker/volumes/tardis" Status: hello: "world" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Scope: "local" CreatedAt: "2016-06-07T20:31:11.853781916Z" Network: type: "object" properties: Name: type: "string" Id: type: "string" Created: type: "string" format: "dateTime" Scope: type: "string" Driver: type: "string" EnableIPv6: type: "boolean" IPAM: $ref: "#/definitions/IPAM" Internal: type: "boolean" Attachable: type: "boolean" Ingress: type: "boolean" Containers: type: "object" additionalProperties: $ref: "#/definitions/NetworkContainer" Options: type: "object" additionalProperties: type: "string" Labels: type: "object" additionalProperties: type: "string" example: Name: "net01" Id: "7d86d31b1478e7cca9ebed7e73aa0fdeec46c5ca29497431d3007d2d9e15ed99" Created: "2016-10-19T04:33:30.360899459Z" Scope: "local" Driver: "bridge" EnableIPv6: false IPAM: Driver: "default" Config: - Subnet: "172.19.0.0/16" Gateway: "172.19.0.1" Options: foo: "bar" Internal: false Attachable: false Ingress: false Containers: 19a4d5d687db25203351ed79d478946f861258f018fe384f229f2efa4b23513c: Name: "test" EndpointID: "628cadb8bcb92de107b2a1e516cbffe463e321f548feb37697cce00ad694f21a" MacAddress: "02:42:ac:13:00:02" IPv4Address: "172.19.0.2/16" IPv6Address: "" Options: com.docker.network.bridge.default_bridge: "true" com.docker.network.bridge.enable_icc: "true" com.docker.network.bridge.enable_ip_masquerade: "true" com.docker.network.bridge.host_binding_ipv4: "0.0.0.0" com.docker.network.bridge.name: "docker0" com.docker.network.driver.mtu: "1500" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" IPAM: type: "object" properties: Driver: description: "Name of the IPAM driver to use." type: "string" default: "default" Config: description: | List of IPAM configuration options, specified as a map: ``` {"Subnet": <CIDR>, "IPRange": <CIDR>, "Gateway": <IP address>, "AuxAddress": <device_name:IP address>} ``` type: "array" items: type: "object" additionalProperties: type: "string" Options: description: "Driver-specific options, specified as a map." 
type: "object" additionalProperties: type: "string" NetworkContainer: type: "object" properties: Name: type: "string" EndpointID: type: "string" MacAddress: type: "string" IPv4Address: type: "string" IPv6Address: type: "string" BuildInfo: type: "object" properties: id: type: "string" stream: type: "string" error: type: "string" errorDetail: $ref: "#/definitions/ErrorDetail" status: type: "string" progress: type: "string" progressDetail: $ref: "#/definitions/ProgressDetail" aux: $ref: "#/definitions/ImageID" BuildCache: type: "object" properties: ID: type: "string" Parent: type: "string" Type: type: "string" Description: type: "string" InUse: type: "boolean" Shared: type: "boolean" Size: description: | Amount of disk space used by the build cache (in bytes). type: "integer" CreatedAt: description: | Date and time at which the build cache was created in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2016-08-18T10:44:24.496525531Z" LastUsedAt: description: | Date and time at which the build cache was last used in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" x-nullable: true example: "2017-08-09T07:09:37.632105588Z" UsageCount: type: "integer" ImageID: type: "object" description: "Image ID or Digest" properties: ID: type: "string" example: ID: "sha256:85f05633ddc1c50679be2b16a0479ab6f7637f8884e0cfe0f4d20e1ebb3d6e7c" CreateImageInfo: type: "object" properties: id: type: "string" error: type: "string" status: type: "string" progress: type: "string" progressDetail: $ref: "#/definitions/ProgressDetail" PushImageInfo: type: "object" properties: error: type: "string" status: type: "string" progress: type: "string" progressDetail: $ref: "#/definitions/ProgressDetail" ErrorDetail: type: "object" properties: code: type: "integer" message: type: "string" ProgressDetail: type: "object" properties: current: type: "integer" total: type: "integer" ErrorResponse: description: "Represents an error." type: "object" required: ["message"] properties: message: description: "The error message." type: "string" x-nullable: false example: message: "Something went wrong." IdResponse: description: "Response to an API call that returns just an Id" type: "object" required: ["Id"] properties: Id: description: "The id of the newly created object." type: "string" x-nullable: false EndpointSettings: description: "Configuration for a network endpoint." type: "object" properties: # Configurations IPAMConfig: $ref: "#/definitions/EndpointIPAMConfig" Links: type: "array" items: type: "string" example: - "container_1" - "container_2" Aliases: type: "array" items: type: "string" example: - "server_x" - "server_y" # Operational data NetworkID: description: | Unique ID of the network. type: "string" example: "08754567f1f40222263eab4102e1c733ae697e8e354aa9cd6e18d7402835292a" EndpointID: description: | Unique ID for the service endpoint in a Sandbox. type: "string" example: "b88f5b905aabf2893f3cbc4ee42d1ea7980bbc0a92e2c8922b1e1795298afb0b" Gateway: description: | Gateway address for this network. type: "string" example: "172.17.0.1" IPAddress: description: | IPv4 address. type: "string" example: "172.17.0.4" IPPrefixLen: description: | Mask length of the IPv4 address. type: "integer" example: 16 IPv6Gateway: description: | IPv6 gateway address. type: "string" example: "2001:db8:2::100" GlobalIPv6Address: description: | Global IPv6 address. 
type: "string" example: "2001:db8::5689" GlobalIPv6PrefixLen: description: | Mask length of the global IPv6 address. type: "integer" format: "int64" example: 64 MacAddress: description: | MAC address for the endpoint on this network. type: "string" example: "02:42:ac:11:00:04" DriverOpts: description: | DriverOpts is a mapping of driver options and values. These options are passed directly to the driver and are driver specific. type: "object" x-nullable: true additionalProperties: type: "string" example: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" EndpointIPAMConfig: description: | EndpointIPAMConfig represents an endpoint's IPAM configuration. type: "object" x-nullable: true properties: IPv4Address: type: "string" example: "172.20.30.33" IPv6Address: type: "string" example: "2001:db8:abcd::3033" LinkLocalIPs: type: "array" items: type: "string" example: - "169.254.34.68" - "fe80::3468" PluginMount: type: "object" x-nullable: false required: [Name, Description, Settable, Source, Destination, Type, Options] properties: Name: type: "string" x-nullable: false example: "some-mount" Description: type: "string" x-nullable: false example: "This is a mount that's used by the plugin." Settable: type: "array" items: type: "string" Source: type: "string" example: "/var/lib/docker/plugins/" Destination: type: "string" x-nullable: false example: "/mnt/state" Type: type: "string" x-nullable: false example: "bind" Options: type: "array" items: type: "string" example: - "rbind" - "rw" PluginDevice: type: "object" required: [Name, Description, Settable, Path] x-nullable: false properties: Name: type: "string" x-nullable: false Description: type: "string" x-nullable: false Settable: type: "array" items: type: "string" Path: type: "string" example: "/dev/fuse" PluginEnv: type: "object" x-nullable: false required: [Name, Description, Settable, Value] properties: Name: x-nullable: false type: "string" Description: x-nullable: false type: "string" Settable: type: "array" items: type: "string" Value: type: "string" PluginInterfaceType: type: "object" x-nullable: false required: [Prefix, Capability, Version] properties: Prefix: type: "string" x-nullable: false Capability: type: "string" x-nullable: false Version: type: "string" x-nullable: false PluginPrivilegeItem: description: | Describes a permission the user has to accept upon installing the plugin. type: "object" properties: Name: type: "string" example: "network" Description: type: "string" Value: type: "array" items: type: "string" example: - "host" Plugin: description: "A plugin for the Engine API" type: "object" required: [Settings, Enabled, Config, Name] properties: Id: type: "string" example: "5724e2c8652da337ab2eedd19fc6fc0ec908e4bd907c7421bf6a8dfc70c4c078" Name: type: "string" x-nullable: false example: "tiborvass/sample-volume-plugin" Enabled: description: True if the plugin is running. False if the plugin is not running, only installed. type: "boolean" x-nullable: false example: true Settings: description: "Settings that can be modified by users." 
type: "object" x-nullable: false required: [Args, Devices, Env, Mounts] properties: Mounts: type: "array" items: $ref: "#/definitions/PluginMount" Env: type: "array" items: type: "string" example: - "DEBUG=0" Args: type: "array" items: type: "string" Devices: type: "array" items: $ref: "#/definitions/PluginDevice" PluginReference: description: "plugin remote reference used to push/pull the plugin" type: "string" x-nullable: false example: "localhost:5000/tiborvass/sample-volume-plugin:latest" Config: description: "The config of a plugin." type: "object" x-nullable: false required: - Description - Documentation - Interface - Entrypoint - WorkDir - Network - Linux - PidHost - PropagatedMount - IpcHost - Mounts - Env - Args properties: DockerVersion: description: "Docker Version used to create the plugin" type: "string" x-nullable: false example: "17.06.0-ce" Description: type: "string" x-nullable: false example: "A sample volume plugin for Docker" Documentation: type: "string" x-nullable: false example: "https://docs.docker.com/engine/extend/plugins/" Interface: description: "The interface between Docker and the plugin" x-nullable: false type: "object" required: [Types, Socket] properties: Types: type: "array" items: $ref: "#/definitions/PluginInterfaceType" example: - "docker.volumedriver/1.0" Socket: type: "string" x-nullable: false example: "plugins.sock" ProtocolScheme: type: "string" example: "some.protocol/v1.0" description: "Protocol to use for clients connecting to the plugin." enum: - "" - "moby.plugins.http/v1" Entrypoint: type: "array" items: type: "string" example: - "/usr/bin/sample-volume-plugin" - "/data" WorkDir: type: "string" x-nullable: false example: "/bin/" User: type: "object" x-nullable: false properties: UID: type: "integer" format: "uint32" example: 1000 GID: type: "integer" format: "uint32" example: 1000 Network: type: "object" x-nullable: false required: [Type] properties: Type: x-nullable: false type: "string" example: "host" Linux: type: "object" x-nullable: false required: [Capabilities, AllowAllDevices, Devices] properties: Capabilities: type: "array" items: type: "string" example: - "CAP_SYS_ADMIN" - "CAP_SYSLOG" AllowAllDevices: type: "boolean" x-nullable: false example: false Devices: type: "array" items: $ref: "#/definitions/PluginDevice" PropagatedMount: type: "string" x-nullable: false example: "/mnt/volumes" IpcHost: type: "boolean" x-nullable: false example: false PidHost: type: "boolean" x-nullable: false example: false Mounts: type: "array" items: $ref: "#/definitions/PluginMount" Env: type: "array" items: $ref: "#/definitions/PluginEnv" example: - Name: "DEBUG" Description: "If set, prints debug messages" Settable: null Value: "0" Args: type: "object" x-nullable: false required: [Name, Description, Settable, Value] properties: Name: x-nullable: false type: "string" example: "args" Description: x-nullable: false type: "string" example: "command line arguments" Settable: type: "array" items: type: "string" Value: type: "array" items: type: "string" rootfs: type: "object" properties: type: type: "string" example: "layers" diff_ids: type: "array" items: type: "string" example: - "sha256:675532206fbf3030b8458f88d6e26d4eb1577688a25efec97154c94e8b6b4887" - "sha256:e216a057b1cb1efc11f8a268f37ef62083e70b1b38323ba252e25ac88904a7e8" ObjectVersion: description: | The version number of the object such as node, service, etc. This is needed to avoid conflicting writes. 
The client must send the version number along with the modified specification when updating these objects. This approach ensures safe concurrency and determinism in that the change on the object may not be applied if the version number has changed from the last read. In other words, if two update requests specify the same base version, only one of the requests can succeed. As a result, two separate update requests that happen at the same time will not unintentionally overwrite each other. type: "object" properties: Index: type: "integer" format: "uint64" example: 373531 NodeSpec: type: "object" properties: Name: description: "Name for the node." type: "string" example: "my-node" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" Role: description: "Role of the node." type: "string" enum: - "worker" - "manager" example: "manager" Availability: description: "Availability of the node." type: "string" enum: - "active" - "pause" - "drain" example: "active" example: Availability: "active" Name: "node-name" Role: "manager" Labels: foo: "bar" Node: type: "object" properties: ID: type: "string" example: "24ifsmvkjbyhk" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: description: | Date and time at which the node was added to the swarm in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2016-08-18T10:44:24.496525531Z" UpdatedAt: description: | Date and time at which the node was last updated in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2017-08-09T07:09:37.632105588Z" Spec: $ref: "#/definitions/NodeSpec" Description: $ref: "#/definitions/NodeDescription" Status: $ref: "#/definitions/NodeStatus" ManagerStatus: $ref: "#/definitions/ManagerStatus" NodeDescription: description: | NodeDescription encapsulates the properties of the Node as reported by the agent. type: "object" properties: Hostname: type: "string" example: "bf3067039e47" Platform: $ref: "#/definitions/Platform" Resources: $ref: "#/definitions/ResourceObject" Engine: $ref: "#/definitions/EngineDescription" TLSInfo: $ref: "#/definitions/TLSInfo" Platform: description: | Platform represents the platform (Arch/OS). type: "object" properties: Architecture: description: | Architecture represents the hardware architecture (for example, `x86_64`). type: "string" example: "x86_64" OS: description: | OS represents the Operating System (for example, `linux` or `windows`). type: "string" example: "linux" EngineDescription: description: "EngineDescription provides information about an engine." 
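      # Illustrative only, not part of the schema: a sketch of how ObjectVersion
      # is used for optimistic concurrency. An update to a node echoes the last
      # read Index as the `version` query parameter, with a NodeSpec as the body
      # (the ID and values are hypothetical):
      #
      #   POST /nodes/24ifsmvkjbyhk/update?version=373531
      #   {"Availability": "drain", "Role": "manager", "Labels": {"foo": "bar"}}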
type: "object" properties: EngineVersion: type: "string" example: "17.06.0" Labels: type: "object" additionalProperties: type: "string" example: foo: "bar" Plugins: type: "array" items: type: "object" properties: Type: type: "string" Name: type: "string" example: - Type: "Log" Name: "awslogs" - Type: "Log" Name: "fluentd" - Type: "Log" Name: "gcplogs" - Type: "Log" Name: "gelf" - Type: "Log" Name: "journald" - Type: "Log" Name: "json-file" - Type: "Log" Name: "logentries" - Type: "Log" Name: "splunk" - Type: "Log" Name: "syslog" - Type: "Network" Name: "bridge" - Type: "Network" Name: "host" - Type: "Network" Name: "ipvlan" - Type: "Network" Name: "macvlan" - Type: "Network" Name: "null" - Type: "Network" Name: "overlay" - Type: "Volume" Name: "local" - Type: "Volume" Name: "localhost:5000/vieux/sshfs:latest" - Type: "Volume" Name: "vieux/sshfs:latest" TLSInfo: description: | Information about the issuer of leaf TLS certificates and the trusted root CA certificate. type: "object" properties: TrustRoot: description: | The root CA certificate(s) that are used to validate leaf TLS certificates. type: "string" CertIssuerSubject: description: The base64-url-safe-encoded raw subject bytes of the issuer. type: "string" CertIssuerPublicKey: description: | The base64-url-safe-encoded raw public key bytes of the issuer. type: "string" example: TrustRoot: | -----BEGIN CERTIFICATE----- MIIBajCCARCgAwIBAgIUbYqrLSOSQHoxD8CwG6Bi2PJi9c8wCgYIKoZIzj0EAwIw EzERMA8GA1UEAxMIc3dhcm0tY2EwHhcNMTcwNDI0MjE0MzAwWhcNMzcwNDE5MjE0 MzAwWjATMREwDwYDVQQDEwhzd2FybS1jYTBZMBMGByqGSM49AgEGCCqGSM49AwEH A0IABJk/VyMPYdaqDXJb/VXh5n/1Yuv7iNrxV3Qb3l06XD46seovcDWs3IZNV1lf 3Skyr0ofcchipoiHkXBODojJydSjQjBAMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMB Af8EBTADAQH/MB0GA1UdDgQWBBRUXxuRcnFjDfR/RIAUQab8ZV/n4jAKBggqhkjO PQQDAgNIADBFAiAy+JTe6Uc3KyLCMiqGl2GyWGQqQDEcO3/YG36x7om65AIhAJvz pxv6zFeVEkAEEkqIYi0omA9+CjanB/6Bz4n1uw8H -----END CERTIFICATE----- CertIssuerSubject: "MBMxETAPBgNVBAMTCHN3YXJtLWNh" CertIssuerPublicKey: "MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEmT9XIw9h1qoNclv9VeHmf/Vi6/uI2vFXdBveXTpcPjqx6i9wNazchk1XWV/dKTKvSh9xyGKmiIeRcE4OiMnJ1A==" NodeStatus: description: | NodeStatus represents the status of a node. It provides the current status of the node, as seen by the manager. type: "object" properties: State: $ref: "#/definitions/NodeState" Message: type: "string" example: "" Addr: description: "IP address of the node." type: "string" example: "172.17.0.2" NodeState: description: "NodeState represents the state of a node." type: "string" enum: - "unknown" - "down" - "ready" - "disconnected" example: "ready" ManagerStatus: description: | ManagerStatus represents the status of a manager. It provides the current status of a node's manager component, if the node is a manager. x-nullable: true type: "object" properties: Leader: type: "boolean" default: false example: true Reachability: $ref: "#/definitions/Reachability" Addr: description: | The IP address and port at which the manager is reachable. type: "string" example: "10.0.0.46:2377" Reachability: description: "Reachability represents the reachability of a node." type: "string" enum: - "unknown" - "unreachable" - "reachable" example: "reachable" SwarmSpec: description: "User modifiable swarm configuration." type: "object" properties: Name: description: "Name of the swarm." type: "string" example: "default" Labels: description: "User-defined key/value metadata." 
type: "object" additionalProperties: type: "string" example: com.example.corp.type: "production" com.example.corp.department: "engineering" Orchestration: description: "Orchestration configuration." type: "object" x-nullable: true properties: TaskHistoryRetentionLimit: description: | The number of historic tasks to keep per instance or node. If negative, never remove completed or failed tasks. type: "integer" format: "int64" example: 10 Raft: description: "Raft configuration." type: "object" properties: SnapshotInterval: description: "The number of log entries between snapshots." type: "integer" format: "uint64" example: 10000 KeepOldSnapshots: description: | The number of snapshots to keep beyond the current snapshot. type: "integer" format: "uint64" LogEntriesForSlowFollowers: description: | The number of log entries to keep around to sync up slow followers after a snapshot is created. type: "integer" format: "uint64" example: 500 ElectionTick: description: | The number of ticks that a follower will wait for a message from the leader before becoming a candidate and starting an election. `ElectionTick` must be greater than `HeartbeatTick`. A tick currently defaults to one second, so these translate directly to seconds currently, but this is NOT guaranteed. type: "integer" example: 3 HeartbeatTick: description: | The number of ticks between heartbeats. Every HeartbeatTick ticks, the leader will send a heartbeat to the followers. A tick currently defaults to one second, so these translate directly to seconds currently, but this is NOT guaranteed. type: "integer" example: 1 Dispatcher: description: "Dispatcher configuration." type: "object" x-nullable: true properties: HeartbeatPeriod: description: | The delay for an agent to send a heartbeat to the dispatcher. type: "integer" format: "int64" example: 5000000000 CAConfig: description: "CA configuration." type: "object" x-nullable: true properties: NodeCertExpiry: description: "The duration node certificates are issued for." type: "integer" format: "int64" example: 7776000000000000 ExternalCAs: description: | Configuration for forwarding signing requests to an external certificate authority. type: "array" items: type: "object" properties: Protocol: description: | Protocol for communication with the external CA (currently only `cfssl` is supported). type: "string" enum: - "cfssl" default: "cfssl" URL: description: | URL where certificate signing requests should be sent. type: "string" Options: description: | An object with key/value pairs that are interpreted as protocol-specific options for the external CA driver. type: "object" additionalProperties: type: "string" CACert: description: | The root CA certificate (in PEM format) this external CA uses to issue TLS certificates (assumed to be to the current swarm root CA certificate if not provided). type: "string" SigningCACert: description: | The desired signing CA certificate for all swarm node TLS leaf certificates, in PEM format. type: "string" SigningCAKey: description: | The desired signing CA key for all swarm node TLS leaf certificates, in PEM format. type: "string" ForceRotate: description: | An integer whose purpose is to force swarm to generate a new signing CA certificate and key, if none have been specified in `SigningCACert` and `SigningCAKey` format: "uint64" type: "integer" EncryptionConfig: description: "Parameters related to encryption-at-rest." type: "object" properties: AutoLockManagers: description: | If set, generate a key and use it to lock data stored on the managers. 
type: "boolean" example: false TaskDefaults: description: "Defaults for creating tasks in this cluster." type: "object" properties: LogDriver: description: | The log driver to use for tasks created in the orchestrator if unspecified by a service. Updating this value only affects new tasks. Existing tasks continue to use their previously configured log driver until recreated. type: "object" properties: Name: description: | The log driver to use as a default for new tasks. type: "string" example: "json-file" Options: description: | Driver-specific options for the selectd log driver, specified as key/value pairs. type: "object" additionalProperties: type: "string" example: "max-file": "10" "max-size": "100m" # The Swarm information for `GET /info`. It is the same as `GET /swarm`, but # without `JoinTokens`. ClusterInfo: description: | ClusterInfo represents information about the swarm as is returned by the "/info" endpoint. Join-tokens are not included. x-nullable: true type: "object" properties: ID: description: "The ID of the swarm." type: "string" example: "abajmipo7b4xz5ip2nrla6b11" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: description: | Date and time at which the swarm was initialised in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2016-08-18T10:44:24.496525531Z" UpdatedAt: description: | Date and time at which the swarm was last updated in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2017-08-09T07:09:37.632105588Z" Spec: $ref: "#/definitions/SwarmSpec" TLSInfo: $ref: "#/definitions/TLSInfo" RootRotationInProgress: description: | Whether there is currently a root CA rotation in progress for the swarm type: "boolean" example: false DataPathPort: description: | DataPathPort specifies the data path port number for data traffic. Acceptable port range is 1024 to 49151. If no port is set or is set to 0, the default port (4789) is used. type: "integer" format: "uint32" default: 4789 example: 4789 DefaultAddrPool: description: | Default Address Pool specifies default subnet pools for global scope networks. type: "array" items: type: "string" format: "CIDR" example: ["10.10.0.0/16", "20.20.0.0/16"] SubnetSize: description: | SubnetSize specifies the subnet size of the networks created from the default subnet pool. type: "integer" format: "uint32" maximum: 29 default: 24 example: 24 JoinTokens: description: | JoinTokens contains the tokens workers and managers need to join the swarm. type: "object" properties: Worker: description: | The token workers can use to join the swarm. type: "string" example: "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-1awxwuwd3z9j1z3puu7rcgdbx" Manager: description: | The token managers can use to join the swarm. type: "string" example: "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-7p73s1dx5in4tatdymyhg9hu2" Swarm: type: "object" allOf: - $ref: "#/definitions/ClusterInfo" - type: "object" properties: JoinTokens: $ref: "#/definitions/JoinTokens" TaskSpec: description: "User modifiable task configuration." type: "object" properties: PluginSpec: type: "object" description: | Plugin spec for the service. *(Experimental release only.)* <p><br /></p> > **Note**: ContainerSpec, NetworkAttachmentSpec, and PluginSpec are > mutually exclusive. PluginSpec is only used when the Runtime field > is set to `plugin`. NetworkAttachmentSpec is used when the Runtime > field is set to `attachment`. 
properties: Name: description: "The name or 'alias' to use for the plugin." type: "string" Remote: description: "The plugin image reference to use." type: "string" Disabled: description: "Disable the plugin once scheduled." type: "boolean" PluginPrivilege: type: "array" items: $ref: "#/definitions/PluginPrivilegeItem" ContainerSpec: type: "object" description: | Container spec for the service. <p><br /></p> > **Note**: ContainerSpec, NetworkAttachmentSpec, and PluginSpec are > mutually exclusive. PluginSpec is only used when the Runtime field > is set to `plugin`. NetworkAttachmentSpec is used when the Runtime > field is set to `attachment`. properties: Image: description: "The image name to use for the container" type: "string" Labels: description: "User-defined key/value data." type: "object" additionalProperties: type: "string" Command: description: "The command to be run in the image." type: "array" items: type: "string" Args: description: "Arguments to the command." type: "array" items: type: "string" Hostname: description: | The hostname to use for the container, as a valid [RFC 1123](https://tools.ietf.org/html/rfc1123) hostname. type: "string" Env: description: | A list of environment variables in the form `VAR=value`. type: "array" items: type: "string" Dir: description: "The working directory for commands to run in." type: "string" User: description: "The user inside the container." type: "string" Groups: type: "array" description: | A list of additional groups that the container process will run as. items: type: "string" Privileges: type: "object" description: "Security options for the container" properties: CredentialSpec: type: "object" description: "CredentialSpec for managed service account (Windows only)" properties: Config: type: "string" example: "0bt9dmxjvjiqermk6xrop3ekq" description: | Load credential spec from a Swarm Config with the given ID. The specified config must also be present in the Configs field with the Runtime property set. <p><br /></p> > **Note**: `CredentialSpec.File`, `CredentialSpec.Registry`, > and `CredentialSpec.Config` are mutually exclusive. File: type: "string" example: "spec.json" description: | Load credential spec from this file. The file is read by the daemon, and must be present in the `CredentialSpecs` subdirectory in the docker data directory, which defaults to `C:\ProgramData\Docker\` on Windows. For example, specifying `spec.json` loads `C:\ProgramData\Docker\CredentialSpecs\spec.json`. <p><br /></p> > **Note**: `CredentialSpec.File`, `CredentialSpec.Registry`, > and `CredentialSpec.Config` are mutually exclusive. Registry: type: "string" description: | Load credential spec from this value in the Windows registry. The specified registry value must be located in: `HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization\Containers\CredentialSpecs` <p><br /></p> > **Note**: `CredentialSpec.File`, `CredentialSpec.Registry`, > and `CredentialSpec.Config` are mutually exclusive. SELinuxContext: type: "object" description: "SELinux labels of the container" properties: Disable: type: "boolean" description: "Disable SELinux" User: type: "string" description: "SELinux user label" Role: type: "string" description: "SELinux role label" Type: type: "string" description: "SELinux type label" Level: type: "string" description: "SELinux level label" TTY: description: "Whether a pseudo-TTY should be allocated." 
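        # Illustrative only, not part of the schema: a sketch of a minimal
        # ContainerSpec fragment for a service, using fields defined above
        # (the image and values are hypothetical):
        #
        #   {
        #     "Image": "nginx:alpine",
        #     "Args": ["nginx", "-g", "daemon off;"],
        #     "Env": ["LOG_LEVEL=info"],
        #     "User": "104:107",
        #     "Hostname": "web"
        #   }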
type: "boolean" OpenStdin: description: "Open `stdin`" type: "boolean" ReadOnly: description: "Mount the container's root filesystem as read only." type: "boolean" Mounts: description: | Specification for mounts to be added to containers created as part of the service. type: "array" items: $ref: "#/definitions/Mount" StopSignal: description: "Signal to stop the container." type: "string" StopGracePeriod: description: | Amount of time to wait for the container to terminate before forcefully killing it. type: "integer" format: "int64" HealthCheck: $ref: "#/definitions/HealthConfig" Hosts: type: "array" description: | A list of hostname/IP mappings to add to the container's `hosts` file. The format of extra hosts is specified in the [hosts(5)](http://man7.org/linux/man-pages/man5/hosts.5.html) man page: IP_address canonical_hostname [aliases...] items: type: "string" DNSConfig: description: | Specification for DNS related configurations in resolver configuration file (`resolv.conf`). type: "object" properties: Nameservers: description: "The IP addresses of the name servers." type: "array" items: type: "string" Search: description: "A search list for host-name lookup." type: "array" items: type: "string" Options: description: | A list of internal resolver variables to be modified (e.g., `debug`, `ndots:3`, etc.). type: "array" items: type: "string" Secrets: description: | Secrets contains references to zero or more secrets that will be exposed to the service. type: "array" items: type: "object" properties: File: description: | File represents a specific target that is backed by a file. type: "object" properties: Name: description: | Name represents the final filename in the filesystem. type: "string" UID: description: "UID represents the file UID." type: "string" GID: description: "GID represents the file GID." type: "string" Mode: description: "Mode represents the FileMode of the file." type: "integer" format: "uint32" SecretID: description: | SecretID represents the ID of the specific secret that we're referencing. type: "string" SecretName: description: | SecretName is the name of the secret that this references, but this is just provided for lookup/display purposes. The secret in the reference will be identified by its ID. type: "string" Configs: description: | Configs contains references to zero or more configs that will be exposed to the service. type: "array" items: type: "object" properties: File: description: | File represents a specific target that is backed by a file. <p><br /><p> > **Note**: `Configs.File` and `Configs.Runtime` are mutually exclusive type: "object" properties: Name: description: | Name represents the final filename in the filesystem. type: "string" UID: description: "UID represents the file UID." type: "string" GID: description: "GID represents the file GID." type: "string" Mode: description: "Mode represents the FileMode of the file." type: "integer" format: "uint32" Runtime: description: | Runtime represents a target that is not mounted into the container but is used by the task <p><br /><p> > **Note**: `Configs.File` and `Configs.Runtime` are mutually > exclusive type: "object" ConfigID: description: | ConfigID represents the ID of the specific config that we're referencing. type: "string" ConfigName: description: | ConfigName is the name of the config that this references, but this is just provided for lookup/display purposes. The config in the reference will be identified by its ID. 
type: "string" Isolation: type: "string" description: | Isolation technology of the containers running the service. (Windows only) enum: - "default" - "process" - "hyperv" Init: description: | Run an init inside the container that forwards signals and reaps processes. This field is omitted if empty, and the default (as configured on the daemon) is used. type: "boolean" x-nullable: true Sysctls: description: | Set kernel namedspaced parameters (sysctls) in the container. The Sysctls option on services accepts the same sysctls as the are supported on containers. Note that while the same sysctls are supported, no guarantees or checks are made about their suitability for a clustered environment, and it's up to the user to determine whether a given sysctl will work properly in a Service. type: "object" additionalProperties: type: "string" # This option is not used by Windows containers CapabilityAdd: type: "array" description: | A list of kernel capabilities to add to the default set for the container. items: type: "string" example: - "CAP_NET_RAW" - "CAP_SYS_ADMIN" - "CAP_SYS_CHROOT" - "CAP_SYSLOG" CapabilityDrop: type: "array" description: | A list of kernel capabilities to drop from the default set for the container. items: type: "string" example: - "CAP_NET_RAW" Ulimits: description: | A list of resource limits to set in the container. For example: `{"Name": "nofile", "Soft": 1024, "Hard": 2048}`" type: "array" items: type: "object" properties: Name: description: "Name of ulimit" type: "string" Soft: description: "Soft limit" type: "integer" Hard: description: "Hard limit" type: "integer" NetworkAttachmentSpec: description: | Read-only spec type for non-swarm containers attached to swarm overlay networks. <p><br /></p> > **Note**: ContainerSpec, NetworkAttachmentSpec, and PluginSpec are > mutually exclusive. PluginSpec is only used when the Runtime field > is set to `plugin`. NetworkAttachmentSpec is used when the Runtime > field is set to `attachment`. type: "object" properties: ContainerID: description: "ID of the container represented by this task" type: "string" Resources: description: | Resource requirements which apply to each individual container created as part of the service. type: "object" properties: Limits: description: "Define resources limits." $ref: "#/definitions/Limit" Reservation: description: "Define resources reservation." $ref: "#/definitions/ResourceObject" RestartPolicy: description: | Specification for the restart policy which applies to containers created as part of this service. type: "object" properties: Condition: description: "Condition for restart." type: "string" enum: - "none" - "on-failure" - "any" Delay: description: "Delay between restart attempts." type: "integer" format: "int64" MaxAttempts: description: | Maximum attempts to restart a given container before giving up (default value is 0, which is ignored). type: "integer" format: "int64" default: 0 Window: description: | Windows is the time window used to evaluate the restart policy (default value is 0, which is unbounded). type: "integer" format: "int64" default: 0 Placement: type: "object" properties: Constraints: description: | An array of constraint expressions to limit the set of nodes where a task can be scheduled. Constraint expressions can either use a _match_ (`==`) or _exclude_ (`!=`) rule. Multiple constraints find nodes that satisfy every expression (AND match). 
Constraints can match node or Docker Engine labels as follows: node attribute | matches | example ---------------------|--------------------------------|----------------------------------------------- `node.id` | Node ID | `node.id==2ivku8v2gvtg4` `node.hostname` | Node hostname | `node.hostname!=node-2` `node.role` | Node role (`manager`/`worker`) | `node.role==manager` `node.platform.os` | Node operating system | `node.platform.os==windows` `node.platform.arch` | Node architecture | `node.platform.arch==x86_64` `node.labels` | User-defined node labels | `node.labels.security==high` `engine.labels` | Docker Engine's labels | `engine.labels.operatingsystem==ubuntu-14.04` `engine.labels` apply to Docker Engine labels like operating system, drivers, etc. Swarm administrators add `node.labels` for operational purposes by using the [`node update endpoint`](#operation/NodeUpdate). type: "array" items: type: "string" example: - "node.hostname!=node3.corp.example.com" - "node.role!=manager" - "node.labels.type==production" - "node.platform.os==linux" - "node.platform.arch==x86_64" Preferences: description: | Preferences provide a way to make the scheduler aware of factors such as topology. They are provided in order from highest to lowest precedence. type: "array" items: type: "object" properties: Spread: type: "object" properties: SpreadDescriptor: description: | label descriptor, such as `engine.labels.az`. type: "string" example: - Spread: SpreadDescriptor: "node.labels.datacenter" - Spread: SpreadDescriptor: "node.labels.rack" MaxReplicas: description: | Maximum number of replicas for per node (default value is 0, which is unlimited) type: "integer" format: "int64" default: 0 Platforms: description: | Platforms stores all the platforms that the service's image can run on. This field is used in the platform filter for scheduling. If empty, then the platform filter is off, meaning there are no scheduling restrictions. type: "array" items: $ref: "#/definitions/Platform" ForceUpdate: description: | A counter that triggers an update even if no relevant parameters have been changed. type: "integer" Runtime: description: | Runtime is the type of runtime specified for the task executor. type: "string" Networks: description: "Specifies which networks the service should attach to." type: "array" items: $ref: "#/definitions/NetworkAttachmentConfig" LogDriver: description: | Specifies the log driver to use for tasks created from this spec. If not present, the default one for the swarm will be used, finally falling back to the engine default if not specified. type: "object" properties: Name: type: "string" Options: type: "object" additionalProperties: type: "string" TaskState: type: "string" enum: - "new" - "allocated" - "pending" - "assigned" - "accepted" - "preparing" - "ready" - "starting" - "running" - "complete" - "shutdown" - "failed" - "rejected" - "remove" - "orphaned" Task: type: "object" properties: ID: description: "The ID of the task." type: "string" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" UpdatedAt: type: "string" format: "dateTime" Name: description: "Name of the task." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" Spec: $ref: "#/definitions/TaskSpec" ServiceID: description: "The ID of the service this task is part of." type: "string" Slot: type: "integer" NodeID: description: "The ID of the node that this task is on." 
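# A short, illustrative Placement fragment that combines the constraint
# examples above: schedule only on Linux nodes that are not managers, spread
# tasks across the user-defined "datacenter" node label, and (as an example
# value only) allow at most two replicas per node.
#
#   "Placement": {
#     "Constraints": [ "node.role!=manager", "node.platform.os==linux" ],
#     "Preferences": [
#       { "Spread": { "SpreadDescriptor": "node.labels.datacenter" } }
#     ],
#     "MaxReplicas": 2
#   }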
type: "string" AssignedGenericResources: $ref: "#/definitions/GenericResources" Status: type: "object" properties: Timestamp: type: "string" format: "dateTime" State: $ref: "#/definitions/TaskState" Message: type: "string" Err: type: "string" ContainerStatus: type: "object" properties: ContainerID: type: "string" PID: type: "integer" ExitCode: type: "integer" DesiredState: $ref: "#/definitions/TaskState" JobIteration: description: | If the Service this Task belongs to is a job-mode service, contains the JobIteration of the Service this Task was created for. Absent if the Task was created for a Replicated or Global Service. $ref: "#/definitions/ObjectVersion" example: ID: "0kzzo1i0y4jz6027t0k7aezc7" Version: Index: 71 CreatedAt: "2016-06-07T21:07:31.171892745Z" UpdatedAt: "2016-06-07T21:07:31.376370513Z" Spec: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ServiceID: "9mnpnzenvg8p8tdbtq4wvbkcz" Slot: 1 NodeID: "60gvrl6tm78dmak4yl7srz94v" Status: Timestamp: "2016-06-07T21:07:31.290032978Z" State: "running" Message: "started" ContainerStatus: ContainerID: "e5d62702a1b48d01c3e02ca1e0212a250801fa8d67caca0b6f35919ebc12f035" PID: 677 DesiredState: "running" NetworksAttachments: - Network: ID: "4qvuz4ko70xaltuqbt8956gd1" Version: Index: 18 CreatedAt: "2016-06-07T20:31:11.912919752Z" UpdatedAt: "2016-06-07T21:07:29.955277358Z" Spec: Name: "ingress" Labels: com.docker.swarm.internal: "true" DriverConfiguration: {} IPAMOptions: Driver: {} Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" DriverState: Name: "overlay" Options: com.docker.network.driver.overlay.vxlanid_list: "256" IPAMOptions: Driver: Name: "default" Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" Addresses: - "10.255.0.10/16" AssignedGenericResources: - DiscreteResourceSpec: Kind: "SSD" Value: 3 - NamedResourceSpec: Kind: "GPU" Value: "UUID1" - NamedResourceSpec: Kind: "GPU" Value: "UUID2" ServiceSpec: description: "User modifiable configuration for a service." properties: Name: description: "Name of the service." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" TaskTemplate: $ref: "#/definitions/TaskSpec" Mode: description: "Scheduling mode for the service." type: "object" properties: Replicated: type: "object" properties: Replicas: type: "integer" format: "int64" Global: type: "object" ReplicatedJob: description: | The mode used for services with a finite number of tasks that run to a completed state. type: "object" properties: MaxConcurrent: description: | The maximum number of replicas to run simultaneously. type: "integer" format: "int64" default: 1 TotalCompletions: description: | The total number of replicas desired to reach the Completed state. If unset, will default to the value of `MaxConcurrent` type: "integer" format: "int64" GlobalJob: description: | The mode used for services which run a task to the completed state on each valid node. type: "object" UpdateConfig: description: "Specification for the update strategy of the service." type: "object" properties: Parallelism: description: | Maximum number of tasks to be updated in one iteration (0 means unlimited parallelism). type: "integer" format: "int64" Delay: description: "Amount of time between updates, in nanoseconds." type: "integer" format: "int64" FailureAction: description: | Action to take if an updated task fails to run, or stops running during the update. 
type: "string" enum: - "continue" - "pause" - "rollback" Monitor: description: | Amount of time to monitor each updated task for failures, in nanoseconds. type: "integer" format: "int64" MaxFailureRatio: description: | The fraction of tasks that may fail during an update before the failure action is invoked, specified as a floating point number between 0 and 1. type: "number" default: 0 Order: description: | The order of operations when rolling out an updated task. Either the old task is shut down before the new task is started, or the new task is started before the old task is shut down. type: "string" enum: - "stop-first" - "start-first" RollbackConfig: description: "Specification for the rollback strategy of the service." type: "object" properties: Parallelism: description: | Maximum number of tasks to be rolled back in one iteration (0 means unlimited parallelism). type: "integer" format: "int64" Delay: description: | Amount of time between rollback iterations, in nanoseconds. type: "integer" format: "int64" FailureAction: description: | Action to take if an rolled back task fails to run, or stops running during the rollback. type: "string" enum: - "continue" - "pause" Monitor: description: | Amount of time to monitor each rolled back task for failures, in nanoseconds. type: "integer" format: "int64" MaxFailureRatio: description: | The fraction of tasks that may fail during a rollback before the failure action is invoked, specified as a floating point number between 0 and 1. type: "number" default: 0 Order: description: | The order of operations when rolling back a task. Either the old task is shut down before the new task is started, or the new task is started before the old task is shut down. type: "string" enum: - "stop-first" - "start-first" Networks: description: "Specifies which networks the service should attach to." type: "array" items: $ref: "#/definitions/NetworkAttachmentConfig" EndpointSpec: $ref: "#/definitions/EndpointSpec" EndpointPortConfig: type: "object" properties: Name: type: "string" Protocol: type: "string" enum: - "tcp" - "udp" - "sctp" TargetPort: description: "The port inside the container." type: "integer" PublishedPort: description: "The port on the swarm hosts." type: "integer" PublishMode: description: | The mode in which port is published. <p><br /></p> - "ingress" makes the target port accessible on every node, regardless of whether there is a task for the service running on that node or not. - "host" bypasses the routing mesh and publish the port directly on the swarm node where that service is running. type: "string" enum: - "ingress" - "host" default: "ingress" example: "ingress" EndpointSpec: description: "Properties that can be configured to access and load balance a service." type: "object" properties: Mode: description: | The mode of resolution to use for internal load balancing between tasks. type: "string" enum: - "vip" - "dnsrr" default: "vip" Ports: description: | List of exposed ports that this service is accessible on from the outside. Ports can only be provided if `vip` resolution mode is used. 
type: "array" items: $ref: "#/definitions/EndpointPortConfig" Service: type: "object" properties: ID: type: "string" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" UpdatedAt: type: "string" format: "dateTime" Spec: $ref: "#/definitions/ServiceSpec" Endpoint: type: "object" properties: Spec: $ref: "#/definitions/EndpointSpec" Ports: type: "array" items: $ref: "#/definitions/EndpointPortConfig" VirtualIPs: type: "array" items: type: "object" properties: NetworkID: type: "string" Addr: type: "string" UpdateStatus: description: "The status of a service update." type: "object" properties: State: type: "string" enum: - "updating" - "paused" - "completed" StartedAt: type: "string" format: "dateTime" CompletedAt: type: "string" format: "dateTime" Message: type: "string" ServiceStatus: description: | The status of the service's tasks. Provided only when requested as part of a ServiceList operation. type: "object" properties: RunningTasks: description: | The number of tasks for the service currently in the Running state. type: "integer" format: "uint64" example: 7 DesiredTasks: description: | The number of tasks for the service desired to be running. For replicated services, this is the replica count from the service spec. For global services, this is computed by taking count of all tasks for the service with a Desired State other than Shutdown. type: "integer" format: "uint64" example: 10 CompletedTasks: description: | The number of tasks for a job that are in the Completed state. This field must be cross-referenced with the service type, as the value of 0 may mean the service is not in a job mode, or it may mean the job-mode service has no tasks yet Completed. type: "integer" format: "uint64" JobStatus: description: | The status of the service when it is in one of ReplicatedJob or GlobalJob modes. Absent on Replicated and Global mode services. The JobIteration is an ObjectVersion, but unlike the Service's version, does not need to be sent with an update request. type: "object" properties: JobIteration: description: | JobIteration is a value increased each time a Job is executed, successfully or otherwise. "Executed", in this case, means the job as a whole has been started, not that an individual Task has been launched. A job is "Executed" when its ServiceSpec is updated. JobIteration can be used to disambiguate Tasks belonging to different executions of a job. Though JobIteration will increase with each subsequent execution, it may not necessarily increase by 1, and so JobIteration should not be used to $ref: "#/definitions/ObjectVersion" LastExecution: description: | The last time, as observed by the server, that this job was started. 
type: "string" format: "dateTime" example: ID: "9mnpnzenvg8p8tdbtq4wvbkcz" Version: Index: 19 CreatedAt: "2016-06-07T21:05:51.880065305Z" UpdatedAt: "2016-06-07T21:07:29.962229872Z" Spec: Name: "hopeful_cori" TaskTemplate: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ForceUpdate: 0 Mode: Replicated: Replicas: 1 UpdateConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 RollbackConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 EndpointSpec: Mode: "vip" Ports: - Protocol: "tcp" TargetPort: 6379 PublishedPort: 30001 Endpoint: Spec: Mode: "vip" Ports: - Protocol: "tcp" TargetPort: 6379 PublishedPort: 30001 Ports: - Protocol: "tcp" TargetPort: 6379 PublishedPort: 30001 VirtualIPs: - NetworkID: "4qvuz4ko70xaltuqbt8956gd1" Addr: "10.255.0.2/16" - NetworkID: "4qvuz4ko70xaltuqbt8956gd1" Addr: "10.255.0.3/16" ImageDeleteResponseItem: type: "object" properties: Untagged: description: "The image ID of an image that was untagged" type: "string" Deleted: description: "The image ID of an image that was deleted" type: "string" ServiceUpdateResponse: type: "object" properties: Warnings: description: "Optional warning messages" type: "array" items: type: "string" example: Warning: "unable to pin image doesnotexist:latest to digest: image library/doesnotexist:latest not found" ContainerSummary: type: "object" properties: Id: description: "The ID of this container" type: "string" x-go-name: "ID" Names: description: "The names that this container has been given" type: "array" items: type: "string" Image: description: "The name of the image used when creating this container" type: "string" ImageID: description: "The ID of the image that this container was created from" type: "string" Command: description: "Command to run when starting the container" type: "string" Created: description: "When the container was created" type: "integer" format: "int64" Ports: description: "The ports exposed by this container" type: "array" items: $ref: "#/definitions/Port" SizeRw: description: "The size of files that have been created or changed by this container" type: "integer" format: "int64" SizeRootFs: description: "The total size of all the files in this container" type: "integer" format: "int64" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" State: description: "The state of this container (e.g. `Exited`)" type: "string" Status: description: "Additional human-readable status of this container (e.g. `Exit 0`)" type: "string" HostConfig: type: "object" properties: NetworkMode: type: "string" NetworkSettings: description: "A summary of the container's network settings" type: "object" properties: Networks: type: "object" additionalProperties: $ref: "#/definitions/EndpointSettings" Mounts: type: "array" items: $ref: "#/definitions/Mount" Driver: description: "Driver represents a driver (network, logging, secrets)." type: "object" required: [Name] properties: Name: description: "Name of the driver." type: "string" x-nullable: false example: "some-driver" Options: description: "Key/value map of driver-specific options." type: "object" x-nullable: false additionalProperties: type: "string" example: OptionA: "value for driver-specific option A" OptionB: "value for driver-specific option B" SecretSpec: type: "object" properties: Name: description: "User-defined name of the secret." 
type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" example: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Data: description: | Base64-url-safe-encoded ([RFC 4648](https://tools.ietf.org/html/rfc4648#section-5)) data to store as secret. This field is only used to _create_ a secret, and is not returned by other endpoints. type: "string" example: "" Driver: description: | Name of the secrets driver used to fetch the secret's value from an external secret store. $ref: "#/definitions/Driver" Templating: description: | Templating driver, if applicable Templating controls whether and how to evaluate the config payload as a template. If no driver is set, no templating is used. $ref: "#/definitions/Driver" Secret: type: "object" properties: ID: type: "string" example: "blt1owaxmitz71s9v5zh81zun" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" example: "2017-07-20T13:55:28.678958722Z" UpdatedAt: type: "string" format: "dateTime" example: "2017-07-20T13:55:28.678958722Z" Spec: $ref: "#/definitions/SecretSpec" ConfigSpec: type: "object" properties: Name: description: "User-defined name of the config." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" Data: description: | Base64-url-safe-encoded ([RFC 4648](https://tools.ietf.org/html/rfc4648#section-5)) config data. type: "string" Templating: description: | Templating driver, if applicable Templating controls whether and how to evaluate the config payload as a template. If no driver is set, no templating is used. $ref: "#/definitions/Driver" Config: type: "object" properties: ID: type: "string" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" UpdatedAt: type: "string" format: "dateTime" Spec: $ref: "#/definitions/ConfigSpec" ContainerState: description: | ContainerState stores container's running state. It's part of ContainerJSONBase and will be returned by the "inspect" command. type: "object" properties: Status: description: | String representation of the container state. Can be one of "created", "running", "paused", "restarting", "removing", "exited", or "dead". type: "string" enum: ["created", "running", "paused", "restarting", "removing", "exited", "dead"] example: "running" Running: description: | Whether this container is running. Note that a running container can be _paused_. The `Running` and `Paused` booleans are not mutually exclusive: When pausing a container (on Linux), the freezer cgroup is used to suspend all processes in the container. Freezing the process requires the process to be running. As a result, paused containers are both `Running` _and_ `Paused`. Use the `Status` field instead to determine if a container's state is "running". type: "boolean" example: true Paused: description: "Whether this container is paused." type: "boolean" example: false Restarting: description: "Whether this container is restarting." type: "boolean" example: false OOMKilled: description: | Whether this container has been killed because it ran out of memory. type: "boolean" example: false Dead: type: "boolean" example: false Pid: description: "The process ID of this container" type: "integer" example: 1234 ExitCode: description: "The last exit code of this container" type: "integer" example: 0 Error: type: "string" StartedAt: description: "The time when this container was last started." 
type: "string" example: "2020-01-06T09:06:59.461876391Z" FinishedAt: description: "The time when this container last exited." type: "string" example: "2020-01-06T09:07:59.461876391Z" Health: x-nullable: true $ref: "#/definitions/Health" SystemVersion: type: "object" description: | Response of Engine API: GET "/version" properties: Platform: type: "object" required: [Name] properties: Name: type: "string" Components: type: "array" description: | Information about system components items: type: "object" x-go-name: ComponentVersion required: [Name, Version] properties: Name: description: | Name of the component type: "string" example: "Engine" Version: description: | Version of the component type: "string" x-nullable: false example: "19.03.12" Details: description: | Key/value pairs of strings with additional information about the component. These values are intended for informational purposes only, and their content is not defined, and not part of the API specification. These messages can be printed by the client as information to the user. type: "object" x-nullable: true Version: description: "The version of the daemon" type: "string" example: "19.03.12" ApiVersion: description: | The default (and highest) API version that is supported by the daemon type: "string" example: "1.40" MinAPIVersion: description: | The minimum API version that is supported by the daemon type: "string" example: "1.12" GitCommit: description: | The Git commit of the source code that was used to build the daemon type: "string" example: "48a66213fe" GoVersion: description: | The version Go used to compile the daemon, and the version of the Go runtime in use. type: "string" example: "go1.13.14" Os: description: | The operating system that the daemon is running on ("linux" or "windows") type: "string" example: "linux" Arch: description: | The architecture that the daemon is running on type: "string" example: "amd64" KernelVersion: description: | The kernel version (`uname -r`) that the daemon is running on. This field is omitted when empty. type: "string" example: "4.19.76-linuxkit" Experimental: description: | Indicates if the daemon is started with experimental features enabled. This field is omitted when empty / false. type: "boolean" example: true BuildTime: description: | The date and time that the daemon was compiled. type: "string" example: "2020-06-22T15:49:27.000000000+00:00" SystemInfo: type: "object" properties: ID: description: | Unique identifier of the daemon. <p><br /></p> > **Note**: The format of the ID itself is not part of the API, and > should not be considered stable. type: "string" example: "7TRN:IPZB:QYBB:VPBQ:UMPP:KARE:6ZNR:XE6T:7EWV:PKF4:ZOJD:TPYS" Containers: description: "Total number of containers on the host." type: "integer" example: 14 ContainersRunning: description: | Number of containers with status `"running"`. type: "integer" example: 3 ContainersPaused: description: | Number of containers with status `"paused"`. type: "integer" example: 1 ContainersStopped: description: | Number of containers with status `"stopped"`. type: "integer" example: 10 Images: description: | Total number of images on the host. Both _tagged_ and _untagged_ (dangling) images are counted. type: "integer" example: 508 Driver: description: "Name of the storage driver in use." type: "string" example: "overlay2" DriverStatus: description: | Information specific to the storage driver, provided as "label" / "value" pairs. 
This information is provided by the storage driver, and formatted in a way consistent with the output of `docker info` on the command line. <p><br /></p> > **Note**: The information returned in this field, including the > formatting of values and labels, should not be considered stable, > and may change without notice. type: "array" items: type: "array" items: type: "string" example: - ["Backing Filesystem", "extfs"] - ["Supports d_type", "true"] - ["Native Overlay Diff", "true"] DockerRootDir: description: | Root directory of persistent Docker state. Defaults to `/var/lib/docker` on Linux, and `C:\ProgramData\docker` on Windows. type: "string" example: "/var/lib/docker" Plugins: $ref: "#/definitions/PluginsInfo" MemoryLimit: description: "Indicates if the host has memory limit support enabled." type: "boolean" example: true SwapLimit: description: "Indicates if the host has memory swap limit support enabled." type: "boolean" example: true KernelMemory: description: | Indicates if the host has kernel memory limit support enabled. <p><br /></p> > **Deprecated**: This field is deprecated as the kernel 5.4 deprecated > `kmem.limit_in_bytes`. type: "boolean" example: true CpuCfsPeriod: description: | Indicates if CPU CFS(Completely Fair Scheduler) period is supported by the host. type: "boolean" example: true CpuCfsQuota: description: | Indicates if CPU CFS(Completely Fair Scheduler) quota is supported by the host. type: "boolean" example: true CPUShares: description: | Indicates if CPU Shares limiting is supported by the host. type: "boolean" example: true CPUSet: description: | Indicates if CPUsets (cpuset.cpus, cpuset.mems) are supported by the host. See [cpuset(7)](https://www.kernel.org/doc/Documentation/cgroup-v1/cpusets.txt) type: "boolean" example: true PidsLimit: description: "Indicates if the host kernel has PID limit support enabled." type: "boolean" example: true OomKillDisable: description: "Indicates if OOM killer disable is supported on the host." type: "boolean" IPv4Forwarding: description: "Indicates IPv4 forwarding is enabled." type: "boolean" example: true BridgeNfIptables: description: "Indicates if `bridge-nf-call-iptables` is available on the host." type: "boolean" example: true BridgeNfIp6tables: description: "Indicates if `bridge-nf-call-ip6tables` is available on the host." type: "boolean" example: true Debug: description: | Indicates if the daemon is running in debug-mode / with debug-level logging enabled. type: "boolean" example: true NFd: description: | The total number of file Descriptors in use by the daemon process. This information is only returned if debug-mode is enabled. type: "integer" example: 64 NGoroutines: description: | The number of goroutines that currently exist. This information is only returned if debug-mode is enabled. type: "integer" example: 174 SystemTime: description: | Current system-time in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" example: "2017-08-08T20:28:29.06202363Z" LoggingDriver: description: | The logging driver to use as a default for new containers. type: "string" CgroupDriver: description: | The driver to use for managing cgroups. type: "string" enum: ["cgroupfs", "systemd", "none"] default: "cgroupfs" example: "cgroupfs" CgroupVersion: description: | The version of the cgroup. type: "string" enum: ["1", "2"] default: "1" example: "1" NEventsListener: description: "Number of event listeners subscribed." 
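# An illustrative fragment of a SystemInfo object, built from the field
# examples above; real values depend entirely on the host and the storage
# driver in use.
#
#   "Driver": "overlay2",
#   "DriverStatus": [
#     ["Backing Filesystem", "extfs"],
#     ["Supports d_type", "true"]
#   ],
#   "CgroupDriver": "cgroupfs",
#   "CgroupVersion": "1"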
type: "integer" example: 30 KernelVersion: description: | Kernel version of the host. On Linux, this information obtained from `uname`. On Windows this information is queried from the <kbd>HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\</kbd> registry value, for example _"10.0 14393 (14393.1198.amd64fre.rs1_release_sec.170427-1353)"_. type: "string" example: "4.9.38-moby" OperatingSystem: description: | Name of the host's operating system, for example: "Ubuntu 16.04.2 LTS" or "Windows Server 2016 Datacenter" type: "string" example: "Alpine Linux v3.5" OSVersion: description: | Version of the host's operating system <p><br /></p> > **Note**: The information returned in this field, including its > very existence, and the formatting of values, should not be considered > stable, and may change without notice. type: "string" example: "16.04" OSType: description: | Generic type of the operating system of the host, as returned by the Go runtime (`GOOS`). Currently returned values are "linux" and "windows". A full list of possible values can be found in the [Go documentation](https://golang.org/doc/install/source#environment). type: "string" example: "linux" Architecture: description: | Hardware architecture of the host, as returned by the Go runtime (`GOARCH`). A full list of possible values can be found in the [Go documentation](https://golang.org/doc/install/source#environment). type: "string" example: "x86_64" NCPU: description: | The number of logical CPUs usable by the daemon. The number of available CPUs is checked by querying the operating system when the daemon starts. Changes to operating system CPU allocation after the daemon is started are not reflected. type: "integer" example: 4 MemTotal: description: | Total amount of physical memory available on the host, in bytes. type: "integer" format: "int64" example: 2095882240 IndexServerAddress: description: | Address / URL of the index server that is used for image search, and as a default for user authentication for Docker Hub and Docker Cloud. default: "https://index.docker.io/v1/" type: "string" example: "https://index.docker.io/v1/" RegistryConfig: $ref: "#/definitions/RegistryServiceConfig" GenericResources: $ref: "#/definitions/GenericResources" HttpProxy: description: | HTTP-proxy configured for the daemon. This value is obtained from the [`HTTP_PROXY`](https://www.gnu.org/software/wget/manual/html_node/Proxies.html) environment variable. Credentials ([user info component](https://tools.ietf.org/html/rfc3986#section-3.2.1)) in the proxy URL are masked in the API response. Containers do not automatically inherit this configuration. type: "string" example: "http://xxxxx:[email protected]:8080" HttpsProxy: description: | HTTPS-proxy configured for the daemon. This value is obtained from the [`HTTPS_PROXY`](https://www.gnu.org/software/wget/manual/html_node/Proxies.html) environment variable. Credentials ([user info component](https://tools.ietf.org/html/rfc3986#section-3.2.1)) in the proxy URL are masked in the API response. Containers do not automatically inherit this configuration. type: "string" example: "https://xxxxx:[email protected]:4443" NoProxy: description: | Comma-separated list of domain extensions for which no proxy should be used. This value is obtained from the [`NO_PROXY`](https://www.gnu.org/software/wget/manual/html_node/Proxies.html) environment variable. Containers do not automatically inherit this configuration. 
type: "string" example: "*.local, 169.254/16" Name: description: "Hostname of the host." type: "string" example: "node5.corp.example.com" Labels: description: | User-defined labels (key/value metadata) as set on the daemon. <p><br /></p> > **Note**: When part of a Swarm, nodes can both have _daemon_ labels, > set through the daemon configuration, and _node_ labels, set from a > manager node in the Swarm. Node labels are not included in this > field. Node labels can be retrieved using the `/nodes/(id)` endpoint > on a manager node in the Swarm. type: "array" items: type: "string" example: ["storage=ssd", "production"] ExperimentalBuild: description: | Indicates if experimental features are enabled on the daemon. type: "boolean" example: true ServerVersion: description: | Version string of the daemon. > **Note**: the [standalone Swarm API](https://docs.docker.com/swarm/swarm-api/) > returns the Swarm version instead of the daemon version, for example > `swarm/1.2.8`. type: "string" example: "17.06.0-ce" ClusterStore: description: | URL of the distributed storage backend. The storage backend is used for multihost networking (to store network and endpoint information) and by the node discovery mechanism. <p><br /></p> > **Deprecated**: This field is only propagated when using standalone Swarm > mode, and overlay networking using an external k/v store. Overlay > networks with Swarm mode enabled use the built-in raft store, and > this field will be empty. type: "string" example: "consul://consul.corp.example.com:8600/some/path" ClusterAdvertise: description: | The network endpoint that the Engine advertises for the purpose of node discovery. ClusterAdvertise is a `host:port` combination on which the daemon is reachable by other hosts. <p><br /></p> > **Deprecated**: This field is only propagated when using standalone Swarm > mode, and overlay networking using an external k/v store. Overlay > networks with Swarm mode enabled use the built-in raft store, and > this field will be empty. type: "string" example: "node5.corp.example.com:8000" Runtimes: description: | List of [OCI compliant](https://github.com/opencontainers/runtime-spec) runtimes configured on the daemon. Keys hold the "name" used to reference the runtime. The Docker daemon relies on an OCI compliant runtime (invoked via the `containerd` daemon) as its interface to the Linux kernel namespaces, cgroups, and SELinux. The default runtime is `runc`, and automatically configured. Additional runtimes can be configured by the user and will be listed here. type: "object" additionalProperties: $ref: "#/definitions/Runtime" default: runc: path: "runc" example: runc: path: "runc" runc-master: path: "/go/bin/runc" custom: path: "/usr/local/bin/my-oci-runtime" runtimeArgs: ["--debug", "--systemd-cgroup=false"] DefaultRuntime: description: | Name of the default OCI runtime that is used when starting containers. The default can be overridden per-container at create time. type: "string" default: "runc" example: "runc" Swarm: $ref: "#/definitions/SwarmInfo" LiveRestoreEnabled: description: | Indicates if live restore is enabled. If enabled, containers are kept running when the daemon is shutdown or upon daemon start if running containers are detected. type: "boolean" default: false example: false Isolation: description: | Represents the isolation technology to use as a default for containers. The supported values are platform-specific. 
If no isolation value is specified on daemon start, on Windows client, the default is `hyperv`, and on Windows server, the default is `process`. This option is currently not used on other platforms. default: "default" type: "string" enum: - "default" - "hyperv" - "process" InitBinary: description: | Name and, optional, path of the `docker-init` binary. If the path is omitted, the daemon searches the host's `$PATH` for the binary and uses the first result. type: "string" example: "docker-init" ContainerdCommit: $ref: "#/definitions/Commit" RuncCommit: $ref: "#/definitions/Commit" InitCommit: $ref: "#/definitions/Commit" SecurityOptions: description: | List of security features that are enabled on the daemon, such as apparmor, seccomp, SELinux, user-namespaces (userns), and rootless. Additional configuration options for each security feature may be present, and are included as a comma-separated list of key/value pairs. type: "array" items: type: "string" example: - "name=apparmor" - "name=seccomp,profile=default" - "name=selinux" - "name=userns" - "name=rootless" ProductLicense: description: | Reports a summary of the product license on the daemon. If a commercial license has been applied to the daemon, information such as number of nodes, and expiration are included. type: "string" example: "Community Engine" DefaultAddressPools: description: | List of custom default address pools for local networks, which can be specified in the daemon.json file or dockerd option. Example: a Base "10.10.0.0/16" with Size 24 will define the set of 256 10.10.[0-255].0/24 address pools. type: "array" items: type: "object" properties: Base: description: "The network address in CIDR format" type: "string" example: "10.10.0.0/16" Size: description: "The network pool size" type: "integer" example: "24" Warnings: description: | List of warnings / informational messages about missing features, or issues related to the daemon configuration. These messages can be printed by the client as information to the user. type: "array" items: type: "string" example: - "WARNING: No memory limit support" - "WARNING: bridge-nf-call-iptables is disabled" - "WARNING: bridge-nf-call-ip6tables is disabled" # PluginsInfo is a temp struct holding Plugins name # registered with docker daemon. It is used by Info struct PluginsInfo: description: | Available plugins per type. <p><br /></p> > **Note**: Only unmanaged (V1) plugins are included in this list. > V1 plugins are "lazily" loaded, and are not returned in this list > if there is no resource using the plugin. type: "object" properties: Volume: description: "Names of available volume-drivers, and network-driver plugins." type: "array" items: type: "string" example: ["local"] Network: description: "Names of available network-drivers, and network-driver plugins." type: "array" items: type: "string" example: ["bridge", "host", "ipvlan", "macvlan", "null", "overlay"] Authorization: description: "Names of available authorization plugins." type: "array" items: type: "string" example: ["img-authz-plugin", "hbm"] Log: description: "Names of available logging-drivers, and logging-driver plugins." type: "array" items: type: "string" example: ["awslogs", "fluentd", "gcplogs", "gelf", "journald", "json-file", "logentries", "splunk", "syslog"] RegistryServiceConfig: description: | RegistryServiceConfig stores daemon registry services configuration. 
type: "object" x-nullable: true properties: AllowNondistributableArtifactsCIDRs: description: | List of IP ranges to which nondistributable artifacts can be pushed, using the CIDR syntax [RFC 4632](https://tools.ietf.org/html/4632). Some images (for example, Windows base images) contain artifacts whose distribution is restricted by license. When these images are pushed to a registry, restricted artifacts are not included. This configuration override this behavior, and enables the daemon to push nondistributable artifacts to all registries whose resolved IP address is within the subnet described by the CIDR syntax. This option is useful when pushing images containing nondistributable artifacts to a registry on an air-gapped network so hosts on that network can pull the images without connecting to another server. > **Warning**: Nondistributable artifacts typically have restrictions > on how and where they can be distributed and shared. Only use this > feature to push artifacts to private registries and ensure that you > are in compliance with any terms that cover redistributing > nondistributable artifacts. type: "array" items: type: "string" example: ["::1/128", "127.0.0.0/8"] AllowNondistributableArtifactsHostnames: description: | List of registry hostnames to which nondistributable artifacts can be pushed, using the format `<hostname>[:<port>]` or `<IP address>[:<port>]`. Some images (for example, Windows base images) contain artifacts whose distribution is restricted by license. When these images are pushed to a registry, restricted artifacts are not included. This configuration override this behavior for the specified registries. This option is useful when pushing images containing nondistributable artifacts to a registry on an air-gapped network so hosts on that network can pull the images without connecting to another server. > **Warning**: Nondistributable artifacts typically have restrictions > on how and where they can be distributed and shared. Only use this > feature to push artifacts to private registries and ensure that you > are in compliance with any terms that cover redistributing > nondistributable artifacts. type: "array" items: type: "string" example: ["registry.internal.corp.example.com:3000", "[2001:db8:a0b:12f0::1]:443"] InsecureRegistryCIDRs: description: | List of IP ranges of insecure registries, using the CIDR syntax ([RFC 4632](https://tools.ietf.org/html/4632)). Insecure registries accept un-encrypted (HTTP) and/or untrusted (HTTPS with certificates from unknown CAs) communication. By default, local registries (`127.0.0.0/8`) are configured as insecure. All other registries are secure. Communicating with an insecure registry is not possible if the daemon assumes that registry is secure. This configuration override this behavior, insecure communication with registries whose resolved IP address is within the subnet described by the CIDR syntax. Registries can also be marked insecure by hostname. Those registries are listed under `IndexConfigs` and have their `Secure` field set to `false`. > **Warning**: Using this option can be useful when running a local > registry, but introduces security vulnerabilities. This option > should therefore ONLY be used for testing purposes. For increased > security, users should add their CA to their system's list of trusted > CAs instead of enabling this option. 
type: "array" items: type: "string" example: ["::1/128", "127.0.0.0/8"] IndexConfigs: type: "object" additionalProperties: $ref: "#/definitions/IndexInfo" example: "127.0.0.1:5000": "Name": "127.0.0.1:5000" "Mirrors": [] "Secure": false "Official": false "[2001:db8:a0b:12f0::1]:80": "Name": "[2001:db8:a0b:12f0::1]:80" "Mirrors": [] "Secure": false "Official": false "docker.io": Name: "docker.io" Mirrors: ["https://hub-mirror.corp.example.com:5000/"] Secure: true Official: true "registry.internal.corp.example.com:3000": Name: "registry.internal.corp.example.com:3000" Mirrors: [] Secure: false Official: false Mirrors: description: | List of registry URLs that act as a mirror for the official (`docker.io`) registry. type: "array" items: type: "string" example: - "https://hub-mirror.corp.example.com:5000/" - "https://[2001:db8:a0b:12f0::1]/" IndexInfo: description: IndexInfo contains information about a registry. type: "object" x-nullable: true properties: Name: description: | Name of the registry, such as "docker.io". type: "string" example: "docker.io" Mirrors: description: | List of mirrors, expressed as URIs. type: "array" items: type: "string" example: - "https://hub-mirror.corp.example.com:5000/" - "https://registry-2.docker.io/" - "https://registry-3.docker.io/" Secure: description: | Indicates if the registry is part of the list of insecure registries. If `false`, the registry is insecure. Insecure registries accept un-encrypted (HTTP) and/or untrusted (HTTPS with certificates from unknown CAs) communication. > **Warning**: Insecure registries can be useful when running a local > registry. However, because its use creates security vulnerabilities > it should ONLY be enabled for testing purposes. For increased > security, users should add their CA to their system's list of > trusted CAs instead of enabling this option. type: "boolean" example: true Official: description: | Indicates whether this is an official registry (i.e., Docker Hub / docker.io) type: "boolean" example: true Runtime: description: | Runtime describes an [OCI compliant](https://github.com/opencontainers/runtime-spec) runtime. The runtime is invoked by the daemon via the `containerd` daemon. OCI runtimes act as an interface to the Linux kernel namespaces, cgroups, and SELinux. type: "object" properties: path: description: | Name and, optional, path, of the OCI executable binary. If the path is omitted, the daemon searches the host's `$PATH` for the binary and uses the first result. type: "string" example: "/usr/local/bin/my-oci-runtime" runtimeArgs: description: | List of command-line arguments to pass to the runtime when invoked. type: "array" x-nullable: true items: type: "string" example: ["--debug", "--systemd-cgroup=false"] Commit: description: | Commit holds the Git-commit (SHA1) that a binary was built from, as reported in the version-string of external tools, such as `containerd`, or `runC`. type: "object" properties: ID: description: "Actual commit ID of external tool." type: "string" example: "cfb82a876ecc11b5ca0977d1733adbe58599088a" Expected: description: | Commit ID of external tool expected by dockerd as set at build time. type: "string" example: "2d41c047c83e09a6d61d464906feb2a2f3c52aa4" SwarmInfo: description: | Represents generic information about swarm. type: "object" properties: NodeID: description: "Unique identifier of for this node in the swarm." 
type: "string" default: "" example: "k67qz4598weg5unwwffg6z1m1" NodeAddr: description: | IP address at which this node can be reached by other nodes in the swarm. type: "string" default: "" example: "10.0.0.46" LocalNodeState: $ref: "#/definitions/LocalNodeState" ControlAvailable: type: "boolean" default: false example: true Error: type: "string" default: "" RemoteManagers: description: | List of ID's and addresses of other managers in the swarm. type: "array" default: null x-nullable: true items: $ref: "#/definitions/PeerNode" example: - NodeID: "71izy0goik036k48jg985xnds" Addr: "10.0.0.158:2377" - NodeID: "79y6h1o4gv8n120drcprv5nmc" Addr: "10.0.0.159:2377" - NodeID: "k67qz4598weg5unwwffg6z1m1" Addr: "10.0.0.46:2377" Nodes: description: "Total number of nodes in the swarm." type: "integer" x-nullable: true example: 4 Managers: description: "Total number of managers in the swarm." type: "integer" x-nullable: true example: 3 Cluster: $ref: "#/definitions/ClusterInfo" LocalNodeState: description: "Current local status of this node." type: "string" default: "" enum: - "" - "inactive" - "pending" - "active" - "error" - "locked" example: "active" PeerNode: description: "Represents a peer-node in the swarm" properties: NodeID: description: "Unique identifier of for this node in the swarm." type: "string" Addr: description: | IP address and ports at which this node can be reached. type: "string" NetworkAttachmentConfig: description: | Specifies how a service should be attached to a particular network. type: "object" properties: Target: description: | The target network for attachment. Must be a network name or ID. type: "string" Aliases: description: | Discoverable alternate names for the service on this network. type: "array" items: type: "string" DriverOpts: description: | Driver attachment options for the network target. type: "object" additionalProperties: type: "string" paths: /containers/json: get: summary: "List containers" description: | Returns a list of containers. For details on the format, see the [inspect endpoint](#operation/ContainerInspect). Note that it uses a different, smaller representation of a container than inspecting a single container. For example, the list of linked containers is not propagated . operationId: "ContainerList" produces: - "application/json" parameters: - name: "all" in: "query" description: | Return all containers. By default, only running containers are shown. type: "boolean" default: false - name: "limit" in: "query" description: | Return this number of most recently created containers, including non-running ones. type: "integer" - name: "size" in: "query" description: | Return the size of container as fields `SizeRw` and `SizeRootFs`. type: "boolean" default: false - name: "filters" in: "query" description: | Filters to process on the container list, encoded as JSON (a `map[string][]string`). For example, `{"status": ["paused"]}` will only return paused containers. 
Available filters: - `ancestor`=(`<image-name>[:<tag>]`, `<image id>`, or `<image@digest>`) - `before`=(`<container id>` or `<container name>`) - `expose`=(`<port>[/<proto>]`|`<startport-endport>/[<proto>]`) - `exited=<int>` containers with exit code of `<int>` - `health`=(`starting`|`healthy`|`unhealthy`|`none`) - `id=<ID>` a container's ID - `isolation=`(`default`|`process`|`hyperv`) (Windows daemon only) - `is-task=`(`true`|`false`) - `label=key` or `label="key=value"` of a container label - `name=<name>` a container's name - `network`=(`<network id>` or `<network name>`) - `publish`=(`<port>[/<proto>]`|`<startport-endport>/[<proto>]`) - `since`=(`<container id>` or `<container name>`) - `status=`(`created`|`restarting`|`running`|`removing`|`paused`|`exited`|`dead`) - `volume`=(`<volume name>` or `<mount point destination>`) type: "string" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/ContainerSummary" examples: application/json: - Id: "8dfafdbc3a40" Names: - "/boring_feynman" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 1" Created: 1367854155 State: "Exited" Status: "Exit 0" Ports: - PrivatePort: 2222 PublicPort: 3333 Type: "tcp" Labels: com.example.vendor: "Acme" com.example.license: "GPL" com.example.version: "1.0" SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "2cdc4edb1ded3631c81f57966563e5c8525b81121bb3706a9a9a3ae102711f3f" Gateway: "172.17.0.1" IPAddress: "172.17.0.2" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:02" Mounts: - Name: "fac362...80535" Source: "/data" Destination: "/data" Driver: "local" Mode: "ro,Z" RW: false Propagation: "" - Id: "9cd87474be90" Names: - "/coolName" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 222222" Created: 1367854155 State: "Exited" Status: "Exit 0" Ports: [] Labels: {} SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "88eaed7b37b38c2a3f0c4bc796494fdf51b270c2d22656412a2ca5d559a64d7a" Gateway: "172.17.0.1" IPAddress: "172.17.0.8" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:08" Mounts: [] - Id: "3176a2479c92" Names: - "/sleepy_dog" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 3333333333333333" Created: 1367854154 State: "Exited" Status: "Exit 0" Ports: [] Labels: {} SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "8b27c041c30326d59cd6e6f510d4f8d1d570a228466f956edf7815508f78e30d" Gateway: "172.17.0.1" IPAddress: "172.17.0.6" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:06" Mounts: [] - Id: "4cb07b47f9fb" Names: - "/running_cat" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 444444444444444444444444444444444" Created: 1367854152 State: "Exited" Status: "Exit 0" Ports: [] Labels: {} SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" 
NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "d91c7b2f0644403d7ef3095985ea0e2370325cd2332ff3a3225c4247328e66e9" Gateway: "172.17.0.1" IPAddress: "172.17.0.5" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:05" Mounts: [] 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Container"] /containers/create: post: summary: "Create a container" operationId: "ContainerCreate" consumes: - "application/json" - "application/octet-stream" produces: - "application/json" parameters: - name: "name" in: "query" description: | Assign the specified name to the container. Must match `/?[a-zA-Z0-9][a-zA-Z0-9_.-]+`. type: "string" pattern: "^/?[a-zA-Z0-9][a-zA-Z0-9_.-]+$" - name: "body" in: "body" description: "Container to create" schema: allOf: - $ref: "#/definitions/ContainerConfig" - type: "object" properties: HostConfig: $ref: "#/definitions/HostConfig" NetworkingConfig: $ref: "#/definitions/NetworkingConfig" example: Hostname: "" Domainname: "" User: "" AttachStdin: false AttachStdout: true AttachStderr: true Tty: false OpenStdin: false StdinOnce: false Env: - "FOO=bar" - "BAZ=quux" Cmd: - "date" Entrypoint: "" Image: "ubuntu" Labels: com.example.vendor: "Acme" com.example.license: "GPL" com.example.version: "1.0" Volumes: /volumes/data: {} WorkingDir: "" NetworkDisabled: false MacAddress: "12:34:56:78:9a:bc" ExposedPorts: 22/tcp: {} StopSignal: "SIGTERM" StopTimeout: 10 HostConfig: Binds: - "/tmp:/tmp" Links: - "redis3:redis" Memory: 0 MemorySwap: 0 MemoryReservation: 0 KernelMemory: 0 NanoCpus: 500000 CpuPercent: 80 CpuShares: 512 CpuPeriod: 100000 CpuRealtimePeriod: 1000000 CpuRealtimeRuntime: 10000 CpuQuota: 50000 CpusetCpus: "0,1" CpusetMems: "0,1" MaximumIOps: 0 MaximumIOBps: 0 BlkioWeight: 300 BlkioWeightDevice: - {} BlkioDeviceReadBps: - {} BlkioDeviceReadIOps: - {} BlkioDeviceWriteBps: - {} BlkioDeviceWriteIOps: - {} DeviceRequests: - Driver: "nvidia" Count: -1 DeviceIDs": ["0", "1", "GPU-fef8089b-4820-abfc-e83e-94318197576e"] Capabilities: [["gpu", "nvidia", "compute"]] Options: property1: "string" property2: "string" MemorySwappiness: 60 OomKillDisable: false OomScoreAdj: 500 PidMode: "" PidsLimit: 0 PortBindings: 22/tcp: - HostPort: "11022" PublishAllPorts: false Privileged: false ReadonlyRootfs: false Dns: - "8.8.8.8" DnsOptions: - "" DnsSearch: - "" VolumesFrom: - "parent" - "other:ro" CapAdd: - "NET_ADMIN" CapDrop: - "MKNOD" GroupAdd: - "newgroup" RestartPolicy: Name: "" MaximumRetryCount: 0 AutoRemove: true NetworkMode: "bridge" Devices: [] Ulimits: - {} LogConfig: Type: "json-file" Config: {} SecurityOpt: [] StorageOpt: {} CgroupParent: "" VolumeDriver: "" ShmSize: 67108864 NetworkingConfig: EndpointsConfig: isolated_nw: IPAMConfig: IPv4Address: "172.20.30.33" IPv6Address: "2001:db8:abcd::3033" LinkLocalIPs: - "169.254.34.68" - "fe80::3468" Links: - "container_1" - "container_2" Aliases: - "server_x" - "server_y" required: true responses: 201: description: "Container created successfully" schema: type: "object" title: "ContainerCreateResponse" description: "OK response to ContainerCreate operation" required: [Id, Warnings] properties: Id: description: "The ID of the created container" type: "string" x-nullable: false Warnings: description: "Warnings encountered when creating the container" type: "array" x-nullable: false items: type: "string" 
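# A minimal, illustrative invocation of this endpoint over the default local
# UNIX socket; the API version prefix (v1.41) is an assumption and should be
# replaced with the negotiated version.
#
#   curl --unix-socket /var/run/docker.sock \
#     -H "Content-Type: application/json" \
#     -d '{"Image": "ubuntu", "Cmd": ["date"]}' \
#     "http://localhost/v1.41/containers/create?name=example"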
examples: application/json: Id: "e90e34656806" Warnings: [] 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such image" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such image: c2ada9df5af8" 409: description: "conflict" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Container"] /containers/{id}/json: get: summary: "Inspect a container" description: "Return low-level information about a container." operationId: "ContainerInspect" produces: - "application/json" responses: 200: description: "no error" schema: type: "object" title: "ContainerInspectResponse" properties: Id: description: "The ID of the container" type: "string" Created: description: "The time the container was created" type: "string" Path: description: "The path to the command being run" type: "string" Args: description: "The arguments to the command being run" type: "array" items: type: "string" State: x-nullable: true $ref: "#/definitions/ContainerState" Image: description: "The container's image ID" type: "string" ResolvConfPath: type: "string" HostnamePath: type: "string" HostsPath: type: "string" LogPath: type: "string" Name: type: "string" RestartCount: type: "integer" Driver: type: "string" Platform: type: "string" MountLabel: type: "string" ProcessLabel: type: "string" AppArmorProfile: type: "string" ExecIDs: description: "IDs of exec instances that are running in the container." type: "array" items: type: "string" x-nullable: true HostConfig: $ref: "#/definitions/HostConfig" GraphDriver: $ref: "#/definitions/GraphDriverData" SizeRw: description: | The size of files that have been created or changed by this container. type: "integer" format: "int64" SizeRootFs: description: "The total size of all the files in this container." 
type: "integer" format: "int64" Mounts: type: "array" items: $ref: "#/definitions/MountPoint" Config: $ref: "#/definitions/ContainerConfig" NetworkSettings: $ref: "#/definitions/NetworkSettings" examples: application/json: AppArmorProfile: "" Args: - "-c" - "exit 9" Config: AttachStderr: true AttachStdin: false AttachStdout: true Cmd: - "/bin/sh" - "-c" - "exit 9" Domainname: "" Env: - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" Healthcheck: Test: ["CMD-SHELL", "exit 0"] Hostname: "ba033ac44011" Image: "ubuntu" Labels: com.example.vendor: "Acme" com.example.license: "GPL" com.example.version: "1.0" MacAddress: "" NetworkDisabled: false OpenStdin: false StdinOnce: false Tty: false User: "" Volumes: /volumes/data: {} WorkingDir: "" StopSignal: "SIGTERM" StopTimeout: 10 Created: "2015-01-06T15:47:31.485331387Z" Driver: "devicemapper" ExecIDs: - "b35395de42bc8abd327f9dd65d913b9ba28c74d2f0734eeeae84fa1c616a0fca" - "3fc1232e5cd20c8de182ed81178503dc6437f4e7ef12b52cc5e8de020652f1c4" HostConfig: MaximumIOps: 0 MaximumIOBps: 0 BlkioWeight: 0 BlkioWeightDevice: - {} BlkioDeviceReadBps: - {} BlkioDeviceWriteBps: - {} BlkioDeviceReadIOps: - {} BlkioDeviceWriteIOps: - {} ContainerIDFile: "" CpusetCpus: "" CpusetMems: "" CpuPercent: 80 CpuShares: 0 CpuPeriod: 100000 CpuRealtimePeriod: 1000000 CpuRealtimeRuntime: 10000 Devices: [] DeviceRequests: - Driver: "nvidia" Count: -1 DeviceIDs": ["0", "1", "GPU-fef8089b-4820-abfc-e83e-94318197576e"] Capabilities: [["gpu", "nvidia", "compute"]] Options: property1: "string" property2: "string" IpcMode: "" LxcConf: [] Memory: 0 MemorySwap: 0 MemoryReservation: 0 KernelMemory: 0 OomKillDisable: false OomScoreAdj: 500 NetworkMode: "bridge" PidMode: "" PortBindings: {} Privileged: false ReadonlyRootfs: false PublishAllPorts: false RestartPolicy: MaximumRetryCount: 2 Name: "on-failure" LogConfig: Type: "json-file" Sysctls: net.ipv4.ip_forward: "1" Ulimits: - {} VolumeDriver: "" ShmSize: 67108864 HostnamePath: "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/hostname" HostsPath: "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/hosts" LogPath: "/var/lib/docker/containers/1eb5fabf5a03807136561b3c00adcd2992b535d624d5e18b6cdc6a6844d9767b/1eb5fabf5a03807136561b3c00adcd2992b535d624d5e18b6cdc6a6844d9767b-json.log" Id: "ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39" Image: "04c5d3b7b0656168630d3ba35d8889bd0e9caafcaeb3004d2bfbc47e7c5d35d2" MountLabel: "" Name: "/boring_euclid" NetworkSettings: Bridge: "" SandboxID: "" HairpinMode: false LinkLocalIPv6Address: "" LinkLocalIPv6PrefixLen: 0 SandboxKey: "" EndpointID: "" Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 IPAddress: "" IPPrefixLen: 0 IPv6Gateway: "" MacAddress: "" Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "7587b82f0dada3656fda26588aee72630c6fab1536d36e394b2bfbcf898c971d" Gateway: "172.17.0.1" IPAddress: "172.17.0.2" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:12:00:02" Path: "/bin/sh" ProcessLabel: "" ResolvConfPath: "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/resolv.conf" RestartCount: 1 State: Error: "" ExitCode: 9 FinishedAt: "2015-01-06T15:47:32.080254511Z" Health: Status: "healthy" FailingStreak: 0 Log: - Start: "2019-12-22T10:59:05.6385933Z" End: "2019-12-22T10:59:05.8078452Z" ExitCode: 0 Output: "" OOMKilled: 
false Dead: false Paused: false Pid: 0 Restarting: false Running: true StartedAt: "2015-01-06T15:47:32.072697474Z" Status: "running" Mounts: - Name: "fac362...80535" Source: "/data" Destination: "/data" Driver: "local" Mode: "ro,Z" RW: false Propagation: "" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "size" in: "query" type: "boolean" default: false description: "Return the size of container as fields `SizeRw` and `SizeRootFs`" tags: ["Container"] /containers/{id}/top: get: summary: "List processes running inside a container" description: | On Unix systems, this is done by running the `ps` command. This endpoint is not supported on Windows. operationId: "ContainerTop" responses: 200: description: "no error" schema: type: "object" title: "ContainerTopResponse" description: "OK response to ContainerTop operation" properties: Titles: description: "The ps column titles" type: "array" items: type: "string" Processes: description: | Each process running in the container, where each is process is an array of values corresponding to the titles. type: "array" items: type: "array" items: type: "string" examples: application/json: Titles: - "UID" - "PID" - "PPID" - "C" - "STIME" - "TTY" - "TIME" - "CMD" Processes: - - "root" - "13642" - "882" - "0" - "17:03" - "pts/0" - "00:00:00" - "/bin/bash" - - "root" - "13735" - "13642" - "0" - "17:06" - "pts/0" - "00:00:00" - "sleep 10" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "ps_args" in: "query" description: "The arguments to pass to `ps`. For example, `aux`" type: "string" default: "-ef" tags: ["Container"] /containers/{id}/logs: get: summary: "Get container logs" description: | Get `stdout` and `stderr` logs from a container. Note: This endpoint works only for containers with the `json-file` or `journald` logging driver. operationId: "ContainerLogs" responses: 200: description: | logs returned as a stream in response body. For the stream format, [see the documentation for the attach endpoint](#operation/ContainerAttach). Note that unlike the attach endpoint, the logs endpoint does not upgrade the connection and does not set Content-Type. schema: type: "string" format: "binary" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "follow" in: "query" description: "Keep connection after returning logs." 
type: "boolean" default: false - name: "stdout" in: "query" description: "Return logs from `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Return logs from `stderr`" type: "boolean" default: false - name: "since" in: "query" description: "Only return logs since this time, as a UNIX timestamp" type: "integer" default: 0 - name: "until" in: "query" description: "Only return logs before this time, as a UNIX timestamp" type: "integer" default: 0 - name: "timestamps" in: "query" description: "Add timestamps to every log line" type: "boolean" default: false - name: "tail" in: "query" description: | Only return this number of log lines from the end of the logs. Specify as an integer or `all` to output all log lines. type: "string" default: "all" tags: ["Container"] /containers/{id}/changes: get: summary: "Get changes on a container’s filesystem" description: | Returns which files in a container's filesystem have been added, deleted, or modified. The `Kind` of modification can be one of: - `0`: Modified - `1`: Added - `2`: Deleted operationId: "ContainerChanges" produces: ["application/json"] responses: 200: description: "The list of changes" schema: type: "array" items: type: "object" x-go-name: "ContainerChangeResponseItem" title: "ContainerChangeResponseItem" description: "change item in response to ContainerChanges operation" required: [Path, Kind] properties: Path: description: "Path to file that has changed" type: "string" x-nullable: false Kind: description: "Kind of change" type: "integer" format: "uint8" enum: [0, 1, 2] x-nullable: false examples: application/json: - Path: "/dev" Kind: 0 - Path: "/dev/kmsg" Kind: 1 - Path: "/test" Kind: 1 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/export: get: summary: "Export a container" description: "Export the contents of a container as a tarball." operationId: "ContainerExport" produces: - "application/octet-stream" responses: 200: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/stats: get: summary: "Get container stats based on resource usage" description: | This endpoint returns a live stream of a container’s resource usage statistics. The `precpu_stats` is the CPU statistic of the *previous* read, and is used to calculate the CPU usage percentage. It is not an exact copy of the `cpu_stats` field. If either `precpu_stats.online_cpus` or `cpu_stats.online_cpus` is nil then for compatibility with older daemons the length of the corresponding `cpu_usage.percpu_usage` array should be used. On a cgroup v2 host, the following fields are not set * `blkio_stats`: all fields other than `io_service_bytes_recursive` * `cpu_stats`: `cpu_usage.percpu_usage` * `memory_stats`: `max_usage` and `failcnt` Also, `memory_stats.stats` fields are incompatible with cgroup v1. 
To calculate the values shown by the `stats` command of the docker cli tool, the following formulas can be used: * used_memory = `memory_stats.usage - memory_stats.stats.cache` * available_memory = `memory_stats.limit` * Memory usage % = `(used_memory / available_memory) * 100.0` * cpu_delta = `cpu_stats.cpu_usage.total_usage - precpu_stats.cpu_usage.total_usage` * system_cpu_delta = `cpu_stats.system_cpu_usage - precpu_stats.system_cpu_usage` * number_cpus = `length(cpu_stats.cpu_usage.percpu_usage)` or `cpu_stats.online_cpus` * CPU usage % = `(cpu_delta / system_cpu_delta) * number_cpus * 100.0` operationId: "ContainerStats" produces: ["application/json"] responses: 200: description: "no error" schema: type: "object" examples: application/json: read: "2015-01-08T22:57:31.547920715Z" pids_stats: current: 3 networks: eth0: rx_bytes: 5338 rx_dropped: 0 rx_errors: 0 rx_packets: 36 tx_bytes: 648 tx_dropped: 0 tx_errors: 0 tx_packets: 8 eth5: rx_bytes: 4641 rx_dropped: 0 rx_errors: 0 rx_packets: 26 tx_bytes: 690 tx_dropped: 0 tx_errors: 0 tx_packets: 9 memory_stats: stats: total_pgmajfault: 0 cache: 0 mapped_file: 0 total_inactive_file: 0 pgpgout: 414 rss: 6537216 total_mapped_file: 0 writeback: 0 unevictable: 0 pgpgin: 477 total_unevictable: 0 pgmajfault: 0 total_rss: 6537216 total_rss_huge: 6291456 total_writeback: 0 total_inactive_anon: 0 rss_huge: 6291456 hierarchical_memory_limit: 67108864 total_pgfault: 964 total_active_file: 0 active_anon: 6537216 total_active_anon: 6537216 total_pgpgout: 414 total_cache: 0 inactive_anon: 0 active_file: 0 pgfault: 964 inactive_file: 0 total_pgpgin: 477 max_usage: 6651904 usage: 6537216 failcnt: 0 limit: 67108864 blkio_stats: {} cpu_stats: cpu_usage: percpu_usage: - 8646879 - 24472255 - 36438778 - 30657443 usage_in_usermode: 50000000 total_usage: 100215355 usage_in_kernelmode: 30000000 system_cpu_usage: 739306590000000 online_cpus: 4 throttling_data: periods: 0 throttled_periods: 0 throttled_time: 0 precpu_stats: cpu_usage: percpu_usage: - 8646879 - 24350896 - 36438778 - 30657443 usage_in_usermode: 50000000 total_usage: 100093996 usage_in_kernelmode: 30000000 system_cpu_usage: 9492140000000 online_cpus: 4 throttling_data: periods: 0 throttled_periods: 0 throttled_time: 0 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "stream" in: "query" description: | Stream the output. If false, the stats will be output once and then it will disconnect. type: "boolean" default: true - name: "one-shot" in: "query" description: | Only get a single stat instead of waiting for 2 cycles. Must be used with `stream=false`. type: "boolean" default: false tags: ["Container"] /containers/{id}/resize: post: summary: "Resize a container TTY" description: "Resize the TTY for a container."
operationId: "ContainerResize" consumes: - "application/octet-stream" produces: - "text/plain" responses: 200: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "cannot resize container" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "h" in: "query" description: "Height of the TTY session in characters" type: "integer" - name: "w" in: "query" description: "Width of the TTY session in characters" type: "integer" tags: ["Container"] /containers/{id}/start: post: summary: "Start a container" operationId: "ContainerStart" responses: 204: description: "no error" 304: description: "container already started" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "detachKeys" in: "query" description: | Override the key sequence for detaching a container. Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,` or `_`. type: "string" tags: ["Container"] /containers/{id}/stop: post: summary: "Stop a container" operationId: "ContainerStop" responses: 204: description: "no error" 304: description: "container already stopped" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "t" in: "query" description: "Number of seconds to wait before killing the container" type: "integer" tags: ["Container"] /containers/{id}/restart: post: summary: "Restart a container" operationId: "ContainerRestart" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "t" in: "query" description: "Number of seconds to wait before killing the container" type: "integer" tags: ["Container"] /containers/{id}/kill: post: summary: "Kill a container" description: | Send a POSIX signal to a container, defaulting to killing to the container. 
operationId: "ContainerKill" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "container is not running" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "Container d37cde0fe4ad63c3a7252023b2f9800282894247d145cb5933ddf6e52cc03a28 is not running" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "signal" in: "query" description: "Signal to send to the container as an integer or string (e.g. `SIGINT`)" type: "string" default: "SIGKILL" tags: ["Container"] /containers/{id}/update: post: summary: "Update a container" description: | Change various configuration options of a container without having to recreate it. operationId: "ContainerUpdate" consumes: ["application/json"] produces: ["application/json"] responses: 200: description: "The container has been updated." schema: type: "object" title: "ContainerUpdateResponse" description: "OK response to ContainerUpdate operation" properties: Warnings: type: "array" items: type: "string" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "update" in: "body" required: true schema: allOf: - $ref: "#/definitions/Resources" - type: "object" properties: RestartPolicy: $ref: "#/definitions/RestartPolicy" example: BlkioWeight: 300 CpuShares: 512 CpuPeriod: 100000 CpuQuota: 50000 CpuRealtimePeriod: 1000000 CpuRealtimeRuntime: 10000 CpusetCpus: "0,1" CpusetMems: "0" Memory: 314572800 MemorySwap: 514288000 MemoryReservation: 209715200 KernelMemory: 52428800 RestartPolicy: MaximumRetryCount: 4 Name: "on-failure" tags: ["Container"] /containers/{id}/rename: post: summary: "Rename a container" operationId: "ContainerRename" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "name already in use" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "name" in: "query" required: true description: "New name for the container" type: "string" tags: ["Container"] /containers/{id}/pause: post: summary: "Pause a container" description: | Use the freezer cgroup to suspend all processes in a container. Traditionally, when suspending a process the `SIGSTOP` signal is used, which is observable by the process being suspended. With the freezer cgroup the process is unaware, and unable to capture, that it is being suspended, and subsequently resumed. 
operationId: "ContainerPause" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/unpause: post: summary: "Unpause a container" description: "Resume a container which has been paused." operationId: "ContainerUnpause" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/attach: post: summary: "Attach to a container" description: | Attach to a container to read its output or send it input. You can attach to the same container multiple times and you can reattach to containers that have been detached. Either the `stream` or `logs` parameter must be `true` for this endpoint to do anything. See the [documentation for the `docker attach` command](https://docs.docker.com/engine/reference/commandline/attach/) for more details. ### Hijacking This endpoint hijacks the HTTP connection to transport `stdin`, `stdout`, and `stderr` on the same socket. This is the response from the daemon for an attach request: ``` HTTP/1.1 200 OK Content-Type: application/vnd.docker.raw-stream [STREAM] ``` After the headers and two new lines, the TCP connection can now be used for raw, bidirectional communication between the client and server. To hint potential proxies about connection hijacking, the Docker client can also optionally send connection upgrade headers. For example, the client sends this request to upgrade the connection: ``` POST /containers/16253994b7c4/attach?stream=1&stdout=1 HTTP/1.1 Upgrade: tcp Connection: Upgrade ``` The Docker daemon will respond with a `101 UPGRADED` response, and will similarly follow with the raw stream: ``` HTTP/1.1 101 UPGRADED Content-Type: application/vnd.docker.raw-stream Connection: Upgrade Upgrade: tcp [STREAM] ``` ### Stream format When the TTY setting is disabled in [`POST /containers/create`](#operation/ContainerCreate), the stream over the hijacked connected is multiplexed to separate out `stdout` and `stderr`. The stream consists of a series of frames, each containing a header and a payload. The header contains the information which the stream writes (`stdout` or `stderr`). It also contains the size of the associated frame encoded in the last four bytes (`uint32`). It is encoded on the first eight bytes like this: ```go header := [8]byte{STREAM_TYPE, 0, 0, 0, SIZE1, SIZE2, SIZE3, SIZE4} ``` `STREAM_TYPE` can be: - 0: `stdin` (is written on `stdout`) - 1: `stdout` - 2: `stderr` `SIZE1, SIZE2, SIZE3, SIZE4` are the four bytes of the `uint32` size encoded as big endian. Following the header is the payload, which is the specified number of bytes of `STREAM_TYPE`. The simplest way to implement this protocol is the following: 1. Read 8 bytes. 2. Choose `stdout` or `stderr` depending on the first byte. 3. Extract the frame size from the last four bytes. 4. Read the extracted size and output it on the correct output. 5. Goto 1. 
### Stream format when using a TTY When the TTY setting is enabled in [`POST /containers/create`](#operation/ContainerCreate), the stream is not multiplexed. The data exchanged over the hijacked connection is simply the raw data from the process PTY and client's `stdin`. operationId: "ContainerAttach" produces: - "application/vnd.docker.raw-stream" responses: 101: description: "no error, hints proxy about hijacking" 200: description: "no error, no upgrade header found" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "detachKeys" in: "query" description: | Override the key sequence for detaching a container.Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,` or `_`. type: "string" - name: "logs" in: "query" description: | Replay previous logs from the container. This is useful for attaching to a container that has started and you want to output everything since the container started. If `stream` is also enabled, once all the previous output has been returned, it will seamlessly transition into streaming current output. type: "boolean" default: false - name: "stream" in: "query" description: | Stream attached streams from the time the request was made onwards. type: "boolean" default: false - name: "stdin" in: "query" description: "Attach to `stdin`" type: "boolean" default: false - name: "stdout" in: "query" description: "Attach to `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Attach to `stderr`" type: "boolean" default: false tags: ["Container"] /containers/{id}/attach/ws: get: summary: "Attach to a container via a websocket" operationId: "ContainerAttachWebsocket" responses: 101: description: "no error, hints proxy about hijacking" 200: description: "no error, no upgrade header found" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "detachKeys" in: "query" description: | Override the key sequence for detaching a container.Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,`, or `_`. type: "string" - name: "logs" in: "query" description: "Return logs" type: "boolean" default: false - name: "stream" in: "query" description: "Return stream" type: "boolean" default: false - name: "stdin" in: "query" description: "Attach to `stdin`" type: "boolean" default: false - name: "stdout" in: "query" description: "Attach to `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Attach to `stderr`" type: "boolean" default: false tags: ["Container"] /containers/{id}/wait: post: summary: "Wait for a container" description: "Block until a container stops, then returns the exit code." 
operationId: "ContainerWait" produces: ["application/json"] responses: 200: description: "The container has exit." schema: type: "object" title: "ContainerWaitResponse" description: "OK response to ContainerWait operation" required: [StatusCode] properties: StatusCode: description: "Exit code of the container" type: "integer" x-nullable: false Error: description: "container waiting error, if any" type: "object" properties: Message: description: "Details of an error" type: "string" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "condition" in: "query" description: | Wait until a container state reaches the given condition, either 'not-running' (default), 'next-exit', or 'removed'. type: "string" default: "not-running" tags: ["Container"] /containers/{id}: delete: summary: "Remove a container" operationId: "ContainerDelete" responses: 204: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "conflict" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: | You cannot remove a running container: c2ada9df5af8. Stop the container before attempting removal or force remove 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "v" in: "query" description: "Remove anonymous volumes associated with the container." type: "boolean" default: false - name: "force" in: "query" description: "If the container is running, kill it before removing it." type: "boolean" default: false - name: "link" in: "query" description: "Remove the specified link associated with the container." type: "boolean" default: false tags: ["Container"] /containers/{id}/archive: head: summary: "Get information about files in a container" description: | A response header `X-Docker-Container-Path-Stat` is returned, containing a base64 - encoded JSON object with some filesystem header information about the path. operationId: "ContainerArchiveInfo" responses: 200: description: "no error" headers: X-Docker-Container-Path-Stat: type: "string" description: | A base64 - encoded JSON object with some filesystem header information about the path 400: description: "Bad parameter" schema: allOf: - $ref: "#/definitions/ErrorResponse" - type: "object" properties: message: description: | The error message. Either "must specify path parameter" (path cannot be empty) or "not a directory" (path was asserted to be a directory but exists as a file). type: "string" x-nullable: false 404: description: "Container or path does not exist" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "path" in: "query" required: true description: "Resource in the container’s filesystem to archive." 
type: "string" tags: ["Container"] get: summary: "Get an archive of a filesystem resource in a container" description: "Get a tar archive of a resource in the filesystem of container id." operationId: "ContainerArchive" produces: ["application/x-tar"] responses: 200: description: "no error" 400: description: "Bad parameter" schema: allOf: - $ref: "#/definitions/ErrorResponse" - type: "object" properties: message: description: | The error message. Either "must specify path parameter" (path cannot be empty) or "not a directory" (path was asserted to be a directory but exists as a file). type: "string" x-nullable: false 404: description: "Container or path does not exist" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "path" in: "query" required: true description: "Resource in the container’s filesystem to archive." type: "string" tags: ["Container"] put: summary: "Extract an archive of files or folders to a directory in a container" description: "Upload a tar archive to be extracted to a path in the filesystem of container id." operationId: "PutContainerArchive" consumes: ["application/x-tar", "application/octet-stream"] responses: 200: description: "The content was extracted successfully" 400: description: "Bad parameter" schema: $ref: "#/definitions/ErrorResponse" 403: description: "Permission denied, the volume or container rootfs is marked as read-only." schema: $ref: "#/definitions/ErrorResponse" 404: description: "No such container or path does not exist inside the container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "path" in: "query" required: true description: "Path to a directory in the container to extract the archive’s contents into. " type: "string" - name: "noOverwriteDirNonDir" in: "query" description: | If `1`, `true`, or `True` then it will be an error if unpacking the given content would cause an existing directory to be replaced with a non-directory and vice versa. type: "string" - name: "copyUIDGID" in: "query" description: | If `1`, `true`, then it will copy UID/GID maps to the dest file or dir type: "string" - name: "inputStream" in: "body" required: true description: | The input stream must be a tar archive compressed with one of the following algorithms: `identity` (no compression), `gzip`, `bzip2`, or `xz`. schema: type: "string" format: "binary" tags: ["Container"] /containers/prune: post: summary: "Delete stopped containers" produces: - "application/json" operationId: "ContainerPrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `until=<timestamp>` Prune containers created before this timestamp. The `<timestamp>` can be Unix timestamps, date formatted timestamps, or Go duration strings (e.g. `10m`, `1h30m`) computed relative to the daemon machine’s time. - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune containers with (or without, in case `label!=...` is used) the specified labels. 
type: "string" responses: 200: description: "No error" schema: type: "object" title: "ContainerPruneResponse" properties: ContainersDeleted: description: "Container IDs that were deleted" type: "array" items: type: "string" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Container"] /images/json: get: summary: "List Images" description: "Returns a list of images on the server. Note that it uses a different, smaller representation of an image than inspecting a single image." operationId: "ImageList" produces: - "application/json" responses: 200: description: "Summary image data for the images matching the query" schema: type: "array" items: $ref: "#/definitions/ImageSummary" examples: application/json: - Id: "sha256:e216a057b1cb1efc11f8a268f37ef62083e70b1b38323ba252e25ac88904a7e8" ParentId: "" RepoTags: - "ubuntu:12.04" - "ubuntu:precise" RepoDigests: - "ubuntu@sha256:992069aee4016783df6345315302fa59681aae51a8eeb2f889dea59290f21787" Created: 1474925151 Size: 103579269 VirtualSize: 103579269 SharedSize: 0 Labels: {} Containers: 2 - Id: "sha256:3e314f95dcace0f5e4fd37b10862fe8398e3c60ed36600bc0ca5fda78b087175" ParentId: "" RepoTags: - "ubuntu:12.10" - "ubuntu:quantal" RepoDigests: - "ubuntu@sha256:002fba3e3255af10be97ea26e476692a7ebed0bb074a9ab960b2e7a1526b15d7" - "ubuntu@sha256:68ea0200f0b90df725d99d823905b04cf844f6039ef60c60bf3e019915017bd3" Created: 1403128455 Size: 172064416 VirtualSize: 172064416 SharedSize: 0 Labels: {} Containers: 5 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "all" in: "query" description: "Show all images. Only images from a final layer (no children) are shown by default." type: "boolean" default: false - name: "filters" in: "query" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the images list. Available filters: - `before`=(`<image-name>[:<tag>]`, `<image id>` or `<image@digest>`) - `dangling=true` - `label=key` or `label="key=value"` of an image label - `reference`=(`<image-name>[:<tag>]`) - `since`=(`<image-name>[:<tag>]`, `<image id>` or `<image@digest>`) type: "string" - name: "shared-size" in: "query" description: "Compute and show shared size as a `SharedSize` field on each image." type: "boolean" default: false - name: "digests" in: "query" description: "Show digest information as a `RepoDigests` field on each image." type: "boolean" default: false tags: ["Image"] /build: post: summary: "Build an image" description: | Build an image from a tar archive with a `Dockerfile` in it. The `Dockerfile` specifies how the image is built from the tar archive. It is typically in the archive's root, but can be at a different path or have a different name by specifying the `dockerfile` parameter. [See the `Dockerfile` reference for more information](https://docs.docker.com/engine/reference/builder/). The Docker daemon performs a preliminary validation of the `Dockerfile` before starting the build, and returns an error if the syntax is incorrect. After that, each instruction is run one-by-one until the ID of the new image is output. The build is canceled if the client drops the connection by quitting or being killed. 
operationId: "ImageBuild" consumes: - "application/octet-stream" produces: - "application/json" parameters: - name: "inputStream" in: "body" description: "A tar archive compressed with one of the following algorithms: identity (no compression), gzip, bzip2, xz." schema: type: "string" format: "binary" - name: "dockerfile" in: "query" description: "Path within the build context to the `Dockerfile`. This is ignored if `remote` is specified and points to an external `Dockerfile`." type: "string" default: "Dockerfile" - name: "t" in: "query" description: "A name and optional tag to apply to the image in the `name:tag` format. If you omit the tag the default `latest` value is assumed. You can provide several `t` parameters." type: "string" - name: "extrahosts" in: "query" description: "Extra hosts to add to /etc/hosts" type: "string" - name: "remote" in: "query" description: "A Git repository URI or HTTP/HTTPS context URI. If the URI points to a single text file, the file’s contents are placed into a file called `Dockerfile` and the image is built from that file. If the URI points to a tarball, the file is downloaded by the daemon and the contents therein used as the context for the build. If the URI points to a tarball and the `dockerfile` parameter is also specified, there must be a file with the corresponding path inside the tarball." type: "string" - name: "q" in: "query" description: "Suppress verbose build output." type: "boolean" default: false - name: "nocache" in: "query" description: "Do not use the cache when building the image." type: "boolean" default: false - name: "cachefrom" in: "query" description: "JSON array of images used for build cache resolution." type: "string" - name: "pull" in: "query" description: "Attempt to pull the image even if an older image exists locally." type: "string" - name: "rm" in: "query" description: "Remove intermediate containers after a successful build." type: "boolean" default: true - name: "forcerm" in: "query" description: "Always remove intermediate containers, even upon failure." type: "boolean" default: false - name: "memory" in: "query" description: "Set memory limit for build." type: "integer" - name: "memswap" in: "query" description: "Total memory (memory + swap). Set as `-1` to disable swap." type: "integer" - name: "cpushares" in: "query" description: "CPU shares (relative weight)." type: "integer" - name: "cpusetcpus" in: "query" description: "CPUs in which to allow execution (e.g., `0-3`, `0,1`)." type: "string" - name: "cpuperiod" in: "query" description: "The length of a CPU period in microseconds." type: "integer" - name: "cpuquota" in: "query" description: "Microseconds of CPU time that the container can get in a CPU period." type: "integer" - name: "buildargs" in: "query" description: > JSON map of string pairs for build-time variables. Users pass these values at build-time. Docker uses the buildargs as the environment context for commands run via the `Dockerfile` RUN instruction, or for variable expansion in other `Dockerfile` instructions. This is not meant for passing secret values. For example, the build arg `FOO=bar` would become `{"FOO":"bar"}` in JSON. This would result in the query parameter `buildargs={"FOO":"bar"}`. Note that `{"FOO":"bar"}` should be URI component encoded. [Read more about the buildargs instruction.](https://docs.docker.com/engine/reference/builder/#arg) type: "string" - name: "shmsize" in: "query" description: "Size of `/dev/shm` in bytes. The size must be greater than 0. 
If omitted the system uses 64MB." type: "integer" - name: "squash" in: "query" description: "Squash the resulting images layers into a single layer. *(Experimental release only.)*" type: "boolean" - name: "labels" in: "query" description: "Arbitrary key/value labels to set on the image, as a JSON map of string pairs." type: "string" - name: "networkmode" in: "query" description: | Sets the networking mode for the run commands during build. Supported standard values are: `bridge`, `host`, `none`, and `container:<name|id>`. Any other value is taken as a custom network's name or ID to which this container should connect to. type: "string" - name: "Content-type" in: "header" type: "string" enum: - "application/x-tar" default: "application/x-tar" - name: "X-Registry-Config" in: "header" description: | This is a base64-encoded JSON object with auth configurations for multiple registries that a build may refer to. The key is a registry URL, and the value is an auth configuration object, [as described in the authentication section](#section/Authentication). For example: ``` { "docker.example.com": { "username": "janedoe", "password": "hunter2" }, "https://index.docker.io/v1/": { "username": "mobydock", "password": "conta1n3rize14" } } ``` Only the registry domain name (and port if not the default 443) are required. However, for legacy reasons, the Docker Hub registry must be specified with both a `https://` prefix and a `/v1/` suffix even though Docker will prefer to use the v2 registry API. type: "string" - name: "platform" in: "query" description: "Platform in the format os[/arch[/variant]]" type: "string" default: "" - name: "target" in: "query" description: "Target build stage" type: "string" default: "" - name: "outputs" in: "query" description: "BuildKit output configuration" type: "string" default: "" responses: 200: description: "no error" 400: description: "Bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Image"] /build/prune: post: summary: "Delete builder cache" produces: - "application/json" operationId: "BuildPrune" parameters: - name: "keep-storage" in: "query" description: "Amount of disk space in bytes to keep for cache" type: "integer" format: "int64" - name: "all" in: "query" type: "boolean" description: "Remove all types of build cache" - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the list of build cache objects. Available filters: - `until=<duration>`: duration relative to daemon's time, during which build cache was not used, in Go's duration format (e.g., '24h') - `id=<id>` - `parent=<id>` - `type=<string>` - `description=<string>` - `inuse` - `shared` - `private` responses: 200: description: "No error" schema: type: "object" title: "BuildPruneResponse" properties: CachesDeleted: type: "array" items: description: "ID of build cache object" type: "string" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Image"] /images/create: post: summary: "Create an image" description: "Create an image by either pulling it from a registry or importing it." 
operationId: "ImageCreate" consumes: - "text/plain" - "application/octet-stream" produces: - "application/json" responses: 200: description: "no error" 404: description: "repository does not exist or no read access" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "fromImage" in: "query" description: "Name of the image to pull. The name may include a tag or digest. This parameter may only be used when pulling an image. The pull is cancelled if the HTTP connection is closed." type: "string" - name: "fromSrc" in: "query" description: "Source to import. The value may be a URL from which the image can be retrieved or `-` to read the image from the request body. This parameter may only be used when importing an image." type: "string" - name: "repo" in: "query" description: "Repository name given to an image when it is imported. The repo may include a tag. This parameter may only be used when importing an image." type: "string" - name: "tag" in: "query" description: "Tag or digest. If empty when pulling an image, this causes all tags for the given image to be pulled." type: "string" - name: "message" in: "query" description: "Set commit message for imported image." type: "string" - name: "inputImage" in: "body" description: "Image content if the value `-` has been specified in fromSrc query parameter" schema: type: "string" required: false - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration. Refer to the [authentication section](#section/Authentication) for details. type: "string" - name: "changes" in: "query" description: | Apply `Dockerfile` instructions to the image that is created, for example: `changes=ENV DEBUG=true`. Note that `ENV DEBUG=true` should be URI component encoded. Supported `Dockerfile` instructions: `CMD`|`ENTRYPOINT`|`ENV`|`EXPOSE`|`ONBUILD`|`USER`|`VOLUME`|`WORKDIR` type: "array" items: type: "string" - name: "platform" in: "query" description: "Platform in the format os[/arch[/variant]]" type: "string" default: "" tags: ["Image"] /images/{name}/json: get: summary: "Inspect an image" description: "Return low-level information about an image." 
operationId: "ImageInspect" produces: - "application/json" responses: 200: description: "No error" schema: $ref: "#/definitions/Image" examples: application/json: Id: "sha256:85f05633ddc1c50679be2b16a0479ab6f7637f8884e0cfe0f4d20e1ebb3d6e7c" Container: "cb91e48a60d01f1e27028b4fc6819f4f290b3cf12496c8176ec714d0d390984a" Comment: "" Os: "linux" Architecture: "amd64" Parent: "sha256:91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c" ContainerConfig: Tty: false Hostname: "e611e15f9c9d" Domainname: "" AttachStdout: false PublishService: "" AttachStdin: false OpenStdin: false StdinOnce: false NetworkDisabled: false OnBuild: [] Image: "91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c" User: "" WorkingDir: "" MacAddress: "" AttachStderr: false Labels: com.example.license: "GPL" com.example.version: "1.0" com.example.vendor: "Acme" Env: - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" Cmd: - "/bin/sh" - "-c" - "#(nop) LABEL com.example.vendor=Acme com.example.license=GPL com.example.version=1.0" DockerVersion: "1.9.0-dev" VirtualSize: 188359297 Size: 0 Author: "" Created: "2015-09-10T08:30:53.26995814Z" GraphDriver: Name: "aufs" Data: {} RepoDigests: - "localhost:5000/test/busybox/example@sha256:cbbf2f9a99b47fc460d422812b6a5adff7dfee951d8fa2e4a98caa0382cfbdbf" RepoTags: - "example:1.0" - "example:latest" - "example:stable" Config: Image: "91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c" NetworkDisabled: false OnBuild: [] StdinOnce: false PublishService: "" AttachStdin: false OpenStdin: false Domainname: "" AttachStdout: false Tty: false Hostname: "e611e15f9c9d" Cmd: - "/bin/bash" Env: - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" Labels: com.example.vendor: "Acme" com.example.version: "1.0" com.example.license: "GPL" MacAddress: "" AttachStderr: false WorkingDir: "" User: "" RootFS: Type: "layers" Layers: - "sha256:1834950e52ce4d5a88a1bbd131c537f4d0e56d10ff0dd69e66be3b7dfa9df7e6" - "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such image: someimage (tag: latest)" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or id" type: "string" required: true tags: ["Image"] /images/{name}/history: get: summary: "Get the history of an image" description: "Return parent layers of an image." 
operationId: "ImageHistory" produces: ["application/json"] responses: 200: description: "List of image layers" schema: type: "array" items: type: "object" x-go-name: HistoryResponseItem title: "HistoryResponseItem" description: "individual image layer information in response to ImageHistory operation" required: [Id, Created, CreatedBy, Tags, Size, Comment] properties: Id: type: "string" x-nullable: false Created: type: "integer" format: "int64" x-nullable: false CreatedBy: type: "string" x-nullable: false Tags: type: "array" items: type: "string" Size: type: "integer" format: "int64" x-nullable: false Comment: type: "string" x-nullable: false examples: application/json: - Id: "3db9c44f45209632d6050b35958829c3a2aa256d81b9a7be45b362ff85c54710" Created: 1398108230 CreatedBy: "/bin/sh -c #(nop) ADD file:eb15dbd63394e063b805a3c32ca7bf0266ef64676d5a6fab4801f2e81e2a5148 in /" Tags: - "ubuntu:lucid" - "ubuntu:10.04" Size: 182964289 Comment: "" - Id: "6cfa4d1f33fb861d4d114f43b25abd0ac737509268065cdfd69d544a59c85ab8" Created: 1398108222 CreatedBy: "/bin/sh -c #(nop) MAINTAINER Tianon Gravi <[email protected]> - mkimage-debootstrap.sh -i iproute,iputils-ping,ubuntu-minimal -t lucid.tar.xz lucid http://archive.ubuntu.com/ubuntu/" Tags: [] Size: 0 Comment: "" - Id: "511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158" Created: 1371157430 CreatedBy: "" Tags: - "scratch12:latest" - "scratch:latest" Size: 0 Comment: "Imported from -" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID" type: "string" required: true tags: ["Image"] /images/{name}/push: post: summary: "Push an image" description: | Push an image to a registry. If you wish to push an image on to a private registry, that image must already have a tag which references the registry. For example, `registry.example.com/myimage:latest`. The push is cancelled if the HTTP connection is closed. operationId: "ImagePush" consumes: - "application/octet-stream" responses: 200: description: "No error" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID." type: "string" required: true - name: "tag" in: "query" description: "The tag to associate with the image on the registry." type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration. Refer to the [authentication section](#section/Authentication) for details. type: "string" required: true tags: ["Image"] /images/{name}/tag: post: summary: "Tag an image" description: "Tag an image so that it becomes part of a repository." operationId: "ImageTag" responses: 201: description: "No error" 400: description: "Bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Conflict" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID to tag." type: "string" required: true - name: "repo" in: "query" description: "The repository to tag in. For example, `someuser/someimage`." type: "string" - name: "tag" in: "query" description: "The name of the new tag." 
type: "string" tags: ["Image"] /images/{name}: delete: summary: "Remove an image" description: | Remove an image, along with any untagged parent images that were referenced by that image. Images can't be removed if they have descendant images, are being used by a running container or are being used by a build. operationId: "ImageDelete" produces: ["application/json"] responses: 200: description: "The image was deleted successfully" schema: type: "array" items: $ref: "#/definitions/ImageDeleteResponseItem" examples: application/json: - Untagged: "3e2f21a89f" - Deleted: "3e2f21a89f" - Deleted: "53b4f83ac9" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Conflict" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID" type: "string" required: true - name: "force" in: "query" description: "Remove the image even if it is being used by stopped containers or has other tags" type: "boolean" default: false - name: "noprune" in: "query" description: "Do not delete untagged parent images" type: "boolean" default: false tags: ["Image"] /images/search: get: summary: "Search images" description: "Search for an image on Docker Hub." operationId: "ImageSearch" produces: - "application/json" responses: 200: description: "No error" schema: type: "array" items: type: "object" title: "ImageSearchResponseItem" properties: description: type: "string" is_official: type: "boolean" is_automated: type: "boolean" name: type: "string" star_count: type: "integer" examples: application/json: - description: "" is_official: false is_automated: false name: "wma55/u1210sshd" star_count: 0 - description: "" is_official: false is_automated: false name: "jdswinbank/sshd" star_count: 0 - description: "" is_official: false is_automated: false name: "vgauthier/sshd" star_count: 0 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "term" in: "query" description: "Term to search" type: "string" required: true - name: "limit" in: "query" description: "Maximum number of results to return" type: "integer" - name: "filters" in: "query" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the images list. Available filters: - `is-automated=(true|false)` - `is-official=(true|false)` - `stars=<number>` Matches images that has at least 'number' stars. type: "string" tags: ["Image"] /images/prune: post: summary: "Delete unused images" produces: - "application/json" operationId: "ImagePrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `dangling=<boolean>` When set to `true` (or `1`), prune only unused *and* untagged images. When set to `false` (or `0`), all unused images are pruned. - `until=<string>` Prune images created before this timestamp. The `<timestamp>` can be Unix timestamps, date formatted timestamps, or Go duration strings (e.g. `10m`, `1h30m`) computed relative to the daemon machine’s time. - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune images with (or without, in case `label!=...` is used) the specified labels. 
type: "string" responses: 200: description: "No error" schema: type: "object" title: "ImagePruneResponse" properties: ImagesDeleted: description: "Images that were deleted" type: "array" items: $ref: "#/definitions/ImageDeleteResponseItem" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Image"] /auth: post: summary: "Check auth configuration" description: | Validate credentials for a registry and, if available, get an identity token for accessing the registry without password. operationId: "SystemAuth" consumes: ["application/json"] produces: ["application/json"] responses: 200: description: "An identity token was generated successfully." schema: type: "object" title: "SystemAuthResponse" required: [Status] properties: Status: description: "The status of the authentication" type: "string" x-nullable: false IdentityToken: description: "An opaque token used to authenticate a user after a successful login" type: "string" x-nullable: false examples: application/json: Status: "Login Succeeded" IdentityToken: "9cbaf023786cd7..." 204: description: "No error" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "authConfig" in: "body" description: "Authentication to check" schema: $ref: "#/definitions/AuthConfig" tags: ["System"] /info: get: summary: "Get system information" operationId: "SystemInfo" produces: - "application/json" responses: 200: description: "No error" schema: $ref: "#/definitions/SystemInfo" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /version: get: summary: "Get version" description: "Returns the version of Docker that is running and various information about the system that Docker is running on." operationId: "SystemVersion" produces: ["application/json"] responses: 200: description: "no error" schema: $ref: "#/definitions/SystemVersion" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /_ping: get: summary: "Ping" description: "This is a dummy endpoint you can use to test if the server is accessible." operationId: "SystemPing" produces: ["text/plain"] responses: 200: description: "no error" schema: type: "string" example: "OK" headers: API-Version: type: "string" description: "Max API Version the server supports" Builder-Version: type: "string" description: "Default version of docker image builder" Docker-Experimental: type: "boolean" description: "If the server is running with experimental mode enabled" Cache-Control: type: "string" default: "no-cache, no-store, must-revalidate" Pragma: type: "string" default: "no-cache" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" headers: Cache-Control: type: "string" default: "no-cache, no-store, must-revalidate" Pragma: type: "string" default: "no-cache" tags: ["System"] head: summary: "Ping" description: "This is a dummy endpoint you can use to test if the server is accessible." 
operationId: "SystemPingHead" produces: ["text/plain"] responses: 200: description: "no error" schema: type: "string" example: "(empty)" headers: API-Version: type: "string" description: "Max API Version the server supports" Builder-Version: type: "string" description: "Default version of docker image builder" Docker-Experimental: type: "boolean" description: "If the server is running with experimental mode enabled" Cache-Control: type: "string" default: "no-cache, no-store, must-revalidate" Pragma: type: "string" default: "no-cache" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /commit: post: summary: "Create a new image from a container" operationId: "ImageCommit" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "containerConfig" in: "body" description: "The container configuration" schema: $ref: "#/definitions/ContainerConfig" - name: "container" in: "query" description: "The ID or name of the container to commit" type: "string" - name: "repo" in: "query" description: "Repository name for the created image" type: "string" - name: "tag" in: "query" description: "Tag name for the create image" type: "string" - name: "comment" in: "query" description: "Commit message" type: "string" - name: "author" in: "query" description: "Author of the image (e.g., `John Hannibal Smith <[email protected]>`)" type: "string" - name: "pause" in: "query" description: "Whether to pause the container before committing" type: "boolean" default: true - name: "changes" in: "query" description: "`Dockerfile` instructions to apply while committing" type: "string" tags: ["Image"] /events: get: summary: "Monitor events" description: | Stream real-time events from the server. Various objects within Docker report events when something happens to them. 
Containers report these events: `attach`, `commit`, `copy`, `create`, `destroy`, `detach`, `die`, `exec_create`, `exec_detach`, `exec_start`, `exec_die`, `export`, `health_status`, `kill`, `oom`, `pause`, `rename`, `resize`, `restart`, `start`, `stop`, `top`, `unpause`, `update`, and `prune` Images report these events: `delete`, `import`, `load`, `pull`, `push`, `save`, `tag`, `untag`, and `prune` Volumes report these events: `create`, `mount`, `unmount`, `destroy`, and `prune` Networks report these events: `create`, `connect`, `disconnect`, `destroy`, `update`, `remove`, and `prune` The Docker daemon reports these events: `reload` Services report these events: `create`, `update`, and `remove` Nodes report these events: `create`, `update`, and `remove` Secrets report these events: `create`, `update`, and `remove` Configs report these events: `create`, `update`, and `remove` The Builder reports `prune` events operationId: "SystemEvents" produces: - "application/json" responses: 200: description: "no error" schema: type: "object" title: "SystemEventsResponse" properties: Type: description: "The type of object emitting the event" type: "string" Action: description: "The type of event" type: "string" Actor: type: "object" properties: ID: description: "The ID of the object emitting the event" type: "string" Attributes: description: "Various key/value attributes of the object, depending on its type" type: "object" additionalProperties: type: "string" time: description: "Timestamp of event" type: "integer" timeNano: description: "Timestamp of event, with nanosecond accuracy" type: "integer" format: "int64" examples: application/json: Type: "container" Action: "create" Actor: ID: "ede54ee1afda366ab42f824e8a5ffd195155d853ceaec74a927f249ea270c743" Attributes: com.example.some-label: "some-label-value" image: "alpine" name: "my-container" time: 1461943101 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "since" in: "query" description: "Show events created since this timestamp then stream new events." type: "string" - name: "until" in: "query" description: "Show events created until this timestamp then stop streaming." type: "string" - name: "filters" in: "query" description: | A JSON encoded value of filters (a `map[string][]string`) to process on the event list. 
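Because `/events` keeps the connection open and emits one JSON object per event, a streaming decoder is the natural way to consume it. Below is a hedged Go sketch (default socket and `v1.41` assumed; filter values are illustrative) that follows container `start`/`die` events until the stream ends; the available filter keys are listed right after this block.

```go
package main

import (
	"context"
	"encoding/json"
	"fmt"
	"net"
	"net/http"
	"net/url"
)

func dockerHTTP() *http.Client {
	return &http.Client{Transport: &http.Transport{
		DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
			return (&net.Dialer{}).DialContext(ctx, "unix", "/var/run/docker.sock")
		},
	}}
}

func main() {
	cli := dockerHTTP()

	// filters is a JSON-encoded map[string][]string; here: container start/die only.
	filters := url.QueryEscape(`{"type":["container"],"event":["start","die"]}`)
	resp, err := cli.Get("http://docker/v1.41/events?filters=" + filters)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// The body is an open-ended stream of JSON objects, one per event.
	dec := json.NewDecoder(resp.Body)
	for {
		var ev struct {
			Type   string
			Action string
			Actor  struct {
				ID         string
				Attributes map[string]string
			}
			Time int64 `json:"time"`
		}
		if err := dec.Decode(&ev); err != nil {
			break // io.EOF once the daemon closes the stream
		}
		fmt.Printf("%d %s %s %s (%s)\n",
			ev.Time, ev.Type, ev.Action, ev.Actor.ID, ev.Actor.Attributes["name"])
	}
}
```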
Available filters: - `config=<string>` config name or ID - `container=<string>` container name or ID - `daemon=<string>` daemon name or ID - `event=<string>` event type - `image=<string>` image name or ID - `label=<string>` image or container label - `network=<string>` network name or ID - `node=<string>` node ID - `plugin`=<string> plugin name or ID - `scope`=<string> local or swarm - `secret=<string>` secret name or ID - `service=<string>` service name or ID - `type=<string>` object to filter by, one of `container`, `image`, `volume`, `network`, `daemon`, `plugin`, `node`, `service`, `secret` or `config` - `volume=<string>` volume name type: "string" tags: ["System"] /system/df: get: summary: "Get data usage information" operationId: "SystemDataUsage" responses: 200: description: "no error" schema: type: "object" title: "SystemDataUsageResponse" properties: LayersSize: type: "integer" format: "int64" Images: type: "array" items: $ref: "#/definitions/ImageSummary" Containers: type: "array" items: $ref: "#/definitions/ContainerSummary" Volumes: type: "array" items: $ref: "#/definitions/Volume" BuildCache: type: "array" items: $ref: "#/definitions/BuildCache" example: LayersSize: 1092588 Images: - Id: "sha256:2b8fd9751c4c0f5dd266fcae00707e67a2545ef34f9a29354585f93dac906749" ParentId: "" RepoTags: - "busybox:latest" RepoDigests: - "busybox@sha256:a59906e33509d14c036c8678d687bd4eec81ed7c4b8ce907b888c607f6a1e0e6" Created: 1466724217 Size: 1092588 SharedSize: 0 VirtualSize: 1092588 Labels: {} Containers: 1 Containers: - Id: "e575172ed11dc01bfce087fb27bee502db149e1a0fad7c296ad300bbff178148" Names: - "/top" Image: "busybox" ImageID: "sha256:2b8fd9751c4c0f5dd266fcae00707e67a2545ef34f9a29354585f93dac906749" Command: "top" Created: 1472592424 Ports: [] SizeRootFs: 1092588 Labels: {} State: "exited" Status: "Exited (0) 56 minutes ago" HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: IPAMConfig: null Links: null Aliases: null NetworkID: "d687bc59335f0e5c9ee8193e5612e8aee000c8c62ea170cfb99c098f95899d92" EndpointID: "8ed5115aeaad9abb174f68dcf135b49f11daf597678315231a32ca28441dec6a" Gateway: "172.18.0.1" IPAddress: "172.18.0.2" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:12:00:02" Mounts: [] Volumes: - Name: "my-volume" Driver: "local" Mountpoint: "/var/lib/docker/volumes/my-volume/_data" Labels: null Scope: "local" Options: null UsageData: Size: 10920104 RefCount: 2 BuildCache: - ID: "hw53o5aio51xtltp5xjp8v7fx" Parent: "" Type: "regular" Description: "pulled from docker.io/library/debian@sha256:234cb88d3020898631af0ccbbcca9a66ae7306ecd30c9720690858c1b007d2a0" InUse: false Shared: true Size: 0 CreatedAt: "2021-06-28T13:31:01.474619385Z" LastUsedAt: "2021-07-07T22:02:32.738075951Z" UsageCount: 26 - ID: "ndlpt0hhvkqcdfkputsk4cq9c" Parent: "hw53o5aio51xtltp5xjp8v7fx" Type: "regular" Description: "mount / from exec /bin/sh -c echo 'Binary::apt::APT::Keep-Downloaded-Packages \"true\";' > /etc/apt/apt.conf.d/keep-cache" InUse: false Shared: true Size: 51 CreatedAt: "2021-06-28T13:31:03.002625487Z" LastUsedAt: "2021-07-07T22:02:32.773909517Z" UsageCount: 26 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "type" in: "query" description: | Object types, for which to compute and return data. 
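To make the shape of the `/system/df` response above concrete, here is a Go sketch that fetches it and prints per-category counts. As elsewhere, the socket path and `v1.41` prefix are assumptions; note that the endpoint can be slow on large installations because it computes sizes on demand.

```go
package main

import (
	"context"
	"encoding/json"
	"fmt"
	"net"
	"net/http"
)

func dockerHTTP() *http.Client {
	return &http.Client{Transport: &http.Transport{
		DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
			return (&net.Dialer{}).DialContext(ctx, "unix", "/var/run/docker.sock")
		},
	}}
}

func main() {
	cli := dockerHTTP()

	resp, err := cli.Get("http://docker/v1.41/system/df")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Only the fields printed below are decoded; each entry is otherwise kept raw.
	var du struct {
		LayersSize int64
		Images     []json.RawMessage
		Containers []json.RawMessage
		Volumes    []json.RawMessage
		BuildCache []json.RawMessage
	}
	if err := json.NewDecoder(resp.Body).Decode(&du); err != nil {
		panic(err)
	}
	fmt.Printf("layers: %d bytes, images: %d, containers: %d, volumes: %d, build-cache records: %d\n",
		du.LayersSize, len(du.Images), len(du.Containers), len(du.Volumes), len(du.BuildCache))
}
```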
type: "array" collectionFormat: multi items: type: "string" enum: ["container", "image", "volume", "build-cache"] tags: ["System"] /images/{name}/get: get: summary: "Export an image" description: | Get a tarball containing all images and metadata for a repository. If `name` is a specific name and tag (e.g. `ubuntu:latest`), then only that image (and its parents) are returned. If `name` is an image ID, similarly only that image (and its parents) are returned, but with the exclusion of the `repositories` file in the tarball, as there were no image names referenced. ### Image tarball format An image tarball contains one directory per image layer (named using its long ID), each containing these files: - `VERSION`: currently `1.0` - the file format version - `json`: detailed layer information, similar to `docker inspect layer_id` - `layer.tar`: A tarfile containing the filesystem changes in this layer The `layer.tar` file contains `aufs` style `.wh..wh.aufs` files and directories for storing attribute changes and deletions. If the tarball defines a repository, the tarball should also include a `repositories` file at the root that contains a list of repository and tag names mapped to layer IDs. ```json { "hello-world": { "latest": "565a9d68a73f6706862bfe8409a7f659776d4d60a8d096eb4a3cbce6999cc2a1" } } ``` operationId: "ImageGet" produces: - "application/x-tar" responses: 200: description: "no error" schema: type: "string" format: "binary" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID" type: "string" required: true tags: ["Image"] /images/get: get: summary: "Export several images" description: | Get a tarball containing all images and metadata for several image repositories. For each value of the `names` parameter: if it is a specific name and tag (e.g. `ubuntu:latest`), then only that image (and its parents) are returned; if it is an image ID, similarly only that image (and its parents) are returned and there would be no names referenced in the 'repositories' file for this image ID. For details on the format, see the [export image endpoint](#operation/ImageGet). operationId: "ImageGetAll" produces: - "application/x-tar" responses: 200: description: "no error" schema: type: "string" format: "binary" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "names" in: "query" description: "Image names to filter by" type: "array" items: type: "string" tags: ["Image"] /images/load: post: summary: "Import images" description: | Load a set of images and tags into a repository. For details on the format, see the [export image endpoint](#operation/ImageGet). operationId: "ImageLoad" consumes: - "application/x-tar" produces: - "application/json" responses: 200: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "imagesTarball" in: "body" description: "Tar archive containing images" schema: type: "string" format: "binary" - name: "quiet" in: "query" description: "Suppress progress details during load." type: "boolean" default: false tags: ["Image"] /containers/{id}/exec: post: summary: "Create an exec instance" description: "Run a command inside a running container." 
operationId: "ContainerExec" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "container is paused" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "execConfig" in: "body" description: "Exec configuration" schema: type: "object" title: "ExecConfig" properties: AttachStdin: type: "boolean" description: "Attach to `stdin` of the exec command." AttachStdout: type: "boolean" description: "Attach to `stdout` of the exec command." AttachStderr: type: "boolean" description: "Attach to `stderr` of the exec command." DetachKeys: type: "string" description: | Override the key sequence for detaching a container. Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,` or `_`. Tty: type: "boolean" description: "Allocate a pseudo-TTY." Env: description: | A list of environment variables in the form `["VAR=value", ...]`. type: "array" items: type: "string" Cmd: type: "array" description: "Command to run, as a string or array of strings." items: type: "string" Privileged: type: "boolean" description: "Runs the exec process with extended privileges." default: false User: type: "string" description: | The user, and optionally, group to run the exec process inside the container. Format is one of: `user`, `user:group`, `uid`, or `uid:gid`. WorkingDir: type: "string" description: | The working directory for the exec process inside the container. example: AttachStdin: false AttachStdout: true AttachStderr: true DetachKeys: "ctrl-p,ctrl-q" Tty: false Cmd: - "date" Env: - "FOO=bar" - "BAZ=quux" required: true - name: "id" in: "path" description: "ID or name of container" type: "string" required: true tags: ["Exec"] /exec/{id}/start: post: summary: "Start an exec instance" description: | Starts a previously set up exec instance. If detach is true, this endpoint returns immediately after starting the command. Otherwise, it sets up an interactive session with the command. operationId: "ExecStart" consumes: - "application/json" produces: - "application/vnd.docker.raw-stream" responses: 200: description: "No error" 404: description: "No such exec instance" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Container is stopped or paused" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "execStartConfig" in: "body" schema: type: "object" title: "ExecStartConfig" properties: Detach: type: "boolean" description: "Detach from the command." Tty: type: "boolean" description: "Allocate a pseudo-TTY." example: Detach: false Tty: false - name: "id" in: "path" description: "Exec instance ID" required: true type: "string" tags: ["Exec"] /exec/{id}/resize: post: summary: "Resize an exec instance" description: | Resize the TTY session used by an exec instance. This endpoint only works if `tty` was specified as part of creating and starting the exec instance. 
operationId: "ExecResize" responses: 201: description: "No error" 404: description: "No such exec instance" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Exec instance ID" required: true type: "string" - name: "h" in: "query" description: "Height of the TTY session in characters" type: "integer" - name: "w" in: "query" description: "Width of the TTY session in characters" type: "integer" tags: ["Exec"] /exec/{id}/json: get: summary: "Inspect an exec instance" description: "Return low-level information about an exec instance." operationId: "ExecInspect" produces: - "application/json" responses: 200: description: "No error" schema: type: "object" title: "ExecInspectResponse" properties: CanRemove: type: "boolean" DetachKeys: type: "string" ID: type: "string" Running: type: "boolean" ExitCode: type: "integer" ProcessConfig: $ref: "#/definitions/ProcessConfig" OpenStdin: type: "boolean" OpenStderr: type: "boolean" OpenStdout: type: "boolean" ContainerID: type: "string" Pid: type: "integer" description: "The system process ID for the exec process." examples: application/json: CanRemove: false ContainerID: "b53ee82b53a40c7dca428523e34f741f3abc51d9f297a14ff874bf761b995126" DetachKeys: "" ExitCode: 2 ID: "f33bbfb39f5b142420f4759b2348913bd4a8d1a6d7fd56499cb41a1bb91d7b3b" OpenStderr: true OpenStdin: true OpenStdout: true ProcessConfig: arguments: - "-c" - "exit 2" entrypoint: "sh" privileged: false tty: true user: "1000" Running: false Pid: 42000 404: description: "No such exec instance" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Exec instance ID" required: true type: "string" tags: ["Exec"] /volumes: get: summary: "List volumes" operationId: "VolumeList" produces: ["application/json"] responses: 200: description: "Summary volume data that matches the query" schema: type: "object" title: "VolumeListResponse" description: "Volume list response" required: [Volumes, Warnings] properties: Volumes: type: "array" x-nullable: false description: "List of volumes" items: $ref: "#/definitions/Volume" Warnings: type: "array" x-nullable: false description: | Warnings that occurred when fetching the list of volumes. items: type: "string" examples: application/json: Volumes: - CreatedAt: "2017-07-19T12:00:26Z" Name: "tardis" Driver: "local" Mountpoint: "/var/lib/docker/volumes/tardis" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Scope: "local" Options: device: "tmpfs" o: "size=100m,uid=1000" type: "tmpfs" Warnings: [] 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" description: | JSON encoded value of the filters (a `map[string][]string`) to process on the volumes list. Available filters: - `dangling=<boolean>` When set to `true` (or `1`), returns all volumes that are not in use by a container. When set to `false` (or `0`), only volumes that are in use by one or more containers are returned. - `driver=<volume-driver-name>` Matches volumes based on their driver. - `label=<key>` or `label=<key>:<value>` Matches volumes based on the presence of a `label` alone or a `label` and a value. - `name=<volume-name>` Matches all or part of a volume name. 
type: "string" format: "json" tags: ["Volume"] /volumes/create: post: summary: "Create a volume" operationId: "VolumeCreate" consumes: ["application/json"] produces: ["application/json"] responses: 201: description: "The volume was created successfully" schema: $ref: "#/definitions/Volume" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "volumeConfig" in: "body" required: true description: "Volume configuration" schema: type: "object" description: "Volume configuration" title: "VolumeConfig" properties: Name: description: | The new volume's name. If not specified, Docker generates a name. type: "string" x-nullable: false Driver: description: "Name of the volume driver to use." type: "string" default: "local" x-nullable: false DriverOpts: description: | A mapping of driver options and values. These options are passed directly to the driver and are driver specific. type: "object" additionalProperties: type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" example: Name: "tardis" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Driver: "custom" tags: ["Volume"] /volumes/{name}: get: summary: "Inspect a volume" operationId: "VolumeInspect" produces: ["application/json"] responses: 200: description: "No error" schema: $ref: "#/definitions/Volume" 404: description: "No such volume" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" required: true description: "Volume name or ID" type: "string" tags: ["Volume"] delete: summary: "Remove a volume" description: "Instruct the driver to remove the volume." operationId: "VolumeDelete" responses: 204: description: "The volume was removed" 404: description: "No such volume or volume driver" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Volume is in use and cannot be removed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" required: true description: "Volume name or ID" type: "string" - name: "force" in: "query" description: "Force the removal of the volume" type: "boolean" default: false tags: ["Volume"] /volumes/prune: post: summary: "Delete unused volumes" produces: - "application/json" operationId: "VolumePrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune volumes with (or without, in case `label!=...` is used) the specified labels. type: "string" responses: 200: description: "No error" schema: type: "object" title: "VolumePruneResponse" properties: VolumesDeleted: description: "Volumes that were deleted" type: "array" items: type: "string" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Volume"] /networks: get: summary: "List networks" description: | Returns a list of networks. For details on the format, see the [network inspect endpoint](#operation/NetworkInspect). Note that it uses a different, smaller representation of a network than inspecting a single network. 
For example, the list of containers attached to the network is not propagated in API versions 1.28 and up. operationId: "NetworkList" produces: - "application/json" responses: 200: description: "No error" schema: type: "array" items: $ref: "#/definitions/Network" examples: application/json: - Name: "bridge" Id: "f2de39df4171b0dc801e8002d1d999b77256983dfc63041c0f34030aa3977566" Created: "2016-10-19T06:21:00.416543526Z" Scope: "local" Driver: "bridge" EnableIPv6: false Internal: false Attachable: false Ingress: false IPAM: Driver: "default" Config: - Subnet: "172.17.0.0/16" Options: com.docker.network.bridge.default_bridge: "true" com.docker.network.bridge.enable_icc: "true" com.docker.network.bridge.enable_ip_masquerade: "true" com.docker.network.bridge.host_binding_ipv4: "0.0.0.0" com.docker.network.bridge.name: "docker0" com.docker.network.driver.mtu: "1500" - Name: "none" Id: "e086a3893b05ab69242d3c44e49483a3bbbd3a26b46baa8f61ab797c1088d794" Created: "0001-01-01T00:00:00Z" Scope: "local" Driver: "null" EnableIPv6: false Internal: false Attachable: false Ingress: false IPAM: Driver: "default" Config: [] Containers: {} Options: {} - Name: "host" Id: "13e871235c677f196c4e1ecebb9dc733b9b2d2ab589e30c539efeda84a24215e" Created: "0001-01-01T00:00:00Z" Scope: "local" Driver: "host" EnableIPv6: false Internal: false Attachable: false Ingress: false IPAM: Driver: "default" Config: [] Containers: {} Options: {} 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" description: | JSON encoded value of the filters (a `map[string][]string`) to process on the networks list. Available filters: - `dangling=<boolean>` When set to `true` (or `1`), returns all networks that are not in use by a container. When set to `false` (or `0`), only networks that are in use by one or more containers are returned. - `driver=<driver-name>` Matches a network's driver. - `id=<network-id>` Matches all or part of a network ID. - `label=<key>` or `label=<key>=<value>` of a network label. - `name=<network-name>` Matches all or part of a network name. - `scope=["swarm"|"global"|"local"]` Filters networks by scope (`swarm`, `global`, or `local`). - `type=["custom"|"builtin"]` Filters networks by type. The `custom` keyword returns all user-defined networks. 
type: "string" tags: ["Network"] /networks/{id}: get: summary: "Inspect a network" operationId: "NetworkInspect" produces: - "application/json" responses: 200: description: "No error" schema: $ref: "#/definitions/Network" 404: description: "Network not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" - name: "verbose" in: "query" description: "Detailed inspect output for troubleshooting" type: "boolean" default: false - name: "scope" in: "query" description: "Filter the network by scope (swarm, global, or local)" type: "string" tags: ["Network"] delete: summary: "Remove a network" operationId: "NetworkDelete" responses: 204: description: "No error" 403: description: "operation not supported for pre-defined networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such network" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" tags: ["Network"] /networks/create: post: summary: "Create a network" operationId: "NetworkCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "No error" schema: type: "object" title: "NetworkCreateResponse" properties: Id: description: "The ID of the created network." type: "string" Warning: type: "string" example: Id: "22be93d5babb089c5aab8dbc369042fad48ff791584ca2da2100db837a1c7c30" Warning: "" 403: description: "operation not supported for pre-defined networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "plugin not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "networkConfig" in: "body" description: "Network configuration" required: true schema: type: "object" title: "NetworkCreateRequest" required: ["Name"] properties: Name: description: "The network's name." type: "string" CheckDuplicate: description: | Check for networks with duplicate names. Since Network is primarily keyed based on a random ID and not on the name, and network name is strictly a user-friendly alias to the network which is uniquely identified using ID, there is no guaranteed way to check for duplicates. CheckDuplicate is there to provide a best effort checking of any networks which has the same name but it is not guaranteed to catch all name collisions. type: "boolean" Driver: description: "Name of the network driver plugin to use." type: "string" default: "bridge" Internal: description: "Restrict external access to the network." type: "boolean" Attachable: description: | Globally scoped network is manually attachable by regular containers from workers in swarm mode. type: "boolean" Ingress: description: | Ingress network is the network which provides the routing-mesh in swarm mode. type: "boolean" IPAM: description: "Optional custom IP scheme for the network." $ref: "#/definitions/IPAM" EnableIPv6: description: "Enable IPv6 on the network." type: "boolean" Options: description: "Network specific options to be used by the drivers." type: "object" additionalProperties: type: "string" Labels: description: "User-defined key/value metadata." 
type: "object" additionalProperties: type: "string" example: Name: "isolated_nw" CheckDuplicate: false Driver: "bridge" EnableIPv6: true IPAM: Driver: "default" Config: - Subnet: "172.20.0.0/16" IPRange: "172.20.10.0/24" Gateway: "172.20.10.11" - Subnet: "2001:db8:abcd::/64" Gateway: "2001:db8:abcd::1011" Options: foo: "bar" Internal: true Attachable: false Ingress: false Options: com.docker.network.bridge.default_bridge: "true" com.docker.network.bridge.enable_icc: "true" com.docker.network.bridge.enable_ip_masquerade: "true" com.docker.network.bridge.host_binding_ipv4: "0.0.0.0" com.docker.network.bridge.name: "docker0" com.docker.network.driver.mtu: "1500" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" tags: ["Network"] /networks/{id}/connect: post: summary: "Connect a container to a network" operationId: "NetworkConnect" consumes: - "application/json" responses: 200: description: "No error" 403: description: "Operation not supported for swarm scoped networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "Network or container not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" - name: "container" in: "body" required: true schema: type: "object" title: "NetworkConnectRequest" properties: Container: type: "string" description: "The ID or name of the container to connect to the network." EndpointConfig: $ref: "#/definitions/EndpointSettings" example: Container: "3613f73ba0e4" EndpointConfig: IPAMConfig: IPv4Address: "172.24.56.89" IPv6Address: "2001:db8::5689" tags: ["Network"] /networks/{id}/disconnect: post: summary: "Disconnect a container from a network" operationId: "NetworkDisconnect" consumes: - "application/json" responses: 200: description: "No error" 403: description: "Operation not supported for swarm scoped networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "Network or container not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" - name: "container" in: "body" required: true schema: type: "object" title: "NetworkDisconnectRequest" properties: Container: type: "string" description: | The ID or name of the container to disconnect from the network. Force: type: "boolean" description: | Force the container to disconnect from the network. tags: ["Network"] /networks/prune: post: summary: "Delete unused networks" produces: - "application/json" operationId: "NetworkPrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `until=<timestamp>` Prune networks created before this timestamp. The `<timestamp>` can be Unix timestamps, date formatted timestamps, or Go duration strings (e.g. `10m`, `1h30m`) computed relative to the daemon machine’s time. - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune networks with (or without, in case `label!=...` is used) the specified labels. 
type: "string" responses: 200: description: "No error" schema: type: "object" title: "NetworkPruneResponse" properties: NetworksDeleted: description: "Networks that were deleted" type: "array" items: type: "string" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Network"] /plugins: get: summary: "List plugins" operationId: "PluginList" description: "Returns information about installed plugins." produces: ["application/json"] responses: 200: description: "No error" schema: type: "array" items: $ref: "#/definitions/Plugin" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the plugin list. Available filters: - `capability=<capability name>` - `enable=<true>|<false>` tags: ["Plugin"] /plugins/privileges: get: summary: "Get plugin privileges" operationId: "GetPluginPrivileges" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/PluginPrivilegeItem" example: - Name: "network" Description: "" Value: - "host" - Name: "mount" Description: "" Value: - "/data" - Name: "device" Description: "" Value: - "/dev/cpu_dma_latency" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "remote" in: "query" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" tags: - "Plugin" /plugins/pull: post: summary: "Install a plugin" operationId: "PluginPull" description: | Pulls and installs a plugin. After the plugin is installed, it can be enabled using the [`POST /plugins/{name}/enable` endpoint](#operation/PostPluginsEnable). produces: - "application/json" responses: 204: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "remote" in: "query" description: | Remote reference for plugin to install. The `:latest` tag is optional, and is used as the default if omitted. required: true type: "string" - name: "name" in: "query" description: | Local name for the pulled plugin. The `:latest` tag is optional, and is used as the default if omitted. required: false type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration to use when pulling a plugin from a registry. Refer to the [authentication section](#section/Authentication) for details. type: "string" - name: "body" in: "body" schema: type: "array" items: $ref: "#/definitions/PluginPrivilegeItem" example: - Name: "network" Description: "" Value: - "host" - Name: "mount" Description: "" Value: - "/data" - Name: "device" Description: "" Value: - "/dev/cpu_dma_latency" tags: ["Plugin"] /plugins/{name}/json: get: summary: "Inspect a plugin" operationId: "PluginInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Plugin" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. 
required: true type: "string" tags: ["Plugin"] /plugins/{name}: delete: summary: "Remove a plugin" operationId: "PluginDelete" responses: 200: description: "no error" schema: $ref: "#/definitions/Plugin" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "force" in: "query" description: | Disable the plugin before removing. This may result in issues if the plugin is in use by a container. type: "boolean" default: false tags: ["Plugin"] /plugins/{name}/enable: post: summary: "Enable a plugin" operationId: "PluginEnable" responses: 200: description: "no error" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "timeout" in: "query" description: "Set the HTTP client timeout (in seconds)" type: "integer" default: 0 tags: ["Plugin"] /plugins/{name}/disable: post: summary: "Disable a plugin" operationId: "PluginDisable" responses: 200: description: "no error" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" tags: ["Plugin"] /plugins/{name}/upgrade: post: summary: "Upgrade a plugin" operationId: "PluginUpgrade" responses: 204: description: "no error" 404: description: "plugin not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "remote" in: "query" description: | Remote reference to upgrade to. The `:latest` tag is optional, and is used as the default if omitted. required: true type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration to use when pulling a plugin from a registry. Refer to the [authentication section](#section/Authentication) for details. type: "string" - name: "body" in: "body" schema: type: "array" items: $ref: "#/definitions/PluginPrivilegeItem" example: - Name: "network" Description: "" Value: - "host" - Name: "mount" Description: "" Value: - "/data" - Name: "device" Description: "" Value: - "/dev/cpu_dma_latency" tags: ["Plugin"] /plugins/create: post: summary: "Create a plugin" operationId: "PluginCreate" consumes: - "application/x-tar" responses: 204: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "query" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. 
required: true type: "string" - name: "tarContext" in: "body" description: "Path to tar containing plugin rootfs and manifest" schema: type: "string" format: "binary" tags: ["Plugin"] /plugins/{name}/push: post: summary: "Push a plugin" operationId: "PluginPush" description: | Push a plugin to the registry. parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" responses: 200: description: "no error" 404: description: "plugin not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Plugin"] /plugins/{name}/set: post: summary: "Configure a plugin" operationId: "PluginSet" consumes: - "application/json" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "body" in: "body" schema: type: "array" items: type: "string" example: ["DEBUG=1"] responses: 204: description: "No error" 404: description: "Plugin not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Plugin"] /nodes: get: summary: "List nodes" operationId: "NodeList" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Node" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" description: | Filters to process on the nodes list, encoded as JSON (a `map[string][]string`). Available filters: - `id=<node id>` - `label=<engine label>` - `membership=`(`accepted`|`pending`)` - `name=<node name>` - `node.label=<node label>` - `role=`(`manager`|`worker`)` type: "string" tags: ["Node"] /nodes/{id}: get: summary: "Inspect a node" operationId: "NodeInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Node" 404: description: "no such node" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the node" type: "string" required: true tags: ["Node"] delete: summary: "Delete a node" operationId: "NodeDelete" responses: 200: description: "no error" 404: description: "no such node" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the node" type: "string" required: true - name: "force" in: "query" description: "Force remove a node from the swarm" default: false type: "boolean" tags: ["Node"] /nodes/{id}/update: post: summary: "Update a node" operationId: "NodeUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such node" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID of 
the node" type: "string" required: true - name: "body" in: "body" schema: $ref: "#/definitions/NodeSpec" - name: "version" in: "query" description: | The version number of the node object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true tags: ["Node"] /swarm: get: summary: "Inspect swarm" operationId: "SwarmInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Swarm" 404: description: "no such swarm" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" tags: ["Swarm"] /swarm/init: post: summary: "Initialize a new swarm" operationId: "SwarmInit" produces: - "application/json" - "text/plain" responses: 200: description: "no error" schema: description: "The node ID" type: "string" example: "7v2t30z9blmxuhnyo6s4cpenp" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is already part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: type: "object" title: "SwarmInitRequest" properties: ListenAddr: description: | Listen address used for inter-manager communication, as well as determining the networking interface used for the VXLAN Tunnel Endpoint (VTEP). This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the default swarm listening port is used. type: "string" AdvertiseAddr: description: | Externally reachable address advertised to other nodes. This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the port number from the listen address is used. If `AdvertiseAddr` is not specified, it will be automatically detected when possible. type: "string" DataPathAddr: description: | Address or interface to use for data path traffic (format: `<ip|interface>`), for example, `192.168.1.1`, or an interface, like `eth0`. If `DataPathAddr` is unspecified, the same address as `AdvertiseAddr` is used. The `DataPathAddr` specifies the address that global scope network drivers will publish towards other nodes in order to reach the containers running on this node. Using this parameter it is possible to separate the container data traffic from the management traffic of the cluster. type: "string" DataPathPort: description: | DataPathPort specifies the data path port number for data traffic. Acceptable port range is 1024 to 49151. if no port is set or is set to 0, default port 4789 will be used. type: "integer" format: "uint32" DefaultAddrPool: description: | Default Address Pool specifies default subnet pools for global scope networks. type: "array" items: type: "string" example: ["10.10.0.0/16", "20.20.0.0/16"] ForceNewCluster: description: "Force creation of a new swarm." type: "boolean" SubnetSize: description: | SubnetSize specifies the subnet size of the networks created from the default subnet pool. 
type: "integer" format: "uint32" Spec: $ref: "#/definitions/SwarmSpec" example: ListenAddr: "0.0.0.0:2377" AdvertiseAddr: "192.168.1.1:2377" DataPathPort: 4789 DefaultAddrPool: ["10.10.0.0/8", "20.20.0.0/8"] SubnetSize: 24 ForceNewCluster: false Spec: Orchestration: {} Raft: {} Dispatcher: {} CAConfig: {} EncryptionConfig: AutoLockManagers: false tags: ["Swarm"] /swarm/join: post: summary: "Join an existing swarm" operationId: "SwarmJoin" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is already part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: type: "object" title: "SwarmJoinRequest" properties: ListenAddr: description: | Listen address used for inter-manager communication if the node gets promoted to manager, as well as determining the networking interface used for the VXLAN Tunnel Endpoint (VTEP). type: "string" AdvertiseAddr: description: | Externally reachable address advertised to other nodes. This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the port number from the listen address is used. If `AdvertiseAddr` is not specified, it will be automatically detected when possible. type: "string" DataPathAddr: description: | Address or interface to use for data path traffic (format: `<ip|interface>`), for example, `192.168.1.1`, or an interface, like `eth0`. If `DataPathAddr` is unspecified, the same address as `AdvertiseAddr` is used. The `DataPathAddr` specifies the address that global scope network drivers will publish towards other nodes in order to reach the containers running on this node. Using this parameter it is possible to separate the container data traffic from the management traffic of the cluster. type: "string" RemoteAddrs: description: | Addresses of manager nodes already participating in the swarm. type: "array" items: type: "string" JoinToken: description: "Secret token for joining this swarm." type: "string" example: ListenAddr: "0.0.0.0:2377" AdvertiseAddr: "192.168.1.1:2377" RemoteAddrs: - "node1:2377" JoinToken: "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-7p73s1dx5in4tatdymyhg9hu2" tags: ["Swarm"] /swarm/leave: post: summary: "Leave a swarm" operationId: "SwarmLeave" responses: 200: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "force" description: | Force leave swarm, even if this is the last manager or that it will break the cluster. in: "query" type: "boolean" default: false tags: ["Swarm"] /swarm/update: post: summary: "Update a swarm" operationId: "SwarmUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: $ref: "#/definitions/SwarmSpec" - name: "version" in: "query" description: | The version number of the swarm object being updated. This is required to avoid conflicting writes. 
type: "integer" format: "int64" required: true - name: "rotateWorkerToken" in: "query" description: "Rotate the worker join token." type: "boolean" default: false - name: "rotateManagerToken" in: "query" description: "Rotate the manager join token." type: "boolean" default: false - name: "rotateManagerUnlockKey" in: "query" description: "Rotate the manager unlock key." type: "boolean" default: false tags: ["Swarm"] /swarm/unlockkey: get: summary: "Get the unlock key" operationId: "SwarmUnlockkey" consumes: - "application/json" responses: 200: description: "no error" schema: type: "object" title: "UnlockKeyResponse" properties: UnlockKey: description: "The swarm's unlock key." type: "string" example: UnlockKey: "SWMKEY-1-7c37Cc8654o6p38HnroywCi19pllOnGtbdZEgtKxZu8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" tags: ["Swarm"] /swarm/unlock: post: summary: "Unlock a locked manager" operationId: "SwarmUnlock" consumes: - "application/json" produces: - "application/json" parameters: - name: "body" in: "body" required: true schema: type: "object" title: "SwarmUnlockRequest" properties: UnlockKey: description: "The swarm's unlock key." type: "string" example: UnlockKey: "SWMKEY-1-7c37Cc8654o6p38HnroywCi19pllOnGtbdZEgtKxZu8" responses: 200: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" tags: ["Swarm"] /services: get: summary: "List services" operationId: "ServiceList" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Service" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the services list. Available filters: - `id=<service id>` - `label=<service label>` - `mode=["replicated"|"global"]` - `name=<service name>` - name: "status" in: "query" type: "boolean" description: | Include service status, with count of running and desired tasks. tags: ["Service"] /services/create: post: summary: "Create a service" operationId: "ServiceCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: type: "object" title: "ServiceCreateResponse" properties: ID: description: "The ID of the created service." 
type: "string" Warning: description: "Optional warning message" type: "string" example: ID: "ak7w3gjqoa3kuz8xcpnyy0pvl" Warning: "unable to pin image doesnotexist:latest to digest: image library/doesnotexist:latest not found" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 403: description: "network is not eligible for services" schema: $ref: "#/definitions/ErrorResponse" 409: description: "name conflicts with an existing service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: allOf: - $ref: "#/definitions/ServiceSpec" - type: "object" example: Name: "web" TaskTemplate: ContainerSpec: Image: "nginx:alpine" Mounts: - ReadOnly: true Source: "web-data" Target: "/usr/share/nginx/html" Type: "volume" VolumeOptions: DriverConfig: {} Labels: com.example.something: "something-value" Hosts: ["10.10.10.10 host1", "ABCD:EF01:2345:6789:ABCD:EF01:2345:6789 host2"] User: "33" DNSConfig: Nameservers: ["8.8.8.8"] Search: ["example.org"] Options: ["timeout:3"] Secrets: - File: Name: "www.example.org.key" UID: "33" GID: "33" Mode: 384 SecretID: "fpjqlhnwb19zds35k8wn80lq9" SecretName: "example_org_domain_key" LogDriver: Name: "json-file" Options: max-file: "3" max-size: "10M" Placement: {} Resources: Limits: MemoryBytes: 104857600 Reservations: {} RestartPolicy: Condition: "on-failure" Delay: 10000000000 MaxAttempts: 10 Mode: Replicated: Replicas: 4 UpdateConfig: Parallelism: 2 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 RollbackConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 EndpointSpec: Ports: - Protocol: "tcp" PublishedPort: 8080 TargetPort: 80 Labels: foo: "bar" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration for pulling from private registries. Refer to the [authentication section](#section/Authentication) for details. type: "string" tags: ["Service"] /services/{id}: get: summary: "Inspect a service" operationId: "ServiceInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Service" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID or name of service." required: true type: "string" - name: "insertDefaults" in: "query" description: "Fill empty fields with default values." type: "boolean" default: false tags: ["Service"] delete: summary: "Delete a service" operationId: "ServiceDelete" responses: 200: description: "no error" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID or name of service." 
required: true type: "string" tags: ["Service"] /services/{id}/update: post: summary: "Update a service" operationId: "ServiceUpdate" consumes: ["application/json"] produces: ["application/json"] responses: 200: description: "no error" schema: $ref: "#/definitions/ServiceUpdateResponse" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID or name of service." required: true type: "string" - name: "body" in: "body" required: true schema: allOf: - $ref: "#/definitions/ServiceSpec" - type: "object" example: Name: "top" TaskTemplate: ContainerSpec: Image: "busybox" Args: - "top" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ForceUpdate: 0 Mode: Replicated: Replicas: 1 UpdateConfig: Parallelism: 2 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 RollbackConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 EndpointSpec: Mode: "vip" - name: "version" in: "query" description: | The version number of the service object being updated. This is required to avoid conflicting writes. This version number should be the value as currently set on the service *before* the update. You can find the current version by calling `GET /services/{id}` required: true type: "integer" - name: "registryAuthFrom" in: "query" description: | If the `X-Registry-Auth` header is not specified, this parameter indicates where to find registry authorization credentials. type: "string" enum: ["spec", "previous-spec"] default: "spec" - name: "rollback" in: "query" description: | Set to this parameter to `previous` to cause a server-side rollback to the previous service spec. The supplied spec will be ignored in this case. type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration for pulling from private registries. Refer to the [authentication section](#section/Authentication) for details. type: "string" tags: ["Service"] /services/{id}/logs: get: summary: "Get service logs" description: | Get `stdout` and `stderr` logs from a service. See also [`/containers/{id}/logs`](#operation/ContainerLogs). **Note**: This endpoint works only for services with the `local`, `json-file` or `journald` logging drivers. operationId: "ServiceLogs" responses: 200: description: "logs returned as a stream in response body" schema: type: "string" format: "binary" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such service: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the service" type: "string" - name: "details" in: "query" description: "Show service context and extra details provided to logs." type: "boolean" default: false - name: "follow" in: "query" description: "Keep connection after returning logs." 
type: "boolean" default: false - name: "stdout" in: "query" description: "Return logs from `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Return logs from `stderr`" type: "boolean" default: false - name: "since" in: "query" description: "Only return logs since this time, as a UNIX timestamp" type: "integer" default: 0 - name: "timestamps" in: "query" description: "Add timestamps to every log line" type: "boolean" default: false - name: "tail" in: "query" description: | Only return this number of log lines from the end of the logs. Specify as an integer or `all` to output all log lines. type: "string" default: "all" tags: ["Service"] /tasks: get: summary: "List tasks" operationId: "TaskList" produces: - "application/json" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Task" example: - ID: "0kzzo1i0y4jz6027t0k7aezc7" Version: Index: 71 CreatedAt: "2016-06-07T21:07:31.171892745Z" UpdatedAt: "2016-06-07T21:07:31.376370513Z" Spec: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ServiceID: "9mnpnzenvg8p8tdbtq4wvbkcz" Slot: 1 NodeID: "60gvrl6tm78dmak4yl7srz94v" Status: Timestamp: "2016-06-07T21:07:31.290032978Z" State: "running" Message: "started" ContainerStatus: ContainerID: "e5d62702a1b48d01c3e02ca1e0212a250801fa8d67caca0b6f35919ebc12f035" PID: 677 DesiredState: "running" NetworksAttachments: - Network: ID: "4qvuz4ko70xaltuqbt8956gd1" Version: Index: 18 CreatedAt: "2016-06-07T20:31:11.912919752Z" UpdatedAt: "2016-06-07T21:07:29.955277358Z" Spec: Name: "ingress" Labels: com.docker.swarm.internal: "true" DriverConfiguration: {} IPAMOptions: Driver: {} Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" DriverState: Name: "overlay" Options: com.docker.network.driver.overlay.vxlanid_list: "256" IPAMOptions: Driver: Name: "default" Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" Addresses: - "10.255.0.10/16" - ID: "1yljwbmlr8er2waf8orvqpwms" Version: Index: 30 CreatedAt: "2016-06-07T21:07:30.019104782Z" UpdatedAt: "2016-06-07T21:07:30.231958098Z" Name: "hopeful_cori" Spec: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ServiceID: "9mnpnzenvg8p8tdbtq4wvbkcz" Slot: 1 NodeID: "60gvrl6tm78dmak4yl7srz94v" Status: Timestamp: "2016-06-07T21:07:30.202183143Z" State: "shutdown" Message: "shutdown" ContainerStatus: ContainerID: "1cf8d63d18e79668b0004a4be4c6ee58cddfad2dae29506d8781581d0688a213" DesiredState: "shutdown" NetworksAttachments: - Network: ID: "4qvuz4ko70xaltuqbt8956gd1" Version: Index: 18 CreatedAt: "2016-06-07T20:31:11.912919752Z" UpdatedAt: "2016-06-07T21:07:29.955277358Z" Spec: Name: "ingress" Labels: com.docker.swarm.internal: "true" DriverConfiguration: {} IPAMOptions: Driver: {} Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" DriverState: Name: "overlay" Options: com.docker.network.driver.overlay.vxlanid_list: "256" IPAMOptions: Driver: Name: "default" Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" Addresses: - "10.255.0.5/16" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the tasks list. 
Available filters: - `desired-state=(running | shutdown | accepted)` - `id=<task id>` - `label=key` or `label="key=value"` - `name=<task name>` - `node=<node id or name>` - `service=<service name>` tags: ["Task"] /tasks/{id}: get: summary: "Inspect a task" operationId: "TaskInspect" produces: - "application/json" responses: 200: description: "no error" schema: $ref: "#/definitions/Task" 404: description: "no such task" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID of the task" required: true type: "string" tags: ["Task"] /tasks/{id}/logs: get: summary: "Get task logs" description: | Get `stdout` and `stderr` logs from a task. See also [`/containers/{id}/logs`](#operation/ContainerLogs). **Note**: This endpoint works only for services with the `local`, `json-file` or `journald` logging drivers. operationId: "TaskLogs" responses: 200: description: "logs returned as a stream in response body" schema: type: "string" format: "binary" 404: description: "no such task" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such task: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID of the task" type: "string" - name: "details" in: "query" description: "Show task context and extra details provided to logs." type: "boolean" default: false - name: "follow" in: "query" description: "Keep connection after returning logs." type: "boolean" default: false - name: "stdout" in: "query" description: "Return logs from `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Return logs from `stderr`" type: "boolean" default: false - name: "since" in: "query" description: "Only return logs since this time, as a UNIX timestamp" type: "integer" default: 0 - name: "timestamps" in: "query" description: "Add timestamps to every log line" type: "boolean" default: false - name: "tail" in: "query" description: | Only return this number of log lines from the end of the logs. Specify as an integer or `all` to output all log lines. type: "string" default: "all" tags: ["Task"] /secrets: get: summary: "List secrets" operationId: "SecretList" produces: - "application/json" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Secret" example: - ID: "blt1owaxmitz71s9v5zh81zun" Version: Index: 85 CreatedAt: "2017-07-20T13:55:28.678958722Z" UpdatedAt: "2017-07-20T13:55:28.678958722Z" Spec: Name: "mysql-passwd" Labels: some.label: "some.value" Driver: Name: "secret-bucket" Options: OptionA: "value for driver option A" OptionB: "value for driver option B" - ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "app-dev.crt" Labels: foo: "bar" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the secrets list. 
Available filters: - `id=<secret id>` - `label=<key> or label=<key>=value` - `name=<secret name>` - `names=<secret name>` tags: ["Secret"] /secrets/create: post: summary: "Create a secret" operationId: "SecretCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 409: description: "name conflicts with an existing object" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" schema: allOf: - $ref: "#/definitions/SecretSpec" - type: "object" example: Name: "app-key.crt" Labels: foo: "bar" Data: "VEhJUyBJUyBOT1QgQSBSRUFMIENFUlRJRklDQVRFCg==" Driver: Name: "secret-bucket" Options: OptionA: "value for driver option A" OptionB: "value for driver option B" tags: ["Secret"] /secrets/{id}: get: summary: "Inspect a secret" operationId: "SecretInspect" produces: - "application/json" responses: 200: description: "no error" schema: $ref: "#/definitions/Secret" examples: application/json: ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "app-dev.crt" Labels: foo: "bar" Driver: Name: "secret-bucket" Options: OptionA: "value for driver option A" OptionB: "value for driver option B" 404: description: "secret not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the secret" tags: ["Secret"] delete: summary: "Delete a secret" operationId: "SecretDelete" produces: - "application/json" responses: 204: description: "no error" 404: description: "secret not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the secret" tags: ["Secret"] /secrets/{id}/update: post: summary: "Update a Secret" operationId: "SecretUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such secret" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the secret" type: "string" required: true - name: "body" in: "body" schema: $ref: "#/definitions/SecretSpec" description: | The spec of the secret to update. Currently, only the Labels field can be updated. All other fields must remain unchanged from the [SecretInspect endpoint](#operation/SecretInspect) response values. - name: "version" in: "query" description: | The version number of the secret object being updated. This is required to avoid conflicting writes. 
type: "integer" format: "int64" required: true tags: ["Secret"] /configs: get: summary: "List configs" operationId: "ConfigList" produces: - "application/json" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Config" example: - ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "server.conf" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the configs list. Available filters: - `id=<config id>` - `label=<key> or label=<key>=value` - `name=<config name>` - `names=<config name>` tags: ["Config"] /configs/create: post: summary: "Create a config" operationId: "ConfigCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 409: description: "name conflicts with an existing object" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" schema: allOf: - $ref: "#/definitions/ConfigSpec" - type: "object" example: Name: "server.conf" Labels: foo: "bar" Data: "VEhJUyBJUyBOT1QgQSBSRUFMIENFUlRJRklDQVRFCg==" tags: ["Config"] /configs/{id}: get: summary: "Inspect a config" operationId: "ConfigInspect" produces: - "application/json" responses: 200: description: "no error" schema: $ref: "#/definitions/Config" examples: application/json: ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "app-dev.crt" 404: description: "config not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the config" tags: ["Config"] delete: summary: "Delete a config" operationId: "ConfigDelete" produces: - "application/json" responses: 204: description: "no error" 404: description: "config not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the config" tags: ["Config"] /configs/{id}/update: post: summary: "Update a Config" operationId: "ConfigUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such config" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the config" type: "string" required: true - name: "body" in: "body" schema: $ref: "#/definitions/ConfigSpec" description: | The spec of the config to update. 
Currently, only the Labels field can be updated. All other fields must remain unchanged from the [ConfigInspect endpoint](#operation/ConfigInspect) response values. - name: "version" in: "query" description: | The version number of the config object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true tags: ["Config"] /distribution/{name}/json: get: summary: "Get image information from the registry" description: | Return image digest and platform information by contacting the registry. operationId: "DistributionInspect" produces: - "application/json" responses: 200: description: "descriptor and platform information" schema: type: "object" x-go-name: DistributionInspect title: "DistributionInspectResponse" required: [Descriptor, Platforms] properties: Descriptor: type: "object" description: | A descriptor struct containing digest, media type, and size. properties: mediaType: type: "string" size: type: "integer" format: "int64" digest: type: "string" urls: type: "array" items: type: "string" Platforms: type: "array" description: | An array containing all platforms supported by the image. items: type: "object" properties: architecture: type: "string" os: type: "string" os.version: type: "string" os.features: type: "array" items: type: "string" variant: type: "string" Features: type: "array" items: type: "string" examples: application/json: Descriptor: MediaType: "application/vnd.docker.distribution.manifest.v2+json" Digest: "sha256:c0537ff6a5218ef531ece93d4984efc99bbf3f7497c0a7726c88e2bb7584dc96" Size: 3987495 URLs: - "" Platforms: - architecture: "amd64" os: "linux" os.version: "" os.features: - "" variant: "" Features: - "" 401: description: "Failed authentication or no image found" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such image: someimage (tag: latest)" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or id" type: "string" required: true tags: ["Distribution"] /session: post: summary: "Initialize interactive session" description: | Start a new interactive session with a server. Session allows server to call back to the client for advanced capabilities. ### Hijacking This endpoint hijacks the HTTP connection to HTTP2 transport that allows the client to expose gPRC services on that connection. For example, the client sends this request to upgrade the connection: ``` POST /session HTTP/1.1 Upgrade: h2c Connection: Upgrade ``` The Docker daemon responds with a `101 UPGRADED` response follow with the raw stream: ``` HTTP/1.1 101 UPGRADED Connection: Upgrade Upgrade: h2c ``` operationId: "Session" produces: - "application/vnd.docker.raw-stream" responses: 101: description: "no error, hijacking successful" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Session"]
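The endpoint descriptions above (service inspect/update, service logs, tasks, secrets, configs, distribution, and session) can be exercised directly against a local daemon. As a minimal sketch — assuming a daemon listening on the default Unix socket and a hypothetical service named `web`, neither of which is part of this record — the documented query parameters for `GET /services/{id}` and `GET /services/{id}/logs` could be used like this:

```bash
# Inspect a service over the default Unix socket.
# "web" is a placeholder service name; omitting the version prefix uses the
# daemon's current API version, as described in the spec's versioning notes.
curl --silent --unix-socket /var/run/docker.sock \
  "http://localhost/services/web?insertDefaults=false"

# Stream the service's logs using the documented stdout/stderr/tail parameters.
curl --silent --unix-socket /var/run/docker.sock \
  "http://localhost/services/web/logs?stdout=true&stderr=true&tail=100"
```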
gesellix
41568dfc665ad4033d7fc860a5dac8102915008e
9bc0c4903f7f02ce287b9918f64795368e507f9d
I've removed the change and will create another PR.
gesellix
4562
moby/moby
42621
Improve swagger.yaml to match the 1.41 api version
I recently tried to rely on the swagger.yaml for code generation of an api client. Some parts required manual fixes of the swagger docs to match the current api version 1.41.

All changes described below relate to a dedicated commit, because they aim at different parts of the api description and aren't related per se. If you'd prefer to merge them all into a single commit, please leave a note.

**- What I did**

1. Add RestartPolicy "no" to swagger docs
2. Add "changes" query parameter for /image/create to swagger docs
3. Fix ContainerSummary swagger docs (flattened)
4. Use explicit object names for improved swagger-based code generation (otherwise generic names had been generated)
5. Fix swagger docs to match the [opencontainers image-spec](https://github.com/opencontainers/image-spec/blob/5ced465cc63831baf25330431a428a9d9444e192/specs-go/v1/descriptor.go)

**- How I did it**

Used the 1.41 swagger.yaml with swagger's codegen to generate Java code. Used that code to perform requests to a 1.41 Docker engine. Applied some fixes and `hack/validate/swagger` and `make swagger-docs` to validate the changes.

**- How to verify it**

Compare the actual 1.41 api with the swagger.yaml.

**- Description for the changelog**

Update the swagger.yaml to match the version 1.41 api.

**- A picture of a cute animal (not mandatory but encouraged)**

![062321_JC_mountain-cat_feat](https://user-images.githubusercontent.com/432791/125209896-77588e80-e29c-11eb-9a5f-439f5f04a69b.jpeg)
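For reference, a minimal sketch of the workflow described above, assuming the swagger-codegen CLI jar is available locally (the jar path and output directory are placeholders, not part of this PR):

```bash
# Generate a Java client from the spec (paths are placeholders).
java -jar swagger-codegen-cli.jar generate \
  -i api/swagger.yaml \
  -l java \
  -o build/swagger-java-client

# Validate the spec and rebuild the rendered docs, as mentioned above.
hack/validate/swagger
make swagger-docs
```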
null
2021-07-11 21:09:21+00:00
2021-08-21 22:28:57+00:00
api/swagger.yaml
# A Swagger 2.0 (a.k.a. OpenAPI) definition of the Engine API. # # This is used for generating API documentation and the types used by the # client/server. See api/README.md for more information. # # Some style notes: # - This file is used by ReDoc, which allows GitHub Flavored Markdown in # descriptions. # - There is no maximum line length, for ease of editing and pretty diffs. # - operationIds are in the format "NounVerb", with a singular noun. swagger: "2.0" schemes: - "http" - "https" produces: - "application/json" - "text/plain" consumes: - "application/json" - "text/plain" basePath: "/v1.42" info: title: "Docker Engine API" version: "1.42" x-logo: url: "https://docs.docker.com/images/logo-docker-main.png" description: | The Engine API is an HTTP API served by Docker Engine. It is the API the Docker client uses to communicate with the Engine, so everything the Docker client can do can be done with the API. Most of the client's commands map directly to API endpoints (e.g. `docker ps` is `GET /containers/json`). The notable exception is running containers, which consists of several API calls. # Errors The API uses standard HTTP status codes to indicate the success or failure of the API call. The body of the response will be JSON in the following format: ``` { "message": "page not found" } ``` # Versioning The API is usually changed in each release, so API calls are versioned to ensure that clients don't break. To lock to a specific version of the API, you prefix the URL with its version, for example, call `/v1.30/info` to use the v1.30 version of the `/info` endpoint. If the API version specified in the URL is not supported by the daemon, a HTTP `400 Bad Request` error message is returned. If you omit the version-prefix, the current version of the API (v1.42) is used. For example, calling `/info` is the same as calling `/v1.42/info`. Using the API without a version-prefix is deprecated and will be removed in a future release. Engine releases in the near future should support this version of the API, so your client will continue to work even if it is talking to a newer Engine. The API uses an open schema model, which means server may add extra properties to responses. Likewise, the server will ignore any extra query parameters and request body properties. When you write clients, you need to ignore additional properties in responses to ensure they do not break when talking to newer daemons. # Authentication Authentication for registries is handled client side. The client has to send authentication details to various endpoints that need to communicate with registries, such as `POST /images/(name)/push`. These are sent as `X-Registry-Auth` header as a [base64url encoded](https://tools.ietf.org/html/rfc4648#section-5) (JSON) string with the following structure: ``` { "username": "string", "password": "string", "email": "string", "serveraddress": "string" } ``` The `serveraddress` is a domain/IP without a protocol. Throughout this structure, double quotes are required. If you have already got an identity token from the [`/auth` endpoint](#operation/SystemAuth), you can just pass this instead of credentials: ``` { "identitytoken": "9cbaf023786cd7..." } ``` # The tags on paths define the menu sections in the ReDoc documentation, so # the usage of tags must make sense for that: # - They should be singular, not plural. # - There should not be too many tags, or the menu becomes unwieldy. 
For # example, it is preferable to add a path to the "System" tag instead of # creating a tag with a single path in it. # - The order of tags in this list defines the order in the menu. tags: # Primary objects - name: "Container" x-displayName: "Containers" description: | Create and manage containers. - name: "Image" x-displayName: "Images" - name: "Network" x-displayName: "Networks" description: | Networks are user-defined networks that containers can be attached to. See the [networking documentation](https://docs.docker.com/network/) for more information. - name: "Volume" x-displayName: "Volumes" description: | Create and manage persistent storage that can be attached to containers. - name: "Exec" x-displayName: "Exec" description: | Run new commands inside running containers. Refer to the [command-line reference](https://docs.docker.com/engine/reference/commandline/exec/) for more information. To exec a command in a container, you first need to create an exec instance, then start it. These two API endpoints are wrapped up in a single command-line command, `docker exec`. # Swarm things - name: "Swarm" x-displayName: "Swarm" description: | Engines can be clustered together in a swarm. Refer to the [swarm mode documentation](https://docs.docker.com/engine/swarm/) for more information. - name: "Node" x-displayName: "Nodes" description: | Nodes are instances of the Engine participating in a swarm. Swarm mode must be enabled for these endpoints to work. - name: "Service" x-displayName: "Services" description: | Services are the definitions of tasks to run on a swarm. Swarm mode must be enabled for these endpoints to work. - name: "Task" x-displayName: "Tasks" description: | A task is a container running on a swarm. It is the atomic scheduling unit of swarm. Swarm mode must be enabled for these endpoints to work. - name: "Secret" x-displayName: "Secrets" description: | Secrets are sensitive data that can be used by services. Swarm mode must be enabled for these endpoints to work. - name: "Config" x-displayName: "Configs" description: | Configs are application configurations that can be used by services. Swarm mode must be enabled for these endpoints to work. 
# System things - name: "Plugin" x-displayName: "Plugins" - name: "System" x-displayName: "System" definitions: Port: type: "object" description: "An open port on a container" required: [PrivatePort, Type] properties: IP: type: "string" format: "ip-address" description: "Host IP address that the container's port is mapped to" PrivatePort: type: "integer" format: "uint16" x-nullable: false description: "Port on the container" PublicPort: type: "integer" format: "uint16" description: "Port exposed on the host" Type: type: "string" x-nullable: false enum: ["tcp", "udp", "sctp"] example: PrivatePort: 8080 PublicPort: 80 Type: "tcp" MountPoint: type: "object" description: "A mount point inside a container" properties: Type: type: "string" Name: type: "string" Source: type: "string" Destination: type: "string" Driver: type: "string" Mode: type: "string" RW: type: "boolean" Propagation: type: "string" DeviceMapping: type: "object" description: "A device mapping between the host and container" properties: PathOnHost: type: "string" PathInContainer: type: "string" CgroupPermissions: type: "string" example: PathOnHost: "/dev/deviceName" PathInContainer: "/dev/deviceName" CgroupPermissions: "mrw" DeviceRequest: type: "object" description: "A request for devices to be sent to device drivers" properties: Driver: type: "string" example: "nvidia" Count: type: "integer" example: -1 DeviceIDs: type: "array" items: type: "string" example: - "0" - "1" - "GPU-fef8089b-4820-abfc-e83e-94318197576e" Capabilities: description: | A list of capabilities; an OR list of AND lists of capabilities. type: "array" items: type: "array" items: type: "string" example: # gpu AND nvidia AND compute - ["gpu", "nvidia", "compute"] Options: description: | Driver-specific options, specified as a key/value pairs. These options are passed directly to the driver. type: "object" additionalProperties: type: "string" ThrottleDevice: type: "object" properties: Path: description: "Device path" type: "string" Rate: description: "Rate" type: "integer" format: "int64" minimum: 0 Mount: type: "object" properties: Target: description: "Container path." type: "string" Source: description: "Mount source (e.g. a volume name, a host path)." type: "string" Type: description: | The mount type. Available types: - `bind` Mounts a file or directory from the host into the container. Must exist prior to creating the container. - `volume` Creates a volume with the given name and options (or uses a pre-existing volume with the same name and options). These are **not** removed when the container is removed. - `tmpfs` Create a tmpfs with the given options. The mount source cannot be specified for tmpfs. - `npipe` Mounts a named pipe from the host into the container. Must exist prior to creating the container. type: "string" enum: - "bind" - "volume" - "tmpfs" - "npipe" ReadOnly: description: "Whether the mount should be read-only." type: "boolean" Consistency: description: "The consistency requirement for the mount: `default`, `consistent`, `cached`, or `delegated`." type: "string" BindOptions: description: "Optional configuration for the `bind` type." type: "object" properties: Propagation: description: "A propagation mode with the value `[r]private`, `[r]shared`, or `[r]slave`." type: "string" enum: - "private" - "rprivate" - "shared" - "rshared" - "slave" - "rslave" NonRecursive: description: "Disable recursive bind mount." type: "boolean" default: false VolumeOptions: description: "Optional configuration for the `volume` type." 
type: "object" properties: NoCopy: description: "Populate volume with data from the target." type: "boolean" default: false Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" DriverConfig: description: "Map of driver specific options" type: "object" properties: Name: description: "Name of the driver to use to create the volume." type: "string" Options: description: "key/value map of driver specific options." type: "object" additionalProperties: type: "string" TmpfsOptions: description: "Optional configuration for the `tmpfs` type." type: "object" properties: SizeBytes: description: "The size for the tmpfs mount in bytes." type: "integer" format: "int64" Mode: description: "The permission mode for the tmpfs mount in an integer." type: "integer" RestartPolicy: description: | The behavior to apply when the container exits. The default is not to restart. An ever increasing delay (double the previous delay, starting at 100ms) is added before each restart to prevent flooding the server. type: "object" properties: Name: type: "string" description: | - Empty string means not to restart - `always` Always restart - `unless-stopped` Restart always except when the user has manually stopped the container - `on-failure` Restart only when the container exit code is non-zero enum: - "" - "always" - "unless-stopped" - "on-failure" MaximumRetryCount: type: "integer" description: | If `on-failure` is used, the number of times to retry before giving up. Resources: description: "A container's resources (cgroups config, ulimits, etc)" type: "object" properties: # Applicable to all platforms CpuShares: description: | An integer value representing this container's relative CPU weight versus other containers. type: "integer" Memory: description: "Memory limit in bytes." type: "integer" format: "int64" default: 0 # Applicable to UNIX platforms CgroupParent: description: | Path to `cgroups` under which the container's `cgroup` is created. If the path is not absolute, the path is considered to be relative to the `cgroups` path of the init process. Cgroups are created if they do not already exist. type: "string" BlkioWeight: description: "Block IO weight (relative weight)." type: "integer" minimum: 0 maximum: 1000 BlkioWeightDevice: description: | Block IO weight (relative device weight) in the form: ``` [{"Path": "device_path", "Weight": weight}] ``` type: "array" items: type: "object" properties: Path: type: "string" Weight: type: "integer" minimum: 0 BlkioDeviceReadBps: description: | Limit read rate (bytes per second) from a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" BlkioDeviceWriteBps: description: | Limit write rate (bytes per second) to a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" BlkioDeviceReadIOps: description: | Limit read rate (IO per second) from a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" BlkioDeviceWriteIOps: description: | Limit write rate (IO per second) to a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" CpuPeriod: description: "The length of a CPU period in microseconds." type: "integer" format: "int64" CpuQuota: description: | Microseconds of CPU time that the container can get in a CPU period. 
type: "integer" format: "int64" CpuRealtimePeriod: description: | The length of a CPU real-time period in microseconds. Set to 0 to allocate no time allocated to real-time tasks. type: "integer" format: "int64" CpuRealtimeRuntime: description: | The length of a CPU real-time runtime in microseconds. Set to 0 to allocate no time allocated to real-time tasks. type: "integer" format: "int64" CpusetCpus: description: | CPUs in which to allow execution (e.g., `0-3`, `0,1`). type: "string" example: "0-3" CpusetMems: description: | Memory nodes (MEMs) in which to allow execution (0-3, 0,1). Only effective on NUMA systems. type: "string" Devices: description: "A list of devices to add to the container." type: "array" items: $ref: "#/definitions/DeviceMapping" DeviceCgroupRules: description: "a list of cgroup rules to apply to the container" type: "array" items: type: "string" example: "c 13:* rwm" DeviceRequests: description: | A list of requests for devices to be sent to device drivers. type: "array" items: $ref: "#/definitions/DeviceRequest" KernelMemory: description: | Kernel memory limit in bytes. <p><br /></p> > **Deprecated**: This field is deprecated as the kernel 5.4 deprecated > `kmem.limit_in_bytes`. type: "integer" format: "int64" example: 209715200 KernelMemoryTCP: description: "Hard limit for kernel TCP buffer memory (in bytes)." type: "integer" format: "int64" MemoryReservation: description: "Memory soft limit in bytes." type: "integer" format: "int64" MemorySwap: description: | Total memory limit (memory + swap). Set as `-1` to enable unlimited swap. type: "integer" format: "int64" MemorySwappiness: description: | Tune a container's memory swappiness behavior. Accepts an integer between 0 and 100. type: "integer" format: "int64" minimum: 0 maximum: 100 NanoCpus: description: "CPU quota in units of 10<sup>-9</sup> CPUs." type: "integer" format: "int64" OomKillDisable: description: "Disable OOM Killer for the container." type: "boolean" Init: description: | Run an init inside the container that forwards signals and reaps processes. This field is omitted if empty, and the default (as configured on the daemon) is used. type: "boolean" x-nullable: true PidsLimit: description: | Tune a container's PIDs limit. Set `0` or `-1` for unlimited, or `null` to not change. type: "integer" format: "int64" x-nullable: true Ulimits: description: | A list of resource limits to set in the container. For example: ``` {"Name": "nofile", "Soft": 1024, "Hard": 2048} ``` type: "array" items: type: "object" properties: Name: description: "Name of ulimit" type: "string" Soft: description: "Soft limit" type: "integer" Hard: description: "Hard limit" type: "integer" # Applicable to Windows CpuCount: description: | The number of usable CPUs (Windows only). On Windows Server containers, the processor resource controls are mutually exclusive. The order of precedence is `CPUCount` first, then `CPUShares`, and `CPUPercent` last. type: "integer" format: "int64" CpuPercent: description: | The usable percentage of the available CPUs (Windows only). On Windows Server containers, the processor resource controls are mutually exclusive. The order of precedence is `CPUCount` first, then `CPUShares`, and `CPUPercent` last. type: "integer" format: "int64" IOMaximumIOps: description: "Maximum IOps for the container system drive (Windows only)" type: "integer" format: "int64" IOMaximumBandwidth: description: | Maximum IO in bytes per second for the container system drive (Windows only). 
type: "integer" format: "int64" Limit: description: | An object describing a limit on resources which can be requested by a task. type: "object" properties: NanoCPUs: type: "integer" format: "int64" example: 4000000000 MemoryBytes: type: "integer" format: "int64" example: 8272408576 Pids: description: | Limits the maximum number of PIDs in the container. Set `0` for unlimited. type: "integer" format: "int64" default: 0 example: 100 ResourceObject: description: | An object describing the resources which can be advertised by a node and requested by a task. type: "object" properties: NanoCPUs: type: "integer" format: "int64" example: 4000000000 MemoryBytes: type: "integer" format: "int64" example: 8272408576 GenericResources: $ref: "#/definitions/GenericResources" GenericResources: description: | User-defined resources can be either Integer resources (e.g, `SSD=3`) or String resources (e.g, `GPU=UUID1`). type: "array" items: type: "object" properties: NamedResourceSpec: type: "object" properties: Kind: type: "string" Value: type: "string" DiscreteResourceSpec: type: "object" properties: Kind: type: "string" Value: type: "integer" format: "int64" example: - DiscreteResourceSpec: Kind: "SSD" Value: 3 - NamedResourceSpec: Kind: "GPU" Value: "UUID1" - NamedResourceSpec: Kind: "GPU" Value: "UUID2" HealthConfig: description: "A test to perform to check that the container is healthy." type: "object" properties: Test: description: | The test to perform. Possible values are: - `[]` inherit healthcheck from image or parent image - `["NONE"]` disable healthcheck - `["CMD", args...]` exec arguments directly - `["CMD-SHELL", command]` run command with system's default shell type: "array" items: type: "string" Interval: description: | The time to wait between checks in nanoseconds. It should be 0 or at least 1000000 (1 ms). 0 means inherit. type: "integer" Timeout: description: | The time to wait before considering the check to have hung. It should be 0 or at least 1000000 (1 ms). 0 means inherit. type: "integer" Retries: description: | The number of consecutive failures needed to consider a container as unhealthy. 0 means inherit. type: "integer" StartPeriod: description: | Start period for the container to initialize before starting health-retries countdown in nanoseconds. It should be 0 or at least 1000000 (1 ms). 0 means inherit. type: "integer" Health: description: | Health stores information about the container's healthcheck results. type: "object" properties: Status: description: | Status is one of `none`, `starting`, `healthy` or `unhealthy` - "none" Indicates there is no healthcheck - "starting" Starting indicates that the container is not yet ready - "healthy" Healthy indicates that the container is running correctly - "unhealthy" Unhealthy indicates that the container has a problem type: "string" enum: - "none" - "starting" - "healthy" - "unhealthy" example: "healthy" FailingStreak: description: "FailingStreak is the number of consecutive failures" type: "integer" example: 0 Log: type: "array" description: | Log contains the last few results (oldest first) items: x-nullable: true $ref: "#/definitions/HealthcheckResult" HealthcheckResult: description: | HealthcheckResult stores information about a single run of a healthcheck probe type: "object" properties: Start: description: | Date and time at which this check started in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. 
type: "string" format: "date-time" example: "2020-01-04T10:44:24.496525531Z" End: description: | Date and time at which this check ended in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2020-01-04T10:45:21.364524523Z" ExitCode: description: | ExitCode meanings: - `0` healthy - `1` unhealthy - `2` reserved (considered unhealthy) - other values: error running probe type: "integer" example: 0 Output: description: "Output from last check" type: "string" HostConfig: description: "Container configuration that depends on the host we are running on" allOf: - $ref: "#/definitions/Resources" - type: "object" properties: # Applicable to all platforms Binds: type: "array" description: | A list of volume bindings for this container. Each volume binding is a string in one of these forms: - `host-src:container-dest[:options]` to bind-mount a host path into the container. Both `host-src`, and `container-dest` must be an _absolute_ path. - `volume-name:container-dest[:options]` to bind-mount a volume managed by a volume driver into the container. `container-dest` must be an _absolute_ path. `options` is an optional, comma-delimited list of: - `nocopy` disables automatic copying of data from the container path to the volume. The `nocopy` flag only applies to named volumes. - `[ro|rw]` mounts a volume read-only or read-write, respectively. If omitted or set to `rw`, volumes are mounted read-write. - `[z|Z]` applies SELinux labels to allow or deny multiple containers to read and write to the same volume. - `z`: a _shared_ content label is applied to the content. This label indicates that multiple containers can share the volume content, for both reading and writing. - `Z`: a _private unshared_ label is applied to the content. This label indicates that only the current container can use a private volume. Labeling systems such as SELinux require proper labels to be placed on volume content that is mounted into a container. Without a label, the security system can prevent a container's processes from using the content. By default, the labels set by the host operating system are not modified. - `[[r]shared|[r]slave|[r]private]` specifies mount [propagation behavior](https://www.kernel.org/doc/Documentation/filesystems/sharedsubtree.txt). This only applies to bind-mounted volumes, not internal volumes or named volumes. Mount propagation requires the source mount point (the location where the source directory is mounted in the host operating system) to have the correct propagation properties. For shared volumes, the source mount point must be set to `shared`. For slave volumes, the mount must be set to either `shared` or `slave`. items: type: "string" ContainerIDFile: type: "string" description: "Path to a file where the container ID is written" LogConfig: type: "object" description: "The logging configuration for this container" properties: Type: type: "string" enum: - "json-file" - "syslog" - "journald" - "gelf" - "fluentd" - "awslogs" - "splunk" - "etwlogs" - "none" Config: type: "object" additionalProperties: type: "string" NetworkMode: type: "string" description: | Network mode to use for this container. Supported standard values are: `bridge`, `host`, `none`, and `container:<name|id>`. Any other value is taken as a custom network's name to which this container should connect to. 
PortBindings: $ref: "#/definitions/PortMap" RestartPolicy: $ref: "#/definitions/RestartPolicy" AutoRemove: type: "boolean" description: | Automatically remove the container when the container's process exits. This has no effect if `RestartPolicy` is set. VolumeDriver: type: "string" description: "Driver that this container uses to mount volumes." VolumesFrom: type: "array" description: | A list of volumes to inherit from another container, specified in the form `<container name>[:<ro|rw>]`. items: type: "string" Mounts: description: | Specification for mounts to be added to the container. type: "array" items: $ref: "#/definitions/Mount" # Applicable to UNIX platforms CapAdd: type: "array" description: | A list of kernel capabilities to add to the container. Conflicts with option 'Capabilities'. items: type: "string" CapDrop: type: "array" description: | A list of kernel capabilities to drop from the container. Conflicts with option 'Capabilities'. items: type: "string" CgroupnsMode: type: "string" enum: - "private" - "host" description: | cgroup namespace mode for the container. Possible values are: - `"private"`: the container runs in its own private cgroup namespace - `"host"`: use the host system's cgroup namespace If not specified, the daemon default is used, which can either be `"private"` or `"host"`, depending on daemon version, kernel support and configuration. Dns: type: "array" description: "A list of DNS servers for the container to use." items: type: "string" DnsOptions: type: "array" description: "A list of DNS options." items: type: "string" DnsSearch: type: "array" description: "A list of DNS search domains." items: type: "string" ExtraHosts: type: "array" description: | A list of hostnames/IP mappings to add to the container's `/etc/hosts` file. Specified in the form `["hostname:IP"]`. items: type: "string" GroupAdd: type: "array" description: | A list of additional groups that the container process will run as. items: type: "string" IpcMode: type: "string" description: | IPC sharing mode for the container. Possible values are: - `"none"`: own private IPC namespace, with /dev/shm not mounted - `"private"`: own private IPC namespace - `"shareable"`: own private IPC namespace, with a possibility to share it with other containers - `"container:<name|id>"`: join another (shareable) container's IPC namespace - `"host"`: use the host system's IPC namespace If not specified, daemon default is used, which can either be `"private"` or `"shareable"`, depending on daemon version and configuration. Cgroup: type: "string" description: "Cgroup to use for the container." Links: type: "array" description: | A list of links for the container in the form `container_name:alias`. items: type: "string" OomScoreAdj: type: "integer" description: | An integer value containing the score given to the container in order to tune OOM killer preferences. example: 500 PidMode: type: "string" description: | Set the PID (Process) Namespace mode for the container. It can be either: - `"container:<name|id>"`: joins another container's PID namespace - `"host"`: use the host's PID namespace inside the container Privileged: type: "boolean" description: "Gives the container full access to the host." PublishAllPorts: type: "boolean" description: | Allocates an ephemeral host port for all of a container's exposed ports. Ports are de-allocated when the container stops and allocated when the container starts. The allocated port might be changed when restarting the container. 
The port is selected from the ephemeral port range that depends on the kernel. For example, on Linux the range is defined by `/proc/sys/net/ipv4/ip_local_port_range`. ReadonlyRootfs: type: "boolean" description: "Mount the container's root filesystem as read only." SecurityOpt: type: "array" description: "A list of string values to customize labels for MLS systems, such as SELinux." items: type: "string" StorageOpt: type: "object" description: | Storage driver options for this container, in the form `{"size": "120G"}`. additionalProperties: type: "string" Tmpfs: type: "object" description: | A map of container directories which should be replaced by tmpfs mounts, and their corresponding mount options. For example: ``` { "/run": "rw,noexec,nosuid,size=65536k" } ``` additionalProperties: type: "string" UTSMode: type: "string" description: "UTS namespace to use for the container." UsernsMode: type: "string" description: | Sets the usernamespace mode for the container when usernamespace remapping option is enabled. ShmSize: type: "integer" description: | Size of `/dev/shm` in bytes. If omitted, the system uses 64MB. minimum: 0 Sysctls: type: "object" description: | A list of kernel parameters (sysctls) to set in the container. For example: ``` {"net.ipv4.ip_forward": "1"} ``` additionalProperties: type: "string" Runtime: type: "string" description: "Runtime to use with this container." # Applicable to Windows ConsoleSize: type: "array" description: | Initial console size, as an `[height, width]` array. (Windows only) minItems: 2 maxItems: 2 items: type: "integer" minimum: 0 Isolation: type: "string" description: | Isolation technology of the container. (Windows only) enum: - "default" - "process" - "hyperv" MaskedPaths: type: "array" description: | The list of paths to be masked inside the container (this overrides the default set of paths). items: type: "string" ReadonlyPaths: type: "array" description: | The list of paths to be set as read-only inside the container (this overrides the default set of paths). items: type: "string" ContainerConfig: description: "Configuration for a container that is portable between hosts" type: "object" properties: Hostname: description: "The hostname to use for the container, as a valid RFC 1123 hostname." type: "string" Domainname: description: "The domain name to use for the container." type: "string" User: description: "The user that commands are run as inside the container." type: "string" AttachStdin: description: "Whether to attach to `stdin`." type: "boolean" default: false AttachStdout: description: "Whether to attach to `stdout`." type: "boolean" default: true AttachStderr: description: "Whether to attach to `stderr`." type: "boolean" default: true ExposedPorts: description: | An object mapping ports to an empty object in the form: `{"<port>/<tcp|udp|sctp>": {}}` type: "object" additionalProperties: type: "object" enum: - {} default: {} Tty: description: | Attach standard streams to a TTY, including `stdin` if it is not closed. type: "boolean" default: false OpenStdin: description: "Open `stdin`" type: "boolean" default: false StdinOnce: description: "Close `stdin` after one attached client disconnects" type: "boolean" default: false Env: description: | A list of environment variables to set inside the container in the form `["VAR=value", ...]`. A variable without `=` is removed from the environment, rather than to have an empty value. type: "array" items: type: "string" Cmd: description: | Command to run specified as a string or an array of strings. 
type: "array" items: type: "string" Healthcheck: $ref: "#/definitions/HealthConfig" ArgsEscaped: description: "Command is already escaped (Windows only)" type: "boolean" Image: description: | The name of the image to use when creating the container/ type: "string" Volumes: description: | An object mapping mount point paths inside the container to empty objects. type: "object" additionalProperties: type: "object" enum: - {} default: {} WorkingDir: description: "The working directory for commands to run in." type: "string" Entrypoint: description: | The entry point for the container as a string or an array of strings. If the array consists of exactly one empty string (`[""]`) then the entry point is reset to system default (i.e., the entry point used by docker when there is no `ENTRYPOINT` instruction in the `Dockerfile`). type: "array" items: type: "string" NetworkDisabled: description: "Disable networking for the container." type: "boolean" MacAddress: description: "MAC address of the container." type: "string" OnBuild: description: | `ONBUILD` metadata that were defined in the image's `Dockerfile`. type: "array" items: type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" StopSignal: description: | Signal to stop a container as a string or unsigned integer. type: "string" default: "SIGTERM" StopTimeout: description: "Timeout to stop a container in seconds." type: "integer" default: 10 Shell: description: | Shell for when `RUN`, `CMD`, and `ENTRYPOINT` uses a shell. type: "array" items: type: "string" NetworkingConfig: description: | NetworkingConfig represents the container's networking configuration for each of its interfaces. It is used for the networking configs specified in the `docker create` and `docker network connect` commands. type: "object" properties: EndpointsConfig: description: | A mapping of network name to endpoint configuration for that network. type: "object" additionalProperties: $ref: "#/definitions/EndpointSettings" example: # putting an example here, instead of using the example values from # /definitions/EndpointSettings, because containers/create currently # does not support attaching to multiple networks, so the example request # would be confusing if it showed that multiple networks can be contained # in the EndpointsConfig. # TODO remove once we support multiple networks on container create (see https://github.com/moby/moby/blob/07e6b843594e061f82baa5fa23c2ff7d536c2a05/daemon/create.go#L323) EndpointsConfig: isolated_nw: IPAMConfig: IPv4Address: "172.20.30.33" IPv6Address: "2001:db8:abcd::3033" LinkLocalIPs: - "169.254.34.68" - "fe80::3468" Links: - "container_1" - "container_2" Aliases: - "server_x" - "server_y" NetworkSettings: description: "NetworkSettings exposes the network settings in the API" type: "object" properties: Bridge: description: Name of the network's bridge (for example, `docker0`). type: "string" example: "docker0" SandboxID: description: SandboxID uniquely represents a container's network stack. type: "string" example: "9d12daf2c33f5959c8bf90aa513e4f65b561738661003029ec84830cd503a0c3" HairpinMode: description: | Indicates if hairpin NAT should be enabled on the virtual interface. type: "boolean" example: false LinkLocalIPv6Address: description: IPv6 unicast address using the link-local prefix. type: "string" example: "fe80::42:acff:fe11:1" LinkLocalIPv6PrefixLen: description: Prefix length of the IPv6 unicast address. 
type: "integer" example: "64" Ports: $ref: "#/definitions/PortMap" SandboxKey: description: SandboxKey identifies the sandbox type: "string" example: "/var/run/docker/netns/8ab54b426c38" # TODO is SecondaryIPAddresses actually used? SecondaryIPAddresses: description: "" type: "array" items: $ref: "#/definitions/Address" x-nullable: true # TODO is SecondaryIPv6Addresses actually used? SecondaryIPv6Addresses: description: "" type: "array" items: $ref: "#/definitions/Address" x-nullable: true # TODO properties below are part of DefaultNetworkSettings, which is # marked as deprecated since Docker 1.9 and to be removed in Docker v17.12 EndpointID: description: | EndpointID uniquely represents a service endpoint in a Sandbox. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "b88f5b905aabf2893f3cbc4ee42d1ea7980bbc0a92e2c8922b1e1795298afb0b" Gateway: description: | Gateway address for the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "172.17.0.1" GlobalIPv6Address: description: | Global IPv6 address for the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "2001:db8::5689" GlobalIPv6PrefixLen: description: | Mask length of the global IPv6 address. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "integer" example: 64 IPAddress: description: | IPv4 address for the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "172.17.0.4" IPPrefixLen: description: | Mask length of the IPv4 address. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "integer" example: 16 IPv6Gateway: description: | IPv6 gateway address for this network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. 
Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "2001:db8:2::100" MacAddress: description: | MAC address for the container on the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "02:42:ac:11:00:04" Networks: description: | Information about all networks that the container is connected to. type: "object" additionalProperties: $ref: "#/definitions/EndpointSettings" Address: description: Address represents an IPv4 or IPv6 IP address. type: "object" properties: Addr: description: IP address. type: "string" PrefixLen: description: Mask length of the IP address. type: "integer" PortMap: description: | PortMap describes the mapping of container ports to host ports, using the container's port-number and protocol as key in the format `<port>/<protocol>`, for example, `80/udp`. If a container's port is mapped for multiple protocols, separate entries are added to the mapping table. type: "object" additionalProperties: type: "array" x-nullable: true items: $ref: "#/definitions/PortBinding" example: "443/tcp": - HostIp: "127.0.0.1" HostPort: "4443" "80/tcp": - HostIp: "0.0.0.0" HostPort: "80" - HostIp: "0.0.0.0" HostPort: "8080" "80/udp": - HostIp: "0.0.0.0" HostPort: "80" "53/udp": - HostIp: "0.0.0.0" HostPort: "53" "2377/tcp": null PortBinding: description: | PortBinding represents a binding between a host IP address and a host port. type: "object" properties: HostIp: description: "Host IP address that the container's port is mapped to." type: "string" example: "127.0.0.1" HostPort: description: "Host port number that the container's port is mapped to." type: "string" example: "4443" GraphDriverData: description: "Information about a container's graph driver." 
type: "object" required: [Name, Data] properties: Name: type: "string" x-nullable: false Data: type: "object" x-nullable: false additionalProperties: type: "string" Image: type: "object" required: - Id - Parent - Comment - Created - Container - DockerVersion - Author - Architecture - Os - Size - VirtualSize - GraphDriver - RootFS properties: Id: type: "string" x-nullable: false RepoTags: type: "array" items: type: "string" RepoDigests: type: "array" items: type: "string" Parent: type: "string" x-nullable: false Comment: type: "string" x-nullable: false Created: type: "string" x-nullable: false Container: type: "string" x-nullable: false ContainerConfig: $ref: "#/definitions/ContainerConfig" DockerVersion: type: "string" x-nullable: false Author: type: "string" x-nullable: false Config: $ref: "#/definitions/ContainerConfig" Architecture: type: "string" x-nullable: false Os: type: "string" x-nullable: false OsVersion: type: "string" Size: type: "integer" format: "int64" x-nullable: false VirtualSize: type: "integer" format: "int64" x-nullable: false GraphDriver: $ref: "#/definitions/GraphDriverData" RootFS: type: "object" required: [Type] properties: Type: type: "string" x-nullable: false Layers: type: "array" items: type: "string" BaseLayer: type: "string" Metadata: type: "object" properties: LastTagTime: type: "string" format: "dateTime" ImageSummary: type: "object" required: - Id - ParentId - RepoTags - RepoDigests - Created - Size - SharedSize - VirtualSize - Labels - Containers properties: Id: type: "string" x-nullable: false ParentId: type: "string" x-nullable: false RepoTags: type: "array" x-nullable: false items: type: "string" RepoDigests: type: "array" x-nullable: false items: type: "string" Created: type: "integer" x-nullable: false Size: type: "integer" x-nullable: false SharedSize: type: "integer" x-nullable: false VirtualSize: type: "integer" x-nullable: false Labels: type: "object" x-nullable: false additionalProperties: type: "string" Containers: x-nullable: false type: "integer" AuthConfig: type: "object" properties: username: type: "string" password: type: "string" email: type: "string" serveraddress: type: "string" example: username: "hannibal" password: "xxxx" serveraddress: "https://index.docker.io/v1/" ProcessConfig: type: "object" properties: privileged: type: "boolean" user: type: "string" tty: type: "boolean" entrypoint: type: "string" arguments: type: "array" items: type: "string" Volume: type: "object" required: [Name, Driver, Mountpoint, Labels, Scope, Options] properties: Name: type: "string" description: "Name of the volume." x-nullable: false Driver: type: "string" description: "Name of the volume driver used by the volume." x-nullable: false Mountpoint: type: "string" description: "Mount path of the volume on the host." x-nullable: false CreatedAt: type: "string" format: "dateTime" description: "Date/Time the volume was created." Status: type: "object" description: | Low-level details about the volume, provided by the volume driver. Details are returned as a map with key/value pairs: `{"key":"value","key2":"value2"}`. The `Status` field is optional, and is omitted if the volume driver does not support this feature. additionalProperties: type: "object" Labels: type: "object" description: "User-defined key/value metadata." x-nullable: false additionalProperties: type: "string" Scope: type: "string" description: | The level at which the volume exists. Either `global` for cluster-wide, or `local` for machine level. 
default: "local" x-nullable: false enum: ["local", "global"] Options: type: "object" description: | The driver specific options used when creating the volume. additionalProperties: type: "string" UsageData: type: "object" x-nullable: true required: [Size, RefCount] description: | Usage details about the volume. This information is used by the `GET /system/df` endpoint, and omitted in other endpoints. properties: Size: type: "integer" default: -1 description: | Amount of disk space used by the volume (in bytes). This information is only available for volumes created with the `"local"` volume driver. For volumes created with other volume drivers, this field is set to `-1` ("not available") x-nullable: false RefCount: type: "integer" default: -1 description: | The number of containers referencing this volume. This field is set to `-1` if the reference-count is not available. x-nullable: false example: Name: "tardis" Driver: "custom" Mountpoint: "/var/lib/docker/volumes/tardis" Status: hello: "world" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Scope: "local" CreatedAt: "2016-06-07T20:31:11.853781916Z" Network: type: "object" properties: Name: type: "string" Id: type: "string" Created: type: "string" format: "dateTime" Scope: type: "string" Driver: type: "string" EnableIPv6: type: "boolean" IPAM: $ref: "#/definitions/IPAM" Internal: type: "boolean" Attachable: type: "boolean" Ingress: type: "boolean" Containers: type: "object" additionalProperties: $ref: "#/definitions/NetworkContainer" Options: type: "object" additionalProperties: type: "string" Labels: type: "object" additionalProperties: type: "string" example: Name: "net01" Id: "7d86d31b1478e7cca9ebed7e73aa0fdeec46c5ca29497431d3007d2d9e15ed99" Created: "2016-10-19T04:33:30.360899459Z" Scope: "local" Driver: "bridge" EnableIPv6: false IPAM: Driver: "default" Config: - Subnet: "172.19.0.0/16" Gateway: "172.19.0.1" Options: foo: "bar" Internal: false Attachable: false Ingress: false Containers: 19a4d5d687db25203351ed79d478946f861258f018fe384f229f2efa4b23513c: Name: "test" EndpointID: "628cadb8bcb92de107b2a1e516cbffe463e321f548feb37697cce00ad694f21a" MacAddress: "02:42:ac:13:00:02" IPv4Address: "172.19.0.2/16" IPv6Address: "" Options: com.docker.network.bridge.default_bridge: "true" com.docker.network.bridge.enable_icc: "true" com.docker.network.bridge.enable_ip_masquerade: "true" com.docker.network.bridge.host_binding_ipv4: "0.0.0.0" com.docker.network.bridge.name: "docker0" com.docker.network.driver.mtu: "1500" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" IPAM: type: "object" properties: Driver: description: "Name of the IPAM driver to use." type: "string" default: "default" Config: description: | List of IPAM configuration options, specified as a map: ``` {"Subnet": <CIDR>, "IPRange": <CIDR>, "Gateway": <IP address>, "AuxAddress": <device_name:IP address>} ``` type: "array" items: type: "object" additionalProperties: type: "string" Options: description: "Driver-specific options, specified as a map." 
type: "object" additionalProperties: type: "string" NetworkContainer: type: "object" properties: Name: type: "string" EndpointID: type: "string" MacAddress: type: "string" IPv4Address: type: "string" IPv6Address: type: "string" BuildInfo: type: "object" properties: id: type: "string" stream: type: "string" error: type: "string" errorDetail: $ref: "#/definitions/ErrorDetail" status: type: "string" progress: type: "string" progressDetail: $ref: "#/definitions/ProgressDetail" aux: $ref: "#/definitions/ImageID" BuildCache: type: "object" properties: ID: type: "string" Parent: type: "string" Type: type: "string" Description: type: "string" InUse: type: "boolean" Shared: type: "boolean" Size: description: | Amount of disk space used by the build cache (in bytes). type: "integer" CreatedAt: description: | Date and time at which the build cache was created in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2016-08-18T10:44:24.496525531Z" LastUsedAt: description: | Date and time at which the build cache was last used in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" x-nullable: true example: "2017-08-09T07:09:37.632105588Z" UsageCount: type: "integer" ImageID: type: "object" description: "Image ID or Digest" properties: ID: type: "string" example: ID: "sha256:85f05633ddc1c50679be2b16a0479ab6f7637f8884e0cfe0f4d20e1ebb3d6e7c" CreateImageInfo: type: "object" properties: id: type: "string" error: type: "string" status: type: "string" progress: type: "string" progressDetail: $ref: "#/definitions/ProgressDetail" PushImageInfo: type: "object" properties: error: type: "string" status: type: "string" progress: type: "string" progressDetail: $ref: "#/definitions/ProgressDetail" ErrorDetail: type: "object" properties: code: type: "integer" message: type: "string" ProgressDetail: type: "object" properties: current: type: "integer" total: type: "integer" ErrorResponse: description: "Represents an error." type: "object" required: ["message"] properties: message: description: "The error message." type: "string" x-nullable: false example: message: "Something went wrong." IdResponse: description: "Response to an API call that returns just an Id" type: "object" required: ["Id"] properties: Id: description: "The id of the newly created object." type: "string" x-nullable: false EndpointSettings: description: "Configuration for a network endpoint." type: "object" properties: # Configurations IPAMConfig: $ref: "#/definitions/EndpointIPAMConfig" Links: type: "array" items: type: "string" example: - "container_1" - "container_2" Aliases: type: "array" items: type: "string" example: - "server_x" - "server_y" # Operational data NetworkID: description: | Unique ID of the network. type: "string" example: "08754567f1f40222263eab4102e1c733ae697e8e354aa9cd6e18d7402835292a" EndpointID: description: | Unique ID for the service endpoint in a Sandbox. type: "string" example: "b88f5b905aabf2893f3cbc4ee42d1ea7980bbc0a92e2c8922b1e1795298afb0b" Gateway: description: | Gateway address for this network. type: "string" example: "172.17.0.1" IPAddress: description: | IPv4 address. type: "string" example: "172.17.0.4" IPPrefixLen: description: | Mask length of the IPv4 address. type: "integer" example: 16 IPv6Gateway: description: | IPv6 gateway address. type: "string" example: "2001:db8:2::100" GlobalIPv6Address: description: | Global IPv6 address. 
type: "string" example: "2001:db8::5689" GlobalIPv6PrefixLen: description: | Mask length of the global IPv6 address. type: "integer" format: "int64" example: 64 MacAddress: description: | MAC address for the endpoint on this network. type: "string" example: "02:42:ac:11:00:04" DriverOpts: description: | DriverOpts is a mapping of driver options and values. These options are passed directly to the driver and are driver specific. type: "object" x-nullable: true additionalProperties: type: "string" example: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" EndpointIPAMConfig: description: | EndpointIPAMConfig represents an endpoint's IPAM configuration. type: "object" x-nullable: true properties: IPv4Address: type: "string" example: "172.20.30.33" IPv6Address: type: "string" example: "2001:db8:abcd::3033" LinkLocalIPs: type: "array" items: type: "string" example: - "169.254.34.68" - "fe80::3468" PluginMount: type: "object" x-nullable: false required: [Name, Description, Settable, Source, Destination, Type, Options] properties: Name: type: "string" x-nullable: false example: "some-mount" Description: type: "string" x-nullable: false example: "This is a mount that's used by the plugin." Settable: type: "array" items: type: "string" Source: type: "string" example: "/var/lib/docker/plugins/" Destination: type: "string" x-nullable: false example: "/mnt/state" Type: type: "string" x-nullable: false example: "bind" Options: type: "array" items: type: "string" example: - "rbind" - "rw" PluginDevice: type: "object" required: [Name, Description, Settable, Path] x-nullable: false properties: Name: type: "string" x-nullable: false Description: type: "string" x-nullable: false Settable: type: "array" items: type: "string" Path: type: "string" example: "/dev/fuse" PluginEnv: type: "object" x-nullable: false required: [Name, Description, Settable, Value] properties: Name: x-nullable: false type: "string" Description: x-nullable: false type: "string" Settable: type: "array" items: type: "string" Value: type: "string" PluginInterfaceType: type: "object" x-nullable: false required: [Prefix, Capability, Version] properties: Prefix: type: "string" x-nullable: false Capability: type: "string" x-nullable: false Version: type: "string" x-nullable: false Plugin: description: "A plugin for the Engine API" type: "object" required: [Settings, Enabled, Config, Name] properties: Id: type: "string" example: "5724e2c8652da337ab2eedd19fc6fc0ec908e4bd907c7421bf6a8dfc70c4c078" Name: type: "string" x-nullable: false example: "tiborvass/sample-volume-plugin" Enabled: description: True if the plugin is running. False if the plugin is not running, only installed. type: "boolean" x-nullable: false example: true Settings: description: "Settings that can be modified by users." type: "object" x-nullable: false required: [Args, Devices, Env, Mounts] properties: Mounts: type: "array" items: $ref: "#/definitions/PluginMount" Env: type: "array" items: type: "string" example: - "DEBUG=0" Args: type: "array" items: type: "string" Devices: type: "array" items: $ref: "#/definitions/PluginDevice" PluginReference: description: "plugin remote reference used to push/pull the plugin" type: "string" x-nullable: false example: "localhost:5000/tiborvass/sample-volume-plugin:latest" Config: description: "The config of a plugin." 
type: "object" x-nullable: false required: - Description - Documentation - Interface - Entrypoint - WorkDir - Network - Linux - PidHost - PropagatedMount - IpcHost - Mounts - Env - Args properties: DockerVersion: description: "Docker Version used to create the plugin" type: "string" x-nullable: false example: "17.06.0-ce" Description: type: "string" x-nullable: false example: "A sample volume plugin for Docker" Documentation: type: "string" x-nullable: false example: "https://docs.docker.com/engine/extend/plugins/" Interface: description: "The interface between Docker and the plugin" x-nullable: false type: "object" required: [Types, Socket] properties: Types: type: "array" items: $ref: "#/definitions/PluginInterfaceType" example: - "docker.volumedriver/1.0" Socket: type: "string" x-nullable: false example: "plugins.sock" ProtocolScheme: type: "string" example: "some.protocol/v1.0" description: "Protocol to use for clients connecting to the plugin." enum: - "" - "moby.plugins.http/v1" Entrypoint: type: "array" items: type: "string" example: - "/usr/bin/sample-volume-plugin" - "/data" WorkDir: type: "string" x-nullable: false example: "/bin/" User: type: "object" x-nullable: false properties: UID: type: "integer" format: "uint32" example: 1000 GID: type: "integer" format: "uint32" example: 1000 Network: type: "object" x-nullable: false required: [Type] properties: Type: x-nullable: false type: "string" example: "host" Linux: type: "object" x-nullable: false required: [Capabilities, AllowAllDevices, Devices] properties: Capabilities: type: "array" items: type: "string" example: - "CAP_SYS_ADMIN" - "CAP_SYSLOG" AllowAllDevices: type: "boolean" x-nullable: false example: false Devices: type: "array" items: $ref: "#/definitions/PluginDevice" PropagatedMount: type: "string" x-nullable: false example: "/mnt/volumes" IpcHost: type: "boolean" x-nullable: false example: false PidHost: type: "boolean" x-nullable: false example: false Mounts: type: "array" items: $ref: "#/definitions/PluginMount" Env: type: "array" items: $ref: "#/definitions/PluginEnv" example: - Name: "DEBUG" Description: "If set, prints debug messages" Settable: null Value: "0" Args: type: "object" x-nullable: false required: [Name, Description, Settable, Value] properties: Name: x-nullable: false type: "string" example: "args" Description: x-nullable: false type: "string" example: "command line arguments" Settable: type: "array" items: type: "string" Value: type: "array" items: type: "string" rootfs: type: "object" properties: type: type: "string" example: "layers" diff_ids: type: "array" items: type: "string" example: - "sha256:675532206fbf3030b8458f88d6e26d4eb1577688a25efec97154c94e8b6b4887" - "sha256:e216a057b1cb1efc11f8a268f37ef62083e70b1b38323ba252e25ac88904a7e8" ObjectVersion: description: | The version number of the object such as node, service, etc. This is needed to avoid conflicting writes. The client must send the version number along with the modified specification when updating these objects. This approach ensures safe concurrency and determinism in that the change on the object may not be applied if the version number has changed from the last read. In other words, if two update requests specify the same base version, only one of the requests can succeed. As a result, two separate update requests that happen at the same time will not unintentionally overwrite each other. 
type: "object" properties: Index: type: "integer" format: "uint64" example: 373531 NodeSpec: type: "object" properties: Name: description: "Name for the node." type: "string" example: "my-node" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" Role: description: "Role of the node." type: "string" enum: - "worker" - "manager" example: "manager" Availability: description: "Availability of the node." type: "string" enum: - "active" - "pause" - "drain" example: "active" example: Availability: "active" Name: "node-name" Role: "manager" Labels: foo: "bar" Node: type: "object" properties: ID: type: "string" example: "24ifsmvkjbyhk" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: description: | Date and time at which the node was added to the swarm in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2016-08-18T10:44:24.496525531Z" UpdatedAt: description: | Date and time at which the node was last updated in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2017-08-09T07:09:37.632105588Z" Spec: $ref: "#/definitions/NodeSpec" Description: $ref: "#/definitions/NodeDescription" Status: $ref: "#/definitions/NodeStatus" ManagerStatus: $ref: "#/definitions/ManagerStatus" NodeDescription: description: | NodeDescription encapsulates the properties of the Node as reported by the agent. type: "object" properties: Hostname: type: "string" example: "bf3067039e47" Platform: $ref: "#/definitions/Platform" Resources: $ref: "#/definitions/ResourceObject" Engine: $ref: "#/definitions/EngineDescription" TLSInfo: $ref: "#/definitions/TLSInfo" Platform: description: | Platform represents the platform (Arch/OS). type: "object" properties: Architecture: description: | Architecture represents the hardware architecture (for example, `x86_64`). type: "string" example: "x86_64" OS: description: | OS represents the Operating System (for example, `linux` or `windows`). type: "string" example: "linux" EngineDescription: description: "EngineDescription provides information about an engine." type: "object" properties: EngineVersion: type: "string" example: "17.06.0" Labels: type: "object" additionalProperties: type: "string" example: foo: "bar" Plugins: type: "array" items: type: "object" properties: Type: type: "string" Name: type: "string" example: - Type: "Log" Name: "awslogs" - Type: "Log" Name: "fluentd" - Type: "Log" Name: "gcplogs" - Type: "Log" Name: "gelf" - Type: "Log" Name: "journald" - Type: "Log" Name: "json-file" - Type: "Log" Name: "logentries" - Type: "Log" Name: "splunk" - Type: "Log" Name: "syslog" - Type: "Network" Name: "bridge" - Type: "Network" Name: "host" - Type: "Network" Name: "ipvlan" - Type: "Network" Name: "macvlan" - Type: "Network" Name: "null" - Type: "Network" Name: "overlay" - Type: "Volume" Name: "local" - Type: "Volume" Name: "localhost:5000/vieux/sshfs:latest" - Type: "Volume" Name: "vieux/sshfs:latest" TLSInfo: description: | Information about the issuer of leaf TLS certificates and the trusted root CA certificate. type: "object" properties: TrustRoot: description: | The root CA certificate(s) that are used to validate leaf TLS certificates. type: "string" CertIssuerSubject: description: The base64-url-safe-encoded raw subject bytes of the issuer. type: "string" CertIssuerPublicKey: description: | The base64-url-safe-encoded raw public key bytes of the issuer. 
type: "string" example: TrustRoot: | -----BEGIN CERTIFICATE----- MIIBajCCARCgAwIBAgIUbYqrLSOSQHoxD8CwG6Bi2PJi9c8wCgYIKoZIzj0EAwIw EzERMA8GA1UEAxMIc3dhcm0tY2EwHhcNMTcwNDI0MjE0MzAwWhcNMzcwNDE5MjE0 MzAwWjATMREwDwYDVQQDEwhzd2FybS1jYTBZMBMGByqGSM49AgEGCCqGSM49AwEH A0IABJk/VyMPYdaqDXJb/VXh5n/1Yuv7iNrxV3Qb3l06XD46seovcDWs3IZNV1lf 3Skyr0ofcchipoiHkXBODojJydSjQjBAMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMB Af8EBTADAQH/MB0GA1UdDgQWBBRUXxuRcnFjDfR/RIAUQab8ZV/n4jAKBggqhkjO PQQDAgNIADBFAiAy+JTe6Uc3KyLCMiqGl2GyWGQqQDEcO3/YG36x7om65AIhAJvz pxv6zFeVEkAEEkqIYi0omA9+CjanB/6Bz4n1uw8H -----END CERTIFICATE----- CertIssuerSubject: "MBMxETAPBgNVBAMTCHN3YXJtLWNh" CertIssuerPublicKey: "MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEmT9XIw9h1qoNclv9VeHmf/Vi6/uI2vFXdBveXTpcPjqx6i9wNazchk1XWV/dKTKvSh9xyGKmiIeRcE4OiMnJ1A==" NodeStatus: description: | NodeStatus represents the status of a node. It provides the current status of the node, as seen by the manager. type: "object" properties: State: $ref: "#/definitions/NodeState" Message: type: "string" example: "" Addr: description: "IP address of the node." type: "string" example: "172.17.0.2" NodeState: description: "NodeState represents the state of a node." type: "string" enum: - "unknown" - "down" - "ready" - "disconnected" example: "ready" ManagerStatus: description: | ManagerStatus represents the status of a manager. It provides the current status of a node's manager component, if the node is a manager. x-nullable: true type: "object" properties: Leader: type: "boolean" default: false example: true Reachability: $ref: "#/definitions/Reachability" Addr: description: | The IP address and port at which the manager is reachable. type: "string" example: "10.0.0.46:2377" Reachability: description: "Reachability represents the reachability of a node." type: "string" enum: - "unknown" - "unreachable" - "reachable" example: "reachable" SwarmSpec: description: "User modifiable swarm configuration." type: "object" properties: Name: description: "Name of the swarm." type: "string" example: "default" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" example: com.example.corp.type: "production" com.example.corp.department: "engineering" Orchestration: description: "Orchestration configuration." type: "object" x-nullable: true properties: TaskHistoryRetentionLimit: description: | The number of historic tasks to keep per instance or node. If negative, never remove completed or failed tasks. type: "integer" format: "int64" example: 10 Raft: description: "Raft configuration." type: "object" properties: SnapshotInterval: description: "The number of log entries between snapshots." type: "integer" format: "uint64" example: 10000 KeepOldSnapshots: description: | The number of snapshots to keep beyond the current snapshot. type: "integer" format: "uint64" LogEntriesForSlowFollowers: description: | The number of log entries to keep around to sync up slow followers after a snapshot is created. type: "integer" format: "uint64" example: 500 ElectionTick: description: | The number of ticks that a follower will wait for a message from the leader before becoming a candidate and starting an election. `ElectionTick` must be greater than `HeartbeatTick`. A tick currently defaults to one second, so these translate directly to seconds currently, but this is NOT guaranteed. type: "integer" example: 3 HeartbeatTick: description: | The number of ticks between heartbeats. Every HeartbeatTick ticks, the leader will send a heartbeat to the followers. 
              A tick currently defaults to one second, so these translate
              directly to seconds currently, but this is NOT guaranteed.
            type: "integer"
            example: 1
      Dispatcher:
        description: "Dispatcher configuration."
        type: "object"
        x-nullable: true
        properties:
          HeartbeatPeriod:
            description: |
              The delay for an agent to send a heartbeat to the dispatcher.
            type: "integer"
            format: "int64"
            example: 5000000000
      CAConfig:
        description: "CA configuration."
        type: "object"
        x-nullable: true
        properties:
          NodeCertExpiry:
            description: "The duration node certificates are issued for."
            type: "integer"
            format: "int64"
            example: 7776000000000000
          ExternalCAs:
            description: |
              Configuration for forwarding signing requests to an external
              certificate authority.
            type: "array"
            items:
              type: "object"
              properties:
                Protocol:
                  description: |
                    Protocol for communication with the external CA (currently
                    only `cfssl` is supported).
                  type: "string"
                  enum:
                    - "cfssl"
                  default: "cfssl"
                URL:
                  description: |
                    URL where certificate signing requests should be sent.
                  type: "string"
                Options:
                  description: |
                    An object with key/value pairs that are interpreted as
                    protocol-specific options for the external CA driver.
                  type: "object"
                  additionalProperties:
                    type: "string"
                CACert:
                  description: |
                    The root CA certificate (in PEM format) this external CA uses
                    to issue TLS certificates (assumed to be to the current swarm
                    root CA certificate if not provided).
                  type: "string"
          SigningCACert:
            description: |
              The desired signing CA certificate for all swarm node TLS leaf
              certificates, in PEM format.
            type: "string"
          SigningCAKey:
            description: |
              The desired signing CA key for all swarm node TLS leaf
              certificates, in PEM format.
            type: "string"
          ForceRotate:
            description: |
              An integer whose purpose is to force swarm to generate a new
              signing CA certificate and key, if none have been specified in
              `SigningCACert` and `SigningCAKey`.
            format: "uint64"
            type: "integer"
      EncryptionConfig:
        description: "Parameters related to encryption-at-rest."
        type: "object"
        properties:
          AutoLockManagers:
            description: |
              If set, generate a key and use it to lock data stored on the
              managers.
            type: "boolean"
            example: false
      TaskDefaults:
        description: "Defaults for creating tasks in this cluster."
        type: "object"
        properties:
          LogDriver:
            description: |
              The log driver to use for tasks created in the orchestrator if
              unspecified by a service.

              Updating this value only affects new tasks. Existing tasks continue
              to use their previously configured log driver until recreated.
            type: "object"
            properties:
              Name:
                description: |
                  The log driver to use as a default for new tasks.
                type: "string"
                example: "json-file"
              Options:
                description: |
                  Driver-specific options for the selected log driver, specified
                  as key/value pairs.
                type: "object"
                additionalProperties:
                  type: "string"
                example:
                  "max-file": "10"
                  "max-size": "100m"
  # The Swarm information for `GET /info`. It is the same as `GET /swarm`, but
  # without `JoinTokens`.
  ClusterInfo:
    description: |
      ClusterInfo represents information about the swarm as is returned by the
      "/info" endpoint. Join-tokens are not included.
    x-nullable: true
    type: "object"
    properties:
      ID:
        description: "The ID of the swarm."
        type: "string"
        example: "abajmipo7b4xz5ip2nrla6b11"
      Version:
        $ref: "#/definitions/ObjectVersion"
      CreatedAt:
        description: |
          Date and time at which the swarm was initialised in
          [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds.
type: "string" format: "dateTime" example: "2016-08-18T10:44:24.496525531Z" UpdatedAt: description: | Date and time at which the swarm was last updated in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2017-08-09T07:09:37.632105588Z" Spec: $ref: "#/definitions/SwarmSpec" TLSInfo: $ref: "#/definitions/TLSInfo" RootRotationInProgress: description: | Whether there is currently a root CA rotation in progress for the swarm type: "boolean" example: false DataPathPort: description: | DataPathPort specifies the data path port number for data traffic. Acceptable port range is 1024 to 49151. If no port is set or is set to 0, the default port (4789) is used. type: "integer" format: "uint32" default: 4789 example: 4789 DefaultAddrPool: description: | Default Address Pool specifies default subnet pools for global scope networks. type: "array" items: type: "string" format: "CIDR" example: ["10.10.0.0/16", "20.20.0.0/16"] SubnetSize: description: | SubnetSize specifies the subnet size of the networks created from the default subnet pool. type: "integer" format: "uint32" maximum: 29 default: 24 example: 24 JoinTokens: description: | JoinTokens contains the tokens workers and managers need to join the swarm. type: "object" properties: Worker: description: | The token workers can use to join the swarm. type: "string" example: "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-1awxwuwd3z9j1z3puu7rcgdbx" Manager: description: | The token managers can use to join the swarm. type: "string" example: "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-7p73s1dx5in4tatdymyhg9hu2" Swarm: type: "object" allOf: - $ref: "#/definitions/ClusterInfo" - type: "object" properties: JoinTokens: $ref: "#/definitions/JoinTokens" TaskSpec: description: "User modifiable task configuration." type: "object" properties: PluginSpec: type: "object" description: | Plugin spec for the service. *(Experimental release only.)* <p><br /></p> > **Note**: ContainerSpec, NetworkAttachmentSpec, and PluginSpec are > mutually exclusive. PluginSpec is only used when the Runtime field > is set to `plugin`. NetworkAttachmentSpec is used when the Runtime > field is set to `attachment`. properties: Name: description: "The name or 'alias' to use for the plugin." type: "string" Remote: description: "The plugin image reference to use." type: "string" Disabled: description: "Disable the plugin once scheduled." type: "boolean" PluginPrivilege: type: "array" items: description: | Describes a permission accepted by the user upon installing the plugin. type: "object" properties: Name: type: "string" Description: type: "string" Value: type: "array" items: type: "string" ContainerSpec: type: "object" description: | Container spec for the service. <p><br /></p> > **Note**: ContainerSpec, NetworkAttachmentSpec, and PluginSpec are > mutually exclusive. PluginSpec is only used when the Runtime field > is set to `plugin`. NetworkAttachmentSpec is used when the Runtime > field is set to `attachment`. properties: Image: description: "The image name to use for the container" type: "string" Labels: description: "User-defined key/value data." type: "object" additionalProperties: type: "string" Command: description: "The command to be run in the image." type: "array" items: type: "string" Args: description: "Arguments to the command." 
type: "array" items: type: "string" Hostname: description: | The hostname to use for the container, as a valid [RFC 1123](https://tools.ietf.org/html/rfc1123) hostname. type: "string" Env: description: | A list of environment variables in the form `VAR=value`. type: "array" items: type: "string" Dir: description: "The working directory for commands to run in." type: "string" User: description: "The user inside the container." type: "string" Groups: type: "array" description: | A list of additional groups that the container process will run as. items: type: "string" Privileges: type: "object" description: "Security options for the container" properties: CredentialSpec: type: "object" description: "CredentialSpec for managed service account (Windows only)" properties: Config: type: "string" example: "0bt9dmxjvjiqermk6xrop3ekq" description: | Load credential spec from a Swarm Config with the given ID. The specified config must also be present in the Configs field with the Runtime property set. <p><br /></p> > **Note**: `CredentialSpec.File`, `CredentialSpec.Registry`, > and `CredentialSpec.Config` are mutually exclusive. File: type: "string" example: "spec.json" description: | Load credential spec from this file. The file is read by the daemon, and must be present in the `CredentialSpecs` subdirectory in the docker data directory, which defaults to `C:\ProgramData\Docker\` on Windows. For example, specifying `spec.json` loads `C:\ProgramData\Docker\CredentialSpecs\spec.json`. <p><br /></p> > **Note**: `CredentialSpec.File`, `CredentialSpec.Registry`, > and `CredentialSpec.Config` are mutually exclusive. Registry: type: "string" description: | Load credential spec from this value in the Windows registry. The specified registry value must be located in: `HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization\Containers\CredentialSpecs` <p><br /></p> > **Note**: `CredentialSpec.File`, `CredentialSpec.Registry`, > and `CredentialSpec.Config` are mutually exclusive. SELinuxContext: type: "object" description: "SELinux labels of the container" properties: Disable: type: "boolean" description: "Disable SELinux" User: type: "string" description: "SELinux user label" Role: type: "string" description: "SELinux role label" Type: type: "string" description: "SELinux type label" Level: type: "string" description: "SELinux level label" TTY: description: "Whether a pseudo-TTY should be allocated." type: "boolean" OpenStdin: description: "Open `stdin`" type: "boolean" ReadOnly: description: "Mount the container's root filesystem as read only." type: "boolean" Mounts: description: | Specification for mounts to be added to containers created as part of the service. type: "array" items: $ref: "#/definitions/Mount" StopSignal: description: "Signal to stop the container." type: "string" StopGracePeriod: description: | Amount of time to wait for the container to terminate before forcefully killing it. type: "integer" format: "int64" HealthCheck: $ref: "#/definitions/HealthConfig" Hosts: type: "array" description: | A list of hostname/IP mappings to add to the container's `hosts` file. The format of extra hosts is specified in the [hosts(5)](http://man7.org/linux/man-pages/man5/hosts.5.html) man page: IP_address canonical_hostname [aliases...] items: type: "string" DNSConfig: description: | Specification for DNS related configurations in resolver configuration file (`resolv.conf`). type: "object" properties: Nameservers: description: "The IP addresses of the name servers." 
type: "array" items: type: "string" Search: description: "A search list for host-name lookup." type: "array" items: type: "string" Options: description: | A list of internal resolver variables to be modified (e.g., `debug`, `ndots:3`, etc.). type: "array" items: type: "string" Secrets: description: | Secrets contains references to zero or more secrets that will be exposed to the service. type: "array" items: type: "object" properties: File: description: | File represents a specific target that is backed by a file. type: "object" properties: Name: description: | Name represents the final filename in the filesystem. type: "string" UID: description: "UID represents the file UID." type: "string" GID: description: "GID represents the file GID." type: "string" Mode: description: "Mode represents the FileMode of the file." type: "integer" format: "uint32" SecretID: description: | SecretID represents the ID of the specific secret that we're referencing. type: "string" SecretName: description: | SecretName is the name of the secret that this references, but this is just provided for lookup/display purposes. The secret in the reference will be identified by its ID. type: "string" Configs: description: | Configs contains references to zero or more configs that will be exposed to the service. type: "array" items: type: "object" properties: File: description: | File represents a specific target that is backed by a file. <p><br /><p> > **Note**: `Configs.File` and `Configs.Runtime` are mutually exclusive type: "object" properties: Name: description: | Name represents the final filename in the filesystem. type: "string" UID: description: "UID represents the file UID." type: "string" GID: description: "GID represents the file GID." type: "string" Mode: description: "Mode represents the FileMode of the file." type: "integer" format: "uint32" Runtime: description: | Runtime represents a target that is not mounted into the container but is used by the task <p><br /><p> > **Note**: `Configs.File` and `Configs.Runtime` are mutually > exclusive type: "object" ConfigID: description: | ConfigID represents the ID of the specific config that we're referencing. type: "string" ConfigName: description: | ConfigName is the name of the config that this references, but this is just provided for lookup/display purposes. The config in the reference will be identified by its ID. type: "string" Isolation: type: "string" description: | Isolation technology of the containers running the service. (Windows only) enum: - "default" - "process" - "hyperv" Init: description: | Run an init inside the container that forwards signals and reaps processes. This field is omitted if empty, and the default (as configured on the daemon) is used. type: "boolean" x-nullable: true Sysctls: description: | Set kernel namedspaced parameters (sysctls) in the container. The Sysctls option on services accepts the same sysctls as the are supported on containers. Note that while the same sysctls are supported, no guarantees or checks are made about their suitability for a clustered environment, and it's up to the user to determine whether a given sysctl will work properly in a Service. type: "object" additionalProperties: type: "string" # This option is not used by Windows containers CapabilityAdd: type: "array" description: | A list of kernel capabilities to add to the default set for the container. 
items: type: "string" example: - "CAP_NET_RAW" - "CAP_SYS_ADMIN" - "CAP_SYS_CHROOT" - "CAP_SYSLOG" CapabilityDrop: type: "array" description: | A list of kernel capabilities to drop from the default set for the container. items: type: "string" example: - "CAP_NET_RAW" Ulimits: description: | A list of resource limits to set in the container. For example: `{"Name": "nofile", "Soft": 1024, "Hard": 2048}`" type: "array" items: type: "object" properties: Name: description: "Name of ulimit" type: "string" Soft: description: "Soft limit" type: "integer" Hard: description: "Hard limit" type: "integer" NetworkAttachmentSpec: description: | Read-only spec type for non-swarm containers attached to swarm overlay networks. <p><br /></p> > **Note**: ContainerSpec, NetworkAttachmentSpec, and PluginSpec are > mutually exclusive. PluginSpec is only used when the Runtime field > is set to `plugin`. NetworkAttachmentSpec is used when the Runtime > field is set to `attachment`. type: "object" properties: ContainerID: description: "ID of the container represented by this task" type: "string" Resources: description: | Resource requirements which apply to each individual container created as part of the service. type: "object" properties: Limits: description: "Define resources limits." $ref: "#/definitions/Limit" Reservation: description: "Define resources reservation." $ref: "#/definitions/ResourceObject" RestartPolicy: description: | Specification for the restart policy which applies to containers created as part of this service. type: "object" properties: Condition: description: "Condition for restart." type: "string" enum: - "none" - "on-failure" - "any" Delay: description: "Delay between restart attempts." type: "integer" format: "int64" MaxAttempts: description: | Maximum attempts to restart a given container before giving up (default value is 0, which is ignored). type: "integer" format: "int64" default: 0 Window: description: | Windows is the time window used to evaluate the restart policy (default value is 0, which is unbounded). type: "integer" format: "int64" default: 0 Placement: type: "object" properties: Constraints: description: | An array of constraint expressions to limit the set of nodes where a task can be scheduled. Constraint expressions can either use a _match_ (`==`) or _exclude_ (`!=`) rule. Multiple constraints find nodes that satisfy every expression (AND match). Constraints can match node or Docker Engine labels as follows: node attribute | matches | example ---------------------|--------------------------------|----------------------------------------------- `node.id` | Node ID | `node.id==2ivku8v2gvtg4` `node.hostname` | Node hostname | `node.hostname!=node-2` `node.role` | Node role (`manager`/`worker`) | `node.role==manager` `node.platform.os` | Node operating system | `node.platform.os==windows` `node.platform.arch` | Node architecture | `node.platform.arch==x86_64` `node.labels` | User-defined node labels | `node.labels.security==high` `engine.labels` | Docker Engine's labels | `engine.labels.operatingsystem==ubuntu-14.04` `engine.labels` apply to Docker Engine labels like operating system, drivers, etc. Swarm administrators add `node.labels` for operational purposes by using the [`node update endpoint`](#operation/NodeUpdate). 
type: "array" items: type: "string" example: - "node.hostname!=node3.corp.example.com" - "node.role!=manager" - "node.labels.type==production" - "node.platform.os==linux" - "node.platform.arch==x86_64" Preferences: description: | Preferences provide a way to make the scheduler aware of factors such as topology. They are provided in order from highest to lowest precedence. type: "array" items: type: "object" properties: Spread: type: "object" properties: SpreadDescriptor: description: | label descriptor, such as `engine.labels.az`. type: "string" example: - Spread: SpreadDescriptor: "node.labels.datacenter" - Spread: SpreadDescriptor: "node.labels.rack" MaxReplicas: description: | Maximum number of replicas for per node (default value is 0, which is unlimited) type: "integer" format: "int64" default: 0 Platforms: description: | Platforms stores all the platforms that the service's image can run on. This field is used in the platform filter for scheduling. If empty, then the platform filter is off, meaning there are no scheduling restrictions. type: "array" items: $ref: "#/definitions/Platform" ForceUpdate: description: | A counter that triggers an update even if no relevant parameters have been changed. type: "integer" Runtime: description: | Runtime is the type of runtime specified for the task executor. type: "string" Networks: description: "Specifies which networks the service should attach to." type: "array" items: $ref: "#/definitions/NetworkAttachmentConfig" LogDriver: description: | Specifies the log driver to use for tasks created from this spec. If not present, the default one for the swarm will be used, finally falling back to the engine default if not specified. type: "object" properties: Name: type: "string" Options: type: "object" additionalProperties: type: "string" TaskState: type: "string" enum: - "new" - "allocated" - "pending" - "assigned" - "accepted" - "preparing" - "ready" - "starting" - "running" - "complete" - "shutdown" - "failed" - "rejected" - "remove" - "orphaned" Task: type: "object" properties: ID: description: "The ID of the task." type: "string" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" UpdatedAt: type: "string" format: "dateTime" Name: description: "Name of the task." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" Spec: $ref: "#/definitions/TaskSpec" ServiceID: description: "The ID of the service this task is part of." type: "string" Slot: type: "integer" NodeID: description: "The ID of the node that this task is on." type: "string" AssignedGenericResources: $ref: "#/definitions/GenericResources" Status: type: "object" properties: Timestamp: type: "string" format: "dateTime" State: $ref: "#/definitions/TaskState" Message: type: "string" Err: type: "string" ContainerStatus: type: "object" properties: ContainerID: type: "string" PID: type: "integer" ExitCode: type: "integer" DesiredState: $ref: "#/definitions/TaskState" JobIteration: description: | If the Service this Task belongs to is a job-mode service, contains the JobIteration of the Service this Task was created for. Absent if the Task was created for a Replicated or Global Service. 
$ref: "#/definitions/ObjectVersion" example: ID: "0kzzo1i0y4jz6027t0k7aezc7" Version: Index: 71 CreatedAt: "2016-06-07T21:07:31.171892745Z" UpdatedAt: "2016-06-07T21:07:31.376370513Z" Spec: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ServiceID: "9mnpnzenvg8p8tdbtq4wvbkcz" Slot: 1 NodeID: "60gvrl6tm78dmak4yl7srz94v" Status: Timestamp: "2016-06-07T21:07:31.290032978Z" State: "running" Message: "started" ContainerStatus: ContainerID: "e5d62702a1b48d01c3e02ca1e0212a250801fa8d67caca0b6f35919ebc12f035" PID: 677 DesiredState: "running" NetworksAttachments: - Network: ID: "4qvuz4ko70xaltuqbt8956gd1" Version: Index: 18 CreatedAt: "2016-06-07T20:31:11.912919752Z" UpdatedAt: "2016-06-07T21:07:29.955277358Z" Spec: Name: "ingress" Labels: com.docker.swarm.internal: "true" DriverConfiguration: {} IPAMOptions: Driver: {} Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" DriverState: Name: "overlay" Options: com.docker.network.driver.overlay.vxlanid_list: "256" IPAMOptions: Driver: Name: "default" Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" Addresses: - "10.255.0.10/16" AssignedGenericResources: - DiscreteResourceSpec: Kind: "SSD" Value: 3 - NamedResourceSpec: Kind: "GPU" Value: "UUID1" - NamedResourceSpec: Kind: "GPU" Value: "UUID2" ServiceSpec: description: "User modifiable configuration for a service." properties: Name: description: "Name of the service." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" TaskTemplate: $ref: "#/definitions/TaskSpec" Mode: description: "Scheduling mode for the service." type: "object" properties: Replicated: type: "object" properties: Replicas: type: "integer" format: "int64" Global: type: "object" ReplicatedJob: description: | The mode used for services with a finite number of tasks that run to a completed state. type: "object" properties: MaxConcurrent: description: | The maximum number of replicas to run simultaneously. type: "integer" format: "int64" default: 1 TotalCompletions: description: | The total number of replicas desired to reach the Completed state. If unset, will default to the value of `MaxConcurrent` type: "integer" format: "int64" GlobalJob: description: | The mode used for services which run a task to the completed state on each valid node. type: "object" UpdateConfig: description: "Specification for the update strategy of the service." type: "object" properties: Parallelism: description: | Maximum number of tasks to be updated in one iteration (0 means unlimited parallelism). type: "integer" format: "int64" Delay: description: "Amount of time between updates, in nanoseconds." type: "integer" format: "int64" FailureAction: description: | Action to take if an updated task fails to run, or stops running during the update. type: "string" enum: - "continue" - "pause" - "rollback" Monitor: description: | Amount of time to monitor each updated task for failures, in nanoseconds. type: "integer" format: "int64" MaxFailureRatio: description: | The fraction of tasks that may fail during an update before the failure action is invoked, specified as a floating point number between 0 and 1. type: "number" default: 0 Order: description: | The order of operations when rolling out an updated task. Either the old task is shut down before the new task is started, or the new task is started before the old task is shut down. 
type: "string" enum: - "stop-first" - "start-first" RollbackConfig: description: "Specification for the rollback strategy of the service." type: "object" properties: Parallelism: description: | Maximum number of tasks to be rolled back in one iteration (0 means unlimited parallelism). type: "integer" format: "int64" Delay: description: | Amount of time between rollback iterations, in nanoseconds. type: "integer" format: "int64" FailureAction: description: | Action to take if an rolled back task fails to run, or stops running during the rollback. type: "string" enum: - "continue" - "pause" Monitor: description: | Amount of time to monitor each rolled back task for failures, in nanoseconds. type: "integer" format: "int64" MaxFailureRatio: description: | The fraction of tasks that may fail during a rollback before the failure action is invoked, specified as a floating point number between 0 and 1. type: "number" default: 0 Order: description: | The order of operations when rolling back a task. Either the old task is shut down before the new task is started, or the new task is started before the old task is shut down. type: "string" enum: - "stop-first" - "start-first" Networks: description: "Specifies which networks the service should attach to." type: "array" items: $ref: "#/definitions/NetworkAttachmentConfig" EndpointSpec: $ref: "#/definitions/EndpointSpec" EndpointPortConfig: type: "object" properties: Name: type: "string" Protocol: type: "string" enum: - "tcp" - "udp" - "sctp" TargetPort: description: "The port inside the container." type: "integer" PublishedPort: description: "The port on the swarm hosts." type: "integer" PublishMode: description: | The mode in which port is published. <p><br /></p> - "ingress" makes the target port accessible on every node, regardless of whether there is a task for the service running on that node or not. - "host" bypasses the routing mesh and publish the port directly on the swarm node where that service is running. type: "string" enum: - "ingress" - "host" default: "ingress" example: "ingress" EndpointSpec: description: "Properties that can be configured to access and load balance a service." type: "object" properties: Mode: description: | The mode of resolution to use for internal load balancing between tasks. type: "string" enum: - "vip" - "dnsrr" default: "vip" Ports: description: | List of exposed ports that this service is accessible on from the outside. Ports can only be provided if `vip` resolution mode is used. type: "array" items: $ref: "#/definitions/EndpointPortConfig" Service: type: "object" properties: ID: type: "string" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" UpdatedAt: type: "string" format: "dateTime" Spec: $ref: "#/definitions/ServiceSpec" Endpoint: type: "object" properties: Spec: $ref: "#/definitions/EndpointSpec" Ports: type: "array" items: $ref: "#/definitions/EndpointPortConfig" VirtualIPs: type: "array" items: type: "object" properties: NetworkID: type: "string" Addr: type: "string" UpdateStatus: description: "The status of a service update." type: "object" properties: State: type: "string" enum: - "updating" - "paused" - "completed" StartedAt: type: "string" format: "dateTime" CompletedAt: type: "string" format: "dateTime" Message: type: "string" ServiceStatus: description: | The status of the service's tasks. Provided only when requested as part of a ServiceList operation. 
type: "object" properties: RunningTasks: description: | The number of tasks for the service currently in the Running state. type: "integer" format: "uint64" example: 7 DesiredTasks: description: | The number of tasks for the service desired to be running. For replicated services, this is the replica count from the service spec. For global services, this is computed by taking count of all tasks for the service with a Desired State other than Shutdown. type: "integer" format: "uint64" example: 10 CompletedTasks: description: | The number of tasks for a job that are in the Completed state. This field must be cross-referenced with the service type, as the value of 0 may mean the service is not in a job mode, or it may mean the job-mode service has no tasks yet Completed. type: "integer" format: "uint64" JobStatus: description: | The status of the service when it is in one of ReplicatedJob or GlobalJob modes. Absent on Replicated and Global mode services. The JobIteration is an ObjectVersion, but unlike the Service's version, does not need to be sent with an update request. type: "object" properties: JobIteration: description: | JobIteration is a value increased each time a Job is executed, successfully or otherwise. "Executed", in this case, means the job as a whole has been started, not that an individual Task has been launched. A job is "Executed" when its ServiceSpec is updated. JobIteration can be used to disambiguate Tasks belonging to different executions of a job. Though JobIteration will increase with each subsequent execution, it may not necessarily increase by 1, and so JobIteration should not be used to $ref: "#/definitions/ObjectVersion" LastExecution: description: | The last time, as observed by the server, that this job was started. type: "string" format: "dateTime" example: ID: "9mnpnzenvg8p8tdbtq4wvbkcz" Version: Index: 19 CreatedAt: "2016-06-07T21:05:51.880065305Z" UpdatedAt: "2016-06-07T21:07:29.962229872Z" Spec: Name: "hopeful_cori" TaskTemplate: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ForceUpdate: 0 Mode: Replicated: Replicas: 1 UpdateConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 RollbackConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 EndpointSpec: Mode: "vip" Ports: - Protocol: "tcp" TargetPort: 6379 PublishedPort: 30001 Endpoint: Spec: Mode: "vip" Ports: - Protocol: "tcp" TargetPort: 6379 PublishedPort: 30001 Ports: - Protocol: "tcp" TargetPort: 6379 PublishedPort: 30001 VirtualIPs: - NetworkID: "4qvuz4ko70xaltuqbt8956gd1" Addr: "10.255.0.2/16" - NetworkID: "4qvuz4ko70xaltuqbt8956gd1" Addr: "10.255.0.3/16" ImageDeleteResponseItem: type: "object" properties: Untagged: description: "The image ID of an image that was untagged" type: "string" Deleted: description: "The image ID of an image that was deleted" type: "string" ServiceUpdateResponse: type: "object" properties: Warnings: description: "Optional warning messages" type: "array" items: type: "string" example: Warning: "unable to pin image doesnotexist:latest to digest: image library/doesnotexist:latest not found" ContainerSummary: type: "array" items: type: "object" properties: Id: description: "The ID of this container" type: "string" x-go-name: "ID" Names: description: "The names that this container has been given" type: "array" items: type: "string" Image: description: "The name of the image used when creating 
this container" type: "string" ImageID: description: "The ID of the image that this container was created from" type: "string" Command: description: "Command to run when starting the container" type: "string" Created: description: "When the container was created" type: "integer" format: "int64" Ports: description: "The ports exposed by this container" type: "array" items: $ref: "#/definitions/Port" SizeRw: description: "The size of files that have been created or changed by this container" type: "integer" format: "int64" SizeRootFs: description: "The total size of all the files in this container" type: "integer" format: "int64" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" State: description: "The state of this container (e.g. `Exited`)" type: "string" Status: description: "Additional human-readable status of this container (e.g. `Exit 0`)" type: "string" HostConfig: type: "object" properties: NetworkMode: type: "string" NetworkSettings: description: "A summary of the container's network settings" type: "object" properties: Networks: type: "object" additionalProperties: $ref: "#/definitions/EndpointSettings" Mounts: type: "array" items: $ref: "#/definitions/Mount" Driver: description: "Driver represents a driver (network, logging, secrets)." type: "object" required: [Name] properties: Name: description: "Name of the driver." type: "string" x-nullable: false example: "some-driver" Options: description: "Key/value map of driver-specific options." type: "object" x-nullable: false additionalProperties: type: "string" example: OptionA: "value for driver-specific option A" OptionB: "value for driver-specific option B" SecretSpec: type: "object" properties: Name: description: "User-defined name of the secret." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" example: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Data: description: | Base64-url-safe-encoded ([RFC 4648](https://tools.ietf.org/html/rfc4648#section-5)) data to store as secret. This field is only used to _create_ a secret, and is not returned by other endpoints. type: "string" example: "" Driver: description: | Name of the secrets driver used to fetch the secret's value from an external secret store. $ref: "#/definitions/Driver" Templating: description: | Templating driver, if applicable Templating controls whether and how to evaluate the config payload as a template. If no driver is set, no templating is used. $ref: "#/definitions/Driver" Secret: type: "object" properties: ID: type: "string" example: "blt1owaxmitz71s9v5zh81zun" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" example: "2017-07-20T13:55:28.678958722Z" UpdatedAt: type: "string" format: "dateTime" example: "2017-07-20T13:55:28.678958722Z" Spec: $ref: "#/definitions/SecretSpec" ConfigSpec: type: "object" properties: Name: description: "User-defined name of the config." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" Data: description: | Base64-url-safe-encoded ([RFC 4648](https://tools.ietf.org/html/rfc4648#section-5)) config data. type: "string" Templating: description: | Templating driver, if applicable Templating controls whether and how to evaluate the config payload as a template. If no driver is set, no templating is used. 
$ref: "#/definitions/Driver" Config: type: "object" properties: ID: type: "string" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" UpdatedAt: type: "string" format: "dateTime" Spec: $ref: "#/definitions/ConfigSpec" ContainerState: description: | ContainerState stores container's running state. It's part of ContainerJSONBase and will be returned by the "inspect" command. type: "object" properties: Status: description: | String representation of the container state. Can be one of "created", "running", "paused", "restarting", "removing", "exited", or "dead". type: "string" enum: ["created", "running", "paused", "restarting", "removing", "exited", "dead"] example: "running" Running: description: | Whether this container is running. Note that a running container can be _paused_. The `Running` and `Paused` booleans are not mutually exclusive: When pausing a container (on Linux), the freezer cgroup is used to suspend all processes in the container. Freezing the process requires the process to be running. As a result, paused containers are both `Running` _and_ `Paused`. Use the `Status` field instead to determine if a container's state is "running". type: "boolean" example: true Paused: description: "Whether this container is paused." type: "boolean" example: false Restarting: description: "Whether this container is restarting." type: "boolean" example: false OOMKilled: description: | Whether this container has been killed because it ran out of memory. type: "boolean" example: false Dead: type: "boolean" example: false Pid: description: "The process ID of this container" type: "integer" example: 1234 ExitCode: description: "The last exit code of this container" type: "integer" example: 0 Error: type: "string" StartedAt: description: "The time when this container was last started." type: "string" example: "2020-01-06T09:06:59.461876391Z" FinishedAt: description: "The time when this container last exited." type: "string" example: "2020-01-06T09:07:59.461876391Z" Health: x-nullable: true $ref: "#/definitions/Health" SystemVersion: type: "object" description: | Response of Engine API: GET "/version" properties: Platform: type: "object" required: [Name] properties: Name: type: "string" Components: type: "array" description: | Information about system components items: type: "object" x-go-name: ComponentVersion required: [Name, Version] properties: Name: description: | Name of the component type: "string" example: "Engine" Version: description: | Version of the component type: "string" x-nullable: false example: "19.03.12" Details: description: | Key/value pairs of strings with additional information about the component. These values are intended for informational purposes only, and their content is not defined, and not part of the API specification. These messages can be printed by the client as information to the user. type: "object" x-nullable: true Version: description: "The version of the daemon" type: "string" example: "19.03.12" ApiVersion: description: | The default (and highest) API version that is supported by the daemon type: "string" example: "1.40" MinAPIVersion: description: | The minimum API version that is supported by the daemon type: "string" example: "1.12" GitCommit: description: | The Git commit of the source code that was used to build the daemon type: "string" example: "48a66213fe" GoVersion: description: | The version Go used to compile the daemon, and the version of the Go runtime in use. 
type: "string" example: "go1.13.14" Os: description: | The operating system that the daemon is running on ("linux" or "windows") type: "string" example: "linux" Arch: description: | The architecture that the daemon is running on type: "string" example: "amd64" KernelVersion: description: | The kernel version (`uname -r`) that the daemon is running on. This field is omitted when empty. type: "string" example: "4.19.76-linuxkit" Experimental: description: | Indicates if the daemon is started with experimental features enabled. This field is omitted when empty / false. type: "boolean" example: true BuildTime: description: | The date and time that the daemon was compiled. type: "string" example: "2020-06-22T15:49:27.000000000+00:00" SystemInfo: type: "object" properties: ID: description: | Unique identifier of the daemon. <p><br /></p> > **Note**: The format of the ID itself is not part of the API, and > should not be considered stable. type: "string" example: "7TRN:IPZB:QYBB:VPBQ:UMPP:KARE:6ZNR:XE6T:7EWV:PKF4:ZOJD:TPYS" Containers: description: "Total number of containers on the host." type: "integer" example: 14 ContainersRunning: description: | Number of containers with status `"running"`. type: "integer" example: 3 ContainersPaused: description: | Number of containers with status `"paused"`. type: "integer" example: 1 ContainersStopped: description: | Number of containers with status `"stopped"`. type: "integer" example: 10 Images: description: | Total number of images on the host. Both _tagged_ and _untagged_ (dangling) images are counted. type: "integer" example: 508 Driver: description: "Name of the storage driver in use." type: "string" example: "overlay2" DriverStatus: description: | Information specific to the storage driver, provided as "label" / "value" pairs. This information is provided by the storage driver, and formatted in a way consistent with the output of `docker info` on the command line. <p><br /></p> > **Note**: The information returned in this field, including the > formatting of values and labels, should not be considered stable, > and may change without notice. type: "array" items: type: "array" items: type: "string" example: - ["Backing Filesystem", "extfs"] - ["Supports d_type", "true"] - ["Native Overlay Diff", "true"] DockerRootDir: description: | Root directory of persistent Docker state. Defaults to `/var/lib/docker` on Linux, and `C:\ProgramData\docker` on Windows. type: "string" example: "/var/lib/docker" Plugins: $ref: "#/definitions/PluginsInfo" MemoryLimit: description: "Indicates if the host has memory limit support enabled." type: "boolean" example: true SwapLimit: description: "Indicates if the host has memory swap limit support enabled." type: "boolean" example: true KernelMemory: description: | Indicates if the host has kernel memory limit support enabled. <p><br /></p> > **Deprecated**: This field is deprecated as the kernel 5.4 deprecated > `kmem.limit_in_bytes`. type: "boolean" example: true CpuCfsPeriod: description: | Indicates if CPU CFS(Completely Fair Scheduler) period is supported by the host. type: "boolean" example: true CpuCfsQuota: description: | Indicates if CPU CFS(Completely Fair Scheduler) quota is supported by the host. type: "boolean" example: true CPUShares: description: | Indicates if CPU Shares limiting is supported by the host. type: "boolean" example: true CPUSet: description: | Indicates if CPUsets (cpuset.cpus, cpuset.mems) are supported by the host. 
See [cpuset(7)](https://www.kernel.org/doc/Documentation/cgroup-v1/cpusets.txt) type: "boolean" example: true PidsLimit: description: "Indicates if the host kernel has PID limit support enabled." type: "boolean" example: true OomKillDisable: description: "Indicates if OOM killer disable is supported on the host." type: "boolean" IPv4Forwarding: description: "Indicates IPv4 forwarding is enabled." type: "boolean" example: true BridgeNfIptables: description: "Indicates if `bridge-nf-call-iptables` is available on the host." type: "boolean" example: true BridgeNfIp6tables: description: "Indicates if `bridge-nf-call-ip6tables` is available on the host." type: "boolean" example: true Debug: description: | Indicates if the daemon is running in debug-mode / with debug-level logging enabled. type: "boolean" example: true NFd: description: | The total number of file Descriptors in use by the daemon process. This information is only returned if debug-mode is enabled. type: "integer" example: 64 NGoroutines: description: | The number of goroutines that currently exist. This information is only returned if debug-mode is enabled. type: "integer" example: 174 SystemTime: description: | Current system-time in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" example: "2017-08-08T20:28:29.06202363Z" LoggingDriver: description: | The logging driver to use as a default for new containers. type: "string" CgroupDriver: description: | The driver to use for managing cgroups. type: "string" enum: ["cgroupfs", "systemd", "none"] default: "cgroupfs" example: "cgroupfs" CgroupVersion: description: | The version of the cgroup. type: "string" enum: ["1", "2"] default: "1" example: "1" NEventsListener: description: "Number of event listeners subscribed." type: "integer" example: 30 KernelVersion: description: | Kernel version of the host. On Linux, this information obtained from `uname`. On Windows this information is queried from the <kbd>HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\</kbd> registry value, for example _"10.0 14393 (14393.1198.amd64fre.rs1_release_sec.170427-1353)"_. type: "string" example: "4.9.38-moby" OperatingSystem: description: | Name of the host's operating system, for example: "Ubuntu 16.04.2 LTS" or "Windows Server 2016 Datacenter" type: "string" example: "Alpine Linux v3.5" OSVersion: description: | Version of the host's operating system <p><br /></p> > **Note**: The information returned in this field, including its > very existence, and the formatting of values, should not be considered > stable, and may change without notice. type: "string" example: "16.04" OSType: description: | Generic type of the operating system of the host, as returned by the Go runtime (`GOOS`). Currently returned values are "linux" and "windows". A full list of possible values can be found in the [Go documentation](https://golang.org/doc/install/source#environment). type: "string" example: "linux" Architecture: description: | Hardware architecture of the host, as returned by the Go runtime (`GOARCH`). A full list of possible values can be found in the [Go documentation](https://golang.org/doc/install/source#environment). type: "string" example: "x86_64" NCPU: description: | The number of logical CPUs usable by the daemon. The number of available CPUs is checked by querying the operating system when the daemon starts. Changes to operating system CPU allocation after the daemon is started are not reflected. 
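A short sketch of consuming a few of the `SystemInfo` fields described above, including the label/value pairs of `DriverStatus`. The struct is an illustrative subset, and the sample values are trimmed from the examples in this section:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// systemInfo captures a small, illustrative subset of the SystemInfo fields
// described above; the full GET /info response carries many more.
type systemInfo struct {
	Driver        string     `json:"Driver"`
	DriverStatus  [][]string `json:"DriverStatus"` // "label" / "value" pairs
	OSType        string     `json:"OSType"`
	Architecture  string     `json:"Architecture"`
	NCPU          int        `json:"NCPU"`
	CgroupVersion string     `json:"CgroupVersion"`
}

func main() {
	// Trimmed-down sample of a GET /info response (values are illustrative).
	raw := []byte(`{
		"Driver": "overlay2",
		"DriverStatus": [["Backing Filesystem", "extfs"], ["Supports d_type", "true"]],
		"OSType": "linux",
		"Architecture": "x86_64",
		"NCPU": 4,
		"CgroupVersion": "2"
	}`)

	var info systemInfo
	if err := json.Unmarshal(raw, &info); err != nil {
		panic(err)
	}

	fmt.Printf("%s/%s, %d CPUs, cgroup v%s\n", info.OSType, info.Architecture, info.NCPU, info.CgroupVersion)
	for _, kv := range info.DriverStatus {
		fmt.Printf("%s: %s = %s\n", info.Driver, kv[0], kv[1])
	}
}
```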
type: "integer" example: 4 MemTotal: description: | Total amount of physical memory available on the host, in bytes. type: "integer" format: "int64" example: 2095882240 IndexServerAddress: description: | Address / URL of the index server that is used for image search, and as a default for user authentication for Docker Hub and Docker Cloud. default: "https://index.docker.io/v1/" type: "string" example: "https://index.docker.io/v1/" RegistryConfig: $ref: "#/definitions/RegistryServiceConfig" GenericResources: $ref: "#/definitions/GenericResources" HttpProxy: description: | HTTP-proxy configured for the daemon. This value is obtained from the [`HTTP_PROXY`](https://www.gnu.org/software/wget/manual/html_node/Proxies.html) environment variable. Credentials ([user info component](https://tools.ietf.org/html/rfc3986#section-3.2.1)) in the proxy URL are masked in the API response. Containers do not automatically inherit this configuration. type: "string" example: "http://xxxxx:[email protected]:8080" HttpsProxy: description: | HTTPS-proxy configured for the daemon. This value is obtained from the [`HTTPS_PROXY`](https://www.gnu.org/software/wget/manual/html_node/Proxies.html) environment variable. Credentials ([user info component](https://tools.ietf.org/html/rfc3986#section-3.2.1)) in the proxy URL are masked in the API response. Containers do not automatically inherit this configuration. type: "string" example: "https://xxxxx:[email protected]:4443" NoProxy: description: | Comma-separated list of domain extensions for which no proxy should be used. This value is obtained from the [`NO_PROXY`](https://www.gnu.org/software/wget/manual/html_node/Proxies.html) environment variable. Containers do not automatically inherit this configuration. type: "string" example: "*.local, 169.254/16" Name: description: "Hostname of the host." type: "string" example: "node5.corp.example.com" Labels: description: | User-defined labels (key/value metadata) as set on the daemon. <p><br /></p> > **Note**: When part of a Swarm, nodes can both have _daemon_ labels, > set through the daemon configuration, and _node_ labels, set from a > manager node in the Swarm. Node labels are not included in this > field. Node labels can be retrieved using the `/nodes/(id)` endpoint > on a manager node in the Swarm. type: "array" items: type: "string" example: ["storage=ssd", "production"] ExperimentalBuild: description: | Indicates if experimental features are enabled on the daemon. type: "boolean" example: true ServerVersion: description: | Version string of the daemon. > **Note**: the [standalone Swarm API](https://docs.docker.com/swarm/swarm-api/) > returns the Swarm version instead of the daemon version, for example > `swarm/1.2.8`. type: "string" example: "17.06.0-ce" ClusterStore: description: | URL of the distributed storage backend. The storage backend is used for multihost networking (to store network and endpoint information) and by the node discovery mechanism. <p><br /></p> > **Deprecated**: This field is only propagated when using standalone Swarm > mode, and overlay networking using an external k/v store. Overlay > networks with Swarm mode enabled use the built-in raft store, and > this field will be empty. type: "string" example: "consul://consul.corp.example.com:8600/some/path" ClusterAdvertise: description: | The network endpoint that the Engine advertises for the purpose of node discovery. ClusterAdvertise is a `host:port` combination on which the daemon is reachable by other hosts. 
<p><br /></p> > **Deprecated**: This field is only propagated when using standalone Swarm > mode, and overlay networking using an external k/v store. Overlay > networks with Swarm mode enabled use the built-in raft store, and > this field will be empty. type: "string" example: "node5.corp.example.com:8000" Runtimes: description: | List of [OCI compliant](https://github.com/opencontainers/runtime-spec) runtimes configured on the daemon. Keys hold the "name" used to reference the runtime. The Docker daemon relies on an OCI compliant runtime (invoked via the `containerd` daemon) as its interface to the Linux kernel namespaces, cgroups, and SELinux. The default runtime is `runc`, and automatically configured. Additional runtimes can be configured by the user and will be listed here. type: "object" additionalProperties: $ref: "#/definitions/Runtime" default: runc: path: "runc" example: runc: path: "runc" runc-master: path: "/go/bin/runc" custom: path: "/usr/local/bin/my-oci-runtime" runtimeArgs: ["--debug", "--systemd-cgroup=false"] DefaultRuntime: description: | Name of the default OCI runtime that is used when starting containers. The default can be overridden per-container at create time. type: "string" default: "runc" example: "runc" Swarm: $ref: "#/definitions/SwarmInfo" LiveRestoreEnabled: description: | Indicates if live restore is enabled. If enabled, containers are kept running when the daemon is shutdown or upon daemon start if running containers are detected. type: "boolean" default: false example: false Isolation: description: | Represents the isolation technology to use as a default for containers. The supported values are platform-specific. If no isolation value is specified on daemon start, on Windows client, the default is `hyperv`, and on Windows server, the default is `process`. This option is currently not used on other platforms. default: "default" type: "string" enum: - "default" - "hyperv" - "process" InitBinary: description: | Name and, optional, path of the `docker-init` binary. If the path is omitted, the daemon searches the host's `$PATH` for the binary and uses the first result. type: "string" example: "docker-init" ContainerdCommit: $ref: "#/definitions/Commit" RuncCommit: $ref: "#/definitions/Commit" InitCommit: $ref: "#/definitions/Commit" SecurityOptions: description: | List of security features that are enabled on the daemon, such as apparmor, seccomp, SELinux, user-namespaces (userns), and rootless. Additional configuration options for each security feature may be present, and are included as a comma-separated list of key/value pairs. type: "array" items: type: "string" example: - "name=apparmor" - "name=seccomp,profile=default" - "name=selinux" - "name=userns" - "name=rootless" ProductLicense: description: | Reports a summary of the product license on the daemon. If a commercial license has been applied to the daemon, information such as number of nodes, and expiration are included. type: "string" example: "Community Engine" DefaultAddressPools: description: | List of custom default address pools for local networks, which can be specified in the daemon.json file or dockerd option. Example: a Base "10.10.0.0/16" with Size 24 will define the set of 256 10.10.[0-255].0/24 address pools. 
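The `DefaultAddressPools` example above (Base `10.10.0.0/16`, Size 24) yields 2^(24-16) = 256 pools. A small sketch verifying the arithmetic and enumerating a few of the resulting /24 networks:

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	base := netip.MustParsePrefix("10.10.0.0/16")
	size := 24 // pool size from the example above

	// Each pool is a /size network carved out of the base prefix.
	count := 1 << (size - base.Bits())
	fmt.Printf("%s split into /%d pools: %d pools\n", base, size, count)

	// Enumerating via the third octet only works for this /16 -> /24 example,
	// where at most 256 pools exist.
	a := base.Addr().As4()
	for i := 0; i < count; i++ {
		pool := netip.PrefixFrom(netip.AddrFrom4([4]byte{a[0], a[1], byte(i), 0}), size)
		if i < 2 || i == count-1 {
			fmt.Println(pool) // print a few for illustration
		}
	}
}
```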
type: "array" items: type: "object" properties: Base: description: "The network address in CIDR format" type: "string" example: "10.10.0.0/16" Size: description: "The network pool size" type: "integer" example: "24" Warnings: description: | List of warnings / informational messages about missing features, or issues related to the daemon configuration. These messages can be printed by the client as information to the user. type: "array" items: type: "string" example: - "WARNING: No memory limit support" - "WARNING: bridge-nf-call-iptables is disabled" - "WARNING: bridge-nf-call-ip6tables is disabled" # PluginsInfo is a temp struct holding Plugins name # registered with docker daemon. It is used by Info struct PluginsInfo: description: | Available plugins per type. <p><br /></p> > **Note**: Only unmanaged (V1) plugins are included in this list. > V1 plugins are "lazily" loaded, and are not returned in this list > if there is no resource using the plugin. type: "object" properties: Volume: description: "Names of available volume-drivers, and network-driver plugins." type: "array" items: type: "string" example: ["local"] Network: description: "Names of available network-drivers, and network-driver plugins." type: "array" items: type: "string" example: ["bridge", "host", "ipvlan", "macvlan", "null", "overlay"] Authorization: description: "Names of available authorization plugins." type: "array" items: type: "string" example: ["img-authz-plugin", "hbm"] Log: description: "Names of available logging-drivers, and logging-driver plugins." type: "array" items: type: "string" example: ["awslogs", "fluentd", "gcplogs", "gelf", "journald", "json-file", "logentries", "splunk", "syslog"] RegistryServiceConfig: description: | RegistryServiceConfig stores daemon registry services configuration. type: "object" x-nullable: true properties: AllowNondistributableArtifactsCIDRs: description: | List of IP ranges to which nondistributable artifacts can be pushed, using the CIDR syntax [RFC 4632](https://tools.ietf.org/html/4632). Some images (for example, Windows base images) contain artifacts whose distribution is restricted by license. When these images are pushed to a registry, restricted artifacts are not included. This configuration override this behavior, and enables the daemon to push nondistributable artifacts to all registries whose resolved IP address is within the subnet described by the CIDR syntax. This option is useful when pushing images containing nondistributable artifacts to a registry on an air-gapped network so hosts on that network can pull the images without connecting to another server. > **Warning**: Nondistributable artifacts typically have restrictions > on how and where they can be distributed and shared. Only use this > feature to push artifacts to private registries and ensure that you > are in compliance with any terms that cover redistributing > nondistributable artifacts. type: "array" items: type: "string" example: ["::1/128", "127.0.0.0/8"] AllowNondistributableArtifactsHostnames: description: | List of registry hostnames to which nondistributable artifacts can be pushed, using the format `<hostname>[:<port>]` or `<IP address>[:<port>]`. Some images (for example, Windows base images) contain artifacts whose distribution is restricted by license. When these images are pushed to a registry, restricted artifacts are not included. This configuration override this behavior for the specified registries. 
This option is useful when pushing images containing nondistributable artifacts to a registry on an air-gapped network so hosts on that network can pull the images without connecting to another server. > **Warning**: Nondistributable artifacts typically have restrictions > on how and where they can be distributed and shared. Only use this > feature to push artifacts to private registries and ensure that you > are in compliance with any terms that cover redistributing > nondistributable artifacts. type: "array" items: type: "string" example: ["registry.internal.corp.example.com:3000", "[2001:db8:a0b:12f0::1]:443"] InsecureRegistryCIDRs: description: | List of IP ranges of insecure registries, using the CIDR syntax ([RFC 4632](https://tools.ietf.org/html/4632)). Insecure registries accept un-encrypted (HTTP) and/or untrusted (HTTPS with certificates from unknown CAs) communication. By default, local registries (`127.0.0.0/8`) are configured as insecure. All other registries are secure. Communicating with an insecure registry is not possible if the daemon assumes that registry is secure. This configuration override this behavior, insecure communication with registries whose resolved IP address is within the subnet described by the CIDR syntax. Registries can also be marked insecure by hostname. Those registries are listed under `IndexConfigs` and have their `Secure` field set to `false`. > **Warning**: Using this option can be useful when running a local > registry, but introduces security vulnerabilities. This option > should therefore ONLY be used for testing purposes. For increased > security, users should add their CA to their system's list of trusted > CAs instead of enabling this option. type: "array" items: type: "string" example: ["::1/128", "127.0.0.0/8"] IndexConfigs: type: "object" additionalProperties: $ref: "#/definitions/IndexInfo" example: "127.0.0.1:5000": "Name": "127.0.0.1:5000" "Mirrors": [] "Secure": false "Official": false "[2001:db8:a0b:12f0::1]:80": "Name": "[2001:db8:a0b:12f0::1]:80" "Mirrors": [] "Secure": false "Official": false "docker.io": Name: "docker.io" Mirrors: ["https://hub-mirror.corp.example.com:5000/"] Secure: true Official: true "registry.internal.corp.example.com:3000": Name: "registry.internal.corp.example.com:3000" Mirrors: [] Secure: false Official: false Mirrors: description: | List of registry URLs that act as a mirror for the official (`docker.io`) registry. type: "array" items: type: "string" example: - "https://hub-mirror.corp.example.com:5000/" - "https://[2001:db8:a0b:12f0::1]/" IndexInfo: description: IndexInfo contains information about a registry. type: "object" x-nullable: true properties: Name: description: | Name of the registry, such as "docker.io". type: "string" example: "docker.io" Mirrors: description: | List of mirrors, expressed as URIs. type: "array" items: type: "string" example: - "https://hub-mirror.corp.example.com:5000/" - "https://registry-2.docker.io/" - "https://registry-3.docker.io/" Secure: description: | Indicates if the registry is part of the list of insecure registries. If `false`, the registry is insecure. Insecure registries accept un-encrypted (HTTP) and/or untrusted (HTTPS with certificates from unknown CAs) communication. > **Warning**: Insecure registries can be useful when running a local > registry. However, because its use creates security vulnerabilities > it should ONLY be enabled for testing purposes. 
For increased > security, users should add their CA to their system's list of > trusted CAs instead of enabling this option. type: "boolean" example: true Official: description: | Indicates whether this is an official registry (i.e., Docker Hub / docker.io) type: "boolean" example: true Runtime: description: | Runtime describes an [OCI compliant](https://github.com/opencontainers/runtime-spec) runtime. The runtime is invoked by the daemon via the `containerd` daemon. OCI runtimes act as an interface to the Linux kernel namespaces, cgroups, and SELinux. type: "object" properties: path: description: | Name and, optional, path, of the OCI executable binary. If the path is omitted, the daemon searches the host's `$PATH` for the binary and uses the first result. type: "string" example: "/usr/local/bin/my-oci-runtime" runtimeArgs: description: | List of command-line arguments to pass to the runtime when invoked. type: "array" x-nullable: true items: type: "string" example: ["--debug", "--systemd-cgroup=false"] Commit: description: | Commit holds the Git-commit (SHA1) that a binary was built from, as reported in the version-string of external tools, such as `containerd`, or `runC`. type: "object" properties: ID: description: "Actual commit ID of external tool." type: "string" example: "cfb82a876ecc11b5ca0977d1733adbe58599088a" Expected: description: | Commit ID of external tool expected by dockerd as set at build time. type: "string" example: "2d41c047c83e09a6d61d464906feb2a2f3c52aa4" SwarmInfo: description: | Represents generic information about swarm. type: "object" properties: NodeID: description: "Unique identifier of for this node in the swarm." type: "string" default: "" example: "k67qz4598weg5unwwffg6z1m1" NodeAddr: description: | IP address at which this node can be reached by other nodes in the swarm. type: "string" default: "" example: "10.0.0.46" LocalNodeState: $ref: "#/definitions/LocalNodeState" ControlAvailable: type: "boolean" default: false example: true Error: type: "string" default: "" RemoteManagers: description: | List of ID's and addresses of other managers in the swarm. type: "array" default: null x-nullable: true items: $ref: "#/definitions/PeerNode" example: - NodeID: "71izy0goik036k48jg985xnds" Addr: "10.0.0.158:2377" - NodeID: "79y6h1o4gv8n120drcprv5nmc" Addr: "10.0.0.159:2377" - NodeID: "k67qz4598weg5unwwffg6z1m1" Addr: "10.0.0.46:2377" Nodes: description: "Total number of nodes in the swarm." type: "integer" x-nullable: true example: 4 Managers: description: "Total number of managers in the swarm." type: "integer" x-nullable: true example: 3 Cluster: $ref: "#/definitions/ClusterInfo" LocalNodeState: description: "Current local status of this node." type: "string" default: "" enum: - "" - "inactive" - "pending" - "active" - "error" - "locked" example: "active" PeerNode: description: "Represents a peer-node in the swarm" properties: NodeID: description: "Unique identifier of for this node in the swarm." type: "string" Addr: description: | IP address and ports at which this node can be reached. type: "string" NetworkAttachmentConfig: description: | Specifies how a service should be attached to a particular network. type: "object" properties: Target: description: | The target network for attachment. Must be a network name or ID. type: "string" Aliases: description: | Discoverable alternate names for the service on this network. type: "array" items: type: "string" DriverOpts: description: | Driver attachment options for the network target. 
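A minimal sketch of building a `NetworkAttachmentConfig` as described above, using an illustrative struct and a hypothetical network name; the resulting object would appear in the `Networks` list of a service's task template:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// networkAttachment mirrors the NetworkAttachmentConfig fields described
// above; it is an illustrative struct, not the official client type.
type networkAttachment struct {
	Target     string            `json:"Target"`
	Aliases    []string          `json:"Aliases,omitempty"`
	DriverOpts map[string]string `json:"DriverOpts,omitempty"`
}

func main() {
	// Attach a service to a (hypothetical) network under two extra names.
	attachment := networkAttachment{
		Target:  "backend-net",
		Aliases: []string{"server_x", "server_y"},
		DriverOpts: map[string]string{
			"com.example.some-option": "some-value",
		},
	}

	body, _ := json.MarshalIndent(attachment, "", "  ")
	fmt.Println(string(body))
}
```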
type: "object" additionalProperties: type: "string" paths: /containers/json: get: summary: "List containers" description: | Returns a list of containers. For details on the format, see the [inspect endpoint](#operation/ContainerInspect). Note that it uses a different, smaller representation of a container than inspecting a single container. For example, the list of linked containers is not propagated . operationId: "ContainerList" produces: - "application/json" parameters: - name: "all" in: "query" description: | Return all containers. By default, only running containers are shown. type: "boolean" default: false - name: "limit" in: "query" description: | Return this number of most recently created containers, including non-running ones. type: "integer" - name: "size" in: "query" description: | Return the size of container as fields `SizeRw` and `SizeRootFs`. type: "boolean" default: false - name: "filters" in: "query" description: | Filters to process on the container list, encoded as JSON (a `map[string][]string`). For example, `{"status": ["paused"]}` will only return paused containers. Available filters: - `ancestor`=(`<image-name>[:<tag>]`, `<image id>`, or `<image@digest>`) - `before`=(`<container id>` or `<container name>`) - `expose`=(`<port>[/<proto>]`|`<startport-endport>/[<proto>]`) - `exited=<int>` containers with exit code of `<int>` - `health`=(`starting`|`healthy`|`unhealthy`|`none`) - `id=<ID>` a container's ID - `isolation=`(`default`|`process`|`hyperv`) (Windows daemon only) - `is-task=`(`true`|`false`) - `label=key` or `label="key=value"` of a container label - `name=<name>` a container's name - `network`=(`<network id>` or `<network name>`) - `publish`=(`<port>[/<proto>]`|`<startport-endport>/[<proto>]`) - `since`=(`<container id>` or `<container name>`) - `status=`(`created`|`restarting`|`running`|`removing`|`paused`|`exited`|`dead`) - `volume`=(`<volume name>` or `<mount point destination>`) type: "string" responses: 200: description: "no error" schema: $ref: "#/definitions/ContainerSummary" examples: application/json: - Id: "8dfafdbc3a40" Names: - "/boring_feynman" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 1" Created: 1367854155 State: "Exited" Status: "Exit 0" Ports: - PrivatePort: 2222 PublicPort: 3333 Type: "tcp" Labels: com.example.vendor: "Acme" com.example.license: "GPL" com.example.version: "1.0" SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "2cdc4edb1ded3631c81f57966563e5c8525b81121bb3706a9a9a3ae102711f3f" Gateway: "172.17.0.1" IPAddress: "172.17.0.2" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:02" Mounts: - Name: "fac362...80535" Source: "/data" Destination: "/data" Driver: "local" Mode: "ro,Z" RW: false Propagation: "" - Id: "9cd87474be90" Names: - "/coolName" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 222222" Created: 1367854155 State: "Exited" Status: "Exit 0" Ports: [] Labels: {} SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "88eaed7b37b38c2a3f0c4bc796494fdf51b270c2d22656412a2ca5d559a64d7a" Gateway: "172.17.0.1" IPAddress: "172.17.0.8" IPPrefixLen: 16 IPv6Gateway: "" 
GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:08" Mounts: [] - Id: "3176a2479c92" Names: - "/sleepy_dog" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 3333333333333333" Created: 1367854154 State: "Exited" Status: "Exit 0" Ports: [] Labels: {} SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "8b27c041c30326d59cd6e6f510d4f8d1d570a228466f956edf7815508f78e30d" Gateway: "172.17.0.1" IPAddress: "172.17.0.6" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:06" Mounts: [] - Id: "4cb07b47f9fb" Names: - "/running_cat" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 444444444444444444444444444444444" Created: 1367854152 State: "Exited" Status: "Exit 0" Ports: [] Labels: {} SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "d91c7b2f0644403d7ef3095985ea0e2370325cd2332ff3a3225c4247328e66e9" Gateway: "172.17.0.1" IPAddress: "172.17.0.5" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:05" Mounts: [] 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Container"] /containers/create: post: summary: "Create a container" operationId: "ContainerCreate" consumes: - "application/json" - "application/octet-stream" produces: - "application/json" parameters: - name: "name" in: "query" description: | Assign the specified name to the container. Must match `/?[a-zA-Z0-9][a-zA-Z0-9_.-]+`. 
type: "string" pattern: "^/?[a-zA-Z0-9][a-zA-Z0-9_.-]+$" - name: "body" in: "body" description: "Container to create" schema: allOf: - $ref: "#/definitions/ContainerConfig" - type: "object" properties: HostConfig: $ref: "#/definitions/HostConfig" NetworkingConfig: $ref: "#/definitions/NetworkingConfig" example: Hostname: "" Domainname: "" User: "" AttachStdin: false AttachStdout: true AttachStderr: true Tty: false OpenStdin: false StdinOnce: false Env: - "FOO=bar" - "BAZ=quux" Cmd: - "date" Entrypoint: "" Image: "ubuntu" Labels: com.example.vendor: "Acme" com.example.license: "GPL" com.example.version: "1.0" Volumes: /volumes/data: {} WorkingDir: "" NetworkDisabled: false MacAddress: "12:34:56:78:9a:bc" ExposedPorts: 22/tcp: {} StopSignal: "SIGTERM" StopTimeout: 10 HostConfig: Binds: - "/tmp:/tmp" Links: - "redis3:redis" Memory: 0 MemorySwap: 0 MemoryReservation: 0 KernelMemory: 0 NanoCpus: 500000 CpuPercent: 80 CpuShares: 512 CpuPeriod: 100000 CpuRealtimePeriod: 1000000 CpuRealtimeRuntime: 10000 CpuQuota: 50000 CpusetCpus: "0,1" CpusetMems: "0,1" MaximumIOps: 0 MaximumIOBps: 0 BlkioWeight: 300 BlkioWeightDevice: - {} BlkioDeviceReadBps: - {} BlkioDeviceReadIOps: - {} BlkioDeviceWriteBps: - {} BlkioDeviceWriteIOps: - {} DeviceRequests: - Driver: "nvidia" Count: -1 DeviceIDs": ["0", "1", "GPU-fef8089b-4820-abfc-e83e-94318197576e"] Capabilities: [["gpu", "nvidia", "compute"]] Options: property1: "string" property2: "string" MemorySwappiness: 60 OomKillDisable: false OomScoreAdj: 500 PidMode: "" PidsLimit: 0 PortBindings: 22/tcp: - HostPort: "11022" PublishAllPorts: false Privileged: false ReadonlyRootfs: false Dns: - "8.8.8.8" DnsOptions: - "" DnsSearch: - "" VolumesFrom: - "parent" - "other:ro" CapAdd: - "NET_ADMIN" CapDrop: - "MKNOD" GroupAdd: - "newgroup" RestartPolicy: Name: "" MaximumRetryCount: 0 AutoRemove: true NetworkMode: "bridge" Devices: [] Ulimits: - {} LogConfig: Type: "json-file" Config: {} SecurityOpt: [] StorageOpt: {} CgroupParent: "" VolumeDriver: "" ShmSize: 67108864 NetworkingConfig: EndpointsConfig: isolated_nw: IPAMConfig: IPv4Address: "172.20.30.33" IPv6Address: "2001:db8:abcd::3033" LinkLocalIPs: - "169.254.34.68" - "fe80::3468" Links: - "container_1" - "container_2" Aliases: - "server_x" - "server_y" required: true responses: 201: description: "Container created successfully" schema: type: "object" title: "ContainerCreateResponse" description: "OK response to ContainerCreate operation" required: [Id, Warnings] properties: Id: description: "The ID of the created container" type: "string" x-nullable: false Warnings: description: "Warnings encountered when creating the container" type: "array" x-nullable: false items: type: "string" examples: application/json: Id: "e90e34656806" Warnings: [] 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such image" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such image: c2ada9df5af8" 409: description: "conflict" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Container"] /containers/{id}/json: get: summary: "Inspect a container" description: "Return low-level information about a container." 
operationId: "ContainerInspect" produces: - "application/json" responses: 200: description: "no error" schema: type: "object" title: "ContainerInspectResponse" properties: Id: description: "The ID of the container" type: "string" Created: description: "The time the container was created" type: "string" Path: description: "The path to the command being run" type: "string" Args: description: "The arguments to the command being run" type: "array" items: type: "string" State: x-nullable: true $ref: "#/definitions/ContainerState" Image: description: "The container's image ID" type: "string" ResolvConfPath: type: "string" HostnamePath: type: "string" HostsPath: type: "string" LogPath: type: "string" Name: type: "string" RestartCount: type: "integer" Driver: type: "string" Platform: type: "string" MountLabel: type: "string" ProcessLabel: type: "string" AppArmorProfile: type: "string" ExecIDs: description: "IDs of exec instances that are running in the container." type: "array" items: type: "string" x-nullable: true HostConfig: $ref: "#/definitions/HostConfig" GraphDriver: $ref: "#/definitions/GraphDriverData" SizeRw: description: | The size of files that have been created or changed by this container. type: "integer" format: "int64" SizeRootFs: description: "The total size of all the files in this container." type: "integer" format: "int64" Mounts: type: "array" items: $ref: "#/definitions/MountPoint" Config: $ref: "#/definitions/ContainerConfig" NetworkSettings: $ref: "#/definitions/NetworkSettings" examples: application/json: AppArmorProfile: "" Args: - "-c" - "exit 9" Config: AttachStderr: true AttachStdin: false AttachStdout: true Cmd: - "/bin/sh" - "-c" - "exit 9" Domainname: "" Env: - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" Healthcheck: Test: ["CMD-SHELL", "exit 0"] Hostname: "ba033ac44011" Image: "ubuntu" Labels: com.example.vendor: "Acme" com.example.license: "GPL" com.example.version: "1.0" MacAddress: "" NetworkDisabled: false OpenStdin: false StdinOnce: false Tty: false User: "" Volumes: /volumes/data: {} WorkingDir: "" StopSignal: "SIGTERM" StopTimeout: 10 Created: "2015-01-06T15:47:31.485331387Z" Driver: "devicemapper" ExecIDs: - "b35395de42bc8abd327f9dd65d913b9ba28c74d2f0734eeeae84fa1c616a0fca" - "3fc1232e5cd20c8de182ed81178503dc6437f4e7ef12b52cc5e8de020652f1c4" HostConfig: MaximumIOps: 0 MaximumIOBps: 0 BlkioWeight: 0 BlkioWeightDevice: - {} BlkioDeviceReadBps: - {} BlkioDeviceWriteBps: - {} BlkioDeviceReadIOps: - {} BlkioDeviceWriteIOps: - {} ContainerIDFile: "" CpusetCpus: "" CpusetMems: "" CpuPercent: 80 CpuShares: 0 CpuPeriod: 100000 CpuRealtimePeriod: 1000000 CpuRealtimeRuntime: 10000 Devices: [] DeviceRequests: - Driver: "nvidia" Count: -1 DeviceIDs": ["0", "1", "GPU-fef8089b-4820-abfc-e83e-94318197576e"] Capabilities: [["gpu", "nvidia", "compute"]] Options: property1: "string" property2: "string" IpcMode: "" LxcConf: [] Memory: 0 MemorySwap: 0 MemoryReservation: 0 KernelMemory: 0 OomKillDisable: false OomScoreAdj: 500 NetworkMode: "bridge" PidMode: "" PortBindings: {} Privileged: false ReadonlyRootfs: false PublishAllPorts: false RestartPolicy: MaximumRetryCount: 2 Name: "on-failure" LogConfig: Type: "json-file" Sysctls: net.ipv4.ip_forward: "1" Ulimits: - {} VolumeDriver: "" ShmSize: 67108864 HostnamePath: "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/hostname" HostsPath: "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/hosts" LogPath: 
"/var/lib/docker/containers/1eb5fabf5a03807136561b3c00adcd2992b535d624d5e18b6cdc6a6844d9767b/1eb5fabf5a03807136561b3c00adcd2992b535d624d5e18b6cdc6a6844d9767b-json.log" Id: "ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39" Image: "04c5d3b7b0656168630d3ba35d8889bd0e9caafcaeb3004d2bfbc47e7c5d35d2" MountLabel: "" Name: "/boring_euclid" NetworkSettings: Bridge: "" SandboxID: "" HairpinMode: false LinkLocalIPv6Address: "" LinkLocalIPv6PrefixLen: 0 SandboxKey: "" EndpointID: "" Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 IPAddress: "" IPPrefixLen: 0 IPv6Gateway: "" MacAddress: "" Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "7587b82f0dada3656fda26588aee72630c6fab1536d36e394b2bfbcf898c971d" Gateway: "172.17.0.1" IPAddress: "172.17.0.2" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:12:00:02" Path: "/bin/sh" ProcessLabel: "" ResolvConfPath: "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/resolv.conf" RestartCount: 1 State: Error: "" ExitCode: 9 FinishedAt: "2015-01-06T15:47:32.080254511Z" Health: Status: "healthy" FailingStreak: 0 Log: - Start: "2019-12-22T10:59:05.6385933Z" End: "2019-12-22T10:59:05.8078452Z" ExitCode: 0 Output: "" OOMKilled: false Dead: false Paused: false Pid: 0 Restarting: false Running: true StartedAt: "2015-01-06T15:47:32.072697474Z" Status: "running" Mounts: - Name: "fac362...80535" Source: "/data" Destination: "/data" Driver: "local" Mode: "ro,Z" RW: false Propagation: "" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "size" in: "query" type: "boolean" default: false description: "Return the size of container as fields `SizeRw` and `SizeRootFs`" tags: ["Container"] /containers/{id}/top: get: summary: "List processes running inside a container" description: | On Unix systems, this is done by running the `ps` command. This endpoint is not supported on Windows. operationId: "ContainerTop" responses: 200: description: "no error" schema: type: "object" title: "ContainerTopResponse" description: "OK response to ContainerTop operation" properties: Titles: description: "The ps column titles" type: "array" items: type: "string" Processes: description: | Each process running in the container, where each is process is an array of values corresponding to the titles. type: "array" items: type: "array" items: type: "string" examples: application/json: Titles: - "UID" - "PID" - "PPID" - "C" - "STIME" - "TTY" - "TIME" - "CMD" Processes: - - "root" - "13642" - "882" - "0" - "17:03" - "pts/0" - "00:00:00" - "/bin/bash" - - "root" - "13735" - "13642" - "0" - "17:06" - "pts/0" - "00:00:00" - "sleep 10" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "ps_args" in: "query" description: "The arguments to pass to `ps`. 
For example, `aux`" type: "string" default: "-ef" tags: ["Container"] /containers/{id}/logs: get: summary: "Get container logs" description: | Get `stdout` and `stderr` logs from a container. Note: This endpoint works only for containers with the `json-file` or `journald` logging driver. operationId: "ContainerLogs" responses: 200: description: | logs returned as a stream in response body. For the stream format, [see the documentation for the attach endpoint](#operation/ContainerAttach). Note that unlike the attach endpoint, the logs endpoint does not upgrade the connection and does not set Content-Type. schema: type: "string" format: "binary" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "follow" in: "query" description: "Keep connection after returning logs." type: "boolean" default: false - name: "stdout" in: "query" description: "Return logs from `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Return logs from `stderr`" type: "boolean" default: false - name: "since" in: "query" description: "Only return logs since this time, as a UNIX timestamp" type: "integer" default: 0 - name: "until" in: "query" description: "Only return logs before this time, as a UNIX timestamp" type: "integer" default: 0 - name: "timestamps" in: "query" description: "Add timestamps to every log line" type: "boolean" default: false - name: "tail" in: "query" description: | Only return this number of log lines from the end of the logs. Specify as an integer or `all` to output all log lines. type: "string" default: "all" tags: ["Container"] /containers/{id}/changes: get: summary: "Get changes on a container’s filesystem" description: | Returns which files in a container's filesystem have been added, deleted, or modified. The `Kind` of modification can be one of: - `0`: Modified - `1`: Added - `2`: Deleted operationId: "ContainerChanges" produces: ["application/json"] responses: 200: description: "The list of changes" schema: type: "array" items: type: "object" x-go-name: "ContainerChangeResponseItem" title: "ContainerChangeResponseItem" description: "change item in response to ContainerChanges operation" required: [Path, Kind] properties: Path: description: "Path to file that has changed" type: "string" x-nullable: false Kind: description: "Kind of change" type: "integer" format: "uint8" enum: [0, 1, 2] x-nullable: false examples: application/json: - Path: "/dev" Kind: 0 - Path: "/dev/kmsg" Kind: 1 - Path: "/test" Kind: 1 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/export: get: summary: "Export a container" description: "Export the contents of a container as a tarball." 
operationId: "ContainerExport" produces: - "application/octet-stream" responses: 200: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/stats: get: summary: "Get container stats based on resource usage" description: | This endpoint returns a live stream of a container’s resource usage statistics. The `precpu_stats` is the CPU statistic of the *previous* read, and is used to calculate the CPU usage percentage. It is not an exact copy of the `cpu_stats` field. If either `precpu_stats.online_cpus` or `cpu_stats.online_cpus` is nil then for compatibility with older daemons the length of the corresponding `cpu_usage.percpu_usage` array should be used. On a cgroup v2 host, the following fields are not set * `blkio_stats`: all fields other than `io_service_bytes_recursive` * `cpu_stats`: `cpu_usage.percpu_usage` * `memory_stats`: `max_usage` and `failcnt` Also, `memory_stats.stats` fields are incompatible with cgroup v1. To calculate the values shown by the `stats` command of the docker cli tool the following formulas can be used: * used_memory = `memory_stats.usage - memory_stats.stats.cache` * available_memory = `memory_stats.limit` * Memory usage % = `(used_memory / available_memory) * 100.0` * cpu_delta = `cpu_stats.cpu_usage.total_usage - precpu_stats.cpu_usage.total_usage` * system_cpu_delta = `cpu_stats.system_cpu_usage - precpu_stats.system_cpu_usage` * number_cpus = `lenght(cpu_stats.cpu_usage.percpu_usage)` or `cpu_stats.online_cpus` * CPU usage % = `(cpu_delta / system_cpu_delta) * number_cpus * 100.0` operationId: "ContainerStats" produces: ["application/json"] responses: 200: description: "no error" schema: type: "object" examples: application/json: read: "2015-01-08T22:57:31.547920715Z" pids_stats: current: 3 networks: eth0: rx_bytes: 5338 rx_dropped: 0 rx_errors: 0 rx_packets: 36 tx_bytes: 648 tx_dropped: 0 tx_errors: 0 tx_packets: 8 eth5: rx_bytes: 4641 rx_dropped: 0 rx_errors: 0 rx_packets: 26 tx_bytes: 690 tx_dropped: 0 tx_errors: 0 tx_packets: 9 memory_stats: stats: total_pgmajfault: 0 cache: 0 mapped_file: 0 total_inactive_file: 0 pgpgout: 414 rss: 6537216 total_mapped_file: 0 writeback: 0 unevictable: 0 pgpgin: 477 total_unevictable: 0 pgmajfault: 0 total_rss: 6537216 total_rss_huge: 6291456 total_writeback: 0 total_inactive_anon: 0 rss_huge: 6291456 hierarchical_memory_limit: 67108864 total_pgfault: 964 total_active_file: 0 active_anon: 6537216 total_active_anon: 6537216 total_pgpgout: 414 total_cache: 0 inactive_anon: 0 active_file: 0 pgfault: 964 inactive_file: 0 total_pgpgin: 477 max_usage: 6651904 usage: 6537216 failcnt: 0 limit: 67108864 blkio_stats: {} cpu_stats: cpu_usage: percpu_usage: - 8646879 - 24472255 - 36438778 - 30657443 usage_in_usermode: 50000000 total_usage: 100215355 usage_in_kernelmode: 30000000 system_cpu_usage: 739306590000000 online_cpus: 4 throttling_data: periods: 0 throttled_periods: 0 throttled_time: 0 precpu_stats: cpu_usage: percpu_usage: - 8646879 - 24350896 - 36438778 - 30657443 usage_in_usermode: 50000000 total_usage: 100093996 usage_in_kernelmode: 30000000 system_cpu_usage: 9492140000000 online_cpus: 4 throttling_data: periods: 0 throttled_periods: 0 throttled_time: 0 404: description: "no such 
container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "stream" in: "query" description: | Stream the output. If false, the stats will be output once and then it will disconnect. type: "boolean" default: true - name: "one-shot" in: "query" description: | Only get a single stat instead of waiting for 2 cycles. Must be used with `stream=false`. type: "boolean" default: false tags: ["Container"] /containers/{id}/resize: post: summary: "Resize a container TTY" description: "Resize the TTY for a container." operationId: "ContainerResize" consumes: - "application/octet-stream" produces: - "text/plain" responses: 200: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "cannot resize container" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "h" in: "query" description: "Height of the TTY session in characters" type: "integer" - name: "w" in: "query" description: "Width of the TTY session in characters" type: "integer" tags: ["Container"] /containers/{id}/start: post: summary: "Start a container" operationId: "ContainerStart" responses: 204: description: "no error" 304: description: "container already started" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "detachKeys" in: "query" description: | Override the key sequence for detaching a container. Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,` or `_`. 
type: "string" tags: ["Container"] /containers/{id}/stop: post: summary: "Stop a container" operationId: "ContainerStop" responses: 204: description: "no error" 304: description: "container already stopped" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "t" in: "query" description: "Number of seconds to wait before killing the container" type: "integer" tags: ["Container"] /containers/{id}/restart: post: summary: "Restart a container" operationId: "ContainerRestart" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "t" in: "query" description: "Number of seconds to wait before killing the container" type: "integer" tags: ["Container"] /containers/{id}/kill: post: summary: "Kill a container" description: | Send a POSIX signal to a container, defaulting to killing to the container. operationId: "ContainerKill" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "container is not running" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "Container d37cde0fe4ad63c3a7252023b2f9800282894247d145cb5933ddf6e52cc03a28 is not running" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "signal" in: "query" description: "Signal to send to the container as an integer or string (e.g. `SIGINT`)" type: "string" default: "SIGKILL" tags: ["Container"] /containers/{id}/update: post: summary: "Update a container" description: | Change various configuration options of a container without having to recreate it. operationId: "ContainerUpdate" consumes: ["application/json"] produces: ["application/json"] responses: 200: description: "The container has been updated." 
schema: type: "object" title: "ContainerUpdateResponse" description: "OK response to ContainerUpdate operation" properties: Warnings: type: "array" items: type: "string" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "update" in: "body" required: true schema: allOf: - $ref: "#/definitions/Resources" - type: "object" properties: RestartPolicy: $ref: "#/definitions/RestartPolicy" example: BlkioWeight: 300 CpuShares: 512 CpuPeriod: 100000 CpuQuota: 50000 CpuRealtimePeriod: 1000000 CpuRealtimeRuntime: 10000 CpusetCpus: "0,1" CpusetMems: "0" Memory: 314572800 MemorySwap: 514288000 MemoryReservation: 209715200 KernelMemory: 52428800 RestartPolicy: MaximumRetryCount: 4 Name: "on-failure" tags: ["Container"] /containers/{id}/rename: post: summary: "Rename a container" operationId: "ContainerRename" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "name already in use" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "name" in: "query" required: true description: "New name for the container" type: "string" tags: ["Container"] /containers/{id}/pause: post: summary: "Pause a container" description: | Use the freezer cgroup to suspend all processes in a container. Traditionally, when suspending a process the `SIGSTOP` signal is used, which is observable by the process being suspended. With the freezer cgroup the process is unaware, and unable to capture, that it is being suspended, and subsequently resumed. operationId: "ContainerPause" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/unpause: post: summary: "Unpause a container" description: "Resume a container which has been paused." operationId: "ContainerUnpause" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/attach: post: summary: "Attach to a container" description: | Attach to a container to read its output or send it input. You can attach to the same container multiple times and you can reattach to containers that have been detached. Either the `stream` or `logs` parameter must be `true` for this endpoint to do anything. See the [documentation for the `docker attach` command](https://docs.docker.com/engine/reference/commandline/attach/) for more details. 
        ### Hijacking

        This endpoint hijacks the HTTP connection to transport `stdin`,
        `stdout`, and `stderr` on the same socket.

        This is the response from the daemon for an attach request:

        ```
        HTTP/1.1 200 OK
        Content-Type: application/vnd.docker.raw-stream

        [STREAM]
        ```

        After the headers and two new lines, the TCP connection can now be
        used for raw, bidirectional communication between the client and
        server.

        To hint potential proxies about connection hijacking, the Docker
        client can also optionally send connection upgrade headers.

        For example, the client sends this request to upgrade the connection:

        ```
        POST /containers/16253994b7c4/attach?stream=1&stdout=1 HTTP/1.1
        Upgrade: tcp
        Connection: Upgrade
        ```

        The Docker daemon will respond with a `101 UPGRADED` response, and
        will similarly follow with the raw stream:

        ```
        HTTP/1.1 101 UPGRADED
        Content-Type: application/vnd.docker.raw-stream
        Connection: Upgrade
        Upgrade: tcp

        [STREAM]
        ```

        ### Stream format

        When the TTY setting is disabled in [`POST /containers/create`](#operation/ContainerCreate),
        the stream over the hijacked connection is multiplexed to separate
        out `stdout` and `stderr`. The stream consists of a series of frames,
        each containing a header and a payload.

        The header contains the information which the stream writes (`stdout`
        or `stderr`). It also contains the size of the associated frame
        encoded in the last four bytes (`uint32`).

        It is encoded on the first eight bytes like this:

        ```go
        header := [8]byte{STREAM_TYPE, 0, 0, 0, SIZE1, SIZE2, SIZE3, SIZE4}
        ```

        `STREAM_TYPE` can be:

        - 0: `stdin` (is written on `stdout`)
        - 1: `stdout`
        - 2: `stderr`

        `SIZE1, SIZE2, SIZE3, SIZE4` are the four bytes of the `uint32` size
        encoded as big endian.

        Following the header is the payload, which is the specified number of
        bytes of `STREAM_TYPE`.

        The simplest way to implement this protocol is the following:

        1. Read 8 bytes.
        2. Choose `stdout` or `stderr` depending on the first byte.
        3. Extract the frame size from the last four bytes.
        4. Read the extracted size and output it on the correct output.
        5. Goto 1.

        ### Stream format when using a TTY

        When the TTY setting is enabled in [`POST /containers/create`](#operation/ContainerCreate),
        the stream is not multiplexed. The data exchanged over the hijacked
        connection is simply the raw data from the process PTY and client's
        `stdin`.
      operationId: "ContainerAttach"
      produces:
        - "application/vnd.docker.raw-stream"
      responses:
        101:
          description: "no error, hints proxy about hijacking"
        200:
          description: "no error, no upgrade header found"
        400:
          description: "bad parameter"
          schema:
            $ref: "#/definitions/ErrorResponse"
        404:
          description: "no such container"
          schema:
            $ref: "#/definitions/ErrorResponse"
          examples:
            application/json:
              message: "No such container: c2ada9df5af8"
        500:
          description: "server error"
          schema:
            $ref: "#/definitions/ErrorResponse"
      parameters:
        - name: "id"
          in: "path"
          required: true
          description: "ID or name of the container"
          type: "string"
        - name: "detachKeys"
          in: "query"
          description: |
            Override the key sequence for detaching a container. Format is a
            single character `[a-Z]` or `ctrl-<value>` where `<value>` is one
            of: `a-z`, `@`, `^`, `[`, `,` or `_`.
          type: "string"
        - name: "logs"
          in: "query"
          description: |
            Replay previous logs from the container.

            This is useful for attaching to a container that has started and
            you want to output everything since the container started.

            If `stream` is also enabled, once all the previous output has
            been returned, it will seamlessly transition into streaming
            current output.
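# A sketch of the 8-byte-header demultiplexing algorithm that the attach
# documentation above describes for non-TTY streams. It is illustrative only;
# in practice "r" would be the hijacked connection returned by the daemon.
#
#   package main
#
#   import (
#       "encoding/binary"
#       "io"
#       "os"
#   )
#
#   // demux copies multiplexed frames from r to stdout/stderr until EOF.
#   func demux(r io.Reader, stdout, stderr io.Writer) error {
#       header := make([]byte, 8)
#       for {
#           // 1. Read the 8-byte frame header.
#           if _, err := io.ReadFull(r, header); err != nil {
#               if err == io.EOF {
#                   return nil
#               }
#               return err
#           }
#           // 2. Pick the destination from the first byte (1=stdout, 2=stderr).
#           dst := stdout
#           if header[0] == 2 {
#               dst = stderr
#           }
#           // 3. Extract the big-endian frame size from the last four bytes.
#           size := binary.BigEndian.Uint32(header[4:8])
#           // 4. Copy exactly that many payload bytes to the chosen output.
#           if _, err := io.CopyN(dst, r, int64(size)); err != nil {
#               return err
#           }
#           // 5. Repeat until the stream ends.
#       }
#   }
#
#   func main() {
#       // Example: demultiplex a captured raw stream fed in on stdin.
#       if err := demux(os.Stdin, os.Stdout, os.Stderr); err != nil {
#           panic(err)
#       }
#   }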
type: "boolean" default: false - name: "stream" in: "query" description: | Stream attached streams from the time the request was made onwards. type: "boolean" default: false - name: "stdin" in: "query" description: "Attach to `stdin`" type: "boolean" default: false - name: "stdout" in: "query" description: "Attach to `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Attach to `stderr`" type: "boolean" default: false tags: ["Container"] /containers/{id}/attach/ws: get: summary: "Attach to a container via a websocket" operationId: "ContainerAttachWebsocket" responses: 101: description: "no error, hints proxy about hijacking" 200: description: "no error, no upgrade header found" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "detachKeys" in: "query" description: | Override the key sequence for detaching a container.Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,`, or `_`. type: "string" - name: "logs" in: "query" description: "Return logs" type: "boolean" default: false - name: "stream" in: "query" description: "Return stream" type: "boolean" default: false - name: "stdin" in: "query" description: "Attach to `stdin`" type: "boolean" default: false - name: "stdout" in: "query" description: "Attach to `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Attach to `stderr`" type: "boolean" default: false tags: ["Container"] /containers/{id}/wait: post: summary: "Wait for a container" description: "Block until a container stops, then returns the exit code." operationId: "ContainerWait" produces: ["application/json"] responses: 200: description: "The container has exit." schema: type: "object" title: "ContainerWaitResponse" description: "OK response to ContainerWait operation" required: [StatusCode] properties: StatusCode: description: "Exit code of the container" type: "integer" x-nullable: false Error: description: "container waiting error, if any" type: "object" properties: Message: description: "Details of an error" type: "string" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "condition" in: "query" description: | Wait until a container state reaches the given condition, either 'not-running' (default), 'next-exit', or 'removed'. type: "string" default: "not-running" tags: ["Container"] /containers/{id}: delete: summary: "Remove a container" operationId: "ContainerDelete" responses: 204: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "conflict" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: | You cannot remove a running container: c2ada9df5af8. 
Stop the container before attempting removal or force remove 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "v" in: "query" description: "Remove anonymous volumes associated with the container." type: "boolean" default: false - name: "force" in: "query" description: "If the container is running, kill it before removing it." type: "boolean" default: false - name: "link" in: "query" description: "Remove the specified link associated with the container." type: "boolean" default: false tags: ["Container"] /containers/{id}/archive: head: summary: "Get information about files in a container" description: | A response header `X-Docker-Container-Path-Stat` is returned, containing a base64 - encoded JSON object with some filesystem header information about the path. operationId: "ContainerArchiveInfo" responses: 200: description: "no error" headers: X-Docker-Container-Path-Stat: type: "string" description: | A base64 - encoded JSON object with some filesystem header information about the path 400: description: "Bad parameter" schema: allOf: - $ref: "#/definitions/ErrorResponse" - type: "object" properties: message: description: | The error message. Either "must specify path parameter" (path cannot be empty) or "not a directory" (path was asserted to be a directory but exists as a file). type: "string" x-nullable: false 404: description: "Container or path does not exist" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "path" in: "query" required: true description: "Resource in the container’s filesystem to archive." type: "string" tags: ["Container"] get: summary: "Get an archive of a filesystem resource in a container" description: "Get a tar archive of a resource in the filesystem of container id." operationId: "ContainerArchive" produces: ["application/x-tar"] responses: 200: description: "no error" 400: description: "Bad parameter" schema: allOf: - $ref: "#/definitions/ErrorResponse" - type: "object" properties: message: description: | The error message. Either "must specify path parameter" (path cannot be empty) or "not a directory" (path was asserted to be a directory but exists as a file). type: "string" x-nullable: false 404: description: "Container or path does not exist" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "path" in: "query" required: true description: "Resource in the container’s filesystem to archive." type: "string" tags: ["Container"] put: summary: "Extract an archive of files or folders to a directory in a container" description: "Upload a tar archive to be extracted to a path in the filesystem of container id." 
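# A sketch of reading the X-Docker-Container-Path-Stat header returned by the
# archive HEAD endpoint above. The daemon address (localhost:2375 over TCP) and
# container name "web" are placeholders, and standard base64 is assumed for the
# header encoding.
#
#   package main
#
#   import (
#       "encoding/base64"
#       "encoding/json"
#       "fmt"
#       "net/http"
#   )
#
#   func main() {
#       resp, err := http.Head("http://localhost:2375/containers/web/archive?path=/etc/hostname")
#       if err != nil {
#           panic(err)
#       }
#       defer resp.Body.Close()
#
#       // The header value is a base64-encoded JSON object describing the path.
#       raw, err := base64.StdEncoding.DecodeString(resp.Header.Get("X-Docker-Container-Path-Stat"))
#       if err != nil {
#           panic(err)
#       }
#       var stat map[string]interface{}
#       if err := json.Unmarshal(raw, &stat); err != nil {
#           panic(err)
#       }
#       fmt.Printf("%+v\n", stat)
#   }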
operationId: "PutContainerArchive" consumes: ["application/x-tar", "application/octet-stream"] responses: 200: description: "The content was extracted successfully" 400: description: "Bad parameter" schema: $ref: "#/definitions/ErrorResponse" 403: description: "Permission denied, the volume or container rootfs is marked as read-only." schema: $ref: "#/definitions/ErrorResponse" 404: description: "No such container or path does not exist inside the container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "path" in: "query" required: true description: "Path to a directory in the container to extract the archive’s contents into. " type: "string" - name: "noOverwriteDirNonDir" in: "query" description: | If `1`, `true`, or `True` then it will be an error if unpacking the given content would cause an existing directory to be replaced with a non-directory and vice versa. type: "string" - name: "copyUIDGID" in: "query" description: | If `1`, `true`, then it will copy UID/GID maps to the dest file or dir type: "string" - name: "inputStream" in: "body" required: true description: | The input stream must be a tar archive compressed with one of the following algorithms: `identity` (no compression), `gzip`, `bzip2`, or `xz`. schema: type: "string" format: "binary" tags: ["Container"] /containers/prune: post: summary: "Delete stopped containers" produces: - "application/json" operationId: "ContainerPrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `until=<timestamp>` Prune containers created before this timestamp. The `<timestamp>` can be Unix timestamps, date formatted timestamps, or Go duration strings (e.g. `10m`, `1h30m`) computed relative to the daemon machine’s time. - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune containers with (or without, in case `label!=...` is used) the specified labels. type: "string" responses: 200: description: "No error" schema: type: "object" title: "ContainerPruneResponse" properties: ContainersDeleted: description: "Container IDs that were deleted" type: "array" items: type: "string" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Container"] /images/json: get: summary: "List Images" description: "Returns a list of images on the server. Note that it uses a different, smaller representation of an image than inspecting a single image." 
operationId: "ImageList" produces: - "application/json" responses: 200: description: "Summary image data for the images matching the query" schema: type: "array" items: $ref: "#/definitions/ImageSummary" examples: application/json: - Id: "sha256:e216a057b1cb1efc11f8a268f37ef62083e70b1b38323ba252e25ac88904a7e8" ParentId: "" RepoTags: - "ubuntu:12.04" - "ubuntu:precise" RepoDigests: - "ubuntu@sha256:992069aee4016783df6345315302fa59681aae51a8eeb2f889dea59290f21787" Created: 1474925151 Size: 103579269 VirtualSize: 103579269 SharedSize: 0 Labels: {} Containers: 2 - Id: "sha256:3e314f95dcace0f5e4fd37b10862fe8398e3c60ed36600bc0ca5fda78b087175" ParentId: "" RepoTags: - "ubuntu:12.10" - "ubuntu:quantal" RepoDigests: - "ubuntu@sha256:002fba3e3255af10be97ea26e476692a7ebed0bb074a9ab960b2e7a1526b15d7" - "ubuntu@sha256:68ea0200f0b90df725d99d823905b04cf844f6039ef60c60bf3e019915017bd3" Created: 1403128455 Size: 172064416 VirtualSize: 172064416 SharedSize: 0 Labels: {} Containers: 5 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "all" in: "query" description: "Show all images. Only images from a final layer (no children) are shown by default." type: "boolean" default: false - name: "filters" in: "query" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the images list. Available filters: - `before`=(`<image-name>[:<tag>]`, `<image id>` or `<image@digest>`) - `dangling=true` - `label=key` or `label="key=value"` of an image label - `reference`=(`<image-name>[:<tag>]`) - `since`=(`<image-name>[:<tag>]`, `<image id>` or `<image@digest>`) type: "string" - name: "shared-size" in: "query" description: "Compute and show shared size as a `SharedSize` field on each image." type: "boolean" default: false - name: "digests" in: "query" description: "Show digest information as a `RepoDigests` field on each image." type: "boolean" default: false tags: ["Image"] /build: post: summary: "Build an image" description: | Build an image from a tar archive with a `Dockerfile` in it. The `Dockerfile` specifies how the image is built from the tar archive. It is typically in the archive's root, but can be at a different path or have a different name by specifying the `dockerfile` parameter. [See the `Dockerfile` reference for more information](https://docs.docker.com/engine/reference/builder/). The Docker daemon performs a preliminary validation of the `Dockerfile` before starting the build, and returns an error if the syntax is incorrect. After that, each instruction is run one-by-one until the ID of the new image is output. The build is canceled if the client drops the connection by quitting or being killed. operationId: "ImageBuild" consumes: - "application/octet-stream" produces: - "application/json" parameters: - name: "inputStream" in: "body" description: "A tar archive compressed with one of the following algorithms: identity (no compression), gzip, bzip2, xz." schema: type: "string" format: "binary" - name: "dockerfile" in: "query" description: "Path within the build context to the `Dockerfile`. This is ignored if `remote` is specified and points to an external `Dockerfile`." type: "string" default: "Dockerfile" - name: "t" in: "query" description: "A name and optional tag to apply to the image in the `name:tag` format. If you omit the tag the default `latest` value is assumed. You can provide several `t` parameters." 
type: "string" - name: "extrahosts" in: "query" description: "Extra hosts to add to /etc/hosts" type: "string" - name: "remote" in: "query" description: "A Git repository URI or HTTP/HTTPS context URI. If the URI points to a single text file, the file’s contents are placed into a file called `Dockerfile` and the image is built from that file. If the URI points to a tarball, the file is downloaded by the daemon and the contents therein used as the context for the build. If the URI points to a tarball and the `dockerfile` parameter is also specified, there must be a file with the corresponding path inside the tarball." type: "string" - name: "q" in: "query" description: "Suppress verbose build output." type: "boolean" default: false - name: "nocache" in: "query" description: "Do not use the cache when building the image." type: "boolean" default: false - name: "cachefrom" in: "query" description: "JSON array of images used for build cache resolution." type: "string" - name: "pull" in: "query" description: "Attempt to pull the image even if an older image exists locally." type: "string" - name: "rm" in: "query" description: "Remove intermediate containers after a successful build." type: "boolean" default: true - name: "forcerm" in: "query" description: "Always remove intermediate containers, even upon failure." type: "boolean" default: false - name: "memory" in: "query" description: "Set memory limit for build." type: "integer" - name: "memswap" in: "query" description: "Total memory (memory + swap). Set as `-1` to disable swap." type: "integer" - name: "cpushares" in: "query" description: "CPU shares (relative weight)." type: "integer" - name: "cpusetcpus" in: "query" description: "CPUs in which to allow execution (e.g., `0-3`, `0,1`)." type: "string" - name: "cpuperiod" in: "query" description: "The length of a CPU period in microseconds." type: "integer" - name: "cpuquota" in: "query" description: "Microseconds of CPU time that the container can get in a CPU period." type: "integer" - name: "buildargs" in: "query" description: > JSON map of string pairs for build-time variables. Users pass these values at build-time. Docker uses the buildargs as the environment context for commands run via the `Dockerfile` RUN instruction, or for variable expansion in other `Dockerfile` instructions. This is not meant for passing secret values. For example, the build arg `FOO=bar` would become `{"FOO":"bar"}` in JSON. This would result in the query parameter `buildargs={"FOO":"bar"}`. Note that `{"FOO":"bar"}` should be URI component encoded. [Read more about the buildargs instruction.](https://docs.docker.com/engine/reference/builder/#arg) type: "string" - name: "shmsize" in: "query" description: "Size of `/dev/shm` in bytes. The size must be greater than 0. If omitted the system uses 64MB." type: "integer" - name: "squash" in: "query" description: "Squash the resulting images layers into a single layer. *(Experimental release only.)*" type: "boolean" - name: "labels" in: "query" description: "Arbitrary key/value labels to set on the image, as a JSON map of string pairs." type: "string" - name: "networkmode" in: "query" description: | Sets the networking mode for the run commands during build. Supported standard values are: `bridge`, `host`, `none`, and `container:<name|id>`. Any other value is taken as a custom network's name or ID to which this container should connect to. 
type: "string" - name: "Content-type" in: "header" type: "string" enum: - "application/x-tar" default: "application/x-tar" - name: "X-Registry-Config" in: "header" description: | This is a base64-encoded JSON object with auth configurations for multiple registries that a build may refer to. The key is a registry URL, and the value is an auth configuration object, [as described in the authentication section](#section/Authentication). For example: ``` { "docker.example.com": { "username": "janedoe", "password": "hunter2" }, "https://index.docker.io/v1/": { "username": "mobydock", "password": "conta1n3rize14" } } ``` Only the registry domain name (and port if not the default 443) are required. However, for legacy reasons, the Docker Hub registry must be specified with both a `https://` prefix and a `/v1/` suffix even though Docker will prefer to use the v2 registry API. type: "string" - name: "platform" in: "query" description: "Platform in the format os[/arch[/variant]]" type: "string" default: "" - name: "target" in: "query" description: "Target build stage" type: "string" default: "" - name: "outputs" in: "query" description: "BuildKit output configuration" type: "string" default: "" responses: 200: description: "no error" 400: description: "Bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Image"] /build/prune: post: summary: "Delete builder cache" produces: - "application/json" operationId: "BuildPrune" parameters: - name: "keep-storage" in: "query" description: "Amount of disk space in bytes to keep for cache" type: "integer" format: "int64" - name: "all" in: "query" type: "boolean" description: "Remove all types of build cache" - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the list of build cache objects. Available filters: - `until=<duration>`: duration relative to daemon's time, during which build cache was not used, in Go's duration format (e.g., '24h') - `id=<id>` - `parent=<id>` - `type=<string>` - `description=<string>` - `inuse` - `shared` - `private` responses: 200: description: "No error" schema: type: "object" title: "BuildPruneResponse" properties: CachesDeleted: type: "array" items: description: "ID of build cache object" type: "string" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Image"] /images/create: post: summary: "Create an image" description: "Create an image by either pulling it from a registry or importing it." operationId: "ImageCreate" consumes: - "text/plain" - "application/octet-stream" produces: - "application/json" responses: 200: description: "no error" 404: description: "repository does not exist or no read access" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "fromImage" in: "query" description: "Name of the image to pull. The name may include a tag or digest. This parameter may only be used when pulling an image. The pull is cancelled if the HTTP connection is closed." type: "string" - name: "fromSrc" in: "query" description: "Source to import. The value may be a URL from which the image can be retrieved or `-` to read the image from the request body. This parameter may only be used when importing an image." 
type: "string" - name: "repo" in: "query" description: "Repository name given to an image when it is imported. The repo may include a tag. This parameter may only be used when importing an image." type: "string" - name: "tag" in: "query" description: "Tag or digest. If empty when pulling an image, this causes all tags for the given image to be pulled." type: "string" - name: "message" in: "query" description: "Set commit message for imported image." type: "string" - name: "inputImage" in: "body" description: "Image content if the value `-` has been specified in fromSrc query parameter" schema: type: "string" required: false - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration. Refer to the [authentication section](#section/Authentication) for details. type: "string" - name: "platform" in: "query" description: "Platform in the format os[/arch[/variant]]" type: "string" default: "" tags: ["Image"] /images/{name}/json: get: summary: "Inspect an image" description: "Return low-level information about an image." operationId: "ImageInspect" produces: - "application/json" responses: 200: description: "No error" schema: $ref: "#/definitions/Image" examples: application/json: Id: "sha256:85f05633ddc1c50679be2b16a0479ab6f7637f8884e0cfe0f4d20e1ebb3d6e7c" Container: "cb91e48a60d01f1e27028b4fc6819f4f290b3cf12496c8176ec714d0d390984a" Comment: "" Os: "linux" Architecture: "amd64" Parent: "sha256:91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c" ContainerConfig: Tty: false Hostname: "e611e15f9c9d" Domainname: "" AttachStdout: false PublishService: "" AttachStdin: false OpenStdin: false StdinOnce: false NetworkDisabled: false OnBuild: [] Image: "91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c" User: "" WorkingDir: "" MacAddress: "" AttachStderr: false Labels: com.example.license: "GPL" com.example.version: "1.0" com.example.vendor: "Acme" Env: - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" Cmd: - "/bin/sh" - "-c" - "#(nop) LABEL com.example.vendor=Acme com.example.license=GPL com.example.version=1.0" DockerVersion: "1.9.0-dev" VirtualSize: 188359297 Size: 0 Author: "" Created: "2015-09-10T08:30:53.26995814Z" GraphDriver: Name: "aufs" Data: {} RepoDigests: - "localhost:5000/test/busybox/example@sha256:cbbf2f9a99b47fc460d422812b6a5adff7dfee951d8fa2e4a98caa0382cfbdbf" RepoTags: - "example:1.0" - "example:latest" - "example:stable" Config: Image: "91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c" NetworkDisabled: false OnBuild: [] StdinOnce: false PublishService: "" AttachStdin: false OpenStdin: false Domainname: "" AttachStdout: false Tty: false Hostname: "e611e15f9c9d" Cmd: - "/bin/bash" Env: - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" Labels: com.example.vendor: "Acme" com.example.version: "1.0" com.example.license: "GPL" MacAddress: "" AttachStderr: false WorkingDir: "" User: "" RootFS: Type: "layers" Layers: - "sha256:1834950e52ce4d5a88a1bbd131c537f4d0e56d10ff0dd69e66be3b7dfa9df7e6" - "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such image: someimage (tag: latest)" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or id" type: "string" required: true tags: ["Image"] /images/{name}/history: get: summary: "Get the history of an image" 
description: "Return parent layers of an image." operationId: "ImageHistory" produces: ["application/json"] responses: 200: description: "List of image layers" schema: type: "array" items: type: "object" x-go-name: HistoryResponseItem title: "HistoryResponseItem" description: "individual image layer information in response to ImageHistory operation" required: [Id, Created, CreatedBy, Tags, Size, Comment] properties: Id: type: "string" x-nullable: false Created: type: "integer" format: "int64" x-nullable: false CreatedBy: type: "string" x-nullable: false Tags: type: "array" items: type: "string" Size: type: "integer" format: "int64" x-nullable: false Comment: type: "string" x-nullable: false examples: application/json: - Id: "3db9c44f45209632d6050b35958829c3a2aa256d81b9a7be45b362ff85c54710" Created: 1398108230 CreatedBy: "/bin/sh -c #(nop) ADD file:eb15dbd63394e063b805a3c32ca7bf0266ef64676d5a6fab4801f2e81e2a5148 in /" Tags: - "ubuntu:lucid" - "ubuntu:10.04" Size: 182964289 Comment: "" - Id: "6cfa4d1f33fb861d4d114f43b25abd0ac737509268065cdfd69d544a59c85ab8" Created: 1398108222 CreatedBy: "/bin/sh -c #(nop) MAINTAINER Tianon Gravi <[email protected]> - mkimage-debootstrap.sh -i iproute,iputils-ping,ubuntu-minimal -t lucid.tar.xz lucid http://archive.ubuntu.com/ubuntu/" Tags: [] Size: 0 Comment: "" - Id: "511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158" Created: 1371157430 CreatedBy: "" Tags: - "scratch12:latest" - "scratch:latest" Size: 0 Comment: "Imported from -" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID" type: "string" required: true tags: ["Image"] /images/{name}/push: post: summary: "Push an image" description: | Push an image to a registry. If you wish to push an image on to a private registry, that image must already have a tag which references the registry. For example, `registry.example.com/myimage:latest`. The push is cancelled if the HTTP connection is closed. operationId: "ImagePush" consumes: - "application/octet-stream" responses: 200: description: "No error" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID." type: "string" required: true - name: "tag" in: "query" description: "The tag to associate with the image on the registry." type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration. Refer to the [authentication section](#section/Authentication) for details. type: "string" required: true tags: ["Image"] /images/{name}/tag: post: summary: "Tag an image" description: "Tag an image so that it becomes part of a repository." operationId: "ImageTag" responses: 201: description: "No error" 400: description: "Bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Conflict" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID to tag." type: "string" required: true - name: "repo" in: "query" description: "The repository to tag in. For example, `someuser/someimage`." 
type: "string" - name: "tag" in: "query" description: "The name of the new tag." type: "string" tags: ["Image"] /images/{name}: delete: summary: "Remove an image" description: | Remove an image, along with any untagged parent images that were referenced by that image. Images can't be removed if they have descendant images, are being used by a running container or are being used by a build. operationId: "ImageDelete" produces: ["application/json"] responses: 200: description: "The image was deleted successfully" schema: type: "array" items: $ref: "#/definitions/ImageDeleteResponseItem" examples: application/json: - Untagged: "3e2f21a89f" - Deleted: "3e2f21a89f" - Deleted: "53b4f83ac9" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Conflict" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID" type: "string" required: true - name: "force" in: "query" description: "Remove the image even if it is being used by stopped containers or has other tags" type: "boolean" default: false - name: "noprune" in: "query" description: "Do not delete untagged parent images" type: "boolean" default: false tags: ["Image"] /images/search: get: summary: "Search images" description: "Search for an image on Docker Hub." operationId: "ImageSearch" produces: - "application/json" responses: 200: description: "No error" schema: type: "array" items: type: "object" title: "ImageSearchResponseItem" properties: description: type: "string" is_official: type: "boolean" is_automated: type: "boolean" name: type: "string" star_count: type: "integer" examples: application/json: - description: "" is_official: false is_automated: false name: "wma55/u1210sshd" star_count: 0 - description: "" is_official: false is_automated: false name: "jdswinbank/sshd" star_count: 0 - description: "" is_official: false is_automated: false name: "vgauthier/sshd" star_count: 0 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "term" in: "query" description: "Term to search" type: "string" required: true - name: "limit" in: "query" description: "Maximum number of results to return" type: "integer" - name: "filters" in: "query" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the images list. Available filters: - `is-automated=(true|false)` - `is-official=(true|false)` - `stars=<number>` Matches images that has at least 'number' stars. type: "string" tags: ["Image"] /images/prune: post: summary: "Delete unused images" produces: - "application/json" operationId: "ImagePrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `dangling=<boolean>` When set to `true` (or `1`), prune only unused *and* untagged images. When set to `false` (or `0`), all unused images are pruned. - `until=<string>` Prune images created before this timestamp. The `<timestamp>` can be Unix timestamps, date formatted timestamps, or Go duration strings (e.g. `10m`, `1h30m`) computed relative to the daemon machine’s time. - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune images with (or without, in case `label!=...` is used) the specified labels. 
type: "string" responses: 200: description: "No error" schema: type: "object" title: "ImagePruneResponse" properties: ImagesDeleted: description: "Images that were deleted" type: "array" items: $ref: "#/definitions/ImageDeleteResponseItem" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Image"] /auth: post: summary: "Check auth configuration" description: | Validate credentials for a registry and, if available, get an identity token for accessing the registry without password. operationId: "SystemAuth" consumes: ["application/json"] produces: ["application/json"] responses: 200: description: "An identity token was generated successfully." schema: type: "object" title: "SystemAuthResponse" required: [Status] properties: Status: description: "The status of the authentication" type: "string" x-nullable: false IdentityToken: description: "An opaque token used to authenticate a user after a successful login" type: "string" x-nullable: false examples: application/json: Status: "Login Succeeded" IdentityToken: "9cbaf023786cd7..." 204: description: "No error" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "authConfig" in: "body" description: "Authentication to check" schema: $ref: "#/definitions/AuthConfig" tags: ["System"] /info: get: summary: "Get system information" operationId: "SystemInfo" produces: - "application/json" responses: 200: description: "No error" schema: $ref: "#/definitions/SystemInfo" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /version: get: summary: "Get version" description: "Returns the version of Docker that is running and various information about the system that Docker is running on." operationId: "SystemVersion" produces: ["application/json"] responses: 200: description: "no error" schema: $ref: "#/definitions/SystemVersion" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /_ping: get: summary: "Ping" description: "This is a dummy endpoint you can use to test if the server is accessible." operationId: "SystemPing" produces: ["text/plain"] responses: 200: description: "no error" schema: type: "string" example: "OK" headers: API-Version: type: "string" description: "Max API Version the server supports" Builder-Version: type: "string" description: "Default version of docker image builder" Docker-Experimental: type: "boolean" description: "If the server is running with experimental mode enabled" Cache-Control: type: "string" default: "no-cache, no-store, must-revalidate" Pragma: type: "string" default: "no-cache" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" headers: Cache-Control: type: "string" default: "no-cache, no-store, must-revalidate" Pragma: type: "string" default: "no-cache" tags: ["System"] head: summary: "Ping" description: "This is a dummy endpoint you can use to test if the server is accessible." 
operationId: "SystemPingHead" produces: ["text/plain"] responses: 200: description: "no error" schema: type: "string" example: "(empty)" headers: API-Version: type: "string" description: "Max API Version the server supports" Builder-Version: type: "string" description: "Default version of docker image builder" Docker-Experimental: type: "boolean" description: "If the server is running with experimental mode enabled" Cache-Control: type: "string" default: "no-cache, no-store, must-revalidate" Pragma: type: "string" default: "no-cache" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /commit: post: summary: "Create a new image from a container" operationId: "ImageCommit" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "containerConfig" in: "body" description: "The container configuration" schema: $ref: "#/definitions/ContainerConfig" - name: "container" in: "query" description: "The ID or name of the container to commit" type: "string" - name: "repo" in: "query" description: "Repository name for the created image" type: "string" - name: "tag" in: "query" description: "Tag name for the create image" type: "string" - name: "comment" in: "query" description: "Commit message" type: "string" - name: "author" in: "query" description: "Author of the image (e.g., `John Hannibal Smith <[email protected]>`)" type: "string" - name: "pause" in: "query" description: "Whether to pause the container before committing" type: "boolean" default: true - name: "changes" in: "query" description: "`Dockerfile` instructions to apply while committing" type: "string" tags: ["Image"] /events: get: summary: "Monitor events" description: | Stream real-time events from the server. Various objects within Docker report events when something happens to them. 
Containers report these events: `attach`, `commit`, `copy`, `create`, `destroy`, `detach`, `die`, `exec_create`, `exec_detach`, `exec_start`, `exec_die`, `export`, `health_status`, `kill`, `oom`, `pause`, `rename`, `resize`, `restart`, `start`, `stop`, `top`, `unpause`, `update`, and `prune` Images report these events: `delete`, `import`, `load`, `pull`, `push`, `save`, `tag`, `untag`, and `prune` Volumes report these events: `create`, `mount`, `unmount`, `destroy`, and `prune` Networks report these events: `create`, `connect`, `disconnect`, `destroy`, `update`, `remove`, and `prune` The Docker daemon reports these events: `reload` Services report these events: `create`, `update`, and `remove` Nodes report these events: `create`, `update`, and `remove` Secrets report these events: `create`, `update`, and `remove` Configs report these events: `create`, `update`, and `remove` The Builder reports `prune` events operationId: "SystemEvents" produces: - "application/json" responses: 200: description: "no error" schema: type: "object" title: "SystemEventsResponse" properties: Type: description: "The type of object emitting the event" type: "string" Action: description: "The type of event" type: "string" Actor: type: "object" properties: ID: description: "The ID of the object emitting the event" type: "string" Attributes: description: "Various key/value attributes of the object, depending on its type" type: "object" additionalProperties: type: "string" time: description: "Timestamp of event" type: "integer" timeNano: description: "Timestamp of event, with nanosecond accuracy" type: "integer" format: "int64" examples: application/json: Type: "container" Action: "create" Actor: ID: "ede54ee1afda366ab42f824e8a5ffd195155d853ceaec74a927f249ea270c743" Attributes: com.example.some-label: "some-label-value" image: "alpine" name: "my-container" time: 1461943101 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "since" in: "query" description: "Show events created since this timestamp then stream new events." type: "string" - name: "until" in: "query" description: "Show events created until this timestamp then stop streaming." type: "string" - name: "filters" in: "query" description: | A JSON encoded value of filters (a `map[string][]string`) to process on the event list. 
Available filters: - `config=<string>` config name or ID - `container=<string>` container name or ID - `daemon=<string>` daemon name or ID - `event=<string>` event type - `image=<string>` image name or ID - `label=<string>` image or container label - `network=<string>` network name or ID - `node=<string>` node ID - `plugin`=<string> plugin name or ID - `scope`=<string> local or swarm - `secret=<string>` secret name or ID - `service=<string>` service name or ID - `type=<string>` object to filter by, one of `container`, `image`, `volume`, `network`, `daemon`, `plugin`, `node`, `service`, `secret` or `config` - `volume=<string>` volume name type: "string" tags: ["System"] /system/df: get: summary: "Get data usage information" operationId: "SystemDataUsage" responses: 200: description: "no error" schema: type: "object" title: "SystemDataUsageResponse" properties: LayersSize: type: "integer" format: "int64" Images: type: "array" items: $ref: "#/definitions/ImageSummary" Containers: type: "array" items: $ref: "#/definitions/ContainerSummary" Volumes: type: "array" items: $ref: "#/definitions/Volume" BuildCache: type: "array" items: $ref: "#/definitions/BuildCache" example: LayersSize: 1092588 Images: - Id: "sha256:2b8fd9751c4c0f5dd266fcae00707e67a2545ef34f9a29354585f93dac906749" ParentId: "" RepoTags: - "busybox:latest" RepoDigests: - "busybox@sha256:a59906e33509d14c036c8678d687bd4eec81ed7c4b8ce907b888c607f6a1e0e6" Created: 1466724217 Size: 1092588 SharedSize: 0 VirtualSize: 1092588 Labels: {} Containers: 1 Containers: - Id: "e575172ed11dc01bfce087fb27bee502db149e1a0fad7c296ad300bbff178148" Names: - "/top" Image: "busybox" ImageID: "sha256:2b8fd9751c4c0f5dd266fcae00707e67a2545ef34f9a29354585f93dac906749" Command: "top" Created: 1472592424 Ports: [] SizeRootFs: 1092588 Labels: {} State: "exited" Status: "Exited (0) 56 minutes ago" HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: IPAMConfig: null Links: null Aliases: null NetworkID: "d687bc59335f0e5c9ee8193e5612e8aee000c8c62ea170cfb99c098f95899d92" EndpointID: "8ed5115aeaad9abb174f68dcf135b49f11daf597678315231a32ca28441dec6a" Gateway: "172.18.0.1" IPAddress: "172.18.0.2" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:12:00:02" Mounts: [] Volumes: - Name: "my-volume" Driver: "local" Mountpoint: "/var/lib/docker/volumes/my-volume/_data" Labels: null Scope: "local" Options: null UsageData: Size: 10920104 RefCount: 2 BuildCache: - ID: "hw53o5aio51xtltp5xjp8v7fx" Parent: "" Type: "regular" Description: "pulled from docker.io/library/debian@sha256:234cb88d3020898631af0ccbbcca9a66ae7306ecd30c9720690858c1b007d2a0" InUse: false Shared: true Size: 0 CreatedAt: "2021-06-28T13:31:01.474619385Z" LastUsedAt: "2021-07-07T22:02:32.738075951Z" UsageCount: 26 - ID: "ndlpt0hhvkqcdfkputsk4cq9c" Parent: "hw53o5aio51xtltp5xjp8v7fx" Type: "regular" Description: "mount / from exec /bin/sh -c echo 'Binary::apt::APT::Keep-Downloaded-Packages \"true\";' > /etc/apt/apt.conf.d/keep-cache" InUse: false Shared: true Size: 51 CreatedAt: "2021-06-28T13:31:03.002625487Z" LastUsedAt: "2021-07-07T22:02:32.773909517Z" UsageCount: 26 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "type" in: "query" description: | Object types, for which to compute and return data. 
type: "array" collectionFormat: multi items: type: "string" enum: ["container", "image", "volume", "build-cache"] tags: ["System"] /images/{name}/get: get: summary: "Export an image" description: | Get a tarball containing all images and metadata for a repository. If `name` is a specific name and tag (e.g. `ubuntu:latest`), then only that image (and its parents) are returned. If `name` is an image ID, similarly only that image (and its parents) are returned, but with the exclusion of the `repositories` file in the tarball, as there were no image names referenced. ### Image tarball format An image tarball contains one directory per image layer (named using its long ID), each containing these files: - `VERSION`: currently `1.0` - the file format version - `json`: detailed layer information, similar to `docker inspect layer_id` - `layer.tar`: A tarfile containing the filesystem changes in this layer The `layer.tar` file contains `aufs` style `.wh..wh.aufs` files and directories for storing attribute changes and deletions. If the tarball defines a repository, the tarball should also include a `repositories` file at the root that contains a list of repository and tag names mapped to layer IDs. ```json { "hello-world": { "latest": "565a9d68a73f6706862bfe8409a7f659776d4d60a8d096eb4a3cbce6999cc2a1" } } ``` operationId: "ImageGet" produces: - "application/x-tar" responses: 200: description: "no error" schema: type: "string" format: "binary" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID" type: "string" required: true tags: ["Image"] /images/get: get: summary: "Export several images" description: | Get a tarball containing all images and metadata for several image repositories. For each value of the `names` parameter: if it is a specific name and tag (e.g. `ubuntu:latest`), then only that image (and its parents) are returned; if it is an image ID, similarly only that image (and its parents) are returned and there would be no names referenced in the 'repositories' file for this image ID. For details on the format, see the [export image endpoint](#operation/ImageGet). operationId: "ImageGetAll" produces: - "application/x-tar" responses: 200: description: "no error" schema: type: "string" format: "binary" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "names" in: "query" description: "Image names to filter by" type: "array" items: type: "string" tags: ["Image"] /images/load: post: summary: "Import images" description: | Load a set of images and tags into a repository. For details on the format, see the [export image endpoint](#operation/ImageGet). operationId: "ImageLoad" consumes: - "application/x-tar" produces: - "application/json" responses: 200: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "imagesTarball" in: "body" description: "Tar archive containing images" schema: type: "string" format: "binary" - name: "quiet" in: "query" description: "Suppress progress details during load." type: "boolean" default: false tags: ["Image"] /containers/{id}/exec: post: summary: "Create an exec instance" description: "Run a command inside a running container." 
operationId: "ContainerExec" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "container is paused" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "execConfig" in: "body" description: "Exec configuration" schema: type: "object" properties: AttachStdin: type: "boolean" description: "Attach to `stdin` of the exec command." AttachStdout: type: "boolean" description: "Attach to `stdout` of the exec command." AttachStderr: type: "boolean" description: "Attach to `stderr` of the exec command." DetachKeys: type: "string" description: | Override the key sequence for detaching a container. Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,` or `_`. Tty: type: "boolean" description: "Allocate a pseudo-TTY." Env: description: | A list of environment variables in the form `["VAR=value", ...]`. type: "array" items: type: "string" Cmd: type: "array" description: "Command to run, as a string or array of strings." items: type: "string" Privileged: type: "boolean" description: "Runs the exec process with extended privileges." default: false User: type: "string" description: | The user, and optionally, group to run the exec process inside the container. Format is one of: `user`, `user:group`, `uid`, or `uid:gid`. WorkingDir: type: "string" description: | The working directory for the exec process inside the container. example: AttachStdin: false AttachStdout: true AttachStderr: true DetachKeys: "ctrl-p,ctrl-q" Tty: false Cmd: - "date" Env: - "FOO=bar" - "BAZ=quux" required: true - name: "id" in: "path" description: "ID or name of container" type: "string" required: true tags: ["Exec"] /exec/{id}/start: post: summary: "Start an exec instance" description: | Starts a previously set up exec instance. If detach is true, this endpoint returns immediately after starting the command. Otherwise, it sets up an interactive session with the command. operationId: "ExecStart" consumes: - "application/json" produces: - "application/vnd.docker.raw-stream" responses: 200: description: "No error" 404: description: "No such exec instance" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Container is stopped or paused" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "execStartConfig" in: "body" schema: type: "object" properties: Detach: type: "boolean" description: "Detach from the command." Tty: type: "boolean" description: "Allocate a pseudo-TTY." example: Detach: false Tty: false - name: "id" in: "path" description: "Exec instance ID" required: true type: "string" tags: ["Exec"] /exec/{id}/resize: post: summary: "Resize an exec instance" description: | Resize the TTY session used by an exec instance. This endpoint only works if `tty` was specified as part of creating and starting the exec instance. 
operationId: "ExecResize" responses: 201: description: "No error" 404: description: "No such exec instance" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Exec instance ID" required: true type: "string" - name: "h" in: "query" description: "Height of the TTY session in characters" type: "integer" - name: "w" in: "query" description: "Width of the TTY session in characters" type: "integer" tags: ["Exec"] /exec/{id}/json: get: summary: "Inspect an exec instance" description: "Return low-level information about an exec instance." operationId: "ExecInspect" produces: - "application/json" responses: 200: description: "No error" schema: type: "object" title: "ExecInspectResponse" properties: CanRemove: type: "boolean" DetachKeys: type: "string" ID: type: "string" Running: type: "boolean" ExitCode: type: "integer" ProcessConfig: $ref: "#/definitions/ProcessConfig" OpenStdin: type: "boolean" OpenStderr: type: "boolean" OpenStdout: type: "boolean" ContainerID: type: "string" Pid: type: "integer" description: "The system process ID for the exec process." examples: application/json: CanRemove: false ContainerID: "b53ee82b53a40c7dca428523e34f741f3abc51d9f297a14ff874bf761b995126" DetachKeys: "" ExitCode: 2 ID: "f33bbfb39f5b142420f4759b2348913bd4a8d1a6d7fd56499cb41a1bb91d7b3b" OpenStderr: true OpenStdin: true OpenStdout: true ProcessConfig: arguments: - "-c" - "exit 2" entrypoint: "sh" privileged: false tty: true user: "1000" Running: false Pid: 42000 404: description: "No such exec instance" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Exec instance ID" required: true type: "string" tags: ["Exec"] /volumes: get: summary: "List volumes" operationId: "VolumeList" produces: ["application/json"] responses: 200: description: "Summary volume data that matches the query" schema: type: "object" title: "VolumeListResponse" description: "Volume list response" required: [Volumes, Warnings] properties: Volumes: type: "array" x-nullable: false description: "List of volumes" items: $ref: "#/definitions/Volume" Warnings: type: "array" x-nullable: false description: | Warnings that occurred when fetching the list of volumes. items: type: "string" examples: application/json: Volumes: - CreatedAt: "2017-07-19T12:00:26Z" Name: "tardis" Driver: "local" Mountpoint: "/var/lib/docker/volumes/tardis" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Scope: "local" Options: device: "tmpfs" o: "size=100m,uid=1000" type: "tmpfs" Warnings: [] 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" description: | JSON encoded value of the filters (a `map[string][]string`) to process on the volumes list. Available filters: - `dangling=<boolean>` When set to `true` (or `1`), returns all volumes that are not in use by a container. When set to `false` (or `0`), only volumes that are in use by one or more containers are returned. - `driver=<volume-driver-name>` Matches volumes based on their driver. - `label=<key>` or `label=<key>:<value>` Matches volumes based on the presence of a `label` alone or a `label` and a value. - `name=<volume-name>` Matches all or part of a volume name. 
type: "string" format: "json" tags: ["Volume"] /volumes/create: post: summary: "Create a volume" operationId: "VolumeCreate" consumes: ["application/json"] produces: ["application/json"] responses: 201: description: "The volume was created successfully" schema: $ref: "#/definitions/Volume" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "volumeConfig" in: "body" required: true description: "Volume configuration" schema: type: "object" description: "Volume configuration" title: "VolumeConfig" properties: Name: description: | The new volume's name. If not specified, Docker generates a name. type: "string" x-nullable: false Driver: description: "Name of the volume driver to use." type: "string" default: "local" x-nullable: false DriverOpts: description: | A mapping of driver options and values. These options are passed directly to the driver and are driver specific. type: "object" additionalProperties: type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" example: Name: "tardis" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Driver: "custom" tags: ["Volume"] /volumes/{name}: get: summary: "Inspect a volume" operationId: "VolumeInspect" produces: ["application/json"] responses: 200: description: "No error" schema: $ref: "#/definitions/Volume" 404: description: "No such volume" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" required: true description: "Volume name or ID" type: "string" tags: ["Volume"] delete: summary: "Remove a volume" description: "Instruct the driver to remove the volume." operationId: "VolumeDelete" responses: 204: description: "The volume was removed" 404: description: "No such volume or volume driver" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Volume is in use and cannot be removed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" required: true description: "Volume name or ID" type: "string" - name: "force" in: "query" description: "Force the removal of the volume" type: "boolean" default: false tags: ["Volume"] /volumes/prune: post: summary: "Delete unused volumes" produces: - "application/json" operationId: "VolumePrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune volumes with (or without, in case `label!=...` is used) the specified labels. type: "string" responses: 200: description: "No error" schema: type: "object" title: "VolumePruneResponse" properties: VolumesDeleted: description: "Volumes that were deleted" type: "array" items: type: "string" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Volume"] /networks: get: summary: "List networks" description: | Returns a list of networks. For details on the format, see the [network inspect endpoint](#operation/NetworkInspect). Note that it uses a different, smaller representation of a network than inspecting a single network. 
For example, the list of containers attached to the network is not propagated in API versions 1.28 and up. operationId: "NetworkList" produces: - "application/json" responses: 200: description: "No error" schema: type: "array" items: $ref: "#/definitions/Network" examples: application/json: - Name: "bridge" Id: "f2de39df4171b0dc801e8002d1d999b77256983dfc63041c0f34030aa3977566" Created: "2016-10-19T06:21:00.416543526Z" Scope: "local" Driver: "bridge" EnableIPv6: false Internal: false Attachable: false Ingress: false IPAM: Driver: "default" Config: - Subnet: "172.17.0.0/16" Options: com.docker.network.bridge.default_bridge: "true" com.docker.network.bridge.enable_icc: "true" com.docker.network.bridge.enable_ip_masquerade: "true" com.docker.network.bridge.host_binding_ipv4: "0.0.0.0" com.docker.network.bridge.name: "docker0" com.docker.network.driver.mtu: "1500" - Name: "none" Id: "e086a3893b05ab69242d3c44e49483a3bbbd3a26b46baa8f61ab797c1088d794" Created: "0001-01-01T00:00:00Z" Scope: "local" Driver: "null" EnableIPv6: false Internal: false Attachable: false Ingress: false IPAM: Driver: "default" Config: [] Containers: {} Options: {} - Name: "host" Id: "13e871235c677f196c4e1ecebb9dc733b9b2d2ab589e30c539efeda84a24215e" Created: "0001-01-01T00:00:00Z" Scope: "local" Driver: "host" EnableIPv6: false Internal: false Attachable: false Ingress: false IPAM: Driver: "default" Config: [] Containers: {} Options: {} 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" description: | JSON encoded value of the filters (a `map[string][]string`) to process on the networks list. Available filters: - `dangling=<boolean>` When set to `true` (or `1`), returns all networks that are not in use by a container. When set to `false` (or `0`), only networks that are in use by one or more containers are returned. - `driver=<driver-name>` Matches a network's driver. - `id=<network-id>` Matches all or part of a network ID. - `label=<key>` or `label=<key>=<value>` of a network label. - `name=<network-name>` Matches all or part of a network name. - `scope=["swarm"|"global"|"local"]` Filters networks by scope (`swarm`, `global`, or `local`). - `type=["custom"|"builtin"]` Filters networks by type. The `custom` keyword returns all user-defined networks. 
type: "string" tags: ["Network"] /networks/{id}: get: summary: "Inspect a network" operationId: "NetworkInspect" produces: - "application/json" responses: 200: description: "No error" schema: $ref: "#/definitions/Network" 404: description: "Network not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" - name: "verbose" in: "query" description: "Detailed inspect output for troubleshooting" type: "boolean" default: false - name: "scope" in: "query" description: "Filter the network by scope (swarm, global, or local)" type: "string" tags: ["Network"] delete: summary: "Remove a network" operationId: "NetworkDelete" responses: 204: description: "No error" 403: description: "operation not supported for pre-defined networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such network" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" tags: ["Network"] /networks/create: post: summary: "Create a network" operationId: "NetworkCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "No error" schema: type: "object" title: "NetworkCreateResponse" properties: Id: description: "The ID of the created network." type: "string" Warning: type: "string" example: Id: "22be93d5babb089c5aab8dbc369042fad48ff791584ca2da2100db837a1c7c30" Warning: "" 403: description: "operation not supported for pre-defined networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "plugin not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "networkConfig" in: "body" description: "Network configuration" required: true schema: type: "object" required: ["Name"] properties: Name: description: "The network's name." type: "string" CheckDuplicate: description: | Check for networks with duplicate names. Since Network is primarily keyed based on a random ID and not on the name, and network name is strictly a user-friendly alias to the network which is uniquely identified using ID, there is no guaranteed way to check for duplicates. CheckDuplicate is there to provide a best effort checking of any networks which has the same name but it is not guaranteed to catch all name collisions. type: "boolean" Driver: description: "Name of the network driver plugin to use." type: "string" default: "bridge" Internal: description: "Restrict external access to the network." type: "boolean" Attachable: description: | Globally scoped network is manually attachable by regular containers from workers in swarm mode. type: "boolean" Ingress: description: | Ingress network is the network which provides the routing-mesh in swarm mode. type: "boolean" IPAM: description: "Optional custom IP scheme for the network." $ref: "#/definitions/IPAM" EnableIPv6: description: "Enable IPv6 on the network." type: "boolean" Options: description: "Network specific options to be used by the drivers." type: "object" additionalProperties: type: "string" Labels: description: "User-defined key/value metadata." 
type: "object" additionalProperties: type: "string" example: Name: "isolated_nw" CheckDuplicate: false Driver: "bridge" EnableIPv6: true IPAM: Driver: "default" Config: - Subnet: "172.20.0.0/16" IPRange: "172.20.10.0/24" Gateway: "172.20.10.11" - Subnet: "2001:db8:abcd::/64" Gateway: "2001:db8:abcd::1011" Options: foo: "bar" Internal: true Attachable: false Ingress: false Options: com.docker.network.bridge.default_bridge: "true" com.docker.network.bridge.enable_icc: "true" com.docker.network.bridge.enable_ip_masquerade: "true" com.docker.network.bridge.host_binding_ipv4: "0.0.0.0" com.docker.network.bridge.name: "docker0" com.docker.network.driver.mtu: "1500" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" tags: ["Network"] /networks/{id}/connect: post: summary: "Connect a container to a network" operationId: "NetworkConnect" consumes: - "application/json" responses: 200: description: "No error" 403: description: "Operation not supported for swarm scoped networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "Network or container not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" - name: "container" in: "body" required: true schema: type: "object" properties: Container: type: "string" description: "The ID or name of the container to connect to the network." EndpointConfig: $ref: "#/definitions/EndpointSettings" example: Container: "3613f73ba0e4" EndpointConfig: IPAMConfig: IPv4Address: "172.24.56.89" IPv6Address: "2001:db8::5689" tags: ["Network"] /networks/{id}/disconnect: post: summary: "Disconnect a container from a network" operationId: "NetworkDisconnect" consumes: - "application/json" responses: 200: description: "No error" 403: description: "Operation not supported for swarm scoped networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "Network or container not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" - name: "container" in: "body" required: true schema: type: "object" properties: Container: type: "string" description: | The ID or name of the container to disconnect from the network. Force: type: "boolean" description: | Force the container to disconnect from the network. tags: ["Network"] /networks/prune: post: summary: "Delete unused networks" produces: - "application/json" operationId: "NetworkPrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `until=<timestamp>` Prune networks created before this timestamp. The `<timestamp>` can be Unix timestamps, date formatted timestamps, or Go duration strings (e.g. `10m`, `1h30m`) computed relative to the daemon machine’s time. - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune networks with (or without, in case `label!=...` is used) the specified labels. 
type: "string" responses: 200: description: "No error" schema: type: "object" title: "NetworkPruneResponse" properties: NetworksDeleted: description: "Networks that were deleted" type: "array" items: type: "string" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Network"] /plugins: get: summary: "List plugins" operationId: "PluginList" description: "Returns information about installed plugins." produces: ["application/json"] responses: 200: description: "No error" schema: type: "array" items: $ref: "#/definitions/Plugin" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the plugin list. Available filters: - `capability=<capability name>` - `enable=<true>|<false>` tags: ["Plugin"] /plugins/privileges: get: summary: "Get plugin privileges" operationId: "GetPluginPrivileges" responses: 200: description: "no error" schema: type: "array" items: description: | Describes a permission the user has to accept upon installing the plugin. type: "object" title: "PluginPrivilegeItem" properties: Name: type: "string" Description: type: "string" Value: type: "array" items: type: "string" example: - Name: "network" Description: "" Value: - "host" - Name: "mount" Description: "" Value: - "/data" - Name: "device" Description: "" Value: - "/dev/cpu_dma_latency" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "remote" in: "query" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" tags: - "Plugin" /plugins/pull: post: summary: "Install a plugin" operationId: "PluginPull" description: | Pulls and installs a plugin. After the plugin is installed, it can be enabled using the [`POST /plugins/{name}/enable` endpoint](#operation/PostPluginsEnable). produces: - "application/json" responses: 204: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "remote" in: "query" description: | Remote reference for plugin to install. The `:latest` tag is optional, and is used as the default if omitted. required: true type: "string" - name: "name" in: "query" description: | Local name for the pulled plugin. The `:latest` tag is optional, and is used as the default if omitted. required: false type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration to use when pulling a plugin from a registry. Refer to the [authentication section](#section/Authentication) for details. type: "string" - name: "body" in: "body" schema: type: "array" items: description: | Describes a permission accepted by the user upon installing the plugin. 
type: "object" properties: Name: type: "string" Description: type: "string" Value: type: "array" items: type: "string" example: - Name: "network" Description: "" Value: - "host" - Name: "mount" Description: "" Value: - "/data" - Name: "device" Description: "" Value: - "/dev/cpu_dma_latency" tags: ["Plugin"] /plugins/{name}/json: get: summary: "Inspect a plugin" operationId: "PluginInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Plugin" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" tags: ["Plugin"] /plugins/{name}: delete: summary: "Remove a plugin" operationId: "PluginDelete" responses: 200: description: "no error" schema: $ref: "#/definitions/Plugin" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "force" in: "query" description: | Disable the plugin before removing. This may result in issues if the plugin is in use by a container. type: "boolean" default: false tags: ["Plugin"] /plugins/{name}/enable: post: summary: "Enable a plugin" operationId: "PluginEnable" responses: 200: description: "no error" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "timeout" in: "query" description: "Set the HTTP client timeout (in seconds)" type: "integer" default: 0 tags: ["Plugin"] /plugins/{name}/disable: post: summary: "Disable a plugin" operationId: "PluginDisable" responses: 200: description: "no error" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" tags: ["Plugin"] /plugins/{name}/upgrade: post: summary: "Upgrade a plugin" operationId: "PluginUpgrade" responses: 204: description: "no error" 404: description: "plugin not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "remote" in: "query" description: | Remote reference to upgrade to. The `:latest` tag is optional, and is used as the default if omitted. required: true type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration to use when pulling a plugin from a registry. Refer to the [authentication section](#section/Authentication) for details. 
type: "string" - name: "body" in: "body" schema: type: "array" items: description: | Describes a permission accepted by the user upon installing the plugin. type: "object" properties: Name: type: "string" Description: type: "string" Value: type: "array" items: type: "string" example: - Name: "network" Description: "" Value: - "host" - Name: "mount" Description: "" Value: - "/data" - Name: "device" Description: "" Value: - "/dev/cpu_dma_latency" tags: ["Plugin"] /plugins/create: post: summary: "Create a plugin" operationId: "PluginCreate" consumes: - "application/x-tar" responses: 204: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "query" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "tarContext" in: "body" description: "Path to tar containing plugin rootfs and manifest" schema: type: "string" format: "binary" tags: ["Plugin"] /plugins/{name}/push: post: summary: "Push a plugin" operationId: "PluginPush" description: | Push a plugin to the registry. parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" responses: 200: description: "no error" 404: description: "plugin not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Plugin"] /plugins/{name}/set: post: summary: "Configure a plugin" operationId: "PluginSet" consumes: - "application/json" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "body" in: "body" schema: type: "array" items: type: "string" example: ["DEBUG=1"] responses: 204: description: "No error" 404: description: "Plugin not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Plugin"] /nodes: get: summary: "List nodes" operationId: "NodeList" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Node" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" description: | Filters to process on the nodes list, encoded as JSON (a `map[string][]string`). 
Available filters: - `id=<node id>` - `label=<engine label>` - `membership=`(`accepted`|`pending`)` - `name=<node name>` - `node.label=<node label>` - `role=`(`manager`|`worker`)` type: "string" tags: ["Node"] /nodes/{id}: get: summary: "Inspect a node" operationId: "NodeInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Node" 404: description: "no such node" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the node" type: "string" required: true tags: ["Node"] delete: summary: "Delete a node" operationId: "NodeDelete" responses: 200: description: "no error" 404: description: "no such node" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the node" type: "string" required: true - name: "force" in: "query" description: "Force remove a node from the swarm" default: false type: "boolean" tags: ["Node"] /nodes/{id}/update: post: summary: "Update a node" operationId: "NodeUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such node" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID of the node" type: "string" required: true - name: "body" in: "body" schema: $ref: "#/definitions/NodeSpec" - name: "version" in: "query" description: | The version number of the node object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true tags: ["Node"] /swarm: get: summary: "Inspect swarm" operationId: "SwarmInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Swarm" 404: description: "no such swarm" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" tags: ["Swarm"] /swarm/init: post: summary: "Initialize a new swarm" operationId: "SwarmInit" produces: - "application/json" - "text/plain" responses: 200: description: "no error" schema: description: "The node ID" type: "string" example: "7v2t30z9blmxuhnyo6s4cpenp" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is already part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: type: "object" properties: ListenAddr: description: | Listen address used for inter-manager communication, as well as determining the networking interface used for the VXLAN Tunnel Endpoint (VTEP). This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the default swarm listening port is used. 
type: "string" AdvertiseAddr: description: | Externally reachable address advertised to other nodes. This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the port number from the listen address is used. If `AdvertiseAddr` is not specified, it will be automatically detected when possible. type: "string" DataPathAddr: description: | Address or interface to use for data path traffic (format: `<ip|interface>`), for example, `192.168.1.1`, or an interface, like `eth0`. If `DataPathAddr` is unspecified, the same address as `AdvertiseAddr` is used. The `DataPathAddr` specifies the address that global scope network drivers will publish towards other nodes in order to reach the containers running on this node. Using this parameter it is possible to separate the container data traffic from the management traffic of the cluster. type: "string" DataPathPort: description: | DataPathPort specifies the data path port number for data traffic. Acceptable port range is 1024 to 49151. if no port is set or is set to 0, default port 4789 will be used. type: "integer" format: "uint32" DefaultAddrPool: description: | Default Address Pool specifies default subnet pools for global scope networks. type: "array" items: type: "string" example: ["10.10.0.0/16", "20.20.0.0/16"] ForceNewCluster: description: "Force creation of a new swarm." type: "boolean" SubnetSize: description: | SubnetSize specifies the subnet size of the networks created from the default subnet pool. type: "integer" format: "uint32" Spec: $ref: "#/definitions/SwarmSpec" example: ListenAddr: "0.0.0.0:2377" AdvertiseAddr: "192.168.1.1:2377" DataPathPort: 4789 DefaultAddrPool: ["10.10.0.0/8", "20.20.0.0/8"] SubnetSize: 24 ForceNewCluster: false Spec: Orchestration: {} Raft: {} Dispatcher: {} CAConfig: {} EncryptionConfig: AutoLockManagers: false tags: ["Swarm"] /swarm/join: post: summary: "Join an existing swarm" operationId: "SwarmJoin" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is already part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: type: "object" properties: ListenAddr: description: | Listen address used for inter-manager communication if the node gets promoted to manager, as well as determining the networking interface used for the VXLAN Tunnel Endpoint (VTEP). type: "string" AdvertiseAddr: description: | Externally reachable address advertised to other nodes. This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the port number from the listen address is used. If `AdvertiseAddr` is not specified, it will be automatically detected when possible. type: "string" DataPathAddr: description: | Address or interface to use for data path traffic (format: `<ip|interface>`), for example, `192.168.1.1`, or an interface, like `eth0`. If `DataPathAddr` is unspecified, the same address as `AdvertiseAddr` is used. The `DataPathAddr` specifies the address that global scope network drivers will publish towards other nodes in order to reach the containers running on this node. 
Using this parameter it is possible to separate the container data traffic from the management traffic of the cluster. type: "string" RemoteAddrs: description: | Addresses of manager nodes already participating in the swarm. type: "array" items: type: "string" JoinToken: description: "Secret token for joining this swarm." type: "string" example: ListenAddr: "0.0.0.0:2377" AdvertiseAddr: "192.168.1.1:2377" RemoteAddrs: - "node1:2377" JoinToken: "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-7p73s1dx5in4tatdymyhg9hu2" tags: ["Swarm"] /swarm/leave: post: summary: "Leave a swarm" operationId: "SwarmLeave" responses: 200: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "force" description: | Force leave swarm, even if this is the last manager or that it will break the cluster. in: "query" type: "boolean" default: false tags: ["Swarm"] /swarm/update: post: summary: "Update a swarm" operationId: "SwarmUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: $ref: "#/definitions/SwarmSpec" - name: "version" in: "query" description: | The version number of the swarm object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true - name: "rotateWorkerToken" in: "query" description: "Rotate the worker join token." type: "boolean" default: false - name: "rotateManagerToken" in: "query" description: "Rotate the manager join token." type: "boolean" default: false - name: "rotateManagerUnlockKey" in: "query" description: "Rotate the manager unlock key." type: "boolean" default: false tags: ["Swarm"] /swarm/unlockkey: get: summary: "Get the unlock key" operationId: "SwarmUnlockkey" consumes: - "application/json" responses: 200: description: "no error" schema: type: "object" title: "UnlockKeyResponse" properties: UnlockKey: description: "The swarm's unlock key." type: "string" example: UnlockKey: "SWMKEY-1-7c37Cc8654o6p38HnroywCi19pllOnGtbdZEgtKxZu8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" tags: ["Swarm"] /swarm/unlock: post: summary: "Unlock a locked manager" operationId: "SwarmUnlock" consumes: - "application/json" produces: - "application/json" parameters: - name: "body" in: "body" required: true schema: type: "object" properties: UnlockKey: description: "The swarm's unlock key." 
type: "string" example: UnlockKey: "SWMKEY-1-7c37Cc8654o6p38HnroywCi19pllOnGtbdZEgtKxZu8" responses: 200: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" tags: ["Swarm"] /services: get: summary: "List services" operationId: "ServiceList" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Service" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the services list. Available filters: - `id=<service id>` - `label=<service label>` - `mode=["replicated"|"global"]` - `name=<service name>` - name: "status" in: "query" type: "boolean" description: | Include service status, with count of running and desired tasks. tags: ["Service"] /services/create: post: summary: "Create a service" operationId: "ServiceCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: type: "object" title: "ServiceCreateResponse" properties: ID: description: "The ID of the created service." type: "string" Warning: description: "Optional warning message" type: "string" example: ID: "ak7w3gjqoa3kuz8xcpnyy0pvl" Warning: "unable to pin image doesnotexist:latest to digest: image library/doesnotexist:latest not found" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 403: description: "network is not eligible for services" schema: $ref: "#/definitions/ErrorResponse" 409: description: "name conflicts with an existing service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: allOf: - $ref: "#/definitions/ServiceSpec" - type: "object" example: Name: "web" TaskTemplate: ContainerSpec: Image: "nginx:alpine" Mounts: - ReadOnly: true Source: "web-data" Target: "/usr/share/nginx/html" Type: "volume" VolumeOptions: DriverConfig: {} Labels: com.example.something: "something-value" Hosts: ["10.10.10.10 host1", "ABCD:EF01:2345:6789:ABCD:EF01:2345:6789 host2"] User: "33" DNSConfig: Nameservers: ["8.8.8.8"] Search: ["example.org"] Options: ["timeout:3"] Secrets: - File: Name: "www.example.org.key" UID: "33" GID: "33" Mode: 384 SecretID: "fpjqlhnwb19zds35k8wn80lq9" SecretName: "example_org_domain_key" LogDriver: Name: "json-file" Options: max-file: "3" max-size: "10M" Placement: {} Resources: Limits: MemoryBytes: 104857600 Reservations: {} RestartPolicy: Condition: "on-failure" Delay: 10000000000 MaxAttempts: 10 Mode: Replicated: Replicas: 4 UpdateConfig: Parallelism: 2 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 RollbackConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 EndpointSpec: Ports: - Protocol: "tcp" PublishedPort: 8080 TargetPort: 80 Labels: foo: "bar" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration for pulling from private registries. Refer to the [authentication section](#section/Authentication) for details. 
type: "string" tags: ["Service"] /services/{id}: get: summary: "Inspect a service" operationId: "ServiceInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Service" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID or name of service." required: true type: "string" - name: "insertDefaults" in: "query" description: "Fill empty fields with default values." type: "boolean" default: false tags: ["Service"] delete: summary: "Delete a service" operationId: "ServiceDelete" responses: 200: description: "no error" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID or name of service." required: true type: "string" tags: ["Service"] /services/{id}/update: post: summary: "Update a service" operationId: "ServiceUpdate" consumes: ["application/json"] produces: ["application/json"] responses: 200: description: "no error" schema: $ref: "#/definitions/ServiceUpdateResponse" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID or name of service." required: true type: "string" - name: "body" in: "body" required: true schema: allOf: - $ref: "#/definitions/ServiceSpec" - type: "object" example: Name: "top" TaskTemplate: ContainerSpec: Image: "busybox" Args: - "top" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ForceUpdate: 0 Mode: Replicated: Replicas: 1 UpdateConfig: Parallelism: 2 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 RollbackConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 EndpointSpec: Mode: "vip" - name: "version" in: "query" description: | The version number of the service object being updated. This is required to avoid conflicting writes. This version number should be the value as currently set on the service *before* the update. You can find the current version by calling `GET /services/{id}` required: true type: "integer" - name: "registryAuthFrom" in: "query" description: | If the `X-Registry-Auth` header is not specified, this parameter indicates where to find registry authorization credentials. type: "string" enum: ["spec", "previous-spec"] default: "spec" - name: "rollback" in: "query" description: | Set to this parameter to `previous` to cause a server-side rollback to the previous service spec. The supplied spec will be ignored in this case. type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration for pulling from private registries. Refer to the [authentication section](#section/Authentication) for details. 
type: "string" tags: ["Service"] /services/{id}/logs: get: summary: "Get service logs" description: | Get `stdout` and `stderr` logs from a service. See also [`/containers/{id}/logs`](#operation/ContainerLogs). **Note**: This endpoint works only for services with the `local`, `json-file` or `journald` logging drivers. operationId: "ServiceLogs" responses: 200: description: "logs returned as a stream in response body" schema: type: "string" format: "binary" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such service: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the service" type: "string" - name: "details" in: "query" description: "Show service context and extra details provided to logs." type: "boolean" default: false - name: "follow" in: "query" description: "Keep connection after returning logs." type: "boolean" default: false - name: "stdout" in: "query" description: "Return logs from `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Return logs from `stderr`" type: "boolean" default: false - name: "since" in: "query" description: "Only return logs since this time, as a UNIX timestamp" type: "integer" default: 0 - name: "timestamps" in: "query" description: "Add timestamps to every log line" type: "boolean" default: false - name: "tail" in: "query" description: | Only return this number of log lines from the end of the logs. Specify as an integer or `all` to output all log lines. type: "string" default: "all" tags: ["Service"] /tasks: get: summary: "List tasks" operationId: "TaskList" produces: - "application/json" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Task" example: - ID: "0kzzo1i0y4jz6027t0k7aezc7" Version: Index: 71 CreatedAt: "2016-06-07T21:07:31.171892745Z" UpdatedAt: "2016-06-07T21:07:31.376370513Z" Spec: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ServiceID: "9mnpnzenvg8p8tdbtq4wvbkcz" Slot: 1 NodeID: "60gvrl6tm78dmak4yl7srz94v" Status: Timestamp: "2016-06-07T21:07:31.290032978Z" State: "running" Message: "started" ContainerStatus: ContainerID: "e5d62702a1b48d01c3e02ca1e0212a250801fa8d67caca0b6f35919ebc12f035" PID: 677 DesiredState: "running" NetworksAttachments: - Network: ID: "4qvuz4ko70xaltuqbt8956gd1" Version: Index: 18 CreatedAt: "2016-06-07T20:31:11.912919752Z" UpdatedAt: "2016-06-07T21:07:29.955277358Z" Spec: Name: "ingress" Labels: com.docker.swarm.internal: "true" DriverConfiguration: {} IPAMOptions: Driver: {} Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" DriverState: Name: "overlay" Options: com.docker.network.driver.overlay.vxlanid_list: "256" IPAMOptions: Driver: Name: "default" Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" Addresses: - "10.255.0.10/16" - ID: "1yljwbmlr8er2waf8orvqpwms" Version: Index: 30 CreatedAt: "2016-06-07T21:07:30.019104782Z" UpdatedAt: "2016-06-07T21:07:30.231958098Z" Name: "hopeful_cori" Spec: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ServiceID: "9mnpnzenvg8p8tdbtq4wvbkcz" Slot: 1 NodeID: "60gvrl6tm78dmak4yl7srz94v" Status: Timestamp: "2016-06-07T21:07:30.202183143Z" State: 
"shutdown" Message: "shutdown" ContainerStatus: ContainerID: "1cf8d63d18e79668b0004a4be4c6ee58cddfad2dae29506d8781581d0688a213" DesiredState: "shutdown" NetworksAttachments: - Network: ID: "4qvuz4ko70xaltuqbt8956gd1" Version: Index: 18 CreatedAt: "2016-06-07T20:31:11.912919752Z" UpdatedAt: "2016-06-07T21:07:29.955277358Z" Spec: Name: "ingress" Labels: com.docker.swarm.internal: "true" DriverConfiguration: {} IPAMOptions: Driver: {} Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" DriverState: Name: "overlay" Options: com.docker.network.driver.overlay.vxlanid_list: "256" IPAMOptions: Driver: Name: "default" Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" Addresses: - "10.255.0.5/16" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the tasks list. Available filters: - `desired-state=(running | shutdown | accepted)` - `id=<task id>` - `label=key` or `label="key=value"` - `name=<task name>` - `node=<node id or name>` - `service=<service name>` tags: ["Task"] /tasks/{id}: get: summary: "Inspect a task" operationId: "TaskInspect" produces: - "application/json" responses: 200: description: "no error" schema: $ref: "#/definitions/Task" 404: description: "no such task" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID of the task" required: true type: "string" tags: ["Task"] /tasks/{id}/logs: get: summary: "Get task logs" description: | Get `stdout` and `stderr` logs from a task. See also [`/containers/{id}/logs`](#operation/ContainerLogs). **Note**: This endpoint works only for services with the `local`, `json-file` or `journald` logging drivers. operationId: "TaskLogs" responses: 200: description: "logs returned as a stream in response body" schema: type: "string" format: "binary" 404: description: "no such task" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such task: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID of the task" type: "string" - name: "details" in: "query" description: "Show task context and extra details provided to logs." type: "boolean" default: false - name: "follow" in: "query" description: "Keep connection after returning logs." type: "boolean" default: false - name: "stdout" in: "query" description: "Return logs from `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Return logs from `stderr`" type: "boolean" default: false - name: "since" in: "query" description: "Only return logs since this time, as a UNIX timestamp" type: "integer" default: 0 - name: "timestamps" in: "query" description: "Add timestamps to every log line" type: "boolean" default: false - name: "tail" in: "query" description: | Only return this number of log lines from the end of the logs. Specify as an integer or `all` to output all log lines. 
type: "string" default: "all" tags: ["Task"] /secrets: get: summary: "List secrets" operationId: "SecretList" produces: - "application/json" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Secret" example: - ID: "blt1owaxmitz71s9v5zh81zun" Version: Index: 85 CreatedAt: "2017-07-20T13:55:28.678958722Z" UpdatedAt: "2017-07-20T13:55:28.678958722Z" Spec: Name: "mysql-passwd" Labels: some.label: "some.value" Driver: Name: "secret-bucket" Options: OptionA: "value for driver option A" OptionB: "value for driver option B" - ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "app-dev.crt" Labels: foo: "bar" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the secrets list. Available filters: - `id=<secret id>` - `label=<key> or label=<key>=value` - `name=<secret name>` - `names=<secret name>` tags: ["Secret"] /secrets/create: post: summary: "Create a secret" operationId: "SecretCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 409: description: "name conflicts with an existing object" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" schema: allOf: - $ref: "#/definitions/SecretSpec" - type: "object" example: Name: "app-key.crt" Labels: foo: "bar" Data: "VEhJUyBJUyBOT1QgQSBSRUFMIENFUlRJRklDQVRFCg==" Driver: Name: "secret-bucket" Options: OptionA: "value for driver option A" OptionB: "value for driver option B" tags: ["Secret"] /secrets/{id}: get: summary: "Inspect a secret" operationId: "SecretInspect" produces: - "application/json" responses: 200: description: "no error" schema: $ref: "#/definitions/Secret" examples: application/json: ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "app-dev.crt" Labels: foo: "bar" Driver: Name: "secret-bucket" Options: OptionA: "value for driver option A" OptionB: "value for driver option B" 404: description: "secret not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the secret" tags: ["Secret"] delete: summary: "Delete a secret" operationId: "SecretDelete" produces: - "application/json" responses: 204: description: "no error" 404: description: "secret not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the secret" tags: ["Secret"] /secrets/{id}/update: post: summary: "Update a Secret" operationId: "SecretUpdate" responses: 200: description: "no error" 400: 
description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such secret" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the secret" type: "string" required: true - name: "body" in: "body" schema: $ref: "#/definitions/SecretSpec" description: | The spec of the secret to update. Currently, only the Labels field can be updated. All other fields must remain unchanged from the [SecretInspect endpoint](#operation/SecretInspect) response values. - name: "version" in: "query" description: | The version number of the secret object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true tags: ["Secret"] /configs: get: summary: "List configs" operationId: "ConfigList" produces: - "application/json" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Config" example: - ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "server.conf" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the configs list. Available filters: - `id=<config id>` - `label=<key> or label=<key>=value` - `name=<config name>` - `names=<config name>` tags: ["Config"] /configs/create: post: summary: "Create a config" operationId: "ConfigCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 409: description: "name conflicts with an existing object" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" schema: allOf: - $ref: "#/definitions/ConfigSpec" - type: "object" example: Name: "server.conf" Labels: foo: "bar" Data: "VEhJUyBJUyBOT1QgQSBSRUFMIENFUlRJRklDQVRFCg==" tags: ["Config"] /configs/{id}: get: summary: "Inspect a config" operationId: "ConfigInspect" produces: - "application/json" responses: 200: description: "no error" schema: $ref: "#/definitions/Config" examples: application/json: ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "app-dev.crt" 404: description: "config not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the config" tags: ["Config"] delete: summary: "Delete a config" operationId: "ConfigDelete" produces: - "application/json" responses: 204: description: "no error" 404: description: "config not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part 
of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the config" tags: ["Config"] /configs/{id}/update: post: summary: "Update a Config" operationId: "ConfigUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such config" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the config" type: "string" required: true - name: "body" in: "body" schema: $ref: "#/definitions/ConfigSpec" description: | The spec of the config to update. Currently, only the Labels field can be updated. All other fields must remain unchanged from the [ConfigInspect endpoint](#operation/ConfigInspect) response values. - name: "version" in: "query" description: | The version number of the config object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true tags: ["Config"] /distribution/{name}/json: get: summary: "Get image information from the registry" description: | Return image digest and platform information by contacting the registry. operationId: "DistributionInspect" produces: - "application/json" responses: 200: description: "descriptor and platform information" schema: type: "object" x-go-name: DistributionInspect title: "DistributionInspectResponse" required: [Descriptor, Platforms] properties: Descriptor: type: "object" description: | A descriptor struct containing digest, media type, and size. properties: MediaType: type: "string" Size: type: "integer" format: "int64" Digest: type: "string" URLs: type: "array" items: type: "string" Platforms: type: "array" description: | An array containing all platforms supported by the image. items: type: "object" properties: Architecture: type: "string" OS: type: "string" OSVersion: type: "string" OSFeatures: type: "array" items: type: "string" Variant: type: "string" Features: type: "array" items: type: "string" examples: application/json: Descriptor: MediaType: "application/vnd.docker.distribution.manifest.v2+json" Digest: "sha256:c0537ff6a5218ef531ece93d4984efc99bbf3f7497c0a7726c88e2bb7584dc96" Size: 3987495 URLs: - "" Platforms: - Architecture: "amd64" OS: "linux" OSVersion: "" OSFeatures: - "" Variant: "" Features: - "" 401: description: "Failed authentication or no image found" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such image: someimage (tag: latest)" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or id" type: "string" required: true tags: ["Distribution"] /session: post: summary: "Initialize interactive session" description: | Start a new interactive session with a server. Session allows server to call back to the client for advanced capabilities. ### Hijacking This endpoint hijacks the HTTP connection to HTTP2 transport that allows the client to expose gPRC services on that connection. 
        For example, the client sends this request to upgrade the connection:

        ```
        POST /session HTTP/1.1
        Upgrade: h2c
        Connection: Upgrade
        ```

        The Docker daemon responds with a `101 UPGRADED` response followed by
        the raw stream:

        ```
        HTTP/1.1 101 UPGRADED
        Connection: Upgrade
        Upgrade: h2c
        ```
      operationId: "Session"
      produces:
        - "application/vnd.docker.raw-stream"
      responses:
        101:
          description: "no error, hijacking successful"
        400:
          description: "bad parameter"
          schema:
            $ref: "#/definitions/ErrorResponse"
        500:
          description: "server error"
          schema:
            $ref: "#/definitions/ErrorResponse"
      tags: ["Session"]
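As an illustration of the hijacking flow described in the `/session` endpoint above, the sketch below performs the upgrade handshake by hand over the daemon's default Unix socket. This is not part of the API definition, only a hedged example of the raw exchange; the socket path `/var/run/docker.sock` is an assumption and may differ on your host.

```go
package main

import (
	"bufio"
	"fmt"
	"net"
)

func main() {
	// Connect to the Docker daemon on its default Unix socket
	// (an assumption; the daemon may be configured to listen elsewhere).
	conn, err := net.Dial("unix", "/var/run/docker.sock")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Send the upgrade request shown in the /session description.
	fmt.Fprintf(conn, "POST /session HTTP/1.1\r\n"+
		"Host: docker\r\n"+
		"Upgrade: h2c\r\n"+
		"Connection: Upgrade\r\n\r\n")

	// Read the status line; a successful hijack is reported as
	// "HTTP/1.1 101 UPGRADED". After the response headers, the same
	// connection carries the raw (h2c) stream.
	status, err := bufio.NewReader(conn).ReadString('\n')
	if err != nil {
		panic(err)
	}
	fmt.Print(status)
}
```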
# A Swagger 2.0 (a.k.a. OpenAPI) definition of the Engine API. # # This is used for generating API documentation and the types used by the # client/server. See api/README.md for more information. # # Some style notes: # - This file is used by ReDoc, which allows GitHub Flavored Markdown in # descriptions. # - There is no maximum line length, for ease of editing and pretty diffs. # - operationIds are in the format "NounVerb", with a singular noun. swagger: "2.0" schemes: - "http" - "https" produces: - "application/json" - "text/plain" consumes: - "application/json" - "text/plain" basePath: "/v1.42" info: title: "Docker Engine API" version: "1.42" x-logo: url: "https://docs.docker.com/images/logo-docker-main.png" description: | The Engine API is an HTTP API served by Docker Engine. It is the API the Docker client uses to communicate with the Engine, so everything the Docker client can do can be done with the API. Most of the client's commands map directly to API endpoints (e.g. `docker ps` is `GET /containers/json`). The notable exception is running containers, which consists of several API calls. # Errors The API uses standard HTTP status codes to indicate the success or failure of the API call. The body of the response will be JSON in the following format: ``` { "message": "page not found" } ``` # Versioning The API is usually changed in each release, so API calls are versioned to ensure that clients don't break. To lock to a specific version of the API, you prefix the URL with its version, for example, call `/v1.30/info` to use the v1.30 version of the `/info` endpoint. If the API version specified in the URL is not supported by the daemon, a HTTP `400 Bad Request` error message is returned. If you omit the version-prefix, the current version of the API (v1.42) is used. For example, calling `/info` is the same as calling `/v1.42/info`. Using the API without a version-prefix is deprecated and will be removed in a future release. Engine releases in the near future should support this version of the API, so your client will continue to work even if it is talking to a newer Engine. The API uses an open schema model, which means server may add extra properties to responses. Likewise, the server will ignore any extra query parameters and request body properties. When you write clients, you need to ignore additional properties in responses to ensure they do not break when talking to newer daemons. # Authentication Authentication for registries is handled client side. The client has to send authentication details to various endpoints that need to communicate with registries, such as `POST /images/(name)/push`. These are sent as `X-Registry-Auth` header as a [base64url encoded](https://tools.ietf.org/html/rfc4648#section-5) (JSON) string with the following structure: ``` { "username": "string", "password": "string", "email": "string", "serveraddress": "string" } ``` The `serveraddress` is a domain/IP without a protocol. Throughout this structure, double quotes are required. If you have already got an identity token from the [`/auth` endpoint](#operation/SystemAuth), you can just pass this instead of credentials: ``` { "identitytoken": "9cbaf023786cd7..." } ``` # The tags on paths define the menu sections in the ReDoc documentation, so # the usage of tags must make sense for that: # - They should be singular, not plural. # - There should not be too many tags, or the menu becomes unwieldy. 
#   For example, it is preferable to add a path to the "System" tag instead of
#   creating a tag with a single path in it.
# - The order of tags in this list defines the order in the menu.
tags:
  # Primary objects
  - name: "Container"
    x-displayName: "Containers"
    description: |
      Create and manage containers.
  - name: "Image"
    x-displayName: "Images"
  - name: "Network"
    x-displayName: "Networks"
    description: |
      Networks are user-defined networks that containers can be attached to.
      See the [networking documentation](https://docs.docker.com/network/)
      for more information.
  - name: "Volume"
    x-displayName: "Volumes"
    description: |
      Create and manage persistent storage that can be attached to containers.
  - name: "Exec"
    x-displayName: "Exec"
    description: |
      Run new commands inside running containers. Refer to the
      [command-line reference](https://docs.docker.com/engine/reference/commandline/exec/)
      for more information.

      To exec a command in a container, you first need to create an exec
      instance, then start it. These two API endpoints are wrapped up in a
      single command-line command, `docker exec`.

  # Swarm things
  - name: "Swarm"
    x-displayName: "Swarm"
    description: |
      Engines can be clustered together in a swarm. Refer to the
      [swarm mode documentation](https://docs.docker.com/engine/swarm/)
      for more information.
  - name: "Node"
    x-displayName: "Nodes"
    description: |
      Nodes are instances of the Engine participating in a swarm. Swarm mode
      must be enabled for these endpoints to work.
  - name: "Service"
    x-displayName: "Services"
    description: |
      Services are the definitions of tasks to run on a swarm. Swarm mode must
      be enabled for these endpoints to work.
  - name: "Task"
    x-displayName: "Tasks"
    description: |
      A task is a container running on a swarm. It is the atomic scheduling
      unit of swarm. Swarm mode must be enabled for these endpoints to work.
  - name: "Secret"
    x-displayName: "Secrets"
    description: |
      Secrets are sensitive data that can be used by services. Swarm mode must
      be enabled for these endpoints to work.
  - name: "Config"
    x-displayName: "Configs"
    description: |
      Configs are application configurations that can be used by services.
      Swarm mode must be enabled for these endpoints to work.
# System things - name: "Plugin" x-displayName: "Plugins" - name: "System" x-displayName: "System" definitions: Port: type: "object" description: "An open port on a container" required: [PrivatePort, Type] properties: IP: type: "string" format: "ip-address" description: "Host IP address that the container's port is mapped to" PrivatePort: type: "integer" format: "uint16" x-nullable: false description: "Port on the container" PublicPort: type: "integer" format: "uint16" description: "Port exposed on the host" Type: type: "string" x-nullable: false enum: ["tcp", "udp", "sctp"] example: PrivatePort: 8080 PublicPort: 80 Type: "tcp" MountPoint: type: "object" description: "A mount point inside a container" properties: Type: type: "string" Name: type: "string" Source: type: "string" Destination: type: "string" Driver: type: "string" Mode: type: "string" RW: type: "boolean" Propagation: type: "string" DeviceMapping: type: "object" description: "A device mapping between the host and container" properties: PathOnHost: type: "string" PathInContainer: type: "string" CgroupPermissions: type: "string" example: PathOnHost: "/dev/deviceName" PathInContainer: "/dev/deviceName" CgroupPermissions: "mrw" DeviceRequest: type: "object" description: "A request for devices to be sent to device drivers" properties: Driver: type: "string" example: "nvidia" Count: type: "integer" example: -1 DeviceIDs: type: "array" items: type: "string" example: - "0" - "1" - "GPU-fef8089b-4820-abfc-e83e-94318197576e" Capabilities: description: | A list of capabilities; an OR list of AND lists of capabilities. type: "array" items: type: "array" items: type: "string" example: # gpu AND nvidia AND compute - ["gpu", "nvidia", "compute"] Options: description: | Driver-specific options, specified as a key/value pairs. These options are passed directly to the driver. type: "object" additionalProperties: type: "string" ThrottleDevice: type: "object" properties: Path: description: "Device path" type: "string" Rate: description: "Rate" type: "integer" format: "int64" minimum: 0 Mount: type: "object" properties: Target: description: "Container path." type: "string" Source: description: "Mount source (e.g. a volume name, a host path)." type: "string" Type: description: | The mount type. Available types: - `bind` Mounts a file or directory from the host into the container. Must exist prior to creating the container. - `volume` Creates a volume with the given name and options (or uses a pre-existing volume with the same name and options). These are **not** removed when the container is removed. - `tmpfs` Create a tmpfs with the given options. The mount source cannot be specified for tmpfs. - `npipe` Mounts a named pipe from the host into the container. Must exist prior to creating the container. type: "string" enum: - "bind" - "volume" - "tmpfs" - "npipe" ReadOnly: description: "Whether the mount should be read-only." type: "boolean" Consistency: description: "The consistency requirement for the mount: `default`, `consistent`, `cached`, or `delegated`." type: "string" BindOptions: description: "Optional configuration for the `bind` type." type: "object" properties: Propagation: description: "A propagation mode with the value `[r]private`, `[r]shared`, or `[r]slave`." type: "string" enum: - "private" - "rprivate" - "shared" - "rshared" - "slave" - "rslave" NonRecursive: description: "Disable recursive bind mount." type: "boolean" default: false VolumeOptions: description: "Optional configuration for the `volume` type." 
type: "object" properties: NoCopy: description: "Populate volume with data from the target." type: "boolean" default: false Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" DriverConfig: description: "Map of driver specific options" type: "object" properties: Name: description: "Name of the driver to use to create the volume." type: "string" Options: description: "key/value map of driver specific options." type: "object" additionalProperties: type: "string" TmpfsOptions: description: "Optional configuration for the `tmpfs` type." type: "object" properties: SizeBytes: description: "The size for the tmpfs mount in bytes." type: "integer" format: "int64" Mode: description: "The permission mode for the tmpfs mount in an integer." type: "integer" RestartPolicy: description: | The behavior to apply when the container exits. The default is not to restart. An ever increasing delay (double the previous delay, starting at 100ms) is added before each restart to prevent flooding the server. type: "object" properties: Name: type: "string" description: | - Empty string means not to restart - `no` Do not automatically restart - `always` Always restart - `unless-stopped` Restart always except when the user has manually stopped the container - `on-failure` Restart only when the container exit code is non-zero enum: - "" - "no" - "always" - "unless-stopped" - "on-failure" MaximumRetryCount: type: "integer" description: | If `on-failure` is used, the number of times to retry before giving up. Resources: description: "A container's resources (cgroups config, ulimits, etc)" type: "object" properties: # Applicable to all platforms CpuShares: description: | An integer value representing this container's relative CPU weight versus other containers. type: "integer" Memory: description: "Memory limit in bytes." type: "integer" format: "int64" default: 0 # Applicable to UNIX platforms CgroupParent: description: | Path to `cgroups` under which the container's `cgroup` is created. If the path is not absolute, the path is considered to be relative to the `cgroups` path of the init process. Cgroups are created if they do not already exist. type: "string" BlkioWeight: description: "Block IO weight (relative weight)." type: "integer" minimum: 0 maximum: 1000 BlkioWeightDevice: description: | Block IO weight (relative device weight) in the form: ``` [{"Path": "device_path", "Weight": weight}] ``` type: "array" items: type: "object" properties: Path: type: "string" Weight: type: "integer" minimum: 0 BlkioDeviceReadBps: description: | Limit read rate (bytes per second) from a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" BlkioDeviceWriteBps: description: | Limit write rate (bytes per second) to a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" BlkioDeviceReadIOps: description: | Limit read rate (IO per second) from a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" BlkioDeviceWriteIOps: description: | Limit write rate (IO per second) to a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" CpuPeriod: description: "The length of a CPU period in microseconds." 
type: "integer" format: "int64" CpuQuota: description: | Microseconds of CPU time that the container can get in a CPU period. type: "integer" format: "int64" CpuRealtimePeriod: description: | The length of a CPU real-time period in microseconds. Set to 0 to allocate no time allocated to real-time tasks. type: "integer" format: "int64" CpuRealtimeRuntime: description: | The length of a CPU real-time runtime in microseconds. Set to 0 to allocate no time allocated to real-time tasks. type: "integer" format: "int64" CpusetCpus: description: | CPUs in which to allow execution (e.g., `0-3`, `0,1`). type: "string" example: "0-3" CpusetMems: description: | Memory nodes (MEMs) in which to allow execution (0-3, 0,1). Only effective on NUMA systems. type: "string" Devices: description: "A list of devices to add to the container." type: "array" items: $ref: "#/definitions/DeviceMapping" DeviceCgroupRules: description: "a list of cgroup rules to apply to the container" type: "array" items: type: "string" example: "c 13:* rwm" DeviceRequests: description: | A list of requests for devices to be sent to device drivers. type: "array" items: $ref: "#/definitions/DeviceRequest" KernelMemory: description: | Kernel memory limit in bytes. <p><br /></p> > **Deprecated**: This field is deprecated as the kernel 5.4 deprecated > `kmem.limit_in_bytes`. type: "integer" format: "int64" example: 209715200 KernelMemoryTCP: description: "Hard limit for kernel TCP buffer memory (in bytes)." type: "integer" format: "int64" MemoryReservation: description: "Memory soft limit in bytes." type: "integer" format: "int64" MemorySwap: description: | Total memory limit (memory + swap). Set as `-1` to enable unlimited swap. type: "integer" format: "int64" MemorySwappiness: description: | Tune a container's memory swappiness behavior. Accepts an integer between 0 and 100. type: "integer" format: "int64" minimum: 0 maximum: 100 NanoCpus: description: "CPU quota in units of 10<sup>-9</sup> CPUs." type: "integer" format: "int64" OomKillDisable: description: "Disable OOM Killer for the container." type: "boolean" Init: description: | Run an init inside the container that forwards signals and reaps processes. This field is omitted if empty, and the default (as configured on the daemon) is used. type: "boolean" x-nullable: true PidsLimit: description: | Tune a container's PIDs limit. Set `0` or `-1` for unlimited, or `null` to not change. type: "integer" format: "int64" x-nullable: true Ulimits: description: | A list of resource limits to set in the container. For example: ``` {"Name": "nofile", "Soft": 1024, "Hard": 2048} ``` type: "array" items: type: "object" properties: Name: description: "Name of ulimit" type: "string" Soft: description: "Soft limit" type: "integer" Hard: description: "Hard limit" type: "integer" # Applicable to Windows CpuCount: description: | The number of usable CPUs (Windows only). On Windows Server containers, the processor resource controls are mutually exclusive. The order of precedence is `CPUCount` first, then `CPUShares`, and `CPUPercent` last. type: "integer" format: "int64" CpuPercent: description: | The usable percentage of the available CPUs (Windows only). On Windows Server containers, the processor resource controls are mutually exclusive. The order of precedence is `CPUCount` first, then `CPUShares`, and `CPUPercent` last. 
type: "integer" format: "int64" IOMaximumIOps: description: "Maximum IOps for the container system drive (Windows only)" type: "integer" format: "int64" IOMaximumBandwidth: description: | Maximum IO in bytes per second for the container system drive (Windows only). type: "integer" format: "int64" Limit: description: | An object describing a limit on resources which can be requested by a task. type: "object" properties: NanoCPUs: type: "integer" format: "int64" example: 4000000000 MemoryBytes: type: "integer" format: "int64" example: 8272408576 Pids: description: | Limits the maximum number of PIDs in the container. Set `0` for unlimited. type: "integer" format: "int64" default: 0 example: 100 ResourceObject: description: | An object describing the resources which can be advertised by a node and requested by a task. type: "object" properties: NanoCPUs: type: "integer" format: "int64" example: 4000000000 MemoryBytes: type: "integer" format: "int64" example: 8272408576 GenericResources: $ref: "#/definitions/GenericResources" GenericResources: description: | User-defined resources can be either Integer resources (e.g, `SSD=3`) or String resources (e.g, `GPU=UUID1`). type: "array" items: type: "object" properties: NamedResourceSpec: type: "object" properties: Kind: type: "string" Value: type: "string" DiscreteResourceSpec: type: "object" properties: Kind: type: "string" Value: type: "integer" format: "int64" example: - DiscreteResourceSpec: Kind: "SSD" Value: 3 - NamedResourceSpec: Kind: "GPU" Value: "UUID1" - NamedResourceSpec: Kind: "GPU" Value: "UUID2" HealthConfig: description: "A test to perform to check that the container is healthy." type: "object" properties: Test: description: | The test to perform. Possible values are: - `[]` inherit healthcheck from image or parent image - `["NONE"]` disable healthcheck - `["CMD", args...]` exec arguments directly - `["CMD-SHELL", command]` run command with system's default shell type: "array" items: type: "string" Interval: description: | The time to wait between checks in nanoseconds. It should be 0 or at least 1000000 (1 ms). 0 means inherit. type: "integer" Timeout: description: | The time to wait before considering the check to have hung. It should be 0 or at least 1000000 (1 ms). 0 means inherit. type: "integer" Retries: description: | The number of consecutive failures needed to consider a container as unhealthy. 0 means inherit. type: "integer" StartPeriod: description: | Start period for the container to initialize before starting health-retries countdown in nanoseconds. It should be 0 or at least 1000000 (1 ms). 0 means inherit. type: "integer" Health: description: | Health stores information about the container's healthcheck results. 
type: "object" properties: Status: description: | Status is one of `none`, `starting`, `healthy` or `unhealthy` - "none" Indicates there is no healthcheck - "starting" Starting indicates that the container is not yet ready - "healthy" Healthy indicates that the container is running correctly - "unhealthy" Unhealthy indicates that the container has a problem type: "string" enum: - "none" - "starting" - "healthy" - "unhealthy" example: "healthy" FailingStreak: description: "FailingStreak is the number of consecutive failures" type: "integer" example: 0 Log: type: "array" description: | Log contains the last few results (oldest first) items: x-nullable: true $ref: "#/definitions/HealthcheckResult" HealthcheckResult: description: | HealthcheckResult stores information about a single run of a healthcheck probe type: "object" properties: Start: description: | Date and time at which this check started in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "date-time" example: "2020-01-04T10:44:24.496525531Z" End: description: | Date and time at which this check ended in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2020-01-04T10:45:21.364524523Z" ExitCode: description: | ExitCode meanings: - `0` healthy - `1` unhealthy - `2` reserved (considered unhealthy) - other values: error running probe type: "integer" example: 0 Output: description: "Output from last check" type: "string" HostConfig: description: "Container configuration that depends on the host we are running on" allOf: - $ref: "#/definitions/Resources" - type: "object" properties: # Applicable to all platforms Binds: type: "array" description: | A list of volume bindings for this container. Each volume binding is a string in one of these forms: - `host-src:container-dest[:options]` to bind-mount a host path into the container. Both `host-src`, and `container-dest` must be an _absolute_ path. - `volume-name:container-dest[:options]` to bind-mount a volume managed by a volume driver into the container. `container-dest` must be an _absolute_ path. `options` is an optional, comma-delimited list of: - `nocopy` disables automatic copying of data from the container path to the volume. The `nocopy` flag only applies to named volumes. - `[ro|rw]` mounts a volume read-only or read-write, respectively. If omitted or set to `rw`, volumes are mounted read-write. - `[z|Z]` applies SELinux labels to allow or deny multiple containers to read and write to the same volume. - `z`: a _shared_ content label is applied to the content. This label indicates that multiple containers can share the volume content, for both reading and writing. - `Z`: a _private unshared_ label is applied to the content. This label indicates that only the current container can use a private volume. Labeling systems such as SELinux require proper labels to be placed on volume content that is mounted into a container. Without a label, the security system can prevent a container's processes from using the content. By default, the labels set by the host operating system are not modified. - `[[r]shared|[r]slave|[r]private]` specifies mount [propagation behavior](https://www.kernel.org/doc/Documentation/filesystems/sharedsubtree.txt). This only applies to bind-mounted volumes, not internal volumes or named volumes. 
Mount propagation requires the source mount point (the location where the source directory is mounted in the host operating system) to have the correct propagation properties. For shared volumes, the source mount point must be set to `shared`. For slave volumes, the mount must be set to either `shared` or `slave`. items: type: "string" ContainerIDFile: type: "string" description: "Path to a file where the container ID is written" LogConfig: type: "object" description: "The logging configuration for this container" properties: Type: type: "string" enum: - "json-file" - "syslog" - "journald" - "gelf" - "fluentd" - "awslogs" - "splunk" - "etwlogs" - "none" Config: type: "object" additionalProperties: type: "string" NetworkMode: type: "string" description: | Network mode to use for this container. Supported standard values are: `bridge`, `host`, `none`, and `container:<name|id>`. Any other value is taken as a custom network's name to which this container should connect to. PortBindings: $ref: "#/definitions/PortMap" RestartPolicy: $ref: "#/definitions/RestartPolicy" AutoRemove: type: "boolean" description: | Automatically remove the container when the container's process exits. This has no effect if `RestartPolicy` is set. VolumeDriver: type: "string" description: "Driver that this container uses to mount volumes." VolumesFrom: type: "array" description: | A list of volumes to inherit from another container, specified in the form `<container name>[:<ro|rw>]`. items: type: "string" Mounts: description: | Specification for mounts to be added to the container. type: "array" items: $ref: "#/definitions/Mount" # Applicable to UNIX platforms CapAdd: type: "array" description: | A list of kernel capabilities to add to the container. Conflicts with option 'Capabilities'. items: type: "string" CapDrop: type: "array" description: | A list of kernel capabilities to drop from the container. Conflicts with option 'Capabilities'. items: type: "string" CgroupnsMode: type: "string" enum: - "private" - "host" description: | cgroup namespace mode for the container. Possible values are: - `"private"`: the container runs in its own private cgroup namespace - `"host"`: use the host system's cgroup namespace If not specified, the daemon default is used, which can either be `"private"` or `"host"`, depending on daemon version, kernel support and configuration. Dns: type: "array" description: "A list of DNS servers for the container to use." items: type: "string" DnsOptions: type: "array" description: "A list of DNS options." items: type: "string" DnsSearch: type: "array" description: "A list of DNS search domains." items: type: "string" ExtraHosts: type: "array" description: | A list of hostnames/IP mappings to add to the container's `/etc/hosts` file. Specified in the form `["hostname:IP"]`. items: type: "string" GroupAdd: type: "array" description: | A list of additional groups that the container process will run as. items: type: "string" IpcMode: type: "string" description: | IPC sharing mode for the container. Possible values are: - `"none"`: own private IPC namespace, with /dev/shm not mounted - `"private"`: own private IPC namespace - `"shareable"`: own private IPC namespace, with a possibility to share it with other containers - `"container:<name|id>"`: join another (shareable) container's IPC namespace - `"host"`: use the host system's IPC namespace If not specified, daemon default is used, which can either be `"private"` or `"shareable"`, depending on daemon version and configuration. 
Cgroup: type: "string" description: "Cgroup to use for the container." Links: type: "array" description: | A list of links for the container in the form `container_name:alias`. items: type: "string" OomScoreAdj: type: "integer" description: | An integer value containing the score given to the container in order to tune OOM killer preferences. example: 500 PidMode: type: "string" description: | Set the PID (Process) Namespace mode for the container. It can be either: - `"container:<name|id>"`: joins another container's PID namespace - `"host"`: use the host's PID namespace inside the container Privileged: type: "boolean" description: "Gives the container full access to the host." PublishAllPorts: type: "boolean" description: | Allocates an ephemeral host port for all of a container's exposed ports. Ports are de-allocated when the container stops and allocated when the container starts. The allocated port might be changed when restarting the container. The port is selected from the ephemeral port range that depends on the kernel. For example, on Linux the range is defined by `/proc/sys/net/ipv4/ip_local_port_range`. ReadonlyRootfs: type: "boolean" description: "Mount the container's root filesystem as read only." SecurityOpt: type: "array" description: "A list of string values to customize labels for MLS systems, such as SELinux." items: type: "string" StorageOpt: type: "object" description: | Storage driver options for this container, in the form `{"size": "120G"}`. additionalProperties: type: "string" Tmpfs: type: "object" description: | A map of container directories which should be replaced by tmpfs mounts, and their corresponding mount options. For example: ``` { "/run": "rw,noexec,nosuid,size=65536k" } ``` additionalProperties: type: "string" UTSMode: type: "string" description: "UTS namespace to use for the container." UsernsMode: type: "string" description: | Sets the usernamespace mode for the container when usernamespace remapping option is enabled. ShmSize: type: "integer" description: | Size of `/dev/shm` in bytes. If omitted, the system uses 64MB. minimum: 0 Sysctls: type: "object" description: | A list of kernel parameters (sysctls) to set in the container. For example: ``` {"net.ipv4.ip_forward": "1"} ``` additionalProperties: type: "string" Runtime: type: "string" description: "Runtime to use with this container." # Applicable to Windows ConsoleSize: type: "array" description: | Initial console size, as an `[height, width]` array. (Windows only) minItems: 2 maxItems: 2 items: type: "integer" minimum: 0 Isolation: type: "string" description: | Isolation technology of the container. (Windows only) enum: - "default" - "process" - "hyperv" MaskedPaths: type: "array" description: | The list of paths to be masked inside the container (this overrides the default set of paths). items: type: "string" ReadonlyPaths: type: "array" description: | The list of paths to be set as read-only inside the container (this overrides the default set of paths). items: type: "string" ContainerConfig: description: "Configuration for a container that is portable between hosts" type: "object" properties: Hostname: description: "The hostname to use for the container, as a valid RFC 1123 hostname." type: "string" Domainname: description: "The domain name to use for the container." type: "string" User: description: "The user that commands are run as inside the container." type: "string" AttachStdin: description: "Whether to attach to `stdin`." 
type: "boolean" default: false AttachStdout: description: "Whether to attach to `stdout`." type: "boolean" default: true AttachStderr: description: "Whether to attach to `stderr`." type: "boolean" default: true ExposedPorts: description: | An object mapping ports to an empty object in the form: `{"<port>/<tcp|udp|sctp>": {}}` type: "object" additionalProperties: type: "object" enum: - {} default: {} Tty: description: | Attach standard streams to a TTY, including `stdin` if it is not closed. type: "boolean" default: false OpenStdin: description: "Open `stdin`" type: "boolean" default: false StdinOnce: description: "Close `stdin` after one attached client disconnects" type: "boolean" default: false Env: description: | A list of environment variables to set inside the container in the form `["VAR=value", ...]`. A variable without `=` is removed from the environment, rather than to have an empty value. type: "array" items: type: "string" Cmd: description: | Command to run specified as a string or an array of strings. type: "array" items: type: "string" Healthcheck: $ref: "#/definitions/HealthConfig" ArgsEscaped: description: "Command is already escaped (Windows only)" type: "boolean" Image: description: | The name of the image to use when creating the container/ type: "string" Volumes: description: | An object mapping mount point paths inside the container to empty objects. type: "object" additionalProperties: type: "object" enum: - {} default: {} WorkingDir: description: "The working directory for commands to run in." type: "string" Entrypoint: description: | The entry point for the container as a string or an array of strings. If the array consists of exactly one empty string (`[""]`) then the entry point is reset to system default (i.e., the entry point used by docker when there is no `ENTRYPOINT` instruction in the `Dockerfile`). type: "array" items: type: "string" NetworkDisabled: description: "Disable networking for the container." type: "boolean" MacAddress: description: "MAC address of the container." type: "string" OnBuild: description: | `ONBUILD` metadata that were defined in the image's `Dockerfile`. type: "array" items: type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" StopSignal: description: | Signal to stop a container as a string or unsigned integer. type: "string" default: "SIGTERM" StopTimeout: description: "Timeout to stop a container in seconds." type: "integer" default: 10 Shell: description: | Shell for when `RUN`, `CMD`, and `ENTRYPOINT` uses a shell. type: "array" items: type: "string" NetworkingConfig: description: | NetworkingConfig represents the container's networking configuration for each of its interfaces. It is used for the networking configs specified in the `docker create` and `docker network connect` commands. type: "object" properties: EndpointsConfig: description: | A mapping of network name to endpoint configuration for that network. type: "object" additionalProperties: $ref: "#/definitions/EndpointSettings" example: # putting an example here, instead of using the example values from # /definitions/EndpointSettings, because containers/create currently # does not support attaching to multiple networks, so the example request # would be confusing if it showed that multiple networks can be contained # in the EndpointsConfig. 
# TODO remove once we support multiple networks on container create (see https://github.com/moby/moby/blob/07e6b843594e061f82baa5fa23c2ff7d536c2a05/daemon/create.go#L323) EndpointsConfig: isolated_nw: IPAMConfig: IPv4Address: "172.20.30.33" IPv6Address: "2001:db8:abcd::3033" LinkLocalIPs: - "169.254.34.68" - "fe80::3468" Links: - "container_1" - "container_2" Aliases: - "server_x" - "server_y" NetworkSettings: description: "NetworkSettings exposes the network settings in the API" type: "object" properties: Bridge: description: Name of the network's bridge (for example, `docker0`). type: "string" example: "docker0" SandboxID: description: SandboxID uniquely represents a container's network stack. type: "string" example: "9d12daf2c33f5959c8bf90aa513e4f65b561738661003029ec84830cd503a0c3" HairpinMode: description: | Indicates if hairpin NAT should be enabled on the virtual interface. type: "boolean" example: false LinkLocalIPv6Address: description: IPv6 unicast address using the link-local prefix. type: "string" example: "fe80::42:acff:fe11:1" LinkLocalIPv6PrefixLen: description: Prefix length of the IPv6 unicast address. type: "integer" example: "64" Ports: $ref: "#/definitions/PortMap" SandboxKey: description: SandboxKey identifies the sandbox type: "string" example: "/var/run/docker/netns/8ab54b426c38" # TODO is SecondaryIPAddresses actually used? SecondaryIPAddresses: description: "" type: "array" items: $ref: "#/definitions/Address" x-nullable: true # TODO is SecondaryIPv6Addresses actually used? SecondaryIPv6Addresses: description: "" type: "array" items: $ref: "#/definitions/Address" x-nullable: true # TODO properties below are part of DefaultNetworkSettings, which is # marked as deprecated since Docker 1.9 and to be removed in Docker v17.12 EndpointID: description: | EndpointID uniquely represents a service endpoint in a Sandbox. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "b88f5b905aabf2893f3cbc4ee42d1ea7980bbc0a92e2c8922b1e1795298afb0b" Gateway: description: | Gateway address for the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "172.17.0.1" GlobalIPv6Address: description: | Global IPv6 address for the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "2001:db8::5689" GlobalIPv6PrefixLen: description: | Mask length of the global IPv6 address. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. 
This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "integer" example: 64 IPAddress: description: | IPv4 address for the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "172.17.0.4" IPPrefixLen: description: | Mask length of the IPv4 address. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "integer" example: 16 IPv6Gateway: description: | IPv6 gateway address for this network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "2001:db8:2::100" MacAddress: description: | MAC address for the container on the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "02:42:ac:11:00:04" Networks: description: | Information about all networks that the container is connected to. type: "object" additionalProperties: $ref: "#/definitions/EndpointSettings" Address: description: Address represents an IPv4 or IPv6 IP address. type: "object" properties: Addr: description: IP address. type: "string" PrefixLen: description: Mask length of the IP address. type: "integer" PortMap: description: | PortMap describes the mapping of container ports to host ports, using the container's port-number and protocol as key in the format `<port>/<protocol>`, for example, `80/udp`. If a container's port is mapped for multiple protocols, separate entries are added to the mapping table. type: "object" additionalProperties: type: "array" x-nullable: true items: $ref: "#/definitions/PortBinding" example: "443/tcp": - HostIp: "127.0.0.1" HostPort: "4443" "80/tcp": - HostIp: "0.0.0.0" HostPort: "80" - HostIp: "0.0.0.0" HostPort: "8080" "80/udp": - HostIp: "0.0.0.0" HostPort: "80" "53/udp": - HostIp: "0.0.0.0" HostPort: "53" "2377/tcp": null PortBinding: description: | PortBinding represents a binding between a host IP address and a host port. type: "object" properties: HostIp: description: "Host IP address that the container's port is mapped to." type: "string" example: "127.0.0.1" HostPort: description: "Host port number that the container's port is mapped to." type: "string" example: "4443" GraphDriverData: description: "Information about a container's graph driver." 
type: "object" required: [Name, Data] properties: Name: type: "string" x-nullable: false Data: type: "object" x-nullable: false additionalProperties: type: "string" Image: type: "object" required: - Id - Parent - Comment - Created - Container - DockerVersion - Author - Architecture - Os - Size - VirtualSize - GraphDriver - RootFS properties: Id: type: "string" x-nullable: false RepoTags: type: "array" items: type: "string" RepoDigests: type: "array" items: type: "string" Parent: type: "string" x-nullable: false Comment: type: "string" x-nullable: false Created: type: "string" x-nullable: false Container: type: "string" x-nullable: false ContainerConfig: $ref: "#/definitions/ContainerConfig" DockerVersion: type: "string" x-nullable: false Author: type: "string" x-nullable: false Config: $ref: "#/definitions/ContainerConfig" Architecture: type: "string" x-nullable: false Os: type: "string" x-nullable: false OsVersion: type: "string" Size: type: "integer" format: "int64" x-nullable: false VirtualSize: type: "integer" format: "int64" x-nullable: false GraphDriver: $ref: "#/definitions/GraphDriverData" RootFS: type: "object" required: [Type] properties: Type: type: "string" x-nullable: false Layers: type: "array" items: type: "string" BaseLayer: type: "string" Metadata: type: "object" properties: LastTagTime: type: "string" format: "dateTime" ImageSummary: type: "object" required: - Id - ParentId - RepoTags - RepoDigests - Created - Size - SharedSize - VirtualSize - Labels - Containers properties: Id: type: "string" x-nullable: false ParentId: type: "string" x-nullable: false RepoTags: type: "array" x-nullable: false items: type: "string" RepoDigests: type: "array" x-nullable: false items: type: "string" Created: type: "integer" x-nullable: false Size: type: "integer" x-nullable: false SharedSize: type: "integer" x-nullable: false VirtualSize: type: "integer" x-nullable: false Labels: type: "object" x-nullable: false additionalProperties: type: "string" Containers: x-nullable: false type: "integer" AuthConfig: type: "object" properties: username: type: "string" password: type: "string" email: type: "string" serveraddress: type: "string" example: username: "hannibal" password: "xxxx" serveraddress: "https://index.docker.io/v1/" ProcessConfig: type: "object" properties: privileged: type: "boolean" user: type: "string" tty: type: "boolean" entrypoint: type: "string" arguments: type: "array" items: type: "string" Volume: type: "object" required: [Name, Driver, Mountpoint, Labels, Scope, Options] properties: Name: type: "string" description: "Name of the volume." x-nullable: false Driver: type: "string" description: "Name of the volume driver used by the volume." x-nullable: false Mountpoint: type: "string" description: "Mount path of the volume on the host." x-nullable: false CreatedAt: type: "string" format: "dateTime" description: "Date/Time the volume was created." Status: type: "object" description: | Low-level details about the volume, provided by the volume driver. Details are returned as a map with key/value pairs: `{"key":"value","key2":"value2"}`. The `Status` field is optional, and is omitted if the volume driver does not support this feature. additionalProperties: type: "object" Labels: type: "object" description: "User-defined key/value metadata." x-nullable: false additionalProperties: type: "string" Scope: type: "string" description: | The level at which the volume exists. Either `global` for cluster-wide, or `local` for machine level. 
default: "local" x-nullable: false enum: ["local", "global"] Options: type: "object" description: | The driver specific options used when creating the volume. additionalProperties: type: "string" UsageData: type: "object" x-nullable: true required: [Size, RefCount] description: | Usage details about the volume. This information is used by the `GET /system/df` endpoint, and omitted in other endpoints. properties: Size: type: "integer" default: -1 description: | Amount of disk space used by the volume (in bytes). This information is only available for volumes created with the `"local"` volume driver. For volumes created with other volume drivers, this field is set to `-1` ("not available") x-nullable: false RefCount: type: "integer" default: -1 description: | The number of containers referencing this volume. This field is set to `-1` if the reference-count is not available. x-nullable: false example: Name: "tardis" Driver: "custom" Mountpoint: "/var/lib/docker/volumes/tardis" Status: hello: "world" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Scope: "local" CreatedAt: "2016-06-07T20:31:11.853781916Z" Network: type: "object" properties: Name: type: "string" Id: type: "string" Created: type: "string" format: "dateTime" Scope: type: "string" Driver: type: "string" EnableIPv6: type: "boolean" IPAM: $ref: "#/definitions/IPAM" Internal: type: "boolean" Attachable: type: "boolean" Ingress: type: "boolean" Containers: type: "object" additionalProperties: $ref: "#/definitions/NetworkContainer" Options: type: "object" additionalProperties: type: "string" Labels: type: "object" additionalProperties: type: "string" example: Name: "net01" Id: "7d86d31b1478e7cca9ebed7e73aa0fdeec46c5ca29497431d3007d2d9e15ed99" Created: "2016-10-19T04:33:30.360899459Z" Scope: "local" Driver: "bridge" EnableIPv6: false IPAM: Driver: "default" Config: - Subnet: "172.19.0.0/16" Gateway: "172.19.0.1" Options: foo: "bar" Internal: false Attachable: false Ingress: false Containers: 19a4d5d687db25203351ed79d478946f861258f018fe384f229f2efa4b23513c: Name: "test" EndpointID: "628cadb8bcb92de107b2a1e516cbffe463e321f548feb37697cce00ad694f21a" MacAddress: "02:42:ac:13:00:02" IPv4Address: "172.19.0.2/16" IPv6Address: "" Options: com.docker.network.bridge.default_bridge: "true" com.docker.network.bridge.enable_icc: "true" com.docker.network.bridge.enable_ip_masquerade: "true" com.docker.network.bridge.host_binding_ipv4: "0.0.0.0" com.docker.network.bridge.name: "docker0" com.docker.network.driver.mtu: "1500" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" IPAM: type: "object" properties: Driver: description: "Name of the IPAM driver to use." type: "string" default: "default" Config: description: | List of IPAM configuration options, specified as a map: ``` {"Subnet": <CIDR>, "IPRange": <CIDR>, "Gateway": <IP address>, "AuxAddress": <device_name:IP address>} ``` type: "array" items: type: "object" additionalProperties: type: "string" Options: description: "Driver-specific options, specified as a map." 
type: "object" additionalProperties: type: "string" NetworkContainer: type: "object" properties: Name: type: "string" EndpointID: type: "string" MacAddress: type: "string" IPv4Address: type: "string" IPv6Address: type: "string" BuildInfo: type: "object" properties: id: type: "string" stream: type: "string" error: type: "string" errorDetail: $ref: "#/definitions/ErrorDetail" status: type: "string" progress: type: "string" progressDetail: $ref: "#/definitions/ProgressDetail" aux: $ref: "#/definitions/ImageID" BuildCache: type: "object" properties: ID: type: "string" Parent: type: "string" Type: type: "string" Description: type: "string" InUse: type: "boolean" Shared: type: "boolean" Size: description: | Amount of disk space used by the build cache (in bytes). type: "integer" CreatedAt: description: | Date and time at which the build cache was created in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2016-08-18T10:44:24.496525531Z" LastUsedAt: description: | Date and time at which the build cache was last used in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" x-nullable: true example: "2017-08-09T07:09:37.632105588Z" UsageCount: type: "integer" ImageID: type: "object" description: "Image ID or Digest" properties: ID: type: "string" example: ID: "sha256:85f05633ddc1c50679be2b16a0479ab6f7637f8884e0cfe0f4d20e1ebb3d6e7c" CreateImageInfo: type: "object" properties: id: type: "string" error: type: "string" status: type: "string" progress: type: "string" progressDetail: $ref: "#/definitions/ProgressDetail" PushImageInfo: type: "object" properties: error: type: "string" status: type: "string" progress: type: "string" progressDetail: $ref: "#/definitions/ProgressDetail" ErrorDetail: type: "object" properties: code: type: "integer" message: type: "string" ProgressDetail: type: "object" properties: current: type: "integer" total: type: "integer" ErrorResponse: description: "Represents an error." type: "object" required: ["message"] properties: message: description: "The error message." type: "string" x-nullable: false example: message: "Something went wrong." IdResponse: description: "Response to an API call that returns just an Id" type: "object" required: ["Id"] properties: Id: description: "The id of the newly created object." type: "string" x-nullable: false EndpointSettings: description: "Configuration for a network endpoint." type: "object" properties: # Configurations IPAMConfig: $ref: "#/definitions/EndpointIPAMConfig" Links: type: "array" items: type: "string" example: - "container_1" - "container_2" Aliases: type: "array" items: type: "string" example: - "server_x" - "server_y" # Operational data NetworkID: description: | Unique ID of the network. type: "string" example: "08754567f1f40222263eab4102e1c733ae697e8e354aa9cd6e18d7402835292a" EndpointID: description: | Unique ID for the service endpoint in a Sandbox. type: "string" example: "b88f5b905aabf2893f3cbc4ee42d1ea7980bbc0a92e2c8922b1e1795298afb0b" Gateway: description: | Gateway address for this network. type: "string" example: "172.17.0.1" IPAddress: description: | IPv4 address. type: "string" example: "172.17.0.4" IPPrefixLen: description: | Mask length of the IPv4 address. type: "integer" example: 16 IPv6Gateway: description: | IPv6 gateway address. type: "string" example: "2001:db8:2::100" GlobalIPv6Address: description: | Global IPv6 address. 
type: "string" example: "2001:db8::5689" GlobalIPv6PrefixLen: description: | Mask length of the global IPv6 address. type: "integer" format: "int64" example: 64 MacAddress: description: | MAC address for the endpoint on this network. type: "string" example: "02:42:ac:11:00:04" DriverOpts: description: | DriverOpts is a mapping of driver options and values. These options are passed directly to the driver and are driver specific. type: "object" x-nullable: true additionalProperties: type: "string" example: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" EndpointIPAMConfig: description: | EndpointIPAMConfig represents an endpoint's IPAM configuration. type: "object" x-nullable: true properties: IPv4Address: type: "string" example: "172.20.30.33" IPv6Address: type: "string" example: "2001:db8:abcd::3033" LinkLocalIPs: type: "array" items: type: "string" example: - "169.254.34.68" - "fe80::3468" PluginMount: type: "object" x-nullable: false required: [Name, Description, Settable, Source, Destination, Type, Options] properties: Name: type: "string" x-nullable: false example: "some-mount" Description: type: "string" x-nullable: false example: "This is a mount that's used by the plugin." Settable: type: "array" items: type: "string" Source: type: "string" example: "/var/lib/docker/plugins/" Destination: type: "string" x-nullable: false example: "/mnt/state" Type: type: "string" x-nullable: false example: "bind" Options: type: "array" items: type: "string" example: - "rbind" - "rw" PluginDevice: type: "object" required: [Name, Description, Settable, Path] x-nullable: false properties: Name: type: "string" x-nullable: false Description: type: "string" x-nullable: false Settable: type: "array" items: type: "string" Path: type: "string" example: "/dev/fuse" PluginEnv: type: "object" x-nullable: false required: [Name, Description, Settable, Value] properties: Name: x-nullable: false type: "string" Description: x-nullable: false type: "string" Settable: type: "array" items: type: "string" Value: type: "string" PluginInterfaceType: type: "object" x-nullable: false required: [Prefix, Capability, Version] properties: Prefix: type: "string" x-nullable: false Capability: type: "string" x-nullable: false Version: type: "string" x-nullable: false PluginPrivilegeItem: description: | Describes a permission the user has to accept upon installing the plugin. type: "object" properties: Name: type: "string" example: "network" Description: type: "string" Value: type: "array" items: type: "string" example: - "host" Plugin: description: "A plugin for the Engine API" type: "object" required: [Settings, Enabled, Config, Name] properties: Id: type: "string" example: "5724e2c8652da337ab2eedd19fc6fc0ec908e4bd907c7421bf6a8dfc70c4c078" Name: type: "string" x-nullable: false example: "tiborvass/sample-volume-plugin" Enabled: description: True if the plugin is running. False if the plugin is not running, only installed. type: "boolean" x-nullable: false example: true Settings: description: "Settings that can be modified by users." 
type: "object" x-nullable: false required: [Args, Devices, Env, Mounts] properties: Mounts: type: "array" items: $ref: "#/definitions/PluginMount" Env: type: "array" items: type: "string" example: - "DEBUG=0" Args: type: "array" items: type: "string" Devices: type: "array" items: $ref: "#/definitions/PluginDevice" PluginReference: description: "plugin remote reference used to push/pull the plugin" type: "string" x-nullable: false example: "localhost:5000/tiborvass/sample-volume-plugin:latest" Config: description: "The config of a plugin." type: "object" x-nullable: false required: - Description - Documentation - Interface - Entrypoint - WorkDir - Network - Linux - PidHost - PropagatedMount - IpcHost - Mounts - Env - Args properties: DockerVersion: description: "Docker Version used to create the plugin" type: "string" x-nullable: false example: "17.06.0-ce" Description: type: "string" x-nullable: false example: "A sample volume plugin for Docker" Documentation: type: "string" x-nullable: false example: "https://docs.docker.com/engine/extend/plugins/" Interface: description: "The interface between Docker and the plugin" x-nullable: false type: "object" required: [Types, Socket] properties: Types: type: "array" items: $ref: "#/definitions/PluginInterfaceType" example: - "docker.volumedriver/1.0" Socket: type: "string" x-nullable: false example: "plugins.sock" ProtocolScheme: type: "string" example: "some.protocol/v1.0" description: "Protocol to use for clients connecting to the plugin." enum: - "" - "moby.plugins.http/v1" Entrypoint: type: "array" items: type: "string" example: - "/usr/bin/sample-volume-plugin" - "/data" WorkDir: type: "string" x-nullable: false example: "/bin/" User: type: "object" x-nullable: false properties: UID: type: "integer" format: "uint32" example: 1000 GID: type: "integer" format: "uint32" example: 1000 Network: type: "object" x-nullable: false required: [Type] properties: Type: x-nullable: false type: "string" example: "host" Linux: type: "object" x-nullable: false required: [Capabilities, AllowAllDevices, Devices] properties: Capabilities: type: "array" items: type: "string" example: - "CAP_SYS_ADMIN" - "CAP_SYSLOG" AllowAllDevices: type: "boolean" x-nullable: false example: false Devices: type: "array" items: $ref: "#/definitions/PluginDevice" PropagatedMount: type: "string" x-nullable: false example: "/mnt/volumes" IpcHost: type: "boolean" x-nullable: false example: false PidHost: type: "boolean" x-nullable: false example: false Mounts: type: "array" items: $ref: "#/definitions/PluginMount" Env: type: "array" items: $ref: "#/definitions/PluginEnv" example: - Name: "DEBUG" Description: "If set, prints debug messages" Settable: null Value: "0" Args: type: "object" x-nullable: false required: [Name, Description, Settable, Value] properties: Name: x-nullable: false type: "string" example: "args" Description: x-nullable: false type: "string" example: "command line arguments" Settable: type: "array" items: type: "string" Value: type: "array" items: type: "string" rootfs: type: "object" properties: type: type: "string" example: "layers" diff_ids: type: "array" items: type: "string" example: - "sha256:675532206fbf3030b8458f88d6e26d4eb1577688a25efec97154c94e8b6b4887" - "sha256:e216a057b1cb1efc11f8a268f37ef62083e70b1b38323ba252e25ac88904a7e8" ObjectVersion: description: | The version number of the object such as node, service, etc. This is needed to avoid conflicting writes. 
The client must send the version number along with the modified specification when updating these objects. This approach ensures safe concurrency and determinism in that the change on the object may not be applied if the version number has changed from the last read. In other words, if two update requests specify the same base version, only one of the requests can succeed. As a result, two separate update requests that happen at the same time will not unintentionally overwrite each other. type: "object" properties: Index: type: "integer" format: "uint64" example: 373531 NodeSpec: type: "object" properties: Name: description: "Name for the node." type: "string" example: "my-node" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" Role: description: "Role of the node." type: "string" enum: - "worker" - "manager" example: "manager" Availability: description: "Availability of the node." type: "string" enum: - "active" - "pause" - "drain" example: "active" example: Availability: "active" Name: "node-name" Role: "manager" Labels: foo: "bar" Node: type: "object" properties: ID: type: "string" example: "24ifsmvkjbyhk" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: description: | Date and time at which the node was added to the swarm in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2016-08-18T10:44:24.496525531Z" UpdatedAt: description: | Date and time at which the node was last updated in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2017-08-09T07:09:37.632105588Z" Spec: $ref: "#/definitions/NodeSpec" Description: $ref: "#/definitions/NodeDescription" Status: $ref: "#/definitions/NodeStatus" ManagerStatus: $ref: "#/definitions/ManagerStatus" NodeDescription: description: | NodeDescription encapsulates the properties of the Node as reported by the agent. type: "object" properties: Hostname: type: "string" example: "bf3067039e47" Platform: $ref: "#/definitions/Platform" Resources: $ref: "#/definitions/ResourceObject" Engine: $ref: "#/definitions/EngineDescription" TLSInfo: $ref: "#/definitions/TLSInfo" Platform: description: | Platform represents the platform (Arch/OS). type: "object" properties: Architecture: description: | Architecture represents the hardware architecture (for example, `x86_64`). type: "string" example: "x86_64" OS: description: | OS represents the Operating System (for example, `linux` or `windows`). type: "string" example: "linux" EngineDescription: description: "EngineDescription provides information about an engine." 
type: "object" properties: EngineVersion: type: "string" example: "17.06.0" Labels: type: "object" additionalProperties: type: "string" example: foo: "bar" Plugins: type: "array" items: type: "object" properties: Type: type: "string" Name: type: "string" example: - Type: "Log" Name: "awslogs" - Type: "Log" Name: "fluentd" - Type: "Log" Name: "gcplogs" - Type: "Log" Name: "gelf" - Type: "Log" Name: "journald" - Type: "Log" Name: "json-file" - Type: "Log" Name: "logentries" - Type: "Log" Name: "splunk" - Type: "Log" Name: "syslog" - Type: "Network" Name: "bridge" - Type: "Network" Name: "host" - Type: "Network" Name: "ipvlan" - Type: "Network" Name: "macvlan" - Type: "Network" Name: "null" - Type: "Network" Name: "overlay" - Type: "Volume" Name: "local" - Type: "Volume" Name: "localhost:5000/vieux/sshfs:latest" - Type: "Volume" Name: "vieux/sshfs:latest" TLSInfo: description: | Information about the issuer of leaf TLS certificates and the trusted root CA certificate. type: "object" properties: TrustRoot: description: | The root CA certificate(s) that are used to validate leaf TLS certificates. type: "string" CertIssuerSubject: description: The base64-url-safe-encoded raw subject bytes of the issuer. type: "string" CertIssuerPublicKey: description: | The base64-url-safe-encoded raw public key bytes of the issuer. type: "string" example: TrustRoot: | -----BEGIN CERTIFICATE----- MIIBajCCARCgAwIBAgIUbYqrLSOSQHoxD8CwG6Bi2PJi9c8wCgYIKoZIzj0EAwIw EzERMA8GA1UEAxMIc3dhcm0tY2EwHhcNMTcwNDI0MjE0MzAwWhcNMzcwNDE5MjE0 MzAwWjATMREwDwYDVQQDEwhzd2FybS1jYTBZMBMGByqGSM49AgEGCCqGSM49AwEH A0IABJk/VyMPYdaqDXJb/VXh5n/1Yuv7iNrxV3Qb3l06XD46seovcDWs3IZNV1lf 3Skyr0ofcchipoiHkXBODojJydSjQjBAMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMB Af8EBTADAQH/MB0GA1UdDgQWBBRUXxuRcnFjDfR/RIAUQab8ZV/n4jAKBggqhkjO PQQDAgNIADBFAiAy+JTe6Uc3KyLCMiqGl2GyWGQqQDEcO3/YG36x7om65AIhAJvz pxv6zFeVEkAEEkqIYi0omA9+CjanB/6Bz4n1uw8H -----END CERTIFICATE----- CertIssuerSubject: "MBMxETAPBgNVBAMTCHN3YXJtLWNh" CertIssuerPublicKey: "MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEmT9XIw9h1qoNclv9VeHmf/Vi6/uI2vFXdBveXTpcPjqx6i9wNazchk1XWV/dKTKvSh9xyGKmiIeRcE4OiMnJ1A==" NodeStatus: description: | NodeStatus represents the status of a node. It provides the current status of the node, as seen by the manager. type: "object" properties: State: $ref: "#/definitions/NodeState" Message: type: "string" example: "" Addr: description: "IP address of the node." type: "string" example: "172.17.0.2" NodeState: description: "NodeState represents the state of a node." type: "string" enum: - "unknown" - "down" - "ready" - "disconnected" example: "ready" ManagerStatus: description: | ManagerStatus represents the status of a manager. It provides the current status of a node's manager component, if the node is a manager. x-nullable: true type: "object" properties: Leader: type: "boolean" default: false example: true Reachability: $ref: "#/definitions/Reachability" Addr: description: | The IP address and port at which the manager is reachable. type: "string" example: "10.0.0.46:2377" Reachability: description: "Reachability represents the reachability of a node." type: "string" enum: - "unknown" - "unreachable" - "reachable" example: "reachable" SwarmSpec: description: "User modifiable swarm configuration." type: "object" properties: Name: description: "Name of the swarm." type: "string" example: "default" Labels: description: "User-defined key/value metadata." 
type: "object" additionalProperties: type: "string" example: com.example.corp.type: "production" com.example.corp.department: "engineering" Orchestration: description: "Orchestration configuration." type: "object" x-nullable: true properties: TaskHistoryRetentionLimit: description: | The number of historic tasks to keep per instance or node. If negative, never remove completed or failed tasks. type: "integer" format: "int64" example: 10 Raft: description: "Raft configuration." type: "object" properties: SnapshotInterval: description: "The number of log entries between snapshots." type: "integer" format: "uint64" example: 10000 KeepOldSnapshots: description: | The number of snapshots to keep beyond the current snapshot. type: "integer" format: "uint64" LogEntriesForSlowFollowers: description: | The number of log entries to keep around to sync up slow followers after a snapshot is created. type: "integer" format: "uint64" example: 500 ElectionTick: description: | The number of ticks that a follower will wait for a message from the leader before becoming a candidate and starting an election. `ElectionTick` must be greater than `HeartbeatTick`. A tick currently defaults to one second, so these translate directly to seconds currently, but this is NOT guaranteed. type: "integer" example: 3 HeartbeatTick: description: | The number of ticks between heartbeats. Every HeartbeatTick ticks, the leader will send a heartbeat to the followers. A tick currently defaults to one second, so these translate directly to seconds currently, but this is NOT guaranteed. type: "integer" example: 1 Dispatcher: description: "Dispatcher configuration." type: "object" x-nullable: true properties: HeartbeatPeriod: description: | The delay for an agent to send a heartbeat to the dispatcher. type: "integer" format: "int64" example: 5000000000 CAConfig: description: "CA configuration." type: "object" x-nullable: true properties: NodeCertExpiry: description: "The duration node certificates are issued for." type: "integer" format: "int64" example: 7776000000000000 ExternalCAs: description: | Configuration for forwarding signing requests to an external certificate authority. type: "array" items: type: "object" properties: Protocol: description: | Protocol for communication with the external CA (currently only `cfssl` is supported). type: "string" enum: - "cfssl" default: "cfssl" URL: description: | URL where certificate signing requests should be sent. type: "string" Options: description: | An object with key/value pairs that are interpreted as protocol-specific options for the external CA driver. type: "object" additionalProperties: type: "string" CACert: description: | The root CA certificate (in PEM format) this external CA uses to issue TLS certificates (assumed to be to the current swarm root CA certificate if not provided). type: "string" SigningCACert: description: | The desired signing CA certificate for all swarm node TLS leaf certificates, in PEM format. type: "string" SigningCAKey: description: | The desired signing CA key for all swarm node TLS leaf certificates, in PEM format. type: "string" ForceRotate: description: | An integer whose purpose is to force swarm to generate a new signing CA certificate and key, if none have been specified in `SigningCACert` and `SigningCAKey` format: "uint64" type: "integer" EncryptionConfig: description: "Parameters related to encryption-at-rest." type: "object" properties: AutoLockManagers: description: | If set, generate a key and use it to lock data stored on the managers. 
type: "boolean" example: false TaskDefaults: description: "Defaults for creating tasks in this cluster." type: "object" properties: LogDriver: description: | The log driver to use for tasks created in the orchestrator if unspecified by a service. Updating this value only affects new tasks. Existing tasks continue to use their previously configured log driver until recreated. type: "object" properties: Name: description: | The log driver to use as a default for new tasks. type: "string" example: "json-file" Options: description: | Driver-specific options for the selectd log driver, specified as key/value pairs. type: "object" additionalProperties: type: "string" example: "max-file": "10" "max-size": "100m" # The Swarm information for `GET /info`. It is the same as `GET /swarm`, but # without `JoinTokens`. ClusterInfo: description: | ClusterInfo represents information about the swarm as is returned by the "/info" endpoint. Join-tokens are not included. x-nullable: true type: "object" properties: ID: description: "The ID of the swarm." type: "string" example: "abajmipo7b4xz5ip2nrla6b11" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: description: | Date and time at which the swarm was initialised in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2016-08-18T10:44:24.496525531Z" UpdatedAt: description: | Date and time at which the swarm was last updated in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2017-08-09T07:09:37.632105588Z" Spec: $ref: "#/definitions/SwarmSpec" TLSInfo: $ref: "#/definitions/TLSInfo" RootRotationInProgress: description: | Whether there is currently a root CA rotation in progress for the swarm type: "boolean" example: false DataPathPort: description: | DataPathPort specifies the data path port number for data traffic. Acceptable port range is 1024 to 49151. If no port is set or is set to 0, the default port (4789) is used. type: "integer" format: "uint32" default: 4789 example: 4789 DefaultAddrPool: description: | Default Address Pool specifies default subnet pools for global scope networks. type: "array" items: type: "string" format: "CIDR" example: ["10.10.0.0/16", "20.20.0.0/16"] SubnetSize: description: | SubnetSize specifies the subnet size of the networks created from the default subnet pool. type: "integer" format: "uint32" maximum: 29 default: 24 example: 24 JoinTokens: description: | JoinTokens contains the tokens workers and managers need to join the swarm. type: "object" properties: Worker: description: | The token workers can use to join the swarm. type: "string" example: "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-1awxwuwd3z9j1z3puu7rcgdbx" Manager: description: | The token managers can use to join the swarm. type: "string" example: "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-7p73s1dx5in4tatdymyhg9hu2" Swarm: type: "object" allOf: - $ref: "#/definitions/ClusterInfo" - type: "object" properties: JoinTokens: $ref: "#/definitions/JoinTokens" TaskSpec: description: "User modifiable task configuration." type: "object" properties: PluginSpec: type: "object" description: | Plugin spec for the service. *(Experimental release only.)* <p><br /></p> > **Note**: ContainerSpec, NetworkAttachmentSpec, and PluginSpec are > mutually exclusive. PluginSpec is only used when the Runtime field > is set to `plugin`. NetworkAttachmentSpec is used when the Runtime > field is set to `attachment`. 
properties: Name: description: "The name or 'alias' to use for the plugin." type: "string" Remote: description: "The plugin image reference to use." type: "string" Disabled: description: "Disable the plugin once scheduled." type: "boolean" PluginPrivilege: type: "array" items: $ref: "#/definitions/PluginPrivilegeItem" ContainerSpec: type: "object" description: | Container spec for the service. <p><br /></p> > **Note**: ContainerSpec, NetworkAttachmentSpec, and PluginSpec are > mutually exclusive. PluginSpec is only used when the Runtime field > is set to `plugin`. NetworkAttachmentSpec is used when the Runtime > field is set to `attachment`. properties: Image: description: "The image name to use for the container" type: "string" Labels: description: "User-defined key/value data." type: "object" additionalProperties: type: "string" Command: description: "The command to be run in the image." type: "array" items: type: "string" Args: description: "Arguments to the command." type: "array" items: type: "string" Hostname: description: | The hostname to use for the container, as a valid [RFC 1123](https://tools.ietf.org/html/rfc1123) hostname. type: "string" Env: description: | A list of environment variables in the form `VAR=value`. type: "array" items: type: "string" Dir: description: "The working directory for commands to run in." type: "string" User: description: "The user inside the container." type: "string" Groups: type: "array" description: | A list of additional groups that the container process will run as. items: type: "string" Privileges: type: "object" description: "Security options for the container" properties: CredentialSpec: type: "object" description: "CredentialSpec for managed service account (Windows only)" properties: Config: type: "string" example: "0bt9dmxjvjiqermk6xrop3ekq" description: | Load credential spec from a Swarm Config with the given ID. The specified config must also be present in the Configs field with the Runtime property set. <p><br /></p> > **Note**: `CredentialSpec.File`, `CredentialSpec.Registry`, > and `CredentialSpec.Config` are mutually exclusive. File: type: "string" example: "spec.json" description: | Load credential spec from this file. The file is read by the daemon, and must be present in the `CredentialSpecs` subdirectory in the docker data directory, which defaults to `C:\ProgramData\Docker\` on Windows. For example, specifying `spec.json` loads `C:\ProgramData\Docker\CredentialSpecs\spec.json`. <p><br /></p> > **Note**: `CredentialSpec.File`, `CredentialSpec.Registry`, > and `CredentialSpec.Config` are mutually exclusive. Registry: type: "string" description: | Load credential spec from this value in the Windows registry. The specified registry value must be located in: `HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization\Containers\CredentialSpecs` <p><br /></p> > **Note**: `CredentialSpec.File`, `CredentialSpec.Registry`, > and `CredentialSpec.Config` are mutually exclusive. SELinuxContext: type: "object" description: "SELinux labels of the container" properties: Disable: type: "boolean" description: "Disable SELinux" User: type: "string" description: "SELinux user label" Role: type: "string" description: "SELinux role label" Type: type: "string" description: "SELinux type label" Level: type: "string" description: "SELinux level label" TTY: description: "Whether a pseudo-TTY should be allocated." 
type: "boolean" OpenStdin: description: "Open `stdin`" type: "boolean" ReadOnly: description: "Mount the container's root filesystem as read only." type: "boolean" Mounts: description: | Specification for mounts to be added to containers created as part of the service. type: "array" items: $ref: "#/definitions/Mount" StopSignal: description: "Signal to stop the container." type: "string" StopGracePeriod: description: | Amount of time to wait for the container to terminate before forcefully killing it. type: "integer" format: "int64" HealthCheck: $ref: "#/definitions/HealthConfig" Hosts: type: "array" description: | A list of hostname/IP mappings to add to the container's `hosts` file. The format of extra hosts is specified in the [hosts(5)](http://man7.org/linux/man-pages/man5/hosts.5.html) man page: IP_address canonical_hostname [aliases...] items: type: "string" DNSConfig: description: | Specification for DNS related configurations in resolver configuration file (`resolv.conf`). type: "object" properties: Nameservers: description: "The IP addresses of the name servers." type: "array" items: type: "string" Search: description: "A search list for host-name lookup." type: "array" items: type: "string" Options: description: | A list of internal resolver variables to be modified (e.g., `debug`, `ndots:3`, etc.). type: "array" items: type: "string" Secrets: description: | Secrets contains references to zero or more secrets that will be exposed to the service. type: "array" items: type: "object" properties: File: description: | File represents a specific target that is backed by a file. type: "object" properties: Name: description: | Name represents the final filename in the filesystem. type: "string" UID: description: "UID represents the file UID." type: "string" GID: description: "GID represents the file GID." type: "string" Mode: description: "Mode represents the FileMode of the file." type: "integer" format: "uint32" SecretID: description: | SecretID represents the ID of the specific secret that we're referencing. type: "string" SecretName: description: | SecretName is the name of the secret that this references, but this is just provided for lookup/display purposes. The secret in the reference will be identified by its ID. type: "string" Configs: description: | Configs contains references to zero or more configs that will be exposed to the service. type: "array" items: type: "object" properties: File: description: | File represents a specific target that is backed by a file. <p><br /><p> > **Note**: `Configs.File` and `Configs.Runtime` are mutually exclusive type: "object" properties: Name: description: | Name represents the final filename in the filesystem. type: "string" UID: description: "UID represents the file UID." type: "string" GID: description: "GID represents the file GID." type: "string" Mode: description: "Mode represents the FileMode of the file." type: "integer" format: "uint32" Runtime: description: | Runtime represents a target that is not mounted into the container but is used by the task <p><br /><p> > **Note**: `Configs.File` and `Configs.Runtime` are mutually > exclusive type: "object" ConfigID: description: | ConfigID represents the ID of the specific config that we're referencing. type: "string" ConfigName: description: | ConfigName is the name of the config that this references, but this is just provided for lookup/display purposes. The config in the reference will be identified by its ID. 
type: "string" Isolation: type: "string" description: | Isolation technology of the containers running the service. (Windows only) enum: - "default" - "process" - "hyperv" Init: description: | Run an init inside the container that forwards signals and reaps processes. This field is omitted if empty, and the default (as configured on the daemon) is used. type: "boolean" x-nullable: true Sysctls: description: | Set kernel namedspaced parameters (sysctls) in the container. The Sysctls option on services accepts the same sysctls as the are supported on containers. Note that while the same sysctls are supported, no guarantees or checks are made about their suitability for a clustered environment, and it's up to the user to determine whether a given sysctl will work properly in a Service. type: "object" additionalProperties: type: "string" # This option is not used by Windows containers CapabilityAdd: type: "array" description: | A list of kernel capabilities to add to the default set for the container. items: type: "string" example: - "CAP_NET_RAW" - "CAP_SYS_ADMIN" - "CAP_SYS_CHROOT" - "CAP_SYSLOG" CapabilityDrop: type: "array" description: | A list of kernel capabilities to drop from the default set for the container. items: type: "string" example: - "CAP_NET_RAW" Ulimits: description: | A list of resource limits to set in the container. For example: `{"Name": "nofile", "Soft": 1024, "Hard": 2048}`" type: "array" items: type: "object" properties: Name: description: "Name of ulimit" type: "string" Soft: description: "Soft limit" type: "integer" Hard: description: "Hard limit" type: "integer" NetworkAttachmentSpec: description: | Read-only spec type for non-swarm containers attached to swarm overlay networks. <p><br /></p> > **Note**: ContainerSpec, NetworkAttachmentSpec, and PluginSpec are > mutually exclusive. PluginSpec is only used when the Runtime field > is set to `plugin`. NetworkAttachmentSpec is used when the Runtime > field is set to `attachment`. type: "object" properties: ContainerID: description: "ID of the container represented by this task" type: "string" Resources: description: | Resource requirements which apply to each individual container created as part of the service. type: "object" properties: Limits: description: "Define resources limits." $ref: "#/definitions/Limit" Reservation: description: "Define resources reservation." $ref: "#/definitions/ResourceObject" RestartPolicy: description: | Specification for the restart policy which applies to containers created as part of this service. type: "object" properties: Condition: description: "Condition for restart." type: "string" enum: - "none" - "on-failure" - "any" Delay: description: "Delay between restart attempts." type: "integer" format: "int64" MaxAttempts: description: | Maximum attempts to restart a given container before giving up (default value is 0, which is ignored). type: "integer" format: "int64" default: 0 Window: description: | Windows is the time window used to evaluate the restart policy (default value is 0, which is unbounded). type: "integer" format: "int64" default: 0 Placement: type: "object" properties: Constraints: description: | An array of constraint expressions to limit the set of nodes where a task can be scheduled. Constraint expressions can either use a _match_ (`==`) or _exclude_ (`!=`) rule. Multiple constraints find nodes that satisfy every expression (AND match). 
Constraints can match node or Docker Engine labels as follows: node attribute | matches | example ---------------------|--------------------------------|----------------------------------------------- `node.id` | Node ID | `node.id==2ivku8v2gvtg4` `node.hostname` | Node hostname | `node.hostname!=node-2` `node.role` | Node role (`manager`/`worker`) | `node.role==manager` `node.platform.os` | Node operating system | `node.platform.os==windows` `node.platform.arch` | Node architecture | `node.platform.arch==x86_64` `node.labels` | User-defined node labels | `node.labels.security==high` `engine.labels` | Docker Engine's labels | `engine.labels.operatingsystem==ubuntu-14.04` `engine.labels` apply to Docker Engine labels like operating system, drivers, etc. Swarm administrators add `node.labels` for operational purposes by using the [`node update endpoint`](#operation/NodeUpdate). type: "array" items: type: "string" example: - "node.hostname!=node3.corp.example.com" - "node.role!=manager" - "node.labels.type==production" - "node.platform.os==linux" - "node.platform.arch==x86_64" Preferences: description: | Preferences provide a way to make the scheduler aware of factors such as topology. They are provided in order from highest to lowest precedence. type: "array" items: type: "object" properties: Spread: type: "object" properties: SpreadDescriptor: description: | label descriptor, such as `engine.labels.az`. type: "string" example: - Spread: SpreadDescriptor: "node.labels.datacenter" - Spread: SpreadDescriptor: "node.labels.rack" MaxReplicas: description: | Maximum number of replicas for per node (default value is 0, which is unlimited) type: "integer" format: "int64" default: 0 Platforms: description: | Platforms stores all the platforms that the service's image can run on. This field is used in the platform filter for scheduling. If empty, then the platform filter is off, meaning there are no scheduling restrictions. type: "array" items: $ref: "#/definitions/Platform" ForceUpdate: description: | A counter that triggers an update even if no relevant parameters have been changed. type: "integer" Runtime: description: | Runtime is the type of runtime specified for the task executor. type: "string" Networks: description: "Specifies which networks the service should attach to." type: "array" items: $ref: "#/definitions/NetworkAttachmentConfig" LogDriver: description: | Specifies the log driver to use for tasks created from this spec. If not present, the default one for the swarm will be used, finally falling back to the engine default if not specified. type: "object" properties: Name: type: "string" Options: type: "object" additionalProperties: type: "string" TaskState: type: "string" enum: - "new" - "allocated" - "pending" - "assigned" - "accepted" - "preparing" - "ready" - "starting" - "running" - "complete" - "shutdown" - "failed" - "rejected" - "remove" - "orphaned" Task: type: "object" properties: ID: description: "The ID of the task." type: "string" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" UpdatedAt: type: "string" format: "dateTime" Name: description: "Name of the task." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" Spec: $ref: "#/definitions/TaskSpec" ServiceID: description: "The ID of the service this task is part of." type: "string" Slot: type: "integer" NodeID: description: "The ID of the node that this task is on." 
type: "string" AssignedGenericResources: $ref: "#/definitions/GenericResources" Status: type: "object" properties: Timestamp: type: "string" format: "dateTime" State: $ref: "#/definitions/TaskState" Message: type: "string" Err: type: "string" ContainerStatus: type: "object" properties: ContainerID: type: "string" PID: type: "integer" ExitCode: type: "integer" DesiredState: $ref: "#/definitions/TaskState" JobIteration: description: | If the Service this Task belongs to is a job-mode service, contains the JobIteration of the Service this Task was created for. Absent if the Task was created for a Replicated or Global Service. $ref: "#/definitions/ObjectVersion" example: ID: "0kzzo1i0y4jz6027t0k7aezc7" Version: Index: 71 CreatedAt: "2016-06-07T21:07:31.171892745Z" UpdatedAt: "2016-06-07T21:07:31.376370513Z" Spec: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ServiceID: "9mnpnzenvg8p8tdbtq4wvbkcz" Slot: 1 NodeID: "60gvrl6tm78dmak4yl7srz94v" Status: Timestamp: "2016-06-07T21:07:31.290032978Z" State: "running" Message: "started" ContainerStatus: ContainerID: "e5d62702a1b48d01c3e02ca1e0212a250801fa8d67caca0b6f35919ebc12f035" PID: 677 DesiredState: "running" NetworksAttachments: - Network: ID: "4qvuz4ko70xaltuqbt8956gd1" Version: Index: 18 CreatedAt: "2016-06-07T20:31:11.912919752Z" UpdatedAt: "2016-06-07T21:07:29.955277358Z" Spec: Name: "ingress" Labels: com.docker.swarm.internal: "true" DriverConfiguration: {} IPAMOptions: Driver: {} Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" DriverState: Name: "overlay" Options: com.docker.network.driver.overlay.vxlanid_list: "256" IPAMOptions: Driver: Name: "default" Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" Addresses: - "10.255.0.10/16" AssignedGenericResources: - DiscreteResourceSpec: Kind: "SSD" Value: 3 - NamedResourceSpec: Kind: "GPU" Value: "UUID1" - NamedResourceSpec: Kind: "GPU" Value: "UUID2" ServiceSpec: description: "User modifiable configuration for a service." properties: Name: description: "Name of the service." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" TaskTemplate: $ref: "#/definitions/TaskSpec" Mode: description: "Scheduling mode for the service." type: "object" properties: Replicated: type: "object" properties: Replicas: type: "integer" format: "int64" Global: type: "object" ReplicatedJob: description: | The mode used for services with a finite number of tasks that run to a completed state. type: "object" properties: MaxConcurrent: description: | The maximum number of replicas to run simultaneously. type: "integer" format: "int64" default: 1 TotalCompletions: description: | The total number of replicas desired to reach the Completed state. If unset, will default to the value of `MaxConcurrent` type: "integer" format: "int64" GlobalJob: description: | The mode used for services which run a task to the completed state on each valid node. type: "object" UpdateConfig: description: "Specification for the update strategy of the service." type: "object" properties: Parallelism: description: | Maximum number of tasks to be updated in one iteration (0 means unlimited parallelism). type: "integer" format: "int64" Delay: description: "Amount of time between updates, in nanoseconds." type: "integer" format: "int64" FailureAction: description: | Action to take if an updated task fails to run, or stops running during the update. 
type: "string" enum: - "continue" - "pause" - "rollback" Monitor: description: | Amount of time to monitor each updated task for failures, in nanoseconds. type: "integer" format: "int64" MaxFailureRatio: description: | The fraction of tasks that may fail during an update before the failure action is invoked, specified as a floating point number between 0 and 1. type: "number" default: 0 Order: description: | The order of operations when rolling out an updated task. Either the old task is shut down before the new task is started, or the new task is started before the old task is shut down. type: "string" enum: - "stop-first" - "start-first" RollbackConfig: description: "Specification for the rollback strategy of the service." type: "object" properties: Parallelism: description: | Maximum number of tasks to be rolled back in one iteration (0 means unlimited parallelism). type: "integer" format: "int64" Delay: description: | Amount of time between rollback iterations, in nanoseconds. type: "integer" format: "int64" FailureAction: description: | Action to take if an rolled back task fails to run, or stops running during the rollback. type: "string" enum: - "continue" - "pause" Monitor: description: | Amount of time to monitor each rolled back task for failures, in nanoseconds. type: "integer" format: "int64" MaxFailureRatio: description: | The fraction of tasks that may fail during a rollback before the failure action is invoked, specified as a floating point number between 0 and 1. type: "number" default: 0 Order: description: | The order of operations when rolling back a task. Either the old task is shut down before the new task is started, or the new task is started before the old task is shut down. type: "string" enum: - "stop-first" - "start-first" Networks: description: "Specifies which networks the service should attach to." type: "array" items: $ref: "#/definitions/NetworkAttachmentConfig" EndpointSpec: $ref: "#/definitions/EndpointSpec" EndpointPortConfig: type: "object" properties: Name: type: "string" Protocol: type: "string" enum: - "tcp" - "udp" - "sctp" TargetPort: description: "The port inside the container." type: "integer" PublishedPort: description: "The port on the swarm hosts." type: "integer" PublishMode: description: | The mode in which port is published. <p><br /></p> - "ingress" makes the target port accessible on every node, regardless of whether there is a task for the service running on that node or not. - "host" bypasses the routing mesh and publish the port directly on the swarm node where that service is running. type: "string" enum: - "ingress" - "host" default: "ingress" example: "ingress" EndpointSpec: description: "Properties that can be configured to access and load balance a service." type: "object" properties: Mode: description: | The mode of resolution to use for internal load balancing between tasks. type: "string" enum: - "vip" - "dnsrr" default: "vip" Ports: description: | List of exposed ports that this service is accessible on from the outside. Ports can only be provided if `vip` resolution mode is used. 
type: "array" items: $ref: "#/definitions/EndpointPortConfig" Service: type: "object" properties: ID: type: "string" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" UpdatedAt: type: "string" format: "dateTime" Spec: $ref: "#/definitions/ServiceSpec" Endpoint: type: "object" properties: Spec: $ref: "#/definitions/EndpointSpec" Ports: type: "array" items: $ref: "#/definitions/EndpointPortConfig" VirtualIPs: type: "array" items: type: "object" properties: NetworkID: type: "string" Addr: type: "string" UpdateStatus: description: "The status of a service update." type: "object" properties: State: type: "string" enum: - "updating" - "paused" - "completed" StartedAt: type: "string" format: "dateTime" CompletedAt: type: "string" format: "dateTime" Message: type: "string" ServiceStatus: description: | The status of the service's tasks. Provided only when requested as part of a ServiceList operation. type: "object" properties: RunningTasks: description: | The number of tasks for the service currently in the Running state. type: "integer" format: "uint64" example: 7 DesiredTasks: description: | The number of tasks for the service desired to be running. For replicated services, this is the replica count from the service spec. For global services, this is computed by taking count of all tasks for the service with a Desired State other than Shutdown. type: "integer" format: "uint64" example: 10 CompletedTasks: description: | The number of tasks for a job that are in the Completed state. This field must be cross-referenced with the service type, as the value of 0 may mean the service is not in a job mode, or it may mean the job-mode service has no tasks yet Completed. type: "integer" format: "uint64" JobStatus: description: | The status of the service when it is in one of ReplicatedJob or GlobalJob modes. Absent on Replicated and Global mode services. The JobIteration is an ObjectVersion, but unlike the Service's version, does not need to be sent with an update request. type: "object" properties: JobIteration: description: | JobIteration is a value increased each time a Job is executed, successfully or otherwise. "Executed", in this case, means the job as a whole has been started, not that an individual Task has been launched. A job is "Executed" when its ServiceSpec is updated. JobIteration can be used to disambiguate Tasks belonging to different executions of a job. Though JobIteration will increase with each subsequent execution, it may not necessarily increase by 1, and so JobIteration should not be used to $ref: "#/definitions/ObjectVersion" LastExecution: description: | The last time, as observed by the server, that this job was started. 
type: "string" format: "dateTime" example: ID: "9mnpnzenvg8p8tdbtq4wvbkcz" Version: Index: 19 CreatedAt: "2016-06-07T21:05:51.880065305Z" UpdatedAt: "2016-06-07T21:07:29.962229872Z" Spec: Name: "hopeful_cori" TaskTemplate: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ForceUpdate: 0 Mode: Replicated: Replicas: 1 UpdateConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 RollbackConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 EndpointSpec: Mode: "vip" Ports: - Protocol: "tcp" TargetPort: 6379 PublishedPort: 30001 Endpoint: Spec: Mode: "vip" Ports: - Protocol: "tcp" TargetPort: 6379 PublishedPort: 30001 Ports: - Protocol: "tcp" TargetPort: 6379 PublishedPort: 30001 VirtualIPs: - NetworkID: "4qvuz4ko70xaltuqbt8956gd1" Addr: "10.255.0.2/16" - NetworkID: "4qvuz4ko70xaltuqbt8956gd1" Addr: "10.255.0.3/16" ImageDeleteResponseItem: type: "object" properties: Untagged: description: "The image ID of an image that was untagged" type: "string" Deleted: description: "The image ID of an image that was deleted" type: "string" ServiceUpdateResponse: type: "object" properties: Warnings: description: "Optional warning messages" type: "array" items: type: "string" example: Warning: "unable to pin image doesnotexist:latest to digest: image library/doesnotexist:latest not found" ContainerSummary: type: "object" properties: Id: description: "The ID of this container" type: "string" x-go-name: "ID" Names: description: "The names that this container has been given" type: "array" items: type: "string" Image: description: "The name of the image used when creating this container" type: "string" ImageID: description: "The ID of the image that this container was created from" type: "string" Command: description: "Command to run when starting the container" type: "string" Created: description: "When the container was created" type: "integer" format: "int64" Ports: description: "The ports exposed by this container" type: "array" items: $ref: "#/definitions/Port" SizeRw: description: "The size of files that have been created or changed by this container" type: "integer" format: "int64" SizeRootFs: description: "The total size of all the files in this container" type: "integer" format: "int64" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" State: description: "The state of this container (e.g. `Exited`)" type: "string" Status: description: "Additional human-readable status of this container (e.g. `Exit 0`)" type: "string" HostConfig: type: "object" properties: NetworkMode: type: "string" NetworkSettings: description: "A summary of the container's network settings" type: "object" properties: Networks: type: "object" additionalProperties: $ref: "#/definitions/EndpointSettings" Mounts: type: "array" items: $ref: "#/definitions/Mount" Driver: description: "Driver represents a driver (network, logging, secrets)." type: "object" required: [Name] properties: Name: description: "Name of the driver." type: "string" x-nullable: false example: "some-driver" Options: description: "Key/value map of driver-specific options." type: "object" x-nullable: false additionalProperties: type: "string" example: OptionA: "value for driver-specific option A" OptionB: "value for driver-specific option B" SecretSpec: type: "object" properties: Name: description: "User-defined name of the secret." 
type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" example: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Data: description: | Base64-url-safe-encoded ([RFC 4648](https://tools.ietf.org/html/rfc4648#section-5)) data to store as secret. This field is only used to _create_ a secret, and is not returned by other endpoints. type: "string" example: "" Driver: description: | Name of the secrets driver used to fetch the secret's value from an external secret store. $ref: "#/definitions/Driver" Templating: description: | Templating driver, if applicable Templating controls whether and how to evaluate the config payload as a template. If no driver is set, no templating is used. $ref: "#/definitions/Driver" Secret: type: "object" properties: ID: type: "string" example: "blt1owaxmitz71s9v5zh81zun" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" example: "2017-07-20T13:55:28.678958722Z" UpdatedAt: type: "string" format: "dateTime" example: "2017-07-20T13:55:28.678958722Z" Spec: $ref: "#/definitions/SecretSpec" ConfigSpec: type: "object" properties: Name: description: "User-defined name of the config." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" Data: description: | Base64-url-safe-encoded ([RFC 4648](https://tools.ietf.org/html/rfc4648#section-5)) config data. type: "string" Templating: description: | Templating driver, if applicable Templating controls whether and how to evaluate the config payload as a template. If no driver is set, no templating is used. $ref: "#/definitions/Driver" Config: type: "object" properties: ID: type: "string" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" UpdatedAt: type: "string" format: "dateTime" Spec: $ref: "#/definitions/ConfigSpec" ContainerState: description: | ContainerState stores container's running state. It's part of ContainerJSONBase and will be returned by the "inspect" command. type: "object" properties: Status: description: | String representation of the container state. Can be one of "created", "running", "paused", "restarting", "removing", "exited", or "dead". type: "string" enum: ["created", "running", "paused", "restarting", "removing", "exited", "dead"] example: "running" Running: description: | Whether this container is running. Note that a running container can be _paused_. The `Running` and `Paused` booleans are not mutually exclusive: When pausing a container (on Linux), the freezer cgroup is used to suspend all processes in the container. Freezing the process requires the process to be running. As a result, paused containers are both `Running` _and_ `Paused`. Use the `Status` field instead to determine if a container's state is "running". type: "boolean" example: true Paused: description: "Whether this container is paused." type: "boolean" example: false Restarting: description: "Whether this container is restarting." type: "boolean" example: false OOMKilled: description: | Whether this container has been killed because it ran out of memory. type: "boolean" example: false Dead: type: "boolean" example: false Pid: description: "The process ID of this container" type: "integer" example: 1234 ExitCode: description: "The last exit code of this container" type: "integer" example: 0 Error: type: "string" StartedAt: description: "The time when this container was last started." 
type: "string" example: "2020-01-06T09:06:59.461876391Z" FinishedAt: description: "The time when this container last exited." type: "string" example: "2020-01-06T09:07:59.461876391Z" Health: x-nullable: true $ref: "#/definitions/Health" SystemVersion: type: "object" description: | Response of Engine API: GET "/version" properties: Platform: type: "object" required: [Name] properties: Name: type: "string" Components: type: "array" description: | Information about system components items: type: "object" x-go-name: ComponentVersion required: [Name, Version] properties: Name: description: | Name of the component type: "string" example: "Engine" Version: description: | Version of the component type: "string" x-nullable: false example: "19.03.12" Details: description: | Key/value pairs of strings with additional information about the component. These values are intended for informational purposes only, and their content is not defined, and not part of the API specification. These messages can be printed by the client as information to the user. type: "object" x-nullable: true Version: description: "The version of the daemon" type: "string" example: "19.03.12" ApiVersion: description: | The default (and highest) API version that is supported by the daemon type: "string" example: "1.40" MinAPIVersion: description: | The minimum API version that is supported by the daemon type: "string" example: "1.12" GitCommit: description: | The Git commit of the source code that was used to build the daemon type: "string" example: "48a66213fe" GoVersion: description: | The version Go used to compile the daemon, and the version of the Go runtime in use. type: "string" example: "go1.13.14" Os: description: | The operating system that the daemon is running on ("linux" or "windows") type: "string" example: "linux" Arch: description: | The architecture that the daemon is running on type: "string" example: "amd64" KernelVersion: description: | The kernel version (`uname -r`) that the daemon is running on. This field is omitted when empty. type: "string" example: "4.19.76-linuxkit" Experimental: description: | Indicates if the daemon is started with experimental features enabled. This field is omitted when empty / false. type: "boolean" example: true BuildTime: description: | The date and time that the daemon was compiled. type: "string" example: "2020-06-22T15:49:27.000000000+00:00" SystemInfo: type: "object" properties: ID: description: | Unique identifier of the daemon. <p><br /></p> > **Note**: The format of the ID itself is not part of the API, and > should not be considered stable. type: "string" example: "7TRN:IPZB:QYBB:VPBQ:UMPP:KARE:6ZNR:XE6T:7EWV:PKF4:ZOJD:TPYS" Containers: description: "Total number of containers on the host." type: "integer" example: 14 ContainersRunning: description: | Number of containers with status `"running"`. type: "integer" example: 3 ContainersPaused: description: | Number of containers with status `"paused"`. type: "integer" example: 1 ContainersStopped: description: | Number of containers with status `"stopped"`. type: "integer" example: 10 Images: description: | Total number of images on the host. Both _tagged_ and _untagged_ (dangling) images are counted. type: "integer" example: 508 Driver: description: "Name of the storage driver in use." type: "string" example: "overlay2" DriverStatus: description: | Information specific to the storage driver, provided as "label" / "value" pairs. 
This information is provided by the storage driver, and formatted in a way consistent with the output of `docker info` on the command line. <p><br /></p> > **Note**: The information returned in this field, including the > formatting of values and labels, should not be considered stable, > and may change without notice. type: "array" items: type: "array" items: type: "string" example: - ["Backing Filesystem", "extfs"] - ["Supports d_type", "true"] - ["Native Overlay Diff", "true"] DockerRootDir: description: | Root directory of persistent Docker state. Defaults to `/var/lib/docker` on Linux, and `C:\ProgramData\docker` on Windows. type: "string" example: "/var/lib/docker" Plugins: $ref: "#/definitions/PluginsInfo" MemoryLimit: description: "Indicates if the host has memory limit support enabled." type: "boolean" example: true SwapLimit: description: "Indicates if the host has memory swap limit support enabled." type: "boolean" example: true KernelMemory: description: | Indicates if the host has kernel memory limit support enabled. <p><br /></p> > **Deprecated**: This field is deprecated as the kernel 5.4 deprecated > `kmem.limit_in_bytes`. type: "boolean" example: true CpuCfsPeriod: description: | Indicates if CPU CFS(Completely Fair Scheduler) period is supported by the host. type: "boolean" example: true CpuCfsQuota: description: | Indicates if CPU CFS(Completely Fair Scheduler) quota is supported by the host. type: "boolean" example: true CPUShares: description: | Indicates if CPU Shares limiting is supported by the host. type: "boolean" example: true CPUSet: description: | Indicates if CPUsets (cpuset.cpus, cpuset.mems) are supported by the host. See [cpuset(7)](https://www.kernel.org/doc/Documentation/cgroup-v1/cpusets.txt) type: "boolean" example: true PidsLimit: description: "Indicates if the host kernel has PID limit support enabled." type: "boolean" example: true OomKillDisable: description: "Indicates if OOM killer disable is supported on the host." type: "boolean" IPv4Forwarding: description: "Indicates IPv4 forwarding is enabled." type: "boolean" example: true BridgeNfIptables: description: "Indicates if `bridge-nf-call-iptables` is available on the host." type: "boolean" example: true BridgeNfIp6tables: description: "Indicates if `bridge-nf-call-ip6tables` is available on the host." type: "boolean" example: true Debug: description: | Indicates if the daemon is running in debug-mode / with debug-level logging enabled. type: "boolean" example: true NFd: description: | The total number of file Descriptors in use by the daemon process. This information is only returned if debug-mode is enabled. type: "integer" example: 64 NGoroutines: description: | The number of goroutines that currently exist. This information is only returned if debug-mode is enabled. type: "integer" example: 174 SystemTime: description: | Current system-time in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" example: "2017-08-08T20:28:29.06202363Z" LoggingDriver: description: | The logging driver to use as a default for new containers. type: "string" CgroupDriver: description: | The driver to use for managing cgroups. type: "string" enum: ["cgroupfs", "systemd", "none"] default: "cgroupfs" example: "cgroupfs" CgroupVersion: description: | The version of the cgroup. type: "string" enum: ["1", "2"] default: "1" example: "1" NEventsListener: description: "Number of event listeners subscribed." 
type: "integer" example: 30 KernelVersion: description: | Kernel version of the host. On Linux, this information obtained from `uname`. On Windows this information is queried from the <kbd>HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\</kbd> registry value, for example _"10.0 14393 (14393.1198.amd64fre.rs1_release_sec.170427-1353)"_. type: "string" example: "4.9.38-moby" OperatingSystem: description: | Name of the host's operating system, for example: "Ubuntu 16.04.2 LTS" or "Windows Server 2016 Datacenter" type: "string" example: "Alpine Linux v3.5" OSVersion: description: | Version of the host's operating system <p><br /></p> > **Note**: The information returned in this field, including its > very existence, and the formatting of values, should not be considered > stable, and may change without notice. type: "string" example: "16.04" OSType: description: | Generic type of the operating system of the host, as returned by the Go runtime (`GOOS`). Currently returned values are "linux" and "windows". A full list of possible values can be found in the [Go documentation](https://golang.org/doc/install/source#environment). type: "string" example: "linux" Architecture: description: | Hardware architecture of the host, as returned by the Go runtime (`GOARCH`). A full list of possible values can be found in the [Go documentation](https://golang.org/doc/install/source#environment). type: "string" example: "x86_64" NCPU: description: | The number of logical CPUs usable by the daemon. The number of available CPUs is checked by querying the operating system when the daemon starts. Changes to operating system CPU allocation after the daemon is started are not reflected. type: "integer" example: 4 MemTotal: description: | Total amount of physical memory available on the host, in bytes. type: "integer" format: "int64" example: 2095882240 IndexServerAddress: description: | Address / URL of the index server that is used for image search, and as a default for user authentication for Docker Hub and Docker Cloud. default: "https://index.docker.io/v1/" type: "string" example: "https://index.docker.io/v1/" RegistryConfig: $ref: "#/definitions/RegistryServiceConfig" GenericResources: $ref: "#/definitions/GenericResources" HttpProxy: description: | HTTP-proxy configured for the daemon. This value is obtained from the [`HTTP_PROXY`](https://www.gnu.org/software/wget/manual/html_node/Proxies.html) environment variable. Credentials ([user info component](https://tools.ietf.org/html/rfc3986#section-3.2.1)) in the proxy URL are masked in the API response. Containers do not automatically inherit this configuration. type: "string" example: "http://xxxxx:[email protected]:8080" HttpsProxy: description: | HTTPS-proxy configured for the daemon. This value is obtained from the [`HTTPS_PROXY`](https://www.gnu.org/software/wget/manual/html_node/Proxies.html) environment variable. Credentials ([user info component](https://tools.ietf.org/html/rfc3986#section-3.2.1)) in the proxy URL are masked in the API response. Containers do not automatically inherit this configuration. type: "string" example: "https://xxxxx:[email protected]:4443" NoProxy: description: | Comma-separated list of domain extensions for which no proxy should be used. This value is obtained from the [`NO_PROXY`](https://www.gnu.org/software/wget/manual/html_node/Proxies.html) environment variable. Containers do not automatically inherit this configuration. 
type: "string" example: "*.local, 169.254/16" Name: description: "Hostname of the host." type: "string" example: "node5.corp.example.com" Labels: description: | User-defined labels (key/value metadata) as set on the daemon. <p><br /></p> > **Note**: When part of a Swarm, nodes can both have _daemon_ labels, > set through the daemon configuration, and _node_ labels, set from a > manager node in the Swarm. Node labels are not included in this > field. Node labels can be retrieved using the `/nodes/(id)` endpoint > on a manager node in the Swarm. type: "array" items: type: "string" example: ["storage=ssd", "production"] ExperimentalBuild: description: | Indicates if experimental features are enabled on the daemon. type: "boolean" example: true ServerVersion: description: | Version string of the daemon. > **Note**: the [standalone Swarm API](https://docs.docker.com/swarm/swarm-api/) > returns the Swarm version instead of the daemon version, for example > `swarm/1.2.8`. type: "string" example: "17.06.0-ce" ClusterStore: description: | URL of the distributed storage backend. The storage backend is used for multihost networking (to store network and endpoint information) and by the node discovery mechanism. <p><br /></p> > **Deprecated**: This field is only propagated when using standalone Swarm > mode, and overlay networking using an external k/v store. Overlay > networks with Swarm mode enabled use the built-in raft store, and > this field will be empty. type: "string" example: "consul://consul.corp.example.com:8600/some/path" ClusterAdvertise: description: | The network endpoint that the Engine advertises for the purpose of node discovery. ClusterAdvertise is a `host:port` combination on which the daemon is reachable by other hosts. <p><br /></p> > **Deprecated**: This field is only propagated when using standalone Swarm > mode, and overlay networking using an external k/v store. Overlay > networks with Swarm mode enabled use the built-in raft store, and > this field will be empty. type: "string" example: "node5.corp.example.com:8000" Runtimes: description: | List of [OCI compliant](https://github.com/opencontainers/runtime-spec) runtimes configured on the daemon. Keys hold the "name" used to reference the runtime. The Docker daemon relies on an OCI compliant runtime (invoked via the `containerd` daemon) as its interface to the Linux kernel namespaces, cgroups, and SELinux. The default runtime is `runc`, and automatically configured. Additional runtimes can be configured by the user and will be listed here. type: "object" additionalProperties: $ref: "#/definitions/Runtime" default: runc: path: "runc" example: runc: path: "runc" runc-master: path: "/go/bin/runc" custom: path: "/usr/local/bin/my-oci-runtime" runtimeArgs: ["--debug", "--systemd-cgroup=false"] DefaultRuntime: description: | Name of the default OCI runtime that is used when starting containers. The default can be overridden per-container at create time. type: "string" default: "runc" example: "runc" Swarm: $ref: "#/definitions/SwarmInfo" LiveRestoreEnabled: description: | Indicates if live restore is enabled. If enabled, containers are kept running when the daemon is shutdown or upon daemon start if running containers are detected. type: "boolean" default: false example: false Isolation: description: | Represents the isolation technology to use as a default for containers. The supported values are platform-specific. 
          If no isolation value is specified on daemon start, on Windows client,
          the default is `hyperv`, and on Windows server, the default is
          `process`.

          This option is currently not used on other platforms.
        default: "default"
        type: "string"
        enum:
          - "default"
          - "hyperv"
          - "process"
      InitBinary:
        description: |
          Name and, optionally, path of the `docker-init` binary.

          If the path is omitted, the daemon searches the host's `$PATH` for the
          binary and uses the first result.
        type: "string"
        example: "docker-init"
      ContainerdCommit:
        $ref: "#/definitions/Commit"
      RuncCommit:
        $ref: "#/definitions/Commit"
      InitCommit:
        $ref: "#/definitions/Commit"
      SecurityOptions:
        description: |
          List of security features that are enabled on the daemon, such as
          apparmor, seccomp, SELinux, user-namespaces (userns), and rootless.

          Additional configuration options for each security feature may
          be present, and are included as a comma-separated list of key/value
          pairs.
        type: "array"
        items:
          type: "string"
        example:
          - "name=apparmor"
          - "name=seccomp,profile=default"
          - "name=selinux"
          - "name=userns"
          - "name=rootless"
      ProductLicense:
        description: |
          Reports a summary of the product license on the daemon.

          If a commercial license has been applied to the daemon, information
          such as number of nodes, and expiration are included.
        type: "string"
        example: "Community Engine"
      DefaultAddressPools:
        description: |
          List of custom default address pools for local networks, which can be
          specified in the daemon.json file or dockerd option.

          Example: a Base "10.10.0.0/16" with Size 24 will define the set of 256
          10.10.[0-255].0/24 address pools.
        type: "array"
        items:
          type: "object"
          properties:
            Base:
              description: "The network address in CIDR format"
              type: "string"
              example: "10.10.0.0/16"
            Size:
              description: "The network pool size"
              type: "integer"
              example: 24
      Warnings:
        description: |
          List of warnings / informational messages about missing features, or
          issues related to the daemon configuration.

          These messages can be printed by the client as information to the
          user.
        type: "array"
        items:
          type: "string"
        example:
          - "WARNING: No memory limit support"
          - "WARNING: bridge-nf-call-iptables is disabled"
          - "WARNING: bridge-nf-call-ip6tables is disabled"

  # PluginsInfo is a temp struct holding Plugins name
  # registered with docker daemon. It is used by Info struct
  PluginsInfo:
    description: |
      Available plugins per type.

      <p><br /></p>

      > **Note**: Only unmanaged (V1) plugins are included in this list.
      > V1 plugins are "lazily" loaded, and are not returned in this list
      > if there is no resource using the plugin.
    type: "object"
    properties:
      Volume:
        description: "Names of available volume-drivers, and volume-driver plugins."
        type: "array"
        items:
          type: "string"
        example: ["local"]
      Network:
        description: "Names of available network-drivers, and network-driver plugins."
        type: "array"
        items:
          type: "string"
        example: ["bridge", "host", "ipvlan", "macvlan", "null", "overlay"]
      Authorization:
        description: "Names of available authorization plugins."
        type: "array"
        items:
          type: "string"
        example: ["img-authz-plugin", "hbm"]
      Log:
        description: "Names of available logging-drivers, and logging-driver plugins."
        type: "array"
        items:
          type: "string"
        example: ["awslogs", "fluentd", "gcplogs", "gelf", "journald", "json-file", "logentries", "splunk", "syslog"]

  RegistryServiceConfig:
    description: |
      RegistryServiceConfig stores daemon registry services configuration.
type: "object" x-nullable: true properties: AllowNondistributableArtifactsCIDRs: description: | List of IP ranges to which nondistributable artifacts can be pushed, using the CIDR syntax [RFC 4632](https://tools.ietf.org/html/4632). Some images (for example, Windows base images) contain artifacts whose distribution is restricted by license. When these images are pushed to a registry, restricted artifacts are not included. This configuration override this behavior, and enables the daemon to push nondistributable artifacts to all registries whose resolved IP address is within the subnet described by the CIDR syntax. This option is useful when pushing images containing nondistributable artifacts to a registry on an air-gapped network so hosts on that network can pull the images without connecting to another server. > **Warning**: Nondistributable artifacts typically have restrictions > on how and where they can be distributed and shared. Only use this > feature to push artifacts to private registries and ensure that you > are in compliance with any terms that cover redistributing > nondistributable artifacts. type: "array" items: type: "string" example: ["::1/128", "127.0.0.0/8"] AllowNondistributableArtifactsHostnames: description: | List of registry hostnames to which nondistributable artifacts can be pushed, using the format `<hostname>[:<port>]` or `<IP address>[:<port>]`. Some images (for example, Windows base images) contain artifacts whose distribution is restricted by license. When these images are pushed to a registry, restricted artifacts are not included. This configuration override this behavior for the specified registries. This option is useful when pushing images containing nondistributable artifacts to a registry on an air-gapped network so hosts on that network can pull the images without connecting to another server. > **Warning**: Nondistributable artifacts typically have restrictions > on how and where they can be distributed and shared. Only use this > feature to push artifacts to private registries and ensure that you > are in compliance with any terms that cover redistributing > nondistributable artifacts. type: "array" items: type: "string" example: ["registry.internal.corp.example.com:3000", "[2001:db8:a0b:12f0::1]:443"] InsecureRegistryCIDRs: description: | List of IP ranges of insecure registries, using the CIDR syntax ([RFC 4632](https://tools.ietf.org/html/4632)). Insecure registries accept un-encrypted (HTTP) and/or untrusted (HTTPS with certificates from unknown CAs) communication. By default, local registries (`127.0.0.0/8`) are configured as insecure. All other registries are secure. Communicating with an insecure registry is not possible if the daemon assumes that registry is secure. This configuration override this behavior, insecure communication with registries whose resolved IP address is within the subnet described by the CIDR syntax. Registries can also be marked insecure by hostname. Those registries are listed under `IndexConfigs` and have their `Secure` field set to `false`. > **Warning**: Using this option can be useful when running a local > registry, but introduces security vulnerabilities. This option > should therefore ONLY be used for testing purposes. For increased > security, users should add their CA to their system's list of trusted > CAs instead of enabling this option. 
type: "array" items: type: "string" example: ["::1/128", "127.0.0.0/8"] IndexConfigs: type: "object" additionalProperties: $ref: "#/definitions/IndexInfo" example: "127.0.0.1:5000": "Name": "127.0.0.1:5000" "Mirrors": [] "Secure": false "Official": false "[2001:db8:a0b:12f0::1]:80": "Name": "[2001:db8:a0b:12f0::1]:80" "Mirrors": [] "Secure": false "Official": false "docker.io": Name: "docker.io" Mirrors: ["https://hub-mirror.corp.example.com:5000/"] Secure: true Official: true "registry.internal.corp.example.com:3000": Name: "registry.internal.corp.example.com:3000" Mirrors: [] Secure: false Official: false Mirrors: description: | List of registry URLs that act as a mirror for the official (`docker.io`) registry. type: "array" items: type: "string" example: - "https://hub-mirror.corp.example.com:5000/" - "https://[2001:db8:a0b:12f0::1]/" IndexInfo: description: IndexInfo contains information about a registry. type: "object" x-nullable: true properties: Name: description: | Name of the registry, such as "docker.io". type: "string" example: "docker.io" Mirrors: description: | List of mirrors, expressed as URIs. type: "array" items: type: "string" example: - "https://hub-mirror.corp.example.com:5000/" - "https://registry-2.docker.io/" - "https://registry-3.docker.io/" Secure: description: | Indicates if the registry is part of the list of insecure registries. If `false`, the registry is insecure. Insecure registries accept un-encrypted (HTTP) and/or untrusted (HTTPS with certificates from unknown CAs) communication. > **Warning**: Insecure registries can be useful when running a local > registry. However, because its use creates security vulnerabilities > it should ONLY be enabled for testing purposes. For increased > security, users should add their CA to their system's list of > trusted CAs instead of enabling this option. type: "boolean" example: true Official: description: | Indicates whether this is an official registry (i.e., Docker Hub / docker.io) type: "boolean" example: true Runtime: description: | Runtime describes an [OCI compliant](https://github.com/opencontainers/runtime-spec) runtime. The runtime is invoked by the daemon via the `containerd` daemon. OCI runtimes act as an interface to the Linux kernel namespaces, cgroups, and SELinux. type: "object" properties: path: description: | Name and, optional, path, of the OCI executable binary. If the path is omitted, the daemon searches the host's `$PATH` for the binary and uses the first result. type: "string" example: "/usr/local/bin/my-oci-runtime" runtimeArgs: description: | List of command-line arguments to pass to the runtime when invoked. type: "array" x-nullable: true items: type: "string" example: ["--debug", "--systemd-cgroup=false"] Commit: description: | Commit holds the Git-commit (SHA1) that a binary was built from, as reported in the version-string of external tools, such as `containerd`, or `runC`. type: "object" properties: ID: description: "Actual commit ID of external tool." type: "string" example: "cfb82a876ecc11b5ca0977d1733adbe58599088a" Expected: description: | Commit ID of external tool expected by dockerd as set at build time. type: "string" example: "2d41c047c83e09a6d61d464906feb2a2f3c52aa4" SwarmInfo: description: | Represents generic information about swarm. type: "object" properties: NodeID: description: "Unique identifier of for this node in the swarm." 
type: "string" default: "" example: "k67qz4598weg5unwwffg6z1m1" NodeAddr: description: | IP address at which this node can be reached by other nodes in the swarm. type: "string" default: "" example: "10.0.0.46" LocalNodeState: $ref: "#/definitions/LocalNodeState" ControlAvailable: type: "boolean" default: false example: true Error: type: "string" default: "" RemoteManagers: description: | List of ID's and addresses of other managers in the swarm. type: "array" default: null x-nullable: true items: $ref: "#/definitions/PeerNode" example: - NodeID: "71izy0goik036k48jg985xnds" Addr: "10.0.0.158:2377" - NodeID: "79y6h1o4gv8n120drcprv5nmc" Addr: "10.0.0.159:2377" - NodeID: "k67qz4598weg5unwwffg6z1m1" Addr: "10.0.0.46:2377" Nodes: description: "Total number of nodes in the swarm." type: "integer" x-nullable: true example: 4 Managers: description: "Total number of managers in the swarm." type: "integer" x-nullable: true example: 3 Cluster: $ref: "#/definitions/ClusterInfo" LocalNodeState: description: "Current local status of this node." type: "string" default: "" enum: - "" - "inactive" - "pending" - "active" - "error" - "locked" example: "active" PeerNode: description: "Represents a peer-node in the swarm" properties: NodeID: description: "Unique identifier of for this node in the swarm." type: "string" Addr: description: | IP address and ports at which this node can be reached. type: "string" NetworkAttachmentConfig: description: | Specifies how a service should be attached to a particular network. type: "object" properties: Target: description: | The target network for attachment. Must be a network name or ID. type: "string" Aliases: description: | Discoverable alternate names for the service on this network. type: "array" items: type: "string" DriverOpts: description: | Driver attachment options for the network target. type: "object" additionalProperties: type: "string" paths: /containers/json: get: summary: "List containers" description: | Returns a list of containers. For details on the format, see the [inspect endpoint](#operation/ContainerInspect). Note that it uses a different, smaller representation of a container than inspecting a single container. For example, the list of linked containers is not propagated . operationId: "ContainerList" produces: - "application/json" parameters: - name: "all" in: "query" description: | Return all containers. By default, only running containers are shown. type: "boolean" default: false - name: "limit" in: "query" description: | Return this number of most recently created containers, including non-running ones. type: "integer" - name: "size" in: "query" description: | Return the size of container as fields `SizeRw` and `SizeRootFs`. type: "boolean" default: false - name: "filters" in: "query" description: | Filters to process on the container list, encoded as JSON (a `map[string][]string`). For example, `{"status": ["paused"]}` will only return paused containers. 
Available filters: - `ancestor`=(`<image-name>[:<tag>]`, `<image id>`, or `<image@digest>`) - `before`=(`<container id>` or `<container name>`) - `expose`=(`<port>[/<proto>]`|`<startport-endport>/[<proto>]`) - `exited=<int>` containers with exit code of `<int>` - `health`=(`starting`|`healthy`|`unhealthy`|`none`) - `id=<ID>` a container's ID - `isolation=`(`default`|`process`|`hyperv`) (Windows daemon only) - `is-task=`(`true`|`false`) - `label=key` or `label="key=value"` of a container label - `name=<name>` a container's name - `network`=(`<network id>` or `<network name>`) - `publish`=(`<port>[/<proto>]`|`<startport-endport>/[<proto>]`) - `since`=(`<container id>` or `<container name>`) - `status=`(`created`|`restarting`|`running`|`removing`|`paused`|`exited`|`dead`) - `volume`=(`<volume name>` or `<mount point destination>`) type: "string" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/ContainerSummary" examples: application/json: - Id: "8dfafdbc3a40" Names: - "/boring_feynman" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 1" Created: 1367854155 State: "Exited" Status: "Exit 0" Ports: - PrivatePort: 2222 PublicPort: 3333 Type: "tcp" Labels: com.example.vendor: "Acme" com.example.license: "GPL" com.example.version: "1.0" SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "2cdc4edb1ded3631c81f57966563e5c8525b81121bb3706a9a9a3ae102711f3f" Gateway: "172.17.0.1" IPAddress: "172.17.0.2" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:02" Mounts: - Name: "fac362...80535" Source: "/data" Destination: "/data" Driver: "local" Mode: "ro,Z" RW: false Propagation: "" - Id: "9cd87474be90" Names: - "/coolName" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 222222" Created: 1367854155 State: "Exited" Status: "Exit 0" Ports: [] Labels: {} SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "88eaed7b37b38c2a3f0c4bc796494fdf51b270c2d22656412a2ca5d559a64d7a" Gateway: "172.17.0.1" IPAddress: "172.17.0.8" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:08" Mounts: [] - Id: "3176a2479c92" Names: - "/sleepy_dog" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 3333333333333333" Created: 1367854154 State: "Exited" Status: "Exit 0" Ports: [] Labels: {} SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "8b27c041c30326d59cd6e6f510d4f8d1d570a228466f956edf7815508f78e30d" Gateway: "172.17.0.1" IPAddress: "172.17.0.6" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:06" Mounts: [] - Id: "4cb07b47f9fb" Names: - "/running_cat" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 444444444444444444444444444444444" Created: 1367854152 State: "Exited" Status: "Exit 0" Ports: [] Labels: {} SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" 
NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "d91c7b2f0644403d7ef3095985ea0e2370325cd2332ff3a3225c4247328e66e9" Gateway: "172.17.0.1" IPAddress: "172.17.0.5" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:05" Mounts: [] 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Container"] /containers/create: post: summary: "Create a container" operationId: "ContainerCreate" consumes: - "application/json" - "application/octet-stream" produces: - "application/json" parameters: - name: "name" in: "query" description: | Assign the specified name to the container. Must match `/?[a-zA-Z0-9][a-zA-Z0-9_.-]+`. type: "string" pattern: "^/?[a-zA-Z0-9][a-zA-Z0-9_.-]+$" - name: "body" in: "body" description: "Container to create" schema: allOf: - $ref: "#/definitions/ContainerConfig" - type: "object" properties: HostConfig: $ref: "#/definitions/HostConfig" NetworkingConfig: $ref: "#/definitions/NetworkingConfig" example: Hostname: "" Domainname: "" User: "" AttachStdin: false AttachStdout: true AttachStderr: true Tty: false OpenStdin: false StdinOnce: false Env: - "FOO=bar" - "BAZ=quux" Cmd: - "date" Entrypoint: "" Image: "ubuntu" Labels: com.example.vendor: "Acme" com.example.license: "GPL" com.example.version: "1.0" Volumes: /volumes/data: {} WorkingDir: "" NetworkDisabled: false MacAddress: "12:34:56:78:9a:bc" ExposedPorts: 22/tcp: {} StopSignal: "SIGTERM" StopTimeout: 10 HostConfig: Binds: - "/tmp:/tmp" Links: - "redis3:redis" Memory: 0 MemorySwap: 0 MemoryReservation: 0 KernelMemory: 0 NanoCpus: 500000 CpuPercent: 80 CpuShares: 512 CpuPeriod: 100000 CpuRealtimePeriod: 1000000 CpuRealtimeRuntime: 10000 CpuQuota: 50000 CpusetCpus: "0,1" CpusetMems: "0,1" MaximumIOps: 0 MaximumIOBps: 0 BlkioWeight: 300 BlkioWeightDevice: - {} BlkioDeviceReadBps: - {} BlkioDeviceReadIOps: - {} BlkioDeviceWriteBps: - {} BlkioDeviceWriteIOps: - {} DeviceRequests: - Driver: "nvidia" Count: -1 DeviceIDs": ["0", "1", "GPU-fef8089b-4820-abfc-e83e-94318197576e"] Capabilities: [["gpu", "nvidia", "compute"]] Options: property1: "string" property2: "string" MemorySwappiness: 60 OomKillDisable: false OomScoreAdj: 500 PidMode: "" PidsLimit: 0 PortBindings: 22/tcp: - HostPort: "11022" PublishAllPorts: false Privileged: false ReadonlyRootfs: false Dns: - "8.8.8.8" DnsOptions: - "" DnsSearch: - "" VolumesFrom: - "parent" - "other:ro" CapAdd: - "NET_ADMIN" CapDrop: - "MKNOD" GroupAdd: - "newgroup" RestartPolicy: Name: "" MaximumRetryCount: 0 AutoRemove: true NetworkMode: "bridge" Devices: [] Ulimits: - {} LogConfig: Type: "json-file" Config: {} SecurityOpt: [] StorageOpt: {} CgroupParent: "" VolumeDriver: "" ShmSize: 67108864 NetworkingConfig: EndpointsConfig: isolated_nw: IPAMConfig: IPv4Address: "172.20.30.33" IPv6Address: "2001:db8:abcd::3033" LinkLocalIPs: - "169.254.34.68" - "fe80::3468" Links: - "container_1" - "container_2" Aliases: - "server_x" - "server_y" required: true responses: 201: description: "Container created successfully" schema: type: "object" title: "ContainerCreateResponse" description: "OK response to ContainerCreate operation" required: [Id, Warnings] properties: Id: description: "The ID of the created container" type: "string" x-nullable: false Warnings: description: "Warnings encountered when creating the container" type: "array" x-nullable: false items: type: "string" 
examples: application/json: Id: "e90e34656806" Warnings: [] 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such image" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such image: c2ada9df5af8" 409: description: "conflict" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Container"] /containers/{id}/json: get: summary: "Inspect a container" description: "Return low-level information about a container." operationId: "ContainerInspect" produces: - "application/json" responses: 200: description: "no error" schema: type: "object" title: "ContainerInspectResponse" properties: Id: description: "The ID of the container" type: "string" Created: description: "The time the container was created" type: "string" Path: description: "The path to the command being run" type: "string" Args: description: "The arguments to the command being run" type: "array" items: type: "string" State: x-nullable: true $ref: "#/definitions/ContainerState" Image: description: "The container's image ID" type: "string" ResolvConfPath: type: "string" HostnamePath: type: "string" HostsPath: type: "string" LogPath: type: "string" Name: type: "string" RestartCount: type: "integer" Driver: type: "string" Platform: type: "string" MountLabel: type: "string" ProcessLabel: type: "string" AppArmorProfile: type: "string" ExecIDs: description: "IDs of exec instances that are running in the container." type: "array" items: type: "string" x-nullable: true HostConfig: $ref: "#/definitions/HostConfig" GraphDriver: $ref: "#/definitions/GraphDriverData" SizeRw: description: | The size of files that have been created or changed by this container. type: "integer" format: "int64" SizeRootFs: description: "The total size of all the files in this container." 
type: "integer" format: "int64" Mounts: type: "array" items: $ref: "#/definitions/MountPoint" Config: $ref: "#/definitions/ContainerConfig" NetworkSettings: $ref: "#/definitions/NetworkSettings" examples: application/json: AppArmorProfile: "" Args: - "-c" - "exit 9" Config: AttachStderr: true AttachStdin: false AttachStdout: true Cmd: - "/bin/sh" - "-c" - "exit 9" Domainname: "" Env: - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" Healthcheck: Test: ["CMD-SHELL", "exit 0"] Hostname: "ba033ac44011" Image: "ubuntu" Labels: com.example.vendor: "Acme" com.example.license: "GPL" com.example.version: "1.0" MacAddress: "" NetworkDisabled: false OpenStdin: false StdinOnce: false Tty: false User: "" Volumes: /volumes/data: {} WorkingDir: "" StopSignal: "SIGTERM" StopTimeout: 10 Created: "2015-01-06T15:47:31.485331387Z" Driver: "devicemapper" ExecIDs: - "b35395de42bc8abd327f9dd65d913b9ba28c74d2f0734eeeae84fa1c616a0fca" - "3fc1232e5cd20c8de182ed81178503dc6437f4e7ef12b52cc5e8de020652f1c4" HostConfig: MaximumIOps: 0 MaximumIOBps: 0 BlkioWeight: 0 BlkioWeightDevice: - {} BlkioDeviceReadBps: - {} BlkioDeviceWriteBps: - {} BlkioDeviceReadIOps: - {} BlkioDeviceWriteIOps: - {} ContainerIDFile: "" CpusetCpus: "" CpusetMems: "" CpuPercent: 80 CpuShares: 0 CpuPeriod: 100000 CpuRealtimePeriod: 1000000 CpuRealtimeRuntime: 10000 Devices: [] DeviceRequests: - Driver: "nvidia" Count: -1 DeviceIDs": ["0", "1", "GPU-fef8089b-4820-abfc-e83e-94318197576e"] Capabilities: [["gpu", "nvidia", "compute"]] Options: property1: "string" property2: "string" IpcMode: "" LxcConf: [] Memory: 0 MemorySwap: 0 MemoryReservation: 0 KernelMemory: 0 OomKillDisable: false OomScoreAdj: 500 NetworkMode: "bridge" PidMode: "" PortBindings: {} Privileged: false ReadonlyRootfs: false PublishAllPorts: false RestartPolicy: MaximumRetryCount: 2 Name: "on-failure" LogConfig: Type: "json-file" Sysctls: net.ipv4.ip_forward: "1" Ulimits: - {} VolumeDriver: "" ShmSize: 67108864 HostnamePath: "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/hostname" HostsPath: "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/hosts" LogPath: "/var/lib/docker/containers/1eb5fabf5a03807136561b3c00adcd2992b535d624d5e18b6cdc6a6844d9767b/1eb5fabf5a03807136561b3c00adcd2992b535d624d5e18b6cdc6a6844d9767b-json.log" Id: "ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39" Image: "04c5d3b7b0656168630d3ba35d8889bd0e9caafcaeb3004d2bfbc47e7c5d35d2" MountLabel: "" Name: "/boring_euclid" NetworkSettings: Bridge: "" SandboxID: "" HairpinMode: false LinkLocalIPv6Address: "" LinkLocalIPv6PrefixLen: 0 SandboxKey: "" EndpointID: "" Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 IPAddress: "" IPPrefixLen: 0 IPv6Gateway: "" MacAddress: "" Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "7587b82f0dada3656fda26588aee72630c6fab1536d36e394b2bfbcf898c971d" Gateway: "172.17.0.1" IPAddress: "172.17.0.2" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:12:00:02" Path: "/bin/sh" ProcessLabel: "" ResolvConfPath: "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/resolv.conf" RestartCount: 1 State: Error: "" ExitCode: 9 FinishedAt: "2015-01-06T15:47:32.080254511Z" Health: Status: "healthy" FailingStreak: 0 Log: - Start: "2019-12-22T10:59:05.6385933Z" End: "2019-12-22T10:59:05.8078452Z" ExitCode: 0 Output: "" OOMKilled: 
                  false
                Dead: false
                Paused: false
                Pid: 0
                Restarting: false
                Running: true
                StartedAt: "2015-01-06T15:47:32.072697474Z"
                Status: "running"
              Mounts:
                - Name: "fac362...80535"
                  Source: "/data"
                  Destination: "/data"
                  Driver: "local"
                  Mode: "ro,Z"
                  RW: false
                  Propagation: ""
        404:
          description: "no such container"
          schema:
            $ref: "#/definitions/ErrorResponse"
          examples:
            application/json:
              message: "No such container: c2ada9df5af8"
        500:
          description: "server error"
          schema:
            $ref: "#/definitions/ErrorResponse"
      parameters:
        - name: "id"
          in: "path"
          required: true
          description: "ID or name of the container"
          type: "string"
        - name: "size"
          in: "query"
          type: "boolean"
          default: false
          description: "Return the size of container as fields `SizeRw` and `SizeRootFs`"
      tags: ["Container"]
  /containers/{id}/top:
    get:
      summary: "List processes running inside a container"
      description: |
        On Unix systems, this is done by running the `ps` command. This endpoint
        is not supported on Windows.
      operationId: "ContainerTop"
      responses:
        200:
          description: "no error"
          schema:
            type: "object"
            title: "ContainerTopResponse"
            description: "OK response to ContainerTop operation"
            properties:
              Titles:
                description: "The ps column titles"
                type: "array"
                items:
                  type: "string"
              Processes:
                description: |
                  Each process running in the container, where each process
                  is an array of values corresponding to the titles.
                type: "array"
                items:
                  type: "array"
                  items:
                    type: "string"
          examples:
            application/json:
              Titles:
                - "UID"
                - "PID"
                - "PPID"
                - "C"
                - "STIME"
                - "TTY"
                - "TIME"
                - "CMD"
              Processes:
                -
                  - "root"
                  - "13642"
                  - "882"
                  - "0"
                  - "17:03"
                  - "pts/0"
                  - "00:00:00"
                  - "/bin/bash"
                -
                  - "root"
                  - "13735"
                  - "13642"
                  - "0"
                  - "17:06"
                  - "pts/0"
                  - "00:00:00"
                  - "sleep 10"
        404:
          description: "no such container"
          schema:
            $ref: "#/definitions/ErrorResponse"
          examples:
            application/json:
              message: "No such container: c2ada9df5af8"
        500:
          description: "server error"
          schema:
            $ref: "#/definitions/ErrorResponse"
      parameters:
        - name: "id"
          in: "path"
          required: true
          description: "ID or name of the container"
          type: "string"
        - name: "ps_args"
          in: "query"
          description: "The arguments to pass to `ps`. For example, `aux`"
          type: "string"
          default: "-ef"
      tags: ["Container"]
  /containers/{id}/logs:
    get:
      summary: "Get container logs"
      description: |
        Get `stdout` and `stderr` logs from a container.

        Note: This endpoint works only for containers with the `json-file` or
        `journald` logging driver.
      operationId: "ContainerLogs"
      responses:
        200:
          description: |
            logs returned as a stream in response body.
            For the stream format, [see the documentation for the attach endpoint](#operation/ContainerAttach).
            Note that unlike the attach endpoint, the logs endpoint does not
            upgrade the connection and does not set Content-Type.
          schema:
            type: "string"
            format: "binary"
        404:
          description: "no such container"
          schema:
            $ref: "#/definitions/ErrorResponse"
          examples:
            application/json:
              message: "No such container: c2ada9df5af8"
        500:
          description: "server error"
          schema:
            $ref: "#/definitions/ErrorResponse"
      parameters:
        - name: "id"
          in: "path"
          required: true
          description: "ID or name of the container"
          type: "string"
        - name: "follow"
          in: "query"
          description: "Keep connection after returning logs."
type: "boolean" default: false - name: "stdout" in: "query" description: "Return logs from `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Return logs from `stderr`" type: "boolean" default: false - name: "since" in: "query" description: "Only return logs since this time, as a UNIX timestamp" type: "integer" default: 0 - name: "until" in: "query" description: "Only return logs before this time, as a UNIX timestamp" type: "integer" default: 0 - name: "timestamps" in: "query" description: "Add timestamps to every log line" type: "boolean" default: false - name: "tail" in: "query" description: | Only return this number of log lines from the end of the logs. Specify as an integer or `all` to output all log lines. type: "string" default: "all" tags: ["Container"] /containers/{id}/changes: get: summary: "Get changes on a container’s filesystem" description: | Returns which files in a container's filesystem have been added, deleted, or modified. The `Kind` of modification can be one of: - `0`: Modified - `1`: Added - `2`: Deleted operationId: "ContainerChanges" produces: ["application/json"] responses: 200: description: "The list of changes" schema: type: "array" items: type: "object" x-go-name: "ContainerChangeResponseItem" title: "ContainerChangeResponseItem" description: "change item in response to ContainerChanges operation" required: [Path, Kind] properties: Path: description: "Path to file that has changed" type: "string" x-nullable: false Kind: description: "Kind of change" type: "integer" format: "uint8" enum: [0, 1, 2] x-nullable: false examples: application/json: - Path: "/dev" Kind: 0 - Path: "/dev/kmsg" Kind: 1 - Path: "/test" Kind: 1 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/export: get: summary: "Export a container" description: "Export the contents of a container as a tarball." operationId: "ContainerExport" produces: - "application/octet-stream" responses: 200: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/stats: get: summary: "Get container stats based on resource usage" description: | This endpoint returns a live stream of a container’s resource usage statistics. The `precpu_stats` is the CPU statistic of the *previous* read, and is used to calculate the CPU usage percentage. It is not an exact copy of the `cpu_stats` field. If either `precpu_stats.online_cpus` or `cpu_stats.online_cpus` is nil then for compatibility with older daemons the length of the corresponding `cpu_usage.percpu_usage` array should be used. On a cgroup v2 host, the following fields are not set * `blkio_stats`: all fields other than `io_service_bytes_recursive` * `cpu_stats`: `cpu_usage.percpu_usage` * `memory_stats`: `max_usage` and `failcnt` Also, `memory_stats.stats` fields are incompatible with cgroup v1. 
        To calculate the values shown by the `stats` command of the docker CLI
        tool, the following formulas can be used:

        * used_memory = `memory_stats.usage - memory_stats.stats.cache`
        * available_memory = `memory_stats.limit`
        * Memory usage % = `(used_memory / available_memory) * 100.0`
        * cpu_delta = `cpu_stats.cpu_usage.total_usage - precpu_stats.cpu_usage.total_usage`
        * system_cpu_delta = `cpu_stats.system_cpu_usage - precpu_stats.system_cpu_usage`
        * number_cpus = `length(cpu_stats.cpu_usage.percpu_usage)` or `cpu_stats.online_cpus`
        * CPU usage % = `(cpu_delta / system_cpu_delta) * number_cpus * 100.0`
      operationId: "ContainerStats"
      produces: ["application/json"]
      responses:
        200:
          description: "no error"
          schema:
            type: "object"
          examples:
            application/json:
              read: "2015-01-08T22:57:31.547920715Z"
              pids_stats:
                current: 3
              networks:
                eth0:
                  rx_bytes: 5338
                  rx_dropped: 0
                  rx_errors: 0
                  rx_packets: 36
                  tx_bytes: 648
                  tx_dropped: 0
                  tx_errors: 0
                  tx_packets: 8
                eth5:
                  rx_bytes: 4641
                  rx_dropped: 0
                  rx_errors: 0
                  rx_packets: 26
                  tx_bytes: 690
                  tx_dropped: 0
                  tx_errors: 0
                  tx_packets: 9
              memory_stats:
                stats:
                  total_pgmajfault: 0
                  cache: 0
                  mapped_file: 0
                  total_inactive_file: 0
                  pgpgout: 414
                  rss: 6537216
                  total_mapped_file: 0
                  writeback: 0
                  unevictable: 0
                  pgpgin: 477
                  total_unevictable: 0
                  pgmajfault: 0
                  total_rss: 6537216
                  total_rss_huge: 6291456
                  total_writeback: 0
                  total_inactive_anon: 0
                  rss_huge: 6291456
                  hierarchical_memory_limit: 67108864
                  total_pgfault: 964
                  total_active_file: 0
                  active_anon: 6537216
                  total_active_anon: 6537216
                  total_pgpgout: 414
                  total_cache: 0
                  inactive_anon: 0
                  active_file: 0
                  pgfault: 964
                  inactive_file: 0
                  total_pgpgin: 477
                max_usage: 6651904
                usage: 6537216
                failcnt: 0
                limit: 67108864
              blkio_stats: {}
              cpu_stats:
                cpu_usage:
                  percpu_usage:
                    - 8646879
                    - 24472255
                    - 36438778
                    - 30657443
                  usage_in_usermode: 50000000
                  total_usage: 100215355
                  usage_in_kernelmode: 30000000
                system_cpu_usage: 739306590000000
                online_cpus: 4
                throttling_data:
                  periods: 0
                  throttled_periods: 0
                  throttled_time: 0
              precpu_stats:
                cpu_usage:
                  percpu_usage:
                    - 8646879
                    - 24350896
                    - 36438778
                    - 30657443
                  usage_in_usermode: 50000000
                  total_usage: 100093996
                  usage_in_kernelmode: 30000000
                system_cpu_usage: 9492140000000
                online_cpus: 4
                throttling_data:
                  periods: 0
                  throttled_periods: 0
                  throttled_time: 0
        404:
          description: "no such container"
          schema:
            $ref: "#/definitions/ErrorResponse"
          examples:
            application/json:
              message: "No such container: c2ada9df5af8"
        500:
          description: "server error"
          schema:
            $ref: "#/definitions/ErrorResponse"
      parameters:
        - name: "id"
          in: "path"
          required: true
          description: "ID or name of the container"
          type: "string"
        - name: "stream"
          in: "query"
          description: |
            Stream the output. If false, the stats will be output once and then
            it will disconnect.
          type: "boolean"
          default: true
        - name: "one-shot"
          in: "query"
          description: |
            Only get a single stat instead of waiting for 2 cycles. Must be used
            with `stream=false`.
          type: "boolean"
          default: false
      tags: ["Container"]
  /containers/{id}/resize:
    post:
      summary: "Resize a container TTY"
      description: "Resize the TTY for a container."
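# The stats formulas documented above map directly onto a decoded stats
# response. A minimal Go sketch follows (kept as a comment; it is not part of
# the API definition, and the struct shape is a hand-written subset of the
# stats JSON example, not an official client type — decode the response body
# into it with encoding/json):
#
#   type cpuUsage struct {
#       TotalUsage  uint64   `json:"total_usage"`
#       PercpuUsage []uint64 `json:"percpu_usage"`
#   }
#   type cpuStats struct {
#       CPUUsage       cpuUsage `json:"cpu_usage"`
#       SystemCPUUsage uint64   `json:"system_cpu_usage"`
#       OnlineCPUs     uint64   `json:"online_cpus"`
#   }
#   type memoryStats struct {
#       Usage uint64            `json:"usage"`
#       Limit uint64            `json:"limit"`
#       Stats map[string]uint64 `json:"stats"`
#   }
#   type statsJSON struct {
#       CPUStats    cpuStats    `json:"cpu_stats"`
#       PreCPUStats cpuStats    `json:"precpu_stats"`
#       MemoryStats memoryStats `json:"memory_stats"`
#   }
#
#   // cpuPercent applies the CPU formula from the description above.
#   func cpuPercent(s statsJSON) float64 {
#       cpuDelta := float64(s.CPUStats.CPUUsage.TotalUsage - s.PreCPUStats.CPUUsage.TotalUsage)
#       systemDelta := float64(s.CPUStats.SystemCPUUsage - s.PreCPUStats.SystemCPUUsage)
#       numCPUs := float64(s.CPUStats.OnlineCPUs)
#       if numCPUs == 0 {
#           // Older daemons: fall back to the length of percpu_usage.
#           numCPUs = float64(len(s.CPUStats.CPUUsage.PercpuUsage))
#       }
#       if systemDelta == 0 {
#           return 0
#       }
#       return (cpuDelta / systemDelta) * numCPUs * 100.0
#   }
#
#   // memoryPercent applies the memory formula (cgroup v1 field names; the
#   // "cache" entry may be absent on cgroup v2 hosts).
#   func memoryPercent(s statsJSON) float64 {
#       if s.MemoryStats.Limit == 0 {
#           return 0
#       }
#       used := float64(s.MemoryStats.Usage - s.MemoryStats.Stats["cache"])
#       return used / float64(s.MemoryStats.Limit) * 100.0
#   }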
operationId: "ContainerResize" consumes: - "application/octet-stream" produces: - "text/plain" responses: 200: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "cannot resize container" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "h" in: "query" description: "Height of the TTY session in characters" type: "integer" - name: "w" in: "query" description: "Width of the TTY session in characters" type: "integer" tags: ["Container"] /containers/{id}/start: post: summary: "Start a container" operationId: "ContainerStart" responses: 204: description: "no error" 304: description: "container already started" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "detachKeys" in: "query" description: | Override the key sequence for detaching a container. Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,` or `_`. type: "string" tags: ["Container"] /containers/{id}/stop: post: summary: "Stop a container" operationId: "ContainerStop" responses: 204: description: "no error" 304: description: "container already stopped" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "t" in: "query" description: "Number of seconds to wait before killing the container" type: "integer" tags: ["Container"] /containers/{id}/restart: post: summary: "Restart a container" operationId: "ContainerRestart" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "t" in: "query" description: "Number of seconds to wait before killing the container" type: "integer" tags: ["Container"] /containers/{id}/kill: post: summary: "Kill a container" description: | Send a POSIX signal to a container, defaulting to killing to the container. 
operationId: "ContainerKill" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "container is not running" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "Container d37cde0fe4ad63c3a7252023b2f9800282894247d145cb5933ddf6e52cc03a28 is not running" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "signal" in: "query" description: "Signal to send to the container as an integer or string (e.g. `SIGINT`)" type: "string" default: "SIGKILL" tags: ["Container"] /containers/{id}/update: post: summary: "Update a container" description: | Change various configuration options of a container without having to recreate it. operationId: "ContainerUpdate" consumes: ["application/json"] produces: ["application/json"] responses: 200: description: "The container has been updated." schema: type: "object" title: "ContainerUpdateResponse" description: "OK response to ContainerUpdate operation" properties: Warnings: type: "array" items: type: "string" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "update" in: "body" required: true schema: allOf: - $ref: "#/definitions/Resources" - type: "object" properties: RestartPolicy: $ref: "#/definitions/RestartPolicy" example: BlkioWeight: 300 CpuShares: 512 CpuPeriod: 100000 CpuQuota: 50000 CpuRealtimePeriod: 1000000 CpuRealtimeRuntime: 10000 CpusetCpus: "0,1" CpusetMems: "0" Memory: 314572800 MemorySwap: 514288000 MemoryReservation: 209715200 KernelMemory: 52428800 RestartPolicy: MaximumRetryCount: 4 Name: "on-failure" tags: ["Container"] /containers/{id}/rename: post: summary: "Rename a container" operationId: "ContainerRename" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "name already in use" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "name" in: "query" required: true description: "New name for the container" type: "string" tags: ["Container"] /containers/{id}/pause: post: summary: "Pause a container" description: | Use the freezer cgroup to suspend all processes in a container. Traditionally, when suspending a process the `SIGSTOP` signal is used, which is observable by the process being suspended. With the freezer cgroup the process is unaware, and unable to capture, that it is being suspended, and subsequently resumed. 
operationId: "ContainerPause" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/unpause: post: summary: "Unpause a container" description: "Resume a container which has been paused." operationId: "ContainerUnpause" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/attach: post: summary: "Attach to a container" description: | Attach to a container to read its output or send it input. You can attach to the same container multiple times and you can reattach to containers that have been detached. Either the `stream` or `logs` parameter must be `true` for this endpoint to do anything. See the [documentation for the `docker attach` command](https://docs.docker.com/engine/reference/commandline/attach/) for more details. ### Hijacking This endpoint hijacks the HTTP connection to transport `stdin`, `stdout`, and `stderr` on the same socket. This is the response from the daemon for an attach request: ``` HTTP/1.1 200 OK Content-Type: application/vnd.docker.raw-stream [STREAM] ``` After the headers and two new lines, the TCP connection can now be used for raw, bidirectional communication between the client and server. To hint potential proxies about connection hijacking, the Docker client can also optionally send connection upgrade headers. For example, the client sends this request to upgrade the connection: ``` POST /containers/16253994b7c4/attach?stream=1&stdout=1 HTTP/1.1 Upgrade: tcp Connection: Upgrade ``` The Docker daemon will respond with a `101 UPGRADED` response, and will similarly follow with the raw stream: ``` HTTP/1.1 101 UPGRADED Content-Type: application/vnd.docker.raw-stream Connection: Upgrade Upgrade: tcp [STREAM] ``` ### Stream format When the TTY setting is disabled in [`POST /containers/create`](#operation/ContainerCreate), the stream over the hijacked connected is multiplexed to separate out `stdout` and `stderr`. The stream consists of a series of frames, each containing a header and a payload. The header contains the information which the stream writes (`stdout` or `stderr`). It also contains the size of the associated frame encoded in the last four bytes (`uint32`). It is encoded on the first eight bytes like this: ```go header := [8]byte{STREAM_TYPE, 0, 0, 0, SIZE1, SIZE2, SIZE3, SIZE4} ``` `STREAM_TYPE` can be: - 0: `stdin` (is written on `stdout`) - 1: `stdout` - 2: `stderr` `SIZE1, SIZE2, SIZE3, SIZE4` are the four bytes of the `uint32` size encoded as big endian. Following the header is the payload, which is the specified number of bytes of `STREAM_TYPE`. The simplest way to implement this protocol is the following: 1. Read 8 bytes. 2. Choose `stdout` or `stderr` depending on the first byte. 3. Extract the frame size from the last four bytes. 4. Read the extracted size and output it on the correct output. 5. Goto 1. 
        ### Stream format when using a TTY

        When the TTY setting is enabled in [`POST /containers/create`](#operation/ContainerCreate),
        the stream is not multiplexed. The data exchanged over the hijacked
        connection is simply the raw data from the process PTY and client's
        `stdin`.
      operationId: "ContainerAttach"
      produces:
        - "application/vnd.docker.raw-stream"
      responses:
        101:
          description: "no error, hints proxy about hijacking"
        200:
          description: "no error, no upgrade header found"
        400:
          description: "bad parameter"
          schema:
            $ref: "#/definitions/ErrorResponse"
        404:
          description: "no such container"
          schema:
            $ref: "#/definitions/ErrorResponse"
          examples:
            application/json:
              message: "No such container: c2ada9df5af8"
        500:
          description: "server error"
          schema:
            $ref: "#/definitions/ErrorResponse"
      parameters:
        - name: "id"
          in: "path"
          required: true
          description: "ID or name of the container"
          type: "string"
        - name: "detachKeys"
          in: "query"
          description: |
            Override the key sequence for detaching a container. Format is a
            single character `[a-Z]` or `ctrl-<value>` where `<value>` is one
            of: `a-z`, `@`, `^`, `[`, `,` or `_`.
          type: "string"
        - name: "logs"
          in: "query"
          description: |
            Replay previous logs from the container.

            This is useful for attaching to a container that has started and you
            want to output everything since the container started.

            If `stream` is also enabled, once all the previous output has been
            returned, it will seamlessly transition into streaming current
            output.
          type: "boolean"
          default: false
        - name: "stream"
          in: "query"
          description: |
            Stream attached streams from the time the request was made onwards.
          type: "boolean"
          default: false
        - name: "stdin"
          in: "query"
          description: "Attach to `stdin`"
          type: "boolean"
          default: false
        - name: "stdout"
          in: "query"
          description: "Attach to `stdout`"
          type: "boolean"
          default: false
        - name: "stderr"
          in: "query"
          description: "Attach to `stderr`"
          type: "boolean"
          default: false
      tags: ["Container"]
  /containers/{id}/attach/ws:
    get:
      summary: "Attach to a container via a websocket"
      operationId: "ContainerAttachWebsocket"
      responses:
        101:
          description: "no error, hints proxy about hijacking"
        200:
          description: "no error, no upgrade header found"
        400:
          description: "bad parameter"
          schema:
            $ref: "#/definitions/ErrorResponse"
        404:
          description: "no such container"
          schema:
            $ref: "#/definitions/ErrorResponse"
          examples:
            application/json:
              message: "No such container: c2ada9df5af8"
        500:
          description: "server error"
          schema:
            $ref: "#/definitions/ErrorResponse"
      parameters:
        - name: "id"
          in: "path"
          required: true
          description: "ID or name of the container"
          type: "string"
        - name: "detachKeys"
          in: "query"
          description: |
            Override the key sequence for detaching a container. Format is a
            single character `[a-Z]` or `ctrl-<value>` where `<value>` is one
            of: `a-z`, `@`, `^`, `[`, `,`, or `_`.
          type: "string"
        - name: "logs"
          in: "query"
          description: "Return logs"
          type: "boolean"
          default: false
        - name: "stream"
          in: "query"
          description: "Return stream"
          type: "boolean"
          default: false
        - name: "stdin"
          in: "query"
          description: "Attach to `stdin`"
          type: "boolean"
          default: false
        - name: "stdout"
          in: "query"
          description: "Attach to `stdout`"
          type: "boolean"
          default: false
        - name: "stderr"
          in: "query"
          description: "Attach to `stderr`"
          type: "boolean"
          default: false
      tags: ["Container"]
  /containers/{id}/wait:
    post:
      summary: "Wait for a container"
      description: "Block until a container stops, then returns the exit code."
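# The frame demultiplexing loop described in the attach "Stream format" section
# above can be implemented with the standard library alone. A minimal Go sketch
# (kept as a comment; not part of the API definition; `conn` is assumed to be
# the hijacked, non-TTY stream):
#
#   // imports: "encoding/binary", "io"
#   func demux(conn io.Reader, stdout, stderr io.Writer) error {
#       hdr := make([]byte, 8)
#       for {
#           // 1. Read the 8-byte header.
#           if _, err := io.ReadFull(conn, hdr); err != nil {
#               if err == io.EOF {
#                   return nil
#               }
#               return err
#           }
#           // 2. Choose the destination from the first byte (0 and 1 go to
#           //    stdout, 2 goes to stderr).
#           dst := stdout
#           if hdr[0] == 2 {
#               dst = stderr
#           }
#           // 3. Extract the big-endian uint32 frame size from the last four bytes.
#           size := binary.BigEndian.Uint32(hdr[4:8])
#           // 4. Copy exactly that many payload bytes to the chosen output.
#           if _, err := io.CopyN(dst, conn, int64(size)); err != nil {
#               return err
#           }
#           // 5. Goto 1.
#       }
#   }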
operationId: "ContainerWait" produces: ["application/json"] responses: 200: description: "The container has exit." schema: type: "object" title: "ContainerWaitResponse" description: "OK response to ContainerWait operation" required: [StatusCode] properties: StatusCode: description: "Exit code of the container" type: "integer" x-nullable: false Error: description: "container waiting error, if any" type: "object" properties: Message: description: "Details of an error" type: "string" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "condition" in: "query" description: | Wait until a container state reaches the given condition, either 'not-running' (default), 'next-exit', or 'removed'. type: "string" default: "not-running" tags: ["Container"] /containers/{id}: delete: summary: "Remove a container" operationId: "ContainerDelete" responses: 204: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "conflict" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: | You cannot remove a running container: c2ada9df5af8. Stop the container before attempting removal or force remove 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "v" in: "query" description: "Remove anonymous volumes associated with the container." type: "boolean" default: false - name: "force" in: "query" description: "If the container is running, kill it before removing it." type: "boolean" default: false - name: "link" in: "query" description: "Remove the specified link associated with the container." type: "boolean" default: false tags: ["Container"] /containers/{id}/archive: head: summary: "Get information about files in a container" description: | A response header `X-Docker-Container-Path-Stat` is returned, containing a base64 - encoded JSON object with some filesystem header information about the path. operationId: "ContainerArchiveInfo" responses: 200: description: "no error" headers: X-Docker-Container-Path-Stat: type: "string" description: | A base64 - encoded JSON object with some filesystem header information about the path 400: description: "Bad parameter" schema: allOf: - $ref: "#/definitions/ErrorResponse" - type: "object" properties: message: description: | The error message. Either "must specify path parameter" (path cannot be empty) or "not a directory" (path was asserted to be a directory but exists as a file). type: "string" x-nullable: false 404: description: "Container or path does not exist" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "path" in: "query" required: true description: "Resource in the container’s filesystem to archive." 
type: "string" tags: ["Container"] get: summary: "Get an archive of a filesystem resource in a container" description: "Get a tar archive of a resource in the filesystem of container id." operationId: "ContainerArchive" produces: ["application/x-tar"] responses: 200: description: "no error" 400: description: "Bad parameter" schema: allOf: - $ref: "#/definitions/ErrorResponse" - type: "object" properties: message: description: | The error message. Either "must specify path parameter" (path cannot be empty) or "not a directory" (path was asserted to be a directory but exists as a file). type: "string" x-nullable: false 404: description: "Container or path does not exist" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "path" in: "query" required: true description: "Resource in the container’s filesystem to archive." type: "string" tags: ["Container"] put: summary: "Extract an archive of files or folders to a directory in a container" description: "Upload a tar archive to be extracted to a path in the filesystem of container id." operationId: "PutContainerArchive" consumes: ["application/x-tar", "application/octet-stream"] responses: 200: description: "The content was extracted successfully" 400: description: "Bad parameter" schema: $ref: "#/definitions/ErrorResponse" 403: description: "Permission denied, the volume or container rootfs is marked as read-only." schema: $ref: "#/definitions/ErrorResponse" 404: description: "No such container or path does not exist inside the container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "path" in: "query" required: true description: "Path to a directory in the container to extract the archive’s contents into. " type: "string" - name: "noOverwriteDirNonDir" in: "query" description: | If `1`, `true`, or `True` then it will be an error if unpacking the given content would cause an existing directory to be replaced with a non-directory and vice versa. type: "string" - name: "copyUIDGID" in: "query" description: | If `1`, `true`, then it will copy UID/GID maps to the dest file or dir type: "string" - name: "inputStream" in: "body" required: true description: | The input stream must be a tar archive compressed with one of the following algorithms: `identity` (no compression), `gzip`, `bzip2`, or `xz`. schema: type: "string" format: "binary" tags: ["Container"] /containers/prune: post: summary: "Delete stopped containers" produces: - "application/json" operationId: "ContainerPrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `until=<timestamp>` Prune containers created before this timestamp. The `<timestamp>` can be Unix timestamps, date formatted timestamps, or Go duration strings (e.g. `10m`, `1h30m`) computed relative to the daemon machine’s time. - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune containers with (or without, in case `label!=...` is used) the specified labels. 
type: "string" responses: 200: description: "No error" schema: type: "object" title: "ContainerPruneResponse" properties: ContainersDeleted: description: "Container IDs that were deleted" type: "array" items: type: "string" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Container"] /images/json: get: summary: "List Images" description: "Returns a list of images on the server. Note that it uses a different, smaller representation of an image than inspecting a single image." operationId: "ImageList" produces: - "application/json" responses: 200: description: "Summary image data for the images matching the query" schema: type: "array" items: $ref: "#/definitions/ImageSummary" examples: application/json: - Id: "sha256:e216a057b1cb1efc11f8a268f37ef62083e70b1b38323ba252e25ac88904a7e8" ParentId: "" RepoTags: - "ubuntu:12.04" - "ubuntu:precise" RepoDigests: - "ubuntu@sha256:992069aee4016783df6345315302fa59681aae51a8eeb2f889dea59290f21787" Created: 1474925151 Size: 103579269 VirtualSize: 103579269 SharedSize: 0 Labels: {} Containers: 2 - Id: "sha256:3e314f95dcace0f5e4fd37b10862fe8398e3c60ed36600bc0ca5fda78b087175" ParentId: "" RepoTags: - "ubuntu:12.10" - "ubuntu:quantal" RepoDigests: - "ubuntu@sha256:002fba3e3255af10be97ea26e476692a7ebed0bb074a9ab960b2e7a1526b15d7" - "ubuntu@sha256:68ea0200f0b90df725d99d823905b04cf844f6039ef60c60bf3e019915017bd3" Created: 1403128455 Size: 172064416 VirtualSize: 172064416 SharedSize: 0 Labels: {} Containers: 5 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "all" in: "query" description: "Show all images. Only images from a final layer (no children) are shown by default." type: "boolean" default: false - name: "filters" in: "query" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the images list. Available filters: - `before`=(`<image-name>[:<tag>]`, `<image id>` or `<image@digest>`) - `dangling=true` - `label=key` or `label="key=value"` of an image label - `reference`=(`<image-name>[:<tag>]`) - `since`=(`<image-name>[:<tag>]`, `<image id>` or `<image@digest>`) type: "string" - name: "shared-size" in: "query" description: "Compute and show shared size as a `SharedSize` field on each image." type: "boolean" default: false - name: "digests" in: "query" description: "Show digest information as a `RepoDigests` field on each image." type: "boolean" default: false tags: ["Image"] /build: post: summary: "Build an image" description: | Build an image from a tar archive with a `Dockerfile` in it. The `Dockerfile` specifies how the image is built from the tar archive. It is typically in the archive's root, but can be at a different path or have a different name by specifying the `dockerfile` parameter. [See the `Dockerfile` reference for more information](https://docs.docker.com/engine/reference/builder/). The Docker daemon performs a preliminary validation of the `Dockerfile` before starting the build, and returns an error if the syntax is incorrect. After that, each instruction is run one-by-one until the ID of the new image is output. The build is canceled if the client drops the connection by quitting or being killed. 
operationId: "ImageBuild" consumes: - "application/octet-stream" produces: - "application/json" parameters: - name: "inputStream" in: "body" description: "A tar archive compressed with one of the following algorithms: identity (no compression), gzip, bzip2, xz." schema: type: "string" format: "binary" - name: "dockerfile" in: "query" description: "Path within the build context to the `Dockerfile`. This is ignored if `remote` is specified and points to an external `Dockerfile`." type: "string" default: "Dockerfile" - name: "t" in: "query" description: "A name and optional tag to apply to the image in the `name:tag` format. If you omit the tag the default `latest` value is assumed. You can provide several `t` parameters." type: "string" - name: "extrahosts" in: "query" description: "Extra hosts to add to /etc/hosts" type: "string" - name: "remote" in: "query" description: "A Git repository URI or HTTP/HTTPS context URI. If the URI points to a single text file, the file’s contents are placed into a file called `Dockerfile` and the image is built from that file. If the URI points to a tarball, the file is downloaded by the daemon and the contents therein used as the context for the build. If the URI points to a tarball and the `dockerfile` parameter is also specified, there must be a file with the corresponding path inside the tarball." type: "string" - name: "q" in: "query" description: "Suppress verbose build output." type: "boolean" default: false - name: "nocache" in: "query" description: "Do not use the cache when building the image." type: "boolean" default: false - name: "cachefrom" in: "query" description: "JSON array of images used for build cache resolution." type: "string" - name: "pull" in: "query" description: "Attempt to pull the image even if an older image exists locally." type: "string" - name: "rm" in: "query" description: "Remove intermediate containers after a successful build." type: "boolean" default: true - name: "forcerm" in: "query" description: "Always remove intermediate containers, even upon failure." type: "boolean" default: false - name: "memory" in: "query" description: "Set memory limit for build." type: "integer" - name: "memswap" in: "query" description: "Total memory (memory + swap). Set as `-1` to disable swap." type: "integer" - name: "cpushares" in: "query" description: "CPU shares (relative weight)." type: "integer" - name: "cpusetcpus" in: "query" description: "CPUs in which to allow execution (e.g., `0-3`, `0,1`)." type: "string" - name: "cpuperiod" in: "query" description: "The length of a CPU period in microseconds." type: "integer" - name: "cpuquota" in: "query" description: "Microseconds of CPU time that the container can get in a CPU period." type: "integer" - name: "buildargs" in: "query" description: > JSON map of string pairs for build-time variables. Users pass these values at build-time. Docker uses the buildargs as the environment context for commands run via the `Dockerfile` RUN instruction, or for variable expansion in other `Dockerfile` instructions. This is not meant for passing secret values. For example, the build arg `FOO=bar` would become `{"FOO":"bar"}` in JSON. This would result in the query parameter `buildargs={"FOO":"bar"}`. Note that `{"FOO":"bar"}` should be URI component encoded. [Read more about the buildargs instruction.](https://docs.docker.com/engine/reference/builder/#arg) type: "string" - name: "shmsize" in: "query" description: "Size of `/dev/shm` in bytes. The size must be greater than 0. 
            If omitted the system uses 64MB."
          type: "integer"
        - name: "squash"
          in: "query"
          description: "Squash the resulting image's layers into a single layer. *(Experimental release only.)*"
          type: "boolean"
        - name: "labels"
          in: "query"
          description: "Arbitrary key/value labels to set on the image, as a JSON map of string pairs."
          type: "string"
        - name: "networkmode"
          in: "query"
          description: |
            Sets the networking mode for the run commands during build. Supported
            standard values are: `bridge`, `host`, `none`, and `container:<name|id>`.
            Any other value is taken as a custom network's name or ID to which
            this container should connect.
          type: "string"
        - name: "Content-type"
          in: "header"
          type: "string"
          enum:
            - "application/x-tar"
          default: "application/x-tar"
        - name: "X-Registry-Config"
          in: "header"
          description: |
            This is a base64-encoded JSON object with auth configurations for
            multiple registries that a build may refer to.

            The key is a registry URL, and the value is an auth configuration
            object, [as described in the authentication section](#section/Authentication).
            For example:

            ```
            {
              "docker.example.com": {
                "username": "janedoe",
                "password": "hunter2"
              },
              "https://index.docker.io/v1/": {
                "username": "mobydock",
                "password": "conta1n3rize14"
              }
            }
            ```

            Only the registry domain name (and port if not the default 443) is
            required. However, for legacy reasons, the Docker Hub registry must
            be specified with both a `https://` prefix and a `/v1/` suffix even
            though Docker will prefer to use the v2 registry API.
          type: "string"
        - name: "platform"
          in: "query"
          description: "Platform in the format os[/arch[/variant]]"
          type: "string"
          default: ""
        - name: "target"
          in: "query"
          description: "Target build stage"
          type: "string"
          default: ""
        - name: "outputs"
          in: "query"
          description: "BuildKit output configuration"
          type: "string"
          default: ""
      responses:
        200:
          description: "no error"
        400:
          description: "Bad parameter"
          schema:
            $ref: "#/definitions/ErrorResponse"
        500:
          description: "server error"
          schema:
            $ref: "#/definitions/ErrorResponse"
      tags: ["Image"]
  /build/prune:
    post:
      summary: "Delete builder cache"
      produces:
        - "application/json"
      operationId: "BuildPrune"
      parameters:
        - name: "keep-storage"
          in: "query"
          description: "Amount of disk space in bytes to keep for cache"
          type: "integer"
          format: "int64"
        - name: "all"
          in: "query"
          type: "boolean"
          description: "Remove all types of build cache"
        - name: "filters"
          in: "query"
          type: "string"
          description: |
            A JSON encoded value of the filters (a `map[string][]string`) to
            process on the list of build cache objects.

            Available filters:

            - `until=<duration>`: duration relative to daemon's time, during
              which build cache was not used, in Go's duration format (e.g., '24h')
            - `id=<id>`
            - `parent=<id>`
            - `type=<string>`
            - `description=<string>`
            - `inuse`
            - `shared`
            - `private`
      responses:
        200:
          description: "No error"
          schema:
            type: "object"
            title: "BuildPruneResponse"
            properties:
              CachesDeleted:
                type: "array"
                items:
                  description: "ID of build cache object"
                  type: "string"
              SpaceReclaimed:
                description: "Disk space reclaimed in bytes"
                type: "integer"
                format: "int64"
        500:
          description: "Server error"
          schema:
            $ref: "#/definitions/ErrorResponse"
      tags: ["Image"]
  /images/create:
    post:
      summary: "Create an image"
      description: "Create an image by either pulling it from a registry or importing it."
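# The `X-Registry-Config` header described above (and the per-registry
# `X-Registry-Auth` header used by the image endpoints) carries base64-encoded
# JSON. A minimal Go sketch of building such a value (kept as a comment; not
# part of the API definition; the JSON keys follow the example in the
# description, and the choice of URL-safe base64 is an assumption):
#
#   // imports: "encoding/base64", "encoding/json"
#   creds := map[string]map[string]string{
#       "docker.example.com": {"username": "janedoe", "password": "hunter2"},
#   }
#   buf, err := json.Marshal(creds)
#   if err != nil {
#       panic(err)
#   }
#   headerValue := base64.URLEncoding.EncodeToString(buf)
#   _ = headerValue // set this as the X-Registry-Config request header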
operationId: "ImageCreate" consumes: - "text/plain" - "application/octet-stream" produces: - "application/json" responses: 200: description: "no error" 404: description: "repository does not exist or no read access" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "fromImage" in: "query" description: "Name of the image to pull. The name may include a tag or digest. This parameter may only be used when pulling an image. The pull is cancelled if the HTTP connection is closed." type: "string" - name: "fromSrc" in: "query" description: "Source to import. The value may be a URL from which the image can be retrieved or `-` to read the image from the request body. This parameter may only be used when importing an image." type: "string" - name: "repo" in: "query" description: "Repository name given to an image when it is imported. The repo may include a tag. This parameter may only be used when importing an image." type: "string" - name: "tag" in: "query" description: "Tag or digest. If empty when pulling an image, this causes all tags for the given image to be pulled." type: "string" - name: "message" in: "query" description: "Set commit message for imported image." type: "string" - name: "inputImage" in: "body" description: "Image content if the value `-` has been specified in fromSrc query parameter" schema: type: "string" required: false - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration. Refer to the [authentication section](#section/Authentication) for details. type: "string" - name: "changes" in: "query" description: | Apply `Dockerfile` instructions to the image that is created, for example: `changes=ENV DEBUG=true`. Note that `ENV DEBUG=true` should be URI component encoded. Supported `Dockerfile` instructions: `CMD`|`ENTRYPOINT`|`ENV`|`EXPOSE`|`ONBUILD`|`USER`|`VOLUME`|`WORKDIR` type: "array" items: type: "string" - name: "platform" in: "query" description: "Platform in the format os[/arch[/variant]]" type: "string" default: "" tags: ["Image"] /images/{name}/json: get: summary: "Inspect an image" description: "Return low-level information about an image." 
operationId: "ImageInspect" produces: - "application/json" responses: 200: description: "No error" schema: $ref: "#/definitions/Image" examples: application/json: Id: "sha256:85f05633ddc1c50679be2b16a0479ab6f7637f8884e0cfe0f4d20e1ebb3d6e7c" Container: "cb91e48a60d01f1e27028b4fc6819f4f290b3cf12496c8176ec714d0d390984a" Comment: "" Os: "linux" Architecture: "amd64" Parent: "sha256:91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c" ContainerConfig: Tty: false Hostname: "e611e15f9c9d" Domainname: "" AttachStdout: false PublishService: "" AttachStdin: false OpenStdin: false StdinOnce: false NetworkDisabled: false OnBuild: [] Image: "91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c" User: "" WorkingDir: "" MacAddress: "" AttachStderr: false Labels: com.example.license: "GPL" com.example.version: "1.0" com.example.vendor: "Acme" Env: - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" Cmd: - "/bin/sh" - "-c" - "#(nop) LABEL com.example.vendor=Acme com.example.license=GPL com.example.version=1.0" DockerVersion: "1.9.0-dev" VirtualSize: 188359297 Size: 0 Author: "" Created: "2015-09-10T08:30:53.26995814Z" GraphDriver: Name: "aufs" Data: {} RepoDigests: - "localhost:5000/test/busybox/example@sha256:cbbf2f9a99b47fc460d422812b6a5adff7dfee951d8fa2e4a98caa0382cfbdbf" RepoTags: - "example:1.0" - "example:latest" - "example:stable" Config: Image: "91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c" NetworkDisabled: false OnBuild: [] StdinOnce: false PublishService: "" AttachStdin: false OpenStdin: false Domainname: "" AttachStdout: false Tty: false Hostname: "e611e15f9c9d" Cmd: - "/bin/bash" Env: - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" Labels: com.example.vendor: "Acme" com.example.version: "1.0" com.example.license: "GPL" MacAddress: "" AttachStderr: false WorkingDir: "" User: "" RootFS: Type: "layers" Layers: - "sha256:1834950e52ce4d5a88a1bbd131c537f4d0e56d10ff0dd69e66be3b7dfa9df7e6" - "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such image: someimage (tag: latest)" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or id" type: "string" required: true tags: ["Image"] /images/{name}/history: get: summary: "Get the history of an image" description: "Return parent layers of an image." 
operationId: "ImageHistory" produces: ["application/json"] responses: 200: description: "List of image layers" schema: type: "array" items: type: "object" x-go-name: HistoryResponseItem title: "HistoryResponseItem" description: "individual image layer information in response to ImageHistory operation" required: [Id, Created, CreatedBy, Tags, Size, Comment] properties: Id: type: "string" x-nullable: false Created: type: "integer" format: "int64" x-nullable: false CreatedBy: type: "string" x-nullable: false Tags: type: "array" items: type: "string" Size: type: "integer" format: "int64" x-nullable: false Comment: type: "string" x-nullable: false examples: application/json: - Id: "3db9c44f45209632d6050b35958829c3a2aa256d81b9a7be45b362ff85c54710" Created: 1398108230 CreatedBy: "/bin/sh -c #(nop) ADD file:eb15dbd63394e063b805a3c32ca7bf0266ef64676d5a6fab4801f2e81e2a5148 in /" Tags: - "ubuntu:lucid" - "ubuntu:10.04" Size: 182964289 Comment: "" - Id: "6cfa4d1f33fb861d4d114f43b25abd0ac737509268065cdfd69d544a59c85ab8" Created: 1398108222 CreatedBy: "/bin/sh -c #(nop) MAINTAINER Tianon Gravi <[email protected]> - mkimage-debootstrap.sh -i iproute,iputils-ping,ubuntu-minimal -t lucid.tar.xz lucid http://archive.ubuntu.com/ubuntu/" Tags: [] Size: 0 Comment: "" - Id: "511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158" Created: 1371157430 CreatedBy: "" Tags: - "scratch12:latest" - "scratch:latest" Size: 0 Comment: "Imported from -" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID" type: "string" required: true tags: ["Image"] /images/{name}/push: post: summary: "Push an image" description: | Push an image to a registry. If you wish to push an image on to a private registry, that image must already have a tag which references the registry. For example, `registry.example.com/myimage:latest`. The push is cancelled if the HTTP connection is closed. operationId: "ImagePush" consumes: - "application/octet-stream" responses: 200: description: "No error" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID." type: "string" required: true - name: "tag" in: "query" description: "The tag to associate with the image on the registry." type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration. Refer to the [authentication section](#section/Authentication) for details. type: "string" required: true tags: ["Image"] /images/{name}/tag: post: summary: "Tag an image" description: "Tag an image so that it becomes part of a repository." operationId: "ImageTag" responses: 201: description: "No error" 400: description: "Bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Conflict" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID to tag." type: "string" required: true - name: "repo" in: "query" description: "The repository to tag in. For example, `someuser/someimage`." type: "string" - name: "tag" in: "query" description: "The name of the new tag." 
type: "string" tags: ["Image"] /images/{name}: delete: summary: "Remove an image" description: | Remove an image, along with any untagged parent images that were referenced by that image. Images can't be removed if they have descendant images, are being used by a running container or are being used by a build. operationId: "ImageDelete" produces: ["application/json"] responses: 200: description: "The image was deleted successfully" schema: type: "array" items: $ref: "#/definitions/ImageDeleteResponseItem" examples: application/json: - Untagged: "3e2f21a89f" - Deleted: "3e2f21a89f" - Deleted: "53b4f83ac9" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Conflict" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID" type: "string" required: true - name: "force" in: "query" description: "Remove the image even if it is being used by stopped containers or has other tags" type: "boolean" default: false - name: "noprune" in: "query" description: "Do not delete untagged parent images" type: "boolean" default: false tags: ["Image"] /images/search: get: summary: "Search images" description: "Search for an image on Docker Hub." operationId: "ImageSearch" produces: - "application/json" responses: 200: description: "No error" schema: type: "array" items: type: "object" title: "ImageSearchResponseItem" properties: description: type: "string" is_official: type: "boolean" is_automated: type: "boolean" name: type: "string" star_count: type: "integer" examples: application/json: - description: "" is_official: false is_automated: false name: "wma55/u1210sshd" star_count: 0 - description: "" is_official: false is_automated: false name: "jdswinbank/sshd" star_count: 0 - description: "" is_official: false is_automated: false name: "vgauthier/sshd" star_count: 0 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "term" in: "query" description: "Term to search" type: "string" required: true - name: "limit" in: "query" description: "Maximum number of results to return" type: "integer" - name: "filters" in: "query" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the images list. Available filters: - `is-automated=(true|false)` - `is-official=(true|false)` - `stars=<number>` Matches images that has at least 'number' stars. type: "string" tags: ["Image"] /images/prune: post: summary: "Delete unused images" produces: - "application/json" operationId: "ImagePrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `dangling=<boolean>` When set to `true` (or `1`), prune only unused *and* untagged images. When set to `false` (or `0`), all unused images are pruned. - `until=<string>` Prune images created before this timestamp. The `<timestamp>` can be Unix timestamps, date formatted timestamps, or Go duration strings (e.g. `10m`, `1h30m`) computed relative to the daemon machine’s time. - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune images with (or without, in case `label!=...` is used) the specified labels. 
type: "string" responses: 200: description: "No error" schema: type: "object" title: "ImagePruneResponse" properties: ImagesDeleted: description: "Images that were deleted" type: "array" items: $ref: "#/definitions/ImageDeleteResponseItem" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Image"] /auth: post: summary: "Check auth configuration" description: | Validate credentials for a registry and, if available, get an identity token for accessing the registry without password. operationId: "SystemAuth" consumes: ["application/json"] produces: ["application/json"] responses: 200: description: "An identity token was generated successfully." schema: type: "object" title: "SystemAuthResponse" required: [Status] properties: Status: description: "The status of the authentication" type: "string" x-nullable: false IdentityToken: description: "An opaque token used to authenticate a user after a successful login" type: "string" x-nullable: false examples: application/json: Status: "Login Succeeded" IdentityToken: "9cbaf023786cd7..." 204: description: "No error" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "authConfig" in: "body" description: "Authentication to check" schema: $ref: "#/definitions/AuthConfig" tags: ["System"] /info: get: summary: "Get system information" operationId: "SystemInfo" produces: - "application/json" responses: 200: description: "No error" schema: $ref: "#/definitions/SystemInfo" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /version: get: summary: "Get version" description: "Returns the version of Docker that is running and various information about the system that Docker is running on." operationId: "SystemVersion" produces: ["application/json"] responses: 200: description: "no error" schema: $ref: "#/definitions/SystemVersion" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /_ping: get: summary: "Ping" description: "This is a dummy endpoint you can use to test if the server is accessible." operationId: "SystemPing" produces: ["text/plain"] responses: 200: description: "no error" schema: type: "string" example: "OK" headers: API-Version: type: "string" description: "Max API Version the server supports" Builder-Version: type: "string" description: "Default version of docker image builder" Docker-Experimental: type: "boolean" description: "If the server is running with experimental mode enabled" Cache-Control: type: "string" default: "no-cache, no-store, must-revalidate" Pragma: type: "string" default: "no-cache" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" headers: Cache-Control: type: "string" default: "no-cache, no-store, must-revalidate" Pragma: type: "string" default: "no-cache" tags: ["System"] head: summary: "Ping" description: "This is a dummy endpoint you can use to test if the server is accessible." 
operationId: "SystemPingHead" produces: ["text/plain"] responses: 200: description: "no error" schema: type: "string" example: "(empty)" headers: API-Version: type: "string" description: "Max API Version the server supports" Builder-Version: type: "string" description: "Default version of docker image builder" Docker-Experimental: type: "boolean" description: "If the server is running with experimental mode enabled" Cache-Control: type: "string" default: "no-cache, no-store, must-revalidate" Pragma: type: "string" default: "no-cache" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /commit: post: summary: "Create a new image from a container" operationId: "ImageCommit" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "containerConfig" in: "body" description: "The container configuration" schema: $ref: "#/definitions/ContainerConfig" - name: "container" in: "query" description: "The ID or name of the container to commit" type: "string" - name: "repo" in: "query" description: "Repository name for the created image" type: "string" - name: "tag" in: "query" description: "Tag name for the create image" type: "string" - name: "comment" in: "query" description: "Commit message" type: "string" - name: "author" in: "query" description: "Author of the image (e.g., `John Hannibal Smith <[email protected]>`)" type: "string" - name: "pause" in: "query" description: "Whether to pause the container before committing" type: "boolean" default: true - name: "changes" in: "query" description: "`Dockerfile` instructions to apply while committing" type: "string" tags: ["Image"] /events: get: summary: "Monitor events" description: | Stream real-time events from the server. Various objects within Docker report events when something happens to them. 
Containers report these events: `attach`, `commit`, `copy`, `create`, `destroy`, `detach`, `die`, `exec_create`, `exec_detach`, `exec_start`, `exec_die`, `export`, `health_status`, `kill`, `oom`, `pause`, `rename`, `resize`, `restart`, `start`, `stop`, `top`, `unpause`, `update`, and `prune` Images report these events: `delete`, `import`, `load`, `pull`, `push`, `save`, `tag`, `untag`, and `prune` Volumes report these events: `create`, `mount`, `unmount`, `destroy`, and `prune` Networks report these events: `create`, `connect`, `disconnect`, `destroy`, `update`, `remove`, and `prune` The Docker daemon reports these events: `reload` Services report these events: `create`, `update`, and `remove` Nodes report these events: `create`, `update`, and `remove` Secrets report these events: `create`, `update`, and `remove` Configs report these events: `create`, `update`, and `remove` The Builder reports `prune` events operationId: "SystemEvents" produces: - "application/json" responses: 200: description: "no error" schema: type: "object" title: "SystemEventsResponse" properties: Type: description: "The type of object emitting the event" type: "string" Action: description: "The type of event" type: "string" Actor: type: "object" properties: ID: description: "The ID of the object emitting the event" type: "string" Attributes: description: "Various key/value attributes of the object, depending on its type" type: "object" additionalProperties: type: "string" time: description: "Timestamp of event" type: "integer" timeNano: description: "Timestamp of event, with nanosecond accuracy" type: "integer" format: "int64" examples: application/json: Type: "container" Action: "create" Actor: ID: "ede54ee1afda366ab42f824e8a5ffd195155d853ceaec74a927f249ea270c743" Attributes: com.example.some-label: "some-label-value" image: "alpine" name: "my-container" time: 1461943101 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "since" in: "query" description: "Show events created since this timestamp then stream new events." type: "string" - name: "until" in: "query" description: "Show events created until this timestamp then stop streaming." type: "string" - name: "filters" in: "query" description: | A JSON encoded value of filters (a `map[string][]string`) to process on the event list. 
Available filters: - `config=<string>` config name or ID - `container=<string>` container name or ID - `daemon=<string>` daemon name or ID - `event=<string>` event type - `image=<string>` image name or ID - `label=<string>` image or container label - `network=<string>` network name or ID - `node=<string>` node ID - `plugin`=<string> plugin name or ID - `scope`=<string> local or swarm - `secret=<string>` secret name or ID - `service=<string>` service name or ID - `type=<string>` object to filter by, one of `container`, `image`, `volume`, `network`, `daemon`, `plugin`, `node`, `service`, `secret` or `config` - `volume=<string>` volume name type: "string" tags: ["System"] /system/df: get: summary: "Get data usage information" operationId: "SystemDataUsage" responses: 200: description: "no error" schema: type: "object" title: "SystemDataUsageResponse" properties: LayersSize: type: "integer" format: "int64" Images: type: "array" items: $ref: "#/definitions/ImageSummary" Containers: type: "array" items: $ref: "#/definitions/ContainerSummary" Volumes: type: "array" items: $ref: "#/definitions/Volume" BuildCache: type: "array" items: $ref: "#/definitions/BuildCache" example: LayersSize: 1092588 Images: - Id: "sha256:2b8fd9751c4c0f5dd266fcae00707e67a2545ef34f9a29354585f93dac906749" ParentId: "" RepoTags: - "busybox:latest" RepoDigests: - "busybox@sha256:a59906e33509d14c036c8678d687bd4eec81ed7c4b8ce907b888c607f6a1e0e6" Created: 1466724217 Size: 1092588 SharedSize: 0 VirtualSize: 1092588 Labels: {} Containers: 1 Containers: - Id: "e575172ed11dc01bfce087fb27bee502db149e1a0fad7c296ad300bbff178148" Names: - "/top" Image: "busybox" ImageID: "sha256:2b8fd9751c4c0f5dd266fcae00707e67a2545ef34f9a29354585f93dac906749" Command: "top" Created: 1472592424 Ports: [] SizeRootFs: 1092588 Labels: {} State: "exited" Status: "Exited (0) 56 minutes ago" HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: IPAMConfig: null Links: null Aliases: null NetworkID: "d687bc59335f0e5c9ee8193e5612e8aee000c8c62ea170cfb99c098f95899d92" EndpointID: "8ed5115aeaad9abb174f68dcf135b49f11daf597678315231a32ca28441dec6a" Gateway: "172.18.0.1" IPAddress: "172.18.0.2" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:12:00:02" Mounts: [] Volumes: - Name: "my-volume" Driver: "local" Mountpoint: "/var/lib/docker/volumes/my-volume/_data" Labels: null Scope: "local" Options: null UsageData: Size: 10920104 RefCount: 2 BuildCache: - ID: "hw53o5aio51xtltp5xjp8v7fx" Parent: "" Type: "regular" Description: "pulled from docker.io/library/debian@sha256:234cb88d3020898631af0ccbbcca9a66ae7306ecd30c9720690858c1b007d2a0" InUse: false Shared: true Size: 0 CreatedAt: "2021-06-28T13:31:01.474619385Z" LastUsedAt: "2021-07-07T22:02:32.738075951Z" UsageCount: 26 - ID: "ndlpt0hhvkqcdfkputsk4cq9c" Parent: "hw53o5aio51xtltp5xjp8v7fx" Type: "regular" Description: "mount / from exec /bin/sh -c echo 'Binary::apt::APT::Keep-Downloaded-Packages \"true\";' > /etc/apt/apt.conf.d/keep-cache" InUse: false Shared: true Size: 51 CreatedAt: "2021-06-28T13:31:03.002625487Z" LastUsedAt: "2021-07-07T22:02:32.773909517Z" UsageCount: 26 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "type" in: "query" description: | Object types, for which to compute and return data. 
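      # Example (illustrative sketch): requesting usage data for volumes and
      # images only, repeating the `type` parameter as described above. Assumes
      # a daemon version that supports this parameter and the default Unix socket.
      #
      #   curl --unix-socket /var/run/docker.sock \
      #     "http://localhost/system/df?type=volume&type=image"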
type: "array" collectionFormat: multi items: type: "string" enum: ["container", "image", "volume", "build-cache"] tags: ["System"] /images/{name}/get: get: summary: "Export an image" description: | Get a tarball containing all images and metadata for a repository. If `name` is a specific name and tag (e.g. `ubuntu:latest`), then only that image (and its parents) are returned. If `name` is an image ID, similarly only that image (and its parents) are returned, but with the exclusion of the `repositories` file in the tarball, as there were no image names referenced. ### Image tarball format An image tarball contains one directory per image layer (named using its long ID), each containing these files: - `VERSION`: currently `1.0` - the file format version - `json`: detailed layer information, similar to `docker inspect layer_id` - `layer.tar`: A tarfile containing the filesystem changes in this layer The `layer.tar` file contains `aufs` style `.wh..wh.aufs` files and directories for storing attribute changes and deletions. If the tarball defines a repository, the tarball should also include a `repositories` file at the root that contains a list of repository and tag names mapped to layer IDs. ```json { "hello-world": { "latest": "565a9d68a73f6706862bfe8409a7f659776d4d60a8d096eb4a3cbce6999cc2a1" } } ``` operationId: "ImageGet" produces: - "application/x-tar" responses: 200: description: "no error" schema: type: "string" format: "binary" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID" type: "string" required: true tags: ["Image"] /images/get: get: summary: "Export several images" description: | Get a tarball containing all images and metadata for several image repositories. For each value of the `names` parameter: if it is a specific name and tag (e.g. `ubuntu:latest`), then only that image (and its parents) are returned; if it is an image ID, similarly only that image (and its parents) are returned and there would be no names referenced in the 'repositories' file for this image ID. For details on the format, see the [export image endpoint](#operation/ImageGet). operationId: "ImageGetAll" produces: - "application/x-tar" responses: 200: description: "no error" schema: type: "string" format: "binary" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "names" in: "query" description: "Image names to filter by" type: "array" items: type: "string" tags: ["Image"] /images/load: post: summary: "Import images" description: | Load a set of images and tags into a repository. For details on the format, see the [export image endpoint](#operation/ImageGet). operationId: "ImageLoad" consumes: - "application/x-tar" produces: - "application/json" responses: 200: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "imagesTarball" in: "body" description: "Tar archive containing images" schema: type: "string" format: "binary" - name: "quiet" in: "query" description: "Suppress progress details during load." type: "boolean" default: false tags: ["Image"] /containers/{id}/exec: post: summary: "Create an exec instance" description: "Run a command inside a running container." 
operationId: "ContainerExec" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "container is paused" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "execConfig" in: "body" description: "Exec configuration" schema: type: "object" title: "ExecConfig" properties: AttachStdin: type: "boolean" description: "Attach to `stdin` of the exec command." AttachStdout: type: "boolean" description: "Attach to `stdout` of the exec command." AttachStderr: type: "boolean" description: "Attach to `stderr` of the exec command." DetachKeys: type: "string" description: | Override the key sequence for detaching a container. Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,` or `_`. Tty: type: "boolean" description: "Allocate a pseudo-TTY." Env: description: | A list of environment variables in the form `["VAR=value", ...]`. type: "array" items: type: "string" Cmd: type: "array" description: "Command to run, as a string or array of strings." items: type: "string" Privileged: type: "boolean" description: "Runs the exec process with extended privileges." default: false User: type: "string" description: | The user, and optionally, group to run the exec process inside the container. Format is one of: `user`, `user:group`, `uid`, or `uid:gid`. WorkingDir: type: "string" description: | The working directory for the exec process inside the container. example: AttachStdin: false AttachStdout: true AttachStderr: true DetachKeys: "ctrl-p,ctrl-q" Tty: false Cmd: - "date" Env: - "FOO=bar" - "BAZ=quux" required: true - name: "id" in: "path" description: "ID or name of container" type: "string" required: true tags: ["Exec"] /exec/{id}/start: post: summary: "Start an exec instance" description: | Starts a previously set up exec instance. If detach is true, this endpoint returns immediately after starting the command. Otherwise, it sets up an interactive session with the command. operationId: "ExecStart" consumes: - "application/json" produces: - "application/vnd.docker.raw-stream" responses: 200: description: "No error" 404: description: "No such exec instance" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Container is stopped or paused" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "execStartConfig" in: "body" schema: type: "object" title: "ExecStartConfig" properties: Detach: type: "boolean" description: "Detach from the command." Tty: type: "boolean" description: "Allocate a pseudo-TTY." example: Detach: false Tty: false - name: "id" in: "path" description: "Exec instance ID" required: true type: "string" tags: ["Exec"] /exec/{id}/resize: post: summary: "Resize an exec instance" description: | Resize the TTY session used by an exec instance. This endpoint only works if `tty` was specified as part of creating and starting the exec instance. 
operationId: "ExecResize" responses: 201: description: "No error" 404: description: "No such exec instance" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Exec instance ID" required: true type: "string" - name: "h" in: "query" description: "Height of the TTY session in characters" type: "integer" - name: "w" in: "query" description: "Width of the TTY session in characters" type: "integer" tags: ["Exec"] /exec/{id}/json: get: summary: "Inspect an exec instance" description: "Return low-level information about an exec instance." operationId: "ExecInspect" produces: - "application/json" responses: 200: description: "No error" schema: type: "object" title: "ExecInspectResponse" properties: CanRemove: type: "boolean" DetachKeys: type: "string" ID: type: "string" Running: type: "boolean" ExitCode: type: "integer" ProcessConfig: $ref: "#/definitions/ProcessConfig" OpenStdin: type: "boolean" OpenStderr: type: "boolean" OpenStdout: type: "boolean" ContainerID: type: "string" Pid: type: "integer" description: "The system process ID for the exec process." examples: application/json: CanRemove: false ContainerID: "b53ee82b53a40c7dca428523e34f741f3abc51d9f297a14ff874bf761b995126" DetachKeys: "" ExitCode: 2 ID: "f33bbfb39f5b142420f4759b2348913bd4a8d1a6d7fd56499cb41a1bb91d7b3b" OpenStderr: true OpenStdin: true OpenStdout: true ProcessConfig: arguments: - "-c" - "exit 2" entrypoint: "sh" privileged: false tty: true user: "1000" Running: false Pid: 42000 404: description: "No such exec instance" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Exec instance ID" required: true type: "string" tags: ["Exec"] /volumes: get: summary: "List volumes" operationId: "VolumeList" produces: ["application/json"] responses: 200: description: "Summary volume data that matches the query" schema: type: "object" title: "VolumeListResponse" description: "Volume list response" required: [Volumes, Warnings] properties: Volumes: type: "array" x-nullable: false description: "List of volumes" items: $ref: "#/definitions/Volume" Warnings: type: "array" x-nullable: false description: | Warnings that occurred when fetching the list of volumes. items: type: "string" examples: application/json: Volumes: - CreatedAt: "2017-07-19T12:00:26Z" Name: "tardis" Driver: "local" Mountpoint: "/var/lib/docker/volumes/tardis" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Scope: "local" Options: device: "tmpfs" o: "size=100m,uid=1000" type: "tmpfs" Warnings: [] 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" description: | JSON encoded value of the filters (a `map[string][]string`) to process on the volumes list. Available filters: - `dangling=<boolean>` When set to `true` (or `1`), returns all volumes that are not in use by a container. When set to `false` (or `0`), only volumes that are in use by one or more containers are returned. - `driver=<volume-driver-name>` Matches volumes based on their driver. - `label=<key>` or `label=<key>:<value>` Matches volumes based on the presence of a `label` alone or a `label` and a value. - `name=<volume-name>` Matches all or part of a volume name. 
type: "string" format: "json" tags: ["Volume"] /volumes/create: post: summary: "Create a volume" operationId: "VolumeCreate" consumes: ["application/json"] produces: ["application/json"] responses: 201: description: "The volume was created successfully" schema: $ref: "#/definitions/Volume" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "volumeConfig" in: "body" required: true description: "Volume configuration" schema: type: "object" description: "Volume configuration" title: "VolumeConfig" properties: Name: description: | The new volume's name. If not specified, Docker generates a name. type: "string" x-nullable: false Driver: description: "Name of the volume driver to use." type: "string" default: "local" x-nullable: false DriverOpts: description: | A mapping of driver options and values. These options are passed directly to the driver and are driver specific. type: "object" additionalProperties: type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" example: Name: "tardis" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Driver: "custom" tags: ["Volume"] /volumes/{name}: get: summary: "Inspect a volume" operationId: "VolumeInspect" produces: ["application/json"] responses: 200: description: "No error" schema: $ref: "#/definitions/Volume" 404: description: "No such volume" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" required: true description: "Volume name or ID" type: "string" tags: ["Volume"] delete: summary: "Remove a volume" description: "Instruct the driver to remove the volume." operationId: "VolumeDelete" responses: 204: description: "The volume was removed" 404: description: "No such volume or volume driver" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Volume is in use and cannot be removed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" required: true description: "Volume name or ID" type: "string" - name: "force" in: "query" description: "Force the removal of the volume" type: "boolean" default: false tags: ["Volume"] /volumes/prune: post: summary: "Delete unused volumes" produces: - "application/json" operationId: "VolumePrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune volumes with (or without, in case `label!=...` is used) the specified labels. type: "string" responses: 200: description: "No error" schema: type: "object" title: "VolumePruneResponse" properties: VolumesDeleted: description: "Volumes that were deleted" type: "array" items: type: "string" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Volume"] /networks: get: summary: "List networks" description: | Returns a list of networks. For details on the format, see the [network inspect endpoint](#operation/NetworkInspect). Note that it uses a different, smaller representation of a network than inspecting a single network. 
For example, the list of containers attached to the network is not propagated in API versions 1.28 and up. operationId: "NetworkList" produces: - "application/json" responses: 200: description: "No error" schema: type: "array" items: $ref: "#/definitions/Network" examples: application/json: - Name: "bridge" Id: "f2de39df4171b0dc801e8002d1d999b77256983dfc63041c0f34030aa3977566" Created: "2016-10-19T06:21:00.416543526Z" Scope: "local" Driver: "bridge" EnableIPv6: false Internal: false Attachable: false Ingress: false IPAM: Driver: "default" Config: - Subnet: "172.17.0.0/16" Options: com.docker.network.bridge.default_bridge: "true" com.docker.network.bridge.enable_icc: "true" com.docker.network.bridge.enable_ip_masquerade: "true" com.docker.network.bridge.host_binding_ipv4: "0.0.0.0" com.docker.network.bridge.name: "docker0" com.docker.network.driver.mtu: "1500" - Name: "none" Id: "e086a3893b05ab69242d3c44e49483a3bbbd3a26b46baa8f61ab797c1088d794" Created: "0001-01-01T00:00:00Z" Scope: "local" Driver: "null" EnableIPv6: false Internal: false Attachable: false Ingress: false IPAM: Driver: "default" Config: [] Containers: {} Options: {} - Name: "host" Id: "13e871235c677f196c4e1ecebb9dc733b9b2d2ab589e30c539efeda84a24215e" Created: "0001-01-01T00:00:00Z" Scope: "local" Driver: "host" EnableIPv6: false Internal: false Attachable: false Ingress: false IPAM: Driver: "default" Config: [] Containers: {} Options: {} 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" description: | JSON encoded value of the filters (a `map[string][]string`) to process on the networks list. Available filters: - `dangling=<boolean>` When set to `true` (or `1`), returns all networks that are not in use by a container. When set to `false` (or `0`), only networks that are in use by one or more containers are returned. - `driver=<driver-name>` Matches a network's driver. - `id=<network-id>` Matches all or part of a network ID. - `label=<key>` or `label=<key>=<value>` of a network label. - `name=<network-name>` Matches all or part of a network name. - `scope=["swarm"|"global"|"local"]` Filters networks by scope (`swarm`, `global`, or `local`). - `type=["custom"|"builtin"]` Filters networks by type. The `custom` keyword returns all user-defined networks. 
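      # Example (illustrative sketch): listing only user-defined networks by
      # passing the URL-encoded filters map {"type":["custom"]}, assuming the
      # default Unix socket.
      #
      #   curl --unix-socket /var/run/docker.sock \
      #     "http://localhost/networks?filters=%7B%22type%22%3A%5B%22custom%22%5D%7D"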
type: "string" tags: ["Network"] /networks/{id}: get: summary: "Inspect a network" operationId: "NetworkInspect" produces: - "application/json" responses: 200: description: "No error" schema: $ref: "#/definitions/Network" 404: description: "Network not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" - name: "verbose" in: "query" description: "Detailed inspect output for troubleshooting" type: "boolean" default: false - name: "scope" in: "query" description: "Filter the network by scope (swarm, global, or local)" type: "string" tags: ["Network"] delete: summary: "Remove a network" operationId: "NetworkDelete" responses: 204: description: "No error" 403: description: "operation not supported for pre-defined networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such network" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" tags: ["Network"] /networks/create: post: summary: "Create a network" operationId: "NetworkCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "No error" schema: type: "object" title: "NetworkCreateResponse" properties: Id: description: "The ID of the created network." type: "string" Warning: type: "string" example: Id: "22be93d5babb089c5aab8dbc369042fad48ff791584ca2da2100db837a1c7c30" Warning: "" 403: description: "operation not supported for pre-defined networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "plugin not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "networkConfig" in: "body" description: "Network configuration" required: true schema: type: "object" title: "NetworkCreateRequest" required: ["Name"] properties: Name: description: "The network's name." type: "string" CheckDuplicate: description: | Check for networks with duplicate names. Since Network is primarily keyed based on a random ID and not on the name, and network name is strictly a user-friendly alias to the network which is uniquely identified using ID, there is no guaranteed way to check for duplicates. CheckDuplicate is there to provide a best effort checking of any networks which has the same name but it is not guaranteed to catch all name collisions. type: "boolean" Driver: description: "Name of the network driver plugin to use." type: "string" default: "bridge" Internal: description: "Restrict external access to the network." type: "boolean" Attachable: description: | Globally scoped network is manually attachable by regular containers from workers in swarm mode. type: "boolean" Ingress: description: | Ingress network is the network which provides the routing-mesh in swarm mode. type: "boolean" IPAM: description: "Optional custom IP scheme for the network." $ref: "#/definitions/IPAM" EnableIPv6: description: "Enable IPv6 on the network." type: "boolean" Options: description: "Network specific options to be used by the drivers." type: "object" additionalProperties: type: "string" Labels: description: "User-defined key/value metadata." 
type: "object" additionalProperties: type: "string" example: Name: "isolated_nw" CheckDuplicate: false Driver: "bridge" EnableIPv6: true IPAM: Driver: "default" Config: - Subnet: "172.20.0.0/16" IPRange: "172.20.10.0/24" Gateway: "172.20.10.11" - Subnet: "2001:db8:abcd::/64" Gateway: "2001:db8:abcd::1011" Options: foo: "bar" Internal: true Attachable: false Ingress: false Options: com.docker.network.bridge.default_bridge: "true" com.docker.network.bridge.enable_icc: "true" com.docker.network.bridge.enable_ip_masquerade: "true" com.docker.network.bridge.host_binding_ipv4: "0.0.0.0" com.docker.network.bridge.name: "docker0" com.docker.network.driver.mtu: "1500" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" tags: ["Network"] /networks/{id}/connect: post: summary: "Connect a container to a network" operationId: "NetworkConnect" consumes: - "application/json" responses: 200: description: "No error" 403: description: "Operation not supported for swarm scoped networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "Network or container not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" - name: "container" in: "body" required: true schema: type: "object" title: "NetworkConnectRequest" properties: Container: type: "string" description: "The ID or name of the container to connect to the network." EndpointConfig: $ref: "#/definitions/EndpointSettings" example: Container: "3613f73ba0e4" EndpointConfig: IPAMConfig: IPv4Address: "172.24.56.89" IPv6Address: "2001:db8::5689" tags: ["Network"] /networks/{id}/disconnect: post: summary: "Disconnect a container from a network" operationId: "NetworkDisconnect" consumes: - "application/json" responses: 200: description: "No error" 403: description: "Operation not supported for swarm scoped networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "Network or container not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" - name: "container" in: "body" required: true schema: type: "object" title: "NetworkDisconnectRequest" properties: Container: type: "string" description: | The ID or name of the container to disconnect from the network. Force: type: "boolean" description: | Force the container to disconnect from the network. tags: ["Network"] /networks/prune: post: summary: "Delete unused networks" produces: - "application/json" operationId: "NetworkPrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `until=<timestamp>` Prune networks created before this timestamp. The `<timestamp>` can be Unix timestamps, date formatted timestamps, or Go duration strings (e.g. `10m`, `1h30m`) computed relative to the daemon machine’s time. - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune networks with (or without, in case `label!=...` is used) the specified labels. 
type: "string" responses: 200: description: "No error" schema: type: "object" title: "NetworkPruneResponse" properties: NetworksDeleted: description: "Networks that were deleted" type: "array" items: type: "string" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Network"] /plugins: get: summary: "List plugins" operationId: "PluginList" description: "Returns information about installed plugins." produces: ["application/json"] responses: 200: description: "No error" schema: type: "array" items: $ref: "#/definitions/Plugin" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the plugin list. Available filters: - `capability=<capability name>` - `enable=<true>|<false>` tags: ["Plugin"] /plugins/privileges: get: summary: "Get plugin privileges" operationId: "GetPluginPrivileges" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/PluginPrivilegeItem" example: - Name: "network" Description: "" Value: - "host" - Name: "mount" Description: "" Value: - "/data" - Name: "device" Description: "" Value: - "/dev/cpu_dma_latency" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "remote" in: "query" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" tags: - "Plugin" /plugins/pull: post: summary: "Install a plugin" operationId: "PluginPull" description: | Pulls and installs a plugin. After the plugin is installed, it can be enabled using the [`POST /plugins/{name}/enable` endpoint](#operation/PostPluginsEnable). produces: - "application/json" responses: 204: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "remote" in: "query" description: | Remote reference for plugin to install. The `:latest` tag is optional, and is used as the default if omitted. required: true type: "string" - name: "name" in: "query" description: | Local name for the pulled plugin. The `:latest` tag is optional, and is used as the default if omitted. required: false type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration to use when pulling a plugin from a registry. Refer to the [authentication section](#section/Authentication) for details. type: "string" - name: "body" in: "body" schema: type: "array" items: $ref: "#/definitions/PluginPrivilegeItem" example: - Name: "network" Description: "" Value: - "host" - Name: "mount" Description: "" Value: - "/data" - Name: "device" Description: "" Value: - "/dev/cpu_dma_latency" tags: ["Plugin"] /plugins/{name}/json: get: summary: "Inspect a plugin" operationId: "PluginInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Plugin" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. 
required: true type: "string" tags: ["Plugin"] /plugins/{name}: delete: summary: "Remove a plugin" operationId: "PluginDelete" responses: 200: description: "no error" schema: $ref: "#/definitions/Plugin" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "force" in: "query" description: | Disable the plugin before removing. This may result in issues if the plugin is in use by a container. type: "boolean" default: false tags: ["Plugin"] /plugins/{name}/enable: post: summary: "Enable a plugin" operationId: "PluginEnable" responses: 200: description: "no error" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "timeout" in: "query" description: "Set the HTTP client timeout (in seconds)" type: "integer" default: 0 tags: ["Plugin"] /plugins/{name}/disable: post: summary: "Disable a plugin" operationId: "PluginDisable" responses: 200: description: "no error" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" tags: ["Plugin"] /plugins/{name}/upgrade: post: summary: "Upgrade a plugin" operationId: "PluginUpgrade" responses: 204: description: "no error" 404: description: "plugin not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "remote" in: "query" description: | Remote reference to upgrade to. The `:latest` tag is optional, and is used as the default if omitted. required: true type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration to use when pulling a plugin from a registry. Refer to the [authentication section](#section/Authentication) for details. type: "string" - name: "body" in: "body" schema: type: "array" items: $ref: "#/definitions/PluginPrivilegeItem" example: - Name: "network" Description: "" Value: - "host" - Name: "mount" Description: "" Value: - "/data" - Name: "device" Description: "" Value: - "/dev/cpu_dma_latency" tags: ["Plugin"] /plugins/create: post: summary: "Create a plugin" operationId: "PluginCreate" consumes: - "application/x-tar" responses: 204: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "query" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. 
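      # Example (illustrative sketch): enabling an installed plugin with curl,
      # assuming the default Unix socket. The plugin name is illustrative.
      #
      #   curl --unix-socket /var/run/docker.sock \
      #     -X POST "http://localhost/plugins/sample-plugin:latest/enable?timeout=0"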
required: true type: "string" - name: "tarContext" in: "body" description: "Path to tar containing plugin rootfs and manifest" schema: type: "string" format: "binary" tags: ["Plugin"] /plugins/{name}/push: post: summary: "Push a plugin" operationId: "PluginPush" description: | Push a plugin to the registry. parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" responses: 200: description: "no error" 404: description: "plugin not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Plugin"] /plugins/{name}/set: post: summary: "Configure a plugin" operationId: "PluginSet" consumes: - "application/json" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "body" in: "body" schema: type: "array" items: type: "string" example: ["DEBUG=1"] responses: 204: description: "No error" 404: description: "Plugin not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Plugin"] /nodes: get: summary: "List nodes" operationId: "NodeList" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Node" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" description: | Filters to process on the nodes list, encoded as JSON (a `map[string][]string`). Available filters: - `id=<node id>` - `label=<engine label>` - `membership=`(`accepted`|`pending`)` - `name=<node name>` - `node.label=<node label>` - `role=`(`manager`|`worker`)` type: "string" tags: ["Node"] /nodes/{id}: get: summary: "Inspect a node" operationId: "NodeInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Node" 404: description: "no such node" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the node" type: "string" required: true tags: ["Node"] delete: summary: "Delete a node" operationId: "NodeDelete" responses: 200: description: "no error" 404: description: "no such node" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the node" type: "string" required: true - name: "force" in: "query" description: "Force remove a node from the swarm" default: false type: "boolean" tags: ["Node"] /nodes/{id}/update: post: summary: "Update a node" operationId: "NodeUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such node" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID of 
the node" type: "string" required: true - name: "body" in: "body" schema: $ref: "#/definitions/NodeSpec" - name: "version" in: "query" description: | The version number of the node object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true tags: ["Node"] /swarm: get: summary: "Inspect swarm" operationId: "SwarmInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Swarm" 404: description: "no such swarm" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" tags: ["Swarm"] /swarm/init: post: summary: "Initialize a new swarm" operationId: "SwarmInit" produces: - "application/json" - "text/plain" responses: 200: description: "no error" schema: description: "The node ID" type: "string" example: "7v2t30z9blmxuhnyo6s4cpenp" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is already part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: type: "object" title: "SwarmInitRequest" properties: ListenAddr: description: | Listen address used for inter-manager communication, as well as determining the networking interface used for the VXLAN Tunnel Endpoint (VTEP). This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the default swarm listening port is used. type: "string" AdvertiseAddr: description: | Externally reachable address advertised to other nodes. This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the port number from the listen address is used. If `AdvertiseAddr` is not specified, it will be automatically detected when possible. type: "string" DataPathAddr: description: | Address or interface to use for data path traffic (format: `<ip|interface>`), for example, `192.168.1.1`, or an interface, like `eth0`. If `DataPathAddr` is unspecified, the same address as `AdvertiseAddr` is used. The `DataPathAddr` specifies the address that global scope network drivers will publish towards other nodes in order to reach the containers running on this node. Using this parameter it is possible to separate the container data traffic from the management traffic of the cluster. type: "string" DataPathPort: description: | DataPathPort specifies the data path port number for data traffic. Acceptable port range is 1024 to 49151. if no port is set or is set to 0, default port 4789 will be used. type: "integer" format: "uint32" DefaultAddrPool: description: | Default Address Pool specifies default subnet pools for global scope networks. type: "array" items: type: "string" example: ["10.10.0.0/16", "20.20.0.0/16"] ForceNewCluster: description: "Force creation of a new swarm." type: "boolean" SubnetSize: description: | SubnetSize specifies the subnet size of the networks created from the default subnet pool. 
type: "integer" format: "uint32" Spec: $ref: "#/definitions/SwarmSpec" example: ListenAddr: "0.0.0.0:2377" AdvertiseAddr: "192.168.1.1:2377" DataPathPort: 4789 DefaultAddrPool: ["10.10.0.0/8", "20.20.0.0/8"] SubnetSize: 24 ForceNewCluster: false Spec: Orchestration: {} Raft: {} Dispatcher: {} CAConfig: {} EncryptionConfig: AutoLockManagers: false tags: ["Swarm"] /swarm/join: post: summary: "Join an existing swarm" operationId: "SwarmJoin" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is already part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: type: "object" title: "SwarmJoinRequest" properties: ListenAddr: description: | Listen address used for inter-manager communication if the node gets promoted to manager, as well as determining the networking interface used for the VXLAN Tunnel Endpoint (VTEP). type: "string" AdvertiseAddr: description: | Externally reachable address advertised to other nodes. This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the port number from the listen address is used. If `AdvertiseAddr` is not specified, it will be automatically detected when possible. type: "string" DataPathAddr: description: | Address or interface to use for data path traffic (format: `<ip|interface>`), for example, `192.168.1.1`, or an interface, like `eth0`. If `DataPathAddr` is unspecified, the same address as `AdvertiseAddr` is used. The `DataPathAddr` specifies the address that global scope network drivers will publish towards other nodes in order to reach the containers running on this node. Using this parameter it is possible to separate the container data traffic from the management traffic of the cluster. type: "string" RemoteAddrs: description: | Addresses of manager nodes already participating in the swarm. type: "array" items: type: "string" JoinToken: description: "Secret token for joining this swarm." type: "string" example: ListenAddr: "0.0.0.0:2377" AdvertiseAddr: "192.168.1.1:2377" RemoteAddrs: - "node1:2377" JoinToken: "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-7p73s1dx5in4tatdymyhg9hu2" tags: ["Swarm"] /swarm/leave: post: summary: "Leave a swarm" operationId: "SwarmLeave" responses: 200: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "force" description: | Force leave swarm, even if this is the last manager or that it will break the cluster. in: "query" type: "boolean" default: false tags: ["Swarm"] /swarm/update: post: summary: "Update a swarm" operationId: "SwarmUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: $ref: "#/definitions/SwarmSpec" - name: "version" in: "query" description: | The version number of the swarm object being updated. This is required to avoid conflicting writes. 
type: "integer" format: "int64" required: true - name: "rotateWorkerToken" in: "query" description: "Rotate the worker join token." type: "boolean" default: false - name: "rotateManagerToken" in: "query" description: "Rotate the manager join token." type: "boolean" default: false - name: "rotateManagerUnlockKey" in: "query" description: "Rotate the manager unlock key." type: "boolean" default: false tags: ["Swarm"] /swarm/unlockkey: get: summary: "Get the unlock key" operationId: "SwarmUnlockkey" consumes: - "application/json" responses: 200: description: "no error" schema: type: "object" title: "UnlockKeyResponse" properties: UnlockKey: description: "The swarm's unlock key." type: "string" example: UnlockKey: "SWMKEY-1-7c37Cc8654o6p38HnroywCi19pllOnGtbdZEgtKxZu8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" tags: ["Swarm"] /swarm/unlock: post: summary: "Unlock a locked manager" operationId: "SwarmUnlock" consumes: - "application/json" produces: - "application/json" parameters: - name: "body" in: "body" required: true schema: type: "object" title: "SwarmUnlockRequest" properties: UnlockKey: description: "The swarm's unlock key." type: "string" example: UnlockKey: "SWMKEY-1-7c37Cc8654o6p38HnroywCi19pllOnGtbdZEgtKxZu8" responses: 200: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" tags: ["Swarm"] /services: get: summary: "List services" operationId: "ServiceList" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Service" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the services list. Available filters: - `id=<service id>` - `label=<service label>` - `mode=["replicated"|"global"]` - `name=<service name>` - name: "status" in: "query" type: "boolean" description: | Include service status, with count of running and desired tasks. tags: ["Service"] /services/create: post: summary: "Create a service" operationId: "ServiceCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: type: "object" title: "ServiceCreateResponse" properties: ID: description: "The ID of the created service." 
type: "string" Warning: description: "Optional warning message" type: "string" example: ID: "ak7w3gjqoa3kuz8xcpnyy0pvl" Warning: "unable to pin image doesnotexist:latest to digest: image library/doesnotexist:latest not found" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 403: description: "network is not eligible for services" schema: $ref: "#/definitions/ErrorResponse" 409: description: "name conflicts with an existing service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: allOf: - $ref: "#/definitions/ServiceSpec" - type: "object" example: Name: "web" TaskTemplate: ContainerSpec: Image: "nginx:alpine" Mounts: - ReadOnly: true Source: "web-data" Target: "/usr/share/nginx/html" Type: "volume" VolumeOptions: DriverConfig: {} Labels: com.example.something: "something-value" Hosts: ["10.10.10.10 host1", "ABCD:EF01:2345:6789:ABCD:EF01:2345:6789 host2"] User: "33" DNSConfig: Nameservers: ["8.8.8.8"] Search: ["example.org"] Options: ["timeout:3"] Secrets: - File: Name: "www.example.org.key" UID: "33" GID: "33" Mode: 384 SecretID: "fpjqlhnwb19zds35k8wn80lq9" SecretName: "example_org_domain_key" LogDriver: Name: "json-file" Options: max-file: "3" max-size: "10M" Placement: {} Resources: Limits: MemoryBytes: 104857600 Reservations: {} RestartPolicy: Condition: "on-failure" Delay: 10000000000 MaxAttempts: 10 Mode: Replicated: Replicas: 4 UpdateConfig: Parallelism: 2 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 RollbackConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 EndpointSpec: Ports: - Protocol: "tcp" PublishedPort: 8080 TargetPort: 80 Labels: foo: "bar" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration for pulling from private registries. Refer to the [authentication section](#section/Authentication) for details. type: "string" tags: ["Service"] /services/{id}: get: summary: "Inspect a service" operationId: "ServiceInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Service" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID or name of service." required: true type: "string" - name: "insertDefaults" in: "query" description: "Fill empty fields with default values." type: "boolean" default: false tags: ["Service"] delete: summary: "Delete a service" operationId: "ServiceDelete" responses: 200: description: "no error" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID or name of service." 
required: true type: "string" tags: ["Service"] /services/{id}/update: post: summary: "Update a service" operationId: "ServiceUpdate" consumes: ["application/json"] produces: ["application/json"] responses: 200: description: "no error" schema: $ref: "#/definitions/ServiceUpdateResponse" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID or name of service." required: true type: "string" - name: "body" in: "body" required: true schema: allOf: - $ref: "#/definitions/ServiceSpec" - type: "object" example: Name: "top" TaskTemplate: ContainerSpec: Image: "busybox" Args: - "top" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ForceUpdate: 0 Mode: Replicated: Replicas: 1 UpdateConfig: Parallelism: 2 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 RollbackConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 EndpointSpec: Mode: "vip" - name: "version" in: "query" description: | The version number of the service object being updated. This is required to avoid conflicting writes. This version number should be the value as currently set on the service *before* the update. You can find the current version by calling `GET /services/{id}` required: true type: "integer" - name: "registryAuthFrom" in: "query" description: | If the `X-Registry-Auth` header is not specified, this parameter indicates where to find registry authorization credentials. type: "string" enum: ["spec", "previous-spec"] default: "spec" - name: "rollback" in: "query" description: | Set to this parameter to `previous` to cause a server-side rollback to the previous service spec. The supplied spec will be ignored in this case. type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration for pulling from private registries. Refer to the [authentication section](#section/Authentication) for details. type: "string" tags: ["Service"] /services/{id}/logs: get: summary: "Get service logs" description: | Get `stdout` and `stderr` logs from a service. See also [`/containers/{id}/logs`](#operation/ContainerLogs). **Note**: This endpoint works only for services with the `local`, `json-file` or `journald` logging drivers. operationId: "ServiceLogs" responses: 200: description: "logs returned as a stream in response body" schema: type: "string" format: "binary" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such service: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the service" type: "string" - name: "details" in: "query" description: "Show service context and extra details provided to logs." type: "boolean" default: false - name: "follow" in: "query" description: "Keep connection after returning logs." 
type: "boolean" default: false - name: "stdout" in: "query" description: "Return logs from `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Return logs from `stderr`" type: "boolean" default: false - name: "since" in: "query" description: "Only return logs since this time, as a UNIX timestamp" type: "integer" default: 0 - name: "timestamps" in: "query" description: "Add timestamps to every log line" type: "boolean" default: false - name: "tail" in: "query" description: | Only return this number of log lines from the end of the logs. Specify as an integer or `all` to output all log lines. type: "string" default: "all" tags: ["Service"] /tasks: get: summary: "List tasks" operationId: "TaskList" produces: - "application/json" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Task" example: - ID: "0kzzo1i0y4jz6027t0k7aezc7" Version: Index: 71 CreatedAt: "2016-06-07T21:07:31.171892745Z" UpdatedAt: "2016-06-07T21:07:31.376370513Z" Spec: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ServiceID: "9mnpnzenvg8p8tdbtq4wvbkcz" Slot: 1 NodeID: "60gvrl6tm78dmak4yl7srz94v" Status: Timestamp: "2016-06-07T21:07:31.290032978Z" State: "running" Message: "started" ContainerStatus: ContainerID: "e5d62702a1b48d01c3e02ca1e0212a250801fa8d67caca0b6f35919ebc12f035" PID: 677 DesiredState: "running" NetworksAttachments: - Network: ID: "4qvuz4ko70xaltuqbt8956gd1" Version: Index: 18 CreatedAt: "2016-06-07T20:31:11.912919752Z" UpdatedAt: "2016-06-07T21:07:29.955277358Z" Spec: Name: "ingress" Labels: com.docker.swarm.internal: "true" DriverConfiguration: {} IPAMOptions: Driver: {} Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" DriverState: Name: "overlay" Options: com.docker.network.driver.overlay.vxlanid_list: "256" IPAMOptions: Driver: Name: "default" Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" Addresses: - "10.255.0.10/16" - ID: "1yljwbmlr8er2waf8orvqpwms" Version: Index: 30 CreatedAt: "2016-06-07T21:07:30.019104782Z" UpdatedAt: "2016-06-07T21:07:30.231958098Z" Name: "hopeful_cori" Spec: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ServiceID: "9mnpnzenvg8p8tdbtq4wvbkcz" Slot: 1 NodeID: "60gvrl6tm78dmak4yl7srz94v" Status: Timestamp: "2016-06-07T21:07:30.202183143Z" State: "shutdown" Message: "shutdown" ContainerStatus: ContainerID: "1cf8d63d18e79668b0004a4be4c6ee58cddfad2dae29506d8781581d0688a213" DesiredState: "shutdown" NetworksAttachments: - Network: ID: "4qvuz4ko70xaltuqbt8956gd1" Version: Index: 18 CreatedAt: "2016-06-07T20:31:11.912919752Z" UpdatedAt: "2016-06-07T21:07:29.955277358Z" Spec: Name: "ingress" Labels: com.docker.swarm.internal: "true" DriverConfiguration: {} IPAMOptions: Driver: {} Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" DriverState: Name: "overlay" Options: com.docker.network.driver.overlay.vxlanid_list: "256" IPAMOptions: Driver: Name: "default" Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" Addresses: - "10.255.0.5/16" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the tasks list. 
Available filters: - `desired-state=(running | shutdown | accepted)` - `id=<task id>` - `label=key` or `label="key=value"` - `name=<task name>` - `node=<node id or name>` - `service=<service name>` tags: ["Task"] /tasks/{id}: get: summary: "Inspect a task" operationId: "TaskInspect" produces: - "application/json" responses: 200: description: "no error" schema: $ref: "#/definitions/Task" 404: description: "no such task" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID of the task" required: true type: "string" tags: ["Task"] /tasks/{id}/logs: get: summary: "Get task logs" description: | Get `stdout` and `stderr` logs from a task. See also [`/containers/{id}/logs`](#operation/ContainerLogs). **Note**: This endpoint works only for services with the `local`, `json-file` or `journald` logging drivers. operationId: "TaskLogs" responses: 200: description: "logs returned as a stream in response body" schema: type: "string" format: "binary" 404: description: "no such task" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such task: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID of the task" type: "string" - name: "details" in: "query" description: "Show task context and extra details provided to logs." type: "boolean" default: false - name: "follow" in: "query" description: "Keep connection after returning logs." type: "boolean" default: false - name: "stdout" in: "query" description: "Return logs from `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Return logs from `stderr`" type: "boolean" default: false - name: "since" in: "query" description: "Only return logs since this time, as a UNIX timestamp" type: "integer" default: 0 - name: "timestamps" in: "query" description: "Add timestamps to every log line" type: "boolean" default: false - name: "tail" in: "query" description: | Only return this number of log lines from the end of the logs. Specify as an integer or `all` to output all log lines. type: "string" default: "all" tags: ["Task"] /secrets: get: summary: "List secrets" operationId: "SecretList" produces: - "application/json" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Secret" example: - ID: "blt1owaxmitz71s9v5zh81zun" Version: Index: 85 CreatedAt: "2017-07-20T13:55:28.678958722Z" UpdatedAt: "2017-07-20T13:55:28.678958722Z" Spec: Name: "mysql-passwd" Labels: some.label: "some.value" Driver: Name: "secret-bucket" Options: OptionA: "value for driver option A" OptionB: "value for driver option B" - ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "app-dev.crt" Labels: foo: "bar" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the secrets list. 
Available filters: - `id=<secret id>` - `label=<key> or label=<key>=value` - `name=<secret name>` - `names=<secret name>` tags: ["Secret"] /secrets/create: post: summary: "Create a secret" operationId: "SecretCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 409: description: "name conflicts with an existing object" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" schema: allOf: - $ref: "#/definitions/SecretSpec" - type: "object" example: Name: "app-key.crt" Labels: foo: "bar" Data: "VEhJUyBJUyBOT1QgQSBSRUFMIENFUlRJRklDQVRFCg==" Driver: Name: "secret-bucket" Options: OptionA: "value for driver option A" OptionB: "value for driver option B" tags: ["Secret"] /secrets/{id}: get: summary: "Inspect a secret" operationId: "SecretInspect" produces: - "application/json" responses: 200: description: "no error" schema: $ref: "#/definitions/Secret" examples: application/json: ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "app-dev.crt" Labels: foo: "bar" Driver: Name: "secret-bucket" Options: OptionA: "value for driver option A" OptionB: "value for driver option B" 404: description: "secret not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the secret" tags: ["Secret"] delete: summary: "Delete a secret" operationId: "SecretDelete" produces: - "application/json" responses: 204: description: "no error" 404: description: "secret not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the secret" tags: ["Secret"] /secrets/{id}/update: post: summary: "Update a Secret" operationId: "SecretUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such secret" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the secret" type: "string" required: true - name: "body" in: "body" schema: $ref: "#/definitions/SecretSpec" description: | The spec of the secret to update. Currently, only the Labels field can be updated. All other fields must remain unchanged from the [SecretInspect endpoint](#operation/SecretInspect) response values. - name: "version" in: "query" description: | The version number of the secret object being updated. This is required to avoid conflicting writes. 
type: "integer" format: "int64" required: true tags: ["Secret"] /configs: get: summary: "List configs" operationId: "ConfigList" produces: - "application/json" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Config" example: - ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "server.conf" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the configs list. Available filters: - `id=<config id>` - `label=<key> or label=<key>=value` - `name=<config name>` - `names=<config name>` tags: ["Config"] /configs/create: post: summary: "Create a config" operationId: "ConfigCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 409: description: "name conflicts with an existing object" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" schema: allOf: - $ref: "#/definitions/ConfigSpec" - type: "object" example: Name: "server.conf" Labels: foo: "bar" Data: "VEhJUyBJUyBOT1QgQSBSRUFMIENFUlRJRklDQVRFCg==" tags: ["Config"] /configs/{id}: get: summary: "Inspect a config" operationId: "ConfigInspect" produces: - "application/json" responses: 200: description: "no error" schema: $ref: "#/definitions/Config" examples: application/json: ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "app-dev.crt" 404: description: "config not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the config" tags: ["Config"] delete: summary: "Delete a config" operationId: "ConfigDelete" produces: - "application/json" responses: 204: description: "no error" 404: description: "config not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the config" tags: ["Config"] /configs/{id}/update: post: summary: "Update a Config" operationId: "ConfigUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such config" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the config" type: "string" required: true - name: "body" in: "body" schema: $ref: "#/definitions/ConfigSpec" description: | The spec of the config to update. 
Currently, only the Labels field can be updated. All other fields must remain unchanged from the [ConfigInspect endpoint](#operation/ConfigInspect) response values. - name: "version" in: "query" description: | The version number of the config object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true tags: ["Config"] /distribution/{name}/json: get: summary: "Get image information from the registry" description: | Return image digest and platform information by contacting the registry. operationId: "DistributionInspect" produces: - "application/json" responses: 200: description: "descriptor and platform information" schema: type: "object" x-go-name: DistributionInspect title: "DistributionInspectResponse" required: [Descriptor, Platforms] properties: Descriptor: type: "object" description: | A descriptor struct containing digest, media type, and size. properties: mediaType: type: "string" size: type: "integer" format: "int64" digest: type: "string" urls: type: "array" items: type: "string" Platforms: type: "array" description: | An array containing all platforms supported by the image. items: type: "object" properties: architecture: type: "string" os: type: "string" os.version: type: "string" os.features: type: "array" items: type: "string" variant: type: "string" Features: type: "array" items: type: "string" examples: application/json: Descriptor: MediaType: "application/vnd.docker.distribution.manifest.v2+json" Digest: "sha256:c0537ff6a5218ef531ece93d4984efc99bbf3f7497c0a7726c88e2bb7584dc96" Size: 3987495 URLs: - "" Platforms: - architecture: "amd64" os: "linux" os.version: "" os.features: - "" variant: "" Features: - "" 401: description: "Failed authentication or no image found" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such image: someimage (tag: latest)" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or id" type: "string" required: true tags: ["Distribution"] /session: post: summary: "Initialize interactive session" description: | Start a new interactive session with a server. Session allows server to call back to the client for advanced capabilities. ### Hijacking This endpoint hijacks the HTTP connection to HTTP2 transport that allows the client to expose gPRC services on that connection. For example, the client sends this request to upgrade the connection: ``` POST /session HTTP/1.1 Upgrade: h2c Connection: Upgrade ``` The Docker daemon responds with a `101 UPGRADED` response follow with the raw stream: ``` HTTP/1.1 101 UPGRADED Connection: Upgrade Upgrade: h2c ``` operationId: "Session" produces: - "application/vnd.docker.raw-stream" responses: 101: description: "no error, hijacking successful" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Session"]
gesellix
41568dfc665ad4033d7fc860a5dac8102915008e
9bc0c4903f7f02ce287b9918f64795368e507f9d
Looks like this one can be removed now. I was working on some other changes, and can include that in a follow-up
thaJeztah
4,563
moby/moby
42,616
replace pkg/signal with moby/sys/signal v0.5.0
This code was moved to the moby/sys repository. Relates to ~https://github.com/moby/sys/pull/69~ https://github.com/moby/sys/pull/70; relates to https://github.com/moby/moby/pull/42641; relates to https://github.com/containerd/containerd/issues/5402.
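For downstream callers, the move described above is essentially an import-path change. Below is a minimal, illustrative sketch (not taken from this PR) assuming the caller only needs `ParseSignal`, which `github.com/moby/sys/signal` carries over from the old `github.com/docker/docker/pkg/signal`:

```go
// Illustrative only: a caller that previously imported
// "github.com/docker/docker/pkg/signal" switches to the extracted module.
package main

import (
	"fmt"
	"os"

	"github.com/moby/sys/signal" // assumption: ParseSignal is provided here, as in the old pkg/signal
)

func main() {
	// ParseSignal accepts a signal name ("SIGTERM" / "TERM") or a number ("15")
	// and returns the corresponding syscall.Signal.
	sig, err := signal.ParseSignal("SIGTERM")
	if err != nil {
		fmt.Fprintln(os.Stderr, "invalid signal:", err)
		os.Exit(1)
	}
	fmt.Printf("parsed signal number: %d\n", sig)
}
```

If the package was extracted unchanged (as the description suggests), callers only need to swap the import path; the function signatures stay the same.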
null
2021-07-09 22:14:09+00:00
2021-07-26 17:47:28+00:00
vendor.conf
github.com/Azure/go-ansiterm d185dfc1b5a126116ea5a19e148e29d16b4574c9 github.com/Microsoft/hcsshim 3ad51c76263bad09548a40e1996960814a12a870 # v0.8.20 github.com/Microsoft/go-winio 5c2e05d71961716a6c392a06ada435aaf5d5302c # v0.4.19 github.com/docker/libtrust 9cbd2a1374f46905c68a4eb3694a130610adc62a github.com/golang/gddo 72a348e765d293ed6d1ded7b699591f14d6cd921 github.com/google/uuid 0cd6bf5da1e1c83f8b45653022c74f71af0538a4 # v1.1.1 github.com/gorilla/mux 98cb6bf42e086f6af920b965c38cacc07402d51b # v1.8.0 github.com/moby/locker 281af2d563954745bea9d1487c965f24d30742fe # v1.0.1 github.com/moby/term 3f7ff695adc6a35abc925370dd0a4dafb48ec64d # Note that this dependency uses submodules, providing the github.com/moby/sys/mount, # github.com/moby/sys/mountinfo, and github.com/moby/sys/symlink modules. Our vendoring # tool (vndr) currently does not support submodules / vendoring sub-paths, so we vendor # the top-level moby/sys repository (which contains both) and pick the most recent tag, # which could be either `mountinfo/vX.Y.Z`, `mount/vX.Y.Z`, or `symlink/vX.Y.Z`. github.com/moby/sys b0f1fd7235275d01bd35cc4421e884e522395f45 # mountinfo/v0.4.1 github.com/creack/pty 2a38352e8b4d7ab6c336eef107e42a55e72e7fbc # v1.1.11 github.com/sirupsen/logrus bdc0db8ead3853c56b7cd1ac2ba4e11b47d7da6b # v1.8.1 github.com/tchap/go-patricia a7f0089c6f496e8e70402f61733606daa326cac5 # v2.3.0 golang.org/x/net e18ecbb051101a46fc263334b127c89bc7bff7ea golang.org/x/sys d19ff857e887eacb631721f188c7d365c2331456 github.com/docker/go-units 519db1ee28dcc9fd2474ae59fca29a810482bfb1 # v0.4.0 github.com/docker/go-connections 7395e3f8aa162843a74ed6d48e79627d9792ac55 # v0.4.0 golang.org/x/text 23ae387dee1f90d29a23c0e87ee0b46038fbed0e # v0.3.3 gotest.tools/v3 568bc57cc5c19a2ef85e5749870b49a4cc2ab54d # v3.0.3 github.com/google/go-cmp 3af367b6b30c263d47e8895973edcca9a49cf029 # v0.2.0 github.com/syndtr/gocapability 42c35b4376354fd554efc7ad35e0b7f94e3a0ffb github.com/RackSec/srslog a4725f04ec91af1a91b380da679d6e0c2f061e59 github.com/imdario/mergo 1afb36080aec31e0d1528973ebe6721b191b0369 # v0.3.8 golang.org/x/sync 036812b2e83c0ddf193dd5a34e034151da389d09 # buildkit github.com/moby/buildkit 9f254e18360a24c2ae47b26f772c3c89533bcbb7 # master / v0.9.0-dev github.com/tonistiigi/fsutil d72af97c0eaf93c1d20360e3cb9c63c223675b83 github.com/tonistiigi/units 6950e57a87eaf136bbe44ef2ec8e75b9e3569de2 github.com/grpc-ecosystem/grpc-opentracing 8e809c8a86450a29b90dcc9efbf062d0fe6d9746 github.com/opentracing/opentracing-go d34af3eaa63c4d08ab54863a4bdd0daa45212e12 # v1.2.0 github.com/google/shlex e7afc7fbc51079733e9468cdfd1efcd7d196cd1d github.com/opentracing-contrib/go-stdlib 8a6ff1ad1691a29e4f7b5d46604f97634997c8c4 # v1.0.0 github.com/mitchellh/hashstructure a38c50148365edc8df43c1580c48fb2b3a1e9cd7 # v1.0.0 github.com/gofrs/flock 6caa7350c26b838538005fae7dbee4e69d9398db # v0.7.3 github.com/grpc-ecosystem/go-grpc-middleware 3c51f7f332123e8be5a157c0802a228ac85bf9db # v1.2.0 # libnetwork github.com/docker/go-events e31b211e4f1cd09aa76fe4ac244571fab96ae47f github.com/armon/go-radix e39d623f12e8e41c7b5529e9a9dd67a1e2261f80 github.com/armon/go-metrics eb0af217e5e9747e41dd5303755356b62d28e3ec github.com/hashicorp/go-msgpack 71c2886f5a673a35f909803f38ece5810165097b github.com/hashicorp/memberlist 619135cdd9e5dda8c12f8ceef39bdade4f5899b6 # v0.2.4 github.com/sean-/seed e2103e2c35297fb7e17febb81e49b312087a2372 github.com/hashicorp/errwrap 8a6fb523712970c966eefc6b39ed2c5e74880354 # v1.0.0 github.com/hashicorp/go-sockaddr c7188e74f6acae5a989bdc959aa779f8b9f42faf # 
v1.0.2 github.com/hashicorp/go-multierror 886a7fbe3eb1c874d46f623bfa70af45f425b3d1 # v1.0.0 github.com/hashicorp/serf 598c54895cc5a7b1a24a398d635e8c0ea0959870 github.com/docker/libkv 458977154600b9f23984d9f4b82e79570b5ae12b github.com/vishvananda/netns db3c7e526aae966c4ccfa6c8189b693d6ac5d202 github.com/vishvananda/netlink f049be6f391489d3f374498fe0c8df8449258372 # v1.1.0 github.com/moby/ipvs 4566ccea0e08d68e9614c3e7a64a23b850c4bb35 # v1.0.1 github.com/google/btree 479b5e81b0a93ec038d201b0b33d17db599531d3 # v1.0.1 github.com/samuel/go-zookeeper d0e0d8e11f318e000a8cc434616d69e329edc374 github.com/deckarep/golang-set ef32fa3046d9f249d399f98ebaf9be944430fd1d github.com/coreos/etcd 2c834459e1aab78a5d5219c7dfe42335fc4b617a # v3.3.25 github.com/coreos/go-semver 8ab6407b697782a06568d4b7f1db25550ec2e4c6 # v0.2.0 github.com/hashicorp/consul 9a9cc9341bb487651a0399e3fc5e1e8a42e62dd9 # v0.5.2 github.com/miekg/dns 6c0c4e6581f8e173cc562c8b3363ab984e4ae071 # v1.1.27 github.com/ishidawataru/sctp f2269e66cdee387bd321445d5d300893449805be go.etcd.io/bbolt 232d8fc87f50244f9c808f4745759e08a304c029 # v1.3.5 github.com/json-iterator/go a1ca0830781e007c66b225121d2cdb3a649421f6 # v1.1.10 github.com/modern-go/concurrent bacd9c7ef1dd9b15be4a9909b8ac7a4e313eec94 # 1.0.3 github.com/modern-go/reflect2 94122c33edd36123c84d5368cfb2b69df93a0ec8 # v1.0.1 # get graph and distribution packages github.com/docker/distribution 0d3efadf0154c2b8a4e7b6621fff9809655cc580 github.com/vbatts/tar-split 620714a4c508c880ac1bdda9c8370a2b19af1a55 # v0.11.1 github.com/opencontainers/go-digest ea51bea511f75cfa3ef6098cc253c5c3609b037a # v1.0.0 # get go-zfs packages github.com/mistifyio/go-zfs f784269be439d704d3dfa1906f45dd848fed2beb google.golang.org/grpc f495f5b15ae7ccda3b38c53a1bfcde4c1a58a2bc # v1.27.1 # The version of runc should match the version that is used by the containerd # version that is used. If you need to update runc, open a pull request in # the containerd project first, and update both after that is merged. # This commit does not need to match RUNC_COMMIT as it is used for helper # packages but should be newer or equal. 
github.com/opencontainers/runc 4144b63817ebcc5b358fc2c8ef95f7cddd709aa7 # v1.0.1 github.com/opencontainers/runtime-spec 1c3f411f041711bbeecf35ff7e93461ea6789220 # v1.0.3-0.20210326190908-1c3f411f0417 github.com/opencontainers/image-spec d60099175f88c47cd379c4738d158884749ed235 # v1.0.1 github.com/cyphar/filepath-securejoin a261ee33d7a517f054effbf451841abaafe3e0fd # v0.2.2 # go-systemd v17 is required by github.com/coreos/pkg/capnslog/journald_formatter.go github.com/coreos/go-systemd 39ca1b05acc7ad1220e09f133283b8859a8b71ab # v17 # systemd integration (journald, daemon/listeners, containerd/cgroups) github.com/coreos/go-systemd/v22 777e73a89cef78631ccaa97f53a9bae67e166186 # v22.3.2 github.com/godbus/dbus/v5 c88335c0b1d28a30e7fc76d526a06154b85e5d97 # v5.0.4 # gelf logging driver deps github.com/Graylog2/go-gelf 1550ee647df0510058c9d67a45c56f18911d80b8 # v2 branch # fluent-logger-golang deps github.com/fluent/fluent-logger-golang b9b7fb02ccfee8ba4e69aa87386820c2bf24fd11 # v1.6.1 github.com/philhofer/fwd bb6d471dc95d4fe11e432687f8b70ff496cf3136 # v1.0.0 github.com/tinylib/msgp af6442a0fcf6e2a1b824f70dd0c734f01e817751 # v1.1.0 # fsnotify github.com/fsnotify/fsnotify 45d7d09e39ef4ac08d493309fa031790c15bfe8a # v1.4.9 # awslogs deps github.com/aws/aws-sdk-go 2590bc875c54c9fda225d8e4e56a9d28d90c6a47 # v1.28.11 github.com/jmespath/go-jmespath 2d053f87d1d7f9f48196ae04cf3daea4273d207d # v0.3.0 # logentries github.com/bsphere/le_go 7a984a84b5492ae539b79b62fb4a10afc63c7bcf # gcplogs deps golang.org/x/oauth2 bf48bf16ab8d622ce64ec6ce98d2c98f916b6303 google.golang.org/api dec2ee309f5b09fc59bc40676447c15736284d78 # v0.8.0 github.com/golang/groupcache 869f871628b6baa9cfbc11732cdf6546b17c1298 go.opencensus.io d835ff86be02193d324330acdb7d65546b05f814 # v0.22.3 cloud.google.com/go ceeb313ad77b789a7fa5287b36a1d127b69b7093 # v0.44.3 github.com/googleapis/gax-go bd5b16380fd03dc758d11cef74ba2e3bc8b0e8c2 # v2.0.5 google.golang.org/genproto 3f1135a288c9a07e340ae8ba4cc6c7065a3160e8 # containerd github.com/containerd/containerd 69107e47a62e1d690afa2b9b1d43f8ece3ff4483 # v1.5.4 github.com/containerd/fifo 650e8a8a179d040123db61f016cb133143e7a581 # v1.0.0 github.com/containerd/continuity bce1c3f9669b6f3e7f6656ee715b0b4d75fa64a6 # v0.1.0 github.com/containerd/cgroups b9de8a2212026c07cec67baf3323f1fc0121e048 # v1.0.1 github.com/containerd/console 2f1e3d2b6afd18e8b2077816c711205a0b4d8769 # v1.0.2 github.com/containerd/go-runc 16b287bc67d069a60fa48db15f330b790b74365b # v1.0.0 github.com/containerd/typeurl 5e43fb8b75ed2f2305fc04e6918c8d10636771bc # v1.0.2 github.com/containerd/ttrpc bfba540dc45464586c106b1f31c8547933c1eb41 # v1.0.2 github.com/gogo/googleapis 01e0f9cca9b92166042241267ee2a5cdf5cff46c # v1.3.2 github.com/cilium/ebpf ca492085341e0e917f48ec30704d5054c5d42ca8 # v0.6.2 github.com/klauspost/compress a3b7545c88eea469c2246bee0e6c130525d56190 # v1.11.13 github.com/pelletier/go-toml 65ca8064882c8c308e5c804c5d5443d409e0738c # v1.8.1 # cluster github.com/docker/swarmkit 2dcf70aafdc9ea55af3aaaeca440638cde0ecda6 # master github.com/gogo/protobuf b03c65ea87cdc3521ede29f62fe3ce239267c1bc # v1.3.2 github.com/golang/protobuf 84668698ea25b64748563aa20726db66a6b8d299 # v1.3.5 github.com/cloudflare/cfssl 5d63dbd981b5c408effbb58c442d54761ff94fbd # 1.3.2 github.com/fernet/fernet-go 9eac43b88a5efb8651d24de9b68e87567e029736 github.com/google/certificate-transparency-go 37a384cd035e722ea46e55029093e26687138edf # v1.0.20 golang.org/x/crypto 0c34fe9e7dc2486962ef9867e3edb3503537209f golang.org/x/time 
3af7569d3a1e776fc2a3c1cec133b43105ea9c2e github.com/hashicorp/go-memdb cb9a474f84cc5e41b273b20c6927680b2a8776ad github.com/hashicorp/go-immutable-radix 826af9ccf0feeee615d546d69b11f8e98da8c8f1 git://github.com/tonistiigi/go-immutable-radix.git github.com/hashicorp/golang-lru 7f827b33c0f158ec5dfbba01bb0b14a4541fd81d # v0.5.3 github.com/coreos/pkg 97fdf19511ea361ae1c100dd393cc47f8dcfa1e1 # v4 code.cloudfoundry.org/clock 02e53af36e6c978af692887ed449b74026d76fec # v1.0.0 # prometheus github.com/prometheus/client_golang 6edbbd9e560190e318cdc5b4d3e630b442858380 # v1.6.0 github.com/beorn7/perks 37c8de3658fcb183f997c4e13e8337516ab753e6 # v1.0.1 github.com/prometheus/client_model 7bc5445566f0fe75b15de23e6b93886e982d7bf9 # v0.2.0 github.com/prometheus/common d978bcb1309602d68bb4ba69cf3f8ed900e07308 # v0.9.1 github.com/prometheus/procfs 46159f73e74d1cb8dc223deef9b2d049286f46b1 # v0.0.11 github.com/matttproud/golang_protobuf_extensions c12348ce28de40eed0136aa2b644d0ee0650e56c # v1.0.1 github.com/pkg/errors 614d223910a179a466c1767a985424175c39b465 # v0.9.1 github.com/grpc-ecosystem/go-grpc-prometheus c225b8c3b01faf2899099b768856a9e916e5087b # v1.2.0 github.com/cespare/xxhash/v2 d7df74196a9e781ede915320c11c378c1b2f3a1f # v2.1.1 # cli github.com/spf13/cobra 8380ddd3132bdf8fd77731725b550c181dda0aa8 # v1.1.3 github.com/spf13/pflag 2e9d26c8c37aae03e3f9d4e90b7116f5accb7cab # v1.0.5 github.com/inconshreveable/mousetrap 76626ae9c91c4f2a10f34cad8ce83ea42c93bb75 # v1.0.0 github.com/morikuni/aec 39771216ff4c63d11f5e604076f9c45e8be1067b # v1.0.0 # metrics github.com/docker/go-metrics b619b3592b65de4f087d9f16863a7e6ff905973c # v0.0.1 github.com/opencontainers/selinux 76bc82e11d854d3e40c08889d13c98abcea72ea2 # v1.8.2 github.com/bits-and-blooms/bitset 59de210119f50cedaa42d175dc88b6335fcf63f6 # v1.2.0 # archive/tar # rm -rf vendor/archive # mkdir -p ./vendor/archive # git clone -b go$GOLANG_VERSION --depth=1 git://github.com/golang/go.git ./go # git --git-dir ./go/.git --work-tree ./go am ../patches/0001-archive-tar-do-not-populate-user-group-names.patch # cp -a go/src/archive/tar ./vendor/archive/tar # rm -rf ./go # vndr -whitelist=^archive/tar # DO NOT EDIT BELOW THIS LINE -------- reserved for downstream projects --------
github.com/Azure/go-ansiterm d185dfc1b5a126116ea5a19e148e29d16b4574c9 github.com/Microsoft/hcsshim 3ad51c76263bad09548a40e1996960814a12a870 # v0.8.20 github.com/Microsoft/go-winio 5c2e05d71961716a6c392a06ada435aaf5d5302c # v0.4.19 github.com/docker/libtrust 9cbd2a1374f46905c68a4eb3694a130610adc62a github.com/golang/gddo 72a348e765d293ed6d1ded7b699591f14d6cd921 github.com/google/uuid 0cd6bf5da1e1c83f8b45653022c74f71af0538a4 # v1.1.1 github.com/gorilla/mux 98cb6bf42e086f6af920b965c38cacc07402d51b # v1.8.0 github.com/moby/locker 281af2d563954745bea9d1487c965f24d30742fe # v1.0.1 github.com/moby/term 3f7ff695adc6a35abc925370dd0a4dafb48ec64d # Note that this dependency uses submodules, providing the github.com/moby/sys/mount, # github.com/moby/sys/mountinfo, github.com/moby/sys/signal, and github.com/moby/sys/symlink # modules. Our vendoring tool (vndr) currently does not support submodules / vendoring sub-paths, # so we vendor the top-level moby/sys repository (which contains both) and pick the most recent tag, # which could be either `mountinfo/vX.Y.Z`, `mount/vX.Y.Z`, `signal/vX.Y.Z`, or `symlink/vX.Y.Z`. github.com/moby/sys 9b0136d132d8e0d1c116a38d7ec9af70d3a59536 # signal/v0.5.0 github.com/creack/pty 2a38352e8b4d7ab6c336eef107e42a55e72e7fbc # v1.1.11 github.com/sirupsen/logrus bdc0db8ead3853c56b7cd1ac2ba4e11b47d7da6b # v1.8.1 github.com/tchap/go-patricia a7f0089c6f496e8e70402f61733606daa326cac5 # v2.3.0 golang.org/x/net e18ecbb051101a46fc263334b127c89bc7bff7ea golang.org/x/sys d19ff857e887eacb631721f188c7d365c2331456 github.com/docker/go-units 519db1ee28dcc9fd2474ae59fca29a810482bfb1 # v0.4.0 github.com/docker/go-connections 7395e3f8aa162843a74ed6d48e79627d9792ac55 # v0.4.0 golang.org/x/text 23ae387dee1f90d29a23c0e87ee0b46038fbed0e # v0.3.3 gotest.tools/v3 568bc57cc5c19a2ef85e5749870b49a4cc2ab54d # v3.0.3 github.com/google/go-cmp 3af367b6b30c263d47e8895973edcca9a49cf029 # v0.2.0 github.com/syndtr/gocapability 42c35b4376354fd554efc7ad35e0b7f94e3a0ffb github.com/RackSec/srslog a4725f04ec91af1a91b380da679d6e0c2f061e59 github.com/imdario/mergo 1afb36080aec31e0d1528973ebe6721b191b0369 # v0.3.8 golang.org/x/sync 036812b2e83c0ddf193dd5a34e034151da389d09 # buildkit github.com/moby/buildkit 9f254e18360a24c2ae47b26f772c3c89533bcbb7 # master / v0.9.0-dev github.com/tonistiigi/fsutil d72af97c0eaf93c1d20360e3cb9c63c223675b83 github.com/tonistiigi/units 6950e57a87eaf136bbe44ef2ec8e75b9e3569de2 github.com/grpc-ecosystem/grpc-opentracing 8e809c8a86450a29b90dcc9efbf062d0fe6d9746 github.com/opentracing/opentracing-go d34af3eaa63c4d08ab54863a4bdd0daa45212e12 # v1.2.0 github.com/google/shlex e7afc7fbc51079733e9468cdfd1efcd7d196cd1d github.com/opentracing-contrib/go-stdlib 8a6ff1ad1691a29e4f7b5d46604f97634997c8c4 # v1.0.0 github.com/mitchellh/hashstructure a38c50148365edc8df43c1580c48fb2b3a1e9cd7 # v1.0.0 github.com/gofrs/flock 6caa7350c26b838538005fae7dbee4e69d9398db # v0.7.3 github.com/grpc-ecosystem/go-grpc-middleware 3c51f7f332123e8be5a157c0802a228ac85bf9db # v1.2.0 # libnetwork github.com/docker/go-events e31b211e4f1cd09aa76fe4ac244571fab96ae47f github.com/armon/go-radix e39d623f12e8e41c7b5529e9a9dd67a1e2261f80 github.com/armon/go-metrics eb0af217e5e9747e41dd5303755356b62d28e3ec github.com/hashicorp/go-msgpack 71c2886f5a673a35f909803f38ece5810165097b github.com/hashicorp/memberlist 619135cdd9e5dda8c12f8ceef39bdade4f5899b6 # v0.2.4 github.com/sean-/seed e2103e2c35297fb7e17febb81e49b312087a2372 github.com/hashicorp/errwrap 8a6fb523712970c966eefc6b39ed2c5e74880354 # v1.0.0 github.com/hashicorp/go-sockaddr 
c7188e74f6acae5a989bdc959aa779f8b9f42faf # v1.0.2 github.com/hashicorp/go-multierror 886a7fbe3eb1c874d46f623bfa70af45f425b3d1 # v1.0.0 github.com/hashicorp/serf 598c54895cc5a7b1a24a398d635e8c0ea0959870 github.com/docker/libkv 458977154600b9f23984d9f4b82e79570b5ae12b github.com/vishvananda/netns db3c7e526aae966c4ccfa6c8189b693d6ac5d202 github.com/vishvananda/netlink f049be6f391489d3f374498fe0c8df8449258372 # v1.1.0 github.com/moby/ipvs 4566ccea0e08d68e9614c3e7a64a23b850c4bb35 # v1.0.1 github.com/google/btree 479b5e81b0a93ec038d201b0b33d17db599531d3 # v1.0.1 github.com/samuel/go-zookeeper d0e0d8e11f318e000a8cc434616d69e329edc374 github.com/deckarep/golang-set ef32fa3046d9f249d399f98ebaf9be944430fd1d github.com/coreos/etcd 2c834459e1aab78a5d5219c7dfe42335fc4b617a # v3.3.25 github.com/coreos/go-semver 8ab6407b697782a06568d4b7f1db25550ec2e4c6 # v0.2.0 github.com/hashicorp/consul 9a9cc9341bb487651a0399e3fc5e1e8a42e62dd9 # v0.5.2 github.com/miekg/dns 6c0c4e6581f8e173cc562c8b3363ab984e4ae071 # v1.1.27 github.com/ishidawataru/sctp f2269e66cdee387bd321445d5d300893449805be go.etcd.io/bbolt 232d8fc87f50244f9c808f4745759e08a304c029 # v1.3.5 github.com/json-iterator/go a1ca0830781e007c66b225121d2cdb3a649421f6 # v1.1.10 github.com/modern-go/concurrent bacd9c7ef1dd9b15be4a9909b8ac7a4e313eec94 # 1.0.3 github.com/modern-go/reflect2 94122c33edd36123c84d5368cfb2b69df93a0ec8 # v1.0.1 # get graph and distribution packages github.com/docker/distribution 0d3efadf0154c2b8a4e7b6621fff9809655cc580 github.com/vbatts/tar-split 620714a4c508c880ac1bdda9c8370a2b19af1a55 # v0.11.1 github.com/opencontainers/go-digest ea51bea511f75cfa3ef6098cc253c5c3609b037a # v1.0.0 # get go-zfs packages github.com/mistifyio/go-zfs f784269be439d704d3dfa1906f45dd848fed2beb google.golang.org/grpc f495f5b15ae7ccda3b38c53a1bfcde4c1a58a2bc # v1.27.1 # The version of runc should match the version that is used by the containerd # version that is used. If you need to update runc, open a pull request in # the containerd project first, and update both after that is merged. # This commit does not need to match RUNC_COMMIT as it is used for helper # packages but should be newer or equal. 
github.com/opencontainers/runc 4144b63817ebcc5b358fc2c8ef95f7cddd709aa7 # v1.0.1 github.com/opencontainers/runtime-spec 1c3f411f041711bbeecf35ff7e93461ea6789220 # v1.0.3-0.20210326190908-1c3f411f0417 github.com/opencontainers/image-spec d60099175f88c47cd379c4738d158884749ed235 # v1.0.1 github.com/cyphar/filepath-securejoin a261ee33d7a517f054effbf451841abaafe3e0fd # v0.2.2 # go-systemd v17 is required by github.com/coreos/pkg/capnslog/journald_formatter.go github.com/coreos/go-systemd 39ca1b05acc7ad1220e09f133283b8859a8b71ab # v17 # systemd integration (journald, daemon/listeners, containerd/cgroups) github.com/coreos/go-systemd/v22 777e73a89cef78631ccaa97f53a9bae67e166186 # v22.3.2 github.com/godbus/dbus/v5 c88335c0b1d28a30e7fc76d526a06154b85e5d97 # v5.0.4 # gelf logging driver deps github.com/Graylog2/go-gelf 1550ee647df0510058c9d67a45c56f18911d80b8 # v2 branch # fluent-logger-golang deps github.com/fluent/fluent-logger-golang b9b7fb02ccfee8ba4e69aa87386820c2bf24fd11 # v1.6.1 github.com/philhofer/fwd bb6d471dc95d4fe11e432687f8b70ff496cf3136 # v1.0.0 github.com/tinylib/msgp af6442a0fcf6e2a1b824f70dd0c734f01e817751 # v1.1.0 # fsnotify github.com/fsnotify/fsnotify 45d7d09e39ef4ac08d493309fa031790c15bfe8a # v1.4.9 # awslogs deps github.com/aws/aws-sdk-go 2590bc875c54c9fda225d8e4e56a9d28d90c6a47 # v1.28.11 github.com/jmespath/go-jmespath 2d053f87d1d7f9f48196ae04cf3daea4273d207d # v0.3.0 # logentries github.com/bsphere/le_go 7a984a84b5492ae539b79b62fb4a10afc63c7bcf # gcplogs deps golang.org/x/oauth2 bf48bf16ab8d622ce64ec6ce98d2c98f916b6303 google.golang.org/api dec2ee309f5b09fc59bc40676447c15736284d78 # v0.8.0 github.com/golang/groupcache 869f871628b6baa9cfbc11732cdf6546b17c1298 go.opencensus.io d835ff86be02193d324330acdb7d65546b05f814 # v0.22.3 cloud.google.com/go ceeb313ad77b789a7fa5287b36a1d127b69b7093 # v0.44.3 github.com/googleapis/gax-go bd5b16380fd03dc758d11cef74ba2e3bc8b0e8c2 # v2.0.5 google.golang.org/genproto 3f1135a288c9a07e340ae8ba4cc6c7065a3160e8 # containerd github.com/containerd/containerd 69107e47a62e1d690afa2b9b1d43f8ece3ff4483 # v1.5.4 github.com/containerd/fifo 650e8a8a179d040123db61f016cb133143e7a581 # v1.0.0 github.com/containerd/continuity bce1c3f9669b6f3e7f6656ee715b0b4d75fa64a6 # v0.1.0 github.com/containerd/cgroups b9de8a2212026c07cec67baf3323f1fc0121e048 # v1.0.1 github.com/containerd/console 2f1e3d2b6afd18e8b2077816c711205a0b4d8769 # v1.0.2 github.com/containerd/go-runc 16b287bc67d069a60fa48db15f330b790b74365b # v1.0.0 github.com/containerd/typeurl 5e43fb8b75ed2f2305fc04e6918c8d10636771bc # v1.0.2 github.com/containerd/ttrpc bfba540dc45464586c106b1f31c8547933c1eb41 # v1.0.2 github.com/gogo/googleapis 01e0f9cca9b92166042241267ee2a5cdf5cff46c # v1.3.2 github.com/cilium/ebpf ca492085341e0e917f48ec30704d5054c5d42ca8 # v0.6.2 github.com/klauspost/compress a3b7545c88eea469c2246bee0e6c130525d56190 # v1.11.13 github.com/pelletier/go-toml 65ca8064882c8c308e5c804c5d5443d409e0738c # v1.8.1 # cluster github.com/docker/swarmkit 2dcf70aafdc9ea55af3aaaeca440638cde0ecda6 # master github.com/gogo/protobuf b03c65ea87cdc3521ede29f62fe3ce239267c1bc # v1.3.2 github.com/golang/protobuf 84668698ea25b64748563aa20726db66a6b8d299 # v1.3.5 github.com/cloudflare/cfssl 5d63dbd981b5c408effbb58c442d54761ff94fbd # 1.3.2 github.com/fernet/fernet-go 9eac43b88a5efb8651d24de9b68e87567e029736 github.com/google/certificate-transparency-go 37a384cd035e722ea46e55029093e26687138edf # v1.0.20 golang.org/x/crypto 0c34fe9e7dc2486962ef9867e3edb3503537209f golang.org/x/time 
3af7569d3a1e776fc2a3c1cec133b43105ea9c2e github.com/hashicorp/go-memdb cb9a474f84cc5e41b273b20c6927680b2a8776ad github.com/hashicorp/go-immutable-radix 826af9ccf0feeee615d546d69b11f8e98da8c8f1 git://github.com/tonistiigi/go-immutable-radix.git github.com/hashicorp/golang-lru 7f827b33c0f158ec5dfbba01bb0b14a4541fd81d # v0.5.3 github.com/coreos/pkg 97fdf19511ea361ae1c100dd393cc47f8dcfa1e1 # v4 code.cloudfoundry.org/clock 02e53af36e6c978af692887ed449b74026d76fec # v1.0.0 # prometheus github.com/prometheus/client_golang 6edbbd9e560190e318cdc5b4d3e630b442858380 # v1.6.0 github.com/beorn7/perks 37c8de3658fcb183f997c4e13e8337516ab753e6 # v1.0.1 github.com/prometheus/client_model 7bc5445566f0fe75b15de23e6b93886e982d7bf9 # v0.2.0 github.com/prometheus/common d978bcb1309602d68bb4ba69cf3f8ed900e07308 # v0.9.1 github.com/prometheus/procfs 46159f73e74d1cb8dc223deef9b2d049286f46b1 # v0.0.11 github.com/matttproud/golang_protobuf_extensions c12348ce28de40eed0136aa2b644d0ee0650e56c # v1.0.1 github.com/pkg/errors 614d223910a179a466c1767a985424175c39b465 # v0.9.1 github.com/grpc-ecosystem/go-grpc-prometheus c225b8c3b01faf2899099b768856a9e916e5087b # v1.2.0 github.com/cespare/xxhash/v2 d7df74196a9e781ede915320c11c378c1b2f3a1f # v2.1.1 # cli github.com/spf13/cobra 8380ddd3132bdf8fd77731725b550c181dda0aa8 # v1.1.3 github.com/spf13/pflag 2e9d26c8c37aae03e3f9d4e90b7116f5accb7cab # v1.0.5 github.com/inconshreveable/mousetrap 76626ae9c91c4f2a10f34cad8ce83ea42c93bb75 # v1.0.0 github.com/morikuni/aec 39771216ff4c63d11f5e604076f9c45e8be1067b # v1.0.0 # metrics github.com/docker/go-metrics b619b3592b65de4f087d9f16863a7e6ff905973c # v0.0.1 github.com/opencontainers/selinux 76bc82e11d854d3e40c08889d13c98abcea72ea2 # v1.8.2 github.com/bits-and-blooms/bitset 59de210119f50cedaa42d175dc88b6335fcf63f6 # v1.2.0 # archive/tar # rm -rf vendor/archive # mkdir -p ./vendor/archive # git clone -b go$GOLANG_VERSION --depth=1 git://github.com/golang/go.git ./go # git --git-dir ./go/.git --work-tree ./go am ../patches/0001-archive-tar-do-not-populate-user-group-names.patch # cp -a go/src/archive/tar ./vendor/archive/tar # rm -rf ./go # vndr -whitelist=^archive/tar # DO NOT EDIT BELOW THIS LINE -------- reserved for downstream projects --------
thaJeztah
9674540ccff358c3cd84cc2f33c3503e0dab7fb7
12f1b3ce43fe4aea5a41750bcc20f2a7dd67dbfc
Need to tag a release for this; would `v0.5.0` be good for this one? Or are we comfortable that `v1.0.0` is OK?
thaJeztah
4,564
moby/moby
42,616
replace pkg/signal with moby/sys/signal v0.5.0
This code was moved to the moby/sys repository. Relates to ~https://github.com/moby/sys/pull/69~ https://github.com/moby/sys/pull/70; relates to https://github.com/moby/moby/pull/42641; relates to https://github.com/containerd/containerd/issues/5402.
null
2021-07-09 22:14:09+00:00
2021-07-26 17:47:28+00:00
vendor.conf
github.com/Azure/go-ansiterm d185dfc1b5a126116ea5a19e148e29d16b4574c9 github.com/Microsoft/hcsshim 3ad51c76263bad09548a40e1996960814a12a870 # v0.8.20 github.com/Microsoft/go-winio 5c2e05d71961716a6c392a06ada435aaf5d5302c # v0.4.19 github.com/docker/libtrust 9cbd2a1374f46905c68a4eb3694a130610adc62a github.com/golang/gddo 72a348e765d293ed6d1ded7b699591f14d6cd921 github.com/google/uuid 0cd6bf5da1e1c83f8b45653022c74f71af0538a4 # v1.1.1 github.com/gorilla/mux 98cb6bf42e086f6af920b965c38cacc07402d51b # v1.8.0 github.com/moby/locker 281af2d563954745bea9d1487c965f24d30742fe # v1.0.1 github.com/moby/term 3f7ff695adc6a35abc925370dd0a4dafb48ec64d # Note that this dependency uses submodules, providing the github.com/moby/sys/mount, # github.com/moby/sys/mountinfo, and github.com/moby/sys/symlink modules. Our vendoring # tool (vndr) currently does not support submodules / vendoring sub-paths, so we vendor # the top-level moby/sys repository (which contains both) and pick the most recent tag, # which could be either `mountinfo/vX.Y.Z`, `mount/vX.Y.Z`, or `symlink/vX.Y.Z`. github.com/moby/sys b0f1fd7235275d01bd35cc4421e884e522395f45 # mountinfo/v0.4.1 github.com/creack/pty 2a38352e8b4d7ab6c336eef107e42a55e72e7fbc # v1.1.11 github.com/sirupsen/logrus bdc0db8ead3853c56b7cd1ac2ba4e11b47d7da6b # v1.8.1 github.com/tchap/go-patricia a7f0089c6f496e8e70402f61733606daa326cac5 # v2.3.0 golang.org/x/net e18ecbb051101a46fc263334b127c89bc7bff7ea golang.org/x/sys d19ff857e887eacb631721f188c7d365c2331456 github.com/docker/go-units 519db1ee28dcc9fd2474ae59fca29a810482bfb1 # v0.4.0 github.com/docker/go-connections 7395e3f8aa162843a74ed6d48e79627d9792ac55 # v0.4.0 golang.org/x/text 23ae387dee1f90d29a23c0e87ee0b46038fbed0e # v0.3.3 gotest.tools/v3 568bc57cc5c19a2ef85e5749870b49a4cc2ab54d # v3.0.3 github.com/google/go-cmp 3af367b6b30c263d47e8895973edcca9a49cf029 # v0.2.0 github.com/syndtr/gocapability 42c35b4376354fd554efc7ad35e0b7f94e3a0ffb github.com/RackSec/srslog a4725f04ec91af1a91b380da679d6e0c2f061e59 github.com/imdario/mergo 1afb36080aec31e0d1528973ebe6721b191b0369 # v0.3.8 golang.org/x/sync 036812b2e83c0ddf193dd5a34e034151da389d09 # buildkit github.com/moby/buildkit 9f254e18360a24c2ae47b26f772c3c89533bcbb7 # master / v0.9.0-dev github.com/tonistiigi/fsutil d72af97c0eaf93c1d20360e3cb9c63c223675b83 github.com/tonistiigi/units 6950e57a87eaf136bbe44ef2ec8e75b9e3569de2 github.com/grpc-ecosystem/grpc-opentracing 8e809c8a86450a29b90dcc9efbf062d0fe6d9746 github.com/opentracing/opentracing-go d34af3eaa63c4d08ab54863a4bdd0daa45212e12 # v1.2.0 github.com/google/shlex e7afc7fbc51079733e9468cdfd1efcd7d196cd1d github.com/opentracing-contrib/go-stdlib 8a6ff1ad1691a29e4f7b5d46604f97634997c8c4 # v1.0.0 github.com/mitchellh/hashstructure a38c50148365edc8df43c1580c48fb2b3a1e9cd7 # v1.0.0 github.com/gofrs/flock 6caa7350c26b838538005fae7dbee4e69d9398db # v0.7.3 github.com/grpc-ecosystem/go-grpc-middleware 3c51f7f332123e8be5a157c0802a228ac85bf9db # v1.2.0 # libnetwork github.com/docker/go-events e31b211e4f1cd09aa76fe4ac244571fab96ae47f github.com/armon/go-radix e39d623f12e8e41c7b5529e9a9dd67a1e2261f80 github.com/armon/go-metrics eb0af217e5e9747e41dd5303755356b62d28e3ec github.com/hashicorp/go-msgpack 71c2886f5a673a35f909803f38ece5810165097b github.com/hashicorp/memberlist 619135cdd9e5dda8c12f8ceef39bdade4f5899b6 # v0.2.4 github.com/sean-/seed e2103e2c35297fb7e17febb81e49b312087a2372 github.com/hashicorp/errwrap 8a6fb523712970c966eefc6b39ed2c5e74880354 # v1.0.0 github.com/hashicorp/go-sockaddr c7188e74f6acae5a989bdc959aa779f8b9f42faf # 
v1.0.2 github.com/hashicorp/go-multierror 886a7fbe3eb1c874d46f623bfa70af45f425b3d1 # v1.0.0 github.com/hashicorp/serf 598c54895cc5a7b1a24a398d635e8c0ea0959870 github.com/docker/libkv 458977154600b9f23984d9f4b82e79570b5ae12b github.com/vishvananda/netns db3c7e526aae966c4ccfa6c8189b693d6ac5d202 github.com/vishvananda/netlink f049be6f391489d3f374498fe0c8df8449258372 # v1.1.0 github.com/moby/ipvs 4566ccea0e08d68e9614c3e7a64a23b850c4bb35 # v1.0.1 github.com/google/btree 479b5e81b0a93ec038d201b0b33d17db599531d3 # v1.0.1 github.com/samuel/go-zookeeper d0e0d8e11f318e000a8cc434616d69e329edc374 github.com/deckarep/golang-set ef32fa3046d9f249d399f98ebaf9be944430fd1d github.com/coreos/etcd 2c834459e1aab78a5d5219c7dfe42335fc4b617a # v3.3.25 github.com/coreos/go-semver 8ab6407b697782a06568d4b7f1db25550ec2e4c6 # v0.2.0 github.com/hashicorp/consul 9a9cc9341bb487651a0399e3fc5e1e8a42e62dd9 # v0.5.2 github.com/miekg/dns 6c0c4e6581f8e173cc562c8b3363ab984e4ae071 # v1.1.27 github.com/ishidawataru/sctp f2269e66cdee387bd321445d5d300893449805be go.etcd.io/bbolt 232d8fc87f50244f9c808f4745759e08a304c029 # v1.3.5 github.com/json-iterator/go a1ca0830781e007c66b225121d2cdb3a649421f6 # v1.1.10 github.com/modern-go/concurrent bacd9c7ef1dd9b15be4a9909b8ac7a4e313eec94 # 1.0.3 github.com/modern-go/reflect2 94122c33edd36123c84d5368cfb2b69df93a0ec8 # v1.0.1 # get graph and distribution packages github.com/docker/distribution 0d3efadf0154c2b8a4e7b6621fff9809655cc580 github.com/vbatts/tar-split 620714a4c508c880ac1bdda9c8370a2b19af1a55 # v0.11.1 github.com/opencontainers/go-digest ea51bea511f75cfa3ef6098cc253c5c3609b037a # v1.0.0 # get go-zfs packages github.com/mistifyio/go-zfs f784269be439d704d3dfa1906f45dd848fed2beb google.golang.org/grpc f495f5b15ae7ccda3b38c53a1bfcde4c1a58a2bc # v1.27.1 # The version of runc should match the version that is used by the containerd # version that is used. If you need to update runc, open a pull request in # the containerd project first, and update both after that is merged. # This commit does not need to match RUNC_COMMIT as it is used for helper # packages but should be newer or equal. 
github.com/opencontainers/runc 4144b63817ebcc5b358fc2c8ef95f7cddd709aa7 # v1.0.1 github.com/opencontainers/runtime-spec 1c3f411f041711bbeecf35ff7e93461ea6789220 # v1.0.3-0.20210326190908-1c3f411f0417 github.com/opencontainers/image-spec d60099175f88c47cd379c4738d158884749ed235 # v1.0.1 github.com/cyphar/filepath-securejoin a261ee33d7a517f054effbf451841abaafe3e0fd # v0.2.2 # go-systemd v17 is required by github.com/coreos/pkg/capnslog/journald_formatter.go github.com/coreos/go-systemd 39ca1b05acc7ad1220e09f133283b8859a8b71ab # v17 # systemd integration (journald, daemon/listeners, containerd/cgroups) github.com/coreos/go-systemd/v22 777e73a89cef78631ccaa97f53a9bae67e166186 # v22.3.2 github.com/godbus/dbus/v5 c88335c0b1d28a30e7fc76d526a06154b85e5d97 # v5.0.4 # gelf logging driver deps github.com/Graylog2/go-gelf 1550ee647df0510058c9d67a45c56f18911d80b8 # v2 branch # fluent-logger-golang deps github.com/fluent/fluent-logger-golang b9b7fb02ccfee8ba4e69aa87386820c2bf24fd11 # v1.6.1 github.com/philhofer/fwd bb6d471dc95d4fe11e432687f8b70ff496cf3136 # v1.0.0 github.com/tinylib/msgp af6442a0fcf6e2a1b824f70dd0c734f01e817751 # v1.1.0 # fsnotify github.com/fsnotify/fsnotify 45d7d09e39ef4ac08d493309fa031790c15bfe8a # v1.4.9 # awslogs deps github.com/aws/aws-sdk-go 2590bc875c54c9fda225d8e4e56a9d28d90c6a47 # v1.28.11 github.com/jmespath/go-jmespath 2d053f87d1d7f9f48196ae04cf3daea4273d207d # v0.3.0 # logentries github.com/bsphere/le_go 7a984a84b5492ae539b79b62fb4a10afc63c7bcf # gcplogs deps golang.org/x/oauth2 bf48bf16ab8d622ce64ec6ce98d2c98f916b6303 google.golang.org/api dec2ee309f5b09fc59bc40676447c15736284d78 # v0.8.0 github.com/golang/groupcache 869f871628b6baa9cfbc11732cdf6546b17c1298 go.opencensus.io d835ff86be02193d324330acdb7d65546b05f814 # v0.22.3 cloud.google.com/go ceeb313ad77b789a7fa5287b36a1d127b69b7093 # v0.44.3 github.com/googleapis/gax-go bd5b16380fd03dc758d11cef74ba2e3bc8b0e8c2 # v2.0.5 google.golang.org/genproto 3f1135a288c9a07e340ae8ba4cc6c7065a3160e8 # containerd github.com/containerd/containerd 69107e47a62e1d690afa2b9b1d43f8ece3ff4483 # v1.5.4 github.com/containerd/fifo 650e8a8a179d040123db61f016cb133143e7a581 # v1.0.0 github.com/containerd/continuity bce1c3f9669b6f3e7f6656ee715b0b4d75fa64a6 # v0.1.0 github.com/containerd/cgroups b9de8a2212026c07cec67baf3323f1fc0121e048 # v1.0.1 github.com/containerd/console 2f1e3d2b6afd18e8b2077816c711205a0b4d8769 # v1.0.2 github.com/containerd/go-runc 16b287bc67d069a60fa48db15f330b790b74365b # v1.0.0 github.com/containerd/typeurl 5e43fb8b75ed2f2305fc04e6918c8d10636771bc # v1.0.2 github.com/containerd/ttrpc bfba540dc45464586c106b1f31c8547933c1eb41 # v1.0.2 github.com/gogo/googleapis 01e0f9cca9b92166042241267ee2a5cdf5cff46c # v1.3.2 github.com/cilium/ebpf ca492085341e0e917f48ec30704d5054c5d42ca8 # v0.6.2 github.com/klauspost/compress a3b7545c88eea469c2246bee0e6c130525d56190 # v1.11.13 github.com/pelletier/go-toml 65ca8064882c8c308e5c804c5d5443d409e0738c # v1.8.1 # cluster github.com/docker/swarmkit 2dcf70aafdc9ea55af3aaaeca440638cde0ecda6 # master github.com/gogo/protobuf b03c65ea87cdc3521ede29f62fe3ce239267c1bc # v1.3.2 github.com/golang/protobuf 84668698ea25b64748563aa20726db66a6b8d299 # v1.3.5 github.com/cloudflare/cfssl 5d63dbd981b5c408effbb58c442d54761ff94fbd # 1.3.2 github.com/fernet/fernet-go 9eac43b88a5efb8651d24de9b68e87567e029736 github.com/google/certificate-transparency-go 37a384cd035e722ea46e55029093e26687138edf # v1.0.20 golang.org/x/crypto 0c34fe9e7dc2486962ef9867e3edb3503537209f golang.org/x/time 
3af7569d3a1e776fc2a3c1cec133b43105ea9c2e github.com/hashicorp/go-memdb cb9a474f84cc5e41b273b20c6927680b2a8776ad github.com/hashicorp/go-immutable-radix 826af9ccf0feeee615d546d69b11f8e98da8c8f1 git://github.com/tonistiigi/go-immutable-radix.git github.com/hashicorp/golang-lru 7f827b33c0f158ec5dfbba01bb0b14a4541fd81d # v0.5.3 github.com/coreos/pkg 97fdf19511ea361ae1c100dd393cc47f8dcfa1e1 # v4 code.cloudfoundry.org/clock 02e53af36e6c978af692887ed449b74026d76fec # v1.0.0 # prometheus github.com/prometheus/client_golang 6edbbd9e560190e318cdc5b4d3e630b442858380 # v1.6.0 github.com/beorn7/perks 37c8de3658fcb183f997c4e13e8337516ab753e6 # v1.0.1 github.com/prometheus/client_model 7bc5445566f0fe75b15de23e6b93886e982d7bf9 # v0.2.0 github.com/prometheus/common d978bcb1309602d68bb4ba69cf3f8ed900e07308 # v0.9.1 github.com/prometheus/procfs 46159f73e74d1cb8dc223deef9b2d049286f46b1 # v0.0.11 github.com/matttproud/golang_protobuf_extensions c12348ce28de40eed0136aa2b644d0ee0650e56c # v1.0.1 github.com/pkg/errors 614d223910a179a466c1767a985424175c39b465 # v0.9.1 github.com/grpc-ecosystem/go-grpc-prometheus c225b8c3b01faf2899099b768856a9e916e5087b # v1.2.0 github.com/cespare/xxhash/v2 d7df74196a9e781ede915320c11c378c1b2f3a1f # v2.1.1 # cli github.com/spf13/cobra 8380ddd3132bdf8fd77731725b550c181dda0aa8 # v1.1.3 github.com/spf13/pflag 2e9d26c8c37aae03e3f9d4e90b7116f5accb7cab # v1.0.5 github.com/inconshreveable/mousetrap 76626ae9c91c4f2a10f34cad8ce83ea42c93bb75 # v1.0.0 github.com/morikuni/aec 39771216ff4c63d11f5e604076f9c45e8be1067b # v1.0.0 # metrics github.com/docker/go-metrics b619b3592b65de4f087d9f16863a7e6ff905973c # v0.0.1 github.com/opencontainers/selinux 76bc82e11d854d3e40c08889d13c98abcea72ea2 # v1.8.2 github.com/bits-and-blooms/bitset 59de210119f50cedaa42d175dc88b6335fcf63f6 # v1.2.0 # archive/tar # rm -rf vendor/archive # mkdir -p ./vendor/archive # git clone -b go$GOLANG_VERSION --depth=1 git://github.com/golang/go.git ./go # git --git-dir ./go/.git --work-tree ./go am ../patches/0001-archive-tar-do-not-populate-user-group-names.patch # cp -a go/src/archive/tar ./vendor/archive/tar # rm -rf ./go # vndr -whitelist=^archive/tar # DO NOT EDIT BELOW THIS LINE -------- reserved for downstream projects --------
github.com/Azure/go-ansiterm d185dfc1b5a126116ea5a19e148e29d16b4574c9 github.com/Microsoft/hcsshim 3ad51c76263bad09548a40e1996960814a12a870 # v0.8.20 github.com/Microsoft/go-winio 5c2e05d71961716a6c392a06ada435aaf5d5302c # v0.4.19 github.com/docker/libtrust 9cbd2a1374f46905c68a4eb3694a130610adc62a github.com/golang/gddo 72a348e765d293ed6d1ded7b699591f14d6cd921 github.com/google/uuid 0cd6bf5da1e1c83f8b45653022c74f71af0538a4 # v1.1.1 github.com/gorilla/mux 98cb6bf42e086f6af920b965c38cacc07402d51b # v1.8.0 github.com/moby/locker 281af2d563954745bea9d1487c965f24d30742fe # v1.0.1 github.com/moby/term 3f7ff695adc6a35abc925370dd0a4dafb48ec64d # Note that this dependency uses submodules, providing the github.com/moby/sys/mount, # github.com/moby/sys/mountinfo, github.com/moby/sys/signal, and github.com/moby/sys/symlink # modules. Our vendoring tool (vndr) currently does not support submodules / vendoring sub-paths, # so we vendor the top-level moby/sys repository (which contains both) and pick the most recent tag, # which could be either `mountinfo/vX.Y.Z`, `mount/vX.Y.Z`, `signal/vX.Y.Z`, or `symlink/vX.Y.Z`. github.com/moby/sys 9b0136d132d8e0d1c116a38d7ec9af70d3a59536 # signal/v0.5.0 github.com/creack/pty 2a38352e8b4d7ab6c336eef107e42a55e72e7fbc # v1.1.11 github.com/sirupsen/logrus bdc0db8ead3853c56b7cd1ac2ba4e11b47d7da6b # v1.8.1 github.com/tchap/go-patricia a7f0089c6f496e8e70402f61733606daa326cac5 # v2.3.0 golang.org/x/net e18ecbb051101a46fc263334b127c89bc7bff7ea golang.org/x/sys d19ff857e887eacb631721f188c7d365c2331456 github.com/docker/go-units 519db1ee28dcc9fd2474ae59fca29a810482bfb1 # v0.4.0 github.com/docker/go-connections 7395e3f8aa162843a74ed6d48e79627d9792ac55 # v0.4.0 golang.org/x/text 23ae387dee1f90d29a23c0e87ee0b46038fbed0e # v0.3.3 gotest.tools/v3 568bc57cc5c19a2ef85e5749870b49a4cc2ab54d # v3.0.3 github.com/google/go-cmp 3af367b6b30c263d47e8895973edcca9a49cf029 # v0.2.0 github.com/syndtr/gocapability 42c35b4376354fd554efc7ad35e0b7f94e3a0ffb github.com/RackSec/srslog a4725f04ec91af1a91b380da679d6e0c2f061e59 github.com/imdario/mergo 1afb36080aec31e0d1528973ebe6721b191b0369 # v0.3.8 golang.org/x/sync 036812b2e83c0ddf193dd5a34e034151da389d09 # buildkit github.com/moby/buildkit 9f254e18360a24c2ae47b26f772c3c89533bcbb7 # master / v0.9.0-dev github.com/tonistiigi/fsutil d72af97c0eaf93c1d20360e3cb9c63c223675b83 github.com/tonistiigi/units 6950e57a87eaf136bbe44ef2ec8e75b9e3569de2 github.com/grpc-ecosystem/grpc-opentracing 8e809c8a86450a29b90dcc9efbf062d0fe6d9746 github.com/opentracing/opentracing-go d34af3eaa63c4d08ab54863a4bdd0daa45212e12 # v1.2.0 github.com/google/shlex e7afc7fbc51079733e9468cdfd1efcd7d196cd1d github.com/opentracing-contrib/go-stdlib 8a6ff1ad1691a29e4f7b5d46604f97634997c8c4 # v1.0.0 github.com/mitchellh/hashstructure a38c50148365edc8df43c1580c48fb2b3a1e9cd7 # v1.0.0 github.com/gofrs/flock 6caa7350c26b838538005fae7dbee4e69d9398db # v0.7.3 github.com/grpc-ecosystem/go-grpc-middleware 3c51f7f332123e8be5a157c0802a228ac85bf9db # v1.2.0 # libnetwork github.com/docker/go-events e31b211e4f1cd09aa76fe4ac244571fab96ae47f github.com/armon/go-radix e39d623f12e8e41c7b5529e9a9dd67a1e2261f80 github.com/armon/go-metrics eb0af217e5e9747e41dd5303755356b62d28e3ec github.com/hashicorp/go-msgpack 71c2886f5a673a35f909803f38ece5810165097b github.com/hashicorp/memberlist 619135cdd9e5dda8c12f8ceef39bdade4f5899b6 # v0.2.4 github.com/sean-/seed e2103e2c35297fb7e17febb81e49b312087a2372 github.com/hashicorp/errwrap 8a6fb523712970c966eefc6b39ed2c5e74880354 # v1.0.0 github.com/hashicorp/go-sockaddr 
c7188e74f6acae5a989bdc959aa779f8b9f42faf # v1.0.2 github.com/hashicorp/go-multierror 886a7fbe3eb1c874d46f623bfa70af45f425b3d1 # v1.0.0 github.com/hashicorp/serf 598c54895cc5a7b1a24a398d635e8c0ea0959870 github.com/docker/libkv 458977154600b9f23984d9f4b82e79570b5ae12b github.com/vishvananda/netns db3c7e526aae966c4ccfa6c8189b693d6ac5d202 github.com/vishvananda/netlink f049be6f391489d3f374498fe0c8df8449258372 # v1.1.0 github.com/moby/ipvs 4566ccea0e08d68e9614c3e7a64a23b850c4bb35 # v1.0.1 github.com/google/btree 479b5e81b0a93ec038d201b0b33d17db599531d3 # v1.0.1 github.com/samuel/go-zookeeper d0e0d8e11f318e000a8cc434616d69e329edc374 github.com/deckarep/golang-set ef32fa3046d9f249d399f98ebaf9be944430fd1d github.com/coreos/etcd 2c834459e1aab78a5d5219c7dfe42335fc4b617a # v3.3.25 github.com/coreos/go-semver 8ab6407b697782a06568d4b7f1db25550ec2e4c6 # v0.2.0 github.com/hashicorp/consul 9a9cc9341bb487651a0399e3fc5e1e8a42e62dd9 # v0.5.2 github.com/miekg/dns 6c0c4e6581f8e173cc562c8b3363ab984e4ae071 # v1.1.27 github.com/ishidawataru/sctp f2269e66cdee387bd321445d5d300893449805be go.etcd.io/bbolt 232d8fc87f50244f9c808f4745759e08a304c029 # v1.3.5 github.com/json-iterator/go a1ca0830781e007c66b225121d2cdb3a649421f6 # v1.1.10 github.com/modern-go/concurrent bacd9c7ef1dd9b15be4a9909b8ac7a4e313eec94 # 1.0.3 github.com/modern-go/reflect2 94122c33edd36123c84d5368cfb2b69df93a0ec8 # v1.0.1 # get graph and distribution packages github.com/docker/distribution 0d3efadf0154c2b8a4e7b6621fff9809655cc580 github.com/vbatts/tar-split 620714a4c508c880ac1bdda9c8370a2b19af1a55 # v0.11.1 github.com/opencontainers/go-digest ea51bea511f75cfa3ef6098cc253c5c3609b037a # v1.0.0 # get go-zfs packages github.com/mistifyio/go-zfs f784269be439d704d3dfa1906f45dd848fed2beb google.golang.org/grpc f495f5b15ae7ccda3b38c53a1bfcde4c1a58a2bc # v1.27.1 # The version of runc should match the version that is used by the containerd # version that is used. If you need to update runc, open a pull request in # the containerd project first, and update both after that is merged. # This commit does not need to match RUNC_COMMIT as it is used for helper # packages but should be newer or equal. 
github.com/opencontainers/runc 4144b63817ebcc5b358fc2c8ef95f7cddd709aa7 # v1.0.1 github.com/opencontainers/runtime-spec 1c3f411f041711bbeecf35ff7e93461ea6789220 # v1.0.3-0.20210326190908-1c3f411f0417 github.com/opencontainers/image-spec d60099175f88c47cd379c4738d158884749ed235 # v1.0.1 github.com/cyphar/filepath-securejoin a261ee33d7a517f054effbf451841abaafe3e0fd # v0.2.2 # go-systemd v17 is required by github.com/coreos/pkg/capnslog/journald_formatter.go github.com/coreos/go-systemd 39ca1b05acc7ad1220e09f133283b8859a8b71ab # v17 # systemd integration (journald, daemon/listeners, containerd/cgroups) github.com/coreos/go-systemd/v22 777e73a89cef78631ccaa97f53a9bae67e166186 # v22.3.2 github.com/godbus/dbus/v5 c88335c0b1d28a30e7fc76d526a06154b85e5d97 # v5.0.4 # gelf logging driver deps github.com/Graylog2/go-gelf 1550ee647df0510058c9d67a45c56f18911d80b8 # v2 branch # fluent-logger-golang deps github.com/fluent/fluent-logger-golang b9b7fb02ccfee8ba4e69aa87386820c2bf24fd11 # v1.6.1 github.com/philhofer/fwd bb6d471dc95d4fe11e432687f8b70ff496cf3136 # v1.0.0 github.com/tinylib/msgp af6442a0fcf6e2a1b824f70dd0c734f01e817751 # v1.1.0 # fsnotify github.com/fsnotify/fsnotify 45d7d09e39ef4ac08d493309fa031790c15bfe8a # v1.4.9 # awslogs deps github.com/aws/aws-sdk-go 2590bc875c54c9fda225d8e4e56a9d28d90c6a47 # v1.28.11 github.com/jmespath/go-jmespath 2d053f87d1d7f9f48196ae04cf3daea4273d207d # v0.3.0 # logentries github.com/bsphere/le_go 7a984a84b5492ae539b79b62fb4a10afc63c7bcf # gcplogs deps golang.org/x/oauth2 bf48bf16ab8d622ce64ec6ce98d2c98f916b6303 google.golang.org/api dec2ee309f5b09fc59bc40676447c15736284d78 # v0.8.0 github.com/golang/groupcache 869f871628b6baa9cfbc11732cdf6546b17c1298 go.opencensus.io d835ff86be02193d324330acdb7d65546b05f814 # v0.22.3 cloud.google.com/go ceeb313ad77b789a7fa5287b36a1d127b69b7093 # v0.44.3 github.com/googleapis/gax-go bd5b16380fd03dc758d11cef74ba2e3bc8b0e8c2 # v2.0.5 google.golang.org/genproto 3f1135a288c9a07e340ae8ba4cc6c7065a3160e8 # containerd github.com/containerd/containerd 69107e47a62e1d690afa2b9b1d43f8ece3ff4483 # v1.5.4 github.com/containerd/fifo 650e8a8a179d040123db61f016cb133143e7a581 # v1.0.0 github.com/containerd/continuity bce1c3f9669b6f3e7f6656ee715b0b4d75fa64a6 # v0.1.0 github.com/containerd/cgroups b9de8a2212026c07cec67baf3323f1fc0121e048 # v1.0.1 github.com/containerd/console 2f1e3d2b6afd18e8b2077816c711205a0b4d8769 # v1.0.2 github.com/containerd/go-runc 16b287bc67d069a60fa48db15f330b790b74365b # v1.0.0 github.com/containerd/typeurl 5e43fb8b75ed2f2305fc04e6918c8d10636771bc # v1.0.2 github.com/containerd/ttrpc bfba540dc45464586c106b1f31c8547933c1eb41 # v1.0.2 github.com/gogo/googleapis 01e0f9cca9b92166042241267ee2a5cdf5cff46c # v1.3.2 github.com/cilium/ebpf ca492085341e0e917f48ec30704d5054c5d42ca8 # v0.6.2 github.com/klauspost/compress a3b7545c88eea469c2246bee0e6c130525d56190 # v1.11.13 github.com/pelletier/go-toml 65ca8064882c8c308e5c804c5d5443d409e0738c # v1.8.1 # cluster github.com/docker/swarmkit 2dcf70aafdc9ea55af3aaaeca440638cde0ecda6 # master github.com/gogo/protobuf b03c65ea87cdc3521ede29f62fe3ce239267c1bc # v1.3.2 github.com/golang/protobuf 84668698ea25b64748563aa20726db66a6b8d299 # v1.3.5 github.com/cloudflare/cfssl 5d63dbd981b5c408effbb58c442d54761ff94fbd # 1.3.2 github.com/fernet/fernet-go 9eac43b88a5efb8651d24de9b68e87567e029736 github.com/google/certificate-transparency-go 37a384cd035e722ea46e55029093e26687138edf # v1.0.20 golang.org/x/crypto 0c34fe9e7dc2486962ef9867e3edb3503537209f golang.org/x/time 
3af7569d3a1e776fc2a3c1cec133b43105ea9c2e github.com/hashicorp/go-memdb cb9a474f84cc5e41b273b20c6927680b2a8776ad github.com/hashicorp/go-immutable-radix 826af9ccf0feeee615d546d69b11f8e98da8c8f1 git://github.com/tonistiigi/go-immutable-radix.git github.com/hashicorp/golang-lru 7f827b33c0f158ec5dfbba01bb0b14a4541fd81d # v0.5.3 github.com/coreos/pkg 97fdf19511ea361ae1c100dd393cc47f8dcfa1e1 # v4 code.cloudfoundry.org/clock 02e53af36e6c978af692887ed449b74026d76fec # v1.0.0 # prometheus github.com/prometheus/client_golang 6edbbd9e560190e318cdc5b4d3e630b442858380 # v1.6.0 github.com/beorn7/perks 37c8de3658fcb183f997c4e13e8337516ab753e6 # v1.0.1 github.com/prometheus/client_model 7bc5445566f0fe75b15de23e6b93886e982d7bf9 # v0.2.0 github.com/prometheus/common d978bcb1309602d68bb4ba69cf3f8ed900e07308 # v0.9.1 github.com/prometheus/procfs 46159f73e74d1cb8dc223deef9b2d049286f46b1 # v0.0.11 github.com/matttproud/golang_protobuf_extensions c12348ce28de40eed0136aa2b644d0ee0650e56c # v1.0.1 github.com/pkg/errors 614d223910a179a466c1767a985424175c39b465 # v0.9.1 github.com/grpc-ecosystem/go-grpc-prometheus c225b8c3b01faf2899099b768856a9e916e5087b # v1.2.0 github.com/cespare/xxhash/v2 d7df74196a9e781ede915320c11c378c1b2f3a1f # v2.1.1 # cli github.com/spf13/cobra 8380ddd3132bdf8fd77731725b550c181dda0aa8 # v1.1.3 github.com/spf13/pflag 2e9d26c8c37aae03e3f9d4e90b7116f5accb7cab # v1.0.5 github.com/inconshreveable/mousetrap 76626ae9c91c4f2a10f34cad8ce83ea42c93bb75 # v1.0.0 github.com/morikuni/aec 39771216ff4c63d11f5e604076f9c45e8be1067b # v1.0.0 # metrics github.com/docker/go-metrics b619b3592b65de4f087d9f16863a7e6ff905973c # v0.0.1 github.com/opencontainers/selinux 76bc82e11d854d3e40c08889d13c98abcea72ea2 # v1.8.2 github.com/bits-and-blooms/bitset 59de210119f50cedaa42d175dc88b6335fcf63f6 # v1.2.0 # archive/tar # rm -rf vendor/archive # mkdir -p ./vendor/archive # git clone -b go$GOLANG_VERSION --depth=1 git://github.com/golang/go.git ./go # git --git-dir ./go/.git --work-tree ./go am ../patches/0001-archive-tar-do-not-populate-user-group-names.patch # cp -a go/src/archive/tar ./vendor/archive/tar # rm -rf ./go # vndr -whitelist=^archive/tar # DO NOT EDIT BELOW THIS LINE -------- reserved for downstream projects --------
thaJeztah
9674540ccff358c3cd84cc2f33c3503e0dab7fb7
12f1b3ce43fe4aea5a41750bcc20f2a7dd67dbfc
0.5.0 is good.
cpuguy83
4,565
moby/moby
42,616
replace pkg/signal with moby/sys/signal v0.5.0
This code was moved to the moby/sys repository relates to ~https://github.com/moby/sys/pull/69~ https://github.com/moby/sys/pull/70 relates to https://github.com/moby/moby/pull/42641 relates to https://github.com/containerd/containerd/issues/5402 **- Description for the changelog** <!-- Write a short (one line) summary that describes the changes in this pull request for inclusion in the changelog: --> **- A picture of a cute animal (not mandatory but encouraged)**
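As a rough illustration (not taken from the PR itself) of what this migration means for callers, moving from the in-tree pkg/signal package to moby/sys/signal is essentially an import-path change; the sketch below assumes the ParseSignal helper kept its name and signature across the move.

```go
package main

import (
	"fmt"

	// previously: "github.com/docker/docker/pkg/signal"
	"github.com/moby/sys/signal"
)

func main() {
	// ParseSignal resolves a name such as "SIGTERM" (or a number such as "15")
	// to a syscall.Signal, matching the behaviour of the old in-tree package.
	sig, err := signal.ParseSignal("SIGTERM")
	if err != nil {
		fmt.Println("invalid signal:", err)
		return
	}
	fmt.Printf("SIGTERM resolved to signal number %d\n", int(sig))
}
```

Callers that only referenced the package by import path should need no further changes beyond re-vendoring, which is what the vendor.conf change in this record captures.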
null
2021-07-09 22:14:09+00:00
2021-07-26 17:47:28+00:00
vendor.conf
github.com/Azure/go-ansiterm d185dfc1b5a126116ea5a19e148e29d16b4574c9 github.com/Microsoft/hcsshim 3ad51c76263bad09548a40e1996960814a12a870 # v0.8.20 github.com/Microsoft/go-winio 5c2e05d71961716a6c392a06ada435aaf5d5302c # v0.4.19 github.com/docker/libtrust 9cbd2a1374f46905c68a4eb3694a130610adc62a github.com/golang/gddo 72a348e765d293ed6d1ded7b699591f14d6cd921 github.com/google/uuid 0cd6bf5da1e1c83f8b45653022c74f71af0538a4 # v1.1.1 github.com/gorilla/mux 98cb6bf42e086f6af920b965c38cacc07402d51b # v1.8.0 github.com/moby/locker 281af2d563954745bea9d1487c965f24d30742fe # v1.0.1 github.com/moby/term 3f7ff695adc6a35abc925370dd0a4dafb48ec64d # Note that this dependency uses submodules, providing the github.com/moby/sys/mount, # github.com/moby/sys/mountinfo, and github.com/moby/sys/symlink modules. Our vendoring # tool (vndr) currently does not support submodules / vendoring sub-paths, so we vendor # the top-level moby/sys repository (which contains both) and pick the most recent tag, # which could be either `mountinfo/vX.Y.Z`, `mount/vX.Y.Z`, or `symlink/vX.Y.Z`. github.com/moby/sys b0f1fd7235275d01bd35cc4421e884e522395f45 # mountinfo/v0.4.1 github.com/creack/pty 2a38352e8b4d7ab6c336eef107e42a55e72e7fbc # v1.1.11 github.com/sirupsen/logrus bdc0db8ead3853c56b7cd1ac2ba4e11b47d7da6b # v1.8.1 github.com/tchap/go-patricia a7f0089c6f496e8e70402f61733606daa326cac5 # v2.3.0 golang.org/x/net e18ecbb051101a46fc263334b127c89bc7bff7ea golang.org/x/sys d19ff857e887eacb631721f188c7d365c2331456 github.com/docker/go-units 519db1ee28dcc9fd2474ae59fca29a810482bfb1 # v0.4.0 github.com/docker/go-connections 7395e3f8aa162843a74ed6d48e79627d9792ac55 # v0.4.0 golang.org/x/text 23ae387dee1f90d29a23c0e87ee0b46038fbed0e # v0.3.3 gotest.tools/v3 568bc57cc5c19a2ef85e5749870b49a4cc2ab54d # v3.0.3 github.com/google/go-cmp 3af367b6b30c263d47e8895973edcca9a49cf029 # v0.2.0 github.com/syndtr/gocapability 42c35b4376354fd554efc7ad35e0b7f94e3a0ffb github.com/RackSec/srslog a4725f04ec91af1a91b380da679d6e0c2f061e59 github.com/imdario/mergo 1afb36080aec31e0d1528973ebe6721b191b0369 # v0.3.8 golang.org/x/sync 036812b2e83c0ddf193dd5a34e034151da389d09 # buildkit github.com/moby/buildkit 9f254e18360a24c2ae47b26f772c3c89533bcbb7 # master / v0.9.0-dev github.com/tonistiigi/fsutil d72af97c0eaf93c1d20360e3cb9c63c223675b83 github.com/tonistiigi/units 6950e57a87eaf136bbe44ef2ec8e75b9e3569de2 github.com/grpc-ecosystem/grpc-opentracing 8e809c8a86450a29b90dcc9efbf062d0fe6d9746 github.com/opentracing/opentracing-go d34af3eaa63c4d08ab54863a4bdd0daa45212e12 # v1.2.0 github.com/google/shlex e7afc7fbc51079733e9468cdfd1efcd7d196cd1d github.com/opentracing-contrib/go-stdlib 8a6ff1ad1691a29e4f7b5d46604f97634997c8c4 # v1.0.0 github.com/mitchellh/hashstructure a38c50148365edc8df43c1580c48fb2b3a1e9cd7 # v1.0.0 github.com/gofrs/flock 6caa7350c26b838538005fae7dbee4e69d9398db # v0.7.3 github.com/grpc-ecosystem/go-grpc-middleware 3c51f7f332123e8be5a157c0802a228ac85bf9db # v1.2.0 # libnetwork github.com/docker/go-events e31b211e4f1cd09aa76fe4ac244571fab96ae47f github.com/armon/go-radix e39d623f12e8e41c7b5529e9a9dd67a1e2261f80 github.com/armon/go-metrics eb0af217e5e9747e41dd5303755356b62d28e3ec github.com/hashicorp/go-msgpack 71c2886f5a673a35f909803f38ece5810165097b github.com/hashicorp/memberlist 619135cdd9e5dda8c12f8ceef39bdade4f5899b6 # v0.2.4 github.com/sean-/seed e2103e2c35297fb7e17febb81e49b312087a2372 github.com/hashicorp/errwrap 8a6fb523712970c966eefc6b39ed2c5e74880354 # v1.0.0 github.com/hashicorp/go-sockaddr c7188e74f6acae5a989bdc959aa779f8b9f42faf # 
v1.0.2 github.com/hashicorp/go-multierror 886a7fbe3eb1c874d46f623bfa70af45f425b3d1 # v1.0.0 github.com/hashicorp/serf 598c54895cc5a7b1a24a398d635e8c0ea0959870 github.com/docker/libkv 458977154600b9f23984d9f4b82e79570b5ae12b github.com/vishvananda/netns db3c7e526aae966c4ccfa6c8189b693d6ac5d202 github.com/vishvananda/netlink f049be6f391489d3f374498fe0c8df8449258372 # v1.1.0 github.com/moby/ipvs 4566ccea0e08d68e9614c3e7a64a23b850c4bb35 # v1.0.1 github.com/google/btree 479b5e81b0a93ec038d201b0b33d17db599531d3 # v1.0.1 github.com/samuel/go-zookeeper d0e0d8e11f318e000a8cc434616d69e329edc374 github.com/deckarep/golang-set ef32fa3046d9f249d399f98ebaf9be944430fd1d github.com/coreos/etcd 2c834459e1aab78a5d5219c7dfe42335fc4b617a # v3.3.25 github.com/coreos/go-semver 8ab6407b697782a06568d4b7f1db25550ec2e4c6 # v0.2.0 github.com/hashicorp/consul 9a9cc9341bb487651a0399e3fc5e1e8a42e62dd9 # v0.5.2 github.com/miekg/dns 6c0c4e6581f8e173cc562c8b3363ab984e4ae071 # v1.1.27 github.com/ishidawataru/sctp f2269e66cdee387bd321445d5d300893449805be go.etcd.io/bbolt 232d8fc87f50244f9c808f4745759e08a304c029 # v1.3.5 github.com/json-iterator/go a1ca0830781e007c66b225121d2cdb3a649421f6 # v1.1.10 github.com/modern-go/concurrent bacd9c7ef1dd9b15be4a9909b8ac7a4e313eec94 # 1.0.3 github.com/modern-go/reflect2 94122c33edd36123c84d5368cfb2b69df93a0ec8 # v1.0.1 # get graph and distribution packages github.com/docker/distribution 0d3efadf0154c2b8a4e7b6621fff9809655cc580 github.com/vbatts/tar-split 620714a4c508c880ac1bdda9c8370a2b19af1a55 # v0.11.1 github.com/opencontainers/go-digest ea51bea511f75cfa3ef6098cc253c5c3609b037a # v1.0.0 # get go-zfs packages github.com/mistifyio/go-zfs f784269be439d704d3dfa1906f45dd848fed2beb google.golang.org/grpc f495f5b15ae7ccda3b38c53a1bfcde4c1a58a2bc # v1.27.1 # The version of runc should match the version that is used by the containerd # version that is used. If you need to update runc, open a pull request in # the containerd project first, and update both after that is merged. # This commit does not need to match RUNC_COMMIT as it is used for helper # packages but should be newer or equal. 
github.com/opencontainers/runc 4144b63817ebcc5b358fc2c8ef95f7cddd709aa7 # v1.0.1 github.com/opencontainers/runtime-spec 1c3f411f041711bbeecf35ff7e93461ea6789220 # v1.0.3-0.20210326190908-1c3f411f0417 github.com/opencontainers/image-spec d60099175f88c47cd379c4738d158884749ed235 # v1.0.1 github.com/cyphar/filepath-securejoin a261ee33d7a517f054effbf451841abaafe3e0fd # v0.2.2 # go-systemd v17 is required by github.com/coreos/pkg/capnslog/journald_formatter.go github.com/coreos/go-systemd 39ca1b05acc7ad1220e09f133283b8859a8b71ab # v17 # systemd integration (journald, daemon/listeners, containerd/cgroups) github.com/coreos/go-systemd/v22 777e73a89cef78631ccaa97f53a9bae67e166186 # v22.3.2 github.com/godbus/dbus/v5 c88335c0b1d28a30e7fc76d526a06154b85e5d97 # v5.0.4 # gelf logging driver deps github.com/Graylog2/go-gelf 1550ee647df0510058c9d67a45c56f18911d80b8 # v2 branch # fluent-logger-golang deps github.com/fluent/fluent-logger-golang b9b7fb02ccfee8ba4e69aa87386820c2bf24fd11 # v1.6.1 github.com/philhofer/fwd bb6d471dc95d4fe11e432687f8b70ff496cf3136 # v1.0.0 github.com/tinylib/msgp af6442a0fcf6e2a1b824f70dd0c734f01e817751 # v1.1.0 # fsnotify github.com/fsnotify/fsnotify 45d7d09e39ef4ac08d493309fa031790c15bfe8a # v1.4.9 # awslogs deps github.com/aws/aws-sdk-go 2590bc875c54c9fda225d8e4e56a9d28d90c6a47 # v1.28.11 github.com/jmespath/go-jmespath 2d053f87d1d7f9f48196ae04cf3daea4273d207d # v0.3.0 # logentries github.com/bsphere/le_go 7a984a84b5492ae539b79b62fb4a10afc63c7bcf # gcplogs deps golang.org/x/oauth2 bf48bf16ab8d622ce64ec6ce98d2c98f916b6303 google.golang.org/api dec2ee309f5b09fc59bc40676447c15736284d78 # v0.8.0 github.com/golang/groupcache 869f871628b6baa9cfbc11732cdf6546b17c1298 go.opencensus.io d835ff86be02193d324330acdb7d65546b05f814 # v0.22.3 cloud.google.com/go ceeb313ad77b789a7fa5287b36a1d127b69b7093 # v0.44.3 github.com/googleapis/gax-go bd5b16380fd03dc758d11cef74ba2e3bc8b0e8c2 # v2.0.5 google.golang.org/genproto 3f1135a288c9a07e340ae8ba4cc6c7065a3160e8 # containerd github.com/containerd/containerd 69107e47a62e1d690afa2b9b1d43f8ece3ff4483 # v1.5.4 github.com/containerd/fifo 650e8a8a179d040123db61f016cb133143e7a581 # v1.0.0 github.com/containerd/continuity bce1c3f9669b6f3e7f6656ee715b0b4d75fa64a6 # v0.1.0 github.com/containerd/cgroups b9de8a2212026c07cec67baf3323f1fc0121e048 # v1.0.1 github.com/containerd/console 2f1e3d2b6afd18e8b2077816c711205a0b4d8769 # v1.0.2 github.com/containerd/go-runc 16b287bc67d069a60fa48db15f330b790b74365b # v1.0.0 github.com/containerd/typeurl 5e43fb8b75ed2f2305fc04e6918c8d10636771bc # v1.0.2 github.com/containerd/ttrpc bfba540dc45464586c106b1f31c8547933c1eb41 # v1.0.2 github.com/gogo/googleapis 01e0f9cca9b92166042241267ee2a5cdf5cff46c # v1.3.2 github.com/cilium/ebpf ca492085341e0e917f48ec30704d5054c5d42ca8 # v0.6.2 github.com/klauspost/compress a3b7545c88eea469c2246bee0e6c130525d56190 # v1.11.13 github.com/pelletier/go-toml 65ca8064882c8c308e5c804c5d5443d409e0738c # v1.8.1 # cluster github.com/docker/swarmkit 2dcf70aafdc9ea55af3aaaeca440638cde0ecda6 # master github.com/gogo/protobuf b03c65ea87cdc3521ede29f62fe3ce239267c1bc # v1.3.2 github.com/golang/protobuf 84668698ea25b64748563aa20726db66a6b8d299 # v1.3.5 github.com/cloudflare/cfssl 5d63dbd981b5c408effbb58c442d54761ff94fbd # 1.3.2 github.com/fernet/fernet-go 9eac43b88a5efb8651d24de9b68e87567e029736 github.com/google/certificate-transparency-go 37a384cd035e722ea46e55029093e26687138edf # v1.0.20 golang.org/x/crypto 0c34fe9e7dc2486962ef9867e3edb3503537209f golang.org/x/time 
3af7569d3a1e776fc2a3c1cec133b43105ea9c2e github.com/hashicorp/go-memdb cb9a474f84cc5e41b273b20c6927680b2a8776ad github.com/hashicorp/go-immutable-radix 826af9ccf0feeee615d546d69b11f8e98da8c8f1 git://github.com/tonistiigi/go-immutable-radix.git github.com/hashicorp/golang-lru 7f827b33c0f158ec5dfbba01bb0b14a4541fd81d # v0.5.3 github.com/coreos/pkg 97fdf19511ea361ae1c100dd393cc47f8dcfa1e1 # v4 code.cloudfoundry.org/clock 02e53af36e6c978af692887ed449b74026d76fec # v1.0.0 # prometheus github.com/prometheus/client_golang 6edbbd9e560190e318cdc5b4d3e630b442858380 # v1.6.0 github.com/beorn7/perks 37c8de3658fcb183f997c4e13e8337516ab753e6 # v1.0.1 github.com/prometheus/client_model 7bc5445566f0fe75b15de23e6b93886e982d7bf9 # v0.2.0 github.com/prometheus/common d978bcb1309602d68bb4ba69cf3f8ed900e07308 # v0.9.1 github.com/prometheus/procfs 46159f73e74d1cb8dc223deef9b2d049286f46b1 # v0.0.11 github.com/matttproud/golang_protobuf_extensions c12348ce28de40eed0136aa2b644d0ee0650e56c # v1.0.1 github.com/pkg/errors 614d223910a179a466c1767a985424175c39b465 # v0.9.1 github.com/grpc-ecosystem/go-grpc-prometheus c225b8c3b01faf2899099b768856a9e916e5087b # v1.2.0 github.com/cespare/xxhash/v2 d7df74196a9e781ede915320c11c378c1b2f3a1f # v2.1.1 # cli github.com/spf13/cobra 8380ddd3132bdf8fd77731725b550c181dda0aa8 # v1.1.3 github.com/spf13/pflag 2e9d26c8c37aae03e3f9d4e90b7116f5accb7cab # v1.0.5 github.com/inconshreveable/mousetrap 76626ae9c91c4f2a10f34cad8ce83ea42c93bb75 # v1.0.0 github.com/morikuni/aec 39771216ff4c63d11f5e604076f9c45e8be1067b # v1.0.0 # metrics github.com/docker/go-metrics b619b3592b65de4f087d9f16863a7e6ff905973c # v0.0.1 github.com/opencontainers/selinux 76bc82e11d854d3e40c08889d13c98abcea72ea2 # v1.8.2 github.com/bits-and-blooms/bitset 59de210119f50cedaa42d175dc88b6335fcf63f6 # v1.2.0 # archive/tar # rm -rf vendor/archive # mkdir -p ./vendor/archive # git clone -b go$GOLANG_VERSION --depth=1 git://github.com/golang/go.git ./go # git --git-dir ./go/.git --work-tree ./go am ../patches/0001-archive-tar-do-not-populate-user-group-names.patch # cp -a go/src/archive/tar ./vendor/archive/tar # rm -rf ./go # vndr -whitelist=^archive/tar # DO NOT EDIT BELOW THIS LINE -------- reserved for downstream projects --------
github.com/Azure/go-ansiterm d185dfc1b5a126116ea5a19e148e29d16b4574c9 github.com/Microsoft/hcsshim 3ad51c76263bad09548a40e1996960814a12a870 # v0.8.20 github.com/Microsoft/go-winio 5c2e05d71961716a6c392a06ada435aaf5d5302c # v0.4.19 github.com/docker/libtrust 9cbd2a1374f46905c68a4eb3694a130610adc62a github.com/golang/gddo 72a348e765d293ed6d1ded7b699591f14d6cd921 github.com/google/uuid 0cd6bf5da1e1c83f8b45653022c74f71af0538a4 # v1.1.1 github.com/gorilla/mux 98cb6bf42e086f6af920b965c38cacc07402d51b # v1.8.0 github.com/moby/locker 281af2d563954745bea9d1487c965f24d30742fe # v1.0.1 github.com/moby/term 3f7ff695adc6a35abc925370dd0a4dafb48ec64d # Note that this dependency uses submodules, providing the github.com/moby/sys/mount, # github.com/moby/sys/mountinfo, github.com/moby/sys/signal, and github.com/moby/sys/symlink # modules. Our vendoring tool (vndr) currently does not support submodules / vendoring sub-paths, # so we vendor the top-level moby/sys repository (which contains both) and pick the most recent tag, # which could be either `mountinfo/vX.Y.Z`, `mount/vX.Y.Z`, `signal/vX.Y.Z`, or `symlink/vX.Y.Z`. github.com/moby/sys 9b0136d132d8e0d1c116a38d7ec9af70d3a59536 # signal/v0.5.0 github.com/creack/pty 2a38352e8b4d7ab6c336eef107e42a55e72e7fbc # v1.1.11 github.com/sirupsen/logrus bdc0db8ead3853c56b7cd1ac2ba4e11b47d7da6b # v1.8.1 github.com/tchap/go-patricia a7f0089c6f496e8e70402f61733606daa326cac5 # v2.3.0 golang.org/x/net e18ecbb051101a46fc263334b127c89bc7bff7ea golang.org/x/sys d19ff857e887eacb631721f188c7d365c2331456 github.com/docker/go-units 519db1ee28dcc9fd2474ae59fca29a810482bfb1 # v0.4.0 github.com/docker/go-connections 7395e3f8aa162843a74ed6d48e79627d9792ac55 # v0.4.0 golang.org/x/text 23ae387dee1f90d29a23c0e87ee0b46038fbed0e # v0.3.3 gotest.tools/v3 568bc57cc5c19a2ef85e5749870b49a4cc2ab54d # v3.0.3 github.com/google/go-cmp 3af367b6b30c263d47e8895973edcca9a49cf029 # v0.2.0 github.com/syndtr/gocapability 42c35b4376354fd554efc7ad35e0b7f94e3a0ffb github.com/RackSec/srslog a4725f04ec91af1a91b380da679d6e0c2f061e59 github.com/imdario/mergo 1afb36080aec31e0d1528973ebe6721b191b0369 # v0.3.8 golang.org/x/sync 036812b2e83c0ddf193dd5a34e034151da389d09 # buildkit github.com/moby/buildkit 9f254e18360a24c2ae47b26f772c3c89533bcbb7 # master / v0.9.0-dev github.com/tonistiigi/fsutil d72af97c0eaf93c1d20360e3cb9c63c223675b83 github.com/tonistiigi/units 6950e57a87eaf136bbe44ef2ec8e75b9e3569de2 github.com/grpc-ecosystem/grpc-opentracing 8e809c8a86450a29b90dcc9efbf062d0fe6d9746 github.com/opentracing/opentracing-go d34af3eaa63c4d08ab54863a4bdd0daa45212e12 # v1.2.0 github.com/google/shlex e7afc7fbc51079733e9468cdfd1efcd7d196cd1d github.com/opentracing-contrib/go-stdlib 8a6ff1ad1691a29e4f7b5d46604f97634997c8c4 # v1.0.0 github.com/mitchellh/hashstructure a38c50148365edc8df43c1580c48fb2b3a1e9cd7 # v1.0.0 github.com/gofrs/flock 6caa7350c26b838538005fae7dbee4e69d9398db # v0.7.3 github.com/grpc-ecosystem/go-grpc-middleware 3c51f7f332123e8be5a157c0802a228ac85bf9db # v1.2.0 # libnetwork github.com/docker/go-events e31b211e4f1cd09aa76fe4ac244571fab96ae47f github.com/armon/go-radix e39d623f12e8e41c7b5529e9a9dd67a1e2261f80 github.com/armon/go-metrics eb0af217e5e9747e41dd5303755356b62d28e3ec github.com/hashicorp/go-msgpack 71c2886f5a673a35f909803f38ece5810165097b github.com/hashicorp/memberlist 619135cdd9e5dda8c12f8ceef39bdade4f5899b6 # v0.2.4 github.com/sean-/seed e2103e2c35297fb7e17febb81e49b312087a2372 github.com/hashicorp/errwrap 8a6fb523712970c966eefc6b39ed2c5e74880354 # v1.0.0 github.com/hashicorp/go-sockaddr 
c7188e74f6acae5a989bdc959aa779f8b9f42faf # v1.0.2 github.com/hashicorp/go-multierror 886a7fbe3eb1c874d46f623bfa70af45f425b3d1 # v1.0.0 github.com/hashicorp/serf 598c54895cc5a7b1a24a398d635e8c0ea0959870 github.com/docker/libkv 458977154600b9f23984d9f4b82e79570b5ae12b github.com/vishvananda/netns db3c7e526aae966c4ccfa6c8189b693d6ac5d202 github.com/vishvananda/netlink f049be6f391489d3f374498fe0c8df8449258372 # v1.1.0 github.com/moby/ipvs 4566ccea0e08d68e9614c3e7a64a23b850c4bb35 # v1.0.1 github.com/google/btree 479b5e81b0a93ec038d201b0b33d17db599531d3 # v1.0.1 github.com/samuel/go-zookeeper d0e0d8e11f318e000a8cc434616d69e329edc374 github.com/deckarep/golang-set ef32fa3046d9f249d399f98ebaf9be944430fd1d github.com/coreos/etcd 2c834459e1aab78a5d5219c7dfe42335fc4b617a # v3.3.25 github.com/coreos/go-semver 8ab6407b697782a06568d4b7f1db25550ec2e4c6 # v0.2.0 github.com/hashicorp/consul 9a9cc9341bb487651a0399e3fc5e1e8a42e62dd9 # v0.5.2 github.com/miekg/dns 6c0c4e6581f8e173cc562c8b3363ab984e4ae071 # v1.1.27 github.com/ishidawataru/sctp f2269e66cdee387bd321445d5d300893449805be go.etcd.io/bbolt 232d8fc87f50244f9c808f4745759e08a304c029 # v1.3.5 github.com/json-iterator/go a1ca0830781e007c66b225121d2cdb3a649421f6 # v1.1.10 github.com/modern-go/concurrent bacd9c7ef1dd9b15be4a9909b8ac7a4e313eec94 # 1.0.3 github.com/modern-go/reflect2 94122c33edd36123c84d5368cfb2b69df93a0ec8 # v1.0.1 # get graph and distribution packages github.com/docker/distribution 0d3efadf0154c2b8a4e7b6621fff9809655cc580 github.com/vbatts/tar-split 620714a4c508c880ac1bdda9c8370a2b19af1a55 # v0.11.1 github.com/opencontainers/go-digest ea51bea511f75cfa3ef6098cc253c5c3609b037a # v1.0.0 # get go-zfs packages github.com/mistifyio/go-zfs f784269be439d704d3dfa1906f45dd848fed2beb google.golang.org/grpc f495f5b15ae7ccda3b38c53a1bfcde4c1a58a2bc # v1.27.1 # The version of runc should match the version that is used by the containerd # version that is used. If you need to update runc, open a pull request in # the containerd project first, and update both after that is merged. # This commit does not need to match RUNC_COMMIT as it is used for helper # packages but should be newer or equal. 
github.com/opencontainers/runc 4144b63817ebcc5b358fc2c8ef95f7cddd709aa7 # v1.0.1 github.com/opencontainers/runtime-spec 1c3f411f041711bbeecf35ff7e93461ea6789220 # v1.0.3-0.20210326190908-1c3f411f0417 github.com/opencontainers/image-spec d60099175f88c47cd379c4738d158884749ed235 # v1.0.1 github.com/cyphar/filepath-securejoin a261ee33d7a517f054effbf451841abaafe3e0fd # v0.2.2 # go-systemd v17 is required by github.com/coreos/pkg/capnslog/journald_formatter.go github.com/coreos/go-systemd 39ca1b05acc7ad1220e09f133283b8859a8b71ab # v17 # systemd integration (journald, daemon/listeners, containerd/cgroups) github.com/coreos/go-systemd/v22 777e73a89cef78631ccaa97f53a9bae67e166186 # v22.3.2 github.com/godbus/dbus/v5 c88335c0b1d28a30e7fc76d526a06154b85e5d97 # v5.0.4 # gelf logging driver deps github.com/Graylog2/go-gelf 1550ee647df0510058c9d67a45c56f18911d80b8 # v2 branch # fluent-logger-golang deps github.com/fluent/fluent-logger-golang b9b7fb02ccfee8ba4e69aa87386820c2bf24fd11 # v1.6.1 github.com/philhofer/fwd bb6d471dc95d4fe11e432687f8b70ff496cf3136 # v1.0.0 github.com/tinylib/msgp af6442a0fcf6e2a1b824f70dd0c734f01e817751 # v1.1.0 # fsnotify github.com/fsnotify/fsnotify 45d7d09e39ef4ac08d493309fa031790c15bfe8a # v1.4.9 # awslogs deps github.com/aws/aws-sdk-go 2590bc875c54c9fda225d8e4e56a9d28d90c6a47 # v1.28.11 github.com/jmespath/go-jmespath 2d053f87d1d7f9f48196ae04cf3daea4273d207d # v0.3.0 # logentries github.com/bsphere/le_go 7a984a84b5492ae539b79b62fb4a10afc63c7bcf # gcplogs deps golang.org/x/oauth2 bf48bf16ab8d622ce64ec6ce98d2c98f916b6303 google.golang.org/api dec2ee309f5b09fc59bc40676447c15736284d78 # v0.8.0 github.com/golang/groupcache 869f871628b6baa9cfbc11732cdf6546b17c1298 go.opencensus.io d835ff86be02193d324330acdb7d65546b05f814 # v0.22.3 cloud.google.com/go ceeb313ad77b789a7fa5287b36a1d127b69b7093 # v0.44.3 github.com/googleapis/gax-go bd5b16380fd03dc758d11cef74ba2e3bc8b0e8c2 # v2.0.5 google.golang.org/genproto 3f1135a288c9a07e340ae8ba4cc6c7065a3160e8 # containerd github.com/containerd/containerd 69107e47a62e1d690afa2b9b1d43f8ece3ff4483 # v1.5.4 github.com/containerd/fifo 650e8a8a179d040123db61f016cb133143e7a581 # v1.0.0 github.com/containerd/continuity bce1c3f9669b6f3e7f6656ee715b0b4d75fa64a6 # v0.1.0 github.com/containerd/cgroups b9de8a2212026c07cec67baf3323f1fc0121e048 # v1.0.1 github.com/containerd/console 2f1e3d2b6afd18e8b2077816c711205a0b4d8769 # v1.0.2 github.com/containerd/go-runc 16b287bc67d069a60fa48db15f330b790b74365b # v1.0.0 github.com/containerd/typeurl 5e43fb8b75ed2f2305fc04e6918c8d10636771bc # v1.0.2 github.com/containerd/ttrpc bfba540dc45464586c106b1f31c8547933c1eb41 # v1.0.2 github.com/gogo/googleapis 01e0f9cca9b92166042241267ee2a5cdf5cff46c # v1.3.2 github.com/cilium/ebpf ca492085341e0e917f48ec30704d5054c5d42ca8 # v0.6.2 github.com/klauspost/compress a3b7545c88eea469c2246bee0e6c130525d56190 # v1.11.13 github.com/pelletier/go-toml 65ca8064882c8c308e5c804c5d5443d409e0738c # v1.8.1 # cluster github.com/docker/swarmkit 2dcf70aafdc9ea55af3aaaeca440638cde0ecda6 # master github.com/gogo/protobuf b03c65ea87cdc3521ede29f62fe3ce239267c1bc # v1.3.2 github.com/golang/protobuf 84668698ea25b64748563aa20726db66a6b8d299 # v1.3.5 github.com/cloudflare/cfssl 5d63dbd981b5c408effbb58c442d54761ff94fbd # 1.3.2 github.com/fernet/fernet-go 9eac43b88a5efb8651d24de9b68e87567e029736 github.com/google/certificate-transparency-go 37a384cd035e722ea46e55029093e26687138edf # v1.0.20 golang.org/x/crypto 0c34fe9e7dc2486962ef9867e3edb3503537209f golang.org/x/time 
3af7569d3a1e776fc2a3c1cec133b43105ea9c2e github.com/hashicorp/go-memdb cb9a474f84cc5e41b273b20c6927680b2a8776ad github.com/hashicorp/go-immutable-radix 826af9ccf0feeee615d546d69b11f8e98da8c8f1 git://github.com/tonistiigi/go-immutable-radix.git github.com/hashicorp/golang-lru 7f827b33c0f158ec5dfbba01bb0b14a4541fd81d # v0.5.3 github.com/coreos/pkg 97fdf19511ea361ae1c100dd393cc47f8dcfa1e1 # v4 code.cloudfoundry.org/clock 02e53af36e6c978af692887ed449b74026d76fec # v1.0.0 # prometheus github.com/prometheus/client_golang 6edbbd9e560190e318cdc5b4d3e630b442858380 # v1.6.0 github.com/beorn7/perks 37c8de3658fcb183f997c4e13e8337516ab753e6 # v1.0.1 github.com/prometheus/client_model 7bc5445566f0fe75b15de23e6b93886e982d7bf9 # v0.2.0 github.com/prometheus/common d978bcb1309602d68bb4ba69cf3f8ed900e07308 # v0.9.1 github.com/prometheus/procfs 46159f73e74d1cb8dc223deef9b2d049286f46b1 # v0.0.11 github.com/matttproud/golang_protobuf_extensions c12348ce28de40eed0136aa2b644d0ee0650e56c # v1.0.1 github.com/pkg/errors 614d223910a179a466c1767a985424175c39b465 # v0.9.1 github.com/grpc-ecosystem/go-grpc-prometheus c225b8c3b01faf2899099b768856a9e916e5087b # v1.2.0 github.com/cespare/xxhash/v2 d7df74196a9e781ede915320c11c378c1b2f3a1f # v2.1.1 # cli github.com/spf13/cobra 8380ddd3132bdf8fd77731725b550c181dda0aa8 # v1.1.3 github.com/spf13/pflag 2e9d26c8c37aae03e3f9d4e90b7116f5accb7cab # v1.0.5 github.com/inconshreveable/mousetrap 76626ae9c91c4f2a10f34cad8ce83ea42c93bb75 # v1.0.0 github.com/morikuni/aec 39771216ff4c63d11f5e604076f9c45e8be1067b # v1.0.0 # metrics github.com/docker/go-metrics b619b3592b65de4f087d9f16863a7e6ff905973c # v0.0.1 github.com/opencontainers/selinux 76bc82e11d854d3e40c08889d13c98abcea72ea2 # v1.8.2 github.com/bits-and-blooms/bitset 59de210119f50cedaa42d175dc88b6335fcf63f6 # v1.2.0 # archive/tar # rm -rf vendor/archive # mkdir -p ./vendor/archive # git clone -b go$GOLANG_VERSION --depth=1 git://github.com/golang/go.git ./go # git --git-dir ./go/.git --work-tree ./go am ../patches/0001-archive-tar-do-not-populate-user-group-names.patch # cp -a go/src/archive/tar ./vendor/archive/tar # rm -rf ./go # vndr -whitelist=^archive/tar # DO NOT EDIT BELOW THIS LINE -------- reserved for downstream projects --------
thaJeztah
9674540ccff358c3cd84cc2f33c3503e0dab7fb7
12f1b3ce43fe4aea5a41750bcc20f2a7dd67dbfc
v0.5.0 sgtm
AkihiroSuda
4,566
moby/moby
42,616
replace pkg/signal with moby/sys/signal v0.5.0
This code was moved to the moby/sys repository relates to ~https://github.com/moby/sys/pull/69~ https://github.com/moby/sys/pull/70 relates to https://github.com/moby/moby/pull/42641 relates to https://github.com/containerd/containerd/issues/5402 **- Description for the changelog** <!-- Write a short (one line) summary that describes the changes in this pull request for inclusion in the changelog: --> **- A picture of a cute animal (not mandatory but encouraged)**
null
2021-07-09 22:14:09+00:00
2021-07-26 17:47:28+00:00
vendor.conf
github.com/Azure/go-ansiterm d185dfc1b5a126116ea5a19e148e29d16b4574c9 github.com/Microsoft/hcsshim 3ad51c76263bad09548a40e1996960814a12a870 # v0.8.20 github.com/Microsoft/go-winio 5c2e05d71961716a6c392a06ada435aaf5d5302c # v0.4.19 github.com/docker/libtrust 9cbd2a1374f46905c68a4eb3694a130610adc62a github.com/golang/gddo 72a348e765d293ed6d1ded7b699591f14d6cd921 github.com/google/uuid 0cd6bf5da1e1c83f8b45653022c74f71af0538a4 # v1.1.1 github.com/gorilla/mux 98cb6bf42e086f6af920b965c38cacc07402d51b # v1.8.0 github.com/moby/locker 281af2d563954745bea9d1487c965f24d30742fe # v1.0.1 github.com/moby/term 3f7ff695adc6a35abc925370dd0a4dafb48ec64d # Note that this dependency uses submodules, providing the github.com/moby/sys/mount, # github.com/moby/sys/mountinfo, and github.com/moby/sys/symlink modules. Our vendoring # tool (vndr) currently does not support submodules / vendoring sub-paths, so we vendor # the top-level moby/sys repository (which contains both) and pick the most recent tag, # which could be either `mountinfo/vX.Y.Z`, `mount/vX.Y.Z`, or `symlink/vX.Y.Z`. github.com/moby/sys b0f1fd7235275d01bd35cc4421e884e522395f45 # mountinfo/v0.4.1 github.com/creack/pty 2a38352e8b4d7ab6c336eef107e42a55e72e7fbc # v1.1.11 github.com/sirupsen/logrus bdc0db8ead3853c56b7cd1ac2ba4e11b47d7da6b # v1.8.1 github.com/tchap/go-patricia a7f0089c6f496e8e70402f61733606daa326cac5 # v2.3.0 golang.org/x/net e18ecbb051101a46fc263334b127c89bc7bff7ea golang.org/x/sys d19ff857e887eacb631721f188c7d365c2331456 github.com/docker/go-units 519db1ee28dcc9fd2474ae59fca29a810482bfb1 # v0.4.0 github.com/docker/go-connections 7395e3f8aa162843a74ed6d48e79627d9792ac55 # v0.4.0 golang.org/x/text 23ae387dee1f90d29a23c0e87ee0b46038fbed0e # v0.3.3 gotest.tools/v3 568bc57cc5c19a2ef85e5749870b49a4cc2ab54d # v3.0.3 github.com/google/go-cmp 3af367b6b30c263d47e8895973edcca9a49cf029 # v0.2.0 github.com/syndtr/gocapability 42c35b4376354fd554efc7ad35e0b7f94e3a0ffb github.com/RackSec/srslog a4725f04ec91af1a91b380da679d6e0c2f061e59 github.com/imdario/mergo 1afb36080aec31e0d1528973ebe6721b191b0369 # v0.3.8 golang.org/x/sync 036812b2e83c0ddf193dd5a34e034151da389d09 # buildkit github.com/moby/buildkit 9f254e18360a24c2ae47b26f772c3c89533bcbb7 # master / v0.9.0-dev github.com/tonistiigi/fsutil d72af97c0eaf93c1d20360e3cb9c63c223675b83 github.com/tonistiigi/units 6950e57a87eaf136bbe44ef2ec8e75b9e3569de2 github.com/grpc-ecosystem/grpc-opentracing 8e809c8a86450a29b90dcc9efbf062d0fe6d9746 github.com/opentracing/opentracing-go d34af3eaa63c4d08ab54863a4bdd0daa45212e12 # v1.2.0 github.com/google/shlex e7afc7fbc51079733e9468cdfd1efcd7d196cd1d github.com/opentracing-contrib/go-stdlib 8a6ff1ad1691a29e4f7b5d46604f97634997c8c4 # v1.0.0 github.com/mitchellh/hashstructure a38c50148365edc8df43c1580c48fb2b3a1e9cd7 # v1.0.0 github.com/gofrs/flock 6caa7350c26b838538005fae7dbee4e69d9398db # v0.7.3 github.com/grpc-ecosystem/go-grpc-middleware 3c51f7f332123e8be5a157c0802a228ac85bf9db # v1.2.0 # libnetwork github.com/docker/go-events e31b211e4f1cd09aa76fe4ac244571fab96ae47f github.com/armon/go-radix e39d623f12e8e41c7b5529e9a9dd67a1e2261f80 github.com/armon/go-metrics eb0af217e5e9747e41dd5303755356b62d28e3ec github.com/hashicorp/go-msgpack 71c2886f5a673a35f909803f38ece5810165097b github.com/hashicorp/memberlist 619135cdd9e5dda8c12f8ceef39bdade4f5899b6 # v0.2.4 github.com/sean-/seed e2103e2c35297fb7e17febb81e49b312087a2372 github.com/hashicorp/errwrap 8a6fb523712970c966eefc6b39ed2c5e74880354 # v1.0.0 github.com/hashicorp/go-sockaddr c7188e74f6acae5a989bdc959aa779f8b9f42faf # 
v1.0.2 github.com/hashicorp/go-multierror 886a7fbe3eb1c874d46f623bfa70af45f425b3d1 # v1.0.0 github.com/hashicorp/serf 598c54895cc5a7b1a24a398d635e8c0ea0959870 github.com/docker/libkv 458977154600b9f23984d9f4b82e79570b5ae12b github.com/vishvananda/netns db3c7e526aae966c4ccfa6c8189b693d6ac5d202 github.com/vishvananda/netlink f049be6f391489d3f374498fe0c8df8449258372 # v1.1.0 github.com/moby/ipvs 4566ccea0e08d68e9614c3e7a64a23b850c4bb35 # v1.0.1 github.com/google/btree 479b5e81b0a93ec038d201b0b33d17db599531d3 # v1.0.1 github.com/samuel/go-zookeeper d0e0d8e11f318e000a8cc434616d69e329edc374 github.com/deckarep/golang-set ef32fa3046d9f249d399f98ebaf9be944430fd1d github.com/coreos/etcd 2c834459e1aab78a5d5219c7dfe42335fc4b617a # v3.3.25 github.com/coreos/go-semver 8ab6407b697782a06568d4b7f1db25550ec2e4c6 # v0.2.0 github.com/hashicorp/consul 9a9cc9341bb487651a0399e3fc5e1e8a42e62dd9 # v0.5.2 github.com/miekg/dns 6c0c4e6581f8e173cc562c8b3363ab984e4ae071 # v1.1.27 github.com/ishidawataru/sctp f2269e66cdee387bd321445d5d300893449805be go.etcd.io/bbolt 232d8fc87f50244f9c808f4745759e08a304c029 # v1.3.5 github.com/json-iterator/go a1ca0830781e007c66b225121d2cdb3a649421f6 # v1.1.10 github.com/modern-go/concurrent bacd9c7ef1dd9b15be4a9909b8ac7a4e313eec94 # 1.0.3 github.com/modern-go/reflect2 94122c33edd36123c84d5368cfb2b69df93a0ec8 # v1.0.1 # get graph and distribution packages github.com/docker/distribution 0d3efadf0154c2b8a4e7b6621fff9809655cc580 github.com/vbatts/tar-split 620714a4c508c880ac1bdda9c8370a2b19af1a55 # v0.11.1 github.com/opencontainers/go-digest ea51bea511f75cfa3ef6098cc253c5c3609b037a # v1.0.0 # get go-zfs packages github.com/mistifyio/go-zfs f784269be439d704d3dfa1906f45dd848fed2beb google.golang.org/grpc f495f5b15ae7ccda3b38c53a1bfcde4c1a58a2bc # v1.27.1 # The version of runc should match the version that is used by the containerd # version that is used. If you need to update runc, open a pull request in # the containerd project first, and update both after that is merged. # This commit does not need to match RUNC_COMMIT as it is used for helper # packages but should be newer or equal. 
github.com/opencontainers/runc 4144b63817ebcc5b358fc2c8ef95f7cddd709aa7 # v1.0.1 github.com/opencontainers/runtime-spec 1c3f411f041711bbeecf35ff7e93461ea6789220 # v1.0.3-0.20210326190908-1c3f411f0417 github.com/opencontainers/image-spec d60099175f88c47cd379c4738d158884749ed235 # v1.0.1 github.com/cyphar/filepath-securejoin a261ee33d7a517f054effbf451841abaafe3e0fd # v0.2.2 # go-systemd v17 is required by github.com/coreos/pkg/capnslog/journald_formatter.go github.com/coreos/go-systemd 39ca1b05acc7ad1220e09f133283b8859a8b71ab # v17 # systemd integration (journald, daemon/listeners, containerd/cgroups) github.com/coreos/go-systemd/v22 777e73a89cef78631ccaa97f53a9bae67e166186 # v22.3.2 github.com/godbus/dbus/v5 c88335c0b1d28a30e7fc76d526a06154b85e5d97 # v5.0.4 # gelf logging driver deps github.com/Graylog2/go-gelf 1550ee647df0510058c9d67a45c56f18911d80b8 # v2 branch # fluent-logger-golang deps github.com/fluent/fluent-logger-golang b9b7fb02ccfee8ba4e69aa87386820c2bf24fd11 # v1.6.1 github.com/philhofer/fwd bb6d471dc95d4fe11e432687f8b70ff496cf3136 # v1.0.0 github.com/tinylib/msgp af6442a0fcf6e2a1b824f70dd0c734f01e817751 # v1.1.0 # fsnotify github.com/fsnotify/fsnotify 45d7d09e39ef4ac08d493309fa031790c15bfe8a # v1.4.9 # awslogs deps github.com/aws/aws-sdk-go 2590bc875c54c9fda225d8e4e56a9d28d90c6a47 # v1.28.11 github.com/jmespath/go-jmespath 2d053f87d1d7f9f48196ae04cf3daea4273d207d # v0.3.0 # logentries github.com/bsphere/le_go 7a984a84b5492ae539b79b62fb4a10afc63c7bcf # gcplogs deps golang.org/x/oauth2 bf48bf16ab8d622ce64ec6ce98d2c98f916b6303 google.golang.org/api dec2ee309f5b09fc59bc40676447c15736284d78 # v0.8.0 github.com/golang/groupcache 869f871628b6baa9cfbc11732cdf6546b17c1298 go.opencensus.io d835ff86be02193d324330acdb7d65546b05f814 # v0.22.3 cloud.google.com/go ceeb313ad77b789a7fa5287b36a1d127b69b7093 # v0.44.3 github.com/googleapis/gax-go bd5b16380fd03dc758d11cef74ba2e3bc8b0e8c2 # v2.0.5 google.golang.org/genproto 3f1135a288c9a07e340ae8ba4cc6c7065a3160e8 # containerd github.com/containerd/containerd 69107e47a62e1d690afa2b9b1d43f8ece3ff4483 # v1.5.4 github.com/containerd/fifo 650e8a8a179d040123db61f016cb133143e7a581 # v1.0.0 github.com/containerd/continuity bce1c3f9669b6f3e7f6656ee715b0b4d75fa64a6 # v0.1.0 github.com/containerd/cgroups b9de8a2212026c07cec67baf3323f1fc0121e048 # v1.0.1 github.com/containerd/console 2f1e3d2b6afd18e8b2077816c711205a0b4d8769 # v1.0.2 github.com/containerd/go-runc 16b287bc67d069a60fa48db15f330b790b74365b # v1.0.0 github.com/containerd/typeurl 5e43fb8b75ed2f2305fc04e6918c8d10636771bc # v1.0.2 github.com/containerd/ttrpc bfba540dc45464586c106b1f31c8547933c1eb41 # v1.0.2 github.com/gogo/googleapis 01e0f9cca9b92166042241267ee2a5cdf5cff46c # v1.3.2 github.com/cilium/ebpf ca492085341e0e917f48ec30704d5054c5d42ca8 # v0.6.2 github.com/klauspost/compress a3b7545c88eea469c2246bee0e6c130525d56190 # v1.11.13 github.com/pelletier/go-toml 65ca8064882c8c308e5c804c5d5443d409e0738c # v1.8.1 # cluster github.com/docker/swarmkit 2dcf70aafdc9ea55af3aaaeca440638cde0ecda6 # master github.com/gogo/protobuf b03c65ea87cdc3521ede29f62fe3ce239267c1bc # v1.3.2 github.com/golang/protobuf 84668698ea25b64748563aa20726db66a6b8d299 # v1.3.5 github.com/cloudflare/cfssl 5d63dbd981b5c408effbb58c442d54761ff94fbd # 1.3.2 github.com/fernet/fernet-go 9eac43b88a5efb8651d24de9b68e87567e029736 github.com/google/certificate-transparency-go 37a384cd035e722ea46e55029093e26687138edf # v1.0.20 golang.org/x/crypto 0c34fe9e7dc2486962ef9867e3edb3503537209f golang.org/x/time 
3af7569d3a1e776fc2a3c1cec133b43105ea9c2e github.com/hashicorp/go-memdb cb9a474f84cc5e41b273b20c6927680b2a8776ad github.com/hashicorp/go-immutable-radix 826af9ccf0feeee615d546d69b11f8e98da8c8f1 git://github.com/tonistiigi/go-immutable-radix.git github.com/hashicorp/golang-lru 7f827b33c0f158ec5dfbba01bb0b14a4541fd81d # v0.5.3 github.com/coreos/pkg 97fdf19511ea361ae1c100dd393cc47f8dcfa1e1 # v4 code.cloudfoundry.org/clock 02e53af36e6c978af692887ed449b74026d76fec # v1.0.0 # prometheus github.com/prometheus/client_golang 6edbbd9e560190e318cdc5b4d3e630b442858380 # v1.6.0 github.com/beorn7/perks 37c8de3658fcb183f997c4e13e8337516ab753e6 # v1.0.1 github.com/prometheus/client_model 7bc5445566f0fe75b15de23e6b93886e982d7bf9 # v0.2.0 github.com/prometheus/common d978bcb1309602d68bb4ba69cf3f8ed900e07308 # v0.9.1 github.com/prometheus/procfs 46159f73e74d1cb8dc223deef9b2d049286f46b1 # v0.0.11 github.com/matttproud/golang_protobuf_extensions c12348ce28de40eed0136aa2b644d0ee0650e56c # v1.0.1 github.com/pkg/errors 614d223910a179a466c1767a985424175c39b465 # v0.9.1 github.com/grpc-ecosystem/go-grpc-prometheus c225b8c3b01faf2899099b768856a9e916e5087b # v1.2.0 github.com/cespare/xxhash/v2 d7df74196a9e781ede915320c11c378c1b2f3a1f # v2.1.1 # cli github.com/spf13/cobra 8380ddd3132bdf8fd77731725b550c181dda0aa8 # v1.1.3 github.com/spf13/pflag 2e9d26c8c37aae03e3f9d4e90b7116f5accb7cab # v1.0.5 github.com/inconshreveable/mousetrap 76626ae9c91c4f2a10f34cad8ce83ea42c93bb75 # v1.0.0 github.com/morikuni/aec 39771216ff4c63d11f5e604076f9c45e8be1067b # v1.0.0 # metrics github.com/docker/go-metrics b619b3592b65de4f087d9f16863a7e6ff905973c # v0.0.1 github.com/opencontainers/selinux 76bc82e11d854d3e40c08889d13c98abcea72ea2 # v1.8.2 github.com/bits-and-blooms/bitset 59de210119f50cedaa42d175dc88b6335fcf63f6 # v1.2.0 # archive/tar # rm -rf vendor/archive # mkdir -p ./vendor/archive # git clone -b go$GOLANG_VERSION --depth=1 git://github.com/golang/go.git ./go # git --git-dir ./go/.git --work-tree ./go am ../patches/0001-archive-tar-do-not-populate-user-group-names.patch # cp -a go/src/archive/tar ./vendor/archive/tar # rm -rf ./go # vndr -whitelist=^archive/tar # DO NOT EDIT BELOW THIS LINE -------- reserved for downstream projects --------
github.com/Azure/go-ansiterm d185dfc1b5a126116ea5a19e148e29d16b4574c9 github.com/Microsoft/hcsshim 3ad51c76263bad09548a40e1996960814a12a870 # v0.8.20 github.com/Microsoft/go-winio 5c2e05d71961716a6c392a06ada435aaf5d5302c # v0.4.19 github.com/docker/libtrust 9cbd2a1374f46905c68a4eb3694a130610adc62a github.com/golang/gddo 72a348e765d293ed6d1ded7b699591f14d6cd921 github.com/google/uuid 0cd6bf5da1e1c83f8b45653022c74f71af0538a4 # v1.1.1 github.com/gorilla/mux 98cb6bf42e086f6af920b965c38cacc07402d51b # v1.8.0 github.com/moby/locker 281af2d563954745bea9d1487c965f24d30742fe # v1.0.1 github.com/moby/term 3f7ff695adc6a35abc925370dd0a4dafb48ec64d # Note that this dependency uses submodules, providing the github.com/moby/sys/mount, # github.com/moby/sys/mountinfo, github.com/moby/sys/signal, and github.com/moby/sys/symlink # modules. Our vendoring tool (vndr) currently does not support submodules / vendoring sub-paths, # so we vendor the top-level moby/sys repository (which contains both) and pick the most recent tag, # which could be either `mountinfo/vX.Y.Z`, `mount/vX.Y.Z`, `signal/vX.Y.Z`, or `symlink/vX.Y.Z`. github.com/moby/sys 9b0136d132d8e0d1c116a38d7ec9af70d3a59536 # signal/v0.5.0 github.com/creack/pty 2a38352e8b4d7ab6c336eef107e42a55e72e7fbc # v1.1.11 github.com/sirupsen/logrus bdc0db8ead3853c56b7cd1ac2ba4e11b47d7da6b # v1.8.1 github.com/tchap/go-patricia a7f0089c6f496e8e70402f61733606daa326cac5 # v2.3.0 golang.org/x/net e18ecbb051101a46fc263334b127c89bc7bff7ea golang.org/x/sys d19ff857e887eacb631721f188c7d365c2331456 github.com/docker/go-units 519db1ee28dcc9fd2474ae59fca29a810482bfb1 # v0.4.0 github.com/docker/go-connections 7395e3f8aa162843a74ed6d48e79627d9792ac55 # v0.4.0 golang.org/x/text 23ae387dee1f90d29a23c0e87ee0b46038fbed0e # v0.3.3 gotest.tools/v3 568bc57cc5c19a2ef85e5749870b49a4cc2ab54d # v3.0.3 github.com/google/go-cmp 3af367b6b30c263d47e8895973edcca9a49cf029 # v0.2.0 github.com/syndtr/gocapability 42c35b4376354fd554efc7ad35e0b7f94e3a0ffb github.com/RackSec/srslog a4725f04ec91af1a91b380da679d6e0c2f061e59 github.com/imdario/mergo 1afb36080aec31e0d1528973ebe6721b191b0369 # v0.3.8 golang.org/x/sync 036812b2e83c0ddf193dd5a34e034151da389d09 # buildkit github.com/moby/buildkit 9f254e18360a24c2ae47b26f772c3c89533bcbb7 # master / v0.9.0-dev github.com/tonistiigi/fsutil d72af97c0eaf93c1d20360e3cb9c63c223675b83 github.com/tonistiigi/units 6950e57a87eaf136bbe44ef2ec8e75b9e3569de2 github.com/grpc-ecosystem/grpc-opentracing 8e809c8a86450a29b90dcc9efbf062d0fe6d9746 github.com/opentracing/opentracing-go d34af3eaa63c4d08ab54863a4bdd0daa45212e12 # v1.2.0 github.com/google/shlex e7afc7fbc51079733e9468cdfd1efcd7d196cd1d github.com/opentracing-contrib/go-stdlib 8a6ff1ad1691a29e4f7b5d46604f97634997c8c4 # v1.0.0 github.com/mitchellh/hashstructure a38c50148365edc8df43c1580c48fb2b3a1e9cd7 # v1.0.0 github.com/gofrs/flock 6caa7350c26b838538005fae7dbee4e69d9398db # v0.7.3 github.com/grpc-ecosystem/go-grpc-middleware 3c51f7f332123e8be5a157c0802a228ac85bf9db # v1.2.0 # libnetwork github.com/docker/go-events e31b211e4f1cd09aa76fe4ac244571fab96ae47f github.com/armon/go-radix e39d623f12e8e41c7b5529e9a9dd67a1e2261f80 github.com/armon/go-metrics eb0af217e5e9747e41dd5303755356b62d28e3ec github.com/hashicorp/go-msgpack 71c2886f5a673a35f909803f38ece5810165097b github.com/hashicorp/memberlist 619135cdd9e5dda8c12f8ceef39bdade4f5899b6 # v0.2.4 github.com/sean-/seed e2103e2c35297fb7e17febb81e49b312087a2372 github.com/hashicorp/errwrap 8a6fb523712970c966eefc6b39ed2c5e74880354 # v1.0.0 github.com/hashicorp/go-sockaddr 
c7188e74f6acae5a989bdc959aa779f8b9f42faf # v1.0.2 github.com/hashicorp/go-multierror 886a7fbe3eb1c874d46f623bfa70af45f425b3d1 # v1.0.0 github.com/hashicorp/serf 598c54895cc5a7b1a24a398d635e8c0ea0959870 github.com/docker/libkv 458977154600b9f23984d9f4b82e79570b5ae12b github.com/vishvananda/netns db3c7e526aae966c4ccfa6c8189b693d6ac5d202 github.com/vishvananda/netlink f049be6f391489d3f374498fe0c8df8449258372 # v1.1.0 github.com/moby/ipvs 4566ccea0e08d68e9614c3e7a64a23b850c4bb35 # v1.0.1 github.com/google/btree 479b5e81b0a93ec038d201b0b33d17db599531d3 # v1.0.1 github.com/samuel/go-zookeeper d0e0d8e11f318e000a8cc434616d69e329edc374 github.com/deckarep/golang-set ef32fa3046d9f249d399f98ebaf9be944430fd1d github.com/coreos/etcd 2c834459e1aab78a5d5219c7dfe42335fc4b617a # v3.3.25 github.com/coreos/go-semver 8ab6407b697782a06568d4b7f1db25550ec2e4c6 # v0.2.0 github.com/hashicorp/consul 9a9cc9341bb487651a0399e3fc5e1e8a42e62dd9 # v0.5.2 github.com/miekg/dns 6c0c4e6581f8e173cc562c8b3363ab984e4ae071 # v1.1.27 github.com/ishidawataru/sctp f2269e66cdee387bd321445d5d300893449805be go.etcd.io/bbolt 232d8fc87f50244f9c808f4745759e08a304c029 # v1.3.5 github.com/json-iterator/go a1ca0830781e007c66b225121d2cdb3a649421f6 # v1.1.10 github.com/modern-go/concurrent bacd9c7ef1dd9b15be4a9909b8ac7a4e313eec94 # 1.0.3 github.com/modern-go/reflect2 94122c33edd36123c84d5368cfb2b69df93a0ec8 # v1.0.1 # get graph and distribution packages github.com/docker/distribution 0d3efadf0154c2b8a4e7b6621fff9809655cc580 github.com/vbatts/tar-split 620714a4c508c880ac1bdda9c8370a2b19af1a55 # v0.11.1 github.com/opencontainers/go-digest ea51bea511f75cfa3ef6098cc253c5c3609b037a # v1.0.0 # get go-zfs packages github.com/mistifyio/go-zfs f784269be439d704d3dfa1906f45dd848fed2beb google.golang.org/grpc f495f5b15ae7ccda3b38c53a1bfcde4c1a58a2bc # v1.27.1 # The version of runc should match the version that is used by the containerd # version that is used. If you need to update runc, open a pull request in # the containerd project first, and update both after that is merged. # This commit does not need to match RUNC_COMMIT as it is used for helper # packages but should be newer or equal. 
github.com/opencontainers/runc 4144b63817ebcc5b358fc2c8ef95f7cddd709aa7 # v1.0.1 github.com/opencontainers/runtime-spec 1c3f411f041711bbeecf35ff7e93461ea6789220 # v1.0.3-0.20210326190908-1c3f411f0417 github.com/opencontainers/image-spec d60099175f88c47cd379c4738d158884749ed235 # v1.0.1 github.com/cyphar/filepath-securejoin a261ee33d7a517f054effbf451841abaafe3e0fd # v0.2.2 # go-systemd v17 is required by github.com/coreos/pkg/capnslog/journald_formatter.go github.com/coreos/go-systemd 39ca1b05acc7ad1220e09f133283b8859a8b71ab # v17 # systemd integration (journald, daemon/listeners, containerd/cgroups) github.com/coreos/go-systemd/v22 777e73a89cef78631ccaa97f53a9bae67e166186 # v22.3.2 github.com/godbus/dbus/v5 c88335c0b1d28a30e7fc76d526a06154b85e5d97 # v5.0.4 # gelf logging driver deps github.com/Graylog2/go-gelf 1550ee647df0510058c9d67a45c56f18911d80b8 # v2 branch # fluent-logger-golang deps github.com/fluent/fluent-logger-golang b9b7fb02ccfee8ba4e69aa87386820c2bf24fd11 # v1.6.1 github.com/philhofer/fwd bb6d471dc95d4fe11e432687f8b70ff496cf3136 # v1.0.0 github.com/tinylib/msgp af6442a0fcf6e2a1b824f70dd0c734f01e817751 # v1.1.0 # fsnotify github.com/fsnotify/fsnotify 45d7d09e39ef4ac08d493309fa031790c15bfe8a # v1.4.9 # awslogs deps github.com/aws/aws-sdk-go 2590bc875c54c9fda225d8e4e56a9d28d90c6a47 # v1.28.11 github.com/jmespath/go-jmespath 2d053f87d1d7f9f48196ae04cf3daea4273d207d # v0.3.0 # logentries github.com/bsphere/le_go 7a984a84b5492ae539b79b62fb4a10afc63c7bcf # gcplogs deps golang.org/x/oauth2 bf48bf16ab8d622ce64ec6ce98d2c98f916b6303 google.golang.org/api dec2ee309f5b09fc59bc40676447c15736284d78 # v0.8.0 github.com/golang/groupcache 869f871628b6baa9cfbc11732cdf6546b17c1298 go.opencensus.io d835ff86be02193d324330acdb7d65546b05f814 # v0.22.3 cloud.google.com/go ceeb313ad77b789a7fa5287b36a1d127b69b7093 # v0.44.3 github.com/googleapis/gax-go bd5b16380fd03dc758d11cef74ba2e3bc8b0e8c2 # v2.0.5 google.golang.org/genproto 3f1135a288c9a07e340ae8ba4cc6c7065a3160e8 # containerd github.com/containerd/containerd 69107e47a62e1d690afa2b9b1d43f8ece3ff4483 # v1.5.4 github.com/containerd/fifo 650e8a8a179d040123db61f016cb133143e7a581 # v1.0.0 github.com/containerd/continuity bce1c3f9669b6f3e7f6656ee715b0b4d75fa64a6 # v0.1.0 github.com/containerd/cgroups b9de8a2212026c07cec67baf3323f1fc0121e048 # v1.0.1 github.com/containerd/console 2f1e3d2b6afd18e8b2077816c711205a0b4d8769 # v1.0.2 github.com/containerd/go-runc 16b287bc67d069a60fa48db15f330b790b74365b # v1.0.0 github.com/containerd/typeurl 5e43fb8b75ed2f2305fc04e6918c8d10636771bc # v1.0.2 github.com/containerd/ttrpc bfba540dc45464586c106b1f31c8547933c1eb41 # v1.0.2 github.com/gogo/googleapis 01e0f9cca9b92166042241267ee2a5cdf5cff46c # v1.3.2 github.com/cilium/ebpf ca492085341e0e917f48ec30704d5054c5d42ca8 # v0.6.2 github.com/klauspost/compress a3b7545c88eea469c2246bee0e6c130525d56190 # v1.11.13 github.com/pelletier/go-toml 65ca8064882c8c308e5c804c5d5443d409e0738c # v1.8.1 # cluster github.com/docker/swarmkit 2dcf70aafdc9ea55af3aaaeca440638cde0ecda6 # master github.com/gogo/protobuf b03c65ea87cdc3521ede29f62fe3ce239267c1bc # v1.3.2 github.com/golang/protobuf 84668698ea25b64748563aa20726db66a6b8d299 # v1.3.5 github.com/cloudflare/cfssl 5d63dbd981b5c408effbb58c442d54761ff94fbd # 1.3.2 github.com/fernet/fernet-go 9eac43b88a5efb8651d24de9b68e87567e029736 github.com/google/certificate-transparency-go 37a384cd035e722ea46e55029093e26687138edf # v1.0.20 golang.org/x/crypto 0c34fe9e7dc2486962ef9867e3edb3503537209f golang.org/x/time 
3af7569d3a1e776fc2a3c1cec133b43105ea9c2e github.com/hashicorp/go-memdb cb9a474f84cc5e41b273b20c6927680b2a8776ad github.com/hashicorp/go-immutable-radix 826af9ccf0feeee615d546d69b11f8e98da8c8f1 git://github.com/tonistiigi/go-immutable-radix.git github.com/hashicorp/golang-lru 7f827b33c0f158ec5dfbba01bb0b14a4541fd81d # v0.5.3 github.com/coreos/pkg 97fdf19511ea361ae1c100dd393cc47f8dcfa1e1 # v4 code.cloudfoundry.org/clock 02e53af36e6c978af692887ed449b74026d76fec # v1.0.0 # prometheus github.com/prometheus/client_golang 6edbbd9e560190e318cdc5b4d3e630b442858380 # v1.6.0 github.com/beorn7/perks 37c8de3658fcb183f997c4e13e8337516ab753e6 # v1.0.1 github.com/prometheus/client_model 7bc5445566f0fe75b15de23e6b93886e982d7bf9 # v0.2.0 github.com/prometheus/common d978bcb1309602d68bb4ba69cf3f8ed900e07308 # v0.9.1 github.com/prometheus/procfs 46159f73e74d1cb8dc223deef9b2d049286f46b1 # v0.0.11 github.com/matttproud/golang_protobuf_extensions c12348ce28de40eed0136aa2b644d0ee0650e56c # v1.0.1 github.com/pkg/errors 614d223910a179a466c1767a985424175c39b465 # v0.9.1 github.com/grpc-ecosystem/go-grpc-prometheus c225b8c3b01faf2899099b768856a9e916e5087b # v1.2.0 github.com/cespare/xxhash/v2 d7df74196a9e781ede915320c11c378c1b2f3a1f # v2.1.1 # cli github.com/spf13/cobra 8380ddd3132bdf8fd77731725b550c181dda0aa8 # v1.1.3 github.com/spf13/pflag 2e9d26c8c37aae03e3f9d4e90b7116f5accb7cab # v1.0.5 github.com/inconshreveable/mousetrap 76626ae9c91c4f2a10f34cad8ce83ea42c93bb75 # v1.0.0 github.com/morikuni/aec 39771216ff4c63d11f5e604076f9c45e8be1067b # v1.0.0 # metrics github.com/docker/go-metrics b619b3592b65de4f087d9f16863a7e6ff905973c # v0.0.1 github.com/opencontainers/selinux 76bc82e11d854d3e40c08889d13c98abcea72ea2 # v1.8.2 github.com/bits-and-blooms/bitset 59de210119f50cedaa42d175dc88b6335fcf63f6 # v1.2.0 # archive/tar # rm -rf vendor/archive # mkdir -p ./vendor/archive # git clone -b go$GOLANG_VERSION --depth=1 git://github.com/golang/go.git ./go # git --git-dir ./go/.git --work-tree ./go am ../patches/0001-archive-tar-do-not-populate-user-group-names.patch # cp -a go/src/archive/tar ./vendor/archive/tar # rm -rf ./go # vndr -whitelist=^archive/tar # DO NOT EDIT BELOW THIS LINE -------- reserved for downstream projects --------
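For readers skimming the vendoring list above: each vendor.conf entry is a single line of the form `import-path revision [alternate-repo] [# comment]`, which the `vndr` tool (referenced in the archive/tar notes above) uses to pin dependencies; the `git://github.com/tonistiigi/go-immutable-radix.git` entry above shows the optional alternate-repo field. The sketch below is purely illustrative and not part of the PR, vndr, or moby/moby — the `entry` struct and `parseLine` helper are hypothetical names chosen for this example.

```go
// Minimal, illustrative sketch of splitting one vendor.conf line of the form
// "import-path revision [alt-repo] [# comment]". Not part of vndr or moby/moby.
package main

import (
	"fmt"
	"strings"
)

// entry is a hypothetical representation of one vendor.conf line.
type entry struct {
	importPath string
	revision   string
	altRepo    string // optional third field, e.g. a git:// mirror URL
	comment    string // trailing "# ..." note, typically the tag name
}

func parseLine(line string) (entry, bool) {
	var e entry
	// Anything after '#' is an informational comment (e.g. "# v1.0.1").
	if i := strings.Index(line, "#"); i >= 0 {
		e.comment = strings.TrimSpace(line[i+1:])
		line = line[:i]
	}
	fields := strings.Fields(line)
	if len(fields) < 2 {
		return entry{}, false // blank line or comment-only line
	}
	e.importPath, e.revision = fields[0], fields[1]
	if len(fields) > 2 {
		e.altRepo = fields[2]
	}
	return e, true
}

func main() {
	e, ok := parseLine("github.com/opencontainers/runc 4144b63817ebcc5b358fc2c8ef95f7cddd709aa7 # v1.0.1")
	fmt.Println(ok, e.importPath, e.revision, e.comment)
}
```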
thaJeztah
9674540ccff358c3cd84cc2f33c3503e0dab7fb7
12f1b3ce43fe4aea5a41750bcc20f2a7dd67dbfc
tagged v0.5.0 👍 https://github.com/moby/sys/releases/tag/signal%2Fv0.5.0
thaJeztah
4,567
moby/moby
42,614
updated names-generator.go for alphabetization
Re-ordered some entries so they are in proper alphabetical order.

<!-- Please make sure you've read and understood our contributing guidelines; https://github.com/moby/moby/blob/master/CONTRIBUTING.md ** Make sure all your commits include a signature generated with `git commit -s` ** For additional information on our contributing process, read our contributing guide https://docs.docker.com/opensource/code/ If this is a bug fix, make sure your description includes "fixes #xxxx", or "closes #xxxx" Please provide the following information: -->

**- What I did**

Alphabetized some entries

**- How I did it**

GitHub editor

**- How to verify it**

**- Description for the changelog**

<!-- Write a short (one line) summary that describes the changes in this pull request for inclusion in the changelog: -->

Updated entries in names-generator.go for proper alphabetization

**- A picture of a cute animal (not mandatory but encouraged)**
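The "How to verify it" section above was left empty. One way to check the ordering locally would be a small in-package test like the sketch below. It is hypothetical and not part of this PR or of moby/moby; it assumes a test file inside `package namesgenerator` so the unexported `right` array is visible. Because the PR only re-orders *some* entries, a strict check like this may still flag other names that remain out of alphabetical order.

```go
package namesgenerator

import (
	"sort"
	"testing"
)

// TestSurnamesAreSorted is a hypothetical verification sketch, not part of the PR.
// It reports any adjacent pair in the surname list that is out of alphabetical order.
func TestSurnamesAreSorted(t *testing.T) {
	names := right[:] // `right` is the unexported [...]string array in names-generator.go
	if sort.StringsAreSorted(names) {
		return
	}
	for i := 1; i < len(names); i++ {
		if names[i] < names[i-1] {
			t.Errorf("%q appears after %q but sorts before it", names[i], names[i-1])
		}
	}
}
```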
null
2021-07-09 19:48:56+00:00
2021-07-14 22:50:48+00:00
pkg/namesgenerator/names-generator.go
package namesgenerator // import "github.com/docker/docker/pkg/namesgenerator" import ( "fmt" "math/rand" ) var ( left = [...]string{ "admiring", "adoring", "affectionate", "agitated", "amazing", "angry", "awesome", "beautiful", "blissful", "bold", "boring", "brave", "busy", "charming", "clever", "cool", "compassionate", "competent", "condescending", "confident", "cranky", "crazy", "dazzling", "determined", "distracted", "dreamy", "eager", "ecstatic", "elastic", "elated", "elegant", "eloquent", "epic", "exciting", "fervent", "festive", "flamboyant", "focused", "friendly", "frosty", "funny", "gallant", "gifted", "goofy", "gracious", "great", "happy", "hardcore", "heuristic", "hopeful", "hungry", "infallible", "inspiring", "interesting", "intelligent", "jolly", "jovial", "keen", "kind", "laughing", "loving", "lucid", "magical", "mystifying", "modest", "musing", "naughty", "nervous", "nice", "nifty", "nostalgic", "objective", "optimistic", "peaceful", "pedantic", "pensive", "practical", "priceless", "quirky", "quizzical", "recursing", "relaxed", "reverent", "romantic", "sad", "serene", "sharp", "silly", "sleepy", "stoic", "strange", "stupefied", "suspicious", "sweet", "tender", "thirsty", "trusting", "unruffled", "upbeat", "vibrant", "vigilant", "vigorous", "wizardly", "wonderful", "xenodochial", "youthful", "zealous", "zen", } // Docker, starting from 0.7.x, generates names from notable scientists and hackers. // Please, for any amazing man that you add to the list, consider adding an equally amazing woman to it, and vice versa. right = [...]string{ // Muhammad ibn Jābir al-Ḥarrānī al-Battānī was a founding father of astronomy. https://en.wikipedia.org/wiki/Mu%E1%B8%A5ammad_ibn_J%C4%81bir_al-%E1%B8%A4arr%C4%81n%C4%AB_al-Batt%C4%81n%C4%AB "albattani", // Frances E. Allen, became the first female IBM Fellow in 1989. In 2006, she became the first female recipient of the ACM's Turing Award. https://en.wikipedia.org/wiki/Frances_E._Allen "allen", // June Almeida - Scottish virologist who took the first pictures of the rubella virus - https://en.wikipedia.org/wiki/June_Almeida "almeida", // Kathleen Antonelli, American computer programmer and one of the six original programmers of the ENIAC - https://en.wikipedia.org/wiki/Kathleen_Antonelli "antonelli", // Maria Gaetana Agnesi - Italian mathematician, philosopher, theologian and humanitarian. She was the first woman to write a mathematics handbook and the first woman appointed as a Mathematics Professor at a University. https://en.wikipedia.org/wiki/Maria_Gaetana_Agnesi "agnesi", // Archimedes was a physicist, engineer and mathematician who invented too many things to list them here. https://en.wikipedia.org/wiki/Archimedes "archimedes", // Maria Ardinghelli - Italian translator, mathematician and physicist - https://en.wikipedia.org/wiki/Maria_Ardinghelli "ardinghelli", // Aryabhata - Ancient Indian mathematician-astronomer during 476-550 CE https://en.wikipedia.org/wiki/Aryabhata "aryabhata", // Wanda Austin - Wanda Austin is the President and CEO of The Aerospace Corporation, a leading architect for the US security space programs. https://en.wikipedia.org/wiki/Wanda_Austin "austin", // Charles Babbage invented the concept of a programmable computer. https://en.wikipedia.org/wiki/Charles_Babbage. "babbage", // Stefan Banach - Polish mathematician, was one of the founders of modern functional analysis. https://en.wikipedia.org/wiki/Stefan_Banach "banach", // Buckaroo Banzai and his mentor Dr. 
Hikita perfected the "oscillation overthruster", a device that allows one to pass through solid matter. - https://en.wikipedia.org/wiki/The_Adventures_of_Buckaroo_Banzai_Across_the_8th_Dimension "banzai", // John Bardeen co-invented the transistor - https://en.wikipedia.org/wiki/John_Bardeen "bardeen", // Jean Bartik, born Betty Jean Jennings, was one of the original programmers for the ENIAC computer. https://en.wikipedia.org/wiki/Jean_Bartik "bartik", // Laura Bassi, the world's first female professor https://en.wikipedia.org/wiki/Laura_Bassi "bassi", // Hugh Beaver, British engineer, founder of the Guinness Book of World Records https://en.wikipedia.org/wiki/Hugh_Beaver "beaver", // Alexander Graham Bell - an eminent Scottish-born scientist, inventor, engineer and innovator who is credited with inventing the first practical telephone - https://en.wikipedia.org/wiki/Alexander_Graham_Bell "bell", // Karl Friedrich Benz - a German automobile engineer. Inventor of the first practical motorcar. https://en.wikipedia.org/wiki/Karl_Benz "benz", // Homi J Bhabha - was an Indian nuclear physicist, founding director, and professor of physics at the Tata Institute of Fundamental Research. Colloquially known as "father of Indian nuclear programme"- https://en.wikipedia.org/wiki/Homi_J._Bhabha "bhabha", // Bhaskara II - Ancient Indian mathematician-astronomer whose work on calculus predates Newton and Leibniz by over half a millennium - https://en.wikipedia.org/wiki/Bh%C4%81skara_II#Calculus "bhaskara", // Sue Black - British computer scientist and campaigner. She has been instrumental in saving Bletchley Park, the site of World War II codebreaking - https://en.wikipedia.org/wiki/Sue_Black_(computer_scientist) "black", // Elizabeth Helen Blackburn - Australian-American Nobel laureate; best known for co-discovering telomerase. https://en.wikipedia.org/wiki/Elizabeth_Blackburn "blackburn", // Elizabeth Blackwell - American doctor and first American woman to receive a medical degree - https://en.wikipedia.org/wiki/Elizabeth_Blackwell "blackwell", // Niels Bohr is the father of quantum theory. https://en.wikipedia.org/wiki/Niels_Bohr. "bohr", // Kathleen Booth, she's credited with writing the first assembly language. https://en.wikipedia.org/wiki/Kathleen_Booth "booth", // Anita Borg - Anita Borg was the founding director of the Institute for Women and Technology (IWT). https://en.wikipedia.org/wiki/Anita_Borg "borg", // Satyendra Nath Bose - He provided the foundation for Bose–Einstein statistics and the theory of the Bose–Einstein condensate. - https://en.wikipedia.org/wiki/Satyendra_Nath_Bose "bose", // Katherine Louise Bouman is an imaging scientist and Assistant Professor of Computer Science at the California Institute of Technology. She researches computational methods for imaging, and developed an algorithm that made possible the picture first visualization of a black hole using the Event Horizon Telescope. - https://en.wikipedia.org/wiki/Katie_Bouman "bouman", // Evelyn Boyd Granville - She was one of the first African-American woman to receive a Ph.D. in mathematics; she earned it in 1949 from Yale University. https://en.wikipedia.org/wiki/Evelyn_Boyd_Granville "boyd", // Brahmagupta - Ancient Indian mathematician during 598-670 CE who gave rules to compute with zero - https://en.wikipedia.org/wiki/Brahmagupta#Zero "brahmagupta", // Walter Houser Brattain co-invented the transistor - https://en.wikipedia.org/wiki/Walter_Houser_Brattain "brattain", // Emmett Brown invented time travel. 
https://en.wikipedia.org/wiki/Emmett_Brown (thanks Brian Goff) "brown", // Linda Brown Buck - American biologist and Nobel laureate best known for her genetic and molecular analyses of the mechanisms of smell. https://en.wikipedia.org/wiki/Linda_B._Buck "buck", // Dame Susan Jocelyn Bell Burnell - Northern Irish astrophysicist who discovered radio pulsars and was the first to analyse them. https://en.wikipedia.org/wiki/Jocelyn_Bell_Burnell "burnell", // Annie Jump Cannon - pioneering female astronomer who classified hundreds of thousands of stars and created the system we use to understand stars today. https://en.wikipedia.org/wiki/Annie_Jump_Cannon "cannon", // Rachel Carson - American marine biologist and conservationist, her book Silent Spring and other writings are credited with advancing the global environmental movement. https://en.wikipedia.org/wiki/Rachel_Carson "carson", // Dame Mary Lucy Cartwright - British mathematician who was one of the first to study what is now known as chaos theory. Also known for Cartwright's theorem which finds applications in signal processing. https://en.wikipedia.org/wiki/Mary_Cartwright "cartwright", // George Washington Carver - American agricultural scientist and inventor. He was the most prominent black scientist of the early 20th century. https://en.wikipedia.org/wiki/George_Washington_Carver "carver", // Vinton Gray Cerf - American Internet pioneer, recognised as one of "the fathers of the Internet". With Robert Elliot Kahn, he designed TCP and IP, the primary data communication protocols of the Internet and other computer networks. https://en.wikipedia.org/wiki/Vint_Cerf "cerf", // Subrahmanyan Chandrasekhar - Astrophysicist known for his mathematical theory on different stages and evolution in structures of the stars. He has won nobel prize for physics - https://en.wikipedia.org/wiki/Subrahmanyan_Chandrasekhar "chandrasekhar", // Sergey Alexeyevich Chaplygin (Russian: Серге́й Алексе́евич Чаплы́гин; April 5, 1869 – October 8, 1942) was a Russian and Soviet physicist, mathematician, and mechanical engineer. He is known for mathematical formulas such as Chaplygin's equation and for a hypothetical substance in cosmology called Chaplygin gas, named after him. https://en.wikipedia.org/wiki/Sergey_Chaplygin "chaplygin", // Émilie du Châtelet - French natural philosopher, mathematician, physicist, and author during the early 1730s, known for her translation of and commentary on Isaac Newton's book Principia containing basic laws of physics. https://en.wikipedia.org/wiki/%C3%89milie_du_Ch%C3%A2telet "chatelet", // Asima Chatterjee was an Indian organic chemist noted for her research on vinca alkaloids, development of drugs for treatment of epilepsy and malaria - https://en.wikipedia.org/wiki/Asima_Chatterjee "chatterjee", // Pafnuty Chebyshev - Russian mathematician. He is known fo his works on probability, statistics, mechanics, analytical geometry and number theory https://en.wikipedia.org/wiki/Pafnuty_Chebyshev "chebyshev", // Bram Cohen - American computer programmer and author of the BitTorrent peer-to-peer protocol. https://en.wikipedia.org/wiki/Bram_Cohen "cohen", // David Lee Chaum - American computer scientist and cryptographer. Known for his seminal contributions in the field of anonymous communication. https://en.wikipedia.org/wiki/David_Chaum "chaum", // Joan Clarke - Bletchley Park code breaker during the Second World War who pioneered techniques that remained top secret for decades. 
Also an accomplished numismatist https://en.wikipedia.org/wiki/Joan_Clarke "clarke", // Jane Colden - American botanist widely considered the first female American botanist - https://en.wikipedia.org/wiki/Jane_Colden "colden", // Gerty Theresa Cori - American biochemist who became the third woman—and first American woman—to win a Nobel Prize in science, and the first woman to be awarded the Nobel Prize in Physiology or Medicine. Cori was born in Prague. https://en.wikipedia.org/wiki/Gerty_Cori "cori", // Seymour Roger Cray was an American electrical engineer and supercomputer architect who designed a series of computers that were the fastest in the world for decades. https://en.wikipedia.org/wiki/Seymour_Cray "cray", // This entry reflects a husband and wife team who worked together: // Joan Curran was a Welsh scientist who developed radar and invented chaff, a radar countermeasure. https://en.wikipedia.org/wiki/Joan_Curran // Samuel Curran was an Irish physicist who worked alongside his wife during WWII and invented the proximity fuse. https://en.wikipedia.org/wiki/Samuel_Curran "curran", // Marie Curie discovered radioactivity. https://en.wikipedia.org/wiki/Marie_Curie. "curie", // Charles Darwin established the principles of natural evolution. https://en.wikipedia.org/wiki/Charles_Darwin. "darwin", // Leonardo Da Vinci invented too many things to list here. https://en.wikipedia.org/wiki/Leonardo_da_Vinci. "davinci", // A. K. (Alexander Keewatin) Dewdney, Canadian mathematician, computer scientist, author and filmmaker. Contributor to Scientific American's "Computer Recreations" from 1984 to 1991. Author of Core War (program), The Planiverse, The Armchair Universe, The Magic Machine, The New Turing Omnibus, and more. https://en.wikipedia.org/wiki/Alexander_Dewdney "dewdney", // Satish Dhawan - Indian mathematician and aerospace engineer, known for leading the successful and indigenous development of the Indian space programme. https://en.wikipedia.org/wiki/Satish_Dhawan "dhawan", // Bailey Whitfield Diffie - American cryptographer and one of the pioneers of public-key cryptography. https://en.wikipedia.org/wiki/Whitfield_Diffie "diffie", // Edsger Wybe Dijkstra was a Dutch computer scientist and mathematical scientist. https://en.wikipedia.org/wiki/Edsger_W._Dijkstra. "dijkstra", // Paul Adrien Maurice Dirac - English theoretical physicist who made fundamental contributions to the early development of both quantum mechanics and quantum electrodynamics. https://en.wikipedia.org/wiki/Paul_Dirac "dirac", // Agnes Meyer Driscoll - American cryptanalyst during World Wars I and II who successfully cryptanalysed a number of Japanese ciphers. She was also the co-developer of one of the cipher machines of the US Navy, the CM. https://en.wikipedia.org/wiki/Agnes_Meyer_Driscoll "driscoll", // Donna Dubinsky - played an integral role in the development of personal digital assistants (PDAs) serving as CEO of Palm, Inc. and co-founding Handspring. https://en.wikipedia.org/wiki/Donna_Dubinsky "dubinsky", // Annie Easley - She was a leading member of the team which developed software for the Centaur rocket stage and one of the first African-Americans in her field. https://en.wikipedia.org/wiki/Annie_Easley "easley", // Thomas Alva Edison, prolific inventor https://en.wikipedia.org/wiki/Thomas_Edison "edison", // Albert Einstein invented the general theory of relativity. 
https://en.wikipedia.org/wiki/Albert_Einstein "einstein", // Alexandra Asanovna Elbakyan (Russian: Алекса́ндра Аса́новна Элбакя́н) is a Kazakhstani graduate student, computer programmer, internet pirate in hiding, and the creator of the site Sci-Hub. Nature has listed her in 2016 in the top ten people that mattered in science, and Ars Technica has compared her to Aaron Swartz. - https://en.wikipedia.org/wiki/Alexandra_Elbakyan "elbakyan", // Taher A. ElGamal - Egyptian cryptographer best known for the ElGamal discrete log cryptosystem and the ElGamal digital signature scheme. https://en.wikipedia.org/wiki/Taher_Elgamal "elgamal", // Gertrude Elion - American biochemist, pharmacologist and the 1988 recipient of the Nobel Prize in Medicine - https://en.wikipedia.org/wiki/Gertrude_Elion "elion", // James Henry Ellis - British engineer and cryptographer employed by the GCHQ. Best known for conceiving for the first time, the idea of public-key cryptography. https://en.wikipedia.org/wiki/James_H._Ellis "ellis", // Douglas Engelbart gave the mother of all demos: https://en.wikipedia.org/wiki/Douglas_Engelbart "engelbart", // Euclid invented geometry. https://en.wikipedia.org/wiki/Euclid "euclid", // Leonhard Euler invented large parts of modern mathematics. https://de.wikipedia.org/wiki/Leonhard_Euler "euler", // Michael Faraday - British scientist who contributed to the study of electromagnetism and electrochemistry. https://en.wikipedia.org/wiki/Michael_Faraday "faraday", // Horst Feistel - German-born American cryptographer who was one of the earliest non-government researchers to study the design and theory of block ciphers. Co-developer of DES and Lucifer. Feistel networks, a symmetric structure used in the construction of block ciphers are named after him. https://en.wikipedia.org/wiki/Horst_Feistel "feistel", // Pierre de Fermat pioneered several aspects of modern mathematics. https://en.wikipedia.org/wiki/Pierre_de_Fermat "fermat", // Enrico Fermi invented the first nuclear reactor. https://en.wikipedia.org/wiki/Enrico_Fermi. "fermi", // Richard Feynman was a key contributor to quantum mechanics and particle physics. https://en.wikipedia.org/wiki/Richard_Feynman "feynman", // Benjamin Franklin is famous for his experiments in electricity and the invention of the lightning rod. "franklin", // Yuri Alekseyevich Gagarin - Soviet pilot and cosmonaut, best known as the first human to journey into outer space. https://en.wikipedia.org/wiki/Yuri_Gagarin "gagarin", // Galileo was a founding father of modern astronomy, and faced politics and obscurantism to establish scientific truth. https://en.wikipedia.org/wiki/Galileo_Galilei "galileo", // Évariste Galois - French mathematician whose work laid the foundations of Galois theory and group theory, two major branches of abstract algebra, and the subfield of Galois connections, all while still in his late teens. https://en.wikipedia.org/wiki/%C3%89variste_Galois "galois", // Kadambini Ganguly - Indian physician, known for being the first South Asian female physician, trained in western medicine, to graduate in South Asia. https://en.wikipedia.org/wiki/Kadambini_Ganguly "ganguly", // William Henry "Bill" Gates III is an American business magnate, philanthropist, investor, computer programmer, and inventor. 
https://en.wikipedia.org/wiki/Bill_Gates "gates", // Johann Carl Friedrich Gauss - German mathematician who made significant contributions to many fields, including number theory, algebra, statistics, analysis, differential geometry, geodesy, geophysics, mechanics, electrostatics, magnetic fields, astronomy, matrix theory, and optics. https://en.wikipedia.org/wiki/Carl_Friedrich_Gauss "gauss", // Marie-Sophie Germain - French mathematician, physicist and philosopher. Known for her work on elasticity theory, number theory and philosophy. https://en.wikipedia.org/wiki/Sophie_Germain "germain", // Adele Goldberg, was one of the designers and developers of the Smalltalk language. https://en.wikipedia.org/wiki/Adele_Goldberg_(computer_scientist) "goldberg", // Adele Goldstine, born Adele Katz, wrote the complete technical description for the first electronic digital computer, ENIAC. https://en.wikipedia.org/wiki/Adele_Goldstine "goldstine", // Shafi Goldwasser is a computer scientist known for creating theoretical foundations of modern cryptography. Winner of 2012 ACM Turing Award. https://en.wikipedia.org/wiki/Shafi_Goldwasser "goldwasser", // James Golick, all around gangster. "golick", // Jane Goodall - British primatologist, ethologist, and anthropologist who is considered to be the world's foremost expert on chimpanzees - https://en.wikipedia.org/wiki/Jane_Goodall "goodall", // Stephen Jay Gould was was an American paleontologist, evolutionary biologist, and historian of science. He is most famous for the theory of punctuated equilibrium - https://en.wikipedia.org/wiki/Stephen_Jay_Gould "gould", // Carolyn Widney Greider - American molecular biologist and joint winner of the 2009 Nobel Prize for Physiology or Medicine for the discovery of telomerase. https://en.wikipedia.org/wiki/Carol_W._Greider "greider", // Alexander Grothendieck - German-born French mathematician who became a leading figure in the creation of modern algebraic geometry. https://en.wikipedia.org/wiki/Alexander_Grothendieck "grothendieck", // Lois Haibt - American computer scientist, part of the team at IBM that developed FORTRAN - https://en.wikipedia.org/wiki/Lois_Haibt "haibt", // Margaret Hamilton - Director of the Software Engineering Division of the MIT Instrumentation Laboratory, which developed on-board flight software for the Apollo space program. https://en.wikipedia.org/wiki/Margaret_Hamilton_(scientist) "hamilton", // Caroline Harriet Haslett - English electrical engineer, electricity industry administrator and champion of women's rights. Co-author of British Standard 1363 that specifies AC power plugs and sockets used across the United Kingdom (which is widely considered as one of the safest designs). https://en.wikipedia.org/wiki/Caroline_Haslett "haslett", // Stephen Hawking pioneered the field of cosmology by combining general relativity and quantum mechanics. https://en.wikipedia.org/wiki/Stephen_Hawking "hawking", // Martin Edward Hellman - American cryptologist, best known for his invention of public-key cryptography in co-operation with Whitfield Diffie and Ralph Merkle. https://en.wikipedia.org/wiki/Martin_Hellman "hellman", // Werner Heisenberg was a founding father of quantum mechanics. https://en.wikipedia.org/wiki/Werner_Heisenberg "heisenberg", // Grete Hermann was a German philosopher noted for her philosophical work on the foundations of quantum mechanics. https://en.wikipedia.org/wiki/Grete_Hermann "hermann", // Caroline Lucretia Herschel - German astronomer and discoverer of several comets. 
https://en.wikipedia.org/wiki/Caroline_Herschel "herschel", // Heinrich Rudolf Hertz - German physicist who first conclusively proved the existence of the electromagnetic waves. https://en.wikipedia.org/wiki/Heinrich_Hertz "hertz", // Jaroslav Heyrovský was the inventor of the polarographic method, father of the electroanalytical method, and recipient of the Nobel Prize in 1959. His main field of work was polarography. https://en.wikipedia.org/wiki/Jaroslav_Heyrovsk%C3%BD "heyrovsky", // Dorothy Hodgkin was a British biochemist, credited with the development of protein crystallography. She was awarded the Nobel Prize in Chemistry in 1964. https://en.wikipedia.org/wiki/Dorothy_Hodgkin "hodgkin", // Douglas R. Hofstadter is an American professor of cognitive science and author of the Pulitzer Prize and American Book Award-winning work Goedel, Escher, Bach: An Eternal Golden Braid in 1979. A mind-bending work which coined Hofstadter's Law: "It always takes longer than you expect, even when you take into account Hofstadter's Law." https://en.wikipedia.org/wiki/Douglas_Hofstadter "hofstadter", // Erna Schneider Hoover revolutionized modern communication by inventing a computerized telephone switching method. https://en.wikipedia.org/wiki/Erna_Schneider_Hoover "hoover", // Grace Hopper developed the first compiler for a computer programming language and is credited with popularizing the term "debugging" for fixing computer glitches. https://en.wikipedia.org/wiki/Grace_Hopper "hopper", // Frances Hugle, she was an American scientist, engineer, and inventor who contributed to the understanding of semiconductors, integrated circuitry, and the unique electrical principles of microscopic materials. https://en.wikipedia.org/wiki/Frances_Hugle "hugle", // Hypatia - Greek Alexandrine Neoplatonist philosopher in Egypt who was one of the earliest mothers of mathematics - https://en.wikipedia.org/wiki/Hypatia "hypatia", // Teruko Ishizaka - Japanese scientist and immunologist who co-discovered the antibody class Immunoglobulin E. https://en.wikipedia.org/wiki/Teruko_Ishizaka "ishizaka", // Mary Jackson, American mathematician and aerospace engineer who earned the highest title within NASA's engineering department - https://en.wikipedia.org/wiki/Mary_Jackson_(engineer) "jackson", // Yeong-Sil Jang was a Korean scientist and astronomer during the Joseon Dynasty; he invented the first metal printing press and water gauge. https://en.wikipedia.org/wiki/Jang_Yeong-sil "jang", // Mae Carol Jemison - is an American engineer, physician, and former NASA astronaut. She became the first black woman to travel in space when she served as a mission specialist aboard the Space Shuttle Endeavour - https://en.wikipedia.org/wiki/Mae_Jemison "jemison", // Betty Jennings - one of the original programmers of the ENIAC. https://en.wikipedia.org/wiki/ENIAC - https://en.wikipedia.org/wiki/Jean_Bartik "jennings", // Mary Lou Jepsen, was the founder and chief technology officer of One Laptop Per Child (OLPC), and the founder of Pixel Qi. https://en.wikipedia.org/wiki/Mary_Lou_Jepsen "jepsen", // Katherine Coleman Goble Johnson - American physicist and mathematician contributed to the NASA. https://en.wikipedia.org/wiki/Katherine_Johnson "johnson", // Irène Joliot-Curie - French scientist who was awarded the Nobel Prize for Chemistry in 1935. Daughter of Marie and Pierre Curie. 
https://en.wikipedia.org/wiki/Ir%C3%A8ne_Joliot-Curie "joliot", // Karen Spärck Jones came up with the concept of inverse document frequency, which is used in most search engines today. https://en.wikipedia.org/wiki/Karen_Sp%C3%A4rck_Jones "jones", // A. P. J. Abdul Kalam - is an Indian scientist aka Missile Man of India for his work on the development of ballistic missile and launch vehicle technology - https://en.wikipedia.org/wiki/A._P._J._Abdul_Kalam "kalam", // Sergey Petrovich Kapitsa (Russian: Серге́й Петро́вич Капи́ца; 14 February 1928 – 14 August 2012) was a Russian physicist and demographer. He was best known as host of the popular and long-running Russian scientific TV show, Evident, but Incredible. His father was the Nobel laureate Soviet-era physicist Pyotr Kapitsa, and his brother was the geographer and Antarctic explorer Andrey Kapitsa. - https://en.wikipedia.org/wiki/Sergey_Kapitsa "kapitsa", // Susan Kare, created the icons and many of the interface elements for the original Apple Macintosh in the 1980s, and was an original employee of NeXT, working as the Creative Director. https://en.wikipedia.org/wiki/Susan_Kare "kare", // Mstislav Keldysh - a Soviet scientist in the field of mathematics and mechanics, academician of the USSR Academy of Sciences (1946), President of the USSR Academy of Sciences (1961–1975), three times Hero of Socialist Labor (1956, 1961, 1971), fellow of the Royal Society of Edinburgh (1968). https://en.wikipedia.org/wiki/Mstislav_Keldysh "keldysh", // Mary Kenneth Keller, Sister Mary Kenneth Keller became the first American woman to earn a PhD in Computer Science in 1965. https://en.wikipedia.org/wiki/Mary_Kenneth_Keller "keller", // Johannes Kepler, German astronomer known for his three laws of planetary motion - https://en.wikipedia.org/wiki/Johannes_Kepler "kepler", // Omar Khayyam - Persian mathematician, astronomer and poet. Known for his work on the classification and solution of cubic equations, for his contribution to the understanding of Euclid's fifth postulate and for computing the length of a year very accurately. https://en.wikipedia.org/wiki/Omar_Khayyam "khayyam", // Har Gobind Khorana - Indian-American biochemist who shared the 1968 Nobel Prize for Physiology - https://en.wikipedia.org/wiki/Har_Gobind_Khorana "khorana", // Jack Kilby invented silicon integrated circuits and gave Silicon Valley its name. - https://en.wikipedia.org/wiki/Jack_Kilby "kilby", // Maria Kirch - German astronomer and first woman to discover a comet - https://en.wikipedia.org/wiki/Maria_Margarethe_Kirch "kirch", // Donald Knuth - American computer scientist, author of "The Art of Computer Programming" and creator of the TeX typesetting system. https://en.wikipedia.org/wiki/Donald_Knuth "knuth", // Sophie Kowalevski - Russian mathematician responsible for important original contributions to analysis, differential equations and mechanics - https://en.wikipedia.org/wiki/Sofia_Kovalevskaya "kowalevski", // Marie-Jeanne de Lalande - French astronomer, mathematician and cataloguer of stars - https://en.wikipedia.org/wiki/Marie-Jeanne_de_Lalande "lalande", // Hedy Lamarr - Actress and inventor. The principles of her work are now incorporated into modern Wi-Fi, CDMA and Bluetooth technology. https://en.wikipedia.org/wiki/Hedy_Lamarr "lamarr", // Leslie B. Lamport - American computer scientist. Lamport is best known for his seminal work in distributed systems and was the winner of the 2013 Turing Award. 
https://en.wikipedia.org/wiki/Leslie_Lamport "lamport", // Mary Leakey - British paleoanthropologist who discovered the first fossilized Proconsul skull - https://en.wikipedia.org/wiki/Mary_Leakey "leakey", // Henrietta Swan Leavitt - she was an American astronomer who discovered the relation between the luminosity and the period of Cepheid variable stars. https://en.wikipedia.org/wiki/Henrietta_Swan_Leavitt "leavitt", // Esther Miriam Zimmer Lederberg - American microbiologist and a pioneer of bacterial genetics. https://en.wikipedia.org/wiki/Esther_Lederberg "lederberg", // Inge Lehmann - Danish seismologist and geophysicist. Known for discovering in 1936 that the Earth has a solid inner core inside a molten outer core. https://en.wikipedia.org/wiki/Inge_Lehmann "lehmann", // Daniel Lewin - Mathematician, Akamai co-founder, soldier, 9/11 victim-- Developed optimization techniques for routing traffic on the internet. Died attempting to stop the 9-11 hijackers. https://en.wikipedia.org/wiki/Daniel_Lewin "lewin", // Ruth Lichterman - one of the original programmers of the ENIAC. https://en.wikipedia.org/wiki/ENIAC - https://en.wikipedia.org/wiki/Ruth_Teitelbaum "lichterman", // Barbara Liskov - co-developed the Liskov substitution principle. Liskov was also the winner of the Turing Prize in 2008. - https://en.wikipedia.org/wiki/Barbara_Liskov "liskov", // Ada Lovelace invented the first algorithm. https://en.wikipedia.org/wiki/Ada_Lovelace (thanks James Turnbull) "lovelace", // Auguste and Louis Lumière - the first filmmakers in history - https://en.wikipedia.org/wiki/Auguste_and_Louis_Lumi%C3%A8re "lumiere", // Mahavira - Ancient Indian mathematician during 9th century AD who discovered basic algebraic identities - https://en.wikipedia.org/wiki/Mah%C4%81v%C4%ABra_(mathematician) "mahavira", // Lynn Margulis (b. Lynn Petra Alexander) - an American evolutionary theorist and biologist, science author, educator, and popularizer, and was the primary modern proponent for the significance of symbiosis in evolution. - https://en.wikipedia.org/wiki/Lynn_Margulis "margulis", // Yukihiro Matsumoto - Japanese computer scientist and software programmer best known as the chief designer of the Ruby programming language. https://en.wikipedia.org/wiki/Yukihiro_Matsumoto "matsumoto", // James Clerk Maxwell - Scottish physicist, best known for his formulation of electromagnetic theory. https://en.wikipedia.org/wiki/James_Clerk_Maxwell "maxwell", // Maria Mayer - American theoretical physicist and Nobel laureate in Physics for proposing the nuclear shell model of the atomic nucleus - https://en.wikipedia.org/wiki/Maria_Mayer "mayer", // John McCarthy invented LISP: https://en.wikipedia.org/wiki/John_McCarthy_(computer_scientist) "mccarthy", // Barbara McClintock - a distinguished American cytogeneticist, 1983 Nobel Laureate in Physiology or Medicine for discovering transposons. https://en.wikipedia.org/wiki/Barbara_McClintock "mcclintock", // Anne Laura Dorinthea McLaren - British developmental biologist whose work helped lead to human in-vitro fertilisation. https://en.wikipedia.org/wiki/Anne_McLaren "mclaren", // Malcolm McLean invented the modern shipping container: https://en.wikipedia.org/wiki/Malcom_McLean "mclean", // Kay McNulty - one of the original programmers of the ENIAC. https://en.wikipedia.org/wiki/ENIAC - https://en.wikipedia.org/wiki/Kathleen_Antonelli "mcnulty", // Gregor Johann Mendel - Czech scientist and founder of genetics. 
https://en.wikipedia.org/wiki/Gregor_Mendel "mendel", // Dmitri Mendeleev - a chemist and inventor. He formulated the Periodic Law, created a farsighted version of the periodic table of elements, and used it to correct the properties of some already discovered elements and also to predict the properties of eight elements yet to be discovered. https://en.wikipedia.org/wiki/Dmitri_Mendeleev "mendeleev", // Lise Meitner - Austrian/Swedish physicist who was involved in the discovery of nuclear fission. The element meitnerium is named after her - https://en.wikipedia.org/wiki/Lise_Meitner "meitner", // Carla Meninsky, was the game designer and programmer for Atari 2600 games Dodge 'Em and Warlords. https://en.wikipedia.org/wiki/Carla_Meninsky "meninsky", // Ralph C. Merkle - American computer scientist, known for devising Merkle's puzzles - one of the very first schemes for public-key cryptography. Also, inventor of Merkle trees and co-inventor of the Merkle-Damgård construction for building collision-resistant cryptographic hash functions and the Merkle-Hellman knapsack cryptosystem. https://en.wikipedia.org/wiki/Ralph_Merkle "merkle", // Johanna Mestorf - German prehistoric archaeologist and first female museum director in Germany - https://en.wikipedia.org/wiki/Johanna_Mestorf "mestorf", // Maryam Mirzakhani - an Iranian mathematician and the first woman to win the Fields Medal. https://en.wikipedia.org/wiki/Maryam_Mirzakhani "mirzakhani", // Rita Levi-Montalcini - Won Nobel Prize in Physiology or Medicine jointly with colleague Stanley Cohen for the discovery of nerve growth factor (https://en.wikipedia.org/wiki/Rita_Levi-Montalcini) "montalcini", // Gordon Earle Moore - American engineer, Silicon Valley founding father, author of Moore's law. https://en.wikipedia.org/wiki/Gordon_Moore "moore", // Samuel Morse - contributed to the invention of a single-wire telegraph system based on European telegraphs and was a co-developer of the Morse code - https://en.wikipedia.org/wiki/Samuel_Morse "morse", // Ian Murdock - founder of the Debian project - https://en.wikipedia.org/wiki/Ian_Murdock "murdock", // May-Britt Moser - Nobel prize winner neuroscientist who contributed to the discovery of grid cells in the brain. https://en.wikipedia.org/wiki/May-Britt_Moser "moser", // John Napier of Merchiston - Scottish landowner known as an astronomer, mathematician and physicist. Best known for his discovery of logarithms. https://en.wikipedia.org/wiki/John_Napier "napier", // John Forbes Nash, Jr. - American mathematician who made fundamental contributions to game theory, differential geometry, and the study of partial differential equations. https://en.wikipedia.org/wiki/John_Forbes_Nash_Jr. "nash", // John von Neumann - todays computer architectures are based on the von Neumann architecture. https://en.wikipedia.org/wiki/Von_Neumann_architecture "neumann", // Isaac Newton invented classic mechanics and modern optics. https://en.wikipedia.org/wiki/Isaac_Newton "newton", // Florence Nightingale, more prominently known as a nurse, was also the first female member of the Royal Statistical Society and a pioneer in statistical graphics https://en.wikipedia.org/wiki/Florence_Nightingale#Statistics_and_sanitary_reform "nightingale", // Alfred Nobel - a Swedish chemist, engineer, innovator, and armaments manufacturer (inventor of dynamite) - https://en.wikipedia.org/wiki/Alfred_Nobel "nobel", // Emmy Noether, German mathematician. Noether's Theorem is named after her. 
https://en.wikipedia.org/wiki/Emmy_Noether "noether", // Poppy Northcutt. Poppy Northcutt was the first woman to work as part of NASA’s Mission Control. http://www.businessinsider.com/poppy-northcutt-helped-apollo-astronauts-2014-12?op=1 "northcutt", // Robert Noyce invented silicon integrated circuits and gave Silicon Valley its name. - https://en.wikipedia.org/wiki/Robert_Noyce "noyce", // Panini - Ancient Indian linguist and grammarian from 4th century CE who worked on the world's first formal system - https://en.wikipedia.org/wiki/P%C4%81%E1%B9%87ini#Comparison_with_modern_formal_systems "panini", // Ambroise Pare invented modern surgery. https://en.wikipedia.org/wiki/Ambroise_Par%C3%A9 "pare", // Blaise Pascal, French mathematician, physicist, and inventor - https://en.wikipedia.org/wiki/Blaise_Pascal "pascal", // Louis Pasteur discovered vaccination, fermentation and pasteurization. https://en.wikipedia.org/wiki/Louis_Pasteur. "pasteur", // Cecilia Payne-Gaposchkin was an astronomer and astrophysicist who, in 1925, proposed in her Ph.D. thesis an explanation for the composition of stars in terms of the relative abundances of hydrogen and helium. https://en.wikipedia.org/wiki/Cecilia_Payne-Gaposchkin "payne", // Radia Perlman is a software designer and network engineer and most famous for her invention of the spanning-tree protocol (STP). https://en.wikipedia.org/wiki/Radia_Perlman "perlman", // Rob Pike was a key contributor to Unix, Plan 9, the X graphic system, utf-8, and the Go programming language. https://en.wikipedia.org/wiki/Rob_Pike "pike", // Henri Poincaré made fundamental contributions in several fields of mathematics. https://en.wikipedia.org/wiki/Henri_Poincar%C3%A9 "poincare", // Laura Poitras is a director and producer whose work, made possible by open source crypto tools, advances the causes of truth and freedom of information by reporting disclosures by whistleblowers such as Edward Snowden. https://en.wikipedia.org/wiki/Laura_Poitras "poitras", // Tat’yana Avenirovna Proskuriakova (Russian: Татья́на Авени́ровна Проскуряко́ва) (January 23 [O.S. January 10] 1909 – August 30, 1985) was a Russian-American Mayanist scholar and archaeologist who contributed significantly to the deciphering of Maya hieroglyphs, the writing system of the pre-Columbian Maya civilization of Mesoamerica. https://en.wikipedia.org/wiki/Tatiana_Proskouriakoff "proskuriakova", // Claudius Ptolemy - a Greco-Egyptian writer of Alexandria, known as a mathematician, astronomer, geographer, astrologer, and poet of a single epigram in the Greek Anthology - https://en.wikipedia.org/wiki/Ptolemy "ptolemy", // C. V. Raman - Indian physicist who won the Nobel Prize in 1930 for proposing the Raman effect. - https://en.wikipedia.org/wiki/C._V._Raman "raman", // Srinivasa Ramanujan - Indian mathematician and autodidact who made extraordinary contributions to mathematical analysis, number theory, infinite series, and continued fractions. - https://en.wikipedia.org/wiki/Srinivasa_Ramanujan "ramanujan", // Sally Kristen Ride was an American physicist and astronaut. She was the first American woman in space, and the youngest American astronaut. https://en.wikipedia.org/wiki/Sally_Ride "ride", // Dennis Ritchie - co-creator of UNIX and the C programming language. - https://en.wikipedia.org/wiki/Dennis_Ritchie "ritchie", // Ida Rhodes - American pioneer in computer programming, designed the first computer used for Social Security. 
https://en.wikipedia.org/wiki/Ida_Rhodes "rhodes", // Julia Hall Bowman Robinson - American mathematician renowned for her contributions to the fields of computability theory and computational complexity theory. https://en.wikipedia.org/wiki/Julia_Robinson "robinson", // Wilhelm Conrad Röntgen - German physicist who was awarded the first Nobel Prize in Physics in 1901 for the discovery of X-rays (Röntgen rays). https://en.wikipedia.org/wiki/Wilhelm_R%C3%B6ntgen "roentgen", // Rosalind Franklin - British biophysicist and X-ray crystallographer whose research was critical to the understanding of DNA - https://en.wikipedia.org/wiki/Rosalind_Franklin "rosalind", // Vera Rubin - American astronomer who pioneered work on galaxy rotation rates. https://en.wikipedia.org/wiki/Vera_Rubin "rubin", // Meghnad Saha - Indian astrophysicist best known for his development of the Saha equation, used to describe chemical and physical conditions in stars - https://en.wikipedia.org/wiki/Meghnad_Saha "saha", // Jean E. Sammet developed FORMAC, the first widely used computer language for symbolic manipulation of mathematical formulas. https://en.wikipedia.org/wiki/Jean_E._Sammet "sammet", // Mildred Sanderson - American mathematician best known for Sanderson's theorem concerning modular invariants. https://en.wikipedia.org/wiki/Mildred_Sanderson "sanderson", // Satoshi Nakamoto is the name used by the unknown person or group of people who developed bitcoin, authored the bitcoin white paper, and created and deployed bitcoin's original reference implementation. https://en.wikipedia.org/wiki/Satoshi_Nakamoto "satoshi", // Adi Shamir - Israeli cryptographer whose numerous inventions and contributions to cryptography include the Ferge Fiat Shamir identification scheme, the Rivest Shamir Adleman (RSA) public-key cryptosystem, the Shamir's secret sharing scheme, the breaking of the Merkle-Hellman cryptosystem, the TWINKLE and TWIRL factoring devices and the discovery of differential cryptanalysis (with Eli Biham). https://en.wikipedia.org/wiki/Adi_Shamir "shamir", // Claude Shannon - The father of information theory and founder of digital circuit design theory. (https://en.wikipedia.org/wiki/Claude_Shannon) "shannon", // Carol Shaw - Originally an Atari employee, Carol Shaw is said to be the first female video game designer. https://en.wikipedia.org/wiki/Carol_Shaw_(video_game_designer) "shaw", // Dame Stephanie "Steve" Shirley - Founded a software company in 1962 employing women working from home. https://en.wikipedia.org/wiki/Steve_Shirley "shirley", // William Shockley co-invented the transistor - https://en.wikipedia.org/wiki/William_Shockley "shockley", // Lina Solomonovna Stern (or Shtern; Russian: Лина Соломоновна Штерн; 26 August 1878 – 7 March 1968) was a Soviet biochemist, physiologist and humanist whose medical discoveries saved thousands of lives at the fronts of World War II. She is best known for her pioneering work on blood–brain barrier, which she described as hemato-encephalic barrier in 1921. https://en.wikipedia.org/wiki/Lina_Stern "shtern", // Françoise Barré-Sinoussi - French virologist and Nobel Prize Laureate in Physiology or Medicine; her work was fundamental in identifying HIV as the cause of AIDS. https://en.wikipedia.org/wiki/Fran%C3%A7oise_Barr%C3%A9-Sinoussi "sinoussi", // Betty Snyder - one of the original programmers of the ENIAC. 
https://en.wikipedia.org/wiki/ENIAC - https://en.wikipedia.org/wiki/Betty_Holberton "snyder", // Cynthia Solomon - Pioneer in the fields of artificial intelligence, computer science and educational computing. Known for creation of Logo, an educational programming language. https://en.wikipedia.org/wiki/Cynthia_Solomon "solomon", // Frances Spence - one of the original programmers of the ENIAC. https://en.wikipedia.org/wiki/ENIAC - https://en.wikipedia.org/wiki/Frances_Spence "spence", // Michael Stonebraker is a database research pioneer and architect of Ingres, Postgres, VoltDB and SciDB. Winner of 2014 ACM Turing Award. https://en.wikipedia.org/wiki/Michael_Stonebraker "stonebraker", // Ivan Edward Sutherland - American computer scientist and Internet pioneer, widely regarded as the father of computer graphics. https://en.wikipedia.org/wiki/Ivan_Sutherland "sutherland", // Janese Swanson (with others) developed the first of the Carmen Sandiego games. She went on to found Girl Tech. https://en.wikipedia.org/wiki/Janese_Swanson "swanson", // Aaron Swartz was influential in creating RSS, Markdown, Creative Commons, Reddit, and much of the internet as we know it today. He was devoted to freedom of information on the web. https://en.wikiquote.org/wiki/Aaron_Swartz "swartz", // Bertha Swirles was a theoretical physicist who made a number of contributions to early quantum theory. https://en.wikipedia.org/wiki/Bertha_Swirles "swirles", // Helen Brooke Taussig - American cardiologist and founder of the field of paediatric cardiology. https://en.wikipedia.org/wiki/Helen_B._Taussig "taussig", // Valentina Tereshkova is a Russian engineer, cosmonaut and politician. She was the first woman to fly to space in 1963. In 2013, at the age of 76, she offered to go on a one-way mission to Mars. https://en.wikipedia.org/wiki/Valentina_Tereshkova "tereshkova", // Nikola Tesla invented the AC electric system and every gadget ever used by a James Bond villain. https://en.wikipedia.org/wiki/Nikola_Tesla "tesla", // Marie Tharp - American geologist and oceanic cartographer who co-created the first scientific map of the Atlantic Ocean floor. Her work led to the acceptance of the theories of plate tectonics and continental drift. https://en.wikipedia.org/wiki/Marie_Tharp "tharp", // Ken Thompson - co-creator of UNIX and the C programming language - https://en.wikipedia.org/wiki/Ken_Thompson "thompson", // Linus Torvalds invented Linux and Git. https://en.wikipedia.org/wiki/Linus_Torvalds "torvalds", // Youyou Tu - Chinese pharmaceutical chemist and educator known for discovering artemisinin and dihydroartemisinin, used to treat malaria, which has saved millions of lives. Joint winner of the 2015 Nobel Prize in Physiology or Medicine. https://en.wikipedia.org/wiki/Tu_Youyou "tu", // Alan Turing was a founding father of computer science. https://en.wikipedia.org/wiki/Alan_Turing. "turing", // Varahamihira - Ancient Indian mathematician who discovered trigonometric formulae during 505-587 CE - https://en.wikipedia.org/wiki/Var%C4%81hamihira#Contributions "varahamihira", // Dorothy Vaughan was a NASA mathematician and computer programmer on the SCOUT launch vehicle program that put America's first satellites into space - https://en.wikipedia.org/wiki/Dorothy_Vaughan "vaughan", // Sir Mokshagundam Visvesvaraya - is a notable Indian engineer. He is a recipient of the Indian Republic's highest honour, the Bharat Ratna, in 1955. 
On his birthday, 15 September is celebrated as Engineer's Day in India in his memory - https://en.wikipedia.org/wiki/Visvesvaraya "visvesvaraya", // Christiane Nüsslein-Volhard - German biologist, won Nobel Prize in Physiology or Medicine in 1995 for research on the genetic control of embryonic development. https://en.wikipedia.org/wiki/Christiane_N%C3%BCsslein-Volhard "volhard", // Cédric Villani - French mathematician, won Fields Medal, Fermat Prize and Poincaré Price for his work in differential geometry and statistical mechanics. https://en.wikipedia.org/wiki/C%C3%A9dric_Villani "villani", // Marlyn Wescoff - one of the original programmers of the ENIAC. https://en.wikipedia.org/wiki/ENIAC - https://en.wikipedia.org/wiki/Marlyn_Meltzer "wescoff", // Sylvia B. Wilbur - British computer scientist who helped develop the ARPANET, was one of the first to exchange email in the UK and a leading researcher in computer-supported collaborative work. https://en.wikipedia.org/wiki/Sylvia_Wilbur "wilbur", // Andrew Wiles - Notable British mathematician who proved the enigmatic Fermat's Last Theorem - https://en.wikipedia.org/wiki/Andrew_Wiles "wiles", // Roberta Williams, did pioneering work in graphical adventure games for personal computers, particularly the King's Quest series. https://en.wikipedia.org/wiki/Roberta_Williams "williams", // Malcolm John Williamson - British mathematician and cryptographer employed by the GCHQ. Developed in 1974 what is now known as Diffie-Hellman key exchange (Diffie and Hellman first published the scheme in 1976). https://en.wikipedia.org/wiki/Malcolm_J._Williamson "williamson", // Sophie Wilson designed the first Acorn Micro-Computer and the instruction set for ARM processors. https://en.wikipedia.org/wiki/Sophie_Wilson "wilson", // Jeannette Wing - co-developed the Liskov substitution principle. - https://en.wikipedia.org/wiki/Jeannette_Wing "wing", // Steve Wozniak invented the Apple I and Apple II. https://en.wikipedia.org/wiki/Steve_Wozniak "wozniak", // The Wright brothers, Orville and Wilbur - credited with inventing and building the world's first successful airplane and making the first controlled, powered and sustained heavier-than-air human flight - https://en.wikipedia.org/wiki/Wright_brothers "wright", // Chien-Shiung Wu - Chinese-American experimental physicist who made significant contributions to nuclear physics. https://en.wikipedia.org/wiki/Chien-Shiung_Wu "wu", // Rosalyn Sussman Yalow - Rosalyn Sussman Yalow was an American medical physicist, and a co-winner of the 1977 Nobel Prize in Physiology or Medicine for development of the radioimmunoassay technique. https://en.wikipedia.org/wiki/Rosalyn_Sussman_Yalow "yalow", // Ada Yonath - an Israeli crystallographer, the first woman from the Middle East to win a Nobel prize in the sciences. https://en.wikipedia.org/wiki/Ada_Yonath "yonath", // Nikolay Yegorovich Zhukovsky (Russian: Никола́й Его́рович Жуко́вский, January 17 1847 – March 17, 1921) was a Russian scientist, mathematician and engineer, and a founding father of modern aero- and hydrodynamics. Whereas contemporary scientists scoffed at the idea of human flight, Zhukovsky was the first to undertake the study of airflow. He is often called the Father of Russian Aviation. https://en.wikipedia.org/wiki/Nikolay_Yegorovich_Zhukovsky "zhukovsky", } ) // GetRandomName generates a random name from the list of adjectives and surnames in this package // formatted as "adjective_surname". For example 'focused_turing'. 
If retry is non-zero, a random // integer between 0 and 10 will be added to the end of the name, e.g `focused_turing3` func GetRandomName(retry int) string { begin: name := fmt.Sprintf("%s_%s", left[rand.Intn(len(left))], right[rand.Intn(len(right))]) //nolint:gosec // G404: Use of weak random number generator (math/rand instead of crypto/rand) if name == "boring_wozniak" /* Steve Wozniak is not boring */ { goto begin } if retry > 0 { name = fmt.Sprintf("%s%d", name, rand.Intn(10)) //nolint:gosec // G404: Use of weak random number generator (math/rand instead of crypto/rand) } return name }
package namesgenerator // import "github.com/docker/docker/pkg/namesgenerator" import ( "fmt" "math/rand" ) var ( left = [...]string{ "admiring", "adoring", "affectionate", "agitated", "amazing", "angry", "awesome", "beautiful", "blissful", "bold", "boring", "brave", "busy", "charming", "clever", "cool", "compassionate", "competent", "condescending", "confident", "cranky", "crazy", "dazzling", "determined", "distracted", "dreamy", "eager", "ecstatic", "elastic", "elated", "elegant", "eloquent", "epic", "exciting", "fervent", "festive", "flamboyant", "focused", "friendly", "frosty", "funny", "gallant", "gifted", "goofy", "gracious", "great", "happy", "hardcore", "heuristic", "hopeful", "hungry", "infallible", "inspiring", "interesting", "intelligent", "jolly", "jovial", "keen", "kind", "laughing", "loving", "lucid", "magical", "mystifying", "modest", "musing", "naughty", "nervous", "nice", "nifty", "nostalgic", "objective", "optimistic", "peaceful", "pedantic", "pensive", "practical", "priceless", "quirky", "quizzical", "recursing", "relaxed", "reverent", "romantic", "sad", "serene", "sharp", "silly", "sleepy", "stoic", "strange", "stupefied", "suspicious", "sweet", "tender", "thirsty", "trusting", "unruffled", "upbeat", "vibrant", "vigilant", "vigorous", "wizardly", "wonderful", "xenodochial", "youthful", "zealous", "zen", } // Docker, starting from 0.7.x, generates names from notable scientists and hackers. // Please, for any amazing man that you add to the list, consider adding an equally amazing woman to it, and vice versa. right = [...]string{ // Maria Gaetana Agnesi - Italian mathematician, philosopher, theologian and humanitarian. She was the first woman to write a mathematics handbook and the first woman appointed as a Mathematics Professor at a University. https://en.wikipedia.org/wiki/Maria_Gaetana_Agnesi "agnesi", // Muhammad ibn Jābir al-Ḥarrānī al-Battānī was a founding father of astronomy. https://en.wikipedia.org/wiki/Mu%E1%B8%A5ammad_ibn_J%C4%81bir_al-%E1%B8%A4arr%C4%81n%C4%AB_al-Batt%C4%81n%C4%AB "albattani", // Frances E. Allen, became the first female IBM Fellow in 1989. In 2006, she became the first female recipient of the ACM's Turing Award. https://en.wikipedia.org/wiki/Frances_E._Allen "allen", // June Almeida - Scottish virologist who took the first pictures of the rubella virus - https://en.wikipedia.org/wiki/June_Almeida "almeida", // Kathleen Antonelli, American computer programmer and one of the six original programmers of the ENIAC - https://en.wikipedia.org/wiki/Kathleen_Antonelli "antonelli", // Archimedes was a physicist, engineer and mathematician who invented too many things to list them here. https://en.wikipedia.org/wiki/Archimedes "archimedes", // Maria Ardinghelli - Italian translator, mathematician and physicist - https://en.wikipedia.org/wiki/Maria_Ardinghelli "ardinghelli", // Aryabhata - Ancient Indian mathematician-astronomer during 476-550 CE https://en.wikipedia.org/wiki/Aryabhata "aryabhata", // Wanda Austin - Wanda Austin is the President and CEO of The Aerospace Corporation, a leading architect for the US security space programs. https://en.wikipedia.org/wiki/Wanda_Austin "austin", // Charles Babbage invented the concept of a programmable computer. https://en.wikipedia.org/wiki/Charles_Babbage. "babbage", // Stefan Banach - Polish mathematician, was one of the founders of modern functional analysis. https://en.wikipedia.org/wiki/Stefan_Banach "banach", // Buckaroo Banzai and his mentor Dr. 
Hikita perfected the "oscillation overthruster", a device that allows one to pass through solid matter. - https://en.wikipedia.org/wiki/The_Adventures_of_Buckaroo_Banzai_Across_the_8th_Dimension "banzai", // John Bardeen co-invented the transistor - https://en.wikipedia.org/wiki/John_Bardeen "bardeen", // Jean Bartik, born Betty Jean Jennings, was one of the original programmers for the ENIAC computer. https://en.wikipedia.org/wiki/Jean_Bartik "bartik", // Laura Bassi, the world's first female professor https://en.wikipedia.org/wiki/Laura_Bassi "bassi", // Hugh Beaver, British engineer, founder of the Guinness Book of World Records https://en.wikipedia.org/wiki/Hugh_Beaver "beaver", // Alexander Graham Bell - an eminent Scottish-born scientist, inventor, engineer and innovator who is credited with inventing the first practical telephone - https://en.wikipedia.org/wiki/Alexander_Graham_Bell "bell", // Karl Friedrich Benz - a German automobile engineer. Inventor of the first practical motorcar. https://en.wikipedia.org/wiki/Karl_Benz "benz", // Homi J Bhabha - was an Indian nuclear physicist, founding director, and professor of physics at the Tata Institute of Fundamental Research. Colloquially known as "father of Indian nuclear programme"- https://en.wikipedia.org/wiki/Homi_J._Bhabha "bhabha", // Bhaskara II - Ancient Indian mathematician-astronomer whose work on calculus predates Newton and Leibniz by over half a millennium - https://en.wikipedia.org/wiki/Bh%C4%81skara_II#Calculus "bhaskara", // Sue Black - British computer scientist and campaigner. She has been instrumental in saving Bletchley Park, the site of World War II codebreaking - https://en.wikipedia.org/wiki/Sue_Black_(computer_scientist) "black", // Elizabeth Helen Blackburn - Australian-American Nobel laureate; best known for co-discovering telomerase. https://en.wikipedia.org/wiki/Elizabeth_Blackburn "blackburn", // Elizabeth Blackwell - American doctor and first American woman to receive a medical degree - https://en.wikipedia.org/wiki/Elizabeth_Blackwell "blackwell", // Niels Bohr is the father of quantum theory. https://en.wikipedia.org/wiki/Niels_Bohr. "bohr", // Kathleen Booth, she's credited with writing the first assembly language. https://en.wikipedia.org/wiki/Kathleen_Booth "booth", // Anita Borg - Anita Borg was the founding director of the Institute for Women and Technology (IWT). https://en.wikipedia.org/wiki/Anita_Borg "borg", // Satyendra Nath Bose - He provided the foundation for Bose–Einstein statistics and the theory of the Bose–Einstein condensate. - https://en.wikipedia.org/wiki/Satyendra_Nath_Bose "bose", // Katherine Louise Bouman is an imaging scientist and Assistant Professor of Computer Science at the California Institute of Technology. She researches computational methods for imaging, and developed an algorithm that made possible the picture first visualization of a black hole using the Event Horizon Telescope. - https://en.wikipedia.org/wiki/Katie_Bouman "bouman", // Evelyn Boyd Granville - She was one of the first African-American woman to receive a Ph.D. in mathematics; she earned it in 1949 from Yale University. https://en.wikipedia.org/wiki/Evelyn_Boyd_Granville "boyd", // Brahmagupta - Ancient Indian mathematician during 598-670 CE who gave rules to compute with zero - https://en.wikipedia.org/wiki/Brahmagupta#Zero "brahmagupta", // Walter Houser Brattain co-invented the transistor - https://en.wikipedia.org/wiki/Walter_Houser_Brattain "brattain", // Emmett Brown invented time travel. 
https://en.wikipedia.org/wiki/Emmett_Brown (thanks Brian Goff) "brown", // Linda Brown Buck - American biologist and Nobel laureate best known for her genetic and molecular analyses of the mechanisms of smell. https://en.wikipedia.org/wiki/Linda_B._Buck "buck", // Dame Susan Jocelyn Bell Burnell - Northern Irish astrophysicist who discovered radio pulsars and was the first to analyse them. https://en.wikipedia.org/wiki/Jocelyn_Bell_Burnell "burnell", // Annie Jump Cannon - pioneering female astronomer who classified hundreds of thousands of stars and created the system we use to understand stars today. https://en.wikipedia.org/wiki/Annie_Jump_Cannon "cannon", // Rachel Carson - American marine biologist and conservationist, her book Silent Spring and other writings are credited with advancing the global environmental movement. https://en.wikipedia.org/wiki/Rachel_Carson "carson", // Dame Mary Lucy Cartwright - British mathematician who was one of the first to study what is now known as chaos theory. Also known for Cartwright's theorem which finds applications in signal processing. https://en.wikipedia.org/wiki/Mary_Cartwright "cartwright", // George Washington Carver - American agricultural scientist and inventor. He was the most prominent black scientist of the early 20th century. https://en.wikipedia.org/wiki/George_Washington_Carver "carver", // Vinton Gray Cerf - American Internet pioneer, recognised as one of "the fathers of the Internet". With Robert Elliot Kahn, he designed TCP and IP, the primary data communication protocols of the Internet and other computer networks. https://en.wikipedia.org/wiki/Vint_Cerf "cerf", // Subrahmanyan Chandrasekhar - Astrophysicist known for his mathematical theory on different stages and evolution in structures of the stars. He has won nobel prize for physics - https://en.wikipedia.org/wiki/Subrahmanyan_Chandrasekhar "chandrasekhar", // Sergey Alexeyevich Chaplygin (Russian: Серге́й Алексе́евич Чаплы́гин; April 5, 1869 – October 8, 1942) was a Russian and Soviet physicist, mathematician, and mechanical engineer. He is known for mathematical formulas such as Chaplygin's equation and for a hypothetical substance in cosmology called Chaplygin gas, named after him. https://en.wikipedia.org/wiki/Sergey_Chaplygin "chaplygin", // Émilie du Châtelet - French natural philosopher, mathematician, physicist, and author during the early 1730s, known for her translation of and commentary on Isaac Newton's book Principia containing basic laws of physics. https://en.wikipedia.org/wiki/%C3%89milie_du_Ch%C3%A2telet "chatelet", // Asima Chatterjee was an Indian organic chemist noted for her research on vinca alkaloids, development of drugs for treatment of epilepsy and malaria - https://en.wikipedia.org/wiki/Asima_Chatterjee "chatterjee", // David Lee Chaum - American computer scientist and cryptographer. Known for his seminal contributions in the field of anonymous communication. https://en.wikipedia.org/wiki/David_Chaum "chaum", // Pafnuty Chebyshev - Russian mathematician. He is known fo his works on probability, statistics, mechanics, analytical geometry and number theory https://en.wikipedia.org/wiki/Pafnuty_Chebyshev "chebyshev", // Joan Clarke - Bletchley Park code breaker during the Second World War who pioneered techniques that remained top secret for decades. Also an accomplished numismatist https://en.wikipedia.org/wiki/Joan_Clarke "clarke", // Bram Cohen - American computer programmer and author of the BitTorrent peer-to-peer protocol. 
https://en.wikipedia.org/wiki/Bram_Cohen "cohen", // Jane Colden - American botanist widely considered the first female American botanist - https://en.wikipedia.org/wiki/Jane_Colden "colden", // Gerty Theresa Cori - American biochemist who became the third woman—and first American woman—to win a Nobel Prize in science, and the first woman to be awarded the Nobel Prize in Physiology or Medicine. Cori was born in Prague. https://en.wikipedia.org/wiki/Gerty_Cori "cori", // Seymour Roger Cray was an American electrical engineer and supercomputer architect who designed a series of computers that were the fastest in the world for decades. https://en.wikipedia.org/wiki/Seymour_Cray "cray", // This entry reflects a husband and wife team who worked together: // Joan Curran was a Welsh scientist who developed radar and invented chaff, a radar countermeasure. https://en.wikipedia.org/wiki/Joan_Curran // Samuel Curran was an Irish physicist who worked alongside his wife during WWII and invented the proximity fuse. https://en.wikipedia.org/wiki/Samuel_Curran "curran", // Marie Curie discovered radioactivity. https://en.wikipedia.org/wiki/Marie_Curie. "curie", // Charles Darwin established the principles of natural evolution. https://en.wikipedia.org/wiki/Charles_Darwin. "darwin", // Leonardo Da Vinci invented too many things to list here. https://en.wikipedia.org/wiki/Leonardo_da_Vinci. "davinci", // A. K. (Alexander Keewatin) Dewdney, Canadian mathematician, computer scientist, author and filmmaker. Contributor to Scientific American's "Computer Recreations" from 1984 to 1991. Author of Core War (program), The Planiverse, The Armchair Universe, The Magic Machine, The New Turing Omnibus, and more. https://en.wikipedia.org/wiki/Alexander_Dewdney "dewdney", // Satish Dhawan - Indian mathematician and aerospace engineer, known for leading the successful and indigenous development of the Indian space programme. https://en.wikipedia.org/wiki/Satish_Dhawan "dhawan", // Bailey Whitfield Diffie - American cryptographer and one of the pioneers of public-key cryptography. https://en.wikipedia.org/wiki/Whitfield_Diffie "diffie", // Edsger Wybe Dijkstra was a Dutch computer scientist and mathematical scientist. https://en.wikipedia.org/wiki/Edsger_W._Dijkstra. "dijkstra", // Paul Adrien Maurice Dirac - English theoretical physicist who made fundamental contributions to the early development of both quantum mechanics and quantum electrodynamics. https://en.wikipedia.org/wiki/Paul_Dirac "dirac", // Agnes Meyer Driscoll - American cryptanalyst during World Wars I and II who successfully cryptanalysed a number of Japanese ciphers. She was also the co-developer of one of the cipher machines of the US Navy, the CM. https://en.wikipedia.org/wiki/Agnes_Meyer_Driscoll "driscoll", // Donna Dubinsky - played an integral role in the development of personal digital assistants (PDAs) serving as CEO of Palm, Inc. and co-founding Handspring. https://en.wikipedia.org/wiki/Donna_Dubinsky "dubinsky", // Annie Easley - She was a leading member of the team which developed software for the Centaur rocket stage and one of the first African-Americans in her field. https://en.wikipedia.org/wiki/Annie_Easley "easley", // Thomas Alva Edison, prolific inventor https://en.wikipedia.org/wiki/Thomas_Edison "edison", // Albert Einstein invented the general theory of relativity. 
https://en.wikipedia.org/wiki/Albert_Einstein "einstein", // Alexandra Asanovna Elbakyan (Russian: Алекса́ндра Аса́новна Элбакя́н) is a Kazakhstani graduate student, computer programmer, internet pirate in hiding, and the creator of the site Sci-Hub. Nature has listed her in 2016 in the top ten people that mattered in science, and Ars Technica has compared her to Aaron Swartz. - https://en.wikipedia.org/wiki/Alexandra_Elbakyan "elbakyan", // Taher A. ElGamal - Egyptian cryptographer best known for the ElGamal discrete log cryptosystem and the ElGamal digital signature scheme. https://en.wikipedia.org/wiki/Taher_Elgamal "elgamal", // Gertrude Elion - American biochemist, pharmacologist and the 1988 recipient of the Nobel Prize in Medicine - https://en.wikipedia.org/wiki/Gertrude_Elion "elion", // James Henry Ellis - British engineer and cryptographer employed by the GCHQ. Best known for conceiving for the first time, the idea of public-key cryptography. https://en.wikipedia.org/wiki/James_H._Ellis "ellis", // Douglas Engelbart gave the mother of all demos: https://en.wikipedia.org/wiki/Douglas_Engelbart "engelbart", // Euclid invented geometry. https://en.wikipedia.org/wiki/Euclid "euclid", // Leonhard Euler invented large parts of modern mathematics. https://de.wikipedia.org/wiki/Leonhard_Euler "euler", // Michael Faraday - British scientist who contributed to the study of electromagnetism and electrochemistry. https://en.wikipedia.org/wiki/Michael_Faraday "faraday", // Horst Feistel - German-born American cryptographer who was one of the earliest non-government researchers to study the design and theory of block ciphers. Co-developer of DES and Lucifer. Feistel networks, a symmetric structure used in the construction of block ciphers are named after him. https://en.wikipedia.org/wiki/Horst_Feistel "feistel", // Pierre de Fermat pioneered several aspects of modern mathematics. https://en.wikipedia.org/wiki/Pierre_de_Fermat "fermat", // Enrico Fermi invented the first nuclear reactor. https://en.wikipedia.org/wiki/Enrico_Fermi. "fermi", // Richard Feynman was a key contributor to quantum mechanics and particle physics. https://en.wikipedia.org/wiki/Richard_Feynman "feynman", // Benjamin Franklin is famous for his experiments in electricity and the invention of the lightning rod. "franklin", // Yuri Alekseyevich Gagarin - Soviet pilot and cosmonaut, best known as the first human to journey into outer space. https://en.wikipedia.org/wiki/Yuri_Gagarin "gagarin", // Galileo was a founding father of modern astronomy, and faced politics and obscurantism to establish scientific truth. https://en.wikipedia.org/wiki/Galileo_Galilei "galileo", // Évariste Galois - French mathematician whose work laid the foundations of Galois theory and group theory, two major branches of abstract algebra, and the subfield of Galois connections, all while still in his late teens. https://en.wikipedia.org/wiki/%C3%89variste_Galois "galois", // Kadambini Ganguly - Indian physician, known for being the first South Asian female physician, trained in western medicine, to graduate in South Asia. https://en.wikipedia.org/wiki/Kadambini_Ganguly "ganguly", // William Henry "Bill" Gates III is an American business magnate, philanthropist, investor, computer programmer, and inventor. 
https://en.wikipedia.org/wiki/Bill_Gates "gates", // Johann Carl Friedrich Gauss - German mathematician who made significant contributions to many fields, including number theory, algebra, statistics, analysis, differential geometry, geodesy, geophysics, mechanics, electrostatics, magnetic fields, astronomy, matrix theory, and optics. https://en.wikipedia.org/wiki/Carl_Friedrich_Gauss "gauss", // Marie-Sophie Germain - French mathematician, physicist and philosopher. Known for her work on elasticity theory, number theory and philosophy. https://en.wikipedia.org/wiki/Sophie_Germain "germain", // Adele Goldberg, was one of the designers and developers of the Smalltalk language. https://en.wikipedia.org/wiki/Adele_Goldberg_(computer_scientist) "goldberg", // Adele Goldstine, born Adele Katz, wrote the complete technical description for the first electronic digital computer, ENIAC. https://en.wikipedia.org/wiki/Adele_Goldstine "goldstine", // Shafi Goldwasser is a computer scientist known for creating theoretical foundations of modern cryptography. Winner of 2012 ACM Turing Award. https://en.wikipedia.org/wiki/Shafi_Goldwasser "goldwasser", // James Golick, all around gangster. "golick", // Jane Goodall - British primatologist, ethologist, and anthropologist who is considered to be the world's foremost expert on chimpanzees - https://en.wikipedia.org/wiki/Jane_Goodall "goodall", // Stephen Jay Gould was was an American paleontologist, evolutionary biologist, and historian of science. He is most famous for the theory of punctuated equilibrium - https://en.wikipedia.org/wiki/Stephen_Jay_Gould "gould", // Carolyn Widney Greider - American molecular biologist and joint winner of the 2009 Nobel Prize for Physiology or Medicine for the discovery of telomerase. https://en.wikipedia.org/wiki/Carol_W._Greider "greider", // Alexander Grothendieck - German-born French mathematician who became a leading figure in the creation of modern algebraic geometry. https://en.wikipedia.org/wiki/Alexander_Grothendieck "grothendieck", // Lois Haibt - American computer scientist, part of the team at IBM that developed FORTRAN - https://en.wikipedia.org/wiki/Lois_Haibt "haibt", // Margaret Hamilton - Director of the Software Engineering Division of the MIT Instrumentation Laboratory, which developed on-board flight software for the Apollo space program. https://en.wikipedia.org/wiki/Margaret_Hamilton_(scientist) "hamilton", // Caroline Harriet Haslett - English electrical engineer, electricity industry administrator and champion of women's rights. Co-author of British Standard 1363 that specifies AC power plugs and sockets used across the United Kingdom (which is widely considered as one of the safest designs). https://en.wikipedia.org/wiki/Caroline_Haslett "haslett", // Stephen Hawking pioneered the field of cosmology by combining general relativity and quantum mechanics. https://en.wikipedia.org/wiki/Stephen_Hawking "hawking", // Martin Edward Hellman - American cryptologist, best known for his invention of public-key cryptography in co-operation with Whitfield Diffie and Ralph Merkle. https://en.wikipedia.org/wiki/Martin_Hellman "hellman", // Werner Heisenberg was a founding father of quantum mechanics. https://en.wikipedia.org/wiki/Werner_Heisenberg "heisenberg", // Grete Hermann was a German philosopher noted for her philosophical work on the foundations of quantum mechanics. https://en.wikipedia.org/wiki/Grete_Hermann "hermann", // Caroline Lucretia Herschel - German astronomer and discoverer of several comets. 
https://en.wikipedia.org/wiki/Caroline_Herschel "herschel", // Heinrich Rudolf Hertz - German physicist who first conclusively proved the existence of the electromagnetic waves. https://en.wikipedia.org/wiki/Heinrich_Hertz "hertz", // Jaroslav Heyrovský was the inventor of the polarographic method, father of the electroanalytical method, and recipient of the Nobel Prize in 1959. His main field of work was polarography. https://en.wikipedia.org/wiki/Jaroslav_Heyrovsk%C3%BD "heyrovsky", // Dorothy Hodgkin was a British biochemist, credited with the development of protein crystallography. She was awarded the Nobel Prize in Chemistry in 1964. https://en.wikipedia.org/wiki/Dorothy_Hodgkin "hodgkin", // Douglas R. Hofstadter is an American professor of cognitive science and author of the Pulitzer Prize and American Book Award-winning work Goedel, Escher, Bach: An Eternal Golden Braid in 1979. A mind-bending work which coined Hofstadter's Law: "It always takes longer than you expect, even when you take into account Hofstadter's Law." https://en.wikipedia.org/wiki/Douglas_Hofstadter "hofstadter", // Erna Schneider Hoover revolutionized modern communication by inventing a computerized telephone switching method. https://en.wikipedia.org/wiki/Erna_Schneider_Hoover "hoover", // Grace Hopper developed the first compiler for a computer programming language and is credited with popularizing the term "debugging" for fixing computer glitches. https://en.wikipedia.org/wiki/Grace_Hopper "hopper", // Frances Hugle, she was an American scientist, engineer, and inventor who contributed to the understanding of semiconductors, integrated circuitry, and the unique electrical principles of microscopic materials. https://en.wikipedia.org/wiki/Frances_Hugle "hugle", // Hypatia - Greek Alexandrine Neoplatonist philosopher in Egypt who was one of the earliest mothers of mathematics - https://en.wikipedia.org/wiki/Hypatia "hypatia", // Teruko Ishizaka - Japanese scientist and immunologist who co-discovered the antibody class Immunoglobulin E. https://en.wikipedia.org/wiki/Teruko_Ishizaka "ishizaka", // Mary Jackson, American mathematician and aerospace engineer who earned the highest title within NASA's engineering department - https://en.wikipedia.org/wiki/Mary_Jackson_(engineer) "jackson", // Yeong-Sil Jang was a Korean scientist and astronomer during the Joseon Dynasty; he invented the first metal printing press and water gauge. https://en.wikipedia.org/wiki/Jang_Yeong-sil "jang", // Mae Carol Jemison - is an American engineer, physician, and former NASA astronaut. She became the first black woman to travel in space when she served as a mission specialist aboard the Space Shuttle Endeavour - https://en.wikipedia.org/wiki/Mae_Jemison "jemison", // Betty Jennings - one of the original programmers of the ENIAC. https://en.wikipedia.org/wiki/ENIAC - https://en.wikipedia.org/wiki/Jean_Bartik "jennings", // Mary Lou Jepsen, was the founder and chief technology officer of One Laptop Per Child (OLPC), and the founder of Pixel Qi. https://en.wikipedia.org/wiki/Mary_Lou_Jepsen "jepsen", // Katherine Coleman Goble Johnson - American physicist and mathematician contributed to the NASA. https://en.wikipedia.org/wiki/Katherine_Johnson "johnson", // Irène Joliot-Curie - French scientist who was awarded the Nobel Prize for Chemistry in 1935. Daughter of Marie and Pierre Curie. 
https://en.wikipedia.org/wiki/Ir%C3%A8ne_Joliot-Curie "joliot", // Karen Spärck Jones came up with the concept of inverse document frequency, which is used in most search engines today. https://en.wikipedia.org/wiki/Karen_Sp%C3%A4rck_Jones "jones", // A. P. J. Abdul Kalam - is an Indian scientist aka Missile Man of India for his work on the development of ballistic missile and launch vehicle technology - https://en.wikipedia.org/wiki/A._P._J._Abdul_Kalam "kalam", // Sergey Petrovich Kapitsa (Russian: Серге́й Петро́вич Капи́ца; 14 February 1928 – 14 August 2012) was a Russian physicist and demographer. He was best known as host of the popular and long-running Russian scientific TV show, Evident, but Incredible. His father was the Nobel laureate Soviet-era physicist Pyotr Kapitsa, and his brother was the geographer and Antarctic explorer Andrey Kapitsa. - https://en.wikipedia.org/wiki/Sergey_Kapitsa "kapitsa", // Susan Kare, created the icons and many of the interface elements for the original Apple Macintosh in the 1980s, and was an original employee of NeXT, working as the Creative Director. https://en.wikipedia.org/wiki/Susan_Kare "kare", // Mstislav Keldysh - a Soviet scientist in the field of mathematics and mechanics, academician of the USSR Academy of Sciences (1946), President of the USSR Academy of Sciences (1961–1975), three times Hero of Socialist Labor (1956, 1961, 1971), fellow of the Royal Society of Edinburgh (1968). https://en.wikipedia.org/wiki/Mstislav_Keldysh "keldysh", // Mary Kenneth Keller, Sister Mary Kenneth Keller became the first American woman to earn a PhD in Computer Science in 1965. https://en.wikipedia.org/wiki/Mary_Kenneth_Keller "keller", // Johannes Kepler, German astronomer known for his three laws of planetary motion - https://en.wikipedia.org/wiki/Johannes_Kepler "kepler", // Omar Khayyam - Persian mathematician, astronomer and poet. Known for his work on the classification and solution of cubic equations, for his contribution to the understanding of Euclid's fifth postulate and for computing the length of a year very accurately. https://en.wikipedia.org/wiki/Omar_Khayyam "khayyam", // Har Gobind Khorana - Indian-American biochemist who shared the 1968 Nobel Prize for Physiology - https://en.wikipedia.org/wiki/Har_Gobind_Khorana "khorana", // Jack Kilby invented silicon integrated circuits and gave Silicon Valley its name. - https://en.wikipedia.org/wiki/Jack_Kilby "kilby", // Maria Kirch - German astronomer and first woman to discover a comet - https://en.wikipedia.org/wiki/Maria_Margarethe_Kirch "kirch", // Donald Knuth - American computer scientist, author of "The Art of Computer Programming" and creator of the TeX typesetting system. https://en.wikipedia.org/wiki/Donald_Knuth "knuth", // Sophie Kowalevski - Russian mathematician responsible for important original contributions to analysis, differential equations and mechanics - https://en.wikipedia.org/wiki/Sofia_Kovalevskaya "kowalevski", // Marie-Jeanne de Lalande - French astronomer, mathematician and cataloguer of stars - https://en.wikipedia.org/wiki/Marie-Jeanne_de_Lalande "lalande", // Hedy Lamarr - Actress and inventor. The principles of her work are now incorporated into modern Wi-Fi, CDMA and Bluetooth technology. https://en.wikipedia.org/wiki/Hedy_Lamarr "lamarr", // Leslie B. Lamport - American computer scientist. Lamport is best known for his seminal work in distributed systems and was the winner of the 2013 Turing Award. 
https://en.wikipedia.org/wiki/Leslie_Lamport "lamport", // Mary Leakey - British paleoanthropologist who discovered the first fossilized Proconsul skull - https://en.wikipedia.org/wiki/Mary_Leakey "leakey", // Henrietta Swan Leavitt - she was an American astronomer who discovered the relation between the luminosity and the period of Cepheid variable stars. https://en.wikipedia.org/wiki/Henrietta_Swan_Leavitt "leavitt", // Esther Miriam Zimmer Lederberg - American microbiologist and a pioneer of bacterial genetics. https://en.wikipedia.org/wiki/Esther_Lederberg "lederberg", // Inge Lehmann - Danish seismologist and geophysicist. Known for discovering in 1936 that the Earth has a solid inner core inside a molten outer core. https://en.wikipedia.org/wiki/Inge_Lehmann "lehmann", // Daniel Lewin - Mathematician, Akamai co-founder, soldier, 9/11 victim-- Developed optimization techniques for routing traffic on the internet. Died attempting to stop the 9-11 hijackers. https://en.wikipedia.org/wiki/Daniel_Lewin "lewin", // Ruth Lichterman - one of the original programmers of the ENIAC. https://en.wikipedia.org/wiki/ENIAC - https://en.wikipedia.org/wiki/Ruth_Teitelbaum "lichterman", // Barbara Liskov - co-developed the Liskov substitution principle. Liskov was also the winner of the Turing Prize in 2008. - https://en.wikipedia.org/wiki/Barbara_Liskov "liskov", // Ada Lovelace invented the first algorithm. https://en.wikipedia.org/wiki/Ada_Lovelace (thanks James Turnbull) "lovelace", // Auguste and Louis Lumière - the first filmmakers in history - https://en.wikipedia.org/wiki/Auguste_and_Louis_Lumi%C3%A8re "lumiere", // Mahavira - Ancient Indian mathematician during 9th century AD who discovered basic algebraic identities - https://en.wikipedia.org/wiki/Mah%C4%81v%C4%ABra_(mathematician) "mahavira", // Lynn Margulis (b. Lynn Petra Alexander) - an American evolutionary theorist and biologist, science author, educator, and popularizer, and was the primary modern proponent for the significance of symbiosis in evolution. - https://en.wikipedia.org/wiki/Lynn_Margulis "margulis", // Yukihiro Matsumoto - Japanese computer scientist and software programmer best known as the chief designer of the Ruby programming language. https://en.wikipedia.org/wiki/Yukihiro_Matsumoto "matsumoto", // James Clerk Maxwell - Scottish physicist, best known for his formulation of electromagnetic theory. https://en.wikipedia.org/wiki/James_Clerk_Maxwell "maxwell", // Maria Mayer - American theoretical physicist and Nobel laureate in Physics for proposing the nuclear shell model of the atomic nucleus - https://en.wikipedia.org/wiki/Maria_Mayer "mayer", // John McCarthy invented LISP: https://en.wikipedia.org/wiki/John_McCarthy_(computer_scientist) "mccarthy", // Barbara McClintock - a distinguished American cytogeneticist, 1983 Nobel Laureate in Physiology or Medicine for discovering transposons. https://en.wikipedia.org/wiki/Barbara_McClintock "mcclintock", // Anne Laura Dorinthea McLaren - British developmental biologist whose work helped lead to human in-vitro fertilisation. https://en.wikipedia.org/wiki/Anne_McLaren "mclaren", // Malcolm McLean invented the modern shipping container: https://en.wikipedia.org/wiki/Malcom_McLean "mclean", // Kay McNulty - one of the original programmers of the ENIAC. https://en.wikipedia.org/wiki/ENIAC - https://en.wikipedia.org/wiki/Kathleen_Antonelli "mcnulty", // Gregor Johann Mendel - Czech scientist and founder of genetics. 
https://en.wikipedia.org/wiki/Gregor_Mendel "mendel", // Dmitri Mendeleev - a chemist and inventor. He formulated the Periodic Law, created a farsighted version of the periodic table of elements, and used it to correct the properties of some already discovered elements and also to predict the properties of eight elements yet to be discovered. https://en.wikipedia.org/wiki/Dmitri_Mendeleev "mendeleev", // Lise Meitner - Austrian/Swedish physicist who was involved in the discovery of nuclear fission. The element meitnerium is named after her - https://en.wikipedia.org/wiki/Lise_Meitner "meitner", // Carla Meninsky, was the game designer and programmer for Atari 2600 games Dodge 'Em and Warlords. https://en.wikipedia.org/wiki/Carla_Meninsky "meninsky", // Ralph C. Merkle - American computer scientist, known for devising Merkle's puzzles - one of the very first schemes for public-key cryptography. Also, inventor of Merkle trees and co-inventor of the Merkle-Damgård construction for building collision-resistant cryptographic hash functions and the Merkle-Hellman knapsack cryptosystem. https://en.wikipedia.org/wiki/Ralph_Merkle "merkle", // Johanna Mestorf - German prehistoric archaeologist and first female museum director in Germany - https://en.wikipedia.org/wiki/Johanna_Mestorf "mestorf", // Maryam Mirzakhani - an Iranian mathematician and the first woman to win the Fields Medal. https://en.wikipedia.org/wiki/Maryam_Mirzakhani "mirzakhani", // Rita Levi-Montalcini - Won Nobel Prize in Physiology or Medicine jointly with colleague Stanley Cohen for the discovery of nerve growth factor (https://en.wikipedia.org/wiki/Rita_Levi-Montalcini) "montalcini", // Gordon Earle Moore - American engineer, Silicon Valley founding father, author of Moore's law. https://en.wikipedia.org/wiki/Gordon_Moore "moore", // Samuel Morse - contributed to the invention of a single-wire telegraph system based on European telegraphs and was a co-developer of the Morse code - https://en.wikipedia.org/wiki/Samuel_Morse "morse", // Ian Murdock - founder of the Debian project - https://en.wikipedia.org/wiki/Ian_Murdock "murdock", // May-Britt Moser - Nobel prize winner neuroscientist who contributed to the discovery of grid cells in the brain. https://en.wikipedia.org/wiki/May-Britt_Moser "moser", // John Napier of Merchiston - Scottish landowner known as an astronomer, mathematician and physicist. Best known for his discovery of logarithms. https://en.wikipedia.org/wiki/John_Napier "napier", // John Forbes Nash, Jr. - American mathematician who made fundamental contributions to game theory, differential geometry, and the study of partial differential equations. https://en.wikipedia.org/wiki/John_Forbes_Nash_Jr. "nash", // John von Neumann - todays computer architectures are based on the von Neumann architecture. https://en.wikipedia.org/wiki/Von_Neumann_architecture "neumann", // Isaac Newton invented classic mechanics and modern optics. https://en.wikipedia.org/wiki/Isaac_Newton "newton", // Florence Nightingale, more prominently known as a nurse, was also the first female member of the Royal Statistical Society and a pioneer in statistical graphics https://en.wikipedia.org/wiki/Florence_Nightingale#Statistics_and_sanitary_reform "nightingale", // Alfred Nobel - a Swedish chemist, engineer, innovator, and armaments manufacturer (inventor of dynamite) - https://en.wikipedia.org/wiki/Alfred_Nobel "nobel", // Emmy Noether, German mathematician. Noether's Theorem is named after her. 
https://en.wikipedia.org/wiki/Emmy_Noether "noether", // Poppy Northcutt. Poppy Northcutt was the first woman to work as part of NASA’s Mission Control. http://www.businessinsider.com/poppy-northcutt-helped-apollo-astronauts-2014-12?op=1 "northcutt", // Robert Noyce invented silicon integrated circuits and gave Silicon Valley its name. - https://en.wikipedia.org/wiki/Robert_Noyce "noyce", // Panini - Ancient Indian linguist and grammarian from 4th century CE who worked on the world's first formal system - https://en.wikipedia.org/wiki/P%C4%81%E1%B9%87ini#Comparison_with_modern_formal_systems "panini", // Ambroise Pare invented modern surgery. https://en.wikipedia.org/wiki/Ambroise_Par%C3%A9 "pare", // Blaise Pascal, French mathematician, physicist, and inventor - https://en.wikipedia.org/wiki/Blaise_Pascal "pascal", // Louis Pasteur discovered vaccination, fermentation and pasteurization. https://en.wikipedia.org/wiki/Louis_Pasteur. "pasteur", // Cecilia Payne-Gaposchkin was an astronomer and astrophysicist who, in 1925, proposed in her Ph.D. thesis an explanation for the composition of stars in terms of the relative abundances of hydrogen and helium. https://en.wikipedia.org/wiki/Cecilia_Payne-Gaposchkin "payne", // Radia Perlman is a software designer and network engineer and most famous for her invention of the spanning-tree protocol (STP). https://en.wikipedia.org/wiki/Radia_Perlman "perlman", // Rob Pike was a key contributor to Unix, Plan 9, the X graphic system, utf-8, and the Go programming language. https://en.wikipedia.org/wiki/Rob_Pike "pike", // Henri Poincaré made fundamental contributions in several fields of mathematics. https://en.wikipedia.org/wiki/Henri_Poincar%C3%A9 "poincare", // Laura Poitras is a director and producer whose work, made possible by open source crypto tools, advances the causes of truth and freedom of information by reporting disclosures by whistleblowers such as Edward Snowden. https://en.wikipedia.org/wiki/Laura_Poitras "poitras", // Tat’yana Avenirovna Proskuriakova (Russian: Татья́на Авени́ровна Проскуряко́ва) (January 23 [O.S. January 10] 1909 – August 30, 1985) was a Russian-American Mayanist scholar and archaeologist who contributed significantly to the deciphering of Maya hieroglyphs, the writing system of the pre-Columbian Maya civilization of Mesoamerica. https://en.wikipedia.org/wiki/Tatiana_Proskouriakoff "proskuriakova", // Claudius Ptolemy - a Greco-Egyptian writer of Alexandria, known as a mathematician, astronomer, geographer, astrologer, and poet of a single epigram in the Greek Anthology - https://en.wikipedia.org/wiki/Ptolemy "ptolemy", // C. V. Raman - Indian physicist who won the Nobel Prize in 1930 for proposing the Raman effect. - https://en.wikipedia.org/wiki/C._V._Raman "raman", // Srinivasa Ramanujan - Indian mathematician and autodidact who made extraordinary contributions to mathematical analysis, number theory, infinite series, and continued fractions. - https://en.wikipedia.org/wiki/Srinivasa_Ramanujan "ramanujan", // Sally Kristen Ride was an American physicist and astronaut. She was the first American woman in space, and the youngest American astronaut. https://en.wikipedia.org/wiki/Sally_Ride "ride", // Dennis Ritchie - co-creator of UNIX and the C programming language. - https://en.wikipedia.org/wiki/Dennis_Ritchie "ritchie", // Ida Rhodes - American pioneer in computer programming, designed the first computer used for Social Security. 
https://en.wikipedia.org/wiki/Ida_Rhodes "rhodes", // Julia Hall Bowman Robinson - American mathematician renowned for her contributions to the fields of computability theory and computational complexity theory. https://en.wikipedia.org/wiki/Julia_Robinson "robinson", // Wilhelm Conrad Röntgen - German physicist who was awarded the first Nobel Prize in Physics in 1901 for the discovery of X-rays (Röntgen rays). https://en.wikipedia.org/wiki/Wilhelm_R%C3%B6ntgen "roentgen", // Rosalind Franklin - British biophysicist and X-ray crystallographer whose research was critical to the understanding of DNA - https://en.wikipedia.org/wiki/Rosalind_Franklin "rosalind", // Vera Rubin - American astronomer who pioneered work on galaxy rotation rates. https://en.wikipedia.org/wiki/Vera_Rubin "rubin", // Meghnad Saha - Indian astrophysicist best known for his development of the Saha equation, used to describe chemical and physical conditions in stars - https://en.wikipedia.org/wiki/Meghnad_Saha "saha", // Jean E. Sammet developed FORMAC, the first widely used computer language for symbolic manipulation of mathematical formulas. https://en.wikipedia.org/wiki/Jean_E._Sammet "sammet", // Mildred Sanderson - American mathematician best known for Sanderson's theorem concerning modular invariants. https://en.wikipedia.org/wiki/Mildred_Sanderson "sanderson", // Satoshi Nakamoto is the name used by the unknown person or group of people who developed bitcoin, authored the bitcoin white paper, and created and deployed bitcoin's original reference implementation. https://en.wikipedia.org/wiki/Satoshi_Nakamoto "satoshi", // Adi Shamir - Israeli cryptographer whose numerous inventions and contributions to cryptography include the Ferge Fiat Shamir identification scheme, the Rivest Shamir Adleman (RSA) public-key cryptosystem, the Shamir's secret sharing scheme, the breaking of the Merkle-Hellman cryptosystem, the TWINKLE and TWIRL factoring devices and the discovery of differential cryptanalysis (with Eli Biham). https://en.wikipedia.org/wiki/Adi_Shamir "shamir", // Claude Shannon - The father of information theory and founder of digital circuit design theory. (https://en.wikipedia.org/wiki/Claude_Shannon) "shannon", // Carol Shaw - Originally an Atari employee, Carol Shaw is said to be the first female video game designer. https://en.wikipedia.org/wiki/Carol_Shaw_(video_game_designer) "shaw", // Dame Stephanie "Steve" Shirley - Founded a software company in 1962 employing women working from home. https://en.wikipedia.org/wiki/Steve_Shirley "shirley", // William Shockley co-invented the transistor - https://en.wikipedia.org/wiki/William_Shockley "shockley", // Lina Solomonovna Stern (or Shtern; Russian: Лина Соломоновна Штерн; 26 August 1878 – 7 March 1968) was a Soviet biochemist, physiologist and humanist whose medical discoveries saved thousands of lives at the fronts of World War II. She is best known for her pioneering work on blood–brain barrier, which she described as hemato-encephalic barrier in 1921. https://en.wikipedia.org/wiki/Lina_Stern "shtern", // Françoise Barré-Sinoussi - French virologist and Nobel Prize Laureate in Physiology or Medicine; her work was fundamental in identifying HIV as the cause of AIDS. https://en.wikipedia.org/wiki/Fran%C3%A7oise_Barr%C3%A9-Sinoussi "sinoussi", // Betty Snyder - one of the original programmers of the ENIAC. 
https://en.wikipedia.org/wiki/ENIAC - https://en.wikipedia.org/wiki/Betty_Holberton "snyder", // Cynthia Solomon - Pioneer in the fields of artificial intelligence, computer science and educational computing. Known for creation of Logo, an educational programming language. https://en.wikipedia.org/wiki/Cynthia_Solomon "solomon", // Frances Spence - one of the original programmers of the ENIAC. https://en.wikipedia.org/wiki/ENIAC - https://en.wikipedia.org/wiki/Frances_Spence "spence", // Michael Stonebraker is a database research pioneer and architect of Ingres, Postgres, VoltDB and SciDB. Winner of 2014 ACM Turing Award. https://en.wikipedia.org/wiki/Michael_Stonebraker "stonebraker", // Ivan Edward Sutherland - American computer scientist and Internet pioneer, widely regarded as the father of computer graphics. https://en.wikipedia.org/wiki/Ivan_Sutherland "sutherland", // Janese Swanson (with others) developed the first of the Carmen Sandiego games. She went on to found Girl Tech. https://en.wikipedia.org/wiki/Janese_Swanson "swanson", // Aaron Swartz was influential in creating RSS, Markdown, Creative Commons, Reddit, and much of the internet as we know it today. He was devoted to freedom of information on the web. https://en.wikiquote.org/wiki/Aaron_Swartz "swartz", // Bertha Swirles was a theoretical physicist who made a number of contributions to early quantum theory. https://en.wikipedia.org/wiki/Bertha_Swirles "swirles", // Helen Brooke Taussig - American cardiologist and founder of the field of paediatric cardiology. https://en.wikipedia.org/wiki/Helen_B._Taussig "taussig", // Valentina Tereshkova is a Russian engineer, cosmonaut and politician. She was the first woman to fly to space in 1963. In 2013, at the age of 76, she offered to go on a one-way mission to Mars. https://en.wikipedia.org/wiki/Valentina_Tereshkova "tereshkova", // Nikola Tesla invented the AC electric system and every gadget ever used by a James Bond villain. https://en.wikipedia.org/wiki/Nikola_Tesla "tesla", // Marie Tharp - American geologist and oceanic cartographer who co-created the first scientific map of the Atlantic Ocean floor. Her work led to the acceptance of the theories of plate tectonics and continental drift. https://en.wikipedia.org/wiki/Marie_Tharp "tharp", // Ken Thompson - co-creator of UNIX and the C programming language - https://en.wikipedia.org/wiki/Ken_Thompson "thompson", // Linus Torvalds invented Linux and Git. https://en.wikipedia.org/wiki/Linus_Torvalds "torvalds", // Youyou Tu - Chinese pharmaceutical chemist and educator known for discovering artemisinin and dihydroartemisinin, used to treat malaria, which has saved millions of lives. Joint winner of the 2015 Nobel Prize in Physiology or Medicine. https://en.wikipedia.org/wiki/Tu_Youyou "tu", // Alan Turing was a founding father of computer science. https://en.wikipedia.org/wiki/Alan_Turing. "turing", // Varahamihira - Ancient Indian mathematician who discovered trigonometric formulae during 505-587 CE - https://en.wikipedia.org/wiki/Var%C4%81hamihira#Contributions "varahamihira", // Dorothy Vaughan was a NASA mathematician and computer programmer on the SCOUT launch vehicle program that put America's first satellites into space - https://en.wikipedia.org/wiki/Dorothy_Vaughan "vaughan", // Cédric Villani - French mathematician, won Fields Medal, Fermat Prize and Poincaré Price for his work in differential geometry and statistical mechanics. 
https://en.wikipedia.org/wiki/C%C3%A9dric_Villani "villani", // Sir Mokshagundam Visvesvaraya - is a notable Indian engineer. He is a recipient of the Indian Republic's highest honour, the Bharat Ratna, in 1955. On his birthday, 15 September is celebrated as Engineer's Day in India in his memory - https://en.wikipedia.org/wiki/Visvesvaraya "visvesvaraya", // Christiane Nüsslein-Volhard - German biologist, won Nobel Prize in Physiology or Medicine in 1995 for research on the genetic control of embryonic development. https://en.wikipedia.org/wiki/Christiane_N%C3%BCsslein-Volhard "volhard", // Marlyn Wescoff - one of the original programmers of the ENIAC. https://en.wikipedia.org/wiki/ENIAC - https://en.wikipedia.org/wiki/Marlyn_Meltzer "wescoff", // Sylvia B. Wilbur - British computer scientist who helped develop the ARPANET, was one of the first to exchange email in the UK and a leading researcher in computer-supported collaborative work. https://en.wikipedia.org/wiki/Sylvia_Wilbur "wilbur", // Andrew Wiles - Notable British mathematician who proved the enigmatic Fermat's Last Theorem - https://en.wikipedia.org/wiki/Andrew_Wiles "wiles", // Roberta Williams, did pioneering work in graphical adventure games for personal computers, particularly the King's Quest series. https://en.wikipedia.org/wiki/Roberta_Williams "williams", // Malcolm John Williamson - British mathematician and cryptographer employed by the GCHQ. Developed in 1974 what is now known as Diffie-Hellman key exchange (Diffie and Hellman first published the scheme in 1976). https://en.wikipedia.org/wiki/Malcolm_J._Williamson "williamson", // Sophie Wilson designed the first Acorn Micro-Computer and the instruction set for ARM processors. https://en.wikipedia.org/wiki/Sophie_Wilson "wilson", // Jeannette Wing - co-developed the Liskov substitution principle. - https://en.wikipedia.org/wiki/Jeannette_Wing "wing", // Steve Wozniak invented the Apple I and Apple II. https://en.wikipedia.org/wiki/Steve_Wozniak "wozniak", // The Wright brothers, Orville and Wilbur - credited with inventing and building the world's first successful airplane and making the first controlled, powered and sustained heavier-than-air human flight - https://en.wikipedia.org/wiki/Wright_brothers "wright", // Chien-Shiung Wu - Chinese-American experimental physicist who made significant contributions to nuclear physics. https://en.wikipedia.org/wiki/Chien-Shiung_Wu "wu", // Rosalyn Sussman Yalow - Rosalyn Sussman Yalow was an American medical physicist, and a co-winner of the 1977 Nobel Prize in Physiology or Medicine for development of the radioimmunoassay technique. https://en.wikipedia.org/wiki/Rosalyn_Sussman_Yalow "yalow", // Ada Yonath - an Israeli crystallographer, the first woman from the Middle East to win a Nobel prize in the sciences. https://en.wikipedia.org/wiki/Ada_Yonath "yonath", // Nikolay Yegorovich Zhukovsky (Russian: Никола́й Его́рович Жуко́вский, January 17 1847 – March 17, 1921) was a Russian scientist, mathematician and engineer, and a founding father of modern aero- and hydrodynamics. Whereas contemporary scientists scoffed at the idea of human flight, Zhukovsky was the first to undertake the study of airflow. He is often called the Father of Russian Aviation. https://en.wikipedia.org/wiki/Nikolay_Yegorovich_Zhukovsky "zhukovsky", } ) // GetRandomName generates a random name from the list of adjectives and surnames in this package // formatted as "adjective_surname". For example 'focused_turing'. 
If retry is non-zero, a random // integer between 0 and 10 will be added to the end of the name, e.g `focused_turing3` func GetRandomName(retry int) string { begin: name := fmt.Sprintf("%s_%s", left[rand.Intn(len(left))], right[rand.Intn(len(right))]) //nolint:gosec // G404: Use of weak random number generator (math/rand instead of crypto/rand) if name == "boring_wozniak" /* Steve Wozniak is not boring */ { goto begin } if retry > 0 { name = fmt.Sprintf("%s%d", name, rand.Intn(10)) //nolint:gosec // G404: Use of weak random number generator (math/rand instead of crypto/rand) } return name }
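For reference, a minimal usage sketch of the generator shown above (assuming only the import path from the package clause and the `GetRandomName(retry int) string` signature it documents); passing a non-zero `retry` is what a caller would typically do after a name collision:

```go
package main

import (
	"fmt"

	"github.com/docker/docker/pkg/namesgenerator"
)

func main() {
	// Plain "adjective_surname" form, e.g. "focused_turing".
	fmt.Println(namesgenerator.GetRandomName(0))

	// A non-zero retry appends a random digit, e.g. "focused_turing3",
	// per the doc comment on GetRandomName above.
	fmt.Println(namesgenerator.GetRandomName(1))
}
```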
jk-vb
40502f49f66742a604bb7c24581e0e320db08622
b2e31eb416fe32e125eb7f527818d75c2112089e
I think `gofmt` doesn't like the extra empty line; ``` pkg/namesgenerator/names-generator.go:125: File is not `goimports`-ed (goimports) ``` Let me run `gofmt` and push
thaJeztah
4,568
moby/moby
42,608
Deprecate `BuilderSize` in API versions >= 1.42
Refs https://github.com/moby/moby/pull/42605#discussion_r666065924 <!-- Please make sure you've read and understood our contributing guidelines; https://github.com/moby/moby/blob/master/CONTRIBUTING.md ** Make sure all your commits include a signature generated with `git commit -s` ** For additional information on our contributing process, read our contributing guide https://docs.docker.com/opensource/code/ If this is a bug fix, make sure your description includes "fixes #xxxx", or "closes #xxxx" Please provide the following information: --> **- What I did** Ensure `BuilderSize` is not set for API versions >= 1.42 **- How I did it** - Add a version check in the router - Add `omitempty` JSON tag - Document in version history doc **- How to verify it** ``` curl -s --unix-socket /var/run/docker.sock http://localhost/system/df | jq ``` Should not show `BuilderSize` field **- Description for the changelog** <!-- Write a short (one line) summary that describes the changes in this pull request for inclusion in the changelog: --> **- A picture of a cute animal (not mandatory but encouraged)**
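As a programmatic variant of the curl check in the description above, here is a minimal sketch (assumptions: the default socket path `/var/run/docker.sock`, the versioned-path form `/v1.42/system/df` of the endpoint, and plain stdlib HTTP rather than the Docker Go client) that decodes the response and reports whether `BuilderSize` is present:

```go
package main

import (
	"context"
	"encoding/json"
	"fmt"
	"net"
	"net/http"
)

func main() {
	// Talk to the daemon over the (assumed) default Unix socket.
	client := &http.Client{
		Transport: &http.Transport{
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				return (&net.Dialer{}).DialContext(ctx, "unix", "/var/run/docker.sock")
			},
		},
	}

	// Pin the request to API version 1.42; the hostname is arbitrary here
	// because the custom dialer always connects to the socket.
	resp, err := client.Get("http://localhost/v1.42/system/df")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var df map[string]json.RawMessage
	if err := json.NewDecoder(resp.Body).Decode(&df); err != nil {
		panic(err)
	}

	// With the deprecation in place, BuilderSize should not appear for v1.42+.
	_, present := df["BuilderSize"]
	fmt.Println("BuilderSize present:", present)
}
```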
null
2021-07-08 11:33:49+00:00
2021-07-12 17:29:22+00:00
docs/api/version-history.md
--- title: "Engine API version history" description: "Documentation of changes that have been made to Engine API." keywords: "API, Docker, rcli, REST, documentation" --- <!-- This file is maintained within the moby/moby GitHub repository at https://github.com/moby/moby/. Make all pull requests against that repo. If you see this file in another repository, consider it read-only there, as it will periodically be overwritten by the definitive file. Pull requests which include edits to this file in other repositories will be rejected. --> ## v1.42 API changes [Docker Engine API v1.42](https://docs.docker.com/engine/api/v1.42/) documentation ## v1.41 API changes [Docker Engine API v1.41](https://docs.docker.com/engine/api/v1.41/) documentation * `GET /events` now returns `prune` events after pruning resources have completed. Prune events are returned for `container`, `network`, `volume`, `image`, and `builder`, and have a `reclaimed` attribute, indicating the amount of space reclaimed (in bytes). * `GET /info` now returns a `CgroupVersion` field, containing the cgroup version. * `GET /info` now returns a `DefaultAddressPools` field, containing a list of custom default address pools for local networks, which can be specified in the `daemon.json` file or `--default-address-pool` dockerd option. * `POST /services/create` and `POST /services/{id}/update` now supports `BindOptions.NonRecursive`. * The `ClusterStore` and `ClusterAdvertise` fields in `GET /info` are deprecated and are now omitted if they contain an empty value. This change is not versioned, and affects all API versions if the daemon has this patch. * The `filter` (singular) query parameter, which was deprecated in favor of the `filters` option in Docker 1.13, has now been removed from the `GET /images/json` endpoint. The parameter remains available when using API version 1.40 or below. * `GET /services` now returns `CapAdd` and `CapDrop` as part of the `ContainerSpec`. * `GET /services/{id}` now returns `CapAdd` and `CapDrop` as part of the `ContainerSpec`. * `POST /services/create` now accepts `CapAdd` and `CapDrop` as part of the `ContainerSpec`. * `POST /services/{id}/update` now accepts `CapAdd` and `CapDrop` as part of the `ContainerSpec`. * `GET /tasks` now returns `CapAdd` and `CapDrop` as part of the `ContainerSpec`. * `GET /tasks/{id}` now returns `CapAdd` and `CapDrop` as part of the `ContainerSpec`. * `GET /services` now returns `Pids` in `TaskTemplate.Resources.Limits`. * `GET /services/{id}` now returns `Pids` in `TaskTemplate.Resources.Limits`. * `POST /services/create` now accepts `Pids` in `TaskTemplate.Resources.Limits`. * `POST /services/{id}/update` now accepts `Pids` in `TaskTemplate.Resources.Limits` to limit the maximum number of PIDs. * `GET /tasks` now returns `Pids` in `TaskTemplate.Resources.Limits`. * `GET /tasks/{id}` now returns `Pids` in `TaskTemplate.Resources.Limits`. * `POST /containers/create` on Linux now accepts the `HostConfig.CgroupnsMode` property. Set the property to `host` to create the container in the daemon's cgroup namespace, or `private` to create the container in its own private cgroup namespace. The per-daemon default is `host`, and can be changed by using the`CgroupNamespaceMode` daemon configuration parameter. * `GET /info` now returns an `OSVersion` field, containing the operating system's version. This change is not versioned, and affects all API versions if the daemon has this patch. * `GET /info` no longer returns the `SystemStatus` field if it does not have a value set. 
This change is not versioned, and affects all API versions if the daemon has this patch. * `GET /services` now accepts query parameter `status`. When set to `true`, services returned will include `ServiceStatus`, which provides Desired, Running, and Completed task counts for the service. * `GET /services` may now include `ReplicatedJob` or `GlobalJob` as the `Mode` in a `ServiceSpec`. * `GET /services/{id}` may now include `ReplicatedJob` or `GlobalJob` as the `Mode` in a `ServiceSpec`. * `POST /services/create` now accepts `ReplicatedJob` or `GlobalJob` as the `Mode` in the `ServiceSpec`. * `POST /services/{id}/update` accepts updating the fields of the `ReplicatedJob` object in the `ServiceSpec.Mode`. The service mode still cannot be changed, however. * `GET /services` now includes `JobStatus` on Services with mode `ReplicatedJob` or `GlobalJob`. * `GET /services/{id}` now includes `JobStatus` on Services with mode `ReplicatedJob` or `GlobalJob`. * `GET /tasks` now includes `JobIteration` on Tasks spawned from a job-mode service. * `GET /tasks/{id}` now includes `JobIteration` on the task if spawned from a job-mode service. * `GET /containers/{id}/stats` now accepts a query param (`one-shot`) which, when used with `stream=false`, fetches a single set of stats instead of waiting for two collection cycles to have 2 CPU stats over a 1 second period. * The `KernelMemory` field in `HostConfig.Resources` is now deprecated. * The `KernelMemory` field in `Info` is now deprecated. * `GET /services` now returns `Ulimits` as part of `ContainerSpec`. * `GET /services/{id}` now returns `Ulimits` as part of `ContainerSpec`. * `POST /services/create` now accepts `Ulimits` as part of `ContainerSpec`. * `POST /services/{id}/update` now accepts `Ulimits` as part of `ContainerSpec`. ## v1.40 API changes [Docker Engine API v1.40](https://docs.docker.com/engine/api/v1.40/) documentation * The `/_ping` endpoint can now be accessed using either `GET` or `HEAD` requests. When accessed using a `HEAD` request, all headers are returned, but the body is empty (`Content-Length: 0`). This change is not versioned, and affects all API versions if the daemon has this patch. Clients are recommended to try using `HEAD`, but fall back to `GET` if the `HEAD` request fails. * `GET /_ping` and `HEAD /_ping` now set `Cache-Control` and `Pragma` headers to prevent the result from being cached. This change is not versioned, and affects all API versions if the daemon has this patch. * `GET /services` now returns `Sysctls` as part of the `ContainerSpec`. * `GET /services/{id}` now returns `Sysctls` as part of the `ContainerSpec`. * `POST /services/create` now accepts `Sysctls` as part of the `ContainerSpec`. * `POST /services/{id}/update` now accepts `Sysctls` as part of the `ContainerSpec`. * `POST /services/create` now accepts `Config` as part of `ContainerSpec.Privileges.CredentialSpec`. * `POST /services/{id}/update` now accepts `Config` as part of `ContainerSpec.Privileges.CredentialSpec`. * `POST /services/create` now includes `Runtime` as an option in `ContainerSpec.Configs`. * `POST /services/{id}/update` now includes `Runtime` as an option in `ContainerSpec.Configs`. * `GET /tasks` now returns `Sysctls` as part of the `ContainerSpec`. * `GET /tasks/{id}` now returns `Sysctls` as part of the `ContainerSpec`. * `GET /networks` now supports a `dangling` filter type. When set to `true` (or `1`), the endpoint returns all networks that are not in use by a container. 
When set to `false` (or `0`), only networks that are in use by one or more containers are returned. * `GET /nodes` now supports a filter type `node.label` filter to filter nodes based on the node.label. The format of the label filter is `node.label=<key>`/`node.label=<key>=<value>` to return those with the specified labels, or `node.label!=<key>`/`node.label!=<key>=<value>` to return those without the specified labels. * `POST /containers/create` now accepts a `fluentd-async` option in `HostConfig.LogConfig.Config` when using the Fluentd logging driver. This option deprecates the `fluentd-async-connect` option, which remains funtional, but will be removed in a future release. Users are encouraged to use the `fluentd-async` option going forward. This change is not versioned, and affects all API versions if the daemon has this patch. * `POST /containers/create` now accepts a `fluentd-request-ack` option in `HostConfig.LogConfig.Config` when using the Fluentd logging driver. If enabled, the Fluentd logging driver sends the chunk option with a unique ID. The server will respond with an acknowledgement. This option improves the reliability of the message transmission. This change is not versioned, and affects all API versions if the daemon has this patch. * `POST /containers/create`, `GET /containers/{id}/json`, and `GET /containers/json` now supports `BindOptions.NonRecursive`. * `POST /swarm/init` now accepts a `DataPathPort` property to set data path port number. * `GET /info` now returns information about `DataPathPort` that is currently used in swarm * `GET /info` now returns `PidsLimit` boolean to indicate if the host kernel has PID limit support enabled. * `GET /info` now includes `name=rootless` in `SecurityOptions` when the daemon is running in rootless mode. This change is not versioned, and affects all API versions if the daemon has this patch. * `GET /info` now returns `none` as `CgroupDriver` when the daemon is running in rootless mode. This change is not versioned, and affects all API versions if the daemon has this patch. * `POST /containers/create` now accepts `DeviceRequests` as part of `HostConfig`. Can be used to set Nvidia GPUs. * `GET /swarm` endpoint now returns DataPathPort info * `POST /containers/create` now takes `KernelMemoryTCP` field to set hard limit for kernel TCP buffer memory. * `GET /service` now returns `MaxReplicas` as part of the `Placement`. * `GET /service/{id}` now returns `MaxReplicas` as part of the `Placement`. * `POST /service/create` and `POST /services/(id or name)/update` now take the field `MaxReplicas` as part of the service `Placement`, allowing to specify maximum replicas per node for the service. * `POST /containers/create` on Linux now creates a container with `HostConfig.IpcMode=private` by default, if IpcMode is not explicitly specified. The per-daemon default can be changed back to `shareable` by using `DefaultIpcMode` daemon configuration parameter. * `POST /containers/{id}/update` now accepts a `PidsLimit` field to tune a container's PID limit. Set `0` or `-1` for unlimited. Leave `null` to not change the current value. * `POST /build` now accepts `outputs` key for configuring build outputs when using BuildKit mode. ## V1.39 API changes [Docker Engine API v1.39](https://docs.docker.com/engine/api/v1.39/) documentation * `GET /info` now returns an empty string, instead of `<unknown>` for `KernelVersion` and `OperatingSystem` if the daemon was unable to obtain this information. 
* `GET /info` now returns information about the product license, if a license has been applied to the daemon. * `GET /info` now returns a `Warnings` field, containing warnings and informational messages about missing features, or issues related to the daemon configuration. * `POST /swarm/init` now accepts a `DefaultAddrPool` property to set global scope default address pool * `POST /swarm/init` now accepts a `SubnetSize` property to set global scope networks by giving the length of the subnet masks for every such network * `POST /session` (added in [V1.31](#v131-api-changes) is no longer experimental. This endpoint can be used to run interactive long-running protocols between the client and the daemon. ## V1.38 API changes [Docker Engine API v1.38](https://docs.docker.com/engine/api/v1.38/) documentation * `GET /tasks` and `GET /tasks/{id}` now return a `NetworkAttachmentSpec` field, containing the `ContainerID` for non-service containers connected to "attachable" swarm-scoped networks. ## v1.37 API changes [Docker Engine API v1.37](https://docs.docker.com/engine/api/v1.37/) documentation * `POST /containers/create` and `POST /services/create` now supports exposing SCTP ports. * `POST /configs/create` and `POST /configs/{id}/create` now accept a `Templating` driver. * `GET /configs` and `GET /configs/{id}` now return the `Templating` driver of the config. * `POST /secrets/create` and `POST /secrets/{id}/create` now accept a `Templating` driver. * `GET /secrets` and `GET /secrets/{id}` now return the `Templating` driver of the secret. ## v1.36 API changes [Docker Engine API v1.36](https://docs.docker.com/engine/api/v1.36/) documentation * `Get /events` now return `exec_die` event when an exec process terminates. ## v1.35 API changes [Docker Engine API v1.35](https://docs.docker.com/engine/api/v1.35/) documentation * `POST /services/create` and `POST /services/(id)/update` now accepts an `Isolation` field on container spec to set the Isolation technology of the containers running the service (`default`, `process`, or `hyperv`). This configuration is only used for Windows containers. * `GET /containers/(name)/logs` now supports an additional query parameter: `until`, which returns log lines that occurred before the specified timestamp. * `POST /containers/{id}/exec` now accepts a `WorkingDir` property to set the work-dir for the exec process, independent of the container's work-dir. * `Get /version` now returns a `Platform.Name` field, which can be used by products using Moby as a foundation to return information about the platform. * `Get /version` now returns a `Components` field, which can be used to return information about the components used. Information about the engine itself is now included as a "Component" version, and contains all information from the top-level `Version`, `GitCommit`, `APIVersion`, `MinAPIVersion`, `GoVersion`, `Os`, `Arch`, `BuildTime`, `KernelVersion`, and `Experimental` fields. Going forward, the information from the `Components` section is preferred over their top-level counterparts. ## v1.34 API changes [Docker Engine API v1.34](https://docs.docker.com/engine/api/v1.34/) documentation * `POST /containers/(name)/wait?condition=removed` now also also returns in case of container removal failure. A pointer to a structure named `Error` added to the response JSON in order to indicate a failure. 
If `Error` is `null`, container removal has succeeded; otherwise, the text of an error message indicating why container removal has failed is available in the `Error.Message` field.

## v1.33 API changes

[Docker Engine API v1.33](https://docs.docker.com/engine/api/v1.33/) documentation

* `GET /events` now supports filtering 4 more kinds of events: `config`, `node`, `secret` and `service`.

## v1.32 API changes

[Docker Engine API v1.32](https://docs.docker.com/engine/api/v1.32/) documentation

* `POST /containers/create` now accepts additional values for the `HostConfig.IpcMode` property. New values are `private`, `shareable`, and `none`.
* `DELETE /networks/{id or name}` fixed an issue where a `name` equal to another network's name was able to mask that `id`. If both a network with the given _name_ and a network with the given _id_ exist, the network with the given _id_ is now deleted. This change is not versioned, and affects all API versions if the daemon has this patch.

## v1.31 API changes

[Docker Engine API v1.31](https://docs.docker.com/engine/api/v1.31/) documentation

* `DELETE /secrets/(name)` now returns status code 404 instead of 500 when the secret does not exist.
* `POST /secrets/create` now returns status code 409 instead of 500 when creating an already existing secret.
* `POST /secrets/create` now accepts a `Driver` struct, allowing the `Name` and driver-specific `Options` to be passed to store a secret in an external secrets store. The `Driver` property can be omitted if the default (internal) secrets store is used.
* `GET /secrets/(id)` and `GET /secrets` now return a `Driver` struct, containing the `Name` and driver-specific `Options` of the external secrets store used to store the secret. The `Driver` property is omitted if no external store is used.
* `POST /secrets/(name)/update` now returns status code 400 instead of 500 when attempting to update anything other than the secret's labels.
* `POST /nodes/(name)/update` now returns status code 400 instead of 500 when demoting the last node fails.
* `GET /networks/(id or name)` now takes an optional query parameter `scope` that will filter the network based on the scope (`local`, `swarm`, or `global`).
* `POST /session` is a new endpoint that can be used for running interactive long-running protocols between the client and the daemon. This endpoint is experimental and only available if the daemon is started with experimental features enabled.
* `GET /images/(name)/get` now includes an `ImageMetadata` field which contains image metadata that is local to the engine and not part of the image config.
* `POST /services/create` now accepts a `PluginSpec` when `TaskTemplate.Runtime` is set to `plugin`.
* `GET /events` now supports config events `create`, `update` and `remove` that are emitted when users create, update or remove a config.
* `GET /volumes/` and `GET /volumes/{name}` now return a `CreatedAt` field, containing the date/time the volume was created. This field is omitted if the creation date/time for the volume is unknown. For volumes with scope "global", this field represents the creation date/time of the local _instance_ of the volume, which may differ from instances of the same volume on different nodes.
* `GET /system/df` now returns a `CreatedAt` field for `Volumes`. Refer to the `/volumes/` endpoint for a description of this field.

## v1.30 API changes

[Docker Engine API v1.30](https://docs.docker.com/engine/api/v1.30/) documentation

* `GET /info` now returns the list of supported logging drivers, including plugins.
* `GET /info` and `GET /swarm` now returns the cluster-wide swarm CA info if the node is in a swarm: the cluster root CA certificate, and the cluster TLS leaf certificate issuer's subject and public key. It also displays the desired CA signing certificate, if any was provided as part of the spec. * `POST /build/` now (when not silent) produces an `Aux` message in the JSON output stream with payload `types.BuildResult` for each image produced. The final such message will reference the image resulting from the build. * `GET /nodes` and `GET /nodes/{id}` now returns additional information about swarm TLS info if the node is part of a swarm: the trusted root CA, and the issuer's subject and public key. * `GET /distribution/(name)/json` is a new endpoint that returns a JSON output stream with payload `types.DistributionInspect` for an image name. It includes a descriptor with the digest, and supported platforms retrieved from directly contacting the registry. * `POST /swarm/update` now accepts 3 additional parameters as part of the swarm spec's CA configuration; the desired CA certificate for the swarm, the desired CA key for the swarm (if not using an external certificate), and an optional parameter to force swarm to generate and rotate to a new CA certificate/key pair. * `POST /service/create` and `POST /services/(id or name)/update` now take the field `Platforms` as part of the service `Placement`, allowing to specify platforms supported by the service. * `POST /containers/(name)/wait` now accepts a `condition` query parameter to indicate which state change condition to wait for. Also, response headers are now returned immediately to acknowledge that the server has registered a wait callback for the client. * `POST /swarm/init` now accepts a `DataPathAddr` property to set the IP-address or network interface to use for data traffic * `POST /swarm/join` now accepts a `DataPathAddr` property to set the IP-address or network interface to use for data traffic * `GET /events` now supports service, node and secret events which are emitted when users create, update and remove service, node and secret * `GET /events` now supports network remove event which is emitted when users remove a swarm scoped network * `GET /events` now supports a filter type `scope` in which supported value could be swarm and local * `PUT /containers/(name)/archive` now accepts a `copyUIDGID` parameter to allow copy UID/GID maps to dest file or dir. ## v1.29 API changes [Docker Engine API v1.29](https://docs.docker.com/engine/api/v1.29/) documentation * `DELETE /networks/(name)` now allows to remove the ingress network, the one used to provide the routing-mesh. * `POST /networks/create` now supports creating the ingress network, by specifying an `Ingress` boolean field. As of now this is supported only when using the overlay network driver. * `GET /networks/(name)` now returns an `Ingress` field showing whether the network is the ingress one. * `GET /networks/` now supports a `scope` filter to filter networks based on the network mode (`swarm`, `global`, or `local`). * `POST /containers/create`, `POST /service/create` and `POST /services/(id or name)/update` now takes the field `StartPeriod` as a part of the `HealthConfig` allowing for specification of a period during which the container should not be considered unhealthy even if health checks do not pass. * `GET /services/(id)` now accepts an `insertDefaults` query-parameter to merge default values into the service inspect output. 
* `POST /containers/prune`, `POST /images/prune`, `POST /volumes/prune`, and `POST /networks/prune` now support a `label` filter to filter containers, images, volumes, or networks based on the label. The format of the label filter could be `label=<key>`/`label=<key>=<value>` to remove those with the specified labels, or `label!=<key>`/`label!=<key>=<value>` to remove those without the specified labels.
* `POST /services/create` now accepts `Privileges` as part of `ContainerSpec`. Privileges currently include `CredentialSpec` and `SELinuxContext`.

## v1.28 API changes

[Docker Engine API v1.28](https://docs.docker.com/engine/api/v1.28/) documentation

* `POST /containers/create` now includes a `Consistency` field to specify the consistency level for each `Mount`, with possible values `default`, `consistent`, `cached`, or `delegated`.
* `POST /containers/create` now takes a `DeviceCgroupRules` field in `HostConfig`, allowing custom device cgroup rules to be set for the created container.
* The optional query parameter `verbose` for `GET /networks/(id or name)` will now list all services with all the tasks, including the non-local tasks on the given network.
* `GET /containers/(id or name)/attach/ws` now returns a WebSocket in binary frame format for API version >= v1.28, and returns a WebSocket in text frame format for API version < v1.28, for the purpose of backward-compatibility.
* `GET /networks` is optimised to only return the list of all networks and network-specific information. The list of all containers attached to a specific network is removed from this API and is only available using the network-specific `GET /networks/{network-id}`.
* `GET /containers/json` now supports `publish` and `expose` filters to filter containers that expose or publish certain ports.
* `POST /services/create` and `POST /services/(id or name)/update` now accept the `ReadOnly` parameter, which mounts the container's root filesystem as read only.
* `POST /build` now accepts an `extrahosts` parameter to specify host-to-IP mappings to use during the build.
* `POST /services/create` and `POST /services/(id or name)/update` now accept a `rollback` value for `FailureAction`.
* `POST /services/create` and `POST /services/(id or name)/update` now accept an optional `RollbackConfig` object which specifies rollback options.
* `GET /services` now supports a `mode` filter to filter services based on the service mode (either `global` or `replicated`).
* `POST /containers/(name)/update` now supports updating `NanoCpus`, which represents CPU quota in units of 10<sup>-9</sup> CPUs.

## v1.27 API changes

[Docker Engine API v1.27](https://docs.docker.com/engine/api/v1.27/) documentation

* `GET /containers/(id or name)/stats` now includes an `online_cpus` field in both `precpu_stats` and `cpu_stats`. If this field is `nil`, then for compatibility with older daemons the length of the corresponding `cpu_usage.percpu_usage` array should be used.

## v1.26 API changes

[Docker Engine API v1.26](https://docs.docker.com/engine/api/v1.26/) documentation

* `POST /plugins/(plugin name)/upgrade` upgrades a plugin.

## v1.25 API changes

[Docker Engine API v1.25](https://docs.docker.com/engine/api/v1.25/) documentation

* The API version is now required in all API calls. Instead of just requesting, for example, the URL `/containers/json`, you must now request `/v1.25/containers/json`.
* `GET /version` now returns `MinAPIVersion`.
* `POST /build` accepts a `networkmode` parameter to specify the network used during the build.
* `GET /images/(name)/json` now returns `OsVersion` if populated * `GET /info` now returns `Isolation`. * `POST /containers/create` now takes `AutoRemove` in HostConfig, to enable auto-removal of the container on daemon side when the container's process exits. * `GET /containers/json` and `GET /containers/(id or name)/json` now return `"removing"` as a value for the `State.Status` field if the container is being removed. Previously, "exited" was returned as status. * `GET /containers/json` now accepts `removing` as a valid value for the `status` filter. * `GET /containers/json` now supports filtering containers by `health` status. * `DELETE /volumes/(name)` now accepts a `force` query parameter to force removal of volumes that were already removed out of band by the volume driver plugin. * `POST /containers/create/` and `POST /containers/(name)/update` now validates restart policies. * `POST /containers/create` now validates IPAMConfig in NetworkingConfig, and returns error for invalid IPv4 and IPv6 addresses (`--ip` and `--ip6` in `docker create/run`). * `POST /containers/create` now takes a `Mounts` field in `HostConfig` which replaces `Binds`, `Volumes`, and `Tmpfs`. *note*: `Binds`, `Volumes`, and `Tmpfs` are still available and can be combined with `Mounts`. * `POST /build` now performs a preliminary validation of the `Dockerfile` before starting the build, and returns an error if the syntax is incorrect. Note that this change is _unversioned_ and applied to all API versions. * `POST /build` accepts `cachefrom` parameter to specify images used for build cache. * `GET /networks/` endpoint now correctly returns a list of *all* networks, instead of the default network if a trailing slash is provided, but no `name` or `id`. * `DELETE /containers/(name)` endpoint now returns an error of `removal of container name is already in progress` with status code of 400, when container name is in a state of removal in progress. * `GET /containers/json` now supports a `is-task` filter to filter containers that are tasks (part of a service in swarm mode). * `POST /containers/create` now takes `StopTimeout` field. * `POST /services/create` and `POST /services/(id or name)/update` now accept `Monitor` and `MaxFailureRatio` parameters, which control the response to failures during service updates. * `POST /services/(id or name)/update` now accepts a `ForceUpdate` parameter inside the `TaskTemplate`, which causes the service to be updated even if there are no changes which would ordinarily trigger an update. * `POST /services/create` and `POST /services/(id or name)/update` now return a `Warnings` array. * `GET /networks/(name)` now returns field `Created` in response to show network created time. * `POST /containers/(id or name)/exec` now accepts an `Env` field, which holds a list of environment variables to be set in the context of the command execution. * `GET /volumes`, `GET /volumes/(name)`, and `POST /volumes/create` now return the `Options` field which holds the driver specific options to use for when creating the volume. * `GET /exec/(id)/json` now returns `Pid`, which is the system pid for the exec'd process. * `POST /containers/prune` prunes stopped containers. * `POST /images/prune` prunes unused images. * `POST /volumes/prune` prunes unused volumes. * `POST /networks/prune` prunes unused networks. * Every API response now includes a `Docker-Experimental` header specifying if experimental features are enabled (value can be `true` or `false`). 
* Every API response now includes a `API-Version` header specifying the default API version of the server. * The `hostConfig` option now accepts the fields `CpuRealtimePeriod` and `CpuRtRuntime` to allocate cpu runtime to rt tasks when `CONFIG_RT_GROUP_SCHED` is enabled in the kernel. * The `SecurityOptions` field within the `GET /info` response now includes `userns` if user namespaces are enabled in the daemon. * `GET /nodes` and `GET /node/(id or name)` now return `Addr` as part of a node's `Status`, which is the address that that node connects to the manager from. * The `HostConfig` field now includes `NanoCpus` that represents CPU quota in units of 10<sup>-9</sup> CPUs. * `GET /info` now returns more structured information about security options. * The `HostConfig` field now includes `CpuCount` that represents the number of CPUs available for execution by the container. Windows daemon only. * `POST /services/create` and `POST /services/(id or name)/update` now accept the `TTY` parameter, which allocate a pseudo-TTY in container. * `POST /services/create` and `POST /services/(id or name)/update` now accept the `DNSConfig` parameter, which specifies DNS related configurations in resolver configuration file (resolv.conf) through `Nameservers`, `Search`, and `Options`. * `POST /services/create` and `POST /services/(id or name)/update` now support `node.platform.arch` and `node.platform.os` constraints in the services `TaskSpec.Placement.Constraints` field. * `GET /networks/(id or name)` now includes IP and name of all peers nodes for swarm mode overlay networks. * `GET /plugins` list plugins. * `POST /plugins/pull?name=<plugin name>` pulls a plugin. * `GET /plugins/(plugin name)` inspect a plugin. * `POST /plugins/(plugin name)/set` configure a plugin. * `POST /plugins/(plugin name)/enable` enable a plugin. * `POST /plugins/(plugin name)/disable` disable a plugin. * `POST /plugins/(plugin name)/push` push a plugin. * `POST /plugins/create?name=(plugin name)` create a plugin. * `DELETE /plugins/(plugin name)` delete a plugin. * `POST /node/(id or name)/update` now accepts both `id` or `name` to identify the node to update. * `GET /images/json` now support a `reference` filter. * `GET /secrets` returns information on the secrets. * `POST /secrets/create` creates a secret. * `DELETE /secrets/{id}` removes the secret `id`. * `GET /secrets/{id}` returns information on the secret `id`. * `POST /secrets/{id}/update` updates the secret `id`. * `POST /services/(id or name)/update` now accepts service name or prefix of service id as a parameter. * `POST /containers/create` added 2 built-in log-opts that work on all logging drivers, `mode` (`blocking`|`non-blocking`), and `max-buffer-size` (e.g. `2m`) which enables a non-blocking log buffer. * `POST /containers/create` now takes `HostConfig.Init` field to run an init inside the container that forwards signals and reaps processes. ## v1.24 API changes [Docker Engine API v1.24](v1.24.md) documentation * `POST /containers/create` now takes `StorageOpt` field. * `GET /info` now returns `SecurityOptions` field, showing if `apparmor`, `seccomp`, or `selinux` is supported. * `GET /info` no longer returns the `ExecutionDriver` property. This property was no longer used after integration with ContainerD in Docker 1.11. * `GET /networks` now supports filtering by `label` and `driver`. * `GET /containers/json` now supports filtering containers by `network` name or id. * `POST /containers/create` now takes `IOMaximumBandwidth` and `IOMaximumIOps` fields. 
Windows daemon only. * `POST /containers/create` now returns an HTTP 400 "bad parameter" message if no command is specified (instead of an HTTP 500 "server error") * `GET /images/search` now takes a `filters` query parameter. * `GET /events` now supports a `reload` event that is emitted when the daemon configuration is reloaded. * `GET /events` now supports filtering by daemon name or ID. * `GET /events` now supports a `detach` event that is emitted on detaching from container process. * `GET /events` now supports an `exec_detach ` event that is emitted on detaching from exec process. * `GET /images/json` now supports filters `since` and `before`. * `POST /containers/(id or name)/start` no longer accepts a `HostConfig`. * `POST /images/(name)/tag` no longer has a `force` query parameter. * `GET /images/search` now supports maximum returned search results `limit`. * `POST /containers/{name:.*}/copy` is now removed and errors out starting from this API version. * API errors are now returned as JSON instead of plain text. * `POST /containers/create` and `POST /containers/(id)/start` allow you to configure kernel parameters (sysctls) for use in the container. * `POST /containers/<container ID>/exec` and `POST /exec/<exec ID>/start` no longer expects a "Container" field to be present. This property was not used and is no longer sent by the docker client. * `POST /containers/create/` now validates the hostname (should be a valid RFC 1123 hostname). * `POST /containers/create/` `HostConfig.PidMode` field now accepts `container:<name|id>`, to have the container join the PID namespace of an existing container. ## v1.23 API changes [Docker Engine API v1.23](v1.23.md) documentation * `GET /containers/json` returns the state of the container, one of `created`, `restarting`, `running`, `paused`, `exited` or `dead`. * `GET /containers/json` returns the mount points for the container. * `GET /networks/(name)` now returns an `Internal` field showing whether the network is internal or not. * `GET /networks/(name)` now returns an `EnableIPv6` field showing whether the network has ipv6 enabled or not. * `POST /containers/(name)/update` now supports updating container's restart policy. * `POST /networks/create` now supports enabling ipv6 on the network by setting the `EnableIPv6` field (doing this with a label will no longer work). * `GET /info` now returns `CgroupDriver` field showing what cgroup driver the daemon is using; `cgroupfs` or `systemd`. * `GET /info` now returns `KernelMemory` field, showing if "kernel memory limit" is supported. * `POST /containers/create` now takes `PidsLimit` field, if the kernel is >= 4.3 and the pids cgroup is supported. * `GET /containers/(id or name)/stats` now returns `pids_stats`, if the kernel is >= 4.3 and the pids cgroup is supported. * `POST /containers/create` now allows you to override usernamespaces remapping and use privileged options for the container. * `POST /containers/create` now allows specifying `nocopy` for named volumes, which disables automatic copying from the container path to the volume. * `POST /auth` now returns an `IdentityToken` when supported by a registry. * `POST /containers/create` with both `Hostname` and `Domainname` fields specified will result in the container's hostname being set to `Hostname`, rather than `Hostname.Domainname`. * `GET /volumes` now supports more filters, new added filters are `name` and `driver`. 
* `GET /containers/(id or name)/logs` now accepts a `details` query parameter to stream the extra attributes that were provided to the containers `LogOpts`, such as environment variables and labels, with the logs. * `POST /images/load` now returns progress information as a JSON stream, and has a `quiet` query parameter to suppress progress details. ## v1.22 API changes [Docker Engine API v1.22](v1.22.md) documentation * `POST /container/(name)/update` updates the resources of a container. * `GET /containers/json` supports filter `isolation` on Windows. * `GET /containers/json` now returns the list of networks of containers. * `GET /info` Now returns `Architecture` and `OSType` fields, providing information about the host architecture and operating system type that the daemon runs on. * `GET /networks/(name)` now returns a `Name` field for each container attached to the network. * `GET /version` now returns the `BuildTime` field in RFC3339Nano format to make it consistent with other date/time values returned by the API. * `AuthConfig` now supports a `registrytoken` for token based authentication * `POST /containers/create` now has a 4M minimum value limit for `HostConfig.KernelMemory` * Pushes initiated with `POST /images/(name)/push` and pulls initiated with `POST /images/create` will be cancelled if the HTTP connection making the API request is closed before the push or pull completes. * `POST /containers/create` now allows you to set a read/write rate limit for a device (in bytes per second or IO per second). * `GET /networks` now supports filtering by `name`, `id` and `type`. * `POST /containers/create` now allows you to set the static IPv4 and/or IPv6 address for the container. * `POST /networks/(id)/connect` now allows you to set the static IPv4 and/or IPv6 address for the container. * `GET /info` now includes the number of containers running, stopped, and paused. * `POST /networks/create` now supports restricting external access to the network by setting the `Internal` field. * `POST /networks/(id)/disconnect` now includes a `Force` option to forcefully disconnect a container from network * `GET /containers/(id)/json` now returns the `NetworkID` of containers. * `POST /networks/create` Now supports an options field in the IPAM config that provides options for custom IPAM plugins. * `GET /networks/{network-id}` Now returns IPAM config options for custom IPAM plugins if any are available. * `GET /networks/<network-id>` now returns subnets info for user-defined networks. * `GET /info` can now return a `SystemStatus` field useful for returning additional information about applications that are built on top of engine. ## v1.21 API changes [Docker Engine API v1.21](v1.21.md) documentation * `GET /volumes` lists volumes from all volume drivers. * `POST /volumes/create` to create a volume. * `GET /volumes/(name)` get low-level information about a volume. * `DELETE /volumes/(name)` remove a volume with the specified name. * `VolumeDriver` was moved from `config` to `HostConfig` to make the configuration portable. * `GET /images/(name)/json` now returns information about an image's `RepoTags` and `RepoDigests`. * The `config` option now accepts the field `StopSignal`, which specifies the signal to use to kill a container. * `GET /containers/(id)/stats` will return networking information respectively for each interface. * The `HostConfig` option now includes the `DnsOptions` field to configure the container's DNS options. 
* `POST /build` now optionally takes a serialized map of build-time variables.
* `GET /events` now includes a `timenano` field, in addition to the existing `time` field.
* `GET /events` now supports filtering by image and container labels.
* `GET /info` now lists engine version information and returns information about `CPUShares` and `Cpuset`.
* `GET /containers/json` will return the `ImageID` of the image used by the container.
* `POST /exec/(name)/start` will now return an HTTP 409 when the container is either stopped or paused.
* `POST /containers/create` now takes `KernelMemory` in HostConfig to specify the kernel memory limit.
* `GET /containers/(name)/json` now accepts a `size` parameter. Setting this parameter to '1' returns container size information in the `SizeRw` and `SizeRootFs` fields.
* `GET /containers/(name)/json` now returns a `NetworkSettings.Networks` field, detailing network settings per network. This field deprecates the `NetworkSettings.Gateway`, `NetworkSettings.IPAddress`, `NetworkSettings.IPPrefixLen`, and `NetworkSettings.MacAddress` fields, which are still returned for backward-compatibility, but will be removed in a future version.
* `GET /exec/(id)/json` now returns a `NetworkSettings.Networks` field, detailing network settings per network. This field deprecates the `NetworkSettings.Gateway`, `NetworkSettings.IPAddress`, `NetworkSettings.IPPrefixLen`, and `NetworkSettings.MacAddress` fields, which are still returned for backward-compatibility, but will be removed in a future version.
* The `HostConfig` option now includes the `OomScoreAdj` field for adjusting the badness heuristic. This heuristic selects which processes the OOM killer kills under out-of-memory conditions.

## v1.20 API changes

[Docker Engine API v1.20](v1.20.md) documentation

* `GET /containers/(id)/archive` gets an archive of filesystem content from a container.
* `PUT /containers/(id)/archive` uploads an archive of content to be extracted to an existing directory inside a container's filesystem.
* `POST /containers/(id)/copy` is deprecated in favor of the above `archive` endpoint, which can be used to download files and directories from a container.
* The `hostConfig` option now accepts the field `GroupAdd`, which specifies a list of additional groups that the container process will run as.

## v1.19 API changes

[Docker Engine API v1.19](v1.19.md) documentation

* When the daemon detects a version mismatch with the client, usually when the client is newer than the daemon, an HTTP 400 is now returned instead of a 404.
* `GET /containers/(id)/stats` now accepts a `stream` bool to get only one set of stats and disconnect.
* `GET /containers/(id)/logs` now accepts a `since` timestamp parameter.
* `GET /info` now returns the fields `Debug`, `IPv4Forwarding`, `MemoryLimit`, and `SwapLimit` as booleans instead of ints. In addition, the endpoint now returns the new boolean fields `CpuCfsPeriod`, `CpuCfsQuota`, and `OomKillDisable`.
* The `hostConfig` option now accepts the fields `CpuPeriod` and `CpuQuota`.
* `POST /build` accepts `cpuperiod` and `cpuquota` options.

## v1.18 API changes

[Docker Engine API v1.18](v1.18.md) documentation

* `GET /version` now returns `Os`, `Arch` and `KernelVersion`.
* `POST /containers/create` and `POST /containers/(id)/start` allow you to set ulimit settings for use in the container.
* `GET /info` now returns `SystemTime`, `HttpProxy`, `HttpsProxy` and `NoProxy`.
* `GET /images/json` added a `RepoDigests` field to include image digest information.
* `POST /build` can now set resource constraints for all containers created for the build.
* `CgroupParent` can be passed in the host config to set up container cgroups under a specific cgroup (see the sketch after this list).
* For `POST /build`, closing the HTTP request cancels the build.
* `POST /containers/(id)/exec` now includes a `Warnings` field in the response.
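As an aside (not part of the original changelog), the following is a minimal sketch of how a `HostConfig` field such as `CgroupParent` can be passed when creating a container through the versioned Engine API over the local Unix socket. The image name (`busybox`), cgroup path (`/mygroup`), and API version used here are illustrative assumptions, and error handling is intentionally minimal.

```go
// Illustrative sketch only: create a container with a custom CgroupParent by
// POSTing to the versioned Engine API over the Docker daemon's Unix socket.
// The image, cgroup path, and API version are assumptions chosen for the
// example; adjust them to your environment.
package main

import (
	"bytes"
	"context"
	"fmt"
	"io"
	"net"
	"net/http"
)

func main() {
	// Route every request to the daemon's Unix socket, ignoring the URL host.
	client := &http.Client{
		Transport: &http.Transport{
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				return (&net.Dialer{}).DialContext(ctx, "unix", "/var/run/docker.sock")
			},
		},
	}

	// HostConfig.CgroupParent places the container's cgroups under /mygroup.
	body := []byte(`{"Image": "busybox", "Cmd": ["true"], "HostConfig": {"CgroupParent": "/mygroup"}}`)

	resp, err := client.Post("http://localhost/v1.18/containers/create", "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	out, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(out)) // e.g. a 201 status with the new container ID
}
```

The same request pattern applies to the other endpoints listed in this history; only the path, method, and payload change.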
--- title: "Engine API version history" description: "Documentation of changes that have been made to Engine API." keywords: "API, Docker, rcli, REST, documentation" --- <!-- This file is maintained within the moby/moby GitHub repository at https://github.com/moby/moby/. Make all pull requests against that repo. If you see this file in another repository, consider it read-only there, as it will periodically be overwritten by the definitive file. Pull requests which include edits to this file in other repositories will be rejected. --> ## v1.42 API changes [Docker Engine API v1.42](https://docs.docker.com/engine/api/v1.42/) documentation * Removed the `BuilderSize` field on the `GET /system/df` endpoint. This field was introduced in API 1.31 as part of an experimental feature, and no longer used since API 1.40. Use field `BuildCache` instead to track storage used by the builder component. ## v1.41 API changes [Docker Engine API v1.41](https://docs.docker.com/engine/api/v1.41/) documentation * `GET /events` now returns `prune` events after pruning resources have completed. Prune events are returned for `container`, `network`, `volume`, `image`, and `builder`, and have a `reclaimed` attribute, indicating the amount of space reclaimed (in bytes). * `GET /info` now returns a `CgroupVersion` field, containing the cgroup version. * `GET /info` now returns a `DefaultAddressPools` field, containing a list of custom default address pools for local networks, which can be specified in the `daemon.json` file or `--default-address-pool` dockerd option. * `POST /services/create` and `POST /services/{id}/update` now supports `BindOptions.NonRecursive`. * The `ClusterStore` and `ClusterAdvertise` fields in `GET /info` are deprecated and are now omitted if they contain an empty value. This change is not versioned, and affects all API versions if the daemon has this patch. * The `filter` (singular) query parameter, which was deprecated in favor of the `filters` option in Docker 1.13, has now been removed from the `GET /images/json` endpoint. The parameter remains available when using API version 1.40 or below. * `GET /services` now returns `CapAdd` and `CapDrop` as part of the `ContainerSpec`. * `GET /services/{id}` now returns `CapAdd` and `CapDrop` as part of the `ContainerSpec`. * `POST /services/create` now accepts `CapAdd` and `CapDrop` as part of the `ContainerSpec`. * `POST /services/{id}/update` now accepts `CapAdd` and `CapDrop` as part of the `ContainerSpec`. * `GET /tasks` now returns `CapAdd` and `CapDrop` as part of the `ContainerSpec`. * `GET /tasks/{id}` now returns `CapAdd` and `CapDrop` as part of the `ContainerSpec`. * `GET /services` now returns `Pids` in `TaskTemplate.Resources.Limits`. * `GET /services/{id}` now returns `Pids` in `TaskTemplate.Resources.Limits`. * `POST /services/create` now accepts `Pids` in `TaskTemplate.Resources.Limits`. * `POST /services/{id}/update` now accepts `Pids` in `TaskTemplate.Resources.Limits` to limit the maximum number of PIDs. * `GET /tasks` now returns `Pids` in `TaskTemplate.Resources.Limits`. * `GET /tasks/{id}` now returns `Pids` in `TaskTemplate.Resources.Limits`. * `POST /containers/create` on Linux now accepts the `HostConfig.CgroupnsMode` property. Set the property to `host` to create the container in the daemon's cgroup namespace, or `private` to create the container in its own private cgroup namespace. The per-daemon default is `host`, and can be changed by using the`CgroupNamespaceMode` daemon configuration parameter. 
* `GET /info` now returns an `OSVersion` field, containing the operating system's version. This change is not versioned, and affects all API versions if the daemon has this patch.
* `GET /info` no longer returns the `SystemStatus` field if it does not have a value set. This change is not versioned, and affects all API versions if the daemon has this patch.
* `GET /services` now accepts a `status` query parameter. When set to `true`, services returned will include `ServiceStatus`, which provides Desired, Running, and Completed task counts for the service.
* `GET /services` may now include `ReplicatedJob` or `GlobalJob` as the `Mode` in a `ServiceSpec`.
* `GET /services/{id}` may now include `ReplicatedJob` or `GlobalJob` as the `Mode` in a `ServiceSpec`.
* `POST /services/create` now accepts `ReplicatedJob` or `GlobalJob` as the `Mode` in the `ServiceSpec`.
* `POST /services/{id}/update` accepts updating the fields of the `ReplicatedJob` object in the `ServiceSpec.Mode`. The service mode still cannot be changed, however.
* `GET /services` now includes `JobStatus` on Services with mode `ReplicatedJob` or `GlobalJob`.
* `GET /services/{id}` now includes `JobStatus` on Services with mode `ReplicatedJob` or `GlobalJob`.
* `GET /tasks` now includes `JobIteration` on Tasks spawned from a job-mode service.
* `GET /tasks/{id}` now includes `JobIteration` on the task if spawned from a job-mode service.
* `GET /containers/{id}/stats` now accepts a query param (`one-shot`) which, when used with `stream=false`, fetches a single set of stats instead of waiting for two collection cycles to have 2 CPU stats over a 1 second period.
* The `KernelMemory` field in `HostConfig.Resources` is now deprecated.
* The `KernelMemory` field in `Info` is now deprecated.
* `GET /services` now returns `Ulimits` as part of `ContainerSpec`.
* `GET /services/{id}` now returns `Ulimits` as part of `ContainerSpec`.
* `POST /services/create` now accepts `Ulimits` as part of `ContainerSpec`.
* `POST /services/{id}/update` now accepts `Ulimits` as part of `ContainerSpec`.

## v1.40 API changes

[Docker Engine API v1.40](https://docs.docker.com/engine/api/v1.40/) documentation

* The `/_ping` endpoint can now be accessed using both `GET` and `HEAD` requests. When accessed using a `HEAD` request, all headers are returned, but the body is empty (`Content-Length: 0`). This change is not versioned, and affects all API versions if the daemon has this patch. Clients are recommended to try using `HEAD`, but fall back to `GET` if the `HEAD` request fails.
* `GET /_ping` and `HEAD /_ping` now set `Cache-Control` and `Pragma` headers to prevent the result from being cached. This change is not versioned, and affects all API versions if the daemon has this patch.
* `GET /services` now returns `Sysctls` as part of the `ContainerSpec`.
* `GET /services/{id}` now returns `Sysctls` as part of the `ContainerSpec`.
* `POST /services/create` now accepts `Sysctls` as part of the `ContainerSpec`.
* `POST /services/{id}/update` now accepts `Sysctls` as part of the `ContainerSpec`.
* `POST /services/create` now accepts `Config` as part of `ContainerSpec.Privileges.CredentialSpec`.
* `POST /services/{id}/update` now accepts `Config` as part of `ContainerSpec.Privileges.CredentialSpec`.
* `POST /services/create` now includes `Runtime` as an option in `ContainerSpec.Configs`.
* `POST /services/{id}/update` now includes `Runtime` as an option in `ContainerSpec.Configs`.
* `GET /tasks` now returns `Sysctls` as part of the `ContainerSpec`.
* `GET /tasks/{id}` now returns `Sysctls` as part of the `ContainerSpec`.
* `GET /networks` now supports a `dangling` filter type. When set to `true` (or `1`), the endpoint returns all networks that are not in use by a container. When set to `false` (or `0`), only networks that are in use by one or more containers are returned.
* `GET /nodes` now supports a `node.label` filter type to filter nodes based on the node label. The format of the label filter is `node.label=<key>`/`node.label=<key>=<value>` to return those with the specified labels, or `node.label!=<key>`/`node.label!=<key>=<value>` to return those without the specified labels.
* `POST /containers/create` now accepts a `fluentd-async` option in `HostConfig.LogConfig.Config` when using the Fluentd logging driver. This option deprecates the `fluentd-async-connect` option, which remains functional, but will be removed in a future release. Users are encouraged to use the `fluentd-async` option going forward. This change is not versioned, and affects all API versions if the daemon has this patch.
* `POST /containers/create` now accepts a `fluentd-request-ack` option in `HostConfig.LogConfig.Config` when using the Fluentd logging driver. If enabled, the Fluentd logging driver sends the chunk option with a unique ID. The server will respond with an acknowledgement. This option improves the reliability of the message transmission. This change is not versioned, and affects all API versions if the daemon has this patch.
* `POST /containers/create`, `GET /containers/{id}/json`, and `GET /containers/json` now support `BindOptions.NonRecursive`.
* `POST /swarm/init` now accepts a `DataPathPort` property to set the data path port number.
* `GET /info` now returns information about the `DataPathPort` that is currently used in the swarm.
* `GET /info` now returns a `PidsLimit` boolean to indicate if the host kernel has PID limit support enabled.
* `GET /info` now includes `name=rootless` in `SecurityOptions` when the daemon is running in rootless mode. This change is not versioned, and affects all API versions if the daemon has this patch.
* `GET /info` now returns `none` as `CgroupDriver` when the daemon is running in rootless mode. This change is not versioned, and affects all API versions if the daemon has this patch.
* `POST /containers/create` now accepts `DeviceRequests` as part of `HostConfig`. Can be used to set Nvidia GPUs.
* The `GET /swarm` endpoint now returns `DataPathPort` info.
* `POST /containers/create` now takes a `KernelMemoryTCP` field to set a hard limit for kernel TCP buffer memory.
* `GET /service` now returns `MaxReplicas` as part of the `Placement`.
* `GET /service/{id}` now returns `MaxReplicas` as part of the `Placement`.
* `POST /service/create` and `POST /services/(id or name)/update` now take the field `MaxReplicas` as part of the service `Placement`, allowing you to specify the maximum number of replicas per node for the service.
* `POST /containers/create` on Linux now creates a container with `HostConfig.IpcMode=private` by default, if IpcMode is not explicitly specified. The per-daemon default can be changed back to `shareable` by using the `DefaultIpcMode` daemon configuration parameter.
* `POST /containers/{id}/update` now accepts a `PidsLimit` field to tune a container's PID limit. Set `0` or `-1` for unlimited. Leave `null` to not change the current value.
* `POST /build` now accepts an `outputs` key for configuring build outputs when using BuildKit mode.
## V1.39 API changes

[Docker Engine API v1.39](https://docs.docker.com/engine/api/v1.39/) documentation

* `GET /info` now returns an empty string, instead of `<unknown>`, for `KernelVersion` and `OperatingSystem` if the daemon was unable to obtain this information.
* `GET /info` now returns information about the product license, if a license has been applied to the daemon.
* `GET /info` now returns a `Warnings` field, containing warnings and informational messages about missing features, or issues related to the daemon configuration.
* `POST /swarm/init` now accepts a `DefaultAddrPool` property to set the global scope default address pool.
* `POST /swarm/init` now accepts a `SubnetSize` property to set global scope networks by giving the length of the subnet masks for every such network.
* `POST /session` (added in [V1.31](#v131-api-changes)) is no longer experimental. This endpoint can be used to run interactive long-running protocols between the client and the daemon.

## V1.38 API changes

[Docker Engine API v1.38](https://docs.docker.com/engine/api/v1.38/) documentation

* `GET /tasks` and `GET /tasks/{id}` now return a `NetworkAttachmentSpec` field, containing the `ContainerID` for non-service containers connected to "attachable" swarm-scoped networks.

## v1.37 API changes

[Docker Engine API v1.37](https://docs.docker.com/engine/api/v1.37/) documentation

* `POST /containers/create` and `POST /services/create` now support exposing SCTP ports.
* `POST /configs/create` and `POST /configs/{id}/create` now accept a `Templating` driver.
* `GET /configs` and `GET /configs/{id}` now return the `Templating` driver of the config.
* `POST /secrets/create` and `POST /secrets/{id}/create` now accept a `Templating` driver.
* `GET /secrets` and `GET /secrets/{id}` now return the `Templating` driver of the secret.

## v1.36 API changes

[Docker Engine API v1.36](https://docs.docker.com/engine/api/v1.36/) documentation

* `GET /events` now returns an `exec_die` event when an exec process terminates.

## v1.35 API changes

[Docker Engine API v1.35](https://docs.docker.com/engine/api/v1.35/) documentation

* `POST /services/create` and `POST /services/(id)/update` now accept an `Isolation` field on the container spec to set the isolation technology of the containers running the service (`default`, `process`, or `hyperv`). This configuration is only used for Windows containers.
* `GET /containers/(name)/logs` now supports an additional query parameter: `until`, which returns log lines that occurred before the specified timestamp.
* `POST /containers/{id}/exec` now accepts a `WorkingDir` property to set the work-dir for the exec process, independent of the container's work-dir.
* `GET /version` now returns a `Platform.Name` field, which can be used by products using Moby as a foundation to return information about the platform.
* `GET /version` now returns a `Components` field, which can be used to return information about the components used. Information about the engine itself is now included as a "Component" version, and contains all information from the top-level `Version`, `GitCommit`, `APIVersion`, `MinAPIVersion`, `GoVersion`, `Os`, `Arch`, `BuildTime`, `KernelVersion`, and `Experimental` fields. Going forward, the information from the `Components` section is preferred over their top-level counterparts.

## v1.34 API changes

[Docker Engine API v1.34](https://docs.docker.com/engine/api/v1.34/) documentation

* `POST /containers/(name)/wait?condition=removed` now also returns in case of container removal failure.
A pointer to a structure named `Error` is added to the response JSON in order to indicate a failure. If `Error` is `null`, container removal has succeeded; otherwise, the text of an error message indicating why container removal has failed is available in the `Error.Message` field.

## v1.33 API changes

[Docker Engine API v1.33](https://docs.docker.com/engine/api/v1.33/) documentation

* `GET /events` now supports filtering 4 more kinds of events: `config`, `node`, `secret` and `service`.

## v1.32 API changes

[Docker Engine API v1.32](https://docs.docker.com/engine/api/v1.32/) documentation

* `POST /containers/create` now accepts additional values for the `HostConfig.IpcMode` property. New values are `private`, `shareable`, and `none`.
* `DELETE /networks/{id or name}` fixed an issue where a `name` equal to another network's name was able to mask that `id`. If both a network with the given _name_ and a network with the given _id_ exist, the network with the given _id_ is now deleted. This change is not versioned, and affects all API versions if the daemon has this patch.

## v1.31 API changes

[Docker Engine API v1.31](https://docs.docker.com/engine/api/v1.31/) documentation

* `DELETE /secrets/(name)` now returns status code 404 instead of 500 when the secret does not exist.
* `POST /secrets/create` now returns status code 409 instead of 500 when creating an already existing secret.
* `POST /secrets/create` now accepts a `Driver` struct, allowing the `Name` and driver-specific `Options` to be passed to store a secret in an external secrets store. The `Driver` property can be omitted if the default (internal) secrets store is used.
* `GET /secrets/(id)` and `GET /secrets` now return a `Driver` struct, containing the `Name` and driver-specific `Options` of the external secrets store used to store the secret. The `Driver` property is omitted if no external store is used.
* `POST /secrets/(name)/update` now returns status code 400 instead of 500 when attempting to update anything other than the secret's labels.
* `POST /nodes/(name)/update` now returns status code 400 instead of 500 when demoting the last node fails.
* `GET /networks/(id or name)` now takes an optional query parameter `scope` that will filter the network based on the scope (`local`, `swarm`, or `global`).
* `POST /session` is a new endpoint that can be used for running interactive long-running protocols between the client and the daemon. This endpoint is experimental and only available if the daemon is started with experimental features enabled.
* `GET /images/(name)/get` now includes an `ImageMetadata` field which contains image metadata that is local to the engine and not part of the image config.
* `POST /services/create` now accepts a `PluginSpec` when `TaskTemplate.Runtime` is set to `plugin`.
* `GET /events` now supports config events `create`, `update` and `remove` that are emitted when users create, update or remove a config.
* `GET /volumes/` and `GET /volumes/{name}` now return a `CreatedAt` field, containing the date/time the volume was created. This field is omitted if the creation date/time for the volume is unknown. For volumes with scope "global", this field represents the creation date/time of the local _instance_ of the volume, which may differ from instances of the same volume on different nodes.
* `GET /system/df` now returns a `CreatedAt` field for `Volumes`. Refer to the `/volumes/` endpoint for a description of this field.
## v1.30 API changes [Docker Engine API v1.30](https://docs.docker.com/engine/api/v1.30/) documentation * `GET /info` now returns the list of supported logging drivers, including plugins. * `GET /info` and `GET /swarm` now returns the cluster-wide swarm CA info if the node is in a swarm: the cluster root CA certificate, and the cluster TLS leaf certificate issuer's subject and public key. It also displays the desired CA signing certificate, if any was provided as part of the spec. * `POST /build/` now (when not silent) produces an `Aux` message in the JSON output stream with payload `types.BuildResult` for each image produced. The final such message will reference the image resulting from the build. * `GET /nodes` and `GET /nodes/{id}` now returns additional information about swarm TLS info if the node is part of a swarm: the trusted root CA, and the issuer's subject and public key. * `GET /distribution/(name)/json` is a new endpoint that returns a JSON output stream with payload `types.DistributionInspect` for an image name. It includes a descriptor with the digest, and supported platforms retrieved from directly contacting the registry. * `POST /swarm/update` now accepts 3 additional parameters as part of the swarm spec's CA configuration; the desired CA certificate for the swarm, the desired CA key for the swarm (if not using an external certificate), and an optional parameter to force swarm to generate and rotate to a new CA certificate/key pair. * `POST /service/create` and `POST /services/(id or name)/update` now take the field `Platforms` as part of the service `Placement`, allowing to specify platforms supported by the service. * `POST /containers/(name)/wait` now accepts a `condition` query parameter to indicate which state change condition to wait for. Also, response headers are now returned immediately to acknowledge that the server has registered a wait callback for the client. * `POST /swarm/init` now accepts a `DataPathAddr` property to set the IP-address or network interface to use for data traffic * `POST /swarm/join` now accepts a `DataPathAddr` property to set the IP-address or network interface to use for data traffic * `GET /events` now supports service, node and secret events which are emitted when users create, update and remove service, node and secret * `GET /events` now supports network remove event which is emitted when users remove a swarm scoped network * `GET /events` now supports a filter type `scope` in which supported value could be swarm and local * `PUT /containers/(name)/archive` now accepts a `copyUIDGID` parameter to allow copy UID/GID maps to dest file or dir. ## v1.29 API changes [Docker Engine API v1.29](https://docs.docker.com/engine/api/v1.29/) documentation * `DELETE /networks/(name)` now allows to remove the ingress network, the one used to provide the routing-mesh. * `POST /networks/create` now supports creating the ingress network, by specifying an `Ingress` boolean field. As of now this is supported only when using the overlay network driver. * `GET /networks/(name)` now returns an `Ingress` field showing whether the network is the ingress one. * `GET /networks/` now supports a `scope` filter to filter networks based on the network mode (`swarm`, `global`, or `local`). 
* `POST /containers/create`, `POST /service/create` and `POST /services/(id or name)/update` now take the field `StartPeriod` as a part of the `HealthConfig`, allowing for specification of a period during which the container should not be considered unhealthy even if health checks do not pass.
* `GET /services/(id)` now accepts an `insertDefaults` query-parameter to merge default values into the service inspect output.
* `POST /containers/prune`, `POST /images/prune`, `POST /volumes/prune`, and `POST /networks/prune` now support a `label` filter to filter containers, images, volumes, or networks based on the label. The format of the label filter could be `label=<key>`/`label=<key>=<value>` to remove those with the specified labels, or `label!=<key>`/`label!=<key>=<value>` to remove those without the specified labels.
* `POST /services/create` now accepts `Privileges` as part of `ContainerSpec`. Privileges currently include `CredentialSpec` and `SELinuxContext`.

## v1.28 API changes

[Docker Engine API v1.28](https://docs.docker.com/engine/api/v1.28/) documentation

* `POST /containers/create` now includes a `Consistency` field to specify the consistency level for each `Mount`, with possible values `default`, `consistent`, `cached`, or `delegated`.
* `POST /containers/create` now takes a `DeviceCgroupRules` field in `HostConfig`, allowing custom device cgroup rules to be set for the created container.
* The optional query parameter `verbose` for `GET /networks/(id or name)` will now list all services with all the tasks, including the non-local tasks on the given network.
* `GET /containers/(id or name)/attach/ws` now returns a WebSocket in binary frame format for API version >= v1.28, and returns a WebSocket in text frame format for API version < v1.28, for the purpose of backward-compatibility.
* `GET /networks` is optimised to only return the list of all networks and network-specific information. The list of all containers attached to a specific network is removed from this API and is only available using the network-specific `GET /networks/{network-id}`.
* `GET /containers/json` now supports `publish` and `expose` filters to filter containers that expose or publish certain ports.
* `POST /services/create` and `POST /services/(id or name)/update` now accept the `ReadOnly` parameter, which mounts the container's root filesystem as read only.
* `POST /build` now accepts an `extrahosts` parameter to specify host-to-IP mappings to use during the build.
* `POST /services/create` and `POST /services/(id or name)/update` now accept a `rollback` value for `FailureAction`.
* `POST /services/create` and `POST /services/(id or name)/update` now accept an optional `RollbackConfig` object which specifies rollback options.
* `GET /services` now supports a `mode` filter to filter services based on the service mode (either `global` or `replicated`).
* `POST /containers/(name)/update` now supports updating `NanoCpus`, which represents CPU quota in units of 10<sup>-9</sup> CPUs.

## v1.27 API changes

[Docker Engine API v1.27](https://docs.docker.com/engine/api/v1.27/) documentation

* `GET /containers/(id or name)/stats` now includes an `online_cpus` field in both `precpu_stats` and `cpu_stats`. If this field is `nil`, then for compatibility with older daemons the length of the corresponding `cpu_usage.percpu_usage` array should be used.

## v1.26 API changes

[Docker Engine API v1.26](https://docs.docker.com/engine/api/v1.26/) documentation

* `POST /plugins/(plugin name)/upgrade` upgrades a plugin.
## v1.25 API changes [Docker Engine API v1.25](https://docs.docker.com/engine/api/v1.25/) documentation * The API version is now required in all API calls. Instead of just requesting, for example, the URL `/containers/json`, you must now request `/v1.25/containers/json`. * `GET /version` now returns `MinAPIVersion`. * `POST /build` accepts `networkmode` parameter to specify network used during build. * `GET /images/(name)/json` now returns `OsVersion` if populated * `GET /info` now returns `Isolation`. * `POST /containers/create` now takes `AutoRemove` in HostConfig, to enable auto-removal of the container on daemon side when the container's process exits. * `GET /containers/json` and `GET /containers/(id or name)/json` now return `"removing"` as a value for the `State.Status` field if the container is being removed. Previously, "exited" was returned as status. * `GET /containers/json` now accepts `removing` as a valid value for the `status` filter. * `GET /containers/json` now supports filtering containers by `health` status. * `DELETE /volumes/(name)` now accepts a `force` query parameter to force removal of volumes that were already removed out of band by the volume driver plugin. * `POST /containers/create/` and `POST /containers/(name)/update` now validates restart policies. * `POST /containers/create` now validates IPAMConfig in NetworkingConfig, and returns error for invalid IPv4 and IPv6 addresses (`--ip` and `--ip6` in `docker create/run`). * `POST /containers/create` now takes a `Mounts` field in `HostConfig` which replaces `Binds`, `Volumes`, and `Tmpfs`. *note*: `Binds`, `Volumes`, and `Tmpfs` are still available and can be combined with `Mounts`. * `POST /build` now performs a preliminary validation of the `Dockerfile` before starting the build, and returns an error if the syntax is incorrect. Note that this change is _unversioned_ and applied to all API versions. * `POST /build` accepts `cachefrom` parameter to specify images used for build cache. * `GET /networks/` endpoint now correctly returns a list of *all* networks, instead of the default network if a trailing slash is provided, but no `name` or `id`. * `DELETE /containers/(name)` endpoint now returns an error of `removal of container name is already in progress` with status code of 400, when container name is in a state of removal in progress. * `GET /containers/json` now supports a `is-task` filter to filter containers that are tasks (part of a service in swarm mode). * `POST /containers/create` now takes `StopTimeout` field. * `POST /services/create` and `POST /services/(id or name)/update` now accept `Monitor` and `MaxFailureRatio` parameters, which control the response to failures during service updates. * `POST /services/(id or name)/update` now accepts a `ForceUpdate` parameter inside the `TaskTemplate`, which causes the service to be updated even if there are no changes which would ordinarily trigger an update. * `POST /services/create` and `POST /services/(id or name)/update` now return a `Warnings` array. * `GET /networks/(name)` now returns field `Created` in response to show network created time. * `POST /containers/(id or name)/exec` now accepts an `Env` field, which holds a list of environment variables to be set in the context of the command execution. * `GET /volumes`, `GET /volumes/(name)`, and `POST /volumes/create` now return the `Options` field which holds the driver specific options to use for when creating the volume. 
* `GET /exec/(id)/json` now returns `Pid`, which is the system pid for the exec'd process. * `POST /containers/prune` prunes stopped containers. * `POST /images/prune` prunes unused images. * `POST /volumes/prune` prunes unused volumes. * `POST /networks/prune` prunes unused networks. * Every API response now includes a `Docker-Experimental` header specifying if experimental features are enabled (value can be `true` or `false`). * Every API response now includes a `API-Version` header specifying the default API version of the server. * The `hostConfig` option now accepts the fields `CpuRealtimePeriod` and `CpuRtRuntime` to allocate cpu runtime to rt tasks when `CONFIG_RT_GROUP_SCHED` is enabled in the kernel. * The `SecurityOptions` field within the `GET /info` response now includes `userns` if user namespaces are enabled in the daemon. * `GET /nodes` and `GET /node/(id or name)` now return `Addr` as part of a node's `Status`, which is the address that that node connects to the manager from. * The `HostConfig` field now includes `NanoCpus` that represents CPU quota in units of 10<sup>-9</sup> CPUs. * `GET /info` now returns more structured information about security options. * The `HostConfig` field now includes `CpuCount` that represents the number of CPUs available for execution by the container. Windows daemon only. * `POST /services/create` and `POST /services/(id or name)/update` now accept the `TTY` parameter, which allocate a pseudo-TTY in container. * `POST /services/create` and `POST /services/(id or name)/update` now accept the `DNSConfig` parameter, which specifies DNS related configurations in resolver configuration file (resolv.conf) through `Nameservers`, `Search`, and `Options`. * `POST /services/create` and `POST /services/(id or name)/update` now support `node.platform.arch` and `node.platform.os` constraints in the services `TaskSpec.Placement.Constraints` field. * `GET /networks/(id or name)` now includes IP and name of all peers nodes for swarm mode overlay networks. * `GET /plugins` list plugins. * `POST /plugins/pull?name=<plugin name>` pulls a plugin. * `GET /plugins/(plugin name)` inspect a plugin. * `POST /plugins/(plugin name)/set` configure a plugin. * `POST /plugins/(plugin name)/enable` enable a plugin. * `POST /plugins/(plugin name)/disable` disable a plugin. * `POST /plugins/(plugin name)/push` push a plugin. * `POST /plugins/create?name=(plugin name)` create a plugin. * `DELETE /plugins/(plugin name)` delete a plugin. * `POST /node/(id or name)/update` now accepts both `id` or `name` to identify the node to update. * `GET /images/json` now support a `reference` filter. * `GET /secrets` returns information on the secrets. * `POST /secrets/create` creates a secret. * `DELETE /secrets/{id}` removes the secret `id`. * `GET /secrets/{id}` returns information on the secret `id`. * `POST /secrets/{id}/update` updates the secret `id`. * `POST /services/(id or name)/update` now accepts service name or prefix of service id as a parameter. * `POST /containers/create` added 2 built-in log-opts that work on all logging drivers, `mode` (`blocking`|`non-blocking`), and `max-buffer-size` (e.g. `2m`) which enables a non-blocking log buffer. * `POST /containers/create` now takes `HostConfig.Init` field to run an init inside the container that forwards signals and reaps processes. ## v1.24 API changes [Docker Engine API v1.24](v1.24.md) documentation * `POST /containers/create` now takes `StorageOpt` field. 
* `GET /info` now returns `SecurityOptions` field, showing if `apparmor`, `seccomp`, or `selinux` is supported. * `GET /info` no longer returns the `ExecutionDriver` property. This property was no longer used after integration with ContainerD in Docker 1.11. * `GET /networks` now supports filtering by `label` and `driver`. * `GET /containers/json` now supports filtering containers by `network` name or id. * `POST /containers/create` now takes `IOMaximumBandwidth` and `IOMaximumIOps` fields. Windows daemon only. * `POST /containers/create` now returns an HTTP 400 "bad parameter" message if no command is specified (instead of an HTTP 500 "server error") * `GET /images/search` now takes a `filters` query parameter. * `GET /events` now supports a `reload` event that is emitted when the daemon configuration is reloaded. * `GET /events` now supports filtering by daemon name or ID. * `GET /events` now supports a `detach` event that is emitted on detaching from container process. * `GET /events` now supports an `exec_detach ` event that is emitted on detaching from exec process. * `GET /images/json` now supports filters `since` and `before`. * `POST /containers/(id or name)/start` no longer accepts a `HostConfig`. * `POST /images/(name)/tag` no longer has a `force` query parameter. * `GET /images/search` now supports maximum returned search results `limit`. * `POST /containers/{name:.*}/copy` is now removed and errors out starting from this API version. * API errors are now returned as JSON instead of plain text. * `POST /containers/create` and `POST /containers/(id)/start` allow you to configure kernel parameters (sysctls) for use in the container. * `POST /containers/<container ID>/exec` and `POST /exec/<exec ID>/start` no longer expects a "Container" field to be present. This property was not used and is no longer sent by the docker client. * `POST /containers/create/` now validates the hostname (should be a valid RFC 1123 hostname). * `POST /containers/create/` `HostConfig.PidMode` field now accepts `container:<name|id>`, to have the container join the PID namespace of an existing container. ## v1.23 API changes [Docker Engine API v1.23](v1.23.md) documentation * `GET /containers/json` returns the state of the container, one of `created`, `restarting`, `running`, `paused`, `exited` or `dead`. * `GET /containers/json` returns the mount points for the container. * `GET /networks/(name)` now returns an `Internal` field showing whether the network is internal or not. * `GET /networks/(name)` now returns an `EnableIPv6` field showing whether the network has ipv6 enabled or not. * `POST /containers/(name)/update` now supports updating container's restart policy. * `POST /networks/create` now supports enabling ipv6 on the network by setting the `EnableIPv6` field (doing this with a label will no longer work). * `GET /info` now returns `CgroupDriver` field showing what cgroup driver the daemon is using; `cgroupfs` or `systemd`. * `GET /info` now returns `KernelMemory` field, showing if "kernel memory limit" is supported. * `POST /containers/create` now takes `PidsLimit` field, if the kernel is >= 4.3 and the pids cgroup is supported. * `GET /containers/(id or name)/stats` now returns `pids_stats`, if the kernel is >= 4.3 and the pids cgroup is supported. * `POST /containers/create` now allows you to override usernamespaces remapping and use privileged options for the container. 
* `POST /containers/create` now allows specifying `nocopy` for named volumes, which disables automatic copying from the container path to the volume. * `POST /auth` now returns an `IdentityToken` when supported by a registry. * `POST /containers/create` with both `Hostname` and `Domainname` fields specified will result in the container's hostname being set to `Hostname`, rather than `Hostname.Domainname`. * `GET /volumes` now supports more filters, new added filters are `name` and `driver`. * `GET /containers/(id or name)/logs` now accepts a `details` query parameter to stream the extra attributes that were provided to the containers `LogOpts`, such as environment variables and labels, with the logs. * `POST /images/load` now returns progress information as a JSON stream, and has a `quiet` query parameter to suppress progress details. ## v1.22 API changes [Docker Engine API v1.22](v1.22.md) documentation * `POST /container/(name)/update` updates the resources of a container. * `GET /containers/json` supports filter `isolation` on Windows. * `GET /containers/json` now returns the list of networks of containers. * `GET /info` Now returns `Architecture` and `OSType` fields, providing information about the host architecture and operating system type that the daemon runs on. * `GET /networks/(name)` now returns a `Name` field for each container attached to the network. * `GET /version` now returns the `BuildTime` field in RFC3339Nano format to make it consistent with other date/time values returned by the API. * `AuthConfig` now supports a `registrytoken` for token based authentication * `POST /containers/create` now has a 4M minimum value limit for `HostConfig.KernelMemory` * Pushes initiated with `POST /images/(name)/push` and pulls initiated with `POST /images/create` will be cancelled if the HTTP connection making the API request is closed before the push or pull completes. * `POST /containers/create` now allows you to set a read/write rate limit for a device (in bytes per second or IO per second). * `GET /networks` now supports filtering by `name`, `id` and `type`. * `POST /containers/create` now allows you to set the static IPv4 and/or IPv6 address for the container. * `POST /networks/(id)/connect` now allows you to set the static IPv4 and/or IPv6 address for the container. * `GET /info` now includes the number of containers running, stopped, and paused. * `POST /networks/create` now supports restricting external access to the network by setting the `Internal` field. * `POST /networks/(id)/disconnect` now includes a `Force` option to forcefully disconnect a container from network * `GET /containers/(id)/json` now returns the `NetworkID` of containers. * `POST /networks/create` Now supports an options field in the IPAM config that provides options for custom IPAM plugins. * `GET /networks/{network-id}` Now returns IPAM config options for custom IPAM plugins if any are available. * `GET /networks/<network-id>` now returns subnets info for user-defined networks. * `GET /info` can now return a `SystemStatus` field useful for returning additional information about applications that are built on top of engine. ## v1.21 API changes [Docker Engine API v1.21](v1.21.md) documentation * `GET /volumes` lists volumes from all volume drivers. * `POST /volumes/create` to create a volume. * `GET /volumes/(name)` get low-level information about a volume. * `DELETE /volumes/(name)` remove a volume with the specified name. 
* `VolumeDriver` was moved from `config` to `HostConfig` to make the configuration portable. * `GET /images/(name)/json` now returns information about an image's `RepoTags` and `RepoDigests`. * The `config` option now accepts the field `StopSignal`, which specifies the signal to use to kill a container. * `GET /containers/(id)/stats` now returns networking information separately for each interface. * The `HostConfig` option now includes the `DnsOptions` field to configure the container's DNS options. * `POST /build` now optionally takes a serialized map of build-time variables. * `GET /events` now includes a `timenano` field, in addition to the existing `time` field. * `GET /events` now supports filtering by image and container labels. * `GET /info` now lists engine version information and returns information about `CPUShares` and `Cpuset`. * `GET /containers/json` now returns the `ImageID` of the image used by the container. * `POST /exec/(name)/start` will now return an HTTP 409 when the container is either stopped or paused. * `POST /containers/create` now takes `KernelMemory` in HostConfig to specify the kernel memory limit. * `GET /containers/(name)/json` now accepts a `size` parameter. Setting this parameter to '1' returns container size information in the `SizeRw` and `SizeRootFs` fields. * `GET /containers/(name)/json` now returns a `NetworkSettings.Networks` field, detailing network settings per network. This field deprecates the `NetworkSettings.Gateway`, `NetworkSettings.IPAddress`, `NetworkSettings.IPPrefixLen`, and `NetworkSettings.MacAddress` fields, which are still returned for backward-compatibility, but will be removed in a future version. * `GET /exec/(id)/json` now returns a `NetworkSettings.Networks` field, detailing network settings per network. This field deprecates the `NetworkSettings.Gateway`, `NetworkSettings.IPAddress`, `NetworkSettings.IPPrefixLen`, and `NetworkSettings.MacAddress` fields, which are still returned for backward-compatibility, but will be removed in a future version. * The `HostConfig` option now includes the `OomScoreAdj` field for adjusting the badness heuristic. This heuristic selects which processes the OOM killer kills under out-of-memory conditions. ## v1.20 API changes [Docker Engine API v1.20](v1.20.md) documentation * `GET /containers/(id)/archive` get an archive of filesystem content from a container. * `PUT /containers/(id)/archive` upload an archive of content to be extracted to an existing directory inside a container's filesystem. * `POST /containers/(id)/copy` is deprecated in favor of the above `archive` endpoint which can be used to download files and directories from a container. * The `hostConfig` option now accepts the field `GroupAdd`, which specifies a list of additional groups that the container process will run as. ## v1.19 API changes [Docker Engine API v1.19](v1.19.md) documentation * When the daemon detects a version mismatch with the client, usually when the client is newer than the daemon, an HTTP 400 is now returned instead of a 404. * `GET /containers/(id)/stats` now accepts a `stream` bool to get only one set of stats and disconnect. * `GET /containers/(id)/logs` now accepts a `since` timestamp parameter. * `GET /info` The fields `Debug`, `IPv4Forwarding`, `MemoryLimit`, and `SwapLimit` are now returned as booleans instead of as ints. In addition, the endpoint now returns the new boolean fields `CpuCfsPeriod`, `CpuCfsQuota`, and `OomKillDisable`.
* The `hostConfig` option now accepts the fields `CpuPeriod` and `CpuQuota`. * `POST /build` accepts `cpuperiod` and `cpuquota` options. ## v1.18 API changes [Docker Engine API v1.18](v1.18.md) documentation * `GET /version` now returns `Os`, `Arch` and `KernelVersion`. * `POST /containers/create` and `POST /containers/(id)/start` allow you to set ulimit settings for use in the container. * `GET /info` now returns `SystemTime`, `HttpProxy`, `HttpsProxy` and `NoProxy`. * `GET /images/json` added a `RepoDigests` field to include image digest information. * `POST /build` can now set resource constraints for all containers created for the build. * `CgroupParent` can be passed in the host config to set up container cgroups under a specific cgroup. * `POST /build`: closing the HTTP request cancels the build. * `POST /containers/(id)/exec` now includes a `Warnings` field in the response.
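The changelog entries above describe wire-level API changes; as one illustration of the versioned-path requirement noted under v1.25, here is a minimal sketch (not part of the original changelog) that calls a version-prefixed endpoint over the daemon's unix socket. The socket path and the chosen API version are assumptions, and in practice the official Go client (`github.com/docker/docker/client`) handles version negotiation for you.

```go
package main

import (
	"context"
	"fmt"
	"io"
	"net"
	"net/http"
)

func main() {
	// Assumption: the daemon listens on the default unix socket.
	const socket = "/var/run/docker.sock"

	// Dial the unix socket regardless of the (dummy) host in the URL.
	client := &http.Client{
		Transport: &http.Transport{
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				return (&net.Dialer{}).DialContext(ctx, "unix", socket)
			},
		},
	}

	// Version-prefixed path as described in the v1.25 notes above;
	// "docker" is only a placeholder host, the path is what matters.
	resp, err := client.Get("http://docker/v1.25/containers/json?all=1")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Since v1.24, API errors come back as JSON bodies with a "message" field.
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}
```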
rvolosatovs
c81abefdb1f907bbc5f5b8b1b1fba942821ae5b3
bf78e25fe508c99e12459a3510f27448de8aaefd
"Use field BuildCache instead to track storage used by the builder component."
tonistiigi
4569
moby/moby
42607
Ensure empty build cache is represented as empty JSON array
Closes #42605 Refs https://github.com/moby/moby/pull/42605#discussion_r666055470 <!-- Please make sure you've read and understood our contributing guidelines; https://github.com/moby/moby/blob/master/CONTRIBUTING.md ** Make sure all your commits include a signature generated with `git commit -s` ** For additional information on our contributing process, read our contributing guide https://docs.docker.com/opensource/code/ If this is a bug fix, make sure your description includes "fixes #xxxx", or "closes #xxxx" Please provide the following information: --> **- What I did** Ensure an empty `BuildCache` field is represented as an empty JSON array (`[]`) instead of `null`, to be consistent with `Images`, `Containers`, etc. **- How I did it** Initialize the struct field with an empty slice if `buildCache` is `nil`. Note: the empty slice could potentially be defined as a global variable and reused to avoid unnecessary allocations, but that is an optimization not worth doing here in my opinion. **- How to verify it** `curl -s --unix-socket /var/run/docker.sock http://localhost/system/df | jq` on an empty daemon should output: ```json { "LayersSize": 0, "Images": [], "Containers": [], "Volumes": [], "BuildCache": [], "BuilderSize": 0 } ``` **- Description for the changelog** <!-- Write a short (one line) summary that describes the changes in this pull request for inclusion in the changelog: --> **- A picture of a cute animal (not mandatory but encouraged)**
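The description above says the fix is to initialize the struct field with an empty slice when `buildCache` is `nil`. The sketch below illustrates that nil-slice versus empty-slice marshaling behaviour in isolation; `DiskUsage` and `CacheEntry` are hypothetical stand-ins for the real API types, not the code changed in this PR.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// DiskUsage is a simplified, hypothetical stand-in for the real API type;
// the field names mirror the /system/df response, but this is not the moby code.
type DiskUsage struct {
	LayersSize int64         `json:"LayersSize"`
	BuildCache []*CacheEntry `json:"BuildCache"`
}

type CacheEntry struct {
	ID   string `json:"ID"`
	Size int64  `json:"Size"`
}

func main() {
	var cache []*CacheEntry // nil when the builder reports no cache records

	// Without the fix: a nil slice marshals to null.
	before, _ := json.Marshal(DiskUsage{BuildCache: cache})
	fmt.Println(string(before)) // {"LayersSize":0,"BuildCache":null}

	// The approach described above: substitute an empty slice when nil,
	// so the field marshals to [] like Images, Containers, etc.
	if cache == nil {
		cache = []*CacheEntry{}
	}
	after, _ := json.Marshal(DiskUsage{BuildCache: cache})
	fmt.Println(string(after)) // {"LayersSize":0,"BuildCache":[]}
}
```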
null
2021-07-08 11:20:46+00:00
2021-07-09 13:01:37+00:00
api/swagger.yaml
# A Swagger 2.0 (a.k.a. OpenAPI) definition of the Engine API. # # This is used for generating API documentation and the types used by the # client/server. See api/README.md for more information. # # Some style notes: # - This file is used by ReDoc, which allows GitHub Flavored Markdown in # descriptions. # - There is no maximum line length, for ease of editing and pretty diffs. # - operationIds are in the format "NounVerb", with a singular noun. swagger: "2.0" schemes: - "http" - "https" produces: - "application/json" - "text/plain" consumes: - "application/json" - "text/plain" basePath: "/v1.42" info: title: "Docker Engine API" version: "1.42" x-logo: url: "https://docs.docker.com/images/logo-docker-main.png" description: | The Engine API is an HTTP API served by Docker Engine. It is the API the Docker client uses to communicate with the Engine, so everything the Docker client can do can be done with the API. Most of the client's commands map directly to API endpoints (e.g. `docker ps` is `GET /containers/json`). The notable exception is running containers, which consists of several API calls. # Errors The API uses standard HTTP status codes to indicate the success or failure of the API call. The body of the response will be JSON in the following format: ``` { "message": "page not found" } ``` # Versioning The API is usually changed in each release, so API calls are versioned to ensure that clients don't break. To lock to a specific version of the API, you prefix the URL with its version, for example, call `/v1.30/info` to use the v1.30 version of the `/info` endpoint. If the API version specified in the URL is not supported by the daemon, a HTTP `400 Bad Request` error message is returned. If you omit the version-prefix, the current version of the API (v1.42) is used. For example, calling `/info` is the same as calling `/v1.42/info`. Using the API without a version-prefix is deprecated and will be removed in a future release. Engine releases in the near future should support this version of the API, so your client will continue to work even if it is talking to a newer Engine. The API uses an open schema model, which means server may add extra properties to responses. Likewise, the server will ignore any extra query parameters and request body properties. When you write clients, you need to ignore additional properties in responses to ensure they do not break when talking to newer daemons. # Authentication Authentication for registries is handled client side. The client has to send authentication details to various endpoints that need to communicate with registries, such as `POST /images/(name)/push`. These are sent as `X-Registry-Auth` header as a [base64url encoded](https://tools.ietf.org/html/rfc4648#section-5) (JSON) string with the following structure: ``` { "username": "string", "password": "string", "email": "string", "serveraddress": "string" } ``` The `serveraddress` is a domain/IP without a protocol. Throughout this structure, double quotes are required. If you have already got an identity token from the [`/auth` endpoint](#operation/SystemAuth), you can just pass this instead of credentials: ``` { "identitytoken": "9cbaf023786cd7..." } ``` # The tags on paths define the menu sections in the ReDoc documentation, so # the usage of tags must make sense for that: # - They should be singular, not plural. # - There should not be too many tags, or the menu becomes unwieldy. 
For # example, it is preferable to add a path to the "System" tag instead of # creating a tag with a single path in it. # - The order of tags in this list defines the order in the menu. tags: # Primary objects - name: "Container" x-displayName: "Containers" description: | Create and manage containers. - name: "Image" x-displayName: "Images" - name: "Network" x-displayName: "Networks" description: | Networks are user-defined networks that containers can be attached to. See the [networking documentation](https://docs.docker.com/network/) for more information. - name: "Volume" x-displayName: "Volumes" description: | Create and manage persistent storage that can be attached to containers. - name: "Exec" x-displayName: "Exec" description: | Run new commands inside running containers. Refer to the [command-line reference](https://docs.docker.com/engine/reference/commandline/exec/) for more information. To exec a command in a container, you first need to create an exec instance, then start it. These two API endpoints are wrapped up in a single command-line command, `docker exec`. # Swarm things - name: "Swarm" x-displayName: "Swarm" description: | Engines can be clustered together in a swarm. Refer to the [swarm mode documentation](https://docs.docker.com/engine/swarm/) for more information. - name: "Node" x-displayName: "Nodes" description: | Nodes are instances of the Engine participating in a swarm. Swarm mode must be enabled for these endpoints to work. - name: "Service" x-displayName: "Services" description: | Services are the definitions of tasks to run on a swarm. Swarm mode must be enabled for these endpoints to work. - name: "Task" x-displayName: "Tasks" description: | A task is a container running on a swarm. It is the atomic scheduling unit of swarm. Swarm mode must be enabled for these endpoints to work. - name: "Secret" x-displayName: "Secrets" description: | Secrets are sensitive data that can be used by services. Swarm mode must be enabled for these endpoints to work. - name: "Config" x-displayName: "Configs" description: | Configs are application configurations that can be used by services. Swarm mode must be enabled for these endpoints to work. 
# System things - name: "Plugin" x-displayName: "Plugins" - name: "System" x-displayName: "System" definitions: Port: type: "object" description: "An open port on a container" required: [PrivatePort, Type] properties: IP: type: "string" format: "ip-address" description: "Host IP address that the container's port is mapped to" PrivatePort: type: "integer" format: "uint16" x-nullable: false description: "Port on the container" PublicPort: type: "integer" format: "uint16" description: "Port exposed on the host" Type: type: "string" x-nullable: false enum: ["tcp", "udp", "sctp"] example: PrivatePort: 8080 PublicPort: 80 Type: "tcp" MountPoint: type: "object" description: "A mount point inside a container" properties: Type: type: "string" Name: type: "string" Source: type: "string" Destination: type: "string" Driver: type: "string" Mode: type: "string" RW: type: "boolean" Propagation: type: "string" DeviceMapping: type: "object" description: "A device mapping between the host and container" properties: PathOnHost: type: "string" PathInContainer: type: "string" CgroupPermissions: type: "string" example: PathOnHost: "/dev/deviceName" PathInContainer: "/dev/deviceName" CgroupPermissions: "mrw" DeviceRequest: type: "object" description: "A request for devices to be sent to device drivers" properties: Driver: type: "string" example: "nvidia" Count: type: "integer" example: -1 DeviceIDs: type: "array" items: type: "string" example: - "0" - "1" - "GPU-fef8089b-4820-abfc-e83e-94318197576e" Capabilities: description: | A list of capabilities; an OR list of AND lists of capabilities. type: "array" items: type: "array" items: type: "string" example: # gpu AND nvidia AND compute - ["gpu", "nvidia", "compute"] Options: description: | Driver-specific options, specified as a key/value pairs. These options are passed directly to the driver. type: "object" additionalProperties: type: "string" ThrottleDevice: type: "object" properties: Path: description: "Device path" type: "string" Rate: description: "Rate" type: "integer" format: "int64" minimum: 0 Mount: type: "object" properties: Target: description: "Container path." type: "string" Source: description: "Mount source (e.g. a volume name, a host path)." type: "string" Type: description: | The mount type. Available types: - `bind` Mounts a file or directory from the host into the container. Must exist prior to creating the container. - `volume` Creates a volume with the given name and options (or uses a pre-existing volume with the same name and options). These are **not** removed when the container is removed. - `tmpfs` Create a tmpfs with the given options. The mount source cannot be specified for tmpfs. - `npipe` Mounts a named pipe from the host into the container. Must exist prior to creating the container. type: "string" enum: - "bind" - "volume" - "tmpfs" - "npipe" ReadOnly: description: "Whether the mount should be read-only." type: "boolean" Consistency: description: "The consistency requirement for the mount: `default`, `consistent`, `cached`, or `delegated`." type: "string" BindOptions: description: "Optional configuration for the `bind` type." type: "object" properties: Propagation: description: "A propagation mode with the value `[r]private`, `[r]shared`, or `[r]slave`." type: "string" enum: - "private" - "rprivate" - "shared" - "rshared" - "slave" - "rslave" NonRecursive: description: "Disable recursive bind mount." type: "boolean" default: false VolumeOptions: description: "Optional configuration for the `volume` type." 
type: "object" properties: NoCopy: description: "Populate volume with data from the target." type: "boolean" default: false Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" DriverConfig: description: "Map of driver specific options" type: "object" properties: Name: description: "Name of the driver to use to create the volume." type: "string" Options: description: "key/value map of driver specific options." type: "object" additionalProperties: type: "string" TmpfsOptions: description: "Optional configuration for the `tmpfs` type." type: "object" properties: SizeBytes: description: "The size for the tmpfs mount in bytes." type: "integer" format: "int64" Mode: description: "The permission mode for the tmpfs mount in an integer." type: "integer" RestartPolicy: description: | The behavior to apply when the container exits. The default is not to restart. An ever increasing delay (double the previous delay, starting at 100ms) is added before each restart to prevent flooding the server. type: "object" properties: Name: type: "string" description: | - Empty string means not to restart - `always` Always restart - `unless-stopped` Restart always except when the user has manually stopped the container - `on-failure` Restart only when the container exit code is non-zero enum: - "" - "always" - "unless-stopped" - "on-failure" MaximumRetryCount: type: "integer" description: | If `on-failure` is used, the number of times to retry before giving up. Resources: description: "A container's resources (cgroups config, ulimits, etc)" type: "object" properties: # Applicable to all platforms CpuShares: description: | An integer value representing this container's relative CPU weight versus other containers. type: "integer" Memory: description: "Memory limit in bytes." type: "integer" format: "int64" default: 0 # Applicable to UNIX platforms CgroupParent: description: | Path to `cgroups` under which the container's `cgroup` is created. If the path is not absolute, the path is considered to be relative to the `cgroups` path of the init process. Cgroups are created if they do not already exist. type: "string" BlkioWeight: description: "Block IO weight (relative weight)." type: "integer" minimum: 0 maximum: 1000 BlkioWeightDevice: description: | Block IO weight (relative device weight) in the form: ``` [{"Path": "device_path", "Weight": weight}] ``` type: "array" items: type: "object" properties: Path: type: "string" Weight: type: "integer" minimum: 0 BlkioDeviceReadBps: description: | Limit read rate (bytes per second) from a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" BlkioDeviceWriteBps: description: | Limit write rate (bytes per second) to a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" BlkioDeviceReadIOps: description: | Limit read rate (IO per second) from a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" BlkioDeviceWriteIOps: description: | Limit write rate (IO per second) to a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" CpuPeriod: description: "The length of a CPU period in microseconds." type: "integer" format: "int64" CpuQuota: description: | Microseconds of CPU time that the container can get in a CPU period. 
type: "integer" format: "int64" CpuRealtimePeriod: description: | The length of a CPU real-time period in microseconds. Set to 0 to allocate no time allocated to real-time tasks. type: "integer" format: "int64" CpuRealtimeRuntime: description: | The length of a CPU real-time runtime in microseconds. Set to 0 to allocate no time allocated to real-time tasks. type: "integer" format: "int64" CpusetCpus: description: | CPUs in which to allow execution (e.g., `0-3`, `0,1`). type: "string" example: "0-3" CpusetMems: description: | Memory nodes (MEMs) in which to allow execution (0-3, 0,1). Only effective on NUMA systems. type: "string" Devices: description: "A list of devices to add to the container." type: "array" items: $ref: "#/definitions/DeviceMapping" DeviceCgroupRules: description: "a list of cgroup rules to apply to the container" type: "array" items: type: "string" example: "c 13:* rwm" DeviceRequests: description: | A list of requests for devices to be sent to device drivers. type: "array" items: $ref: "#/definitions/DeviceRequest" KernelMemory: description: | Kernel memory limit in bytes. <p><br /></p> > **Deprecated**: This field is deprecated as the kernel 5.4 deprecated > `kmem.limit_in_bytes`. type: "integer" format: "int64" example: 209715200 KernelMemoryTCP: description: "Hard limit for kernel TCP buffer memory (in bytes)." type: "integer" format: "int64" MemoryReservation: description: "Memory soft limit in bytes." type: "integer" format: "int64" MemorySwap: description: | Total memory limit (memory + swap). Set as `-1` to enable unlimited swap. type: "integer" format: "int64" MemorySwappiness: description: | Tune a container's memory swappiness behavior. Accepts an integer between 0 and 100. type: "integer" format: "int64" minimum: 0 maximum: 100 NanoCpus: description: "CPU quota in units of 10<sup>-9</sup> CPUs." type: "integer" format: "int64" OomKillDisable: description: "Disable OOM Killer for the container." type: "boolean" Init: description: | Run an init inside the container that forwards signals and reaps processes. This field is omitted if empty, and the default (as configured on the daemon) is used. type: "boolean" x-nullable: true PidsLimit: description: | Tune a container's PIDs limit. Set `0` or `-1` for unlimited, or `null` to not change. type: "integer" format: "int64" x-nullable: true Ulimits: description: | A list of resource limits to set in the container. For example: ``` {"Name": "nofile", "Soft": 1024, "Hard": 2048} ``` type: "array" items: type: "object" properties: Name: description: "Name of ulimit" type: "string" Soft: description: "Soft limit" type: "integer" Hard: description: "Hard limit" type: "integer" # Applicable to Windows CpuCount: description: | The number of usable CPUs (Windows only). On Windows Server containers, the processor resource controls are mutually exclusive. The order of precedence is `CPUCount` first, then `CPUShares`, and `CPUPercent` last. type: "integer" format: "int64" CpuPercent: description: | The usable percentage of the available CPUs (Windows only). On Windows Server containers, the processor resource controls are mutually exclusive. The order of precedence is `CPUCount` first, then `CPUShares`, and `CPUPercent` last. type: "integer" format: "int64" IOMaximumIOps: description: "Maximum IOps for the container system drive (Windows only)" type: "integer" format: "int64" IOMaximumBandwidth: description: | Maximum IO in bytes per second for the container system drive (Windows only). 
type: "integer" format: "int64" Limit: description: | An object describing a limit on resources which can be requested by a task. type: "object" properties: NanoCPUs: type: "integer" format: "int64" example: 4000000000 MemoryBytes: type: "integer" format: "int64" example: 8272408576 Pids: description: | Limits the maximum number of PIDs in the container. Set `0` for unlimited. type: "integer" format: "int64" default: 0 example: 100 ResourceObject: description: | An object describing the resources which can be advertised by a node and requested by a task. type: "object" properties: NanoCPUs: type: "integer" format: "int64" example: 4000000000 MemoryBytes: type: "integer" format: "int64" example: 8272408576 GenericResources: $ref: "#/definitions/GenericResources" GenericResources: description: | User-defined resources can be either Integer resources (e.g, `SSD=3`) or String resources (e.g, `GPU=UUID1`). type: "array" items: type: "object" properties: NamedResourceSpec: type: "object" properties: Kind: type: "string" Value: type: "string" DiscreteResourceSpec: type: "object" properties: Kind: type: "string" Value: type: "integer" format: "int64" example: - DiscreteResourceSpec: Kind: "SSD" Value: 3 - NamedResourceSpec: Kind: "GPU" Value: "UUID1" - NamedResourceSpec: Kind: "GPU" Value: "UUID2" HealthConfig: description: "A test to perform to check that the container is healthy." type: "object" properties: Test: description: | The test to perform. Possible values are: - `[]` inherit healthcheck from image or parent image - `["NONE"]` disable healthcheck - `["CMD", args...]` exec arguments directly - `["CMD-SHELL", command]` run command with system's default shell type: "array" items: type: "string" Interval: description: | The time to wait between checks in nanoseconds. It should be 0 or at least 1000000 (1 ms). 0 means inherit. type: "integer" Timeout: description: | The time to wait before considering the check to have hung. It should be 0 or at least 1000000 (1 ms). 0 means inherit. type: "integer" Retries: description: | The number of consecutive failures needed to consider a container as unhealthy. 0 means inherit. type: "integer" StartPeriod: description: | Start period for the container to initialize before starting health-retries countdown in nanoseconds. It should be 0 or at least 1000000 (1 ms). 0 means inherit. type: "integer" Health: description: | Health stores information about the container's healthcheck results. type: "object" properties: Status: description: | Status is one of `none`, `starting`, `healthy` or `unhealthy` - "none" Indicates there is no healthcheck - "starting" Starting indicates that the container is not yet ready - "healthy" Healthy indicates that the container is running correctly - "unhealthy" Unhealthy indicates that the container has a problem type: "string" enum: - "none" - "starting" - "healthy" - "unhealthy" example: "healthy" FailingStreak: description: "FailingStreak is the number of consecutive failures" type: "integer" example: 0 Log: type: "array" description: | Log contains the last few results (oldest first) items: x-nullable: true $ref: "#/definitions/HealthcheckResult" HealthcheckResult: description: | HealthcheckResult stores information about a single run of a healthcheck probe type: "object" properties: Start: description: | Date and time at which this check started in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. 
type: "string" format: "date-time" example: "2020-01-04T10:44:24.496525531Z" End: description: | Date and time at which this check ended in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2020-01-04T10:45:21.364524523Z" ExitCode: description: | ExitCode meanings: - `0` healthy - `1` unhealthy - `2` reserved (considered unhealthy) - other values: error running probe type: "integer" example: 0 Output: description: "Output from last check" type: "string" HostConfig: description: "Container configuration that depends on the host we are running on" allOf: - $ref: "#/definitions/Resources" - type: "object" properties: # Applicable to all platforms Binds: type: "array" description: | A list of volume bindings for this container. Each volume binding is a string in one of these forms: - `host-src:container-dest[:options]` to bind-mount a host path into the container. Both `host-src`, and `container-dest` must be an _absolute_ path. - `volume-name:container-dest[:options]` to bind-mount a volume managed by a volume driver into the container. `container-dest` must be an _absolute_ path. `options` is an optional, comma-delimited list of: - `nocopy` disables automatic copying of data from the container path to the volume. The `nocopy` flag only applies to named volumes. - `[ro|rw]` mounts a volume read-only or read-write, respectively. If omitted or set to `rw`, volumes are mounted read-write. - `[z|Z]` applies SELinux labels to allow or deny multiple containers to read and write to the same volume. - `z`: a _shared_ content label is applied to the content. This label indicates that multiple containers can share the volume content, for both reading and writing. - `Z`: a _private unshared_ label is applied to the content. This label indicates that only the current container can use a private volume. Labeling systems such as SELinux require proper labels to be placed on volume content that is mounted into a container. Without a label, the security system can prevent a container's processes from using the content. By default, the labels set by the host operating system are not modified. - `[[r]shared|[r]slave|[r]private]` specifies mount [propagation behavior](https://www.kernel.org/doc/Documentation/filesystems/sharedsubtree.txt). This only applies to bind-mounted volumes, not internal volumes or named volumes. Mount propagation requires the source mount point (the location where the source directory is mounted in the host operating system) to have the correct propagation properties. For shared volumes, the source mount point must be set to `shared`. For slave volumes, the mount must be set to either `shared` or `slave`. items: type: "string" ContainerIDFile: type: "string" description: "Path to a file where the container ID is written" LogConfig: type: "object" description: "The logging configuration for this container" properties: Type: type: "string" enum: - "json-file" - "syslog" - "journald" - "gelf" - "fluentd" - "awslogs" - "splunk" - "etwlogs" - "none" Config: type: "object" additionalProperties: type: "string" NetworkMode: type: "string" description: | Network mode to use for this container. Supported standard values are: `bridge`, `host`, `none`, and `container:<name|id>`. Any other value is taken as a custom network's name to which this container should connect to. 
PortBindings: $ref: "#/definitions/PortMap" RestartPolicy: $ref: "#/definitions/RestartPolicy" AutoRemove: type: "boolean" description: | Automatically remove the container when the container's process exits. This has no effect if `RestartPolicy` is set. VolumeDriver: type: "string" description: "Driver that this container uses to mount volumes." VolumesFrom: type: "array" description: | A list of volumes to inherit from another container, specified in the form `<container name>[:<ro|rw>]`. items: type: "string" Mounts: description: | Specification for mounts to be added to the container. type: "array" items: $ref: "#/definitions/Mount" # Applicable to UNIX platforms CapAdd: type: "array" description: | A list of kernel capabilities to add to the container. Conflicts with option 'Capabilities'. items: type: "string" CapDrop: type: "array" description: | A list of kernel capabilities to drop from the container. Conflicts with option 'Capabilities'. items: type: "string" CgroupnsMode: type: "string" enum: - "private" - "host" description: | cgroup namespace mode for the container. Possible values are: - `"private"`: the container runs in its own private cgroup namespace - `"host"`: use the host system's cgroup namespace If not specified, the daemon default is used, which can either be `"private"` or `"host"`, depending on daemon version, kernel support and configuration. Dns: type: "array" description: "A list of DNS servers for the container to use." items: type: "string" DnsOptions: type: "array" description: "A list of DNS options." items: type: "string" DnsSearch: type: "array" description: "A list of DNS search domains." items: type: "string" ExtraHosts: type: "array" description: | A list of hostnames/IP mappings to add to the container's `/etc/hosts` file. Specified in the form `["hostname:IP"]`. items: type: "string" GroupAdd: type: "array" description: | A list of additional groups that the container process will run as. items: type: "string" IpcMode: type: "string" description: | IPC sharing mode for the container. Possible values are: - `"none"`: own private IPC namespace, with /dev/shm not mounted - `"private"`: own private IPC namespace - `"shareable"`: own private IPC namespace, with a possibility to share it with other containers - `"container:<name|id>"`: join another (shareable) container's IPC namespace - `"host"`: use the host system's IPC namespace If not specified, daemon default is used, which can either be `"private"` or `"shareable"`, depending on daemon version and configuration. Cgroup: type: "string" description: "Cgroup to use for the container." Links: type: "array" description: | A list of links for the container in the form `container_name:alias`. items: type: "string" OomScoreAdj: type: "integer" description: | An integer value containing the score given to the container in order to tune OOM killer preferences. example: 500 PidMode: type: "string" description: | Set the PID (Process) Namespace mode for the container. It can be either: - `"container:<name|id>"`: joins another container's PID namespace - `"host"`: use the host's PID namespace inside the container Privileged: type: "boolean" description: "Gives the container full access to the host." PublishAllPorts: type: "boolean" description: | Allocates an ephemeral host port for all of a container's exposed ports. Ports are de-allocated when the container stops and allocated when the container starts. The allocated port might be changed when restarting the container. 
The port is selected from the ephemeral port range that depends on the kernel. For example, on Linux the range is defined by `/proc/sys/net/ipv4/ip_local_port_range`. ReadonlyRootfs: type: "boolean" description: "Mount the container's root filesystem as read only." SecurityOpt: type: "array" description: "A list of string values to customize labels for MLS systems, such as SELinux." items: type: "string" StorageOpt: type: "object" description: | Storage driver options for this container, in the form `{"size": "120G"}`. additionalProperties: type: "string" Tmpfs: type: "object" description: | A map of container directories which should be replaced by tmpfs mounts, and their corresponding mount options. For example: ``` { "/run": "rw,noexec,nosuid,size=65536k" } ``` additionalProperties: type: "string" UTSMode: type: "string" description: "UTS namespace to use for the container." UsernsMode: type: "string" description: | Sets the usernamespace mode for the container when usernamespace remapping option is enabled. ShmSize: type: "integer" description: | Size of `/dev/shm` in bytes. If omitted, the system uses 64MB. minimum: 0 Sysctls: type: "object" description: | A list of kernel parameters (sysctls) to set in the container. For example: ``` {"net.ipv4.ip_forward": "1"} ``` additionalProperties: type: "string" Runtime: type: "string" description: "Runtime to use with this container." # Applicable to Windows ConsoleSize: type: "array" description: | Initial console size, as an `[height, width]` array. (Windows only) minItems: 2 maxItems: 2 items: type: "integer" minimum: 0 Isolation: type: "string" description: | Isolation technology of the container. (Windows only) enum: - "default" - "process" - "hyperv" MaskedPaths: type: "array" description: | The list of paths to be masked inside the container (this overrides the default set of paths). items: type: "string" ReadonlyPaths: type: "array" description: | The list of paths to be set as read-only inside the container (this overrides the default set of paths). items: type: "string" ContainerConfig: description: "Configuration for a container that is portable between hosts" type: "object" properties: Hostname: description: "The hostname to use for the container, as a valid RFC 1123 hostname." type: "string" Domainname: description: "The domain name to use for the container." type: "string" User: description: "The user that commands are run as inside the container." type: "string" AttachStdin: description: "Whether to attach to `stdin`." type: "boolean" default: false AttachStdout: description: "Whether to attach to `stdout`." type: "boolean" default: true AttachStderr: description: "Whether to attach to `stderr`." type: "boolean" default: true ExposedPorts: description: | An object mapping ports to an empty object in the form: `{"<port>/<tcp|udp|sctp>": {}}` type: "object" additionalProperties: type: "object" enum: - {} default: {} Tty: description: | Attach standard streams to a TTY, including `stdin` if it is not closed. type: "boolean" default: false OpenStdin: description: "Open `stdin`" type: "boolean" default: false StdinOnce: description: "Close `stdin` after one attached client disconnects" type: "boolean" default: false Env: description: | A list of environment variables to set inside the container in the form `["VAR=value", ...]`. A variable without `=` is removed from the environment, rather than to have an empty value. type: "array" items: type: "string" Cmd: description: | Command to run specified as a string or an array of strings. 
type: "array" items: type: "string" Healthcheck: $ref: "#/definitions/HealthConfig" ArgsEscaped: description: "Command is already escaped (Windows only)" type: "boolean" Image: description: | The name of the image to use when creating the container/ type: "string" Volumes: description: | An object mapping mount point paths inside the container to empty objects. type: "object" additionalProperties: type: "object" enum: - {} default: {} WorkingDir: description: "The working directory for commands to run in." type: "string" Entrypoint: description: | The entry point for the container as a string or an array of strings. If the array consists of exactly one empty string (`[""]`) then the entry point is reset to system default (i.e., the entry point used by docker when there is no `ENTRYPOINT` instruction in the `Dockerfile`). type: "array" items: type: "string" NetworkDisabled: description: "Disable networking for the container." type: "boolean" MacAddress: description: "MAC address of the container." type: "string" OnBuild: description: | `ONBUILD` metadata that were defined in the image's `Dockerfile`. type: "array" items: type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" StopSignal: description: | Signal to stop a container as a string or unsigned integer. type: "string" default: "SIGTERM" StopTimeout: description: "Timeout to stop a container in seconds." type: "integer" default: 10 Shell: description: | Shell for when `RUN`, `CMD`, and `ENTRYPOINT` uses a shell. type: "array" items: type: "string" NetworkingConfig: description: | NetworkingConfig represents the container's networking configuration for each of its interfaces. It is used for the networking configs specified in the `docker create` and `docker network connect` commands. type: "object" properties: EndpointsConfig: description: | A mapping of network name to endpoint configuration for that network. type: "object" additionalProperties: $ref: "#/definitions/EndpointSettings" example: # putting an example here, instead of using the example values from # /definitions/EndpointSettings, because containers/create currently # does not support attaching to multiple networks, so the example request # would be confusing if it showed that multiple networks can be contained # in the EndpointsConfig. # TODO remove once we support multiple networks on container create (see https://github.com/moby/moby/blob/07e6b843594e061f82baa5fa23c2ff7d536c2a05/daemon/create.go#L323) EndpointsConfig: isolated_nw: IPAMConfig: IPv4Address: "172.20.30.33" IPv6Address: "2001:db8:abcd::3033" LinkLocalIPs: - "169.254.34.68" - "fe80::3468" Links: - "container_1" - "container_2" Aliases: - "server_x" - "server_y" NetworkSettings: description: "NetworkSettings exposes the network settings in the API" type: "object" properties: Bridge: description: Name of the network's bridge (for example, `docker0`). type: "string" example: "docker0" SandboxID: description: SandboxID uniquely represents a container's network stack. type: "string" example: "9d12daf2c33f5959c8bf90aa513e4f65b561738661003029ec84830cd503a0c3" HairpinMode: description: | Indicates if hairpin NAT should be enabled on the virtual interface. type: "boolean" example: false LinkLocalIPv6Address: description: IPv6 unicast address using the link-local prefix. type: "string" example: "fe80::42:acff:fe11:1" LinkLocalIPv6PrefixLen: description: Prefix length of the IPv6 unicast address. 
type: "integer" example: "64" Ports: $ref: "#/definitions/PortMap" SandboxKey: description: SandboxKey identifies the sandbox type: "string" example: "/var/run/docker/netns/8ab54b426c38" # TODO is SecondaryIPAddresses actually used? SecondaryIPAddresses: description: "" type: "array" items: $ref: "#/definitions/Address" x-nullable: true # TODO is SecondaryIPv6Addresses actually used? SecondaryIPv6Addresses: description: "" type: "array" items: $ref: "#/definitions/Address" x-nullable: true # TODO properties below are part of DefaultNetworkSettings, which is # marked as deprecated since Docker 1.9 and to be removed in Docker v17.12 EndpointID: description: | EndpointID uniquely represents a service endpoint in a Sandbox. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "b88f5b905aabf2893f3cbc4ee42d1ea7980bbc0a92e2c8922b1e1795298afb0b" Gateway: description: | Gateway address for the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "172.17.0.1" GlobalIPv6Address: description: | Global IPv6 address for the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "2001:db8::5689" GlobalIPv6PrefixLen: description: | Mask length of the global IPv6 address. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "integer" example: 64 IPAddress: description: | IPv4 address for the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "172.17.0.4" IPPrefixLen: description: | Mask length of the IPv4 address. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "integer" example: 16 IPv6Gateway: description: | IPv6 gateway address for this network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. 
Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "2001:db8:2::100" MacAddress: description: | MAC address for the container on the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "02:42:ac:11:00:04" Networks: description: | Information about all networks that the container is connected to. type: "object" additionalProperties: $ref: "#/definitions/EndpointSettings" Address: description: Address represents an IPv4 or IPv6 IP address. type: "object" properties: Addr: description: IP address. type: "string" PrefixLen: description: Mask length of the IP address. type: "integer" PortMap: description: | PortMap describes the mapping of container ports to host ports, using the container's port-number and protocol as key in the format `<port>/<protocol>`, for example, `80/udp`. If a container's port is mapped for multiple protocols, separate entries are added to the mapping table. type: "object" additionalProperties: type: "array" x-nullable: true items: $ref: "#/definitions/PortBinding" example: "443/tcp": - HostIp: "127.0.0.1" HostPort: "4443" "80/tcp": - HostIp: "0.0.0.0" HostPort: "80" - HostIp: "0.0.0.0" HostPort: "8080" "80/udp": - HostIp: "0.0.0.0" HostPort: "80" "53/udp": - HostIp: "0.0.0.0" HostPort: "53" "2377/tcp": null PortBinding: description: | PortBinding represents a binding between a host IP address and a host port. type: "object" properties: HostIp: description: "Host IP address that the container's port is mapped to." type: "string" example: "127.0.0.1" HostPort: description: "Host port number that the container's port is mapped to." type: "string" example: "4443" GraphDriverData: description: "Information about a container's graph driver." 
type: "object" required: [Name, Data] properties: Name: type: "string" x-nullable: false Data: type: "object" x-nullable: false additionalProperties: type: "string" Image: type: "object" required: - Id - Parent - Comment - Created - Container - DockerVersion - Author - Architecture - Os - Size - VirtualSize - GraphDriver - RootFS properties: Id: type: "string" x-nullable: false RepoTags: type: "array" items: type: "string" RepoDigests: type: "array" items: type: "string" Parent: type: "string" x-nullable: false Comment: type: "string" x-nullable: false Created: type: "string" x-nullable: false Container: type: "string" x-nullable: false ContainerConfig: $ref: "#/definitions/ContainerConfig" DockerVersion: type: "string" x-nullable: false Author: type: "string" x-nullable: false Config: $ref: "#/definitions/ContainerConfig" Architecture: type: "string" x-nullable: false Os: type: "string" x-nullable: false OsVersion: type: "string" Size: type: "integer" format: "int64" x-nullable: false VirtualSize: type: "integer" format: "int64" x-nullable: false GraphDriver: $ref: "#/definitions/GraphDriverData" RootFS: type: "object" required: [Type] properties: Type: type: "string" x-nullable: false Layers: type: "array" items: type: "string" BaseLayer: type: "string" Metadata: type: "object" properties: LastTagTime: type: "string" format: "dateTime" ImageSummary: type: "object" required: - Id - ParentId - RepoTags - RepoDigests - Created - Size - SharedSize - VirtualSize - Labels - Containers properties: Id: type: "string" x-nullable: false ParentId: type: "string" x-nullable: false RepoTags: type: "array" x-nullable: false items: type: "string" RepoDigests: type: "array" x-nullable: false items: type: "string" Created: type: "integer" x-nullable: false Size: type: "integer" x-nullable: false SharedSize: type: "integer" x-nullable: false VirtualSize: type: "integer" x-nullable: false Labels: type: "object" x-nullable: false additionalProperties: type: "string" Containers: x-nullable: false type: "integer" AuthConfig: type: "object" properties: username: type: "string" password: type: "string" email: type: "string" serveraddress: type: "string" example: username: "hannibal" password: "xxxx" serveraddress: "https://index.docker.io/v1/" ProcessConfig: type: "object" properties: privileged: type: "boolean" user: type: "string" tty: type: "boolean" entrypoint: type: "string" arguments: type: "array" items: type: "string" Volume: type: "object" required: [Name, Driver, Mountpoint, Labels, Scope, Options] properties: Name: type: "string" description: "Name of the volume." x-nullable: false Driver: type: "string" description: "Name of the volume driver used by the volume." x-nullable: false Mountpoint: type: "string" description: "Mount path of the volume on the host." x-nullable: false CreatedAt: type: "string" format: "dateTime" description: "Date/Time the volume was created." Status: type: "object" description: | Low-level details about the volume, provided by the volume driver. Details are returned as a map with key/value pairs: `{"key":"value","key2":"value2"}`. The `Status` field is optional, and is omitted if the volume driver does not support this feature. additionalProperties: type: "object" Labels: type: "object" description: "User-defined key/value metadata." x-nullable: false additionalProperties: type: "string" Scope: type: "string" description: | The level at which the volume exists. Either `global` for cluster-wide, or `local` for machine level. 
default: "local" x-nullable: false enum: ["local", "global"] Options: type: "object" description: | The driver specific options used when creating the volume. additionalProperties: type: "string" UsageData: type: "object" x-nullable: true required: [Size, RefCount] description: | Usage details about the volume. This information is used by the `GET /system/df` endpoint, and omitted in other endpoints. properties: Size: type: "integer" default: -1 description: | Amount of disk space used by the volume (in bytes). This information is only available for volumes created with the `"local"` volume driver. For volumes created with other volume drivers, this field is set to `-1` ("not available") x-nullable: false RefCount: type: "integer" default: -1 description: | The number of containers referencing this volume. This field is set to `-1` if the reference-count is not available. x-nullable: false example: Name: "tardis" Driver: "custom" Mountpoint: "/var/lib/docker/volumes/tardis" Status: hello: "world" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Scope: "local" CreatedAt: "2016-06-07T20:31:11.853781916Z" Network: type: "object" properties: Name: type: "string" Id: type: "string" Created: type: "string" format: "dateTime" Scope: type: "string" Driver: type: "string" EnableIPv6: type: "boolean" IPAM: $ref: "#/definitions/IPAM" Internal: type: "boolean" Attachable: type: "boolean" Ingress: type: "boolean" Containers: type: "object" additionalProperties: $ref: "#/definitions/NetworkContainer" Options: type: "object" additionalProperties: type: "string" Labels: type: "object" additionalProperties: type: "string" example: Name: "net01" Id: "7d86d31b1478e7cca9ebed7e73aa0fdeec46c5ca29497431d3007d2d9e15ed99" Created: "2016-10-19T04:33:30.360899459Z" Scope: "local" Driver: "bridge" EnableIPv6: false IPAM: Driver: "default" Config: - Subnet: "172.19.0.0/16" Gateway: "172.19.0.1" Options: foo: "bar" Internal: false Attachable: false Ingress: false Containers: 19a4d5d687db25203351ed79d478946f861258f018fe384f229f2efa4b23513c: Name: "test" EndpointID: "628cadb8bcb92de107b2a1e516cbffe463e321f548feb37697cce00ad694f21a" MacAddress: "02:42:ac:13:00:02" IPv4Address: "172.19.0.2/16" IPv6Address: "" Options: com.docker.network.bridge.default_bridge: "true" com.docker.network.bridge.enable_icc: "true" com.docker.network.bridge.enable_ip_masquerade: "true" com.docker.network.bridge.host_binding_ipv4: "0.0.0.0" com.docker.network.bridge.name: "docker0" com.docker.network.driver.mtu: "1500" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" IPAM: type: "object" properties: Driver: description: "Name of the IPAM driver to use." type: "string" default: "default" Config: description: | List of IPAM configuration options, specified as a map: ``` {"Subnet": <CIDR>, "IPRange": <CIDR>, "Gateway": <IP address>, "AuxAddress": <device_name:IP address>} ``` type: "array" items: type: "object" additionalProperties: type: "string" Options: description: "Driver-specific options, specified as a map." 
type: "object" additionalProperties: type: "string" NetworkContainer: type: "object" properties: Name: type: "string" EndpointID: type: "string" MacAddress: type: "string" IPv4Address: type: "string" IPv6Address: type: "string" BuildInfo: type: "object" properties: id: type: "string" stream: type: "string" error: type: "string" errorDetail: $ref: "#/definitions/ErrorDetail" status: type: "string" progress: type: "string" progressDetail: $ref: "#/definitions/ProgressDetail" aux: $ref: "#/definitions/ImageID" BuildCache: type: "object" properties: ID: type: "string" Parent: type: "string" Type: type: "string" Description: type: "string" InUse: type: "boolean" Shared: type: "boolean" Size: description: | Amount of disk space used by the build cache (in bytes). type: "integer" CreatedAt: description: | Date and time at which the build cache was created in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2016-08-18T10:44:24.496525531Z" LastUsedAt: description: | Date and time at which the build cache was last used in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" x-nullable: true example: "2017-08-09T07:09:37.632105588Z" UsageCount: type: "integer" ImageID: type: "object" description: "Image ID or Digest" properties: ID: type: "string" example: ID: "sha256:85f05633ddc1c50679be2b16a0479ab6f7637f8884e0cfe0f4d20e1ebb3d6e7c" CreateImageInfo: type: "object" properties: id: type: "string" error: type: "string" status: type: "string" progress: type: "string" progressDetail: $ref: "#/definitions/ProgressDetail" PushImageInfo: type: "object" properties: error: type: "string" status: type: "string" progress: type: "string" progressDetail: $ref: "#/definitions/ProgressDetail" ErrorDetail: type: "object" properties: code: type: "integer" message: type: "string" ProgressDetail: type: "object" properties: current: type: "integer" total: type: "integer" ErrorResponse: description: "Represents an error." type: "object" required: ["message"] properties: message: description: "The error message." type: "string" x-nullable: false example: message: "Something went wrong." IdResponse: description: "Response to an API call that returns just an Id" type: "object" required: ["Id"] properties: Id: description: "The id of the newly created object." type: "string" x-nullable: false EndpointSettings: description: "Configuration for a network endpoint." type: "object" properties: # Configurations IPAMConfig: $ref: "#/definitions/EndpointIPAMConfig" Links: type: "array" items: type: "string" example: - "container_1" - "container_2" Aliases: type: "array" items: type: "string" example: - "server_x" - "server_y" # Operational data NetworkID: description: | Unique ID of the network. type: "string" example: "08754567f1f40222263eab4102e1c733ae697e8e354aa9cd6e18d7402835292a" EndpointID: description: | Unique ID for the service endpoint in a Sandbox. type: "string" example: "b88f5b905aabf2893f3cbc4ee42d1ea7980bbc0a92e2c8922b1e1795298afb0b" Gateway: description: | Gateway address for this network. type: "string" example: "172.17.0.1" IPAddress: description: | IPv4 address. type: "string" example: "172.17.0.4" IPPrefixLen: description: | Mask length of the IPv4 address. type: "integer" example: 16 IPv6Gateway: description: | IPv6 gateway address. type: "string" example: "2001:db8:2::100" GlobalIPv6Address: description: | Global IPv6 address. 
type: "string" example: "2001:db8::5689" GlobalIPv6PrefixLen: description: | Mask length of the global IPv6 address. type: "integer" format: "int64" example: 64 MacAddress: description: | MAC address for the endpoint on this network. type: "string" example: "02:42:ac:11:00:04" DriverOpts: description: | DriverOpts is a mapping of driver options and values. These options are passed directly to the driver and are driver specific. type: "object" x-nullable: true additionalProperties: type: "string" example: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" EndpointIPAMConfig: description: | EndpointIPAMConfig represents an endpoint's IPAM configuration. type: "object" x-nullable: true properties: IPv4Address: type: "string" example: "172.20.30.33" IPv6Address: type: "string" example: "2001:db8:abcd::3033" LinkLocalIPs: type: "array" items: type: "string" example: - "169.254.34.68" - "fe80::3468" PluginMount: type: "object" x-nullable: false required: [Name, Description, Settable, Source, Destination, Type, Options] properties: Name: type: "string" x-nullable: false example: "some-mount" Description: type: "string" x-nullable: false example: "This is a mount that's used by the plugin." Settable: type: "array" items: type: "string" Source: type: "string" example: "/var/lib/docker/plugins/" Destination: type: "string" x-nullable: false example: "/mnt/state" Type: type: "string" x-nullable: false example: "bind" Options: type: "array" items: type: "string" example: - "rbind" - "rw" PluginDevice: type: "object" required: [Name, Description, Settable, Path] x-nullable: false properties: Name: type: "string" x-nullable: false Description: type: "string" x-nullable: false Settable: type: "array" items: type: "string" Path: type: "string" example: "/dev/fuse" PluginEnv: type: "object" x-nullable: false required: [Name, Description, Settable, Value] properties: Name: x-nullable: false type: "string" Description: x-nullable: false type: "string" Settable: type: "array" items: type: "string" Value: type: "string" PluginInterfaceType: type: "object" x-nullable: false required: [Prefix, Capability, Version] properties: Prefix: type: "string" x-nullable: false Capability: type: "string" x-nullable: false Version: type: "string" x-nullable: false Plugin: description: "A plugin for the Engine API" type: "object" required: [Settings, Enabled, Config, Name] properties: Id: type: "string" example: "5724e2c8652da337ab2eedd19fc6fc0ec908e4bd907c7421bf6a8dfc70c4c078" Name: type: "string" x-nullable: false example: "tiborvass/sample-volume-plugin" Enabled: description: True if the plugin is running. False if the plugin is not running, only installed. type: "boolean" x-nullable: false example: true Settings: description: "Settings that can be modified by users." type: "object" x-nullable: false required: [Args, Devices, Env, Mounts] properties: Mounts: type: "array" items: $ref: "#/definitions/PluginMount" Env: type: "array" items: type: "string" example: - "DEBUG=0" Args: type: "array" items: type: "string" Devices: type: "array" items: $ref: "#/definitions/PluginDevice" PluginReference: description: "plugin remote reference used to push/pull the plugin" type: "string" x-nullable: false example: "localhost:5000/tiborvass/sample-volume-plugin:latest" Config: description: "The config of a plugin." 
type: "object" x-nullable: false required: - Description - Documentation - Interface - Entrypoint - WorkDir - Network - Linux - PidHost - PropagatedMount - IpcHost - Mounts - Env - Args properties: DockerVersion: description: "Docker Version used to create the plugin" type: "string" x-nullable: false example: "17.06.0-ce" Description: type: "string" x-nullable: false example: "A sample volume plugin for Docker" Documentation: type: "string" x-nullable: false example: "https://docs.docker.com/engine/extend/plugins/" Interface: description: "The interface between Docker and the plugin" x-nullable: false type: "object" required: [Types, Socket] properties: Types: type: "array" items: $ref: "#/definitions/PluginInterfaceType" example: - "docker.volumedriver/1.0" Socket: type: "string" x-nullable: false example: "plugins.sock" ProtocolScheme: type: "string" example: "some.protocol/v1.0" description: "Protocol to use for clients connecting to the plugin." enum: - "" - "moby.plugins.http/v1" Entrypoint: type: "array" items: type: "string" example: - "/usr/bin/sample-volume-plugin" - "/data" WorkDir: type: "string" x-nullable: false example: "/bin/" User: type: "object" x-nullable: false properties: UID: type: "integer" format: "uint32" example: 1000 GID: type: "integer" format: "uint32" example: 1000 Network: type: "object" x-nullable: false required: [Type] properties: Type: x-nullable: false type: "string" example: "host" Linux: type: "object" x-nullable: false required: [Capabilities, AllowAllDevices, Devices] properties: Capabilities: type: "array" items: type: "string" example: - "CAP_SYS_ADMIN" - "CAP_SYSLOG" AllowAllDevices: type: "boolean" x-nullable: false example: false Devices: type: "array" items: $ref: "#/definitions/PluginDevice" PropagatedMount: type: "string" x-nullable: false example: "/mnt/volumes" IpcHost: type: "boolean" x-nullable: false example: false PidHost: type: "boolean" x-nullable: false example: false Mounts: type: "array" items: $ref: "#/definitions/PluginMount" Env: type: "array" items: $ref: "#/definitions/PluginEnv" example: - Name: "DEBUG" Description: "If set, prints debug messages" Settable: null Value: "0" Args: type: "object" x-nullable: false required: [Name, Description, Settable, Value] properties: Name: x-nullable: false type: "string" example: "args" Description: x-nullable: false type: "string" example: "command line arguments" Settable: type: "array" items: type: "string" Value: type: "array" items: type: "string" rootfs: type: "object" properties: type: type: "string" example: "layers" diff_ids: type: "array" items: type: "string" example: - "sha256:675532206fbf3030b8458f88d6e26d4eb1577688a25efec97154c94e8b6b4887" - "sha256:e216a057b1cb1efc11f8a268f37ef62083e70b1b38323ba252e25ac88904a7e8" ObjectVersion: description: | The version number of the object such as node, service, etc. This is needed to avoid conflicting writes. The client must send the version number along with the modified specification when updating these objects. This approach ensures safe concurrency and determinism in that the change on the object may not be applied if the version number has changed from the last read. In other words, if two update requests specify the same base version, only one of the requests can succeed. As a result, two separate update requests that happen at the same time will not unintentionally overwrite each other. 
type: "object" properties: Index: type: "integer" format: "uint64" example: 373531 NodeSpec: type: "object" properties: Name: description: "Name for the node." type: "string" example: "my-node" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" Role: description: "Role of the node." type: "string" enum: - "worker" - "manager" example: "manager" Availability: description: "Availability of the node." type: "string" enum: - "active" - "pause" - "drain" example: "active" example: Availability: "active" Name: "node-name" Role: "manager" Labels: foo: "bar" Node: type: "object" properties: ID: type: "string" example: "24ifsmvkjbyhk" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: description: | Date and time at which the node was added to the swarm in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2016-08-18T10:44:24.496525531Z" UpdatedAt: description: | Date and time at which the node was last updated in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2017-08-09T07:09:37.632105588Z" Spec: $ref: "#/definitions/NodeSpec" Description: $ref: "#/definitions/NodeDescription" Status: $ref: "#/definitions/NodeStatus" ManagerStatus: $ref: "#/definitions/ManagerStatus" NodeDescription: description: | NodeDescription encapsulates the properties of the Node as reported by the agent. type: "object" properties: Hostname: type: "string" example: "bf3067039e47" Platform: $ref: "#/definitions/Platform" Resources: $ref: "#/definitions/ResourceObject" Engine: $ref: "#/definitions/EngineDescription" TLSInfo: $ref: "#/definitions/TLSInfo" Platform: description: | Platform represents the platform (Arch/OS). type: "object" properties: Architecture: description: | Architecture represents the hardware architecture (for example, `x86_64`). type: "string" example: "x86_64" OS: description: | OS represents the Operating System (for example, `linux` or `windows`). type: "string" example: "linux" EngineDescription: description: "EngineDescription provides information about an engine." type: "object" properties: EngineVersion: type: "string" example: "17.06.0" Labels: type: "object" additionalProperties: type: "string" example: foo: "bar" Plugins: type: "array" items: type: "object" properties: Type: type: "string" Name: type: "string" example: - Type: "Log" Name: "awslogs" - Type: "Log" Name: "fluentd" - Type: "Log" Name: "gcplogs" - Type: "Log" Name: "gelf" - Type: "Log" Name: "journald" - Type: "Log" Name: "json-file" - Type: "Log" Name: "logentries" - Type: "Log" Name: "splunk" - Type: "Log" Name: "syslog" - Type: "Network" Name: "bridge" - Type: "Network" Name: "host" - Type: "Network" Name: "ipvlan" - Type: "Network" Name: "macvlan" - Type: "Network" Name: "null" - Type: "Network" Name: "overlay" - Type: "Volume" Name: "local" - Type: "Volume" Name: "localhost:5000/vieux/sshfs:latest" - Type: "Volume" Name: "vieux/sshfs:latest" TLSInfo: description: | Information about the issuer of leaf TLS certificates and the trusted root CA certificate. type: "object" properties: TrustRoot: description: | The root CA certificate(s) that are used to validate leaf TLS certificates. type: "string" CertIssuerSubject: description: The base64-url-safe-encoded raw subject bytes of the issuer. type: "string" CertIssuerPublicKey: description: | The base64-url-safe-encoded raw public key bytes of the issuer. 
type: "string" example: TrustRoot: | -----BEGIN CERTIFICATE----- MIIBajCCARCgAwIBAgIUbYqrLSOSQHoxD8CwG6Bi2PJi9c8wCgYIKoZIzj0EAwIw EzERMA8GA1UEAxMIc3dhcm0tY2EwHhcNMTcwNDI0MjE0MzAwWhcNMzcwNDE5MjE0 MzAwWjATMREwDwYDVQQDEwhzd2FybS1jYTBZMBMGByqGSM49AgEGCCqGSM49AwEH A0IABJk/VyMPYdaqDXJb/VXh5n/1Yuv7iNrxV3Qb3l06XD46seovcDWs3IZNV1lf 3Skyr0ofcchipoiHkXBODojJydSjQjBAMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMB Af8EBTADAQH/MB0GA1UdDgQWBBRUXxuRcnFjDfR/RIAUQab8ZV/n4jAKBggqhkjO PQQDAgNIADBFAiAy+JTe6Uc3KyLCMiqGl2GyWGQqQDEcO3/YG36x7om65AIhAJvz pxv6zFeVEkAEEkqIYi0omA9+CjanB/6Bz4n1uw8H -----END CERTIFICATE----- CertIssuerSubject: "MBMxETAPBgNVBAMTCHN3YXJtLWNh" CertIssuerPublicKey: "MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEmT9XIw9h1qoNclv9VeHmf/Vi6/uI2vFXdBveXTpcPjqx6i9wNazchk1XWV/dKTKvSh9xyGKmiIeRcE4OiMnJ1A==" NodeStatus: description: | NodeStatus represents the status of a node. It provides the current status of the node, as seen by the manager. type: "object" properties: State: $ref: "#/definitions/NodeState" Message: type: "string" example: "" Addr: description: "IP address of the node." type: "string" example: "172.17.0.2" NodeState: description: "NodeState represents the state of a node." type: "string" enum: - "unknown" - "down" - "ready" - "disconnected" example: "ready" ManagerStatus: description: | ManagerStatus represents the status of a manager. It provides the current status of a node's manager component, if the node is a manager. x-nullable: true type: "object" properties: Leader: type: "boolean" default: false example: true Reachability: $ref: "#/definitions/Reachability" Addr: description: | The IP address and port at which the manager is reachable. type: "string" example: "10.0.0.46:2377" Reachability: description: "Reachability represents the reachability of a node." type: "string" enum: - "unknown" - "unreachable" - "reachable" example: "reachable" SwarmSpec: description: "User modifiable swarm configuration." type: "object" properties: Name: description: "Name of the swarm." type: "string" example: "default" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" example: com.example.corp.type: "production" com.example.corp.department: "engineering" Orchestration: description: "Orchestration configuration." type: "object" x-nullable: true properties: TaskHistoryRetentionLimit: description: | The number of historic tasks to keep per instance or node. If negative, never remove completed or failed tasks. type: "integer" format: "int64" example: 10 Raft: description: "Raft configuration." type: "object" properties: SnapshotInterval: description: "The number of log entries between snapshots." type: "integer" format: "uint64" example: 10000 KeepOldSnapshots: description: | The number of snapshots to keep beyond the current snapshot. type: "integer" format: "uint64" LogEntriesForSlowFollowers: description: | The number of log entries to keep around to sync up slow followers after a snapshot is created. type: "integer" format: "uint64" example: 500 ElectionTick: description: | The number of ticks that a follower will wait for a message from the leader before becoming a candidate and starting an election. `ElectionTick` must be greater than `HeartbeatTick`. A tick currently defaults to one second, so these translate directly to seconds currently, but this is NOT guaranteed. type: "integer" example: 3 HeartbeatTick: description: | The number of ticks between heartbeats. Every HeartbeatTick ticks, the leader will send a heartbeat to the followers. 
A tick currently defaults to one second, so these translate directly to seconds currently, but this is NOT guaranteed. type: "integer" example: 1 Dispatcher: description: "Dispatcher configuration." type: "object" x-nullable: true properties: HeartbeatPeriod: description: | The delay for an agent to send a heartbeat to the dispatcher. type: "integer" format: "int64" example: 5000000000 CAConfig: description: "CA configuration." type: "object" x-nullable: true properties: NodeCertExpiry: description: "The duration node certificates are issued for." type: "integer" format: "int64" example: 7776000000000000 ExternalCAs: description: | Configuration for forwarding signing requests to an external certificate authority. type: "array" items: type: "object" properties: Protocol: description: | Protocol for communication with the external CA (currently only `cfssl` is supported). type: "string" enum: - "cfssl" default: "cfssl" URL: description: | URL where certificate signing requests should be sent. type: "string" Options: description: | An object with key/value pairs that are interpreted as protocol-specific options for the external CA driver. type: "object" additionalProperties: type: "string" CACert: description: | The root CA certificate (in PEM format) this external CA uses to issue TLS certificates (assumed to be to the current swarm root CA certificate if not provided). type: "string" SigningCACert: description: | The desired signing CA certificate for all swarm node TLS leaf certificates, in PEM format. type: "string" SigningCAKey: description: | The desired signing CA key for all swarm node TLS leaf certificates, in PEM format. type: "string" ForceRotate: description: | An integer whose purpose is to force swarm to generate a new signing CA certificate and key, if none have been specified in `SigningCACert` and `SigningCAKey` format: "uint64" type: "integer" EncryptionConfig: description: "Parameters related to encryption-at-rest." type: "object" properties: AutoLockManagers: description: | If set, generate a key and use it to lock data stored on the managers. type: "boolean" example: false TaskDefaults: description: "Defaults for creating tasks in this cluster." type: "object" properties: LogDriver: description: | The log driver to use for tasks created in the orchestrator if unspecified by a service. Updating this value only affects new tasks. Existing tasks continue to use their previously configured log driver until recreated. type: "object" properties: Name: description: | The log driver to use as a default for new tasks. type: "string" example: "json-file" Options: description: | Driver-specific options for the selected log driver, specified as key/value pairs. type: "object" additionalProperties: type: "string" example: "max-file": "10" "max-size": "100m" # The Swarm information for `GET /info`. It is the same as `GET /swarm`, but # without `JoinTokens`. ClusterInfo: description: | ClusterInfo represents information about the swarm as is returned by the "/info" endpoint. Join-tokens are not included. x-nullable: true type: "object" properties: ID: description: "The ID of the swarm." type: "string" example: "abajmipo7b4xz5ip2nrla6b11" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: description: | Date and time at which the swarm was initialised in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds.
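  # Illustrative sketch, not part of the specification: the cluster-wide default
  # task log driver described above can be changed by sending the modified
  # SwarmSpec to `POST /swarm/update?version=<current ObjectVersion.Index>`,
  # for example with a body containing:
  #
  #   "TaskDefaults": {
  #     "LogDriver": {
  #       "Name": "json-file",
  #       "Options": { "max-file": "10", "max-size": "100m" }
  #     }
  #   }
  #
  # As noted above, only newly created tasks pick up the new default; existing
  # tasks keep their previously configured log driver until recreated.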
type: "string" format: "dateTime" example: "2016-08-18T10:44:24.496525531Z" UpdatedAt: description: | Date and time at which the swarm was last updated in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2017-08-09T07:09:37.632105588Z" Spec: $ref: "#/definitions/SwarmSpec" TLSInfo: $ref: "#/definitions/TLSInfo" RootRotationInProgress: description: | Whether there is currently a root CA rotation in progress for the swarm type: "boolean" example: false DataPathPort: description: | DataPathPort specifies the data path port number for data traffic. Acceptable port range is 1024 to 49151. If no port is set or is set to 0, the default port (4789) is used. type: "integer" format: "uint32" default: 4789 example: 4789 DefaultAddrPool: description: | Default Address Pool specifies default subnet pools for global scope networks. type: "array" items: type: "string" format: "CIDR" example: ["10.10.0.0/16", "20.20.0.0/16"] SubnetSize: description: | SubnetSize specifies the subnet size of the networks created from the default subnet pool. type: "integer" format: "uint32" maximum: 29 default: 24 example: 24 JoinTokens: description: | JoinTokens contains the tokens workers and managers need to join the swarm. type: "object" properties: Worker: description: | The token workers can use to join the swarm. type: "string" example: "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-1awxwuwd3z9j1z3puu7rcgdbx" Manager: description: | The token managers can use to join the swarm. type: "string" example: "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-7p73s1dx5in4tatdymyhg9hu2" Swarm: type: "object" allOf: - $ref: "#/definitions/ClusterInfo" - type: "object" properties: JoinTokens: $ref: "#/definitions/JoinTokens" TaskSpec: description: "User modifiable task configuration." type: "object" properties: PluginSpec: type: "object" description: | Plugin spec for the service. *(Experimental release only.)* <p><br /></p> > **Note**: ContainerSpec, NetworkAttachmentSpec, and PluginSpec are > mutually exclusive. PluginSpec is only used when the Runtime field > is set to `plugin`. NetworkAttachmentSpec is used when the Runtime > field is set to `attachment`. properties: Name: description: "The name or 'alias' to use for the plugin." type: "string" Remote: description: "The plugin image reference to use." type: "string" Disabled: description: "Disable the plugin once scheduled." type: "boolean" PluginPrivilege: type: "array" items: description: | Describes a permission accepted by the user upon installing the plugin. type: "object" properties: Name: type: "string" Description: type: "string" Value: type: "array" items: type: "string" ContainerSpec: type: "object" description: | Container spec for the service. <p><br /></p> > **Note**: ContainerSpec, NetworkAttachmentSpec, and PluginSpec are > mutually exclusive. PluginSpec is only used when the Runtime field > is set to `plugin`. NetworkAttachmentSpec is used when the Runtime > field is set to `attachment`. properties: Image: description: "The image name to use for the container" type: "string" Labels: description: "User-defined key/value data." type: "object" additionalProperties: type: "string" Command: description: "The command to be run in the image." type: "array" items: type: "string" Args: description: "Arguments to the command." 
type: "array" items: type: "string" Hostname: description: | The hostname to use for the container, as a valid [RFC 1123](https://tools.ietf.org/html/rfc1123) hostname. type: "string" Env: description: | A list of environment variables in the form `VAR=value`. type: "array" items: type: "string" Dir: description: "The working directory for commands to run in." type: "string" User: description: "The user inside the container." type: "string" Groups: type: "array" description: | A list of additional groups that the container process will run as. items: type: "string" Privileges: type: "object" description: "Security options for the container" properties: CredentialSpec: type: "object" description: "CredentialSpec for managed service account (Windows only)" properties: Config: type: "string" example: "0bt9dmxjvjiqermk6xrop3ekq" description: | Load credential spec from a Swarm Config with the given ID. The specified config must also be present in the Configs field with the Runtime property set. <p><br /></p> > **Note**: `CredentialSpec.File`, `CredentialSpec.Registry`, > and `CredentialSpec.Config` are mutually exclusive. File: type: "string" example: "spec.json" description: | Load credential spec from this file. The file is read by the daemon, and must be present in the `CredentialSpecs` subdirectory in the docker data directory, which defaults to `C:\ProgramData\Docker\` on Windows. For example, specifying `spec.json` loads `C:\ProgramData\Docker\CredentialSpecs\spec.json`. <p><br /></p> > **Note**: `CredentialSpec.File`, `CredentialSpec.Registry`, > and `CredentialSpec.Config` are mutually exclusive. Registry: type: "string" description: | Load credential spec from this value in the Windows registry. The specified registry value must be located in: `HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization\Containers\CredentialSpecs` <p><br /></p> > **Note**: `CredentialSpec.File`, `CredentialSpec.Registry`, > and `CredentialSpec.Config` are mutually exclusive. SELinuxContext: type: "object" description: "SELinux labels of the container" properties: Disable: type: "boolean" description: "Disable SELinux" User: type: "string" description: "SELinux user label" Role: type: "string" description: "SELinux role label" Type: type: "string" description: "SELinux type label" Level: type: "string" description: "SELinux level label" TTY: description: "Whether a pseudo-TTY should be allocated." type: "boolean" OpenStdin: description: "Open `stdin`" type: "boolean" ReadOnly: description: "Mount the container's root filesystem as read only." type: "boolean" Mounts: description: | Specification for mounts to be added to containers created as part of the service. type: "array" items: $ref: "#/definitions/Mount" StopSignal: description: "Signal to stop the container." type: "string" StopGracePeriod: description: | Amount of time to wait for the container to terminate before forcefully killing it. type: "integer" format: "int64" HealthCheck: $ref: "#/definitions/HealthConfig" Hosts: type: "array" description: | A list of hostname/IP mappings to add to the container's `hosts` file. The format of extra hosts is specified in the [hosts(5)](http://man7.org/linux/man-pages/man5/hosts.5.html) man page: IP_address canonical_hostname [aliases...] items: type: "string" DNSConfig: description: | Specification for DNS related configurations in resolver configuration file (`resolv.conf`). type: "object" properties: Nameservers: description: "The IP addresses of the name servers." 
type: "array" items: type: "string" Search: description: "A search list for host-name lookup." type: "array" items: type: "string" Options: description: | A list of internal resolver variables to be modified (e.g., `debug`, `ndots:3`, etc.). type: "array" items: type: "string" Secrets: description: | Secrets contains references to zero or more secrets that will be exposed to the service. type: "array" items: type: "object" properties: File: description: | File represents a specific target that is backed by a file. type: "object" properties: Name: description: | Name represents the final filename in the filesystem. type: "string" UID: description: "UID represents the file UID." type: "string" GID: description: "GID represents the file GID." type: "string" Mode: description: "Mode represents the FileMode of the file." type: "integer" format: "uint32" SecretID: description: | SecretID represents the ID of the specific secret that we're referencing. type: "string" SecretName: description: | SecretName is the name of the secret that this references, but this is just provided for lookup/display purposes. The secret in the reference will be identified by its ID. type: "string" Configs: description: | Configs contains references to zero or more configs that will be exposed to the service. type: "array" items: type: "object" properties: File: description: | File represents a specific target that is backed by a file. <p><br /><p> > **Note**: `Configs.File` and `Configs.Runtime` are mutually exclusive type: "object" properties: Name: description: | Name represents the final filename in the filesystem. type: "string" UID: description: "UID represents the file UID." type: "string" GID: description: "GID represents the file GID." type: "string" Mode: description: "Mode represents the FileMode of the file." type: "integer" format: "uint32" Runtime: description: | Runtime represents a target that is not mounted into the container but is used by the task <p><br /><p> > **Note**: `Configs.File` and `Configs.Runtime` are mutually > exclusive type: "object" ConfigID: description: | ConfigID represents the ID of the specific config that we're referencing. type: "string" ConfigName: description: | ConfigName is the name of the config that this references, but this is just provided for lookup/display purposes. The config in the reference will be identified by its ID. type: "string" Isolation: type: "string" description: | Isolation technology of the containers running the service. (Windows only) enum: - "default" - "process" - "hyperv" Init: description: | Run an init inside the container that forwards signals and reaps processes. This field is omitted if empty, and the default (as configured on the daemon) is used. type: "boolean" x-nullable: true Sysctls: description: | Set kernel namedspaced parameters (sysctls) in the container. The Sysctls option on services accepts the same sysctls as the are supported on containers. Note that while the same sysctls are supported, no guarantees or checks are made about their suitability for a clustered environment, and it's up to the user to determine whether a given sysctl will work properly in a Service. type: "object" additionalProperties: type: "string" # This option is not used by Windows containers CapabilityAdd: type: "array" description: | A list of kernel capabilities to add to the default set for the container. 
items: type: "string" example: - "CAP_NET_RAW" - "CAP_SYS_ADMIN" - "CAP_SYS_CHROOT" - "CAP_SYSLOG" CapabilityDrop: type: "array" description: | A list of kernel capabilities to drop from the default set for the container. items: type: "string" example: - "CAP_NET_RAW" Ulimits: description: | A list of resource limits to set in the container. For example: `{"Name": "nofile", "Soft": 1024, "Hard": 2048}` type: "array" items: type: "object" properties: Name: description: "Name of ulimit" type: "string" Soft: description: "Soft limit" type: "integer" Hard: description: "Hard limit" type: "integer" NetworkAttachmentSpec: description: | Read-only spec type for non-swarm containers attached to swarm overlay networks. <p><br /></p> > **Note**: ContainerSpec, NetworkAttachmentSpec, and PluginSpec are > mutually exclusive. PluginSpec is only used when the Runtime field > is set to `plugin`. NetworkAttachmentSpec is used when the Runtime > field is set to `attachment`. type: "object" properties: ContainerID: description: "ID of the container represented by this task" type: "string" Resources: description: | Resource requirements which apply to each individual container created as part of the service. type: "object" properties: Limits: description: "Define resource limits." $ref: "#/definitions/Limit" Reservation: description: "Define resource reservations." $ref: "#/definitions/ResourceObject" RestartPolicy: description: | Specification for the restart policy which applies to containers created as part of this service. type: "object" properties: Condition: description: "Condition for restart." type: "string" enum: - "none" - "on-failure" - "any" Delay: description: "Delay between restart attempts." type: "integer" format: "int64" MaxAttempts: description: | Maximum attempts to restart a given container before giving up (default value is 0, which is ignored). type: "integer" format: "int64" default: 0 Window: description: | Window is the time window used to evaluate the restart policy (default value is 0, which is unbounded). type: "integer" format: "int64" default: 0 Placement: type: "object" properties: Constraints: description: | An array of constraint expressions to limit the set of nodes where a task can be scheduled. Constraint expressions can either use a _match_ (`==`) or _exclude_ (`!=`) rule. Multiple constraints find nodes that satisfy every expression (AND match). Constraints can match node or Docker Engine labels as follows: node attribute | matches | example ---------------------|--------------------------------|----------------------------------------------- `node.id` | Node ID | `node.id==2ivku8v2gvtg4` `node.hostname` | Node hostname | `node.hostname!=node-2` `node.role` | Node role (`manager`/`worker`) | `node.role==manager` `node.platform.os` | Node operating system | `node.platform.os==windows` `node.platform.arch` | Node architecture | `node.platform.arch==x86_64` `node.labels` | User-defined node labels | `node.labels.security==high` `engine.labels` | Docker Engine's labels | `engine.labels.operatingsystem==ubuntu-14.04` `engine.labels` apply to Docker Engine labels like operating system, drivers, etc. Swarm administrators add `node.labels` for operational purposes by using the [`node update endpoint`](#operation/NodeUpdate).
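  # Illustrative sketch, not part of the specification: a Placement fragment
  # combining several of the constraint expressions from the table above (all
  # expressions must match, i.e. AND semantics):
  #
  #   "Placement": {
  #     "Constraints": [
  #       "node.role==worker",
  #       "node.labels.security==high"
  #     ]
  #   }
  #
  # The equivalent `docker service create` flag is `--constraint`, which may be
  # repeated once per expression.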
type: "array" items: type: "string" example: - "node.hostname!=node3.corp.example.com" - "node.role!=manager" - "node.labels.type==production" - "node.platform.os==linux" - "node.platform.arch==x86_64" Preferences: description: | Preferences provide a way to make the scheduler aware of factors such as topology. They are provided in order from highest to lowest precedence. type: "array" items: type: "object" properties: Spread: type: "object" properties: SpreadDescriptor: description: | label descriptor, such as `engine.labels.az`. type: "string" example: - Spread: SpreadDescriptor: "node.labels.datacenter" - Spread: SpreadDescriptor: "node.labels.rack" MaxReplicas: description: | Maximum number of replicas for per node (default value is 0, which is unlimited) type: "integer" format: "int64" default: 0 Platforms: description: | Platforms stores all the platforms that the service's image can run on. This field is used in the platform filter for scheduling. If empty, then the platform filter is off, meaning there are no scheduling restrictions. type: "array" items: $ref: "#/definitions/Platform" ForceUpdate: description: | A counter that triggers an update even if no relevant parameters have been changed. type: "integer" Runtime: description: | Runtime is the type of runtime specified for the task executor. type: "string" Networks: description: "Specifies which networks the service should attach to." type: "array" items: $ref: "#/definitions/NetworkAttachmentConfig" LogDriver: description: | Specifies the log driver to use for tasks created from this spec. If not present, the default one for the swarm will be used, finally falling back to the engine default if not specified. type: "object" properties: Name: type: "string" Options: type: "object" additionalProperties: type: "string" TaskState: type: "string" enum: - "new" - "allocated" - "pending" - "assigned" - "accepted" - "preparing" - "ready" - "starting" - "running" - "complete" - "shutdown" - "failed" - "rejected" - "remove" - "orphaned" Task: type: "object" properties: ID: description: "The ID of the task." type: "string" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" UpdatedAt: type: "string" format: "dateTime" Name: description: "Name of the task." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" Spec: $ref: "#/definitions/TaskSpec" ServiceID: description: "The ID of the service this task is part of." type: "string" Slot: type: "integer" NodeID: description: "The ID of the node that this task is on." type: "string" AssignedGenericResources: $ref: "#/definitions/GenericResources" Status: type: "object" properties: Timestamp: type: "string" format: "dateTime" State: $ref: "#/definitions/TaskState" Message: type: "string" Err: type: "string" ContainerStatus: type: "object" properties: ContainerID: type: "string" PID: type: "integer" ExitCode: type: "integer" DesiredState: $ref: "#/definitions/TaskState" JobIteration: description: | If the Service this Task belongs to is a job-mode service, contains the JobIteration of the Service this Task was created for. Absent if the Task was created for a Replicated or Global Service. 
$ref: "#/definitions/ObjectVersion" example: ID: "0kzzo1i0y4jz6027t0k7aezc7" Version: Index: 71 CreatedAt: "2016-06-07T21:07:31.171892745Z" UpdatedAt: "2016-06-07T21:07:31.376370513Z" Spec: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ServiceID: "9mnpnzenvg8p8tdbtq4wvbkcz" Slot: 1 NodeID: "60gvrl6tm78dmak4yl7srz94v" Status: Timestamp: "2016-06-07T21:07:31.290032978Z" State: "running" Message: "started" ContainerStatus: ContainerID: "e5d62702a1b48d01c3e02ca1e0212a250801fa8d67caca0b6f35919ebc12f035" PID: 677 DesiredState: "running" NetworksAttachments: - Network: ID: "4qvuz4ko70xaltuqbt8956gd1" Version: Index: 18 CreatedAt: "2016-06-07T20:31:11.912919752Z" UpdatedAt: "2016-06-07T21:07:29.955277358Z" Spec: Name: "ingress" Labels: com.docker.swarm.internal: "true" DriverConfiguration: {} IPAMOptions: Driver: {} Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" DriverState: Name: "overlay" Options: com.docker.network.driver.overlay.vxlanid_list: "256" IPAMOptions: Driver: Name: "default" Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" Addresses: - "10.255.0.10/16" AssignedGenericResources: - DiscreteResourceSpec: Kind: "SSD" Value: 3 - NamedResourceSpec: Kind: "GPU" Value: "UUID1" - NamedResourceSpec: Kind: "GPU" Value: "UUID2" ServiceSpec: description: "User modifiable configuration for a service." properties: Name: description: "Name of the service." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" TaskTemplate: $ref: "#/definitions/TaskSpec" Mode: description: "Scheduling mode for the service." type: "object" properties: Replicated: type: "object" properties: Replicas: type: "integer" format: "int64" Global: type: "object" ReplicatedJob: description: | The mode used for services with a finite number of tasks that run to a completed state. type: "object" properties: MaxConcurrent: description: | The maximum number of replicas to run simultaneously. type: "integer" format: "int64" default: 1 TotalCompletions: description: | The total number of replicas desired to reach the Completed state. If unset, will default to the value of `MaxConcurrent` type: "integer" format: "int64" GlobalJob: description: | The mode used for services which run a task to the completed state on each valid node. type: "object" UpdateConfig: description: "Specification for the update strategy of the service." type: "object" properties: Parallelism: description: | Maximum number of tasks to be updated in one iteration (0 means unlimited parallelism). type: "integer" format: "int64" Delay: description: "Amount of time between updates, in nanoseconds." type: "integer" format: "int64" FailureAction: description: | Action to take if an updated task fails to run, or stops running during the update. type: "string" enum: - "continue" - "pause" - "rollback" Monitor: description: | Amount of time to monitor each updated task for failures, in nanoseconds. type: "integer" format: "int64" MaxFailureRatio: description: | The fraction of tasks that may fail during an update before the failure action is invoked, specified as a floating point number between 0 and 1. type: "number" default: 0 Order: description: | The order of operations when rolling out an updated task. Either the old task is shut down before the new task is started, or the new task is started before the old task is shut down. 
type: "string" enum: - "stop-first" - "start-first" RollbackConfig: description: "Specification for the rollback strategy of the service." type: "object" properties: Parallelism: description: | Maximum number of tasks to be rolled back in one iteration (0 means unlimited parallelism). type: "integer" format: "int64" Delay: description: | Amount of time between rollback iterations, in nanoseconds. type: "integer" format: "int64" FailureAction: description: | Action to take if an rolled back task fails to run, or stops running during the rollback. type: "string" enum: - "continue" - "pause" Monitor: description: | Amount of time to monitor each rolled back task for failures, in nanoseconds. type: "integer" format: "int64" MaxFailureRatio: description: | The fraction of tasks that may fail during a rollback before the failure action is invoked, specified as a floating point number between 0 and 1. type: "number" default: 0 Order: description: | The order of operations when rolling back a task. Either the old task is shut down before the new task is started, or the new task is started before the old task is shut down. type: "string" enum: - "stop-first" - "start-first" Networks: description: "Specifies which networks the service should attach to." type: "array" items: $ref: "#/definitions/NetworkAttachmentConfig" EndpointSpec: $ref: "#/definitions/EndpointSpec" EndpointPortConfig: type: "object" properties: Name: type: "string" Protocol: type: "string" enum: - "tcp" - "udp" - "sctp" TargetPort: description: "The port inside the container." type: "integer" PublishedPort: description: "The port on the swarm hosts." type: "integer" PublishMode: description: | The mode in which port is published. <p><br /></p> - "ingress" makes the target port accessible on every node, regardless of whether there is a task for the service running on that node or not. - "host" bypasses the routing mesh and publish the port directly on the swarm node where that service is running. type: "string" enum: - "ingress" - "host" default: "ingress" example: "ingress" EndpointSpec: description: "Properties that can be configured to access and load balance a service." type: "object" properties: Mode: description: | The mode of resolution to use for internal load balancing between tasks. type: "string" enum: - "vip" - "dnsrr" default: "vip" Ports: description: | List of exposed ports that this service is accessible on from the outside. Ports can only be provided if `vip` resolution mode is used. type: "array" items: $ref: "#/definitions/EndpointPortConfig" Service: type: "object" properties: ID: type: "string" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" UpdatedAt: type: "string" format: "dateTime" Spec: $ref: "#/definitions/ServiceSpec" Endpoint: type: "object" properties: Spec: $ref: "#/definitions/EndpointSpec" Ports: type: "array" items: $ref: "#/definitions/EndpointPortConfig" VirtualIPs: type: "array" items: type: "object" properties: NetworkID: type: "string" Addr: type: "string" UpdateStatus: description: "The status of a service update." type: "object" properties: State: type: "string" enum: - "updating" - "paused" - "completed" StartedAt: type: "string" format: "dateTime" CompletedAt: type: "string" format: "dateTime" Message: type: "string" ServiceStatus: description: | The status of the service's tasks. Provided only when requested as part of a ServiceList operation. 
type: "object" properties: RunningTasks: description: | The number of tasks for the service currently in the Running state. type: "integer" format: "uint64" example: 7 DesiredTasks: description: | The number of tasks for the service desired to be running. For replicated services, this is the replica count from the service spec. For global services, this is computed by taking count of all tasks for the service with a Desired State other than Shutdown. type: "integer" format: "uint64" example: 10 CompletedTasks: description: | The number of tasks for a job that are in the Completed state. This field must be cross-referenced with the service type, as the value of 0 may mean the service is not in a job mode, or it may mean the job-mode service has no tasks yet Completed. type: "integer" format: "uint64" JobStatus: description: | The status of the service when it is in one of ReplicatedJob or GlobalJob modes. Absent on Replicated and Global mode services. The JobIteration is an ObjectVersion, but unlike the Service's version, does not need to be sent with an update request. type: "object" properties: JobIteration: description: | JobIteration is a value increased each time a Job is executed, successfully or otherwise. "Executed", in this case, means the job as a whole has been started, not that an individual Task has been launched. A job is "Executed" when its ServiceSpec is updated. JobIteration can be used to disambiguate Tasks belonging to different executions of a job. Though JobIteration will increase with each subsequent execution, it may not necessarily increase by 1, and so JobIteration should not be used to $ref: "#/definitions/ObjectVersion" LastExecution: description: | The last time, as observed by the server, that this job was started. type: "string" format: "dateTime" example: ID: "9mnpnzenvg8p8tdbtq4wvbkcz" Version: Index: 19 CreatedAt: "2016-06-07T21:05:51.880065305Z" UpdatedAt: "2016-06-07T21:07:29.962229872Z" Spec: Name: "hopeful_cori" TaskTemplate: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ForceUpdate: 0 Mode: Replicated: Replicas: 1 UpdateConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 RollbackConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 EndpointSpec: Mode: "vip" Ports: - Protocol: "tcp" TargetPort: 6379 PublishedPort: 30001 Endpoint: Spec: Mode: "vip" Ports: - Protocol: "tcp" TargetPort: 6379 PublishedPort: 30001 Ports: - Protocol: "tcp" TargetPort: 6379 PublishedPort: 30001 VirtualIPs: - NetworkID: "4qvuz4ko70xaltuqbt8956gd1" Addr: "10.255.0.2/16" - NetworkID: "4qvuz4ko70xaltuqbt8956gd1" Addr: "10.255.0.3/16" ImageDeleteResponseItem: type: "object" properties: Untagged: description: "The image ID of an image that was untagged" type: "string" Deleted: description: "The image ID of an image that was deleted" type: "string" ServiceUpdateResponse: type: "object" properties: Warnings: description: "Optional warning messages" type: "array" items: type: "string" example: Warning: "unable to pin image doesnotexist:latest to digest: image library/doesnotexist:latest not found" ContainerSummary: type: "array" items: type: "object" properties: Id: description: "The ID of this container" type: "string" x-go-name: "ID" Names: description: "The names that this container has been given" type: "array" items: type: "string" Image: description: "The name of the image used when creating 
this container" type: "string" ImageID: description: "The ID of the image that this container was created from" type: "string" Command: description: "Command to run when starting the container" type: "string" Created: description: "When the container was created" type: "integer" format: "int64" Ports: description: "The ports exposed by this container" type: "array" items: $ref: "#/definitions/Port" SizeRw: description: "The size of files that have been created or changed by this container" type: "integer" format: "int64" SizeRootFs: description: "The total size of all the files in this container" type: "integer" format: "int64" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" State: description: "The state of this container (e.g. `Exited`)" type: "string" Status: description: "Additional human-readable status of this container (e.g. `Exit 0`)" type: "string" HostConfig: type: "object" properties: NetworkMode: type: "string" NetworkSettings: description: "A summary of the container's network settings" type: "object" properties: Networks: type: "object" additionalProperties: $ref: "#/definitions/EndpointSettings" Mounts: type: "array" items: $ref: "#/definitions/Mount" Driver: description: "Driver represents a driver (network, logging, secrets)." type: "object" required: [Name] properties: Name: description: "Name of the driver." type: "string" x-nullable: false example: "some-driver" Options: description: "Key/value map of driver-specific options." type: "object" x-nullable: false additionalProperties: type: "string" example: OptionA: "value for driver-specific option A" OptionB: "value for driver-specific option B" SecretSpec: type: "object" properties: Name: description: "User-defined name of the secret." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" example: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Data: description: | Base64-url-safe-encoded ([RFC 4648](https://tools.ietf.org/html/rfc4648#section-5)) data to store as secret. This field is only used to _create_ a secret, and is not returned by other endpoints. type: "string" example: "" Driver: description: | Name of the secrets driver used to fetch the secret's value from an external secret store. $ref: "#/definitions/Driver" Templating: description: | Templating driver, if applicable Templating controls whether and how to evaluate the config payload as a template. If no driver is set, no templating is used. $ref: "#/definitions/Driver" Secret: type: "object" properties: ID: type: "string" example: "blt1owaxmitz71s9v5zh81zun" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" example: "2017-07-20T13:55:28.678958722Z" UpdatedAt: type: "string" format: "dateTime" example: "2017-07-20T13:55:28.678958722Z" Spec: $ref: "#/definitions/SecretSpec" ConfigSpec: type: "object" properties: Name: description: "User-defined name of the config." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" Data: description: | Base64-url-safe-encoded ([RFC 4648](https://tools.ietf.org/html/rfc4648#section-5)) config data. type: "string" Templating: description: | Templating driver, if applicable Templating controls whether and how to evaluate the config payload as a template. If no driver is set, no templating is used. 
$ref: "#/definitions/Driver" Config: type: "object" properties: ID: type: "string" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" UpdatedAt: type: "string" format: "dateTime" Spec: $ref: "#/definitions/ConfigSpec" ContainerState: description: | ContainerState stores container's running state. It's part of ContainerJSONBase and will be returned by the "inspect" command. type: "object" properties: Status: description: | String representation of the container state. Can be one of "created", "running", "paused", "restarting", "removing", "exited", or "dead". type: "string" enum: ["created", "running", "paused", "restarting", "removing", "exited", "dead"] example: "running" Running: description: | Whether this container is running. Note that a running container can be _paused_. The `Running` and `Paused` booleans are not mutually exclusive: When pausing a container (on Linux), the freezer cgroup is used to suspend all processes in the container. Freezing the process requires the process to be running. As a result, paused containers are both `Running` _and_ `Paused`. Use the `Status` field instead to determine if a container's state is "running". type: "boolean" example: true Paused: description: "Whether this container is paused." type: "boolean" example: false Restarting: description: "Whether this container is restarting." type: "boolean" example: false OOMKilled: description: | Whether this container has been killed because it ran out of memory. type: "boolean" example: false Dead: type: "boolean" example: false Pid: description: "The process ID of this container" type: "integer" example: 1234 ExitCode: description: "The last exit code of this container" type: "integer" example: 0 Error: type: "string" StartedAt: description: "The time when this container was last started." type: "string" example: "2020-01-06T09:06:59.461876391Z" FinishedAt: description: "The time when this container last exited." type: "string" example: "2020-01-06T09:07:59.461876391Z" Health: x-nullable: true $ref: "#/definitions/Health" SystemVersion: type: "object" description: | Response of Engine API: GET "/version" properties: Platform: type: "object" required: [Name] properties: Name: type: "string" Components: type: "array" description: | Information about system components items: type: "object" x-go-name: ComponentVersion required: [Name, Version] properties: Name: description: | Name of the component type: "string" example: "Engine" Version: description: | Version of the component type: "string" x-nullable: false example: "19.03.12" Details: description: | Key/value pairs of strings with additional information about the component. These values are intended for informational purposes only, and their content is not defined, and not part of the API specification. These messages can be printed by the client as information to the user. type: "object" x-nullable: true Version: description: "The version of the daemon" type: "string" example: "19.03.12" ApiVersion: description: | The default (and highest) API version that is supported by the daemon type: "string" example: "1.40" MinAPIVersion: description: | The minimum API version that is supported by the daemon type: "string" example: "1.12" GitCommit: description: | The Git commit of the source code that was used to build the daemon type: "string" example: "48a66213fe" GoVersion: description: | The version Go used to compile the daemon, and the version of the Go runtime in use. 
type: "string" example: "go1.13.14" Os: description: | The operating system that the daemon is running on ("linux" or "windows") type: "string" example: "linux" Arch: description: | The architecture that the daemon is running on type: "string" example: "amd64" KernelVersion: description: | The kernel version (`uname -r`) that the daemon is running on. This field is omitted when empty. type: "string" example: "4.19.76-linuxkit" Experimental: description: | Indicates if the daemon is started with experimental features enabled. This field is omitted when empty / false. type: "boolean" example: true BuildTime: description: | The date and time that the daemon was compiled. type: "string" example: "2020-06-22T15:49:27.000000000+00:00" SystemInfo: type: "object" properties: ID: description: | Unique identifier of the daemon. <p><br /></p> > **Note**: The format of the ID itself is not part of the API, and > should not be considered stable. type: "string" example: "7TRN:IPZB:QYBB:VPBQ:UMPP:KARE:6ZNR:XE6T:7EWV:PKF4:ZOJD:TPYS" Containers: description: "Total number of containers on the host." type: "integer" example: 14 ContainersRunning: description: | Number of containers with status `"running"`. type: "integer" example: 3 ContainersPaused: description: | Number of containers with status `"paused"`. type: "integer" example: 1 ContainersStopped: description: | Number of containers with status `"stopped"`. type: "integer" example: 10 Images: description: | Total number of images on the host. Both _tagged_ and _untagged_ (dangling) images are counted. type: "integer" example: 508 Driver: description: "Name of the storage driver in use." type: "string" example: "overlay2" DriverStatus: description: | Information specific to the storage driver, provided as "label" / "value" pairs. This information is provided by the storage driver, and formatted in a way consistent with the output of `docker info` on the command line. <p><br /></p> > **Note**: The information returned in this field, including the > formatting of values and labels, should not be considered stable, > and may change without notice. type: "array" items: type: "array" items: type: "string" example: - ["Backing Filesystem", "extfs"] - ["Supports d_type", "true"] - ["Native Overlay Diff", "true"] DockerRootDir: description: | Root directory of persistent Docker state. Defaults to `/var/lib/docker` on Linux, and `C:\ProgramData\docker` on Windows. type: "string" example: "/var/lib/docker" Plugins: $ref: "#/definitions/PluginsInfo" MemoryLimit: description: "Indicates if the host has memory limit support enabled." type: "boolean" example: true SwapLimit: description: "Indicates if the host has memory swap limit support enabled." type: "boolean" example: true KernelMemory: description: | Indicates if the host has kernel memory limit support enabled. <p><br /></p> > **Deprecated**: This field is deprecated as the kernel 5.4 deprecated > `kmem.limit_in_bytes`. type: "boolean" example: true CpuCfsPeriod: description: | Indicates if CPU CFS(Completely Fair Scheduler) period is supported by the host. type: "boolean" example: true CpuCfsQuota: description: | Indicates if CPU CFS(Completely Fair Scheduler) quota is supported by the host. type: "boolean" example: true CPUShares: description: | Indicates if CPU Shares limiting is supported by the host. type: "boolean" example: true CPUSet: description: | Indicates if CPUsets (cpuset.cpus, cpuset.mems) are supported by the host. 
See [cpuset(7)](https://www.kernel.org/doc/Documentation/cgroup-v1/cpusets.txt) type: "boolean" example: true PidsLimit: description: "Indicates if the host kernel has PID limit support enabled." type: "boolean" example: true OomKillDisable: description: "Indicates if OOM killer disable is supported on the host." type: "boolean" IPv4Forwarding: description: "Indicates IPv4 forwarding is enabled." type: "boolean" example: true BridgeNfIptables: description: "Indicates if `bridge-nf-call-iptables` is available on the host." type: "boolean" example: true BridgeNfIp6tables: description: "Indicates if `bridge-nf-call-ip6tables` is available on the host." type: "boolean" example: true Debug: description: | Indicates if the daemon is running in debug-mode / with debug-level logging enabled. type: "boolean" example: true NFd: description: | The total number of file Descriptors in use by the daemon process. This information is only returned if debug-mode is enabled. type: "integer" example: 64 NGoroutines: description: | The number of goroutines that currently exist. This information is only returned if debug-mode is enabled. type: "integer" example: 174 SystemTime: description: | Current system-time in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" example: "2017-08-08T20:28:29.06202363Z" LoggingDriver: description: | The logging driver to use as a default for new containers. type: "string" CgroupDriver: description: | The driver to use for managing cgroups. type: "string" enum: ["cgroupfs", "systemd", "none"] default: "cgroupfs" example: "cgroupfs" CgroupVersion: description: | The version of the cgroup. type: "string" enum: ["1", "2"] default: "1" example: "1" NEventsListener: description: "Number of event listeners subscribed." type: "integer" example: 30 KernelVersion: description: | Kernel version of the host. On Linux, this information obtained from `uname`. On Windows this information is queried from the <kbd>HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\</kbd> registry value, for example _"10.0 14393 (14393.1198.amd64fre.rs1_release_sec.170427-1353)"_. type: "string" example: "4.9.38-moby" OperatingSystem: description: | Name of the host's operating system, for example: "Ubuntu 16.04.2 LTS" or "Windows Server 2016 Datacenter" type: "string" example: "Alpine Linux v3.5" OSVersion: description: | Version of the host's operating system <p><br /></p> > **Note**: The information returned in this field, including its > very existence, and the formatting of values, should not be considered > stable, and may change without notice. type: "string" example: "16.04" OSType: description: | Generic type of the operating system of the host, as returned by the Go runtime (`GOOS`). Currently returned values are "linux" and "windows". A full list of possible values can be found in the [Go documentation](https://golang.org/doc/install/source#environment). type: "string" example: "linux" Architecture: description: | Hardware architecture of the host, as returned by the Go runtime (`GOARCH`). A full list of possible values can be found in the [Go documentation](https://golang.org/doc/install/source#environment). type: "string" example: "x86_64" NCPU: description: | The number of logical CPUs usable by the daemon. The number of available CPUs is checked by querying the operating system when the daemon starts. Changes to operating system CPU allocation after the daemon is started are not reflected. 
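  # Illustrative sketch (informational comment only, not part of the schema):
  # the SystemInfo fields in this definition are returned by `GET /info`; the
  # Go client SDK surfaces them as types.Info. Imports assumed: "context",
  # "fmt", "github.com/docker/docker/client".
  #
  #   func printHostResources() error {
  #           cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
  #           if err != nil {
  #                   return err
  #           }
  #           info, err := cli.Info(context.Background())
  #           if err != nil {
  #                   return err
  #           }
  #           // Driver, NCPU and MemTotal correspond to the fields documented above.
  #           fmt.Printf("driver=%s ncpu=%d memory=%d bytes\n", info.Driver, info.NCPU, info.MemTotal)
  #           return nil
  #   }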
type: "integer" example: 4 MemTotal: description: | Total amount of physical memory available on the host, in bytes. type: "integer" format: "int64" example: 2095882240 IndexServerAddress: description: | Address / URL of the index server that is used for image search, and as a default for user authentication for Docker Hub and Docker Cloud. default: "https://index.docker.io/v1/" type: "string" example: "https://index.docker.io/v1/" RegistryConfig: $ref: "#/definitions/RegistryServiceConfig" GenericResources: $ref: "#/definitions/GenericResources" HttpProxy: description: | HTTP-proxy configured for the daemon. This value is obtained from the [`HTTP_PROXY`](https://www.gnu.org/software/wget/manual/html_node/Proxies.html) environment variable. Credentials ([user info component](https://tools.ietf.org/html/rfc3986#section-3.2.1)) in the proxy URL are masked in the API response. Containers do not automatically inherit this configuration. type: "string" example: "http://xxxxx:[email protected]:8080" HttpsProxy: description: | HTTPS-proxy configured for the daemon. This value is obtained from the [`HTTPS_PROXY`](https://www.gnu.org/software/wget/manual/html_node/Proxies.html) environment variable. Credentials ([user info component](https://tools.ietf.org/html/rfc3986#section-3.2.1)) in the proxy URL are masked in the API response. Containers do not automatically inherit this configuration. type: "string" example: "https://xxxxx:[email protected]:4443" NoProxy: description: | Comma-separated list of domain extensions for which no proxy should be used. This value is obtained from the [`NO_PROXY`](https://www.gnu.org/software/wget/manual/html_node/Proxies.html) environment variable. Containers do not automatically inherit this configuration. type: "string" example: "*.local, 169.254/16" Name: description: "Hostname of the host." type: "string" example: "node5.corp.example.com" Labels: description: | User-defined labels (key/value metadata) as set on the daemon. <p><br /></p> > **Note**: When part of a Swarm, nodes can both have _daemon_ labels, > set through the daemon configuration, and _node_ labels, set from a > manager node in the Swarm. Node labels are not included in this > field. Node labels can be retrieved using the `/nodes/(id)` endpoint > on a manager node in the Swarm. type: "array" items: type: "string" example: ["storage=ssd", "production"] ExperimentalBuild: description: | Indicates if experimental features are enabled on the daemon. type: "boolean" example: true ServerVersion: description: | Version string of the daemon. > **Note**: the [standalone Swarm API](https://docs.docker.com/swarm/swarm-api/) > returns the Swarm version instead of the daemon version, for example > `swarm/1.2.8`. type: "string" example: "17.06.0-ce" ClusterStore: description: | URL of the distributed storage backend. The storage backend is used for multihost networking (to store network and endpoint information) and by the node discovery mechanism. <p><br /></p> > **Deprecated**: This field is only propagated when using standalone Swarm > mode, and overlay networking using an external k/v store. Overlay > networks with Swarm mode enabled use the built-in raft store, and > this field will be empty. type: "string" example: "consul://consul.corp.example.com:8600/some/path" ClusterAdvertise: description: | The network endpoint that the Engine advertises for the purpose of node discovery. ClusterAdvertise is a `host:port` combination on which the daemon is reachable by other hosts. 
<p><br /></p> > **Deprecated**: This field is only propagated when using standalone Swarm > mode, and overlay networking using an external k/v store. Overlay > networks with Swarm mode enabled use the built-in raft store, and > this field will be empty. type: "string" example: "node5.corp.example.com:8000" Runtimes: description: | List of [OCI compliant](https://github.com/opencontainers/runtime-spec) runtimes configured on the daemon. Keys hold the "name" used to reference the runtime. The Docker daemon relies on an OCI compliant runtime (invoked via the `containerd` daemon) as its interface to the Linux kernel namespaces, cgroups, and SELinux. The default runtime is `runc`, and automatically configured. Additional runtimes can be configured by the user and will be listed here. type: "object" additionalProperties: $ref: "#/definitions/Runtime" default: runc: path: "runc" example: runc: path: "runc" runc-master: path: "/go/bin/runc" custom: path: "/usr/local/bin/my-oci-runtime" runtimeArgs: ["--debug", "--systemd-cgroup=false"] DefaultRuntime: description: | Name of the default OCI runtime that is used when starting containers. The default can be overridden per-container at create time. type: "string" default: "runc" example: "runc" Swarm: $ref: "#/definitions/SwarmInfo" LiveRestoreEnabled: description: | Indicates if live restore is enabled. If enabled, containers are kept running when the daemon is shutdown or upon daemon start if running containers are detected. type: "boolean" default: false example: false Isolation: description: | Represents the isolation technology to use as a default for containers. The supported values are platform-specific. If no isolation value is specified on daemon start, on Windows client, the default is `hyperv`, and on Windows server, the default is `process`. This option is currently not used on other platforms. default: "default" type: "string" enum: - "default" - "hyperv" - "process" InitBinary: description: | Name and, optional, path of the `docker-init` binary. If the path is omitted, the daemon searches the host's `$PATH` for the binary and uses the first result. type: "string" example: "docker-init" ContainerdCommit: $ref: "#/definitions/Commit" RuncCommit: $ref: "#/definitions/Commit" InitCommit: $ref: "#/definitions/Commit" SecurityOptions: description: | List of security features that are enabled on the daemon, such as apparmor, seccomp, SELinux, user-namespaces (userns), and rootless. Additional configuration options for each security feature may be present, and are included as a comma-separated list of key/value pairs. type: "array" items: type: "string" example: - "name=apparmor" - "name=seccomp,profile=default" - "name=selinux" - "name=userns" - "name=rootless" ProductLicense: description: | Reports a summary of the product license on the daemon. If a commercial license has been applied to the daemon, information such as number of nodes, and expiration are included. type: "string" example: "Community Engine" DefaultAddressPools: description: | List of custom default address pools for local networks, which can be specified in the daemon.json file or dockerd option. Example: a Base "10.10.0.0/16" with Size 24 will define the set of 256 10.10.[0-255].0/24 address pools. 
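  # Illustrative daemon.json sketch (informational comment only) matching the
  # DefaultAddressPools example described above. The key name below reflects
  # the daemon configuration file, not this API definition; a corresponding
  # --default-address-pool flag is accepted by dockerd.
  #
  #   {
  #     "default-address-pools": [
  #       { "base": "10.10.0.0/16", "size": 24 }
  #     ]
  #   }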
type: "array" items: type: "object" properties: Base: description: "The network address in CIDR format" type: "string" example: "10.10.0.0/16" Size: description: "The network pool size" type: "integer" example: "24" Warnings: description: | List of warnings / informational messages about missing features, or issues related to the daemon configuration. These messages can be printed by the client as information to the user. type: "array" items: type: "string" example: - "WARNING: No memory limit support" - "WARNING: bridge-nf-call-iptables is disabled" - "WARNING: bridge-nf-call-ip6tables is disabled" # PluginsInfo is a temp struct holding Plugins name # registered with docker daemon. It is used by Info struct PluginsInfo: description: | Available plugins per type. <p><br /></p> > **Note**: Only unmanaged (V1) plugins are included in this list. > V1 plugins are "lazily" loaded, and are not returned in this list > if there is no resource using the plugin. type: "object" properties: Volume: description: "Names of available volume-drivers, and network-driver plugins." type: "array" items: type: "string" example: ["local"] Network: description: "Names of available network-drivers, and network-driver plugins." type: "array" items: type: "string" example: ["bridge", "host", "ipvlan", "macvlan", "null", "overlay"] Authorization: description: "Names of available authorization plugins." type: "array" items: type: "string" example: ["img-authz-plugin", "hbm"] Log: description: "Names of available logging-drivers, and logging-driver plugins." type: "array" items: type: "string" example: ["awslogs", "fluentd", "gcplogs", "gelf", "journald", "json-file", "logentries", "splunk", "syslog"] RegistryServiceConfig: description: | RegistryServiceConfig stores daemon registry services configuration. type: "object" x-nullable: true properties: AllowNondistributableArtifactsCIDRs: description: | List of IP ranges to which nondistributable artifacts can be pushed, using the CIDR syntax [RFC 4632](https://tools.ietf.org/html/4632). Some images (for example, Windows base images) contain artifacts whose distribution is restricted by license. When these images are pushed to a registry, restricted artifacts are not included. This configuration override this behavior, and enables the daemon to push nondistributable artifacts to all registries whose resolved IP address is within the subnet described by the CIDR syntax. This option is useful when pushing images containing nondistributable artifacts to a registry on an air-gapped network so hosts on that network can pull the images without connecting to another server. > **Warning**: Nondistributable artifacts typically have restrictions > on how and where they can be distributed and shared. Only use this > feature to push artifacts to private registries and ensure that you > are in compliance with any terms that cover redistributing > nondistributable artifacts. type: "array" items: type: "string" example: ["::1/128", "127.0.0.0/8"] AllowNondistributableArtifactsHostnames: description: | List of registry hostnames to which nondistributable artifacts can be pushed, using the format `<hostname>[:<port>]` or `<IP address>[:<port>]`. Some images (for example, Windows base images) contain artifacts whose distribution is restricted by license. When these images are pushed to a registry, restricted artifacts are not included. This configuration override this behavior for the specified registries. 
This option is useful when pushing images containing nondistributable artifacts to a registry on an air-gapped network so hosts on that network can pull the images without connecting to another server. > **Warning**: Nondistributable artifacts typically have restrictions > on how and where they can be distributed and shared. Only use this > feature to push artifacts to private registries and ensure that you > are in compliance with any terms that cover redistributing > nondistributable artifacts. type: "array" items: type: "string" example: ["registry.internal.corp.example.com:3000", "[2001:db8:a0b:12f0::1]:443"] InsecureRegistryCIDRs: description: | List of IP ranges of insecure registries, using the CIDR syntax ([RFC 4632](https://tools.ietf.org/html/4632)). Insecure registries accept un-encrypted (HTTP) and/or untrusted (HTTPS with certificates from unknown CAs) communication. By default, local registries (`127.0.0.0/8`) are configured as insecure. All other registries are secure. Communicating with an insecure registry is not possible if the daemon assumes that registry is secure. This configuration override this behavior, insecure communication with registries whose resolved IP address is within the subnet described by the CIDR syntax. Registries can also be marked insecure by hostname. Those registries are listed under `IndexConfigs` and have their `Secure` field set to `false`. > **Warning**: Using this option can be useful when running a local > registry, but introduces security vulnerabilities. This option > should therefore ONLY be used for testing purposes. For increased > security, users should add their CA to their system's list of trusted > CAs instead of enabling this option. type: "array" items: type: "string" example: ["::1/128", "127.0.0.0/8"] IndexConfigs: type: "object" additionalProperties: $ref: "#/definitions/IndexInfo" example: "127.0.0.1:5000": "Name": "127.0.0.1:5000" "Mirrors": [] "Secure": false "Official": false "[2001:db8:a0b:12f0::1]:80": "Name": "[2001:db8:a0b:12f0::1]:80" "Mirrors": [] "Secure": false "Official": false "docker.io": Name: "docker.io" Mirrors: ["https://hub-mirror.corp.example.com:5000/"] Secure: true Official: true "registry.internal.corp.example.com:3000": Name: "registry.internal.corp.example.com:3000" Mirrors: [] Secure: false Official: false Mirrors: description: | List of registry URLs that act as a mirror for the official (`docker.io`) registry. type: "array" items: type: "string" example: - "https://hub-mirror.corp.example.com:5000/" - "https://[2001:db8:a0b:12f0::1]/" IndexInfo: description: IndexInfo contains information about a registry. type: "object" x-nullable: true properties: Name: description: | Name of the registry, such as "docker.io". type: "string" example: "docker.io" Mirrors: description: | List of mirrors, expressed as URIs. type: "array" items: type: "string" example: - "https://hub-mirror.corp.example.com:5000/" - "https://registry-2.docker.io/" - "https://registry-3.docker.io/" Secure: description: | Indicates if the registry is part of the list of insecure registries. If `false`, the registry is insecure. Insecure registries accept un-encrypted (HTTP) and/or untrusted (HTTPS with certificates from unknown CAs) communication. > **Warning**: Insecure registries can be useful when running a local > registry. However, because its use creates security vulnerabilities > it should ONLY be enabled for testing purposes. 
For increased > security, users should add their CA to their system's list of > trusted CAs instead of enabling this option. type: "boolean" example: true Official: description: | Indicates whether this is an official registry (i.e., Docker Hub / docker.io) type: "boolean" example: true Runtime: description: | Runtime describes an [OCI compliant](https://github.com/opencontainers/runtime-spec) runtime. The runtime is invoked by the daemon via the `containerd` daemon. OCI runtimes act as an interface to the Linux kernel namespaces, cgroups, and SELinux. type: "object" properties: path: description: | Name and, optional, path, of the OCI executable binary. If the path is omitted, the daemon searches the host's `$PATH` for the binary and uses the first result. type: "string" example: "/usr/local/bin/my-oci-runtime" runtimeArgs: description: | List of command-line arguments to pass to the runtime when invoked. type: "array" x-nullable: true items: type: "string" example: ["--debug", "--systemd-cgroup=false"] Commit: description: | Commit holds the Git-commit (SHA1) that a binary was built from, as reported in the version-string of external tools, such as `containerd`, or `runC`. type: "object" properties: ID: description: "Actual commit ID of external tool." type: "string" example: "cfb82a876ecc11b5ca0977d1733adbe58599088a" Expected: description: | Commit ID of external tool expected by dockerd as set at build time. type: "string" example: "2d41c047c83e09a6d61d464906feb2a2f3c52aa4" SwarmInfo: description: | Represents generic information about swarm. type: "object" properties: NodeID: description: "Unique identifier of for this node in the swarm." type: "string" default: "" example: "k67qz4598weg5unwwffg6z1m1" NodeAddr: description: | IP address at which this node can be reached by other nodes in the swarm. type: "string" default: "" example: "10.0.0.46" LocalNodeState: $ref: "#/definitions/LocalNodeState" ControlAvailable: type: "boolean" default: false example: true Error: type: "string" default: "" RemoteManagers: description: | List of ID's and addresses of other managers in the swarm. type: "array" default: null x-nullable: true items: $ref: "#/definitions/PeerNode" example: - NodeID: "71izy0goik036k48jg985xnds" Addr: "10.0.0.158:2377" - NodeID: "79y6h1o4gv8n120drcprv5nmc" Addr: "10.0.0.159:2377" - NodeID: "k67qz4598weg5unwwffg6z1m1" Addr: "10.0.0.46:2377" Nodes: description: "Total number of nodes in the swarm." type: "integer" x-nullable: true example: 4 Managers: description: "Total number of managers in the swarm." type: "integer" x-nullable: true example: 3 Cluster: $ref: "#/definitions/ClusterInfo" LocalNodeState: description: "Current local status of this node." type: "string" default: "" enum: - "" - "inactive" - "pending" - "active" - "error" - "locked" example: "active" PeerNode: description: "Represents a peer-node in the swarm" properties: NodeID: description: "Unique identifier of for this node in the swarm." type: "string" Addr: description: | IP address and ports at which this node can be reached. type: "string" NetworkAttachmentConfig: description: | Specifies how a service should be attached to a particular network. type: "object" properties: Target: description: | The target network for attachment. Must be a network name or ID. type: "string" Aliases: description: | Discoverable alternate names for the service on this network. type: "array" items: type: "string" DriverOpts: description: | Driver attachment options for the network target. 
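  # Illustrative sketch (informational comment only): populating a
  # NetworkAttachmentConfig from Go using the swarm types in this repository
  # (github.com/docker/docker/api/types/swarm). The values shown are made up
  # for the example.
  #
  #   attachment := swarm.NetworkAttachmentConfig{
  #           Target:     "my-overlay-network",       // network name or ID
  #           Aliases:    []string{"db", "database"}, // extra discoverable names
  #           DriverOpts: map[string]string{},        // driver-specific options
  #   }
  #   // A service typically carries this in its TaskSpec.Networks when it is
  #   // created or updated.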
type: "object" additionalProperties: type: "string" paths: /containers/json: get: summary: "List containers" description: | Returns a list of containers. For details on the format, see the [inspect endpoint](#operation/ContainerInspect). Note that it uses a different, smaller representation of a container than inspecting a single container. For example, the list of linked containers is not propagated . operationId: "ContainerList" produces: - "application/json" parameters: - name: "all" in: "query" description: | Return all containers. By default, only running containers are shown. type: "boolean" default: false - name: "limit" in: "query" description: | Return this number of most recently created containers, including non-running ones. type: "integer" - name: "size" in: "query" description: | Return the size of container as fields `SizeRw` and `SizeRootFs`. type: "boolean" default: false - name: "filters" in: "query" description: | Filters to process on the container list, encoded as JSON (a `map[string][]string`). For example, `{"status": ["paused"]}` will only return paused containers. Available filters: - `ancestor`=(`<image-name>[:<tag>]`, `<image id>`, or `<image@digest>`) - `before`=(`<container id>` or `<container name>`) - `expose`=(`<port>[/<proto>]`|`<startport-endport>/[<proto>]`) - `exited=<int>` containers with exit code of `<int>` - `health`=(`starting`|`healthy`|`unhealthy`|`none`) - `id=<ID>` a container's ID - `isolation=`(`default`|`process`|`hyperv`) (Windows daemon only) - `is-task=`(`true`|`false`) - `label=key` or `label="key=value"` of a container label - `name=<name>` a container's name - `network`=(`<network id>` or `<network name>`) - `publish`=(`<port>[/<proto>]`|`<startport-endport>/[<proto>]`) - `since`=(`<container id>` or `<container name>`) - `status=`(`created`|`restarting`|`running`|`removing`|`paused`|`exited`|`dead`) - `volume`=(`<volume name>` or `<mount point destination>`) type: "string" responses: 200: description: "no error" schema: $ref: "#/definitions/ContainerSummary" examples: application/json: - Id: "8dfafdbc3a40" Names: - "/boring_feynman" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 1" Created: 1367854155 State: "Exited" Status: "Exit 0" Ports: - PrivatePort: 2222 PublicPort: 3333 Type: "tcp" Labels: com.example.vendor: "Acme" com.example.license: "GPL" com.example.version: "1.0" SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "2cdc4edb1ded3631c81f57966563e5c8525b81121bb3706a9a9a3ae102711f3f" Gateway: "172.17.0.1" IPAddress: "172.17.0.2" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:02" Mounts: - Name: "fac362...80535" Source: "/data" Destination: "/data" Driver: "local" Mode: "ro,Z" RW: false Propagation: "" - Id: "9cd87474be90" Names: - "/coolName" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 222222" Created: 1367854155 State: "Exited" Status: "Exit 0" Ports: [] Labels: {} SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "88eaed7b37b38c2a3f0c4bc796494fdf51b270c2d22656412a2ca5d559a64d7a" Gateway: "172.17.0.1" IPAddress: "172.17.0.8" IPPrefixLen: 16 IPv6Gateway: "" 
GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:08" Mounts: [] - Id: "3176a2479c92" Names: - "/sleepy_dog" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 3333333333333333" Created: 1367854154 State: "Exited" Status: "Exit 0" Ports: [] Labels: {} SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "8b27c041c30326d59cd6e6f510d4f8d1d570a228466f956edf7815508f78e30d" Gateway: "172.17.0.1" IPAddress: "172.17.0.6" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:06" Mounts: [] - Id: "4cb07b47f9fb" Names: - "/running_cat" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 444444444444444444444444444444444" Created: 1367854152 State: "Exited" Status: "Exit 0" Ports: [] Labels: {} SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "d91c7b2f0644403d7ef3095985ea0e2370325cd2332ff3a3225c4247328e66e9" Gateway: "172.17.0.1" IPAddress: "172.17.0.5" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:05" Mounts: [] 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Container"] /containers/create: post: summary: "Create a container" operationId: "ContainerCreate" consumes: - "application/json" - "application/octet-stream" produces: - "application/json" parameters: - name: "name" in: "query" description: | Assign the specified name to the container. Must match `/?[a-zA-Z0-9][a-zA-Z0-9_.-]+`. 
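  # Illustrative sketch (informational comment only): creating a container
  # through the Go client SDK. The trailing platform argument exists only in
  # SDK versions that target API v1.41 or later; older versions omit it.
  # Imports assumed: "context", "github.com/docker/docker/api/types/container",
  # "github.com/docker/docker/client".
  #
  #   func createDateContainer(cli *client.Client) (string, error) {
  #           resp, err := cli.ContainerCreate(context.Background(),
  #                   &container.Config{Image: "ubuntu", Cmd: []string{"date"}},
  #                   nil,            // *container.HostConfig
  #                   nil,            // *network.NetworkingConfig
  #                   nil,            // *specs.Platform (API v1.41+ SDKs only)
  #                   "my-container") // must match the name pattern documented above
  #           if err != nil {
  #                   return "", err
  #           }
  #           return resp.ID, nil // resp.Warnings carries any creation warnings
  #   }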
type: "string" pattern: "^/?[a-zA-Z0-9][a-zA-Z0-9_.-]+$" - name: "body" in: "body" description: "Container to create" schema: allOf: - $ref: "#/definitions/ContainerConfig" - type: "object" properties: HostConfig: $ref: "#/definitions/HostConfig" NetworkingConfig: $ref: "#/definitions/NetworkingConfig" example: Hostname: "" Domainname: "" User: "" AttachStdin: false AttachStdout: true AttachStderr: true Tty: false OpenStdin: false StdinOnce: false Env: - "FOO=bar" - "BAZ=quux" Cmd: - "date" Entrypoint: "" Image: "ubuntu" Labels: com.example.vendor: "Acme" com.example.license: "GPL" com.example.version: "1.0" Volumes: /volumes/data: {} WorkingDir: "" NetworkDisabled: false MacAddress: "12:34:56:78:9a:bc" ExposedPorts: 22/tcp: {} StopSignal: "SIGTERM" StopTimeout: 10 HostConfig: Binds: - "/tmp:/tmp" Links: - "redis3:redis" Memory: 0 MemorySwap: 0 MemoryReservation: 0 KernelMemory: 0 NanoCpus: 500000 CpuPercent: 80 CpuShares: 512 CpuPeriod: 100000 CpuRealtimePeriod: 1000000 CpuRealtimeRuntime: 10000 CpuQuota: 50000 CpusetCpus: "0,1" CpusetMems: "0,1" MaximumIOps: 0 MaximumIOBps: 0 BlkioWeight: 300 BlkioWeightDevice: - {} BlkioDeviceReadBps: - {} BlkioDeviceReadIOps: - {} BlkioDeviceWriteBps: - {} BlkioDeviceWriteIOps: - {} DeviceRequests: - Driver: "nvidia" Count: -1 DeviceIDs": ["0", "1", "GPU-fef8089b-4820-abfc-e83e-94318197576e"] Capabilities: [["gpu", "nvidia", "compute"]] Options: property1: "string" property2: "string" MemorySwappiness: 60 OomKillDisable: false OomScoreAdj: 500 PidMode: "" PidsLimit: 0 PortBindings: 22/tcp: - HostPort: "11022" PublishAllPorts: false Privileged: false ReadonlyRootfs: false Dns: - "8.8.8.8" DnsOptions: - "" DnsSearch: - "" VolumesFrom: - "parent" - "other:ro" CapAdd: - "NET_ADMIN" CapDrop: - "MKNOD" GroupAdd: - "newgroup" RestartPolicy: Name: "" MaximumRetryCount: 0 AutoRemove: true NetworkMode: "bridge" Devices: [] Ulimits: - {} LogConfig: Type: "json-file" Config: {} SecurityOpt: [] StorageOpt: {} CgroupParent: "" VolumeDriver: "" ShmSize: 67108864 NetworkingConfig: EndpointsConfig: isolated_nw: IPAMConfig: IPv4Address: "172.20.30.33" IPv6Address: "2001:db8:abcd::3033" LinkLocalIPs: - "169.254.34.68" - "fe80::3468" Links: - "container_1" - "container_2" Aliases: - "server_x" - "server_y" required: true responses: 201: description: "Container created successfully" schema: type: "object" title: "ContainerCreateResponse" description: "OK response to ContainerCreate operation" required: [Id, Warnings] properties: Id: description: "The ID of the created container" type: "string" x-nullable: false Warnings: description: "Warnings encountered when creating the container" type: "array" x-nullable: false items: type: "string" examples: application/json: Id: "e90e34656806" Warnings: [] 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such image" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such image: c2ada9df5af8" 409: description: "conflict" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Container"] /containers/{id}/json: get: summary: "Inspect a container" description: "Return low-level information about a container." 
operationId: "ContainerInspect" produces: - "application/json" responses: 200: description: "no error" schema: type: "object" title: "ContainerInspectResponse" properties: Id: description: "The ID of the container" type: "string" Created: description: "The time the container was created" type: "string" Path: description: "The path to the command being run" type: "string" Args: description: "The arguments to the command being run" type: "array" items: type: "string" State: x-nullable: true $ref: "#/definitions/ContainerState" Image: description: "The container's image ID" type: "string" ResolvConfPath: type: "string" HostnamePath: type: "string" HostsPath: type: "string" LogPath: type: "string" Name: type: "string" RestartCount: type: "integer" Driver: type: "string" Platform: type: "string" MountLabel: type: "string" ProcessLabel: type: "string" AppArmorProfile: type: "string" ExecIDs: description: "IDs of exec instances that are running in the container." type: "array" items: type: "string" x-nullable: true HostConfig: $ref: "#/definitions/HostConfig" GraphDriver: $ref: "#/definitions/GraphDriverData" SizeRw: description: | The size of files that have been created or changed by this container. type: "integer" format: "int64" SizeRootFs: description: "The total size of all the files in this container." type: "integer" format: "int64" Mounts: type: "array" items: $ref: "#/definitions/MountPoint" Config: $ref: "#/definitions/ContainerConfig" NetworkSettings: $ref: "#/definitions/NetworkSettings" examples: application/json: AppArmorProfile: "" Args: - "-c" - "exit 9" Config: AttachStderr: true AttachStdin: false AttachStdout: true Cmd: - "/bin/sh" - "-c" - "exit 9" Domainname: "" Env: - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" Healthcheck: Test: ["CMD-SHELL", "exit 0"] Hostname: "ba033ac44011" Image: "ubuntu" Labels: com.example.vendor: "Acme" com.example.license: "GPL" com.example.version: "1.0" MacAddress: "" NetworkDisabled: false OpenStdin: false StdinOnce: false Tty: false User: "" Volumes: /volumes/data: {} WorkingDir: "" StopSignal: "SIGTERM" StopTimeout: 10 Created: "2015-01-06T15:47:31.485331387Z" Driver: "devicemapper" ExecIDs: - "b35395de42bc8abd327f9dd65d913b9ba28c74d2f0734eeeae84fa1c616a0fca" - "3fc1232e5cd20c8de182ed81178503dc6437f4e7ef12b52cc5e8de020652f1c4" HostConfig: MaximumIOps: 0 MaximumIOBps: 0 BlkioWeight: 0 BlkioWeightDevice: - {} BlkioDeviceReadBps: - {} BlkioDeviceWriteBps: - {} BlkioDeviceReadIOps: - {} BlkioDeviceWriteIOps: - {} ContainerIDFile: "" CpusetCpus: "" CpusetMems: "" CpuPercent: 80 CpuShares: 0 CpuPeriod: 100000 CpuRealtimePeriod: 1000000 CpuRealtimeRuntime: 10000 Devices: [] DeviceRequests: - Driver: "nvidia" Count: -1 DeviceIDs": ["0", "1", "GPU-fef8089b-4820-abfc-e83e-94318197576e"] Capabilities: [["gpu", "nvidia", "compute"]] Options: property1: "string" property2: "string" IpcMode: "" LxcConf: [] Memory: 0 MemorySwap: 0 MemoryReservation: 0 KernelMemory: 0 OomKillDisable: false OomScoreAdj: 500 NetworkMode: "bridge" PidMode: "" PortBindings: {} Privileged: false ReadonlyRootfs: false PublishAllPorts: false RestartPolicy: MaximumRetryCount: 2 Name: "on-failure" LogConfig: Type: "json-file" Sysctls: net.ipv4.ip_forward: "1" Ulimits: - {} VolumeDriver: "" ShmSize: 67108864 HostnamePath: "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/hostname" HostsPath: "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/hosts" LogPath: 
"/var/lib/docker/containers/1eb5fabf5a03807136561b3c00adcd2992b535d624d5e18b6cdc6a6844d9767b/1eb5fabf5a03807136561b3c00adcd2992b535d624d5e18b6cdc6a6844d9767b-json.log" Id: "ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39" Image: "04c5d3b7b0656168630d3ba35d8889bd0e9caafcaeb3004d2bfbc47e7c5d35d2" MountLabel: "" Name: "/boring_euclid" NetworkSettings: Bridge: "" SandboxID: "" HairpinMode: false LinkLocalIPv6Address: "" LinkLocalIPv6PrefixLen: 0 SandboxKey: "" EndpointID: "" Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 IPAddress: "" IPPrefixLen: 0 IPv6Gateway: "" MacAddress: "" Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "7587b82f0dada3656fda26588aee72630c6fab1536d36e394b2bfbcf898c971d" Gateway: "172.17.0.1" IPAddress: "172.17.0.2" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:12:00:02" Path: "/bin/sh" ProcessLabel: "" ResolvConfPath: "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/resolv.conf" RestartCount: 1 State: Error: "" ExitCode: 9 FinishedAt: "2015-01-06T15:47:32.080254511Z" Health: Status: "healthy" FailingStreak: 0 Log: - Start: "2019-12-22T10:59:05.6385933Z" End: "2019-12-22T10:59:05.8078452Z" ExitCode: 0 Output: "" OOMKilled: false Dead: false Paused: false Pid: 0 Restarting: false Running: true StartedAt: "2015-01-06T15:47:32.072697474Z" Status: "running" Mounts: - Name: "fac362...80535" Source: "/data" Destination: "/data" Driver: "local" Mode: "ro,Z" RW: false Propagation: "" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "size" in: "query" type: "boolean" default: false description: "Return the size of container as fields `SizeRw` and `SizeRootFs`" tags: ["Container"] /containers/{id}/top: get: summary: "List processes running inside a container" description: | On Unix systems, this is done by running the `ps` command. This endpoint is not supported on Windows. operationId: "ContainerTop" responses: 200: description: "no error" schema: type: "object" title: "ContainerTopResponse" description: "OK response to ContainerTop operation" properties: Titles: description: "The ps column titles" type: "array" items: type: "string" Processes: description: | Each process running in the container, where each is process is an array of values corresponding to the titles. type: "array" items: type: "array" items: type: "string" examples: application/json: Titles: - "UID" - "PID" - "PPID" - "C" - "STIME" - "TTY" - "TIME" - "CMD" Processes: - - "root" - "13642" - "882" - "0" - "17:03" - "pts/0" - "00:00:00" - "/bin/bash" - - "root" - "13735" - "13642" - "0" - "17:06" - "pts/0" - "00:00:00" - "sleep 10" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "ps_args" in: "query" description: "The arguments to pass to `ps`. 
For example, `aux`" type: "string" default: "-ef" tags: ["Container"] /containers/{id}/logs: get: summary: "Get container logs" description: | Get `stdout` and `stderr` logs from a container. Note: This endpoint works only for containers with the `json-file` or `journald` logging driver. operationId: "ContainerLogs" responses: 200: description: | logs returned as a stream in response body. For the stream format, [see the documentation for the attach endpoint](#operation/ContainerAttach). Note that unlike the attach endpoint, the logs endpoint does not upgrade the connection and does not set Content-Type. schema: type: "string" format: "binary" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "follow" in: "query" description: "Keep connection after returning logs." type: "boolean" default: false - name: "stdout" in: "query" description: "Return logs from `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Return logs from `stderr`" type: "boolean" default: false - name: "since" in: "query" description: "Only return logs since this time, as a UNIX timestamp" type: "integer" default: 0 - name: "until" in: "query" description: "Only return logs before this time, as a UNIX timestamp" type: "integer" default: 0 - name: "timestamps" in: "query" description: "Add timestamps to every log line" type: "boolean" default: false - name: "tail" in: "query" description: | Only return this number of log lines from the end of the logs. Specify as an integer or `all` to output all log lines. type: "string" default: "all" tags: ["Container"] /containers/{id}/changes: get: summary: "Get changes on a container’s filesystem" description: | Returns which files in a container's filesystem have been added, deleted, or modified. The `Kind` of modification can be one of: - `0`: Modified - `1`: Added - `2`: Deleted operationId: "ContainerChanges" produces: ["application/json"] responses: 200: description: "The list of changes" schema: type: "array" items: type: "object" x-go-name: "ContainerChangeResponseItem" title: "ContainerChangeResponseItem" description: "change item in response to ContainerChanges operation" required: [Path, Kind] properties: Path: description: "Path to file that has changed" type: "string" x-nullable: false Kind: description: "Kind of change" type: "integer" format: "uint8" enum: [0, 1, 2] x-nullable: false examples: application/json: - Path: "/dev" Kind: 0 - Path: "/dev/kmsg" Kind: 1 - Path: "/test" Kind: 1 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/export: get: summary: "Export a container" description: "Export the contents of a container as a tarball." 
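  # Illustrative sketch (informational comment only): exporting a container's
  # filesystem to a local tarball with the Go client SDK. Imports assumed:
  # "context", "io", "os", "github.com/docker/docker/client".
  #
  #   func exportContainer(cli *client.Client, id, dest string) error {
  #           rc, err := cli.ContainerExport(context.Background(), id)
  #           if err != nil {
  #                   return err
  #           }
  #           defer rc.Close()
  #           f, err := os.Create(dest)
  #           if err != nil {
  #                   return err
  #           }
  #           defer f.Close()
  #           _, err = io.Copy(f, rc) // rc streams the tar archive described above
  #           return err
  #   }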
operationId: "ContainerExport" produces: - "application/octet-stream" responses: 200: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/stats: get: summary: "Get container stats based on resource usage" description: | This endpoint returns a live stream of a container’s resource usage statistics. The `precpu_stats` is the CPU statistic of the *previous* read, and is used to calculate the CPU usage percentage. It is not an exact copy of the `cpu_stats` field. If either `precpu_stats.online_cpus` or `cpu_stats.online_cpus` is nil then for compatibility with older daemons the length of the corresponding `cpu_usage.percpu_usage` array should be used. On a cgroup v2 host, the following fields are not set * `blkio_stats`: all fields other than `io_service_bytes_recursive` * `cpu_stats`: `cpu_usage.percpu_usage` * `memory_stats`: `max_usage` and `failcnt` Also, `memory_stats.stats` fields are incompatible with cgroup v1. To calculate the values shown by the `stats` command of the docker cli tool the following formulas can be used: * used_memory = `memory_stats.usage - memory_stats.stats.cache` * available_memory = `memory_stats.limit` * Memory usage % = `(used_memory / available_memory) * 100.0` * cpu_delta = `cpu_stats.cpu_usage.total_usage - precpu_stats.cpu_usage.total_usage` * system_cpu_delta = `cpu_stats.system_cpu_usage - precpu_stats.system_cpu_usage` * number_cpus = `lenght(cpu_stats.cpu_usage.percpu_usage)` or `cpu_stats.online_cpus` * CPU usage % = `(cpu_delta / system_cpu_delta) * number_cpus * 100.0` operationId: "ContainerStats" produces: ["application/json"] responses: 200: description: "no error" schema: type: "object" examples: application/json: read: "2015-01-08T22:57:31.547920715Z" pids_stats: current: 3 networks: eth0: rx_bytes: 5338 rx_dropped: 0 rx_errors: 0 rx_packets: 36 tx_bytes: 648 tx_dropped: 0 tx_errors: 0 tx_packets: 8 eth5: rx_bytes: 4641 rx_dropped: 0 rx_errors: 0 rx_packets: 26 tx_bytes: 690 tx_dropped: 0 tx_errors: 0 tx_packets: 9 memory_stats: stats: total_pgmajfault: 0 cache: 0 mapped_file: 0 total_inactive_file: 0 pgpgout: 414 rss: 6537216 total_mapped_file: 0 writeback: 0 unevictable: 0 pgpgin: 477 total_unevictable: 0 pgmajfault: 0 total_rss: 6537216 total_rss_huge: 6291456 total_writeback: 0 total_inactive_anon: 0 rss_huge: 6291456 hierarchical_memory_limit: 67108864 total_pgfault: 964 total_active_file: 0 active_anon: 6537216 total_active_anon: 6537216 total_pgpgout: 414 total_cache: 0 inactive_anon: 0 active_file: 0 pgfault: 964 inactive_file: 0 total_pgpgin: 477 max_usage: 6651904 usage: 6537216 failcnt: 0 limit: 67108864 blkio_stats: {} cpu_stats: cpu_usage: percpu_usage: - 8646879 - 24472255 - 36438778 - 30657443 usage_in_usermode: 50000000 total_usage: 100215355 usage_in_kernelmode: 30000000 system_cpu_usage: 739306590000000 online_cpus: 4 throttling_data: periods: 0 throttled_periods: 0 throttled_time: 0 precpu_stats: cpu_usage: percpu_usage: - 8646879 - 24350896 - 36438778 - 30657443 usage_in_usermode: 50000000 total_usage: 100093996 usage_in_kernelmode: 30000000 system_cpu_usage: 9492140000000 online_cpus: 4 throttling_data: periods: 0 throttled_periods: 0 throttled_time: 0 404: description: "no such 
container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "stream" in: "query" description: | Stream the output. If false, the stats will be output once and then it will disconnect. type: "boolean" default: true - name: "one-shot" in: "query" description: | Only get a single stat instead of waiting for 2 cycles. Must be used with `stream=false`. type: "boolean" default: false tags: ["Container"] /containers/{id}/resize: post: summary: "Resize a container TTY" description: "Resize the TTY for a container." operationId: "ContainerResize" consumes: - "application/octet-stream" produces: - "text/plain" responses: 200: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "cannot resize container" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "h" in: "query" description: "Height of the TTY session in characters" type: "integer" - name: "w" in: "query" description: "Width of the TTY session in characters" type: "integer" tags: ["Container"] /containers/{id}/start: post: summary: "Start a container" operationId: "ContainerStart" responses: 204: description: "no error" 304: description: "container already started" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "detachKeys" in: "query" description: | Override the key sequence for detaching a container. Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,` or `_`. 
type: "string" tags: ["Container"] /containers/{id}/stop: post: summary: "Stop a container" operationId: "ContainerStop" responses: 204: description: "no error" 304: description: "container already stopped" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "t" in: "query" description: "Number of seconds to wait before killing the container" type: "integer" tags: ["Container"] /containers/{id}/restart: post: summary: "Restart a container" operationId: "ContainerRestart" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "t" in: "query" description: "Number of seconds to wait before killing the container" type: "integer" tags: ["Container"] /containers/{id}/kill: post: summary: "Kill a container" description: | Send a POSIX signal to a container, defaulting to killing to the container. operationId: "ContainerKill" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "container is not running" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "Container d37cde0fe4ad63c3a7252023b2f9800282894247d145cb5933ddf6e52cc03a28 is not running" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "signal" in: "query" description: "Signal to send to the container as an integer or string (e.g. `SIGINT`)" type: "string" default: "SIGKILL" tags: ["Container"] /containers/{id}/update: post: summary: "Update a container" description: | Change various configuration options of a container without having to recreate it. operationId: "ContainerUpdate" consumes: ["application/json"] produces: ["application/json"] responses: 200: description: "The container has been updated." 
schema: type: "object" title: "ContainerUpdateResponse" description: "OK response to ContainerUpdate operation" properties: Warnings: type: "array" items: type: "string" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "update" in: "body" required: true schema: allOf: - $ref: "#/definitions/Resources" - type: "object" properties: RestartPolicy: $ref: "#/definitions/RestartPolicy" example: BlkioWeight: 300 CpuShares: 512 CpuPeriod: 100000 CpuQuota: 50000 CpuRealtimePeriod: 1000000 CpuRealtimeRuntime: 10000 CpusetCpus: "0,1" CpusetMems: "0" Memory: 314572800 MemorySwap: 514288000 MemoryReservation: 209715200 KernelMemory: 52428800 RestartPolicy: MaximumRetryCount: 4 Name: "on-failure" tags: ["Container"] /containers/{id}/rename: post: summary: "Rename a container" operationId: "ContainerRename" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "name already in use" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "name" in: "query" required: true description: "New name for the container" type: "string" tags: ["Container"] /containers/{id}/pause: post: summary: "Pause a container" description: | Use the freezer cgroup to suspend all processes in a container. Traditionally, when suspending a process the `SIGSTOP` signal is used, which is observable by the process being suspended. With the freezer cgroup the process is unaware, and unable to capture, that it is being suspended, and subsequently resumed. operationId: "ContainerPause" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/unpause: post: summary: "Unpause a container" description: "Resume a container which has been paused." operationId: "ContainerUnpause" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/attach: post: summary: "Attach to a container" description: | Attach to a container to read its output or send it input. You can attach to the same container multiple times and you can reattach to containers that have been detached. Either the `stream` or `logs` parameter must be `true` for this endpoint to do anything. See the [documentation for the `docker attach` command](https://docs.docker.com/engine/reference/commandline/attach/) for more details. 
### Hijacking This endpoint hijacks the HTTP connection to transport `stdin`, `stdout`, and `stderr` on the same socket. This is the response from the daemon for an attach request: ``` HTTP/1.1 200 OK Content-Type: application/vnd.docker.raw-stream [STREAM] ``` After the headers and two new lines, the TCP connection can now be used for raw, bidirectional communication between the client and server. To hint potential proxies about connection hijacking, the Docker client can also optionally send connection upgrade headers. For example, the client sends this request to upgrade the connection: ``` POST /containers/16253994b7c4/attach?stream=1&stdout=1 HTTP/1.1 Upgrade: tcp Connection: Upgrade ``` The Docker daemon will respond with a `101 UPGRADED` response, and will similarly follow with the raw stream: ``` HTTP/1.1 101 UPGRADED Content-Type: application/vnd.docker.raw-stream Connection: Upgrade Upgrade: tcp [STREAM] ``` ### Stream format When the TTY setting is disabled in [`POST /containers/create`](#operation/ContainerCreate), the stream over the hijacked connected is multiplexed to separate out `stdout` and `stderr`. The stream consists of a series of frames, each containing a header and a payload. The header contains the information which the stream writes (`stdout` or `stderr`). It also contains the size of the associated frame encoded in the last four bytes (`uint32`). It is encoded on the first eight bytes like this: ```go header := [8]byte{STREAM_TYPE, 0, 0, 0, SIZE1, SIZE2, SIZE3, SIZE4} ``` `STREAM_TYPE` can be: - 0: `stdin` (is written on `stdout`) - 1: `stdout` - 2: `stderr` `SIZE1, SIZE2, SIZE3, SIZE4` are the four bytes of the `uint32` size encoded as big endian. Following the header is the payload, which is the specified number of bytes of `STREAM_TYPE`. The simplest way to implement this protocol is the following: 1. Read 8 bytes. 2. Choose `stdout` or `stderr` depending on the first byte. 3. Extract the frame size from the last four bytes. 4. Read the extracted size and output it on the correct output. 5. Goto 1. ### Stream format when using a TTY When the TTY setting is enabled in [`POST /containers/create`](#operation/ContainerCreate), the stream is not multiplexed. The data exchanged over the hijacked connection is simply the raw data from the process PTY and client's `stdin`. operationId: "ContainerAttach" produces: - "application/vnd.docker.raw-stream" responses: 101: description: "no error, hints proxy about hijacking" 200: description: "no error, no upgrade header found" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "detachKeys" in: "query" description: | Override the key sequence for detaching a container.Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,` or `_`. type: "string" - name: "logs" in: "query" description: | Replay previous logs from the container. This is useful for attaching to a container that has started and you want to output everything since the container started. If `stream` is also enabled, once all the previous output has been returned, it will seamlessly transition into streaming current output. 
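  # Illustrative sketch (informational comment only): demultiplexing the
  # non-TTY stream format described for this endpoint. The repository also
  # ships a ready-made implementation in github.com/docker/docker/pkg/stdcopy
  # (stdcopy.StdCopy). Imports assumed: "encoding/binary", "io".
  #
  #   func demux(src io.Reader, stdout, stderr io.Writer) error {
  #           header := make([]byte, 8)
  #           for {
  #                   if _, err := io.ReadFull(src, header); err != nil {
  #                           if err == io.EOF {
  #                                   return nil
  #                           }
  #                           return err
  #                   }
  #                   size := binary.BigEndian.Uint32(header[4:8]) // last four header bytes
  #                   dst := stdout
  #                   if header[0] == 2 { // STREAM_TYPE 2 is stderr
  #                           dst = stderr
  #                   }
  #                   if _, err := io.CopyN(dst, src, int64(size)); err != nil {
  #                           return err
  #                   }
  #           }
  #   }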
type: "boolean" default: false - name: "stream" in: "query" description: | Stream attached streams from the time the request was made onwards. type: "boolean" default: false - name: "stdin" in: "query" description: "Attach to `stdin`" type: "boolean" default: false - name: "stdout" in: "query" description: "Attach to `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Attach to `stderr`" type: "boolean" default: false tags: ["Container"] /containers/{id}/attach/ws: get: summary: "Attach to a container via a websocket" operationId: "ContainerAttachWebsocket" responses: 101: description: "no error, hints proxy about hijacking" 200: description: "no error, no upgrade header found" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "detachKeys" in: "query" description: | Override the key sequence for detaching a container.Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,`, or `_`. type: "string" - name: "logs" in: "query" description: "Return logs" type: "boolean" default: false - name: "stream" in: "query" description: "Return stream" type: "boolean" default: false - name: "stdin" in: "query" description: "Attach to `stdin`" type: "boolean" default: false - name: "stdout" in: "query" description: "Attach to `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Attach to `stderr`" type: "boolean" default: false tags: ["Container"] /containers/{id}/wait: post: summary: "Wait for a container" description: "Block until a container stops, then returns the exit code." operationId: "ContainerWait" produces: ["application/json"] responses: 200: description: "The container has exit." schema: type: "object" title: "ContainerWaitResponse" description: "OK response to ContainerWait operation" required: [StatusCode] properties: StatusCode: description: "Exit code of the container" type: "integer" x-nullable: false Error: description: "container waiting error, if any" type: "object" properties: Message: description: "Details of an error" type: "string" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "condition" in: "query" description: | Wait until a container state reaches the given condition, either 'not-running' (default), 'next-exit', or 'removed'. type: "string" default: "not-running" tags: ["Container"] /containers/{id}: delete: summary: "Remove a container" operationId: "ContainerDelete" responses: 204: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "conflict" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: | You cannot remove a running container: c2ada9df5af8. 
Stop the container before attempting removal or force remove 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "v" in: "query" description: "Remove anonymous volumes associated with the container." type: "boolean" default: false - name: "force" in: "query" description: "If the container is running, kill it before removing it." type: "boolean" default: false - name: "link" in: "query" description: "Remove the specified link associated with the container." type: "boolean" default: false tags: ["Container"] /containers/{id}/archive: head: summary: "Get information about files in a container" description: | A response header `X-Docker-Container-Path-Stat` is returned, containing a base64-encoded JSON object with some filesystem header information about the path. operationId: "ContainerArchiveInfo" responses: 200: description: "no error" headers: X-Docker-Container-Path-Stat: type: "string" description: | A base64-encoded JSON object with some filesystem header information about the path 400: description: "Bad parameter" schema: allOf: - $ref: "#/definitions/ErrorResponse" - type: "object" properties: message: description: | The error message. Either "must specify path parameter" (path cannot be empty) or "not a directory" (path was asserted to be a directory but exists as a file). type: "string" x-nullable: false 404: description: "Container or path does not exist" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "path" in: "query" required: true description: "Resource in the container’s filesystem to archive." type: "string" tags: ["Container"] get: summary: "Get an archive of a filesystem resource in a container" description: "Get a tar archive of a resource in the filesystem of container id." operationId: "ContainerArchive" produces: ["application/x-tar"] responses: 200: description: "no error" 400: description: "Bad parameter" schema: allOf: - $ref: "#/definitions/ErrorResponse" - type: "object" properties: message: description: | The error message. Either "must specify path parameter" (path cannot be empty) or "not a directory" (path was asserted to be a directory but exists as a file). type: "string" x-nullable: false 404: description: "Container or path does not exist" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "path" in: "query" required: true description: "Resource in the container’s filesystem to archive." type: "string" tags: ["Container"] put: summary: "Extract an archive of files or folders to a directory in a container" description: "Upload a tar archive to be extracted to a path in the filesystem of container id."
operationId: "PutContainerArchive" consumes: ["application/x-tar", "application/octet-stream"] responses: 200: description: "The content was extracted successfully" 400: description: "Bad parameter" schema: $ref: "#/definitions/ErrorResponse" 403: description: "Permission denied, the volume or container rootfs is marked as read-only." schema: $ref: "#/definitions/ErrorResponse" 404: description: "No such container or path does not exist inside the container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "path" in: "query" required: true description: "Path to a directory in the container to extract the archive’s contents into. " type: "string" - name: "noOverwriteDirNonDir" in: "query" description: | If `1`, `true`, or `True` then it will be an error if unpacking the given content would cause an existing directory to be replaced with a non-directory and vice versa. type: "string" - name: "copyUIDGID" in: "query" description: | If `1`, `true`, then it will copy UID/GID maps to the dest file or dir type: "string" - name: "inputStream" in: "body" required: true description: | The input stream must be a tar archive compressed with one of the following algorithms: `identity` (no compression), `gzip`, `bzip2`, or `xz`. schema: type: "string" format: "binary" tags: ["Container"] /containers/prune: post: summary: "Delete stopped containers" produces: - "application/json" operationId: "ContainerPrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `until=<timestamp>` Prune containers created before this timestamp. The `<timestamp>` can be Unix timestamps, date formatted timestamps, or Go duration strings (e.g. `10m`, `1h30m`) computed relative to the daemon machine’s time. - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune containers with (or without, in case `label!=...` is used) the specified labels. type: "string" responses: 200: description: "No error" schema: type: "object" title: "ContainerPruneResponse" properties: ContainersDeleted: description: "Container IDs that were deleted" type: "array" items: type: "string" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Container"] /images/json: get: summary: "List Images" description: "Returns a list of images on the server. Note that it uses a different, smaller representation of an image than inspecting a single image." 
operationId: "ImageList" produces: - "application/json" responses: 200: description: "Summary image data for the images matching the query" schema: type: "array" items: $ref: "#/definitions/ImageSummary" examples: application/json: - Id: "sha256:e216a057b1cb1efc11f8a268f37ef62083e70b1b38323ba252e25ac88904a7e8" ParentId: "" RepoTags: - "ubuntu:12.04" - "ubuntu:precise" RepoDigests: - "ubuntu@sha256:992069aee4016783df6345315302fa59681aae51a8eeb2f889dea59290f21787" Created: 1474925151 Size: 103579269 VirtualSize: 103579269 SharedSize: 0 Labels: {} Containers: 2 - Id: "sha256:3e314f95dcace0f5e4fd37b10862fe8398e3c60ed36600bc0ca5fda78b087175" ParentId: "" RepoTags: - "ubuntu:12.10" - "ubuntu:quantal" RepoDigests: - "ubuntu@sha256:002fba3e3255af10be97ea26e476692a7ebed0bb074a9ab960b2e7a1526b15d7" - "ubuntu@sha256:68ea0200f0b90df725d99d823905b04cf844f6039ef60c60bf3e019915017bd3" Created: 1403128455 Size: 172064416 VirtualSize: 172064416 SharedSize: 0 Labels: {} Containers: 5 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "all" in: "query" description: "Show all images. Only images from a final layer (no children) are shown by default." type: "boolean" default: false - name: "filters" in: "query" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the images list. Available filters: - `before`=(`<image-name>[:<tag>]`, `<image id>` or `<image@digest>`) - `dangling=true` - `label=key` or `label="key=value"` of an image label - `reference`=(`<image-name>[:<tag>]`) - `since`=(`<image-name>[:<tag>]`, `<image id>` or `<image@digest>`) type: "string" - name: "digests" in: "query" description: "Show digest information as a `RepoDigests` field on each image." type: "boolean" default: false tags: ["Image"] /build: post: summary: "Build an image" description: | Build an image from a tar archive with a `Dockerfile` in it. The `Dockerfile` specifies how the image is built from the tar archive. It is typically in the archive's root, but can be at a different path or have a different name by specifying the `dockerfile` parameter. [See the `Dockerfile` reference for more information](https://docs.docker.com/engine/reference/builder/). The Docker daemon performs a preliminary validation of the `Dockerfile` before starting the build, and returns an error if the syntax is incorrect. After that, each instruction is run one-by-one until the ID of the new image is output. The build is canceled if the client drops the connection by quitting or being killed. operationId: "ImageBuild" consumes: - "application/octet-stream" produces: - "application/json" parameters: - name: "inputStream" in: "body" description: "A tar archive compressed with one of the following algorithms: identity (no compression), gzip, bzip2, xz." schema: type: "string" format: "binary" - name: "dockerfile" in: "query" description: "Path within the build context to the `Dockerfile`. This is ignored if `remote` is specified and points to an external `Dockerfile`." type: "string" default: "Dockerfile" - name: "t" in: "query" description: "A name and optional tag to apply to the image in the `name:tag` format. If you omit the tag the default `latest` value is assumed. You can provide several `t` parameters." type: "string" - name: "extrahosts" in: "query" description: "Extra hosts to add to /etc/hosts" type: "string" - name: "remote" in: "query" description: "A Git repository URI or HTTP/HTTPS context URI. 
If the URI points to a single text file, the file’s contents are placed into a file called `Dockerfile` and the image is built from that file. If the URI points to a tarball, the file is downloaded by the daemon and the contents therein used as the context for the build. If the URI points to a tarball and the `dockerfile` parameter is also specified, there must be a file with the corresponding path inside the tarball." type: "string" - name: "q" in: "query" description: "Suppress verbose build output." type: "boolean" default: false - name: "nocache" in: "query" description: "Do not use the cache when building the image." type: "boolean" default: false - name: "cachefrom" in: "query" description: "JSON array of images used for build cache resolution." type: "string" - name: "pull" in: "query" description: "Attempt to pull the image even if an older image exists locally." type: "string" - name: "rm" in: "query" description: "Remove intermediate containers after a successful build." type: "boolean" default: true - name: "forcerm" in: "query" description: "Always remove intermediate containers, even upon failure." type: "boolean" default: false - name: "memory" in: "query" description: "Set memory limit for build." type: "integer" - name: "memswap" in: "query" description: "Total memory (memory + swap). Set as `-1` to disable swap." type: "integer" - name: "cpushares" in: "query" description: "CPU shares (relative weight)." type: "integer" - name: "cpusetcpus" in: "query" description: "CPUs in which to allow execution (e.g., `0-3`, `0,1`)." type: "string" - name: "cpuperiod" in: "query" description: "The length of a CPU period in microseconds." type: "integer" - name: "cpuquota" in: "query" description: "Microseconds of CPU time that the container can get in a CPU period." type: "integer" - name: "buildargs" in: "query" description: > JSON map of string pairs for build-time variables. Users pass these values at build-time. Docker uses the buildargs as the environment context for commands run via the `Dockerfile` RUN instruction, or for variable expansion in other `Dockerfile` instructions. This is not meant for passing secret values. For example, the build arg `FOO=bar` would become `{"FOO":"bar"}` in JSON. This would result in the query parameter `buildargs={"FOO":"bar"}`. Note that `{"FOO":"bar"}` should be URI component encoded. [Read more about the buildargs instruction.](https://docs.docker.com/engine/reference/builder/#arg) type: "string" - name: "shmsize" in: "query" description: "Size of `/dev/shm` in bytes. The size must be greater than 0. If omitted the system uses 64MB." type: "integer" - name: "squash" in: "query" description: "Squash the resulting images layers into a single layer. *(Experimental release only.)*" type: "boolean" - name: "labels" in: "query" description: "Arbitrary key/value labels to set on the image, as a JSON map of string pairs." type: "string" - name: "networkmode" in: "query" description: | Sets the networking mode for the run commands during build. Supported standard values are: `bridge`, `host`, `none`, and `container:<name|id>`. Any other value is taken as a custom network's name or ID to which this container should connect to. type: "string" - name: "Content-type" in: "header" type: "string" enum: - "application/x-tar" default: "application/x-tar" - name: "X-Registry-Config" in: "header" description: | This is a base64-encoded JSON object with auth configurations for multiple registries that a build may refer to. 
The key is a registry URL, and the value is an auth configuration object, [as described in the authentication section](#section/Authentication). For example: ``` { "docker.example.com": { "username": "janedoe", "password": "hunter2" }, "https://index.docker.io/v1/": { "username": "mobydock", "password": "conta1n3rize14" } } ``` Only the registry domain name (and port if not the default 443) are required. However, for legacy reasons, the Docker Hub registry must be specified with both a `https://` prefix and a `/v1/` suffix even though Docker will prefer to use the v2 registry API. type: "string" - name: "platform" in: "query" description: "Platform in the format os[/arch[/variant]]" type: "string" default: "" - name: "target" in: "query" description: "Target build stage" type: "string" default: "" - name: "outputs" in: "query" description: "BuildKit output configuration" type: "string" default: "" responses: 200: description: "no error" 400: description: "Bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Image"] /build/prune: post: summary: "Delete builder cache" produces: - "application/json" operationId: "BuildPrune" parameters: - name: "keep-storage" in: "query" description: "Amount of disk space in bytes to keep for cache" type: "integer" format: "int64" - name: "all" in: "query" type: "boolean" description: "Remove all types of build cache" - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the list of build cache objects. Available filters: - `until=<duration>`: duration relative to daemon's time, during which build cache was not used, in Go's duration format (e.g., '24h') - `id=<id>` - `parent=<id>` - `type=<string>` - `description=<string>` - `inuse` - `shared` - `private` responses: 200: description: "No error" schema: type: "object" title: "BuildPruneResponse" properties: CachesDeleted: type: "array" items: description: "ID of build cache object" type: "string" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Image"] /images/create: post: summary: "Create an image" description: "Create an image by either pulling it from a registry or importing it." operationId: "ImageCreate" consumes: - "text/plain" - "application/octet-stream" produces: - "application/json" responses: 200: description: "no error" 404: description: "repository does not exist or no read access" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "fromImage" in: "query" description: "Name of the image to pull. The name may include a tag or digest. This parameter may only be used when pulling an image. The pull is cancelled if the HTTP connection is closed." type: "string" - name: "fromSrc" in: "query" description: "Source to import. The value may be a URL from which the image can be retrieved or `-` to read the image from the request body. This parameter may only be used when importing an image." type: "string" - name: "repo" in: "query" description: "Repository name given to an image when it is imported. The repo may include a tag. This parameter may only be used when importing an image." type: "string" - name: "tag" in: "query" description: "Tag or digest. 
If empty when pulling an image, this causes all tags for the given image to be pulled." type: "string" - name: "message" in: "query" description: "Set commit message for imported image." type: "string" - name: "inputImage" in: "body" description: "Image content if the value `-` has been specified in fromSrc query parameter" schema: type: "string" required: false - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration. Refer to the [authentication section](#section/Authentication) for details. type: "string" - name: "platform" in: "query" description: "Platform in the format os[/arch[/variant]]" type: "string" default: "" tags: ["Image"] /images/{name}/json: get: summary: "Inspect an image" description: "Return low-level information about an image." operationId: "ImageInspect" produces: - "application/json" responses: 200: description: "No error" schema: $ref: "#/definitions/Image" examples: application/json: Id: "sha256:85f05633ddc1c50679be2b16a0479ab6f7637f8884e0cfe0f4d20e1ebb3d6e7c" Container: "cb91e48a60d01f1e27028b4fc6819f4f290b3cf12496c8176ec714d0d390984a" Comment: "" Os: "linux" Architecture: "amd64" Parent: "sha256:91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c" ContainerConfig: Tty: false Hostname: "e611e15f9c9d" Domainname: "" AttachStdout: false PublishService: "" AttachStdin: false OpenStdin: false StdinOnce: false NetworkDisabled: false OnBuild: [] Image: "91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c" User: "" WorkingDir: "" MacAddress: "" AttachStderr: false Labels: com.example.license: "GPL" com.example.version: "1.0" com.example.vendor: "Acme" Env: - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" Cmd: - "/bin/sh" - "-c" - "#(nop) LABEL com.example.vendor=Acme com.example.license=GPL com.example.version=1.0" DockerVersion: "1.9.0-dev" VirtualSize: 188359297 Size: 0 Author: "" Created: "2015-09-10T08:30:53.26995814Z" GraphDriver: Name: "aufs" Data: {} RepoDigests: - "localhost:5000/test/busybox/example@sha256:cbbf2f9a99b47fc460d422812b6a5adff7dfee951d8fa2e4a98caa0382cfbdbf" RepoTags: - "example:1.0" - "example:latest" - "example:stable" Config: Image: "91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c" NetworkDisabled: false OnBuild: [] StdinOnce: false PublishService: "" AttachStdin: false OpenStdin: false Domainname: "" AttachStdout: false Tty: false Hostname: "e611e15f9c9d" Cmd: - "/bin/bash" Env: - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" Labels: com.example.vendor: "Acme" com.example.version: "1.0" com.example.license: "GPL" MacAddress: "" AttachStderr: false WorkingDir: "" User: "" RootFS: Type: "layers" Layers: - "sha256:1834950e52ce4d5a88a1bbd131c537f4d0e56d10ff0dd69e66be3b7dfa9df7e6" - "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such image: someimage (tag: latest)" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or id" type: "string" required: true tags: ["Image"] /images/{name}/history: get: summary: "Get the history of an image" description: "Return parent layers of an image." 
operationId: "ImageHistory" produces: ["application/json"] responses: 200: description: "List of image layers" schema: type: "array" items: type: "object" x-go-name: HistoryResponseItem title: "HistoryResponseItem" description: "individual image layer information in response to ImageHistory operation" required: [Id, Created, CreatedBy, Tags, Size, Comment] properties: Id: type: "string" x-nullable: false Created: type: "integer" format: "int64" x-nullable: false CreatedBy: type: "string" x-nullable: false Tags: type: "array" items: type: "string" Size: type: "integer" format: "int64" x-nullable: false Comment: type: "string" x-nullable: false examples: application/json: - Id: "3db9c44f45209632d6050b35958829c3a2aa256d81b9a7be45b362ff85c54710" Created: 1398108230 CreatedBy: "/bin/sh -c #(nop) ADD file:eb15dbd63394e063b805a3c32ca7bf0266ef64676d5a6fab4801f2e81e2a5148 in /" Tags: - "ubuntu:lucid" - "ubuntu:10.04" Size: 182964289 Comment: "" - Id: "6cfa4d1f33fb861d4d114f43b25abd0ac737509268065cdfd69d544a59c85ab8" Created: 1398108222 CreatedBy: "/bin/sh -c #(nop) MAINTAINER Tianon Gravi <[email protected]> - mkimage-debootstrap.sh -i iproute,iputils-ping,ubuntu-minimal -t lucid.tar.xz lucid http://archive.ubuntu.com/ubuntu/" Tags: [] Size: 0 Comment: "" - Id: "511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158" Created: 1371157430 CreatedBy: "" Tags: - "scratch12:latest" - "scratch:latest" Size: 0 Comment: "Imported from -" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID" type: "string" required: true tags: ["Image"] /images/{name}/push: post: summary: "Push an image" description: | Push an image to a registry. If you wish to push an image on to a private registry, that image must already have a tag which references the registry. For example, `registry.example.com/myimage:latest`. The push is cancelled if the HTTP connection is closed. operationId: "ImagePush" consumes: - "application/octet-stream" responses: 200: description: "No error" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID." type: "string" required: true - name: "tag" in: "query" description: "The tag to associate with the image on the registry." type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration. Refer to the [authentication section](#section/Authentication) for details. type: "string" required: true tags: ["Image"] /images/{name}/tag: post: summary: "Tag an image" description: "Tag an image so that it becomes part of a repository." operationId: "ImageTag" responses: 201: description: "No error" 400: description: "Bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Conflict" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID to tag." type: "string" required: true - name: "repo" in: "query" description: "The repository to tag in. For example, `someuser/someimage`." type: "string" - name: "tag" in: "query" description: "The name of the new tag." 
type: "string" tags: ["Image"] /images/{name}: delete: summary: "Remove an image" description: | Remove an image, along with any untagged parent images that were referenced by that image. Images can't be removed if they have descendant images, are being used by a running container or are being used by a build. operationId: "ImageDelete" produces: ["application/json"] responses: 200: description: "The image was deleted successfully" schema: type: "array" items: $ref: "#/definitions/ImageDeleteResponseItem" examples: application/json: - Untagged: "3e2f21a89f" - Deleted: "3e2f21a89f" - Deleted: "53b4f83ac9" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Conflict" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID" type: "string" required: true - name: "force" in: "query" description: "Remove the image even if it is being used by stopped containers or has other tags" type: "boolean" default: false - name: "noprune" in: "query" description: "Do not delete untagged parent images" type: "boolean" default: false tags: ["Image"] /images/search: get: summary: "Search images" description: "Search for an image on Docker Hub." operationId: "ImageSearch" produces: - "application/json" responses: 200: description: "No error" schema: type: "array" items: type: "object" title: "ImageSearchResponseItem" properties: description: type: "string" is_official: type: "boolean" is_automated: type: "boolean" name: type: "string" star_count: type: "integer" examples: application/json: - description: "" is_official: false is_automated: false name: "wma55/u1210sshd" star_count: 0 - description: "" is_official: false is_automated: false name: "jdswinbank/sshd" star_count: 0 - description: "" is_official: false is_automated: false name: "vgauthier/sshd" star_count: 0 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "term" in: "query" description: "Term to search" type: "string" required: true - name: "limit" in: "query" description: "Maximum number of results to return" type: "integer" - name: "filters" in: "query" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the images list. Available filters: - `is-automated=(true|false)` - `is-official=(true|false)` - `stars=<number>` Matches images that has at least 'number' stars. type: "string" tags: ["Image"] /images/prune: post: summary: "Delete unused images" produces: - "application/json" operationId: "ImagePrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `dangling=<boolean>` When set to `true` (or `1`), prune only unused *and* untagged images. When set to `false` (or `0`), all unused images are pruned. - `until=<string>` Prune images created before this timestamp. The `<timestamp>` can be Unix timestamps, date formatted timestamps, or Go duration strings (e.g. `10m`, `1h30m`) computed relative to the daemon machine’s time. - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune images with (or without, in case `label!=...` is used) the specified labels. 
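For example, to prune only unused and untagged images that were created more than 24 hours ago, the encoded filters could look like this (illustrative values, URL-encoded into the `filters` query parameter):

```json
{
  "dangling": ["true"],
  "until": ["24h"]
}
```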
type: "string" responses: 200: description: "No error" schema: type: "object" title: "ImagePruneResponse" properties: ImagesDeleted: description: "Images that were deleted" type: "array" items: $ref: "#/definitions/ImageDeleteResponseItem" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Image"] /auth: post: summary: "Check auth configuration" description: | Validate credentials for a registry and, if available, get an identity token for accessing the registry without password. operationId: "SystemAuth" consumes: ["application/json"] produces: ["application/json"] responses: 200: description: "An identity token was generated successfully." schema: type: "object" title: "SystemAuthResponse" required: [Status] properties: Status: description: "The status of the authentication" type: "string" x-nullable: false IdentityToken: description: "An opaque token used to authenticate a user after a successful login" type: "string" x-nullable: false examples: application/json: Status: "Login Succeeded" IdentityToken: "9cbaf023786cd7..." 204: description: "No error" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "authConfig" in: "body" description: "Authentication to check" schema: $ref: "#/definitions/AuthConfig" tags: ["System"] /info: get: summary: "Get system information" operationId: "SystemInfo" produces: - "application/json" responses: 200: description: "No error" schema: $ref: "#/definitions/SystemInfo" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /version: get: summary: "Get version" description: "Returns the version of Docker that is running and various information about the system that Docker is running on." operationId: "SystemVersion" produces: ["application/json"] responses: 200: description: "no error" schema: $ref: "#/definitions/SystemVersion" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /_ping: get: summary: "Ping" description: "This is a dummy endpoint you can use to test if the server is accessible." operationId: "SystemPing" produces: ["text/plain"] responses: 200: description: "no error" schema: type: "string" example: "OK" headers: API-Version: type: "string" description: "Max API Version the server supports" Builder-Version: type: "string" description: "Default version of docker image builder" Docker-Experimental: type: "boolean" description: "If the server is running with experimental mode enabled" Cache-Control: type: "string" default: "no-cache, no-store, must-revalidate" Pragma: type: "string" default: "no-cache" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" headers: Cache-Control: type: "string" default: "no-cache, no-store, must-revalidate" Pragma: type: "string" default: "no-cache" tags: ["System"] head: summary: "Ping" description: "This is a dummy endpoint you can use to test if the server is accessible." 
operationId: "SystemPingHead" produces: ["text/plain"] responses: 200: description: "no error" schema: type: "string" example: "(empty)" headers: API-Version: type: "string" description: "Max API Version the server supports" Builder-Version: type: "string" description: "Default version of docker image builder" Docker-Experimental: type: "boolean" description: "If the server is running with experimental mode enabled" Cache-Control: type: "string" default: "no-cache, no-store, must-revalidate" Pragma: type: "string" default: "no-cache" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /commit: post: summary: "Create a new image from a container" operationId: "ImageCommit" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "containerConfig" in: "body" description: "The container configuration" schema: $ref: "#/definitions/ContainerConfig" - name: "container" in: "query" description: "The ID or name of the container to commit" type: "string" - name: "repo" in: "query" description: "Repository name for the created image" type: "string" - name: "tag" in: "query" description: "Tag name for the create image" type: "string" - name: "comment" in: "query" description: "Commit message" type: "string" - name: "author" in: "query" description: "Author of the image (e.g., `John Hannibal Smith <[email protected]>`)" type: "string" - name: "pause" in: "query" description: "Whether to pause the container before committing" type: "boolean" default: true - name: "changes" in: "query" description: "`Dockerfile` instructions to apply while committing" type: "string" tags: ["Image"] /events: get: summary: "Monitor events" description: | Stream real-time events from the server. Various objects within Docker report events when something happens to them. 
Containers report these events: `attach`, `commit`, `copy`, `create`, `destroy`, `detach`, `die`, `exec_create`, `exec_detach`, `exec_start`, `exec_die`, `export`, `health_status`, `kill`, `oom`, `pause`, `rename`, `resize`, `restart`, `start`, `stop`, `top`, `unpause`, `update`, and `prune` Images report these events: `delete`, `import`, `load`, `pull`, `push`, `save`, `tag`, `untag`, and `prune` Volumes report these events: `create`, `mount`, `unmount`, `destroy`, and `prune` Networks report these events: `create`, `connect`, `disconnect`, `destroy`, `update`, `remove`, and `prune` The Docker daemon reports these events: `reload` Services report these events: `create`, `update`, and `remove` Nodes report these events: `create`, `update`, and `remove` Secrets report these events: `create`, `update`, and `remove` Configs report these events: `create`, `update`, and `remove` The Builder reports `prune` events operationId: "SystemEvents" produces: - "application/json" responses: 200: description: "no error" schema: type: "object" title: "SystemEventsResponse" properties: Type: description: "The type of object emitting the event" type: "string" Action: description: "The type of event" type: "string" Actor: type: "object" properties: ID: description: "The ID of the object emitting the event" type: "string" Attributes: description: "Various key/value attributes of the object, depending on its type" type: "object" additionalProperties: type: "string" time: description: "Timestamp of event" type: "integer" timeNano: description: "Timestamp of event, with nanosecond accuracy" type: "integer" format: "int64" examples: application/json: Type: "container" Action: "create" Actor: ID: "ede54ee1afda366ab42f824e8a5ffd195155d853ceaec74a927f249ea270c743" Attributes: com.example.some-label: "some-label-value" image: "alpine" name: "my-container" time: 1461943101 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "since" in: "query" description: "Show events created since this timestamp then stream new events." type: "string" - name: "until" in: "query" description: "Show events created until this timestamp then stop streaming." type: "string" - name: "filters" in: "query" description: | A JSON encoded value of filters (a `map[string][]string`) to process on the event list. 
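For example, to stream only `start` and `die` events for containers, the encoded filters could look like this (illustrative values; the available filter names are listed below):

```json
{
  "type": ["container"],
  "event": ["start", "die"]
}
```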
Available filters: - `config=<string>` config name or ID - `container=<string>` container name or ID - `daemon=<string>` daemon name or ID - `event=<string>` event type - `image=<string>` image name or ID - `label=<string>` image or container label - `network=<string>` network name or ID - `node=<string>` node ID - `plugin`=<string> plugin name or ID - `scope`=<string> local or swarm - `secret=<string>` secret name or ID - `service=<string>` service name or ID - `type=<string>` object to filter by, one of `container`, `image`, `volume`, `network`, `daemon`, `plugin`, `node`, `service`, `secret` or `config` - `volume=<string>` volume name type: "string" tags: ["System"] /system/df: get: summary: "Get data usage information" operationId: "SystemDataUsage" responses: 200: description: "no error" schema: type: "object" title: "SystemDataUsageResponse" properties: LayersSize: type: "integer" format: "int64" Images: type: "array" items: $ref: "#/definitions/ImageSummary" Containers: type: "array" items: $ref: "#/definitions/ContainerSummary" Volumes: type: "array" items: $ref: "#/definitions/Volume" BuildCache: type: "array" items: $ref: "#/definitions/BuildCache" example: LayersSize: 1092588 Images: - Id: "sha256:2b8fd9751c4c0f5dd266fcae00707e67a2545ef34f9a29354585f93dac906749" ParentId: "" RepoTags: - "busybox:latest" RepoDigests: - "busybox@sha256:a59906e33509d14c036c8678d687bd4eec81ed7c4b8ce907b888c607f6a1e0e6" Created: 1466724217 Size: 1092588 SharedSize: 0 VirtualSize: 1092588 Labels: {} Containers: 1 Containers: - Id: "e575172ed11dc01bfce087fb27bee502db149e1a0fad7c296ad300bbff178148" Names: - "/top" Image: "busybox" ImageID: "sha256:2b8fd9751c4c0f5dd266fcae00707e67a2545ef34f9a29354585f93dac906749" Command: "top" Created: 1472592424 Ports: [] SizeRootFs: 1092588 Labels: {} State: "exited" Status: "Exited (0) 56 minutes ago" HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: IPAMConfig: null Links: null Aliases: null NetworkID: "d687bc59335f0e5c9ee8193e5612e8aee000c8c62ea170cfb99c098f95899d92" EndpointID: "8ed5115aeaad9abb174f68dcf135b49f11daf597678315231a32ca28441dec6a" Gateway: "172.18.0.1" IPAddress: "172.18.0.2" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:12:00:02" Mounts: [] Volumes: - Name: "my-volume" Driver: "local" Mountpoint: "/var/lib/docker/volumes/my-volume/_data" Labels: null Scope: "local" Options: null UsageData: Size: 10920104 RefCount: 2 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /images/{name}/get: get: summary: "Export an image" description: | Get a tarball containing all images and metadata for a repository. If `name` is a specific name and tag (e.g. `ubuntu:latest`), then only that image (and its parents) are returned. If `name` is an image ID, similarly only that image (and its parents) are returned, but with the exclusion of the `repositories` file in the tarball, as there were no image names referenced. ### Image tarball format An image tarball contains one directory per image layer (named using its long ID), each containing these files: - `VERSION`: currently `1.0` - the file format version - `json`: detailed layer information, similar to `docker inspect layer_id` - `layer.tar`: A tarfile containing the filesystem changes in this layer The `layer.tar` file contains `aufs` style `.wh..wh.aufs` files and directories for storing attribute changes and deletions. 
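For illustration, the archive for an image with two layers might contain entries like these (the layer IDs shown are placeholders):

```
<layer-id-1>/VERSION
<layer-id-1>/json
<layer-id-1>/layer.tar
<layer-id-2>/VERSION
<layer-id-2>/json
<layer-id-2>/layer.tar
```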
If the tarball defines a repository, the tarball should also include a `repositories` file at the root that contains a list of repository and tag names mapped to layer IDs. ```json { "hello-world": { "latest": "565a9d68a73f6706862bfe8409a7f659776d4d60a8d096eb4a3cbce6999cc2a1" } } ``` operationId: "ImageGet" produces: - "application/x-tar" responses: 200: description: "no error" schema: type: "string" format: "binary" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID" type: "string" required: true tags: ["Image"] /images/get: get: summary: "Export several images" description: | Get a tarball containing all images and metadata for several image repositories. For each value of the `names` parameter: if it is a specific name and tag (e.g. `ubuntu:latest`), then only that image (and its parents) are returned; if it is an image ID, similarly only that image (and its parents) are returned and there would be no names referenced in the 'repositories' file for this image ID. For details on the format, see the [export image endpoint](#operation/ImageGet). operationId: "ImageGetAll" produces: - "application/x-tar" responses: 200: description: "no error" schema: type: "string" format: "binary" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "names" in: "query" description: "Image names to filter by" type: "array" items: type: "string" tags: ["Image"] /images/load: post: summary: "Import images" description: | Load a set of images and tags into a repository. For details on the format, see the [export image endpoint](#operation/ImageGet). operationId: "ImageLoad" consumes: - "application/x-tar" produces: - "application/json" responses: 200: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "imagesTarball" in: "body" description: "Tar archive containing images" schema: type: "string" format: "binary" - name: "quiet" in: "query" description: "Suppress progress details during load." type: "boolean" default: false tags: ["Image"] /containers/{id}/exec: post: summary: "Create an exec instance" description: "Run a command inside a running container." operationId: "ContainerExec" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "container is paused" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "execConfig" in: "body" description: "Exec configuration" schema: type: "object" properties: AttachStdin: type: "boolean" description: "Attach to `stdin` of the exec command." AttachStdout: type: "boolean" description: "Attach to `stdout` of the exec command." AttachStderr: type: "boolean" description: "Attach to `stderr` of the exec command." DetachKeys: type: "string" description: | Override the key sequence for detaching a container. Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,` or `_`. Tty: type: "boolean" description: "Allocate a pseudo-TTY." Env: description: | A list of environment variables in the form `["VAR=value", ...]`. 
type: "array" items: type: "string" Cmd: type: "array" description: "Command to run, as a string or array of strings." items: type: "string" Privileged: type: "boolean" description: "Runs the exec process with extended privileges." default: false User: type: "string" description: | The user, and optionally, group to run the exec process inside the container. Format is one of: `user`, `user:group`, `uid`, or `uid:gid`. WorkingDir: type: "string" description: | The working directory for the exec process inside the container. example: AttachStdin: false AttachStdout: true AttachStderr: true DetachKeys: "ctrl-p,ctrl-q" Tty: false Cmd: - "date" Env: - "FOO=bar" - "BAZ=quux" required: true - name: "id" in: "path" description: "ID or name of container" type: "string" required: true tags: ["Exec"] /exec/{id}/start: post: summary: "Start an exec instance" description: | Starts a previously set up exec instance. If detach is true, this endpoint returns immediately after starting the command. Otherwise, it sets up an interactive session with the command. operationId: "ExecStart" consumes: - "application/json" produces: - "application/vnd.docker.raw-stream" responses: 200: description: "No error" 404: description: "No such exec instance" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Container is stopped or paused" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "execStartConfig" in: "body" schema: type: "object" properties: Detach: type: "boolean" description: "Detach from the command." Tty: type: "boolean" description: "Allocate a pseudo-TTY." example: Detach: false Tty: false - name: "id" in: "path" description: "Exec instance ID" required: true type: "string" tags: ["Exec"] /exec/{id}/resize: post: summary: "Resize an exec instance" description: | Resize the TTY session used by an exec instance. This endpoint only works if `tty` was specified as part of creating and starting the exec instance. operationId: "ExecResize" responses: 201: description: "No error" 404: description: "No such exec instance" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Exec instance ID" required: true type: "string" - name: "h" in: "query" description: "Height of the TTY session in characters" type: "integer" - name: "w" in: "query" description: "Width of the TTY session in characters" type: "integer" tags: ["Exec"] /exec/{id}/json: get: summary: "Inspect an exec instance" description: "Return low-level information about an exec instance." operationId: "ExecInspect" produces: - "application/json" responses: 200: description: "No error" schema: type: "object" title: "ExecInspectResponse" properties: CanRemove: type: "boolean" DetachKeys: type: "string" ID: type: "string" Running: type: "boolean" ExitCode: type: "integer" ProcessConfig: $ref: "#/definitions/ProcessConfig" OpenStdin: type: "boolean" OpenStderr: type: "boolean" OpenStdout: type: "boolean" ContainerID: type: "string" Pid: type: "integer" description: "The system process ID for the exec process." 
examples: application/json: CanRemove: false ContainerID: "b53ee82b53a40c7dca428523e34f741f3abc51d9f297a14ff874bf761b995126" DetachKeys: "" ExitCode: 2 ID: "f33bbfb39f5b142420f4759b2348913bd4a8d1a6d7fd56499cb41a1bb91d7b3b" OpenStderr: true OpenStdin: true OpenStdout: true ProcessConfig: arguments: - "-c" - "exit 2" entrypoint: "sh" privileged: false tty: true user: "1000" Running: false Pid: 42000 404: description: "No such exec instance" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Exec instance ID" required: true type: "string" tags: ["Exec"] /volumes: get: summary: "List volumes" operationId: "VolumeList" produces: ["application/json"] responses: 200: description: "Summary volume data that matches the query" schema: type: "object" title: "VolumeListResponse" description: "Volume list response" required: [Volumes, Warnings] properties: Volumes: type: "array" x-nullable: false description: "List of volumes" items: $ref: "#/definitions/Volume" Warnings: type: "array" x-nullable: false description: | Warnings that occurred when fetching the list of volumes. items: type: "string" examples: application/json: Volumes: - CreatedAt: "2017-07-19T12:00:26Z" Name: "tardis" Driver: "local" Mountpoint: "/var/lib/docker/volumes/tardis" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Scope: "local" Options: device: "tmpfs" o: "size=100m,uid=1000" type: "tmpfs" Warnings: [] 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" description: | JSON encoded value of the filters (a `map[string][]string`) to process on the volumes list. Available filters: - `dangling=<boolean>` When set to `true` (or `1`), returns all volumes that are not in use by a container. When set to `false` (or `0`), only volumes that are in use by one or more containers are returned. - `driver=<volume-driver-name>` Matches volumes based on their driver. - `label=<key>` or `label=<key>:<value>` Matches volumes based on the presence of a `label` alone or a `label` and a value. - `name=<volume-name>` Matches all or part of a volume name. type: "string" format: "json" tags: ["Volume"] /volumes/create: post: summary: "Create a volume" operationId: "VolumeCreate" consumes: ["application/json"] produces: ["application/json"] responses: 201: description: "The volume was created successfully" schema: $ref: "#/definitions/Volume" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "volumeConfig" in: "body" required: true description: "Volume configuration" schema: type: "object" description: "Volume configuration" title: "VolumeConfig" properties: Name: description: | The new volume's name. If not specified, Docker generates a name. type: "string" x-nullable: false Driver: description: "Name of the volume driver to use." type: "string" default: "local" x-nullable: false DriverOpts: description: | A mapping of driver options and values. These options are passed directly to the driver and are driver specific. type: "object" additionalProperties: type: "string" Labels: description: "User-defined key/value metadata." 
type: "object" additionalProperties: type: "string" example: Name: "tardis" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Driver: "custom" tags: ["Volume"] /volumes/{name}: get: summary: "Inspect a volume" operationId: "VolumeInspect" produces: ["application/json"] responses: 200: description: "No error" schema: $ref: "#/definitions/Volume" 404: description: "No such volume" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" required: true description: "Volume name or ID" type: "string" tags: ["Volume"] delete: summary: "Remove a volume" description: "Instruct the driver to remove the volume." operationId: "VolumeDelete" responses: 204: description: "The volume was removed" 404: description: "No such volume or volume driver" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Volume is in use and cannot be removed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" required: true description: "Volume name or ID" type: "string" - name: "force" in: "query" description: "Force the removal of the volume" type: "boolean" default: false tags: ["Volume"] /volumes/prune: post: summary: "Delete unused volumes" produces: - "application/json" operationId: "VolumePrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune volumes with (or without, in case `label!=...` is used) the specified labels. type: "string" responses: 200: description: "No error" schema: type: "object" title: "VolumePruneResponse" properties: VolumesDeleted: description: "Volumes that were deleted" type: "array" items: type: "string" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Volume"] /networks: get: summary: "List networks" description: | Returns a list of networks. For details on the format, see the [network inspect endpoint](#operation/NetworkInspect). Note that it uses a different, smaller representation of a network than inspecting a single network. For example, the list of containers attached to the network is not propagated in API versions 1.28 and up. 
operationId: "NetworkList" produces: - "application/json" responses: 200: description: "No error" schema: type: "array" items: $ref: "#/definitions/Network" examples: application/json: - Name: "bridge" Id: "f2de39df4171b0dc801e8002d1d999b77256983dfc63041c0f34030aa3977566" Created: "2016-10-19T06:21:00.416543526Z" Scope: "local" Driver: "bridge" EnableIPv6: false Internal: false Attachable: false Ingress: false IPAM: Driver: "default" Config: - Subnet: "172.17.0.0/16" Options: com.docker.network.bridge.default_bridge: "true" com.docker.network.bridge.enable_icc: "true" com.docker.network.bridge.enable_ip_masquerade: "true" com.docker.network.bridge.host_binding_ipv4: "0.0.0.0" com.docker.network.bridge.name: "docker0" com.docker.network.driver.mtu: "1500" - Name: "none" Id: "e086a3893b05ab69242d3c44e49483a3bbbd3a26b46baa8f61ab797c1088d794" Created: "0001-01-01T00:00:00Z" Scope: "local" Driver: "null" EnableIPv6: false Internal: false Attachable: false Ingress: false IPAM: Driver: "default" Config: [] Containers: {} Options: {} - Name: "host" Id: "13e871235c677f196c4e1ecebb9dc733b9b2d2ab589e30c539efeda84a24215e" Created: "0001-01-01T00:00:00Z" Scope: "local" Driver: "host" EnableIPv6: false Internal: false Attachable: false Ingress: false IPAM: Driver: "default" Config: [] Containers: {} Options: {} 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" description: | JSON encoded value of the filters (a `map[string][]string`) to process on the networks list. Available filters: - `dangling=<boolean>` When set to `true` (or `1`), returns all networks that are not in use by a container. When set to `false` (or `0`), only networks that are in use by one or more containers are returned. - `driver=<driver-name>` Matches a network's driver. - `id=<network-id>` Matches all or part of a network ID. - `label=<key>` or `label=<key>=<value>` of a network label. - `name=<network-name>` Matches all or part of a network name. - `scope=["swarm"|"global"|"local"]` Filters networks by scope (`swarm`, `global`, or `local`). - `type=["custom"|"builtin"]` Filters networks by type. The `custom` keyword returns all user-defined networks. 
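For example, to list only user-defined networks that use the `bridge` driver, the encoded filters could look like this (illustrative values):

```json
{
  "driver": ["bridge"],
  "type": ["custom"]
}
```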
type: "string" tags: ["Network"] /networks/{id}: get: summary: "Inspect a network" operationId: "NetworkInspect" produces: - "application/json" responses: 200: description: "No error" schema: $ref: "#/definitions/Network" 404: description: "Network not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" - name: "verbose" in: "query" description: "Detailed inspect output for troubleshooting" type: "boolean" default: false - name: "scope" in: "query" description: "Filter the network by scope (swarm, global, or local)" type: "string" tags: ["Network"] delete: summary: "Remove a network" operationId: "NetworkDelete" responses: 204: description: "No error" 403: description: "operation not supported for pre-defined networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such network" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" tags: ["Network"] /networks/create: post: summary: "Create a network" operationId: "NetworkCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "No error" schema: type: "object" title: "NetworkCreateResponse" properties: Id: description: "The ID of the created network." type: "string" Warning: type: "string" example: Id: "22be93d5babb089c5aab8dbc369042fad48ff791584ca2da2100db837a1c7c30" Warning: "" 403: description: "operation not supported for pre-defined networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "plugin not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "networkConfig" in: "body" description: "Network configuration" required: true schema: type: "object" required: ["Name"] properties: Name: description: "The network's name." type: "string" CheckDuplicate: description: | Check for networks with duplicate names. Since Network is primarily keyed based on a random ID and not on the name, and network name is strictly a user-friendly alias to the network which is uniquely identified using ID, there is no guaranteed way to check for duplicates. CheckDuplicate is there to provide a best effort checking of any networks which has the same name but it is not guaranteed to catch all name collisions. type: "boolean" Driver: description: "Name of the network driver plugin to use." type: "string" default: "bridge" Internal: description: "Restrict external access to the network." type: "boolean" Attachable: description: | Globally scoped network is manually attachable by regular containers from workers in swarm mode. type: "boolean" Ingress: description: | Ingress network is the network which provides the routing-mesh in swarm mode. type: "boolean" IPAM: description: "Optional custom IP scheme for the network." $ref: "#/definitions/IPAM" EnableIPv6: description: "Enable IPv6 on the network." type: "boolean" Options: description: "Network specific options to be used by the drivers." type: "object" additionalProperties: type: "string" Labels: description: "User-defined key/value metadata." 
type: "object" additionalProperties: type: "string" example: Name: "isolated_nw" CheckDuplicate: false Driver: "bridge" EnableIPv6: true IPAM: Driver: "default" Config: - Subnet: "172.20.0.0/16" IPRange: "172.20.10.0/24" Gateway: "172.20.10.11" - Subnet: "2001:db8:abcd::/64" Gateway: "2001:db8:abcd::1011" Options: foo: "bar" Internal: true Attachable: false Ingress: false Options: com.docker.network.bridge.default_bridge: "true" com.docker.network.bridge.enable_icc: "true" com.docker.network.bridge.enable_ip_masquerade: "true" com.docker.network.bridge.host_binding_ipv4: "0.0.0.0" com.docker.network.bridge.name: "docker0" com.docker.network.driver.mtu: "1500" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" tags: ["Network"] /networks/{id}/connect: post: summary: "Connect a container to a network" operationId: "NetworkConnect" consumes: - "application/json" responses: 200: description: "No error" 403: description: "Operation not supported for swarm scoped networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "Network or container not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" - name: "container" in: "body" required: true schema: type: "object" properties: Container: type: "string" description: "The ID or name of the container to connect to the network." EndpointConfig: $ref: "#/definitions/EndpointSettings" example: Container: "3613f73ba0e4" EndpointConfig: IPAMConfig: IPv4Address: "172.24.56.89" IPv6Address: "2001:db8::5689" tags: ["Network"] /networks/{id}/disconnect: post: summary: "Disconnect a container from a network" operationId: "NetworkDisconnect" consumes: - "application/json" responses: 200: description: "No error" 403: description: "Operation not supported for swarm scoped networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "Network or container not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" - name: "container" in: "body" required: true schema: type: "object" properties: Container: type: "string" description: | The ID or name of the container to disconnect from the network. Force: type: "boolean" description: | Force the container to disconnect from the network. tags: ["Network"] /networks/prune: post: summary: "Delete unused networks" produces: - "application/json" operationId: "NetworkPrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `until=<timestamp>` Prune networks created before this timestamp. The `<timestamp>` can be Unix timestamps, date formatted timestamps, or Go duration strings (e.g. `10m`, `1h30m`) computed relative to the daemon machine’s time. - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune networks with (or without, in case `label!=...` is used) the specified labels. 
type: "string" responses: 200: description: "No error" schema: type: "object" title: "NetworkPruneResponse" properties: NetworksDeleted: description: "Networks that were deleted" type: "array" items: type: "string" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Network"] /plugins: get: summary: "List plugins" operationId: "PluginList" description: "Returns information about installed plugins." produces: ["application/json"] responses: 200: description: "No error" schema: type: "array" items: $ref: "#/definitions/Plugin" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the plugin list. Available filters: - `capability=<capability name>` - `enable=<true>|<false>` tags: ["Plugin"] /plugins/privileges: get: summary: "Get plugin privileges" operationId: "GetPluginPrivileges" responses: 200: description: "no error" schema: type: "array" items: description: | Describes a permission the user has to accept upon installing the plugin. type: "object" title: "PluginPrivilegeItem" properties: Name: type: "string" Description: type: "string" Value: type: "array" items: type: "string" example: - Name: "network" Description: "" Value: - "host" - Name: "mount" Description: "" Value: - "/data" - Name: "device" Description: "" Value: - "/dev/cpu_dma_latency" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "remote" in: "query" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" tags: - "Plugin" /plugins/pull: post: summary: "Install a plugin" operationId: "PluginPull" description: | Pulls and installs a plugin. After the plugin is installed, it can be enabled using the [`POST /plugins/{name}/enable` endpoint](#operation/PostPluginsEnable). produces: - "application/json" responses: 204: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "remote" in: "query" description: | Remote reference for plugin to install. The `:latest` tag is optional, and is used as the default if omitted. required: true type: "string" - name: "name" in: "query" description: | Local name for the pulled plugin. The `:latest` tag is optional, and is used as the default if omitted. required: false type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration to use when pulling a plugin from a registry. Refer to the [authentication section](#section/Authentication) for details. type: "string" - name: "body" in: "body" schema: type: "array" items: description: | Describes a permission accepted by the user upon installing the plugin. 
type: "object" properties: Name: type: "string" Description: type: "string" Value: type: "array" items: type: "string" example: - Name: "network" Description: "" Value: - "host" - Name: "mount" Description: "" Value: - "/data" - Name: "device" Description: "" Value: - "/dev/cpu_dma_latency" tags: ["Plugin"] /plugins/{name}/json: get: summary: "Inspect a plugin" operationId: "PluginInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Plugin" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" tags: ["Plugin"] /plugins/{name}: delete: summary: "Remove a plugin" operationId: "PluginDelete" responses: 200: description: "no error" schema: $ref: "#/definitions/Plugin" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "force" in: "query" description: | Disable the plugin before removing. This may result in issues if the plugin is in use by a container. type: "boolean" default: false tags: ["Plugin"] /plugins/{name}/enable: post: summary: "Enable a plugin" operationId: "PluginEnable" responses: 200: description: "no error" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "timeout" in: "query" description: "Set the HTTP client timeout (in seconds)" type: "integer" default: 0 tags: ["Plugin"] /plugins/{name}/disable: post: summary: "Disable a plugin" operationId: "PluginDisable" responses: 200: description: "no error" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" tags: ["Plugin"] /plugins/{name}/upgrade: post: summary: "Upgrade a plugin" operationId: "PluginUpgrade" responses: 204: description: "no error" 404: description: "plugin not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "remote" in: "query" description: | Remote reference to upgrade to. The `:latest` tag is optional, and is used as the default if omitted. required: true type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration to use when pulling a plugin from a registry. Refer to the [authentication section](#section/Authentication) for details. 
type: "string" - name: "body" in: "body" schema: type: "array" items: description: | Describes a permission accepted by the user upon installing the plugin. type: "object" properties: Name: type: "string" Description: type: "string" Value: type: "array" items: type: "string" example: - Name: "network" Description: "" Value: - "host" - Name: "mount" Description: "" Value: - "/data" - Name: "device" Description: "" Value: - "/dev/cpu_dma_latency" tags: ["Plugin"] /plugins/create: post: summary: "Create a plugin" operationId: "PluginCreate" consumes: - "application/x-tar" responses: 204: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "query" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "tarContext" in: "body" description: "Path to tar containing plugin rootfs and manifest" schema: type: "string" format: "binary" tags: ["Plugin"] /plugins/{name}/push: post: summary: "Push a plugin" operationId: "PluginPush" description: | Push a plugin to the registry. parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" responses: 200: description: "no error" 404: description: "plugin not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Plugin"] /plugins/{name}/set: post: summary: "Configure a plugin" operationId: "PluginSet" consumes: - "application/json" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "body" in: "body" schema: type: "array" items: type: "string" example: ["DEBUG=1"] responses: 204: description: "No error" 404: description: "Plugin not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Plugin"] /nodes: get: summary: "List nodes" operationId: "NodeList" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Node" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" description: | Filters to process on the nodes list, encoded as JSON (a `map[string][]string`). 
Available filters: - `id=<node id>` - `label=<engine label>` - `membership=`(`accepted`|`pending`)` - `name=<node name>` - `node.label=<node label>` - `role=`(`manager`|`worker`)` type: "string" tags: ["Node"] /nodes/{id}: get: summary: "Inspect a node" operationId: "NodeInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Node" 404: description: "no such node" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the node" type: "string" required: true tags: ["Node"] delete: summary: "Delete a node" operationId: "NodeDelete" responses: 200: description: "no error" 404: description: "no such node" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the node" type: "string" required: true - name: "force" in: "query" description: "Force remove a node from the swarm" default: false type: "boolean" tags: ["Node"] /nodes/{id}/update: post: summary: "Update a node" operationId: "NodeUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such node" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID of the node" type: "string" required: true - name: "body" in: "body" schema: $ref: "#/definitions/NodeSpec" - name: "version" in: "query" description: | The version number of the node object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true tags: ["Node"] /swarm: get: summary: "Inspect swarm" operationId: "SwarmInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Swarm" 404: description: "no such swarm" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" tags: ["Swarm"] /swarm/init: post: summary: "Initialize a new swarm" operationId: "SwarmInit" produces: - "application/json" - "text/plain" responses: 200: description: "no error" schema: description: "The node ID" type: "string" example: "7v2t30z9blmxuhnyo6s4cpenp" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is already part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: type: "object" properties: ListenAddr: description: | Listen address used for inter-manager communication, as well as determining the networking interface used for the VXLAN Tunnel Endpoint (VTEP). This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the default swarm listening port is used. 
type: "string" AdvertiseAddr: description: | Externally reachable address advertised to other nodes. This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the port number from the listen address is used. If `AdvertiseAddr` is not specified, it will be automatically detected when possible. type: "string" DataPathAddr: description: | Address or interface to use for data path traffic (format: `<ip|interface>`), for example, `192.168.1.1`, or an interface, like `eth0`. If `DataPathAddr` is unspecified, the same address as `AdvertiseAddr` is used. The `DataPathAddr` specifies the address that global scope network drivers will publish towards other nodes in order to reach the containers running on this node. Using this parameter it is possible to separate the container data traffic from the management traffic of the cluster. type: "string" DataPathPort: description: | DataPathPort specifies the data path port number for data traffic. Acceptable port range is 1024 to 49151. if no port is set or is set to 0, default port 4789 will be used. type: "integer" format: "uint32" DefaultAddrPool: description: | Default Address Pool specifies default subnet pools for global scope networks. type: "array" items: type: "string" example: ["10.10.0.0/16", "20.20.0.0/16"] ForceNewCluster: description: "Force creation of a new swarm." type: "boolean" SubnetSize: description: | SubnetSize specifies the subnet size of the networks created from the default subnet pool. type: "integer" format: "uint32" Spec: $ref: "#/definitions/SwarmSpec" example: ListenAddr: "0.0.0.0:2377" AdvertiseAddr: "192.168.1.1:2377" DataPathPort: 4789 DefaultAddrPool: ["10.10.0.0/8", "20.20.0.0/8"] SubnetSize: 24 ForceNewCluster: false Spec: Orchestration: {} Raft: {} Dispatcher: {} CAConfig: {} EncryptionConfig: AutoLockManagers: false tags: ["Swarm"] /swarm/join: post: summary: "Join an existing swarm" operationId: "SwarmJoin" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is already part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: type: "object" properties: ListenAddr: description: | Listen address used for inter-manager communication if the node gets promoted to manager, as well as determining the networking interface used for the VXLAN Tunnel Endpoint (VTEP). type: "string" AdvertiseAddr: description: | Externally reachable address advertised to other nodes. This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the port number from the listen address is used. If `AdvertiseAddr` is not specified, it will be automatically detected when possible. type: "string" DataPathAddr: description: | Address or interface to use for data path traffic (format: `<ip|interface>`), for example, `192.168.1.1`, or an interface, like `eth0`. If `DataPathAddr` is unspecified, the same address as `AdvertiseAddr` is used. The `DataPathAddr` specifies the address that global scope network drivers will publish towards other nodes in order to reach the containers running on this node. 
Using this parameter it is possible to separate the container data traffic from the management traffic of the cluster. type: "string" RemoteAddrs: description: | Addresses of manager nodes already participating in the swarm. type: "array" items: type: "string" JoinToken: description: "Secret token for joining this swarm." type: "string" example: ListenAddr: "0.0.0.0:2377" AdvertiseAddr: "192.168.1.1:2377" RemoteAddrs: - "node1:2377" JoinToken: "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-7p73s1dx5in4tatdymyhg9hu2" tags: ["Swarm"] /swarm/leave: post: summary: "Leave a swarm" operationId: "SwarmLeave" responses: 200: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "force" description: | Force leave swarm, even if this is the last manager or that it will break the cluster. in: "query" type: "boolean" default: false tags: ["Swarm"] /swarm/update: post: summary: "Update a swarm" operationId: "SwarmUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: $ref: "#/definitions/SwarmSpec" - name: "version" in: "query" description: | The version number of the swarm object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true - name: "rotateWorkerToken" in: "query" description: "Rotate the worker join token." type: "boolean" default: false - name: "rotateManagerToken" in: "query" description: "Rotate the manager join token." type: "boolean" default: false - name: "rotateManagerUnlockKey" in: "query" description: "Rotate the manager unlock key." type: "boolean" default: false tags: ["Swarm"] /swarm/unlockkey: get: summary: "Get the unlock key" operationId: "SwarmUnlockkey" consumes: - "application/json" responses: 200: description: "no error" schema: type: "object" title: "UnlockKeyResponse" properties: UnlockKey: description: "The swarm's unlock key." type: "string" example: UnlockKey: "SWMKEY-1-7c37Cc8654o6p38HnroywCi19pllOnGtbdZEgtKxZu8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" tags: ["Swarm"] /swarm/unlock: post: summary: "Unlock a locked manager" operationId: "SwarmUnlock" consumes: - "application/json" produces: - "application/json" parameters: - name: "body" in: "body" required: true schema: type: "object" properties: UnlockKey: description: "The swarm's unlock key." 
type: "string" example: UnlockKey: "SWMKEY-1-7c37Cc8654o6p38HnroywCi19pllOnGtbdZEgtKxZu8" responses: 200: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" tags: ["Swarm"] /services: get: summary: "List services" operationId: "ServiceList" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Service" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the services list. Available filters: - `id=<service id>` - `label=<service label>` - `mode=["replicated"|"global"]` - `name=<service name>` - name: "status" in: "query" type: "boolean" description: | Include service status, with count of running and desired tasks. tags: ["Service"] /services/create: post: summary: "Create a service" operationId: "ServiceCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: type: "object" title: "ServiceCreateResponse" properties: ID: description: "The ID of the created service." type: "string" Warning: description: "Optional warning message" type: "string" example: ID: "ak7w3gjqoa3kuz8xcpnyy0pvl" Warning: "unable to pin image doesnotexist:latest to digest: image library/doesnotexist:latest not found" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 403: description: "network is not eligible for services" schema: $ref: "#/definitions/ErrorResponse" 409: description: "name conflicts with an existing service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: allOf: - $ref: "#/definitions/ServiceSpec" - type: "object" example: Name: "web" TaskTemplate: ContainerSpec: Image: "nginx:alpine" Mounts: - ReadOnly: true Source: "web-data" Target: "/usr/share/nginx/html" Type: "volume" VolumeOptions: DriverConfig: {} Labels: com.example.something: "something-value" Hosts: ["10.10.10.10 host1", "ABCD:EF01:2345:6789:ABCD:EF01:2345:6789 host2"] User: "33" DNSConfig: Nameservers: ["8.8.8.8"] Search: ["example.org"] Options: ["timeout:3"] Secrets: - File: Name: "www.example.org.key" UID: "33" GID: "33" Mode: 384 SecretID: "fpjqlhnwb19zds35k8wn80lq9" SecretName: "example_org_domain_key" LogDriver: Name: "json-file" Options: max-file: "3" max-size: "10M" Placement: {} Resources: Limits: MemoryBytes: 104857600 Reservations: {} RestartPolicy: Condition: "on-failure" Delay: 10000000000 MaxAttempts: 10 Mode: Replicated: Replicas: 4 UpdateConfig: Parallelism: 2 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 RollbackConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 EndpointSpec: Ports: - Protocol: "tcp" PublishedPort: 8080 TargetPort: 80 Labels: foo: "bar" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration for pulling from private registries. Refer to the [authentication section](#section/Authentication) for details. 
type: "string" tags: ["Service"] /services/{id}: get: summary: "Inspect a service" operationId: "ServiceInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Service" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID or name of service." required: true type: "string" - name: "insertDefaults" in: "query" description: "Fill empty fields with default values." type: "boolean" default: false tags: ["Service"] delete: summary: "Delete a service" operationId: "ServiceDelete" responses: 200: description: "no error" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID or name of service." required: true type: "string" tags: ["Service"] /services/{id}/update: post: summary: "Update a service" operationId: "ServiceUpdate" consumes: ["application/json"] produces: ["application/json"] responses: 200: description: "no error" schema: $ref: "#/definitions/ServiceUpdateResponse" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID or name of service." required: true type: "string" - name: "body" in: "body" required: true schema: allOf: - $ref: "#/definitions/ServiceSpec" - type: "object" example: Name: "top" TaskTemplate: ContainerSpec: Image: "busybox" Args: - "top" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ForceUpdate: 0 Mode: Replicated: Replicas: 1 UpdateConfig: Parallelism: 2 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 RollbackConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 EndpointSpec: Mode: "vip" - name: "version" in: "query" description: | The version number of the service object being updated. This is required to avoid conflicting writes. This version number should be the value as currently set on the service *before* the update. You can find the current version by calling `GET /services/{id}` required: true type: "integer" - name: "registryAuthFrom" in: "query" description: | If the `X-Registry-Auth` header is not specified, this parameter indicates where to find registry authorization credentials. type: "string" enum: ["spec", "previous-spec"] default: "spec" - name: "rollback" in: "query" description: | Set to this parameter to `previous` to cause a server-side rollback to the previous service spec. The supplied spec will be ignored in this case. type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration for pulling from private registries. Refer to the [authentication section](#section/Authentication) for details. 
type: "string" tags: ["Service"] /services/{id}/logs: get: summary: "Get service logs" description: | Get `stdout` and `stderr` logs from a service. See also [`/containers/{id}/logs`](#operation/ContainerLogs). **Note**: This endpoint works only for services with the `local`, `json-file` or `journald` logging drivers. operationId: "ServiceLogs" responses: 200: description: "logs returned as a stream in response body" schema: type: "string" format: "binary" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such service: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the service" type: "string" - name: "details" in: "query" description: "Show service context and extra details provided to logs." type: "boolean" default: false - name: "follow" in: "query" description: "Keep connection after returning logs." type: "boolean" default: false - name: "stdout" in: "query" description: "Return logs from `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Return logs from `stderr`" type: "boolean" default: false - name: "since" in: "query" description: "Only return logs since this time, as a UNIX timestamp" type: "integer" default: 0 - name: "timestamps" in: "query" description: "Add timestamps to every log line" type: "boolean" default: false - name: "tail" in: "query" description: | Only return this number of log lines from the end of the logs. Specify as an integer or `all` to output all log lines. type: "string" default: "all" tags: ["Service"] /tasks: get: summary: "List tasks" operationId: "TaskList" produces: - "application/json" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Task" example: - ID: "0kzzo1i0y4jz6027t0k7aezc7" Version: Index: 71 CreatedAt: "2016-06-07T21:07:31.171892745Z" UpdatedAt: "2016-06-07T21:07:31.376370513Z" Spec: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ServiceID: "9mnpnzenvg8p8tdbtq4wvbkcz" Slot: 1 NodeID: "60gvrl6tm78dmak4yl7srz94v" Status: Timestamp: "2016-06-07T21:07:31.290032978Z" State: "running" Message: "started" ContainerStatus: ContainerID: "e5d62702a1b48d01c3e02ca1e0212a250801fa8d67caca0b6f35919ebc12f035" PID: 677 DesiredState: "running" NetworksAttachments: - Network: ID: "4qvuz4ko70xaltuqbt8956gd1" Version: Index: 18 CreatedAt: "2016-06-07T20:31:11.912919752Z" UpdatedAt: "2016-06-07T21:07:29.955277358Z" Spec: Name: "ingress" Labels: com.docker.swarm.internal: "true" DriverConfiguration: {} IPAMOptions: Driver: {} Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" DriverState: Name: "overlay" Options: com.docker.network.driver.overlay.vxlanid_list: "256" IPAMOptions: Driver: Name: "default" Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" Addresses: - "10.255.0.10/16" - ID: "1yljwbmlr8er2waf8orvqpwms" Version: Index: 30 CreatedAt: "2016-06-07T21:07:30.019104782Z" UpdatedAt: "2016-06-07T21:07:30.231958098Z" Name: "hopeful_cori" Spec: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ServiceID: "9mnpnzenvg8p8tdbtq4wvbkcz" Slot: 1 NodeID: "60gvrl6tm78dmak4yl7srz94v" Status: Timestamp: "2016-06-07T21:07:30.202183143Z" State: 
"shutdown" Message: "shutdown" ContainerStatus: ContainerID: "1cf8d63d18e79668b0004a4be4c6ee58cddfad2dae29506d8781581d0688a213" DesiredState: "shutdown" NetworksAttachments: - Network: ID: "4qvuz4ko70xaltuqbt8956gd1" Version: Index: 18 CreatedAt: "2016-06-07T20:31:11.912919752Z" UpdatedAt: "2016-06-07T21:07:29.955277358Z" Spec: Name: "ingress" Labels: com.docker.swarm.internal: "true" DriverConfiguration: {} IPAMOptions: Driver: {} Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" DriverState: Name: "overlay" Options: com.docker.network.driver.overlay.vxlanid_list: "256" IPAMOptions: Driver: Name: "default" Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" Addresses: - "10.255.0.5/16" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the tasks list. Available filters: - `desired-state=(running | shutdown | accepted)` - `id=<task id>` - `label=key` or `label="key=value"` - `name=<task name>` - `node=<node id or name>` - `service=<service name>` tags: ["Task"] /tasks/{id}: get: summary: "Inspect a task" operationId: "TaskInspect" produces: - "application/json" responses: 200: description: "no error" schema: $ref: "#/definitions/Task" 404: description: "no such task" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID of the task" required: true type: "string" tags: ["Task"] /tasks/{id}/logs: get: summary: "Get task logs" description: | Get `stdout` and `stderr` logs from a task. See also [`/containers/{id}/logs`](#operation/ContainerLogs). **Note**: This endpoint works only for services with the `local`, `json-file` or `journald` logging drivers. operationId: "TaskLogs" responses: 200: description: "logs returned as a stream in response body" schema: type: "string" format: "binary" 404: description: "no such task" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such task: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID of the task" type: "string" - name: "details" in: "query" description: "Show task context and extra details provided to logs." type: "boolean" default: false - name: "follow" in: "query" description: "Keep connection after returning logs." type: "boolean" default: false - name: "stdout" in: "query" description: "Return logs from `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Return logs from `stderr`" type: "boolean" default: false - name: "since" in: "query" description: "Only return logs since this time, as a UNIX timestamp" type: "integer" default: 0 - name: "timestamps" in: "query" description: "Add timestamps to every log line" type: "boolean" default: false - name: "tail" in: "query" description: | Only return this number of log lines from the end of the logs. Specify as an integer or `all` to output all log lines. 
type: "string" default: "all" tags: ["Task"] /secrets: get: summary: "List secrets" operationId: "SecretList" produces: - "application/json" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Secret" example: - ID: "blt1owaxmitz71s9v5zh81zun" Version: Index: 85 CreatedAt: "2017-07-20T13:55:28.678958722Z" UpdatedAt: "2017-07-20T13:55:28.678958722Z" Spec: Name: "mysql-passwd" Labels: some.label: "some.value" Driver: Name: "secret-bucket" Options: OptionA: "value for driver option A" OptionB: "value for driver option B" - ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "app-dev.crt" Labels: foo: "bar" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the secrets list. Available filters: - `id=<secret id>` - `label=<key> or label=<key>=value` - `name=<secret name>` - `names=<secret name>` tags: ["Secret"] /secrets/create: post: summary: "Create a secret" operationId: "SecretCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 409: description: "name conflicts with an existing object" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" schema: allOf: - $ref: "#/definitions/SecretSpec" - type: "object" example: Name: "app-key.crt" Labels: foo: "bar" Data: "VEhJUyBJUyBOT1QgQSBSRUFMIENFUlRJRklDQVRFCg==" Driver: Name: "secret-bucket" Options: OptionA: "value for driver option A" OptionB: "value for driver option B" tags: ["Secret"] /secrets/{id}: get: summary: "Inspect a secret" operationId: "SecretInspect" produces: - "application/json" responses: 200: description: "no error" schema: $ref: "#/definitions/Secret" examples: application/json: ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "app-dev.crt" Labels: foo: "bar" Driver: Name: "secret-bucket" Options: OptionA: "value for driver option A" OptionB: "value for driver option B" 404: description: "secret not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the secret" tags: ["Secret"] delete: summary: "Delete a secret" operationId: "SecretDelete" produces: - "application/json" responses: 204: description: "no error" 404: description: "secret not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the secret" tags: ["Secret"] /secrets/{id}/update: post: summary: "Update a Secret" operationId: "SecretUpdate" responses: 200: description: "no error" 400: 
description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such secret" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the secret" type: "string" required: true - name: "body" in: "body" schema: $ref: "#/definitions/SecretSpec" description: | The spec of the secret to update. Currently, only the Labels field can be updated. All other fields must remain unchanged from the [SecretInspect endpoint](#operation/SecretInspect) response values. - name: "version" in: "query" description: | The version number of the secret object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true tags: ["Secret"] /configs: get: summary: "List configs" operationId: "ConfigList" produces: - "application/json" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Config" example: - ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "server.conf" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the configs list. Available filters: - `id=<config id>` - `label=<key> or label=<key>=value` - `name=<config name>` - `names=<config name>` tags: ["Config"] /configs/create: post: summary: "Create a config" operationId: "ConfigCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 409: description: "name conflicts with an existing object" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" schema: allOf: - $ref: "#/definitions/ConfigSpec" - type: "object" example: Name: "server.conf" Labels: foo: "bar" Data: "VEhJUyBJUyBOT1QgQSBSRUFMIENFUlRJRklDQVRFCg==" tags: ["Config"] /configs/{id}: get: summary: "Inspect a config" operationId: "ConfigInspect" produces: - "application/json" responses: 200: description: "no error" schema: $ref: "#/definitions/Config" examples: application/json: ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "app-dev.crt" 404: description: "config not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the config" tags: ["Config"] delete: summary: "Delete a config" operationId: "ConfigDelete" produces: - "application/json" responses: 204: description: "no error" 404: description: "config not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part 
of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the config" tags: ["Config"] /configs/{id}/update: post: summary: "Update a Config" operationId: "ConfigUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such config" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the config" type: "string" required: true - name: "body" in: "body" schema: $ref: "#/definitions/ConfigSpec" description: | The spec of the config to update. Currently, only the Labels field can be updated. All other fields must remain unchanged from the [ConfigInspect endpoint](#operation/ConfigInspect) response values. - name: "version" in: "query" description: | The version number of the config object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true tags: ["Config"] /distribution/{name}/json: get: summary: "Get image information from the registry" description: | Return image digest and platform information by contacting the registry. operationId: "DistributionInspect" produces: - "application/json" responses: 200: description: "descriptor and platform information" schema: type: "object" x-go-name: DistributionInspect title: "DistributionInspectResponse" required: [Descriptor, Platforms] properties: Descriptor: type: "object" description: | A descriptor struct containing digest, media type, and size. properties: MediaType: type: "string" Size: type: "integer" format: "int64" Digest: type: "string" URLs: type: "array" items: type: "string" Platforms: type: "array" description: | An array containing all platforms supported by the image. items: type: "object" properties: Architecture: type: "string" OS: type: "string" OSVersion: type: "string" OSFeatures: type: "array" items: type: "string" Variant: type: "string" Features: type: "array" items: type: "string" examples: application/json: Descriptor: MediaType: "application/vnd.docker.distribution.manifest.v2+json" Digest: "sha256:c0537ff6a5218ef531ece93d4984efc99bbf3f7497c0a7726c88e2bb7584dc96" Size: 3987495 URLs: - "" Platforms: - Architecture: "amd64" OS: "linux" OSVersion: "" OSFeatures: - "" Variant: "" Features: - "" 401: description: "Failed authentication or no image found" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such image: someimage (tag: latest)" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or id" type: "string" required: true tags: ["Distribution"] /session: post: summary: "Initialize interactive session" description: | Start a new interactive session with a server. Session allows the server to call back to the client for advanced capabilities. ### Hijacking This endpoint hijacks the HTTP connection to an HTTP/2 transport, which allows the client to expose gRPC services on that connection.
For example, the client sends this request to upgrade the connection: ``` POST /session HTTP/1.1 Upgrade: h2c Connection: Upgrade ``` The Docker daemon responds with a `101 UPGRADED` response, followed by the raw stream: ``` HTTP/1.1 101 UPGRADED Connection: Upgrade Upgrade: h2c ``` operationId: "Session" produces: - "application/vnd.docker.raw-stream" responses: 101: description: "no error, hijacking successful" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Session"]
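# Usage sketch (illustrative only, not part of the OpenAPI definition above):
# the endpoints documented in this file are plain HTTP and can be exercised
# with any HTTP client. A minimal sketch using curl against a daemon on the
# default unix socket; the socket path, the "v1.41" version prefix, and the
# example names and values are placeholders and may need adjusting:
#
#   # Inspect the built-in "bridge" network (GET /networks/{id})
#   curl --unix-socket /var/run/docker.sock http://localhost/v1.41/networks/bridge
#
#   # Create a secret (POST /secrets/create); requires swarm mode to be enabled,
#   # and the Data field must be base64-encoded ("czNjcjN0" is base64 of "s3cr3t")
#   curl --unix-socket /var/run/docker.sock \
#        -H "Content-Type: application/json" \
#        -d '{"Name": "app-key", "Data": "czNjcjN0"}' \
#        http://localhost/v1.41/secrets/create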
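# Several operations above (for example PluginPull, ServiceCreate, and
# ServiceUpdate) accept an X-Registry-Auth header carrying registry
# credentials. A minimal shell sketch of building that header, with
# placeholder credentials; per the authentication section, the JSON payload
# is base64url-encoded (RFC 4648 section 5), i.e. standard base64 with "+"
# and "/" replaced by "-" and "_":
#
#   AUTH=$(echo -n '{"username":"jane","password":"secret","serveraddress":"registry.example.com"}' \
#     | base64 | tr '+/' '-_' | tr -d '=\n')
#   # Create a service from a private image (POST /services/create, swarm mode only)
#   curl --unix-socket /var/run/docker.sock \
#        -H "X-Registry-Auth: $AUTH" \
#        -H "Content-Type: application/json" \
#        -d '{"Name": "web", "TaskTemplate": {"ContainerSpec": {"Image": "nginx:alpine"}}}' \
#        http://localhost/v1.41/services/create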
# A Swagger 2.0 (a.k.a. OpenAPI) definition of the Engine API. # # This is used for generating API documentation and the types used by the # client/server. See api/README.md for more information. # # Some style notes: # - This file is used by ReDoc, which allows GitHub Flavored Markdown in # descriptions. # - There is no maximum line length, for ease of editing and pretty diffs. # - operationIds are in the format "NounVerb", with a singular noun. swagger: "2.0" schemes: - "http" - "https" produces: - "application/json" - "text/plain" consumes: - "application/json" - "text/plain" basePath: "/v1.42" info: title: "Docker Engine API" version: "1.42" x-logo: url: "https://docs.docker.com/images/logo-docker-main.png" description: | The Engine API is an HTTP API served by Docker Engine. It is the API the Docker client uses to communicate with the Engine, so everything the Docker client can do can be done with the API. Most of the client's commands map directly to API endpoints (e.g. `docker ps` is `GET /containers/json`). The notable exception is running containers, which consists of several API calls. # Errors The API uses standard HTTP status codes to indicate the success or failure of the API call. The body of the response will be JSON in the following format: ``` { "message": "page not found" } ``` # Versioning The API is usually changed in each release, so API calls are versioned to ensure that clients don't break. To lock to a specific version of the API, you prefix the URL with its version, for example, call `/v1.30/info` to use the v1.30 version of the `/info` endpoint. If the API version specified in the URL is not supported by the daemon, a HTTP `400 Bad Request` error message is returned. If you omit the version-prefix, the current version of the API (v1.42) is used. For example, calling `/info` is the same as calling `/v1.42/info`. Using the API without a version-prefix is deprecated and will be removed in a future release. Engine releases in the near future should support this version of the API, so your client will continue to work even if it is talking to a newer Engine. The API uses an open schema model, which means server may add extra properties to responses. Likewise, the server will ignore any extra query parameters and request body properties. When you write clients, you need to ignore additional properties in responses to ensure they do not break when talking to newer daemons. # Authentication Authentication for registries is handled client side. The client has to send authentication details to various endpoints that need to communicate with registries, such as `POST /images/(name)/push`. These are sent as `X-Registry-Auth` header as a [base64url encoded](https://tools.ietf.org/html/rfc4648#section-5) (JSON) string with the following structure: ``` { "username": "string", "password": "string", "email": "string", "serveraddress": "string" } ``` The `serveraddress` is a domain/IP without a protocol. Throughout this structure, double quotes are required. If you have already got an identity token from the [`/auth` endpoint](#operation/SystemAuth), you can just pass this instead of credentials: ``` { "identitytoken": "9cbaf023786cd7..." } ``` # The tags on paths define the menu sections in the ReDoc documentation, so # the usage of tags must make sense for that: # - They should be singular, not plural. # - There should not be too many tags, or the menu becomes unwieldy. 
For # example, it is preferable to add a path to the "System" tag instead of # creating a tag with a single path in it. # - The order of tags in this list defines the order in the menu. tags: # Primary objects - name: "Container" x-displayName: "Containers" description: | Create and manage containers. - name: "Image" x-displayName: "Images" - name: "Network" x-displayName: "Networks" description: | Networks are user-defined networks that containers can be attached to. See the [networking documentation](https://docs.docker.com/network/) for more information. - name: "Volume" x-displayName: "Volumes" description: | Create and manage persistent storage that can be attached to containers. - name: "Exec" x-displayName: "Exec" description: | Run new commands inside running containers. Refer to the [command-line reference](https://docs.docker.com/engine/reference/commandline/exec/) for more information. To exec a command in a container, you first need to create an exec instance, then start it. These two API endpoints are wrapped up in a single command-line command, `docker exec`. # Swarm things - name: "Swarm" x-displayName: "Swarm" description: | Engines can be clustered together in a swarm. Refer to the [swarm mode documentation](https://docs.docker.com/engine/swarm/) for more information. - name: "Node" x-displayName: "Nodes" description: | Nodes are instances of the Engine participating in a swarm. Swarm mode must be enabled for these endpoints to work. - name: "Service" x-displayName: "Services" description: | Services are the definitions of tasks to run on a swarm. Swarm mode must be enabled for these endpoints to work. - name: "Task" x-displayName: "Tasks" description: | A task is a container running on a swarm. It is the atomic scheduling unit of swarm. Swarm mode must be enabled for these endpoints to work. - name: "Secret" x-displayName: "Secrets" description: | Secrets are sensitive data that can be used by services. Swarm mode must be enabled for these endpoints to work. - name: "Config" x-displayName: "Configs" description: | Configs are application configurations that can be used by services. Swarm mode must be enabled for these endpoints to work. 
# System things - name: "Plugin" x-displayName: "Plugins" - name: "System" x-displayName: "System" definitions: Port: type: "object" description: "An open port on a container" required: [PrivatePort, Type] properties: IP: type: "string" format: "ip-address" description: "Host IP address that the container's port is mapped to" PrivatePort: type: "integer" format: "uint16" x-nullable: false description: "Port on the container" PublicPort: type: "integer" format: "uint16" description: "Port exposed on the host" Type: type: "string" x-nullable: false enum: ["tcp", "udp", "sctp"] example: PrivatePort: 8080 PublicPort: 80 Type: "tcp" MountPoint: type: "object" description: "A mount point inside a container" properties: Type: type: "string" Name: type: "string" Source: type: "string" Destination: type: "string" Driver: type: "string" Mode: type: "string" RW: type: "boolean" Propagation: type: "string" DeviceMapping: type: "object" description: "A device mapping between the host and container" properties: PathOnHost: type: "string" PathInContainer: type: "string" CgroupPermissions: type: "string" example: PathOnHost: "/dev/deviceName" PathInContainer: "/dev/deviceName" CgroupPermissions: "mrw" DeviceRequest: type: "object" description: "A request for devices to be sent to device drivers" properties: Driver: type: "string" example: "nvidia" Count: type: "integer" example: -1 DeviceIDs: type: "array" items: type: "string" example: - "0" - "1" - "GPU-fef8089b-4820-abfc-e83e-94318197576e" Capabilities: description: | A list of capabilities; an OR list of AND lists of capabilities. type: "array" items: type: "array" items: type: "string" example: # gpu AND nvidia AND compute - ["gpu", "nvidia", "compute"] Options: description: | Driver-specific options, specified as a key/value pairs. These options are passed directly to the driver. type: "object" additionalProperties: type: "string" ThrottleDevice: type: "object" properties: Path: description: "Device path" type: "string" Rate: description: "Rate" type: "integer" format: "int64" minimum: 0 Mount: type: "object" properties: Target: description: "Container path." type: "string" Source: description: "Mount source (e.g. a volume name, a host path)." type: "string" Type: description: | The mount type. Available types: - `bind` Mounts a file or directory from the host into the container. Must exist prior to creating the container. - `volume` Creates a volume with the given name and options (or uses a pre-existing volume with the same name and options). These are **not** removed when the container is removed. - `tmpfs` Create a tmpfs with the given options. The mount source cannot be specified for tmpfs. - `npipe` Mounts a named pipe from the host into the container. Must exist prior to creating the container. type: "string" enum: - "bind" - "volume" - "tmpfs" - "npipe" ReadOnly: description: "Whether the mount should be read-only." type: "boolean" Consistency: description: "The consistency requirement for the mount: `default`, `consistent`, `cached`, or `delegated`." type: "string" BindOptions: description: "Optional configuration for the `bind` type." type: "object" properties: Propagation: description: "A propagation mode with the value `[r]private`, `[r]shared`, or `[r]slave`." type: "string" enum: - "private" - "rprivate" - "shared" - "rshared" - "slave" - "rslave" NonRecursive: description: "Disable recursive bind mount." type: "boolean" default: false VolumeOptions: description: "Optional configuration for the `volume` type." 
type: "object" properties: NoCopy: description: "Populate volume with data from the target." type: "boolean" default: false Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" DriverConfig: description: "Map of driver specific options" type: "object" properties: Name: description: "Name of the driver to use to create the volume." type: "string" Options: description: "key/value map of driver specific options." type: "object" additionalProperties: type: "string" TmpfsOptions: description: "Optional configuration for the `tmpfs` type." type: "object" properties: SizeBytes: description: "The size for the tmpfs mount in bytes." type: "integer" format: "int64" Mode: description: "The permission mode for the tmpfs mount in an integer." type: "integer" RestartPolicy: description: | The behavior to apply when the container exits. The default is not to restart. An ever increasing delay (double the previous delay, starting at 100ms) is added before each restart to prevent flooding the server. type: "object" properties: Name: type: "string" description: | - Empty string means not to restart - `always` Always restart - `unless-stopped` Restart always except when the user has manually stopped the container - `on-failure` Restart only when the container exit code is non-zero enum: - "" - "always" - "unless-stopped" - "on-failure" MaximumRetryCount: type: "integer" description: | If `on-failure` is used, the number of times to retry before giving up. Resources: description: "A container's resources (cgroups config, ulimits, etc)" type: "object" properties: # Applicable to all platforms CpuShares: description: | An integer value representing this container's relative CPU weight versus other containers. type: "integer" Memory: description: "Memory limit in bytes." type: "integer" format: "int64" default: 0 # Applicable to UNIX platforms CgroupParent: description: | Path to `cgroups` under which the container's `cgroup` is created. If the path is not absolute, the path is considered to be relative to the `cgroups` path of the init process. Cgroups are created if they do not already exist. type: "string" BlkioWeight: description: "Block IO weight (relative weight)." type: "integer" minimum: 0 maximum: 1000 BlkioWeightDevice: description: | Block IO weight (relative device weight) in the form: ``` [{"Path": "device_path", "Weight": weight}] ``` type: "array" items: type: "object" properties: Path: type: "string" Weight: type: "integer" minimum: 0 BlkioDeviceReadBps: description: | Limit read rate (bytes per second) from a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" BlkioDeviceWriteBps: description: | Limit write rate (bytes per second) to a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" BlkioDeviceReadIOps: description: | Limit read rate (IO per second) from a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" BlkioDeviceWriteIOps: description: | Limit write rate (IO per second) to a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" CpuPeriod: description: "The length of a CPU period in microseconds." type: "integer" format: "int64" CpuQuota: description: | Microseconds of CPU time that the container can get in a CPU period. 
type: "integer" format: "int64" CpuRealtimePeriod: description: | The length of a CPU real-time period in microseconds. Set to 0 to allocate no time allocated to real-time tasks. type: "integer" format: "int64" CpuRealtimeRuntime: description: | The length of a CPU real-time runtime in microseconds. Set to 0 to allocate no time allocated to real-time tasks. type: "integer" format: "int64" CpusetCpus: description: | CPUs in which to allow execution (e.g., `0-3`, `0,1`). type: "string" example: "0-3" CpusetMems: description: | Memory nodes (MEMs) in which to allow execution (0-3, 0,1). Only effective on NUMA systems. type: "string" Devices: description: "A list of devices to add to the container." type: "array" items: $ref: "#/definitions/DeviceMapping" DeviceCgroupRules: description: "a list of cgroup rules to apply to the container" type: "array" items: type: "string" example: "c 13:* rwm" DeviceRequests: description: | A list of requests for devices to be sent to device drivers. type: "array" items: $ref: "#/definitions/DeviceRequest" KernelMemory: description: | Kernel memory limit in bytes. <p><br /></p> > **Deprecated**: This field is deprecated as the kernel 5.4 deprecated > `kmem.limit_in_bytes`. type: "integer" format: "int64" example: 209715200 KernelMemoryTCP: description: "Hard limit for kernel TCP buffer memory (in bytes)." type: "integer" format: "int64" MemoryReservation: description: "Memory soft limit in bytes." type: "integer" format: "int64" MemorySwap: description: | Total memory limit (memory + swap). Set as `-1` to enable unlimited swap. type: "integer" format: "int64" MemorySwappiness: description: | Tune a container's memory swappiness behavior. Accepts an integer between 0 and 100. type: "integer" format: "int64" minimum: 0 maximum: 100 NanoCpus: description: "CPU quota in units of 10<sup>-9</sup> CPUs." type: "integer" format: "int64" OomKillDisable: description: "Disable OOM Killer for the container." type: "boolean" Init: description: | Run an init inside the container that forwards signals and reaps processes. This field is omitted if empty, and the default (as configured on the daemon) is used. type: "boolean" x-nullable: true PidsLimit: description: | Tune a container's PIDs limit. Set `0` or `-1` for unlimited, or `null` to not change. type: "integer" format: "int64" x-nullable: true Ulimits: description: | A list of resource limits to set in the container. For example: ``` {"Name": "nofile", "Soft": 1024, "Hard": 2048} ``` type: "array" items: type: "object" properties: Name: description: "Name of ulimit" type: "string" Soft: description: "Soft limit" type: "integer" Hard: description: "Hard limit" type: "integer" # Applicable to Windows CpuCount: description: | The number of usable CPUs (Windows only). On Windows Server containers, the processor resource controls are mutually exclusive. The order of precedence is `CPUCount` first, then `CPUShares`, and `CPUPercent` last. type: "integer" format: "int64" CpuPercent: description: | The usable percentage of the available CPUs (Windows only). On Windows Server containers, the processor resource controls are mutually exclusive. The order of precedence is `CPUCount` first, then `CPUShares`, and `CPUPercent` last. type: "integer" format: "int64" IOMaximumIOps: description: "Maximum IOps for the container system drive (Windows only)" type: "integer" format: "int64" IOMaximumBandwidth: description: | Maximum IO in bytes per second for the container system drive (Windows only). 
type: "integer" format: "int64" Limit: description: | An object describing a limit on resources which can be requested by a task. type: "object" properties: NanoCPUs: type: "integer" format: "int64" example: 4000000000 MemoryBytes: type: "integer" format: "int64" example: 8272408576 Pids: description: | Limits the maximum number of PIDs in the container. Set `0` for unlimited. type: "integer" format: "int64" default: 0 example: 100 ResourceObject: description: | An object describing the resources which can be advertised by a node and requested by a task. type: "object" properties: NanoCPUs: type: "integer" format: "int64" example: 4000000000 MemoryBytes: type: "integer" format: "int64" example: 8272408576 GenericResources: $ref: "#/definitions/GenericResources" GenericResources: description: | User-defined resources can be either Integer resources (e.g, `SSD=3`) or String resources (e.g, `GPU=UUID1`). type: "array" items: type: "object" properties: NamedResourceSpec: type: "object" properties: Kind: type: "string" Value: type: "string" DiscreteResourceSpec: type: "object" properties: Kind: type: "string" Value: type: "integer" format: "int64" example: - DiscreteResourceSpec: Kind: "SSD" Value: 3 - NamedResourceSpec: Kind: "GPU" Value: "UUID1" - NamedResourceSpec: Kind: "GPU" Value: "UUID2" HealthConfig: description: "A test to perform to check that the container is healthy." type: "object" properties: Test: description: | The test to perform. Possible values are: - `[]` inherit healthcheck from image or parent image - `["NONE"]` disable healthcheck - `["CMD", args...]` exec arguments directly - `["CMD-SHELL", command]` run command with system's default shell type: "array" items: type: "string" Interval: description: | The time to wait between checks in nanoseconds. It should be 0 or at least 1000000 (1 ms). 0 means inherit. type: "integer" Timeout: description: | The time to wait before considering the check to have hung. It should be 0 or at least 1000000 (1 ms). 0 means inherit. type: "integer" Retries: description: | The number of consecutive failures needed to consider a container as unhealthy. 0 means inherit. type: "integer" StartPeriod: description: | Start period for the container to initialize before starting health-retries countdown in nanoseconds. It should be 0 or at least 1000000 (1 ms). 0 means inherit. type: "integer" Health: description: | Health stores information about the container's healthcheck results. type: "object" properties: Status: description: | Status is one of `none`, `starting`, `healthy` or `unhealthy` - "none" Indicates there is no healthcheck - "starting" Starting indicates that the container is not yet ready - "healthy" Healthy indicates that the container is running correctly - "unhealthy" Unhealthy indicates that the container has a problem type: "string" enum: - "none" - "starting" - "healthy" - "unhealthy" example: "healthy" FailingStreak: description: "FailingStreak is the number of consecutive failures" type: "integer" example: 0 Log: type: "array" description: | Log contains the last few results (oldest first) items: x-nullable: true $ref: "#/definitions/HealthcheckResult" HealthcheckResult: description: | HealthcheckResult stores information about a single run of a healthcheck probe type: "object" properties: Start: description: | Date and time at which this check started in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. 
type: "string" format: "date-time" example: "2020-01-04T10:44:24.496525531Z" End: description: | Date and time at which this check ended in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2020-01-04T10:45:21.364524523Z" ExitCode: description: | ExitCode meanings: - `0` healthy - `1` unhealthy - `2` reserved (considered unhealthy) - other values: error running probe type: "integer" example: 0 Output: description: "Output from last check" type: "string" HostConfig: description: "Container configuration that depends on the host we are running on" allOf: - $ref: "#/definitions/Resources" - type: "object" properties: # Applicable to all platforms Binds: type: "array" description: | A list of volume bindings for this container. Each volume binding is a string in one of these forms: - `host-src:container-dest[:options]` to bind-mount a host path into the container. Both `host-src`, and `container-dest` must be an _absolute_ path. - `volume-name:container-dest[:options]` to bind-mount a volume managed by a volume driver into the container. `container-dest` must be an _absolute_ path. `options` is an optional, comma-delimited list of: - `nocopy` disables automatic copying of data from the container path to the volume. The `nocopy` flag only applies to named volumes. - `[ro|rw]` mounts a volume read-only or read-write, respectively. If omitted or set to `rw`, volumes are mounted read-write. - `[z|Z]` applies SELinux labels to allow or deny multiple containers to read and write to the same volume. - `z`: a _shared_ content label is applied to the content. This label indicates that multiple containers can share the volume content, for both reading and writing. - `Z`: a _private unshared_ label is applied to the content. This label indicates that only the current container can use a private volume. Labeling systems such as SELinux require proper labels to be placed on volume content that is mounted into a container. Without a label, the security system can prevent a container's processes from using the content. By default, the labels set by the host operating system are not modified. - `[[r]shared|[r]slave|[r]private]` specifies mount [propagation behavior](https://www.kernel.org/doc/Documentation/filesystems/sharedsubtree.txt). This only applies to bind-mounted volumes, not internal volumes or named volumes. Mount propagation requires the source mount point (the location where the source directory is mounted in the host operating system) to have the correct propagation properties. For shared volumes, the source mount point must be set to `shared`. For slave volumes, the mount must be set to either `shared` or `slave`. items: type: "string" ContainerIDFile: type: "string" description: "Path to a file where the container ID is written" LogConfig: type: "object" description: "The logging configuration for this container" properties: Type: type: "string" enum: - "json-file" - "syslog" - "journald" - "gelf" - "fluentd" - "awslogs" - "splunk" - "etwlogs" - "none" Config: type: "object" additionalProperties: type: "string" NetworkMode: type: "string" description: | Network mode to use for this container. Supported standard values are: `bridge`, `host`, `none`, and `container:<name|id>`. Any other value is taken as a custom network's name to which this container should connect to. 
PortBindings: $ref: "#/definitions/PortMap" RestartPolicy: $ref: "#/definitions/RestartPolicy" AutoRemove: type: "boolean" description: | Automatically remove the container when the container's process exits. This has no effect if `RestartPolicy` is set. VolumeDriver: type: "string" description: "Driver that this container uses to mount volumes." VolumesFrom: type: "array" description: | A list of volumes to inherit from another container, specified in the form `<container name>[:<ro|rw>]`. items: type: "string" Mounts: description: | Specification for mounts to be added to the container. type: "array" items: $ref: "#/definitions/Mount" # Applicable to UNIX platforms CapAdd: type: "array" description: | A list of kernel capabilities to add to the container. Conflicts with option 'Capabilities'. items: type: "string" CapDrop: type: "array" description: | A list of kernel capabilities to drop from the container. Conflicts with option 'Capabilities'. items: type: "string" CgroupnsMode: type: "string" enum: - "private" - "host" description: | cgroup namespace mode for the container. Possible values are: - `"private"`: the container runs in its own private cgroup namespace - `"host"`: use the host system's cgroup namespace If not specified, the daemon default is used, which can either be `"private"` or `"host"`, depending on daemon version, kernel support and configuration. Dns: type: "array" description: "A list of DNS servers for the container to use." items: type: "string" DnsOptions: type: "array" description: "A list of DNS options." items: type: "string" DnsSearch: type: "array" description: "A list of DNS search domains." items: type: "string" ExtraHosts: type: "array" description: | A list of hostnames/IP mappings to add to the container's `/etc/hosts` file. Specified in the form `["hostname:IP"]`. items: type: "string" GroupAdd: type: "array" description: | A list of additional groups that the container process will run as. items: type: "string" IpcMode: type: "string" description: | IPC sharing mode for the container. Possible values are: - `"none"`: own private IPC namespace, with /dev/shm not mounted - `"private"`: own private IPC namespace - `"shareable"`: own private IPC namespace, with a possibility to share it with other containers - `"container:<name|id>"`: join another (shareable) container's IPC namespace - `"host"`: use the host system's IPC namespace If not specified, daemon default is used, which can either be `"private"` or `"shareable"`, depending on daemon version and configuration. Cgroup: type: "string" description: "Cgroup to use for the container." Links: type: "array" description: | A list of links for the container in the form `container_name:alias`. items: type: "string" OomScoreAdj: type: "integer" description: | An integer value containing the score given to the container in order to tune OOM killer preferences. example: 500 PidMode: type: "string" description: | Set the PID (Process) Namespace mode for the container. It can be either: - `"container:<name|id>"`: joins another container's PID namespace - `"host"`: use the host's PID namespace inside the container Privileged: type: "boolean" description: "Gives the container full access to the host." PublishAllPorts: type: "boolean" description: | Allocates an ephemeral host port for all of a container's exposed ports. Ports are de-allocated when the container stops and allocated when the container starts. The allocated port might be changed when restarting the container. 
The port is selected from the ephemeral port range that depends on the kernel. For example, on Linux the range is defined by `/proc/sys/net/ipv4/ip_local_port_range`. ReadonlyRootfs: type: "boolean" description: "Mount the container's root filesystem as read only." SecurityOpt: type: "array" description: "A list of string values to customize labels for MLS systems, such as SELinux." items: type: "string" StorageOpt: type: "object" description: | Storage driver options for this container, in the form `{"size": "120G"}`. additionalProperties: type: "string" Tmpfs: type: "object" description: | A map of container directories which should be replaced by tmpfs mounts, and their corresponding mount options. For example: ``` { "/run": "rw,noexec,nosuid,size=65536k" } ``` additionalProperties: type: "string" UTSMode: type: "string" description: "UTS namespace to use for the container." UsernsMode: type: "string" description: | Sets the usernamespace mode for the container when usernamespace remapping option is enabled. ShmSize: type: "integer" description: | Size of `/dev/shm` in bytes. If omitted, the system uses 64MB. minimum: 0 Sysctls: type: "object" description: | A list of kernel parameters (sysctls) to set in the container. For example: ``` {"net.ipv4.ip_forward": "1"} ``` additionalProperties: type: "string" Runtime: type: "string" description: "Runtime to use with this container." # Applicable to Windows ConsoleSize: type: "array" description: | Initial console size, as an `[height, width]` array. (Windows only) minItems: 2 maxItems: 2 items: type: "integer" minimum: 0 Isolation: type: "string" description: | Isolation technology of the container. (Windows only) enum: - "default" - "process" - "hyperv" MaskedPaths: type: "array" description: | The list of paths to be masked inside the container (this overrides the default set of paths). items: type: "string" ReadonlyPaths: type: "array" description: | The list of paths to be set as read-only inside the container (this overrides the default set of paths). items: type: "string" ContainerConfig: description: "Configuration for a container that is portable between hosts" type: "object" properties: Hostname: description: "The hostname to use for the container, as a valid RFC 1123 hostname." type: "string" Domainname: description: "The domain name to use for the container." type: "string" User: description: "The user that commands are run as inside the container." type: "string" AttachStdin: description: "Whether to attach to `stdin`." type: "boolean" default: false AttachStdout: description: "Whether to attach to `stdout`." type: "boolean" default: true AttachStderr: description: "Whether to attach to `stderr`." type: "boolean" default: true ExposedPorts: description: | An object mapping ports to an empty object in the form: `{"<port>/<tcp|udp|sctp>": {}}` type: "object" additionalProperties: type: "object" enum: - {} default: {} Tty: description: | Attach standard streams to a TTY, including `stdin` if it is not closed. type: "boolean" default: false OpenStdin: description: "Open `stdin`" type: "boolean" default: false StdinOnce: description: "Close `stdin` after one attached client disconnects" type: "boolean" default: false Env: description: | A list of environment variables to set inside the container in the form `["VAR=value", ...]`. A variable without `=` is removed from the environment, rather than to have an empty value. type: "array" items: type: "string" Cmd: description: | Command to run specified as a string or an array of strings. 
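  # Illustrative sketch (not part of the schema): the `ReadonlyRootfs`, `Tmpfs`,
  # `Sysctls`, and `ShmSize` HostConfig fields above, using the option strings
  # shown in their descriptions; values are examples only:
  #
  #   "HostConfig": {
  #     "ReadonlyRootfs": true,
  #     "Tmpfs": {"/run": "rw,noexec,nosuid,size=65536k"},
  #     "Sysctls": {"net.ipv4.ip_forward": "1"},
  #     "ShmSize": 67108864
  #   }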
type: "array" items: type: "string" Healthcheck: $ref: "#/definitions/HealthConfig" ArgsEscaped: description: "Command is already escaped (Windows only)" type: "boolean" Image: description: | The name of the image to use when creating the container/ type: "string" Volumes: description: | An object mapping mount point paths inside the container to empty objects. type: "object" additionalProperties: type: "object" enum: - {} default: {} WorkingDir: description: "The working directory for commands to run in." type: "string" Entrypoint: description: | The entry point for the container as a string or an array of strings. If the array consists of exactly one empty string (`[""]`) then the entry point is reset to system default (i.e., the entry point used by docker when there is no `ENTRYPOINT` instruction in the `Dockerfile`). type: "array" items: type: "string" NetworkDisabled: description: "Disable networking for the container." type: "boolean" MacAddress: description: "MAC address of the container." type: "string" OnBuild: description: | `ONBUILD` metadata that were defined in the image's `Dockerfile`. type: "array" items: type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" StopSignal: description: | Signal to stop a container as a string or unsigned integer. type: "string" default: "SIGTERM" StopTimeout: description: "Timeout to stop a container in seconds." type: "integer" default: 10 Shell: description: | Shell for when `RUN`, `CMD`, and `ENTRYPOINT` uses a shell. type: "array" items: type: "string" NetworkingConfig: description: | NetworkingConfig represents the container's networking configuration for each of its interfaces. It is used for the networking configs specified in the `docker create` and `docker network connect` commands. type: "object" properties: EndpointsConfig: description: | A mapping of network name to endpoint configuration for that network. type: "object" additionalProperties: $ref: "#/definitions/EndpointSettings" example: # putting an example here, instead of using the example values from # /definitions/EndpointSettings, because containers/create currently # does not support attaching to multiple networks, so the example request # would be confusing if it showed that multiple networks can be contained # in the EndpointsConfig. # TODO remove once we support multiple networks on container create (see https://github.com/moby/moby/blob/07e6b843594e061f82baa5fa23c2ff7d536c2a05/daemon/create.go#L323) EndpointsConfig: isolated_nw: IPAMConfig: IPv4Address: "172.20.30.33" IPv6Address: "2001:db8:abcd::3033" LinkLocalIPs: - "169.254.34.68" - "fe80::3468" Links: - "container_1" - "container_2" Aliases: - "server_x" - "server_y" NetworkSettings: description: "NetworkSettings exposes the network settings in the API" type: "object" properties: Bridge: description: Name of the network's bridge (for example, `docker0`). type: "string" example: "docker0" SandboxID: description: SandboxID uniquely represents a container's network stack. type: "string" example: "9d12daf2c33f5959c8bf90aa513e4f65b561738661003029ec84830cd503a0c3" HairpinMode: description: | Indicates if hairpin NAT should be enabled on the virtual interface. type: "boolean" example: false LinkLocalIPv6Address: description: IPv6 unicast address using the link-local prefix. type: "string" example: "fe80::42:acff:fe11:1" LinkLocalIPv6PrefixLen: description: Prefix length of the IPv6 unicast address. 
type: "integer" example: "64" Ports: $ref: "#/definitions/PortMap" SandboxKey: description: SandboxKey identifies the sandbox type: "string" example: "/var/run/docker/netns/8ab54b426c38" # TODO is SecondaryIPAddresses actually used? SecondaryIPAddresses: description: "" type: "array" items: $ref: "#/definitions/Address" x-nullable: true # TODO is SecondaryIPv6Addresses actually used? SecondaryIPv6Addresses: description: "" type: "array" items: $ref: "#/definitions/Address" x-nullable: true # TODO properties below are part of DefaultNetworkSettings, which is # marked as deprecated since Docker 1.9 and to be removed in Docker v17.12 EndpointID: description: | EndpointID uniquely represents a service endpoint in a Sandbox. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "b88f5b905aabf2893f3cbc4ee42d1ea7980bbc0a92e2c8922b1e1795298afb0b" Gateway: description: | Gateway address for the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "172.17.0.1" GlobalIPv6Address: description: | Global IPv6 address for the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "2001:db8::5689" GlobalIPv6PrefixLen: description: | Mask length of the global IPv6 address. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "integer" example: 64 IPAddress: description: | IPv4 address for the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "172.17.0.4" IPPrefixLen: description: | Mask length of the IPv4 address. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "integer" example: 16 IPv6Gateway: description: | IPv6 gateway address for this network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. 
Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "2001:db8:2::100" MacAddress: description: | MAC address for the container on the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "02:42:ac:11:00:04" Networks: description: | Information about all networks that the container is connected to. type: "object" additionalProperties: $ref: "#/definitions/EndpointSettings" Address: description: Address represents an IPv4 or IPv6 IP address. type: "object" properties: Addr: description: IP address. type: "string" PrefixLen: description: Mask length of the IP address. type: "integer" PortMap: description: | PortMap describes the mapping of container ports to host ports, using the container's port-number and protocol as key in the format `<port>/<protocol>`, for example, `80/udp`. If a container's port is mapped for multiple protocols, separate entries are added to the mapping table. type: "object" additionalProperties: type: "array" x-nullable: true items: $ref: "#/definitions/PortBinding" example: "443/tcp": - HostIp: "127.0.0.1" HostPort: "4443" "80/tcp": - HostIp: "0.0.0.0" HostPort: "80" - HostIp: "0.0.0.0" HostPort: "8080" "80/udp": - HostIp: "0.0.0.0" HostPort: "80" "53/udp": - HostIp: "0.0.0.0" HostPort: "53" "2377/tcp": null PortBinding: description: | PortBinding represents a binding between a host IP address and a host port. type: "object" properties: HostIp: description: "Host IP address that the container's port is mapped to." type: "string" example: "127.0.0.1" HostPort: description: "Host port number that the container's port is mapped to." type: "string" example: "4443" GraphDriverData: description: "Information about a container's graph driver." 
type: "object" required: [Name, Data] properties: Name: type: "string" x-nullable: false Data: type: "object" x-nullable: false additionalProperties: type: "string" Image: type: "object" required: - Id - Parent - Comment - Created - Container - DockerVersion - Author - Architecture - Os - Size - VirtualSize - GraphDriver - RootFS properties: Id: type: "string" x-nullable: false RepoTags: type: "array" items: type: "string" RepoDigests: type: "array" items: type: "string" Parent: type: "string" x-nullable: false Comment: type: "string" x-nullable: false Created: type: "string" x-nullable: false Container: type: "string" x-nullable: false ContainerConfig: $ref: "#/definitions/ContainerConfig" DockerVersion: type: "string" x-nullable: false Author: type: "string" x-nullable: false Config: $ref: "#/definitions/ContainerConfig" Architecture: type: "string" x-nullable: false Os: type: "string" x-nullable: false OsVersion: type: "string" Size: type: "integer" format: "int64" x-nullable: false VirtualSize: type: "integer" format: "int64" x-nullable: false GraphDriver: $ref: "#/definitions/GraphDriverData" RootFS: type: "object" required: [Type] properties: Type: type: "string" x-nullable: false Layers: type: "array" items: type: "string" BaseLayer: type: "string" Metadata: type: "object" properties: LastTagTime: type: "string" format: "dateTime" ImageSummary: type: "object" required: - Id - ParentId - RepoTags - RepoDigests - Created - Size - SharedSize - VirtualSize - Labels - Containers properties: Id: type: "string" x-nullable: false ParentId: type: "string" x-nullable: false RepoTags: type: "array" x-nullable: false items: type: "string" RepoDigests: type: "array" x-nullable: false items: type: "string" Created: type: "integer" x-nullable: false Size: type: "integer" x-nullable: false SharedSize: type: "integer" x-nullable: false VirtualSize: type: "integer" x-nullable: false Labels: type: "object" x-nullable: false additionalProperties: type: "string" Containers: x-nullable: false type: "integer" AuthConfig: type: "object" properties: username: type: "string" password: type: "string" email: type: "string" serveraddress: type: "string" example: username: "hannibal" password: "xxxx" serveraddress: "https://index.docker.io/v1/" ProcessConfig: type: "object" properties: privileged: type: "boolean" user: type: "string" tty: type: "boolean" entrypoint: type: "string" arguments: type: "array" items: type: "string" Volume: type: "object" required: [Name, Driver, Mountpoint, Labels, Scope, Options] properties: Name: type: "string" description: "Name of the volume." x-nullable: false Driver: type: "string" description: "Name of the volume driver used by the volume." x-nullable: false Mountpoint: type: "string" description: "Mount path of the volume on the host." x-nullable: false CreatedAt: type: "string" format: "dateTime" description: "Date/Time the volume was created." Status: type: "object" description: | Low-level details about the volume, provided by the volume driver. Details are returned as a map with key/value pairs: `{"key":"value","key2":"value2"}`. The `Status` field is optional, and is omitted if the volume driver does not support this feature. additionalProperties: type: "object" Labels: type: "object" description: "User-defined key/value metadata." x-nullable: false additionalProperties: type: "string" Scope: type: "string" description: | The level at which the volume exists. Either `global` for cluster-wide, or `local` for machine level. 
default: "local" x-nullable: false enum: ["local", "global"] Options: type: "object" description: | The driver specific options used when creating the volume. additionalProperties: type: "string" UsageData: type: "object" x-nullable: true required: [Size, RefCount] description: | Usage details about the volume. This information is used by the `GET /system/df` endpoint, and omitted in other endpoints. properties: Size: type: "integer" default: -1 description: | Amount of disk space used by the volume (in bytes). This information is only available for volumes created with the `"local"` volume driver. For volumes created with other volume drivers, this field is set to `-1` ("not available") x-nullable: false RefCount: type: "integer" default: -1 description: | The number of containers referencing this volume. This field is set to `-1` if the reference-count is not available. x-nullable: false example: Name: "tardis" Driver: "custom" Mountpoint: "/var/lib/docker/volumes/tardis" Status: hello: "world" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Scope: "local" CreatedAt: "2016-06-07T20:31:11.853781916Z" Network: type: "object" properties: Name: type: "string" Id: type: "string" Created: type: "string" format: "dateTime" Scope: type: "string" Driver: type: "string" EnableIPv6: type: "boolean" IPAM: $ref: "#/definitions/IPAM" Internal: type: "boolean" Attachable: type: "boolean" Ingress: type: "boolean" Containers: type: "object" additionalProperties: $ref: "#/definitions/NetworkContainer" Options: type: "object" additionalProperties: type: "string" Labels: type: "object" additionalProperties: type: "string" example: Name: "net01" Id: "7d86d31b1478e7cca9ebed7e73aa0fdeec46c5ca29497431d3007d2d9e15ed99" Created: "2016-10-19T04:33:30.360899459Z" Scope: "local" Driver: "bridge" EnableIPv6: false IPAM: Driver: "default" Config: - Subnet: "172.19.0.0/16" Gateway: "172.19.0.1" Options: foo: "bar" Internal: false Attachable: false Ingress: false Containers: 19a4d5d687db25203351ed79d478946f861258f018fe384f229f2efa4b23513c: Name: "test" EndpointID: "628cadb8bcb92de107b2a1e516cbffe463e321f548feb37697cce00ad694f21a" MacAddress: "02:42:ac:13:00:02" IPv4Address: "172.19.0.2/16" IPv6Address: "" Options: com.docker.network.bridge.default_bridge: "true" com.docker.network.bridge.enable_icc: "true" com.docker.network.bridge.enable_ip_masquerade: "true" com.docker.network.bridge.host_binding_ipv4: "0.0.0.0" com.docker.network.bridge.name: "docker0" com.docker.network.driver.mtu: "1500" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" IPAM: type: "object" properties: Driver: description: "Name of the IPAM driver to use." type: "string" default: "default" Config: description: | List of IPAM configuration options, specified as a map: ``` {"Subnet": <CIDR>, "IPRange": <CIDR>, "Gateway": <IP address>, "AuxAddress": <device_name:IP address>} ``` type: "array" items: type: "object" additionalProperties: type: "string" Options: description: "Driver-specific options, specified as a map." 
type: "object" additionalProperties: type: "string" NetworkContainer: type: "object" properties: Name: type: "string" EndpointID: type: "string" MacAddress: type: "string" IPv4Address: type: "string" IPv6Address: type: "string" BuildInfo: type: "object" properties: id: type: "string" stream: type: "string" error: type: "string" errorDetail: $ref: "#/definitions/ErrorDetail" status: type: "string" progress: type: "string" progressDetail: $ref: "#/definitions/ProgressDetail" aux: $ref: "#/definitions/ImageID" BuildCache: type: "object" properties: ID: type: "string" Parent: type: "string" Type: type: "string" Description: type: "string" InUse: type: "boolean" Shared: type: "boolean" Size: description: | Amount of disk space used by the build cache (in bytes). type: "integer" CreatedAt: description: | Date and time at which the build cache was created in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2016-08-18T10:44:24.496525531Z" LastUsedAt: description: | Date and time at which the build cache was last used in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" x-nullable: true example: "2017-08-09T07:09:37.632105588Z" UsageCount: type: "integer" ImageID: type: "object" description: "Image ID or Digest" properties: ID: type: "string" example: ID: "sha256:85f05633ddc1c50679be2b16a0479ab6f7637f8884e0cfe0f4d20e1ebb3d6e7c" CreateImageInfo: type: "object" properties: id: type: "string" error: type: "string" status: type: "string" progress: type: "string" progressDetail: $ref: "#/definitions/ProgressDetail" PushImageInfo: type: "object" properties: error: type: "string" status: type: "string" progress: type: "string" progressDetail: $ref: "#/definitions/ProgressDetail" ErrorDetail: type: "object" properties: code: type: "integer" message: type: "string" ProgressDetail: type: "object" properties: current: type: "integer" total: type: "integer" ErrorResponse: description: "Represents an error." type: "object" required: ["message"] properties: message: description: "The error message." type: "string" x-nullable: false example: message: "Something went wrong." IdResponse: description: "Response to an API call that returns just an Id" type: "object" required: ["Id"] properties: Id: description: "The id of the newly created object." type: "string" x-nullable: false EndpointSettings: description: "Configuration for a network endpoint." type: "object" properties: # Configurations IPAMConfig: $ref: "#/definitions/EndpointIPAMConfig" Links: type: "array" items: type: "string" example: - "container_1" - "container_2" Aliases: type: "array" items: type: "string" example: - "server_x" - "server_y" # Operational data NetworkID: description: | Unique ID of the network. type: "string" example: "08754567f1f40222263eab4102e1c733ae697e8e354aa9cd6e18d7402835292a" EndpointID: description: | Unique ID for the service endpoint in a Sandbox. type: "string" example: "b88f5b905aabf2893f3cbc4ee42d1ea7980bbc0a92e2c8922b1e1795298afb0b" Gateway: description: | Gateway address for this network. type: "string" example: "172.17.0.1" IPAddress: description: | IPv4 address. type: "string" example: "172.17.0.4" IPPrefixLen: description: | Mask length of the IPv4 address. type: "integer" example: 16 IPv6Gateway: description: | IPv6 gateway address. type: "string" example: "2001:db8:2::100" GlobalIPv6Address: description: | Global IPv6 address. 
type: "string" example: "2001:db8::5689" GlobalIPv6PrefixLen: description: | Mask length of the global IPv6 address. type: "integer" format: "int64" example: 64 MacAddress: description: | MAC address for the endpoint on this network. type: "string" example: "02:42:ac:11:00:04" DriverOpts: description: | DriverOpts is a mapping of driver options and values. These options are passed directly to the driver and are driver specific. type: "object" x-nullable: true additionalProperties: type: "string" example: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" EndpointIPAMConfig: description: | EndpointIPAMConfig represents an endpoint's IPAM configuration. type: "object" x-nullable: true properties: IPv4Address: type: "string" example: "172.20.30.33" IPv6Address: type: "string" example: "2001:db8:abcd::3033" LinkLocalIPs: type: "array" items: type: "string" example: - "169.254.34.68" - "fe80::3468" PluginMount: type: "object" x-nullable: false required: [Name, Description, Settable, Source, Destination, Type, Options] properties: Name: type: "string" x-nullable: false example: "some-mount" Description: type: "string" x-nullable: false example: "This is a mount that's used by the plugin." Settable: type: "array" items: type: "string" Source: type: "string" example: "/var/lib/docker/plugins/" Destination: type: "string" x-nullable: false example: "/mnt/state" Type: type: "string" x-nullable: false example: "bind" Options: type: "array" items: type: "string" example: - "rbind" - "rw" PluginDevice: type: "object" required: [Name, Description, Settable, Path] x-nullable: false properties: Name: type: "string" x-nullable: false Description: type: "string" x-nullable: false Settable: type: "array" items: type: "string" Path: type: "string" example: "/dev/fuse" PluginEnv: type: "object" x-nullable: false required: [Name, Description, Settable, Value] properties: Name: x-nullable: false type: "string" Description: x-nullable: false type: "string" Settable: type: "array" items: type: "string" Value: type: "string" PluginInterfaceType: type: "object" x-nullable: false required: [Prefix, Capability, Version] properties: Prefix: type: "string" x-nullable: false Capability: type: "string" x-nullable: false Version: type: "string" x-nullable: false Plugin: description: "A plugin for the Engine API" type: "object" required: [Settings, Enabled, Config, Name] properties: Id: type: "string" example: "5724e2c8652da337ab2eedd19fc6fc0ec908e4bd907c7421bf6a8dfc70c4c078" Name: type: "string" x-nullable: false example: "tiborvass/sample-volume-plugin" Enabled: description: True if the plugin is running. False if the plugin is not running, only installed. type: "boolean" x-nullable: false example: true Settings: description: "Settings that can be modified by users." type: "object" x-nullable: false required: [Args, Devices, Env, Mounts] properties: Mounts: type: "array" items: $ref: "#/definitions/PluginMount" Env: type: "array" items: type: "string" example: - "DEBUG=0" Args: type: "array" items: type: "string" Devices: type: "array" items: $ref: "#/definitions/PluginDevice" PluginReference: description: "plugin remote reference used to push/pull the plugin" type: "string" x-nullable: false example: "localhost:5000/tiborvass/sample-volume-plugin:latest" Config: description: "The config of a plugin." 
type: "object" x-nullable: false required: - Description - Documentation - Interface - Entrypoint - WorkDir - Network - Linux - PidHost - PropagatedMount - IpcHost - Mounts - Env - Args properties: DockerVersion: description: "Docker Version used to create the plugin" type: "string" x-nullable: false example: "17.06.0-ce" Description: type: "string" x-nullable: false example: "A sample volume plugin for Docker" Documentation: type: "string" x-nullable: false example: "https://docs.docker.com/engine/extend/plugins/" Interface: description: "The interface between Docker and the plugin" x-nullable: false type: "object" required: [Types, Socket] properties: Types: type: "array" items: $ref: "#/definitions/PluginInterfaceType" example: - "docker.volumedriver/1.0" Socket: type: "string" x-nullable: false example: "plugins.sock" ProtocolScheme: type: "string" example: "some.protocol/v1.0" description: "Protocol to use for clients connecting to the plugin." enum: - "" - "moby.plugins.http/v1" Entrypoint: type: "array" items: type: "string" example: - "/usr/bin/sample-volume-plugin" - "/data" WorkDir: type: "string" x-nullable: false example: "/bin/" User: type: "object" x-nullable: false properties: UID: type: "integer" format: "uint32" example: 1000 GID: type: "integer" format: "uint32" example: 1000 Network: type: "object" x-nullable: false required: [Type] properties: Type: x-nullable: false type: "string" example: "host" Linux: type: "object" x-nullable: false required: [Capabilities, AllowAllDevices, Devices] properties: Capabilities: type: "array" items: type: "string" example: - "CAP_SYS_ADMIN" - "CAP_SYSLOG" AllowAllDevices: type: "boolean" x-nullable: false example: false Devices: type: "array" items: $ref: "#/definitions/PluginDevice" PropagatedMount: type: "string" x-nullable: false example: "/mnt/volumes" IpcHost: type: "boolean" x-nullable: false example: false PidHost: type: "boolean" x-nullable: false example: false Mounts: type: "array" items: $ref: "#/definitions/PluginMount" Env: type: "array" items: $ref: "#/definitions/PluginEnv" example: - Name: "DEBUG" Description: "If set, prints debug messages" Settable: null Value: "0" Args: type: "object" x-nullable: false required: [Name, Description, Settable, Value] properties: Name: x-nullable: false type: "string" example: "args" Description: x-nullable: false type: "string" example: "command line arguments" Settable: type: "array" items: type: "string" Value: type: "array" items: type: "string" rootfs: type: "object" properties: type: type: "string" example: "layers" diff_ids: type: "array" items: type: "string" example: - "sha256:675532206fbf3030b8458f88d6e26d4eb1577688a25efec97154c94e8b6b4887" - "sha256:e216a057b1cb1efc11f8a268f37ef62083e70b1b38323ba252e25ac88904a7e8" ObjectVersion: description: | The version number of the object such as node, service, etc. This is needed to avoid conflicting writes. The client must send the version number along with the modified specification when updating these objects. This approach ensures safe concurrency and determinism in that the change on the object may not be applied if the version number has changed from the last read. In other words, if two update requests specify the same base version, only one of the requests can succeed. As a result, two separate update requests that happen at the same time will not unintentionally overwrite each other. 
type: "object" properties: Index: type: "integer" format: "uint64" example: 373531 NodeSpec: type: "object" properties: Name: description: "Name for the node." type: "string" example: "my-node" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" Role: description: "Role of the node." type: "string" enum: - "worker" - "manager" example: "manager" Availability: description: "Availability of the node." type: "string" enum: - "active" - "pause" - "drain" example: "active" example: Availability: "active" Name: "node-name" Role: "manager" Labels: foo: "bar" Node: type: "object" properties: ID: type: "string" example: "24ifsmvkjbyhk" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: description: | Date and time at which the node was added to the swarm in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2016-08-18T10:44:24.496525531Z" UpdatedAt: description: | Date and time at which the node was last updated in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2017-08-09T07:09:37.632105588Z" Spec: $ref: "#/definitions/NodeSpec" Description: $ref: "#/definitions/NodeDescription" Status: $ref: "#/definitions/NodeStatus" ManagerStatus: $ref: "#/definitions/ManagerStatus" NodeDescription: description: | NodeDescription encapsulates the properties of the Node as reported by the agent. type: "object" properties: Hostname: type: "string" example: "bf3067039e47" Platform: $ref: "#/definitions/Platform" Resources: $ref: "#/definitions/ResourceObject" Engine: $ref: "#/definitions/EngineDescription" TLSInfo: $ref: "#/definitions/TLSInfo" Platform: description: | Platform represents the platform (Arch/OS). type: "object" properties: Architecture: description: | Architecture represents the hardware architecture (for example, `x86_64`). type: "string" example: "x86_64" OS: description: | OS represents the Operating System (for example, `linux` or `windows`). type: "string" example: "linux" EngineDescription: description: "EngineDescription provides information about an engine." type: "object" properties: EngineVersion: type: "string" example: "17.06.0" Labels: type: "object" additionalProperties: type: "string" example: foo: "bar" Plugins: type: "array" items: type: "object" properties: Type: type: "string" Name: type: "string" example: - Type: "Log" Name: "awslogs" - Type: "Log" Name: "fluentd" - Type: "Log" Name: "gcplogs" - Type: "Log" Name: "gelf" - Type: "Log" Name: "journald" - Type: "Log" Name: "json-file" - Type: "Log" Name: "logentries" - Type: "Log" Name: "splunk" - Type: "Log" Name: "syslog" - Type: "Network" Name: "bridge" - Type: "Network" Name: "host" - Type: "Network" Name: "ipvlan" - Type: "Network" Name: "macvlan" - Type: "Network" Name: "null" - Type: "Network" Name: "overlay" - Type: "Volume" Name: "local" - Type: "Volume" Name: "localhost:5000/vieux/sshfs:latest" - Type: "Volume" Name: "vieux/sshfs:latest" TLSInfo: description: | Information about the issuer of leaf TLS certificates and the trusted root CA certificate. type: "object" properties: TrustRoot: description: | The root CA certificate(s) that are used to validate leaf TLS certificates. type: "string" CertIssuerSubject: description: The base64-url-safe-encoded raw subject bytes of the issuer. type: "string" CertIssuerPublicKey: description: | The base64-url-safe-encoded raw public key bytes of the issuer. 
type: "string" example: TrustRoot: | -----BEGIN CERTIFICATE----- MIIBajCCARCgAwIBAgIUbYqrLSOSQHoxD8CwG6Bi2PJi9c8wCgYIKoZIzj0EAwIw EzERMA8GA1UEAxMIc3dhcm0tY2EwHhcNMTcwNDI0MjE0MzAwWhcNMzcwNDE5MjE0 MzAwWjATMREwDwYDVQQDEwhzd2FybS1jYTBZMBMGByqGSM49AgEGCCqGSM49AwEH A0IABJk/VyMPYdaqDXJb/VXh5n/1Yuv7iNrxV3Qb3l06XD46seovcDWs3IZNV1lf 3Skyr0ofcchipoiHkXBODojJydSjQjBAMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMB Af8EBTADAQH/MB0GA1UdDgQWBBRUXxuRcnFjDfR/RIAUQab8ZV/n4jAKBggqhkjO PQQDAgNIADBFAiAy+JTe6Uc3KyLCMiqGl2GyWGQqQDEcO3/YG36x7om65AIhAJvz pxv6zFeVEkAEEkqIYi0omA9+CjanB/6Bz4n1uw8H -----END CERTIFICATE----- CertIssuerSubject: "MBMxETAPBgNVBAMTCHN3YXJtLWNh" CertIssuerPublicKey: "MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEmT9XIw9h1qoNclv9VeHmf/Vi6/uI2vFXdBveXTpcPjqx6i9wNazchk1XWV/dKTKvSh9xyGKmiIeRcE4OiMnJ1A==" NodeStatus: description: | NodeStatus represents the status of a node. It provides the current status of the node, as seen by the manager. type: "object" properties: State: $ref: "#/definitions/NodeState" Message: type: "string" example: "" Addr: description: "IP address of the node." type: "string" example: "172.17.0.2" NodeState: description: "NodeState represents the state of a node." type: "string" enum: - "unknown" - "down" - "ready" - "disconnected" example: "ready" ManagerStatus: description: | ManagerStatus represents the status of a manager. It provides the current status of a node's manager component, if the node is a manager. x-nullable: true type: "object" properties: Leader: type: "boolean" default: false example: true Reachability: $ref: "#/definitions/Reachability" Addr: description: | The IP address and port at which the manager is reachable. type: "string" example: "10.0.0.46:2377" Reachability: description: "Reachability represents the reachability of a node." type: "string" enum: - "unknown" - "unreachable" - "reachable" example: "reachable" SwarmSpec: description: "User modifiable swarm configuration." type: "object" properties: Name: description: "Name of the swarm." type: "string" example: "default" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" example: com.example.corp.type: "production" com.example.corp.department: "engineering" Orchestration: description: "Orchestration configuration." type: "object" x-nullable: true properties: TaskHistoryRetentionLimit: description: | The number of historic tasks to keep per instance or node. If negative, never remove completed or failed tasks. type: "integer" format: "int64" example: 10 Raft: description: "Raft configuration." type: "object" properties: SnapshotInterval: description: "The number of log entries between snapshots." type: "integer" format: "uint64" example: 10000 KeepOldSnapshots: description: | The number of snapshots to keep beyond the current snapshot. type: "integer" format: "uint64" LogEntriesForSlowFollowers: description: | The number of log entries to keep around to sync up slow followers after a snapshot is created. type: "integer" format: "uint64" example: 500 ElectionTick: description: | The number of ticks that a follower will wait for a message from the leader before becoming a candidate and starting an election. `ElectionTick` must be greater than `HeartbeatTick`. A tick currently defaults to one second, so these translate directly to seconds currently, but this is NOT guaranteed. type: "integer" example: 3 HeartbeatTick: description: | The number of ticks between heartbeats. Every HeartbeatTick ticks, the leader will send a heartbeat to the followers. 
A tick currently defaults to one second, so these translate directly to seconds currently, but this is NOT guaranteed. type: "integer" example: 1 Dispatcher: description: "Dispatcher configuration." type: "object" x-nullable: true properties: HeartbeatPeriod: description: | The delay for an agent to send a heartbeat to the dispatcher. type: "integer" format: "int64" example: 5000000000 CAConfig: description: "CA configuration." type: "object" x-nullable: true properties: NodeCertExpiry: description: "The duration node certificates are issued for." type: "integer" format: "int64" example: 7776000000000000 ExternalCAs: description: | Configuration for forwarding signing requests to an external certificate authority. type: "array" items: type: "object" properties: Protocol: description: | Protocol for communication with the external CA (currently only `cfssl` is supported). type: "string" enum: - "cfssl" default: "cfssl" URL: description: | URL where certificate signing requests should be sent. type: "string" Options: description: | An object with key/value pairs that are interpreted as protocol-specific options for the external CA driver. type: "object" additionalProperties: type: "string" CACert: description: | The root CA certificate (in PEM format) this external CA uses to issue TLS certificates (assumed to be to the current swarm root CA certificate if not provided). type: "string" SigningCACert: description: | The desired signing CA certificate for all swarm node TLS leaf certificates, in PEM format. type: "string" SigningCAKey: description: | The desired signing CA key for all swarm node TLS leaf certificates, in PEM format. type: "string" ForceRotate: description: | An integer whose purpose is to force swarm to generate a new signing CA certificate and key, if none have been specified in `SigningCACert` and `SigningCAKey` format: "uint64" type: "integer" EncryptionConfig: description: "Parameters related to encryption-at-rest." type: "object" properties: AutoLockManagers: description: | If set, generate a key and use it to lock data stored on the managers. type: "boolean" example: false TaskDefaults: description: "Defaults for creating tasks in this cluster." type: "object" properties: LogDriver: description: | The log driver to use for tasks created in the orchestrator if unspecified by a service. Updating this value only affects new tasks. Existing tasks continue to use their previously configured log driver until recreated. type: "object" properties: Name: description: | The log driver to use as a default for new tasks. type: "string" example: "json-file" Options: description: | Driver-specific options for the selectd log driver, specified as key/value pairs. type: "object" additionalProperties: type: "string" example: "max-file": "10" "max-size": "100m" # The Swarm information for `GET /info`. It is the same as `GET /swarm`, but # without `JoinTokens`. ClusterInfo: description: | ClusterInfo represents information about the swarm as is returned by the "/info" endpoint. Join-tokens are not included. x-nullable: true type: "object" properties: ID: description: "The ID of the swarm." type: "string" example: "abajmipo7b4xz5ip2nrla6b11" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: description: | Date and time at which the swarm was initialised in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. 
type: "string" format: "dateTime" example: "2016-08-18T10:44:24.496525531Z" UpdatedAt: description: | Date and time at which the swarm was last updated in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2017-08-09T07:09:37.632105588Z" Spec: $ref: "#/definitions/SwarmSpec" TLSInfo: $ref: "#/definitions/TLSInfo" RootRotationInProgress: description: | Whether there is currently a root CA rotation in progress for the swarm type: "boolean" example: false DataPathPort: description: | DataPathPort specifies the data path port number for data traffic. Acceptable port range is 1024 to 49151. If no port is set or is set to 0, the default port (4789) is used. type: "integer" format: "uint32" default: 4789 example: 4789 DefaultAddrPool: description: | Default Address Pool specifies default subnet pools for global scope networks. type: "array" items: type: "string" format: "CIDR" example: ["10.10.0.0/16", "20.20.0.0/16"] SubnetSize: description: | SubnetSize specifies the subnet size of the networks created from the default subnet pool. type: "integer" format: "uint32" maximum: 29 default: 24 example: 24 JoinTokens: description: | JoinTokens contains the tokens workers and managers need to join the swarm. type: "object" properties: Worker: description: | The token workers can use to join the swarm. type: "string" example: "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-1awxwuwd3z9j1z3puu7rcgdbx" Manager: description: | The token managers can use to join the swarm. type: "string" example: "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-7p73s1dx5in4tatdymyhg9hu2" Swarm: type: "object" allOf: - $ref: "#/definitions/ClusterInfo" - type: "object" properties: JoinTokens: $ref: "#/definitions/JoinTokens" TaskSpec: description: "User modifiable task configuration." type: "object" properties: PluginSpec: type: "object" description: | Plugin spec for the service. *(Experimental release only.)* <p><br /></p> > **Note**: ContainerSpec, NetworkAttachmentSpec, and PluginSpec are > mutually exclusive. PluginSpec is only used when the Runtime field > is set to `plugin`. NetworkAttachmentSpec is used when the Runtime > field is set to `attachment`. properties: Name: description: "The name or 'alias' to use for the plugin." type: "string" Remote: description: "The plugin image reference to use." type: "string" Disabled: description: "Disable the plugin once scheduled." type: "boolean" PluginPrivilege: type: "array" items: description: | Describes a permission accepted by the user upon installing the plugin. type: "object" properties: Name: type: "string" Description: type: "string" Value: type: "array" items: type: "string" ContainerSpec: type: "object" description: | Container spec for the service. <p><br /></p> > **Note**: ContainerSpec, NetworkAttachmentSpec, and PluginSpec are > mutually exclusive. PluginSpec is only used when the Runtime field > is set to `plugin`. NetworkAttachmentSpec is used when the Runtime > field is set to `attachment`. properties: Image: description: "The image name to use for the container" type: "string" Labels: description: "User-defined key/value data." type: "object" additionalProperties: type: "string" Command: description: "The command to be run in the image." type: "array" items: type: "string" Args: description: "Arguments to the command." 
type: "array" items: type: "string" Hostname: description: | The hostname to use for the container, as a valid [RFC 1123](https://tools.ietf.org/html/rfc1123) hostname. type: "string" Env: description: | A list of environment variables in the form `VAR=value`. type: "array" items: type: "string" Dir: description: "The working directory for commands to run in." type: "string" User: description: "The user inside the container." type: "string" Groups: type: "array" description: | A list of additional groups that the container process will run as. items: type: "string" Privileges: type: "object" description: "Security options for the container" properties: CredentialSpec: type: "object" description: "CredentialSpec for managed service account (Windows only)" properties: Config: type: "string" example: "0bt9dmxjvjiqermk6xrop3ekq" description: | Load credential spec from a Swarm Config with the given ID. The specified config must also be present in the Configs field with the Runtime property set. <p><br /></p> > **Note**: `CredentialSpec.File`, `CredentialSpec.Registry`, > and `CredentialSpec.Config` are mutually exclusive. File: type: "string" example: "spec.json" description: | Load credential spec from this file. The file is read by the daemon, and must be present in the `CredentialSpecs` subdirectory in the docker data directory, which defaults to `C:\ProgramData\Docker\` on Windows. For example, specifying `spec.json` loads `C:\ProgramData\Docker\CredentialSpecs\spec.json`. <p><br /></p> > **Note**: `CredentialSpec.File`, `CredentialSpec.Registry`, > and `CredentialSpec.Config` are mutually exclusive. Registry: type: "string" description: | Load credential spec from this value in the Windows registry. The specified registry value must be located in: `HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization\Containers\CredentialSpecs` <p><br /></p> > **Note**: `CredentialSpec.File`, `CredentialSpec.Registry`, > and `CredentialSpec.Config` are mutually exclusive. SELinuxContext: type: "object" description: "SELinux labels of the container" properties: Disable: type: "boolean" description: "Disable SELinux" User: type: "string" description: "SELinux user label" Role: type: "string" description: "SELinux role label" Type: type: "string" description: "SELinux type label" Level: type: "string" description: "SELinux level label" TTY: description: "Whether a pseudo-TTY should be allocated." type: "boolean" OpenStdin: description: "Open `stdin`" type: "boolean" ReadOnly: description: "Mount the container's root filesystem as read only." type: "boolean" Mounts: description: | Specification for mounts to be added to containers created as part of the service. type: "array" items: $ref: "#/definitions/Mount" StopSignal: description: "Signal to stop the container." type: "string" StopGracePeriod: description: | Amount of time to wait for the container to terminate before forcefully killing it. type: "integer" format: "int64" HealthCheck: $ref: "#/definitions/HealthConfig" Hosts: type: "array" description: | A list of hostname/IP mappings to add to the container's `hosts` file. The format of extra hosts is specified in the [hosts(5)](http://man7.org/linux/man-pages/man5/hosts.5.html) man page: IP_address canonical_hostname [aliases...] items: type: "string" DNSConfig: description: | Specification for DNS related configurations in resolver configuration file (`resolv.conf`). type: "object" properties: Nameservers: description: "The IP addresses of the name servers." 
type: "array" items: type: "string" Search: description: "A search list for host-name lookup." type: "array" items: type: "string" Options: description: | A list of internal resolver variables to be modified (e.g., `debug`, `ndots:3`, etc.). type: "array" items: type: "string" Secrets: description: | Secrets contains references to zero or more secrets that will be exposed to the service. type: "array" items: type: "object" properties: File: description: | File represents a specific target that is backed by a file. type: "object" properties: Name: description: | Name represents the final filename in the filesystem. type: "string" UID: description: "UID represents the file UID." type: "string" GID: description: "GID represents the file GID." type: "string" Mode: description: "Mode represents the FileMode of the file." type: "integer" format: "uint32" SecretID: description: | SecretID represents the ID of the specific secret that we're referencing. type: "string" SecretName: description: | SecretName is the name of the secret that this references, but this is just provided for lookup/display purposes. The secret in the reference will be identified by its ID. type: "string" Configs: description: | Configs contains references to zero or more configs that will be exposed to the service. type: "array" items: type: "object" properties: File: description: | File represents a specific target that is backed by a file. <p><br /><p> > **Note**: `Configs.File` and `Configs.Runtime` are mutually exclusive type: "object" properties: Name: description: | Name represents the final filename in the filesystem. type: "string" UID: description: "UID represents the file UID." type: "string" GID: description: "GID represents the file GID." type: "string" Mode: description: "Mode represents the FileMode of the file." type: "integer" format: "uint32" Runtime: description: | Runtime represents a target that is not mounted into the container but is used by the task <p><br /><p> > **Note**: `Configs.File` and `Configs.Runtime` are mutually > exclusive type: "object" ConfigID: description: | ConfigID represents the ID of the specific config that we're referencing. type: "string" ConfigName: description: | ConfigName is the name of the config that this references, but this is just provided for lookup/display purposes. The config in the reference will be identified by its ID. type: "string" Isolation: type: "string" description: | Isolation technology of the containers running the service. (Windows only) enum: - "default" - "process" - "hyperv" Init: description: | Run an init inside the container that forwards signals and reaps processes. This field is omitted if empty, and the default (as configured on the daemon) is used. type: "boolean" x-nullable: true Sysctls: description: | Set kernel namedspaced parameters (sysctls) in the container. The Sysctls option on services accepts the same sysctls as the are supported on containers. Note that while the same sysctls are supported, no guarantees or checks are made about their suitability for a clustered environment, and it's up to the user to determine whether a given sysctl will work properly in a Service. type: "object" additionalProperties: type: "string" # This option is not used by Windows containers CapabilityAdd: type: "array" description: | A list of kernel capabilities to add to the default set for the container. 
items: type: "string" example: - "CAP_NET_RAW" - "CAP_SYS_ADMIN" - "CAP_SYS_CHROOT" - "CAP_SYSLOG" CapabilityDrop: type: "array" description: | A list of kernel capabilities to drop from the default set for the container. items: type: "string" example: - "CAP_NET_RAW" Ulimits: description: | A list of resource limits to set in the container. For example: `{"Name": "nofile", "Soft": 1024, "Hard": 2048}`" type: "array" items: type: "object" properties: Name: description: "Name of ulimit" type: "string" Soft: description: "Soft limit" type: "integer" Hard: description: "Hard limit" type: "integer" NetworkAttachmentSpec: description: | Read-only spec type for non-swarm containers attached to swarm overlay networks. <p><br /></p> > **Note**: ContainerSpec, NetworkAttachmentSpec, and PluginSpec are > mutually exclusive. PluginSpec is only used when the Runtime field > is set to `plugin`. NetworkAttachmentSpec is used when the Runtime > field is set to `attachment`. type: "object" properties: ContainerID: description: "ID of the container represented by this task" type: "string" Resources: description: | Resource requirements which apply to each individual container created as part of the service. type: "object" properties: Limits: description: "Define resources limits." $ref: "#/definitions/Limit" Reservation: description: "Define resources reservation." $ref: "#/definitions/ResourceObject" RestartPolicy: description: | Specification for the restart policy which applies to containers created as part of this service. type: "object" properties: Condition: description: "Condition for restart." type: "string" enum: - "none" - "on-failure" - "any" Delay: description: "Delay between restart attempts." type: "integer" format: "int64" MaxAttempts: description: | Maximum attempts to restart a given container before giving up (default value is 0, which is ignored). type: "integer" format: "int64" default: 0 Window: description: | Windows is the time window used to evaluate the restart policy (default value is 0, which is unbounded). type: "integer" format: "int64" default: 0 Placement: type: "object" properties: Constraints: description: | An array of constraint expressions to limit the set of nodes where a task can be scheduled. Constraint expressions can either use a _match_ (`==`) or _exclude_ (`!=`) rule. Multiple constraints find nodes that satisfy every expression (AND match). Constraints can match node or Docker Engine labels as follows: node attribute | matches | example ---------------------|--------------------------------|----------------------------------------------- `node.id` | Node ID | `node.id==2ivku8v2gvtg4` `node.hostname` | Node hostname | `node.hostname!=node-2` `node.role` | Node role (`manager`/`worker`) | `node.role==manager` `node.platform.os` | Node operating system | `node.platform.os==windows` `node.platform.arch` | Node architecture | `node.platform.arch==x86_64` `node.labels` | User-defined node labels | `node.labels.security==high` `engine.labels` | Docker Engine's labels | `engine.labels.operatingsystem==ubuntu-14.04` `engine.labels` apply to Docker Engine labels like operating system, drivers, etc. Swarm administrators add `node.labels` for operational purposes by using the [`node update endpoint`](#operation/NodeUpdate). 
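# Illustrative only (not generated from this specification): placement
# constraints as they might be supplied when creating a service. The service
# name, image, and label values are placeholders; adjust the API version
# prefix (v1.41 here) to match your daemon.
#
#   curl --unix-socket /var/run/docker.sock -H "Content-Type: application/json" \
#     -d '{"Name": "web", "TaskTemplate": {"ContainerSpec": {"Image": "nginx:alpine"}, "Placement": {"Constraints": ["node.role==worker", "node.labels.type==production"]}}}' \
#     http://localhost/v1.41/services/create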
type: "array" items: type: "string" example: - "node.hostname!=node3.corp.example.com" - "node.role!=manager" - "node.labels.type==production" - "node.platform.os==linux" - "node.platform.arch==x86_64" Preferences: description: | Preferences provide a way to make the scheduler aware of factors such as topology. They are provided in order from highest to lowest precedence. type: "array" items: type: "object" properties: Spread: type: "object" properties: SpreadDescriptor: description: | label descriptor, such as `engine.labels.az`. type: "string" example: - Spread: SpreadDescriptor: "node.labels.datacenter" - Spread: SpreadDescriptor: "node.labels.rack" MaxReplicas: description: | Maximum number of replicas for per node (default value is 0, which is unlimited) type: "integer" format: "int64" default: 0 Platforms: description: | Platforms stores all the platforms that the service's image can run on. This field is used in the platform filter for scheduling. If empty, then the platform filter is off, meaning there are no scheduling restrictions. type: "array" items: $ref: "#/definitions/Platform" ForceUpdate: description: | A counter that triggers an update even if no relevant parameters have been changed. type: "integer" Runtime: description: | Runtime is the type of runtime specified for the task executor. type: "string" Networks: description: "Specifies which networks the service should attach to." type: "array" items: $ref: "#/definitions/NetworkAttachmentConfig" LogDriver: description: | Specifies the log driver to use for tasks created from this spec. If not present, the default one for the swarm will be used, finally falling back to the engine default if not specified. type: "object" properties: Name: type: "string" Options: type: "object" additionalProperties: type: "string" TaskState: type: "string" enum: - "new" - "allocated" - "pending" - "assigned" - "accepted" - "preparing" - "ready" - "starting" - "running" - "complete" - "shutdown" - "failed" - "rejected" - "remove" - "orphaned" Task: type: "object" properties: ID: description: "The ID of the task." type: "string" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" UpdatedAt: type: "string" format: "dateTime" Name: description: "Name of the task." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" Spec: $ref: "#/definitions/TaskSpec" ServiceID: description: "The ID of the service this task is part of." type: "string" Slot: type: "integer" NodeID: description: "The ID of the node that this task is on." type: "string" AssignedGenericResources: $ref: "#/definitions/GenericResources" Status: type: "object" properties: Timestamp: type: "string" format: "dateTime" State: $ref: "#/definitions/TaskState" Message: type: "string" Err: type: "string" ContainerStatus: type: "object" properties: ContainerID: type: "string" PID: type: "integer" ExitCode: type: "integer" DesiredState: $ref: "#/definitions/TaskState" JobIteration: description: | If the Service this Task belongs to is a job-mode service, contains the JobIteration of the Service this Task was created for. Absent if the Task was created for a Replicated or Global Service. 
$ref: "#/definitions/ObjectVersion" example: ID: "0kzzo1i0y4jz6027t0k7aezc7" Version: Index: 71 CreatedAt: "2016-06-07T21:07:31.171892745Z" UpdatedAt: "2016-06-07T21:07:31.376370513Z" Spec: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ServiceID: "9mnpnzenvg8p8tdbtq4wvbkcz" Slot: 1 NodeID: "60gvrl6tm78dmak4yl7srz94v" Status: Timestamp: "2016-06-07T21:07:31.290032978Z" State: "running" Message: "started" ContainerStatus: ContainerID: "e5d62702a1b48d01c3e02ca1e0212a250801fa8d67caca0b6f35919ebc12f035" PID: 677 DesiredState: "running" NetworksAttachments: - Network: ID: "4qvuz4ko70xaltuqbt8956gd1" Version: Index: 18 CreatedAt: "2016-06-07T20:31:11.912919752Z" UpdatedAt: "2016-06-07T21:07:29.955277358Z" Spec: Name: "ingress" Labels: com.docker.swarm.internal: "true" DriverConfiguration: {} IPAMOptions: Driver: {} Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" DriverState: Name: "overlay" Options: com.docker.network.driver.overlay.vxlanid_list: "256" IPAMOptions: Driver: Name: "default" Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" Addresses: - "10.255.0.10/16" AssignedGenericResources: - DiscreteResourceSpec: Kind: "SSD" Value: 3 - NamedResourceSpec: Kind: "GPU" Value: "UUID1" - NamedResourceSpec: Kind: "GPU" Value: "UUID2" ServiceSpec: description: "User modifiable configuration for a service." properties: Name: description: "Name of the service." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" TaskTemplate: $ref: "#/definitions/TaskSpec" Mode: description: "Scheduling mode for the service." type: "object" properties: Replicated: type: "object" properties: Replicas: type: "integer" format: "int64" Global: type: "object" ReplicatedJob: description: | The mode used for services with a finite number of tasks that run to a completed state. type: "object" properties: MaxConcurrent: description: | The maximum number of replicas to run simultaneously. type: "integer" format: "int64" default: 1 TotalCompletions: description: | The total number of replicas desired to reach the Completed state. If unset, will default to the value of `MaxConcurrent` type: "integer" format: "int64" GlobalJob: description: | The mode used for services which run a task to the completed state on each valid node. type: "object" UpdateConfig: description: "Specification for the update strategy of the service." type: "object" properties: Parallelism: description: | Maximum number of tasks to be updated in one iteration (0 means unlimited parallelism). type: "integer" format: "int64" Delay: description: "Amount of time between updates, in nanoseconds." type: "integer" format: "int64" FailureAction: description: | Action to take if an updated task fails to run, or stops running during the update. type: "string" enum: - "continue" - "pause" - "rollback" Monitor: description: | Amount of time to monitor each updated task for failures, in nanoseconds. type: "integer" format: "int64" MaxFailureRatio: description: | The fraction of tasks that may fail during an update before the failure action is invoked, specified as a floating point number between 0 and 1. type: "number" default: 0 Order: description: | The order of operations when rolling out an updated task. Either the old task is shut down before the new task is started, or the new task is started before the old task is shut down. 
type: "string" enum: - "stop-first" - "start-first" RollbackConfig: description: "Specification for the rollback strategy of the service." type: "object" properties: Parallelism: description: | Maximum number of tasks to be rolled back in one iteration (0 means unlimited parallelism). type: "integer" format: "int64" Delay: description: | Amount of time between rollback iterations, in nanoseconds. type: "integer" format: "int64" FailureAction: description: | Action to take if an rolled back task fails to run, or stops running during the rollback. type: "string" enum: - "continue" - "pause" Monitor: description: | Amount of time to monitor each rolled back task for failures, in nanoseconds. type: "integer" format: "int64" MaxFailureRatio: description: | The fraction of tasks that may fail during a rollback before the failure action is invoked, specified as a floating point number between 0 and 1. type: "number" default: 0 Order: description: | The order of operations when rolling back a task. Either the old task is shut down before the new task is started, or the new task is started before the old task is shut down. type: "string" enum: - "stop-first" - "start-first" Networks: description: "Specifies which networks the service should attach to." type: "array" items: $ref: "#/definitions/NetworkAttachmentConfig" EndpointSpec: $ref: "#/definitions/EndpointSpec" EndpointPortConfig: type: "object" properties: Name: type: "string" Protocol: type: "string" enum: - "tcp" - "udp" - "sctp" TargetPort: description: "The port inside the container." type: "integer" PublishedPort: description: "The port on the swarm hosts." type: "integer" PublishMode: description: | The mode in which port is published. <p><br /></p> - "ingress" makes the target port accessible on every node, regardless of whether there is a task for the service running on that node or not. - "host" bypasses the routing mesh and publish the port directly on the swarm node where that service is running. type: "string" enum: - "ingress" - "host" default: "ingress" example: "ingress" EndpointSpec: description: "Properties that can be configured to access and load balance a service." type: "object" properties: Mode: description: | The mode of resolution to use for internal load balancing between tasks. type: "string" enum: - "vip" - "dnsrr" default: "vip" Ports: description: | List of exposed ports that this service is accessible on from the outside. Ports can only be provided if `vip` resolution mode is used. type: "array" items: $ref: "#/definitions/EndpointPortConfig" Service: type: "object" properties: ID: type: "string" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" UpdatedAt: type: "string" format: "dateTime" Spec: $ref: "#/definitions/ServiceSpec" Endpoint: type: "object" properties: Spec: $ref: "#/definitions/EndpointSpec" Ports: type: "array" items: $ref: "#/definitions/EndpointPortConfig" VirtualIPs: type: "array" items: type: "object" properties: NetworkID: type: "string" Addr: type: "string" UpdateStatus: description: "The status of a service update." type: "object" properties: State: type: "string" enum: - "updating" - "paused" - "completed" StartedAt: type: "string" format: "dateTime" CompletedAt: type: "string" format: "dateTime" Message: type: "string" ServiceStatus: description: | The status of the service's tasks. Provided only when requested as part of a ServiceList operation. 
type: "object" properties: RunningTasks: description: | The number of tasks for the service currently in the Running state. type: "integer" format: "uint64" example: 7 DesiredTasks: description: | The number of tasks for the service desired to be running. For replicated services, this is the replica count from the service spec. For global services, this is computed by taking count of all tasks for the service with a Desired State other than Shutdown. type: "integer" format: "uint64" example: 10 CompletedTasks: description: | The number of tasks for a job that are in the Completed state. This field must be cross-referenced with the service type, as the value of 0 may mean the service is not in a job mode, or it may mean the job-mode service has no tasks yet Completed. type: "integer" format: "uint64" JobStatus: description: | The status of the service when it is in one of ReplicatedJob or GlobalJob modes. Absent on Replicated and Global mode services. The JobIteration is an ObjectVersion, but unlike the Service's version, does not need to be sent with an update request. type: "object" properties: JobIteration: description: | JobIteration is a value increased each time a Job is executed, successfully or otherwise. "Executed", in this case, means the job as a whole has been started, not that an individual Task has been launched. A job is "Executed" when its ServiceSpec is updated. JobIteration can be used to disambiguate Tasks belonging to different executions of a job. Though JobIteration will increase with each subsequent execution, it may not necessarily increase by 1, and so JobIteration should not be used to $ref: "#/definitions/ObjectVersion" LastExecution: description: | The last time, as observed by the server, that this job was started. type: "string" format: "dateTime" example: ID: "9mnpnzenvg8p8tdbtq4wvbkcz" Version: Index: 19 CreatedAt: "2016-06-07T21:05:51.880065305Z" UpdatedAt: "2016-06-07T21:07:29.962229872Z" Spec: Name: "hopeful_cori" TaskTemplate: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ForceUpdate: 0 Mode: Replicated: Replicas: 1 UpdateConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 RollbackConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 EndpointSpec: Mode: "vip" Ports: - Protocol: "tcp" TargetPort: 6379 PublishedPort: 30001 Endpoint: Spec: Mode: "vip" Ports: - Protocol: "tcp" TargetPort: 6379 PublishedPort: 30001 Ports: - Protocol: "tcp" TargetPort: 6379 PublishedPort: 30001 VirtualIPs: - NetworkID: "4qvuz4ko70xaltuqbt8956gd1" Addr: "10.255.0.2/16" - NetworkID: "4qvuz4ko70xaltuqbt8956gd1" Addr: "10.255.0.3/16" ImageDeleteResponseItem: type: "object" properties: Untagged: description: "The image ID of an image that was untagged" type: "string" Deleted: description: "The image ID of an image that was deleted" type: "string" ServiceUpdateResponse: type: "object" properties: Warnings: description: "Optional warning messages" type: "array" items: type: "string" example: Warning: "unable to pin image doesnotexist:latest to digest: image library/doesnotexist:latest not found" ContainerSummary: type: "array" items: type: "object" properties: Id: description: "The ID of this container" type: "string" x-go-name: "ID" Names: description: "The names that this container has been given" type: "array" items: type: "string" Image: description: "The name of the image used when creating 
this container" type: "string" ImageID: description: "The ID of the image that this container was created from" type: "string" Command: description: "Command to run when starting the container" type: "string" Created: description: "When the container was created" type: "integer" format: "int64" Ports: description: "The ports exposed by this container" type: "array" items: $ref: "#/definitions/Port" SizeRw: description: "The size of files that have been created or changed by this container" type: "integer" format: "int64" SizeRootFs: description: "The total size of all the files in this container" type: "integer" format: "int64" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" State: description: "The state of this container (e.g. `Exited`)" type: "string" Status: description: "Additional human-readable status of this container (e.g. `Exit 0`)" type: "string" HostConfig: type: "object" properties: NetworkMode: type: "string" NetworkSettings: description: "A summary of the container's network settings" type: "object" properties: Networks: type: "object" additionalProperties: $ref: "#/definitions/EndpointSettings" Mounts: type: "array" items: $ref: "#/definitions/Mount" Driver: description: "Driver represents a driver (network, logging, secrets)." type: "object" required: [Name] properties: Name: description: "Name of the driver." type: "string" x-nullable: false example: "some-driver" Options: description: "Key/value map of driver-specific options." type: "object" x-nullable: false additionalProperties: type: "string" example: OptionA: "value for driver-specific option A" OptionB: "value for driver-specific option B" SecretSpec: type: "object" properties: Name: description: "User-defined name of the secret." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" example: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Data: description: | Base64-url-safe-encoded ([RFC 4648](https://tools.ietf.org/html/rfc4648#section-5)) data to store as secret. This field is only used to _create_ a secret, and is not returned by other endpoints. type: "string" example: "" Driver: description: | Name of the secrets driver used to fetch the secret's value from an external secret store. $ref: "#/definitions/Driver" Templating: description: | Templating driver, if applicable Templating controls whether and how to evaluate the config payload as a template. If no driver is set, no templating is used. $ref: "#/definitions/Driver" Secret: type: "object" properties: ID: type: "string" example: "blt1owaxmitz71s9v5zh81zun" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" example: "2017-07-20T13:55:28.678958722Z" UpdatedAt: type: "string" format: "dateTime" example: "2017-07-20T13:55:28.678958722Z" Spec: $ref: "#/definitions/SecretSpec" ConfigSpec: type: "object" properties: Name: description: "User-defined name of the config." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" Data: description: | Base64-url-safe-encoded ([RFC 4648](https://tools.ietf.org/html/rfc4648#section-5)) config data. type: "string" Templating: description: | Templating driver, if applicable Templating controls whether and how to evaluate the config payload as a template. If no driver is set, no templating is used. 
$ref: "#/definitions/Driver" Config: type: "object" properties: ID: type: "string" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" UpdatedAt: type: "string" format: "dateTime" Spec: $ref: "#/definitions/ConfigSpec" ContainerState: description: | ContainerState stores container's running state. It's part of ContainerJSONBase and will be returned by the "inspect" command. type: "object" properties: Status: description: | String representation of the container state. Can be one of "created", "running", "paused", "restarting", "removing", "exited", or "dead". type: "string" enum: ["created", "running", "paused", "restarting", "removing", "exited", "dead"] example: "running" Running: description: | Whether this container is running. Note that a running container can be _paused_. The `Running` and `Paused` booleans are not mutually exclusive: When pausing a container (on Linux), the freezer cgroup is used to suspend all processes in the container. Freezing the process requires the process to be running. As a result, paused containers are both `Running` _and_ `Paused`. Use the `Status` field instead to determine if a container's state is "running". type: "boolean" example: true Paused: description: "Whether this container is paused." type: "boolean" example: false Restarting: description: "Whether this container is restarting." type: "boolean" example: false OOMKilled: description: | Whether this container has been killed because it ran out of memory. type: "boolean" example: false Dead: type: "boolean" example: false Pid: description: "The process ID of this container" type: "integer" example: 1234 ExitCode: description: "The last exit code of this container" type: "integer" example: 0 Error: type: "string" StartedAt: description: "The time when this container was last started." type: "string" example: "2020-01-06T09:06:59.461876391Z" FinishedAt: description: "The time when this container last exited." type: "string" example: "2020-01-06T09:07:59.461876391Z" Health: x-nullable: true $ref: "#/definitions/Health" SystemVersion: type: "object" description: | Response of Engine API: GET "/version" properties: Platform: type: "object" required: [Name] properties: Name: type: "string" Components: type: "array" description: | Information about system components items: type: "object" x-go-name: ComponentVersion required: [Name, Version] properties: Name: description: | Name of the component type: "string" example: "Engine" Version: description: | Version of the component type: "string" x-nullable: false example: "19.03.12" Details: description: | Key/value pairs of strings with additional information about the component. These values are intended for informational purposes only, and their content is not defined, and not part of the API specification. These messages can be printed by the client as information to the user. type: "object" x-nullable: true Version: description: "The version of the daemon" type: "string" example: "19.03.12" ApiVersion: description: | The default (and highest) API version that is supported by the daemon type: "string" example: "1.40" MinAPIVersion: description: | The minimum API version that is supported by the daemon type: "string" example: "1.12" GitCommit: description: | The Git commit of the source code that was used to build the daemon type: "string" example: "48a66213fe" GoVersion: description: | The version Go used to compile the daemon, and the version of the Go runtime in use. 
type: "string" example: "go1.13.14" Os: description: | The operating system that the daemon is running on ("linux" or "windows") type: "string" example: "linux" Arch: description: | The architecture that the daemon is running on type: "string" example: "amd64" KernelVersion: description: | The kernel version (`uname -r`) that the daemon is running on. This field is omitted when empty. type: "string" example: "4.19.76-linuxkit" Experimental: description: | Indicates if the daemon is started with experimental features enabled. This field is omitted when empty / false. type: "boolean" example: true BuildTime: description: | The date and time that the daemon was compiled. type: "string" example: "2020-06-22T15:49:27.000000000+00:00" SystemInfo: type: "object" properties: ID: description: | Unique identifier of the daemon. <p><br /></p> > **Note**: The format of the ID itself is not part of the API, and > should not be considered stable. type: "string" example: "7TRN:IPZB:QYBB:VPBQ:UMPP:KARE:6ZNR:XE6T:7EWV:PKF4:ZOJD:TPYS" Containers: description: "Total number of containers on the host." type: "integer" example: 14 ContainersRunning: description: | Number of containers with status `"running"`. type: "integer" example: 3 ContainersPaused: description: | Number of containers with status `"paused"`. type: "integer" example: 1 ContainersStopped: description: | Number of containers with status `"stopped"`. type: "integer" example: 10 Images: description: | Total number of images on the host. Both _tagged_ and _untagged_ (dangling) images are counted. type: "integer" example: 508 Driver: description: "Name of the storage driver in use." type: "string" example: "overlay2" DriverStatus: description: | Information specific to the storage driver, provided as "label" / "value" pairs. This information is provided by the storage driver, and formatted in a way consistent with the output of `docker info` on the command line. <p><br /></p> > **Note**: The information returned in this field, including the > formatting of values and labels, should not be considered stable, > and may change without notice. type: "array" items: type: "array" items: type: "string" example: - ["Backing Filesystem", "extfs"] - ["Supports d_type", "true"] - ["Native Overlay Diff", "true"] DockerRootDir: description: | Root directory of persistent Docker state. Defaults to `/var/lib/docker` on Linux, and `C:\ProgramData\docker` on Windows. type: "string" example: "/var/lib/docker" Plugins: $ref: "#/definitions/PluginsInfo" MemoryLimit: description: "Indicates if the host has memory limit support enabled." type: "boolean" example: true SwapLimit: description: "Indicates if the host has memory swap limit support enabled." type: "boolean" example: true KernelMemory: description: | Indicates if the host has kernel memory limit support enabled. <p><br /></p> > **Deprecated**: This field is deprecated as the kernel 5.4 deprecated > `kmem.limit_in_bytes`. type: "boolean" example: true CpuCfsPeriod: description: | Indicates if CPU CFS(Completely Fair Scheduler) period is supported by the host. type: "boolean" example: true CpuCfsQuota: description: | Indicates if CPU CFS(Completely Fair Scheduler) quota is supported by the host. type: "boolean" example: true CPUShares: description: | Indicates if CPU Shares limiting is supported by the host. type: "boolean" example: true CPUSet: description: | Indicates if CPUsets (cpuset.cpus, cpuset.mems) are supported by the host. 
See [cpuset(7)](https://www.kernel.org/doc/Documentation/cgroup-v1/cpusets.txt) type: "boolean" example: true PidsLimit: description: "Indicates if the host kernel has PID limit support enabled." type: "boolean" example: true OomKillDisable: description: "Indicates if OOM killer disable is supported on the host." type: "boolean" IPv4Forwarding: description: "Indicates IPv4 forwarding is enabled." type: "boolean" example: true BridgeNfIptables: description: "Indicates if `bridge-nf-call-iptables` is available on the host." type: "boolean" example: true BridgeNfIp6tables: description: "Indicates if `bridge-nf-call-ip6tables` is available on the host." type: "boolean" example: true Debug: description: | Indicates if the daemon is running in debug-mode / with debug-level logging enabled. type: "boolean" example: true NFd: description: | The total number of file Descriptors in use by the daemon process. This information is only returned if debug-mode is enabled. type: "integer" example: 64 NGoroutines: description: | The number of goroutines that currently exist. This information is only returned if debug-mode is enabled. type: "integer" example: 174 SystemTime: description: | Current system-time in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" example: "2017-08-08T20:28:29.06202363Z" LoggingDriver: description: | The logging driver to use as a default for new containers. type: "string" CgroupDriver: description: | The driver to use for managing cgroups. type: "string" enum: ["cgroupfs", "systemd", "none"] default: "cgroupfs" example: "cgroupfs" CgroupVersion: description: | The version of the cgroup. type: "string" enum: ["1", "2"] default: "1" example: "1" NEventsListener: description: "Number of event listeners subscribed." type: "integer" example: 30 KernelVersion: description: | Kernel version of the host. On Linux, this information obtained from `uname`. On Windows this information is queried from the <kbd>HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\</kbd> registry value, for example _"10.0 14393 (14393.1198.amd64fre.rs1_release_sec.170427-1353)"_. type: "string" example: "4.9.38-moby" OperatingSystem: description: | Name of the host's operating system, for example: "Ubuntu 16.04.2 LTS" or "Windows Server 2016 Datacenter" type: "string" example: "Alpine Linux v3.5" OSVersion: description: | Version of the host's operating system <p><br /></p> > **Note**: The information returned in this field, including its > very existence, and the formatting of values, should not be considered > stable, and may change without notice. type: "string" example: "16.04" OSType: description: | Generic type of the operating system of the host, as returned by the Go runtime (`GOOS`). Currently returned values are "linux" and "windows". A full list of possible values can be found in the [Go documentation](https://golang.org/doc/install/source#environment). type: "string" example: "linux" Architecture: description: | Hardware architecture of the host, as returned by the Go runtime (`GOARCH`). A full list of possible values can be found in the [Go documentation](https://golang.org/doc/install/source#environment). type: "string" example: "x86_64" NCPU: description: | The number of logical CPUs usable by the daemon. The number of available CPUs is checked by querying the operating system when the daemon starts. Changes to operating system CPU allocation after the daemon is started are not reflected. 
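# Illustrative only (not generated from this specification): these fields are
# returned by the `/info` endpoint; for example, reading NCPU with jq
# (adjust the API version prefix to match your daemon):
#
#   curl --unix-socket /var/run/docker.sock http://localhost/v1.41/info | jq .NCPU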
type: "integer" example: 4 MemTotal: description: | Total amount of physical memory available on the host, in bytes. type: "integer" format: "int64" example: 2095882240 IndexServerAddress: description: | Address / URL of the index server that is used for image search, and as a default for user authentication for Docker Hub and Docker Cloud. default: "https://index.docker.io/v1/" type: "string" example: "https://index.docker.io/v1/" RegistryConfig: $ref: "#/definitions/RegistryServiceConfig" GenericResources: $ref: "#/definitions/GenericResources" HttpProxy: description: | HTTP-proxy configured for the daemon. This value is obtained from the [`HTTP_PROXY`](https://www.gnu.org/software/wget/manual/html_node/Proxies.html) environment variable. Credentials ([user info component](https://tools.ietf.org/html/rfc3986#section-3.2.1)) in the proxy URL are masked in the API response. Containers do not automatically inherit this configuration. type: "string" example: "http://xxxxx:[email protected]:8080" HttpsProxy: description: | HTTPS-proxy configured for the daemon. This value is obtained from the [`HTTPS_PROXY`](https://www.gnu.org/software/wget/manual/html_node/Proxies.html) environment variable. Credentials ([user info component](https://tools.ietf.org/html/rfc3986#section-3.2.1)) in the proxy URL are masked in the API response. Containers do not automatically inherit this configuration. type: "string" example: "https://xxxxx:[email protected]:4443" NoProxy: description: | Comma-separated list of domain extensions for which no proxy should be used. This value is obtained from the [`NO_PROXY`](https://www.gnu.org/software/wget/manual/html_node/Proxies.html) environment variable. Containers do not automatically inherit this configuration. type: "string" example: "*.local, 169.254/16" Name: description: "Hostname of the host." type: "string" example: "node5.corp.example.com" Labels: description: | User-defined labels (key/value metadata) as set on the daemon. <p><br /></p> > **Note**: When part of a Swarm, nodes can both have _daemon_ labels, > set through the daemon configuration, and _node_ labels, set from a > manager node in the Swarm. Node labels are not included in this > field. Node labels can be retrieved using the `/nodes/(id)` endpoint > on a manager node in the Swarm. type: "array" items: type: "string" example: ["storage=ssd", "production"] ExperimentalBuild: description: | Indicates if experimental features are enabled on the daemon. type: "boolean" example: true ServerVersion: description: | Version string of the daemon. > **Note**: the [standalone Swarm API](https://docs.docker.com/swarm/swarm-api/) > returns the Swarm version instead of the daemon version, for example > `swarm/1.2.8`. type: "string" example: "17.06.0-ce" ClusterStore: description: | URL of the distributed storage backend. The storage backend is used for multihost networking (to store network and endpoint information) and by the node discovery mechanism. <p><br /></p> > **Deprecated**: This field is only propagated when using standalone Swarm > mode, and overlay networking using an external k/v store. Overlay > networks with Swarm mode enabled use the built-in raft store, and > this field will be empty. type: "string" example: "consul://consul.corp.example.com:8600/some/path" ClusterAdvertise: description: | The network endpoint that the Engine advertises for the purpose of node discovery. ClusterAdvertise is a `host:port` combination on which the daemon is reachable by other hosts. 
<p><br /></p> > **Deprecated**: This field is only propagated when using standalone Swarm > mode, and overlay networking using an external k/v store. Overlay > networks with Swarm mode enabled use the built-in raft store, and > this field will be empty. type: "string" example: "node5.corp.example.com:8000" Runtimes: description: | List of [OCI compliant](https://github.com/opencontainers/runtime-spec) runtimes configured on the daemon. Keys hold the "name" used to reference the runtime. The Docker daemon relies on an OCI compliant runtime (invoked via the `containerd` daemon) as its interface to the Linux kernel namespaces, cgroups, and SELinux. The default runtime is `runc`, and automatically configured. Additional runtimes can be configured by the user and will be listed here. type: "object" additionalProperties: $ref: "#/definitions/Runtime" default: runc: path: "runc" example: runc: path: "runc" runc-master: path: "/go/bin/runc" custom: path: "/usr/local/bin/my-oci-runtime" runtimeArgs: ["--debug", "--systemd-cgroup=false"] DefaultRuntime: description: | Name of the default OCI runtime that is used when starting containers. The default can be overridden per-container at create time. type: "string" default: "runc" example: "runc" Swarm: $ref: "#/definitions/SwarmInfo" LiveRestoreEnabled: description: | Indicates if live restore is enabled. If enabled, containers are kept running when the daemon is shutdown or upon daemon start if running containers are detected. type: "boolean" default: false example: false Isolation: description: | Represents the isolation technology to use as a default for containers. The supported values are platform-specific. If no isolation value is specified on daemon start, on Windows client, the default is `hyperv`, and on Windows server, the default is `process`. This option is currently not used on other platforms. default: "default" type: "string" enum: - "default" - "hyperv" - "process" InitBinary: description: | Name and, optional, path of the `docker-init` binary. If the path is omitted, the daemon searches the host's `$PATH` for the binary and uses the first result. type: "string" example: "docker-init" ContainerdCommit: $ref: "#/definitions/Commit" RuncCommit: $ref: "#/definitions/Commit" InitCommit: $ref: "#/definitions/Commit" SecurityOptions: description: | List of security features that are enabled on the daemon, such as apparmor, seccomp, SELinux, user-namespaces (userns), and rootless. Additional configuration options for each security feature may be present, and are included as a comma-separated list of key/value pairs. type: "array" items: type: "string" example: - "name=apparmor" - "name=seccomp,profile=default" - "name=selinux" - "name=userns" - "name=rootless" ProductLicense: description: | Reports a summary of the product license on the daemon. If a commercial license has been applied to the daemon, information such as number of nodes, and expiration are included. type: "string" example: "Community Engine" DefaultAddressPools: description: | List of custom default address pools for local networks, which can be specified in the daemon.json file or dockerd option. Example: a Base "10.10.0.0/16" with Size 24 will define the set of 256 10.10.[0-255].0/24 address pools. 
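# Illustrative sketch (not generated from this specification): the daemon
# configuration corresponding to the description above, in
# /etc/docker/daemon.json, using the example subnet and size from the
# description:
#
#   { "default-address-pools": [ { "base": "10.10.0.0/16", "size": 24 } ] }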
type: "array" items: type: "object" properties: Base: description: "The network address in CIDR format" type: "string" example: "10.10.0.0/16" Size: description: "The network pool size" type: "integer" example: "24" Warnings: description: | List of warnings / informational messages about missing features, or issues related to the daemon configuration. These messages can be printed by the client as information to the user. type: "array" items: type: "string" example: - "WARNING: No memory limit support" - "WARNING: bridge-nf-call-iptables is disabled" - "WARNING: bridge-nf-call-ip6tables is disabled" # PluginsInfo is a temp struct holding Plugins name # registered with docker daemon. It is used by Info struct PluginsInfo: description: | Available plugins per type. <p><br /></p> > **Note**: Only unmanaged (V1) plugins are included in this list. > V1 plugins are "lazily" loaded, and are not returned in this list > if there is no resource using the plugin. type: "object" properties: Volume: description: "Names of available volume-drivers, and network-driver plugins." type: "array" items: type: "string" example: ["local"] Network: description: "Names of available network-drivers, and network-driver plugins." type: "array" items: type: "string" example: ["bridge", "host", "ipvlan", "macvlan", "null", "overlay"] Authorization: description: "Names of available authorization plugins." type: "array" items: type: "string" example: ["img-authz-plugin", "hbm"] Log: description: "Names of available logging-drivers, and logging-driver plugins." type: "array" items: type: "string" example: ["awslogs", "fluentd", "gcplogs", "gelf", "journald", "json-file", "logentries", "splunk", "syslog"] RegistryServiceConfig: description: | RegistryServiceConfig stores daemon registry services configuration. type: "object" x-nullable: true properties: AllowNondistributableArtifactsCIDRs: description: | List of IP ranges to which nondistributable artifacts can be pushed, using the CIDR syntax [RFC 4632](https://tools.ietf.org/html/4632). Some images (for example, Windows base images) contain artifacts whose distribution is restricted by license. When these images are pushed to a registry, restricted artifacts are not included. This configuration override this behavior, and enables the daemon to push nondistributable artifacts to all registries whose resolved IP address is within the subnet described by the CIDR syntax. This option is useful when pushing images containing nondistributable artifacts to a registry on an air-gapped network so hosts on that network can pull the images without connecting to another server. > **Warning**: Nondistributable artifacts typically have restrictions > on how and where they can be distributed and shared. Only use this > feature to push artifacts to private registries and ensure that you > are in compliance with any terms that cover redistributing > nondistributable artifacts. type: "array" items: type: "string" example: ["::1/128", "127.0.0.0/8"] AllowNondistributableArtifactsHostnames: description: | List of registry hostnames to which nondistributable artifacts can be pushed, using the format `<hostname>[:<port>]` or `<IP address>[:<port>]`. Some images (for example, Windows base images) contain artifacts whose distribution is restricted by license. When these images are pushed to a registry, restricted artifacts are not included. This configuration override this behavior for the specified registries. 
This option is useful when pushing images containing nondistributable artifacts to a registry on an air-gapped network so hosts on that network can pull the images without connecting to another server. > **Warning**: Nondistributable artifacts typically have restrictions > on how and where they can be distributed and shared. Only use this > feature to push artifacts to private registries and ensure that you > are in compliance with any terms that cover redistributing > nondistributable artifacts. type: "array" items: type: "string" example: ["registry.internal.corp.example.com:3000", "[2001:db8:a0b:12f0::1]:443"] InsecureRegistryCIDRs: description: | List of IP ranges of insecure registries, using the CIDR syntax ([RFC 4632](https://tools.ietf.org/html/4632)). Insecure registries accept un-encrypted (HTTP) and/or untrusted (HTTPS with certificates from unknown CAs) communication. By default, local registries (`127.0.0.0/8`) are configured as insecure. All other registries are secure. Communicating with an insecure registry is not possible if the daemon assumes that registry is secure. This configuration override this behavior, insecure communication with registries whose resolved IP address is within the subnet described by the CIDR syntax. Registries can also be marked insecure by hostname. Those registries are listed under `IndexConfigs` and have their `Secure` field set to `false`. > **Warning**: Using this option can be useful when running a local > registry, but introduces security vulnerabilities. This option > should therefore ONLY be used for testing purposes. For increased > security, users should add their CA to their system's list of trusted > CAs instead of enabling this option. type: "array" items: type: "string" example: ["::1/128", "127.0.0.0/8"] IndexConfigs: type: "object" additionalProperties: $ref: "#/definitions/IndexInfo" example: "127.0.0.1:5000": "Name": "127.0.0.1:5000" "Mirrors": [] "Secure": false "Official": false "[2001:db8:a0b:12f0::1]:80": "Name": "[2001:db8:a0b:12f0::1]:80" "Mirrors": [] "Secure": false "Official": false "docker.io": Name: "docker.io" Mirrors: ["https://hub-mirror.corp.example.com:5000/"] Secure: true Official: true "registry.internal.corp.example.com:3000": Name: "registry.internal.corp.example.com:3000" Mirrors: [] Secure: false Official: false Mirrors: description: | List of registry URLs that act as a mirror for the official (`docker.io`) registry. type: "array" items: type: "string" example: - "https://hub-mirror.corp.example.com:5000/" - "https://[2001:db8:a0b:12f0::1]/" IndexInfo: description: IndexInfo contains information about a registry. type: "object" x-nullable: true properties: Name: description: | Name of the registry, such as "docker.io". type: "string" example: "docker.io" Mirrors: description: | List of mirrors, expressed as URIs. type: "array" items: type: "string" example: - "https://hub-mirror.corp.example.com:5000/" - "https://registry-2.docker.io/" - "https://registry-3.docker.io/" Secure: description: | Indicates if the registry is part of the list of insecure registries. If `false`, the registry is insecure. Insecure registries accept un-encrypted (HTTP) and/or untrusted (HTTPS with certificates from unknown CAs) communication. > **Warning**: Insecure registries can be useful when running a local > registry. However, because its use creates security vulnerabilities > it should ONLY be enabled for testing purposes. 
For increased > security, users should add their CA to their system's list of > trusted CAs instead of enabling this option. type: "boolean" example: true Official: description: | Indicates whether this is an official registry (i.e., Docker Hub / docker.io) type: "boolean" example: true Runtime: description: | Runtime describes an [OCI compliant](https://github.com/opencontainers/runtime-spec) runtime. The runtime is invoked by the daemon via the `containerd` daemon. OCI runtimes act as an interface to the Linux kernel namespaces, cgroups, and SELinux. type: "object" properties: path: description: | Name and, optional, path, of the OCI executable binary. If the path is omitted, the daemon searches the host's `$PATH` for the binary and uses the first result. type: "string" example: "/usr/local/bin/my-oci-runtime" runtimeArgs: description: | List of command-line arguments to pass to the runtime when invoked. type: "array" x-nullable: true items: type: "string" example: ["--debug", "--systemd-cgroup=false"] Commit: description: | Commit holds the Git-commit (SHA1) that a binary was built from, as reported in the version-string of external tools, such as `containerd`, or `runC`. type: "object" properties: ID: description: "Actual commit ID of external tool." type: "string" example: "cfb82a876ecc11b5ca0977d1733adbe58599088a" Expected: description: | Commit ID of external tool expected by dockerd as set at build time. type: "string" example: "2d41c047c83e09a6d61d464906feb2a2f3c52aa4" SwarmInfo: description: | Represents generic information about swarm. type: "object" properties: NodeID: description: "Unique identifier of for this node in the swarm." type: "string" default: "" example: "k67qz4598weg5unwwffg6z1m1" NodeAddr: description: | IP address at which this node can be reached by other nodes in the swarm. type: "string" default: "" example: "10.0.0.46" LocalNodeState: $ref: "#/definitions/LocalNodeState" ControlAvailable: type: "boolean" default: false example: true Error: type: "string" default: "" RemoteManagers: description: | List of ID's and addresses of other managers in the swarm. type: "array" default: null x-nullable: true items: $ref: "#/definitions/PeerNode" example: - NodeID: "71izy0goik036k48jg985xnds" Addr: "10.0.0.158:2377" - NodeID: "79y6h1o4gv8n120drcprv5nmc" Addr: "10.0.0.159:2377" - NodeID: "k67qz4598weg5unwwffg6z1m1" Addr: "10.0.0.46:2377" Nodes: description: "Total number of nodes in the swarm." type: "integer" x-nullable: true example: 4 Managers: description: "Total number of managers in the swarm." type: "integer" x-nullable: true example: 3 Cluster: $ref: "#/definitions/ClusterInfo" LocalNodeState: description: "Current local status of this node." type: "string" default: "" enum: - "" - "inactive" - "pending" - "active" - "error" - "locked" example: "active" PeerNode: description: "Represents a peer-node in the swarm" properties: NodeID: description: "Unique identifier of for this node in the swarm." type: "string" Addr: description: | IP address and ports at which this node can be reached. type: "string" NetworkAttachmentConfig: description: | Specifies how a service should be attached to a particular network. type: "object" properties: Target: description: | The target network for attachment. Must be a network name or ID. type: "string" Aliases: description: | Discoverable alternate names for the service on this network. type: "array" items: type: "string" DriverOpts: description: | Driver attachment options for the network target. 
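# Illustrative sketch (not generated from this specification): a service's
# Networks list using this structure. The network name and alias are
# placeholders.
#
#   "Networks": [
#     { "Target": "my-overlay", "Aliases": ["db"] }
#   ]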
type: "object" additionalProperties: type: "string" paths: /containers/json: get: summary: "List containers" description: | Returns a list of containers. For details on the format, see the [inspect endpoint](#operation/ContainerInspect). Note that it uses a different, smaller representation of a container than inspecting a single container. For example, the list of linked containers is not propagated . operationId: "ContainerList" produces: - "application/json" parameters: - name: "all" in: "query" description: | Return all containers. By default, only running containers are shown. type: "boolean" default: false - name: "limit" in: "query" description: | Return this number of most recently created containers, including non-running ones. type: "integer" - name: "size" in: "query" description: | Return the size of container as fields `SizeRw` and `SizeRootFs`. type: "boolean" default: false - name: "filters" in: "query" description: | Filters to process on the container list, encoded as JSON (a `map[string][]string`). For example, `{"status": ["paused"]}` will only return paused containers. Available filters: - `ancestor`=(`<image-name>[:<tag>]`, `<image id>`, or `<image@digest>`) - `before`=(`<container id>` or `<container name>`) - `expose`=(`<port>[/<proto>]`|`<startport-endport>/[<proto>]`) - `exited=<int>` containers with exit code of `<int>` - `health`=(`starting`|`healthy`|`unhealthy`|`none`) - `id=<ID>` a container's ID - `isolation=`(`default`|`process`|`hyperv`) (Windows daemon only) - `is-task=`(`true`|`false`) - `label=key` or `label="key=value"` of a container label - `name=<name>` a container's name - `network`=(`<network id>` or `<network name>`) - `publish`=(`<port>[/<proto>]`|`<startport-endport>/[<proto>]`) - `since`=(`<container id>` or `<container name>`) - `status=`(`created`|`restarting`|`running`|`removing`|`paused`|`exited`|`dead`) - `volume`=(`<volume name>` or `<mount point destination>`) type: "string" responses: 200: description: "no error" schema: $ref: "#/definitions/ContainerSummary" examples: application/json: - Id: "8dfafdbc3a40" Names: - "/boring_feynman" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 1" Created: 1367854155 State: "Exited" Status: "Exit 0" Ports: - PrivatePort: 2222 PublicPort: 3333 Type: "tcp" Labels: com.example.vendor: "Acme" com.example.license: "GPL" com.example.version: "1.0" SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "2cdc4edb1ded3631c81f57966563e5c8525b81121bb3706a9a9a3ae102711f3f" Gateway: "172.17.0.1" IPAddress: "172.17.0.2" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:02" Mounts: - Name: "fac362...80535" Source: "/data" Destination: "/data" Driver: "local" Mode: "ro,Z" RW: false Propagation: "" - Id: "9cd87474be90" Names: - "/coolName" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 222222" Created: 1367854155 State: "Exited" Status: "Exit 0" Ports: [] Labels: {} SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "88eaed7b37b38c2a3f0c4bc796494fdf51b270c2d22656412a2ca5d559a64d7a" Gateway: "172.17.0.1" IPAddress: "172.17.0.8" IPPrefixLen: 16 IPv6Gateway: "" 
GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:08" Mounts: [] - Id: "3176a2479c92" Names: - "/sleepy_dog" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 3333333333333333" Created: 1367854154 State: "Exited" Status: "Exit 0" Ports: [] Labels: {} SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "8b27c041c30326d59cd6e6f510d4f8d1d570a228466f956edf7815508f78e30d" Gateway: "172.17.0.1" IPAddress: "172.17.0.6" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:06" Mounts: [] - Id: "4cb07b47f9fb" Names: - "/running_cat" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 444444444444444444444444444444444" Created: 1367854152 State: "Exited" Status: "Exit 0" Ports: [] Labels: {} SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "d91c7b2f0644403d7ef3095985ea0e2370325cd2332ff3a3225c4247328e66e9" Gateway: "172.17.0.1" IPAddress: "172.17.0.5" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:05" Mounts: [] 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Container"] /containers/create: post: summary: "Create a container" operationId: "ContainerCreate" consumes: - "application/json" - "application/octet-stream" produces: - "application/json" parameters: - name: "name" in: "query" description: | Assign the specified name to the container. Must match `/?[a-zA-Z0-9][a-zA-Z0-9_.-]+`. 
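# Illustrative only (not generated from this specification): creating and
# naming a container through this endpoint. The image, command, and name are
# placeholders; adjust the API version prefix to match your daemon.
#
#   curl --unix-socket /var/run/docker.sock -H "Content-Type: application/json" \
#     -d '{"Image": "ubuntu", "Cmd": ["date"]}' \
#     'http://localhost/v1.41/containers/create?name=my-container'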
type: "string" pattern: "^/?[a-zA-Z0-9][a-zA-Z0-9_.-]+$" - name: "body" in: "body" description: "Container to create" schema: allOf: - $ref: "#/definitions/ContainerConfig" - type: "object" properties: HostConfig: $ref: "#/definitions/HostConfig" NetworkingConfig: $ref: "#/definitions/NetworkingConfig" example: Hostname: "" Domainname: "" User: "" AttachStdin: false AttachStdout: true AttachStderr: true Tty: false OpenStdin: false StdinOnce: false Env: - "FOO=bar" - "BAZ=quux" Cmd: - "date" Entrypoint: "" Image: "ubuntu" Labels: com.example.vendor: "Acme" com.example.license: "GPL" com.example.version: "1.0" Volumes: /volumes/data: {} WorkingDir: "" NetworkDisabled: false MacAddress: "12:34:56:78:9a:bc" ExposedPorts: 22/tcp: {} StopSignal: "SIGTERM" StopTimeout: 10 HostConfig: Binds: - "/tmp:/tmp" Links: - "redis3:redis" Memory: 0 MemorySwap: 0 MemoryReservation: 0 KernelMemory: 0 NanoCpus: 500000 CpuPercent: 80 CpuShares: 512 CpuPeriod: 100000 CpuRealtimePeriod: 1000000 CpuRealtimeRuntime: 10000 CpuQuota: 50000 CpusetCpus: "0,1" CpusetMems: "0,1" MaximumIOps: 0 MaximumIOBps: 0 BlkioWeight: 300 BlkioWeightDevice: - {} BlkioDeviceReadBps: - {} BlkioDeviceReadIOps: - {} BlkioDeviceWriteBps: - {} BlkioDeviceWriteIOps: - {} DeviceRequests: - Driver: "nvidia" Count: -1 DeviceIDs": ["0", "1", "GPU-fef8089b-4820-abfc-e83e-94318197576e"] Capabilities: [["gpu", "nvidia", "compute"]] Options: property1: "string" property2: "string" MemorySwappiness: 60 OomKillDisable: false OomScoreAdj: 500 PidMode: "" PidsLimit: 0 PortBindings: 22/tcp: - HostPort: "11022" PublishAllPorts: false Privileged: false ReadonlyRootfs: false Dns: - "8.8.8.8" DnsOptions: - "" DnsSearch: - "" VolumesFrom: - "parent" - "other:ro" CapAdd: - "NET_ADMIN" CapDrop: - "MKNOD" GroupAdd: - "newgroup" RestartPolicy: Name: "" MaximumRetryCount: 0 AutoRemove: true NetworkMode: "bridge" Devices: [] Ulimits: - {} LogConfig: Type: "json-file" Config: {} SecurityOpt: [] StorageOpt: {} CgroupParent: "" VolumeDriver: "" ShmSize: 67108864 NetworkingConfig: EndpointsConfig: isolated_nw: IPAMConfig: IPv4Address: "172.20.30.33" IPv6Address: "2001:db8:abcd::3033" LinkLocalIPs: - "169.254.34.68" - "fe80::3468" Links: - "container_1" - "container_2" Aliases: - "server_x" - "server_y" required: true responses: 201: description: "Container created successfully" schema: type: "object" title: "ContainerCreateResponse" description: "OK response to ContainerCreate operation" required: [Id, Warnings] properties: Id: description: "The ID of the created container" type: "string" x-nullable: false Warnings: description: "Warnings encountered when creating the container" type: "array" x-nullable: false items: type: "string" examples: application/json: Id: "e90e34656806" Warnings: [] 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such image" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such image: c2ada9df5af8" 409: description: "conflict" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Container"] /containers/{id}/json: get: summary: "Inspect a container" description: "Return low-level information about a container." 
operationId: "ContainerInspect" produces: - "application/json" responses: 200: description: "no error" schema: type: "object" title: "ContainerInspectResponse" properties: Id: description: "The ID of the container" type: "string" Created: description: "The time the container was created" type: "string" Path: description: "The path to the command being run" type: "string" Args: description: "The arguments to the command being run" type: "array" items: type: "string" State: x-nullable: true $ref: "#/definitions/ContainerState" Image: description: "The container's image ID" type: "string" ResolvConfPath: type: "string" HostnamePath: type: "string" HostsPath: type: "string" LogPath: type: "string" Name: type: "string" RestartCount: type: "integer" Driver: type: "string" Platform: type: "string" MountLabel: type: "string" ProcessLabel: type: "string" AppArmorProfile: type: "string" ExecIDs: description: "IDs of exec instances that are running in the container." type: "array" items: type: "string" x-nullable: true HostConfig: $ref: "#/definitions/HostConfig" GraphDriver: $ref: "#/definitions/GraphDriverData" SizeRw: description: | The size of files that have been created or changed by this container. type: "integer" format: "int64" SizeRootFs: description: "The total size of all the files in this container." type: "integer" format: "int64" Mounts: type: "array" items: $ref: "#/definitions/MountPoint" Config: $ref: "#/definitions/ContainerConfig" NetworkSettings: $ref: "#/definitions/NetworkSettings" examples: application/json: AppArmorProfile: "" Args: - "-c" - "exit 9" Config: AttachStderr: true AttachStdin: false AttachStdout: true Cmd: - "/bin/sh" - "-c" - "exit 9" Domainname: "" Env: - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" Healthcheck: Test: ["CMD-SHELL", "exit 0"] Hostname: "ba033ac44011" Image: "ubuntu" Labels: com.example.vendor: "Acme" com.example.license: "GPL" com.example.version: "1.0" MacAddress: "" NetworkDisabled: false OpenStdin: false StdinOnce: false Tty: false User: "" Volumes: /volumes/data: {} WorkingDir: "" StopSignal: "SIGTERM" StopTimeout: 10 Created: "2015-01-06T15:47:31.485331387Z" Driver: "devicemapper" ExecIDs: - "b35395de42bc8abd327f9dd65d913b9ba28c74d2f0734eeeae84fa1c616a0fca" - "3fc1232e5cd20c8de182ed81178503dc6437f4e7ef12b52cc5e8de020652f1c4" HostConfig: MaximumIOps: 0 MaximumIOBps: 0 BlkioWeight: 0 BlkioWeightDevice: - {} BlkioDeviceReadBps: - {} BlkioDeviceWriteBps: - {} BlkioDeviceReadIOps: - {} BlkioDeviceWriteIOps: - {} ContainerIDFile: "" CpusetCpus: "" CpusetMems: "" CpuPercent: 80 CpuShares: 0 CpuPeriod: 100000 CpuRealtimePeriod: 1000000 CpuRealtimeRuntime: 10000 Devices: [] DeviceRequests: - Driver: "nvidia" Count: -1 DeviceIDs": ["0", "1", "GPU-fef8089b-4820-abfc-e83e-94318197576e"] Capabilities: [["gpu", "nvidia", "compute"]] Options: property1: "string" property2: "string" IpcMode: "" LxcConf: [] Memory: 0 MemorySwap: 0 MemoryReservation: 0 KernelMemory: 0 OomKillDisable: false OomScoreAdj: 500 NetworkMode: "bridge" PidMode: "" PortBindings: {} Privileged: false ReadonlyRootfs: false PublishAllPorts: false RestartPolicy: MaximumRetryCount: 2 Name: "on-failure" LogConfig: Type: "json-file" Sysctls: net.ipv4.ip_forward: "1" Ulimits: - {} VolumeDriver: "" ShmSize: 67108864 HostnamePath: "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/hostname" HostsPath: "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/hosts" LogPath: 
"/var/lib/docker/containers/1eb5fabf5a03807136561b3c00adcd2992b535d624d5e18b6cdc6a6844d9767b/1eb5fabf5a03807136561b3c00adcd2992b535d624d5e18b6cdc6a6844d9767b-json.log" Id: "ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39" Image: "04c5d3b7b0656168630d3ba35d8889bd0e9caafcaeb3004d2bfbc47e7c5d35d2" MountLabel: "" Name: "/boring_euclid" NetworkSettings: Bridge: "" SandboxID: "" HairpinMode: false LinkLocalIPv6Address: "" LinkLocalIPv6PrefixLen: 0 SandboxKey: "" EndpointID: "" Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 IPAddress: "" IPPrefixLen: 0 IPv6Gateway: "" MacAddress: "" Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "7587b82f0dada3656fda26588aee72630c6fab1536d36e394b2bfbcf898c971d" Gateway: "172.17.0.1" IPAddress: "172.17.0.2" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:12:00:02" Path: "/bin/sh" ProcessLabel: "" ResolvConfPath: "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/resolv.conf" RestartCount: 1 State: Error: "" ExitCode: 9 FinishedAt: "2015-01-06T15:47:32.080254511Z" Health: Status: "healthy" FailingStreak: 0 Log: - Start: "2019-12-22T10:59:05.6385933Z" End: "2019-12-22T10:59:05.8078452Z" ExitCode: 0 Output: "" OOMKilled: false Dead: false Paused: false Pid: 0 Restarting: false Running: true StartedAt: "2015-01-06T15:47:32.072697474Z" Status: "running" Mounts: - Name: "fac362...80535" Source: "/data" Destination: "/data" Driver: "local" Mode: "ro,Z" RW: false Propagation: "" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "size" in: "query" type: "boolean" default: false description: "Return the size of container as fields `SizeRw` and `SizeRootFs`" tags: ["Container"] /containers/{id}/top: get: summary: "List processes running inside a container" description: | On Unix systems, this is done by running the `ps` command. This endpoint is not supported on Windows. operationId: "ContainerTop" responses: 200: description: "no error" schema: type: "object" title: "ContainerTopResponse" description: "OK response to ContainerTop operation" properties: Titles: description: "The ps column titles" type: "array" items: type: "string" Processes: description: | Each process running in the container, where each is process is an array of values corresponding to the titles. type: "array" items: type: "array" items: type: "string" examples: application/json: Titles: - "UID" - "PID" - "PPID" - "C" - "STIME" - "TTY" - "TIME" - "CMD" Processes: - - "root" - "13642" - "882" - "0" - "17:03" - "pts/0" - "00:00:00" - "/bin/bash" - - "root" - "13735" - "13642" - "0" - "17:06" - "pts/0" - "00:00:00" - "sleep 10" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "ps_args" in: "query" description: "The arguments to pass to `ps`. 
For example, `aux`" type: "string" default: "-ef" tags: ["Container"] /containers/{id}/logs: get: summary: "Get container logs" description: | Get `stdout` and `stderr` logs from a container. Note: This endpoint works only for containers with the `json-file` or `journald` logging driver. operationId: "ContainerLogs" responses: 200: description: | logs returned as a stream in response body. For the stream format, [see the documentation for the attach endpoint](#operation/ContainerAttach). Note that unlike the attach endpoint, the logs endpoint does not upgrade the connection and does not set Content-Type. schema: type: "string" format: "binary" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "follow" in: "query" description: "Keep connection after returning logs." type: "boolean" default: false - name: "stdout" in: "query" description: "Return logs from `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Return logs from `stderr`" type: "boolean" default: false - name: "since" in: "query" description: "Only return logs since this time, as a UNIX timestamp" type: "integer" default: 0 - name: "until" in: "query" description: "Only return logs before this time, as a UNIX timestamp" type: "integer" default: 0 - name: "timestamps" in: "query" description: "Add timestamps to every log line" type: "boolean" default: false - name: "tail" in: "query" description: | Only return this number of log lines from the end of the logs. Specify as an integer or `all` to output all log lines. type: "string" default: "all" tags: ["Container"] /containers/{id}/changes: get: summary: "Get changes on a container’s filesystem" description: | Returns which files in a container's filesystem have been added, deleted, or modified. The `Kind` of modification can be one of: - `0`: Modified - `1`: Added - `2`: Deleted operationId: "ContainerChanges" produces: ["application/json"] responses: 200: description: "The list of changes" schema: type: "array" items: type: "object" x-go-name: "ContainerChangeResponseItem" title: "ContainerChangeResponseItem" description: "change item in response to ContainerChanges operation" required: [Path, Kind] properties: Path: description: "Path to file that has changed" type: "string" x-nullable: false Kind: description: "Kind of change" type: "integer" format: "uint8" enum: [0, 1, 2] x-nullable: false examples: application/json: - Path: "/dev" Kind: 0 - Path: "/dev/kmsg" Kind: 1 - Path: "/test" Kind: 1 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/export: get: summary: "Export a container" description: "Export the contents of a container as a tarball." 
operationId: "ContainerExport" produces: - "application/octet-stream" responses: 200: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/stats: get: summary: "Get container stats based on resource usage" description: | This endpoint returns a live stream of a container’s resource usage statistics. The `precpu_stats` is the CPU statistic of the *previous* read, and is used to calculate the CPU usage percentage. It is not an exact copy of the `cpu_stats` field. If either `precpu_stats.online_cpus` or `cpu_stats.online_cpus` is nil then for compatibility with older daemons the length of the corresponding `cpu_usage.percpu_usage` array should be used. On a cgroup v2 host, the following fields are not set * `blkio_stats`: all fields other than `io_service_bytes_recursive` * `cpu_stats`: `cpu_usage.percpu_usage` * `memory_stats`: `max_usage` and `failcnt` Also, `memory_stats.stats` fields are incompatible with cgroup v1. To calculate the values shown by the `stats` command of the docker cli tool the following formulas can be used: * used_memory = `memory_stats.usage - memory_stats.stats.cache` * available_memory = `memory_stats.limit` * Memory usage % = `(used_memory / available_memory) * 100.0` * cpu_delta = `cpu_stats.cpu_usage.total_usage - precpu_stats.cpu_usage.total_usage` * system_cpu_delta = `cpu_stats.system_cpu_usage - precpu_stats.system_cpu_usage` * number_cpus = `lenght(cpu_stats.cpu_usage.percpu_usage)` or `cpu_stats.online_cpus` * CPU usage % = `(cpu_delta / system_cpu_delta) * number_cpus * 100.0` operationId: "ContainerStats" produces: ["application/json"] responses: 200: description: "no error" schema: type: "object" examples: application/json: read: "2015-01-08T22:57:31.547920715Z" pids_stats: current: 3 networks: eth0: rx_bytes: 5338 rx_dropped: 0 rx_errors: 0 rx_packets: 36 tx_bytes: 648 tx_dropped: 0 tx_errors: 0 tx_packets: 8 eth5: rx_bytes: 4641 rx_dropped: 0 rx_errors: 0 rx_packets: 26 tx_bytes: 690 tx_dropped: 0 tx_errors: 0 tx_packets: 9 memory_stats: stats: total_pgmajfault: 0 cache: 0 mapped_file: 0 total_inactive_file: 0 pgpgout: 414 rss: 6537216 total_mapped_file: 0 writeback: 0 unevictable: 0 pgpgin: 477 total_unevictable: 0 pgmajfault: 0 total_rss: 6537216 total_rss_huge: 6291456 total_writeback: 0 total_inactive_anon: 0 rss_huge: 6291456 hierarchical_memory_limit: 67108864 total_pgfault: 964 total_active_file: 0 active_anon: 6537216 total_active_anon: 6537216 total_pgpgout: 414 total_cache: 0 inactive_anon: 0 active_file: 0 pgfault: 964 inactive_file: 0 total_pgpgin: 477 max_usage: 6651904 usage: 6537216 failcnt: 0 limit: 67108864 blkio_stats: {} cpu_stats: cpu_usage: percpu_usage: - 8646879 - 24472255 - 36438778 - 30657443 usage_in_usermode: 50000000 total_usage: 100215355 usage_in_kernelmode: 30000000 system_cpu_usage: 739306590000000 online_cpus: 4 throttling_data: periods: 0 throttled_periods: 0 throttled_time: 0 precpu_stats: cpu_usage: percpu_usage: - 8646879 - 24350896 - 36438778 - 30657443 usage_in_usermode: 50000000 total_usage: 100093996 usage_in_kernelmode: 30000000 system_cpu_usage: 9492140000000 online_cpus: 4 throttling_data: periods: 0 throttled_periods: 0 throttled_time: 0 404: description: "no such 
container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "stream" in: "query" description: | Stream the output. If false, the stats will be output once and then it will disconnect. type: "boolean" default: true - name: "one-shot" in: "query" description: | Only get a single stat instead of waiting for 2 cycles. Must be used with `stream=false`. type: "boolean" default: false tags: ["Container"] /containers/{id}/resize: post: summary: "Resize a container TTY" description: "Resize the TTY for a container." operationId: "ContainerResize" consumes: - "application/octet-stream" produces: - "text/plain" responses: 200: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "cannot resize container" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "h" in: "query" description: "Height of the TTY session in characters" type: "integer" - name: "w" in: "query" description: "Width of the TTY session in characters" type: "integer" tags: ["Container"] /containers/{id}/start: post: summary: "Start a container" operationId: "ContainerStart" responses: 204: description: "no error" 304: description: "container already started" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "detachKeys" in: "query" description: | Override the key sequence for detaching a container. Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,` or `_`. 
type: "string" tags: ["Container"] /containers/{id}/stop: post: summary: "Stop a container" operationId: "ContainerStop" responses: 204: description: "no error" 304: description: "container already stopped" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "t" in: "query" description: "Number of seconds to wait before killing the container" type: "integer" tags: ["Container"] /containers/{id}/restart: post: summary: "Restart a container" operationId: "ContainerRestart" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "t" in: "query" description: "Number of seconds to wait before killing the container" type: "integer" tags: ["Container"] /containers/{id}/kill: post: summary: "Kill a container" description: | Send a POSIX signal to a container, defaulting to killing to the container. operationId: "ContainerKill" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "container is not running" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "Container d37cde0fe4ad63c3a7252023b2f9800282894247d145cb5933ddf6e52cc03a28 is not running" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "signal" in: "query" description: "Signal to send to the container as an integer or string (e.g. `SIGINT`)" type: "string" default: "SIGKILL" tags: ["Container"] /containers/{id}/update: post: summary: "Update a container" description: | Change various configuration options of a container without having to recreate it. operationId: "ContainerUpdate" consumes: ["application/json"] produces: ["application/json"] responses: 200: description: "The container has been updated." 
schema: type: "object" title: "ContainerUpdateResponse" description: "OK response to ContainerUpdate operation" properties: Warnings: type: "array" items: type: "string" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "update" in: "body" required: true schema: allOf: - $ref: "#/definitions/Resources" - type: "object" properties: RestartPolicy: $ref: "#/definitions/RestartPolicy" example: BlkioWeight: 300 CpuShares: 512 CpuPeriod: 100000 CpuQuota: 50000 CpuRealtimePeriod: 1000000 CpuRealtimeRuntime: 10000 CpusetCpus: "0,1" CpusetMems: "0" Memory: 314572800 MemorySwap: 514288000 MemoryReservation: 209715200 KernelMemory: 52428800 RestartPolicy: MaximumRetryCount: 4 Name: "on-failure" tags: ["Container"] /containers/{id}/rename: post: summary: "Rename a container" operationId: "ContainerRename" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "name already in use" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "name" in: "query" required: true description: "New name for the container" type: "string" tags: ["Container"] /containers/{id}/pause: post: summary: "Pause a container" description: | Use the freezer cgroup to suspend all processes in a container. Traditionally, when suspending a process the `SIGSTOP` signal is used, which is observable by the process being suspended. With the freezer cgroup the process is unaware, and unable to capture, that it is being suspended, and subsequently resumed. operationId: "ContainerPause" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/unpause: post: summary: "Unpause a container" description: "Resume a container which has been paused." operationId: "ContainerUnpause" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/attach: post: summary: "Attach to a container" description: | Attach to a container to read its output or send it input. You can attach to the same container multiple times and you can reattach to containers that have been detached. Either the `stream` or `logs` parameter must be `true` for this endpoint to do anything. See the [documentation for the `docker attach` command](https://docs.docker.com/engine/reference/commandline/attach/) for more details. 
### Hijacking This endpoint hijacks the HTTP connection to transport `stdin`, `stdout`, and `stderr` on the same socket. This is the response from the daemon for an attach request: ``` HTTP/1.1 200 OK Content-Type: application/vnd.docker.raw-stream [STREAM] ``` After the headers and two new lines, the TCP connection can now be used for raw, bidirectional communication between the client and server. To hint potential proxies about connection hijacking, the Docker client can also optionally send connection upgrade headers. For example, the client sends this request to upgrade the connection: ``` POST /containers/16253994b7c4/attach?stream=1&stdout=1 HTTP/1.1 Upgrade: tcp Connection: Upgrade ``` The Docker daemon will respond with a `101 UPGRADED` response, and will similarly follow with the raw stream: ``` HTTP/1.1 101 UPGRADED Content-Type: application/vnd.docker.raw-stream Connection: Upgrade Upgrade: tcp [STREAM] ``` ### Stream format When the TTY setting is disabled in [`POST /containers/create`](#operation/ContainerCreate), the stream over the hijacked connection is multiplexed to separate out `stdout` and `stderr`. The stream consists of a series of frames, each containing a header and a payload. The header contains the information which the stream writes (`stdout` or `stderr`). It also contains the size of the associated frame encoded in the last four bytes (`uint32`). It is encoded on the first eight bytes like this: ```go header := [8]byte{STREAM_TYPE, 0, 0, 0, SIZE1, SIZE2, SIZE3, SIZE4} ``` `STREAM_TYPE` can be: - 0: `stdin` (is written on `stdout`) - 1: `stdout` - 2: `stderr` `SIZE1, SIZE2, SIZE3, SIZE4` are the four bytes of the `uint32` size encoded as big endian. Following the header is the payload, which is the specified number of bytes of `STREAM_TYPE`. The simplest way to implement this protocol is the following: 1. Read 8 bytes. 2. Choose `stdout` or `stderr` depending on the first byte. 3. Extract the frame size from the last four bytes. 4. Read the extracted size and output it on the correct output. 5. Goto 1. ### Stream format when using a TTY When the TTY setting is enabled in [`POST /containers/create`](#operation/ContainerCreate), the stream is not multiplexed. The data exchanged over the hijacked connection is simply the raw data from the process PTY and client's `stdin`. operationId: "ContainerAttach" produces: - "application/vnd.docker.raw-stream" responses: 101: description: "no error, hints proxy about hijacking" 200: description: "no error, no upgrade header found" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "detachKeys" in: "query" description: | Override the key sequence for detaching a container. Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,` or `_`. type: "string" - name: "logs" in: "query" description: | Replay previous logs from the container. This is useful for attaching to a container that has started and you want to output everything since the container started. If `stream` is also enabled, once all the previous output has been returned, it will seamlessly transition into streaming current output. 
type: "boolean" default: false - name: "stream" in: "query" description: | Stream attached streams from the time the request was made onwards. type: "boolean" default: false - name: "stdin" in: "query" description: "Attach to `stdin`" type: "boolean" default: false - name: "stdout" in: "query" description: "Attach to `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Attach to `stderr`" type: "boolean" default: false tags: ["Container"] /containers/{id}/attach/ws: get: summary: "Attach to a container via a websocket" operationId: "ContainerAttachWebsocket" responses: 101: description: "no error, hints proxy about hijacking" 200: description: "no error, no upgrade header found" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "detachKeys" in: "query" description: | Override the key sequence for detaching a container.Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,`, or `_`. type: "string" - name: "logs" in: "query" description: "Return logs" type: "boolean" default: false - name: "stream" in: "query" description: "Return stream" type: "boolean" default: false - name: "stdin" in: "query" description: "Attach to `stdin`" type: "boolean" default: false - name: "stdout" in: "query" description: "Attach to `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Attach to `stderr`" type: "boolean" default: false tags: ["Container"] /containers/{id}/wait: post: summary: "Wait for a container" description: "Block until a container stops, then returns the exit code." operationId: "ContainerWait" produces: ["application/json"] responses: 200: description: "The container has exit." schema: type: "object" title: "ContainerWaitResponse" description: "OK response to ContainerWait operation" required: [StatusCode] properties: StatusCode: description: "Exit code of the container" type: "integer" x-nullable: false Error: description: "container waiting error, if any" type: "object" properties: Message: description: "Details of an error" type: "string" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "condition" in: "query" description: | Wait until a container state reaches the given condition, either 'not-running' (default), 'next-exit', or 'removed'. type: "string" default: "not-running" tags: ["Container"] /containers/{id}: delete: summary: "Remove a container" operationId: "ContainerDelete" responses: 204: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "conflict" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: | You cannot remove a running container: c2ada9df5af8. 
Stop the container before attempting removal or force remove 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "v" in: "query" description: "Remove anonymous volumes associated with the container." type: "boolean" default: false - name: "force" in: "query" description: "If the container is running, kill it before removing it." type: "boolean" default: false - name: "link" in: "query" description: "Remove the specified link associated with the container." type: "boolean" default: false tags: ["Container"] /containers/{id}/archive: head: summary: "Get information about files in a container" description: | A response header `X-Docker-Container-Path-Stat` is returned, containing a base64-encoded JSON object with some filesystem header information about the path. operationId: "ContainerArchiveInfo" responses: 200: description: "no error" headers: X-Docker-Container-Path-Stat: type: "string" description: | A base64-encoded JSON object with some filesystem header information about the path 400: description: "Bad parameter" schema: allOf: - $ref: "#/definitions/ErrorResponse" - type: "object" properties: message: description: | The error message. Either "must specify path parameter" (path cannot be empty) or "not a directory" (path was asserted to be a directory but exists as a file). type: "string" x-nullable: false 404: description: "Container or path does not exist" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "path" in: "query" required: true description: "Resource in the container’s filesystem to archive." type: "string" tags: ["Container"] get: summary: "Get an archive of a filesystem resource in a container" description: "Get a tar archive of a resource in the filesystem of container id." operationId: "ContainerArchive" produces: ["application/x-tar"] responses: 200: description: "no error" 400: description: "Bad parameter" schema: allOf: - $ref: "#/definitions/ErrorResponse" - type: "object" properties: message: description: | The error message. Either "must specify path parameter" (path cannot be empty) or "not a directory" (path was asserted to be a directory but exists as a file). type: "string" x-nullable: false 404: description: "Container or path does not exist" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "path" in: "query" required: true description: "Resource in the container’s filesystem to archive." type: "string" tags: ["Container"] put: summary: "Extract an archive of files or folders to a directory in a container" description: "Upload a tar archive to be extracted to a path in the filesystem of container id." 
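As a rough illustration of the `X-Docker-Container-Path-Stat` header described above, the Go sketch below base64-decodes the value and unmarshals the JSON. The use of the standard base64 alphabet and the `pathStat` field names are assumptions for illustration only; consult the daemon source for the authoritative shape.

```go
package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
	"time"
)

// pathStat mirrors the JSON object carried in X-Docker-Container-Path-Stat.
// The JSON field names are assumptions for illustration only.
type pathStat struct {
	Name       string    `json:"name"`
	Size       int64     `json:"size"`
	Mode       uint32    `json:"mode"`
	Mtime      time.Time `json:"mtime"`
	LinkTarget string    `json:"linkTarget"`
}

// decodePathStat base64-decodes the header value and unmarshals the JSON.
func decodePathStat(header string) (*pathStat, error) {
	raw, err := base64.StdEncoding.DecodeString(header)
	if err != nil {
		return nil, err
	}
	var st pathStat
	if err := json.Unmarshal(raw, &st); err != nil {
		return nil, err
	}
	return &st, nil
}

func main() {
	// In practice the value comes from resp.Header.Get("X-Docker-Container-Path-Stat").
	example := base64.StdEncoding.EncodeToString([]byte(`{"name":"data","size":4096}`))
	st, err := decodePathStat(example)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", st)
}
```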
operationId: "PutContainerArchive" consumes: ["application/x-tar", "application/octet-stream"] responses: 200: description: "The content was extracted successfully" 400: description: "Bad parameter" schema: $ref: "#/definitions/ErrorResponse" 403: description: "Permission denied, the volume or container rootfs is marked as read-only." schema: $ref: "#/definitions/ErrorResponse" 404: description: "No such container or path does not exist inside the container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "path" in: "query" required: true description: "Path to a directory in the container to extract the archive’s contents into. " type: "string" - name: "noOverwriteDirNonDir" in: "query" description: | If `1`, `true`, or `True` then it will be an error if unpacking the given content would cause an existing directory to be replaced with a non-directory and vice versa. type: "string" - name: "copyUIDGID" in: "query" description: | If `1`, `true`, then it will copy UID/GID maps to the dest file or dir type: "string" - name: "inputStream" in: "body" required: true description: | The input stream must be a tar archive compressed with one of the following algorithms: `identity` (no compression), `gzip`, `bzip2`, or `xz`. schema: type: "string" format: "binary" tags: ["Container"] /containers/prune: post: summary: "Delete stopped containers" produces: - "application/json" operationId: "ContainerPrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `until=<timestamp>` Prune containers created before this timestamp. The `<timestamp>` can be Unix timestamps, date formatted timestamps, or Go duration strings (e.g. `10m`, `1h30m`) computed relative to the daemon machine’s time. - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune containers with (or without, in case `label!=...` is used) the specified labels. type: "string" responses: 200: description: "No error" schema: type: "object" title: "ContainerPruneResponse" properties: ContainersDeleted: description: "Container IDs that were deleted" type: "array" items: type: "string" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Container"] /images/json: get: summary: "List Images" description: "Returns a list of images on the server. Note that it uses a different, smaller representation of an image than inspecting a single image." 
operationId: "ImageList" produces: - "application/json" responses: 200: description: "Summary image data for the images matching the query" schema: type: "array" items: $ref: "#/definitions/ImageSummary" examples: application/json: - Id: "sha256:e216a057b1cb1efc11f8a268f37ef62083e70b1b38323ba252e25ac88904a7e8" ParentId: "" RepoTags: - "ubuntu:12.04" - "ubuntu:precise" RepoDigests: - "ubuntu@sha256:992069aee4016783df6345315302fa59681aae51a8eeb2f889dea59290f21787" Created: 1474925151 Size: 103579269 VirtualSize: 103579269 SharedSize: 0 Labels: {} Containers: 2 - Id: "sha256:3e314f95dcace0f5e4fd37b10862fe8398e3c60ed36600bc0ca5fda78b087175" ParentId: "" RepoTags: - "ubuntu:12.10" - "ubuntu:quantal" RepoDigests: - "ubuntu@sha256:002fba3e3255af10be97ea26e476692a7ebed0bb074a9ab960b2e7a1526b15d7" - "ubuntu@sha256:68ea0200f0b90df725d99d823905b04cf844f6039ef60c60bf3e019915017bd3" Created: 1403128455 Size: 172064416 VirtualSize: 172064416 SharedSize: 0 Labels: {} Containers: 5 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "all" in: "query" description: "Show all images. Only images from a final layer (no children) are shown by default." type: "boolean" default: false - name: "filters" in: "query" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the images list. Available filters: - `before`=(`<image-name>[:<tag>]`, `<image id>` or `<image@digest>`) - `dangling=true` - `label=key` or `label="key=value"` of an image label - `reference`=(`<image-name>[:<tag>]`) - `since`=(`<image-name>[:<tag>]`, `<image id>` or `<image@digest>`) type: "string" - name: "digests" in: "query" description: "Show digest information as a `RepoDigests` field on each image." type: "boolean" default: false tags: ["Image"] /build: post: summary: "Build an image" description: | Build an image from a tar archive with a `Dockerfile` in it. The `Dockerfile` specifies how the image is built from the tar archive. It is typically in the archive's root, but can be at a different path or have a different name by specifying the `dockerfile` parameter. [See the `Dockerfile` reference for more information](https://docs.docker.com/engine/reference/builder/). The Docker daemon performs a preliminary validation of the `Dockerfile` before starting the build, and returns an error if the syntax is incorrect. After that, each instruction is run one-by-one until the ID of the new image is output. The build is canceled if the client drops the connection by quitting or being killed. operationId: "ImageBuild" consumes: - "application/octet-stream" produces: - "application/json" parameters: - name: "inputStream" in: "body" description: "A tar archive compressed with one of the following algorithms: identity (no compression), gzip, bzip2, xz." schema: type: "string" format: "binary" - name: "dockerfile" in: "query" description: "Path within the build context to the `Dockerfile`. This is ignored if `remote` is specified and points to an external `Dockerfile`." type: "string" default: "Dockerfile" - name: "t" in: "query" description: "A name and optional tag to apply to the image in the `name:tag` format. If you omit the tag the default `latest` value is assumed. You can provide several `t` parameters." type: "string" - name: "extrahosts" in: "query" description: "Extra hosts to add to /etc/hosts" type: "string" - name: "remote" in: "query" description: "A Git repository URI or HTTP/HTTPS context URI. 
If the URI points to a single text file, the file’s contents are placed into a file called `Dockerfile` and the image is built from that file. If the URI points to a tarball, the file is downloaded by the daemon and the contents therein used as the context for the build. If the URI points to a tarball and the `dockerfile` parameter is also specified, there must be a file with the corresponding path inside the tarball." type: "string" - name: "q" in: "query" description: "Suppress verbose build output." type: "boolean" default: false - name: "nocache" in: "query" description: "Do not use the cache when building the image." type: "boolean" default: false - name: "cachefrom" in: "query" description: "JSON array of images used for build cache resolution." type: "string" - name: "pull" in: "query" description: "Attempt to pull the image even if an older image exists locally." type: "string" - name: "rm" in: "query" description: "Remove intermediate containers after a successful build." type: "boolean" default: true - name: "forcerm" in: "query" description: "Always remove intermediate containers, even upon failure." type: "boolean" default: false - name: "memory" in: "query" description: "Set memory limit for build." type: "integer" - name: "memswap" in: "query" description: "Total memory (memory + swap). Set as `-1` to disable swap." type: "integer" - name: "cpushares" in: "query" description: "CPU shares (relative weight)." type: "integer" - name: "cpusetcpus" in: "query" description: "CPUs in which to allow execution (e.g., `0-3`, `0,1`)." type: "string" - name: "cpuperiod" in: "query" description: "The length of a CPU period in microseconds." type: "integer" - name: "cpuquota" in: "query" description: "Microseconds of CPU time that the container can get in a CPU period." type: "integer" - name: "buildargs" in: "query" description: > JSON map of string pairs for build-time variables. Users pass these values at build-time. Docker uses the buildargs as the environment context for commands run via the `Dockerfile` RUN instruction, or for variable expansion in other `Dockerfile` instructions. This is not meant for passing secret values. For example, the build arg `FOO=bar` would become `{"FOO":"bar"}` in JSON. This would result in the query parameter `buildargs={"FOO":"bar"}`. Note that `{"FOO":"bar"}` should be URI component encoded. [Read more about the buildargs instruction.](https://docs.docker.com/engine/reference/builder/#arg) type: "string" - name: "shmsize" in: "query" description: "Size of `/dev/shm` in bytes. The size must be greater than 0. If omitted the system uses 64MB." type: "integer" - name: "squash" in: "query" description: "Squash the resulting images layers into a single layer. *(Experimental release only.)*" type: "boolean" - name: "labels" in: "query" description: "Arbitrary key/value labels to set on the image, as a JSON map of string pairs." type: "string" - name: "networkmode" in: "query" description: | Sets the networking mode for the run commands during build. Supported standard values are: `bridge`, `host`, `none`, and `container:<name|id>`. Any other value is taken as a custom network's name or ID to which this container should connect to. type: "string" - name: "Content-type" in: "header" type: "string" enum: - "application/x-tar" default: "application/x-tar" - name: "X-Registry-Config" in: "header" description: | This is a base64-encoded JSON object with auth configurations for multiple registries that a build may refer to. 
The key is a registry URL, and the value is an auth configuration object, [as described in the authentication section](#section/Authentication). For example: ``` { "docker.example.com": { "username": "janedoe", "password": "hunter2" }, "https://index.docker.io/v1/": { "username": "mobydock", "password": "conta1n3rize14" } } ``` Only the registry domain name (and port if not the default 443) are required. However, for legacy reasons, the Docker Hub registry must be specified with both a `https://` prefix and a `/v1/` suffix even though Docker will prefer to use the v2 registry API. type: "string" - name: "platform" in: "query" description: "Platform in the format os[/arch[/variant]]" type: "string" default: "" - name: "target" in: "query" description: "Target build stage" type: "string" default: "" - name: "outputs" in: "query" description: "BuildKit output configuration" type: "string" default: "" responses: 200: description: "no error" 400: description: "Bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Image"] /build/prune: post: summary: "Delete builder cache" produces: - "application/json" operationId: "BuildPrune" parameters: - name: "keep-storage" in: "query" description: "Amount of disk space in bytes to keep for cache" type: "integer" format: "int64" - name: "all" in: "query" type: "boolean" description: "Remove all types of build cache" - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the list of build cache objects. Available filters: - `until=<duration>`: duration relative to daemon's time, during which build cache was not used, in Go's duration format (e.g., '24h') - `id=<id>` - `parent=<id>` - `type=<string>` - `description=<string>` - `inuse` - `shared` - `private` responses: 200: description: "No error" schema: type: "object" title: "BuildPruneResponse" properties: CachesDeleted: type: "array" items: description: "ID of build cache object" type: "string" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Image"] /images/create: post: summary: "Create an image" description: "Create an image by either pulling it from a registry or importing it." operationId: "ImageCreate" consumes: - "text/plain" - "application/octet-stream" produces: - "application/json" responses: 200: description: "no error" 404: description: "repository does not exist or no read access" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "fromImage" in: "query" description: "Name of the image to pull. The name may include a tag or digest. This parameter may only be used when pulling an image. The pull is cancelled if the HTTP connection is closed." type: "string" - name: "fromSrc" in: "query" description: "Source to import. The value may be a URL from which the image can be retrieved or `-` to read the image from the request body. This parameter may only be used when importing an image." type: "string" - name: "repo" in: "query" description: "Repository name given to an image when it is imported. The repo may include a tag. This parameter may only be used when importing an image." type: "string" - name: "tag" in: "query" description: "Tag or digest. 
If empty when pulling an image, this causes all tags for the given image to be pulled." type: "string" - name: "message" in: "query" description: "Set commit message for imported image." type: "string" - name: "inputImage" in: "body" description: "Image content if the value `-` has been specified in fromSrc query parameter" schema: type: "string" required: false - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration. Refer to the [authentication section](#section/Authentication) for details. type: "string" - name: "platform" in: "query" description: "Platform in the format os[/arch[/variant]]" type: "string" default: "" tags: ["Image"] /images/{name}/json: get: summary: "Inspect an image" description: "Return low-level information about an image." operationId: "ImageInspect" produces: - "application/json" responses: 200: description: "No error" schema: $ref: "#/definitions/Image" examples: application/json: Id: "sha256:85f05633ddc1c50679be2b16a0479ab6f7637f8884e0cfe0f4d20e1ebb3d6e7c" Container: "cb91e48a60d01f1e27028b4fc6819f4f290b3cf12496c8176ec714d0d390984a" Comment: "" Os: "linux" Architecture: "amd64" Parent: "sha256:91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c" ContainerConfig: Tty: false Hostname: "e611e15f9c9d" Domainname: "" AttachStdout: false PublishService: "" AttachStdin: false OpenStdin: false StdinOnce: false NetworkDisabled: false OnBuild: [] Image: "91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c" User: "" WorkingDir: "" MacAddress: "" AttachStderr: false Labels: com.example.license: "GPL" com.example.version: "1.0" com.example.vendor: "Acme" Env: - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" Cmd: - "/bin/sh" - "-c" - "#(nop) LABEL com.example.vendor=Acme com.example.license=GPL com.example.version=1.0" DockerVersion: "1.9.0-dev" VirtualSize: 188359297 Size: 0 Author: "" Created: "2015-09-10T08:30:53.26995814Z" GraphDriver: Name: "aufs" Data: {} RepoDigests: - "localhost:5000/test/busybox/example@sha256:cbbf2f9a99b47fc460d422812b6a5adff7dfee951d8fa2e4a98caa0382cfbdbf" RepoTags: - "example:1.0" - "example:latest" - "example:stable" Config: Image: "91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c" NetworkDisabled: false OnBuild: [] StdinOnce: false PublishService: "" AttachStdin: false OpenStdin: false Domainname: "" AttachStdout: false Tty: false Hostname: "e611e15f9c9d" Cmd: - "/bin/bash" Env: - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" Labels: com.example.vendor: "Acme" com.example.version: "1.0" com.example.license: "GPL" MacAddress: "" AttachStderr: false WorkingDir: "" User: "" RootFS: Type: "layers" Layers: - "sha256:1834950e52ce4d5a88a1bbd131c537f4d0e56d10ff0dd69e66be3b7dfa9df7e6" - "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such image: someimage (tag: latest)" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or id" type: "string" required: true tags: ["Image"] /images/{name}/history: get: summary: "Get the history of an image" description: "Return parent layers of an image." 
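The image pull and push operations above take an `X-Registry-Auth` header that is described as a base64url-encoded auth configuration. The following is a hypothetical Go sketch of building that value; the `username`/`password`/`serveraddress` field names are assumed from the `AuthConfig` definition referenced elsewhere in this specification, and the credentials reuse the placeholder values from the `X-Registry-Config` example above.

```go
package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
)

// registryAuth holds the credentials sent in the X-Registry-Auth header.
// The JSON field names are assumptions for illustration only.
type registryAuth struct {
	Username      string `json:"username"`
	Password      string `json:"password"`
	ServerAddress string `json:"serveraddress,omitempty"`
}

// encodeRegistryAuth JSON-encodes the auth configuration and applies
// base64url encoding, as the X-Registry-Auth parameter description requires.
func encodeRegistryAuth(auth registryAuth) (string, error) {
	buf, err := json.Marshal(auth)
	if err != nil {
		return "", err
	}
	return base64.URLEncoding.EncodeToString(buf), nil
}

func main() {
	header, err := encodeRegistryAuth(registryAuth{
		Username:      "janedoe",
		Password:      "hunter2",
		ServerAddress: "https://index.docker.io/v1/",
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(header) // send as: req.Header.Set("X-Registry-Auth", header)
}
```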
operationId: "ImageHistory" produces: ["application/json"] responses: 200: description: "List of image layers" schema: type: "array" items: type: "object" x-go-name: HistoryResponseItem title: "HistoryResponseItem" description: "individual image layer information in response to ImageHistory operation" required: [Id, Created, CreatedBy, Tags, Size, Comment] properties: Id: type: "string" x-nullable: false Created: type: "integer" format: "int64" x-nullable: false CreatedBy: type: "string" x-nullable: false Tags: type: "array" items: type: "string" Size: type: "integer" format: "int64" x-nullable: false Comment: type: "string" x-nullable: false examples: application/json: - Id: "3db9c44f45209632d6050b35958829c3a2aa256d81b9a7be45b362ff85c54710" Created: 1398108230 CreatedBy: "/bin/sh -c #(nop) ADD file:eb15dbd63394e063b805a3c32ca7bf0266ef64676d5a6fab4801f2e81e2a5148 in /" Tags: - "ubuntu:lucid" - "ubuntu:10.04" Size: 182964289 Comment: "" - Id: "6cfa4d1f33fb861d4d114f43b25abd0ac737509268065cdfd69d544a59c85ab8" Created: 1398108222 CreatedBy: "/bin/sh -c #(nop) MAINTAINER Tianon Gravi <[email protected]> - mkimage-debootstrap.sh -i iproute,iputils-ping,ubuntu-minimal -t lucid.tar.xz lucid http://archive.ubuntu.com/ubuntu/" Tags: [] Size: 0 Comment: "" - Id: "511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158" Created: 1371157430 CreatedBy: "" Tags: - "scratch12:latest" - "scratch:latest" Size: 0 Comment: "Imported from -" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID" type: "string" required: true tags: ["Image"] /images/{name}/push: post: summary: "Push an image" description: | Push an image to a registry. If you wish to push an image on to a private registry, that image must already have a tag which references the registry. For example, `registry.example.com/myimage:latest`. The push is cancelled if the HTTP connection is closed. operationId: "ImagePush" consumes: - "application/octet-stream" responses: 200: description: "No error" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID." type: "string" required: true - name: "tag" in: "query" description: "The tag to associate with the image on the registry." type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration. Refer to the [authentication section](#section/Authentication) for details. type: "string" required: true tags: ["Image"] /images/{name}/tag: post: summary: "Tag an image" description: "Tag an image so that it becomes part of a repository." operationId: "ImageTag" responses: 201: description: "No error" 400: description: "Bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Conflict" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID to tag." type: "string" required: true - name: "repo" in: "query" description: "The repository to tag in. For example, `someuser/someimage`." type: "string" - name: "tag" in: "query" description: "The name of the new tag." 
type: "string" tags: ["Image"] /images/{name}: delete: summary: "Remove an image" description: | Remove an image, along with any untagged parent images that were referenced by that image. Images can't be removed if they have descendant images, are being used by a running container or are being used by a build. operationId: "ImageDelete" produces: ["application/json"] responses: 200: description: "The image was deleted successfully" schema: type: "array" items: $ref: "#/definitions/ImageDeleteResponseItem" examples: application/json: - Untagged: "3e2f21a89f" - Deleted: "3e2f21a89f" - Deleted: "53b4f83ac9" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Conflict" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID" type: "string" required: true - name: "force" in: "query" description: "Remove the image even if it is being used by stopped containers or has other tags" type: "boolean" default: false - name: "noprune" in: "query" description: "Do not delete untagged parent images" type: "boolean" default: false tags: ["Image"] /images/search: get: summary: "Search images" description: "Search for an image on Docker Hub." operationId: "ImageSearch" produces: - "application/json" responses: 200: description: "No error" schema: type: "array" items: type: "object" title: "ImageSearchResponseItem" properties: description: type: "string" is_official: type: "boolean" is_automated: type: "boolean" name: type: "string" star_count: type: "integer" examples: application/json: - description: "" is_official: false is_automated: false name: "wma55/u1210sshd" star_count: 0 - description: "" is_official: false is_automated: false name: "jdswinbank/sshd" star_count: 0 - description: "" is_official: false is_automated: false name: "vgauthier/sshd" star_count: 0 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "term" in: "query" description: "Term to search" type: "string" required: true - name: "limit" in: "query" description: "Maximum number of results to return" type: "integer" - name: "filters" in: "query" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the images list. Available filters: - `is-automated=(true|false)` - `is-official=(true|false)` - `stars=<number>` Matches images that has at least 'number' stars. type: "string" tags: ["Image"] /images/prune: post: summary: "Delete unused images" produces: - "application/json" operationId: "ImagePrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `dangling=<boolean>` When set to `true` (or `1`), prune only unused *and* untagged images. When set to `false` (or `0`), all unused images are pruned. - `until=<string>` Prune images created before this timestamp. The `<timestamp>` can be Unix timestamps, date formatted timestamps, or Go duration strings (e.g. `10m`, `1h30m`) computed relative to the daemon machine’s time. - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune images with (or without, in case `label!=...` is used) the specified labels. 
type: "string" responses: 200: description: "No error" schema: type: "object" title: "ImagePruneResponse" properties: ImagesDeleted: description: "Images that were deleted" type: "array" items: $ref: "#/definitions/ImageDeleteResponseItem" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Image"] /auth: post: summary: "Check auth configuration" description: | Validate credentials for a registry and, if available, get an identity token for accessing the registry without password. operationId: "SystemAuth" consumes: ["application/json"] produces: ["application/json"] responses: 200: description: "An identity token was generated successfully." schema: type: "object" title: "SystemAuthResponse" required: [Status] properties: Status: description: "The status of the authentication" type: "string" x-nullable: false IdentityToken: description: "An opaque token used to authenticate a user after a successful login" type: "string" x-nullable: false examples: application/json: Status: "Login Succeeded" IdentityToken: "9cbaf023786cd7..." 204: description: "No error" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "authConfig" in: "body" description: "Authentication to check" schema: $ref: "#/definitions/AuthConfig" tags: ["System"] /info: get: summary: "Get system information" operationId: "SystemInfo" produces: - "application/json" responses: 200: description: "No error" schema: $ref: "#/definitions/SystemInfo" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /version: get: summary: "Get version" description: "Returns the version of Docker that is running and various information about the system that Docker is running on." operationId: "SystemVersion" produces: ["application/json"] responses: 200: description: "no error" schema: $ref: "#/definitions/SystemVersion" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /_ping: get: summary: "Ping" description: "This is a dummy endpoint you can use to test if the server is accessible." operationId: "SystemPing" produces: ["text/plain"] responses: 200: description: "no error" schema: type: "string" example: "OK" headers: API-Version: type: "string" description: "Max API Version the server supports" Builder-Version: type: "string" description: "Default version of docker image builder" Docker-Experimental: type: "boolean" description: "If the server is running with experimental mode enabled" Cache-Control: type: "string" default: "no-cache, no-store, must-revalidate" Pragma: type: "string" default: "no-cache" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" headers: Cache-Control: type: "string" default: "no-cache, no-store, must-revalidate" Pragma: type: "string" default: "no-cache" tags: ["System"] head: summary: "Ping" description: "This is a dummy endpoint you can use to test if the server is accessible." 
operationId: "SystemPingHead" produces: ["text/plain"] responses: 200: description: "no error" schema: type: "string" example: "(empty)" headers: API-Version: type: "string" description: "Max API Version the server supports" Builder-Version: type: "string" description: "Default version of docker image builder" Docker-Experimental: type: "boolean" description: "If the server is running with experimental mode enabled" Cache-Control: type: "string" default: "no-cache, no-store, must-revalidate" Pragma: type: "string" default: "no-cache" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /commit: post: summary: "Create a new image from a container" operationId: "ImageCommit" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "containerConfig" in: "body" description: "The container configuration" schema: $ref: "#/definitions/ContainerConfig" - name: "container" in: "query" description: "The ID or name of the container to commit" type: "string" - name: "repo" in: "query" description: "Repository name for the created image" type: "string" - name: "tag" in: "query" description: "Tag name for the create image" type: "string" - name: "comment" in: "query" description: "Commit message" type: "string" - name: "author" in: "query" description: "Author of the image (e.g., `John Hannibal Smith <[email protected]>`)" type: "string" - name: "pause" in: "query" description: "Whether to pause the container before committing" type: "boolean" default: true - name: "changes" in: "query" description: "`Dockerfile` instructions to apply while committing" type: "string" tags: ["Image"] /events: get: summary: "Monitor events" description: | Stream real-time events from the server. Various objects within Docker report events when something happens to them. 
Containers report these events: `attach`, `commit`, `copy`, `create`, `destroy`, `detach`, `die`, `exec_create`, `exec_detach`, `exec_start`, `exec_die`, `export`, `health_status`, `kill`, `oom`, `pause`, `rename`, `resize`, `restart`, `start`, `stop`, `top`, `unpause`, `update`, and `prune` Images report these events: `delete`, `import`, `load`, `pull`, `push`, `save`, `tag`, `untag`, and `prune` Volumes report these events: `create`, `mount`, `unmount`, `destroy`, and `prune` Networks report these events: `create`, `connect`, `disconnect`, `destroy`, `update`, `remove`, and `prune` The Docker daemon reports these events: `reload` Services report these events: `create`, `update`, and `remove` Nodes report these events: `create`, `update`, and `remove` Secrets report these events: `create`, `update`, and `remove` Configs report these events: `create`, `update`, and `remove` The Builder reports `prune` events operationId: "SystemEvents" produces: - "application/json" responses: 200: description: "no error" schema: type: "object" title: "SystemEventsResponse" properties: Type: description: "The type of object emitting the event" type: "string" Action: description: "The type of event" type: "string" Actor: type: "object" properties: ID: description: "The ID of the object emitting the event" type: "string" Attributes: description: "Various key/value attributes of the object, depending on its type" type: "object" additionalProperties: type: "string" time: description: "Timestamp of event" type: "integer" timeNano: description: "Timestamp of event, with nanosecond accuracy" type: "integer" format: "int64" examples: application/json: Type: "container" Action: "create" Actor: ID: "ede54ee1afda366ab42f824e8a5ffd195155d853ceaec74a927f249ea270c743" Attributes: com.example.some-label: "some-label-value" image: "alpine" name: "my-container" time: 1461943101 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "since" in: "query" description: "Show events created since this timestamp then stream new events." type: "string" - name: "until" in: "query" description: "Show events created until this timestamp then stop streaming." type: "string" - name: "filters" in: "query" description: | A JSON encoded value of filters (a `map[string][]string`) to process on the event list. 
Available filters: - `config=<string>` config name or ID - `container=<string>` container name or ID - `daemon=<string>` daemon name or ID - `event=<string>` event type - `image=<string>` image name or ID - `label=<string>` image or container label - `network=<string>` network name or ID - `node=<string>` node ID - `plugin`=<string> plugin name or ID - `scope`=<string> local or swarm - `secret=<string>` secret name or ID - `service=<string>` service name or ID - `type=<string>` object to filter by, one of `container`, `image`, `volume`, `network`, `daemon`, `plugin`, `node`, `service`, `secret` or `config` - `volume=<string>` volume name type: "string" tags: ["System"] /system/df: get: summary: "Get data usage information" operationId: "SystemDataUsage" responses: 200: description: "no error" schema: type: "object" title: "SystemDataUsageResponse" properties: LayersSize: type: "integer" format: "int64" Images: type: "array" items: $ref: "#/definitions/ImageSummary" Containers: type: "array" items: $ref: "#/definitions/ContainerSummary" Volumes: type: "array" items: $ref: "#/definitions/Volume" BuildCache: type: "array" items: $ref: "#/definitions/BuildCache" example: LayersSize: 1092588 Images: - Id: "sha256:2b8fd9751c4c0f5dd266fcae00707e67a2545ef34f9a29354585f93dac906749" ParentId: "" RepoTags: - "busybox:latest" RepoDigests: - "busybox@sha256:a59906e33509d14c036c8678d687bd4eec81ed7c4b8ce907b888c607f6a1e0e6" Created: 1466724217 Size: 1092588 SharedSize: 0 VirtualSize: 1092588 Labels: {} Containers: 1 Containers: - Id: "e575172ed11dc01bfce087fb27bee502db149e1a0fad7c296ad300bbff178148" Names: - "/top" Image: "busybox" ImageID: "sha256:2b8fd9751c4c0f5dd266fcae00707e67a2545ef34f9a29354585f93dac906749" Command: "top" Created: 1472592424 Ports: [] SizeRootFs: 1092588 Labels: {} State: "exited" Status: "Exited (0) 56 minutes ago" HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: IPAMConfig: null Links: null Aliases: null NetworkID: "d687bc59335f0e5c9ee8193e5612e8aee000c8c62ea170cfb99c098f95899d92" EndpointID: "8ed5115aeaad9abb174f68dcf135b49f11daf597678315231a32ca28441dec6a" Gateway: "172.18.0.1" IPAddress: "172.18.0.2" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:12:00:02" Mounts: [] Volumes: - Name: "my-volume" Driver: "local" Mountpoint: "/var/lib/docker/volumes/my-volume/_data" Labels: null Scope: "local" Options: null UsageData: Size: 10920104 RefCount: 2 BuildCache: - ID: "hw53o5aio51xtltp5xjp8v7fx" Parent: "" Type: "regular" Description: "pulled from docker.io/library/debian@sha256:234cb88d3020898631af0ccbbcca9a66ae7306ecd30c9720690858c1b007d2a0" InUse: false Shared: true Size: 0 CreatedAt: "2021-06-28T13:31:01.474619385Z" LastUsedAt: "2021-07-07T22:02:32.738075951Z" UsageCount: 26 - ID: "ndlpt0hhvkqcdfkputsk4cq9c" Parent: "hw53o5aio51xtltp5xjp8v7fx" Type: "regular" Description: "mount / from exec /bin/sh -c echo 'Binary::apt::APT::Keep-Downloaded-Packages \"true\";' > /etc/apt/apt.conf.d/keep-cache" InUse: false Shared: true Size: 51 CreatedAt: "2021-06-28T13:31:03.002625487Z" LastUsedAt: "2021-07-07T22:02:32.773909517Z" UsageCount: 26 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /images/{name}/get: get: summary: "Export an image" description: | Get a tarball containing all images and metadata for a repository. If `name` is a specific name and tag (e.g. `ubuntu:latest`), then only that image (and its parents) are returned. 
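# Non-normative editorial example (YAML comment, not part of the definition):
# a minimal sketch of consuming the /events stream described above with the Go
# SDK, restricted to container events through the `type` filter. The filter
# value mirrors the documented filter names; error handling is deliberately
# terse.
#
#   package main
#
#   import (
#       "context"
#       "fmt"
#
#       "github.com/docker/docker/api/types"
#       "github.com/docker/docker/api/types/filters"
#       "github.com/docker/docker/client"
#   )
#
#   func main() {
#       ctx := context.Background()
#       cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
#       if err != nil {
#           panic(err)
#       }
#       // Only container events; other filters (image, label, node, ...) work the same way.
#       f := filters.NewArgs(filters.Arg("type", "container"))
#       msgs, errs := cli.Events(ctx, types.EventsOptions{Filters: f})
#       for {
#           select {
#           case m := <-msgs:
#               fmt.Println(m.Time, m.Type, m.Action, m.Actor.ID)
#           case err := <-errs:
#               if err != nil {
#                   panic(err) // the stream ended (e.g. connection closed)
#               }
#               return
#           }
#       }
#   }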
If `name` is an image ID, similarly only that image (and its parents) are returned, but with the exclusion of the `repositories` file in the tarball, as there were no image names referenced. ### Image tarball format An image tarball contains one directory per image layer (named using its long ID), each containing these files: - `VERSION`: currently `1.0` - the file format version - `json`: detailed layer information, similar to `docker inspect layer_id` - `layer.tar`: A tarfile containing the filesystem changes in this layer The `layer.tar` file contains `aufs` style `.wh..wh.aufs` files and directories for storing attribute changes and deletions. If the tarball defines a repository, the tarball should also include a `repositories` file at the root that contains a list of repository and tag names mapped to layer IDs. ```json { "hello-world": { "latest": "565a9d68a73f6706862bfe8409a7f659776d4d60a8d096eb4a3cbce6999cc2a1" } } ``` operationId: "ImageGet" produces: - "application/x-tar" responses: 200: description: "no error" schema: type: "string" format: "binary" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID" type: "string" required: true tags: ["Image"] /images/get: get: summary: "Export several images" description: | Get a tarball containing all images and metadata for several image repositories. For each value of the `names` parameter: if it is a specific name and tag (e.g. `ubuntu:latest`), then only that image (and its parents) are returned; if it is an image ID, similarly only that image (and its parents) are returned and there would be no names referenced in the 'repositories' file for this image ID. For details on the format, see the [export image endpoint](#operation/ImageGet). operationId: "ImageGetAll" produces: - "application/x-tar" responses: 200: description: "no error" schema: type: "string" format: "binary" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "names" in: "query" description: "Image names to filter by" type: "array" items: type: "string" tags: ["Image"] /images/load: post: summary: "Import images" description: | Load a set of images and tags into a repository. For details on the format, see the [export image endpoint](#operation/ImageGet). operationId: "ImageLoad" consumes: - "application/x-tar" produces: - "application/json" responses: 200: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "imagesTarball" in: "body" description: "Tar archive containing images" schema: type: "string" format: "binary" - name: "quiet" in: "query" description: "Suppress progress details during load." type: "boolean" default: false tags: ["Image"] /containers/{id}/exec: post: summary: "Create an exec instance" description: "Run a command inside a running container." 
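# Non-normative editorial example (YAML comment, not part of the definition):
# a minimal sketch of the export/import pair (GET /images/get and
# POST /images/load) via the Go SDK, piping the exported tarball straight back
# into ImageLoad on the same daemon purely to show both endpoints. The image
# name is a placeholder.
#
#   package main
#
#   import (
#       "context"
#       "io"
#       "os"
#
#       "github.com/docker/docker/client"
#   )
#
#   func main() {
#       ctx := context.Background()
#       cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
#       if err != nil {
#           panic(err)
#       }
#       // Export: GET /images/get?names=ubuntu:latest (a tar stream).
#       rc, err := cli.ImageSave(ctx, []string{"ubuntu:latest"})
#       if err != nil {
#           panic(err)
#       }
#       defer rc.Close()
#       // Import: POST /images/load, fed directly from the export stream.
#       resp, err := cli.ImageLoad(ctx, rc, true) // quiet=true suppresses progress detail
#       if err != nil {
#           panic(err)
#       }
#       defer resp.Body.Close()
#       io.Copy(os.Stdout, resp.Body)
#   }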
operationId: "ContainerExec" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "container is paused" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "execConfig" in: "body" description: "Exec configuration" schema: type: "object" properties: AttachStdin: type: "boolean" description: "Attach to `stdin` of the exec command." AttachStdout: type: "boolean" description: "Attach to `stdout` of the exec command." AttachStderr: type: "boolean" description: "Attach to `stderr` of the exec command." DetachKeys: type: "string" description: | Override the key sequence for detaching a container. Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,` or `_`. Tty: type: "boolean" description: "Allocate a pseudo-TTY." Env: description: | A list of environment variables in the form `["VAR=value", ...]`. type: "array" items: type: "string" Cmd: type: "array" description: "Command to run, as a string or array of strings." items: type: "string" Privileged: type: "boolean" description: "Runs the exec process with extended privileges." default: false User: type: "string" description: | The user, and optionally, group to run the exec process inside the container. Format is one of: `user`, `user:group`, `uid`, or `uid:gid`. WorkingDir: type: "string" description: | The working directory for the exec process inside the container. example: AttachStdin: false AttachStdout: true AttachStderr: true DetachKeys: "ctrl-p,ctrl-q" Tty: false Cmd: - "date" Env: - "FOO=bar" - "BAZ=quux" required: true - name: "id" in: "path" description: "ID or name of container" type: "string" required: true tags: ["Exec"] /exec/{id}/start: post: summary: "Start an exec instance" description: | Starts a previously set up exec instance. If detach is true, this endpoint returns immediately after starting the command. Otherwise, it sets up an interactive session with the command. operationId: "ExecStart" consumes: - "application/json" produces: - "application/vnd.docker.raw-stream" responses: 200: description: "No error" 404: description: "No such exec instance" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Container is stopped or paused" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "execStartConfig" in: "body" schema: type: "object" properties: Detach: type: "boolean" description: "Detach from the command." Tty: type: "boolean" description: "Allocate a pseudo-TTY." example: Detach: false Tty: false - name: "id" in: "path" description: "Exec instance ID" required: true type: "string" tags: ["Exec"] /exec/{id}/resize: post: summary: "Resize an exec instance" description: | Resize the TTY session used by an exec instance. This endpoint only works if `tty` was specified as part of creating and starting the exec instance. 
operationId: "ExecResize" responses: 201: description: "No error" 404: description: "No such exec instance" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Exec instance ID" required: true type: "string" - name: "h" in: "query" description: "Height of the TTY session in characters" type: "integer" - name: "w" in: "query" description: "Width of the TTY session in characters" type: "integer" tags: ["Exec"] /exec/{id}/json: get: summary: "Inspect an exec instance" description: "Return low-level information about an exec instance." operationId: "ExecInspect" produces: - "application/json" responses: 200: description: "No error" schema: type: "object" title: "ExecInspectResponse" properties: CanRemove: type: "boolean" DetachKeys: type: "string" ID: type: "string" Running: type: "boolean" ExitCode: type: "integer" ProcessConfig: $ref: "#/definitions/ProcessConfig" OpenStdin: type: "boolean" OpenStderr: type: "boolean" OpenStdout: type: "boolean" ContainerID: type: "string" Pid: type: "integer" description: "The system process ID for the exec process." examples: application/json: CanRemove: false ContainerID: "b53ee82b53a40c7dca428523e34f741f3abc51d9f297a14ff874bf761b995126" DetachKeys: "" ExitCode: 2 ID: "f33bbfb39f5b142420f4759b2348913bd4a8d1a6d7fd56499cb41a1bb91d7b3b" OpenStderr: true OpenStdin: true OpenStdout: true ProcessConfig: arguments: - "-c" - "exit 2" entrypoint: "sh" privileged: false tty: true user: "1000" Running: false Pid: 42000 404: description: "No such exec instance" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Exec instance ID" required: true type: "string" tags: ["Exec"] /volumes: get: summary: "List volumes" operationId: "VolumeList" produces: ["application/json"] responses: 200: description: "Summary volume data that matches the query" schema: type: "object" title: "VolumeListResponse" description: "Volume list response" required: [Volumes, Warnings] properties: Volumes: type: "array" x-nullable: false description: "List of volumes" items: $ref: "#/definitions/Volume" Warnings: type: "array" x-nullable: false description: | Warnings that occurred when fetching the list of volumes. items: type: "string" examples: application/json: Volumes: - CreatedAt: "2017-07-19T12:00:26Z" Name: "tardis" Driver: "local" Mountpoint: "/var/lib/docker/volumes/tardis" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Scope: "local" Options: device: "tmpfs" o: "size=100m,uid=1000" type: "tmpfs" Warnings: [] 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" description: | JSON encoded value of the filters (a `map[string][]string`) to process on the volumes list. Available filters: - `dangling=<boolean>` When set to `true` (or `1`), returns all volumes that are not in use by a container. When set to `false` (or `0`), only volumes that are in use by one or more containers are returned. - `driver=<volume-driver-name>` Matches volumes based on their driver. - `label=<key>` or `label=<key>:<value>` Matches volumes based on the presence of a `label` alone or a `label` and a value. - `name=<volume-name>` Matches all or part of a volume name. 
type: "string" format: "json" tags: ["Volume"] /volumes/create: post: summary: "Create a volume" operationId: "VolumeCreate" consumes: ["application/json"] produces: ["application/json"] responses: 201: description: "The volume was created successfully" schema: $ref: "#/definitions/Volume" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "volumeConfig" in: "body" required: true description: "Volume configuration" schema: type: "object" description: "Volume configuration" title: "VolumeConfig" properties: Name: description: | The new volume's name. If not specified, Docker generates a name. type: "string" x-nullable: false Driver: description: "Name of the volume driver to use." type: "string" default: "local" x-nullable: false DriverOpts: description: | A mapping of driver options and values. These options are passed directly to the driver and are driver specific. type: "object" additionalProperties: type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" example: Name: "tardis" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Driver: "custom" tags: ["Volume"] /volumes/{name}: get: summary: "Inspect a volume" operationId: "VolumeInspect" produces: ["application/json"] responses: 200: description: "No error" schema: $ref: "#/definitions/Volume" 404: description: "No such volume" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" required: true description: "Volume name or ID" type: "string" tags: ["Volume"] delete: summary: "Remove a volume" description: "Instruct the driver to remove the volume." operationId: "VolumeDelete" responses: 204: description: "The volume was removed" 404: description: "No such volume or volume driver" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Volume is in use and cannot be removed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" required: true description: "Volume name or ID" type: "string" - name: "force" in: "query" description: "Force the removal of the volume" type: "boolean" default: false tags: ["Volume"] /volumes/prune: post: summary: "Delete unused volumes" produces: - "application/json" operationId: "VolumePrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune volumes with (or without, in case `label!=...` is used) the specified labels. type: "string" responses: 200: description: "No error" schema: type: "object" title: "VolumePruneResponse" properties: VolumesDeleted: description: "Volumes that were deleted" type: "array" items: type: "string" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Volume"] /networks: get: summary: "List networks" description: | Returns a list of networks. For details on the format, see the [network inspect endpoint](#operation/NetworkInspect). Note that it uses a different, smaller representation of a network than inspecting a single network. 
For example, the list of containers attached to the network is not propagated in API versions 1.28 and up. operationId: "NetworkList" produces: - "application/json" responses: 200: description: "No error" schema: type: "array" items: $ref: "#/definitions/Network" examples: application/json: - Name: "bridge" Id: "f2de39df4171b0dc801e8002d1d999b77256983dfc63041c0f34030aa3977566" Created: "2016-10-19T06:21:00.416543526Z" Scope: "local" Driver: "bridge" EnableIPv6: false Internal: false Attachable: false Ingress: false IPAM: Driver: "default" Config: - Subnet: "172.17.0.0/16" Options: com.docker.network.bridge.default_bridge: "true" com.docker.network.bridge.enable_icc: "true" com.docker.network.bridge.enable_ip_masquerade: "true" com.docker.network.bridge.host_binding_ipv4: "0.0.0.0" com.docker.network.bridge.name: "docker0" com.docker.network.driver.mtu: "1500" - Name: "none" Id: "e086a3893b05ab69242d3c44e49483a3bbbd3a26b46baa8f61ab797c1088d794" Created: "0001-01-01T00:00:00Z" Scope: "local" Driver: "null" EnableIPv6: false Internal: false Attachable: false Ingress: false IPAM: Driver: "default" Config: [] Containers: {} Options: {} - Name: "host" Id: "13e871235c677f196c4e1ecebb9dc733b9b2d2ab589e30c539efeda84a24215e" Created: "0001-01-01T00:00:00Z" Scope: "local" Driver: "host" EnableIPv6: false Internal: false Attachable: false Ingress: false IPAM: Driver: "default" Config: [] Containers: {} Options: {} 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" description: | JSON encoded value of the filters (a `map[string][]string`) to process on the networks list. Available filters: - `dangling=<boolean>` When set to `true` (or `1`), returns all networks that are not in use by a container. When set to `false` (or `0`), only networks that are in use by one or more containers are returned. - `driver=<driver-name>` Matches a network's driver. - `id=<network-id>` Matches all or part of a network ID. - `label=<key>` or `label=<key>=<value>` of a network label. - `name=<network-name>` Matches all or part of a network name. - `scope=["swarm"|"global"|"local"]` Filters networks by scope (`swarm`, `global`, or `local`). - `type=["custom"|"builtin"]` Filters networks by type. The `custom` keyword returns all user-defined networks. 
type: "string" tags: ["Network"] /networks/{id}: get: summary: "Inspect a network" operationId: "NetworkInspect" produces: - "application/json" responses: 200: description: "No error" schema: $ref: "#/definitions/Network" 404: description: "Network not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" - name: "verbose" in: "query" description: "Detailed inspect output for troubleshooting" type: "boolean" default: false - name: "scope" in: "query" description: "Filter the network by scope (swarm, global, or local)" type: "string" tags: ["Network"] delete: summary: "Remove a network" operationId: "NetworkDelete" responses: 204: description: "No error" 403: description: "operation not supported for pre-defined networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such network" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" tags: ["Network"] /networks/create: post: summary: "Create a network" operationId: "NetworkCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "No error" schema: type: "object" title: "NetworkCreateResponse" properties: Id: description: "The ID of the created network." type: "string" Warning: type: "string" example: Id: "22be93d5babb089c5aab8dbc369042fad48ff791584ca2da2100db837a1c7c30" Warning: "" 403: description: "operation not supported for pre-defined networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "plugin not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "networkConfig" in: "body" description: "Network configuration" required: true schema: type: "object" required: ["Name"] properties: Name: description: "The network's name." type: "string" CheckDuplicate: description: | Check for networks with duplicate names. Since Network is primarily keyed based on a random ID and not on the name, and network name is strictly a user-friendly alias to the network which is uniquely identified using ID, there is no guaranteed way to check for duplicates. CheckDuplicate is there to provide a best effort checking of any networks which has the same name but it is not guaranteed to catch all name collisions. type: "boolean" Driver: description: "Name of the network driver plugin to use." type: "string" default: "bridge" Internal: description: "Restrict external access to the network." type: "boolean" Attachable: description: | Globally scoped network is manually attachable by regular containers from workers in swarm mode. type: "boolean" Ingress: description: | Ingress network is the network which provides the routing-mesh in swarm mode. type: "boolean" IPAM: description: "Optional custom IP scheme for the network." $ref: "#/definitions/IPAM" EnableIPv6: description: "Enable IPv6 on the network." type: "boolean" Options: description: "Network specific options to be used by the drivers." type: "object" additionalProperties: type: "string" Labels: description: "User-defined key/value metadata." 
type: "object" additionalProperties: type: "string" example: Name: "isolated_nw" CheckDuplicate: false Driver: "bridge" EnableIPv6: true IPAM: Driver: "default" Config: - Subnet: "172.20.0.0/16" IPRange: "172.20.10.0/24" Gateway: "172.20.10.11" - Subnet: "2001:db8:abcd::/64" Gateway: "2001:db8:abcd::1011" Options: foo: "bar" Internal: true Attachable: false Ingress: false Options: com.docker.network.bridge.default_bridge: "true" com.docker.network.bridge.enable_icc: "true" com.docker.network.bridge.enable_ip_masquerade: "true" com.docker.network.bridge.host_binding_ipv4: "0.0.0.0" com.docker.network.bridge.name: "docker0" com.docker.network.driver.mtu: "1500" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" tags: ["Network"] /networks/{id}/connect: post: summary: "Connect a container to a network" operationId: "NetworkConnect" consumes: - "application/json" responses: 200: description: "No error" 403: description: "Operation not supported for swarm scoped networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "Network or container not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" - name: "container" in: "body" required: true schema: type: "object" properties: Container: type: "string" description: "The ID or name of the container to connect to the network." EndpointConfig: $ref: "#/definitions/EndpointSettings" example: Container: "3613f73ba0e4" EndpointConfig: IPAMConfig: IPv4Address: "172.24.56.89" IPv6Address: "2001:db8::5689" tags: ["Network"] /networks/{id}/disconnect: post: summary: "Disconnect a container from a network" operationId: "NetworkDisconnect" consumes: - "application/json" responses: 200: description: "No error" 403: description: "Operation not supported for swarm scoped networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "Network or container not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" - name: "container" in: "body" required: true schema: type: "object" properties: Container: type: "string" description: | The ID or name of the container to disconnect from the network. Force: type: "boolean" description: | Force the container to disconnect from the network. tags: ["Network"] /networks/prune: post: summary: "Delete unused networks" produces: - "application/json" operationId: "NetworkPrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `until=<timestamp>` Prune networks created before this timestamp. The `<timestamp>` can be Unix timestamps, date formatted timestamps, or Go duration strings (e.g. `10m`, `1h30m`) computed relative to the daemon machine’s time. - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune networks with (or without, in case `label!=...` is used) the specified labels. 
type: "string" responses: 200: description: "No error" schema: type: "object" title: "NetworkPruneResponse" properties: NetworksDeleted: description: "Networks that were deleted" type: "array" items: type: "string" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Network"] /plugins: get: summary: "List plugins" operationId: "PluginList" description: "Returns information about installed plugins." produces: ["application/json"] responses: 200: description: "No error" schema: type: "array" items: $ref: "#/definitions/Plugin" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the plugin list. Available filters: - `capability=<capability name>` - `enable=<true>|<false>` tags: ["Plugin"] /plugins/privileges: get: summary: "Get plugin privileges" operationId: "GetPluginPrivileges" responses: 200: description: "no error" schema: type: "array" items: description: | Describes a permission the user has to accept upon installing the plugin. type: "object" title: "PluginPrivilegeItem" properties: Name: type: "string" Description: type: "string" Value: type: "array" items: type: "string" example: - Name: "network" Description: "" Value: - "host" - Name: "mount" Description: "" Value: - "/data" - Name: "device" Description: "" Value: - "/dev/cpu_dma_latency" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "remote" in: "query" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" tags: - "Plugin" /plugins/pull: post: summary: "Install a plugin" operationId: "PluginPull" description: | Pulls and installs a plugin. After the plugin is installed, it can be enabled using the [`POST /plugins/{name}/enable` endpoint](#operation/PostPluginsEnable). produces: - "application/json" responses: 204: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "remote" in: "query" description: | Remote reference for plugin to install. The `:latest` tag is optional, and is used as the default if omitted. required: true type: "string" - name: "name" in: "query" description: | Local name for the pulled plugin. The `:latest` tag is optional, and is used as the default if omitted. required: false type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration to use when pulling a plugin from a registry. Refer to the [authentication section](#section/Authentication) for details. type: "string" - name: "body" in: "body" schema: type: "array" items: description: | Describes a permission accepted by the user upon installing the plugin. 
type: "object" properties: Name: type: "string" Description: type: "string" Value: type: "array" items: type: "string" example: - Name: "network" Description: "" Value: - "host" - Name: "mount" Description: "" Value: - "/data" - Name: "device" Description: "" Value: - "/dev/cpu_dma_latency" tags: ["Plugin"] /plugins/{name}/json: get: summary: "Inspect a plugin" operationId: "PluginInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Plugin" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" tags: ["Plugin"] /plugins/{name}: delete: summary: "Remove a plugin" operationId: "PluginDelete" responses: 200: description: "no error" schema: $ref: "#/definitions/Plugin" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "force" in: "query" description: | Disable the plugin before removing. This may result in issues if the plugin is in use by a container. type: "boolean" default: false tags: ["Plugin"] /plugins/{name}/enable: post: summary: "Enable a plugin" operationId: "PluginEnable" responses: 200: description: "no error" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "timeout" in: "query" description: "Set the HTTP client timeout (in seconds)" type: "integer" default: 0 tags: ["Plugin"] /plugins/{name}/disable: post: summary: "Disable a plugin" operationId: "PluginDisable" responses: 200: description: "no error" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" tags: ["Plugin"] /plugins/{name}/upgrade: post: summary: "Upgrade a plugin" operationId: "PluginUpgrade" responses: 204: description: "no error" 404: description: "plugin not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "remote" in: "query" description: | Remote reference to upgrade to. The `:latest` tag is optional, and is used as the default if omitted. required: true type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration to use when pulling a plugin from a registry. Refer to the [authentication section](#section/Authentication) for details. 
type: "string" - name: "body" in: "body" schema: type: "array" items: description: | Describes a permission accepted by the user upon installing the plugin. type: "object" properties: Name: type: "string" Description: type: "string" Value: type: "array" items: type: "string" example: - Name: "network" Description: "" Value: - "host" - Name: "mount" Description: "" Value: - "/data" - Name: "device" Description: "" Value: - "/dev/cpu_dma_latency" tags: ["Plugin"] /plugins/create: post: summary: "Create a plugin" operationId: "PluginCreate" consumes: - "application/x-tar" responses: 204: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "query" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "tarContext" in: "body" description: "Path to tar containing plugin rootfs and manifest" schema: type: "string" format: "binary" tags: ["Plugin"] /plugins/{name}/push: post: summary: "Push a plugin" operationId: "PluginPush" description: | Push a plugin to the registry. parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" responses: 200: description: "no error" 404: description: "plugin not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Plugin"] /plugins/{name}/set: post: summary: "Configure a plugin" operationId: "PluginSet" consumes: - "application/json" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "body" in: "body" schema: type: "array" items: type: "string" example: ["DEBUG=1"] responses: 204: description: "No error" 404: description: "Plugin not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Plugin"] /nodes: get: summary: "List nodes" operationId: "NodeList" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Node" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" description: | Filters to process on the nodes list, encoded as JSON (a `map[string][]string`). 
Available filters: - `id=<node id>` - `label=<engine label>` - `membership=`(`accepted`|`pending`)` - `name=<node name>` - `node.label=<node label>` - `role=`(`manager`|`worker`)` type: "string" tags: ["Node"] /nodes/{id}: get: summary: "Inspect a node" operationId: "NodeInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Node" 404: description: "no such node" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the node" type: "string" required: true tags: ["Node"] delete: summary: "Delete a node" operationId: "NodeDelete" responses: 200: description: "no error" 404: description: "no such node" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the node" type: "string" required: true - name: "force" in: "query" description: "Force remove a node from the swarm" default: false type: "boolean" tags: ["Node"] /nodes/{id}/update: post: summary: "Update a node" operationId: "NodeUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such node" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID of the node" type: "string" required: true - name: "body" in: "body" schema: $ref: "#/definitions/NodeSpec" - name: "version" in: "query" description: | The version number of the node object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true tags: ["Node"] /swarm: get: summary: "Inspect swarm" operationId: "SwarmInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Swarm" 404: description: "no such swarm" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" tags: ["Swarm"] /swarm/init: post: summary: "Initialize a new swarm" operationId: "SwarmInit" produces: - "application/json" - "text/plain" responses: 200: description: "no error" schema: description: "The node ID" type: "string" example: "7v2t30z9blmxuhnyo6s4cpenp" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is already part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: type: "object" properties: ListenAddr: description: | Listen address used for inter-manager communication, as well as determining the networking interface used for the VXLAN Tunnel Endpoint (VTEP). This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the default swarm listening port is used. 
type: "string" AdvertiseAddr: description: | Externally reachable address advertised to other nodes. This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the port number from the listen address is used. If `AdvertiseAddr` is not specified, it will be automatically detected when possible. type: "string" DataPathAddr: description: | Address or interface to use for data path traffic (format: `<ip|interface>`), for example, `192.168.1.1`, or an interface, like `eth0`. If `DataPathAddr` is unspecified, the same address as `AdvertiseAddr` is used. The `DataPathAddr` specifies the address that global scope network drivers will publish towards other nodes in order to reach the containers running on this node. Using this parameter it is possible to separate the container data traffic from the management traffic of the cluster. type: "string" DataPathPort: description: | DataPathPort specifies the data path port number for data traffic. Acceptable port range is 1024 to 49151. if no port is set or is set to 0, default port 4789 will be used. type: "integer" format: "uint32" DefaultAddrPool: description: | Default Address Pool specifies default subnet pools for global scope networks. type: "array" items: type: "string" example: ["10.10.0.0/16", "20.20.0.0/16"] ForceNewCluster: description: "Force creation of a new swarm." type: "boolean" SubnetSize: description: | SubnetSize specifies the subnet size of the networks created from the default subnet pool. type: "integer" format: "uint32" Spec: $ref: "#/definitions/SwarmSpec" example: ListenAddr: "0.0.0.0:2377" AdvertiseAddr: "192.168.1.1:2377" DataPathPort: 4789 DefaultAddrPool: ["10.10.0.0/8", "20.20.0.0/8"] SubnetSize: 24 ForceNewCluster: false Spec: Orchestration: {} Raft: {} Dispatcher: {} CAConfig: {} EncryptionConfig: AutoLockManagers: false tags: ["Swarm"] /swarm/join: post: summary: "Join an existing swarm" operationId: "SwarmJoin" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is already part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: type: "object" properties: ListenAddr: description: | Listen address used for inter-manager communication if the node gets promoted to manager, as well as determining the networking interface used for the VXLAN Tunnel Endpoint (VTEP). type: "string" AdvertiseAddr: description: | Externally reachable address advertised to other nodes. This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the port number from the listen address is used. If `AdvertiseAddr` is not specified, it will be automatically detected when possible. type: "string" DataPathAddr: description: | Address or interface to use for data path traffic (format: `<ip|interface>`), for example, `192.168.1.1`, or an interface, like `eth0`. If `DataPathAddr` is unspecified, the same address as `AdvertiseAddr` is used. The `DataPathAddr` specifies the address that global scope network drivers will publish towards other nodes in order to reach the containers running on this node. 
Using this parameter it is possible to separate the container data traffic from the management traffic of the cluster. type: "string" RemoteAddrs: description: | Addresses of manager nodes already participating in the swarm. type: "array" items: type: "string" JoinToken: description: "Secret token for joining this swarm." type: "string" example: ListenAddr: "0.0.0.0:2377" AdvertiseAddr: "192.168.1.1:2377" RemoteAddrs: - "node1:2377" JoinToken: "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-7p73s1dx5in4tatdymyhg9hu2" tags: ["Swarm"] /swarm/leave: post: summary: "Leave a swarm" operationId: "SwarmLeave" responses: 200: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "force" description: | Force leave swarm, even if this is the last manager or that it will break the cluster. in: "query" type: "boolean" default: false tags: ["Swarm"] /swarm/update: post: summary: "Update a swarm" operationId: "SwarmUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: $ref: "#/definitions/SwarmSpec" - name: "version" in: "query" description: | The version number of the swarm object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true - name: "rotateWorkerToken" in: "query" description: "Rotate the worker join token." type: "boolean" default: false - name: "rotateManagerToken" in: "query" description: "Rotate the manager join token." type: "boolean" default: false - name: "rotateManagerUnlockKey" in: "query" description: "Rotate the manager unlock key." type: "boolean" default: false tags: ["Swarm"] /swarm/unlockkey: get: summary: "Get the unlock key" operationId: "SwarmUnlockkey" consumes: - "application/json" responses: 200: description: "no error" schema: type: "object" title: "UnlockKeyResponse" properties: UnlockKey: description: "The swarm's unlock key." type: "string" example: UnlockKey: "SWMKEY-1-7c37Cc8654o6p38HnroywCi19pllOnGtbdZEgtKxZu8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" tags: ["Swarm"] /swarm/unlock: post: summary: "Unlock a locked manager" operationId: "SwarmUnlock" consumes: - "application/json" produces: - "application/json" parameters: - name: "body" in: "body" required: true schema: type: "object" properties: UnlockKey: description: "The swarm's unlock key." 
type: "string" example: UnlockKey: "SWMKEY-1-7c37Cc8654o6p38HnroywCi19pllOnGtbdZEgtKxZu8" responses: 200: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" tags: ["Swarm"] /services: get: summary: "List services" operationId: "ServiceList" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Service" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the services list. Available filters: - `id=<service id>` - `label=<service label>` - `mode=["replicated"|"global"]` - `name=<service name>` - name: "status" in: "query" type: "boolean" description: | Include service status, with count of running and desired tasks. tags: ["Service"] /services/create: post: summary: "Create a service" operationId: "ServiceCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: type: "object" title: "ServiceCreateResponse" properties: ID: description: "The ID of the created service." type: "string" Warning: description: "Optional warning message" type: "string" example: ID: "ak7w3gjqoa3kuz8xcpnyy0pvl" Warning: "unable to pin image doesnotexist:latest to digest: image library/doesnotexist:latest not found" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 403: description: "network is not eligible for services" schema: $ref: "#/definitions/ErrorResponse" 409: description: "name conflicts with an existing service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: allOf: - $ref: "#/definitions/ServiceSpec" - type: "object" example: Name: "web" TaskTemplate: ContainerSpec: Image: "nginx:alpine" Mounts: - ReadOnly: true Source: "web-data" Target: "/usr/share/nginx/html" Type: "volume" VolumeOptions: DriverConfig: {} Labels: com.example.something: "something-value" Hosts: ["10.10.10.10 host1", "ABCD:EF01:2345:6789:ABCD:EF01:2345:6789 host2"] User: "33" DNSConfig: Nameservers: ["8.8.8.8"] Search: ["example.org"] Options: ["timeout:3"] Secrets: - File: Name: "www.example.org.key" UID: "33" GID: "33" Mode: 384 SecretID: "fpjqlhnwb19zds35k8wn80lq9" SecretName: "example_org_domain_key" LogDriver: Name: "json-file" Options: max-file: "3" max-size: "10M" Placement: {} Resources: Limits: MemoryBytes: 104857600 Reservations: {} RestartPolicy: Condition: "on-failure" Delay: 10000000000 MaxAttempts: 10 Mode: Replicated: Replicas: 4 UpdateConfig: Parallelism: 2 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 RollbackConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 EndpointSpec: Ports: - Protocol: "tcp" PublishedPort: 8080 TargetPort: 80 Labels: foo: "bar" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration for pulling from private registries. Refer to the [authentication section](#section/Authentication) for details. 
type: "string" tags: ["Service"] /services/{id}: get: summary: "Inspect a service" operationId: "ServiceInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Service" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID or name of service." required: true type: "string" - name: "insertDefaults" in: "query" description: "Fill empty fields with default values." type: "boolean" default: false tags: ["Service"] delete: summary: "Delete a service" operationId: "ServiceDelete" responses: 200: description: "no error" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID or name of service." required: true type: "string" tags: ["Service"] /services/{id}/update: post: summary: "Update a service" operationId: "ServiceUpdate" consumes: ["application/json"] produces: ["application/json"] responses: 200: description: "no error" schema: $ref: "#/definitions/ServiceUpdateResponse" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID or name of service." required: true type: "string" - name: "body" in: "body" required: true schema: allOf: - $ref: "#/definitions/ServiceSpec" - type: "object" example: Name: "top" TaskTemplate: ContainerSpec: Image: "busybox" Args: - "top" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ForceUpdate: 0 Mode: Replicated: Replicas: 1 UpdateConfig: Parallelism: 2 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 RollbackConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 EndpointSpec: Mode: "vip" - name: "version" in: "query" description: | The version number of the service object being updated. This is required to avoid conflicting writes. This version number should be the value as currently set on the service *before* the update. You can find the current version by calling `GET /services/{id}` required: true type: "integer" - name: "registryAuthFrom" in: "query" description: | If the `X-Registry-Auth` header is not specified, this parameter indicates where to find registry authorization credentials. type: "string" enum: ["spec", "previous-spec"] default: "spec" - name: "rollback" in: "query" description: | Set to this parameter to `previous` to cause a server-side rollback to the previous service spec. The supplied spec will be ignored in this case. type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration for pulling from private registries. Refer to the [authentication section](#section/Authentication) for details. 
type: "string" tags: ["Service"] /services/{id}/logs: get: summary: "Get service logs" description: | Get `stdout` and `stderr` logs from a service. See also [`/containers/{id}/logs`](#operation/ContainerLogs). **Note**: This endpoint works only for services with the `local`, `json-file` or `journald` logging drivers. operationId: "ServiceLogs" responses: 200: description: "logs returned as a stream in response body" schema: type: "string" format: "binary" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such service: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the service" type: "string" - name: "details" in: "query" description: "Show service context and extra details provided to logs." type: "boolean" default: false - name: "follow" in: "query" description: "Keep connection after returning logs." type: "boolean" default: false - name: "stdout" in: "query" description: "Return logs from `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Return logs from `stderr`" type: "boolean" default: false - name: "since" in: "query" description: "Only return logs since this time, as a UNIX timestamp" type: "integer" default: 0 - name: "timestamps" in: "query" description: "Add timestamps to every log line" type: "boolean" default: false - name: "tail" in: "query" description: | Only return this number of log lines from the end of the logs. Specify as an integer or `all` to output all log lines. type: "string" default: "all" tags: ["Service"] /tasks: get: summary: "List tasks" operationId: "TaskList" produces: - "application/json" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Task" example: - ID: "0kzzo1i0y4jz6027t0k7aezc7" Version: Index: 71 CreatedAt: "2016-06-07T21:07:31.171892745Z" UpdatedAt: "2016-06-07T21:07:31.376370513Z" Spec: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ServiceID: "9mnpnzenvg8p8tdbtq4wvbkcz" Slot: 1 NodeID: "60gvrl6tm78dmak4yl7srz94v" Status: Timestamp: "2016-06-07T21:07:31.290032978Z" State: "running" Message: "started" ContainerStatus: ContainerID: "e5d62702a1b48d01c3e02ca1e0212a250801fa8d67caca0b6f35919ebc12f035" PID: 677 DesiredState: "running" NetworksAttachments: - Network: ID: "4qvuz4ko70xaltuqbt8956gd1" Version: Index: 18 CreatedAt: "2016-06-07T20:31:11.912919752Z" UpdatedAt: "2016-06-07T21:07:29.955277358Z" Spec: Name: "ingress" Labels: com.docker.swarm.internal: "true" DriverConfiguration: {} IPAMOptions: Driver: {} Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" DriverState: Name: "overlay" Options: com.docker.network.driver.overlay.vxlanid_list: "256" IPAMOptions: Driver: Name: "default" Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" Addresses: - "10.255.0.10/16" - ID: "1yljwbmlr8er2waf8orvqpwms" Version: Index: 30 CreatedAt: "2016-06-07T21:07:30.019104782Z" UpdatedAt: "2016-06-07T21:07:30.231958098Z" Name: "hopeful_cori" Spec: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ServiceID: "9mnpnzenvg8p8tdbtq4wvbkcz" Slot: 1 NodeID: "60gvrl6tm78dmak4yl7srz94v" Status: Timestamp: "2016-06-07T21:07:30.202183143Z" State: 
"shutdown" Message: "shutdown" ContainerStatus: ContainerID: "1cf8d63d18e79668b0004a4be4c6ee58cddfad2dae29506d8781581d0688a213" DesiredState: "shutdown" NetworksAttachments: - Network: ID: "4qvuz4ko70xaltuqbt8956gd1" Version: Index: 18 CreatedAt: "2016-06-07T20:31:11.912919752Z" UpdatedAt: "2016-06-07T21:07:29.955277358Z" Spec: Name: "ingress" Labels: com.docker.swarm.internal: "true" DriverConfiguration: {} IPAMOptions: Driver: {} Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" DriverState: Name: "overlay" Options: com.docker.network.driver.overlay.vxlanid_list: "256" IPAMOptions: Driver: Name: "default" Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" Addresses: - "10.255.0.5/16" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the tasks list. Available filters: - `desired-state=(running | shutdown | accepted)` - `id=<task id>` - `label=key` or `label="key=value"` - `name=<task name>` - `node=<node id or name>` - `service=<service name>` tags: ["Task"] /tasks/{id}: get: summary: "Inspect a task" operationId: "TaskInspect" produces: - "application/json" responses: 200: description: "no error" schema: $ref: "#/definitions/Task" 404: description: "no such task" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID of the task" required: true type: "string" tags: ["Task"] /tasks/{id}/logs: get: summary: "Get task logs" description: | Get `stdout` and `stderr` logs from a task. See also [`/containers/{id}/logs`](#operation/ContainerLogs). **Note**: This endpoint works only for services with the `local`, `json-file` or `journald` logging drivers. operationId: "TaskLogs" responses: 200: description: "logs returned as a stream in response body" schema: type: "string" format: "binary" 404: description: "no such task" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such task: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID of the task" type: "string" - name: "details" in: "query" description: "Show task context and extra details provided to logs." type: "boolean" default: false - name: "follow" in: "query" description: "Keep connection after returning logs." type: "boolean" default: false - name: "stdout" in: "query" description: "Return logs from `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Return logs from `stderr`" type: "boolean" default: false - name: "since" in: "query" description: "Only return logs since this time, as a UNIX timestamp" type: "integer" default: 0 - name: "timestamps" in: "query" description: "Add timestamps to every log line" type: "boolean" default: false - name: "tail" in: "query" description: | Only return this number of log lines from the end of the logs. Specify as an integer or `all` to output all log lines. 
type: "string" default: "all" tags: ["Task"] /secrets: get: summary: "List secrets" operationId: "SecretList" produces: - "application/json" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Secret" example: - ID: "blt1owaxmitz71s9v5zh81zun" Version: Index: 85 CreatedAt: "2017-07-20T13:55:28.678958722Z" UpdatedAt: "2017-07-20T13:55:28.678958722Z" Spec: Name: "mysql-passwd" Labels: some.label: "some.value" Driver: Name: "secret-bucket" Options: OptionA: "value for driver option A" OptionB: "value for driver option B" - ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "app-dev.crt" Labels: foo: "bar" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the secrets list. Available filters: - `id=<secret id>` - `label=<key> or label=<key>=value` - `name=<secret name>` - `names=<secret name>` tags: ["Secret"] /secrets/create: post: summary: "Create a secret" operationId: "SecretCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 409: description: "name conflicts with an existing object" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" schema: allOf: - $ref: "#/definitions/SecretSpec" - type: "object" example: Name: "app-key.crt" Labels: foo: "bar" Data: "VEhJUyBJUyBOT1QgQSBSRUFMIENFUlRJRklDQVRFCg==" Driver: Name: "secret-bucket" Options: OptionA: "value for driver option A" OptionB: "value for driver option B" tags: ["Secret"] /secrets/{id}: get: summary: "Inspect a secret" operationId: "SecretInspect" produces: - "application/json" responses: 200: description: "no error" schema: $ref: "#/definitions/Secret" examples: application/json: ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "app-dev.crt" Labels: foo: "bar" Driver: Name: "secret-bucket" Options: OptionA: "value for driver option A" OptionB: "value for driver option B" 404: description: "secret not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the secret" tags: ["Secret"] delete: summary: "Delete a secret" operationId: "SecretDelete" produces: - "application/json" responses: 204: description: "no error" 404: description: "secret not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the secret" tags: ["Secret"] /secrets/{id}/update: post: summary: "Update a Secret" operationId: "SecretUpdate" responses: 200: description: "no error" 400: 
description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such secret" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the secret" type: "string" required: true - name: "body" in: "body" schema: $ref: "#/definitions/SecretSpec" description: | The spec of the secret to update. Currently, only the Labels field can be updated. All other fields must remain unchanged from the [SecretInspect endpoint](#operation/SecretInspect) response values. - name: "version" in: "query" description: | The version number of the secret object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true tags: ["Secret"] /configs: get: summary: "List configs" operationId: "ConfigList" produces: - "application/json" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Config" example: - ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "server.conf" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the configs list. Available filters: - `id=<config id>` - `label=<key> or label=<key>=value` - `name=<config name>` - `names=<config name>` tags: ["Config"] /configs/create: post: summary: "Create a config" operationId: "ConfigCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 409: description: "name conflicts with an existing object" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" schema: allOf: - $ref: "#/definitions/ConfigSpec" - type: "object" example: Name: "server.conf" Labels: foo: "bar" Data: "VEhJUyBJUyBOT1QgQSBSRUFMIENFUlRJRklDQVRFCg==" tags: ["Config"] /configs/{id}: get: summary: "Inspect a config" operationId: "ConfigInspect" produces: - "application/json" responses: 200: description: "no error" schema: $ref: "#/definitions/Config" examples: application/json: ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "app-dev.crt" 404: description: "config not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the config" tags: ["Config"] delete: summary: "Delete a config" operationId: "ConfigDelete" produces: - "application/json" responses: 204: description: "no error" 404: description: "config not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part 
of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the config" tags: ["Config"] /configs/{id}/update: post: summary: "Update a Config" operationId: "ConfigUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such config" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the config" type: "string" required: true - name: "body" in: "body" schema: $ref: "#/definitions/ConfigSpec" description: | The spec of the config to update. Currently, only the Labels field can be updated. All other fields must remain unchanged from the [ConfigInspect endpoint](#operation/ConfigInspect) response values. - name: "version" in: "query" description: | The version number of the config object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true tags: ["Config"] /distribution/{name}/json: get: summary: "Get image information from the registry" description: | Return image digest and platform information by contacting the registry. operationId: "DistributionInspect" produces: - "application/json" responses: 200: description: "descriptor and platform information" schema: type: "object" x-go-name: DistributionInspect title: "DistributionInspectResponse" required: [Descriptor, Platforms] properties: Descriptor: type: "object" description: | A descriptor struct containing digest, media type, and size. properties: MediaType: type: "string" Size: type: "integer" format: "int64" Digest: type: "string" URLs: type: "array" items: type: "string" Platforms: type: "array" description: | An array containing all platforms supported by the image. items: type: "object" properties: Architecture: type: "string" OS: type: "string" OSVersion: type: "string" OSFeatures: type: "array" items: type: "string" Variant: type: "string" Features: type: "array" items: type: "string" examples: application/json: Descriptor: MediaType: "application/vnd.docker.distribution.manifest.v2+json" Digest: "sha256:c0537ff6a5218ef531ece93d4984efc99bbf3f7497c0a7726c88e2bb7584dc96" Size: 3987495 URLs: - "" Platforms: - Architecture: "amd64" OS: "linux" OSVersion: "" OSFeatures: - "" Variant: "" Features: - "" 401: description: "Failed authentication or no image found" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such image: someimage (tag: latest)" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or id" type: "string" required: true tags: ["Distribution"] /session: post: summary: "Initialize interactive session" description: | Start a new interactive session with a server. Session allows server to call back to the client for advanced capabilities. ### Hijacking This endpoint hijacks the HTTP connection to HTTP2 transport that allows the client to expose gPRC services on that connection. 
For example, the client sends this request to upgrade the connection: ``` POST /session HTTP/1.1 Upgrade: h2c Connection: Upgrade ``` The Docker daemon responds with a `101 UPGRADED` response followed by the raw stream: ``` HTTP/1.1 101 UPGRADED Connection: Upgrade Upgrade: h2c ``` operationId: "Session" produces: - "application/vnd.docker.raw-stream" responses: 101: description: "no error, hijacking successful" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Session"]
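The hijacking flow documented for `/session` above can be exercised end to end with a few lines of Go. This is only a rough sketch, not code from this repository: the `/var/run/docker.sock` path and the `/v1.42` prefix are assumptions about a local daemon, and error handling is reduced to panics.

```go
package main

import (
	"bufio"
	"fmt"
	"net"
	"net/http"
)

func main() {
	// Assumption: a local daemon listening on the default Unix socket.
	conn, err := net.Dial("unix", "/var/run/docker.sock")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Build the upgrade request described above and write it raw onto the
	// connection that is about to be hijacked.
	req, err := http.NewRequest(http.MethodPost, "http://localhost/v1.42/session", nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Upgrade", "h2c")
	req.Header.Set("Connection", "Upgrade")
	if err := req.Write(conn); err != nil {
		panic(err)
	}

	// Expect a "101 UPGRADED" status; from this point on the same connection
	// carries raw HTTP/2 frames instead of HTTP/1.1.
	resp, err := http.ReadResponse(bufio.NewReader(conn), req)
	if err != nil {
		panic(err)
	}
	fmt.Println(resp.Status)
}
```

Once the 101 response is read, the connection carries HTTP/2 frames, which is what lets the client expose gRPC session services over it as the endpoint description explains.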
rvolosatovs
5e4da6cc8269c9b766421f22f5824f3e23c89e76
c81abefdb1f907bbc5f5b8b1b1fba942821ae5b3
Arf, my bad; I didn't remove the trailing commas when I copy/pasta'd the example from JSON code. ``` [2021-07-08T21:22:49.541Z] api/swagger.yaml [2021-07-08T21:22:49.541Z] 8344:50 error syntax error: expected <block end>, but found ',' (syntax) [2021-07-08T21:22:49.541Z] script returned exit code 1 ```
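To make the failure mode in that comment concrete, here is a small hedged sketch (not part of the CI tooling) that parses a YAML fragment with a leftover JSON-style trailing comma using `gopkg.in/yaml.v3`. The CI job above uses yamllint, so the exact error wording differs, but both parsers reject the stray comma after a quoted value in block context.

```go
package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

func main() {
	// A fragment copied from a JSON example with its trailing comma left in
	// place after the quoted value of the first mapping entry.
	bad := []byte("Descriptor:\n  MediaType: \"application/vnd.docker.distribution.manifest.v2+json\",\n  Size: 3987495\n")

	var out map[string]interface{}
	if err := yaml.Unmarshal(bad, &out); err != nil {
		// go-yaml reports a parse error here; yamllint flags the same spot
		// with "expected <block end>, but found ','".
		fmt.Println("parse error:", err)
		return
	}
	fmt.Println(out)
}
```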
thaJeztah
4,570
moby/moby
42,607
Ensure empty build cache is represented as empty JSON array
Closes #42605 Refs https://github.com/moby/moby/pull/42605#discussion_r666055470 <!-- Please make sure you've read and understood our contributing guidelines; https://github.com/moby/moby/blob/master/CONTRIBUTING.md ** Make sure all your commits include a signature generated with `git commit -s` ** For additional information on our contributing process, read our contributing guide https://docs.docker.com/opensource/code/ If this is a bug fix, make sure your description includes "fixes #xxxx", or "closes #xxxx" Please provide the following information: --> **- What I did** Ensure empty `BuildCache` field is represented as empty JSON array(`[]`) instead of `null` to be consistent with `Images`, `Containers` etc. **- How I did it** Initialize the struct field with an empty slice if `buildCache` is `nil`. Note, the empty slice could be potentially defined as global variable and reused to avoid unnecessary allocations, but that is an optimization not worth doing here in my opinion **- How to verify it** `curl -s --unix-socket /var/run/docker.sock http://localhost/system/df | jq` on empty daemon should output: ```json { "LayersSize": 0, "Images": [], "Containers": [], "Volumes": [], "BuildCache": [], "BuilderSize": 0 } ``` **- Description for the changelog** <!-- Write a short (one line) summary that describes the changes in this pull request for inclusion in the changelog: --> **- A picture of a cute animal (not mandatory but encouraged)**
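The behaviour this description relies on is standard `encoding/json` semantics: a nil slice marshals to `null`, while an initialized empty slice marshals to `[]`. The sketch below is illustrative only; `diskUsage` and its `BuildCache` field are stand-ins for the real API types, not the actual moby structs.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// diskUsage is a stand-in for the disk-usage API response type; only the
// field relevant to this PR is modelled here.
type diskUsage struct {
	BuildCache []string `json:"BuildCache"`
}

func main() {
	var buildCache []string // nil: nothing reported by the builder

	before, _ := json.Marshal(diskUsage{BuildCache: buildCache})
	fmt.Println(string(before)) // {"BuildCache":null}

	// The fix: ensure the field is an empty, non-nil slice before encoding.
	if buildCache == nil {
		buildCache = []string{}
	}
	after, _ := json.Marshal(diskUsage{BuildCache: buildCache})
	fmt.Println(string(after)) // {"BuildCache":[]}
}
```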
null
2021-07-08 11:20:46+00:00
2021-07-09 13:01:37+00:00
api/swagger.yaml
# A Swagger 2.0 (a.k.a. OpenAPI) definition of the Engine API. # # This is used for generating API documentation and the types used by the # client/server. See api/README.md for more information. # # Some style notes: # - This file is used by ReDoc, which allows GitHub Flavored Markdown in # descriptions. # - There is no maximum line length, for ease of editing and pretty diffs. # - operationIds are in the format "NounVerb", with a singular noun. swagger: "2.0" schemes: - "http" - "https" produces: - "application/json" - "text/plain" consumes: - "application/json" - "text/plain" basePath: "/v1.42" info: title: "Docker Engine API" version: "1.42" x-logo: url: "https://docs.docker.com/images/logo-docker-main.png" description: | The Engine API is an HTTP API served by Docker Engine. It is the API the Docker client uses to communicate with the Engine, so everything the Docker client can do can be done with the API. Most of the client's commands map directly to API endpoints (e.g. `docker ps` is `GET /containers/json`). The notable exception is running containers, which consists of several API calls. # Errors The API uses standard HTTP status codes to indicate the success or failure of the API call. The body of the response will be JSON in the following format: ``` { "message": "page not found" } ``` # Versioning The API is usually changed in each release, so API calls are versioned to ensure that clients don't break. To lock to a specific version of the API, you prefix the URL with its version, for example, call `/v1.30/info` to use the v1.30 version of the `/info` endpoint. If the API version specified in the URL is not supported by the daemon, a HTTP `400 Bad Request` error message is returned. If you omit the version-prefix, the current version of the API (v1.42) is used. For example, calling `/info` is the same as calling `/v1.42/info`. Using the API without a version-prefix is deprecated and will be removed in a future release. Engine releases in the near future should support this version of the API, so your client will continue to work even if it is talking to a newer Engine. The API uses an open schema model, which means server may add extra properties to responses. Likewise, the server will ignore any extra query parameters and request body properties. When you write clients, you need to ignore additional properties in responses to ensure they do not break when talking to newer daemons. # Authentication Authentication for registries is handled client side. The client has to send authentication details to various endpoints that need to communicate with registries, such as `POST /images/(name)/push`. These are sent as `X-Registry-Auth` header as a [base64url encoded](https://tools.ietf.org/html/rfc4648#section-5) (JSON) string with the following structure: ``` { "username": "string", "password": "string", "email": "string", "serveraddress": "string" } ``` The `serveraddress` is a domain/IP without a protocol. Throughout this structure, double quotes are required. If you have already got an identity token from the [`/auth` endpoint](#operation/SystemAuth), you can just pass this instead of credentials: ``` { "identitytoken": "9cbaf023786cd7..." } ``` # The tags on paths define the menu sections in the ReDoc documentation, so # the usage of tags must make sense for that: # - They should be singular, not plural. # - There should not be too many tags, or the menu becomes unwieldy. 
For # example, it is preferable to add a path to the "System" tag instead of # creating a tag with a single path in it. # - The order of tags in this list defines the order in the menu. tags: # Primary objects - name: "Container" x-displayName: "Containers" description: | Create and manage containers. - name: "Image" x-displayName: "Images" - name: "Network" x-displayName: "Networks" description: | Networks are user-defined networks that containers can be attached to. See the [networking documentation](https://docs.docker.com/network/) for more information. - name: "Volume" x-displayName: "Volumes" description: | Create and manage persistent storage that can be attached to containers. - name: "Exec" x-displayName: "Exec" description: | Run new commands inside running containers. Refer to the [command-line reference](https://docs.docker.com/engine/reference/commandline/exec/) for more information. To exec a command in a container, you first need to create an exec instance, then start it. These two API endpoints are wrapped up in a single command-line command, `docker exec`. # Swarm things - name: "Swarm" x-displayName: "Swarm" description: | Engines can be clustered together in a swarm. Refer to the [swarm mode documentation](https://docs.docker.com/engine/swarm/) for more information. - name: "Node" x-displayName: "Nodes" description: | Nodes are instances of the Engine participating in a swarm. Swarm mode must be enabled for these endpoints to work. - name: "Service" x-displayName: "Services" description: | Services are the definitions of tasks to run on a swarm. Swarm mode must be enabled for these endpoints to work. - name: "Task" x-displayName: "Tasks" description: | A task is a container running on a swarm. It is the atomic scheduling unit of swarm. Swarm mode must be enabled for these endpoints to work. - name: "Secret" x-displayName: "Secrets" description: | Secrets are sensitive data that can be used by services. Swarm mode must be enabled for these endpoints to work. - name: "Config" x-displayName: "Configs" description: | Configs are application configurations that can be used by services. Swarm mode must be enabled for these endpoints to work. 
# System things - name: "Plugin" x-displayName: "Plugins" - name: "System" x-displayName: "System" definitions: Port: type: "object" description: "An open port on a container" required: [PrivatePort, Type] properties: IP: type: "string" format: "ip-address" description: "Host IP address that the container's port is mapped to" PrivatePort: type: "integer" format: "uint16" x-nullable: false description: "Port on the container" PublicPort: type: "integer" format: "uint16" description: "Port exposed on the host" Type: type: "string" x-nullable: false enum: ["tcp", "udp", "sctp"] example: PrivatePort: 8080 PublicPort: 80 Type: "tcp" MountPoint: type: "object" description: "A mount point inside a container" properties: Type: type: "string" Name: type: "string" Source: type: "string" Destination: type: "string" Driver: type: "string" Mode: type: "string" RW: type: "boolean" Propagation: type: "string" DeviceMapping: type: "object" description: "A device mapping between the host and container" properties: PathOnHost: type: "string" PathInContainer: type: "string" CgroupPermissions: type: "string" example: PathOnHost: "/dev/deviceName" PathInContainer: "/dev/deviceName" CgroupPermissions: "mrw" DeviceRequest: type: "object" description: "A request for devices to be sent to device drivers" properties: Driver: type: "string" example: "nvidia" Count: type: "integer" example: -1 DeviceIDs: type: "array" items: type: "string" example: - "0" - "1" - "GPU-fef8089b-4820-abfc-e83e-94318197576e" Capabilities: description: | A list of capabilities; an OR list of AND lists of capabilities. type: "array" items: type: "array" items: type: "string" example: # gpu AND nvidia AND compute - ["gpu", "nvidia", "compute"] Options: description: | Driver-specific options, specified as a key/value pairs. These options are passed directly to the driver. type: "object" additionalProperties: type: "string" ThrottleDevice: type: "object" properties: Path: description: "Device path" type: "string" Rate: description: "Rate" type: "integer" format: "int64" minimum: 0 Mount: type: "object" properties: Target: description: "Container path." type: "string" Source: description: "Mount source (e.g. a volume name, a host path)." type: "string" Type: description: | The mount type. Available types: - `bind` Mounts a file or directory from the host into the container. Must exist prior to creating the container. - `volume` Creates a volume with the given name and options (or uses a pre-existing volume with the same name and options). These are **not** removed when the container is removed. - `tmpfs` Create a tmpfs with the given options. The mount source cannot be specified for tmpfs. - `npipe` Mounts a named pipe from the host into the container. Must exist prior to creating the container. type: "string" enum: - "bind" - "volume" - "tmpfs" - "npipe" ReadOnly: description: "Whether the mount should be read-only." type: "boolean" Consistency: description: "The consistency requirement for the mount: `default`, `consistent`, `cached`, or `delegated`." type: "string" BindOptions: description: "Optional configuration for the `bind` type." type: "object" properties: Propagation: description: "A propagation mode with the value `[r]private`, `[r]shared`, or `[r]slave`." type: "string" enum: - "private" - "rprivate" - "shared" - "rshared" - "slave" - "rslave" NonRecursive: description: "Disable recursive bind mount." type: "boolean" default: false VolumeOptions: description: "Optional configuration for the `volume` type." 
type: "object" properties: NoCopy: description: "Populate volume with data from the target." type: "boolean" default: false Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" DriverConfig: description: "Map of driver specific options" type: "object" properties: Name: description: "Name of the driver to use to create the volume." type: "string" Options: description: "key/value map of driver specific options." type: "object" additionalProperties: type: "string" TmpfsOptions: description: "Optional configuration for the `tmpfs` type." type: "object" properties: SizeBytes: description: "The size for the tmpfs mount in bytes." type: "integer" format: "int64" Mode: description: "The permission mode for the tmpfs mount in an integer." type: "integer" RestartPolicy: description: | The behavior to apply when the container exits. The default is not to restart. An ever increasing delay (double the previous delay, starting at 100ms) is added before each restart to prevent flooding the server. type: "object" properties: Name: type: "string" description: | - Empty string means not to restart - `always` Always restart - `unless-stopped` Restart always except when the user has manually stopped the container - `on-failure` Restart only when the container exit code is non-zero enum: - "" - "always" - "unless-stopped" - "on-failure" MaximumRetryCount: type: "integer" description: | If `on-failure` is used, the number of times to retry before giving up. Resources: description: "A container's resources (cgroups config, ulimits, etc)" type: "object" properties: # Applicable to all platforms CpuShares: description: | An integer value representing this container's relative CPU weight versus other containers. type: "integer" Memory: description: "Memory limit in bytes." type: "integer" format: "int64" default: 0 # Applicable to UNIX platforms CgroupParent: description: | Path to `cgroups` under which the container's `cgroup` is created. If the path is not absolute, the path is considered to be relative to the `cgroups` path of the init process. Cgroups are created if they do not already exist. type: "string" BlkioWeight: description: "Block IO weight (relative weight)." type: "integer" minimum: 0 maximum: 1000 BlkioWeightDevice: description: | Block IO weight (relative device weight) in the form: ``` [{"Path": "device_path", "Weight": weight}] ``` type: "array" items: type: "object" properties: Path: type: "string" Weight: type: "integer" minimum: 0 BlkioDeviceReadBps: description: | Limit read rate (bytes per second) from a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" BlkioDeviceWriteBps: description: | Limit write rate (bytes per second) to a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" BlkioDeviceReadIOps: description: | Limit read rate (IO per second) from a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" BlkioDeviceWriteIOps: description: | Limit write rate (IO per second) to a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" CpuPeriod: description: "The length of a CPU period in microseconds." type: "integer" format: "int64" CpuQuota: description: | Microseconds of CPU time that the container can get in a CPU period. 
type: "integer" format: "int64" CpuRealtimePeriod: description: | The length of a CPU real-time period in microseconds. Set to 0 to allocate no time allocated to real-time tasks. type: "integer" format: "int64" CpuRealtimeRuntime: description: | The length of a CPU real-time runtime in microseconds. Set to 0 to allocate no time allocated to real-time tasks. type: "integer" format: "int64" CpusetCpus: description: | CPUs in which to allow execution (e.g., `0-3`, `0,1`). type: "string" example: "0-3" CpusetMems: description: | Memory nodes (MEMs) in which to allow execution (0-3, 0,1). Only effective on NUMA systems. type: "string" Devices: description: "A list of devices to add to the container." type: "array" items: $ref: "#/definitions/DeviceMapping" DeviceCgroupRules: description: "a list of cgroup rules to apply to the container" type: "array" items: type: "string" example: "c 13:* rwm" DeviceRequests: description: | A list of requests for devices to be sent to device drivers. type: "array" items: $ref: "#/definitions/DeviceRequest" KernelMemory: description: | Kernel memory limit in bytes. <p><br /></p> > **Deprecated**: This field is deprecated as the kernel 5.4 deprecated > `kmem.limit_in_bytes`. type: "integer" format: "int64" example: 209715200 KernelMemoryTCP: description: "Hard limit for kernel TCP buffer memory (in bytes)." type: "integer" format: "int64" MemoryReservation: description: "Memory soft limit in bytes." type: "integer" format: "int64" MemorySwap: description: | Total memory limit (memory + swap). Set as `-1` to enable unlimited swap. type: "integer" format: "int64" MemorySwappiness: description: | Tune a container's memory swappiness behavior. Accepts an integer between 0 and 100. type: "integer" format: "int64" minimum: 0 maximum: 100 NanoCpus: description: "CPU quota in units of 10<sup>-9</sup> CPUs." type: "integer" format: "int64" OomKillDisable: description: "Disable OOM Killer for the container." type: "boolean" Init: description: | Run an init inside the container that forwards signals and reaps processes. This field is omitted if empty, and the default (as configured on the daemon) is used. type: "boolean" x-nullable: true PidsLimit: description: | Tune a container's PIDs limit. Set `0` or `-1` for unlimited, or `null` to not change. type: "integer" format: "int64" x-nullable: true Ulimits: description: | A list of resource limits to set in the container. For example: ``` {"Name": "nofile", "Soft": 1024, "Hard": 2048} ``` type: "array" items: type: "object" properties: Name: description: "Name of ulimit" type: "string" Soft: description: "Soft limit" type: "integer" Hard: description: "Hard limit" type: "integer" # Applicable to Windows CpuCount: description: | The number of usable CPUs (Windows only). On Windows Server containers, the processor resource controls are mutually exclusive. The order of precedence is `CPUCount` first, then `CPUShares`, and `CPUPercent` last. type: "integer" format: "int64" CpuPercent: description: | The usable percentage of the available CPUs (Windows only). On Windows Server containers, the processor resource controls are mutually exclusive. The order of precedence is `CPUCount` first, then `CPUShares`, and `CPUPercent` last. type: "integer" format: "int64" IOMaximumIOps: description: "Maximum IOps for the container system drive (Windows only)" type: "integer" format: "int64" IOMaximumBandwidth: description: | Maximum IO in bytes per second for the container system drive (Windows only). 
type: "integer" format: "int64" Limit: description: | An object describing a limit on resources which can be requested by a task. type: "object" properties: NanoCPUs: type: "integer" format: "int64" example: 4000000000 MemoryBytes: type: "integer" format: "int64" example: 8272408576 Pids: description: | Limits the maximum number of PIDs in the container. Set `0` for unlimited. type: "integer" format: "int64" default: 0 example: 100 ResourceObject: description: | An object describing the resources which can be advertised by a node and requested by a task. type: "object" properties: NanoCPUs: type: "integer" format: "int64" example: 4000000000 MemoryBytes: type: "integer" format: "int64" example: 8272408576 GenericResources: $ref: "#/definitions/GenericResources" GenericResources: description: | User-defined resources can be either Integer resources (e.g, `SSD=3`) or String resources (e.g, `GPU=UUID1`). type: "array" items: type: "object" properties: NamedResourceSpec: type: "object" properties: Kind: type: "string" Value: type: "string" DiscreteResourceSpec: type: "object" properties: Kind: type: "string" Value: type: "integer" format: "int64" example: - DiscreteResourceSpec: Kind: "SSD" Value: 3 - NamedResourceSpec: Kind: "GPU" Value: "UUID1" - NamedResourceSpec: Kind: "GPU" Value: "UUID2" HealthConfig: description: "A test to perform to check that the container is healthy." type: "object" properties: Test: description: | The test to perform. Possible values are: - `[]` inherit healthcheck from image or parent image - `["NONE"]` disable healthcheck - `["CMD", args...]` exec arguments directly - `["CMD-SHELL", command]` run command with system's default shell type: "array" items: type: "string" Interval: description: | The time to wait between checks in nanoseconds. It should be 0 or at least 1000000 (1 ms). 0 means inherit. type: "integer" Timeout: description: | The time to wait before considering the check to have hung. It should be 0 or at least 1000000 (1 ms). 0 means inherit. type: "integer" Retries: description: | The number of consecutive failures needed to consider a container as unhealthy. 0 means inherit. type: "integer" StartPeriod: description: | Start period for the container to initialize before starting health-retries countdown in nanoseconds. It should be 0 or at least 1000000 (1 ms). 0 means inherit. type: "integer" Health: description: | Health stores information about the container's healthcheck results. type: "object" properties: Status: description: | Status is one of `none`, `starting`, `healthy` or `unhealthy` - "none" Indicates there is no healthcheck - "starting" Starting indicates that the container is not yet ready - "healthy" Healthy indicates that the container is running correctly - "unhealthy" Unhealthy indicates that the container has a problem type: "string" enum: - "none" - "starting" - "healthy" - "unhealthy" example: "healthy" FailingStreak: description: "FailingStreak is the number of consecutive failures" type: "integer" example: 0 Log: type: "array" description: | Log contains the last few results (oldest first) items: x-nullable: true $ref: "#/definitions/HealthcheckResult" HealthcheckResult: description: | HealthcheckResult stores information about a single run of a healthcheck probe type: "object" properties: Start: description: | Date and time at which this check started in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. 
type: "string" format: "date-time" example: "2020-01-04T10:44:24.496525531Z" End: description: | Date and time at which this check ended in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2020-01-04T10:45:21.364524523Z" ExitCode: description: | ExitCode meanings: - `0` healthy - `1` unhealthy - `2` reserved (considered unhealthy) - other values: error running probe type: "integer" example: 0 Output: description: "Output from last check" type: "string" HostConfig: description: "Container configuration that depends on the host we are running on" allOf: - $ref: "#/definitions/Resources" - type: "object" properties: # Applicable to all platforms Binds: type: "array" description: | A list of volume bindings for this container. Each volume binding is a string in one of these forms: - `host-src:container-dest[:options]` to bind-mount a host path into the container. Both `host-src`, and `container-dest` must be an _absolute_ path. - `volume-name:container-dest[:options]` to bind-mount a volume managed by a volume driver into the container. `container-dest` must be an _absolute_ path. `options` is an optional, comma-delimited list of: - `nocopy` disables automatic copying of data from the container path to the volume. The `nocopy` flag only applies to named volumes. - `[ro|rw]` mounts a volume read-only or read-write, respectively. If omitted or set to `rw`, volumes are mounted read-write. - `[z|Z]` applies SELinux labels to allow or deny multiple containers to read and write to the same volume. - `z`: a _shared_ content label is applied to the content. This label indicates that multiple containers can share the volume content, for both reading and writing. - `Z`: a _private unshared_ label is applied to the content. This label indicates that only the current container can use a private volume. Labeling systems such as SELinux require proper labels to be placed on volume content that is mounted into a container. Without a label, the security system can prevent a container's processes from using the content. By default, the labels set by the host operating system are not modified. - `[[r]shared|[r]slave|[r]private]` specifies mount [propagation behavior](https://www.kernel.org/doc/Documentation/filesystems/sharedsubtree.txt). This only applies to bind-mounted volumes, not internal volumes or named volumes. Mount propagation requires the source mount point (the location where the source directory is mounted in the host operating system) to have the correct propagation properties. For shared volumes, the source mount point must be set to `shared`. For slave volumes, the mount must be set to either `shared` or `slave`. items: type: "string" ContainerIDFile: type: "string" description: "Path to a file where the container ID is written" LogConfig: type: "object" description: "The logging configuration for this container" properties: Type: type: "string" enum: - "json-file" - "syslog" - "journald" - "gelf" - "fluentd" - "awslogs" - "splunk" - "etwlogs" - "none" Config: type: "object" additionalProperties: type: "string" NetworkMode: type: "string" description: | Network mode to use for this container. Supported standard values are: `bridge`, `host`, `none`, and `container:<name|id>`. Any other value is taken as a custom network's name to which this container should connect to. 
PortBindings: $ref: "#/definitions/PortMap" RestartPolicy: $ref: "#/definitions/RestartPolicy" AutoRemove: type: "boolean" description: | Automatically remove the container when the container's process exits. This has no effect if `RestartPolicy` is set. VolumeDriver: type: "string" description: "Driver that this container uses to mount volumes." VolumesFrom: type: "array" description: | A list of volumes to inherit from another container, specified in the form `<container name>[:<ro|rw>]`. items: type: "string" Mounts: description: | Specification for mounts to be added to the container. type: "array" items: $ref: "#/definitions/Mount" # Applicable to UNIX platforms CapAdd: type: "array" description: | A list of kernel capabilities to add to the container. Conflicts with option 'Capabilities'. items: type: "string" CapDrop: type: "array" description: | A list of kernel capabilities to drop from the container. Conflicts with option 'Capabilities'. items: type: "string" CgroupnsMode: type: "string" enum: - "private" - "host" description: | cgroup namespace mode for the container. Possible values are: - `"private"`: the container runs in its own private cgroup namespace - `"host"`: use the host system's cgroup namespace If not specified, the daemon default is used, which can either be `"private"` or `"host"`, depending on daemon version, kernel support and configuration. Dns: type: "array" description: "A list of DNS servers for the container to use." items: type: "string" DnsOptions: type: "array" description: "A list of DNS options." items: type: "string" DnsSearch: type: "array" description: "A list of DNS search domains." items: type: "string" ExtraHosts: type: "array" description: | A list of hostnames/IP mappings to add to the container's `/etc/hosts` file. Specified in the form `["hostname:IP"]`. items: type: "string" GroupAdd: type: "array" description: | A list of additional groups that the container process will run as. items: type: "string" IpcMode: type: "string" description: | IPC sharing mode for the container. Possible values are: - `"none"`: own private IPC namespace, with /dev/shm not mounted - `"private"`: own private IPC namespace - `"shareable"`: own private IPC namespace, with a possibility to share it with other containers - `"container:<name|id>"`: join another (shareable) container's IPC namespace - `"host"`: use the host system's IPC namespace If not specified, daemon default is used, which can either be `"private"` or `"shareable"`, depending on daemon version and configuration. Cgroup: type: "string" description: "Cgroup to use for the container." Links: type: "array" description: | A list of links for the container in the form `container_name:alias`. items: type: "string" OomScoreAdj: type: "integer" description: | An integer value containing the score given to the container in order to tune OOM killer preferences. example: 500 PidMode: type: "string" description: | Set the PID (Process) Namespace mode for the container. It can be either: - `"container:<name|id>"`: joins another container's PID namespace - `"host"`: use the host's PID namespace inside the container Privileged: type: "boolean" description: "Gives the container full access to the host." PublishAllPorts: type: "boolean" description: | Allocates an ephemeral host port for all of a container's exposed ports. Ports are de-allocated when the container stops and allocated when the container starts. The allocated port might be changed when restarting the container. 
The port is selected from the ephemeral port range that depends on the kernel. For example, on Linux the range is defined by `/proc/sys/net/ipv4/ip_local_port_range`. ReadonlyRootfs: type: "boolean" description: "Mount the container's root filesystem as read only." SecurityOpt: type: "array" description: "A list of string values to customize labels for MLS systems, such as SELinux." items: type: "string" StorageOpt: type: "object" description: | Storage driver options for this container, in the form `{"size": "120G"}`. additionalProperties: type: "string" Tmpfs: type: "object" description: | A map of container directories which should be replaced by tmpfs mounts, and their corresponding mount options. For example: ``` { "/run": "rw,noexec,nosuid,size=65536k" } ``` additionalProperties: type: "string" UTSMode: type: "string" description: "UTS namespace to use for the container." UsernsMode: type: "string" description: | Sets the usernamespace mode for the container when usernamespace remapping option is enabled. ShmSize: type: "integer" description: | Size of `/dev/shm` in bytes. If omitted, the system uses 64MB. minimum: 0 Sysctls: type: "object" description: | A list of kernel parameters (sysctls) to set in the container. For example: ``` {"net.ipv4.ip_forward": "1"} ``` additionalProperties: type: "string" Runtime: type: "string" description: "Runtime to use with this container." # Applicable to Windows ConsoleSize: type: "array" description: | Initial console size, as an `[height, width]` array. (Windows only) minItems: 2 maxItems: 2 items: type: "integer" minimum: 0 Isolation: type: "string" description: | Isolation technology of the container. (Windows only) enum: - "default" - "process" - "hyperv" MaskedPaths: type: "array" description: | The list of paths to be masked inside the container (this overrides the default set of paths). items: type: "string" ReadonlyPaths: type: "array" description: | The list of paths to be set as read-only inside the container (this overrides the default set of paths). items: type: "string" ContainerConfig: description: "Configuration for a container that is portable between hosts" type: "object" properties: Hostname: description: "The hostname to use for the container, as a valid RFC 1123 hostname." type: "string" Domainname: description: "The domain name to use for the container." type: "string" User: description: "The user that commands are run as inside the container." type: "string" AttachStdin: description: "Whether to attach to `stdin`." type: "boolean" default: false AttachStdout: description: "Whether to attach to `stdout`." type: "boolean" default: true AttachStderr: description: "Whether to attach to `stderr`." type: "boolean" default: true ExposedPorts: description: | An object mapping ports to an empty object in the form: `{"<port>/<tcp|udp|sctp>": {}}` type: "object" additionalProperties: type: "object" enum: - {} default: {} Tty: description: | Attach standard streams to a TTY, including `stdin` if it is not closed. type: "boolean" default: false OpenStdin: description: "Open `stdin`" type: "boolean" default: false StdinOnce: description: "Close `stdin` after one attached client disconnects" type: "boolean" default: false Env: description: | A list of environment variables to set inside the container in the form `["VAR=value", ...]`. A variable without `=` is removed from the environment, rather than to have an empty value. type: "array" items: type: "string" Cmd: description: | Command to run specified as a string or an array of strings. 
type: "array" items: type: "string" Healthcheck: $ref: "#/definitions/HealthConfig" ArgsEscaped: description: "Command is already escaped (Windows only)" type: "boolean" Image: description: | The name of the image to use when creating the container/ type: "string" Volumes: description: | An object mapping mount point paths inside the container to empty objects. type: "object" additionalProperties: type: "object" enum: - {} default: {} WorkingDir: description: "The working directory for commands to run in." type: "string" Entrypoint: description: | The entry point for the container as a string or an array of strings. If the array consists of exactly one empty string (`[""]`) then the entry point is reset to system default (i.e., the entry point used by docker when there is no `ENTRYPOINT` instruction in the `Dockerfile`). type: "array" items: type: "string" NetworkDisabled: description: "Disable networking for the container." type: "boolean" MacAddress: description: "MAC address of the container." type: "string" OnBuild: description: | `ONBUILD` metadata that were defined in the image's `Dockerfile`. type: "array" items: type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" StopSignal: description: | Signal to stop a container as a string or unsigned integer. type: "string" default: "SIGTERM" StopTimeout: description: "Timeout to stop a container in seconds." type: "integer" default: 10 Shell: description: | Shell for when `RUN`, `CMD`, and `ENTRYPOINT` uses a shell. type: "array" items: type: "string" NetworkingConfig: description: | NetworkingConfig represents the container's networking configuration for each of its interfaces. It is used for the networking configs specified in the `docker create` and `docker network connect` commands. type: "object" properties: EndpointsConfig: description: | A mapping of network name to endpoint configuration for that network. type: "object" additionalProperties: $ref: "#/definitions/EndpointSettings" example: # putting an example here, instead of using the example values from # /definitions/EndpointSettings, because containers/create currently # does not support attaching to multiple networks, so the example request # would be confusing if it showed that multiple networks can be contained # in the EndpointsConfig. # TODO remove once we support multiple networks on container create (see https://github.com/moby/moby/blob/07e6b843594e061f82baa5fa23c2ff7d536c2a05/daemon/create.go#L323) EndpointsConfig: isolated_nw: IPAMConfig: IPv4Address: "172.20.30.33" IPv6Address: "2001:db8:abcd::3033" LinkLocalIPs: - "169.254.34.68" - "fe80::3468" Links: - "container_1" - "container_2" Aliases: - "server_x" - "server_y" NetworkSettings: description: "NetworkSettings exposes the network settings in the API" type: "object" properties: Bridge: description: Name of the network's bridge (for example, `docker0`). type: "string" example: "docker0" SandboxID: description: SandboxID uniquely represents a container's network stack. type: "string" example: "9d12daf2c33f5959c8bf90aa513e4f65b561738661003029ec84830cd503a0c3" HairpinMode: description: | Indicates if hairpin NAT should be enabled on the virtual interface. type: "boolean" example: false LinkLocalIPv6Address: description: IPv6 unicast address using the link-local prefix. type: "string" example: "fe80::42:acff:fe11:1" LinkLocalIPv6PrefixLen: description: Prefix length of the IPv6 unicast address. 
type: "integer" example: "64" Ports: $ref: "#/definitions/PortMap" SandboxKey: description: SandboxKey identifies the sandbox type: "string" example: "/var/run/docker/netns/8ab54b426c38" # TODO is SecondaryIPAddresses actually used? SecondaryIPAddresses: description: "" type: "array" items: $ref: "#/definitions/Address" x-nullable: true # TODO is SecondaryIPv6Addresses actually used? SecondaryIPv6Addresses: description: "" type: "array" items: $ref: "#/definitions/Address" x-nullable: true # TODO properties below are part of DefaultNetworkSettings, which is # marked as deprecated since Docker 1.9 and to be removed in Docker v17.12 EndpointID: description: | EndpointID uniquely represents a service endpoint in a Sandbox. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "b88f5b905aabf2893f3cbc4ee42d1ea7980bbc0a92e2c8922b1e1795298afb0b" Gateway: description: | Gateway address for the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "172.17.0.1" GlobalIPv6Address: description: | Global IPv6 address for the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "2001:db8::5689" GlobalIPv6PrefixLen: description: | Mask length of the global IPv6 address. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "integer" example: 64 IPAddress: description: | IPv4 address for the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "172.17.0.4" IPPrefixLen: description: | Mask length of the IPv4 address. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "integer" example: 16 IPv6Gateway: description: | IPv6 gateway address for this network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. 
Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "2001:db8:2::100" MacAddress: description: | MAC address for the container on the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "02:42:ac:11:00:04" Networks: description: | Information about all networks that the container is connected to. type: "object" additionalProperties: $ref: "#/definitions/EndpointSettings" Address: description: Address represents an IPv4 or IPv6 IP address. type: "object" properties: Addr: description: IP address. type: "string" PrefixLen: description: Mask length of the IP address. type: "integer" PortMap: description: | PortMap describes the mapping of container ports to host ports, using the container's port-number and protocol as key in the format `<port>/<protocol>`, for example, `80/udp`. If a container's port is mapped for multiple protocols, separate entries are added to the mapping table. type: "object" additionalProperties: type: "array" x-nullable: true items: $ref: "#/definitions/PortBinding" example: "443/tcp": - HostIp: "127.0.0.1" HostPort: "4443" "80/tcp": - HostIp: "0.0.0.0" HostPort: "80" - HostIp: "0.0.0.0" HostPort: "8080" "80/udp": - HostIp: "0.0.0.0" HostPort: "80" "53/udp": - HostIp: "0.0.0.0" HostPort: "53" "2377/tcp": null PortBinding: description: | PortBinding represents a binding between a host IP address and a host port. type: "object" properties: HostIp: description: "Host IP address that the container's port is mapped to." type: "string" example: "127.0.0.1" HostPort: description: "Host port number that the container's port is mapped to." type: "string" example: "4443" GraphDriverData: description: "Information about a container's graph driver." 
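# For illustration only: a PortMap like the example above is what a client sends
# as `HostConfig.PortBindings` in a `POST /containers/create` request. A rough
# sketch, assuming container port 80/tcp should be published on host port 8080
# (all values are examples, not defaults):
#
#   "HostConfig": {
#     "PortBindings": {
#       "80/tcp": [{ "HostIp": "0.0.0.0", "HostPort": "8080" }]
#     }
#   }
#
# Note that `HostPort` is a string, matching the PortBinding definition above.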
type: "object" required: [Name, Data] properties: Name: type: "string" x-nullable: false Data: type: "object" x-nullable: false additionalProperties: type: "string" Image: type: "object" required: - Id - Parent - Comment - Created - Container - DockerVersion - Author - Architecture - Os - Size - VirtualSize - GraphDriver - RootFS properties: Id: type: "string" x-nullable: false RepoTags: type: "array" items: type: "string" RepoDigests: type: "array" items: type: "string" Parent: type: "string" x-nullable: false Comment: type: "string" x-nullable: false Created: type: "string" x-nullable: false Container: type: "string" x-nullable: false ContainerConfig: $ref: "#/definitions/ContainerConfig" DockerVersion: type: "string" x-nullable: false Author: type: "string" x-nullable: false Config: $ref: "#/definitions/ContainerConfig" Architecture: type: "string" x-nullable: false Os: type: "string" x-nullable: false OsVersion: type: "string" Size: type: "integer" format: "int64" x-nullable: false VirtualSize: type: "integer" format: "int64" x-nullable: false GraphDriver: $ref: "#/definitions/GraphDriverData" RootFS: type: "object" required: [Type] properties: Type: type: "string" x-nullable: false Layers: type: "array" items: type: "string" BaseLayer: type: "string" Metadata: type: "object" properties: LastTagTime: type: "string" format: "dateTime" ImageSummary: type: "object" required: - Id - ParentId - RepoTags - RepoDigests - Created - Size - SharedSize - VirtualSize - Labels - Containers properties: Id: type: "string" x-nullable: false ParentId: type: "string" x-nullable: false RepoTags: type: "array" x-nullable: false items: type: "string" RepoDigests: type: "array" x-nullable: false items: type: "string" Created: type: "integer" x-nullable: false Size: type: "integer" x-nullable: false SharedSize: type: "integer" x-nullable: false VirtualSize: type: "integer" x-nullable: false Labels: type: "object" x-nullable: false additionalProperties: type: "string" Containers: x-nullable: false type: "integer" AuthConfig: type: "object" properties: username: type: "string" password: type: "string" email: type: "string" serveraddress: type: "string" example: username: "hannibal" password: "xxxx" serveraddress: "https://index.docker.io/v1/" ProcessConfig: type: "object" properties: privileged: type: "boolean" user: type: "string" tty: type: "boolean" entrypoint: type: "string" arguments: type: "array" items: type: "string" Volume: type: "object" required: [Name, Driver, Mountpoint, Labels, Scope, Options] properties: Name: type: "string" description: "Name of the volume." x-nullable: false Driver: type: "string" description: "Name of the volume driver used by the volume." x-nullable: false Mountpoint: type: "string" description: "Mount path of the volume on the host." x-nullable: false CreatedAt: type: "string" format: "dateTime" description: "Date/Time the volume was created." Status: type: "object" description: | Low-level details about the volume, provided by the volume driver. Details are returned as a map with key/value pairs: `{"key":"value","key2":"value2"}`. The `Status` field is optional, and is omitted if the volume driver does not support this feature. additionalProperties: type: "object" Labels: type: "object" description: "User-defined key/value metadata." x-nullable: false additionalProperties: type: "string" Scope: type: "string" description: | The level at which the volume exists. Either `global` for cluster-wide, or `local` for machine level. 
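# For illustration only: registry operations (such as pulling or pushing images)
# generally expect an AuthConfig to be serialized to JSON, base64url-encoded, and
# passed in the `X-Registry-Auth` request header rather than in the request body.
# A rough sketch with made-up credentials:
#
#   {"username": "hannibal", "password": "xxxx", "serveraddress": "https://index.docker.io/v1/"}
#
# The base64url encoding of that JSON object becomes the header value.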
default: "local" x-nullable: false enum: ["local", "global"] Options: type: "object" description: | The driver specific options used when creating the volume. additionalProperties: type: "string" UsageData: type: "object" x-nullable: true required: [Size, RefCount] description: | Usage details about the volume. This information is used by the `GET /system/df` endpoint, and omitted in other endpoints. properties: Size: type: "integer" default: -1 description: | Amount of disk space used by the volume (in bytes). This information is only available for volumes created with the `"local"` volume driver. For volumes created with other volume drivers, this field is set to `-1` ("not available") x-nullable: false RefCount: type: "integer" default: -1 description: | The number of containers referencing this volume. This field is set to `-1` if the reference-count is not available. x-nullable: false example: Name: "tardis" Driver: "custom" Mountpoint: "/var/lib/docker/volumes/tardis" Status: hello: "world" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Scope: "local" CreatedAt: "2016-06-07T20:31:11.853781916Z" Network: type: "object" properties: Name: type: "string" Id: type: "string" Created: type: "string" format: "dateTime" Scope: type: "string" Driver: type: "string" EnableIPv6: type: "boolean" IPAM: $ref: "#/definitions/IPAM" Internal: type: "boolean" Attachable: type: "boolean" Ingress: type: "boolean" Containers: type: "object" additionalProperties: $ref: "#/definitions/NetworkContainer" Options: type: "object" additionalProperties: type: "string" Labels: type: "object" additionalProperties: type: "string" example: Name: "net01" Id: "7d86d31b1478e7cca9ebed7e73aa0fdeec46c5ca29497431d3007d2d9e15ed99" Created: "2016-10-19T04:33:30.360899459Z" Scope: "local" Driver: "bridge" EnableIPv6: false IPAM: Driver: "default" Config: - Subnet: "172.19.0.0/16" Gateway: "172.19.0.1" Options: foo: "bar" Internal: false Attachable: false Ingress: false Containers: 19a4d5d687db25203351ed79d478946f861258f018fe384f229f2efa4b23513c: Name: "test" EndpointID: "628cadb8bcb92de107b2a1e516cbffe463e321f548feb37697cce00ad694f21a" MacAddress: "02:42:ac:13:00:02" IPv4Address: "172.19.0.2/16" IPv6Address: "" Options: com.docker.network.bridge.default_bridge: "true" com.docker.network.bridge.enable_icc: "true" com.docker.network.bridge.enable_ip_masquerade: "true" com.docker.network.bridge.host_binding_ipv4: "0.0.0.0" com.docker.network.bridge.name: "docker0" com.docker.network.driver.mtu: "1500" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" IPAM: type: "object" properties: Driver: description: "Name of the IPAM driver to use." type: "string" default: "default" Config: description: | List of IPAM configuration options, specified as a map: ``` {"Subnet": <CIDR>, "IPRange": <CIDR>, "Gateway": <IP address>, "AuxAddress": <device_name:IP address>} ``` type: "array" items: type: "object" additionalProperties: type: "string" Options: description: "Driver-specific options, specified as a map." 
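# For illustration only: a rough sketch of how an IPAM block might appear in a
# `POST /networks/create` request body (subnet and gateway are made-up examples):
#
#   "IPAM": {
#     "Driver": "default",
#     "Config": [{ "Subnet": "172.28.0.0/16", "Gateway": "172.28.0.1" }]
#   }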
type: "object" additionalProperties: type: "string" NetworkContainer: type: "object" properties: Name: type: "string" EndpointID: type: "string" MacAddress: type: "string" IPv4Address: type: "string" IPv6Address: type: "string" BuildInfo: type: "object" properties: id: type: "string" stream: type: "string" error: type: "string" errorDetail: $ref: "#/definitions/ErrorDetail" status: type: "string" progress: type: "string" progressDetail: $ref: "#/definitions/ProgressDetail" aux: $ref: "#/definitions/ImageID" BuildCache: type: "object" properties: ID: type: "string" Parent: type: "string" Type: type: "string" Description: type: "string" InUse: type: "boolean" Shared: type: "boolean" Size: description: | Amount of disk space used by the build cache (in bytes). type: "integer" CreatedAt: description: | Date and time at which the build cache was created in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2016-08-18T10:44:24.496525531Z" LastUsedAt: description: | Date and time at which the build cache was last used in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" x-nullable: true example: "2017-08-09T07:09:37.632105588Z" UsageCount: type: "integer" ImageID: type: "object" description: "Image ID or Digest" properties: ID: type: "string" example: ID: "sha256:85f05633ddc1c50679be2b16a0479ab6f7637f8884e0cfe0f4d20e1ebb3d6e7c" CreateImageInfo: type: "object" properties: id: type: "string" error: type: "string" status: type: "string" progress: type: "string" progressDetail: $ref: "#/definitions/ProgressDetail" PushImageInfo: type: "object" properties: error: type: "string" status: type: "string" progress: type: "string" progressDetail: $ref: "#/definitions/ProgressDetail" ErrorDetail: type: "object" properties: code: type: "integer" message: type: "string" ProgressDetail: type: "object" properties: current: type: "integer" total: type: "integer" ErrorResponse: description: "Represents an error." type: "object" required: ["message"] properties: message: description: "The error message." type: "string" x-nullable: false example: message: "Something went wrong." IdResponse: description: "Response to an API call that returns just an Id" type: "object" required: ["Id"] properties: Id: description: "The id of the newly created object." type: "string" x-nullable: false EndpointSettings: description: "Configuration for a network endpoint." type: "object" properties: # Configurations IPAMConfig: $ref: "#/definitions/EndpointIPAMConfig" Links: type: "array" items: type: "string" example: - "container_1" - "container_2" Aliases: type: "array" items: type: "string" example: - "server_x" - "server_y" # Operational data NetworkID: description: | Unique ID of the network. type: "string" example: "08754567f1f40222263eab4102e1c733ae697e8e354aa9cd6e18d7402835292a" EndpointID: description: | Unique ID for the service endpoint in a Sandbox. type: "string" example: "b88f5b905aabf2893f3cbc4ee42d1ea7980bbc0a92e2c8922b1e1795298afb0b" Gateway: description: | Gateway address for this network. type: "string" example: "172.17.0.1" IPAddress: description: | IPv4 address. type: "string" example: "172.17.0.4" IPPrefixLen: description: | Mask length of the IPv4 address. type: "integer" example: 16 IPv6Gateway: description: | IPv6 gateway address. type: "string" example: "2001:db8:2::100" GlobalIPv6Address: description: | Global IPv6 address. 
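# For illustration only: `POST /build` streams BuildInfo objects as
# newline-delimited JSON. A rough sketch of what a client might receive
# (the step text, image ID, and error message are made-up examples):
#
#   {"stream": "Step 1/2 : FROM debian:bullseye-slim\n"}
#   {"aux": {"ID": "sha256:85f05633ddc1c50679be2b16a0479ab6f7637f8884e0cfe0f4d20e1ebb3d6e7c"}}
#   {"errorDetail": {"message": "something went wrong"}, "error": "something went wrong"}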
type: "string" example: "2001:db8::5689" GlobalIPv6PrefixLen: description: | Mask length of the global IPv6 address. type: "integer" format: "int64" example: 64 MacAddress: description: | MAC address for the endpoint on this network. type: "string" example: "02:42:ac:11:00:04" DriverOpts: description: | DriverOpts is a mapping of driver options and values. These options are passed directly to the driver and are driver specific. type: "object" x-nullable: true additionalProperties: type: "string" example: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" EndpointIPAMConfig: description: | EndpointIPAMConfig represents an endpoint's IPAM configuration. type: "object" x-nullable: true properties: IPv4Address: type: "string" example: "172.20.30.33" IPv6Address: type: "string" example: "2001:db8:abcd::3033" LinkLocalIPs: type: "array" items: type: "string" example: - "169.254.34.68" - "fe80::3468" PluginMount: type: "object" x-nullable: false required: [Name, Description, Settable, Source, Destination, Type, Options] properties: Name: type: "string" x-nullable: false example: "some-mount" Description: type: "string" x-nullable: false example: "This is a mount that's used by the plugin." Settable: type: "array" items: type: "string" Source: type: "string" example: "/var/lib/docker/plugins/" Destination: type: "string" x-nullable: false example: "/mnt/state" Type: type: "string" x-nullable: false example: "bind" Options: type: "array" items: type: "string" example: - "rbind" - "rw" PluginDevice: type: "object" required: [Name, Description, Settable, Path] x-nullable: false properties: Name: type: "string" x-nullable: false Description: type: "string" x-nullable: false Settable: type: "array" items: type: "string" Path: type: "string" example: "/dev/fuse" PluginEnv: type: "object" x-nullable: false required: [Name, Description, Settable, Value] properties: Name: x-nullable: false type: "string" Description: x-nullable: false type: "string" Settable: type: "array" items: type: "string" Value: type: "string" PluginInterfaceType: type: "object" x-nullable: false required: [Prefix, Capability, Version] properties: Prefix: type: "string" x-nullable: false Capability: type: "string" x-nullable: false Version: type: "string" x-nullable: false Plugin: description: "A plugin for the Engine API" type: "object" required: [Settings, Enabled, Config, Name] properties: Id: type: "string" example: "5724e2c8652da337ab2eedd19fc6fc0ec908e4bd907c7421bf6a8dfc70c4c078" Name: type: "string" x-nullable: false example: "tiborvass/sample-volume-plugin" Enabled: description: True if the plugin is running. False if the plugin is not running, only installed. type: "boolean" x-nullable: false example: true Settings: description: "Settings that can be modified by users." type: "object" x-nullable: false required: [Args, Devices, Env, Mounts] properties: Mounts: type: "array" items: $ref: "#/definitions/PluginMount" Env: type: "array" items: type: "string" example: - "DEBUG=0" Args: type: "array" items: type: "string" Devices: type: "array" items: $ref: "#/definitions/PluginDevice" PluginReference: description: "plugin remote reference used to push/pull the plugin" type: "string" x-nullable: false example: "localhost:5000/tiborvass/sample-volume-plugin:latest" Config: description: "The config of a plugin." 
type: "object" x-nullable: false required: - Description - Documentation - Interface - Entrypoint - WorkDir - Network - Linux - PidHost - PropagatedMount - IpcHost - Mounts - Env - Args properties: DockerVersion: description: "Docker Version used to create the plugin" type: "string" x-nullable: false example: "17.06.0-ce" Description: type: "string" x-nullable: false example: "A sample volume plugin for Docker" Documentation: type: "string" x-nullable: false example: "https://docs.docker.com/engine/extend/plugins/" Interface: description: "The interface between Docker and the plugin" x-nullable: false type: "object" required: [Types, Socket] properties: Types: type: "array" items: $ref: "#/definitions/PluginInterfaceType" example: - "docker.volumedriver/1.0" Socket: type: "string" x-nullable: false example: "plugins.sock" ProtocolScheme: type: "string" example: "some.protocol/v1.0" description: "Protocol to use for clients connecting to the plugin." enum: - "" - "moby.plugins.http/v1" Entrypoint: type: "array" items: type: "string" example: - "/usr/bin/sample-volume-plugin" - "/data" WorkDir: type: "string" x-nullable: false example: "/bin/" User: type: "object" x-nullable: false properties: UID: type: "integer" format: "uint32" example: 1000 GID: type: "integer" format: "uint32" example: 1000 Network: type: "object" x-nullable: false required: [Type] properties: Type: x-nullable: false type: "string" example: "host" Linux: type: "object" x-nullable: false required: [Capabilities, AllowAllDevices, Devices] properties: Capabilities: type: "array" items: type: "string" example: - "CAP_SYS_ADMIN" - "CAP_SYSLOG" AllowAllDevices: type: "boolean" x-nullable: false example: false Devices: type: "array" items: $ref: "#/definitions/PluginDevice" PropagatedMount: type: "string" x-nullable: false example: "/mnt/volumes" IpcHost: type: "boolean" x-nullable: false example: false PidHost: type: "boolean" x-nullable: false example: false Mounts: type: "array" items: $ref: "#/definitions/PluginMount" Env: type: "array" items: $ref: "#/definitions/PluginEnv" example: - Name: "DEBUG" Description: "If set, prints debug messages" Settable: null Value: "0" Args: type: "object" x-nullable: false required: [Name, Description, Settable, Value] properties: Name: x-nullable: false type: "string" example: "args" Description: x-nullable: false type: "string" example: "command line arguments" Settable: type: "array" items: type: "string" Value: type: "array" items: type: "string" rootfs: type: "object" properties: type: type: "string" example: "layers" diff_ids: type: "array" items: type: "string" example: - "sha256:675532206fbf3030b8458f88d6e26d4eb1577688a25efec97154c94e8b6b4887" - "sha256:e216a057b1cb1efc11f8a268f37ef62083e70b1b38323ba252e25ac88904a7e8" ObjectVersion: description: | The version number of the object such as node, service, etc. This is needed to avoid conflicting writes. The client must send the version number along with the modified specification when updating these objects. This approach ensures safe concurrency and determinism in that the change on the object may not be applied if the version number has changed from the last read. In other words, if two update requests specify the same base version, only one of the requests can succeed. As a result, two separate update requests that happen at the same time will not unintentionally overwrite each other. 
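# For illustration only: a rough sketch of the read-modify-write flow that
# ObjectVersion enables, using a service as the example object (the ID and
# index are made up). The client reads the current index, then echoes it back
# as the `version` query parameter on the update; a stale index is rejected:
#
#   GET  /services/9mnpnzenvg8p8tdbtq4wvbkcz          -> {"Version": {"Index": 373531}, ...}
#   POST /services/9mnpnzenvg8p8tdbtq4wvbkcz/update?version=373531   (body: the modified spec)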
type: "object" properties: Index: type: "integer" format: "uint64" example: 373531 NodeSpec: type: "object" properties: Name: description: "Name for the node." type: "string" example: "my-node" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" Role: description: "Role of the node." type: "string" enum: - "worker" - "manager" example: "manager" Availability: description: "Availability of the node." type: "string" enum: - "active" - "pause" - "drain" example: "active" example: Availability: "active" Name: "node-name" Role: "manager" Labels: foo: "bar" Node: type: "object" properties: ID: type: "string" example: "24ifsmvkjbyhk" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: description: | Date and time at which the node was added to the swarm in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2016-08-18T10:44:24.496525531Z" UpdatedAt: description: | Date and time at which the node was last updated in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2017-08-09T07:09:37.632105588Z" Spec: $ref: "#/definitions/NodeSpec" Description: $ref: "#/definitions/NodeDescription" Status: $ref: "#/definitions/NodeStatus" ManagerStatus: $ref: "#/definitions/ManagerStatus" NodeDescription: description: | NodeDescription encapsulates the properties of the Node as reported by the agent. type: "object" properties: Hostname: type: "string" example: "bf3067039e47" Platform: $ref: "#/definitions/Platform" Resources: $ref: "#/definitions/ResourceObject" Engine: $ref: "#/definitions/EngineDescription" TLSInfo: $ref: "#/definitions/TLSInfo" Platform: description: | Platform represents the platform (Arch/OS). type: "object" properties: Architecture: description: | Architecture represents the hardware architecture (for example, `x86_64`). type: "string" example: "x86_64" OS: description: | OS represents the Operating System (for example, `linux` or `windows`). type: "string" example: "linux" EngineDescription: description: "EngineDescription provides information about an engine." type: "object" properties: EngineVersion: type: "string" example: "17.06.0" Labels: type: "object" additionalProperties: type: "string" example: foo: "bar" Plugins: type: "array" items: type: "object" properties: Type: type: "string" Name: type: "string" example: - Type: "Log" Name: "awslogs" - Type: "Log" Name: "fluentd" - Type: "Log" Name: "gcplogs" - Type: "Log" Name: "gelf" - Type: "Log" Name: "journald" - Type: "Log" Name: "json-file" - Type: "Log" Name: "logentries" - Type: "Log" Name: "splunk" - Type: "Log" Name: "syslog" - Type: "Network" Name: "bridge" - Type: "Network" Name: "host" - Type: "Network" Name: "ipvlan" - Type: "Network" Name: "macvlan" - Type: "Network" Name: "null" - Type: "Network" Name: "overlay" - Type: "Volume" Name: "local" - Type: "Volume" Name: "localhost:5000/vieux/sshfs:latest" - Type: "Volume" Name: "vieux/sshfs:latest" TLSInfo: description: | Information about the issuer of leaf TLS certificates and the trusted root CA certificate. type: "object" properties: TrustRoot: description: | The root CA certificate(s) that are used to validate leaf TLS certificates. type: "string" CertIssuerSubject: description: The base64-url-safe-encoded raw subject bytes of the issuer. type: "string" CertIssuerPublicKey: description: | The base64-url-safe-encoded raw public key bytes of the issuer. 
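# For illustration only: a rough sketch of draining a node by sending a modified
# NodeSpec to `POST /nodes/{id}/update?version=<current index>` (name and labels
# are made-up examples):
#
#   {
#     "Name": "node-name",
#     "Role": "worker",
#     "Availability": "drain",
#     "Labels": { "foo": "bar" }
#   }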
type: "string" example: TrustRoot: | -----BEGIN CERTIFICATE----- MIIBajCCARCgAwIBAgIUbYqrLSOSQHoxD8CwG6Bi2PJi9c8wCgYIKoZIzj0EAwIw EzERMA8GA1UEAxMIc3dhcm0tY2EwHhcNMTcwNDI0MjE0MzAwWhcNMzcwNDE5MjE0 MzAwWjATMREwDwYDVQQDEwhzd2FybS1jYTBZMBMGByqGSM49AgEGCCqGSM49AwEH A0IABJk/VyMPYdaqDXJb/VXh5n/1Yuv7iNrxV3Qb3l06XD46seovcDWs3IZNV1lf 3Skyr0ofcchipoiHkXBODojJydSjQjBAMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMB Af8EBTADAQH/MB0GA1UdDgQWBBRUXxuRcnFjDfR/RIAUQab8ZV/n4jAKBggqhkjO PQQDAgNIADBFAiAy+JTe6Uc3KyLCMiqGl2GyWGQqQDEcO3/YG36x7om65AIhAJvz pxv6zFeVEkAEEkqIYi0omA9+CjanB/6Bz4n1uw8H -----END CERTIFICATE----- CertIssuerSubject: "MBMxETAPBgNVBAMTCHN3YXJtLWNh" CertIssuerPublicKey: "MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEmT9XIw9h1qoNclv9VeHmf/Vi6/uI2vFXdBveXTpcPjqx6i9wNazchk1XWV/dKTKvSh9xyGKmiIeRcE4OiMnJ1A==" NodeStatus: description: | NodeStatus represents the status of a node. It provides the current status of the node, as seen by the manager. type: "object" properties: State: $ref: "#/definitions/NodeState" Message: type: "string" example: "" Addr: description: "IP address of the node." type: "string" example: "172.17.0.2" NodeState: description: "NodeState represents the state of a node." type: "string" enum: - "unknown" - "down" - "ready" - "disconnected" example: "ready" ManagerStatus: description: | ManagerStatus represents the status of a manager. It provides the current status of a node's manager component, if the node is a manager. x-nullable: true type: "object" properties: Leader: type: "boolean" default: false example: true Reachability: $ref: "#/definitions/Reachability" Addr: description: | The IP address and port at which the manager is reachable. type: "string" example: "10.0.0.46:2377" Reachability: description: "Reachability represents the reachability of a node." type: "string" enum: - "unknown" - "unreachable" - "reachable" example: "reachable" SwarmSpec: description: "User modifiable swarm configuration." type: "object" properties: Name: description: "Name of the swarm." type: "string" example: "default" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" example: com.example.corp.type: "production" com.example.corp.department: "engineering" Orchestration: description: "Orchestration configuration." type: "object" x-nullable: true properties: TaskHistoryRetentionLimit: description: | The number of historic tasks to keep per instance or node. If negative, never remove completed or failed tasks. type: "integer" format: "int64" example: 10 Raft: description: "Raft configuration." type: "object" properties: SnapshotInterval: description: "The number of log entries between snapshots." type: "integer" format: "uint64" example: 10000 KeepOldSnapshots: description: | The number of snapshots to keep beyond the current snapshot. type: "integer" format: "uint64" LogEntriesForSlowFollowers: description: | The number of log entries to keep around to sync up slow followers after a snapshot is created. type: "integer" format: "uint64" example: 500 ElectionTick: description: | The number of ticks that a follower will wait for a message from the leader before becoming a candidate and starting an election. `ElectionTick` must be greater than `HeartbeatTick`. A tick currently defaults to one second, so these translate directly to seconds currently, but this is NOT guaranteed. type: "integer" example: 3 HeartbeatTick: description: | The number of ticks between heartbeats. Every HeartbeatTick ticks, the leader will send a heartbeat to the followers. 
A tick currently defaults to one second, so these translate directly to seconds currently, but this is NOT guaranteed. type: "integer" example: 1 Dispatcher: description: "Dispatcher configuration." type: "object" x-nullable: true properties: HeartbeatPeriod: description: | The delay for an agent to send a heartbeat to the dispatcher. type: "integer" format: "int64" example: 5000000000 CAConfig: description: "CA configuration." type: "object" x-nullable: true properties: NodeCertExpiry: description: "The duration node certificates are issued for." type: "integer" format: "int64" example: 7776000000000000 ExternalCAs: description: | Configuration for forwarding signing requests to an external certificate authority. type: "array" items: type: "object" properties: Protocol: description: | Protocol for communication with the external CA (currently only `cfssl` is supported). type: "string" enum: - "cfssl" default: "cfssl" URL: description: | URL where certificate signing requests should be sent. type: "string" Options: description: | An object with key/value pairs that are interpreted as protocol-specific options for the external CA driver. type: "object" additionalProperties: type: "string" CACert: description: | The root CA certificate (in PEM format) this external CA uses to issue TLS certificates (assumed to be to the current swarm root CA certificate if not provided). type: "string" SigningCACert: description: | The desired signing CA certificate for all swarm node TLS leaf certificates, in PEM format. type: "string" SigningCAKey: description: | The desired signing CA key for all swarm node TLS leaf certificates, in PEM format. type: "string" ForceRotate: description: | An integer whose purpose is to force swarm to generate a new signing CA certificate and key, if none have been specified in `SigningCACert` and `SigningCAKey`. format: "uint64" type: "integer" EncryptionConfig: description: "Parameters related to encryption-at-rest." type: "object" properties: AutoLockManagers: description: | If set, generate a key and use it to lock data stored on the managers. type: "boolean" example: false TaskDefaults: description: "Defaults for creating tasks in this cluster." type: "object" properties: LogDriver: description: | The log driver to use for tasks created in the orchestrator if unspecified by a service. Updating this value only affects new tasks. Existing tasks continue to use their previously configured log driver until recreated. type: "object" properties: Name: description: | The log driver to use as a default for new tasks. type: "string" example: "json-file" Options: description: | Driver-specific options for the selected log driver, specified as key/value pairs. type: "object" additionalProperties: type: "string" example: "max-file": "10" "max-size": "100m" # The Swarm information for `GET /info`. It is the same as `GET /swarm`, but # without `JoinTokens`. ClusterInfo: description: | ClusterInfo represents information about the swarm as is returned by the "/info" endpoint. Join-tokens are not included. x-nullable: true type: "object" properties: ID: description: "The ID of the swarm." type: "string" example: "abajmipo7b4xz5ip2nrla6b11" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: description: | Date and time at which the swarm was initialised in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds.
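# For illustration only: a rough sketch of how the TaskDefaults.LogDriver block
# above might look inside a SwarmSpec submitted to `POST /swarm/update`
# (option values are examples, not defaults):
#
#   "TaskDefaults": {
#     "LogDriver": {
#       "Name": "json-file",
#       "Options": { "max-file": "10", "max-size": "100m" }
#     }
#   }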
type: "string" format: "dateTime" example: "2016-08-18T10:44:24.496525531Z" UpdatedAt: description: | Date and time at which the swarm was last updated in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2017-08-09T07:09:37.632105588Z" Spec: $ref: "#/definitions/SwarmSpec" TLSInfo: $ref: "#/definitions/TLSInfo" RootRotationInProgress: description: | Whether there is currently a root CA rotation in progress for the swarm type: "boolean" example: false DataPathPort: description: | DataPathPort specifies the data path port number for data traffic. Acceptable port range is 1024 to 49151. If no port is set or is set to 0, the default port (4789) is used. type: "integer" format: "uint32" default: 4789 example: 4789 DefaultAddrPool: description: | Default Address Pool specifies default subnet pools for global scope networks. type: "array" items: type: "string" format: "CIDR" example: ["10.10.0.0/16", "20.20.0.0/16"] SubnetSize: description: | SubnetSize specifies the subnet size of the networks created from the default subnet pool. type: "integer" format: "uint32" maximum: 29 default: 24 example: 24 JoinTokens: description: | JoinTokens contains the tokens workers and managers need to join the swarm. type: "object" properties: Worker: description: | The token workers can use to join the swarm. type: "string" example: "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-1awxwuwd3z9j1z3puu7rcgdbx" Manager: description: | The token managers can use to join the swarm. type: "string" example: "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-7p73s1dx5in4tatdymyhg9hu2" Swarm: type: "object" allOf: - $ref: "#/definitions/ClusterInfo" - type: "object" properties: JoinTokens: $ref: "#/definitions/JoinTokens" TaskSpec: description: "User modifiable task configuration." type: "object" properties: PluginSpec: type: "object" description: | Plugin spec for the service. *(Experimental release only.)* <p><br /></p> > **Note**: ContainerSpec, NetworkAttachmentSpec, and PluginSpec are > mutually exclusive. PluginSpec is only used when the Runtime field > is set to `plugin`. NetworkAttachmentSpec is used when the Runtime > field is set to `attachment`. properties: Name: description: "The name or 'alias' to use for the plugin." type: "string" Remote: description: "The plugin image reference to use." type: "string" Disabled: description: "Disable the plugin once scheduled." type: "boolean" PluginPrivilege: type: "array" items: description: | Describes a permission accepted by the user upon installing the plugin. type: "object" properties: Name: type: "string" Description: type: "string" Value: type: "array" items: type: "string" ContainerSpec: type: "object" description: | Container spec for the service. <p><br /></p> > **Note**: ContainerSpec, NetworkAttachmentSpec, and PluginSpec are > mutually exclusive. PluginSpec is only used when the Runtime field > is set to `plugin`. NetworkAttachmentSpec is used when the Runtime > field is set to `attachment`. properties: Image: description: "The image name to use for the container" type: "string" Labels: description: "User-defined key/value data." type: "object" additionalProperties: type: "string" Command: description: "The command to be run in the image." type: "array" items: type: "string" Args: description: "Arguments to the command." 
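# For illustration only: the worker/manager tokens above are what a joining node
# passes in the `JoinToken` field of a `POST /swarm/join` request. A rough sketch
# with made-up addresses (the token is the worker token example above):
#
#   {
#     "ListenAddr": "0.0.0.0:2377",
#     "AdvertiseAddr": "192.168.1.2:2377",
#     "RemoteAddrs": ["192.168.1.1:2377"],
#     "JoinToken": "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-1awxwuwd3z9j1z3puu7rcgdbx"
#   }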
type: "array" items: type: "string" Hostname: description: | The hostname to use for the container, as a valid [RFC 1123](https://tools.ietf.org/html/rfc1123) hostname. type: "string" Env: description: | A list of environment variables in the form `VAR=value`. type: "array" items: type: "string" Dir: description: "The working directory for commands to run in." type: "string" User: description: "The user inside the container." type: "string" Groups: type: "array" description: | A list of additional groups that the container process will run as. items: type: "string" Privileges: type: "object" description: "Security options for the container" properties: CredentialSpec: type: "object" description: "CredentialSpec for managed service account (Windows only)" properties: Config: type: "string" example: "0bt9dmxjvjiqermk6xrop3ekq" description: | Load credential spec from a Swarm Config with the given ID. The specified config must also be present in the Configs field with the Runtime property set. <p><br /></p> > **Note**: `CredentialSpec.File`, `CredentialSpec.Registry`, > and `CredentialSpec.Config` are mutually exclusive. File: type: "string" example: "spec.json" description: | Load credential spec from this file. The file is read by the daemon, and must be present in the `CredentialSpecs` subdirectory in the docker data directory, which defaults to `C:\ProgramData\Docker\` on Windows. For example, specifying `spec.json` loads `C:\ProgramData\Docker\CredentialSpecs\spec.json`. <p><br /></p> > **Note**: `CredentialSpec.File`, `CredentialSpec.Registry`, > and `CredentialSpec.Config` are mutually exclusive. Registry: type: "string" description: | Load credential spec from this value in the Windows registry. The specified registry value must be located in: `HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization\Containers\CredentialSpecs` <p><br /></p> > **Note**: `CredentialSpec.File`, `CredentialSpec.Registry`, > and `CredentialSpec.Config` are mutually exclusive. SELinuxContext: type: "object" description: "SELinux labels of the container" properties: Disable: type: "boolean" description: "Disable SELinux" User: type: "string" description: "SELinux user label" Role: type: "string" description: "SELinux role label" Type: type: "string" description: "SELinux type label" Level: type: "string" description: "SELinux level label" TTY: description: "Whether a pseudo-TTY should be allocated." type: "boolean" OpenStdin: description: "Open `stdin`" type: "boolean" ReadOnly: description: "Mount the container's root filesystem as read only." type: "boolean" Mounts: description: | Specification for mounts to be added to containers created as part of the service. type: "array" items: $ref: "#/definitions/Mount" StopSignal: description: "Signal to stop the container." type: "string" StopGracePeriod: description: | Amount of time to wait for the container to terminate before forcefully killing it. type: "integer" format: "int64" HealthCheck: $ref: "#/definitions/HealthConfig" Hosts: type: "array" description: | A list of hostname/IP mappings to add to the container's `hosts` file. The format of extra hosts is specified in the [hosts(5)](http://man7.org/linux/man-pages/man5/hosts.5.html) man page: IP_address canonical_hostname [aliases...] items: type: "string" DNSConfig: description: | Specification for DNS related configurations in resolver configuration file (`resolv.conf`). type: "object" properties: Nameservers: description: "The IP addresses of the name servers." 
type: "array" items: type: "string" Search: description: "A search list for host-name lookup." type: "array" items: type: "string" Options: description: | A list of internal resolver variables to be modified (e.g., `debug`, `ndots:3`, etc.). type: "array" items: type: "string" Secrets: description: | Secrets contains references to zero or more secrets that will be exposed to the service. type: "array" items: type: "object" properties: File: description: | File represents a specific target that is backed by a file. type: "object" properties: Name: description: | Name represents the final filename in the filesystem. type: "string" UID: description: "UID represents the file UID." type: "string" GID: description: "GID represents the file GID." type: "string" Mode: description: "Mode represents the FileMode of the file." type: "integer" format: "uint32" SecretID: description: | SecretID represents the ID of the specific secret that we're referencing. type: "string" SecretName: description: | SecretName is the name of the secret that this references, but this is just provided for lookup/display purposes. The secret in the reference will be identified by its ID. type: "string" Configs: description: | Configs contains references to zero or more configs that will be exposed to the service. type: "array" items: type: "object" properties: File: description: | File represents a specific target that is backed by a file. <p><br /></p> > **Note**: `Configs.File` and `Configs.Runtime` are mutually exclusive type: "object" properties: Name: description: | Name represents the final filename in the filesystem. type: "string" UID: description: "UID represents the file UID." type: "string" GID: description: "GID represents the file GID." type: "string" Mode: description: "Mode represents the FileMode of the file." type: "integer" format: "uint32" Runtime: description: | Runtime represents a target that is not mounted into the container but is used by the task. <p><br /></p> > **Note**: `Configs.File` and `Configs.Runtime` are mutually > exclusive type: "object" ConfigID: description: | ConfigID represents the ID of the specific config that we're referencing. type: "string" ConfigName: description: | ConfigName is the name of the config that this references, but this is just provided for lookup/display purposes. The config in the reference will be identified by its ID. type: "string" Isolation: type: "string" description: | Isolation technology of the containers running the service. (Windows only) enum: - "default" - "process" - "hyperv" Init: description: | Run an init inside the container that forwards signals and reaps processes. This field is omitted if empty, and the default (as configured on the daemon) is used. type: "boolean" x-nullable: true Sysctls: description: | Set kernel namespaced parameters (sysctls) in the container. The Sysctls option on services accepts the same sysctls as are supported on containers. Note that while the same sysctls are supported, no guarantees or checks are made about their suitability for a clustered environment, and it's up to the user to determine whether a given sysctl will work properly in a Service. type: "object" additionalProperties: type: "string" # This option is not used by Windows containers CapabilityAdd: type: "array" description: | A list of kernel capabilities to add to the default set for the container.
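# For illustration only: a rough sketch of a secret reference inside
# `TaskTemplate.ContainerSpec.Secrets` when creating a service. IDs and names
# are made up; `Mode` 384 is decimal for 0600, and the secret is surfaced to
# tasks as the file named in `File.Name`:
#
#   "Secrets": [{
#     "File": { "Name": "www.example.org.key", "UID": "33", "GID": "33", "Mode": 384 },
#     "SecretID": "fpjqlhnwb19zds35k8wn80lq9",
#     "SecretName": "example_org_domain_key"
#   }]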
items: type: "string" example: - "CAP_NET_RAW" - "CAP_SYS_ADMIN" - "CAP_SYS_CHROOT" - "CAP_SYSLOG" CapabilityDrop: type: "array" description: | A list of kernel capabilities to drop from the default set for the container. items: type: "string" example: - "CAP_NET_RAW" Ulimits: description: | A list of resource limits to set in the container. For example: `{"Name": "nofile", "Soft": 1024, "Hard": 2048}`" type: "array" items: type: "object" properties: Name: description: "Name of ulimit" type: "string" Soft: description: "Soft limit" type: "integer" Hard: description: "Hard limit" type: "integer" NetworkAttachmentSpec: description: | Read-only spec type for non-swarm containers attached to swarm overlay networks. <p><br /></p> > **Note**: ContainerSpec, NetworkAttachmentSpec, and PluginSpec are > mutually exclusive. PluginSpec is only used when the Runtime field > is set to `plugin`. NetworkAttachmentSpec is used when the Runtime > field is set to `attachment`. type: "object" properties: ContainerID: description: "ID of the container represented by this task" type: "string" Resources: description: | Resource requirements which apply to each individual container created as part of the service. type: "object" properties: Limits: description: "Define resources limits." $ref: "#/definitions/Limit" Reservation: description: "Define resources reservation." $ref: "#/definitions/ResourceObject" RestartPolicy: description: | Specification for the restart policy which applies to containers created as part of this service. type: "object" properties: Condition: description: "Condition for restart." type: "string" enum: - "none" - "on-failure" - "any" Delay: description: "Delay between restart attempts." type: "integer" format: "int64" MaxAttempts: description: | Maximum attempts to restart a given container before giving up (default value is 0, which is ignored). type: "integer" format: "int64" default: 0 Window: description: | Windows is the time window used to evaluate the restart policy (default value is 0, which is unbounded). type: "integer" format: "int64" default: 0 Placement: type: "object" properties: Constraints: description: | An array of constraint expressions to limit the set of nodes where a task can be scheduled. Constraint expressions can either use a _match_ (`==`) or _exclude_ (`!=`) rule. Multiple constraints find nodes that satisfy every expression (AND match). Constraints can match node or Docker Engine labels as follows: node attribute | matches | example ---------------------|--------------------------------|----------------------------------------------- `node.id` | Node ID | `node.id==2ivku8v2gvtg4` `node.hostname` | Node hostname | `node.hostname!=node-2` `node.role` | Node role (`manager`/`worker`) | `node.role==manager` `node.platform.os` | Node operating system | `node.platform.os==windows` `node.platform.arch` | Node architecture | `node.platform.arch==x86_64` `node.labels` | User-defined node labels | `node.labels.security==high` `engine.labels` | Docker Engine's labels | `engine.labels.operatingsystem==ubuntu-14.04` `engine.labels` apply to Docker Engine labels like operating system, drivers, etc. Swarm administrators add `node.labels` for operational purposes by using the [`node update endpoint`](#operation/NodeUpdate). 
type: "array" items: type: "string" example: - "node.hostname!=node3.corp.example.com" - "node.role!=manager" - "node.labels.type==production" - "node.platform.os==linux" - "node.platform.arch==x86_64" Preferences: description: | Preferences provide a way to make the scheduler aware of factors such as topology. They are provided in order from highest to lowest precedence. type: "array" items: type: "object" properties: Spread: type: "object" properties: SpreadDescriptor: description: | label descriptor, such as `engine.labels.az`. type: "string" example: - Spread: SpreadDescriptor: "node.labels.datacenter" - Spread: SpreadDescriptor: "node.labels.rack" MaxReplicas: description: | Maximum number of replicas for per node (default value is 0, which is unlimited) type: "integer" format: "int64" default: 0 Platforms: description: | Platforms stores all the platforms that the service's image can run on. This field is used in the platform filter for scheduling. If empty, then the platform filter is off, meaning there are no scheduling restrictions. type: "array" items: $ref: "#/definitions/Platform" ForceUpdate: description: | A counter that triggers an update even if no relevant parameters have been changed. type: "integer" Runtime: description: | Runtime is the type of runtime specified for the task executor. type: "string" Networks: description: "Specifies which networks the service should attach to." type: "array" items: $ref: "#/definitions/NetworkAttachmentConfig" LogDriver: description: | Specifies the log driver to use for tasks created from this spec. If not present, the default one for the swarm will be used, finally falling back to the engine default if not specified. type: "object" properties: Name: type: "string" Options: type: "object" additionalProperties: type: "string" TaskState: type: "string" enum: - "new" - "allocated" - "pending" - "assigned" - "accepted" - "preparing" - "ready" - "starting" - "running" - "complete" - "shutdown" - "failed" - "rejected" - "remove" - "orphaned" Task: type: "object" properties: ID: description: "The ID of the task." type: "string" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" UpdatedAt: type: "string" format: "dateTime" Name: description: "Name of the task." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" Spec: $ref: "#/definitions/TaskSpec" ServiceID: description: "The ID of the service this task is part of." type: "string" Slot: type: "integer" NodeID: description: "The ID of the node that this task is on." type: "string" AssignedGenericResources: $ref: "#/definitions/GenericResources" Status: type: "object" properties: Timestamp: type: "string" format: "dateTime" State: $ref: "#/definitions/TaskState" Message: type: "string" Err: type: "string" ContainerStatus: type: "object" properties: ContainerID: type: "string" PID: type: "integer" ExitCode: type: "integer" DesiredState: $ref: "#/definitions/TaskState" JobIteration: description: | If the Service this Task belongs to is a job-mode service, contains the JobIteration of the Service this Task was created for. Absent if the Task was created for a Replicated or Global Service. 
$ref: "#/definitions/ObjectVersion" example: ID: "0kzzo1i0y4jz6027t0k7aezc7" Version: Index: 71 CreatedAt: "2016-06-07T21:07:31.171892745Z" UpdatedAt: "2016-06-07T21:07:31.376370513Z" Spec: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ServiceID: "9mnpnzenvg8p8tdbtq4wvbkcz" Slot: 1 NodeID: "60gvrl6tm78dmak4yl7srz94v" Status: Timestamp: "2016-06-07T21:07:31.290032978Z" State: "running" Message: "started" ContainerStatus: ContainerID: "e5d62702a1b48d01c3e02ca1e0212a250801fa8d67caca0b6f35919ebc12f035" PID: 677 DesiredState: "running" NetworksAttachments: - Network: ID: "4qvuz4ko70xaltuqbt8956gd1" Version: Index: 18 CreatedAt: "2016-06-07T20:31:11.912919752Z" UpdatedAt: "2016-06-07T21:07:29.955277358Z" Spec: Name: "ingress" Labels: com.docker.swarm.internal: "true" DriverConfiguration: {} IPAMOptions: Driver: {} Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" DriverState: Name: "overlay" Options: com.docker.network.driver.overlay.vxlanid_list: "256" IPAMOptions: Driver: Name: "default" Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" Addresses: - "10.255.0.10/16" AssignedGenericResources: - DiscreteResourceSpec: Kind: "SSD" Value: 3 - NamedResourceSpec: Kind: "GPU" Value: "UUID1" - NamedResourceSpec: Kind: "GPU" Value: "UUID2" ServiceSpec: description: "User modifiable configuration for a service." properties: Name: description: "Name of the service." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" TaskTemplate: $ref: "#/definitions/TaskSpec" Mode: description: "Scheduling mode for the service." type: "object" properties: Replicated: type: "object" properties: Replicas: type: "integer" format: "int64" Global: type: "object" ReplicatedJob: description: | The mode used for services with a finite number of tasks that run to a completed state. type: "object" properties: MaxConcurrent: description: | The maximum number of replicas to run simultaneously. type: "integer" format: "int64" default: 1 TotalCompletions: description: | The total number of replicas desired to reach the Completed state. If unset, will default to the value of `MaxConcurrent` type: "integer" format: "int64" GlobalJob: description: | The mode used for services which run a task to the completed state on each valid node. type: "object" UpdateConfig: description: "Specification for the update strategy of the service." type: "object" properties: Parallelism: description: | Maximum number of tasks to be updated in one iteration (0 means unlimited parallelism). type: "integer" format: "int64" Delay: description: "Amount of time between updates, in nanoseconds." type: "integer" format: "int64" FailureAction: description: | Action to take if an updated task fails to run, or stops running during the update. type: "string" enum: - "continue" - "pause" - "rollback" Monitor: description: | Amount of time to monitor each updated task for failures, in nanoseconds. type: "integer" format: "int64" MaxFailureRatio: description: | The fraction of tasks that may fail during an update before the failure action is invoked, specified as a floating point number between 0 and 1. type: "number" default: 0 Order: description: | The order of operations when rolling out an updated task. Either the old task is shut down before the new task is started, or the new task is started before the old task is shut down. 
type: "string" enum: - "stop-first" - "start-first" RollbackConfig: description: "Specification for the rollback strategy of the service." type: "object" properties: Parallelism: description: | Maximum number of tasks to be rolled back in one iteration (0 means unlimited parallelism). type: "integer" format: "int64" Delay: description: | Amount of time between rollback iterations, in nanoseconds. type: "integer" format: "int64" FailureAction: description: | Action to take if a rolled back task fails to run, or stops running during the rollback. type: "string" enum: - "continue" - "pause" Monitor: description: | Amount of time to monitor each rolled back task for failures, in nanoseconds. type: "integer" format: "int64" MaxFailureRatio: description: | The fraction of tasks that may fail during a rollback before the failure action is invoked, specified as a floating point number between 0 and 1. type: "number" default: 0 Order: description: | The order of operations when rolling back a task. Either the old task is shut down before the new task is started, or the new task is started before the old task is shut down. type: "string" enum: - "stop-first" - "start-first" Networks: description: "Specifies which networks the service should attach to." type: "array" items: $ref: "#/definitions/NetworkAttachmentConfig" EndpointSpec: $ref: "#/definitions/EndpointSpec" EndpointPortConfig: type: "object" properties: Name: type: "string" Protocol: type: "string" enum: - "tcp" - "udp" - "sctp" TargetPort: description: "The port inside the container." type: "integer" PublishedPort: description: "The port on the swarm hosts." type: "integer" PublishMode: description: | The mode in which the port is published. <p><br /></p> - "ingress" makes the target port accessible on every node, regardless of whether there is a task for the service running on that node or not. - "host" bypasses the routing mesh and publishes the port directly on the swarm node where that service is running. type: "string" enum: - "ingress" - "host" default: "ingress" example: "ingress" EndpointSpec: description: "Properties that can be configured to access and load balance a service." type: "object" properties: Mode: description: | The mode of resolution to use for internal load balancing between tasks. type: "string" enum: - "vip" - "dnsrr" default: "vip" Ports: description: | List of exposed ports that this service is accessible on from the outside. Ports can only be provided if `vip` resolution mode is used. type: "array" items: $ref: "#/definitions/EndpointPortConfig" Service: type: "object" properties: ID: type: "string" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" UpdatedAt: type: "string" format: "dateTime" Spec: $ref: "#/definitions/ServiceSpec" Endpoint: type: "object" properties: Spec: $ref: "#/definitions/EndpointSpec" Ports: type: "array" items: $ref: "#/definitions/EndpointPortConfig" VirtualIPs: type: "array" items: type: "object" properties: NetworkID: type: "string" Addr: type: "string" UpdateStatus: description: "The status of a service update." type: "object" properties: State: type: "string" enum: - "updating" - "paused" - "completed" StartedAt: type: "string" format: "dateTime" CompletedAt: type: "string" format: "dateTime" Message: type: "string" ServiceStatus: description: | The status of the service's tasks. Provided only when requested as part of a ServiceList operation.
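# For illustration only: a rough sketch of an EndpointSpec that publishes
# container port 80 on swarm port 8080 through the routing mesh (all values
# are examples):
#
#   "EndpointSpec": {
#     "Mode": "vip",
#     "Ports": [
#       { "Protocol": "tcp", "TargetPort": 80, "PublishedPort": 8080, "PublishMode": "ingress" }
#     ]
#   }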
type: "object" properties: RunningTasks: description: | The number of tasks for the service currently in the Running state. type: "integer" format: "uint64" example: 7 DesiredTasks: description: | The number of tasks for the service desired to be running. For replicated services, this is the replica count from the service spec. For global services, this is computed by taking count of all tasks for the service with a Desired State other than Shutdown. type: "integer" format: "uint64" example: 10 CompletedTasks: description: | The number of tasks for a job that are in the Completed state. This field must be cross-referenced with the service type, as the value of 0 may mean the service is not in a job mode, or it may mean the job-mode service has no tasks yet Completed. type: "integer" format: "uint64" JobStatus: description: | The status of the service when it is in one of ReplicatedJob or GlobalJob modes. Absent on Replicated and Global mode services. The JobIteration is an ObjectVersion, but unlike the Service's version, does not need to be sent with an update request. type: "object" properties: JobIteration: description: | JobIteration is a value increased each time a Job is executed, successfully or otherwise. "Executed", in this case, means the job as a whole has been started, not that an individual Task has been launched. A job is "Executed" when its ServiceSpec is updated. JobIteration can be used to disambiguate Tasks belonging to different executions of a job. Though JobIteration will increase with each subsequent execution, it may not necessarily increase by 1, and so JobIteration should not be used to $ref: "#/definitions/ObjectVersion" LastExecution: description: | The last time, as observed by the server, that this job was started. type: "string" format: "dateTime" example: ID: "9mnpnzenvg8p8tdbtq4wvbkcz" Version: Index: 19 CreatedAt: "2016-06-07T21:05:51.880065305Z" UpdatedAt: "2016-06-07T21:07:29.962229872Z" Spec: Name: "hopeful_cori" TaskTemplate: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ForceUpdate: 0 Mode: Replicated: Replicas: 1 UpdateConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 RollbackConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 EndpointSpec: Mode: "vip" Ports: - Protocol: "tcp" TargetPort: 6379 PublishedPort: 30001 Endpoint: Spec: Mode: "vip" Ports: - Protocol: "tcp" TargetPort: 6379 PublishedPort: 30001 Ports: - Protocol: "tcp" TargetPort: 6379 PublishedPort: 30001 VirtualIPs: - NetworkID: "4qvuz4ko70xaltuqbt8956gd1" Addr: "10.255.0.2/16" - NetworkID: "4qvuz4ko70xaltuqbt8956gd1" Addr: "10.255.0.3/16" ImageDeleteResponseItem: type: "object" properties: Untagged: description: "The image ID of an image that was untagged" type: "string" Deleted: description: "The image ID of an image that was deleted" type: "string" ServiceUpdateResponse: type: "object" properties: Warnings: description: "Optional warning messages" type: "array" items: type: "string" example: Warning: "unable to pin image doesnotexist:latest to digest: image library/doesnotexist:latest not found" ContainerSummary: type: "array" items: type: "object" properties: Id: description: "The ID of this container" type: "string" x-go-name: "ID" Names: description: "The names that this container has been given" type: "array" items: type: "string" Image: description: "The name of the image used when creating 
this container" type: "string" ImageID: description: "The ID of the image that this container was created from" type: "string" Command: description: "Command to run when starting the container" type: "string" Created: description: "When the container was created" type: "integer" format: "int64" Ports: description: "The ports exposed by this container" type: "array" items: $ref: "#/definitions/Port" SizeRw: description: "The size of files that have been created or changed by this container" type: "integer" format: "int64" SizeRootFs: description: "The total size of all the files in this container" type: "integer" format: "int64" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" State: description: "The state of this container (e.g. `Exited`)" type: "string" Status: description: "Additional human-readable status of this container (e.g. `Exit 0`)" type: "string" HostConfig: type: "object" properties: NetworkMode: type: "string" NetworkSettings: description: "A summary of the container's network settings" type: "object" properties: Networks: type: "object" additionalProperties: $ref: "#/definitions/EndpointSettings" Mounts: type: "array" items: $ref: "#/definitions/Mount" Driver: description: "Driver represents a driver (network, logging, secrets)." type: "object" required: [Name] properties: Name: description: "Name of the driver." type: "string" x-nullable: false example: "some-driver" Options: description: "Key/value map of driver-specific options." type: "object" x-nullable: false additionalProperties: type: "string" example: OptionA: "value for driver-specific option A" OptionB: "value for driver-specific option B" SecretSpec: type: "object" properties: Name: description: "User-defined name of the secret." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" example: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Data: description: | Base64-url-safe-encoded ([RFC 4648](https://tools.ietf.org/html/rfc4648#section-5)) data to store as secret. This field is only used to _create_ a secret, and is not returned by other endpoints. type: "string" example: "" Driver: description: | Name of the secrets driver used to fetch the secret's value from an external secret store. $ref: "#/definitions/Driver" Templating: description: | Templating driver, if applicable Templating controls whether and how to evaluate the config payload as a template. If no driver is set, no templating is used. $ref: "#/definitions/Driver" Secret: type: "object" properties: ID: type: "string" example: "blt1owaxmitz71s9v5zh81zun" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" example: "2017-07-20T13:55:28.678958722Z" UpdatedAt: type: "string" format: "dateTime" example: "2017-07-20T13:55:28.678958722Z" Spec: $ref: "#/definitions/SecretSpec" ConfigSpec: type: "object" properties: Name: description: "User-defined name of the config." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" Data: description: | Base64-url-safe-encoded ([RFC 4648](https://tools.ietf.org/html/rfc4648#section-5)) config data. type: "string" Templating: description: | Templating driver, if applicable Templating controls whether and how to evaluate the config payload as a template. If no driver is set, no templating is used. 
$ref: "#/definitions/Driver" Config: type: "object" properties: ID: type: "string" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" UpdatedAt: type: "string" format: "dateTime" Spec: $ref: "#/definitions/ConfigSpec" ContainerState: description: | ContainerState stores container's running state. It's part of ContainerJSONBase and will be returned by the "inspect" command. type: "object" properties: Status: description: | String representation of the container state. Can be one of "created", "running", "paused", "restarting", "removing", "exited", or "dead". type: "string" enum: ["created", "running", "paused", "restarting", "removing", "exited", "dead"] example: "running" Running: description: | Whether this container is running. Note that a running container can be _paused_. The `Running` and `Paused` booleans are not mutually exclusive: When pausing a container (on Linux), the freezer cgroup is used to suspend all processes in the container. Freezing the process requires the process to be running. As a result, paused containers are both `Running` _and_ `Paused`. Use the `Status` field instead to determine if a container's state is "running". type: "boolean" example: true Paused: description: "Whether this container is paused." type: "boolean" example: false Restarting: description: "Whether this container is restarting." type: "boolean" example: false OOMKilled: description: | Whether this container has been killed because it ran out of memory. type: "boolean" example: false Dead: type: "boolean" example: false Pid: description: "The process ID of this container" type: "integer" example: 1234 ExitCode: description: "The last exit code of this container" type: "integer" example: 0 Error: type: "string" StartedAt: description: "The time when this container was last started." type: "string" example: "2020-01-06T09:06:59.461876391Z" FinishedAt: description: "The time when this container last exited." type: "string" example: "2020-01-06T09:07:59.461876391Z" Health: x-nullable: true $ref: "#/definitions/Health" SystemVersion: type: "object" description: | Response of Engine API: GET "/version" properties: Platform: type: "object" required: [Name] properties: Name: type: "string" Components: type: "array" description: | Information about system components items: type: "object" x-go-name: ComponentVersion required: [Name, Version] properties: Name: description: | Name of the component type: "string" example: "Engine" Version: description: | Version of the component type: "string" x-nullable: false example: "19.03.12" Details: description: | Key/value pairs of strings with additional information about the component. These values are intended for informational purposes only, and their content is not defined, and not part of the API specification. These messages can be printed by the client as information to the user. type: "object" x-nullable: true Version: description: "The version of the daemon" type: "string" example: "19.03.12" ApiVersion: description: | The default (and highest) API version that is supported by the daemon type: "string" example: "1.40" MinAPIVersion: description: | The minimum API version that is supported by the daemon type: "string" example: "1.12" GitCommit: description: | The Git commit of the source code that was used to build the daemon type: "string" example: "48a66213fe" GoVersion: description: | The version Go used to compile the daemon, and the version of the Go runtime in use. 
type: "string" example: "go1.13.14" Os: description: | The operating system that the daemon is running on ("linux" or "windows") type: "string" example: "linux" Arch: description: | The architecture that the daemon is running on type: "string" example: "amd64" KernelVersion: description: | The kernel version (`uname -r`) that the daemon is running on. This field is omitted when empty. type: "string" example: "4.19.76-linuxkit" Experimental: description: | Indicates if the daemon is started with experimental features enabled. This field is omitted when empty / false. type: "boolean" example: true BuildTime: description: | The date and time that the daemon was compiled. type: "string" example: "2020-06-22T15:49:27.000000000+00:00" SystemInfo: type: "object" properties: ID: description: | Unique identifier of the daemon. <p><br /></p> > **Note**: The format of the ID itself is not part of the API, and > should not be considered stable. type: "string" example: "7TRN:IPZB:QYBB:VPBQ:UMPP:KARE:6ZNR:XE6T:7EWV:PKF4:ZOJD:TPYS" Containers: description: "Total number of containers on the host." type: "integer" example: 14 ContainersRunning: description: | Number of containers with status `"running"`. type: "integer" example: 3 ContainersPaused: description: | Number of containers with status `"paused"`. type: "integer" example: 1 ContainersStopped: description: | Number of containers with status `"stopped"`. type: "integer" example: 10 Images: description: | Total number of images on the host. Both _tagged_ and _untagged_ (dangling) images are counted. type: "integer" example: 508 Driver: description: "Name of the storage driver in use." type: "string" example: "overlay2" DriverStatus: description: | Information specific to the storage driver, provided as "label" / "value" pairs. This information is provided by the storage driver, and formatted in a way consistent with the output of `docker info` on the command line. <p><br /></p> > **Note**: The information returned in this field, including the > formatting of values and labels, should not be considered stable, > and may change without notice. type: "array" items: type: "array" items: type: "string" example: - ["Backing Filesystem", "extfs"] - ["Supports d_type", "true"] - ["Native Overlay Diff", "true"] DockerRootDir: description: | Root directory of persistent Docker state. Defaults to `/var/lib/docker` on Linux, and `C:\ProgramData\docker` on Windows. type: "string" example: "/var/lib/docker" Plugins: $ref: "#/definitions/PluginsInfo" MemoryLimit: description: "Indicates if the host has memory limit support enabled." type: "boolean" example: true SwapLimit: description: "Indicates if the host has memory swap limit support enabled." type: "boolean" example: true KernelMemory: description: | Indicates if the host has kernel memory limit support enabled. <p><br /></p> > **Deprecated**: This field is deprecated as the kernel 5.4 deprecated > `kmem.limit_in_bytes`. type: "boolean" example: true CpuCfsPeriod: description: | Indicates if CPU CFS(Completely Fair Scheduler) period is supported by the host. type: "boolean" example: true CpuCfsQuota: description: | Indicates if CPU CFS(Completely Fair Scheduler) quota is supported by the host. type: "boolean" example: true CPUShares: description: | Indicates if CPU Shares limiting is supported by the host. type: "boolean" example: true CPUSet: description: | Indicates if CPUsets (cpuset.cpus, cpuset.mems) are supported by the host. 
See [cpuset(7)](https://www.kernel.org/doc/Documentation/cgroup-v1/cpusets.txt) type: "boolean" example: true PidsLimit: description: "Indicates if the host kernel has PID limit support enabled." type: "boolean" example: true OomKillDisable: description: "Indicates if OOM killer disable is supported on the host." type: "boolean" IPv4Forwarding: description: "Indicates IPv4 forwarding is enabled." type: "boolean" example: true BridgeNfIptables: description: "Indicates if `bridge-nf-call-iptables` is available on the host." type: "boolean" example: true BridgeNfIp6tables: description: "Indicates if `bridge-nf-call-ip6tables` is available on the host." type: "boolean" example: true Debug: description: | Indicates if the daemon is running in debug-mode / with debug-level logging enabled. type: "boolean" example: true NFd: description: | The total number of file Descriptors in use by the daemon process. This information is only returned if debug-mode is enabled. type: "integer" example: 64 NGoroutines: description: | The number of goroutines that currently exist. This information is only returned if debug-mode is enabled. type: "integer" example: 174 SystemTime: description: | Current system-time in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" example: "2017-08-08T20:28:29.06202363Z" LoggingDriver: description: | The logging driver to use as a default for new containers. type: "string" CgroupDriver: description: | The driver to use for managing cgroups. type: "string" enum: ["cgroupfs", "systemd", "none"] default: "cgroupfs" example: "cgroupfs" CgroupVersion: description: | The version of the cgroup. type: "string" enum: ["1", "2"] default: "1" example: "1" NEventsListener: description: "Number of event listeners subscribed." type: "integer" example: 30 KernelVersion: description: | Kernel version of the host. On Linux, this information obtained from `uname`. On Windows this information is queried from the <kbd>HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\</kbd> registry value, for example _"10.0 14393 (14393.1198.amd64fre.rs1_release_sec.170427-1353)"_. type: "string" example: "4.9.38-moby" OperatingSystem: description: | Name of the host's operating system, for example: "Ubuntu 16.04.2 LTS" or "Windows Server 2016 Datacenter" type: "string" example: "Alpine Linux v3.5" OSVersion: description: | Version of the host's operating system <p><br /></p> > **Note**: The information returned in this field, including its > very existence, and the formatting of values, should not be considered > stable, and may change without notice. type: "string" example: "16.04" OSType: description: | Generic type of the operating system of the host, as returned by the Go runtime (`GOOS`). Currently returned values are "linux" and "windows". A full list of possible values can be found in the [Go documentation](https://golang.org/doc/install/source#environment). type: "string" example: "linux" Architecture: description: | Hardware architecture of the host, as returned by the Go runtime (`GOARCH`). A full list of possible values can be found in the [Go documentation](https://golang.org/doc/install/source#environment). type: "string" example: "x86_64" NCPU: description: | The number of logical CPUs usable by the daemon. The number of available CPUs is checked by querying the operating system when the daemon starts. Changes to operating system CPU allocation after the daemon is started are not reflected. 
type: "integer" example: 4 MemTotal: description: | Total amount of physical memory available on the host, in bytes. type: "integer" format: "int64" example: 2095882240 IndexServerAddress: description: | Address / URL of the index server that is used for image search, and as a default for user authentication for Docker Hub and Docker Cloud. default: "https://index.docker.io/v1/" type: "string" example: "https://index.docker.io/v1/" RegistryConfig: $ref: "#/definitions/RegistryServiceConfig" GenericResources: $ref: "#/definitions/GenericResources" HttpProxy: description: | HTTP-proxy configured for the daemon. This value is obtained from the [`HTTP_PROXY`](https://www.gnu.org/software/wget/manual/html_node/Proxies.html) environment variable. Credentials ([user info component](https://tools.ietf.org/html/rfc3986#section-3.2.1)) in the proxy URL are masked in the API response. Containers do not automatically inherit this configuration. type: "string" example: "http://xxxxx:[email protected]:8080" HttpsProxy: description: | HTTPS-proxy configured for the daemon. This value is obtained from the [`HTTPS_PROXY`](https://www.gnu.org/software/wget/manual/html_node/Proxies.html) environment variable. Credentials ([user info component](https://tools.ietf.org/html/rfc3986#section-3.2.1)) in the proxy URL are masked in the API response. Containers do not automatically inherit this configuration. type: "string" example: "https://xxxxx:[email protected]:4443" NoProxy: description: | Comma-separated list of domain extensions for which no proxy should be used. This value is obtained from the [`NO_PROXY`](https://www.gnu.org/software/wget/manual/html_node/Proxies.html) environment variable. Containers do not automatically inherit this configuration. type: "string" example: "*.local, 169.254/16" Name: description: "Hostname of the host." type: "string" example: "node5.corp.example.com" Labels: description: | User-defined labels (key/value metadata) as set on the daemon. <p><br /></p> > **Note**: When part of a Swarm, nodes can both have _daemon_ labels, > set through the daemon configuration, and _node_ labels, set from a > manager node in the Swarm. Node labels are not included in this > field. Node labels can be retrieved using the `/nodes/(id)` endpoint > on a manager node in the Swarm. type: "array" items: type: "string" example: ["storage=ssd", "production"] ExperimentalBuild: description: | Indicates if experimental features are enabled on the daemon. type: "boolean" example: true ServerVersion: description: | Version string of the daemon. > **Note**: the [standalone Swarm API](https://docs.docker.com/swarm/swarm-api/) > returns the Swarm version instead of the daemon version, for example > `swarm/1.2.8`. type: "string" example: "17.06.0-ce" ClusterStore: description: | URL of the distributed storage backend. The storage backend is used for multihost networking (to store network and endpoint information) and by the node discovery mechanism. <p><br /></p> > **Deprecated**: This field is only propagated when using standalone Swarm > mode, and overlay networking using an external k/v store. Overlay > networks with Swarm mode enabled use the built-in raft store, and > this field will be empty. type: "string" example: "consul://consul.corp.example.com:8600/some/path" ClusterAdvertise: description: | The network endpoint that the Engine advertises for the purpose of node discovery. ClusterAdvertise is a `host:port` combination on which the daemon is reachable by other hosts. 
<p><br /></p> > **Deprecated**: This field is only propagated when using standalone Swarm > mode, and overlay networking using an external k/v store. Overlay > networks with Swarm mode enabled use the built-in raft store, and > this field will be empty. type: "string" example: "node5.corp.example.com:8000" Runtimes: description: | List of [OCI compliant](https://github.com/opencontainers/runtime-spec) runtimes configured on the daemon. Keys hold the "name" used to reference the runtime. The Docker daemon relies on an OCI compliant runtime (invoked via the `containerd` daemon) as its interface to the Linux kernel namespaces, cgroups, and SELinux. The default runtime is `runc`, and automatically configured. Additional runtimes can be configured by the user and will be listed here. type: "object" additionalProperties: $ref: "#/definitions/Runtime" default: runc: path: "runc" example: runc: path: "runc" runc-master: path: "/go/bin/runc" custom: path: "/usr/local/bin/my-oci-runtime" runtimeArgs: ["--debug", "--systemd-cgroup=false"] DefaultRuntime: description: | Name of the default OCI runtime that is used when starting containers. The default can be overridden per-container at create time. type: "string" default: "runc" example: "runc" Swarm: $ref: "#/definitions/SwarmInfo" LiveRestoreEnabled: description: | Indicates if live restore is enabled. If enabled, containers are kept running when the daemon is shutdown or upon daemon start if running containers are detected. type: "boolean" default: false example: false Isolation: description: | Represents the isolation technology to use as a default for containers. The supported values are platform-specific. If no isolation value is specified on daemon start, on Windows client, the default is `hyperv`, and on Windows server, the default is `process`. This option is currently not used on other platforms. default: "default" type: "string" enum: - "default" - "hyperv" - "process" InitBinary: description: | Name and, optional, path of the `docker-init` binary. If the path is omitted, the daemon searches the host's `$PATH` for the binary and uses the first result. type: "string" example: "docker-init" ContainerdCommit: $ref: "#/definitions/Commit" RuncCommit: $ref: "#/definitions/Commit" InitCommit: $ref: "#/definitions/Commit" SecurityOptions: description: | List of security features that are enabled on the daemon, such as apparmor, seccomp, SELinux, user-namespaces (userns), and rootless. Additional configuration options for each security feature may be present, and are included as a comma-separated list of key/value pairs. type: "array" items: type: "string" example: - "name=apparmor" - "name=seccomp,profile=default" - "name=selinux" - "name=userns" - "name=rootless" ProductLicense: description: | Reports a summary of the product license on the daemon. If a commercial license has been applied to the daemon, information such as number of nodes, and expiration are included. type: "string" example: "Community Engine" DefaultAddressPools: description: | List of custom default address pools for local networks, which can be specified in the daemon.json file or dockerd option. Example: a Base "10.10.0.0/16" with Size 24 will define the set of 256 10.10.[0-255].0/24 address pools. 
type: "array" items: type: "object" properties: Base: description: "The network address in CIDR format" type: "string" example: "10.10.0.0/16" Size: description: "The network pool size" type: "integer" example: "24" Warnings: description: | List of warnings / informational messages about missing features, or issues related to the daemon configuration. These messages can be printed by the client as information to the user. type: "array" items: type: "string" example: - "WARNING: No memory limit support" - "WARNING: bridge-nf-call-iptables is disabled" - "WARNING: bridge-nf-call-ip6tables is disabled" # PluginsInfo is a temp struct holding Plugins name # registered with docker daemon. It is used by Info struct PluginsInfo: description: | Available plugins per type. <p><br /></p> > **Note**: Only unmanaged (V1) plugins are included in this list. > V1 plugins are "lazily" loaded, and are not returned in this list > if there is no resource using the plugin. type: "object" properties: Volume: description: "Names of available volume-drivers, and network-driver plugins." type: "array" items: type: "string" example: ["local"] Network: description: "Names of available network-drivers, and network-driver plugins." type: "array" items: type: "string" example: ["bridge", "host", "ipvlan", "macvlan", "null", "overlay"] Authorization: description: "Names of available authorization plugins." type: "array" items: type: "string" example: ["img-authz-plugin", "hbm"] Log: description: "Names of available logging-drivers, and logging-driver plugins." type: "array" items: type: "string" example: ["awslogs", "fluentd", "gcplogs", "gelf", "journald", "json-file", "logentries", "splunk", "syslog"] RegistryServiceConfig: description: | RegistryServiceConfig stores daemon registry services configuration. type: "object" x-nullable: true properties: AllowNondistributableArtifactsCIDRs: description: | List of IP ranges to which nondistributable artifacts can be pushed, using the CIDR syntax [RFC 4632](https://tools.ietf.org/html/4632). Some images (for example, Windows base images) contain artifacts whose distribution is restricted by license. When these images are pushed to a registry, restricted artifacts are not included. This configuration override this behavior, and enables the daemon to push nondistributable artifacts to all registries whose resolved IP address is within the subnet described by the CIDR syntax. This option is useful when pushing images containing nondistributable artifacts to a registry on an air-gapped network so hosts on that network can pull the images without connecting to another server. > **Warning**: Nondistributable artifacts typically have restrictions > on how and where they can be distributed and shared. Only use this > feature to push artifacts to private registries and ensure that you > are in compliance with any terms that cover redistributing > nondistributable artifacts. type: "array" items: type: "string" example: ["::1/128", "127.0.0.0/8"] AllowNondistributableArtifactsHostnames: description: | List of registry hostnames to which nondistributable artifacts can be pushed, using the format `<hostname>[:<port>]` or `<IP address>[:<port>]`. Some images (for example, Windows base images) contain artifacts whose distribution is restricted by license. When these images are pushed to a registry, restricted artifacts are not included. This configuration override this behavior for the specified registries. 
This option is useful when pushing images containing nondistributable artifacts to a registry on an air-gapped network so hosts on that network can pull the images without connecting to another server. > **Warning**: Nondistributable artifacts typically have restrictions > on how and where they can be distributed and shared. Only use this > feature to push artifacts to private registries and ensure that you > are in compliance with any terms that cover redistributing > nondistributable artifacts. type: "array" items: type: "string" example: ["registry.internal.corp.example.com:3000", "[2001:db8:a0b:12f0::1]:443"] InsecureRegistryCIDRs: description: | List of IP ranges of insecure registries, using the CIDR syntax ([RFC 4632](https://tools.ietf.org/html/4632)). Insecure registries accept un-encrypted (HTTP) and/or untrusted (HTTPS with certificates from unknown CAs) communication. By default, local registries (`127.0.0.0/8`) are configured as insecure. All other registries are secure. Communicating with an insecure registry is not possible if the daemon assumes that registry is secure. This configuration override this behavior, insecure communication with registries whose resolved IP address is within the subnet described by the CIDR syntax. Registries can also be marked insecure by hostname. Those registries are listed under `IndexConfigs` and have their `Secure` field set to `false`. > **Warning**: Using this option can be useful when running a local > registry, but introduces security vulnerabilities. This option > should therefore ONLY be used for testing purposes. For increased > security, users should add their CA to their system's list of trusted > CAs instead of enabling this option. type: "array" items: type: "string" example: ["::1/128", "127.0.0.0/8"] IndexConfigs: type: "object" additionalProperties: $ref: "#/definitions/IndexInfo" example: "127.0.0.1:5000": "Name": "127.0.0.1:5000" "Mirrors": [] "Secure": false "Official": false "[2001:db8:a0b:12f0::1]:80": "Name": "[2001:db8:a0b:12f0::1]:80" "Mirrors": [] "Secure": false "Official": false "docker.io": Name: "docker.io" Mirrors: ["https://hub-mirror.corp.example.com:5000/"] Secure: true Official: true "registry.internal.corp.example.com:3000": Name: "registry.internal.corp.example.com:3000" Mirrors: [] Secure: false Official: false Mirrors: description: | List of registry URLs that act as a mirror for the official (`docker.io`) registry. type: "array" items: type: "string" example: - "https://hub-mirror.corp.example.com:5000/" - "https://[2001:db8:a0b:12f0::1]/" IndexInfo: description: IndexInfo contains information about a registry. type: "object" x-nullable: true properties: Name: description: | Name of the registry, such as "docker.io". type: "string" example: "docker.io" Mirrors: description: | List of mirrors, expressed as URIs. type: "array" items: type: "string" example: - "https://hub-mirror.corp.example.com:5000/" - "https://registry-2.docker.io/" - "https://registry-3.docker.io/" Secure: description: | Indicates if the registry is part of the list of insecure registries. If `false`, the registry is insecure. Insecure registries accept un-encrypted (HTTP) and/or untrusted (HTTPS with certificates from unknown CAs) communication. > **Warning**: Insecure registries can be useful when running a local > registry. However, because its use creates security vulnerabilities > it should ONLY be enabled for testing purposes. 
For increased > security, users should add their CA to their system's list of > trusted CAs instead of enabling this option. type: "boolean" example: true Official: description: | Indicates whether this is an official registry (i.e., Docker Hub / docker.io) type: "boolean" example: true Runtime: description: | Runtime describes an [OCI compliant](https://github.com/opencontainers/runtime-spec) runtime. The runtime is invoked by the daemon via the `containerd` daemon. OCI runtimes act as an interface to the Linux kernel namespaces, cgroups, and SELinux. type: "object" properties: path: description: | Name and, optional, path, of the OCI executable binary. If the path is omitted, the daemon searches the host's `$PATH` for the binary and uses the first result. type: "string" example: "/usr/local/bin/my-oci-runtime" runtimeArgs: description: | List of command-line arguments to pass to the runtime when invoked. type: "array" x-nullable: true items: type: "string" example: ["--debug", "--systemd-cgroup=false"] Commit: description: | Commit holds the Git-commit (SHA1) that a binary was built from, as reported in the version-string of external tools, such as `containerd`, or `runC`. type: "object" properties: ID: description: "Actual commit ID of external tool." type: "string" example: "cfb82a876ecc11b5ca0977d1733adbe58599088a" Expected: description: | Commit ID of external tool expected by dockerd as set at build time. type: "string" example: "2d41c047c83e09a6d61d464906feb2a2f3c52aa4" SwarmInfo: description: | Represents generic information about swarm. type: "object" properties: NodeID: description: "Unique identifier of for this node in the swarm." type: "string" default: "" example: "k67qz4598weg5unwwffg6z1m1" NodeAddr: description: | IP address at which this node can be reached by other nodes in the swarm. type: "string" default: "" example: "10.0.0.46" LocalNodeState: $ref: "#/definitions/LocalNodeState" ControlAvailable: type: "boolean" default: false example: true Error: type: "string" default: "" RemoteManagers: description: | List of ID's and addresses of other managers in the swarm. type: "array" default: null x-nullable: true items: $ref: "#/definitions/PeerNode" example: - NodeID: "71izy0goik036k48jg985xnds" Addr: "10.0.0.158:2377" - NodeID: "79y6h1o4gv8n120drcprv5nmc" Addr: "10.0.0.159:2377" - NodeID: "k67qz4598weg5unwwffg6z1m1" Addr: "10.0.0.46:2377" Nodes: description: "Total number of nodes in the swarm." type: "integer" x-nullable: true example: 4 Managers: description: "Total number of managers in the swarm." type: "integer" x-nullable: true example: 3 Cluster: $ref: "#/definitions/ClusterInfo" LocalNodeState: description: "Current local status of this node." type: "string" default: "" enum: - "" - "inactive" - "pending" - "active" - "error" - "locked" example: "active" PeerNode: description: "Represents a peer-node in the swarm" properties: NodeID: description: "Unique identifier of for this node in the swarm." type: "string" Addr: description: | IP address and ports at which this node can be reached. type: "string" NetworkAttachmentConfig: description: | Specifies how a service should be attached to a particular network. type: "object" properties: Target: description: | The target network for attachment. Must be a network name or ID. type: "string" Aliases: description: | Discoverable alternate names for the service on this network. type: "array" items: type: "string" DriverOpts: description: | Driver attachment options for the network target. 
type: "object" additionalProperties: type: "string" paths: /containers/json: get: summary: "List containers" description: | Returns a list of containers. For details on the format, see the [inspect endpoint](#operation/ContainerInspect). Note that it uses a different, smaller representation of a container than inspecting a single container. For example, the list of linked containers is not propagated . operationId: "ContainerList" produces: - "application/json" parameters: - name: "all" in: "query" description: | Return all containers. By default, only running containers are shown. type: "boolean" default: false - name: "limit" in: "query" description: | Return this number of most recently created containers, including non-running ones. type: "integer" - name: "size" in: "query" description: | Return the size of container as fields `SizeRw` and `SizeRootFs`. type: "boolean" default: false - name: "filters" in: "query" description: | Filters to process on the container list, encoded as JSON (a `map[string][]string`). For example, `{"status": ["paused"]}` will only return paused containers. Available filters: - `ancestor`=(`<image-name>[:<tag>]`, `<image id>`, or `<image@digest>`) - `before`=(`<container id>` or `<container name>`) - `expose`=(`<port>[/<proto>]`|`<startport-endport>/[<proto>]`) - `exited=<int>` containers with exit code of `<int>` - `health`=(`starting`|`healthy`|`unhealthy`|`none`) - `id=<ID>` a container's ID - `isolation=`(`default`|`process`|`hyperv`) (Windows daemon only) - `is-task=`(`true`|`false`) - `label=key` or `label="key=value"` of a container label - `name=<name>` a container's name - `network`=(`<network id>` or `<network name>`) - `publish`=(`<port>[/<proto>]`|`<startport-endport>/[<proto>]`) - `since`=(`<container id>` or `<container name>`) - `status=`(`created`|`restarting`|`running`|`removing`|`paused`|`exited`|`dead`) - `volume`=(`<volume name>` or `<mount point destination>`) type: "string" responses: 200: description: "no error" schema: $ref: "#/definitions/ContainerSummary" examples: application/json: - Id: "8dfafdbc3a40" Names: - "/boring_feynman" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 1" Created: 1367854155 State: "Exited" Status: "Exit 0" Ports: - PrivatePort: 2222 PublicPort: 3333 Type: "tcp" Labels: com.example.vendor: "Acme" com.example.license: "GPL" com.example.version: "1.0" SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "2cdc4edb1ded3631c81f57966563e5c8525b81121bb3706a9a9a3ae102711f3f" Gateway: "172.17.0.1" IPAddress: "172.17.0.2" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:02" Mounts: - Name: "fac362...80535" Source: "/data" Destination: "/data" Driver: "local" Mode: "ro,Z" RW: false Propagation: "" - Id: "9cd87474be90" Names: - "/coolName" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 222222" Created: 1367854155 State: "Exited" Status: "Exit 0" Ports: [] Labels: {} SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "88eaed7b37b38c2a3f0c4bc796494fdf51b270c2d22656412a2ca5d559a64d7a" Gateway: "172.17.0.1" IPAddress: "172.17.0.8" IPPrefixLen: 16 IPv6Gateway: "" 
GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:08" Mounts: [] - Id: "3176a2479c92" Names: - "/sleepy_dog" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 3333333333333333" Created: 1367854154 State: "Exited" Status: "Exit 0" Ports: [] Labels: {} SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "8b27c041c30326d59cd6e6f510d4f8d1d570a228466f956edf7815508f78e30d" Gateway: "172.17.0.1" IPAddress: "172.17.0.6" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:06" Mounts: [] - Id: "4cb07b47f9fb" Names: - "/running_cat" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 444444444444444444444444444444444" Created: 1367854152 State: "Exited" Status: "Exit 0" Ports: [] Labels: {} SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "d91c7b2f0644403d7ef3095985ea0e2370325cd2332ff3a3225c4247328e66e9" Gateway: "172.17.0.1" IPAddress: "172.17.0.5" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:05" Mounts: [] 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Container"] /containers/create: post: summary: "Create a container" operationId: "ContainerCreate" consumes: - "application/json" - "application/octet-stream" produces: - "application/json" parameters: - name: "name" in: "query" description: | Assign the specified name to the container. Must match `/?[a-zA-Z0-9][a-zA-Z0-9_.-]+`. 
type: "string" pattern: "^/?[a-zA-Z0-9][a-zA-Z0-9_.-]+$" - name: "body" in: "body" description: "Container to create" schema: allOf: - $ref: "#/definitions/ContainerConfig" - type: "object" properties: HostConfig: $ref: "#/definitions/HostConfig" NetworkingConfig: $ref: "#/definitions/NetworkingConfig" example: Hostname: "" Domainname: "" User: "" AttachStdin: false AttachStdout: true AttachStderr: true Tty: false OpenStdin: false StdinOnce: false Env: - "FOO=bar" - "BAZ=quux" Cmd: - "date" Entrypoint: "" Image: "ubuntu" Labels: com.example.vendor: "Acme" com.example.license: "GPL" com.example.version: "1.0" Volumes: /volumes/data: {} WorkingDir: "" NetworkDisabled: false MacAddress: "12:34:56:78:9a:bc" ExposedPorts: 22/tcp: {} StopSignal: "SIGTERM" StopTimeout: 10 HostConfig: Binds: - "/tmp:/tmp" Links: - "redis3:redis" Memory: 0 MemorySwap: 0 MemoryReservation: 0 KernelMemory: 0 NanoCpus: 500000 CpuPercent: 80 CpuShares: 512 CpuPeriod: 100000 CpuRealtimePeriod: 1000000 CpuRealtimeRuntime: 10000 CpuQuota: 50000 CpusetCpus: "0,1" CpusetMems: "0,1" MaximumIOps: 0 MaximumIOBps: 0 BlkioWeight: 300 BlkioWeightDevice: - {} BlkioDeviceReadBps: - {} BlkioDeviceReadIOps: - {} BlkioDeviceWriteBps: - {} BlkioDeviceWriteIOps: - {} DeviceRequests: - Driver: "nvidia" Count: -1 DeviceIDs": ["0", "1", "GPU-fef8089b-4820-abfc-e83e-94318197576e"] Capabilities: [["gpu", "nvidia", "compute"]] Options: property1: "string" property2: "string" MemorySwappiness: 60 OomKillDisable: false OomScoreAdj: 500 PidMode: "" PidsLimit: 0 PortBindings: 22/tcp: - HostPort: "11022" PublishAllPorts: false Privileged: false ReadonlyRootfs: false Dns: - "8.8.8.8" DnsOptions: - "" DnsSearch: - "" VolumesFrom: - "parent" - "other:ro" CapAdd: - "NET_ADMIN" CapDrop: - "MKNOD" GroupAdd: - "newgroup" RestartPolicy: Name: "" MaximumRetryCount: 0 AutoRemove: true NetworkMode: "bridge" Devices: [] Ulimits: - {} LogConfig: Type: "json-file" Config: {} SecurityOpt: [] StorageOpt: {} CgroupParent: "" VolumeDriver: "" ShmSize: 67108864 NetworkingConfig: EndpointsConfig: isolated_nw: IPAMConfig: IPv4Address: "172.20.30.33" IPv6Address: "2001:db8:abcd::3033" LinkLocalIPs: - "169.254.34.68" - "fe80::3468" Links: - "container_1" - "container_2" Aliases: - "server_x" - "server_y" required: true responses: 201: description: "Container created successfully" schema: type: "object" title: "ContainerCreateResponse" description: "OK response to ContainerCreate operation" required: [Id, Warnings] properties: Id: description: "The ID of the created container" type: "string" x-nullable: false Warnings: description: "Warnings encountered when creating the container" type: "array" x-nullable: false items: type: "string" examples: application/json: Id: "e90e34656806" Warnings: [] 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such image" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such image: c2ada9df5af8" 409: description: "conflict" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Container"] /containers/{id}/json: get: summary: "Inspect a container" description: "Return low-level information about a container." 
operationId: "ContainerInspect" produces: - "application/json" responses: 200: description: "no error" schema: type: "object" title: "ContainerInspectResponse" properties: Id: description: "The ID of the container" type: "string" Created: description: "The time the container was created" type: "string" Path: description: "The path to the command being run" type: "string" Args: description: "The arguments to the command being run" type: "array" items: type: "string" State: x-nullable: true $ref: "#/definitions/ContainerState" Image: description: "The container's image ID" type: "string" ResolvConfPath: type: "string" HostnamePath: type: "string" HostsPath: type: "string" LogPath: type: "string" Name: type: "string" RestartCount: type: "integer" Driver: type: "string" Platform: type: "string" MountLabel: type: "string" ProcessLabel: type: "string" AppArmorProfile: type: "string" ExecIDs: description: "IDs of exec instances that are running in the container." type: "array" items: type: "string" x-nullable: true HostConfig: $ref: "#/definitions/HostConfig" GraphDriver: $ref: "#/definitions/GraphDriverData" SizeRw: description: | The size of files that have been created or changed by this container. type: "integer" format: "int64" SizeRootFs: description: "The total size of all the files in this container." type: "integer" format: "int64" Mounts: type: "array" items: $ref: "#/definitions/MountPoint" Config: $ref: "#/definitions/ContainerConfig" NetworkSettings: $ref: "#/definitions/NetworkSettings" examples: application/json: AppArmorProfile: "" Args: - "-c" - "exit 9" Config: AttachStderr: true AttachStdin: false AttachStdout: true Cmd: - "/bin/sh" - "-c" - "exit 9" Domainname: "" Env: - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" Healthcheck: Test: ["CMD-SHELL", "exit 0"] Hostname: "ba033ac44011" Image: "ubuntu" Labels: com.example.vendor: "Acme" com.example.license: "GPL" com.example.version: "1.0" MacAddress: "" NetworkDisabled: false OpenStdin: false StdinOnce: false Tty: false User: "" Volumes: /volumes/data: {} WorkingDir: "" StopSignal: "SIGTERM" StopTimeout: 10 Created: "2015-01-06T15:47:31.485331387Z" Driver: "devicemapper" ExecIDs: - "b35395de42bc8abd327f9dd65d913b9ba28c74d2f0734eeeae84fa1c616a0fca" - "3fc1232e5cd20c8de182ed81178503dc6437f4e7ef12b52cc5e8de020652f1c4" HostConfig: MaximumIOps: 0 MaximumIOBps: 0 BlkioWeight: 0 BlkioWeightDevice: - {} BlkioDeviceReadBps: - {} BlkioDeviceWriteBps: - {} BlkioDeviceReadIOps: - {} BlkioDeviceWriteIOps: - {} ContainerIDFile: "" CpusetCpus: "" CpusetMems: "" CpuPercent: 80 CpuShares: 0 CpuPeriod: 100000 CpuRealtimePeriod: 1000000 CpuRealtimeRuntime: 10000 Devices: [] DeviceRequests: - Driver: "nvidia" Count: -1 DeviceIDs": ["0", "1", "GPU-fef8089b-4820-abfc-e83e-94318197576e"] Capabilities: [["gpu", "nvidia", "compute"]] Options: property1: "string" property2: "string" IpcMode: "" LxcConf: [] Memory: 0 MemorySwap: 0 MemoryReservation: 0 KernelMemory: 0 OomKillDisable: false OomScoreAdj: 500 NetworkMode: "bridge" PidMode: "" PortBindings: {} Privileged: false ReadonlyRootfs: false PublishAllPorts: false RestartPolicy: MaximumRetryCount: 2 Name: "on-failure" LogConfig: Type: "json-file" Sysctls: net.ipv4.ip_forward: "1" Ulimits: - {} VolumeDriver: "" ShmSize: 67108864 HostnamePath: "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/hostname" HostsPath: "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/hosts" LogPath: 
"/var/lib/docker/containers/1eb5fabf5a03807136561b3c00adcd2992b535d624d5e18b6cdc6a6844d9767b/1eb5fabf5a03807136561b3c00adcd2992b535d624d5e18b6cdc6a6844d9767b-json.log" Id: "ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39" Image: "04c5d3b7b0656168630d3ba35d8889bd0e9caafcaeb3004d2bfbc47e7c5d35d2" MountLabel: "" Name: "/boring_euclid" NetworkSettings: Bridge: "" SandboxID: "" HairpinMode: false LinkLocalIPv6Address: "" LinkLocalIPv6PrefixLen: 0 SandboxKey: "" EndpointID: "" Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 IPAddress: "" IPPrefixLen: 0 IPv6Gateway: "" MacAddress: "" Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "7587b82f0dada3656fda26588aee72630c6fab1536d36e394b2bfbcf898c971d" Gateway: "172.17.0.1" IPAddress: "172.17.0.2" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:12:00:02" Path: "/bin/sh" ProcessLabel: "" ResolvConfPath: "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/resolv.conf" RestartCount: 1 State: Error: "" ExitCode: 9 FinishedAt: "2015-01-06T15:47:32.080254511Z" Health: Status: "healthy" FailingStreak: 0 Log: - Start: "2019-12-22T10:59:05.6385933Z" End: "2019-12-22T10:59:05.8078452Z" ExitCode: 0 Output: "" OOMKilled: false Dead: false Paused: false Pid: 0 Restarting: false Running: true StartedAt: "2015-01-06T15:47:32.072697474Z" Status: "running" Mounts: - Name: "fac362...80535" Source: "/data" Destination: "/data" Driver: "local" Mode: "ro,Z" RW: false Propagation: "" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "size" in: "query" type: "boolean" default: false description: "Return the size of container as fields `SizeRw` and `SizeRootFs`" tags: ["Container"] /containers/{id}/top: get: summary: "List processes running inside a container" description: | On Unix systems, this is done by running the `ps` command. This endpoint is not supported on Windows. operationId: "ContainerTop" responses: 200: description: "no error" schema: type: "object" title: "ContainerTopResponse" description: "OK response to ContainerTop operation" properties: Titles: description: "The ps column titles" type: "array" items: type: "string" Processes: description: | Each process running in the container, where each is process is an array of values corresponding to the titles. type: "array" items: type: "array" items: type: "string" examples: application/json: Titles: - "UID" - "PID" - "PPID" - "C" - "STIME" - "TTY" - "TIME" - "CMD" Processes: - - "root" - "13642" - "882" - "0" - "17:03" - "pts/0" - "00:00:00" - "/bin/bash" - - "root" - "13735" - "13642" - "0" - "17:06" - "pts/0" - "00:00:00" - "sleep 10" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "ps_args" in: "query" description: "The arguments to pass to `ps`. 
For example, `aux`" type: "string" default: "-ef" tags: ["Container"] /containers/{id}/logs: get: summary: "Get container logs" description: | Get `stdout` and `stderr` logs from a container. Note: This endpoint works only for containers with the `json-file` or `journald` logging driver. operationId: "ContainerLogs" responses: 200: description: | logs returned as a stream in response body. For the stream format, [see the documentation for the attach endpoint](#operation/ContainerAttach). Note that unlike the attach endpoint, the logs endpoint does not upgrade the connection and does not set Content-Type. schema: type: "string" format: "binary" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "follow" in: "query" description: "Keep connection after returning logs." type: "boolean" default: false - name: "stdout" in: "query" description: "Return logs from `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Return logs from `stderr`" type: "boolean" default: false - name: "since" in: "query" description: "Only return logs since this time, as a UNIX timestamp" type: "integer" default: 0 - name: "until" in: "query" description: "Only return logs before this time, as a UNIX timestamp" type: "integer" default: 0 - name: "timestamps" in: "query" description: "Add timestamps to every log line" type: "boolean" default: false - name: "tail" in: "query" description: | Only return this number of log lines from the end of the logs. Specify as an integer or `all` to output all log lines. type: "string" default: "all" tags: ["Container"] /containers/{id}/changes: get: summary: "Get changes on a container’s filesystem" description: | Returns which files in a container's filesystem have been added, deleted, or modified. The `Kind` of modification can be one of: - `0`: Modified - `1`: Added - `2`: Deleted operationId: "ContainerChanges" produces: ["application/json"] responses: 200: description: "The list of changes" schema: type: "array" items: type: "object" x-go-name: "ContainerChangeResponseItem" title: "ContainerChangeResponseItem" description: "change item in response to ContainerChanges operation" required: [Path, Kind] properties: Path: description: "Path to file that has changed" type: "string" x-nullable: false Kind: description: "Kind of change" type: "integer" format: "uint8" enum: [0, 1, 2] x-nullable: false examples: application/json: - Path: "/dev" Kind: 0 - Path: "/dev/kmsg" Kind: 1 - Path: "/test" Kind: 1 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/export: get: summary: "Export a container" description: "Export the contents of a container as a tarball." 
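The changes endpoint above reports each filesystem entry with a numeric `Kind` (0 modified, 1 added, 2 deleted). A sketch that decodes such a response body and prints it in readable form; the canned JSON stands in for the body of `GET /containers/{id}/changes` and mirrors the example in the spec.

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

type containerChange struct {
	Path string
	Kind uint8 // 0: Modified, 1: Added, 2: Deleted
}

var kindNames = map[uint8]string{0: "Modified", 1: "Added", 2: "Deleted"}

func main() {
	// In practice this body would come from GET /containers/{id}/changes.
	body := `[{"Path":"/dev","Kind":0},{"Path":"/dev/kmsg","Kind":1},{"Path":"/test","Kind":1}]`

	var changes []containerChange
	if err := json.NewDecoder(strings.NewReader(body)).Decode(&changes); err != nil {
		panic(err)
	}
	for _, c := range changes {
		fmt.Printf("%-8s %s\n", kindNames[c.Kind], c.Path)
	}
}
```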
operationId: "ContainerExport" produces: - "application/octet-stream" responses: 200: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/stats: get: summary: "Get container stats based on resource usage" description: | This endpoint returns a live stream of a container’s resource usage statistics. The `precpu_stats` is the CPU statistic of the *previous* read, and is used to calculate the CPU usage percentage. It is not an exact copy of the `cpu_stats` field. If either `precpu_stats.online_cpus` or `cpu_stats.online_cpus` is nil then for compatibility with older daemons the length of the corresponding `cpu_usage.percpu_usage` array should be used. On a cgroup v2 host, the following fields are not set * `blkio_stats`: all fields other than `io_service_bytes_recursive` * `cpu_stats`: `cpu_usage.percpu_usage` * `memory_stats`: `max_usage` and `failcnt` Also, `memory_stats.stats` fields are incompatible with cgroup v1. To calculate the values shown by the `stats` command of the docker cli tool the following formulas can be used: * used_memory = `memory_stats.usage - memory_stats.stats.cache` * available_memory = `memory_stats.limit` * Memory usage % = `(used_memory / available_memory) * 100.0` * cpu_delta = `cpu_stats.cpu_usage.total_usage - precpu_stats.cpu_usage.total_usage` * system_cpu_delta = `cpu_stats.system_cpu_usage - precpu_stats.system_cpu_usage` * number_cpus = `lenght(cpu_stats.cpu_usage.percpu_usage)` or `cpu_stats.online_cpus` * CPU usage % = `(cpu_delta / system_cpu_delta) * number_cpus * 100.0` operationId: "ContainerStats" produces: ["application/json"] responses: 200: description: "no error" schema: type: "object" examples: application/json: read: "2015-01-08T22:57:31.547920715Z" pids_stats: current: 3 networks: eth0: rx_bytes: 5338 rx_dropped: 0 rx_errors: 0 rx_packets: 36 tx_bytes: 648 tx_dropped: 0 tx_errors: 0 tx_packets: 8 eth5: rx_bytes: 4641 rx_dropped: 0 rx_errors: 0 rx_packets: 26 tx_bytes: 690 tx_dropped: 0 tx_errors: 0 tx_packets: 9 memory_stats: stats: total_pgmajfault: 0 cache: 0 mapped_file: 0 total_inactive_file: 0 pgpgout: 414 rss: 6537216 total_mapped_file: 0 writeback: 0 unevictable: 0 pgpgin: 477 total_unevictable: 0 pgmajfault: 0 total_rss: 6537216 total_rss_huge: 6291456 total_writeback: 0 total_inactive_anon: 0 rss_huge: 6291456 hierarchical_memory_limit: 67108864 total_pgfault: 964 total_active_file: 0 active_anon: 6537216 total_active_anon: 6537216 total_pgpgout: 414 total_cache: 0 inactive_anon: 0 active_file: 0 pgfault: 964 inactive_file: 0 total_pgpgin: 477 max_usage: 6651904 usage: 6537216 failcnt: 0 limit: 67108864 blkio_stats: {} cpu_stats: cpu_usage: percpu_usage: - 8646879 - 24472255 - 36438778 - 30657443 usage_in_usermode: 50000000 total_usage: 100215355 usage_in_kernelmode: 30000000 system_cpu_usage: 739306590000000 online_cpus: 4 throttling_data: periods: 0 throttled_periods: 0 throttled_time: 0 precpu_stats: cpu_usage: percpu_usage: - 8646879 - 24350896 - 36438778 - 30657443 usage_in_usermode: 50000000 total_usage: 100093996 usage_in_kernelmode: 30000000 system_cpu_usage: 9492140000000 online_cpus: 4 throttling_data: periods: 0 throttled_periods: 0 throttled_time: 0 404: description: "no such 
container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "stream" in: "query" description: | Stream the output. If false, the stats will be output once and then it will disconnect. type: "boolean" default: true - name: "one-shot" in: "query" description: | Only get a single stat instead of waiting for 2 cycles. Must be used with `stream=false`. type: "boolean" default: false tags: ["Container"] /containers/{id}/resize: post: summary: "Resize a container TTY" description: "Resize the TTY for a container." operationId: "ContainerResize" consumes: - "application/octet-stream" produces: - "text/plain" responses: 200: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "cannot resize container" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "h" in: "query" description: "Height of the TTY session in characters" type: "integer" - name: "w" in: "query" description: "Width of the TTY session in characters" type: "integer" tags: ["Container"] /containers/{id}/start: post: summary: "Start a container" operationId: "ContainerStart" responses: 204: description: "no error" 304: description: "container already started" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "detachKeys" in: "query" description: | Override the key sequence for detaching a container. Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,` or `_`. 
type: "string" tags: ["Container"] /containers/{id}/stop: post: summary: "Stop a container" operationId: "ContainerStop" responses: 204: description: "no error" 304: description: "container already stopped" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "t" in: "query" description: "Number of seconds to wait before killing the container" type: "integer" tags: ["Container"] /containers/{id}/restart: post: summary: "Restart a container" operationId: "ContainerRestart" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "t" in: "query" description: "Number of seconds to wait before killing the container" type: "integer" tags: ["Container"] /containers/{id}/kill: post: summary: "Kill a container" description: | Send a POSIX signal to a container, defaulting to killing to the container. operationId: "ContainerKill" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "container is not running" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "Container d37cde0fe4ad63c3a7252023b2f9800282894247d145cb5933ddf6e52cc03a28 is not running" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "signal" in: "query" description: "Signal to send to the container as an integer or string (e.g. `SIGINT`)" type: "string" default: "SIGKILL" tags: ["Container"] /containers/{id}/update: post: summary: "Update a container" description: | Change various configuration options of a container without having to recreate it. operationId: "ContainerUpdate" consumes: ["application/json"] produces: ["application/json"] responses: 200: description: "The container has been updated." 
schema: type: "object" title: "ContainerUpdateResponse" description: "OK response to ContainerUpdate operation" properties: Warnings: type: "array" items: type: "string" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "update" in: "body" required: true schema: allOf: - $ref: "#/definitions/Resources" - type: "object" properties: RestartPolicy: $ref: "#/definitions/RestartPolicy" example: BlkioWeight: 300 CpuShares: 512 CpuPeriod: 100000 CpuQuota: 50000 CpuRealtimePeriod: 1000000 CpuRealtimeRuntime: 10000 CpusetCpus: "0,1" CpusetMems: "0" Memory: 314572800 MemorySwap: 514288000 MemoryReservation: 209715200 KernelMemory: 52428800 RestartPolicy: MaximumRetryCount: 4 Name: "on-failure" tags: ["Container"] /containers/{id}/rename: post: summary: "Rename a container" operationId: "ContainerRename" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "name already in use" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "name" in: "query" required: true description: "New name for the container" type: "string" tags: ["Container"] /containers/{id}/pause: post: summary: "Pause a container" description: | Use the freezer cgroup to suspend all processes in a container. Traditionally, when suspending a process the `SIGSTOP` signal is used, which is observable by the process being suspended. With the freezer cgroup the process is unaware, and unable to capture, that it is being suspended, and subsequently resumed. operationId: "ContainerPause" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/unpause: post: summary: "Unpause a container" description: "Resume a container which has been paused." operationId: "ContainerUnpause" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/attach: post: summary: "Attach to a container" description: | Attach to a container to read its output or send it input. You can attach to the same container multiple times and you can reattach to containers that have been detached. Either the `stream` or `logs` parameter must be `true` for this endpoint to do anything. See the [documentation for the `docker attach` command](https://docs.docker.com/engine/reference/commandline/attach/) for more details. 
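As a hedged illustration of the update call above, the Go sketch below posts a partial `Resources`/`RestartPolicy` document and reads back the `Warnings` array. The `updatedemo` package name and `updateContainer` function are assumptions, as is the `newUnixClient`-style client wired to the daemon socket from the earlier stats sketch; the memory and restart-policy values are borrowed from the example request body above, and any subset of `Resources` fields may be sent instead.

```go
package updatedemo

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// updateContainer posts a partial Resources/RestartPolicy document to
// POST /containers/{id}/update and returns any warnings from the daemon.
// cli is assumed to be an *http.Client already wired to the daemon socket.
func updateContainer(cli *http.Client, id string) ([]string, error) {
	update := map[string]interface{}{
		"Memory":     314572800, // values borrowed from the example body above
		"MemorySwap": 514288000,
		"RestartPolicy": map[string]interface{}{
			"Name":              "on-failure",
			"MaximumRetryCount": 4,
		},
	}
	payload, err := json.Marshal(update)
	if err != nil {
		return nil, err
	}

	resp, err := cli.Post("http://docker/containers/"+id+"/update",
		"application/json", bytes.NewReader(payload))
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("update failed: %s", resp.Status)
	}

	var out struct{ Warnings []string }
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return nil, err
	}
	return out.Warnings, nil
}
```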
### Hijacking This endpoint hijacks the HTTP connection to transport `stdin`, `stdout`, and `stderr` on the same socket. This is the response from the daemon for an attach request: ``` HTTP/1.1 200 OK Content-Type: application/vnd.docker.raw-stream [STREAM] ``` After the headers and two new lines, the TCP connection can now be used for raw, bidirectional communication between the client and server. To hint potential proxies about connection hijacking, the Docker client can also optionally send connection upgrade headers. For example, the client sends this request to upgrade the connection: ``` POST /containers/16253994b7c4/attach?stream=1&stdout=1 HTTP/1.1 Upgrade: tcp Connection: Upgrade ``` The Docker daemon will respond with a `101 UPGRADED` response, and will similarly follow with the raw stream: ``` HTTP/1.1 101 UPGRADED Content-Type: application/vnd.docker.raw-stream Connection: Upgrade Upgrade: tcp [STREAM] ``` ### Stream format When the TTY setting is disabled in [`POST /containers/create`](#operation/ContainerCreate), the stream over the hijacked connected is multiplexed to separate out `stdout` and `stderr`. The stream consists of a series of frames, each containing a header and a payload. The header contains the information which the stream writes (`stdout` or `stderr`). It also contains the size of the associated frame encoded in the last four bytes (`uint32`). It is encoded on the first eight bytes like this: ```go header := [8]byte{STREAM_TYPE, 0, 0, 0, SIZE1, SIZE2, SIZE3, SIZE4} ``` `STREAM_TYPE` can be: - 0: `stdin` (is written on `stdout`) - 1: `stdout` - 2: `stderr` `SIZE1, SIZE2, SIZE3, SIZE4` are the four bytes of the `uint32` size encoded as big endian. Following the header is the payload, which is the specified number of bytes of `STREAM_TYPE`. The simplest way to implement this protocol is the following: 1. Read 8 bytes. 2. Choose `stdout` or `stderr` depending on the first byte. 3. Extract the frame size from the last four bytes. 4. Read the extracted size and output it on the correct output. 5. Goto 1. ### Stream format when using a TTY When the TTY setting is enabled in [`POST /containers/create`](#operation/ContainerCreate), the stream is not multiplexed. The data exchanged over the hijacked connection is simply the raw data from the process PTY and client's `stdin`. operationId: "ContainerAttach" produces: - "application/vnd.docker.raw-stream" responses: 101: description: "no error, hints proxy about hijacking" 200: description: "no error, no upgrade header found" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "detachKeys" in: "query" description: | Override the key sequence for detaching a container.Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,` or `_`. type: "string" - name: "logs" in: "query" description: | Replay previous logs from the container. This is useful for attaching to a container that has started and you want to output everything since the container started. If `stream` is also enabled, once all the previous output has been returned, it will seamlessly transition into streaming current output. 
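The five-step algorithm above maps directly onto a small demultiplexer. The sketch below is an illustrative Go reading of the documented frame format, not the daemon's own implementation (the official Go client commonly uses a helper from the `stdcopy` package for this); the `attachdemo` package and `demuxStream` name are assumptions.

```go
package attachdemo

import (
	"encoding/binary"
	"errors"
	"io"
)

// demuxStream splits the multiplexed attach/logs stream described above
// into stdout and stderr. It follows the documented algorithm: read the
// 8-byte header, pick the destination from the first byte, read the
// big-endian frame size from the last four bytes, then copy the payload.
func demuxStream(r io.Reader, stdout, stderr io.Writer) error {
	var header [8]byte
	for {
		if _, err := io.ReadFull(r, header[:]); err != nil {
			if errors.Is(err, io.EOF) {
				return nil // clean end of stream
			}
			return err
		}

		var dst io.Writer
		switch header[0] {
		case 0, 1: // stdin (echoed on stdout) and stdout
			dst = stdout
		case 2: // stderr
			dst = stderr
		default:
			return errors.New("unknown stream type")
		}

		size := binary.BigEndian.Uint32(header[4:8])
		if _, err := io.CopyN(dst, r, int64(size)); err != nil {
			return err
		}
	}
}
```

The hijacked response body from an attach request can be passed as `r`, with `os.Stdout` and `os.Stderr` as destinations; when the container was created with a TTY the stream is raw and this demultiplexing must be skipped.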
type: "boolean" default: false - name: "stream" in: "query" description: | Stream attached streams from the time the request was made onwards. type: "boolean" default: false - name: "stdin" in: "query" description: "Attach to `stdin`" type: "boolean" default: false - name: "stdout" in: "query" description: "Attach to `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Attach to `stderr`" type: "boolean" default: false tags: ["Container"] /containers/{id}/attach/ws: get: summary: "Attach to a container via a websocket" operationId: "ContainerAttachWebsocket" responses: 101: description: "no error, hints proxy about hijacking" 200: description: "no error, no upgrade header found" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "detachKeys" in: "query" description: | Override the key sequence for detaching a container.Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,`, or `_`. type: "string" - name: "logs" in: "query" description: "Return logs" type: "boolean" default: false - name: "stream" in: "query" description: "Return stream" type: "boolean" default: false - name: "stdin" in: "query" description: "Attach to `stdin`" type: "boolean" default: false - name: "stdout" in: "query" description: "Attach to `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Attach to `stderr`" type: "boolean" default: false tags: ["Container"] /containers/{id}/wait: post: summary: "Wait for a container" description: "Block until a container stops, then returns the exit code." operationId: "ContainerWait" produces: ["application/json"] responses: 200: description: "The container has exit." schema: type: "object" title: "ContainerWaitResponse" description: "OK response to ContainerWait operation" required: [StatusCode] properties: StatusCode: description: "Exit code of the container" type: "integer" x-nullable: false Error: description: "container waiting error, if any" type: "object" properties: Message: description: "Details of an error" type: "string" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "condition" in: "query" description: | Wait until a container state reaches the given condition, either 'not-running' (default), 'next-exit', or 'removed'. type: "string" default: "not-running" tags: ["Container"] /containers/{id}: delete: summary: "Remove a container" operationId: "ContainerDelete" responses: 204: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "conflict" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: | You cannot remove a running container: c2ada9df5af8. 
Stop the container before attempting removal or force remove 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "v" in: "query" description: "Remove anonymous volumes associated with the container." type: "boolean" default: false - name: "force" in: "query" description: "If the container is running, kill it before removing it." type: "boolean" default: false - name: "link" in: "query" description: "Remove the specified link associated with the container." type: "boolean" default: false tags: ["Container"] /containers/{id}/archive: head: summary: "Get information about files in a container" description: | A response header `X-Docker-Container-Path-Stat` is returned, containing a base64 - encoded JSON object with some filesystem header information about the path. operationId: "ContainerArchiveInfo" responses: 200: description: "no error" headers: X-Docker-Container-Path-Stat: type: "string" description: | A base64 - encoded JSON object with some filesystem header information about the path 400: description: "Bad parameter" schema: allOf: - $ref: "#/definitions/ErrorResponse" - type: "object" properties: message: description: | The error message. Either "must specify path parameter" (path cannot be empty) or "not a directory" (path was asserted to be a directory but exists as a file). type: "string" x-nullable: false 404: description: "Container or path does not exist" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "path" in: "query" required: true description: "Resource in the container’s filesystem to archive." type: "string" tags: ["Container"] get: summary: "Get an archive of a filesystem resource in a container" description: "Get a tar archive of a resource in the filesystem of container id." operationId: "ContainerArchive" produces: ["application/x-tar"] responses: 200: description: "no error" 400: description: "Bad parameter" schema: allOf: - $ref: "#/definitions/ErrorResponse" - type: "object" properties: message: description: | The error message. Either "must specify path parameter" (path cannot be empty) or "not a directory" (path was asserted to be a directory but exists as a file). type: "string" x-nullable: false 404: description: "Container or path does not exist" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "path" in: "query" required: true description: "Resource in the container’s filesystem to archive." type: "string" tags: ["Container"] put: summary: "Extract an archive of files or folders to a directory in a container" description: "Upload a tar archive to be extracted to a path in the filesystem of container id." 
operationId: "PutContainerArchive" consumes: ["application/x-tar", "application/octet-stream"] responses: 200: description: "The content was extracted successfully" 400: description: "Bad parameter" schema: $ref: "#/definitions/ErrorResponse" 403: description: "Permission denied, the volume or container rootfs is marked as read-only." schema: $ref: "#/definitions/ErrorResponse" 404: description: "No such container or path does not exist inside the container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "path" in: "query" required: true description: "Path to a directory in the container to extract the archive’s contents into. " type: "string" - name: "noOverwriteDirNonDir" in: "query" description: | If `1`, `true`, or `True` then it will be an error if unpacking the given content would cause an existing directory to be replaced with a non-directory and vice versa. type: "string" - name: "copyUIDGID" in: "query" description: | If `1`, `true`, then it will copy UID/GID maps to the dest file or dir type: "string" - name: "inputStream" in: "body" required: true description: | The input stream must be a tar archive compressed with one of the following algorithms: `identity` (no compression), `gzip`, `bzip2`, or `xz`. schema: type: "string" format: "binary" tags: ["Container"] /containers/prune: post: summary: "Delete stopped containers" produces: - "application/json" operationId: "ContainerPrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `until=<timestamp>` Prune containers created before this timestamp. The `<timestamp>` can be Unix timestamps, date formatted timestamps, or Go duration strings (e.g. `10m`, `1h30m`) computed relative to the daemon machine’s time. - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune containers with (or without, in case `label!=...` is used) the specified labels. type: "string" responses: 200: description: "No error" schema: type: "object" title: "ContainerPruneResponse" properties: ContainersDeleted: description: "Container IDs that were deleted" type: "array" items: type: "string" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Container"] /images/json: get: summary: "List Images" description: "Returns a list of images on the server. Note that it uses a different, smaller representation of an image than inspecting a single image." 
operationId: "ImageList" produces: - "application/json" responses: 200: description: "Summary image data for the images matching the query" schema: type: "array" items: $ref: "#/definitions/ImageSummary" examples: application/json: - Id: "sha256:e216a057b1cb1efc11f8a268f37ef62083e70b1b38323ba252e25ac88904a7e8" ParentId: "" RepoTags: - "ubuntu:12.04" - "ubuntu:precise" RepoDigests: - "ubuntu@sha256:992069aee4016783df6345315302fa59681aae51a8eeb2f889dea59290f21787" Created: 1474925151 Size: 103579269 VirtualSize: 103579269 SharedSize: 0 Labels: {} Containers: 2 - Id: "sha256:3e314f95dcace0f5e4fd37b10862fe8398e3c60ed36600bc0ca5fda78b087175" ParentId: "" RepoTags: - "ubuntu:12.10" - "ubuntu:quantal" RepoDigests: - "ubuntu@sha256:002fba3e3255af10be97ea26e476692a7ebed0bb074a9ab960b2e7a1526b15d7" - "ubuntu@sha256:68ea0200f0b90df725d99d823905b04cf844f6039ef60c60bf3e019915017bd3" Created: 1403128455 Size: 172064416 VirtualSize: 172064416 SharedSize: 0 Labels: {} Containers: 5 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "all" in: "query" description: "Show all images. Only images from a final layer (no children) are shown by default." type: "boolean" default: false - name: "filters" in: "query" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the images list. Available filters: - `before`=(`<image-name>[:<tag>]`, `<image id>` or `<image@digest>`) - `dangling=true` - `label=key` or `label="key=value"` of an image label - `reference`=(`<image-name>[:<tag>]`) - `since`=(`<image-name>[:<tag>]`, `<image id>` or `<image@digest>`) type: "string" - name: "digests" in: "query" description: "Show digest information as a `RepoDigests` field on each image." type: "boolean" default: false tags: ["Image"] /build: post: summary: "Build an image" description: | Build an image from a tar archive with a `Dockerfile` in it. The `Dockerfile` specifies how the image is built from the tar archive. It is typically in the archive's root, but can be at a different path or have a different name by specifying the `dockerfile` parameter. [See the `Dockerfile` reference for more information](https://docs.docker.com/engine/reference/builder/). The Docker daemon performs a preliminary validation of the `Dockerfile` before starting the build, and returns an error if the syntax is incorrect. After that, each instruction is run one-by-one until the ID of the new image is output. The build is canceled if the client drops the connection by quitting or being killed. operationId: "ImageBuild" consumes: - "application/octet-stream" produces: - "application/json" parameters: - name: "inputStream" in: "body" description: "A tar archive compressed with one of the following algorithms: identity (no compression), gzip, bzip2, xz." schema: type: "string" format: "binary" - name: "dockerfile" in: "query" description: "Path within the build context to the `Dockerfile`. This is ignored if `remote` is specified and points to an external `Dockerfile`." type: "string" default: "Dockerfile" - name: "t" in: "query" description: "A name and optional tag to apply to the image in the `name:tag` format. If you omit the tag the default `latest` value is assumed. You can provide several `t` parameters." type: "string" - name: "extrahosts" in: "query" description: "Extra hosts to add to /etc/hosts" type: "string" - name: "remote" in: "query" description: "A Git repository URI or HTTP/HTTPS context URI. 
If the URI points to a single text file, the file’s contents are placed into a file called `Dockerfile` and the image is built from that file. If the URI points to a tarball, the file is downloaded by the daemon and the contents therein used as the context for the build. If the URI points to a tarball and the `dockerfile` parameter is also specified, there must be a file with the corresponding path inside the tarball." type: "string" - name: "q" in: "query" description: "Suppress verbose build output." type: "boolean" default: false - name: "nocache" in: "query" description: "Do not use the cache when building the image." type: "boolean" default: false - name: "cachefrom" in: "query" description: "JSON array of images used for build cache resolution." type: "string" - name: "pull" in: "query" description: "Attempt to pull the image even if an older image exists locally." type: "string" - name: "rm" in: "query" description: "Remove intermediate containers after a successful build." type: "boolean" default: true - name: "forcerm" in: "query" description: "Always remove intermediate containers, even upon failure." type: "boolean" default: false - name: "memory" in: "query" description: "Set memory limit for build." type: "integer" - name: "memswap" in: "query" description: "Total memory (memory + swap). Set as `-1` to disable swap." type: "integer" - name: "cpushares" in: "query" description: "CPU shares (relative weight)." type: "integer" - name: "cpusetcpus" in: "query" description: "CPUs in which to allow execution (e.g., `0-3`, `0,1`)." type: "string" - name: "cpuperiod" in: "query" description: "The length of a CPU period in microseconds." type: "integer" - name: "cpuquota" in: "query" description: "Microseconds of CPU time that the container can get in a CPU period." type: "integer" - name: "buildargs" in: "query" description: > JSON map of string pairs for build-time variables. Users pass these values at build-time. Docker uses the buildargs as the environment context for commands run via the `Dockerfile` RUN instruction, or for variable expansion in other `Dockerfile` instructions. This is not meant for passing secret values. For example, the build arg `FOO=bar` would become `{"FOO":"bar"}` in JSON. This would result in the query parameter `buildargs={"FOO":"bar"}`. Note that `{"FOO":"bar"}` should be URI component encoded. [Read more about the buildargs instruction.](https://docs.docker.com/engine/reference/builder/#arg) type: "string" - name: "shmsize" in: "query" description: "Size of `/dev/shm` in bytes. The size must be greater than 0. If omitted the system uses 64MB." type: "integer" - name: "squash" in: "query" description: "Squash the resulting images layers into a single layer. *(Experimental release only.)*" type: "boolean" - name: "labels" in: "query" description: "Arbitrary key/value labels to set on the image, as a JSON map of string pairs." type: "string" - name: "networkmode" in: "query" description: | Sets the networking mode for the run commands during build. Supported standard values are: `bridge`, `host`, `none`, and `container:<name|id>`. Any other value is taken as a custom network's name or ID to which this container should connect to. type: "string" - name: "Content-type" in: "header" type: "string" enum: - "application/x-tar" default: "application/x-tar" - name: "X-Registry-Config" in: "header" description: | This is a base64-encoded JSON object with auth configurations for multiple registries that a build may refer to. 
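The `buildargs` and `labels` parameters above are JSON objects that must themselves be URI-component encoded in the query string. A small Go sketch of that encoding follows; the `builddemo` package and `buildQuery` helper are assumptions, and `url.Values.Encode` is relied on only for its standard percent-encoding of the JSON values.

```go
package builddemo

import (
	"encoding/json"
	"net/url"
)

// buildQuery assembles the query string for POST /build. buildargs and
// labels are JSON objects; url.Values.Encode percent-encodes the JSON so it
// arrives URI-component encoded, as required above.
func buildQuery(tag string, args, labels map[string]string) (string, error) {
	q := url.Values{}
	q.Set("t", tag)

	argsJSON, err := json.Marshal(args) // e.g. {"FOO":"bar"}
	if err != nil {
		return "", err
	}
	q.Set("buildargs", string(argsJSON))

	if len(labels) > 0 {
		labelsJSON, err := json.Marshal(labels)
		if err != nil {
			return "", err
		}
		q.Set("labels", string(labelsJSON))
	}

	return "/build?" + q.Encode(), nil
}
```

For example, `buildQuery("myimage:latest", map[string]string{"FOO": "bar"}, nil)` yields `/build?buildargs=%7B%22FOO%22%3A%22bar%22%7D&t=myimage%3Alatest`.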
The key is a registry URL, and the value is an auth configuration object, [as described in the authentication section](#section/Authentication). For example: ``` { "docker.example.com": { "username": "janedoe", "password": "hunter2" }, "https://index.docker.io/v1/": { "username": "mobydock", "password": "conta1n3rize14" } } ``` Only the registry domain name (and port if not the default 443) are required. However, for legacy reasons, the Docker Hub registry must be specified with both a `https://` prefix and a `/v1/` suffix even though Docker will prefer to use the v2 registry API. type: "string" - name: "platform" in: "query" description: "Platform in the format os[/arch[/variant]]" type: "string" default: "" - name: "target" in: "query" description: "Target build stage" type: "string" default: "" - name: "outputs" in: "query" description: "BuildKit output configuration" type: "string" default: "" responses: 200: description: "no error" 400: description: "Bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Image"] /build/prune: post: summary: "Delete builder cache" produces: - "application/json" operationId: "BuildPrune" parameters: - name: "keep-storage" in: "query" description: "Amount of disk space in bytes to keep for cache" type: "integer" format: "int64" - name: "all" in: "query" type: "boolean" description: "Remove all types of build cache" - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the list of build cache objects. Available filters: - `until=<duration>`: duration relative to daemon's time, during which build cache was not used, in Go's duration format (e.g., '24h') - `id=<id>` - `parent=<id>` - `type=<string>` - `description=<string>` - `inuse` - `shared` - `private` responses: 200: description: "No error" schema: type: "object" title: "BuildPruneResponse" properties: CachesDeleted: type: "array" items: description: "ID of build cache object" type: "string" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Image"] /images/create: post: summary: "Create an image" description: "Create an image by either pulling it from a registry or importing it." operationId: "ImageCreate" consumes: - "text/plain" - "application/octet-stream" produces: - "application/json" responses: 200: description: "no error" 404: description: "repository does not exist or no read access" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "fromImage" in: "query" description: "Name of the image to pull. The name may include a tag or digest. This parameter may only be used when pulling an image. The pull is cancelled if the HTTP connection is closed." type: "string" - name: "fromSrc" in: "query" description: "Source to import. The value may be a URL from which the image can be retrieved or `-` to read the image from the request body. This parameter may only be used when importing an image." type: "string" - name: "repo" in: "query" description: "Repository name given to an image when it is imported. The repo may include a tag. This parameter may only be used when importing an image." type: "string" - name: "tag" in: "query" description: "Tag or digest. 
If empty when pulling an image, this causes all tags for the given image to be pulled." type: "string" - name: "message" in: "query" description: "Set commit message for imported image." type: "string" - name: "inputImage" in: "body" description: "Image content if the value `-` has been specified in fromSrc query parameter" schema: type: "string" required: false - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration. Refer to the [authentication section](#section/Authentication) for details. type: "string" - name: "platform" in: "query" description: "Platform in the format os[/arch[/variant]]" type: "string" default: "" tags: ["Image"] /images/{name}/json: get: summary: "Inspect an image" description: "Return low-level information about an image." operationId: "ImageInspect" produces: - "application/json" responses: 200: description: "No error" schema: $ref: "#/definitions/Image" examples: application/json: Id: "sha256:85f05633ddc1c50679be2b16a0479ab6f7637f8884e0cfe0f4d20e1ebb3d6e7c" Container: "cb91e48a60d01f1e27028b4fc6819f4f290b3cf12496c8176ec714d0d390984a" Comment: "" Os: "linux" Architecture: "amd64" Parent: "sha256:91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c" ContainerConfig: Tty: false Hostname: "e611e15f9c9d" Domainname: "" AttachStdout: false PublishService: "" AttachStdin: false OpenStdin: false StdinOnce: false NetworkDisabled: false OnBuild: [] Image: "91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c" User: "" WorkingDir: "" MacAddress: "" AttachStderr: false Labels: com.example.license: "GPL" com.example.version: "1.0" com.example.vendor: "Acme" Env: - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" Cmd: - "/bin/sh" - "-c" - "#(nop) LABEL com.example.vendor=Acme com.example.license=GPL com.example.version=1.0" DockerVersion: "1.9.0-dev" VirtualSize: 188359297 Size: 0 Author: "" Created: "2015-09-10T08:30:53.26995814Z" GraphDriver: Name: "aufs" Data: {} RepoDigests: - "localhost:5000/test/busybox/example@sha256:cbbf2f9a99b47fc460d422812b6a5adff7dfee951d8fa2e4a98caa0382cfbdbf" RepoTags: - "example:1.0" - "example:latest" - "example:stable" Config: Image: "91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c" NetworkDisabled: false OnBuild: [] StdinOnce: false PublishService: "" AttachStdin: false OpenStdin: false Domainname: "" AttachStdout: false Tty: false Hostname: "e611e15f9c9d" Cmd: - "/bin/bash" Env: - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" Labels: com.example.vendor: "Acme" com.example.version: "1.0" com.example.license: "GPL" MacAddress: "" AttachStderr: false WorkingDir: "" User: "" RootFS: Type: "layers" Layers: - "sha256:1834950e52ce4d5a88a1bbd131c537f4d0e56d10ff0dd69e66be3b7dfa9df7e6" - "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such image: someimage (tag: latest)" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or id" type: "string" required: true tags: ["Image"] /images/{name}/history: get: summary: "Get the history of an image" description: "Return parent layers of an image." 
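A hedged sketch of a pull with registry credentials: the auth configuration is serialized to JSON and base64url-encoded into the `X-Registry-Auth` header, as described for the image-create endpoint above. The `pulldemo` package and `pullImage` name are assumptions, the `username`/`password`/`serveraddress` field names follow the auth configuration described in the authentication section, and the client wiring and placeholder host carry over from the earlier sketches. The progress stream must be read to completion, since the pull is cancelled when the connection closes.

```go
package pulldemo

import (
	"encoding/base64"
	"encoding/json"
	"io"
	"net/http"
	"net/url"
)

// pullImage triggers POST /images/create?fromImage=...&tag=... with an
// X-Registry-Auth header carrying base64url-encoded JSON credentials.
// cli is assumed to be an *http.Client wired to the daemon socket.
func pullImage(cli *http.Client, image, tag, username, password, registry string) error {
	auth, err := json.Marshal(map[string]string{
		"username":      username,
		"password":      password,
		"serveraddress": registry,
	})
	if err != nil {
		return err
	}

	q := url.Values{}
	q.Set("fromImage", image)
	q.Set("tag", tag)

	req, err := http.NewRequest(http.MethodPost, "http://docker/images/create?"+q.Encode(), nil)
	if err != nil {
		return err
	}
	req.Header.Set("X-Registry-Auth", base64.URLEncoding.EncodeToString(auth))

	resp, err := cli.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	_, err = io.Copy(io.Discard, resp.Body) // consume the JSON progress stream
	return err
}
```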
operationId: "ImageHistory" produces: ["application/json"] responses: 200: description: "List of image layers" schema: type: "array" items: type: "object" x-go-name: HistoryResponseItem title: "HistoryResponseItem" description: "individual image layer information in response to ImageHistory operation" required: [Id, Created, CreatedBy, Tags, Size, Comment] properties: Id: type: "string" x-nullable: false Created: type: "integer" format: "int64" x-nullable: false CreatedBy: type: "string" x-nullable: false Tags: type: "array" items: type: "string" Size: type: "integer" format: "int64" x-nullable: false Comment: type: "string" x-nullable: false examples: application/json: - Id: "3db9c44f45209632d6050b35958829c3a2aa256d81b9a7be45b362ff85c54710" Created: 1398108230 CreatedBy: "/bin/sh -c #(nop) ADD file:eb15dbd63394e063b805a3c32ca7bf0266ef64676d5a6fab4801f2e81e2a5148 in /" Tags: - "ubuntu:lucid" - "ubuntu:10.04" Size: 182964289 Comment: "" - Id: "6cfa4d1f33fb861d4d114f43b25abd0ac737509268065cdfd69d544a59c85ab8" Created: 1398108222 CreatedBy: "/bin/sh -c #(nop) MAINTAINER Tianon Gravi <[email protected]> - mkimage-debootstrap.sh -i iproute,iputils-ping,ubuntu-minimal -t lucid.tar.xz lucid http://archive.ubuntu.com/ubuntu/" Tags: [] Size: 0 Comment: "" - Id: "511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158" Created: 1371157430 CreatedBy: "" Tags: - "scratch12:latest" - "scratch:latest" Size: 0 Comment: "Imported from -" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID" type: "string" required: true tags: ["Image"] /images/{name}/push: post: summary: "Push an image" description: | Push an image to a registry. If you wish to push an image on to a private registry, that image must already have a tag which references the registry. For example, `registry.example.com/myimage:latest`. The push is cancelled if the HTTP connection is closed. operationId: "ImagePush" consumes: - "application/octet-stream" responses: 200: description: "No error" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID." type: "string" required: true - name: "tag" in: "query" description: "The tag to associate with the image on the registry." type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration. Refer to the [authentication section](#section/Authentication) for details. type: "string" required: true tags: ["Image"] /images/{name}/tag: post: summary: "Tag an image" description: "Tag an image so that it becomes part of a repository." operationId: "ImageTag" responses: 201: description: "No error" 400: description: "Bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Conflict" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID to tag." type: "string" required: true - name: "repo" in: "query" description: "The repository to tag in. For example, `someuser/someimage`." type: "string" - name: "tag" in: "query" description: "The name of the new tag." 
type: "string" tags: ["Image"] /images/{name}: delete: summary: "Remove an image" description: | Remove an image, along with any untagged parent images that were referenced by that image. Images can't be removed if they have descendant images, are being used by a running container or are being used by a build. operationId: "ImageDelete" produces: ["application/json"] responses: 200: description: "The image was deleted successfully" schema: type: "array" items: $ref: "#/definitions/ImageDeleteResponseItem" examples: application/json: - Untagged: "3e2f21a89f" - Deleted: "3e2f21a89f" - Deleted: "53b4f83ac9" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Conflict" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID" type: "string" required: true - name: "force" in: "query" description: "Remove the image even if it is being used by stopped containers or has other tags" type: "boolean" default: false - name: "noprune" in: "query" description: "Do not delete untagged parent images" type: "boolean" default: false tags: ["Image"] /images/search: get: summary: "Search images" description: "Search for an image on Docker Hub." operationId: "ImageSearch" produces: - "application/json" responses: 200: description: "No error" schema: type: "array" items: type: "object" title: "ImageSearchResponseItem" properties: description: type: "string" is_official: type: "boolean" is_automated: type: "boolean" name: type: "string" star_count: type: "integer" examples: application/json: - description: "" is_official: false is_automated: false name: "wma55/u1210sshd" star_count: 0 - description: "" is_official: false is_automated: false name: "jdswinbank/sshd" star_count: 0 - description: "" is_official: false is_automated: false name: "vgauthier/sshd" star_count: 0 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "term" in: "query" description: "Term to search" type: "string" required: true - name: "limit" in: "query" description: "Maximum number of results to return" type: "integer" - name: "filters" in: "query" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the images list. Available filters: - `is-automated=(true|false)` - `is-official=(true|false)` - `stars=<number>` Matches images that has at least 'number' stars. type: "string" tags: ["Image"] /images/prune: post: summary: "Delete unused images" produces: - "application/json" operationId: "ImagePrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `dangling=<boolean>` When set to `true` (or `1`), prune only unused *and* untagged images. When set to `false` (or `0`), all unused images are pruned. - `until=<string>` Prune images created before this timestamp. The `<timestamp>` can be Unix timestamps, date formatted timestamps, or Go duration strings (e.g. `10m`, `1h30m`) computed relative to the daemon machine’s time. - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune images with (or without, in case `label!=...` is used) the specified labels. 
type: "string" responses: 200: description: "No error" schema: type: "object" title: "ImagePruneResponse" properties: ImagesDeleted: description: "Images that were deleted" type: "array" items: $ref: "#/definitions/ImageDeleteResponseItem" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Image"] /auth: post: summary: "Check auth configuration" description: | Validate credentials for a registry and, if available, get an identity token for accessing the registry without password. operationId: "SystemAuth" consumes: ["application/json"] produces: ["application/json"] responses: 200: description: "An identity token was generated successfully." schema: type: "object" title: "SystemAuthResponse" required: [Status] properties: Status: description: "The status of the authentication" type: "string" x-nullable: false IdentityToken: description: "An opaque token used to authenticate a user after a successful login" type: "string" x-nullable: false examples: application/json: Status: "Login Succeeded" IdentityToken: "9cbaf023786cd7..." 204: description: "No error" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "authConfig" in: "body" description: "Authentication to check" schema: $ref: "#/definitions/AuthConfig" tags: ["System"] /info: get: summary: "Get system information" operationId: "SystemInfo" produces: - "application/json" responses: 200: description: "No error" schema: $ref: "#/definitions/SystemInfo" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /version: get: summary: "Get version" description: "Returns the version of Docker that is running and various information about the system that Docker is running on." operationId: "SystemVersion" produces: ["application/json"] responses: 200: description: "no error" schema: $ref: "#/definitions/SystemVersion" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /_ping: get: summary: "Ping" description: "This is a dummy endpoint you can use to test if the server is accessible." operationId: "SystemPing" produces: ["text/plain"] responses: 200: description: "no error" schema: type: "string" example: "OK" headers: API-Version: type: "string" description: "Max API Version the server supports" Builder-Version: type: "string" description: "Default version of docker image builder" Docker-Experimental: type: "boolean" description: "If the server is running with experimental mode enabled" Cache-Control: type: "string" default: "no-cache, no-store, must-revalidate" Pragma: type: "string" default: "no-cache" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" headers: Cache-Control: type: "string" default: "no-cache, no-store, must-revalidate" Pragma: type: "string" default: "no-cache" tags: ["System"] head: summary: "Ping" description: "This is a dummy endpoint you can use to test if the server is accessible." 
operationId: "SystemPingHead" produces: ["text/plain"] responses: 200: description: "no error" schema: type: "string" example: "(empty)" headers: API-Version: type: "string" description: "Max API Version the server supports" Builder-Version: type: "string" description: "Default version of docker image builder" Docker-Experimental: type: "boolean" description: "If the server is running with experimental mode enabled" Cache-Control: type: "string" default: "no-cache, no-store, must-revalidate" Pragma: type: "string" default: "no-cache" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /commit: post: summary: "Create a new image from a container" operationId: "ImageCommit" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "containerConfig" in: "body" description: "The container configuration" schema: $ref: "#/definitions/ContainerConfig" - name: "container" in: "query" description: "The ID or name of the container to commit" type: "string" - name: "repo" in: "query" description: "Repository name for the created image" type: "string" - name: "tag" in: "query" description: "Tag name for the create image" type: "string" - name: "comment" in: "query" description: "Commit message" type: "string" - name: "author" in: "query" description: "Author of the image (e.g., `John Hannibal Smith <[email protected]>`)" type: "string" - name: "pause" in: "query" description: "Whether to pause the container before committing" type: "boolean" default: true - name: "changes" in: "query" description: "`Dockerfile` instructions to apply while committing" type: "string" tags: ["Image"] /events: get: summary: "Monitor events" description: | Stream real-time events from the server. Various objects within Docker report events when something happens to them. 
Containers report these events: `attach`, `commit`, `copy`, `create`, `destroy`, `detach`, `die`, `exec_create`, `exec_detach`, `exec_start`, `exec_die`, `export`, `health_status`, `kill`, `oom`, `pause`, `rename`, `resize`, `restart`, `start`, `stop`, `top`, `unpause`, `update`, and `prune` Images report these events: `delete`, `import`, `load`, `pull`, `push`, `save`, `tag`, `untag`, and `prune` Volumes report these events: `create`, `mount`, `unmount`, `destroy`, and `prune` Networks report these events: `create`, `connect`, `disconnect`, `destroy`, `update`, `remove`, and `prune` The Docker daemon reports these events: `reload` Services report these events: `create`, `update`, and `remove` Nodes report these events: `create`, `update`, and `remove` Secrets report these events: `create`, `update`, and `remove` Configs report these events: `create`, `update`, and `remove` The Builder reports `prune` events operationId: "SystemEvents" produces: - "application/json" responses: 200: description: "no error" schema: type: "object" title: "SystemEventsResponse" properties: Type: description: "The type of object emitting the event" type: "string" Action: description: "The type of event" type: "string" Actor: type: "object" properties: ID: description: "The ID of the object emitting the event" type: "string" Attributes: description: "Various key/value attributes of the object, depending on its type" type: "object" additionalProperties: type: "string" time: description: "Timestamp of event" type: "integer" timeNano: description: "Timestamp of event, with nanosecond accuracy" type: "integer" format: "int64" examples: application/json: Type: "container" Action: "create" Actor: ID: "ede54ee1afda366ab42f824e8a5ffd195155d853ceaec74a927f249ea270c743" Attributes: com.example.some-label: "some-label-value" image: "alpine" name: "my-container" time: 1461943101 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "since" in: "query" description: "Show events created since this timestamp then stream new events." type: "string" - name: "until" in: "query" description: "Show events created until this timestamp then stop streaming." type: "string" - name: "filters" in: "query" description: | A JSON encoded value of filters (a `map[string][]string`) to process on the event list. 
Available filters: - `config=<string>` config name or ID - `container=<string>` container name or ID - `daemon=<string>` daemon name or ID - `event=<string>` event type - `image=<string>` image name or ID - `label=<string>` image or container label - `network=<string>` network name or ID - `node=<string>` node ID - `plugin`=<string> plugin name or ID - `scope`=<string> local or swarm - `secret=<string>` secret name or ID - `service=<string>` service name or ID - `type=<string>` object to filter by, one of `container`, `image`, `volume`, `network`, `daemon`, `plugin`, `node`, `service`, `secret` or `config` - `volume=<string>` volume name type: "string" tags: ["System"] /system/df: get: summary: "Get data usage information" operationId: "SystemDataUsage" responses: 200: description: "no error" schema: type: "object" title: "SystemDataUsageResponse" properties: LayersSize: type: "integer" format: "int64" Images: type: "array" items: $ref: "#/definitions/ImageSummary" Containers: type: "array" items: $ref: "#/definitions/ContainerSummary" Volumes: type: "array" items: $ref: "#/definitions/Volume" BuildCache: type: "array" items: $ref: "#/definitions/BuildCache" example: LayersSize: 1092588 Images: - Id: "sha256:2b8fd9751c4c0f5dd266fcae00707e67a2545ef34f9a29354585f93dac906749" ParentId: "" RepoTags: - "busybox:latest" RepoDigests: - "busybox@sha256:a59906e33509d14c036c8678d687bd4eec81ed7c4b8ce907b888c607f6a1e0e6" Created: 1466724217 Size: 1092588 SharedSize: 0 VirtualSize: 1092588 Labels: {} Containers: 1 Containers: - Id: "e575172ed11dc01bfce087fb27bee502db149e1a0fad7c296ad300bbff178148" Names: - "/top" Image: "busybox" ImageID: "sha256:2b8fd9751c4c0f5dd266fcae00707e67a2545ef34f9a29354585f93dac906749" Command: "top" Created: 1472592424 Ports: [] SizeRootFs: 1092588 Labels: {} State: "exited" Status: "Exited (0) 56 minutes ago" HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: IPAMConfig: null Links: null Aliases: null NetworkID: "d687bc59335f0e5c9ee8193e5612e8aee000c8c62ea170cfb99c098f95899d92" EndpointID: "8ed5115aeaad9abb174f68dcf135b49f11daf597678315231a32ca28441dec6a" Gateway: "172.18.0.1" IPAddress: "172.18.0.2" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:12:00:02" Mounts: [] Volumes: - Name: "my-volume" Driver: "local" Mountpoint: "/var/lib/docker/volumes/my-volume/_data" Labels: null Scope: "local" Options: null UsageData: Size: 10920104 RefCount: 2 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /images/{name}/get: get: summary: "Export an image" description: | Get a tarball containing all images and metadata for a repository. If `name` is a specific name and tag (e.g. `ubuntu:latest`), then only that image (and its parents) are returned. If `name` is an image ID, similarly only that image (and its parents) are returned, but with the exclusion of the `repositories` file in the tarball, as there were no image names referenced. ### Image tarball format An image tarball contains one directory per image layer (named using its long ID), each containing these files: - `VERSION`: currently `1.0` - the file format version - `json`: detailed layer information, similar to `docker inspect layer_id` - `layer.tar`: A tarfile containing the filesystem changes in this layer The `layer.tar` file contains `aufs` style `.wh..wh.aufs` files and directories for storing attribute changes and deletions. 
If the tarball defines a repository, the tarball should also include a `repositories` file at the root that contains a list of repository and tag names mapped to layer IDs. ```json { "hello-world": { "latest": "565a9d68a73f6706862bfe8409a7f659776d4d60a8d096eb4a3cbce6999cc2a1" } } ``` operationId: "ImageGet" produces: - "application/x-tar" responses: 200: description: "no error" schema: type: "string" format: "binary" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID" type: "string" required: true tags: ["Image"] /images/get: get: summary: "Export several images" description: | Get a tarball containing all images and metadata for several image repositories. For each value of the `names` parameter: if it is a specific name and tag (e.g. `ubuntu:latest`), then only that image (and its parents) are returned; if it is an image ID, similarly only that image (and its parents) are returned and there would be no names referenced in the 'repositories' file for this image ID. For details on the format, see the [export image endpoint](#operation/ImageGet). operationId: "ImageGetAll" produces: - "application/x-tar" responses: 200: description: "no error" schema: type: "string" format: "binary" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "names" in: "query" description: "Image names to filter by" type: "array" items: type: "string" tags: ["Image"] /images/load: post: summary: "Import images" description: | Load a set of images and tags into a repository. For details on the format, see the [export image endpoint](#operation/ImageGet). operationId: "ImageLoad" consumes: - "application/x-tar" produces: - "application/json" responses: 200: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "imagesTarball" in: "body" description: "Tar archive containing images" schema: type: "string" format: "binary" - name: "quiet" in: "query" description: "Suppress progress details during load." type: "boolean" default: false tags: ["Image"] /containers/{id}/exec: post: summary: "Create an exec instance" description: "Run a command inside a running container." operationId: "ContainerExec" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "container is paused" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "execConfig" in: "body" description: "Exec configuration" schema: type: "object" properties: AttachStdin: type: "boolean" description: "Attach to `stdin` of the exec command." AttachStdout: type: "boolean" description: "Attach to `stdout` of the exec command." AttachStderr: type: "boolean" description: "Attach to `stderr` of the exec command." DetachKeys: type: "string" description: | Override the key sequence for detaching a container. Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,` or `_`. Tty: type: "boolean" description: "Allocate a pseudo-TTY." Env: description: | A list of environment variables in the form `["VAR=value", ...]`. 
type: "array" items: type: "string" Cmd: type: "array" description: "Command to run, as a string or array of strings." items: type: "string" Privileged: type: "boolean" description: "Runs the exec process with extended privileges." default: false User: type: "string" description: | The user, and optionally, group to run the exec process inside the container. Format is one of: `user`, `user:group`, `uid`, or `uid:gid`. WorkingDir: type: "string" description: | The working directory for the exec process inside the container. example: AttachStdin: false AttachStdout: true AttachStderr: true DetachKeys: "ctrl-p,ctrl-q" Tty: false Cmd: - "date" Env: - "FOO=bar" - "BAZ=quux" required: true - name: "id" in: "path" description: "ID or name of container" type: "string" required: true tags: ["Exec"] /exec/{id}/start: post: summary: "Start an exec instance" description: | Starts a previously set up exec instance. If detach is true, this endpoint returns immediately after starting the command. Otherwise, it sets up an interactive session with the command. operationId: "ExecStart" consumes: - "application/json" produces: - "application/vnd.docker.raw-stream" responses: 200: description: "No error" 404: description: "No such exec instance" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Container is stopped or paused" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "execStartConfig" in: "body" schema: type: "object" properties: Detach: type: "boolean" description: "Detach from the command." Tty: type: "boolean" description: "Allocate a pseudo-TTY." example: Detach: false Tty: false - name: "id" in: "path" description: "Exec instance ID" required: true type: "string" tags: ["Exec"] /exec/{id}/resize: post: summary: "Resize an exec instance" description: | Resize the TTY session used by an exec instance. This endpoint only works if `tty` was specified as part of creating and starting the exec instance. operationId: "ExecResize" responses: 201: description: "No error" 404: description: "No such exec instance" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Exec instance ID" required: true type: "string" - name: "h" in: "query" description: "Height of the TTY session in characters" type: "integer" - name: "w" in: "query" description: "Width of the TTY session in characters" type: "integer" tags: ["Exec"] /exec/{id}/json: get: summary: "Inspect an exec instance" description: "Return low-level information about an exec instance." operationId: "ExecInspect" produces: - "application/json" responses: 200: description: "No error" schema: type: "object" title: "ExecInspectResponse" properties: CanRemove: type: "boolean" DetachKeys: type: "string" ID: type: "string" Running: type: "boolean" ExitCode: type: "integer" ProcessConfig: $ref: "#/definitions/ProcessConfig" OpenStdin: type: "boolean" OpenStderr: type: "boolean" OpenStdout: type: "boolean" ContainerID: type: "string" Pid: type: "integer" description: "The system process ID for the exec process." 
examples: application/json: CanRemove: false ContainerID: "b53ee82b53a40c7dca428523e34f741f3abc51d9f297a14ff874bf761b995126" DetachKeys: "" ExitCode: 2 ID: "f33bbfb39f5b142420f4759b2348913bd4a8d1a6d7fd56499cb41a1bb91d7b3b" OpenStderr: true OpenStdin: true OpenStdout: true ProcessConfig: arguments: - "-c" - "exit 2" entrypoint: "sh" privileged: false tty: true user: "1000" Running: false Pid: 42000 404: description: "No such exec instance" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Exec instance ID" required: true type: "string" tags: ["Exec"] /volumes: get: summary: "List volumes" operationId: "VolumeList" produces: ["application/json"] responses: 200: description: "Summary volume data that matches the query" schema: type: "object" title: "VolumeListResponse" description: "Volume list response" required: [Volumes, Warnings] properties: Volumes: type: "array" x-nullable: false description: "List of volumes" items: $ref: "#/definitions/Volume" Warnings: type: "array" x-nullable: false description: | Warnings that occurred when fetching the list of volumes. items: type: "string" examples: application/json: Volumes: - CreatedAt: "2017-07-19T12:00:26Z" Name: "tardis" Driver: "local" Mountpoint: "/var/lib/docker/volumes/tardis" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Scope: "local" Options: device: "tmpfs" o: "size=100m,uid=1000" type: "tmpfs" Warnings: [] 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" description: | JSON encoded value of the filters (a `map[string][]string`) to process on the volumes list. Available filters: - `dangling=<boolean>` When set to `true` (or `1`), returns all volumes that are not in use by a container. When set to `false` (or `0`), only volumes that are in use by one or more containers are returned. - `driver=<volume-driver-name>` Matches volumes based on their driver. - `label=<key>` or `label=<key>:<value>` Matches volumes based on the presence of a `label` alone or a `label` and a value. - `name=<volume-name>` Matches all or part of a volume name. type: "string" format: "json" tags: ["Volume"] /volumes/create: post: summary: "Create a volume" operationId: "VolumeCreate" consumes: ["application/json"] produces: ["application/json"] responses: 201: description: "The volume was created successfully" schema: $ref: "#/definitions/Volume" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "volumeConfig" in: "body" required: true description: "Volume configuration" schema: type: "object" description: "Volume configuration" title: "VolumeConfig" properties: Name: description: | The new volume's name. If not specified, Docker generates a name. type: "string" x-nullable: false Driver: description: "Name of the volume driver to use." type: "string" default: "local" x-nullable: false DriverOpts: description: | A mapping of driver options and values. These options are passed directly to the driver and are driver specific. type: "object" additionalProperties: type: "string" Labels: description: "User-defined key/value metadata." 
type: "object" additionalProperties: type: "string" example: Name: "tardis" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Driver: "custom" tags: ["Volume"] /volumes/{name}: get: summary: "Inspect a volume" operationId: "VolumeInspect" produces: ["application/json"] responses: 200: description: "No error" schema: $ref: "#/definitions/Volume" 404: description: "No such volume" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" required: true description: "Volume name or ID" type: "string" tags: ["Volume"] delete: summary: "Remove a volume" description: "Instruct the driver to remove the volume." operationId: "VolumeDelete" responses: 204: description: "The volume was removed" 404: description: "No such volume or volume driver" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Volume is in use and cannot be removed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" required: true description: "Volume name or ID" type: "string" - name: "force" in: "query" description: "Force the removal of the volume" type: "boolean" default: false tags: ["Volume"] /volumes/prune: post: summary: "Delete unused volumes" produces: - "application/json" operationId: "VolumePrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune volumes with (or without, in case `label!=...` is used) the specified labels. type: "string" responses: 200: description: "No error" schema: type: "object" title: "VolumePruneResponse" properties: VolumesDeleted: description: "Volumes that were deleted" type: "array" items: type: "string" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Volume"] /networks: get: summary: "List networks" description: | Returns a list of networks. For details on the format, see the [network inspect endpoint](#operation/NetworkInspect). Note that it uses a different, smaller representation of a network than inspecting a single network. For example, the list of containers attached to the network is not propagated in API versions 1.28 and up. 
operationId: "NetworkList" produces: - "application/json" responses: 200: description: "No error" schema: type: "array" items: $ref: "#/definitions/Network" examples: application/json: - Name: "bridge" Id: "f2de39df4171b0dc801e8002d1d999b77256983dfc63041c0f34030aa3977566" Created: "2016-10-19T06:21:00.416543526Z" Scope: "local" Driver: "bridge" EnableIPv6: false Internal: false Attachable: false Ingress: false IPAM: Driver: "default" Config: - Subnet: "172.17.0.0/16" Options: com.docker.network.bridge.default_bridge: "true" com.docker.network.bridge.enable_icc: "true" com.docker.network.bridge.enable_ip_masquerade: "true" com.docker.network.bridge.host_binding_ipv4: "0.0.0.0" com.docker.network.bridge.name: "docker0" com.docker.network.driver.mtu: "1500" - Name: "none" Id: "e086a3893b05ab69242d3c44e49483a3bbbd3a26b46baa8f61ab797c1088d794" Created: "0001-01-01T00:00:00Z" Scope: "local" Driver: "null" EnableIPv6: false Internal: false Attachable: false Ingress: false IPAM: Driver: "default" Config: [] Containers: {} Options: {} - Name: "host" Id: "13e871235c677f196c4e1ecebb9dc733b9b2d2ab589e30c539efeda84a24215e" Created: "0001-01-01T00:00:00Z" Scope: "local" Driver: "host" EnableIPv6: false Internal: false Attachable: false Ingress: false IPAM: Driver: "default" Config: [] Containers: {} Options: {} 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" description: | JSON encoded value of the filters (a `map[string][]string`) to process on the networks list. Available filters: - `dangling=<boolean>` When set to `true` (or `1`), returns all networks that are not in use by a container. When set to `false` (or `0`), only networks that are in use by one or more containers are returned. - `driver=<driver-name>` Matches a network's driver. - `id=<network-id>` Matches all or part of a network ID. - `label=<key>` or `label=<key>=<value>` of a network label. - `name=<network-name>` Matches all or part of a network name. - `scope=["swarm"|"global"|"local"]` Filters networks by scope (`swarm`, `global`, or `local`). - `type=["custom"|"builtin"]` Filters networks by type. The `custom` keyword returns all user-defined networks. 
type: "string" tags: ["Network"] /networks/{id}: get: summary: "Inspect a network" operationId: "NetworkInspect" produces: - "application/json" responses: 200: description: "No error" schema: $ref: "#/definitions/Network" 404: description: "Network not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" - name: "verbose" in: "query" description: "Detailed inspect output for troubleshooting" type: "boolean" default: false - name: "scope" in: "query" description: "Filter the network by scope (swarm, global, or local)" type: "string" tags: ["Network"] delete: summary: "Remove a network" operationId: "NetworkDelete" responses: 204: description: "No error" 403: description: "operation not supported for pre-defined networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such network" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" tags: ["Network"] /networks/create: post: summary: "Create a network" operationId: "NetworkCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "No error" schema: type: "object" title: "NetworkCreateResponse" properties: Id: description: "The ID of the created network." type: "string" Warning: type: "string" example: Id: "22be93d5babb089c5aab8dbc369042fad48ff791584ca2da2100db837a1c7c30" Warning: "" 403: description: "operation not supported for pre-defined networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "plugin not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "networkConfig" in: "body" description: "Network configuration" required: true schema: type: "object" required: ["Name"] properties: Name: description: "The network's name." type: "string" CheckDuplicate: description: | Check for networks with duplicate names. Since Network is primarily keyed based on a random ID and not on the name, and network name is strictly a user-friendly alias to the network which is uniquely identified using ID, there is no guaranteed way to check for duplicates. CheckDuplicate is there to provide a best effort checking of any networks which has the same name but it is not guaranteed to catch all name collisions. type: "boolean" Driver: description: "Name of the network driver plugin to use." type: "string" default: "bridge" Internal: description: "Restrict external access to the network." type: "boolean" Attachable: description: | Globally scoped network is manually attachable by regular containers from workers in swarm mode. type: "boolean" Ingress: description: | Ingress network is the network which provides the routing-mesh in swarm mode. type: "boolean" IPAM: description: "Optional custom IP scheme for the network." $ref: "#/definitions/IPAM" EnableIPv6: description: "Enable IPv6 on the network." type: "boolean" Options: description: "Network specific options to be used by the drivers." type: "object" additionalProperties: type: "string" Labels: description: "User-defined key/value metadata." 
type: "object" additionalProperties: type: "string" example: Name: "isolated_nw" CheckDuplicate: false Driver: "bridge" EnableIPv6: true IPAM: Driver: "default" Config: - Subnet: "172.20.0.0/16" IPRange: "172.20.10.0/24" Gateway: "172.20.10.11" - Subnet: "2001:db8:abcd::/64" Gateway: "2001:db8:abcd::1011" Options: foo: "bar" Internal: true Attachable: false Ingress: false Options: com.docker.network.bridge.default_bridge: "true" com.docker.network.bridge.enable_icc: "true" com.docker.network.bridge.enable_ip_masquerade: "true" com.docker.network.bridge.host_binding_ipv4: "0.0.0.0" com.docker.network.bridge.name: "docker0" com.docker.network.driver.mtu: "1500" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" tags: ["Network"] /networks/{id}/connect: post: summary: "Connect a container to a network" operationId: "NetworkConnect" consumes: - "application/json" responses: 200: description: "No error" 403: description: "Operation not supported for swarm scoped networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "Network or container not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" - name: "container" in: "body" required: true schema: type: "object" properties: Container: type: "string" description: "The ID or name of the container to connect to the network." EndpointConfig: $ref: "#/definitions/EndpointSettings" example: Container: "3613f73ba0e4" EndpointConfig: IPAMConfig: IPv4Address: "172.24.56.89" IPv6Address: "2001:db8::5689" tags: ["Network"] /networks/{id}/disconnect: post: summary: "Disconnect a container from a network" operationId: "NetworkDisconnect" consumes: - "application/json" responses: 200: description: "No error" 403: description: "Operation not supported for swarm scoped networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "Network or container not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" - name: "container" in: "body" required: true schema: type: "object" properties: Container: type: "string" description: | The ID or name of the container to disconnect from the network. Force: type: "boolean" description: | Force the container to disconnect from the network. tags: ["Network"] /networks/prune: post: summary: "Delete unused networks" produces: - "application/json" operationId: "NetworkPrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `until=<timestamp>` Prune networks created before this timestamp. The `<timestamp>` can be Unix timestamps, date formatted timestamps, or Go duration strings (e.g. `10m`, `1h30m`) computed relative to the daemon machine’s time. - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune networks with (or without, in case `label!=...` is used) the specified labels. 
type: "string" responses: 200: description: "No error" schema: type: "object" title: "NetworkPruneResponse" properties: NetworksDeleted: description: "Networks that were deleted" type: "array" items: type: "string" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Network"] /plugins: get: summary: "List plugins" operationId: "PluginList" description: "Returns information about installed plugins." produces: ["application/json"] responses: 200: description: "No error" schema: type: "array" items: $ref: "#/definitions/Plugin" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the plugin list. Available filters: - `capability=<capability name>` - `enable=<true>|<false>` tags: ["Plugin"] /plugins/privileges: get: summary: "Get plugin privileges" operationId: "GetPluginPrivileges" responses: 200: description: "no error" schema: type: "array" items: description: | Describes a permission the user has to accept upon installing the plugin. type: "object" title: "PluginPrivilegeItem" properties: Name: type: "string" Description: type: "string" Value: type: "array" items: type: "string" example: - Name: "network" Description: "" Value: - "host" - Name: "mount" Description: "" Value: - "/data" - Name: "device" Description: "" Value: - "/dev/cpu_dma_latency" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "remote" in: "query" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" tags: - "Plugin" /plugins/pull: post: summary: "Install a plugin" operationId: "PluginPull" description: | Pulls and installs a plugin. After the plugin is installed, it can be enabled using the [`POST /plugins/{name}/enable` endpoint](#operation/PostPluginsEnable). produces: - "application/json" responses: 204: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "remote" in: "query" description: | Remote reference for plugin to install. The `:latest` tag is optional, and is used as the default if omitted. required: true type: "string" - name: "name" in: "query" description: | Local name for the pulled plugin. The `:latest` tag is optional, and is used as the default if omitted. required: false type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration to use when pulling a plugin from a registry. Refer to the [authentication section](#section/Authentication) for details. type: "string" - name: "body" in: "body" schema: type: "array" items: description: | Describes a permission accepted by the user upon installing the plugin. 
type: "object" properties: Name: type: "string" Description: type: "string" Value: type: "array" items: type: "string" example: - Name: "network" Description: "" Value: - "host" - Name: "mount" Description: "" Value: - "/data" - Name: "device" Description: "" Value: - "/dev/cpu_dma_latency" tags: ["Plugin"] /plugins/{name}/json: get: summary: "Inspect a plugin" operationId: "PluginInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Plugin" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" tags: ["Plugin"] /plugins/{name}: delete: summary: "Remove a plugin" operationId: "PluginDelete" responses: 200: description: "no error" schema: $ref: "#/definitions/Plugin" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "force" in: "query" description: | Disable the plugin before removing. This may result in issues if the plugin is in use by a container. type: "boolean" default: false tags: ["Plugin"] /plugins/{name}/enable: post: summary: "Enable a plugin" operationId: "PluginEnable" responses: 200: description: "no error" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "timeout" in: "query" description: "Set the HTTP client timeout (in seconds)" type: "integer" default: 0 tags: ["Plugin"] /plugins/{name}/disable: post: summary: "Disable a plugin" operationId: "PluginDisable" responses: 200: description: "no error" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" tags: ["Plugin"] /plugins/{name}/upgrade: post: summary: "Upgrade a plugin" operationId: "PluginUpgrade" responses: 204: description: "no error" 404: description: "plugin not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "remote" in: "query" description: | Remote reference to upgrade to. The `:latest` tag is optional, and is used as the default if omitted. required: true type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration to use when pulling a plugin from a registry. Refer to the [authentication section](#section/Authentication) for details. 
type: "string" - name: "body" in: "body" schema: type: "array" items: description: | Describes a permission accepted by the user upon installing the plugin. type: "object" properties: Name: type: "string" Description: type: "string" Value: type: "array" items: type: "string" example: - Name: "network" Description: "" Value: - "host" - Name: "mount" Description: "" Value: - "/data" - Name: "device" Description: "" Value: - "/dev/cpu_dma_latency" tags: ["Plugin"] /plugins/create: post: summary: "Create a plugin" operationId: "PluginCreate" consumes: - "application/x-tar" responses: 204: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "query" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "tarContext" in: "body" description: "Path to tar containing plugin rootfs and manifest" schema: type: "string" format: "binary" tags: ["Plugin"] /plugins/{name}/push: post: summary: "Push a plugin" operationId: "PluginPush" description: | Push a plugin to the registry. parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" responses: 200: description: "no error" 404: description: "plugin not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Plugin"] /plugins/{name}/set: post: summary: "Configure a plugin" operationId: "PluginSet" consumes: - "application/json" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "body" in: "body" schema: type: "array" items: type: "string" example: ["DEBUG=1"] responses: 204: description: "No error" 404: description: "Plugin not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Plugin"] /nodes: get: summary: "List nodes" operationId: "NodeList" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Node" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" description: | Filters to process on the nodes list, encoded as JSON (a `map[string][]string`). 
Available filters: - `id=<node id>` - `label=<engine label>` - `membership=`(`accepted`|`pending`)` - `name=<node name>` - `node.label=<node label>` - `role=`(`manager`|`worker`)` type: "string" tags: ["Node"] /nodes/{id}: get: summary: "Inspect a node" operationId: "NodeInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Node" 404: description: "no such node" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the node" type: "string" required: true tags: ["Node"] delete: summary: "Delete a node" operationId: "NodeDelete" responses: 200: description: "no error" 404: description: "no such node" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the node" type: "string" required: true - name: "force" in: "query" description: "Force remove a node from the swarm" default: false type: "boolean" tags: ["Node"] /nodes/{id}/update: post: summary: "Update a node" operationId: "NodeUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such node" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID of the node" type: "string" required: true - name: "body" in: "body" schema: $ref: "#/definitions/NodeSpec" - name: "version" in: "query" description: | The version number of the node object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true tags: ["Node"] /swarm: get: summary: "Inspect swarm" operationId: "SwarmInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Swarm" 404: description: "no such swarm" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" tags: ["Swarm"] /swarm/init: post: summary: "Initialize a new swarm" operationId: "SwarmInit" produces: - "application/json" - "text/plain" responses: 200: description: "no error" schema: description: "The node ID" type: "string" example: "7v2t30z9blmxuhnyo6s4cpenp" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is already part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: type: "object" properties: ListenAddr: description: | Listen address used for inter-manager communication, as well as determining the networking interface used for the VXLAN Tunnel Endpoint (VTEP). This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the default swarm listening port is used. 
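Initializing a swarm is a single POST whose success response body is just the new node's ID. A minimal sketch follows; the addresses are examples and `localhost:2375` remains an assumption.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	body, _ := json.Marshal(map[string]interface{}{
		"ListenAddr":    "0.0.0.0:2377",
		"AdvertiseAddr": "192.168.1.1:2377", // example address of this node
	})
	resp, err := http.Post("http://localhost:2375/v1.42/swarm/init",
		"application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// On success the body is the node ID, returned as a JSON string.
	var nodeID string
	json.NewDecoder(resp.Body).Decode(&nodeID)
	fmt.Println(resp.StatusCode, nodeID)
}
```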
type: "string" AdvertiseAddr: description: | Externally reachable address advertised to other nodes. This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the port number from the listen address is used. If `AdvertiseAddr` is not specified, it will be automatically detected when possible. type: "string" DataPathAddr: description: | Address or interface to use for data path traffic (format: `<ip|interface>`), for example, `192.168.1.1`, or an interface, like `eth0`. If `DataPathAddr` is unspecified, the same address as `AdvertiseAddr` is used. The `DataPathAddr` specifies the address that global scope network drivers will publish towards other nodes in order to reach the containers running on this node. Using this parameter it is possible to separate the container data traffic from the management traffic of the cluster. type: "string" DataPathPort: description: | DataPathPort specifies the data path port number for data traffic. Acceptable port range is 1024 to 49151. if no port is set or is set to 0, default port 4789 will be used. type: "integer" format: "uint32" DefaultAddrPool: description: | Default Address Pool specifies default subnet pools for global scope networks. type: "array" items: type: "string" example: ["10.10.0.0/16", "20.20.0.0/16"] ForceNewCluster: description: "Force creation of a new swarm." type: "boolean" SubnetSize: description: | SubnetSize specifies the subnet size of the networks created from the default subnet pool. type: "integer" format: "uint32" Spec: $ref: "#/definitions/SwarmSpec" example: ListenAddr: "0.0.0.0:2377" AdvertiseAddr: "192.168.1.1:2377" DataPathPort: 4789 DefaultAddrPool: ["10.10.0.0/8", "20.20.0.0/8"] SubnetSize: 24 ForceNewCluster: false Spec: Orchestration: {} Raft: {} Dispatcher: {} CAConfig: {} EncryptionConfig: AutoLockManagers: false tags: ["Swarm"] /swarm/join: post: summary: "Join an existing swarm" operationId: "SwarmJoin" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is already part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: type: "object" properties: ListenAddr: description: | Listen address used for inter-manager communication if the node gets promoted to manager, as well as determining the networking interface used for the VXLAN Tunnel Endpoint (VTEP). type: "string" AdvertiseAddr: description: | Externally reachable address advertised to other nodes. This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the port number from the listen address is used. If `AdvertiseAddr` is not specified, it will be automatically detected when possible. type: "string" DataPathAddr: description: | Address or interface to use for data path traffic (format: `<ip|interface>`), for example, `192.168.1.1`, or an interface, like `eth0`. If `DataPathAddr` is unspecified, the same address as `AdvertiseAddr` is used. The `DataPathAddr` specifies the address that global scope network drivers will publish towards other nodes in order to reach the containers running on this node. 
Using this parameter it is possible to separate the container data traffic from the management traffic of the cluster. type: "string" RemoteAddrs: description: | Addresses of manager nodes already participating in the swarm. type: "array" items: type: "string" JoinToken: description: "Secret token for joining this swarm." type: "string" example: ListenAddr: "0.0.0.0:2377" AdvertiseAddr: "192.168.1.1:2377" RemoteAddrs: - "node1:2377" JoinToken: "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-7p73s1dx5in4tatdymyhg9hu2" tags: ["Swarm"] /swarm/leave: post: summary: "Leave a swarm" operationId: "SwarmLeave" responses: 200: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "force" description: | Force leave swarm, even if this is the last manager or that it will break the cluster. in: "query" type: "boolean" default: false tags: ["Swarm"] /swarm/update: post: summary: "Update a swarm" operationId: "SwarmUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: $ref: "#/definitions/SwarmSpec" - name: "version" in: "query" description: | The version number of the swarm object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true - name: "rotateWorkerToken" in: "query" description: "Rotate the worker join token." type: "boolean" default: false - name: "rotateManagerToken" in: "query" description: "Rotate the manager join token." type: "boolean" default: false - name: "rotateManagerUnlockKey" in: "query" description: "Rotate the manager unlock key." type: "boolean" default: false tags: ["Swarm"] /swarm/unlockkey: get: summary: "Get the unlock key" operationId: "SwarmUnlockkey" consumes: - "application/json" responses: 200: description: "no error" schema: type: "object" title: "UnlockKeyResponse" properties: UnlockKey: description: "The swarm's unlock key." type: "string" example: UnlockKey: "SWMKEY-1-7c37Cc8654o6p38HnroywCi19pllOnGtbdZEgtKxZu8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" tags: ["Swarm"] /swarm/unlock: post: summary: "Unlock a locked manager" operationId: "SwarmUnlock" consumes: - "application/json" produces: - "application/json" parameters: - name: "body" in: "body" required: true schema: type: "object" properties: UnlockKey: description: "The swarm's unlock key." 
type: "string" example: UnlockKey: "SWMKEY-1-7c37Cc8654o6p38HnroywCi19pllOnGtbdZEgtKxZu8" responses: 200: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" tags: ["Swarm"] /services: get: summary: "List services" operationId: "ServiceList" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Service" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the services list. Available filters: - `id=<service id>` - `label=<service label>` - `mode=["replicated"|"global"]` - `name=<service name>` - name: "status" in: "query" type: "boolean" description: | Include service status, with count of running and desired tasks. tags: ["Service"] /services/create: post: summary: "Create a service" operationId: "ServiceCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: type: "object" title: "ServiceCreateResponse" properties: ID: description: "The ID of the created service." type: "string" Warning: description: "Optional warning message" type: "string" example: ID: "ak7w3gjqoa3kuz8xcpnyy0pvl" Warning: "unable to pin image doesnotexist:latest to digest: image library/doesnotexist:latest not found" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 403: description: "network is not eligible for services" schema: $ref: "#/definitions/ErrorResponse" 409: description: "name conflicts with an existing service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: allOf: - $ref: "#/definitions/ServiceSpec" - type: "object" example: Name: "web" TaskTemplate: ContainerSpec: Image: "nginx:alpine" Mounts: - ReadOnly: true Source: "web-data" Target: "/usr/share/nginx/html" Type: "volume" VolumeOptions: DriverConfig: {} Labels: com.example.something: "something-value" Hosts: ["10.10.10.10 host1", "ABCD:EF01:2345:6789:ABCD:EF01:2345:6789 host2"] User: "33" DNSConfig: Nameservers: ["8.8.8.8"] Search: ["example.org"] Options: ["timeout:3"] Secrets: - File: Name: "www.example.org.key" UID: "33" GID: "33" Mode: 384 SecretID: "fpjqlhnwb19zds35k8wn80lq9" SecretName: "example_org_domain_key" LogDriver: Name: "json-file" Options: max-file: "3" max-size: "10M" Placement: {} Resources: Limits: MemoryBytes: 104857600 Reservations: {} RestartPolicy: Condition: "on-failure" Delay: 10000000000 MaxAttempts: 10 Mode: Replicated: Replicas: 4 UpdateConfig: Parallelism: 2 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 RollbackConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 EndpointSpec: Ports: - Protocol: "tcp" PublishedPort: 8080 TargetPort: 80 Labels: foo: "bar" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration for pulling from private registries. Refer to the [authentication section](#section/Authentication) for details. 
type: "string" tags: ["Service"] /services/{id}: get: summary: "Inspect a service" operationId: "ServiceInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Service" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID or name of service." required: true type: "string" - name: "insertDefaults" in: "query" description: "Fill empty fields with default values." type: "boolean" default: false tags: ["Service"] delete: summary: "Delete a service" operationId: "ServiceDelete" responses: 200: description: "no error" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID or name of service." required: true type: "string" tags: ["Service"] /services/{id}/update: post: summary: "Update a service" operationId: "ServiceUpdate" consumes: ["application/json"] produces: ["application/json"] responses: 200: description: "no error" schema: $ref: "#/definitions/ServiceUpdateResponse" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID or name of service." required: true type: "string" - name: "body" in: "body" required: true schema: allOf: - $ref: "#/definitions/ServiceSpec" - type: "object" example: Name: "top" TaskTemplate: ContainerSpec: Image: "busybox" Args: - "top" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ForceUpdate: 0 Mode: Replicated: Replicas: 1 UpdateConfig: Parallelism: 2 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 RollbackConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 EndpointSpec: Mode: "vip" - name: "version" in: "query" description: | The version number of the service object being updated. This is required to avoid conflicting writes. This version number should be the value as currently set on the service *before* the update. You can find the current version by calling `GET /services/{id}` required: true type: "integer" - name: "registryAuthFrom" in: "query" description: | If the `X-Registry-Auth` header is not specified, this parameter indicates where to find registry authorization credentials. type: "string" enum: ["spec", "previous-spec"] default: "spec" - name: "rollback" in: "query" description: | Set to this parameter to `previous` to cause a server-side rollback to the previous service spec. The supplied spec will be ignored in this case. type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration for pulling from private registries. Refer to the [authentication section](#section/Authentication) for details. 
type: "string" tags: ["Service"] /services/{id}/logs: get: summary: "Get service logs" description: | Get `stdout` and `stderr` logs from a service. See also [`/containers/{id}/logs`](#operation/ContainerLogs). **Note**: This endpoint works only for services with the `local`, `json-file` or `journald` logging drivers. operationId: "ServiceLogs" responses: 200: description: "logs returned as a stream in response body" schema: type: "string" format: "binary" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such service: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the service" type: "string" - name: "details" in: "query" description: "Show service context and extra details provided to logs." type: "boolean" default: false - name: "follow" in: "query" description: "Keep connection after returning logs." type: "boolean" default: false - name: "stdout" in: "query" description: "Return logs from `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Return logs from `stderr`" type: "boolean" default: false - name: "since" in: "query" description: "Only return logs since this time, as a UNIX timestamp" type: "integer" default: 0 - name: "timestamps" in: "query" description: "Add timestamps to every log line" type: "boolean" default: false - name: "tail" in: "query" description: | Only return this number of log lines from the end of the logs. Specify as an integer or `all` to output all log lines. type: "string" default: "all" tags: ["Service"] /tasks: get: summary: "List tasks" operationId: "TaskList" produces: - "application/json" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Task" example: - ID: "0kzzo1i0y4jz6027t0k7aezc7" Version: Index: 71 CreatedAt: "2016-06-07T21:07:31.171892745Z" UpdatedAt: "2016-06-07T21:07:31.376370513Z" Spec: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ServiceID: "9mnpnzenvg8p8tdbtq4wvbkcz" Slot: 1 NodeID: "60gvrl6tm78dmak4yl7srz94v" Status: Timestamp: "2016-06-07T21:07:31.290032978Z" State: "running" Message: "started" ContainerStatus: ContainerID: "e5d62702a1b48d01c3e02ca1e0212a250801fa8d67caca0b6f35919ebc12f035" PID: 677 DesiredState: "running" NetworksAttachments: - Network: ID: "4qvuz4ko70xaltuqbt8956gd1" Version: Index: 18 CreatedAt: "2016-06-07T20:31:11.912919752Z" UpdatedAt: "2016-06-07T21:07:29.955277358Z" Spec: Name: "ingress" Labels: com.docker.swarm.internal: "true" DriverConfiguration: {} IPAMOptions: Driver: {} Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" DriverState: Name: "overlay" Options: com.docker.network.driver.overlay.vxlanid_list: "256" IPAMOptions: Driver: Name: "default" Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" Addresses: - "10.255.0.10/16" - ID: "1yljwbmlr8er2waf8orvqpwms" Version: Index: 30 CreatedAt: "2016-06-07T21:07:30.019104782Z" UpdatedAt: "2016-06-07T21:07:30.231958098Z" Name: "hopeful_cori" Spec: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ServiceID: "9mnpnzenvg8p8tdbtq4wvbkcz" Slot: 1 NodeID: "60gvrl6tm78dmak4yl7srz94v" Status: Timestamp: "2016-06-07T21:07:30.202183143Z" State: 
"shutdown" Message: "shutdown" ContainerStatus: ContainerID: "1cf8d63d18e79668b0004a4be4c6ee58cddfad2dae29506d8781581d0688a213" DesiredState: "shutdown" NetworksAttachments: - Network: ID: "4qvuz4ko70xaltuqbt8956gd1" Version: Index: 18 CreatedAt: "2016-06-07T20:31:11.912919752Z" UpdatedAt: "2016-06-07T21:07:29.955277358Z" Spec: Name: "ingress" Labels: com.docker.swarm.internal: "true" DriverConfiguration: {} IPAMOptions: Driver: {} Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" DriverState: Name: "overlay" Options: com.docker.network.driver.overlay.vxlanid_list: "256" IPAMOptions: Driver: Name: "default" Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" Addresses: - "10.255.0.5/16" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the tasks list. Available filters: - `desired-state=(running | shutdown | accepted)` - `id=<task id>` - `label=key` or `label="key=value"` - `name=<task name>` - `node=<node id or name>` - `service=<service name>` tags: ["Task"] /tasks/{id}: get: summary: "Inspect a task" operationId: "TaskInspect" produces: - "application/json" responses: 200: description: "no error" schema: $ref: "#/definitions/Task" 404: description: "no such task" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID of the task" required: true type: "string" tags: ["Task"] /tasks/{id}/logs: get: summary: "Get task logs" description: | Get `stdout` and `stderr` logs from a task. See also [`/containers/{id}/logs`](#operation/ContainerLogs). **Note**: This endpoint works only for services with the `local`, `json-file` or `journald` logging drivers. operationId: "TaskLogs" responses: 200: description: "logs returned as a stream in response body" schema: type: "string" format: "binary" 404: description: "no such task" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such task: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID of the task" type: "string" - name: "details" in: "query" description: "Show task context and extra details provided to logs." type: "boolean" default: false - name: "follow" in: "query" description: "Keep connection after returning logs." type: "boolean" default: false - name: "stdout" in: "query" description: "Return logs from `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Return logs from `stderr`" type: "boolean" default: false - name: "since" in: "query" description: "Only return logs since this time, as a UNIX timestamp" type: "integer" default: 0 - name: "timestamps" in: "query" description: "Add timestamps to every log line" type: "boolean" default: false - name: "tail" in: "query" description: | Only return this number of log lines from the end of the logs. Specify as an integer or `all` to output all log lines. 
type: "string" default: "all" tags: ["Task"] /secrets: get: summary: "List secrets" operationId: "SecretList" produces: - "application/json" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Secret" example: - ID: "blt1owaxmitz71s9v5zh81zun" Version: Index: 85 CreatedAt: "2017-07-20T13:55:28.678958722Z" UpdatedAt: "2017-07-20T13:55:28.678958722Z" Spec: Name: "mysql-passwd" Labels: some.label: "some.value" Driver: Name: "secret-bucket" Options: OptionA: "value for driver option A" OptionB: "value for driver option B" - ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "app-dev.crt" Labels: foo: "bar" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the secrets list. Available filters: - `id=<secret id>` - `label=<key> or label=<key>=value` - `name=<secret name>` - `names=<secret name>` tags: ["Secret"] /secrets/create: post: summary: "Create a secret" operationId: "SecretCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 409: description: "name conflicts with an existing object" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" schema: allOf: - $ref: "#/definitions/SecretSpec" - type: "object" example: Name: "app-key.crt" Labels: foo: "bar" Data: "VEhJUyBJUyBOT1QgQSBSRUFMIENFUlRJRklDQVRFCg==" Driver: Name: "secret-bucket" Options: OptionA: "value for driver option A" OptionB: "value for driver option B" tags: ["Secret"] /secrets/{id}: get: summary: "Inspect a secret" operationId: "SecretInspect" produces: - "application/json" responses: 200: description: "no error" schema: $ref: "#/definitions/Secret" examples: application/json: ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "app-dev.crt" Labels: foo: "bar" Driver: Name: "secret-bucket" Options: OptionA: "value for driver option A" OptionB: "value for driver option B" 404: description: "secret not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the secret" tags: ["Secret"] delete: summary: "Delete a secret" operationId: "SecretDelete" produces: - "application/json" responses: 204: description: "no error" 404: description: "secret not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the secret" tags: ["Secret"] /secrets/{id}/update: post: summary: "Update a Secret" operationId: "SecretUpdate" responses: 200: description: "no error" 400: 
description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such secret" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the secret" type: "string" required: true - name: "body" in: "body" schema: $ref: "#/definitions/SecretSpec" description: | The spec of the secret to update. Currently, only the Labels field can be updated. All other fields must remain unchanged from the [SecretInspect endpoint](#operation/SecretInspect) response values. - name: "version" in: "query" description: | The version number of the secret object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true tags: ["Secret"] /configs: get: summary: "List configs" operationId: "ConfigList" produces: - "application/json" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Config" example: - ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "server.conf" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the configs list. Available filters: - `id=<config id>` - `label=<key> or label=<key>=value` - `name=<config name>` - `names=<config name>` tags: ["Config"] /configs/create: post: summary: "Create a config" operationId: "ConfigCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 409: description: "name conflicts with an existing object" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" schema: allOf: - $ref: "#/definitions/ConfigSpec" - type: "object" example: Name: "server.conf" Labels: foo: "bar" Data: "VEhJUyBJUyBOT1QgQSBSRUFMIENFUlRJRklDQVRFCg==" tags: ["Config"] /configs/{id}: get: summary: "Inspect a config" operationId: "ConfigInspect" produces: - "application/json" responses: 200: description: "no error" schema: $ref: "#/definitions/Config" examples: application/json: ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "app-dev.crt" 404: description: "config not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the config" tags: ["Config"] delete: summary: "Delete a config" operationId: "ConfigDelete" produces: - "application/json" responses: 204: description: "no error" 404: description: "config not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part 
of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the config" tags: ["Config"] /configs/{id}/update: post: summary: "Update a Config" operationId: "ConfigUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such config" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the config" type: "string" required: true - name: "body" in: "body" schema: $ref: "#/definitions/ConfigSpec" description: | The spec of the config to update. Currently, only the Labels field can be updated. All other fields must remain unchanged from the [ConfigInspect endpoint](#operation/ConfigInspect) response values. - name: "version" in: "query" description: | The version number of the config object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true tags: ["Config"] /distribution/{name}/json: get: summary: "Get image information from the registry" description: | Return image digest and platform information by contacting the registry. operationId: "DistributionInspect" produces: - "application/json" responses: 200: description: "descriptor and platform information" schema: type: "object" x-go-name: DistributionInspect title: "DistributionInspectResponse" required: [Descriptor, Platforms] properties: Descriptor: type: "object" description: | A descriptor struct containing digest, media type, and size. properties: MediaType: type: "string" Size: type: "integer" format: "int64" Digest: type: "string" URLs: type: "array" items: type: "string" Platforms: type: "array" description: | An array containing all platforms supported by the image. items: type: "object" properties: Architecture: type: "string" OS: type: "string" OSVersion: type: "string" OSFeatures: type: "array" items: type: "string" Variant: type: "string" Features: type: "array" items: type: "string" examples: application/json: Descriptor: MediaType: "application/vnd.docker.distribution.manifest.v2+json" Digest: "sha256:c0537ff6a5218ef531ece93d4984efc99bbf3f7497c0a7726c88e2bb7584dc96" Size: 3987495 URLs: - "" Platforms: - Architecture: "amd64" OS: "linux" OSVersion: "" OSFeatures: - "" Variant: "" Features: - "" 401: description: "Failed authentication or no image found" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such image: someimage (tag: latest)" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or id" type: "string" required: true tags: ["Distribution"] /session: post: summary: "Initialize interactive session" description: | Start a new interactive session with a server. Session allows server to call back to the client for advanced capabilities. ### Hijacking This endpoint hijacks the HTTP connection to HTTP2 transport that allows the client to expose gPRC services on that connection. 
For example, the client sends this request to upgrade the connection: ``` POST /session HTTP/1.1 Upgrade: h2c Connection: Upgrade ``` The Docker daemon responds with a `101 UPGRADED` response followed by the raw stream: ``` HTTP/1.1 101 UPGRADED Connection: Upgrade Upgrade: h2c ``` operationId: "Session" produces: - "application/vnd.docker.raw-stream" responses: 101: description: "no error, hijacking successful" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Session"]
# A Swagger 2.0 (a.k.a. OpenAPI) definition of the Engine API. # # This is used for generating API documentation and the types used by the # client/server. See api/README.md for more information. # # Some style notes: # - This file is used by ReDoc, which allows GitHub Flavored Markdown in # descriptions. # - There is no maximum line length, for ease of editing and pretty diffs. # - operationIds are in the format "NounVerb", with a singular noun. swagger: "2.0" schemes: - "http" - "https" produces: - "application/json" - "text/plain" consumes: - "application/json" - "text/plain" basePath: "/v1.42" info: title: "Docker Engine API" version: "1.42" x-logo: url: "https://docs.docker.com/images/logo-docker-main.png" description: | The Engine API is an HTTP API served by Docker Engine. It is the API the Docker client uses to communicate with the Engine, so everything the Docker client can do can be done with the API. Most of the client's commands map directly to API endpoints (e.g. `docker ps` is `GET /containers/json`). The notable exception is running containers, which consists of several API calls. # Errors The API uses standard HTTP status codes to indicate the success or failure of the API call. The body of the response will be JSON in the following format: ``` { "message": "page not found" } ``` # Versioning The API is usually changed in each release, so API calls are versioned to ensure that clients don't break. To lock to a specific version of the API, you prefix the URL with its version, for example, call `/v1.30/info` to use the v1.30 version of the `/info` endpoint. If the API version specified in the URL is not supported by the daemon, a HTTP `400 Bad Request` error message is returned. If you omit the version-prefix, the current version of the API (v1.42) is used. For example, calling `/info` is the same as calling `/v1.42/info`. Using the API without a version-prefix is deprecated and will be removed in a future release. Engine releases in the near future should support this version of the API, so your client will continue to work even if it is talking to a newer Engine. The API uses an open schema model, which means server may add extra properties to responses. Likewise, the server will ignore any extra query parameters and request body properties. When you write clients, you need to ignore additional properties in responses to ensure they do not break when talking to newer daemons. # Authentication Authentication for registries is handled client side. The client has to send authentication details to various endpoints that need to communicate with registries, such as `POST /images/(name)/push`. These are sent as `X-Registry-Auth` header as a [base64url encoded](https://tools.ietf.org/html/rfc4648#section-5) (JSON) string with the following structure: ``` { "username": "string", "password": "string", "email": "string", "serveraddress": "string" } ``` The `serveraddress` is a domain/IP without a protocol. Throughout this structure, double quotes are required. If you have already got an identity token from the [`/auth` endpoint](#operation/SystemAuth), you can just pass this instead of credentials: ``` { "identitytoken": "9cbaf023786cd7..." } ``` # The tags on paths define the menu sections in the ReDoc documentation, so # the usage of tags must make sense for that: # - They should be singular, not plural. # - There should not be too many tags, or the menu becomes unwieldy. 
For # example, it is preferable to add a path to the "System" tag instead of # creating a tag with a single path in it. # - The order of tags in this list defines the order in the menu. tags: # Primary objects - name: "Container" x-displayName: "Containers" description: | Create and manage containers. - name: "Image" x-displayName: "Images" - name: "Network" x-displayName: "Networks" description: | Networks are user-defined networks that containers can be attached to. See the [networking documentation](https://docs.docker.com/network/) for more information. - name: "Volume" x-displayName: "Volumes" description: | Create and manage persistent storage that can be attached to containers. - name: "Exec" x-displayName: "Exec" description: | Run new commands inside running containers. Refer to the [command-line reference](https://docs.docker.com/engine/reference/commandline/exec/) for more information. To exec a command in a container, you first need to create an exec instance, then start it. These two API endpoints are wrapped up in a single command-line command, `docker exec`. # Swarm things - name: "Swarm" x-displayName: "Swarm" description: | Engines can be clustered together in a swarm. Refer to the [swarm mode documentation](https://docs.docker.com/engine/swarm/) for more information. - name: "Node" x-displayName: "Nodes" description: | Nodes are instances of the Engine participating in a swarm. Swarm mode must be enabled for these endpoints to work. - name: "Service" x-displayName: "Services" description: | Services are the definitions of tasks to run on a swarm. Swarm mode must be enabled for these endpoints to work. - name: "Task" x-displayName: "Tasks" description: | A task is a container running on a swarm. It is the atomic scheduling unit of swarm. Swarm mode must be enabled for these endpoints to work. - name: "Secret" x-displayName: "Secrets" description: | Secrets are sensitive data that can be used by services. Swarm mode must be enabled for these endpoints to work. - name: "Config" x-displayName: "Configs" description: | Configs are application configurations that can be used by services. Swarm mode must be enabled for these endpoints to work. 
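# A hedged sketch of the two-step exec flow described under the "Exec" tag
# above. The container name, command, and unix-socket invocation are
# placeholders/assumptions about a local daemon, not part of the spec itself.
#
#   # 1. Create an exec instance in a running container
#   curl --unix-socket /var/run/docker.sock \
#     -X POST -H "Content-Type: application/json" \
#     -d '{"Cmd": ["ls", "/"], "AttachStdout": true, "AttachStderr": true}' \
#     http://localhost/containers/my-container/exec
#
#   # 2. Start it, using the Id returned by the create call
#   curl --unix-socket /var/run/docker.sock \
#     -X POST -H "Content-Type: application/json" \
#     -d '{"Detach": false, "Tty": false}' \
#     http://localhost/exec/<id-from-step-1>/start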
# System things - name: "Plugin" x-displayName: "Plugins" - name: "System" x-displayName: "System" definitions: Port: type: "object" description: "An open port on a container" required: [PrivatePort, Type] properties: IP: type: "string" format: "ip-address" description: "Host IP address that the container's port is mapped to" PrivatePort: type: "integer" format: "uint16" x-nullable: false description: "Port on the container" PublicPort: type: "integer" format: "uint16" description: "Port exposed on the host" Type: type: "string" x-nullable: false enum: ["tcp", "udp", "sctp"] example: PrivatePort: 8080 PublicPort: 80 Type: "tcp" MountPoint: type: "object" description: "A mount point inside a container" properties: Type: type: "string" Name: type: "string" Source: type: "string" Destination: type: "string" Driver: type: "string" Mode: type: "string" RW: type: "boolean" Propagation: type: "string" DeviceMapping: type: "object" description: "A device mapping between the host and container" properties: PathOnHost: type: "string" PathInContainer: type: "string" CgroupPermissions: type: "string" example: PathOnHost: "/dev/deviceName" PathInContainer: "/dev/deviceName" CgroupPermissions: "mrw" DeviceRequest: type: "object" description: "A request for devices to be sent to device drivers" properties: Driver: type: "string" example: "nvidia" Count: type: "integer" example: -1 DeviceIDs: type: "array" items: type: "string" example: - "0" - "1" - "GPU-fef8089b-4820-abfc-e83e-94318197576e" Capabilities: description: | A list of capabilities; an OR list of AND lists of capabilities. type: "array" items: type: "array" items: type: "string" example: # gpu AND nvidia AND compute - ["gpu", "nvidia", "compute"] Options: description: | Driver-specific options, specified as a key/value pairs. These options are passed directly to the driver. type: "object" additionalProperties: type: "string" ThrottleDevice: type: "object" properties: Path: description: "Device path" type: "string" Rate: description: "Rate" type: "integer" format: "int64" minimum: 0 Mount: type: "object" properties: Target: description: "Container path." type: "string" Source: description: "Mount source (e.g. a volume name, a host path)." type: "string" Type: description: | The mount type. Available types: - `bind` Mounts a file or directory from the host into the container. Must exist prior to creating the container. - `volume` Creates a volume with the given name and options (or uses a pre-existing volume with the same name and options). These are **not** removed when the container is removed. - `tmpfs` Create a tmpfs with the given options. The mount source cannot be specified for tmpfs. - `npipe` Mounts a named pipe from the host into the container. Must exist prior to creating the container. type: "string" enum: - "bind" - "volume" - "tmpfs" - "npipe" ReadOnly: description: "Whether the mount should be read-only." type: "boolean" Consistency: description: "The consistency requirement for the mount: `default`, `consistent`, `cached`, or `delegated`." type: "string" BindOptions: description: "Optional configuration for the `bind` type." type: "object" properties: Propagation: description: "A propagation mode with the value `[r]private`, `[r]shared`, or `[r]slave`." type: "string" enum: - "private" - "rprivate" - "shared" - "rshared" - "slave" - "rslave" NonRecursive: description: "Disable recursive bind mount." type: "boolean" default: false VolumeOptions: description: "Optional configuration for the `volume` type." 
type: "object" properties: NoCopy: description: "Populate volume with data from the target." type: "boolean" default: false Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" DriverConfig: description: "Map of driver specific options" type: "object" properties: Name: description: "Name of the driver to use to create the volume." type: "string" Options: description: "key/value map of driver specific options." type: "object" additionalProperties: type: "string" TmpfsOptions: description: "Optional configuration for the `tmpfs` type." type: "object" properties: SizeBytes: description: "The size for the tmpfs mount in bytes." type: "integer" format: "int64" Mode: description: "The permission mode for the tmpfs mount in an integer." type: "integer" RestartPolicy: description: | The behavior to apply when the container exits. The default is not to restart. An ever increasing delay (double the previous delay, starting at 100ms) is added before each restart to prevent flooding the server. type: "object" properties: Name: type: "string" description: | - Empty string means not to restart - `always` Always restart - `unless-stopped` Restart always except when the user has manually stopped the container - `on-failure` Restart only when the container exit code is non-zero enum: - "" - "always" - "unless-stopped" - "on-failure" MaximumRetryCount: type: "integer" description: | If `on-failure` is used, the number of times to retry before giving up. Resources: description: "A container's resources (cgroups config, ulimits, etc)" type: "object" properties: # Applicable to all platforms CpuShares: description: | An integer value representing this container's relative CPU weight versus other containers. type: "integer" Memory: description: "Memory limit in bytes." type: "integer" format: "int64" default: 0 # Applicable to UNIX platforms CgroupParent: description: | Path to `cgroups` under which the container's `cgroup` is created. If the path is not absolute, the path is considered to be relative to the `cgroups` path of the init process. Cgroups are created if they do not already exist. type: "string" BlkioWeight: description: "Block IO weight (relative weight)." type: "integer" minimum: 0 maximum: 1000 BlkioWeightDevice: description: | Block IO weight (relative device weight) in the form: ``` [{"Path": "device_path", "Weight": weight}] ``` type: "array" items: type: "object" properties: Path: type: "string" Weight: type: "integer" minimum: 0 BlkioDeviceReadBps: description: | Limit read rate (bytes per second) from a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" BlkioDeviceWriteBps: description: | Limit write rate (bytes per second) to a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" BlkioDeviceReadIOps: description: | Limit read rate (IO per second) from a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" BlkioDeviceWriteIOps: description: | Limit write rate (IO per second) to a device, in the form: ``` [{"Path": "device_path", "Rate": rate}] ``` type: "array" items: $ref: "#/definitions/ThrottleDevice" CpuPeriod: description: "The length of a CPU period in microseconds." type: "integer" format: "int64" CpuQuota: description: | Microseconds of CPU time that the container can get in a CPU period. 
type: "integer" format: "int64" CpuRealtimePeriod: description: | The length of a CPU real-time period in microseconds. Set to 0 to allocate no time allocated to real-time tasks. type: "integer" format: "int64" CpuRealtimeRuntime: description: | The length of a CPU real-time runtime in microseconds. Set to 0 to allocate no time allocated to real-time tasks. type: "integer" format: "int64" CpusetCpus: description: | CPUs in which to allow execution (e.g., `0-3`, `0,1`). type: "string" example: "0-3" CpusetMems: description: | Memory nodes (MEMs) in which to allow execution (0-3, 0,1). Only effective on NUMA systems. type: "string" Devices: description: "A list of devices to add to the container." type: "array" items: $ref: "#/definitions/DeviceMapping" DeviceCgroupRules: description: "a list of cgroup rules to apply to the container" type: "array" items: type: "string" example: "c 13:* rwm" DeviceRequests: description: | A list of requests for devices to be sent to device drivers. type: "array" items: $ref: "#/definitions/DeviceRequest" KernelMemory: description: | Kernel memory limit in bytes. <p><br /></p> > **Deprecated**: This field is deprecated as the kernel 5.4 deprecated > `kmem.limit_in_bytes`. type: "integer" format: "int64" example: 209715200 KernelMemoryTCP: description: "Hard limit for kernel TCP buffer memory (in bytes)." type: "integer" format: "int64" MemoryReservation: description: "Memory soft limit in bytes." type: "integer" format: "int64" MemorySwap: description: | Total memory limit (memory + swap). Set as `-1` to enable unlimited swap. type: "integer" format: "int64" MemorySwappiness: description: | Tune a container's memory swappiness behavior. Accepts an integer between 0 and 100. type: "integer" format: "int64" minimum: 0 maximum: 100 NanoCpus: description: "CPU quota in units of 10<sup>-9</sup> CPUs." type: "integer" format: "int64" OomKillDisable: description: "Disable OOM Killer for the container." type: "boolean" Init: description: | Run an init inside the container that forwards signals and reaps processes. This field is omitted if empty, and the default (as configured on the daemon) is used. type: "boolean" x-nullable: true PidsLimit: description: | Tune a container's PIDs limit. Set `0` or `-1` for unlimited, or `null` to not change. type: "integer" format: "int64" x-nullable: true Ulimits: description: | A list of resource limits to set in the container. For example: ``` {"Name": "nofile", "Soft": 1024, "Hard": 2048} ``` type: "array" items: type: "object" properties: Name: description: "Name of ulimit" type: "string" Soft: description: "Soft limit" type: "integer" Hard: description: "Hard limit" type: "integer" # Applicable to Windows CpuCount: description: | The number of usable CPUs (Windows only). On Windows Server containers, the processor resource controls are mutually exclusive. The order of precedence is `CPUCount` first, then `CPUShares`, and `CPUPercent` last. type: "integer" format: "int64" CpuPercent: description: | The usable percentage of the available CPUs (Windows only). On Windows Server containers, the processor resource controls are mutually exclusive. The order of precedence is `CPUCount` first, then `CPUShares`, and `CPUPercent` last. type: "integer" format: "int64" IOMaximumIOps: description: "Maximum IOps for the container system drive (Windows only)" type: "integer" format: "int64" IOMaximumBandwidth: description: | Maximum IO in bytes per second for the container system drive (Windows only). 
type: "integer" format: "int64" Limit: description: | An object describing a limit on resources which can be requested by a task. type: "object" properties: NanoCPUs: type: "integer" format: "int64" example: 4000000000 MemoryBytes: type: "integer" format: "int64" example: 8272408576 Pids: description: | Limits the maximum number of PIDs in the container. Set `0` for unlimited. type: "integer" format: "int64" default: 0 example: 100 ResourceObject: description: | An object describing the resources which can be advertised by a node and requested by a task. type: "object" properties: NanoCPUs: type: "integer" format: "int64" example: 4000000000 MemoryBytes: type: "integer" format: "int64" example: 8272408576 GenericResources: $ref: "#/definitions/GenericResources" GenericResources: description: | User-defined resources can be either Integer resources (e.g, `SSD=3`) or String resources (e.g, `GPU=UUID1`). type: "array" items: type: "object" properties: NamedResourceSpec: type: "object" properties: Kind: type: "string" Value: type: "string" DiscreteResourceSpec: type: "object" properties: Kind: type: "string" Value: type: "integer" format: "int64" example: - DiscreteResourceSpec: Kind: "SSD" Value: 3 - NamedResourceSpec: Kind: "GPU" Value: "UUID1" - NamedResourceSpec: Kind: "GPU" Value: "UUID2" HealthConfig: description: "A test to perform to check that the container is healthy." type: "object" properties: Test: description: | The test to perform. Possible values are: - `[]` inherit healthcheck from image or parent image - `["NONE"]` disable healthcheck - `["CMD", args...]` exec arguments directly - `["CMD-SHELL", command]` run command with system's default shell type: "array" items: type: "string" Interval: description: | The time to wait between checks in nanoseconds. It should be 0 or at least 1000000 (1 ms). 0 means inherit. type: "integer" Timeout: description: | The time to wait before considering the check to have hung. It should be 0 or at least 1000000 (1 ms). 0 means inherit. type: "integer" Retries: description: | The number of consecutive failures needed to consider a container as unhealthy. 0 means inherit. type: "integer" StartPeriod: description: | Start period for the container to initialize before starting health-retries countdown in nanoseconds. It should be 0 or at least 1000000 (1 ms). 0 means inherit. type: "integer" Health: description: | Health stores information about the container's healthcheck results. type: "object" properties: Status: description: | Status is one of `none`, `starting`, `healthy` or `unhealthy` - "none" Indicates there is no healthcheck - "starting" Starting indicates that the container is not yet ready - "healthy" Healthy indicates that the container is running correctly - "unhealthy" Unhealthy indicates that the container has a problem type: "string" enum: - "none" - "starting" - "healthy" - "unhealthy" example: "healthy" FailingStreak: description: "FailingStreak is the number of consecutive failures" type: "integer" example: 0 Log: type: "array" description: | Log contains the last few results (oldest first) items: x-nullable: true $ref: "#/definitions/HealthcheckResult" HealthcheckResult: description: | HealthcheckResult stores information about a single run of a healthcheck probe type: "object" properties: Start: description: | Date and time at which this check started in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. 
type: "string" format: "date-time" example: "2020-01-04T10:44:24.496525531Z" End: description: | Date and time at which this check ended in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2020-01-04T10:45:21.364524523Z" ExitCode: description: | ExitCode meanings: - `0` healthy - `1` unhealthy - `2` reserved (considered unhealthy) - other values: error running probe type: "integer" example: 0 Output: description: "Output from last check" type: "string" HostConfig: description: "Container configuration that depends on the host we are running on" allOf: - $ref: "#/definitions/Resources" - type: "object" properties: # Applicable to all platforms Binds: type: "array" description: | A list of volume bindings for this container. Each volume binding is a string in one of these forms: - `host-src:container-dest[:options]` to bind-mount a host path into the container. Both `host-src`, and `container-dest` must be an _absolute_ path. - `volume-name:container-dest[:options]` to bind-mount a volume managed by a volume driver into the container. `container-dest` must be an _absolute_ path. `options` is an optional, comma-delimited list of: - `nocopy` disables automatic copying of data from the container path to the volume. The `nocopy` flag only applies to named volumes. - `[ro|rw]` mounts a volume read-only or read-write, respectively. If omitted or set to `rw`, volumes are mounted read-write. - `[z|Z]` applies SELinux labels to allow or deny multiple containers to read and write to the same volume. - `z`: a _shared_ content label is applied to the content. This label indicates that multiple containers can share the volume content, for both reading and writing. - `Z`: a _private unshared_ label is applied to the content. This label indicates that only the current container can use a private volume. Labeling systems such as SELinux require proper labels to be placed on volume content that is mounted into a container. Without a label, the security system can prevent a container's processes from using the content. By default, the labels set by the host operating system are not modified. - `[[r]shared|[r]slave|[r]private]` specifies mount [propagation behavior](https://www.kernel.org/doc/Documentation/filesystems/sharedsubtree.txt). This only applies to bind-mounted volumes, not internal volumes or named volumes. Mount propagation requires the source mount point (the location where the source directory is mounted in the host operating system) to have the correct propagation properties. For shared volumes, the source mount point must be set to `shared`. For slave volumes, the mount must be set to either `shared` or `slave`. items: type: "string" ContainerIDFile: type: "string" description: "Path to a file where the container ID is written" LogConfig: type: "object" description: "The logging configuration for this container" properties: Type: type: "string" enum: - "json-file" - "syslog" - "journald" - "gelf" - "fluentd" - "awslogs" - "splunk" - "etwlogs" - "none" Config: type: "object" additionalProperties: type: "string" NetworkMode: type: "string" description: | Network mode to use for this container. Supported standard values are: `bridge`, `host`, `none`, and `container:<name|id>`. Any other value is taken as a custom network's name to which this container should connect to. 
PortBindings: $ref: "#/definitions/PortMap" RestartPolicy: $ref: "#/definitions/RestartPolicy" AutoRemove: type: "boolean" description: | Automatically remove the container when the container's process exits. This has no effect if `RestartPolicy` is set. VolumeDriver: type: "string" description: "Driver that this container uses to mount volumes." VolumesFrom: type: "array" description: | A list of volumes to inherit from another container, specified in the form `<container name>[:<ro|rw>]`. items: type: "string" Mounts: description: | Specification for mounts to be added to the container. type: "array" items: $ref: "#/definitions/Mount" # Applicable to UNIX platforms CapAdd: type: "array" description: | A list of kernel capabilities to add to the container. Conflicts with option 'Capabilities'. items: type: "string" CapDrop: type: "array" description: | A list of kernel capabilities to drop from the container. Conflicts with option 'Capabilities'. items: type: "string" CgroupnsMode: type: "string" enum: - "private" - "host" description: | cgroup namespace mode for the container. Possible values are: - `"private"`: the container runs in its own private cgroup namespace - `"host"`: use the host system's cgroup namespace If not specified, the daemon default is used, which can either be `"private"` or `"host"`, depending on daemon version, kernel support and configuration. Dns: type: "array" description: "A list of DNS servers for the container to use." items: type: "string" DnsOptions: type: "array" description: "A list of DNS options." items: type: "string" DnsSearch: type: "array" description: "A list of DNS search domains." items: type: "string" ExtraHosts: type: "array" description: | A list of hostnames/IP mappings to add to the container's `/etc/hosts` file. Specified in the form `["hostname:IP"]`. items: type: "string" GroupAdd: type: "array" description: | A list of additional groups that the container process will run as. items: type: "string" IpcMode: type: "string" description: | IPC sharing mode for the container. Possible values are: - `"none"`: own private IPC namespace, with /dev/shm not mounted - `"private"`: own private IPC namespace - `"shareable"`: own private IPC namespace, with a possibility to share it with other containers - `"container:<name|id>"`: join another (shareable) container's IPC namespace - `"host"`: use the host system's IPC namespace If not specified, daemon default is used, which can either be `"private"` or `"shareable"`, depending on daemon version and configuration. Cgroup: type: "string" description: "Cgroup to use for the container." Links: type: "array" description: | A list of links for the container in the form `container_name:alias`. items: type: "string" OomScoreAdj: type: "integer" description: | An integer value containing the score given to the container in order to tune OOM killer preferences. example: 500 PidMode: type: "string" description: | Set the PID (Process) Namespace mode for the container. It can be either: - `"container:<name|id>"`: joins another container's PID namespace - `"host"`: use the host's PID namespace inside the container Privileged: type: "boolean" description: "Gives the container full access to the host." PublishAllPorts: type: "boolean" description: | Allocates an ephemeral host port for all of a container's exposed ports. Ports are de-allocated when the container stops and allocated when the container starts. The allocated port might be changed when restarting the container. 
The port is selected from the ephemeral port range that depends on the kernel. For example, on Linux the range is defined by `/proc/sys/net/ipv4/ip_local_port_range`. ReadonlyRootfs: type: "boolean" description: "Mount the container's root filesystem as read only." SecurityOpt: type: "array" description: "A list of string values to customize labels for MLS systems, such as SELinux." items: type: "string" StorageOpt: type: "object" description: | Storage driver options for this container, in the form `{"size": "120G"}`. additionalProperties: type: "string" Tmpfs: type: "object" description: | A map of container directories which should be replaced by tmpfs mounts, and their corresponding mount options. For example: ``` { "/run": "rw,noexec,nosuid,size=65536k" } ``` additionalProperties: type: "string" UTSMode: type: "string" description: "UTS namespace to use for the container." UsernsMode: type: "string" description: | Sets the usernamespace mode for the container when usernamespace remapping option is enabled. ShmSize: type: "integer" description: | Size of `/dev/shm` in bytes. If omitted, the system uses 64MB. minimum: 0 Sysctls: type: "object" description: | A list of kernel parameters (sysctls) to set in the container. For example: ``` {"net.ipv4.ip_forward": "1"} ``` additionalProperties: type: "string" Runtime: type: "string" description: "Runtime to use with this container." # Applicable to Windows ConsoleSize: type: "array" description: | Initial console size, as an `[height, width]` array. (Windows only) minItems: 2 maxItems: 2 items: type: "integer" minimum: 0 Isolation: type: "string" description: | Isolation technology of the container. (Windows only) enum: - "default" - "process" - "hyperv" MaskedPaths: type: "array" description: | The list of paths to be masked inside the container (this overrides the default set of paths). items: type: "string" ReadonlyPaths: type: "array" description: | The list of paths to be set as read-only inside the container (this overrides the default set of paths). items: type: "string" ContainerConfig: description: "Configuration for a container that is portable between hosts" type: "object" properties: Hostname: description: "The hostname to use for the container, as a valid RFC 1123 hostname." type: "string" Domainname: description: "The domain name to use for the container." type: "string" User: description: "The user that commands are run as inside the container." type: "string" AttachStdin: description: "Whether to attach to `stdin`." type: "boolean" default: false AttachStdout: description: "Whether to attach to `stdout`." type: "boolean" default: true AttachStderr: description: "Whether to attach to `stderr`." type: "boolean" default: true ExposedPorts: description: | An object mapping ports to an empty object in the form: `{"<port>/<tcp|udp|sctp>": {}}` type: "object" additionalProperties: type: "object" enum: - {} default: {} Tty: description: | Attach standard streams to a TTY, including `stdin` if it is not closed. type: "boolean" default: false OpenStdin: description: "Open `stdin`" type: "boolean" default: false StdinOnce: description: "Close `stdin` after one attached client disconnects" type: "boolean" default: false Env: description: | A list of environment variables to set inside the container in the form `["VAR=value", ...]`. A variable without `=` is removed from the environment, rather than to have an empty value. type: "array" items: type: "string" Cmd: description: | Command to run specified as a string or an array of strings. 
type: "array" items: type: "string" Healthcheck: $ref: "#/definitions/HealthConfig" ArgsEscaped: description: "Command is already escaped (Windows only)" type: "boolean" Image: description: | The name of the image to use when creating the container/ type: "string" Volumes: description: | An object mapping mount point paths inside the container to empty objects. type: "object" additionalProperties: type: "object" enum: - {} default: {} WorkingDir: description: "The working directory for commands to run in." type: "string" Entrypoint: description: | The entry point for the container as a string or an array of strings. If the array consists of exactly one empty string (`[""]`) then the entry point is reset to system default (i.e., the entry point used by docker when there is no `ENTRYPOINT` instruction in the `Dockerfile`). type: "array" items: type: "string" NetworkDisabled: description: "Disable networking for the container." type: "boolean" MacAddress: description: "MAC address of the container." type: "string" OnBuild: description: | `ONBUILD` metadata that were defined in the image's `Dockerfile`. type: "array" items: type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" StopSignal: description: | Signal to stop a container as a string or unsigned integer. type: "string" default: "SIGTERM" StopTimeout: description: "Timeout to stop a container in seconds." type: "integer" default: 10 Shell: description: | Shell for when `RUN`, `CMD`, and `ENTRYPOINT` uses a shell. type: "array" items: type: "string" NetworkingConfig: description: | NetworkingConfig represents the container's networking configuration for each of its interfaces. It is used for the networking configs specified in the `docker create` and `docker network connect` commands. type: "object" properties: EndpointsConfig: description: | A mapping of network name to endpoint configuration for that network. type: "object" additionalProperties: $ref: "#/definitions/EndpointSettings" example: # putting an example here, instead of using the example values from # /definitions/EndpointSettings, because containers/create currently # does not support attaching to multiple networks, so the example request # would be confusing if it showed that multiple networks can be contained # in the EndpointsConfig. # TODO remove once we support multiple networks on container create (see https://github.com/moby/moby/blob/07e6b843594e061f82baa5fa23c2ff7d536c2a05/daemon/create.go#L323) EndpointsConfig: isolated_nw: IPAMConfig: IPv4Address: "172.20.30.33" IPv6Address: "2001:db8:abcd::3033" LinkLocalIPs: - "169.254.34.68" - "fe80::3468" Links: - "container_1" - "container_2" Aliases: - "server_x" - "server_y" NetworkSettings: description: "NetworkSettings exposes the network settings in the API" type: "object" properties: Bridge: description: Name of the network's bridge (for example, `docker0`). type: "string" example: "docker0" SandboxID: description: SandboxID uniquely represents a container's network stack. type: "string" example: "9d12daf2c33f5959c8bf90aa513e4f65b561738661003029ec84830cd503a0c3" HairpinMode: description: | Indicates if hairpin NAT should be enabled on the virtual interface. type: "boolean" example: false LinkLocalIPv6Address: description: IPv6 unicast address using the link-local prefix. type: "string" example: "fe80::42:acff:fe11:1" LinkLocalIPv6PrefixLen: description: Prefix length of the IPv6 unicast address. 
type: "integer" example: "64" Ports: $ref: "#/definitions/PortMap" SandboxKey: description: SandboxKey identifies the sandbox type: "string" example: "/var/run/docker/netns/8ab54b426c38" # TODO is SecondaryIPAddresses actually used? SecondaryIPAddresses: description: "" type: "array" items: $ref: "#/definitions/Address" x-nullable: true # TODO is SecondaryIPv6Addresses actually used? SecondaryIPv6Addresses: description: "" type: "array" items: $ref: "#/definitions/Address" x-nullable: true # TODO properties below are part of DefaultNetworkSettings, which is # marked as deprecated since Docker 1.9 and to be removed in Docker v17.12 EndpointID: description: | EndpointID uniquely represents a service endpoint in a Sandbox. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "b88f5b905aabf2893f3cbc4ee42d1ea7980bbc0a92e2c8922b1e1795298afb0b" Gateway: description: | Gateway address for the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "172.17.0.1" GlobalIPv6Address: description: | Global IPv6 address for the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "2001:db8::5689" GlobalIPv6PrefixLen: description: | Mask length of the global IPv6 address. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "integer" example: 64 IPAddress: description: | IPv4 address for the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "172.17.0.4" IPPrefixLen: description: | Mask length of the IPv4 address. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "integer" example: 16 IPv6Gateway: description: | IPv6 gateway address for this network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. 
Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "2001:db8:2::100" MacAddress: description: | MAC address for the container on the default "bridge" network. <p><br /></p> > **Deprecated**: This field is only propagated when attached to the > default "bridge" network. Use the information from the "bridge" > network inside the `Networks` map instead, which contains the same > information. This field was deprecated in Docker 1.9 and is scheduled > to be removed in Docker 17.12.0 type: "string" example: "02:42:ac:11:00:04" Networks: description: | Information about all networks that the container is connected to. type: "object" additionalProperties: $ref: "#/definitions/EndpointSettings" Address: description: Address represents an IPv4 or IPv6 IP address. type: "object" properties: Addr: description: IP address. type: "string" PrefixLen: description: Mask length of the IP address. type: "integer" PortMap: description: | PortMap describes the mapping of container ports to host ports, using the container's port-number and protocol as key in the format `<port>/<protocol>`, for example, `80/udp`. If a container's port is mapped for multiple protocols, separate entries are added to the mapping table. type: "object" additionalProperties: type: "array" x-nullable: true items: $ref: "#/definitions/PortBinding" example: "443/tcp": - HostIp: "127.0.0.1" HostPort: "4443" "80/tcp": - HostIp: "0.0.0.0" HostPort: "80" - HostIp: "0.0.0.0" HostPort: "8080" "80/udp": - HostIp: "0.0.0.0" HostPort: "80" "53/udp": - HostIp: "0.0.0.0" HostPort: "53" "2377/tcp": null PortBinding: description: | PortBinding represents a binding between a host IP address and a host port. type: "object" properties: HostIp: description: "Host IP address that the container's port is mapped to." type: "string" example: "127.0.0.1" HostPort: description: "Host port number that the container's port is mapped to." type: "string" example: "4443" GraphDriverData: description: "Information about a container's graph driver." 
type: "object" required: [Name, Data] properties: Name: type: "string" x-nullable: false Data: type: "object" x-nullable: false additionalProperties: type: "string" Image: type: "object" required: - Id - Parent - Comment - Created - Container - DockerVersion - Author - Architecture - Os - Size - VirtualSize - GraphDriver - RootFS properties: Id: type: "string" x-nullable: false RepoTags: type: "array" items: type: "string" RepoDigests: type: "array" items: type: "string" Parent: type: "string" x-nullable: false Comment: type: "string" x-nullable: false Created: type: "string" x-nullable: false Container: type: "string" x-nullable: false ContainerConfig: $ref: "#/definitions/ContainerConfig" DockerVersion: type: "string" x-nullable: false Author: type: "string" x-nullable: false Config: $ref: "#/definitions/ContainerConfig" Architecture: type: "string" x-nullable: false Os: type: "string" x-nullable: false OsVersion: type: "string" Size: type: "integer" format: "int64" x-nullable: false VirtualSize: type: "integer" format: "int64" x-nullable: false GraphDriver: $ref: "#/definitions/GraphDriverData" RootFS: type: "object" required: [Type] properties: Type: type: "string" x-nullable: false Layers: type: "array" items: type: "string" BaseLayer: type: "string" Metadata: type: "object" properties: LastTagTime: type: "string" format: "dateTime" ImageSummary: type: "object" required: - Id - ParentId - RepoTags - RepoDigests - Created - Size - SharedSize - VirtualSize - Labels - Containers properties: Id: type: "string" x-nullable: false ParentId: type: "string" x-nullable: false RepoTags: type: "array" x-nullable: false items: type: "string" RepoDigests: type: "array" x-nullable: false items: type: "string" Created: type: "integer" x-nullable: false Size: type: "integer" x-nullable: false SharedSize: type: "integer" x-nullable: false VirtualSize: type: "integer" x-nullable: false Labels: type: "object" x-nullable: false additionalProperties: type: "string" Containers: x-nullable: false type: "integer" AuthConfig: type: "object" properties: username: type: "string" password: type: "string" email: type: "string" serveraddress: type: "string" example: username: "hannibal" password: "xxxx" serveraddress: "https://index.docker.io/v1/" ProcessConfig: type: "object" properties: privileged: type: "boolean" user: type: "string" tty: type: "boolean" entrypoint: type: "string" arguments: type: "array" items: type: "string" Volume: type: "object" required: [Name, Driver, Mountpoint, Labels, Scope, Options] properties: Name: type: "string" description: "Name of the volume." x-nullable: false Driver: type: "string" description: "Name of the volume driver used by the volume." x-nullable: false Mountpoint: type: "string" description: "Mount path of the volume on the host." x-nullable: false CreatedAt: type: "string" format: "dateTime" description: "Date/Time the volume was created." Status: type: "object" description: | Low-level details about the volume, provided by the volume driver. Details are returned as a map with key/value pairs: `{"key":"value","key2":"value2"}`. The `Status` field is optional, and is omitted if the volume driver does not support this feature. additionalProperties: type: "object" Labels: type: "object" description: "User-defined key/value metadata." x-nullable: false additionalProperties: type: "string" Scope: type: "string" description: | The level at which the volume exists. Either `global` for cluster-wide, or `local` for machine level. 
default: "local" x-nullable: false enum: ["local", "global"] Options: type: "object" description: | The driver specific options used when creating the volume. additionalProperties: type: "string" UsageData: type: "object" x-nullable: true required: [Size, RefCount] description: | Usage details about the volume. This information is used by the `GET /system/df` endpoint, and omitted in other endpoints. properties: Size: type: "integer" default: -1 description: | Amount of disk space used by the volume (in bytes). This information is only available for volumes created with the `"local"` volume driver. For volumes created with other volume drivers, this field is set to `-1` ("not available") x-nullable: false RefCount: type: "integer" default: -1 description: | The number of containers referencing this volume. This field is set to `-1` if the reference-count is not available. x-nullable: false example: Name: "tardis" Driver: "custom" Mountpoint: "/var/lib/docker/volumes/tardis" Status: hello: "world" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Scope: "local" CreatedAt: "2016-06-07T20:31:11.853781916Z" Network: type: "object" properties: Name: type: "string" Id: type: "string" Created: type: "string" format: "dateTime" Scope: type: "string" Driver: type: "string" EnableIPv6: type: "boolean" IPAM: $ref: "#/definitions/IPAM" Internal: type: "boolean" Attachable: type: "boolean" Ingress: type: "boolean" Containers: type: "object" additionalProperties: $ref: "#/definitions/NetworkContainer" Options: type: "object" additionalProperties: type: "string" Labels: type: "object" additionalProperties: type: "string" example: Name: "net01" Id: "7d86d31b1478e7cca9ebed7e73aa0fdeec46c5ca29497431d3007d2d9e15ed99" Created: "2016-10-19T04:33:30.360899459Z" Scope: "local" Driver: "bridge" EnableIPv6: false IPAM: Driver: "default" Config: - Subnet: "172.19.0.0/16" Gateway: "172.19.0.1" Options: foo: "bar" Internal: false Attachable: false Ingress: false Containers: 19a4d5d687db25203351ed79d478946f861258f018fe384f229f2efa4b23513c: Name: "test" EndpointID: "628cadb8bcb92de107b2a1e516cbffe463e321f548feb37697cce00ad694f21a" MacAddress: "02:42:ac:13:00:02" IPv4Address: "172.19.0.2/16" IPv6Address: "" Options: com.docker.network.bridge.default_bridge: "true" com.docker.network.bridge.enable_icc: "true" com.docker.network.bridge.enable_ip_masquerade: "true" com.docker.network.bridge.host_binding_ipv4: "0.0.0.0" com.docker.network.bridge.name: "docker0" com.docker.network.driver.mtu: "1500" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" IPAM: type: "object" properties: Driver: description: "Name of the IPAM driver to use." type: "string" default: "default" Config: description: | List of IPAM configuration options, specified as a map: ``` {"Subnet": <CIDR>, "IPRange": <CIDR>, "Gateway": <IP address>, "AuxAddress": <device_name:IP address>} ``` type: "array" items: type: "object" additionalProperties: type: "string" Options: description: "Driver-specific options, specified as a map." 
type: "object" additionalProperties: type: "string" NetworkContainer: type: "object" properties: Name: type: "string" EndpointID: type: "string" MacAddress: type: "string" IPv4Address: type: "string" IPv6Address: type: "string" BuildInfo: type: "object" properties: id: type: "string" stream: type: "string" error: type: "string" errorDetail: $ref: "#/definitions/ErrorDetail" status: type: "string" progress: type: "string" progressDetail: $ref: "#/definitions/ProgressDetail" aux: $ref: "#/definitions/ImageID" BuildCache: type: "object" properties: ID: type: "string" Parent: type: "string" Type: type: "string" Description: type: "string" InUse: type: "boolean" Shared: type: "boolean" Size: description: | Amount of disk space used by the build cache (in bytes). type: "integer" CreatedAt: description: | Date and time at which the build cache was created in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2016-08-18T10:44:24.496525531Z" LastUsedAt: description: | Date and time at which the build cache was last used in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" x-nullable: true example: "2017-08-09T07:09:37.632105588Z" UsageCount: type: "integer" ImageID: type: "object" description: "Image ID or Digest" properties: ID: type: "string" example: ID: "sha256:85f05633ddc1c50679be2b16a0479ab6f7637f8884e0cfe0f4d20e1ebb3d6e7c" CreateImageInfo: type: "object" properties: id: type: "string" error: type: "string" status: type: "string" progress: type: "string" progressDetail: $ref: "#/definitions/ProgressDetail" PushImageInfo: type: "object" properties: error: type: "string" status: type: "string" progress: type: "string" progressDetail: $ref: "#/definitions/ProgressDetail" ErrorDetail: type: "object" properties: code: type: "integer" message: type: "string" ProgressDetail: type: "object" properties: current: type: "integer" total: type: "integer" ErrorResponse: description: "Represents an error." type: "object" required: ["message"] properties: message: description: "The error message." type: "string" x-nullable: false example: message: "Something went wrong." IdResponse: description: "Response to an API call that returns just an Id" type: "object" required: ["Id"] properties: Id: description: "The id of the newly created object." type: "string" x-nullable: false EndpointSettings: description: "Configuration for a network endpoint." type: "object" properties: # Configurations IPAMConfig: $ref: "#/definitions/EndpointIPAMConfig" Links: type: "array" items: type: "string" example: - "container_1" - "container_2" Aliases: type: "array" items: type: "string" example: - "server_x" - "server_y" # Operational data NetworkID: description: | Unique ID of the network. type: "string" example: "08754567f1f40222263eab4102e1c733ae697e8e354aa9cd6e18d7402835292a" EndpointID: description: | Unique ID for the service endpoint in a Sandbox. type: "string" example: "b88f5b905aabf2893f3cbc4ee42d1ea7980bbc0a92e2c8922b1e1795298afb0b" Gateway: description: | Gateway address for this network. type: "string" example: "172.17.0.1" IPAddress: description: | IPv4 address. type: "string" example: "172.17.0.4" IPPrefixLen: description: | Mask length of the IPv4 address. type: "integer" example: 16 IPv6Gateway: description: | IPv6 gateway address. type: "string" example: "2001:db8:2::100" GlobalIPv6Address: description: | Global IPv6 address. 
type: "string" example: "2001:db8::5689" GlobalIPv6PrefixLen: description: | Mask length of the global IPv6 address. type: "integer" format: "int64" example: 64 MacAddress: description: | MAC address for the endpoint on this network. type: "string" example: "02:42:ac:11:00:04" DriverOpts: description: | DriverOpts is a mapping of driver options and values. These options are passed directly to the driver and are driver specific. type: "object" x-nullable: true additionalProperties: type: "string" example: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" EndpointIPAMConfig: description: | EndpointIPAMConfig represents an endpoint's IPAM configuration. type: "object" x-nullable: true properties: IPv4Address: type: "string" example: "172.20.30.33" IPv6Address: type: "string" example: "2001:db8:abcd::3033" LinkLocalIPs: type: "array" items: type: "string" example: - "169.254.34.68" - "fe80::3468" PluginMount: type: "object" x-nullable: false required: [Name, Description, Settable, Source, Destination, Type, Options] properties: Name: type: "string" x-nullable: false example: "some-mount" Description: type: "string" x-nullable: false example: "This is a mount that's used by the plugin." Settable: type: "array" items: type: "string" Source: type: "string" example: "/var/lib/docker/plugins/" Destination: type: "string" x-nullable: false example: "/mnt/state" Type: type: "string" x-nullable: false example: "bind" Options: type: "array" items: type: "string" example: - "rbind" - "rw" PluginDevice: type: "object" required: [Name, Description, Settable, Path] x-nullable: false properties: Name: type: "string" x-nullable: false Description: type: "string" x-nullable: false Settable: type: "array" items: type: "string" Path: type: "string" example: "/dev/fuse" PluginEnv: type: "object" x-nullable: false required: [Name, Description, Settable, Value] properties: Name: x-nullable: false type: "string" Description: x-nullable: false type: "string" Settable: type: "array" items: type: "string" Value: type: "string" PluginInterfaceType: type: "object" x-nullable: false required: [Prefix, Capability, Version] properties: Prefix: type: "string" x-nullable: false Capability: type: "string" x-nullable: false Version: type: "string" x-nullable: false Plugin: description: "A plugin for the Engine API" type: "object" required: [Settings, Enabled, Config, Name] properties: Id: type: "string" example: "5724e2c8652da337ab2eedd19fc6fc0ec908e4bd907c7421bf6a8dfc70c4c078" Name: type: "string" x-nullable: false example: "tiborvass/sample-volume-plugin" Enabled: description: True if the plugin is running. False if the plugin is not running, only installed. type: "boolean" x-nullable: false example: true Settings: description: "Settings that can be modified by users." type: "object" x-nullable: false required: [Args, Devices, Env, Mounts] properties: Mounts: type: "array" items: $ref: "#/definitions/PluginMount" Env: type: "array" items: type: "string" example: - "DEBUG=0" Args: type: "array" items: type: "string" Devices: type: "array" items: $ref: "#/definitions/PluginDevice" PluginReference: description: "plugin remote reference used to push/pull the plugin" type: "string" x-nullable: false example: "localhost:5000/tiborvass/sample-volume-plugin:latest" Config: description: "The config of a plugin." 
type: "object" x-nullable: false required: - Description - Documentation - Interface - Entrypoint - WorkDir - Network - Linux - PidHost - PropagatedMount - IpcHost - Mounts - Env - Args properties: DockerVersion: description: "Docker Version used to create the plugin" type: "string" x-nullable: false example: "17.06.0-ce" Description: type: "string" x-nullable: false example: "A sample volume plugin for Docker" Documentation: type: "string" x-nullable: false example: "https://docs.docker.com/engine/extend/plugins/" Interface: description: "The interface between Docker and the plugin" x-nullable: false type: "object" required: [Types, Socket] properties: Types: type: "array" items: $ref: "#/definitions/PluginInterfaceType" example: - "docker.volumedriver/1.0" Socket: type: "string" x-nullable: false example: "plugins.sock" ProtocolScheme: type: "string" example: "some.protocol/v1.0" description: "Protocol to use for clients connecting to the plugin." enum: - "" - "moby.plugins.http/v1" Entrypoint: type: "array" items: type: "string" example: - "/usr/bin/sample-volume-plugin" - "/data" WorkDir: type: "string" x-nullable: false example: "/bin/" User: type: "object" x-nullable: false properties: UID: type: "integer" format: "uint32" example: 1000 GID: type: "integer" format: "uint32" example: 1000 Network: type: "object" x-nullable: false required: [Type] properties: Type: x-nullable: false type: "string" example: "host" Linux: type: "object" x-nullable: false required: [Capabilities, AllowAllDevices, Devices] properties: Capabilities: type: "array" items: type: "string" example: - "CAP_SYS_ADMIN" - "CAP_SYSLOG" AllowAllDevices: type: "boolean" x-nullable: false example: false Devices: type: "array" items: $ref: "#/definitions/PluginDevice" PropagatedMount: type: "string" x-nullable: false example: "/mnt/volumes" IpcHost: type: "boolean" x-nullable: false example: false PidHost: type: "boolean" x-nullable: false example: false Mounts: type: "array" items: $ref: "#/definitions/PluginMount" Env: type: "array" items: $ref: "#/definitions/PluginEnv" example: - Name: "DEBUG" Description: "If set, prints debug messages" Settable: null Value: "0" Args: type: "object" x-nullable: false required: [Name, Description, Settable, Value] properties: Name: x-nullable: false type: "string" example: "args" Description: x-nullable: false type: "string" example: "command line arguments" Settable: type: "array" items: type: "string" Value: type: "array" items: type: "string" rootfs: type: "object" properties: type: type: "string" example: "layers" diff_ids: type: "array" items: type: "string" example: - "sha256:675532206fbf3030b8458f88d6e26d4eb1577688a25efec97154c94e8b6b4887" - "sha256:e216a057b1cb1efc11f8a268f37ef62083e70b1b38323ba252e25ac88904a7e8" ObjectVersion: description: | The version number of the object such as node, service, etc. This is needed to avoid conflicting writes. The client must send the version number along with the modified specification when updating these objects. This approach ensures safe concurrency and determinism in that the change on the object may not be applied if the version number has changed from the last read. In other words, if two update requests specify the same base version, only one of the requests can succeed. As a result, two separate update requests that happen at the same time will not unintentionally overwrite each other. 
type: "object" properties: Index: type: "integer" format: "uint64" example: 373531 NodeSpec: type: "object" properties: Name: description: "Name for the node." type: "string" example: "my-node" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" Role: description: "Role of the node." type: "string" enum: - "worker" - "manager" example: "manager" Availability: description: "Availability of the node." type: "string" enum: - "active" - "pause" - "drain" example: "active" example: Availability: "active" Name: "node-name" Role: "manager" Labels: foo: "bar" Node: type: "object" properties: ID: type: "string" example: "24ifsmvkjbyhk" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: description: | Date and time at which the node was added to the swarm in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2016-08-18T10:44:24.496525531Z" UpdatedAt: description: | Date and time at which the node was last updated in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2017-08-09T07:09:37.632105588Z" Spec: $ref: "#/definitions/NodeSpec" Description: $ref: "#/definitions/NodeDescription" Status: $ref: "#/definitions/NodeStatus" ManagerStatus: $ref: "#/definitions/ManagerStatus" NodeDescription: description: | NodeDescription encapsulates the properties of the Node as reported by the agent. type: "object" properties: Hostname: type: "string" example: "bf3067039e47" Platform: $ref: "#/definitions/Platform" Resources: $ref: "#/definitions/ResourceObject" Engine: $ref: "#/definitions/EngineDescription" TLSInfo: $ref: "#/definitions/TLSInfo" Platform: description: | Platform represents the platform (Arch/OS). type: "object" properties: Architecture: description: | Architecture represents the hardware architecture (for example, `x86_64`). type: "string" example: "x86_64" OS: description: | OS represents the Operating System (for example, `linux` or `windows`). type: "string" example: "linux" EngineDescription: description: "EngineDescription provides information about an engine." type: "object" properties: EngineVersion: type: "string" example: "17.06.0" Labels: type: "object" additionalProperties: type: "string" example: foo: "bar" Plugins: type: "array" items: type: "object" properties: Type: type: "string" Name: type: "string" example: - Type: "Log" Name: "awslogs" - Type: "Log" Name: "fluentd" - Type: "Log" Name: "gcplogs" - Type: "Log" Name: "gelf" - Type: "Log" Name: "journald" - Type: "Log" Name: "json-file" - Type: "Log" Name: "logentries" - Type: "Log" Name: "splunk" - Type: "Log" Name: "syslog" - Type: "Network" Name: "bridge" - Type: "Network" Name: "host" - Type: "Network" Name: "ipvlan" - Type: "Network" Name: "macvlan" - Type: "Network" Name: "null" - Type: "Network" Name: "overlay" - Type: "Volume" Name: "local" - Type: "Volume" Name: "localhost:5000/vieux/sshfs:latest" - Type: "Volume" Name: "vieux/sshfs:latest" TLSInfo: description: | Information about the issuer of leaf TLS certificates and the trusted root CA certificate. type: "object" properties: TrustRoot: description: | The root CA certificate(s) that are used to validate leaf TLS certificates. type: "string" CertIssuerSubject: description: The base64-url-safe-encoded raw subject bytes of the issuer. type: "string" CertIssuerPublicKey: description: | The base64-url-safe-encoded raw public key bytes of the issuer. 
type: "string" example: TrustRoot: | -----BEGIN CERTIFICATE----- MIIBajCCARCgAwIBAgIUbYqrLSOSQHoxD8CwG6Bi2PJi9c8wCgYIKoZIzj0EAwIw EzERMA8GA1UEAxMIc3dhcm0tY2EwHhcNMTcwNDI0MjE0MzAwWhcNMzcwNDE5MjE0 MzAwWjATMREwDwYDVQQDEwhzd2FybS1jYTBZMBMGByqGSM49AgEGCCqGSM49AwEH A0IABJk/VyMPYdaqDXJb/VXh5n/1Yuv7iNrxV3Qb3l06XD46seovcDWs3IZNV1lf 3Skyr0ofcchipoiHkXBODojJydSjQjBAMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMB Af8EBTADAQH/MB0GA1UdDgQWBBRUXxuRcnFjDfR/RIAUQab8ZV/n4jAKBggqhkjO PQQDAgNIADBFAiAy+JTe6Uc3KyLCMiqGl2GyWGQqQDEcO3/YG36x7om65AIhAJvz pxv6zFeVEkAEEkqIYi0omA9+CjanB/6Bz4n1uw8H -----END CERTIFICATE----- CertIssuerSubject: "MBMxETAPBgNVBAMTCHN3YXJtLWNh" CertIssuerPublicKey: "MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEmT9XIw9h1qoNclv9VeHmf/Vi6/uI2vFXdBveXTpcPjqx6i9wNazchk1XWV/dKTKvSh9xyGKmiIeRcE4OiMnJ1A==" NodeStatus: description: | NodeStatus represents the status of a node. It provides the current status of the node, as seen by the manager. type: "object" properties: State: $ref: "#/definitions/NodeState" Message: type: "string" example: "" Addr: description: "IP address of the node." type: "string" example: "172.17.0.2" NodeState: description: "NodeState represents the state of a node." type: "string" enum: - "unknown" - "down" - "ready" - "disconnected" example: "ready" ManagerStatus: description: | ManagerStatus represents the status of a manager. It provides the current status of a node's manager component, if the node is a manager. x-nullable: true type: "object" properties: Leader: type: "boolean" default: false example: true Reachability: $ref: "#/definitions/Reachability" Addr: description: | The IP address and port at which the manager is reachable. type: "string" example: "10.0.0.46:2377" Reachability: description: "Reachability represents the reachability of a node." type: "string" enum: - "unknown" - "unreachable" - "reachable" example: "reachable" SwarmSpec: description: "User modifiable swarm configuration." type: "object" properties: Name: description: "Name of the swarm." type: "string" example: "default" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" example: com.example.corp.type: "production" com.example.corp.department: "engineering" Orchestration: description: "Orchestration configuration." type: "object" x-nullable: true properties: TaskHistoryRetentionLimit: description: | The number of historic tasks to keep per instance or node. If negative, never remove completed or failed tasks. type: "integer" format: "int64" example: 10 Raft: description: "Raft configuration." type: "object" properties: SnapshotInterval: description: "The number of log entries between snapshots." type: "integer" format: "uint64" example: 10000 KeepOldSnapshots: description: | The number of snapshots to keep beyond the current snapshot. type: "integer" format: "uint64" LogEntriesForSlowFollowers: description: | The number of log entries to keep around to sync up slow followers after a snapshot is created. type: "integer" format: "uint64" example: 500 ElectionTick: description: | The number of ticks that a follower will wait for a message from the leader before becoming a candidate and starting an election. `ElectionTick` must be greater than `HeartbeatTick`. A tick currently defaults to one second, so these translate directly to seconds currently, but this is NOT guaranteed. type: "integer" example: 3 HeartbeatTick: description: | The number of ticks between heartbeats. Every HeartbeatTick ticks, the leader will send a heartbeat to the followers. 
A tick currently defaults to one second, so these translate directly to seconds currently, but this is NOT guaranteed. type: "integer" example: 1 Dispatcher: description: "Dispatcher configuration." type: "object" x-nullable: true properties: HeartbeatPeriod: description: | The delay for an agent to send a heartbeat to the dispatcher. type: "integer" format: "int64" example: 5000000000 CAConfig: description: "CA configuration." type: "object" x-nullable: true properties: NodeCertExpiry: description: "The duration node certificates are issued for." type: "integer" format: "int64" example: 7776000000000000 ExternalCAs: description: | Configuration for forwarding signing requests to an external certificate authority. type: "array" items: type: "object" properties: Protocol: description: | Protocol for communication with the external CA (currently only `cfssl` is supported). type: "string" enum: - "cfssl" default: "cfssl" URL: description: | URL where certificate signing requests should be sent. type: "string" Options: description: | An object with key/value pairs that are interpreted as protocol-specific options for the external CA driver. type: "object" additionalProperties: type: "string" CACert: description: | The root CA certificate (in PEM format) this external CA uses to issue TLS certificates (assumed to be to the current swarm root CA certificate if not provided). type: "string" SigningCACert: description: | The desired signing CA certificate for all swarm node TLS leaf certificates, in PEM format. type: "string" SigningCAKey: description: | The desired signing CA key for all swarm node TLS leaf certificates, in PEM format. type: "string" ForceRotate: description: | An integer whose purpose is to force swarm to generate a new signing CA certificate and key, if none have been specified in `SigningCACert` and `SigningCAKey` format: "uint64" type: "integer" EncryptionConfig: description: "Parameters related to encryption-at-rest." type: "object" properties: AutoLockManagers: description: | If set, generate a key and use it to lock data stored on the managers. type: "boolean" example: false TaskDefaults: description: "Defaults for creating tasks in this cluster." type: "object" properties: LogDriver: description: | The log driver to use for tasks created in the orchestrator if unspecified by a service. Updating this value only affects new tasks. Existing tasks continue to use their previously configured log driver until recreated. type: "object" properties: Name: description: | The log driver to use as a default for new tasks. type: "string" example: "json-file" Options: description: | Driver-specific options for the selectd log driver, specified as key/value pairs. type: "object" additionalProperties: type: "string" example: "max-file": "10" "max-size": "100m" # The Swarm information for `GET /info`. It is the same as `GET /swarm`, but # without `JoinTokens`. ClusterInfo: description: | ClusterInfo represents information about the swarm as is returned by the "/info" endpoint. Join-tokens are not included. x-nullable: true type: "object" properties: ID: description: "The ID of the swarm." type: "string" example: "abajmipo7b4xz5ip2nrla6b11" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: description: | Date and time at which the swarm was initialised in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. 
type: "string" format: "dateTime" example: "2016-08-18T10:44:24.496525531Z" UpdatedAt: description: | Date and time at which the swarm was last updated in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" format: "dateTime" example: "2017-08-09T07:09:37.632105588Z" Spec: $ref: "#/definitions/SwarmSpec" TLSInfo: $ref: "#/definitions/TLSInfo" RootRotationInProgress: description: | Whether there is currently a root CA rotation in progress for the swarm type: "boolean" example: false DataPathPort: description: | DataPathPort specifies the data path port number for data traffic. Acceptable port range is 1024 to 49151. If no port is set or is set to 0, the default port (4789) is used. type: "integer" format: "uint32" default: 4789 example: 4789 DefaultAddrPool: description: | Default Address Pool specifies default subnet pools for global scope networks. type: "array" items: type: "string" format: "CIDR" example: ["10.10.0.0/16", "20.20.0.0/16"] SubnetSize: description: | SubnetSize specifies the subnet size of the networks created from the default subnet pool. type: "integer" format: "uint32" maximum: 29 default: 24 example: 24 JoinTokens: description: | JoinTokens contains the tokens workers and managers need to join the swarm. type: "object" properties: Worker: description: | The token workers can use to join the swarm. type: "string" example: "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-1awxwuwd3z9j1z3puu7rcgdbx" Manager: description: | The token managers can use to join the swarm. type: "string" example: "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-7p73s1dx5in4tatdymyhg9hu2" Swarm: type: "object" allOf: - $ref: "#/definitions/ClusterInfo" - type: "object" properties: JoinTokens: $ref: "#/definitions/JoinTokens" TaskSpec: description: "User modifiable task configuration." type: "object" properties: PluginSpec: type: "object" description: | Plugin spec for the service. *(Experimental release only.)* <p><br /></p> > **Note**: ContainerSpec, NetworkAttachmentSpec, and PluginSpec are > mutually exclusive. PluginSpec is only used when the Runtime field > is set to `plugin`. NetworkAttachmentSpec is used when the Runtime > field is set to `attachment`. properties: Name: description: "The name or 'alias' to use for the plugin." type: "string" Remote: description: "The plugin image reference to use." type: "string" Disabled: description: "Disable the plugin once scheduled." type: "boolean" PluginPrivilege: type: "array" items: description: | Describes a permission accepted by the user upon installing the plugin. type: "object" properties: Name: type: "string" Description: type: "string" Value: type: "array" items: type: "string" ContainerSpec: type: "object" description: | Container spec for the service. <p><br /></p> > **Note**: ContainerSpec, NetworkAttachmentSpec, and PluginSpec are > mutually exclusive. PluginSpec is only used when the Runtime field > is set to `plugin`. NetworkAttachmentSpec is used when the Runtime > field is set to `attachment`. properties: Image: description: "The image name to use for the container" type: "string" Labels: description: "User-defined key/value data." type: "object" additionalProperties: type: "string" Command: description: "The command to be run in the image." type: "array" items: type: "string" Args: description: "Arguments to the command." 
type: "array" items: type: "string" Hostname: description: | The hostname to use for the container, as a valid [RFC 1123](https://tools.ietf.org/html/rfc1123) hostname. type: "string" Env: description: | A list of environment variables in the form `VAR=value`. type: "array" items: type: "string" Dir: description: "The working directory for commands to run in." type: "string" User: description: "The user inside the container." type: "string" Groups: type: "array" description: | A list of additional groups that the container process will run as. items: type: "string" Privileges: type: "object" description: "Security options for the container" properties: CredentialSpec: type: "object" description: "CredentialSpec for managed service account (Windows only)" properties: Config: type: "string" example: "0bt9dmxjvjiqermk6xrop3ekq" description: | Load credential spec from a Swarm Config with the given ID. The specified config must also be present in the Configs field with the Runtime property set. <p><br /></p> > **Note**: `CredentialSpec.File`, `CredentialSpec.Registry`, > and `CredentialSpec.Config` are mutually exclusive. File: type: "string" example: "spec.json" description: | Load credential spec from this file. The file is read by the daemon, and must be present in the `CredentialSpecs` subdirectory in the docker data directory, which defaults to `C:\ProgramData\Docker\` on Windows. For example, specifying `spec.json` loads `C:\ProgramData\Docker\CredentialSpecs\spec.json`. <p><br /></p> > **Note**: `CredentialSpec.File`, `CredentialSpec.Registry`, > and `CredentialSpec.Config` are mutually exclusive. Registry: type: "string" description: | Load credential spec from this value in the Windows registry. The specified registry value must be located in: `HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization\Containers\CredentialSpecs` <p><br /></p> > **Note**: `CredentialSpec.File`, `CredentialSpec.Registry`, > and `CredentialSpec.Config` are mutually exclusive. SELinuxContext: type: "object" description: "SELinux labels of the container" properties: Disable: type: "boolean" description: "Disable SELinux" User: type: "string" description: "SELinux user label" Role: type: "string" description: "SELinux role label" Type: type: "string" description: "SELinux type label" Level: type: "string" description: "SELinux level label" TTY: description: "Whether a pseudo-TTY should be allocated." type: "boolean" OpenStdin: description: "Open `stdin`" type: "boolean" ReadOnly: description: "Mount the container's root filesystem as read only." type: "boolean" Mounts: description: | Specification for mounts to be added to containers created as part of the service. type: "array" items: $ref: "#/definitions/Mount" StopSignal: description: "Signal to stop the container." type: "string" StopGracePeriod: description: | Amount of time to wait for the container to terminate before forcefully killing it. type: "integer" format: "int64" HealthCheck: $ref: "#/definitions/HealthConfig" Hosts: type: "array" description: | A list of hostname/IP mappings to add to the container's `hosts` file. The format of extra hosts is specified in the [hosts(5)](http://man7.org/linux/man-pages/man5/hosts.5.html) man page: IP_address canonical_hostname [aliases...] items: type: "string" DNSConfig: description: | Specification for DNS related configurations in resolver configuration file (`resolv.conf`). type: "object" properties: Nameservers: description: "The IP addresses of the name servers." 
type: "array" items: type: "string" Search: description: "A search list for host-name lookup." type: "array" items: type: "string" Options: description: | A list of internal resolver variables to be modified (e.g., `debug`, `ndots:3`, etc.). type: "array" items: type: "string" Secrets: description: | Secrets contains references to zero or more secrets that will be exposed to the service. type: "array" items: type: "object" properties: File: description: | File represents a specific target that is backed by a file. type: "object" properties: Name: description: | Name represents the final filename in the filesystem. type: "string" UID: description: "UID represents the file UID." type: "string" GID: description: "GID represents the file GID." type: "string" Mode: description: "Mode represents the FileMode of the file." type: "integer" format: "uint32" SecretID: description: | SecretID represents the ID of the specific secret that we're referencing. type: "string" SecretName: description: | SecretName is the name of the secret that this references, but this is just provided for lookup/display purposes. The secret in the reference will be identified by its ID. type: "string" Configs: description: | Configs contains references to zero or more configs that will be exposed to the service. type: "array" items: type: "object" properties: File: description: | File represents a specific target that is backed by a file. <p><br /><p> > **Note**: `Configs.File` and `Configs.Runtime` are mutually exclusive type: "object" properties: Name: description: | Name represents the final filename in the filesystem. type: "string" UID: description: "UID represents the file UID." type: "string" GID: description: "GID represents the file GID." type: "string" Mode: description: "Mode represents the FileMode of the file." type: "integer" format: "uint32" Runtime: description: | Runtime represents a target that is not mounted into the container but is used by the task <p><br /><p> > **Note**: `Configs.File` and `Configs.Runtime` are mutually > exclusive type: "object" ConfigID: description: | ConfigID represents the ID of the specific config that we're referencing. type: "string" ConfigName: description: | ConfigName is the name of the config that this references, but this is just provided for lookup/display purposes. The config in the reference will be identified by its ID. type: "string" Isolation: type: "string" description: | Isolation technology of the containers running the service. (Windows only) enum: - "default" - "process" - "hyperv" Init: description: | Run an init inside the container that forwards signals and reaps processes. This field is omitted if empty, and the default (as configured on the daemon) is used. type: "boolean" x-nullable: true Sysctls: description: | Set kernel namedspaced parameters (sysctls) in the container. The Sysctls option on services accepts the same sysctls as the are supported on containers. Note that while the same sysctls are supported, no guarantees or checks are made about their suitability for a clustered environment, and it's up to the user to determine whether a given sysctl will work properly in a Service. type: "object" additionalProperties: type: "string" # This option is not used by Windows containers CapabilityAdd: type: "array" description: | A list of kernel capabilities to add to the default set for the container. 
items: type: "string" example: - "CAP_NET_RAW" - "CAP_SYS_ADMIN" - "CAP_SYS_CHROOT" - "CAP_SYSLOG" CapabilityDrop: type: "array" description: | A list of kernel capabilities to drop from the default set for the container. items: type: "string" example: - "CAP_NET_RAW" Ulimits: description: | A list of resource limits to set in the container. For example: `{"Name": "nofile", "Soft": 1024, "Hard": 2048}`" type: "array" items: type: "object" properties: Name: description: "Name of ulimit" type: "string" Soft: description: "Soft limit" type: "integer" Hard: description: "Hard limit" type: "integer" NetworkAttachmentSpec: description: | Read-only spec type for non-swarm containers attached to swarm overlay networks. <p><br /></p> > **Note**: ContainerSpec, NetworkAttachmentSpec, and PluginSpec are > mutually exclusive. PluginSpec is only used when the Runtime field > is set to `plugin`. NetworkAttachmentSpec is used when the Runtime > field is set to `attachment`. type: "object" properties: ContainerID: description: "ID of the container represented by this task" type: "string" Resources: description: | Resource requirements which apply to each individual container created as part of the service. type: "object" properties: Limits: description: "Define resources limits." $ref: "#/definitions/Limit" Reservation: description: "Define resources reservation." $ref: "#/definitions/ResourceObject" RestartPolicy: description: | Specification for the restart policy which applies to containers created as part of this service. type: "object" properties: Condition: description: "Condition for restart." type: "string" enum: - "none" - "on-failure" - "any" Delay: description: "Delay between restart attempts." type: "integer" format: "int64" MaxAttempts: description: | Maximum attempts to restart a given container before giving up (default value is 0, which is ignored). type: "integer" format: "int64" default: 0 Window: description: | Windows is the time window used to evaluate the restart policy (default value is 0, which is unbounded). type: "integer" format: "int64" default: 0 Placement: type: "object" properties: Constraints: description: | An array of constraint expressions to limit the set of nodes where a task can be scheduled. Constraint expressions can either use a _match_ (`==`) or _exclude_ (`!=`) rule. Multiple constraints find nodes that satisfy every expression (AND match). Constraints can match node or Docker Engine labels as follows: node attribute | matches | example ---------------------|--------------------------------|----------------------------------------------- `node.id` | Node ID | `node.id==2ivku8v2gvtg4` `node.hostname` | Node hostname | `node.hostname!=node-2` `node.role` | Node role (`manager`/`worker`) | `node.role==manager` `node.platform.os` | Node operating system | `node.platform.os==windows` `node.platform.arch` | Node architecture | `node.platform.arch==x86_64` `node.labels` | User-defined node labels | `node.labels.security==high` `engine.labels` | Docker Engine's labels | `engine.labels.operatingsystem==ubuntu-14.04` `engine.labels` apply to Docker Engine labels like operating system, drivers, etc. Swarm administrators add `node.labels` for operational purposes by using the [`node update endpoint`](#operation/NodeUpdate). 
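# Illustrative sketch (hypothetical values): in a service create/update request
# body, the constraint expressions described above are passed under
# TaskTemplate.Placement.Constraints, for example:
#
#   "TaskTemplate": {
#     "Placement": {
#       "Constraints": [ "node.role==worker", "node.labels.region==us-east" ]
#     }
#   }
#
# Every expression must match for a node to be eligible (AND semantics), as
# noted above.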
type: "array" items: type: "string" example: - "node.hostname!=node3.corp.example.com" - "node.role!=manager" - "node.labels.type==production" - "node.platform.os==linux" - "node.platform.arch==x86_64" Preferences: description: | Preferences provide a way to make the scheduler aware of factors such as topology. They are provided in order from highest to lowest precedence. type: "array" items: type: "object" properties: Spread: type: "object" properties: SpreadDescriptor: description: | label descriptor, such as `engine.labels.az`. type: "string" example: - Spread: SpreadDescriptor: "node.labels.datacenter" - Spread: SpreadDescriptor: "node.labels.rack" MaxReplicas: description: | Maximum number of replicas for per node (default value is 0, which is unlimited) type: "integer" format: "int64" default: 0 Platforms: description: | Platforms stores all the platforms that the service's image can run on. This field is used in the platform filter for scheduling. If empty, then the platform filter is off, meaning there are no scheduling restrictions. type: "array" items: $ref: "#/definitions/Platform" ForceUpdate: description: | A counter that triggers an update even if no relevant parameters have been changed. type: "integer" Runtime: description: | Runtime is the type of runtime specified for the task executor. type: "string" Networks: description: "Specifies which networks the service should attach to." type: "array" items: $ref: "#/definitions/NetworkAttachmentConfig" LogDriver: description: | Specifies the log driver to use for tasks created from this spec. If not present, the default one for the swarm will be used, finally falling back to the engine default if not specified. type: "object" properties: Name: type: "string" Options: type: "object" additionalProperties: type: "string" TaskState: type: "string" enum: - "new" - "allocated" - "pending" - "assigned" - "accepted" - "preparing" - "ready" - "starting" - "running" - "complete" - "shutdown" - "failed" - "rejected" - "remove" - "orphaned" Task: type: "object" properties: ID: description: "The ID of the task." type: "string" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" UpdatedAt: type: "string" format: "dateTime" Name: description: "Name of the task." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" Spec: $ref: "#/definitions/TaskSpec" ServiceID: description: "The ID of the service this task is part of." type: "string" Slot: type: "integer" NodeID: description: "The ID of the node that this task is on." type: "string" AssignedGenericResources: $ref: "#/definitions/GenericResources" Status: type: "object" properties: Timestamp: type: "string" format: "dateTime" State: $ref: "#/definitions/TaskState" Message: type: "string" Err: type: "string" ContainerStatus: type: "object" properties: ContainerID: type: "string" PID: type: "integer" ExitCode: type: "integer" DesiredState: $ref: "#/definitions/TaskState" JobIteration: description: | If the Service this Task belongs to is a job-mode service, contains the JobIteration of the Service this Task was created for. Absent if the Task was created for a Replicated or Global Service. 
$ref: "#/definitions/ObjectVersion" example: ID: "0kzzo1i0y4jz6027t0k7aezc7" Version: Index: 71 CreatedAt: "2016-06-07T21:07:31.171892745Z" UpdatedAt: "2016-06-07T21:07:31.376370513Z" Spec: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ServiceID: "9mnpnzenvg8p8tdbtq4wvbkcz" Slot: 1 NodeID: "60gvrl6tm78dmak4yl7srz94v" Status: Timestamp: "2016-06-07T21:07:31.290032978Z" State: "running" Message: "started" ContainerStatus: ContainerID: "e5d62702a1b48d01c3e02ca1e0212a250801fa8d67caca0b6f35919ebc12f035" PID: 677 DesiredState: "running" NetworksAttachments: - Network: ID: "4qvuz4ko70xaltuqbt8956gd1" Version: Index: 18 CreatedAt: "2016-06-07T20:31:11.912919752Z" UpdatedAt: "2016-06-07T21:07:29.955277358Z" Spec: Name: "ingress" Labels: com.docker.swarm.internal: "true" DriverConfiguration: {} IPAMOptions: Driver: {} Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" DriverState: Name: "overlay" Options: com.docker.network.driver.overlay.vxlanid_list: "256" IPAMOptions: Driver: Name: "default" Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" Addresses: - "10.255.0.10/16" AssignedGenericResources: - DiscreteResourceSpec: Kind: "SSD" Value: 3 - NamedResourceSpec: Kind: "GPU" Value: "UUID1" - NamedResourceSpec: Kind: "GPU" Value: "UUID2" ServiceSpec: description: "User modifiable configuration for a service." properties: Name: description: "Name of the service." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" TaskTemplate: $ref: "#/definitions/TaskSpec" Mode: description: "Scheduling mode for the service." type: "object" properties: Replicated: type: "object" properties: Replicas: type: "integer" format: "int64" Global: type: "object" ReplicatedJob: description: | The mode used for services with a finite number of tasks that run to a completed state. type: "object" properties: MaxConcurrent: description: | The maximum number of replicas to run simultaneously. type: "integer" format: "int64" default: 1 TotalCompletions: description: | The total number of replicas desired to reach the Completed state. If unset, will default to the value of `MaxConcurrent` type: "integer" format: "int64" GlobalJob: description: | The mode used for services which run a task to the completed state on each valid node. type: "object" UpdateConfig: description: "Specification for the update strategy of the service." type: "object" properties: Parallelism: description: | Maximum number of tasks to be updated in one iteration (0 means unlimited parallelism). type: "integer" format: "int64" Delay: description: "Amount of time between updates, in nanoseconds." type: "integer" format: "int64" FailureAction: description: | Action to take if an updated task fails to run, or stops running during the update. type: "string" enum: - "continue" - "pause" - "rollback" Monitor: description: | Amount of time to monitor each updated task for failures, in nanoseconds. type: "integer" format: "int64" MaxFailureRatio: description: | The fraction of tasks that may fail during an update before the failure action is invoked, specified as a floating point number between 0 and 1. type: "number" default: 0 Order: description: | The order of operations when rolling out an updated task. Either the old task is shut down before the new task is started, or the new task is started before the old task is shut down. 
type: "string" enum: - "stop-first" - "start-first" RollbackConfig: description: "Specification for the rollback strategy of the service." type: "object" properties: Parallelism: description: | Maximum number of tasks to be rolled back in one iteration (0 means unlimited parallelism). type: "integer" format: "int64" Delay: description: | Amount of time between rollback iterations, in nanoseconds. type: "integer" format: "int64" FailureAction: description: | Action to take if an rolled back task fails to run, or stops running during the rollback. type: "string" enum: - "continue" - "pause" Monitor: description: | Amount of time to monitor each rolled back task for failures, in nanoseconds. type: "integer" format: "int64" MaxFailureRatio: description: | The fraction of tasks that may fail during a rollback before the failure action is invoked, specified as a floating point number between 0 and 1. type: "number" default: 0 Order: description: | The order of operations when rolling back a task. Either the old task is shut down before the new task is started, or the new task is started before the old task is shut down. type: "string" enum: - "stop-first" - "start-first" Networks: description: "Specifies which networks the service should attach to." type: "array" items: $ref: "#/definitions/NetworkAttachmentConfig" EndpointSpec: $ref: "#/definitions/EndpointSpec" EndpointPortConfig: type: "object" properties: Name: type: "string" Protocol: type: "string" enum: - "tcp" - "udp" - "sctp" TargetPort: description: "The port inside the container." type: "integer" PublishedPort: description: "The port on the swarm hosts." type: "integer" PublishMode: description: | The mode in which port is published. <p><br /></p> - "ingress" makes the target port accessible on every node, regardless of whether there is a task for the service running on that node or not. - "host" bypasses the routing mesh and publish the port directly on the swarm node where that service is running. type: "string" enum: - "ingress" - "host" default: "ingress" example: "ingress" EndpointSpec: description: "Properties that can be configured to access and load balance a service." type: "object" properties: Mode: description: | The mode of resolution to use for internal load balancing between tasks. type: "string" enum: - "vip" - "dnsrr" default: "vip" Ports: description: | List of exposed ports that this service is accessible on from the outside. Ports can only be provided if `vip` resolution mode is used. type: "array" items: $ref: "#/definitions/EndpointPortConfig" Service: type: "object" properties: ID: type: "string" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" UpdatedAt: type: "string" format: "dateTime" Spec: $ref: "#/definitions/ServiceSpec" Endpoint: type: "object" properties: Spec: $ref: "#/definitions/EndpointSpec" Ports: type: "array" items: $ref: "#/definitions/EndpointPortConfig" VirtualIPs: type: "array" items: type: "object" properties: NetworkID: type: "string" Addr: type: "string" UpdateStatus: description: "The status of a service update." type: "object" properties: State: type: "string" enum: - "updating" - "paused" - "completed" StartedAt: type: "string" format: "dateTime" CompletedAt: type: "string" format: "dateTime" Message: type: "string" ServiceStatus: description: | The status of the service's tasks. Provided only when requested as part of a ServiceList operation. 
type: "object" properties: RunningTasks: description: | The number of tasks for the service currently in the Running state. type: "integer" format: "uint64" example: 7 DesiredTasks: description: | The number of tasks for the service desired to be running. For replicated services, this is the replica count from the service spec. For global services, this is computed by taking count of all tasks for the service with a Desired State other than Shutdown. type: "integer" format: "uint64" example: 10 CompletedTasks: description: | The number of tasks for a job that are in the Completed state. This field must be cross-referenced with the service type, as the value of 0 may mean the service is not in a job mode, or it may mean the job-mode service has no tasks yet Completed. type: "integer" format: "uint64" JobStatus: description: | The status of the service when it is in one of ReplicatedJob or GlobalJob modes. Absent on Replicated and Global mode services. The JobIteration is an ObjectVersion, but unlike the Service's version, does not need to be sent with an update request. type: "object" properties: JobIteration: description: | JobIteration is a value increased each time a Job is executed, successfully or otherwise. "Executed", in this case, means the job as a whole has been started, not that an individual Task has been launched. A job is "Executed" when its ServiceSpec is updated. JobIteration can be used to disambiguate Tasks belonging to different executions of a job. Though JobIteration will increase with each subsequent execution, it may not necessarily increase by 1, and so JobIteration should not be used to $ref: "#/definitions/ObjectVersion" LastExecution: description: | The last time, as observed by the server, that this job was started. type: "string" format: "dateTime" example: ID: "9mnpnzenvg8p8tdbtq4wvbkcz" Version: Index: 19 CreatedAt: "2016-06-07T21:05:51.880065305Z" UpdatedAt: "2016-06-07T21:07:29.962229872Z" Spec: Name: "hopeful_cori" TaskTemplate: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ForceUpdate: 0 Mode: Replicated: Replicas: 1 UpdateConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 RollbackConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 EndpointSpec: Mode: "vip" Ports: - Protocol: "tcp" TargetPort: 6379 PublishedPort: 30001 Endpoint: Spec: Mode: "vip" Ports: - Protocol: "tcp" TargetPort: 6379 PublishedPort: 30001 Ports: - Protocol: "tcp" TargetPort: 6379 PublishedPort: 30001 VirtualIPs: - NetworkID: "4qvuz4ko70xaltuqbt8956gd1" Addr: "10.255.0.2/16" - NetworkID: "4qvuz4ko70xaltuqbt8956gd1" Addr: "10.255.0.3/16" ImageDeleteResponseItem: type: "object" properties: Untagged: description: "The image ID of an image that was untagged" type: "string" Deleted: description: "The image ID of an image that was deleted" type: "string" ServiceUpdateResponse: type: "object" properties: Warnings: description: "Optional warning messages" type: "array" items: type: "string" example: Warning: "unable to pin image doesnotexist:latest to digest: image library/doesnotexist:latest not found" ContainerSummary: type: "array" items: type: "object" properties: Id: description: "The ID of this container" type: "string" x-go-name: "ID" Names: description: "The names that this container has been given" type: "array" items: type: "string" Image: description: "The name of the image used when creating 
this container" type: "string" ImageID: description: "The ID of the image that this container was created from" type: "string" Command: description: "Command to run when starting the container" type: "string" Created: description: "When the container was created" type: "integer" format: "int64" Ports: description: "The ports exposed by this container" type: "array" items: $ref: "#/definitions/Port" SizeRw: description: "The size of files that have been created or changed by this container" type: "integer" format: "int64" SizeRootFs: description: "The total size of all the files in this container" type: "integer" format: "int64" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" State: description: "The state of this container (e.g. `Exited`)" type: "string" Status: description: "Additional human-readable status of this container (e.g. `Exit 0`)" type: "string" HostConfig: type: "object" properties: NetworkMode: type: "string" NetworkSettings: description: "A summary of the container's network settings" type: "object" properties: Networks: type: "object" additionalProperties: $ref: "#/definitions/EndpointSettings" Mounts: type: "array" items: $ref: "#/definitions/Mount" Driver: description: "Driver represents a driver (network, logging, secrets)." type: "object" required: [Name] properties: Name: description: "Name of the driver." type: "string" x-nullable: false example: "some-driver" Options: description: "Key/value map of driver-specific options." type: "object" x-nullable: false additionalProperties: type: "string" example: OptionA: "value for driver-specific option A" OptionB: "value for driver-specific option B" SecretSpec: type: "object" properties: Name: description: "User-defined name of the secret." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" example: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Data: description: | Base64-url-safe-encoded ([RFC 4648](https://tools.ietf.org/html/rfc4648#section-5)) data to store as secret. This field is only used to _create_ a secret, and is not returned by other endpoints. type: "string" example: "" Driver: description: | Name of the secrets driver used to fetch the secret's value from an external secret store. $ref: "#/definitions/Driver" Templating: description: | Templating driver, if applicable Templating controls whether and how to evaluate the config payload as a template. If no driver is set, no templating is used. $ref: "#/definitions/Driver" Secret: type: "object" properties: ID: type: "string" example: "blt1owaxmitz71s9v5zh81zun" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" example: "2017-07-20T13:55:28.678958722Z" UpdatedAt: type: "string" format: "dateTime" example: "2017-07-20T13:55:28.678958722Z" Spec: $ref: "#/definitions/SecretSpec" ConfigSpec: type: "object" properties: Name: description: "User-defined name of the config." type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" Data: description: | Base64-url-safe-encoded ([RFC 4648](https://tools.ietf.org/html/rfc4648#section-5)) config data. type: "string" Templating: description: | Templating driver, if applicable Templating controls whether and how to evaluate the config payload as a template. If no driver is set, no templating is used. 
$ref: "#/definitions/Driver" Config: type: "object" properties: ID: type: "string" Version: $ref: "#/definitions/ObjectVersion" CreatedAt: type: "string" format: "dateTime" UpdatedAt: type: "string" format: "dateTime" Spec: $ref: "#/definitions/ConfigSpec" ContainerState: description: | ContainerState stores container's running state. It's part of ContainerJSONBase and will be returned by the "inspect" command. type: "object" properties: Status: description: | String representation of the container state. Can be one of "created", "running", "paused", "restarting", "removing", "exited", or "dead". type: "string" enum: ["created", "running", "paused", "restarting", "removing", "exited", "dead"] example: "running" Running: description: | Whether this container is running. Note that a running container can be _paused_. The `Running` and `Paused` booleans are not mutually exclusive: When pausing a container (on Linux), the freezer cgroup is used to suspend all processes in the container. Freezing the process requires the process to be running. As a result, paused containers are both `Running` _and_ `Paused`. Use the `Status` field instead to determine if a container's state is "running". type: "boolean" example: true Paused: description: "Whether this container is paused." type: "boolean" example: false Restarting: description: "Whether this container is restarting." type: "boolean" example: false OOMKilled: description: | Whether this container has been killed because it ran out of memory. type: "boolean" example: false Dead: type: "boolean" example: false Pid: description: "The process ID of this container" type: "integer" example: 1234 ExitCode: description: "The last exit code of this container" type: "integer" example: 0 Error: type: "string" StartedAt: description: "The time when this container was last started." type: "string" example: "2020-01-06T09:06:59.461876391Z" FinishedAt: description: "The time when this container last exited." type: "string" example: "2020-01-06T09:07:59.461876391Z" Health: x-nullable: true $ref: "#/definitions/Health" SystemVersion: type: "object" description: | Response of Engine API: GET "/version" properties: Platform: type: "object" required: [Name] properties: Name: type: "string" Components: type: "array" description: | Information about system components items: type: "object" x-go-name: ComponentVersion required: [Name, Version] properties: Name: description: | Name of the component type: "string" example: "Engine" Version: description: | Version of the component type: "string" x-nullable: false example: "19.03.12" Details: description: | Key/value pairs of strings with additional information about the component. These values are intended for informational purposes only, and their content is not defined, and not part of the API specification. These messages can be printed by the client as information to the user. type: "object" x-nullable: true Version: description: "The version of the daemon" type: "string" example: "19.03.12" ApiVersion: description: | The default (and highest) API version that is supported by the daemon type: "string" example: "1.40" MinAPIVersion: description: | The minimum API version that is supported by the daemon type: "string" example: "1.12" GitCommit: description: | The Git commit of the source code that was used to build the daemon type: "string" example: "48a66213fe" GoVersion: description: | The version Go used to compile the daemon, and the version of the Go runtime in use. 
type: "string" example: "go1.13.14" Os: description: | The operating system that the daemon is running on ("linux" or "windows") type: "string" example: "linux" Arch: description: | The architecture that the daemon is running on type: "string" example: "amd64" KernelVersion: description: | The kernel version (`uname -r`) that the daemon is running on. This field is omitted when empty. type: "string" example: "4.19.76-linuxkit" Experimental: description: | Indicates if the daemon is started with experimental features enabled. This field is omitted when empty / false. type: "boolean" example: true BuildTime: description: | The date and time that the daemon was compiled. type: "string" example: "2020-06-22T15:49:27.000000000+00:00" SystemInfo: type: "object" properties: ID: description: | Unique identifier of the daemon. <p><br /></p> > **Note**: The format of the ID itself is not part of the API, and > should not be considered stable. type: "string" example: "7TRN:IPZB:QYBB:VPBQ:UMPP:KARE:6ZNR:XE6T:7EWV:PKF4:ZOJD:TPYS" Containers: description: "Total number of containers on the host." type: "integer" example: 14 ContainersRunning: description: | Number of containers with status `"running"`. type: "integer" example: 3 ContainersPaused: description: | Number of containers with status `"paused"`. type: "integer" example: 1 ContainersStopped: description: | Number of containers with status `"stopped"`. type: "integer" example: 10 Images: description: | Total number of images on the host. Both _tagged_ and _untagged_ (dangling) images are counted. type: "integer" example: 508 Driver: description: "Name of the storage driver in use." type: "string" example: "overlay2" DriverStatus: description: | Information specific to the storage driver, provided as "label" / "value" pairs. This information is provided by the storage driver, and formatted in a way consistent with the output of `docker info` on the command line. <p><br /></p> > **Note**: The information returned in this field, including the > formatting of values and labels, should not be considered stable, > and may change without notice. type: "array" items: type: "array" items: type: "string" example: - ["Backing Filesystem", "extfs"] - ["Supports d_type", "true"] - ["Native Overlay Diff", "true"] DockerRootDir: description: | Root directory of persistent Docker state. Defaults to `/var/lib/docker` on Linux, and `C:\ProgramData\docker` on Windows. type: "string" example: "/var/lib/docker" Plugins: $ref: "#/definitions/PluginsInfo" MemoryLimit: description: "Indicates if the host has memory limit support enabled." type: "boolean" example: true SwapLimit: description: "Indicates if the host has memory swap limit support enabled." type: "boolean" example: true KernelMemory: description: | Indicates if the host has kernel memory limit support enabled. <p><br /></p> > **Deprecated**: This field is deprecated as the kernel 5.4 deprecated > `kmem.limit_in_bytes`. type: "boolean" example: true CpuCfsPeriod: description: | Indicates if CPU CFS(Completely Fair Scheduler) period is supported by the host. type: "boolean" example: true CpuCfsQuota: description: | Indicates if CPU CFS(Completely Fair Scheduler) quota is supported by the host. type: "boolean" example: true CPUShares: description: | Indicates if CPU Shares limiting is supported by the host. type: "boolean" example: true CPUSet: description: | Indicates if CPUsets (cpuset.cpus, cpuset.mems) are supported by the host. 
See [cpuset(7)](https://www.kernel.org/doc/Documentation/cgroup-v1/cpusets.txt) type: "boolean" example: true PidsLimit: description: "Indicates if the host kernel has PID limit support enabled." type: "boolean" example: true OomKillDisable: description: "Indicates if OOM killer disable is supported on the host." type: "boolean" IPv4Forwarding: description: "Indicates IPv4 forwarding is enabled." type: "boolean" example: true BridgeNfIptables: description: "Indicates if `bridge-nf-call-iptables` is available on the host." type: "boolean" example: true BridgeNfIp6tables: description: "Indicates if `bridge-nf-call-ip6tables` is available on the host." type: "boolean" example: true Debug: description: | Indicates if the daemon is running in debug-mode / with debug-level logging enabled. type: "boolean" example: true NFd: description: | The total number of file Descriptors in use by the daemon process. This information is only returned if debug-mode is enabled. type: "integer" example: 64 NGoroutines: description: | The number of goroutines that currently exist. This information is only returned if debug-mode is enabled. type: "integer" example: 174 SystemTime: description: | Current system-time in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds. type: "string" example: "2017-08-08T20:28:29.06202363Z" LoggingDriver: description: | The logging driver to use as a default for new containers. type: "string" CgroupDriver: description: | The driver to use for managing cgroups. type: "string" enum: ["cgroupfs", "systemd", "none"] default: "cgroupfs" example: "cgroupfs" CgroupVersion: description: | The version of the cgroup. type: "string" enum: ["1", "2"] default: "1" example: "1" NEventsListener: description: "Number of event listeners subscribed." type: "integer" example: 30 KernelVersion: description: | Kernel version of the host. On Linux, this information obtained from `uname`. On Windows this information is queried from the <kbd>HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\</kbd> registry value, for example _"10.0 14393 (14393.1198.amd64fre.rs1_release_sec.170427-1353)"_. type: "string" example: "4.9.38-moby" OperatingSystem: description: | Name of the host's operating system, for example: "Ubuntu 16.04.2 LTS" or "Windows Server 2016 Datacenter" type: "string" example: "Alpine Linux v3.5" OSVersion: description: | Version of the host's operating system <p><br /></p> > **Note**: The information returned in this field, including its > very existence, and the formatting of values, should not be considered > stable, and may change without notice. type: "string" example: "16.04" OSType: description: | Generic type of the operating system of the host, as returned by the Go runtime (`GOOS`). Currently returned values are "linux" and "windows". A full list of possible values can be found in the [Go documentation](https://golang.org/doc/install/source#environment). type: "string" example: "linux" Architecture: description: | Hardware architecture of the host, as returned by the Go runtime (`GOARCH`). A full list of possible values can be found in the [Go documentation](https://golang.org/doc/install/source#environment). type: "string" example: "x86_64" NCPU: description: | The number of logical CPUs usable by the daemon. The number of available CPUs is checked by querying the operating system when the daemon starts. Changes to operating system CPU allocation after the daemon is started are not reflected. 
type: "integer" example: 4 MemTotal: description: | Total amount of physical memory available on the host, in bytes. type: "integer" format: "int64" example: 2095882240 IndexServerAddress: description: | Address / URL of the index server that is used for image search, and as a default for user authentication for Docker Hub and Docker Cloud. default: "https://index.docker.io/v1/" type: "string" example: "https://index.docker.io/v1/" RegistryConfig: $ref: "#/definitions/RegistryServiceConfig" GenericResources: $ref: "#/definitions/GenericResources" HttpProxy: description: | HTTP-proxy configured for the daemon. This value is obtained from the [`HTTP_PROXY`](https://www.gnu.org/software/wget/manual/html_node/Proxies.html) environment variable. Credentials ([user info component](https://tools.ietf.org/html/rfc3986#section-3.2.1)) in the proxy URL are masked in the API response. Containers do not automatically inherit this configuration. type: "string" example: "http://xxxxx:[email protected]:8080" HttpsProxy: description: | HTTPS-proxy configured for the daemon. This value is obtained from the [`HTTPS_PROXY`](https://www.gnu.org/software/wget/manual/html_node/Proxies.html) environment variable. Credentials ([user info component](https://tools.ietf.org/html/rfc3986#section-3.2.1)) in the proxy URL are masked in the API response. Containers do not automatically inherit this configuration. type: "string" example: "https://xxxxx:[email protected]:4443" NoProxy: description: | Comma-separated list of domain extensions for which no proxy should be used. This value is obtained from the [`NO_PROXY`](https://www.gnu.org/software/wget/manual/html_node/Proxies.html) environment variable. Containers do not automatically inherit this configuration. type: "string" example: "*.local, 169.254/16" Name: description: "Hostname of the host." type: "string" example: "node5.corp.example.com" Labels: description: | User-defined labels (key/value metadata) as set on the daemon. <p><br /></p> > **Note**: When part of a Swarm, nodes can both have _daemon_ labels, > set through the daemon configuration, and _node_ labels, set from a > manager node in the Swarm. Node labels are not included in this > field. Node labels can be retrieved using the `/nodes/(id)` endpoint > on a manager node in the Swarm. type: "array" items: type: "string" example: ["storage=ssd", "production"] ExperimentalBuild: description: | Indicates if experimental features are enabled on the daemon. type: "boolean" example: true ServerVersion: description: | Version string of the daemon. > **Note**: the [standalone Swarm API](https://docs.docker.com/swarm/swarm-api/) > returns the Swarm version instead of the daemon version, for example > `swarm/1.2.8`. type: "string" example: "17.06.0-ce" ClusterStore: description: | URL of the distributed storage backend. The storage backend is used for multihost networking (to store network and endpoint information) and by the node discovery mechanism. <p><br /></p> > **Deprecated**: This field is only propagated when using standalone Swarm > mode, and overlay networking using an external k/v store. Overlay > networks with Swarm mode enabled use the built-in raft store, and > this field will be empty. type: "string" example: "consul://consul.corp.example.com:8600/some/path" ClusterAdvertise: description: | The network endpoint that the Engine advertises for the purpose of node discovery. ClusterAdvertise is a `host:port` combination on which the daemon is reachable by other hosts. 
<p><br /></p> > **Deprecated**: This field is only propagated when using standalone Swarm > mode, and overlay networking using an external k/v store. Overlay > networks with Swarm mode enabled use the built-in raft store, and > this field will be empty. type: "string" example: "node5.corp.example.com:8000" Runtimes: description: | List of [OCI compliant](https://github.com/opencontainers/runtime-spec) runtimes configured on the daemon. Keys hold the "name" used to reference the runtime. The Docker daemon relies on an OCI compliant runtime (invoked via the `containerd` daemon) as its interface to the Linux kernel namespaces, cgroups, and SELinux. The default runtime is `runc`, and automatically configured. Additional runtimes can be configured by the user and will be listed here. type: "object" additionalProperties: $ref: "#/definitions/Runtime" default: runc: path: "runc" example: runc: path: "runc" runc-master: path: "/go/bin/runc" custom: path: "/usr/local/bin/my-oci-runtime" runtimeArgs: ["--debug", "--systemd-cgroup=false"] DefaultRuntime: description: | Name of the default OCI runtime that is used when starting containers. The default can be overridden per-container at create time. type: "string" default: "runc" example: "runc" Swarm: $ref: "#/definitions/SwarmInfo" LiveRestoreEnabled: description: | Indicates if live restore is enabled. If enabled, containers are kept running when the daemon is shutdown or upon daemon start if running containers are detected. type: "boolean" default: false example: false Isolation: description: | Represents the isolation technology to use as a default for containers. The supported values are platform-specific. If no isolation value is specified on daemon start, on Windows client, the default is `hyperv`, and on Windows server, the default is `process`. This option is currently not used on other platforms. default: "default" type: "string" enum: - "default" - "hyperv" - "process" InitBinary: description: | Name and, optional, path of the `docker-init` binary. If the path is omitted, the daemon searches the host's `$PATH` for the binary and uses the first result. type: "string" example: "docker-init" ContainerdCommit: $ref: "#/definitions/Commit" RuncCommit: $ref: "#/definitions/Commit" InitCommit: $ref: "#/definitions/Commit" SecurityOptions: description: | List of security features that are enabled on the daemon, such as apparmor, seccomp, SELinux, user-namespaces (userns), and rootless. Additional configuration options for each security feature may be present, and are included as a comma-separated list of key/value pairs. type: "array" items: type: "string" example: - "name=apparmor" - "name=seccomp,profile=default" - "name=selinux" - "name=userns" - "name=rootless" ProductLicense: description: | Reports a summary of the product license on the daemon. If a commercial license has been applied to the daemon, information such as number of nodes, and expiration are included. type: "string" example: "Community Engine" DefaultAddressPools: description: | List of custom default address pools for local networks, which can be specified in the daemon.json file or dockerd option. Example: a Base "10.10.0.0/16" with Size 24 will define the set of 256 10.10.[0-255].0/24 address pools. 
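# Illustrative sketch (hypothetical daemon.json snippet): the address pools
# described above are typically configured as
#
#   {
#     "default-address-pools": [
#       { "base": "10.10.0.0/16", "size": 24 }
#     ]
#   }
#
# which yields the 256 /24 subnets 10.10.0.0/24 through 10.10.255.0/24 mentioned
# in the example above.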
type: "array" items: type: "object" properties: Base: description: "The network address in CIDR format" type: "string" example: "10.10.0.0/16" Size: description: "The network pool size" type: "integer" example: "24" Warnings: description: | List of warnings / informational messages about missing features, or issues related to the daemon configuration. These messages can be printed by the client as information to the user. type: "array" items: type: "string" example: - "WARNING: No memory limit support" - "WARNING: bridge-nf-call-iptables is disabled" - "WARNING: bridge-nf-call-ip6tables is disabled" # PluginsInfo is a temp struct holding Plugins name # registered with docker daemon. It is used by Info struct PluginsInfo: description: | Available plugins per type. <p><br /></p> > **Note**: Only unmanaged (V1) plugins are included in this list. > V1 plugins are "lazily" loaded, and are not returned in this list > if there is no resource using the plugin. type: "object" properties: Volume: description: "Names of available volume-drivers, and network-driver plugins." type: "array" items: type: "string" example: ["local"] Network: description: "Names of available network-drivers, and network-driver plugins." type: "array" items: type: "string" example: ["bridge", "host", "ipvlan", "macvlan", "null", "overlay"] Authorization: description: "Names of available authorization plugins." type: "array" items: type: "string" example: ["img-authz-plugin", "hbm"] Log: description: "Names of available logging-drivers, and logging-driver plugins." type: "array" items: type: "string" example: ["awslogs", "fluentd", "gcplogs", "gelf", "journald", "json-file", "logentries", "splunk", "syslog"] RegistryServiceConfig: description: | RegistryServiceConfig stores daemon registry services configuration. type: "object" x-nullable: true properties: AllowNondistributableArtifactsCIDRs: description: | List of IP ranges to which nondistributable artifacts can be pushed, using the CIDR syntax [RFC 4632](https://tools.ietf.org/html/4632). Some images (for example, Windows base images) contain artifacts whose distribution is restricted by license. When these images are pushed to a registry, restricted artifacts are not included. This configuration override this behavior, and enables the daemon to push nondistributable artifacts to all registries whose resolved IP address is within the subnet described by the CIDR syntax. This option is useful when pushing images containing nondistributable artifacts to a registry on an air-gapped network so hosts on that network can pull the images without connecting to another server. > **Warning**: Nondistributable artifacts typically have restrictions > on how and where they can be distributed and shared. Only use this > feature to push artifacts to private registries and ensure that you > are in compliance with any terms that cover redistributing > nondistributable artifacts. type: "array" items: type: "string" example: ["::1/128", "127.0.0.0/8"] AllowNondistributableArtifactsHostnames: description: | List of registry hostnames to which nondistributable artifacts can be pushed, using the format `<hostname>[:<port>]` or `<IP address>[:<port>]`. Some images (for example, Windows base images) contain artifacts whose distribution is restricted by license. When these images are pushed to a registry, restricted artifacts are not included. This configuration override this behavior for the specified registries. 
This option is useful when pushing images containing nondistributable artifacts to a registry on an air-gapped network so hosts on that network can pull the images without connecting to another server. > **Warning**: Nondistributable artifacts typically have restrictions > on how and where they can be distributed and shared. Only use this > feature to push artifacts to private registries and ensure that you > are in compliance with any terms that cover redistributing > nondistributable artifacts. type: "array" items: type: "string" example: ["registry.internal.corp.example.com:3000", "[2001:db8:a0b:12f0::1]:443"] InsecureRegistryCIDRs: description: | List of IP ranges of insecure registries, using the CIDR syntax ([RFC 4632](https://tools.ietf.org/html/4632)). Insecure registries accept un-encrypted (HTTP) and/or untrusted (HTTPS with certificates from unknown CAs) communication. By default, local registries (`127.0.0.0/8`) are configured as insecure. All other registries are secure. Communicating with an insecure registry is not possible if the daemon assumes that registry is secure. This configuration override this behavior, insecure communication with registries whose resolved IP address is within the subnet described by the CIDR syntax. Registries can also be marked insecure by hostname. Those registries are listed under `IndexConfigs` and have their `Secure` field set to `false`. > **Warning**: Using this option can be useful when running a local > registry, but introduces security vulnerabilities. This option > should therefore ONLY be used for testing purposes. For increased > security, users should add their CA to their system's list of trusted > CAs instead of enabling this option. type: "array" items: type: "string" example: ["::1/128", "127.0.0.0/8"] IndexConfigs: type: "object" additionalProperties: $ref: "#/definitions/IndexInfo" example: "127.0.0.1:5000": "Name": "127.0.0.1:5000" "Mirrors": [] "Secure": false "Official": false "[2001:db8:a0b:12f0::1]:80": "Name": "[2001:db8:a0b:12f0::1]:80" "Mirrors": [] "Secure": false "Official": false "docker.io": Name: "docker.io" Mirrors: ["https://hub-mirror.corp.example.com:5000/"] Secure: true Official: true "registry.internal.corp.example.com:3000": Name: "registry.internal.corp.example.com:3000" Mirrors: [] Secure: false Official: false Mirrors: description: | List of registry URLs that act as a mirror for the official (`docker.io`) registry. type: "array" items: type: "string" example: - "https://hub-mirror.corp.example.com:5000/" - "https://[2001:db8:a0b:12f0::1]/" IndexInfo: description: IndexInfo contains information about a registry. type: "object" x-nullable: true properties: Name: description: | Name of the registry, such as "docker.io". type: "string" example: "docker.io" Mirrors: description: | List of mirrors, expressed as URIs. type: "array" items: type: "string" example: - "https://hub-mirror.corp.example.com:5000/" - "https://registry-2.docker.io/" - "https://registry-3.docker.io/" Secure: description: | Indicates if the registry is part of the list of insecure registries. If `false`, the registry is insecure. Insecure registries accept un-encrypted (HTTP) and/or untrusted (HTTPS with certificates from unknown CAs) communication. > **Warning**: Insecure registries can be useful when running a local > registry. However, because its use creates security vulnerabilities > it should ONLY be enabled for testing purposes. 
For increased > security, users should add their CA to their system's list of > trusted CAs instead of enabling this option. type: "boolean" example: true Official: description: | Indicates whether this is an official registry (i.e., Docker Hub / docker.io) type: "boolean" example: true Runtime: description: | Runtime describes an [OCI compliant](https://github.com/opencontainers/runtime-spec) runtime. The runtime is invoked by the daemon via the `containerd` daemon. OCI runtimes act as an interface to the Linux kernel namespaces, cgroups, and SELinux. type: "object" properties: path: description: | Name and, optional, path, of the OCI executable binary. If the path is omitted, the daemon searches the host's `$PATH` for the binary and uses the first result. type: "string" example: "/usr/local/bin/my-oci-runtime" runtimeArgs: description: | List of command-line arguments to pass to the runtime when invoked. type: "array" x-nullable: true items: type: "string" example: ["--debug", "--systemd-cgroup=false"] Commit: description: | Commit holds the Git-commit (SHA1) that a binary was built from, as reported in the version-string of external tools, such as `containerd`, or `runC`. type: "object" properties: ID: description: "Actual commit ID of external tool." type: "string" example: "cfb82a876ecc11b5ca0977d1733adbe58599088a" Expected: description: | Commit ID of external tool expected by dockerd as set at build time. type: "string" example: "2d41c047c83e09a6d61d464906feb2a2f3c52aa4" SwarmInfo: description: | Represents generic information about swarm. type: "object" properties: NodeID: description: "Unique identifier of for this node in the swarm." type: "string" default: "" example: "k67qz4598weg5unwwffg6z1m1" NodeAddr: description: | IP address at which this node can be reached by other nodes in the swarm. type: "string" default: "" example: "10.0.0.46" LocalNodeState: $ref: "#/definitions/LocalNodeState" ControlAvailable: type: "boolean" default: false example: true Error: type: "string" default: "" RemoteManagers: description: | List of ID's and addresses of other managers in the swarm. type: "array" default: null x-nullable: true items: $ref: "#/definitions/PeerNode" example: - NodeID: "71izy0goik036k48jg985xnds" Addr: "10.0.0.158:2377" - NodeID: "79y6h1o4gv8n120drcprv5nmc" Addr: "10.0.0.159:2377" - NodeID: "k67qz4598weg5unwwffg6z1m1" Addr: "10.0.0.46:2377" Nodes: description: "Total number of nodes in the swarm." type: "integer" x-nullable: true example: 4 Managers: description: "Total number of managers in the swarm." type: "integer" x-nullable: true example: 3 Cluster: $ref: "#/definitions/ClusterInfo" LocalNodeState: description: "Current local status of this node." type: "string" default: "" enum: - "" - "inactive" - "pending" - "active" - "error" - "locked" example: "active" PeerNode: description: "Represents a peer-node in the swarm" properties: NodeID: description: "Unique identifier of for this node in the swarm." type: "string" Addr: description: | IP address and ports at which this node can be reached. type: "string" NetworkAttachmentConfig: description: | Specifies how a service should be attached to a particular network. type: "object" properties: Target: description: | The target network for attachment. Must be a network name or ID. type: "string" Aliases: description: | Discoverable alternate names for the service on this network. type: "array" items: type: "string" DriverOpts: description: | Driver attachment options for the network target. 
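The definitions in this area (`RegistryServiceConfig`, `IndexInfo`, `SwarmInfo`, `PeerNode`) describe fields that the daemon reports through its `GET /info` endpoint. Below is a minimal sketch of reading a few of those fields directly over the daemon's Unix socket, without the official Go client; the socket path, the `v1.41` version prefix, and the exact top-level field names (`Swarm`, `RegistryConfig`) are assumptions to adjust for your daemon and API version.

```go
// Minimal sketch: print a few swarm/registry fields from GET /info.
// Socket path, version prefix, and response field layout are assumptions.
package main

import (
	"context"
	"encoding/json"
	"fmt"
	"net"
	"net/http"
)

func main() {
	cli := &http.Client{Transport: &http.Transport{
		DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
			return net.Dial("unix", "/var/run/docker.sock")
		},
	}}

	resp, err := cli.Get("http://localhost/v1.41/info")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Decode only the fields we care about; unknown fields are ignored.
	var info struct {
		Swarm struct {
			NodeID         string
			LocalNodeState string
		}
		RegistryConfig struct {
			Mirrors               []string
			InsecureRegistryCIDRs []string
		}
	}
	if err := json.NewDecoder(resp.Body).Decode(&info); err != nil {
		panic(err)
	}
	fmt.Printf("swarm: %s (%s)\n", info.Swarm.LocalNodeState, info.Swarm.NodeID)
	fmt.Printf("registry mirrors: %v\n", info.RegistryConfig.Mirrors)
	fmt.Printf("insecure CIDRs: %v\n", info.RegistryConfig.InsecureRegistryCIDRs)
}
```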
type: "object" additionalProperties: type: "string" paths: /containers/json: get: summary: "List containers" description: | Returns a list of containers. For details on the format, see the [inspect endpoint](#operation/ContainerInspect). Note that it uses a different, smaller representation of a container than inspecting a single container. For example, the list of linked containers is not propagated . operationId: "ContainerList" produces: - "application/json" parameters: - name: "all" in: "query" description: | Return all containers. By default, only running containers are shown. type: "boolean" default: false - name: "limit" in: "query" description: | Return this number of most recently created containers, including non-running ones. type: "integer" - name: "size" in: "query" description: | Return the size of container as fields `SizeRw` and `SizeRootFs`. type: "boolean" default: false - name: "filters" in: "query" description: | Filters to process on the container list, encoded as JSON (a `map[string][]string`). For example, `{"status": ["paused"]}` will only return paused containers. Available filters: - `ancestor`=(`<image-name>[:<tag>]`, `<image id>`, or `<image@digest>`) - `before`=(`<container id>` or `<container name>`) - `expose`=(`<port>[/<proto>]`|`<startport-endport>/[<proto>]`) - `exited=<int>` containers with exit code of `<int>` - `health`=(`starting`|`healthy`|`unhealthy`|`none`) - `id=<ID>` a container's ID - `isolation=`(`default`|`process`|`hyperv`) (Windows daemon only) - `is-task=`(`true`|`false`) - `label=key` or `label="key=value"` of a container label - `name=<name>` a container's name - `network`=(`<network id>` or `<network name>`) - `publish`=(`<port>[/<proto>]`|`<startport-endport>/[<proto>]`) - `since`=(`<container id>` or `<container name>`) - `status=`(`created`|`restarting`|`running`|`removing`|`paused`|`exited`|`dead`) - `volume`=(`<volume name>` or `<mount point destination>`) type: "string" responses: 200: description: "no error" schema: $ref: "#/definitions/ContainerSummary" examples: application/json: - Id: "8dfafdbc3a40" Names: - "/boring_feynman" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 1" Created: 1367854155 State: "Exited" Status: "Exit 0" Ports: - PrivatePort: 2222 PublicPort: 3333 Type: "tcp" Labels: com.example.vendor: "Acme" com.example.license: "GPL" com.example.version: "1.0" SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "2cdc4edb1ded3631c81f57966563e5c8525b81121bb3706a9a9a3ae102711f3f" Gateway: "172.17.0.1" IPAddress: "172.17.0.2" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:02" Mounts: - Name: "fac362...80535" Source: "/data" Destination: "/data" Driver: "local" Mode: "ro,Z" RW: false Propagation: "" - Id: "9cd87474be90" Names: - "/coolName" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 222222" Created: 1367854155 State: "Exited" Status: "Exit 0" Ports: [] Labels: {} SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "88eaed7b37b38c2a3f0c4bc796494fdf51b270c2d22656412a2ca5d559a64d7a" Gateway: "172.17.0.1" IPAddress: "172.17.0.8" IPPrefixLen: 16 IPv6Gateway: "" 
GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:08" Mounts: [] - Id: "3176a2479c92" Names: - "/sleepy_dog" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 3333333333333333" Created: 1367854154 State: "Exited" Status: "Exit 0" Ports: [] Labels: {} SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "8b27c041c30326d59cd6e6f510d4f8d1d570a228466f956edf7815508f78e30d" Gateway: "172.17.0.1" IPAddress: "172.17.0.6" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:06" Mounts: [] - Id: "4cb07b47f9fb" Names: - "/running_cat" Image: "ubuntu:latest" ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82" Command: "echo 444444444444444444444444444444444" Created: 1367854152 State: "Exited" Status: "Exit 0" Ports: [] Labels: {} SizeRw: 12288 SizeRootFs: 0 HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "d91c7b2f0644403d7ef3095985ea0e2370325cd2332ff3a3225c4247328e66e9" Gateway: "172.17.0.1" IPAddress: "172.17.0.5" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:11:00:05" Mounts: [] 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Container"] /containers/create: post: summary: "Create a container" operationId: "ContainerCreate" consumes: - "application/json" - "application/octet-stream" produces: - "application/json" parameters: - name: "name" in: "query" description: | Assign the specified name to the container. Must match `/?[a-zA-Z0-9][a-zA-Z0-9_.-]+`. 
type: "string" pattern: "^/?[a-zA-Z0-9][a-zA-Z0-9_.-]+$" - name: "body" in: "body" description: "Container to create" schema: allOf: - $ref: "#/definitions/ContainerConfig" - type: "object" properties: HostConfig: $ref: "#/definitions/HostConfig" NetworkingConfig: $ref: "#/definitions/NetworkingConfig" example: Hostname: "" Domainname: "" User: "" AttachStdin: false AttachStdout: true AttachStderr: true Tty: false OpenStdin: false StdinOnce: false Env: - "FOO=bar" - "BAZ=quux" Cmd: - "date" Entrypoint: "" Image: "ubuntu" Labels: com.example.vendor: "Acme" com.example.license: "GPL" com.example.version: "1.0" Volumes: /volumes/data: {} WorkingDir: "" NetworkDisabled: false MacAddress: "12:34:56:78:9a:bc" ExposedPorts: 22/tcp: {} StopSignal: "SIGTERM" StopTimeout: 10 HostConfig: Binds: - "/tmp:/tmp" Links: - "redis3:redis" Memory: 0 MemorySwap: 0 MemoryReservation: 0 KernelMemory: 0 NanoCpus: 500000 CpuPercent: 80 CpuShares: 512 CpuPeriod: 100000 CpuRealtimePeriod: 1000000 CpuRealtimeRuntime: 10000 CpuQuota: 50000 CpusetCpus: "0,1" CpusetMems: "0,1" MaximumIOps: 0 MaximumIOBps: 0 BlkioWeight: 300 BlkioWeightDevice: - {} BlkioDeviceReadBps: - {} BlkioDeviceReadIOps: - {} BlkioDeviceWriteBps: - {} BlkioDeviceWriteIOps: - {} DeviceRequests: - Driver: "nvidia" Count: -1 DeviceIDs": ["0", "1", "GPU-fef8089b-4820-abfc-e83e-94318197576e"] Capabilities: [["gpu", "nvidia", "compute"]] Options: property1: "string" property2: "string" MemorySwappiness: 60 OomKillDisable: false OomScoreAdj: 500 PidMode: "" PidsLimit: 0 PortBindings: 22/tcp: - HostPort: "11022" PublishAllPorts: false Privileged: false ReadonlyRootfs: false Dns: - "8.8.8.8" DnsOptions: - "" DnsSearch: - "" VolumesFrom: - "parent" - "other:ro" CapAdd: - "NET_ADMIN" CapDrop: - "MKNOD" GroupAdd: - "newgroup" RestartPolicy: Name: "" MaximumRetryCount: 0 AutoRemove: true NetworkMode: "bridge" Devices: [] Ulimits: - {} LogConfig: Type: "json-file" Config: {} SecurityOpt: [] StorageOpt: {} CgroupParent: "" VolumeDriver: "" ShmSize: 67108864 NetworkingConfig: EndpointsConfig: isolated_nw: IPAMConfig: IPv4Address: "172.20.30.33" IPv6Address: "2001:db8:abcd::3033" LinkLocalIPs: - "169.254.34.68" - "fe80::3468" Links: - "container_1" - "container_2" Aliases: - "server_x" - "server_y" required: true responses: 201: description: "Container created successfully" schema: type: "object" title: "ContainerCreateResponse" description: "OK response to ContainerCreate operation" required: [Id, Warnings] properties: Id: description: "The ID of the created container" type: "string" x-nullable: false Warnings: description: "Warnings encountered when creating the container" type: "array" x-nullable: false items: type: "string" examples: application/json: Id: "e90e34656806" Warnings: [] 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such image" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such image: c2ada9df5af8" 409: description: "conflict" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Container"] /containers/{id}/json: get: summary: "Inspect a container" description: "Return low-level information about a container." 
operationId: "ContainerInspect" produces: - "application/json" responses: 200: description: "no error" schema: type: "object" title: "ContainerInspectResponse" properties: Id: description: "The ID of the container" type: "string" Created: description: "The time the container was created" type: "string" Path: description: "The path to the command being run" type: "string" Args: description: "The arguments to the command being run" type: "array" items: type: "string" State: x-nullable: true $ref: "#/definitions/ContainerState" Image: description: "The container's image ID" type: "string" ResolvConfPath: type: "string" HostnamePath: type: "string" HostsPath: type: "string" LogPath: type: "string" Name: type: "string" RestartCount: type: "integer" Driver: type: "string" Platform: type: "string" MountLabel: type: "string" ProcessLabel: type: "string" AppArmorProfile: type: "string" ExecIDs: description: "IDs of exec instances that are running in the container." type: "array" items: type: "string" x-nullable: true HostConfig: $ref: "#/definitions/HostConfig" GraphDriver: $ref: "#/definitions/GraphDriverData" SizeRw: description: | The size of files that have been created or changed by this container. type: "integer" format: "int64" SizeRootFs: description: "The total size of all the files in this container." type: "integer" format: "int64" Mounts: type: "array" items: $ref: "#/definitions/MountPoint" Config: $ref: "#/definitions/ContainerConfig" NetworkSettings: $ref: "#/definitions/NetworkSettings" examples: application/json: AppArmorProfile: "" Args: - "-c" - "exit 9" Config: AttachStderr: true AttachStdin: false AttachStdout: true Cmd: - "/bin/sh" - "-c" - "exit 9" Domainname: "" Env: - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" Healthcheck: Test: ["CMD-SHELL", "exit 0"] Hostname: "ba033ac44011" Image: "ubuntu" Labels: com.example.vendor: "Acme" com.example.license: "GPL" com.example.version: "1.0" MacAddress: "" NetworkDisabled: false OpenStdin: false StdinOnce: false Tty: false User: "" Volumes: /volumes/data: {} WorkingDir: "" StopSignal: "SIGTERM" StopTimeout: 10 Created: "2015-01-06T15:47:31.485331387Z" Driver: "devicemapper" ExecIDs: - "b35395de42bc8abd327f9dd65d913b9ba28c74d2f0734eeeae84fa1c616a0fca" - "3fc1232e5cd20c8de182ed81178503dc6437f4e7ef12b52cc5e8de020652f1c4" HostConfig: MaximumIOps: 0 MaximumIOBps: 0 BlkioWeight: 0 BlkioWeightDevice: - {} BlkioDeviceReadBps: - {} BlkioDeviceWriteBps: - {} BlkioDeviceReadIOps: - {} BlkioDeviceWriteIOps: - {} ContainerIDFile: "" CpusetCpus: "" CpusetMems: "" CpuPercent: 80 CpuShares: 0 CpuPeriod: 100000 CpuRealtimePeriod: 1000000 CpuRealtimeRuntime: 10000 Devices: [] DeviceRequests: - Driver: "nvidia" Count: -1 DeviceIDs": ["0", "1", "GPU-fef8089b-4820-abfc-e83e-94318197576e"] Capabilities: [["gpu", "nvidia", "compute"]] Options: property1: "string" property2: "string" IpcMode: "" LxcConf: [] Memory: 0 MemorySwap: 0 MemoryReservation: 0 KernelMemory: 0 OomKillDisable: false OomScoreAdj: 500 NetworkMode: "bridge" PidMode: "" PortBindings: {} Privileged: false ReadonlyRootfs: false PublishAllPorts: false RestartPolicy: MaximumRetryCount: 2 Name: "on-failure" LogConfig: Type: "json-file" Sysctls: net.ipv4.ip_forward: "1" Ulimits: - {} VolumeDriver: "" ShmSize: 67108864 HostnamePath: "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/hostname" HostsPath: "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/hosts" LogPath: 
"/var/lib/docker/containers/1eb5fabf5a03807136561b3c00adcd2992b535d624d5e18b6cdc6a6844d9767b/1eb5fabf5a03807136561b3c00adcd2992b535d624d5e18b6cdc6a6844d9767b-json.log" Id: "ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39" Image: "04c5d3b7b0656168630d3ba35d8889bd0e9caafcaeb3004d2bfbc47e7c5d35d2" MountLabel: "" Name: "/boring_euclid" NetworkSettings: Bridge: "" SandboxID: "" HairpinMode: false LinkLocalIPv6Address: "" LinkLocalIPv6PrefixLen: 0 SandboxKey: "" EndpointID: "" Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 IPAddress: "" IPPrefixLen: 0 IPv6Gateway: "" MacAddress: "" Networks: bridge: NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812" EndpointID: "7587b82f0dada3656fda26588aee72630c6fab1536d36e394b2bfbcf898c971d" Gateway: "172.17.0.1" IPAddress: "172.17.0.2" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:12:00:02" Path: "/bin/sh" ProcessLabel: "" ResolvConfPath: "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/resolv.conf" RestartCount: 1 State: Error: "" ExitCode: 9 FinishedAt: "2015-01-06T15:47:32.080254511Z" Health: Status: "healthy" FailingStreak: 0 Log: - Start: "2019-12-22T10:59:05.6385933Z" End: "2019-12-22T10:59:05.8078452Z" ExitCode: 0 Output: "" OOMKilled: false Dead: false Paused: false Pid: 0 Restarting: false Running: true StartedAt: "2015-01-06T15:47:32.072697474Z" Status: "running" Mounts: - Name: "fac362...80535" Source: "/data" Destination: "/data" Driver: "local" Mode: "ro,Z" RW: false Propagation: "" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "size" in: "query" type: "boolean" default: false description: "Return the size of container as fields `SizeRw` and `SizeRootFs`" tags: ["Container"] /containers/{id}/top: get: summary: "List processes running inside a container" description: | On Unix systems, this is done by running the `ps` command. This endpoint is not supported on Windows. operationId: "ContainerTop" responses: 200: description: "no error" schema: type: "object" title: "ContainerTopResponse" description: "OK response to ContainerTop operation" properties: Titles: description: "The ps column titles" type: "array" items: type: "string" Processes: description: | Each process running in the container, where each is process is an array of values corresponding to the titles. type: "array" items: type: "array" items: type: "string" examples: application/json: Titles: - "UID" - "PID" - "PPID" - "C" - "STIME" - "TTY" - "TIME" - "CMD" Processes: - - "root" - "13642" - "882" - "0" - "17:03" - "pts/0" - "00:00:00" - "/bin/bash" - - "root" - "13735" - "13642" - "0" - "17:06" - "pts/0" - "00:00:00" - "sleep 10" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "ps_args" in: "query" description: "The arguments to pass to `ps`. 
For example, `aux`" type: "string" default: "-ef" tags: ["Container"] /containers/{id}/logs: get: summary: "Get container logs" description: | Get `stdout` and `stderr` logs from a container. Note: This endpoint works only for containers with the `json-file` or `journald` logging driver. operationId: "ContainerLogs" responses: 200: description: | logs returned as a stream in response body. For the stream format, [see the documentation for the attach endpoint](#operation/ContainerAttach). Note that unlike the attach endpoint, the logs endpoint does not upgrade the connection and does not set Content-Type. schema: type: "string" format: "binary" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "follow" in: "query" description: "Keep connection after returning logs." type: "boolean" default: false - name: "stdout" in: "query" description: "Return logs from `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Return logs from `stderr`" type: "boolean" default: false - name: "since" in: "query" description: "Only return logs since this time, as a UNIX timestamp" type: "integer" default: 0 - name: "until" in: "query" description: "Only return logs before this time, as a UNIX timestamp" type: "integer" default: 0 - name: "timestamps" in: "query" description: "Add timestamps to every log line" type: "boolean" default: false - name: "tail" in: "query" description: | Only return this number of log lines from the end of the logs. Specify as an integer or `all` to output all log lines. type: "string" default: "all" tags: ["Container"] /containers/{id}/changes: get: summary: "Get changes on a container’s filesystem" description: | Returns which files in a container's filesystem have been added, deleted, or modified. The `Kind` of modification can be one of: - `0`: Modified - `1`: Added - `2`: Deleted operationId: "ContainerChanges" produces: ["application/json"] responses: 200: description: "The list of changes" schema: type: "array" items: type: "object" x-go-name: "ContainerChangeResponseItem" title: "ContainerChangeResponseItem" description: "change item in response to ContainerChanges operation" required: [Path, Kind] properties: Path: description: "Path to file that has changed" type: "string" x-nullable: false Kind: description: "Kind of change" type: "integer" format: "uint8" enum: [0, 1, 2] x-nullable: false examples: application/json: - Path: "/dev" Kind: 0 - Path: "/dev/kmsg" Kind: 1 - Path: "/test" Kind: 1 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/export: get: summary: "Export a container" description: "Export the contents of a container as a tarball." 
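A minimal sketch of following a container's logs via the logs endpoint described above. For containers created without a TTY the body uses the multiplexed stream format documented for the attach endpoint further below; here it is simply copied to stdout. The container name, socket path, and `v1.41` prefix are assumptions.

```go
// Minimal sketch: stream a container's stdout/stderr logs with timestamps.
package main

import (
	"context"
	"io"
	"net"
	"net/http"
	"os"
)

func main() {
	cli := &http.Client{Transport: &http.Transport{
		DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
			return net.Dial("unix", "/var/run/docker.sock")
		},
	}}

	resp, err := cli.Get("http://localhost/v1.41/containers/example/logs" +
		"?stdout=1&stderr=1&timestamps=1&follow=1&tail=100")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// With follow=1 this blocks and keeps copying until the stream ends.
	io.Copy(os.Stdout, resp.Body)
}
```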
operationId: "ContainerExport" produces: - "application/octet-stream" responses: 200: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/stats: get: summary: "Get container stats based on resource usage" description: | This endpoint returns a live stream of a container’s resource usage statistics. The `precpu_stats` is the CPU statistic of the *previous* read, and is used to calculate the CPU usage percentage. It is not an exact copy of the `cpu_stats` field. If either `precpu_stats.online_cpus` or `cpu_stats.online_cpus` is nil then for compatibility with older daemons the length of the corresponding `cpu_usage.percpu_usage` array should be used. On a cgroup v2 host, the following fields are not set * `blkio_stats`: all fields other than `io_service_bytes_recursive` * `cpu_stats`: `cpu_usage.percpu_usage` * `memory_stats`: `max_usage` and `failcnt` Also, `memory_stats.stats` fields are incompatible with cgroup v1. To calculate the values shown by the `stats` command of the docker cli tool the following formulas can be used: * used_memory = `memory_stats.usage - memory_stats.stats.cache` * available_memory = `memory_stats.limit` * Memory usage % = `(used_memory / available_memory) * 100.0` * cpu_delta = `cpu_stats.cpu_usage.total_usage - precpu_stats.cpu_usage.total_usage` * system_cpu_delta = `cpu_stats.system_cpu_usage - precpu_stats.system_cpu_usage` * number_cpus = `lenght(cpu_stats.cpu_usage.percpu_usage)` or `cpu_stats.online_cpus` * CPU usage % = `(cpu_delta / system_cpu_delta) * number_cpus * 100.0` operationId: "ContainerStats" produces: ["application/json"] responses: 200: description: "no error" schema: type: "object" examples: application/json: read: "2015-01-08T22:57:31.547920715Z" pids_stats: current: 3 networks: eth0: rx_bytes: 5338 rx_dropped: 0 rx_errors: 0 rx_packets: 36 tx_bytes: 648 tx_dropped: 0 tx_errors: 0 tx_packets: 8 eth5: rx_bytes: 4641 rx_dropped: 0 rx_errors: 0 rx_packets: 26 tx_bytes: 690 tx_dropped: 0 tx_errors: 0 tx_packets: 9 memory_stats: stats: total_pgmajfault: 0 cache: 0 mapped_file: 0 total_inactive_file: 0 pgpgout: 414 rss: 6537216 total_mapped_file: 0 writeback: 0 unevictable: 0 pgpgin: 477 total_unevictable: 0 pgmajfault: 0 total_rss: 6537216 total_rss_huge: 6291456 total_writeback: 0 total_inactive_anon: 0 rss_huge: 6291456 hierarchical_memory_limit: 67108864 total_pgfault: 964 total_active_file: 0 active_anon: 6537216 total_active_anon: 6537216 total_pgpgout: 414 total_cache: 0 inactive_anon: 0 active_file: 0 pgfault: 964 inactive_file: 0 total_pgpgin: 477 max_usage: 6651904 usage: 6537216 failcnt: 0 limit: 67108864 blkio_stats: {} cpu_stats: cpu_usage: percpu_usage: - 8646879 - 24472255 - 36438778 - 30657443 usage_in_usermode: 50000000 total_usage: 100215355 usage_in_kernelmode: 30000000 system_cpu_usage: 739306590000000 online_cpus: 4 throttling_data: periods: 0 throttled_periods: 0 throttled_time: 0 precpu_stats: cpu_usage: percpu_usage: - 8646879 - 24350896 - 36438778 - 30657443 usage_in_usermode: 50000000 total_usage: 100093996 usage_in_kernelmode: 30000000 system_cpu_usage: 9492140000000 online_cpus: 4 throttling_data: periods: 0 throttled_periods: 0 throttled_time: 0 404: description: "no such 
container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "stream" in: "query" description: | Stream the output. If false, the stats will be output once and then it will disconnect. type: "boolean" default: true - name: "one-shot" in: "query" description: | Only get a single stat instead of waiting for 2 cycles. Must be used with `stream=false`. type: "boolean" default: false tags: ["Container"] /containers/{id}/resize: post: summary: "Resize a container TTY" description: "Resize the TTY for a container." operationId: "ContainerResize" consumes: - "application/octet-stream" produces: - "text/plain" responses: 200: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "cannot resize container" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "h" in: "query" description: "Height of the TTY session in characters" type: "integer" - name: "w" in: "query" description: "Width of the TTY session in characters" type: "integer" tags: ["Container"] /containers/{id}/start: post: summary: "Start a container" operationId: "ContainerStart" responses: 204: description: "no error" 304: description: "container already started" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "detachKeys" in: "query" description: | Override the key sequence for detaching a container. Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,` or `_`. 
type: "string" tags: ["Container"] /containers/{id}/stop: post: summary: "Stop a container" operationId: "ContainerStop" responses: 204: description: "no error" 304: description: "container already stopped" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "t" in: "query" description: "Number of seconds to wait before killing the container" type: "integer" tags: ["Container"] /containers/{id}/restart: post: summary: "Restart a container" operationId: "ContainerRestart" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "t" in: "query" description: "Number of seconds to wait before killing the container" type: "integer" tags: ["Container"] /containers/{id}/kill: post: summary: "Kill a container" description: | Send a POSIX signal to a container, defaulting to killing to the container. operationId: "ContainerKill" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "container is not running" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "Container d37cde0fe4ad63c3a7252023b2f9800282894247d145cb5933ddf6e52cc03a28 is not running" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "signal" in: "query" description: "Signal to send to the container as an integer or string (e.g. `SIGINT`)" type: "string" default: "SIGKILL" tags: ["Container"] /containers/{id}/update: post: summary: "Update a container" description: | Change various configuration options of a container without having to recreate it. operationId: "ContainerUpdate" consumes: ["application/json"] produces: ["application/json"] responses: 200: description: "The container has been updated." 
schema: type: "object" title: "ContainerUpdateResponse" description: "OK response to ContainerUpdate operation" properties: Warnings: type: "array" items: type: "string" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "update" in: "body" required: true schema: allOf: - $ref: "#/definitions/Resources" - type: "object" properties: RestartPolicy: $ref: "#/definitions/RestartPolicy" example: BlkioWeight: 300 CpuShares: 512 CpuPeriod: 100000 CpuQuota: 50000 CpuRealtimePeriod: 1000000 CpuRealtimeRuntime: 10000 CpusetCpus: "0,1" CpusetMems: "0" Memory: 314572800 MemorySwap: 514288000 MemoryReservation: 209715200 KernelMemory: 52428800 RestartPolicy: MaximumRetryCount: 4 Name: "on-failure" tags: ["Container"] /containers/{id}/rename: post: summary: "Rename a container" operationId: "ContainerRename" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "name already in use" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "name" in: "query" required: true description: "New name for the container" type: "string" tags: ["Container"] /containers/{id}/pause: post: summary: "Pause a container" description: | Use the freezer cgroup to suspend all processes in a container. Traditionally, when suspending a process the `SIGSTOP` signal is used, which is observable by the process being suspended. With the freezer cgroup the process is unaware, and unable to capture, that it is being suspended, and subsequently resumed. operationId: "ContainerPause" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/unpause: post: summary: "Unpause a container" description: "Resume a container which has been paused." operationId: "ContainerUnpause" responses: 204: description: "no error" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" tags: ["Container"] /containers/{id}/attach: post: summary: "Attach to a container" description: | Attach to a container to read its output or send it input. You can attach to the same container multiple times and you can reattach to containers that have been detached. Either the `stream` or `logs` parameter must be `true` for this endpoint to do anything. See the [documentation for the `docker attach` command](https://docs.docker.com/engine/reference/commandline/attach/) for more details. 
### Hijacking This endpoint hijacks the HTTP connection to transport `stdin`, `stdout`, and `stderr` on the same socket. This is the response from the daemon for an attach request: ``` HTTP/1.1 200 OK Content-Type: application/vnd.docker.raw-stream [STREAM] ``` After the headers and two new lines, the TCP connection can now be used for raw, bidirectional communication between the client and server. To hint potential proxies about connection hijacking, the Docker client can also optionally send connection upgrade headers. For example, the client sends this request to upgrade the connection: ``` POST /containers/16253994b7c4/attach?stream=1&stdout=1 HTTP/1.1 Upgrade: tcp Connection: Upgrade ``` The Docker daemon will respond with a `101 UPGRADED` response, and will similarly follow with the raw stream: ``` HTTP/1.1 101 UPGRADED Content-Type: application/vnd.docker.raw-stream Connection: Upgrade Upgrade: tcp [STREAM] ``` ### Stream format When the TTY setting is disabled in [`POST /containers/create`](#operation/ContainerCreate), the stream over the hijacked connection is multiplexed to separate out `stdout` and `stderr`. The stream consists of a series of frames, each containing a header and a payload. The header indicates which stream the frame belongs to (`stdout` or `stderr`). It also contains the size of the associated frame encoded in the last four bytes (`uint32`). It is encoded on the first eight bytes like this: ```go header := [8]byte{STREAM_TYPE, 0, 0, 0, SIZE1, SIZE2, SIZE3, SIZE4} ``` `STREAM_TYPE` can be: - 0: `stdin` (is written on `stdout`) - 1: `stdout` - 2: `stderr` `SIZE1, SIZE2, SIZE3, SIZE4` are the four bytes of the `uint32` size encoded as big endian. Following the header is the payload, which is the specified number of bytes of `STREAM_TYPE`. The simplest way to implement this protocol is the following: 1. Read 8 bytes. 2. Choose `stdout` or `stderr` depending on the first byte. 3. Extract the frame size from the last four bytes. 4. Read the extracted size and output it on the correct output. 5. Goto 1. ### Stream format when using a TTY When the TTY setting is enabled in [`POST /containers/create`](#operation/ContainerCreate), the stream is not multiplexed. The data exchanged over the hijacked connection is simply the raw data from the process PTY and client's `stdin`. operationId: "ContainerAttach" produces: - "application/vnd.docker.raw-stream" responses: 101: description: "no error, hints proxy about hijacking" 200: description: "no error, no upgrade header found" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "detachKeys" in: "query" description: | Override the key sequence for detaching a container. Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,` or `_`. type: "string" - name: "logs" in: "query" description: | Replay previous logs from the container. This is useful for attaching to a container that has started and you want to output everything since the container started. If `stream` is also enabled, once all the previous output has been returned, it will seamlessly transition into streaming current output.
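A minimal sketch of the 8-byte-header demultiplexing algorithm described above, splitting a non-TTY attach (or logs) stream into stdout and stderr. It reads from standard input to stay self-contained; in practice the source would be the hijacked connection or the logs response body.

```go
// Minimal sketch: demultiplex the raw-stream frame format described above.
package main

import (
	"encoding/binary"
	"errors"
	"io"
	"os"
)

// demux reads frames of the form
//   [8]byte{STREAM_TYPE, 0, 0, 0, SIZE1, SIZE2, SIZE3, SIZE4}
// followed by SIZE bytes of payload, until EOF.
func demux(src io.Reader, stdout, stderr io.Writer) error {
	header := make([]byte, 8)
	for {
		if _, err := io.ReadFull(src, header); err != nil {
			if errors.Is(err, io.EOF) {
				return nil
			}
			return err
		}
		size := binary.BigEndian.Uint32(header[4:8])
		var dst io.Writer
		switch header[0] {
		case 0, 1: // stdin (echoed back on stdout) and stdout
			dst = stdout
		case 2: // stderr
			dst = stderr
		default:
			return errors.New("unknown stream type")
		}
		if _, err := io.CopyN(dst, src, int64(size)); err != nil {
			return err
		}
	}
}

func main() {
	if err := demux(os.Stdin, os.Stdout, os.Stderr); err != nil {
		panic(err)
	}
}
```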
type: "boolean" default: false - name: "stream" in: "query" description: | Stream attached streams from the time the request was made onwards. type: "boolean" default: false - name: "stdin" in: "query" description: "Attach to `stdin`" type: "boolean" default: false - name: "stdout" in: "query" description: "Attach to `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Attach to `stderr`" type: "boolean" default: false tags: ["Container"] /containers/{id}/attach/ws: get: summary: "Attach to a container via a websocket" operationId: "ContainerAttachWebsocket" responses: 101: description: "no error, hints proxy about hijacking" 200: description: "no error, no upgrade header found" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "detachKeys" in: "query" description: | Override the key sequence for detaching a container.Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,`, or `_`. type: "string" - name: "logs" in: "query" description: "Return logs" type: "boolean" default: false - name: "stream" in: "query" description: "Return stream" type: "boolean" default: false - name: "stdin" in: "query" description: "Attach to `stdin`" type: "boolean" default: false - name: "stdout" in: "query" description: "Attach to `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Attach to `stderr`" type: "boolean" default: false tags: ["Container"] /containers/{id}/wait: post: summary: "Wait for a container" description: "Block until a container stops, then returns the exit code." operationId: "ContainerWait" produces: ["application/json"] responses: 200: description: "The container has exit." schema: type: "object" title: "ContainerWaitResponse" description: "OK response to ContainerWait operation" required: [StatusCode] properties: StatusCode: description: "Exit code of the container" type: "integer" x-nullable: false Error: description: "container waiting error, if any" type: "object" properties: Message: description: "Details of an error" type: "string" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "condition" in: "query" description: | Wait until a container state reaches the given condition, either 'not-running' (default), 'next-exit', or 'removed'. type: "string" default: "not-running" tags: ["Container"] /containers/{id}: delete: summary: "Remove a container" operationId: "ContainerDelete" responses: 204: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "conflict" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: | You cannot remove a running container: c2ada9df5af8. 
Stop the container before attempting removal or force remove 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "v" in: "query" description: "Remove anonymous volumes associated with the container." type: "boolean" default: false - name: "force" in: "query" description: "If the container is running, kill it before removing it." type: "boolean" default: false - name: "link" in: "query" description: "Remove the specified link associated with the container." type: "boolean" default: false tags: ["Container"] /containers/{id}/archive: head: summary: "Get information about files in a container" description: | A response header `X-Docker-Container-Path-Stat` is returned, containing a base64 - encoded JSON object with some filesystem header information about the path. operationId: "ContainerArchiveInfo" responses: 200: description: "no error" headers: X-Docker-Container-Path-Stat: type: "string" description: | A base64 - encoded JSON object with some filesystem header information about the path 400: description: "Bad parameter" schema: allOf: - $ref: "#/definitions/ErrorResponse" - type: "object" properties: message: description: | The error message. Either "must specify path parameter" (path cannot be empty) or "not a directory" (path was asserted to be a directory but exists as a file). type: "string" x-nullable: false 404: description: "Container or path does not exist" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "path" in: "query" required: true description: "Resource in the container’s filesystem to archive." type: "string" tags: ["Container"] get: summary: "Get an archive of a filesystem resource in a container" description: "Get a tar archive of a resource in the filesystem of container id." operationId: "ContainerArchive" produces: ["application/x-tar"] responses: 200: description: "no error" 400: description: "Bad parameter" schema: allOf: - $ref: "#/definitions/ErrorResponse" - type: "object" properties: message: description: | The error message. Either "must specify path parameter" (path cannot be empty) or "not a directory" (path was asserted to be a directory but exists as a file). type: "string" x-nullable: false 404: description: "Container or path does not exist" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "path" in: "query" required: true description: "Resource in the container’s filesystem to archive." type: "string" tags: ["Container"] put: summary: "Extract an archive of files or folders to a directory in a container" description: "Upload a tar archive to be extracted to a path in the filesystem of container id." 
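The archive endpoints above accept and return plain tar streams. A minimal sketch (not the official client) that copies a single file into a container by uploading an in-memory tar archive; the container name, destination path, socket path, and `v1.41` prefix are assumptions.

```go
// Minimal sketch: PUT a small tar archive into a container's filesystem.
package main

import (
	"archive/tar"
	"bytes"
	"context"
	"fmt"
	"net"
	"net/http"
)

func main() {
	// Build a tar archive containing one file.
	content := []byte("hello from the API\n")
	var buf bytes.Buffer
	tw := tar.NewWriter(&buf)
	tw.WriteHeader(&tar.Header{Name: "hello.txt", Mode: 0o644, Size: int64(len(content))})
	tw.Write(content)
	tw.Close()

	cli := &http.Client{Transport: &http.Transport{
		DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
			return net.Dial("unix", "/var/run/docker.sock")
		},
	}}

	req, _ := http.NewRequest(http.MethodPut,
		"http://localhost/v1.41/containers/example/archive?path=/tmp", &buf)
	req.Header.Set("Content-Type", "application/x-tar")

	resp, err := cli.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status) // 200 OK when the archive was extracted
}
```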
operationId: "PutContainerArchive" consumes: ["application/x-tar", "application/octet-stream"] responses: 200: description: "The content was extracted successfully" 400: description: "Bad parameter" schema: $ref: "#/definitions/ErrorResponse" 403: description: "Permission denied, the volume or container rootfs is marked as read-only." schema: $ref: "#/definitions/ErrorResponse" 404: description: "No such container or path does not exist inside the container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the container" type: "string" - name: "path" in: "query" required: true description: "Path to a directory in the container to extract the archive’s contents into. " type: "string" - name: "noOverwriteDirNonDir" in: "query" description: | If `1`, `true`, or `True` then it will be an error if unpacking the given content would cause an existing directory to be replaced with a non-directory and vice versa. type: "string" - name: "copyUIDGID" in: "query" description: | If `1`, `true`, then it will copy UID/GID maps to the dest file or dir type: "string" - name: "inputStream" in: "body" required: true description: | The input stream must be a tar archive compressed with one of the following algorithms: `identity` (no compression), `gzip`, `bzip2`, or `xz`. schema: type: "string" format: "binary" tags: ["Container"] /containers/prune: post: summary: "Delete stopped containers" produces: - "application/json" operationId: "ContainerPrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `until=<timestamp>` Prune containers created before this timestamp. The `<timestamp>` can be Unix timestamps, date formatted timestamps, or Go duration strings (e.g. `10m`, `1h30m`) computed relative to the daemon machine’s time. - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune containers with (or without, in case `label!=...` is used) the specified labels. type: "string" responses: 200: description: "No error" schema: type: "object" title: "ContainerPruneResponse" properties: ContainersDeleted: description: "Container IDs that were deleted" type: "array" items: type: "string" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Container"] /images/json: get: summary: "List Images" description: "Returns a list of images on the server. Note that it uses a different, smaller representation of an image than inspecting a single image." 
operationId: "ImageList" produces: - "application/json" responses: 200: description: "Summary image data for the images matching the query" schema: type: "array" items: $ref: "#/definitions/ImageSummary" examples: application/json: - Id: "sha256:e216a057b1cb1efc11f8a268f37ef62083e70b1b38323ba252e25ac88904a7e8" ParentId: "" RepoTags: - "ubuntu:12.04" - "ubuntu:precise" RepoDigests: - "ubuntu@sha256:992069aee4016783df6345315302fa59681aae51a8eeb2f889dea59290f21787" Created: 1474925151 Size: 103579269 VirtualSize: 103579269 SharedSize: 0 Labels: {} Containers: 2 - Id: "sha256:3e314f95dcace0f5e4fd37b10862fe8398e3c60ed36600bc0ca5fda78b087175" ParentId: "" RepoTags: - "ubuntu:12.10" - "ubuntu:quantal" RepoDigests: - "ubuntu@sha256:002fba3e3255af10be97ea26e476692a7ebed0bb074a9ab960b2e7a1526b15d7" - "ubuntu@sha256:68ea0200f0b90df725d99d823905b04cf844f6039ef60c60bf3e019915017bd3" Created: 1403128455 Size: 172064416 VirtualSize: 172064416 SharedSize: 0 Labels: {} Containers: 5 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "all" in: "query" description: "Show all images. Only images from a final layer (no children) are shown by default." type: "boolean" default: false - name: "filters" in: "query" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the images list. Available filters: - `before`=(`<image-name>[:<tag>]`, `<image id>` or `<image@digest>`) - `dangling=true` - `label=key` or `label="key=value"` of an image label - `reference`=(`<image-name>[:<tag>]`) - `since`=(`<image-name>[:<tag>]`, `<image id>` or `<image@digest>`) type: "string" - name: "digests" in: "query" description: "Show digest information as a `RepoDigests` field on each image." type: "boolean" default: false tags: ["Image"] /build: post: summary: "Build an image" description: | Build an image from a tar archive with a `Dockerfile` in it. The `Dockerfile` specifies how the image is built from the tar archive. It is typically in the archive's root, but can be at a different path or have a different name by specifying the `dockerfile` parameter. [See the `Dockerfile` reference for more information](https://docs.docker.com/engine/reference/builder/). The Docker daemon performs a preliminary validation of the `Dockerfile` before starting the build, and returns an error if the syntax is incorrect. After that, each instruction is run one-by-one until the ID of the new image is output. The build is canceled if the client drops the connection by quitting or being killed. operationId: "ImageBuild" consumes: - "application/octet-stream" produces: - "application/json" parameters: - name: "inputStream" in: "body" description: "A tar archive compressed with one of the following algorithms: identity (no compression), gzip, bzip2, xz." schema: type: "string" format: "binary" - name: "dockerfile" in: "query" description: "Path within the build context to the `Dockerfile`. This is ignored if `remote` is specified and points to an external `Dockerfile`." type: "string" default: "Dockerfile" - name: "t" in: "query" description: "A name and optional tag to apply to the image in the `name:tag` format. If you omit the tag the default `latest` value is assumed. You can provide several `t` parameters." type: "string" - name: "extrahosts" in: "query" description: "Extra hosts to add to /etc/hosts" type: "string" - name: "remote" in: "query" description: "A Git repository URI or HTTP/HTTPS context URI. 
If the URI points to a single text file, the file’s contents are placed into a file called `Dockerfile` and the image is built from that file. If the URI points to a tarball, the file is downloaded by the daemon and the contents therein used as the context for the build. If the URI points to a tarball and the `dockerfile` parameter is also specified, there must be a file with the corresponding path inside the tarball." type: "string" - name: "q" in: "query" description: "Suppress verbose build output." type: "boolean" default: false - name: "nocache" in: "query" description: "Do not use the cache when building the image." type: "boolean" default: false - name: "cachefrom" in: "query" description: "JSON array of images used for build cache resolution." type: "string" - name: "pull" in: "query" description: "Attempt to pull the image even if an older image exists locally." type: "string" - name: "rm" in: "query" description: "Remove intermediate containers after a successful build." type: "boolean" default: true - name: "forcerm" in: "query" description: "Always remove intermediate containers, even upon failure." type: "boolean" default: false - name: "memory" in: "query" description: "Set memory limit for build." type: "integer" - name: "memswap" in: "query" description: "Total memory (memory + swap). Set as `-1` to disable swap." type: "integer" - name: "cpushares" in: "query" description: "CPU shares (relative weight)." type: "integer" - name: "cpusetcpus" in: "query" description: "CPUs in which to allow execution (e.g., `0-3`, `0,1`)." type: "string" - name: "cpuperiod" in: "query" description: "The length of a CPU period in microseconds." type: "integer" - name: "cpuquota" in: "query" description: "Microseconds of CPU time that the container can get in a CPU period." type: "integer" - name: "buildargs" in: "query" description: > JSON map of string pairs for build-time variables. Users pass these values at build-time. Docker uses the buildargs as the environment context for commands run via the `Dockerfile` RUN instruction, or for variable expansion in other `Dockerfile` instructions. This is not meant for passing secret values. For example, the build arg `FOO=bar` would become `{"FOO":"bar"}` in JSON. This would result in the query parameter `buildargs={"FOO":"bar"}`. Note that `{"FOO":"bar"}` should be URI component encoded. [Read more about the buildargs instruction.](https://docs.docker.com/engine/reference/builder/#arg) type: "string" - name: "shmsize" in: "query" description: "Size of `/dev/shm` in bytes. The size must be greater than 0. If omitted the system uses 64MB." type: "integer" - name: "squash" in: "query" description: "Squash the resulting images layers into a single layer. *(Experimental release only.)*" type: "boolean" - name: "labels" in: "query" description: "Arbitrary key/value labels to set on the image, as a JSON map of string pairs." type: "string" - name: "networkmode" in: "query" description: | Sets the networking mode for the run commands during build. Supported standard values are: `bridge`, `host`, `none`, and `container:<name|id>`. Any other value is taken as a custom network's name or ID to which this container should connect to. type: "string" - name: "Content-type" in: "header" type: "string" enum: - "application/x-tar" default: "application/x-tar" - name: "X-Registry-Config" in: "header" description: | This is a base64-encoded JSON object with auth configurations for multiple registries that a build may refer to. 
The key is a registry URL, and the value is an auth configuration object, [as described in the authentication section](#section/Authentication). For example: ``` { "docker.example.com": { "username": "janedoe", "password": "hunter2" }, "https://index.docker.io/v1/": { "username": "mobydock", "password": "conta1n3rize14" } } ``` Only the registry domain name (and port if not the default 443) are required. However, for legacy reasons, the Docker Hub registry must be specified with both a `https://` prefix and a `/v1/` suffix even though Docker will prefer to use the v2 registry API. type: "string" - name: "platform" in: "query" description: "Platform in the format os[/arch[/variant]]" type: "string" default: "" - name: "target" in: "query" description: "Target build stage" type: "string" default: "" - name: "outputs" in: "query" description: "BuildKit output configuration" type: "string" default: "" responses: 200: description: "no error" 400: description: "Bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Image"] /build/prune: post: summary: "Delete builder cache" produces: - "application/json" operationId: "BuildPrune" parameters: - name: "keep-storage" in: "query" description: "Amount of disk space in bytes to keep for cache" type: "integer" format: "int64" - name: "all" in: "query" type: "boolean" description: "Remove all types of build cache" - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the list of build cache objects. Available filters: - `until=<duration>`: duration relative to daemon's time, during which build cache was not used, in Go's duration format (e.g., '24h') - `id=<id>` - `parent=<id>` - `type=<string>` - `description=<string>` - `inuse` - `shared` - `private` responses: 200: description: "No error" schema: type: "object" title: "BuildPruneResponse" properties: CachesDeleted: type: "array" items: description: "ID of build cache object" type: "string" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Image"] /images/create: post: summary: "Create an image" description: "Create an image by either pulling it from a registry or importing it." operationId: "ImageCreate" consumes: - "text/plain" - "application/octet-stream" produces: - "application/json" responses: 200: description: "no error" 404: description: "repository does not exist or no read access" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "fromImage" in: "query" description: "Name of the image to pull. The name may include a tag or digest. This parameter may only be used when pulling an image. The pull is cancelled if the HTTP connection is closed." type: "string" - name: "fromSrc" in: "query" description: "Source to import. The value may be a URL from which the image can be retrieved or `-` to read the image from the request body. This parameter may only be used when importing an image." type: "string" - name: "repo" in: "query" description: "Repository name given to an image when it is imported. The repo may include a tag. This parameter may only be used when importing an image." type: "string" - name: "tag" in: "query" description: "Tag or digest. 
If empty when pulling an image, this causes all tags for the given image to be pulled." type: "string" - name: "message" in: "query" description: "Set commit message for imported image." type: "string" - name: "inputImage" in: "body" description: "Image content if the value `-` has been specified in fromSrc query parameter" schema: type: "string" required: false - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration. Refer to the [authentication section](#section/Authentication) for details. type: "string" - name: "platform" in: "query" description: "Platform in the format os[/arch[/variant]]" type: "string" default: "" tags: ["Image"] /images/{name}/json: get: summary: "Inspect an image" description: "Return low-level information about an image." operationId: "ImageInspect" produces: - "application/json" responses: 200: description: "No error" schema: $ref: "#/definitions/Image" examples: application/json: Id: "sha256:85f05633ddc1c50679be2b16a0479ab6f7637f8884e0cfe0f4d20e1ebb3d6e7c" Container: "cb91e48a60d01f1e27028b4fc6819f4f290b3cf12496c8176ec714d0d390984a" Comment: "" Os: "linux" Architecture: "amd64" Parent: "sha256:91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c" ContainerConfig: Tty: false Hostname: "e611e15f9c9d" Domainname: "" AttachStdout: false PublishService: "" AttachStdin: false OpenStdin: false StdinOnce: false NetworkDisabled: false OnBuild: [] Image: "91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c" User: "" WorkingDir: "" MacAddress: "" AttachStderr: false Labels: com.example.license: "GPL" com.example.version: "1.0" com.example.vendor: "Acme" Env: - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" Cmd: - "/bin/sh" - "-c" - "#(nop) LABEL com.example.vendor=Acme com.example.license=GPL com.example.version=1.0" DockerVersion: "1.9.0-dev" VirtualSize: 188359297 Size: 0 Author: "" Created: "2015-09-10T08:30:53.26995814Z" GraphDriver: Name: "aufs" Data: {} RepoDigests: - "localhost:5000/test/busybox/example@sha256:cbbf2f9a99b47fc460d422812b6a5adff7dfee951d8fa2e4a98caa0382cfbdbf" RepoTags: - "example:1.0" - "example:latest" - "example:stable" Config: Image: "91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c" NetworkDisabled: false OnBuild: [] StdinOnce: false PublishService: "" AttachStdin: false OpenStdin: false Domainname: "" AttachStdout: false Tty: false Hostname: "e611e15f9c9d" Cmd: - "/bin/bash" Env: - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" Labels: com.example.vendor: "Acme" com.example.version: "1.0" com.example.license: "GPL" MacAddress: "" AttachStderr: false WorkingDir: "" User: "" RootFS: Type: "layers" Layers: - "sha256:1834950e52ce4d5a88a1bbd131c537f4d0e56d10ff0dd69e66be3b7dfa9df7e6" - "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such image: someimage (tag: latest)" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or id" type: "string" required: true tags: ["Image"] /images/{name}/history: get: summary: "Get the history of an image" description: "Return parent layers of an image." 
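# The sketch below is a minimal, non-normative example of calling this
# endpoint over the daemon's default Unix socket. The socket path, the
# `v1.41` version prefix, and the `ubuntu:latest` image name are
# illustrative assumptions, not part of this specification.
#
#   import http.client, json, socket
#
#   class UnixHTTPConnection(http.client.HTTPConnection):
#       """http.client connection that dials a Unix socket instead of TCP."""
#       def __init__(self, path="/var/run/docker.sock"):
#           super().__init__("localhost")
#           self.path = path
#       def connect(self):
#           self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
#           self.sock.connect(self.path)
#
#   conn = UnixHTTPConnection()
#   conn.request("GET", "/v1.41/images/ubuntu:latest/history")
#   for layer in json.loads(conn.getresponse().read()):
#       print(layer["Id"], layer["Size"], layer["CreatedBy"][:60])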
operationId: "ImageHistory" produces: ["application/json"] responses: 200: description: "List of image layers" schema: type: "array" items: type: "object" x-go-name: HistoryResponseItem title: "HistoryResponseItem" description: "individual image layer information in response to ImageHistory operation" required: [Id, Created, CreatedBy, Tags, Size, Comment] properties: Id: type: "string" x-nullable: false Created: type: "integer" format: "int64" x-nullable: false CreatedBy: type: "string" x-nullable: false Tags: type: "array" items: type: "string" Size: type: "integer" format: "int64" x-nullable: false Comment: type: "string" x-nullable: false examples: application/json: - Id: "3db9c44f45209632d6050b35958829c3a2aa256d81b9a7be45b362ff85c54710" Created: 1398108230 CreatedBy: "/bin/sh -c #(nop) ADD file:eb15dbd63394e063b805a3c32ca7bf0266ef64676d5a6fab4801f2e81e2a5148 in /" Tags: - "ubuntu:lucid" - "ubuntu:10.04" Size: 182964289 Comment: "" - Id: "6cfa4d1f33fb861d4d114f43b25abd0ac737509268065cdfd69d544a59c85ab8" Created: 1398108222 CreatedBy: "/bin/sh -c #(nop) MAINTAINER Tianon Gravi <[email protected]> - mkimage-debootstrap.sh -i iproute,iputils-ping,ubuntu-minimal -t lucid.tar.xz lucid http://archive.ubuntu.com/ubuntu/" Tags: [] Size: 0 Comment: "" - Id: "511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158" Created: 1371157430 CreatedBy: "" Tags: - "scratch12:latest" - "scratch:latest" Size: 0 Comment: "Imported from -" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID" type: "string" required: true tags: ["Image"] /images/{name}/push: post: summary: "Push an image" description: | Push an image to a registry. If you wish to push an image on to a private registry, that image must already have a tag which references the registry. For example, `registry.example.com/myimage:latest`. The push is cancelled if the HTTP connection is closed. operationId: "ImagePush" consumes: - "application/octet-stream" responses: 200: description: "No error" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID." type: "string" required: true - name: "tag" in: "query" description: "The tag to associate with the image on the registry." type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration. Refer to the [authentication section](#section/Authentication) for details. type: "string" required: true tags: ["Image"] /images/{name}/tag: post: summary: "Tag an image" description: "Tag an image so that it becomes part of a repository." operationId: "ImageTag" responses: 201: description: "No error" 400: description: "Bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Conflict" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID to tag." type: "string" required: true - name: "repo" in: "query" description: "The repository to tag in. For example, `someuser/someimage`." type: "string" - name: "tag" in: "query" description: "The name of the new tag." 
type: "string" tags: ["Image"] /images/{name}: delete: summary: "Remove an image" description: | Remove an image, along with any untagged parent images that were referenced by that image. Images can't be removed if they have descendant images, are being used by a running container or are being used by a build. operationId: "ImageDelete" produces: ["application/json"] responses: 200: description: "The image was deleted successfully" schema: type: "array" items: $ref: "#/definitions/ImageDeleteResponseItem" examples: application/json: - Untagged: "3e2f21a89f" - Deleted: "3e2f21a89f" - Deleted: "53b4f83ac9" 404: description: "No such image" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Conflict" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID" type: "string" required: true - name: "force" in: "query" description: "Remove the image even if it is being used by stopped containers or has other tags" type: "boolean" default: false - name: "noprune" in: "query" description: "Do not delete untagged parent images" type: "boolean" default: false tags: ["Image"] /images/search: get: summary: "Search images" description: "Search for an image on Docker Hub." operationId: "ImageSearch" produces: - "application/json" responses: 200: description: "No error" schema: type: "array" items: type: "object" title: "ImageSearchResponseItem" properties: description: type: "string" is_official: type: "boolean" is_automated: type: "boolean" name: type: "string" star_count: type: "integer" examples: application/json: - description: "" is_official: false is_automated: false name: "wma55/u1210sshd" star_count: 0 - description: "" is_official: false is_automated: false name: "jdswinbank/sshd" star_count: 0 - description: "" is_official: false is_automated: false name: "vgauthier/sshd" star_count: 0 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "term" in: "query" description: "Term to search" type: "string" required: true - name: "limit" in: "query" description: "Maximum number of results to return" type: "integer" - name: "filters" in: "query" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the images list. Available filters: - `is-automated=(true|false)` - `is-official=(true|false)` - `stars=<number>` Matches images that has at least 'number' stars. type: "string" tags: ["Image"] /images/prune: post: summary: "Delete unused images" produces: - "application/json" operationId: "ImagePrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `dangling=<boolean>` When set to `true` (or `1`), prune only unused *and* untagged images. When set to `false` (or `0`), all unused images are pruned. - `until=<string>` Prune images created before this timestamp. The `<timestamp>` can be Unix timestamps, date formatted timestamps, or Go duration strings (e.g. `10m`, `1h30m`) computed relative to the daemon machine’s time. - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune images with (or without, in case `label!=...` is used) the specified labels. 
type: "string" responses: 200: description: "No error" schema: type: "object" title: "ImagePruneResponse" properties: ImagesDeleted: description: "Images that were deleted" type: "array" items: $ref: "#/definitions/ImageDeleteResponseItem" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Image"] /auth: post: summary: "Check auth configuration" description: | Validate credentials for a registry and, if available, get an identity token for accessing the registry without password. operationId: "SystemAuth" consumes: ["application/json"] produces: ["application/json"] responses: 200: description: "An identity token was generated successfully." schema: type: "object" title: "SystemAuthResponse" required: [Status] properties: Status: description: "The status of the authentication" type: "string" x-nullable: false IdentityToken: description: "An opaque token used to authenticate a user after a successful login" type: "string" x-nullable: false examples: application/json: Status: "Login Succeeded" IdentityToken: "9cbaf023786cd7..." 204: description: "No error" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "authConfig" in: "body" description: "Authentication to check" schema: $ref: "#/definitions/AuthConfig" tags: ["System"] /info: get: summary: "Get system information" operationId: "SystemInfo" produces: - "application/json" responses: 200: description: "No error" schema: $ref: "#/definitions/SystemInfo" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /version: get: summary: "Get version" description: "Returns the version of Docker that is running and various information about the system that Docker is running on." operationId: "SystemVersion" produces: ["application/json"] responses: 200: description: "no error" schema: $ref: "#/definitions/SystemVersion" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /_ping: get: summary: "Ping" description: "This is a dummy endpoint you can use to test if the server is accessible." operationId: "SystemPing" produces: ["text/plain"] responses: 200: description: "no error" schema: type: "string" example: "OK" headers: API-Version: type: "string" description: "Max API Version the server supports" Builder-Version: type: "string" description: "Default version of docker image builder" Docker-Experimental: type: "boolean" description: "If the server is running with experimental mode enabled" Cache-Control: type: "string" default: "no-cache, no-store, must-revalidate" Pragma: type: "string" default: "no-cache" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" headers: Cache-Control: type: "string" default: "no-cache, no-store, must-revalidate" Pragma: type: "string" default: "no-cache" tags: ["System"] head: summary: "Ping" description: "This is a dummy endpoint you can use to test if the server is accessible." 
operationId: "SystemPingHead" produces: ["text/plain"] responses: 200: description: "no error" schema: type: "string" example: "(empty)" headers: API-Version: type: "string" description: "Max API Version the server supports" Builder-Version: type: "string" description: "Default version of docker image builder" Docker-Experimental: type: "boolean" description: "If the server is running with experimental mode enabled" Cache-Control: type: "string" default: "no-cache, no-store, must-revalidate" Pragma: type: "string" default: "no-cache" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /commit: post: summary: "Create a new image from a container" operationId: "ImageCommit" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "containerConfig" in: "body" description: "The container configuration" schema: $ref: "#/definitions/ContainerConfig" - name: "container" in: "query" description: "The ID or name of the container to commit" type: "string" - name: "repo" in: "query" description: "Repository name for the created image" type: "string" - name: "tag" in: "query" description: "Tag name for the create image" type: "string" - name: "comment" in: "query" description: "Commit message" type: "string" - name: "author" in: "query" description: "Author of the image (e.g., `John Hannibal Smith <[email protected]>`)" type: "string" - name: "pause" in: "query" description: "Whether to pause the container before committing" type: "boolean" default: true - name: "changes" in: "query" description: "`Dockerfile` instructions to apply while committing" type: "string" tags: ["Image"] /events: get: summary: "Monitor events" description: | Stream real-time events from the server. Various objects within Docker report events when something happens to them. 
Containers report these events: `attach`, `commit`, `copy`, `create`, `destroy`, `detach`, `die`, `exec_create`, `exec_detach`, `exec_start`, `exec_die`, `export`, `health_status`, `kill`, `oom`, `pause`, `rename`, `resize`, `restart`, `start`, `stop`, `top`, `unpause`, `update`, and `prune` Images report these events: `delete`, `import`, `load`, `pull`, `push`, `save`, `tag`, `untag`, and `prune` Volumes report these events: `create`, `mount`, `unmount`, `destroy`, and `prune` Networks report these events: `create`, `connect`, `disconnect`, `destroy`, `update`, `remove`, and `prune` The Docker daemon reports these events: `reload` Services report these events: `create`, `update`, and `remove` Nodes report these events: `create`, `update`, and `remove` Secrets report these events: `create`, `update`, and `remove` Configs report these events: `create`, `update`, and `remove` The Builder reports `prune` events operationId: "SystemEvents" produces: - "application/json" responses: 200: description: "no error" schema: type: "object" title: "SystemEventsResponse" properties: Type: description: "The type of object emitting the event" type: "string" Action: description: "The type of event" type: "string" Actor: type: "object" properties: ID: description: "The ID of the object emitting the event" type: "string" Attributes: description: "Various key/value attributes of the object, depending on its type" type: "object" additionalProperties: type: "string" time: description: "Timestamp of event" type: "integer" timeNano: description: "Timestamp of event, with nanosecond accuracy" type: "integer" format: "int64" examples: application/json: Type: "container" Action: "create" Actor: ID: "ede54ee1afda366ab42f824e8a5ffd195155d853ceaec74a927f249ea270c743" Attributes: com.example.some-label: "some-label-value" image: "alpine" name: "my-container" time: 1461943101 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "since" in: "query" description: "Show events created since this timestamp then stream new events." type: "string" - name: "until" in: "query" description: "Show events created until this timestamp then stop streaming." type: "string" - name: "filters" in: "query" description: | A JSON encoded value of filters (a `map[string][]string`) to process on the event list. 
Available filters: - `config=<string>` config name or ID - `container=<string>` container name or ID - `daemon=<string>` daemon name or ID - `event=<string>` event type - `image=<string>` image name or ID - `label=<string>` image or container label - `network=<string>` network name or ID - `node=<string>` node ID - `plugin`=<string> plugin name or ID - `scope`=<string> local or swarm - `secret=<string>` secret name or ID - `service=<string>` service name or ID - `type=<string>` object to filter by, one of `container`, `image`, `volume`, `network`, `daemon`, `plugin`, `node`, `service`, `secret` or `config` - `volume=<string>` volume name type: "string" tags: ["System"] /system/df: get: summary: "Get data usage information" operationId: "SystemDataUsage" responses: 200: description: "no error" schema: type: "object" title: "SystemDataUsageResponse" properties: LayersSize: type: "integer" format: "int64" Images: type: "array" items: $ref: "#/definitions/ImageSummary" Containers: type: "array" items: $ref: "#/definitions/ContainerSummary" Volumes: type: "array" items: $ref: "#/definitions/Volume" BuildCache: type: "array" items: $ref: "#/definitions/BuildCache" example: LayersSize: 1092588 Images: - Id: "sha256:2b8fd9751c4c0f5dd266fcae00707e67a2545ef34f9a29354585f93dac906749" ParentId: "" RepoTags: - "busybox:latest" RepoDigests: - "busybox@sha256:a59906e33509d14c036c8678d687bd4eec81ed7c4b8ce907b888c607f6a1e0e6" Created: 1466724217 Size: 1092588 SharedSize: 0 VirtualSize: 1092588 Labels: {} Containers: 1 Containers: - Id: "e575172ed11dc01bfce087fb27bee502db149e1a0fad7c296ad300bbff178148" Names: - "/top" Image: "busybox" ImageID: "sha256:2b8fd9751c4c0f5dd266fcae00707e67a2545ef34f9a29354585f93dac906749" Command: "top" Created: 1472592424 Ports: [] SizeRootFs: 1092588 Labels: {} State: "exited" Status: "Exited (0) 56 minutes ago" HostConfig: NetworkMode: "default" NetworkSettings: Networks: bridge: IPAMConfig: null Links: null Aliases: null NetworkID: "d687bc59335f0e5c9ee8193e5612e8aee000c8c62ea170cfb99c098f95899d92" EndpointID: "8ed5115aeaad9abb174f68dcf135b49f11daf597678315231a32ca28441dec6a" Gateway: "172.18.0.1" IPAddress: "172.18.0.2" IPPrefixLen: 16 IPv6Gateway: "" GlobalIPv6Address: "" GlobalIPv6PrefixLen: 0 MacAddress: "02:42:ac:12:00:02" Mounts: [] Volumes: - Name: "my-volume" Driver: "local" Mountpoint: "/var/lib/docker/volumes/my-volume/_data" Labels: null Scope: "local" Options: null UsageData: Size: 10920104 RefCount: 2 BuildCache: - ID: "hw53o5aio51xtltp5xjp8v7fx" Parent: "" Type: "regular" Description: "pulled from docker.io/library/debian@sha256:234cb88d3020898631af0ccbbcca9a66ae7306ecd30c9720690858c1b007d2a0" InUse: false Shared: true Size: 0 CreatedAt: "2021-06-28T13:31:01.474619385Z" LastUsedAt: "2021-07-07T22:02:32.738075951Z" UsageCount: 26 - ID: "ndlpt0hhvkqcdfkputsk4cq9c" Parent: "hw53o5aio51xtltp5xjp8v7fx" Type: "regular" Description: "mount / from exec /bin/sh -c echo 'Binary::apt::APT::Keep-Downloaded-Packages \"true\";' > /etc/apt/apt.conf.d/keep-cache" InUse: false Shared: true Size: 51 CreatedAt: "2021-06-28T13:31:03.002625487Z" LastUsedAt: "2021-07-07T22:02:32.773909517Z" UsageCount: 26 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["System"] /images/{name}/get: get: summary: "Export an image" description: | Get a tarball containing all images and metadata for a repository. If `name` is a specific name and tag (e.g. `ubuntu:latest`), then only that image (and its parents) are returned. 
If `name` is an image ID, similarly only that image (and its parents) are returned, but with the exclusion of the `repositories` file in the tarball, as there were no image names referenced. ### Image tarball format An image tarball contains one directory per image layer (named using its long ID), each containing these files: - `VERSION`: currently `1.0` - the file format version - `json`: detailed layer information, similar to `docker inspect layer_id` - `layer.tar`: A tarfile containing the filesystem changes in this layer The `layer.tar` file contains `aufs` style `.wh..wh.aufs` files and directories for storing attribute changes and deletions. If the tarball defines a repository, the tarball should also include a `repositories` file at the root that contains a list of repository and tag names mapped to layer IDs. ```json { "hello-world": { "latest": "565a9d68a73f6706862bfe8409a7f659776d4d60a8d096eb4a3cbce6999cc2a1" } } ``` operationId: "ImageGet" produces: - "application/x-tar" responses: 200: description: "no error" schema: type: "string" format: "binary" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or ID" type: "string" required: true tags: ["Image"] /images/get: get: summary: "Export several images" description: | Get a tarball containing all images and metadata for several image repositories. For each value of the `names` parameter: if it is a specific name and tag (e.g. `ubuntu:latest`), then only that image (and its parents) are returned; if it is an image ID, similarly only that image (and its parents) are returned and there would be no names referenced in the 'repositories' file for this image ID. For details on the format, see the [export image endpoint](#operation/ImageGet). operationId: "ImageGetAll" produces: - "application/x-tar" responses: 200: description: "no error" schema: type: "string" format: "binary" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "names" in: "query" description: "Image names to filter by" type: "array" items: type: "string" tags: ["Image"] /images/load: post: summary: "Import images" description: | Load a set of images and tags into a repository. For details on the format, see the [export image endpoint](#operation/ImageGet). operationId: "ImageLoad" consumes: - "application/x-tar" produces: - "application/json" responses: 200: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "imagesTarball" in: "body" description: "Tar archive containing images" schema: type: "string" format: "binary" - name: "quiet" in: "query" description: "Suppress progress details during load." type: "boolean" default: false tags: ["Image"] /containers/{id}/exec: post: summary: "Create an exec instance" description: "Run a command inside a running container." 
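# A rough sketch of creating and then starting an exec instance (see the
# /exec/{id}/start endpoint below). It reuses the hypothetical
# UnixHTTPConnection helper from the comment above /images/{name}/history;
# the container name `my-container` and the `v1.41` prefix are assumptions.
#
#   headers = {"Content-Type": "application/json"}
#   conn = UnixHTTPConnection()
#   conn.request("POST", "/v1.41/containers/my-container/exec",
#                body=json.dumps({"AttachStdout": True, "AttachStderr": True,
#                                 "Cmd": ["date"]}),
#                headers=headers)
#   exec_id = json.loads(conn.getresponse().read())["Id"]
#
#   conn = UnixHTTPConnection()
#   conn.request("POST", "/v1.41/exec/" + exec_id + "/start",
#                body=json.dumps({"Detach": True, "Tty": False}),
#                headers=headers)
#   print(conn.getresponse().status)  # 200 once the exec process has started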
operationId: "ContainerExec" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 404: description: "no such container" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such container: c2ada9df5af8" 409: description: "container is paused" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "execConfig" in: "body" description: "Exec configuration" schema: type: "object" properties: AttachStdin: type: "boolean" description: "Attach to `stdin` of the exec command." AttachStdout: type: "boolean" description: "Attach to `stdout` of the exec command." AttachStderr: type: "boolean" description: "Attach to `stderr` of the exec command." DetachKeys: type: "string" description: | Override the key sequence for detaching a container. Format is a single character `[a-Z]` or `ctrl-<value>` where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,` or `_`. Tty: type: "boolean" description: "Allocate a pseudo-TTY." Env: description: | A list of environment variables in the form `["VAR=value", ...]`. type: "array" items: type: "string" Cmd: type: "array" description: "Command to run, as a string or array of strings." items: type: "string" Privileged: type: "boolean" description: "Runs the exec process with extended privileges." default: false User: type: "string" description: | The user, and optionally, group to run the exec process inside the container. Format is one of: `user`, `user:group`, `uid`, or `uid:gid`. WorkingDir: type: "string" description: | The working directory for the exec process inside the container. example: AttachStdin: false AttachStdout: true AttachStderr: true DetachKeys: "ctrl-p,ctrl-q" Tty: false Cmd: - "date" Env: - "FOO=bar" - "BAZ=quux" required: true - name: "id" in: "path" description: "ID or name of container" type: "string" required: true tags: ["Exec"] /exec/{id}/start: post: summary: "Start an exec instance" description: | Starts a previously set up exec instance. If detach is true, this endpoint returns immediately after starting the command. Otherwise, it sets up an interactive session with the command. operationId: "ExecStart" consumes: - "application/json" produces: - "application/vnd.docker.raw-stream" responses: 200: description: "No error" 404: description: "No such exec instance" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Container is stopped or paused" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "execStartConfig" in: "body" schema: type: "object" properties: Detach: type: "boolean" description: "Detach from the command." Tty: type: "boolean" description: "Allocate a pseudo-TTY." example: Detach: false Tty: false - name: "id" in: "path" description: "Exec instance ID" required: true type: "string" tags: ["Exec"] /exec/{id}/resize: post: summary: "Resize an exec instance" description: | Resize the TTY session used by an exec instance. This endpoint only works if `tty` was specified as part of creating and starting the exec instance. 
operationId: "ExecResize" responses: 201: description: "No error" 404: description: "No such exec instance" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Exec instance ID" required: true type: "string" - name: "h" in: "query" description: "Height of the TTY session in characters" type: "integer" - name: "w" in: "query" description: "Width of the TTY session in characters" type: "integer" tags: ["Exec"] /exec/{id}/json: get: summary: "Inspect an exec instance" description: "Return low-level information about an exec instance." operationId: "ExecInspect" produces: - "application/json" responses: 200: description: "No error" schema: type: "object" title: "ExecInspectResponse" properties: CanRemove: type: "boolean" DetachKeys: type: "string" ID: type: "string" Running: type: "boolean" ExitCode: type: "integer" ProcessConfig: $ref: "#/definitions/ProcessConfig" OpenStdin: type: "boolean" OpenStderr: type: "boolean" OpenStdout: type: "boolean" ContainerID: type: "string" Pid: type: "integer" description: "The system process ID for the exec process." examples: application/json: CanRemove: false ContainerID: "b53ee82b53a40c7dca428523e34f741f3abc51d9f297a14ff874bf761b995126" DetachKeys: "" ExitCode: 2 ID: "f33bbfb39f5b142420f4759b2348913bd4a8d1a6d7fd56499cb41a1bb91d7b3b" OpenStderr: true OpenStdin: true OpenStdout: true ProcessConfig: arguments: - "-c" - "exit 2" entrypoint: "sh" privileged: false tty: true user: "1000" Running: false Pid: 42000 404: description: "No such exec instance" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Exec instance ID" required: true type: "string" tags: ["Exec"] /volumes: get: summary: "List volumes" operationId: "VolumeList" produces: ["application/json"] responses: 200: description: "Summary volume data that matches the query" schema: type: "object" title: "VolumeListResponse" description: "Volume list response" required: [Volumes, Warnings] properties: Volumes: type: "array" x-nullable: false description: "List of volumes" items: $ref: "#/definitions/Volume" Warnings: type: "array" x-nullable: false description: | Warnings that occurred when fetching the list of volumes. items: type: "string" examples: application/json: Volumes: - CreatedAt: "2017-07-19T12:00:26Z" Name: "tardis" Driver: "local" Mountpoint: "/var/lib/docker/volumes/tardis" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Scope: "local" Options: device: "tmpfs" o: "size=100m,uid=1000" type: "tmpfs" Warnings: [] 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" description: | JSON encoded value of the filters (a `map[string][]string`) to process on the volumes list. Available filters: - `dangling=<boolean>` When set to `true` (or `1`), returns all volumes that are not in use by a container. When set to `false` (or `0`), only volumes that are in use by one or more containers are returned. - `driver=<volume-driver-name>` Matches volumes based on their driver. - `label=<key>` or `label=<key>:<value>` Matches volumes based on the presence of a `label` alone or a `label` and a value. - `name=<volume-name>` Matches all or part of a volume name. 
type: "string" format: "json" tags: ["Volume"] /volumes/create: post: summary: "Create a volume" operationId: "VolumeCreate" consumes: ["application/json"] produces: ["application/json"] responses: 201: description: "The volume was created successfully" schema: $ref: "#/definitions/Volume" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "volumeConfig" in: "body" required: true description: "Volume configuration" schema: type: "object" description: "Volume configuration" title: "VolumeConfig" properties: Name: description: | The new volume's name. If not specified, Docker generates a name. type: "string" x-nullable: false Driver: description: "Name of the volume driver to use." type: "string" default: "local" x-nullable: false DriverOpts: description: | A mapping of driver options and values. These options are passed directly to the driver and are driver specific. type: "object" additionalProperties: type: "string" Labels: description: "User-defined key/value metadata." type: "object" additionalProperties: type: "string" example: Name: "tardis" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" Driver: "custom" tags: ["Volume"] /volumes/{name}: get: summary: "Inspect a volume" operationId: "VolumeInspect" produces: ["application/json"] responses: 200: description: "No error" schema: $ref: "#/definitions/Volume" 404: description: "No such volume" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" required: true description: "Volume name or ID" type: "string" tags: ["Volume"] delete: summary: "Remove a volume" description: "Instruct the driver to remove the volume." operationId: "VolumeDelete" responses: 204: description: "The volume was removed" 404: description: "No such volume or volume driver" schema: $ref: "#/definitions/ErrorResponse" 409: description: "Volume is in use and cannot be removed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" required: true description: "Volume name or ID" type: "string" - name: "force" in: "query" description: "Force the removal of the volume" type: "boolean" default: false tags: ["Volume"] /volumes/prune: post: summary: "Delete unused volumes" produces: - "application/json" operationId: "VolumePrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune volumes with (or without, in case `label!=...` is used) the specified labels. type: "string" responses: 200: description: "No error" schema: type: "object" title: "VolumePruneResponse" properties: VolumesDeleted: description: "Volumes that were deleted" type: "array" items: type: "string" SpaceReclaimed: description: "Disk space reclaimed in bytes" type: "integer" format: "int64" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Volume"] /networks: get: summary: "List networks" description: | Returns a list of networks. For details on the format, see the [network inspect endpoint](#operation/NetworkInspect). Note that it uses a different, smaller representation of a network than inspecting a single network. 
For example, the list of containers attached to the network is not propagated in API versions 1.28 and up. operationId: "NetworkList" produces: - "application/json" responses: 200: description: "No error" schema: type: "array" items: $ref: "#/definitions/Network" examples: application/json: - Name: "bridge" Id: "f2de39df4171b0dc801e8002d1d999b77256983dfc63041c0f34030aa3977566" Created: "2016-10-19T06:21:00.416543526Z" Scope: "local" Driver: "bridge" EnableIPv6: false Internal: false Attachable: false Ingress: false IPAM: Driver: "default" Config: - Subnet: "172.17.0.0/16" Options: com.docker.network.bridge.default_bridge: "true" com.docker.network.bridge.enable_icc: "true" com.docker.network.bridge.enable_ip_masquerade: "true" com.docker.network.bridge.host_binding_ipv4: "0.0.0.0" com.docker.network.bridge.name: "docker0" com.docker.network.driver.mtu: "1500" - Name: "none" Id: "e086a3893b05ab69242d3c44e49483a3bbbd3a26b46baa8f61ab797c1088d794" Created: "0001-01-01T00:00:00Z" Scope: "local" Driver: "null" EnableIPv6: false Internal: false Attachable: false Ingress: false IPAM: Driver: "default" Config: [] Containers: {} Options: {} - Name: "host" Id: "13e871235c677f196c4e1ecebb9dc733b9b2d2ab589e30c539efeda84a24215e" Created: "0001-01-01T00:00:00Z" Scope: "local" Driver: "host" EnableIPv6: false Internal: false Attachable: false Ingress: false IPAM: Driver: "default" Config: [] Containers: {} Options: {} 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" description: | JSON encoded value of the filters (a `map[string][]string`) to process on the networks list. Available filters: - `dangling=<boolean>` When set to `true` (or `1`), returns all networks that are not in use by a container. When set to `false` (or `0`), only networks that are in use by one or more containers are returned. - `driver=<driver-name>` Matches a network's driver. - `id=<network-id>` Matches all or part of a network ID. - `label=<key>` or `label=<key>=<value>` of a network label. - `name=<network-name>` Matches all or part of a network name. - `scope=["swarm"|"global"|"local"]` Filters networks by scope (`swarm`, `global`, or `local`). - `type=["custom"|"builtin"]` Filters networks by type. The `custom` keyword returns all user-defined networks. 
type: "string" tags: ["Network"] /networks/{id}: get: summary: "Inspect a network" operationId: "NetworkInspect" produces: - "application/json" responses: 200: description: "No error" schema: $ref: "#/definitions/Network" 404: description: "Network not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" - name: "verbose" in: "query" description: "Detailed inspect output for troubleshooting" type: "boolean" default: false - name: "scope" in: "query" description: "Filter the network by scope (swarm, global, or local)" type: "string" tags: ["Network"] delete: summary: "Remove a network" operationId: "NetworkDelete" responses: 204: description: "No error" 403: description: "operation not supported for pre-defined networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such network" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" tags: ["Network"] /networks/create: post: summary: "Create a network" operationId: "NetworkCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "No error" schema: type: "object" title: "NetworkCreateResponse" properties: Id: description: "The ID of the created network." type: "string" Warning: type: "string" example: Id: "22be93d5babb089c5aab8dbc369042fad48ff791584ca2da2100db837a1c7c30" Warning: "" 403: description: "operation not supported for pre-defined networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "plugin not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "networkConfig" in: "body" description: "Network configuration" required: true schema: type: "object" required: ["Name"] properties: Name: description: "The network's name." type: "string" CheckDuplicate: description: | Check for networks with duplicate names. Since Network is primarily keyed based on a random ID and not on the name, and network name is strictly a user-friendly alias to the network which is uniquely identified using ID, there is no guaranteed way to check for duplicates. CheckDuplicate is there to provide a best effort checking of any networks which has the same name but it is not guaranteed to catch all name collisions. type: "boolean" Driver: description: "Name of the network driver plugin to use." type: "string" default: "bridge" Internal: description: "Restrict external access to the network." type: "boolean" Attachable: description: | Globally scoped network is manually attachable by regular containers from workers in swarm mode. type: "boolean" Ingress: description: | Ingress network is the network which provides the routing-mesh in swarm mode. type: "boolean" IPAM: description: "Optional custom IP scheme for the network." $ref: "#/definitions/IPAM" EnableIPv6: description: "Enable IPv6 on the network." type: "boolean" Options: description: "Network specific options to be used by the drivers." type: "object" additionalProperties: type: "string" Labels: description: "User-defined key/value metadata." 
type: "object" additionalProperties: type: "string" example: Name: "isolated_nw" CheckDuplicate: false Driver: "bridge" EnableIPv6: true IPAM: Driver: "default" Config: - Subnet: "172.20.0.0/16" IPRange: "172.20.10.0/24" Gateway: "172.20.10.11" - Subnet: "2001:db8:abcd::/64" Gateway: "2001:db8:abcd::1011" Options: foo: "bar" Internal: true Attachable: false Ingress: false Options: com.docker.network.bridge.default_bridge: "true" com.docker.network.bridge.enable_icc: "true" com.docker.network.bridge.enable_ip_masquerade: "true" com.docker.network.bridge.host_binding_ipv4: "0.0.0.0" com.docker.network.bridge.name: "docker0" com.docker.network.driver.mtu: "1500" Labels: com.example.some-label: "some-value" com.example.some-other-label: "some-other-value" tags: ["Network"] /networks/{id}/connect: post: summary: "Connect a container to a network" operationId: "NetworkConnect" consumes: - "application/json" responses: 200: description: "No error" 403: description: "Operation not supported for swarm scoped networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "Network or container not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" - name: "container" in: "body" required: true schema: type: "object" properties: Container: type: "string" description: "The ID or name of the container to connect to the network." EndpointConfig: $ref: "#/definitions/EndpointSettings" example: Container: "3613f73ba0e4" EndpointConfig: IPAMConfig: IPv4Address: "172.24.56.89" IPv6Address: "2001:db8::5689" tags: ["Network"] /networks/{id}/disconnect: post: summary: "Disconnect a container from a network" operationId: "NetworkDisconnect" consumes: - "application/json" responses: 200: description: "No error" 403: description: "Operation not supported for swarm scoped networks" schema: $ref: "#/definitions/ErrorResponse" 404: description: "Network or container not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "Network ID or name" required: true type: "string" - name: "container" in: "body" required: true schema: type: "object" properties: Container: type: "string" description: | The ID or name of the container to disconnect from the network. Force: type: "boolean" description: | Force the container to disconnect from the network. tags: ["Network"] /networks/prune: post: summary: "Delete unused networks" produces: - "application/json" operationId: "NetworkPrune" parameters: - name: "filters" in: "query" description: | Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters: - `until=<timestamp>` Prune networks created before this timestamp. The `<timestamp>` can be Unix timestamps, date formatted timestamps, or Go duration strings (e.g. `10m`, `1h30m`) computed relative to the daemon machine’s time. - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune networks with (or without, in case `label!=...` is used) the specified labels. 
type: "string" responses: 200: description: "No error" schema: type: "object" title: "NetworkPruneResponse" properties: NetworksDeleted: description: "Networks that were deleted" type: "array" items: type: "string" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Network"] /plugins: get: summary: "List plugins" operationId: "PluginList" description: "Returns information about installed plugins." produces: ["application/json"] responses: 200: description: "No error" schema: type: "array" items: $ref: "#/definitions/Plugin" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the plugin list. Available filters: - `capability=<capability name>` - `enable=<true>|<false>` tags: ["Plugin"] /plugins/privileges: get: summary: "Get plugin privileges" operationId: "GetPluginPrivileges" responses: 200: description: "no error" schema: type: "array" items: description: | Describes a permission the user has to accept upon installing the plugin. type: "object" title: "PluginPrivilegeItem" properties: Name: type: "string" Description: type: "string" Value: type: "array" items: type: "string" example: - Name: "network" Description: "" Value: - "host" - Name: "mount" Description: "" Value: - "/data" - Name: "device" Description: "" Value: - "/dev/cpu_dma_latency" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "remote" in: "query" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" tags: - "Plugin" /plugins/pull: post: summary: "Install a plugin" operationId: "PluginPull" description: | Pulls and installs a plugin. After the plugin is installed, it can be enabled using the [`POST /plugins/{name}/enable` endpoint](#operation/PostPluginsEnable). produces: - "application/json" responses: 204: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "remote" in: "query" description: | Remote reference for plugin to install. The `:latest` tag is optional, and is used as the default if omitted. required: true type: "string" - name: "name" in: "query" description: | Local name for the pulled plugin. The `:latest` tag is optional, and is used as the default if omitted. required: false type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration to use when pulling a plugin from a registry. Refer to the [authentication section](#section/Authentication) for details. type: "string" - name: "body" in: "body" schema: type: "array" items: description: | Describes a permission accepted by the user upon installing the plugin. 
type: "object" properties: Name: type: "string" Description: type: "string" Value: type: "array" items: type: "string" example: - Name: "network" Description: "" Value: - "host" - Name: "mount" Description: "" Value: - "/data" - Name: "device" Description: "" Value: - "/dev/cpu_dma_latency" tags: ["Plugin"] /plugins/{name}/json: get: summary: "Inspect a plugin" operationId: "PluginInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Plugin" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" tags: ["Plugin"] /plugins/{name}: delete: summary: "Remove a plugin" operationId: "PluginDelete" responses: 200: description: "no error" schema: $ref: "#/definitions/Plugin" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "force" in: "query" description: | Disable the plugin before removing. This may result in issues if the plugin is in use by a container. type: "boolean" default: false tags: ["Plugin"] /plugins/{name}/enable: post: summary: "Enable a plugin" operationId: "PluginEnable" responses: 200: description: "no error" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "timeout" in: "query" description: "Set the HTTP client timeout (in seconds)" type: "integer" default: 0 tags: ["Plugin"] /plugins/{name}/disable: post: summary: "Disable a plugin" operationId: "PluginDisable" responses: 200: description: "no error" 404: description: "plugin is not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" tags: ["Plugin"] /plugins/{name}/upgrade: post: summary: "Upgrade a plugin" operationId: "PluginUpgrade" responses: 204: description: "no error" 404: description: "plugin not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "remote" in: "query" description: | Remote reference to upgrade to. The `:latest` tag is optional, and is used as the default if omitted. required: true type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration to use when pulling a plugin from a registry. Refer to the [authentication section](#section/Authentication) for details. 
type: "string" - name: "body" in: "body" schema: type: "array" items: description: | Describes a permission accepted by the user upon installing the plugin. type: "object" properties: Name: type: "string" Description: type: "string" Value: type: "array" items: type: "string" example: - Name: "network" Description: "" Value: - "host" - Name: "mount" Description: "" Value: - "/data" - Name: "device" Description: "" Value: - "/dev/cpu_dma_latency" tags: ["Plugin"] /plugins/create: post: summary: "Create a plugin" operationId: "PluginCreate" consumes: - "application/x-tar" responses: 204: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "query" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "tarContext" in: "body" description: "Path to tar containing plugin rootfs and manifest" schema: type: "string" format: "binary" tags: ["Plugin"] /plugins/{name}/push: post: summary: "Push a plugin" operationId: "PluginPush" description: | Push a plugin to the registry. parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" responses: 200: description: "no error" 404: description: "plugin not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Plugin"] /plugins/{name}/set: post: summary: "Configure a plugin" operationId: "PluginSet" consumes: - "application/json" parameters: - name: "name" in: "path" description: | The name of the plugin. The `:latest` tag is optional, and is the default if omitted. required: true type: "string" - name: "body" in: "body" schema: type: "array" items: type: "string" example: ["DEBUG=1"] responses: 204: description: "No error" 404: description: "Plugin not installed" schema: $ref: "#/definitions/ErrorResponse" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Plugin"] /nodes: get: summary: "List nodes" operationId: "NodeList" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Node" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" description: | Filters to process on the nodes list, encoded as JSON (a `map[string][]string`). 
Available filters: - `id=<node id>` - `label=<engine label>` - `membership=`(`accepted`|`pending`)` - `name=<node name>` - `node.label=<node label>` - `role=`(`manager`|`worker`)` type: "string" tags: ["Node"] /nodes/{id}: get: summary: "Inspect a node" operationId: "NodeInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Node" 404: description: "no such node" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the node" type: "string" required: true tags: ["Node"] delete: summary: "Delete a node" operationId: "NodeDelete" responses: 200: description: "no error" 404: description: "no such node" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the node" type: "string" required: true - name: "force" in: "query" description: "Force remove a node from the swarm" default: false type: "boolean" tags: ["Node"] /nodes/{id}/update: post: summary: "Update a node" operationId: "NodeUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such node" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID of the node" type: "string" required: true - name: "body" in: "body" schema: $ref: "#/definitions/NodeSpec" - name: "version" in: "query" description: | The version number of the node object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true tags: ["Node"] /swarm: get: summary: "Inspect swarm" operationId: "SwarmInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Swarm" 404: description: "no such swarm" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" tags: ["Swarm"] /swarm/init: post: summary: "Initialize a new swarm" operationId: "SwarmInit" produces: - "application/json" - "text/plain" responses: 200: description: "no error" schema: description: "The node ID" type: "string" example: "7v2t30z9blmxuhnyo6s4cpenp" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is already part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: type: "object" properties: ListenAddr: description: | Listen address used for inter-manager communication, as well as determining the networking interface used for the VXLAN Tunnel Endpoint (VTEP). This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the default swarm listening port is used. 
type: "string" AdvertiseAddr: description: | Externally reachable address advertised to other nodes. This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the port number from the listen address is used. If `AdvertiseAddr` is not specified, it will be automatically detected when possible. type: "string" DataPathAddr: description: | Address or interface to use for data path traffic (format: `<ip|interface>`), for example, `192.168.1.1`, or an interface, like `eth0`. If `DataPathAddr` is unspecified, the same address as `AdvertiseAddr` is used. The `DataPathAddr` specifies the address that global scope network drivers will publish towards other nodes in order to reach the containers running on this node. Using this parameter it is possible to separate the container data traffic from the management traffic of the cluster. type: "string" DataPathPort: description: | DataPathPort specifies the data path port number for data traffic. Acceptable port range is 1024 to 49151. if no port is set or is set to 0, default port 4789 will be used. type: "integer" format: "uint32" DefaultAddrPool: description: | Default Address Pool specifies default subnet pools for global scope networks. type: "array" items: type: "string" example: ["10.10.0.0/16", "20.20.0.0/16"] ForceNewCluster: description: "Force creation of a new swarm." type: "boolean" SubnetSize: description: | SubnetSize specifies the subnet size of the networks created from the default subnet pool. type: "integer" format: "uint32" Spec: $ref: "#/definitions/SwarmSpec" example: ListenAddr: "0.0.0.0:2377" AdvertiseAddr: "192.168.1.1:2377" DataPathPort: 4789 DefaultAddrPool: ["10.10.0.0/8", "20.20.0.0/8"] SubnetSize: 24 ForceNewCluster: false Spec: Orchestration: {} Raft: {} Dispatcher: {} CAConfig: {} EncryptionConfig: AutoLockManagers: false tags: ["Swarm"] /swarm/join: post: summary: "Join an existing swarm" operationId: "SwarmJoin" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is already part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: type: "object" properties: ListenAddr: description: | Listen address used for inter-manager communication if the node gets promoted to manager, as well as determining the networking interface used for the VXLAN Tunnel Endpoint (VTEP). type: "string" AdvertiseAddr: description: | Externally reachable address advertised to other nodes. This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the port number from the listen address is used. If `AdvertiseAddr` is not specified, it will be automatically detected when possible. type: "string" DataPathAddr: description: | Address or interface to use for data path traffic (format: `<ip|interface>`), for example, `192.168.1.1`, or an interface, like `eth0`. If `DataPathAddr` is unspecified, the same address as `AdvertiseAddr` is used. The `DataPathAddr` specifies the address that global scope network drivers will publish towards other nodes in order to reach the containers running on this node. 
Using this parameter it is possible to separate the container data traffic from the management traffic of the cluster. type: "string" RemoteAddrs: description: | Addresses of manager nodes already participating in the swarm. type: "array" items: type: "string" JoinToken: description: "Secret token for joining this swarm." type: "string" example: ListenAddr: "0.0.0.0:2377" AdvertiseAddr: "192.168.1.1:2377" RemoteAddrs: - "node1:2377" JoinToken: "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-7p73s1dx5in4tatdymyhg9hu2" tags: ["Swarm"] /swarm/leave: post: summary: "Leave a swarm" operationId: "SwarmLeave" responses: 200: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "force" description: | Force leave swarm, even if this is the last manager or that it will break the cluster. in: "query" type: "boolean" default: false tags: ["Swarm"] /swarm/update: post: summary: "Update a swarm" operationId: "SwarmUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: $ref: "#/definitions/SwarmSpec" - name: "version" in: "query" description: | The version number of the swarm object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true - name: "rotateWorkerToken" in: "query" description: "Rotate the worker join token." type: "boolean" default: false - name: "rotateManagerToken" in: "query" description: "Rotate the manager join token." type: "boolean" default: false - name: "rotateManagerUnlockKey" in: "query" description: "Rotate the manager unlock key." type: "boolean" default: false tags: ["Swarm"] /swarm/unlockkey: get: summary: "Get the unlock key" operationId: "SwarmUnlockkey" consumes: - "application/json" responses: 200: description: "no error" schema: type: "object" title: "UnlockKeyResponse" properties: UnlockKey: description: "The swarm's unlock key." type: "string" example: UnlockKey: "SWMKEY-1-7c37Cc8654o6p38HnroywCi19pllOnGtbdZEgtKxZu8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" tags: ["Swarm"] /swarm/unlock: post: summary: "Unlock a locked manager" operationId: "SwarmUnlock" consumes: - "application/json" produces: - "application/json" parameters: - name: "body" in: "body" required: true schema: type: "object" properties: UnlockKey: description: "The swarm's unlock key." 
type: "string" example: UnlockKey: "SWMKEY-1-7c37Cc8654o6p38HnroywCi19pllOnGtbdZEgtKxZu8" responses: 200: description: "no error" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" tags: ["Swarm"] /services: get: summary: "List services" operationId: "ServiceList" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Service" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the services list. Available filters: - `id=<service id>` - `label=<service label>` - `mode=["replicated"|"global"]` - `name=<service name>` - name: "status" in: "query" type: "boolean" description: | Include service status, with count of running and desired tasks. tags: ["Service"] /services/create: post: summary: "Create a service" operationId: "ServiceCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: type: "object" title: "ServiceCreateResponse" properties: ID: description: "The ID of the created service." type: "string" Warning: description: "Optional warning message" type: "string" example: ID: "ak7w3gjqoa3kuz8xcpnyy0pvl" Warning: "unable to pin image doesnotexist:latest to digest: image library/doesnotexist:latest not found" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 403: description: "network is not eligible for services" schema: $ref: "#/definitions/ErrorResponse" 409: description: "name conflicts with an existing service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" required: true schema: allOf: - $ref: "#/definitions/ServiceSpec" - type: "object" example: Name: "web" TaskTemplate: ContainerSpec: Image: "nginx:alpine" Mounts: - ReadOnly: true Source: "web-data" Target: "/usr/share/nginx/html" Type: "volume" VolumeOptions: DriverConfig: {} Labels: com.example.something: "something-value" Hosts: ["10.10.10.10 host1", "ABCD:EF01:2345:6789:ABCD:EF01:2345:6789 host2"] User: "33" DNSConfig: Nameservers: ["8.8.8.8"] Search: ["example.org"] Options: ["timeout:3"] Secrets: - File: Name: "www.example.org.key" UID: "33" GID: "33" Mode: 384 SecretID: "fpjqlhnwb19zds35k8wn80lq9" SecretName: "example_org_domain_key" LogDriver: Name: "json-file" Options: max-file: "3" max-size: "10M" Placement: {} Resources: Limits: MemoryBytes: 104857600 Reservations: {} RestartPolicy: Condition: "on-failure" Delay: 10000000000 MaxAttempts: 10 Mode: Replicated: Replicas: 4 UpdateConfig: Parallelism: 2 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 RollbackConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 EndpointSpec: Ports: - Protocol: "tcp" PublishedPort: 8080 TargetPort: 80 Labels: foo: "bar" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration for pulling from private registries. Refer to the [authentication section](#section/Authentication) for details. 
type: "string" tags: ["Service"] /services/{id}: get: summary: "Inspect a service" operationId: "ServiceInspect" responses: 200: description: "no error" schema: $ref: "#/definitions/Service" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID or name of service." required: true type: "string" - name: "insertDefaults" in: "query" description: "Fill empty fields with default values." type: "boolean" default: false tags: ["Service"] delete: summary: "Delete a service" operationId: "ServiceDelete" responses: 200: description: "no error" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID or name of service." required: true type: "string" tags: ["Service"] /services/{id}/update: post: summary: "Update a service" operationId: "ServiceUpdate" consumes: ["application/json"] produces: ["application/json"] responses: 200: description: "no error" schema: $ref: "#/definitions/ServiceUpdateResponse" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID or name of service." required: true type: "string" - name: "body" in: "body" required: true schema: allOf: - $ref: "#/definitions/ServiceSpec" - type: "object" example: Name: "top" TaskTemplate: ContainerSpec: Image: "busybox" Args: - "top" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ForceUpdate: 0 Mode: Replicated: Replicas: 1 UpdateConfig: Parallelism: 2 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 RollbackConfig: Parallelism: 1 Delay: 1000000000 FailureAction: "pause" Monitor: 15000000000 MaxFailureRatio: 0.15 EndpointSpec: Mode: "vip" - name: "version" in: "query" description: | The version number of the service object being updated. This is required to avoid conflicting writes. This version number should be the value as currently set on the service *before* the update. You can find the current version by calling `GET /services/{id}` required: true type: "integer" - name: "registryAuthFrom" in: "query" description: | If the `X-Registry-Auth` header is not specified, this parameter indicates where to find registry authorization credentials. type: "string" enum: ["spec", "previous-spec"] default: "spec" - name: "rollback" in: "query" description: | Set to this parameter to `previous` to cause a server-side rollback to the previous service spec. The supplied spec will be ignored in this case. type: "string" - name: "X-Registry-Auth" in: "header" description: | A base64url-encoded auth configuration for pulling from private registries. Refer to the [authentication section](#section/Authentication) for details. 
type: "string" tags: ["Service"] /services/{id}/logs: get: summary: "Get service logs" description: | Get `stdout` and `stderr` logs from a service. See also [`/containers/{id}/logs`](#operation/ContainerLogs). **Note**: This endpoint works only for services with the `local`, `json-file` or `journald` logging drivers. operationId: "ServiceLogs" responses: 200: description: "logs returned as a stream in response body" schema: type: "string" format: "binary" 404: description: "no such service" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such service: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID or name of the service" type: "string" - name: "details" in: "query" description: "Show service context and extra details provided to logs." type: "boolean" default: false - name: "follow" in: "query" description: "Keep connection after returning logs." type: "boolean" default: false - name: "stdout" in: "query" description: "Return logs from `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Return logs from `stderr`" type: "boolean" default: false - name: "since" in: "query" description: "Only return logs since this time, as a UNIX timestamp" type: "integer" default: 0 - name: "timestamps" in: "query" description: "Add timestamps to every log line" type: "boolean" default: false - name: "tail" in: "query" description: | Only return this number of log lines from the end of the logs. Specify as an integer or `all` to output all log lines. type: "string" default: "all" tags: ["Service"] /tasks: get: summary: "List tasks" operationId: "TaskList" produces: - "application/json" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Task" example: - ID: "0kzzo1i0y4jz6027t0k7aezc7" Version: Index: 71 CreatedAt: "2016-06-07T21:07:31.171892745Z" UpdatedAt: "2016-06-07T21:07:31.376370513Z" Spec: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ServiceID: "9mnpnzenvg8p8tdbtq4wvbkcz" Slot: 1 NodeID: "60gvrl6tm78dmak4yl7srz94v" Status: Timestamp: "2016-06-07T21:07:31.290032978Z" State: "running" Message: "started" ContainerStatus: ContainerID: "e5d62702a1b48d01c3e02ca1e0212a250801fa8d67caca0b6f35919ebc12f035" PID: 677 DesiredState: "running" NetworksAttachments: - Network: ID: "4qvuz4ko70xaltuqbt8956gd1" Version: Index: 18 CreatedAt: "2016-06-07T20:31:11.912919752Z" UpdatedAt: "2016-06-07T21:07:29.955277358Z" Spec: Name: "ingress" Labels: com.docker.swarm.internal: "true" DriverConfiguration: {} IPAMOptions: Driver: {} Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" DriverState: Name: "overlay" Options: com.docker.network.driver.overlay.vxlanid_list: "256" IPAMOptions: Driver: Name: "default" Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" Addresses: - "10.255.0.10/16" - ID: "1yljwbmlr8er2waf8orvqpwms" Version: Index: 30 CreatedAt: "2016-06-07T21:07:30.019104782Z" UpdatedAt: "2016-06-07T21:07:30.231958098Z" Name: "hopeful_cori" Spec: ContainerSpec: Image: "redis" Resources: Limits: {} Reservations: {} RestartPolicy: Condition: "any" MaxAttempts: 0 Placement: {} ServiceID: "9mnpnzenvg8p8tdbtq4wvbkcz" Slot: 1 NodeID: "60gvrl6tm78dmak4yl7srz94v" Status: Timestamp: "2016-06-07T21:07:30.202183143Z" State: 
"shutdown" Message: "shutdown" ContainerStatus: ContainerID: "1cf8d63d18e79668b0004a4be4c6ee58cddfad2dae29506d8781581d0688a213" DesiredState: "shutdown" NetworksAttachments: - Network: ID: "4qvuz4ko70xaltuqbt8956gd1" Version: Index: 18 CreatedAt: "2016-06-07T20:31:11.912919752Z" UpdatedAt: "2016-06-07T21:07:29.955277358Z" Spec: Name: "ingress" Labels: com.docker.swarm.internal: "true" DriverConfiguration: {} IPAMOptions: Driver: {} Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" DriverState: Name: "overlay" Options: com.docker.network.driver.overlay.vxlanid_list: "256" IPAMOptions: Driver: Name: "default" Configs: - Subnet: "10.255.0.0/16" Gateway: "10.255.0.1" Addresses: - "10.255.0.5/16" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the tasks list. Available filters: - `desired-state=(running | shutdown | accepted)` - `id=<task id>` - `label=key` or `label="key=value"` - `name=<task name>` - `node=<node id or name>` - `service=<service name>` tags: ["Task"] /tasks/{id}: get: summary: "Inspect a task" operationId: "TaskInspect" produces: - "application/json" responses: 200: description: "no error" schema: $ref: "#/definitions/Task" 404: description: "no such task" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "ID of the task" required: true type: "string" tags: ["Task"] /tasks/{id}/logs: get: summary: "Get task logs" description: | Get `stdout` and `stderr` logs from a task. See also [`/containers/{id}/logs`](#operation/ContainerLogs). **Note**: This endpoint works only for services with the `local`, `json-file` or `journald` logging drivers. operationId: "TaskLogs" responses: 200: description: "logs returned as a stream in response body" schema: type: "string" format: "binary" 404: description: "no such task" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such task: c2ada9df5af8" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true description: "ID of the task" type: "string" - name: "details" in: "query" description: "Show task context and extra details provided to logs." type: "boolean" default: false - name: "follow" in: "query" description: "Keep connection after returning logs." type: "boolean" default: false - name: "stdout" in: "query" description: "Return logs from `stdout`" type: "boolean" default: false - name: "stderr" in: "query" description: "Return logs from `stderr`" type: "boolean" default: false - name: "since" in: "query" description: "Only return logs since this time, as a UNIX timestamp" type: "integer" default: 0 - name: "timestamps" in: "query" description: "Add timestamps to every log line" type: "boolean" default: false - name: "tail" in: "query" description: | Only return this number of log lines from the end of the logs. Specify as an integer or `all` to output all log lines. 
type: "string" default: "all" tags: ["Task"] /secrets: get: summary: "List secrets" operationId: "SecretList" produces: - "application/json" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Secret" example: - ID: "blt1owaxmitz71s9v5zh81zun" Version: Index: 85 CreatedAt: "2017-07-20T13:55:28.678958722Z" UpdatedAt: "2017-07-20T13:55:28.678958722Z" Spec: Name: "mysql-passwd" Labels: some.label: "some.value" Driver: Name: "secret-bucket" Options: OptionA: "value for driver option A" OptionB: "value for driver option B" - ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "app-dev.crt" Labels: foo: "bar" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the secrets list. Available filters: - `id=<secret id>` - `label=<key> or label=<key>=value` - `name=<secret name>` - `names=<secret name>` tags: ["Secret"] /secrets/create: post: summary: "Create a secret" operationId: "SecretCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 409: description: "name conflicts with an existing object" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" schema: allOf: - $ref: "#/definitions/SecretSpec" - type: "object" example: Name: "app-key.crt" Labels: foo: "bar" Data: "VEhJUyBJUyBOT1QgQSBSRUFMIENFUlRJRklDQVRFCg==" Driver: Name: "secret-bucket" Options: OptionA: "value for driver option A" OptionB: "value for driver option B" tags: ["Secret"] /secrets/{id}: get: summary: "Inspect a secret" operationId: "SecretInspect" produces: - "application/json" responses: 200: description: "no error" schema: $ref: "#/definitions/Secret" examples: application/json: ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "app-dev.crt" Labels: foo: "bar" Driver: Name: "secret-bucket" Options: OptionA: "value for driver option A" OptionB: "value for driver option B" 404: description: "secret not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the secret" tags: ["Secret"] delete: summary: "Delete a secret" operationId: "SecretDelete" produces: - "application/json" responses: 204: description: "no error" 404: description: "secret not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the secret" tags: ["Secret"] /secrets/{id}/update: post: summary: "Update a Secret" operationId: "SecretUpdate" responses: 200: description: "no error" 400: 
description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such secret" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the secret" type: "string" required: true - name: "body" in: "body" schema: $ref: "#/definitions/SecretSpec" description: | The spec of the secret to update. Currently, only the Labels field can be updated. All other fields must remain unchanged from the [SecretInspect endpoint](#operation/SecretInspect) response values. - name: "version" in: "query" description: | The version number of the secret object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true tags: ["Secret"] /configs: get: summary: "List configs" operationId: "ConfigList" produces: - "application/json" responses: 200: description: "no error" schema: type: "array" items: $ref: "#/definitions/Config" example: - ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "server.conf" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "filters" in: "query" type: "string" description: | A JSON encoded value of the filters (a `map[string][]string`) to process on the configs list. Available filters: - `id=<config id>` - `label=<key> or label=<key>=value` - `name=<config name>` - `names=<config name>` tags: ["Config"] /configs/create: post: summary: "Create a config" operationId: "ConfigCreate" consumes: - "application/json" produces: - "application/json" responses: 201: description: "no error" schema: $ref: "#/definitions/IdResponse" 409: description: "name conflicts with an existing object" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "body" in: "body" schema: allOf: - $ref: "#/definitions/ConfigSpec" - type: "object" example: Name: "server.conf" Labels: foo: "bar" Data: "VEhJUyBJUyBOT1QgQSBSRUFMIENFUlRJRklDQVRFCg==" tags: ["Config"] /configs/{id}: get: summary: "Inspect a config" operationId: "ConfigInspect" produces: - "application/json" responses: 200: description: "no error" schema: $ref: "#/definitions/Config" examples: application/json: ID: "ktnbjxoalbkvbvedmg1urrz8h" Version: Index: 11 CreatedAt: "2016-11-05T01:20:17.327670065Z" UpdatedAt: "2016-11-05T01:20:17.327670065Z" Spec: Name: "app-dev.crt" 404: description: "config not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the config" tags: ["Config"] delete: summary: "Delete a config" operationId: "ConfigDelete" produces: - "application/json" responses: 204: description: "no error" 404: description: "config not found" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part 
of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" required: true type: "string" description: "ID of the config" tags: ["Config"] /configs/{id}/update: post: summary: "Update a Config" operationId: "ConfigUpdate" responses: 200: description: "no error" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 404: description: "no such config" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" 503: description: "node is not part of a swarm" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "id" in: "path" description: "The ID or name of the config" type: "string" required: true - name: "body" in: "body" schema: $ref: "#/definitions/ConfigSpec" description: | The spec of the config to update. Currently, only the Labels field can be updated. All other fields must remain unchanged from the [ConfigInspect endpoint](#operation/ConfigInspect) response values. - name: "version" in: "query" description: | The version number of the config object being updated. This is required to avoid conflicting writes. type: "integer" format: "int64" required: true tags: ["Config"] /distribution/{name}/json: get: summary: "Get image information from the registry" description: | Return image digest and platform information by contacting the registry. operationId: "DistributionInspect" produces: - "application/json" responses: 200: description: "descriptor and platform information" schema: type: "object" x-go-name: DistributionInspect title: "DistributionInspectResponse" required: [Descriptor, Platforms] properties: Descriptor: type: "object" description: | A descriptor struct containing digest, media type, and size. properties: MediaType: type: "string" Size: type: "integer" format: "int64" Digest: type: "string" URLs: type: "array" items: type: "string" Platforms: type: "array" description: | An array containing all platforms supported by the image. items: type: "object" properties: Architecture: type: "string" OS: type: "string" OSVersion: type: "string" OSFeatures: type: "array" items: type: "string" Variant: type: "string" Features: type: "array" items: type: "string" examples: application/json: Descriptor: MediaType: "application/vnd.docker.distribution.manifest.v2+json" Digest: "sha256:c0537ff6a5218ef531ece93d4984efc99bbf3f7497c0a7726c88e2bb7584dc96" Size: 3987495 URLs: - "" Platforms: - Architecture: "amd64" OS: "linux" OSVersion: "" OSFeatures: - "" Variant: "" Features: - "" 401: description: "Failed authentication or no image found" schema: $ref: "#/definitions/ErrorResponse" examples: application/json: message: "No such image: someimage (tag: latest)" 500: description: "Server error" schema: $ref: "#/definitions/ErrorResponse" parameters: - name: "name" in: "path" description: "Image name or id" type: "string" required: true tags: ["Distribution"] /session: post: summary: "Initialize interactive session" description: | Start a new interactive session with a server. Session allows server to call back to the client for advanced capabilities. ### Hijacking This endpoint hijacks the HTTP connection to HTTP2 transport that allows the client to expose gPRC services on that connection. 
For example, the client sends this request to upgrade the connection: ``` POST /session HTTP/1.1 Upgrade: h2c Connection: Upgrade ``` The Docker daemon responds with a `101 UPGRADED` response follow with the raw stream: ``` HTTP/1.1 101 UPGRADED Connection: Upgrade Upgrade: h2c ``` operationId: "Session" produces: - "application/vnd.docker.raw-stream" responses: 101: description: "no error, hijacking successful" 400: description: "bad parameter" schema: $ref: "#/definitions/ErrorResponse" 500: description: "server error" schema: $ref: "#/definitions/ErrorResponse" tags: ["Session"]
rvolosatovs
5e4da6cc8269c9b766421f22f5824f3e23c89e76
c81abefdb1f907bbc5f5b8b1b1fba942821ae5b3
I should have double-checked before committing; next time I will do that!
rvolosatovs
4,571
moby/moby
42,598
Only check if route overlaps routes with scope: LINK
closes https://github.com/moby/moby/issues/41525 partially addresses https://github.com/moby/moby/issues/33925 Signed-off-by: Alex Nordlund <[email protected]> This implements the solution mentioned in #33925, and while it doesn't close that specific issue (since it's technically two different issues), it does remove one of the issues which may lead people there, and it should close #41525. See https://github.com/moby/moby/issues/33925#issuecomment-702470693 for a great explanation. But in my case we are using a split VPN and we have routes set by our network team to make sure that docker works; unfortunately, the fact that they set the routes also makes docker think the network is currently in use and prevents us from using it in the `default-address-pools`. **- What I did** I added a `network.Scope == netlink.SCOPE_LINK` check to the route overlapping check. **- How I did it** **- How to verify it** **- Description for the changelog** When checking for overlapping routes on Linux, only consider ones where the scope is `LINK`. **- A picture of a cute animal (not mandatory but encouraged)** Here's the dog I live with, silently judging our VPN situation at a safe distance ![image](https://user-images.githubusercontent.com/905507/124491796-f38e3600-ddb3-11eb-915f-48cdeaed2dd8.png)
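A minimal sketch of the idea described in this PR, assuming the `github.com/vishvananda/netlink` API; `routeConflicts` is a hypothetical standalone helper, not the actual moby function (the real change lands in `CheckRouteOverlaps` in the file below). Routes are ignored unless their scope is `SCOPE_LINK`, so wider-scoped routes (such as ones pushed for a split VPN) no longer mark a candidate subnet as in use:

```go
// Sketch: only link-scoped routes count as a conflict for a candidate subnet.
package main

import (
	"fmt"
	"net"

	"github.com/vishvananda/netlink"
)

// routeConflicts is a hypothetical helper used for illustration only.
func routeConflicts(toCheck *net.IPNet, routes []netlink.Route) bool {
	for _, r := range routes {
		// Link-scoped routes indicate a directly attached network; routes
		// added by VPN/split-tunnel setups typically have a wider scope and
		// should not make the subnet look "in use".
		if r.Dst == nil || r.Scope != netlink.SCOPE_LINK {
			continue
		}
		if r.Dst.Contains(toCheck.IP) || toCheck.Contains(r.Dst.IP) {
			return true
		}
	}
	return false
}

func main() {
	// List all IPv4 routes on the host (nil link = all interfaces).
	routes, err := netlink.RouteList(nil, netlink.FAMILY_V4)
	if err != nil {
		panic(err)
	}
	_, candidate, _ := net.ParseCIDR("172.17.0.0/16")
	fmt.Println("conflicts:", routeConflicts(candidate, routes))
}
```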
null
2021-07-05 15:11:36+00:00
2021-08-26 07:58:35+00:00
libnetwork/netutils/utils_linux.go
// +build linux // Network utility functions. package netutils import ( "fmt" "net" "strings" "github.com/docker/docker/libnetwork/ipamutils" "github.com/docker/docker/libnetwork/ns" "github.com/docker/docker/libnetwork/osl" "github.com/docker/docker/libnetwork/resolvconf" "github.com/docker/docker/libnetwork/types" "github.com/pkg/errors" "github.com/vishvananda/netlink" ) var ( networkGetRoutesFct func(netlink.Link, int) ([]netlink.Route, error) ) // CheckRouteOverlaps checks whether the passed network overlaps with any existing routes func CheckRouteOverlaps(toCheck *net.IPNet) error { if networkGetRoutesFct == nil { networkGetRoutesFct = ns.NlHandle().RouteList } networks, err := networkGetRoutesFct(nil, netlink.FAMILY_V4) if err != nil { return err } for _, network := range networks { if network.Dst != nil && NetworkOverlaps(toCheck, network.Dst) { return ErrNetworkOverlaps } } return nil } // GenerateIfaceName returns an interface name using the passed in // prefix and the length of random bytes. The api ensures that the // there are is no interface which exists with that name. func GenerateIfaceName(nlh *netlink.Handle, prefix string, len int) (string, error) { linkByName := netlink.LinkByName if nlh != nil { linkByName = nlh.LinkByName } for i := 0; i < 3; i++ { name, err := GenerateRandomName(prefix, len) if err != nil { continue } _, err = linkByName(name) if err != nil { if strings.Contains(err.Error(), "not found") { return name, nil } return "", err } } return "", types.InternalErrorf("could not generate interface name") } // ElectInterfaceAddresses looks for an interface on the OS with the // specified name and returns returns all its IPv4 and IPv6 addresses in CIDR notation. // If a failure in retrieving the addresses or no IPv4 address is found, an error is returned. // If the interface does not exist, it chooses from a predefined // list the first IPv4 address which does not conflict with other // interfaces on the system. func ElectInterfaceAddresses(name string) ([]*net.IPNet, []*net.IPNet, error) { var v4Nets, v6Nets []*net.IPNet defer osl.InitOSContext()() link, _ := ns.NlHandle().LinkByName(name) if link != nil { v4addr, err := ns.NlHandle().AddrList(link, netlink.FAMILY_V4) if err != nil { return nil, nil, err } v6addr, err := ns.NlHandle().AddrList(link, netlink.FAMILY_V6) if err != nil { return nil, nil, err } for _, nlAddr := range v4addr { v4Nets = append(v4Nets, nlAddr.IPNet) } for _, nlAddr := range v6addr { v6Nets = append(v6Nets, nlAddr.IPNet) } } if link == nil || len(v4Nets) == 0 { // Choose from predefined local scope networks v4Net, err := FindAvailableNetwork(ipamutils.PredefinedLocalScopeDefaultNetworks) if err != nil { return nil, nil, errors.Wrapf(err, "PredefinedLocalScopeDefaultNetworks List: %+v", ipamutils.PredefinedLocalScopeDefaultNetworks) } v4Nets = append(v4Nets, v4Net) } return v4Nets, v6Nets, nil } // FindAvailableNetwork returns a network from the passed list which does not // overlap with existing interfaces in the system func FindAvailableNetwork(list []*net.IPNet) (*net.IPNet, error) { // We don't check for an error here, because we don't really care if we // can't read /etc/resolv.conf. So instead we skip the append if resolvConf // is nil. It either doesn't exist, or we can't read it for some reason. 
var nameservers []string if rc, err := resolvconf.Get(); err == nil { nameservers = resolvconf.GetNameserversAsCIDR(rc.Content) } for _, nw := range list { if err := CheckNameserverOverlaps(nameservers, nw); err == nil { if err := CheckRouteOverlaps(nw); err == nil { return nw, nil } } } return nil, fmt.Errorf("no available network") }
// +build linux // Network utility functions. package netutils import ( "fmt" "net" "strings" "github.com/docker/docker/libnetwork/ipamutils" "github.com/docker/docker/libnetwork/ns" "github.com/docker/docker/libnetwork/osl" "github.com/docker/docker/libnetwork/resolvconf" "github.com/docker/docker/libnetwork/types" "github.com/pkg/errors" "github.com/vishvananda/netlink" ) var ( networkGetRoutesFct func(netlink.Link, int) ([]netlink.Route, error) ) // CheckRouteOverlaps checks whether the passed network overlaps with any existing routes func CheckRouteOverlaps(toCheck *net.IPNet) error { if networkGetRoutesFct == nil { networkGetRoutesFct = ns.NlHandle().RouteList } networks, err := networkGetRoutesFct(nil, netlink.FAMILY_V4) if err != nil { return err } for _, network := range networks { if network.Dst != nil && network.Scope == netlink.SCOPE_LINK && NetworkOverlaps(toCheck, network.Dst) { return ErrNetworkOverlaps } } return nil } // GenerateIfaceName returns an interface name using the passed in // prefix and the length of random bytes. The api ensures that the // there are is no interface which exists with that name. func GenerateIfaceName(nlh *netlink.Handle, prefix string, len int) (string, error) { linkByName := netlink.LinkByName if nlh != nil { linkByName = nlh.LinkByName } for i := 0; i < 3; i++ { name, err := GenerateRandomName(prefix, len) if err != nil { continue } _, err = linkByName(name) if err != nil { if strings.Contains(err.Error(), "not found") { return name, nil } return "", err } } return "", types.InternalErrorf("could not generate interface name") } // ElectInterfaceAddresses looks for an interface on the OS with the // specified name and returns returns all its IPv4 and IPv6 addresses in CIDR notation. // If a failure in retrieving the addresses or no IPv4 address is found, an error is returned. // If the interface does not exist, it chooses from a predefined // list the first IPv4 address which does not conflict with other // interfaces on the system. func ElectInterfaceAddresses(name string) ([]*net.IPNet, []*net.IPNet, error) { var v4Nets, v6Nets []*net.IPNet defer osl.InitOSContext()() link, _ := ns.NlHandle().LinkByName(name) if link != nil { v4addr, err := ns.NlHandle().AddrList(link, netlink.FAMILY_V4) if err != nil { return nil, nil, err } v6addr, err := ns.NlHandle().AddrList(link, netlink.FAMILY_V6) if err != nil { return nil, nil, err } for _, nlAddr := range v4addr { v4Nets = append(v4Nets, nlAddr.IPNet) } for _, nlAddr := range v6addr { v6Nets = append(v6Nets, nlAddr.IPNet) } } if link == nil || len(v4Nets) == 0 { // Choose from predefined local scope networks v4Net, err := FindAvailableNetwork(ipamutils.PredefinedLocalScopeDefaultNetworks) if err != nil { return nil, nil, errors.Wrapf(err, "PredefinedLocalScopeDefaultNetworks List: %+v", ipamutils.PredefinedLocalScopeDefaultNetworks) } v4Nets = append(v4Nets, v4Net) } return v4Nets, v6Nets, nil } // FindAvailableNetwork returns a network from the passed list which does not // overlap with existing interfaces in the system func FindAvailableNetwork(list []*net.IPNet) (*net.IPNet, error) { // We don't check for an error here, because we don't really care if we // can't read /etc/resolv.conf. So instead we skip the append if resolvConf // is nil. It either doesn't exist, or we can't read it for some reason. 
var nameservers []string if rc, err := resolvconf.Get(); err == nil { nameservers = resolvconf.GetNameserversAsCIDR(rc.Content) } for _, nw := range list { if err := CheckNameserverOverlaps(nameservers, nw); err == nil { if err := CheckRouteOverlaps(nw); err == nil { return nw, nil } } } return nil, fmt.Errorf("no available network") }
deepy
8207c05cfcff9853e3e65b2b87b8355d9f65e734
2bb21b85c2a6688b0538a0b482ae9eeee4d33a9a
Curious; should this check only be performed in this case, or also in other cases where `NetworkOverlaps` is used? (Mostly wondering if this should be done in `NetworkOverlaps()` itself.)
thaJeztah
4,572
moby/moby
42,598
Only check if route overlaps routes with scope: LINK
closes https://github.com/moby/moby/issues/41525 partially addresses https://github.com/moby/moby/issues/33925 Signed-off-by: Alex Nordlund <[email protected]> This implements the solution mentioned in #33925, and while it doesn't close that specific issue (since it's technically two different issues), it does remove one of the issues which may lead people there, and it should close #41525. See https://github.com/moby/moby/issues/33925#issuecomment-702470693 for a great explanation. But in my case we are using a split VPN and we have routes set by our network team to make sure that docker works; unfortunately, the fact that they set the routes also makes docker think the network is currently in use and prevents us from using it in the `default-address-pools`. **- What I did** I added a `network.Scope == netlink.SCOPE_LINK` check to the route overlapping check. **- How I did it** **- How to verify it** **- Description for the changelog** When checking for overlapping routes on Linux, only consider ones where the scope is `LINK`. **- A picture of a cute animal (not mandatory but encouraged)** Here's the dog I live with, silently judging our VPN situation at a safe distance ![image](https://user-images.githubusercontent.com/905507/124491796-f38e3600-ddb3-11eb-915f-48cdeaed2dd8.png)
null
2021-07-05 15:11:36+00:00
2021-08-26 07:58:35+00:00
libnetwork/netutils/utils_linux.go
// +build linux // Network utility functions. package netutils import ( "fmt" "net" "strings" "github.com/docker/docker/libnetwork/ipamutils" "github.com/docker/docker/libnetwork/ns" "github.com/docker/docker/libnetwork/osl" "github.com/docker/docker/libnetwork/resolvconf" "github.com/docker/docker/libnetwork/types" "github.com/pkg/errors" "github.com/vishvananda/netlink" ) var ( networkGetRoutesFct func(netlink.Link, int) ([]netlink.Route, error) ) // CheckRouteOverlaps checks whether the passed network overlaps with any existing routes func CheckRouteOverlaps(toCheck *net.IPNet) error { if networkGetRoutesFct == nil { networkGetRoutesFct = ns.NlHandle().RouteList } networks, err := networkGetRoutesFct(nil, netlink.FAMILY_V4) if err != nil { return err } for _, network := range networks { if network.Dst != nil && NetworkOverlaps(toCheck, network.Dst) { return ErrNetworkOverlaps } } return nil } // GenerateIfaceName returns an interface name using the passed in // prefix and the length of random bytes. The api ensures that the // there are is no interface which exists with that name. func GenerateIfaceName(nlh *netlink.Handle, prefix string, len int) (string, error) { linkByName := netlink.LinkByName if nlh != nil { linkByName = nlh.LinkByName } for i := 0; i < 3; i++ { name, err := GenerateRandomName(prefix, len) if err != nil { continue } _, err = linkByName(name) if err != nil { if strings.Contains(err.Error(), "not found") { return name, nil } return "", err } } return "", types.InternalErrorf("could not generate interface name") } // ElectInterfaceAddresses looks for an interface on the OS with the // specified name and returns returns all its IPv4 and IPv6 addresses in CIDR notation. // If a failure in retrieving the addresses or no IPv4 address is found, an error is returned. // If the interface does not exist, it chooses from a predefined // list the first IPv4 address which does not conflict with other // interfaces on the system. func ElectInterfaceAddresses(name string) ([]*net.IPNet, []*net.IPNet, error) { var v4Nets, v6Nets []*net.IPNet defer osl.InitOSContext()() link, _ := ns.NlHandle().LinkByName(name) if link != nil { v4addr, err := ns.NlHandle().AddrList(link, netlink.FAMILY_V4) if err != nil { return nil, nil, err } v6addr, err := ns.NlHandle().AddrList(link, netlink.FAMILY_V6) if err != nil { return nil, nil, err } for _, nlAddr := range v4addr { v4Nets = append(v4Nets, nlAddr.IPNet) } for _, nlAddr := range v6addr { v6Nets = append(v6Nets, nlAddr.IPNet) } } if link == nil || len(v4Nets) == 0 { // Choose from predefined local scope networks v4Net, err := FindAvailableNetwork(ipamutils.PredefinedLocalScopeDefaultNetworks) if err != nil { return nil, nil, errors.Wrapf(err, "PredefinedLocalScopeDefaultNetworks List: %+v", ipamutils.PredefinedLocalScopeDefaultNetworks) } v4Nets = append(v4Nets, v4Net) } return v4Nets, v6Nets, nil } // FindAvailableNetwork returns a network from the passed list which does not // overlap with existing interfaces in the system func FindAvailableNetwork(list []*net.IPNet) (*net.IPNet, error) { // We don't check for an error here, because we don't really care if we // can't read /etc/resolv.conf. So instead we skip the append if resolvConf // is nil. It either doesn't exist, or we can't read it for some reason. 
var nameservers []string if rc, err := resolvconf.Get(); err == nil { nameservers = resolvconf.GetNameserversAsCIDR(rc.Content) } for _, nw := range list { if err := CheckNameserverOverlaps(nameservers, nw); err == nil { if err := CheckRouteOverlaps(nw); err == nil { return nw, nil } } } return nil, fmt.Errorf("no available network") }
// +build linux // Network utility functions. package netutils import ( "fmt" "net" "strings" "github.com/docker/docker/libnetwork/ipamutils" "github.com/docker/docker/libnetwork/ns" "github.com/docker/docker/libnetwork/osl" "github.com/docker/docker/libnetwork/resolvconf" "github.com/docker/docker/libnetwork/types" "github.com/pkg/errors" "github.com/vishvananda/netlink" ) var ( networkGetRoutesFct func(netlink.Link, int) ([]netlink.Route, error) ) // CheckRouteOverlaps checks whether the passed network overlaps with any existing routes func CheckRouteOverlaps(toCheck *net.IPNet) error { if networkGetRoutesFct == nil { networkGetRoutesFct = ns.NlHandle().RouteList } networks, err := networkGetRoutesFct(nil, netlink.FAMILY_V4) if err != nil { return err } for _, network := range networks { if network.Dst != nil && network.Scope == netlink.SCOPE_LINK && NetworkOverlaps(toCheck, network.Dst) { return ErrNetworkOverlaps } } return nil } // GenerateIfaceName returns an interface name using the passed in // prefix and the length of random bytes. The api ensures that the // there are is no interface which exists with that name. func GenerateIfaceName(nlh *netlink.Handle, prefix string, len int) (string, error) { linkByName := netlink.LinkByName if nlh != nil { linkByName = nlh.LinkByName } for i := 0; i < 3; i++ { name, err := GenerateRandomName(prefix, len) if err != nil { continue } _, err = linkByName(name) if err != nil { if strings.Contains(err.Error(), "not found") { return name, nil } return "", err } } return "", types.InternalErrorf("could not generate interface name") } // ElectInterfaceAddresses looks for an interface on the OS with the // specified name and returns returns all its IPv4 and IPv6 addresses in CIDR notation. // If a failure in retrieving the addresses or no IPv4 address is found, an error is returned. // If the interface does not exist, it chooses from a predefined // list the first IPv4 address which does not conflict with other // interfaces on the system. func ElectInterfaceAddresses(name string) ([]*net.IPNet, []*net.IPNet, error) { var v4Nets, v6Nets []*net.IPNet defer osl.InitOSContext()() link, _ := ns.NlHandle().LinkByName(name) if link != nil { v4addr, err := ns.NlHandle().AddrList(link, netlink.FAMILY_V4) if err != nil { return nil, nil, err } v6addr, err := ns.NlHandle().AddrList(link, netlink.FAMILY_V6) if err != nil { return nil, nil, err } for _, nlAddr := range v4addr { v4Nets = append(v4Nets, nlAddr.IPNet) } for _, nlAddr := range v6addr { v6Nets = append(v6Nets, nlAddr.IPNet) } } if link == nil || len(v4Nets) == 0 { // Choose from predefined local scope networks v4Net, err := FindAvailableNetwork(ipamutils.PredefinedLocalScopeDefaultNetworks) if err != nil { return nil, nil, errors.Wrapf(err, "PredefinedLocalScopeDefaultNetworks List: %+v", ipamutils.PredefinedLocalScopeDefaultNetworks) } v4Nets = append(v4Nets, v4Net) } return v4Nets, v6Nets, nil } // FindAvailableNetwork returns a network from the passed list which does not // overlap with existing interfaces in the system func FindAvailableNetwork(list []*net.IPNet) (*net.IPNet, error) { // We don't check for an error here, because we don't really care if we // can't read /etc/resolv.conf. So instead we skip the append if resolvConf // is nil. It either doesn't exist, or we can't read it for some reason. 
var nameservers []string if rc, err := resolvconf.Get(); err == nil { nameservers = resolvconf.GetNameserversAsCIDR(rc.Content) } for _, nw := range list { if err := CheckNameserverOverlaps(nameservers, nw); err == nil { if err := CheckRouteOverlaps(nw); err == nil { return nw, nil } } } return nil, fmt.Errorf("no available network") }
deepy
8207c05cfcff9853e3e65b2b87b8355d9f65e734
2bb21b85c2a6688b0538a0b482ae9eeee4d33a9a
I think adding it to `NetworkOverlaps()` risks surprising someone else glancing at the code: the networks are overlapping, so `NetworkOverlaps()` is performing exactly as expected, but during this specific comparison we only care about specific routes. So, without renaming it, I'd be reluctant to put it there. Though looking at `deleteInterfaceBySubnet()` from `ov_utils.go`, maybe it should be added; unfortunately I don't know anything about the overlay network, so I have some code and documentation reading to do before I dare comment on that. However, `checkOverlap()` from `ov_network.go`, which uses `NetworkOverlaps()`, should probably inherit this.
deepy
4,573
moby/moby
42,598
Only check if route overlaps routes with scope: LINK
closes https://github.com/moby/moby/issues/41525 partially addresses https://github.com/moby/moby/issues/33925 Signed-off-by: Alex Nordlund <[email protected]> This implements the solution mentioned in #33925, and while it doesn't close that specific issue (since it's technically two different issues), it does remove one of the issues which may lead people there, and it should close #41525. See https://github.com/moby/moby/issues/33925#issuecomment-702470693 for a great explanation. But in my case we are using a split VPN and we have routes set by our network team to make sure that docker works; unfortunately, the fact that they set the routes also makes docker think the network is currently in use and prevents us from using it in the `default-address-pools`. **- What I did** I added a `network.Scope == netlink.SCOPE_LINK` check to the route overlapping check. **- How I did it** **- How to verify it** **- Description for the changelog** When checking for overlapping routes on Linux, only consider ones where the scope is `LINK`. **- A picture of a cute animal (not mandatory but encouraged)** Here's the dog I live with, silently judging our VPN situation at a safe distance ![image](https://user-images.githubusercontent.com/905507/124491796-f38e3600-ddb3-11eb-915f-48cdeaed2dd8.png)
null
2021-07-05 15:11:36+00:00
2021-08-26 07:58:35+00:00
libnetwork/netutils/utils_linux.go
// +build linux // Network utility functions. package netutils import ( "fmt" "net" "strings" "github.com/docker/docker/libnetwork/ipamutils" "github.com/docker/docker/libnetwork/ns" "github.com/docker/docker/libnetwork/osl" "github.com/docker/docker/libnetwork/resolvconf" "github.com/docker/docker/libnetwork/types" "github.com/pkg/errors" "github.com/vishvananda/netlink" ) var ( networkGetRoutesFct func(netlink.Link, int) ([]netlink.Route, error) ) // CheckRouteOverlaps checks whether the passed network overlaps with any existing routes func CheckRouteOverlaps(toCheck *net.IPNet) error { if networkGetRoutesFct == nil { networkGetRoutesFct = ns.NlHandle().RouteList } networks, err := networkGetRoutesFct(nil, netlink.FAMILY_V4) if err != nil { return err } for _, network := range networks { if network.Dst != nil && NetworkOverlaps(toCheck, network.Dst) { return ErrNetworkOverlaps } } return nil } // GenerateIfaceName returns an interface name using the passed in // prefix and the length of random bytes. The api ensures that the // there are is no interface which exists with that name. func GenerateIfaceName(nlh *netlink.Handle, prefix string, len int) (string, error) { linkByName := netlink.LinkByName if nlh != nil { linkByName = nlh.LinkByName } for i := 0; i < 3; i++ { name, err := GenerateRandomName(prefix, len) if err != nil { continue } _, err = linkByName(name) if err != nil { if strings.Contains(err.Error(), "not found") { return name, nil } return "", err } } return "", types.InternalErrorf("could not generate interface name") } // ElectInterfaceAddresses looks for an interface on the OS with the // specified name and returns returns all its IPv4 and IPv6 addresses in CIDR notation. // If a failure in retrieving the addresses or no IPv4 address is found, an error is returned. // If the interface does not exist, it chooses from a predefined // list the first IPv4 address which does not conflict with other // interfaces on the system. func ElectInterfaceAddresses(name string) ([]*net.IPNet, []*net.IPNet, error) { var v4Nets, v6Nets []*net.IPNet defer osl.InitOSContext()() link, _ := ns.NlHandle().LinkByName(name) if link != nil { v4addr, err := ns.NlHandle().AddrList(link, netlink.FAMILY_V4) if err != nil { return nil, nil, err } v6addr, err := ns.NlHandle().AddrList(link, netlink.FAMILY_V6) if err != nil { return nil, nil, err } for _, nlAddr := range v4addr { v4Nets = append(v4Nets, nlAddr.IPNet) } for _, nlAddr := range v6addr { v6Nets = append(v6Nets, nlAddr.IPNet) } } if link == nil || len(v4Nets) == 0 { // Choose from predefined local scope networks v4Net, err := FindAvailableNetwork(ipamutils.PredefinedLocalScopeDefaultNetworks) if err != nil { return nil, nil, errors.Wrapf(err, "PredefinedLocalScopeDefaultNetworks List: %+v", ipamutils.PredefinedLocalScopeDefaultNetworks) } v4Nets = append(v4Nets, v4Net) } return v4Nets, v6Nets, nil } // FindAvailableNetwork returns a network from the passed list which does not // overlap with existing interfaces in the system func FindAvailableNetwork(list []*net.IPNet) (*net.IPNet, error) { // We don't check for an error here, because we don't really care if we // can't read /etc/resolv.conf. So instead we skip the append if resolvConf // is nil. It either doesn't exist, or we can't read it for some reason. 
var nameservers []string if rc, err := resolvconf.Get(); err == nil { nameservers = resolvconf.GetNameserversAsCIDR(rc.Content) } for _, nw := range list { if err := CheckNameserverOverlaps(nameservers, nw); err == nil { if err := CheckRouteOverlaps(nw); err == nil { return nw, nil } } } return nil, fmt.Errorf("no available network") }
// +build linux // Network utility functions. package netutils import ( "fmt" "net" "strings" "github.com/docker/docker/libnetwork/ipamutils" "github.com/docker/docker/libnetwork/ns" "github.com/docker/docker/libnetwork/osl" "github.com/docker/docker/libnetwork/resolvconf" "github.com/docker/docker/libnetwork/types" "github.com/pkg/errors" "github.com/vishvananda/netlink" ) var ( networkGetRoutesFct func(netlink.Link, int) ([]netlink.Route, error) ) // CheckRouteOverlaps checks whether the passed network overlaps with any existing routes func CheckRouteOverlaps(toCheck *net.IPNet) error { if networkGetRoutesFct == nil { networkGetRoutesFct = ns.NlHandle().RouteList } networks, err := networkGetRoutesFct(nil, netlink.FAMILY_V4) if err != nil { return err } for _, network := range networks { if network.Dst != nil && network.Scope == netlink.SCOPE_LINK && NetworkOverlaps(toCheck, network.Dst) { return ErrNetworkOverlaps } } return nil } // GenerateIfaceName returns an interface name using the passed in // prefix and the length of random bytes. The api ensures that the // there are is no interface which exists with that name. func GenerateIfaceName(nlh *netlink.Handle, prefix string, len int) (string, error) { linkByName := netlink.LinkByName if nlh != nil { linkByName = nlh.LinkByName } for i := 0; i < 3; i++ { name, err := GenerateRandomName(prefix, len) if err != nil { continue } _, err = linkByName(name) if err != nil { if strings.Contains(err.Error(), "not found") { return name, nil } return "", err } } return "", types.InternalErrorf("could not generate interface name") } // ElectInterfaceAddresses looks for an interface on the OS with the // specified name and returns returns all its IPv4 and IPv6 addresses in CIDR notation. // If a failure in retrieving the addresses or no IPv4 address is found, an error is returned. // If the interface does not exist, it chooses from a predefined // list the first IPv4 address which does not conflict with other // interfaces on the system. func ElectInterfaceAddresses(name string) ([]*net.IPNet, []*net.IPNet, error) { var v4Nets, v6Nets []*net.IPNet defer osl.InitOSContext()() link, _ := ns.NlHandle().LinkByName(name) if link != nil { v4addr, err := ns.NlHandle().AddrList(link, netlink.FAMILY_V4) if err != nil { return nil, nil, err } v6addr, err := ns.NlHandle().AddrList(link, netlink.FAMILY_V6) if err != nil { return nil, nil, err } for _, nlAddr := range v4addr { v4Nets = append(v4Nets, nlAddr.IPNet) } for _, nlAddr := range v6addr { v6Nets = append(v6Nets, nlAddr.IPNet) } } if link == nil || len(v4Nets) == 0 { // Choose from predefined local scope networks v4Net, err := FindAvailableNetwork(ipamutils.PredefinedLocalScopeDefaultNetworks) if err != nil { return nil, nil, errors.Wrapf(err, "PredefinedLocalScopeDefaultNetworks List: %+v", ipamutils.PredefinedLocalScopeDefaultNetworks) } v4Nets = append(v4Nets, v4Net) } return v4Nets, v6Nets, nil } // FindAvailableNetwork returns a network from the passed list which does not // overlap with existing interfaces in the system func FindAvailableNetwork(list []*net.IPNet) (*net.IPNet, error) { // We don't check for an error here, because we don't really care if we // can't read /etc/resolv.conf. So instead we skip the append if resolvConf // is nil. It either doesn't exist, or we can't read it for some reason. 
var nameservers []string if rc, err := resolvconf.Get(); err == nil { nameservers = resolvconf.GetNameserversAsCIDR(rc.Content) } for _, nw := range list { if err := CheckNameserverOverlaps(nameservers, nw); err == nil { if err := CheckRouteOverlaps(nw); err == nil { return nw, nil } } } return nil, fmt.Errorf("no available network") }
deepy
8207c05cfcff9853e3e65b2b87b8355d9f65e734
2bb21b85c2a6688b0538a0b482ae9eeee4d33a9a
Having had a closer look, I don't think it should be added to `NetworkOverlaps()`: this specific check is only being done in `CheckRouteOverlaps()`; in the other places where this is used, it's being used to literally compare networks. So for direct usages of this, all looks fine: - `deleteInterfaceBySubnet()` from `overlay` is checking subnets - `CheckNameserverOverlaps()` from `netutils` checks if a given IP is in a specific subnet - `CheckRouteOverlaps()` from `netutils` is the only one looking at routes, and that's being addressed in this PR. I've also pushed updated tests that verify the routes (checking both that overlap detection works and that routes with the wrong scope are not flagged as overlapping).
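A rough sketch of the kind of scope-aware test described in this comment (not the actual test added in the PR; the test name and addresses are made up). Because `networkGetRoutesFct` is a package-level variable in `netutils`, it can be stubbed with canned routes, so both scopes can be exercised without touching the host routing table:

```go
// Sketch of a scope-aware test for CheckRouteOverlaps; assumes it lives in
// the netutils package (e.g. utils_linux_test.go on a Linux build).
package netutils

import (
	"net"
	"testing"

	"github.com/vishvananda/netlink"
)

func TestCheckRouteOverlapsScopeSketch(t *testing.T) {
	_, linkDst, _ := net.ParseCIDR("10.0.2.0/24")
	_, globalDst, _ := net.ParseCIDR("10.0.3.0/24")

	// Stub the route lister: one link-scoped and one universe-scoped route.
	networkGetRoutesFct = func(netlink.Link, int) ([]netlink.Route, error) {
		return []netlink.Route{
			{Dst: linkDst, Scope: netlink.SCOPE_LINK},
			{Dst: globalDst, Scope: netlink.SCOPE_UNIVERSE},
		}, nil
	}
	defer func() { networkGetRoutesFct = nil }()

	// Overlap with the link-scoped route must still be reported...
	if _, nw, _ := net.ParseCIDR("10.0.2.0/24"); CheckRouteOverlaps(nw) == nil {
		t.Error("expected overlap with link-scoped route")
	}
	// ...while a universe-scoped route (e.g. pushed by a VPN) is ignored.
	if _, nw, _ := net.ParseCIDR("10.0.3.0/24"); CheckRouteOverlaps(nw) != nil {
		t.Error("did not expect overlap with universe-scoped route")
	}
}
```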
deepy
4,574
moby/moby
42,594
hack/test/unit: run `libnetwork` tests sequentially
**- What I did** Run `libnetwork` tests sequentially. See https://github.com/moby/moby/issues/42458#issuecomment-873216754 for an explanation. Closes #42458, maybe more. Note that many tests in `libnetwork/...` already depend on the assumption that they are run sequentially and do not "easily" support things like `-run` due to that fact. E.g. this test is a prime example of such behavior: https://github.com/moby/moby/blob/45b45ad65b6f4a76f2bd0f768ef6edb10ebcc0bc/libnetwork/iptables/iptables_test.go#L1-L327 I, personally, don't think that `-p=1` is the "correct" solution here. Although, to do this "right", I think we'd have to rewrite a large portion of the `libnetwork` tests, and I do not think that is worth it at this point. **- How I did it** Run `gotestsum` at most twice and pass `-p=1` when running unit tests in the `/libnetwork` namespace. **- How to verify it** E.g. `TESTDIRS='github.com/docker/docker/libnetwork/iptables github.com/docker/docker/libnetwork/drivers/bridge' make test-unit` should pass (and sometimes fails without this change). **- Description for the changelog** <!-- Write a short (one line) summary that describes the changes in this pull request for inclusion in the changelog: --> **- A picture of a cute animal (not mandatory but encouraged)**
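For illustration, the core of the approach in reduced form (a sketch under stated assumptions, not the actual `hack/test/unit` script shown further down in this record): split the package list and force the libnetwork packages to run one package at a time with `-p=1`:

```bash
#!/usr/bin/env bash
# Sketch only: separates libnetwork packages and runs them with -p=1.
set -eux -o pipefail

pkg_list=$(go list ./... | grep -vE '/vendor/|/integration')
base_pkgs=$(echo "${pkg_list}" | grep -v '/libnetwork' || :)
libnetwork_pkgs=$(echo "${pkg_list}" | grep '/libnetwork' || :)

# Regular packages: go test's default package-level parallelism is fine.
if [ -n "${base_pkgs}" ]; then
	go test ${base_pkgs}
fi

# libnetwork tests manipulate shared host state (e.g. iptables), so run
# their package test binaries sequentially.
if [ -n "${libnetwork_pkgs}" ]; then
	go test -p=1 ${libnetwork_pkgs}
fi
```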
null
2021-07-02 19:29:56+00:00
2021-08-03 16:52:02+00:00
hack/test/unit
#!/usr/bin/env bash # # Run unit tests and create report # # TESTFLAGS - add additional test flags. Ex: # # TESTFLAGS='-v -run TestBuild' hack/test/unit # # TESTDIRS - run tests for specified packages. Ex: # # TESTDIRS='./pkg/term' hack/test/unit # set -eux -o pipefail BUILDFLAGS=(-tags 'netgo seccomp libdm_no_deferred_remove') TESTFLAGS+=" -test.timeout=${TIMEOUT:-5m}" TESTDIRS="${TESTDIRS:-./...}" exclude_paths='/vendor/|/integration' pkg_list=$(go list $TESTDIRS | grep -vE "($exclude_paths)") echo "${pkg_list}" | grep --fixed-strings "libnetwork/drivers/bridge" \ && if ! type docker-proxy; then hack/make.sh binary-proxy install-proxy fi mkdir -p bundles gotestsum --format=standard-quiet --jsonfile=bundles/go-test-report.json --junitfile=bundles/junit-report.xml -- \ "${BUILDFLAGS[@]}" \ -cover \ -coverprofile=bundles/profile.out \ -covermode=atomic \ ${TESTFLAGS} \ ${pkg_list}
#!/usr/bin/env bash # # Run unit tests and create report # # TESTFLAGS - add additional test flags. Ex: # # TESTFLAGS='-v -run TestBuild' hack/test/unit # # TESTDIRS - run tests for specified packages. Ex: # # TESTDIRS='./pkg/term' hack/test/unit # set -eux -o pipefail BUILDFLAGS=(-tags 'netgo seccomp libdm_no_deferred_remove') TESTFLAGS+=" -test.timeout=${TIMEOUT:-5m}" TESTDIRS="${TESTDIRS:-./...}" exclude_paths='/vendor/|/integration' pkg_list=$(go list $TESTDIRS | grep -vE "($exclude_paths)") base_pkg_list=$(echo "${pkg_list}" | grep --fixed-strings -v "/libnetwork" || :) libnetwork_pkg_list=$(echo "${pkg_list}" | grep --fixed-strings "/libnetwork" || :) echo "${libnetwork_pkg_list}" | grep --fixed-strings "libnetwork/drivers/bridge" \ && if ! type docker-proxy; then hack/make.sh binary-proxy install-proxy fi mkdir -p bundles if [ -n "${base_pkg_list}" ]; then gotestsum --format=standard-quiet --jsonfile=bundles/go-test-report.json --junitfile=bundles/junit-report.xml -- \ "${BUILDFLAGS[@]}" \ -cover \ -coverprofile=bundles/profile.out \ -covermode=atomic \ ${TESTFLAGS} \ ${base_pkg_list} fi if [ -n "${libnetwork_pkg_list}" ]; then # libnetwork tests invoke iptables, and cannot be run in parallel. Execute # tests within /libnetwork with '-p=1' to run them sequentially. See # https://github.com/moby/moby/issues/42458#issuecomment-873216754 for details. gotestsum --format=standard-quiet --jsonfile=bundles/go-test-report-libnetwork.json --junitfile=bundles/junit-report-libnetwork.xml -- \ "${BUILDFLAGS[@]}" \ -cover \ -coverprofile=bundles/profile-libnetwork.out \ -covermode=atomic \ -p=1 \ ${TESTFLAGS} \ ${libnetwork_pkg_list} fi
rvolosatovs
0c88b0dc8293ac075b9ee34575cb003ceedd2f2b
52af46671691dfb76772cbf6bac0f688e464fb5d
See https://github.com/gotestyourself/gotestsum#custom-go-test-command for documentation
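For readers unfamiliar with that pattern, a small hedged sketch of what the linked gotestsum documentation describes: flags before the `--` separator configure gotestsum itself, and everything after it is passed straight to `go test`, which is how the script injects `-p=1` for the libnetwork packages. The exact file name and package path below are illustrative, not taken from the script.

```bash
# Illustrative only: gotestsum flags come before "--", go test flags after it.
gotestsum --format=standard-quiet --junitfile=bundles/junit-report-libnetwork.xml -- \
	-p=1 \
	-timeout=5m \
	./libnetwork/...
```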
rvolosatovs
4,575
moby/moby
42,594
hack/test/unit: run `libnetwork` tests sequentially
**- What I did** Run `libnetwork` tests sequentially. See https://github.com/moby/moby/issues/42458#issuecomment-873216754 for explanation. Closes #42458 maybe more Note, that many tests in `libnetwork/...` already depend on the assumption that they are run sequentially and do not "easily" support things like `-run` due to that fact. E.g. this test is a prime example of such behavior: https://github.com/moby/moby/blob/45b45ad65b6f4a76f2bd0f768ef6edb10ebcc0bc/libnetwork/iptables/iptables_test.go#L1-L327 I, personally, don't think that `-p=1` is the "correct" solution here. Although, to do this "right" I think we'd have to rewrite a large portion of `libnetwork` tests and I do not think that is worth it at this point. **- How I did it** Run `gotestsum` at most twice and pass `-p=1` when running unit tests in `/libnetwork` namespace **- How to verify it** e.g. `TESTDIRS='github.com/docker/docker/libnetwork/iptables github.com/docker/docker/libnetwork/drivers/bridge' make test-unit` should pass (and sometimes fail without this change) **- Description for the changelog** <!-- Write a short (one line) summary that describes the changes in this pull request for inclusion in the changelog: --> **- A picture of a cute animal (not mandatory but encouraged)**
null
2021-07-02 19:29:56+00:00
2021-08-03 16:52:02+00:00
hack/test/unit
#!/usr/bin/env bash # # Run unit tests and create report # # TESTFLAGS - add additional test flags. Ex: # # TESTFLAGS='-v -run TestBuild' hack/test/unit # # TESTDIRS - run tests for specified packages. Ex: # # TESTDIRS='./pkg/term' hack/test/unit # set -eux -o pipefail BUILDFLAGS=(-tags 'netgo seccomp libdm_no_deferred_remove') TESTFLAGS+=" -test.timeout=${TIMEOUT:-5m}" TESTDIRS="${TESTDIRS:-./...}" exclude_paths='/vendor/|/integration' pkg_list=$(go list $TESTDIRS | grep -vE "($exclude_paths)") echo "${pkg_list}" | grep --fixed-strings "libnetwork/drivers/bridge" \ && if ! type docker-proxy; then hack/make.sh binary-proxy install-proxy fi mkdir -p bundles gotestsum --format=standard-quiet --jsonfile=bundles/go-test-report.json --junitfile=bundles/junit-report.xml -- \ "${BUILDFLAGS[@]}" \ -cover \ -coverprofile=bundles/profile.out \ -covermode=atomic \ ${TESTFLAGS} \ ${pkg_list}
#!/usr/bin/env bash # # Run unit tests and create report # # TESTFLAGS - add additional test flags. Ex: # # TESTFLAGS='-v -run TestBuild' hack/test/unit # # TESTDIRS - run tests for specified packages. Ex: # # TESTDIRS='./pkg/term' hack/test/unit # set -eux -o pipefail BUILDFLAGS=(-tags 'netgo seccomp libdm_no_deferred_remove') TESTFLAGS+=" -test.timeout=${TIMEOUT:-5m}" TESTDIRS="${TESTDIRS:-./...}" exclude_paths='/vendor/|/integration' pkg_list=$(go list $TESTDIRS | grep -vE "($exclude_paths)") base_pkg_list=$(echo "${pkg_list}" | grep --fixed-strings -v "/libnetwork" || :) libnetwork_pkg_list=$(echo "${pkg_list}" | grep --fixed-strings "/libnetwork" || :) echo "${libnetwork_pkg_list}" | grep --fixed-strings "libnetwork/drivers/bridge" \ && if ! type docker-proxy; then hack/make.sh binary-proxy install-proxy fi mkdir -p bundles if [ -n "${base_pkg_list}" ]; then gotestsum --format=standard-quiet --jsonfile=bundles/go-test-report.json --junitfile=bundles/junit-report.xml -- \ "${BUILDFLAGS[@]}" \ -cover \ -coverprofile=bundles/profile.out \ -covermode=atomic \ ${TESTFLAGS} \ ${base_pkg_list} fi if [ -n "${libnetwork_pkg_list}" ]; then # libnetwork tests invoke iptables, and cannot be run in parallel. Execute # tests within /libnetwork with '-p=1' to run them sequentially. See # https://github.com/moby/moby/issues/42458#issuecomment-873216754 for details. gotestsum --format=standard-quiet --jsonfile=bundles/go-test-report-libnetwork.json --junitfile=bundles/junit-report-libnetwork.xml -- \ "${BUILDFLAGS[@]}" \ -cover \ -coverprofile=bundles/profile-libnetwork.out \ -covermode=atomic \ -p=1 \ ${TESTFLAGS} \ ${libnetwork_pkg_list} fi
rvolosatovs
0c88b0dc8293ac075b9ee34575cb003ceedd2f2b
52af46671691dfb76772cbf6bac0f688e464fb5d
May need to check if the Jenkinsfile uploads the new files as well
thaJeztah
4,576
moby/moby
42,594
hack/test/unit: run `libnetwork` tests sequentially
**- What I did** Run `libnetwork` tests sequentially. See https://github.com/moby/moby/issues/42458#issuecomment-873216754 for explanation. Closes #42458 maybe more Note, that many tests in `libnetwork/...` already depend on the assumption that they are run sequentially and do not "easily" support things like `-run` due to that fact. E.g. this test is a prime example of such behavior: https://github.com/moby/moby/blob/45b45ad65b6f4a76f2bd0f768ef6edb10ebcc0bc/libnetwork/iptables/iptables_test.go#L1-L327 I, personally, don't think that `-p=1` is the "correct" solution here. Although, to do this "right" I think we'd have to rewrite a large portion of `libnetwork` tests and I do not think that is worth it at this point. **- How I did it** Run `gotestsum` at most twice and pass `-p=1` when running unit tests in `/libnetwork` namespace **- How to verify it** e.g. `TESTDIRS='github.com/docker/docker/libnetwork/iptables github.com/docker/docker/libnetwork/drivers/bridge' make test-unit` should pass (and sometimes fail without this change) **- Description for the changelog** <!-- Write a short (one line) summary that describes the changes in this pull request for inclusion in the changelog: --> **- A picture of a cute animal (not mandatory but encouraged)**
null
2021-07-02 19:29:56+00:00
2021-08-03 16:52:02+00:00
hack/test/unit
#!/usr/bin/env bash # # Run unit tests and create report # # TESTFLAGS - add additional test flags. Ex: # # TESTFLAGS='-v -run TestBuild' hack/test/unit # # TESTDIRS - run tests for specified packages. Ex: # # TESTDIRS='./pkg/term' hack/test/unit # set -eux -o pipefail BUILDFLAGS=(-tags 'netgo seccomp libdm_no_deferred_remove') TESTFLAGS+=" -test.timeout=${TIMEOUT:-5m}" TESTDIRS="${TESTDIRS:-./...}" exclude_paths='/vendor/|/integration' pkg_list=$(go list $TESTDIRS | grep -vE "($exclude_paths)") echo "${pkg_list}" | grep --fixed-strings "libnetwork/drivers/bridge" \ && if ! type docker-proxy; then hack/make.sh binary-proxy install-proxy fi mkdir -p bundles gotestsum --format=standard-quiet --jsonfile=bundles/go-test-report.json --junitfile=bundles/junit-report.xml -- \ "${BUILDFLAGS[@]}" \ -cover \ -coverprofile=bundles/profile.out \ -covermode=atomic \ ${TESTFLAGS} \ ${pkg_list}
#!/usr/bin/env bash # # Run unit tests and create report # # TESTFLAGS - add additional test flags. Ex: # # TESTFLAGS='-v -run TestBuild' hack/test/unit # # TESTDIRS - run tests for specified packages. Ex: # # TESTDIRS='./pkg/term' hack/test/unit # set -eux -o pipefail BUILDFLAGS=(-tags 'netgo seccomp libdm_no_deferred_remove') TESTFLAGS+=" -test.timeout=${TIMEOUT:-5m}" TESTDIRS="${TESTDIRS:-./...}" exclude_paths='/vendor/|/integration' pkg_list=$(go list $TESTDIRS | grep -vE "($exclude_paths)") base_pkg_list=$(echo "${pkg_list}" | grep --fixed-strings -v "/libnetwork" || :) libnetwork_pkg_list=$(echo "${pkg_list}" | grep --fixed-strings "/libnetwork" || :) echo "${libnetwork_pkg_list}" | grep --fixed-strings "libnetwork/drivers/bridge" \ && if ! type docker-proxy; then hack/make.sh binary-proxy install-proxy fi mkdir -p bundles if [ -n "${base_pkg_list}" ]; then gotestsum --format=standard-quiet --jsonfile=bundles/go-test-report.json --junitfile=bundles/junit-report.xml -- \ "${BUILDFLAGS[@]}" \ -cover \ -coverprofile=bundles/profile.out \ -covermode=atomic \ ${TESTFLAGS} \ ${base_pkg_list} fi if [ -n "${libnetwork_pkg_list}" ]; then # libnetwork tests invoke iptables, and cannot be run in parallel. Execute # tests within /libnetwork with '-p=1' to run them sequentially. See # https://github.com/moby/moby/issues/42458#issuecomment-873216754 for details. gotestsum --format=standard-quiet --jsonfile=bundles/go-test-report-libnetwork.json --junitfile=bundles/junit-report-libnetwork.xml -- \ "${BUILDFLAGS[@]}" \ -cover \ -coverprofile=bundles/profile-libnetwork.out \ -covermode=atomic \ -p=1 \ ${TESTFLAGS} \ ${libnetwork_pkg_list} fi
rvolosatovs
0c88b0dc8293ac075b9ee34575cb003ceedd2f2b
52af46671691dfb76772cbf6bac0f688e464fb5d
I don't have much experience with Jenkins, and the existing functionality related to this is not trivial; I will have to study the docs, but any pointers in the meantime would be very welcome!
rvolosatovs
4,577
moby/moby
42,594
hack/test/unit: run `libnetwork` tests sequentially
**- What I did** Run `libnetwork` tests sequentially. See https://github.com/moby/moby/issues/42458#issuecomment-873216754 for explanation. Closes #42458 maybe more Note, that many tests in `libnetwork/...` already depend on the assumption that they are run sequentially and do not "easily" support things like `-run` due to that fact. E.g. this test is a prime example of such behavior: https://github.com/moby/moby/blob/45b45ad65b6f4a76f2bd0f768ef6edb10ebcc0bc/libnetwork/iptables/iptables_test.go#L1-L327 I, personally, don't think that `-p=1` is the "correct" solution here. Although, to do this "right" I think we'd have to rewrite a large portion of `libnetwork` tests and I do not think that is worth it at this point. **- How I did it** Run `gotestsum` at most twice and pass `-p=1` when running unit tests in `/libnetwork` namespace **- How to verify it** e.g. `TESTDIRS='github.com/docker/docker/libnetwork/iptables github.com/docker/docker/libnetwork/drivers/bridge' make test-unit` should pass (and sometimes fail without this change) **- Description for the changelog** <!-- Write a short (one line) summary that describes the changes in this pull request for inclusion in the changelog: --> **- A picture of a cute animal (not mandatory but encouraged)**
null
2021-07-02 19:29:56+00:00
2021-08-03 16:52:02+00:00
hack/test/unit
#!/usr/bin/env bash # # Run unit tests and create report # # TESTFLAGS - add additional test flags. Ex: # # TESTFLAGS='-v -run TestBuild' hack/test/unit # # TESTDIRS - run tests for specified packages. Ex: # # TESTDIRS='./pkg/term' hack/test/unit # set -eux -o pipefail BUILDFLAGS=(-tags 'netgo seccomp libdm_no_deferred_remove') TESTFLAGS+=" -test.timeout=${TIMEOUT:-5m}" TESTDIRS="${TESTDIRS:-./...}" exclude_paths='/vendor/|/integration' pkg_list=$(go list $TESTDIRS | grep -vE "($exclude_paths)") echo "${pkg_list}" | grep --fixed-strings "libnetwork/drivers/bridge" \ && if ! type docker-proxy; then hack/make.sh binary-proxy install-proxy fi mkdir -p bundles gotestsum --format=standard-quiet --jsonfile=bundles/go-test-report.json --junitfile=bundles/junit-report.xml -- \ "${BUILDFLAGS[@]}" \ -cover \ -coverprofile=bundles/profile.out \ -covermode=atomic \ ${TESTFLAGS} \ ${pkg_list}
#!/usr/bin/env bash # # Run unit tests and create report # # TESTFLAGS - add additional test flags. Ex: # # TESTFLAGS='-v -run TestBuild' hack/test/unit # # TESTDIRS - run tests for specified packages. Ex: # # TESTDIRS='./pkg/term' hack/test/unit # set -eux -o pipefail BUILDFLAGS=(-tags 'netgo seccomp libdm_no_deferred_remove') TESTFLAGS+=" -test.timeout=${TIMEOUT:-5m}" TESTDIRS="${TESTDIRS:-./...}" exclude_paths='/vendor/|/integration' pkg_list=$(go list $TESTDIRS | grep -vE "($exclude_paths)") base_pkg_list=$(echo "${pkg_list}" | grep --fixed-strings -v "/libnetwork" || :) libnetwork_pkg_list=$(echo "${pkg_list}" | grep --fixed-strings "/libnetwork" || :) echo "${libnetwork_pkg_list}" | grep --fixed-strings "libnetwork/drivers/bridge" \ && if ! type docker-proxy; then hack/make.sh binary-proxy install-proxy fi mkdir -p bundles if [ -n "${base_pkg_list}" ]; then gotestsum --format=standard-quiet --jsonfile=bundles/go-test-report.json --junitfile=bundles/junit-report.xml -- \ "${BUILDFLAGS[@]}" \ -cover \ -coverprofile=bundles/profile.out \ -covermode=atomic \ ${TESTFLAGS} \ ${base_pkg_list} fi if [ -n "${libnetwork_pkg_list}" ]; then # libnetwork tests invoke iptables, and cannot be run in parallel. Execute # tests within /libnetwork with '-p=1' to run them sequentially. See # https://github.com/moby/moby/issues/42458#issuecomment-873216754 for details. gotestsum --format=standard-quiet --jsonfile=bundles/go-test-report-libnetwork.json --junitfile=bundles/junit-report-libnetwork.xml -- \ "${BUILDFLAGS[@]}" \ -cover \ -coverprofile=bundles/profile-libnetwork.out \ -covermode=atomic \ -p=1 \ ${TESTFLAGS} \ ${libnetwork_pkg_list} fi
rvolosatovs
0c88b0dc8293ac075b9ee34575cb003ceedd2f2b
52af46671691dfb76772cbf6bac0f688e464fb5d
I'm not great at Jenkinsfiles either (mostly "learn by example", and a lot of Google searching) 🤣 The essential bit is that each of the stages has a step that uploads the results; we probably need to either add another one of those (for the stages that run the libnetwork tests), or perhaps make it use a wildcard; see, e.g., https://github.com/moby/moby/blob/master/Jenkinsfile#L202
thaJeztah
4,578
moby/moby
42,594
hack/test/unit: run `libnetwork` tests sequentially
**- What I did** Run `libnetwork` tests sequentially. See https://github.com/moby/moby/issues/42458#issuecomment-873216754 for explanation. Closes #42458 maybe more Note, that many tests in `libnetwork/...` already depend on the assumption that they are run sequentially and do not "easily" support things like `-run` due to that fact. E.g. this test is a prime example of such behavior: https://github.com/moby/moby/blob/45b45ad65b6f4a76f2bd0f768ef6edb10ebcc0bc/libnetwork/iptables/iptables_test.go#L1-L327 I, personally, don't think that `-p=1` is the "correct" solution here. Although, to do this "right" I think we'd have to rewrite a large portion of `libnetwork` tests and I do not think that is worth it at this point. **- How I did it** Run `gotestsum` at most twice and pass `-p=1` when running unit tests in `/libnetwork` namespace **- How to verify it** e.g. `TESTDIRS='github.com/docker/docker/libnetwork/iptables github.com/docker/docker/libnetwork/drivers/bridge' make test-unit` should pass (and sometimes fail without this change) **- Description for the changelog** <!-- Write a short (one line) summary that describes the changes in this pull request for inclusion in the changelog: --> **- A picture of a cute animal (not mandatory but encouraged)**
null
2021-07-02 19:29:56+00:00
2021-08-03 16:52:02+00:00
hack/test/unit
#!/usr/bin/env bash # # Run unit tests and create report # # TESTFLAGS - add additional test flags. Ex: # # TESTFLAGS='-v -run TestBuild' hack/test/unit # # TESTDIRS - run tests for specified packages. Ex: # # TESTDIRS='./pkg/term' hack/test/unit # set -eux -o pipefail BUILDFLAGS=(-tags 'netgo seccomp libdm_no_deferred_remove') TESTFLAGS+=" -test.timeout=${TIMEOUT:-5m}" TESTDIRS="${TESTDIRS:-./...}" exclude_paths='/vendor/|/integration' pkg_list=$(go list $TESTDIRS | grep -vE "($exclude_paths)") echo "${pkg_list}" | grep --fixed-strings "libnetwork/drivers/bridge" \ && if ! type docker-proxy; then hack/make.sh binary-proxy install-proxy fi mkdir -p bundles gotestsum --format=standard-quiet --jsonfile=bundles/go-test-report.json --junitfile=bundles/junit-report.xml -- \ "${BUILDFLAGS[@]}" \ -cover \ -coverprofile=bundles/profile.out \ -covermode=atomic \ ${TESTFLAGS} \ ${pkg_list}
#!/usr/bin/env bash # # Run unit tests and create report # # TESTFLAGS - add additional test flags. Ex: # # TESTFLAGS='-v -run TestBuild' hack/test/unit # # TESTDIRS - run tests for specified packages. Ex: # # TESTDIRS='./pkg/term' hack/test/unit # set -eux -o pipefail BUILDFLAGS=(-tags 'netgo seccomp libdm_no_deferred_remove') TESTFLAGS+=" -test.timeout=${TIMEOUT:-5m}" TESTDIRS="${TESTDIRS:-./...}" exclude_paths='/vendor/|/integration' pkg_list=$(go list $TESTDIRS | grep -vE "($exclude_paths)") base_pkg_list=$(echo "${pkg_list}" | grep --fixed-strings -v "/libnetwork" || :) libnetwork_pkg_list=$(echo "${pkg_list}" | grep --fixed-strings "/libnetwork" || :) echo "${libnetwork_pkg_list}" | grep --fixed-strings "libnetwork/drivers/bridge" \ && if ! type docker-proxy; then hack/make.sh binary-proxy install-proxy fi mkdir -p bundles if [ -n "${base_pkg_list}" ]; then gotestsum --format=standard-quiet --jsonfile=bundles/go-test-report.json --junitfile=bundles/junit-report.xml -- \ "${BUILDFLAGS[@]}" \ -cover \ -coverprofile=bundles/profile.out \ -covermode=atomic \ ${TESTFLAGS} \ ${base_pkg_list} fi if [ -n "${libnetwork_pkg_list}" ]; then # libnetwork tests invoke iptables, and cannot be run in parallel. Execute # tests within /libnetwork with '-p=1' to run them sequentially. See # https://github.com/moby/moby/issues/42458#issuecomment-873216754 for details. gotestsum --format=standard-quiet --jsonfile=bundles/go-test-report-libnetwork.json --junitfile=bundles/junit-report-libnetwork.xml -- \ "${BUILDFLAGS[@]}" \ -cover \ -coverprofile=bundles/profile-libnetwork.out \ -covermode=atomic \ -p=1 \ ${TESTFLAGS} \ ${libnetwork_pkg_list} fi
rvolosatovs
0c88b0dc8293ac075b9ee34575cb003ceedd2f2b
52af46671691dfb76772cbf6bac0f688e464fb5d
Looks like the file switched from using tabs to spaces for indentation; can you run `shfmt` to format it?

```bash
shfmt -bn -ci -sr -w hack/test/unit
```

Options taken from https://github.com/moby/moby/blob/12f1b3ce43fe4aea5a41750bcc20f2a7dd67dbfc/hack/validate/shfmt#L4-L11
thaJeztah
4,579
moby/moby
42,594
hack/test/unit: run `libnetwork` tests sequentially
**- What I did** Run `libnetwork` tests sequentially. See https://github.com/moby/moby/issues/42458#issuecomment-873216754 for explanation. Closes #42458 maybe more Note, that many tests in `libnetwork/...` already depend on the assumption that they are run sequentially and do not "easily" support things like `-run` due to that fact. E.g. this test is a prime example of such behavior: https://github.com/moby/moby/blob/45b45ad65b6f4a76f2bd0f768ef6edb10ebcc0bc/libnetwork/iptables/iptables_test.go#L1-L327 I, personally, don't think that `-p=1` is the "correct" solution here. Although, to do this "right" I think we'd have to rewrite a large portion of `libnetwork` tests and I do not think that is worth it at this point. **- How I did it** Run `gotestsum` at most twice and pass `-p=1` when running unit tests in `/libnetwork` namespace **- How to verify it** e.g. `TESTDIRS='github.com/docker/docker/libnetwork/iptables github.com/docker/docker/libnetwork/drivers/bridge' make test-unit` should pass (and sometimes fail without this change) **- Description for the changelog** <!-- Write a short (one line) summary that describes the changes in this pull request for inclusion in the changelog: --> **- A picture of a cute animal (not mandatory but encouraged)**
null
2021-07-02 19:29:56+00:00
2021-08-03 16:52:02+00:00
hack/test/unit
#!/usr/bin/env bash # # Run unit tests and create report # # TESTFLAGS - add additional test flags. Ex: # # TESTFLAGS='-v -run TestBuild' hack/test/unit # # TESTDIRS - run tests for specified packages. Ex: # # TESTDIRS='./pkg/term' hack/test/unit # set -eux -o pipefail BUILDFLAGS=(-tags 'netgo seccomp libdm_no_deferred_remove') TESTFLAGS+=" -test.timeout=${TIMEOUT:-5m}" TESTDIRS="${TESTDIRS:-./...}" exclude_paths='/vendor/|/integration' pkg_list=$(go list $TESTDIRS | grep -vE "($exclude_paths)") echo "${pkg_list}" | grep --fixed-strings "libnetwork/drivers/bridge" \ && if ! type docker-proxy; then hack/make.sh binary-proxy install-proxy fi mkdir -p bundles gotestsum --format=standard-quiet --jsonfile=bundles/go-test-report.json --junitfile=bundles/junit-report.xml -- \ "${BUILDFLAGS[@]}" \ -cover \ -coverprofile=bundles/profile.out \ -covermode=atomic \ ${TESTFLAGS} \ ${pkg_list}
#!/usr/bin/env bash # # Run unit tests and create report # # TESTFLAGS - add additional test flags. Ex: # # TESTFLAGS='-v -run TestBuild' hack/test/unit # # TESTDIRS - run tests for specified packages. Ex: # # TESTDIRS='./pkg/term' hack/test/unit # set -eux -o pipefail BUILDFLAGS=(-tags 'netgo seccomp libdm_no_deferred_remove') TESTFLAGS+=" -test.timeout=${TIMEOUT:-5m}" TESTDIRS="${TESTDIRS:-./...}" exclude_paths='/vendor/|/integration' pkg_list=$(go list $TESTDIRS | grep -vE "($exclude_paths)") base_pkg_list=$(echo "${pkg_list}" | grep --fixed-strings -v "/libnetwork" || :) libnetwork_pkg_list=$(echo "${pkg_list}" | grep --fixed-strings "/libnetwork" || :) echo "${libnetwork_pkg_list}" | grep --fixed-strings "libnetwork/drivers/bridge" \ && if ! type docker-proxy; then hack/make.sh binary-proxy install-proxy fi mkdir -p bundles if [ -n "${base_pkg_list}" ]; then gotestsum --format=standard-quiet --jsonfile=bundles/go-test-report.json --junitfile=bundles/junit-report.xml -- \ "${BUILDFLAGS[@]}" \ -cover \ -coverprofile=bundles/profile.out \ -covermode=atomic \ ${TESTFLAGS} \ ${base_pkg_list} fi if [ -n "${libnetwork_pkg_list}" ]; then # libnetwork tests invoke iptables, and cannot be run in parallel. Execute # tests within /libnetwork with '-p=1' to run them sequentially. See # https://github.com/moby/moby/issues/42458#issuecomment-873216754 for details. gotestsum --format=standard-quiet --jsonfile=bundles/go-test-report-libnetwork.json --junitfile=bundles/junit-report-libnetwork.xml -- \ "${BUILDFLAGS[@]}" \ -cover \ -coverprofile=bundles/profile-libnetwork.out \ -covermode=atomic \ -p=1 \ ${TESTFLAGS} \ ${libnetwork_pkg_list} fi
rvolosatovs
0c88b0dc8293ac075b9ee34575cb003ceedd2f2b
52af46671691dfb76772cbf6bac0f688e464fb5d
I tried something; please confirm that it makes sense, since I am not sure how to test this.
rvolosatovs
4,580
moby/moby
42,594
hack/test/unit: run `libnetwork` tests sequentially
**- What I did** Run `libnetwork` tests sequentially. See https://github.com/moby/moby/issues/42458#issuecomment-873216754 for explanation. Closes #42458 maybe more Note, that many tests in `libnetwork/...` already depend on the assumption that they are run sequentially and do not "easily" support things like `-run` due to that fact. E.g. this test is a prime example of such behavior: https://github.com/moby/moby/blob/45b45ad65b6f4a76f2bd0f768ef6edb10ebcc0bc/libnetwork/iptables/iptables_test.go#L1-L327 I, personally, don't think that `-p=1` is the "correct" solution here. Although, to do this "right" I think we'd have to rewrite a large portion of `libnetwork` tests and I do not think that is worth it at this point. **- How I did it** Run `gotestsum` at most twice and pass `-p=1` when running unit tests in `/libnetwork` namespace **- How to verify it** e.g. `TESTDIRS='github.com/docker/docker/libnetwork/iptables github.com/docker/docker/libnetwork/drivers/bridge' make test-unit` should pass (and sometimes fail without this change) **- Description for the changelog** <!-- Write a short (one line) summary that describes the changes in this pull request for inclusion in the changelog: --> **- A picture of a cute animal (not mandatory but encouraged)**
null
2021-07-02 19:29:56+00:00
2021-08-03 16:52:02+00:00
hack/test/unit
#!/usr/bin/env bash # # Run unit tests and create report # # TESTFLAGS - add additional test flags. Ex: # # TESTFLAGS='-v -run TestBuild' hack/test/unit # # TESTDIRS - run tests for specified packages. Ex: # # TESTDIRS='./pkg/term' hack/test/unit # set -eux -o pipefail BUILDFLAGS=(-tags 'netgo seccomp libdm_no_deferred_remove') TESTFLAGS+=" -test.timeout=${TIMEOUT:-5m}" TESTDIRS="${TESTDIRS:-./...}" exclude_paths='/vendor/|/integration' pkg_list=$(go list $TESTDIRS | grep -vE "($exclude_paths)") echo "${pkg_list}" | grep --fixed-strings "libnetwork/drivers/bridge" \ && if ! type docker-proxy; then hack/make.sh binary-proxy install-proxy fi mkdir -p bundles gotestsum --format=standard-quiet --jsonfile=bundles/go-test-report.json --junitfile=bundles/junit-report.xml -- \ "${BUILDFLAGS[@]}" \ -cover \ -coverprofile=bundles/profile.out \ -covermode=atomic \ ${TESTFLAGS} \ ${pkg_list}
#!/usr/bin/env bash # # Run unit tests and create report # # TESTFLAGS - add additional test flags. Ex: # # TESTFLAGS='-v -run TestBuild' hack/test/unit # # TESTDIRS - run tests for specified packages. Ex: # # TESTDIRS='./pkg/term' hack/test/unit # set -eux -o pipefail BUILDFLAGS=(-tags 'netgo seccomp libdm_no_deferred_remove') TESTFLAGS+=" -test.timeout=${TIMEOUT:-5m}" TESTDIRS="${TESTDIRS:-./...}" exclude_paths='/vendor/|/integration' pkg_list=$(go list $TESTDIRS | grep -vE "($exclude_paths)") base_pkg_list=$(echo "${pkg_list}" | grep --fixed-strings -v "/libnetwork" || :) libnetwork_pkg_list=$(echo "${pkg_list}" | grep --fixed-strings "/libnetwork" || :) echo "${libnetwork_pkg_list}" | grep --fixed-strings "libnetwork/drivers/bridge" \ && if ! type docker-proxy; then hack/make.sh binary-proxy install-proxy fi mkdir -p bundles if [ -n "${base_pkg_list}" ]; then gotestsum --format=standard-quiet --jsonfile=bundles/go-test-report.json --junitfile=bundles/junit-report.xml -- \ "${BUILDFLAGS[@]}" \ -cover \ -coverprofile=bundles/profile.out \ -covermode=atomic \ ${TESTFLAGS} \ ${base_pkg_list} fi if [ -n "${libnetwork_pkg_list}" ]; then # libnetwork tests invoke iptables, and cannot be run in parallel. Execute # tests within /libnetwork with '-p=1' to run them sequentially. See # https://github.com/moby/moby/issues/42458#issuecomment-873216754 for details. gotestsum --format=standard-quiet --jsonfile=bundles/go-test-report-libnetwork.json --junitfile=bundles/junit-report-libnetwork.xml -- \ "${BUILDFLAGS[@]}" \ -cover \ -coverprofile=bundles/profile-libnetwork.out \ -covermode=atomic \ -p=1 \ ${TESTFLAGS} \ ${libnetwork_pkg_list} fi
rvolosatovs
0c88b0dc8293ac075b9ee34575cb003ceedd2f2b
52af46671691dfb76772cbf6bac0f688e464fb5d
Perhaps it would be good to add a comment in this branch/block to describe how it differs from the other block (`-p 1`).
thaJeztah
4,581
moby/moby
42,594
hack/test/unit: run `libnetwork` tests sequentially
**- What I did** Run `libnetwork` tests sequentially. See https://github.com/moby/moby/issues/42458#issuecomment-873216754 for explanation. Closes #42458 maybe more Note, that many tests in `libnetwork/...` already depend on the assumption that they are run sequentially and do not "easily" support things like `-run` due to that fact. E.g. this test is a prime example of such behavior: https://github.com/moby/moby/blob/45b45ad65b6f4a76f2bd0f768ef6edb10ebcc0bc/libnetwork/iptables/iptables_test.go#L1-L327 I, personally, don't think that `-p=1` is the "correct" solution here. Although, to do this "right" I think we'd have to rewrite a large portion of `libnetwork` tests and I do not think that is worth it at this point. **- How I did it** Run `gotestsum` at most twice and pass `-p=1` when running unit tests in `/libnetwork` namespace **- How to verify it** e.g. `TESTDIRS='github.com/docker/docker/libnetwork/iptables github.com/docker/docker/libnetwork/drivers/bridge' make test-unit` should pass (and sometimes fail without this change) **- Description for the changelog** <!-- Write a short (one line) summary that describes the changes in this pull request for inclusion in the changelog: --> **- A picture of a cute animal (not mandatory but encouraged)**
null
2021-07-02 19:29:56+00:00
2021-08-03 16:52:02+00:00
hack/test/unit
#!/usr/bin/env bash # # Run unit tests and create report # # TESTFLAGS - add additional test flags. Ex: # # TESTFLAGS='-v -run TestBuild' hack/test/unit # # TESTDIRS - run tests for specified packages. Ex: # # TESTDIRS='./pkg/term' hack/test/unit # set -eux -o pipefail BUILDFLAGS=(-tags 'netgo seccomp libdm_no_deferred_remove') TESTFLAGS+=" -test.timeout=${TIMEOUT:-5m}" TESTDIRS="${TESTDIRS:-./...}" exclude_paths='/vendor/|/integration' pkg_list=$(go list $TESTDIRS | grep -vE "($exclude_paths)") echo "${pkg_list}" | grep --fixed-strings "libnetwork/drivers/bridge" \ && if ! type docker-proxy; then hack/make.sh binary-proxy install-proxy fi mkdir -p bundles gotestsum --format=standard-quiet --jsonfile=bundles/go-test-report.json --junitfile=bundles/junit-report.xml -- \ "${BUILDFLAGS[@]}" \ -cover \ -coverprofile=bundles/profile.out \ -covermode=atomic \ ${TESTFLAGS} \ ${pkg_list}
#!/usr/bin/env bash # # Run unit tests and create report # # TESTFLAGS - add additional test flags. Ex: # # TESTFLAGS='-v -run TestBuild' hack/test/unit # # TESTDIRS - run tests for specified packages. Ex: # # TESTDIRS='./pkg/term' hack/test/unit # set -eux -o pipefail BUILDFLAGS=(-tags 'netgo seccomp libdm_no_deferred_remove') TESTFLAGS+=" -test.timeout=${TIMEOUT:-5m}" TESTDIRS="${TESTDIRS:-./...}" exclude_paths='/vendor/|/integration' pkg_list=$(go list $TESTDIRS | grep -vE "($exclude_paths)") base_pkg_list=$(echo "${pkg_list}" | grep --fixed-strings -v "/libnetwork" || :) libnetwork_pkg_list=$(echo "${pkg_list}" | grep --fixed-strings "/libnetwork" || :) echo "${libnetwork_pkg_list}" | grep --fixed-strings "libnetwork/drivers/bridge" \ && if ! type docker-proxy; then hack/make.sh binary-proxy install-proxy fi mkdir -p bundles if [ -n "${base_pkg_list}" ]; then gotestsum --format=standard-quiet --jsonfile=bundles/go-test-report.json --junitfile=bundles/junit-report.xml -- \ "${BUILDFLAGS[@]}" \ -cover \ -coverprofile=bundles/profile.out \ -covermode=atomic \ ${TESTFLAGS} \ ${base_pkg_list} fi if [ -n "${libnetwork_pkg_list}" ]; then # libnetwork tests invoke iptables, and cannot be run in parallel. Execute # tests within /libnetwork with '-p=1' to run them sequentially. See # https://github.com/moby/moby/issues/42458#issuecomment-873216754 for details. gotestsum --format=standard-quiet --jsonfile=bundles/go-test-report-libnetwork.json --junitfile=bundles/junit-report-libnetwork.xml -- \ "${BUILDFLAGS[@]}" \ -cover \ -coverprofile=bundles/profile-libnetwork.out \ -covermode=atomic \ -p=1 \ ${TESTFLAGS} \ ${libnetwork_pkg_list} fi
rvolosatovs
0c88b0dc8293ac075b9ee34575cb003ceedd2f2b
52af46671691dfb76772cbf6bac0f688e464fb5d
And why :)
cpuguy83
4,582
moby/moby
42,594
hack/test/unit: run `libnetwork` tests sequentially
**- What I did** Run `libnetwork` tests sequentially. See https://github.com/moby/moby/issues/42458#issuecomment-873216754 for explanation. Closes #42458 maybe more Note, that many tests in `libnetwork/...` already depend on the assumption that they are run sequentially and do not "easily" support things like `-run` due to that fact. E.g. this test is a prime example of such behavior: https://github.com/moby/moby/blob/45b45ad65b6f4a76f2bd0f768ef6edb10ebcc0bc/libnetwork/iptables/iptables_test.go#L1-L327 I, personally, don't think that `-p=1` is the "correct" solution here. Although, to do this "right" I think we'd have to rewrite a large portion of `libnetwork` tests and I do not think that is worth it at this point. **- How I did it** Run `gotestsum` at most twice and pass `-p=1` when running unit tests in `/libnetwork` namespace **- How to verify it** e.g. `TESTDIRS='github.com/docker/docker/libnetwork/iptables github.com/docker/docker/libnetwork/drivers/bridge' make test-unit` should pass (and sometimes fail without this change) **- Description for the changelog** <!-- Write a short (one line) summary that describes the changes in this pull request for inclusion in the changelog: --> **- A picture of a cute animal (not mandatory but encouraged)**
null
2021-07-02 19:29:56+00:00
2021-08-03 16:52:02+00:00
hack/test/unit
#!/usr/bin/env bash # # Run unit tests and create report # # TESTFLAGS - add additional test flags. Ex: # # TESTFLAGS='-v -run TestBuild' hack/test/unit # # TESTDIRS - run tests for specified packages. Ex: # # TESTDIRS='./pkg/term' hack/test/unit # set -eux -o pipefail BUILDFLAGS=(-tags 'netgo seccomp libdm_no_deferred_remove') TESTFLAGS+=" -test.timeout=${TIMEOUT:-5m}" TESTDIRS="${TESTDIRS:-./...}" exclude_paths='/vendor/|/integration' pkg_list=$(go list $TESTDIRS | grep -vE "($exclude_paths)") echo "${pkg_list}" | grep --fixed-strings "libnetwork/drivers/bridge" \ && if ! type docker-proxy; then hack/make.sh binary-proxy install-proxy fi mkdir -p bundles gotestsum --format=standard-quiet --jsonfile=bundles/go-test-report.json --junitfile=bundles/junit-report.xml -- \ "${BUILDFLAGS[@]}" \ -cover \ -coverprofile=bundles/profile.out \ -covermode=atomic \ ${TESTFLAGS} \ ${pkg_list}
#!/usr/bin/env bash # # Run unit tests and create report # # TESTFLAGS - add additional test flags. Ex: # # TESTFLAGS='-v -run TestBuild' hack/test/unit # # TESTDIRS - run tests for specified packages. Ex: # # TESTDIRS='./pkg/term' hack/test/unit # set -eux -o pipefail BUILDFLAGS=(-tags 'netgo seccomp libdm_no_deferred_remove') TESTFLAGS+=" -test.timeout=${TIMEOUT:-5m}" TESTDIRS="${TESTDIRS:-./...}" exclude_paths='/vendor/|/integration' pkg_list=$(go list $TESTDIRS | grep -vE "($exclude_paths)") base_pkg_list=$(echo "${pkg_list}" | grep --fixed-strings -v "/libnetwork" || :) libnetwork_pkg_list=$(echo "${pkg_list}" | grep --fixed-strings "/libnetwork" || :) echo "${libnetwork_pkg_list}" | grep --fixed-strings "libnetwork/drivers/bridge" \ && if ! type docker-proxy; then hack/make.sh binary-proxy install-proxy fi mkdir -p bundles if [ -n "${base_pkg_list}" ]; then gotestsum --format=standard-quiet --jsonfile=bundles/go-test-report.json --junitfile=bundles/junit-report.xml -- \ "${BUILDFLAGS[@]}" \ -cover \ -coverprofile=bundles/profile.out \ -covermode=atomic \ ${TESTFLAGS} \ ${base_pkg_list} fi if [ -n "${libnetwork_pkg_list}" ]; then # libnetwork tests invoke iptables, and cannot be run in parallel. Execute # tests within /libnetwork with '-p=1' to run them sequentially. See # https://github.com/moby/moby/issues/42458#issuecomment-873216754 for details. gotestsum --format=standard-quiet --jsonfile=bundles/go-test-report-libnetwork.json --junitfile=bundles/junit-report-libnetwork.xml -- \ "${BUILDFLAGS[@]}" \ -cover \ -coverprofile=bundles/profile-libnetwork.out \ -covermode=atomic \ -p=1 \ ${TESTFLAGS} \ ${libnetwork_pkg_list} fi
rvolosatovs
0c88b0dc8293ac075b9ee34575cb003ceedd2f2b
52af46671691dfb76772cbf6bac0f688e464fb5d
did it in https://github.com/moby/moby/pull/42594/commits/02d3cedd8e3cfbde83014f0b0928d525ba857b8f
rvolosatovs
4,583
moby/moby
42,594
hack/test/unit: run `libnetwork` tests sequentially
**- What I did** Run `libnetwork` tests sequentially. See https://github.com/moby/moby/issues/42458#issuecomment-873216754 for explanation. Closes #42458 maybe more Note, that many tests in `libnetwork/...` already depend on the assumption that they are run sequentially and do not "easily" support things like `-run` due to that fact. E.g. this test is a prime example of such behavior: https://github.com/moby/moby/blob/45b45ad65b6f4a76f2bd0f768ef6edb10ebcc0bc/libnetwork/iptables/iptables_test.go#L1-L327 I, personally, don't think that `-p=1` is the "correct" solution here. Although, to do this "right" I think we'd have to rewrite a large portion of `libnetwork` tests and I do not think that is worth it at this point. **- How I did it** Run `gotestsum` at most twice and pass `-p=1` when running unit tests in `/libnetwork` namespace **- How to verify it** e.g. `TESTDIRS='github.com/docker/docker/libnetwork/iptables github.com/docker/docker/libnetwork/drivers/bridge' make test-unit` should pass (and sometimes fail without this change) **- Description for the changelog** <!-- Write a short (one line) summary that describes the changes in this pull request for inclusion in the changelog: --> **- A picture of a cute animal (not mandatory but encouraged)**
null
2021-07-02 19:29:56+00:00
2021-08-03 16:52:02+00:00
hack/test/unit
#!/usr/bin/env bash # # Run unit tests and create report # # TESTFLAGS - add additional test flags. Ex: # # TESTFLAGS='-v -run TestBuild' hack/test/unit # # TESTDIRS - run tests for specified packages. Ex: # # TESTDIRS='./pkg/term' hack/test/unit # set -eux -o pipefail BUILDFLAGS=(-tags 'netgo seccomp libdm_no_deferred_remove') TESTFLAGS+=" -test.timeout=${TIMEOUT:-5m}" TESTDIRS="${TESTDIRS:-./...}" exclude_paths='/vendor/|/integration' pkg_list=$(go list $TESTDIRS | grep -vE "($exclude_paths)") echo "${pkg_list}" | grep --fixed-strings "libnetwork/drivers/bridge" \ && if ! type docker-proxy; then hack/make.sh binary-proxy install-proxy fi mkdir -p bundles gotestsum --format=standard-quiet --jsonfile=bundles/go-test-report.json --junitfile=bundles/junit-report.xml -- \ "${BUILDFLAGS[@]}" \ -cover \ -coverprofile=bundles/profile.out \ -covermode=atomic \ ${TESTFLAGS} \ ${pkg_list}
#!/usr/bin/env bash # # Run unit tests and create report # # TESTFLAGS - add additional test flags. Ex: # # TESTFLAGS='-v -run TestBuild' hack/test/unit # # TESTDIRS - run tests for specified packages. Ex: # # TESTDIRS='./pkg/term' hack/test/unit # set -eux -o pipefail BUILDFLAGS=(-tags 'netgo seccomp libdm_no_deferred_remove') TESTFLAGS+=" -test.timeout=${TIMEOUT:-5m}" TESTDIRS="${TESTDIRS:-./...}" exclude_paths='/vendor/|/integration' pkg_list=$(go list $TESTDIRS | grep -vE "($exclude_paths)") base_pkg_list=$(echo "${pkg_list}" | grep --fixed-strings -v "/libnetwork" || :) libnetwork_pkg_list=$(echo "${pkg_list}" | grep --fixed-strings "/libnetwork" || :) echo "${libnetwork_pkg_list}" | grep --fixed-strings "libnetwork/drivers/bridge" \ && if ! type docker-proxy; then hack/make.sh binary-proxy install-proxy fi mkdir -p bundles if [ -n "${base_pkg_list}" ]; then gotestsum --format=standard-quiet --jsonfile=bundles/go-test-report.json --junitfile=bundles/junit-report.xml -- \ "${BUILDFLAGS[@]}" \ -cover \ -coverprofile=bundles/profile.out \ -covermode=atomic \ ${TESTFLAGS} \ ${base_pkg_list} fi if [ -n "${libnetwork_pkg_list}" ]; then # libnetwork tests invoke iptables, and cannot be run in parallel. Execute # tests within /libnetwork with '-p=1' to run them sequentially. See # https://github.com/moby/moby/issues/42458#issuecomment-873216754 for details. gotestsum --format=standard-quiet --jsonfile=bundles/go-test-report-libnetwork.json --junitfile=bundles/junit-report-libnetwork.xml -- \ "${BUILDFLAGS[@]}" \ -cover \ -coverprofile=bundles/profile-libnetwork.out \ -covermode=atomic \ -p=1 \ ${TESTFLAGS} \ ${libnetwork_pkg_list} fi
rvolosatovs
0c88b0dc8293ac075b9ee34575cb003ceedd2f2b
52af46671691dfb76772cbf6bac0f688e464fb5d
Hm... I'm a bit on the fence about the second commit abstracting things away into `run_test_unit`: as seen here, it still needs various parameters to be passed (test flags, prefix, package list), but now it requires me to look back and forth between this line and the function above to understand what each of those does. I think it's fine to keep the code inline here. It's not "pretty", but it reduces the cognitive overhead of having to look up what `run_test_unit` does and what parameters it takes.
thaJeztah
4,584
moby/moby
42,594
hack/test/unit: run `libnetwork` tests sequentially
**- What I did** Run `libnetwork` tests sequentially. See https://github.com/moby/moby/issues/42458#issuecomment-873216754 for explanation. Closes #42458 maybe more Note, that many tests in `libnetwork/...` already depend on the assumption that they are run sequentially and do not "easily" support things like `-run` due to that fact. E.g. this test is a prime example of such behavior: https://github.com/moby/moby/blob/45b45ad65b6f4a76f2bd0f768ef6edb10ebcc0bc/libnetwork/iptables/iptables_test.go#L1-L327 I, personally, don't think that `-p=1` is the "correct" solution here. Although, to do this "right" I think we'd have to rewrite a large portion of `libnetwork` tests and I do not think that is worth it at this point. **- How I did it** Run `gotestsum` at most twice and pass `-p=1` when running unit tests in `/libnetwork` namespace **- How to verify it** e.g. `TESTDIRS='github.com/docker/docker/libnetwork/iptables github.com/docker/docker/libnetwork/drivers/bridge' make test-unit` should pass (and sometimes fail without this change) **- Description for the changelog** <!-- Write a short (one line) summary that describes the changes in this pull request for inclusion in the changelog: --> **- A picture of a cute animal (not mandatory but encouraged)**
null
2021-07-02 19:29:56+00:00
2021-08-03 16:52:02+00:00
hack/test/unit
#!/usr/bin/env bash # # Run unit tests and create report # # TESTFLAGS - add additional test flags. Ex: # # TESTFLAGS='-v -run TestBuild' hack/test/unit # # TESTDIRS - run tests for specified packages. Ex: # # TESTDIRS='./pkg/term' hack/test/unit # set -eux -o pipefail BUILDFLAGS=(-tags 'netgo seccomp libdm_no_deferred_remove') TESTFLAGS+=" -test.timeout=${TIMEOUT:-5m}" TESTDIRS="${TESTDIRS:-./...}" exclude_paths='/vendor/|/integration' pkg_list=$(go list $TESTDIRS | grep -vE "($exclude_paths)") echo "${pkg_list}" | grep --fixed-strings "libnetwork/drivers/bridge" \ && if ! type docker-proxy; then hack/make.sh binary-proxy install-proxy fi mkdir -p bundles gotestsum --format=standard-quiet --jsonfile=bundles/go-test-report.json --junitfile=bundles/junit-report.xml -- \ "${BUILDFLAGS[@]}" \ -cover \ -coverprofile=bundles/profile.out \ -covermode=atomic \ ${TESTFLAGS} \ ${pkg_list}
#!/usr/bin/env bash # # Run unit tests and create report # # TESTFLAGS - add additional test flags. Ex: # # TESTFLAGS='-v -run TestBuild' hack/test/unit # # TESTDIRS - run tests for specified packages. Ex: # # TESTDIRS='./pkg/term' hack/test/unit # set -eux -o pipefail BUILDFLAGS=(-tags 'netgo seccomp libdm_no_deferred_remove') TESTFLAGS+=" -test.timeout=${TIMEOUT:-5m}" TESTDIRS="${TESTDIRS:-./...}" exclude_paths='/vendor/|/integration' pkg_list=$(go list $TESTDIRS | grep -vE "($exclude_paths)") base_pkg_list=$(echo "${pkg_list}" | grep --fixed-strings -v "/libnetwork" || :) libnetwork_pkg_list=$(echo "${pkg_list}" | grep --fixed-strings "/libnetwork" || :) echo "${libnetwork_pkg_list}" | grep --fixed-strings "libnetwork/drivers/bridge" \ && if ! type docker-proxy; then hack/make.sh binary-proxy install-proxy fi mkdir -p bundles if [ -n "${base_pkg_list}" ]; then gotestsum --format=standard-quiet --jsonfile=bundles/go-test-report.json --junitfile=bundles/junit-report.xml -- \ "${BUILDFLAGS[@]}" \ -cover \ -coverprofile=bundles/profile.out \ -covermode=atomic \ ${TESTFLAGS} \ ${base_pkg_list} fi if [ -n "${libnetwork_pkg_list}" ]; then # libnetwork tests invoke iptables, and cannot be run in parallel. Execute # tests within /libnetwork with '-p=1' to run them sequentially. See # https://github.com/moby/moby/issues/42458#issuecomment-873216754 for details. gotestsum --format=standard-quiet --jsonfile=bundles/go-test-report-libnetwork.json --junitfile=bundles/junit-report-libnetwork.xml -- \ "${BUILDFLAGS[@]}" \ -cover \ -coverprofile=bundles/profile-libnetwork.out \ -covermode=atomic \ -p=1 \ ${TESTFLAGS} \ ${libnetwork_pkg_list} fi
rvolosatovs
0c88b0dc8293ac075b9ee34575cb003ceedd2f2b
52af46671691dfb76772cbf6bac0f688e464fb5d
Perhaps combine the "NOTE" above with this comment; I think the two comments combined make it clearer what's happening. Something like:

```suggestion
# libnetwork tests invoke iptables, and cannot be run in parallel. Execute
# tests within /libnetwork with '-p=1' to run them sequentially. See
# https://github.com/moby/moby/issues/42458#issuecomment-873216754 for details.
```
thaJeztah
4,585
moby/moby
42,594
hack/test/unit: run `libnetwork` tests sequentially
**- What I did** Run `libnetwork` tests sequentially. See https://github.com/moby/moby/issues/42458#issuecomment-873216754 for explanation. Closes #42458 maybe more Note, that many tests in `libnetwork/...` already depend on the assumption that they are run sequentially and do not "easily" support things like `-run` due to that fact. E.g. this test is a prime example of such behavior: https://github.com/moby/moby/blob/45b45ad65b6f4a76f2bd0f768ef6edb10ebcc0bc/libnetwork/iptables/iptables_test.go#L1-L327 I, personally, don't think that `-p=1` is the "correct" solution here. Although, to do this "right" I think we'd have to rewrite a large portion of `libnetwork` tests and I do not think that is worth it at this point. **- How I did it** Run `gotestsum` at most twice and pass `-p=1` when running unit tests in `/libnetwork` namespace **- How to verify it** e.g. `TESTDIRS='github.com/docker/docker/libnetwork/iptables github.com/docker/docker/libnetwork/drivers/bridge' make test-unit` should pass (and sometimes fail without this change) **- Description for the changelog** <!-- Write a short (one line) summary that describes the changes in this pull request for inclusion in the changelog: --> **- A picture of a cute animal (not mandatory but encouraged)**
null
2021-07-02 19:29:56+00:00
2021-08-03 16:52:02+00:00
hack/test/unit
#!/usr/bin/env bash # # Run unit tests and create report # # TESTFLAGS - add additional test flags. Ex: # # TESTFLAGS='-v -run TestBuild' hack/test/unit # # TESTDIRS - run tests for specified packages. Ex: # # TESTDIRS='./pkg/term' hack/test/unit # set -eux -o pipefail BUILDFLAGS=(-tags 'netgo seccomp libdm_no_deferred_remove') TESTFLAGS+=" -test.timeout=${TIMEOUT:-5m}" TESTDIRS="${TESTDIRS:-./...}" exclude_paths='/vendor/|/integration' pkg_list=$(go list $TESTDIRS | grep -vE "($exclude_paths)") echo "${pkg_list}" | grep --fixed-strings "libnetwork/drivers/bridge" \ && if ! type docker-proxy; then hack/make.sh binary-proxy install-proxy fi mkdir -p bundles gotestsum --format=standard-quiet --jsonfile=bundles/go-test-report.json --junitfile=bundles/junit-report.xml -- \ "${BUILDFLAGS[@]}" \ -cover \ -coverprofile=bundles/profile.out \ -covermode=atomic \ ${TESTFLAGS} \ ${pkg_list}
#!/usr/bin/env bash # # Run unit tests and create report # # TESTFLAGS - add additional test flags. Ex: # # TESTFLAGS='-v -run TestBuild' hack/test/unit # # TESTDIRS - run tests for specified packages. Ex: # # TESTDIRS='./pkg/term' hack/test/unit # set -eux -o pipefail BUILDFLAGS=(-tags 'netgo seccomp libdm_no_deferred_remove') TESTFLAGS+=" -test.timeout=${TIMEOUT:-5m}" TESTDIRS="${TESTDIRS:-./...}" exclude_paths='/vendor/|/integration' pkg_list=$(go list $TESTDIRS | grep -vE "($exclude_paths)") base_pkg_list=$(echo "${pkg_list}" | grep --fixed-strings -v "/libnetwork" || :) libnetwork_pkg_list=$(echo "${pkg_list}" | grep --fixed-strings "/libnetwork" || :) echo "${libnetwork_pkg_list}" | grep --fixed-strings "libnetwork/drivers/bridge" \ && if ! type docker-proxy; then hack/make.sh binary-proxy install-proxy fi mkdir -p bundles if [ -n "${base_pkg_list}" ]; then gotestsum --format=standard-quiet --jsonfile=bundles/go-test-report.json --junitfile=bundles/junit-report.xml -- \ "${BUILDFLAGS[@]}" \ -cover \ -coverprofile=bundles/profile.out \ -covermode=atomic \ ${TESTFLAGS} \ ${base_pkg_list} fi if [ -n "${libnetwork_pkg_list}" ]; then # libnetwork tests invoke iptables, and cannot be run in parallel. Execute # tests within /libnetwork with '-p=1' to run them sequentially. See # https://github.com/moby/moby/issues/42458#issuecomment-873216754 for details. gotestsum --format=standard-quiet --jsonfile=bundles/go-test-report-libnetwork.json --junitfile=bundles/junit-report-libnetwork.xml -- \ "${BUILDFLAGS[@]}" \ -cover \ -coverprofile=bundles/profile-libnetwork.out \ -covermode=atomic \ -p=1 \ ${TESTFLAGS} \ ${libnetwork_pkg_list} fi
rvolosatovs
0c88b0dc8293ac075b9ee34575cb003ceedd2f2b
52af46671691dfb76772cbf6bac0f688e464fb5d
@tianon can correct me, but I think you want `-n` here

```suggestion
if [ -n "${base_pkg_list}" ]; then
```
samuelkarp
4,586
moby/moby
42,594
hack/test/unit: run `libnetwork` tests sequentially
**- What I did** Run `libnetwork` tests sequentially. See https://github.com/moby/moby/issues/42458#issuecomment-873216754 for explanation. Closes #42458 maybe more Note, that many tests in `libnetwork/...` already depend on the assumption that they are run sequentially and do not "easily" support things like `-run` due to that fact. E.g. this test is a prime example of such behavior: https://github.com/moby/moby/blob/45b45ad65b6f4a76f2bd0f768ef6edb10ebcc0bc/libnetwork/iptables/iptables_test.go#L1-L327 I, personally, don't think that `-p=1` is the "correct" solution here. Although, to do this "right" I think we'd have to rewrite a large portion of `libnetwork` tests and I do not think that is worth it at this point. **- How I did it** Run `gotestsum` at most twice and pass `-p=1` when running unit tests in `/libnetwork` namespace **- How to verify it** e.g. `TESTDIRS='github.com/docker/docker/libnetwork/iptables github.com/docker/docker/libnetwork/drivers/bridge' make test-unit` should pass (and sometimes fail without this change) **- Description for the changelog** <!-- Write a short (one line) summary that describes the changes in this pull request for inclusion in the changelog: --> **- A picture of a cute animal (not mandatory but encouraged)**
null
2021-07-02 19:29:56+00:00
2021-08-03 16:52:02+00:00
hack/test/unit
#!/usr/bin/env bash # # Run unit tests and create report # # TESTFLAGS - add additional test flags. Ex: # # TESTFLAGS='-v -run TestBuild' hack/test/unit # # TESTDIRS - run tests for specified packages. Ex: # # TESTDIRS='./pkg/term' hack/test/unit # set -eux -o pipefail BUILDFLAGS=(-tags 'netgo seccomp libdm_no_deferred_remove') TESTFLAGS+=" -test.timeout=${TIMEOUT:-5m}" TESTDIRS="${TESTDIRS:-./...}" exclude_paths='/vendor/|/integration' pkg_list=$(go list $TESTDIRS | grep -vE "($exclude_paths)") echo "${pkg_list}" | grep --fixed-strings "libnetwork/drivers/bridge" \ && if ! type docker-proxy; then hack/make.sh binary-proxy install-proxy fi mkdir -p bundles gotestsum --format=standard-quiet --jsonfile=bundles/go-test-report.json --junitfile=bundles/junit-report.xml -- \ "${BUILDFLAGS[@]}" \ -cover \ -coverprofile=bundles/profile.out \ -covermode=atomic \ ${TESTFLAGS} \ ${pkg_list}
#!/usr/bin/env bash # # Run unit tests and create report # # TESTFLAGS - add additional test flags. Ex: # # TESTFLAGS='-v -run TestBuild' hack/test/unit # # TESTDIRS - run tests for specified packages. Ex: # # TESTDIRS='./pkg/term' hack/test/unit # set -eux -o pipefail BUILDFLAGS=(-tags 'netgo seccomp libdm_no_deferred_remove') TESTFLAGS+=" -test.timeout=${TIMEOUT:-5m}" TESTDIRS="${TESTDIRS:-./...}" exclude_paths='/vendor/|/integration' pkg_list=$(go list $TESTDIRS | grep -vE "($exclude_paths)") base_pkg_list=$(echo "${pkg_list}" | grep --fixed-strings -v "/libnetwork" || :) libnetwork_pkg_list=$(echo "${pkg_list}" | grep --fixed-strings "/libnetwork" || :) echo "${libnetwork_pkg_list}" | grep --fixed-strings "libnetwork/drivers/bridge" \ && if ! type docker-proxy; then hack/make.sh binary-proxy install-proxy fi mkdir -p bundles if [ -n "${base_pkg_list}" ]; then gotestsum --format=standard-quiet --jsonfile=bundles/go-test-report.json --junitfile=bundles/junit-report.xml -- \ "${BUILDFLAGS[@]}" \ -cover \ -coverprofile=bundles/profile.out \ -covermode=atomic \ ${TESTFLAGS} \ ${base_pkg_list} fi if [ -n "${libnetwork_pkg_list}" ]; then # libnetwork tests invoke iptables, and cannot be run in parallel. Execute # tests within /libnetwork with '-p=1' to run them sequentially. See # https://github.com/moby/moby/issues/42458#issuecomment-873216754 for details. gotestsum --format=standard-quiet --jsonfile=bundles/go-test-report-libnetwork.json --junitfile=bundles/junit-report-libnetwork.xml -- \ "${BUILDFLAGS[@]}" \ -cover \ -coverprofile=bundles/profile-libnetwork.out \ -covermode=atomic \ -p=1 \ ${TESTFLAGS} \ ${libnetwork_pkg_list} fi
rvolosatovs
0c88b0dc8293ac075b9ee34575cb003ceedd2f2b
52af46671691dfb76772cbf6bac0f688e464fb5d
Same as above with `-n`
samuelkarp
4,587
moby/moby
42,594
hack/test/unit: run `libnetwork` tests sequentially
**- What I did** Run `libnetwork` tests sequentially. See https://github.com/moby/moby/issues/42458#issuecomment-873216754 for explanation. Closes #42458 maybe more Note, that many tests in `libnetwork/...` already depend on the assumption that they are run sequentially and do not "easily" support things like `-run` due to that fact. E.g. this test is a prime example of such behavior: https://github.com/moby/moby/blob/45b45ad65b6f4a76f2bd0f768ef6edb10ebcc0bc/libnetwork/iptables/iptables_test.go#L1-L327 I, personally, don't think that `-p=1` is the "correct" solution here. Although, to do this "right" I think we'd have to rewrite a large portion of `libnetwork` tests and I do not think that is worth it at this point. **- How I did it** Run `gotestsum` at most twice and pass `-p=1` when running unit tests in `/libnetwork` namespace **- How to verify it** e.g. `TESTDIRS='github.com/docker/docker/libnetwork/iptables github.com/docker/docker/libnetwork/drivers/bridge' make test-unit` should pass (and sometimes fail without this change) **- Description for the changelog** <!-- Write a short (one line) summary that describes the changes in this pull request for inclusion in the changelog: --> **- A picture of a cute animal (not mandatory but encouraged)**
null
2021-07-02 19:29:56+00:00
2021-08-03 16:52:02+00:00
hack/test/unit
#!/usr/bin/env bash # # Run unit tests and create report # # TESTFLAGS - add additional test flags. Ex: # # TESTFLAGS='-v -run TestBuild' hack/test/unit # # TESTDIRS - run tests for specified packages. Ex: # # TESTDIRS='./pkg/term' hack/test/unit # set -eux -o pipefail BUILDFLAGS=(-tags 'netgo seccomp libdm_no_deferred_remove') TESTFLAGS+=" -test.timeout=${TIMEOUT:-5m}" TESTDIRS="${TESTDIRS:-./...}" exclude_paths='/vendor/|/integration' pkg_list=$(go list $TESTDIRS | grep -vE "($exclude_paths)") echo "${pkg_list}" | grep --fixed-strings "libnetwork/drivers/bridge" \ && if ! type docker-proxy; then hack/make.sh binary-proxy install-proxy fi mkdir -p bundles gotestsum --format=standard-quiet --jsonfile=bundles/go-test-report.json --junitfile=bundles/junit-report.xml -- \ "${BUILDFLAGS[@]}" \ -cover \ -coverprofile=bundles/profile.out \ -covermode=atomic \ ${TESTFLAGS} \ ${pkg_list}
#!/usr/bin/env bash # # Run unit tests and create report # # TESTFLAGS - add additional test flags. Ex: # # TESTFLAGS='-v -run TestBuild' hack/test/unit # # TESTDIRS - run tests for specified packages. Ex: # # TESTDIRS='./pkg/term' hack/test/unit # set -eux -o pipefail BUILDFLAGS=(-tags 'netgo seccomp libdm_no_deferred_remove') TESTFLAGS+=" -test.timeout=${TIMEOUT:-5m}" TESTDIRS="${TESTDIRS:-./...}" exclude_paths='/vendor/|/integration' pkg_list=$(go list $TESTDIRS | grep -vE "($exclude_paths)") base_pkg_list=$(echo "${pkg_list}" | grep --fixed-strings -v "/libnetwork" || :) libnetwork_pkg_list=$(echo "${pkg_list}" | grep --fixed-strings "/libnetwork" || :) echo "${libnetwork_pkg_list}" | grep --fixed-strings "libnetwork/drivers/bridge" \ && if ! type docker-proxy; then hack/make.sh binary-proxy install-proxy fi mkdir -p bundles if [ -n "${base_pkg_list}" ]; then gotestsum --format=standard-quiet --jsonfile=bundles/go-test-report.json --junitfile=bundles/junit-report.xml -- \ "${BUILDFLAGS[@]}" \ -cover \ -coverprofile=bundles/profile.out \ -covermode=atomic \ ${TESTFLAGS} \ ${base_pkg_list} fi if [ -n "${libnetwork_pkg_list}" ]; then # libnetwork tests invoke iptables, and cannot be run in parallel. Execute # tests within /libnetwork with '-p=1' to run them sequentially. See # https://github.com/moby/moby/issues/42458#issuecomment-873216754 for details. gotestsum --format=standard-quiet --jsonfile=bundles/go-test-report-libnetwork.json --junitfile=bundles/junit-report-libnetwork.xml -- \ "${BUILDFLAGS[@]}" \ -cover \ -coverprofile=bundles/profile-libnetwork.out \ -covermode=atomic \ -p=1 \ ${TESTFLAGS} \ ${libnetwork_pkg_list} fi
rvolosatovs
0c88b0dc8293ac075b9ee34575cb003ceedd2f2b
52af46671691dfb76772cbf6bac0f688e464fb5d
These are all identical: ```bash #!/usr/bin/env bash empty='' nonempty='nonempty' [ "${empty}" ] && echo "non-empty" || echo "empty" [ -n "${empty}" ] && echo "non-empty" || echo "empty" [ ! -z "${empty}" ] && echo "non-empty" || echo "empty" [ "${empty}" != "" ] && echo "non-empty" || echo "empty" [ "${nonempty}" ] && echo "non-empty" || echo "empty" [ -n "${nonempty}" ] && echo "non-empty" || echo "empty" [ ! -z "${nonempty}" ] && echo "non-empty" || echo "empty" [ "${nonempty}" != "" ] && echo "non-empty" || echo "empty" ``` Produces: ``` empty empty empty empty non-empty non-empty non-empty non-empty ``` ``` $ bash --version GNU bash, version 4.4.23(1)-release (x86_64-unknown-linux-gnu) Copyright (C) 2016 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html> This is free software; you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. ``` Additionally, from `man test` (which is used by `[`): ``` -n STRING the length of STRING is nonzero STRING equivalent to -n STRING ``` Both styles are used across repo, sometimes even mixed within one file. In `/hack`, for example: https://github.com/moby/moby/blob/656a5e2bdf8cc2be64fee1459821b7045b755b0d/hack/make.sh#L44 https://github.com/moby/moby/blob/656a5e2bdf8cc2be64fee1459821b7045b755b0d/hack/make.sh#L67 https://github.com/moby/moby/blob/656a5e2bdf8cc2be64fee1459821b7045b755b0d/hack/validate/swagger-gen#L14 https://github.com/moby/moby/blob/656a5e2bdf8cc2be64fee1459821b7045b755b0d/hack/make/run#L31 https://github.com/moby/moby/blob/656a5e2bdf8cc2be64fee1459821b7045b755b0d/hack/make.sh#L144 https://github.com/moby/moby/blob/656a5e2bdf8cc2be64fee1459821b7045b755b0d/hack/make/run#L35 Is there a guideline, which clearly defines how this is to be done?
rvolosatovs
4,588
moby/moby
42,594
hack/test/unit: run `libnetwork` tests sequentially
**- What I did** Run `libnetwork` tests sequentially. See https://github.com/moby/moby/issues/42458#issuecomment-873216754 for explanation. Closes #42458 maybe more Note, that many tests in `libnetwork/...` already depend on the assumption that they are run sequentially and do not "easily" support things like `-run` due to that fact. E.g. this test is a prime example of such behavior: https://github.com/moby/moby/blob/45b45ad65b6f4a76f2bd0f768ef6edb10ebcc0bc/libnetwork/iptables/iptables_test.go#L1-L327 I, personally, don't think that `-p=1` is the "correct" solution here. Although, to do this "right" I think we'd have to rewrite a large portion of `libnetwork` tests and I do not think that is worth it at this point. **- How I did it** Run `gotestsum` at most twice and pass `-p=1` when running unit tests in `/libnetwork` namespace **- How to verify it** e.g. `TESTDIRS='github.com/docker/docker/libnetwork/iptables github.com/docker/docker/libnetwork/drivers/bridge' make test-unit` should pass (and sometimes fail without this change) **- Description for the changelog** <!-- Write a short (one line) summary that describes the changes in this pull request for inclusion in the changelog: --> **- A picture of a cute animal (not mandatory but encouraged)**
null
2021-07-02 19:29:56+00:00
2021-08-03 16:52:02+00:00
hack/test/unit
#!/usr/bin/env bash # # Run unit tests and create report # # TESTFLAGS - add additional test flags. Ex: # # TESTFLAGS='-v -run TestBuild' hack/test/unit # # TESTDIRS - run tests for specified packages. Ex: # # TESTDIRS='./pkg/term' hack/test/unit # set -eux -o pipefail BUILDFLAGS=(-tags 'netgo seccomp libdm_no_deferred_remove') TESTFLAGS+=" -test.timeout=${TIMEOUT:-5m}" TESTDIRS="${TESTDIRS:-./...}" exclude_paths='/vendor/|/integration' pkg_list=$(go list $TESTDIRS | grep -vE "($exclude_paths)") echo "${pkg_list}" | grep --fixed-strings "libnetwork/drivers/bridge" \ && if ! type docker-proxy; then hack/make.sh binary-proxy install-proxy fi mkdir -p bundles gotestsum --format=standard-quiet --jsonfile=bundles/go-test-report.json --junitfile=bundles/junit-report.xml -- \ "${BUILDFLAGS[@]}" \ -cover \ -coverprofile=bundles/profile.out \ -covermode=atomic \ ${TESTFLAGS} \ ${pkg_list}
#!/usr/bin/env bash # # Run unit tests and create report # # TESTFLAGS - add additional test flags. Ex: # # TESTFLAGS='-v -run TestBuild' hack/test/unit # # TESTDIRS - run tests for specified packages. Ex: # # TESTDIRS='./pkg/term' hack/test/unit # set -eux -o pipefail BUILDFLAGS=(-tags 'netgo seccomp libdm_no_deferred_remove') TESTFLAGS+=" -test.timeout=${TIMEOUT:-5m}" TESTDIRS="${TESTDIRS:-./...}" exclude_paths='/vendor/|/integration' pkg_list=$(go list $TESTDIRS | grep -vE "($exclude_paths)") base_pkg_list=$(echo "${pkg_list}" | grep --fixed-strings -v "/libnetwork" || :) libnetwork_pkg_list=$(echo "${pkg_list}" | grep --fixed-strings "/libnetwork" || :) echo "${libnetwork_pkg_list}" | grep --fixed-strings "libnetwork/drivers/bridge" \ && if ! type docker-proxy; then hack/make.sh binary-proxy install-proxy fi mkdir -p bundles if [ -n "${base_pkg_list}" ]; then gotestsum --format=standard-quiet --jsonfile=bundles/go-test-report.json --junitfile=bundles/junit-report.xml -- \ "${BUILDFLAGS[@]}" \ -cover \ -coverprofile=bundles/profile.out \ -covermode=atomic \ ${TESTFLAGS} \ ${base_pkg_list} fi if [ -n "${libnetwork_pkg_list}" ]; then # libnetwork tests invoke iptables, and cannot be run in parallel. Execute # tests within /libnetwork with '-p=1' to run them sequentially. See # https://github.com/moby/moby/issues/42458#issuecomment-873216754 for details. gotestsum --format=standard-quiet --jsonfile=bundles/go-test-report-libnetwork.json --junitfile=bundles/junit-report-libnetwork.xml -- \ "${BUILDFLAGS[@]}" \ -cover \ -coverprofile=bundles/profile-libnetwork.out \ -covermode=atomic \ -p=1 \ ${TESTFLAGS} \ ${libnetwork_pkg_list} fi
rvolosatovs
0c88b0dc8293ac075b9ee34575cb003ceedd2f2b
52af46671691dfb76772cbf6bac0f688e464fb5d
That should not matter; see above.
rvolosatovs
4,589
moby/moby
42,594
hack/test/unit: run `libnetwork` tests sequentially
**- What I did** Run `libnetwork` tests sequentially. See https://github.com/moby/moby/issues/42458#issuecomment-873216754 for explanation. Closes #42458 maybe more Note, that many tests in `libnetwork/...` already depend on the assumption that they are run sequentially and do not "easily" support things like `-run` due to that fact. E.g. this test is a prime example of such behavior: https://github.com/moby/moby/blob/45b45ad65b6f4a76f2bd0f768ef6edb10ebcc0bc/libnetwork/iptables/iptables_test.go#L1-L327 I, personally, don't think that `-p=1` is the "correct" solution here. Although, to do this "right" I think we'd have to rewrite a large portion of `libnetwork` tests and I do not think that is worth it at this point. **- How I did it** Run `gotestsum` at most twice and pass `-p=1` when running unit tests in `/libnetwork` namespace **- How to verify it** e.g. `TESTDIRS='github.com/docker/docker/libnetwork/iptables github.com/docker/docker/libnetwork/drivers/bridge' make test-unit` should pass (and sometimes fail without this change) **- Description for the changelog** <!-- Write a short (one line) summary that describes the changes in this pull request for inclusion in the changelog: --> **- A picture of a cute animal (not mandatory but encouraged)**
null
2021-07-02 19:29:56+00:00
2021-08-03 16:52:02+00:00
hack/test/unit
#!/usr/bin/env bash # # Run unit tests and create report # # TESTFLAGS - add additional test flags. Ex: # # TESTFLAGS='-v -run TestBuild' hack/test/unit # # TESTDIRS - run tests for specified packages. Ex: # # TESTDIRS='./pkg/term' hack/test/unit # set -eux -o pipefail BUILDFLAGS=(-tags 'netgo seccomp libdm_no_deferred_remove') TESTFLAGS+=" -test.timeout=${TIMEOUT:-5m}" TESTDIRS="${TESTDIRS:-./...}" exclude_paths='/vendor/|/integration' pkg_list=$(go list $TESTDIRS | grep -vE "($exclude_paths)") echo "${pkg_list}" | grep --fixed-strings "libnetwork/drivers/bridge" \ && if ! type docker-proxy; then hack/make.sh binary-proxy install-proxy fi mkdir -p bundles gotestsum --format=standard-quiet --jsonfile=bundles/go-test-report.json --junitfile=bundles/junit-report.xml -- \ "${BUILDFLAGS[@]}" \ -cover \ -coverprofile=bundles/profile.out \ -covermode=atomic \ ${TESTFLAGS} \ ${pkg_list}
#!/usr/bin/env bash # # Run unit tests and create report # # TESTFLAGS - add additional test flags. Ex: # # TESTFLAGS='-v -run TestBuild' hack/test/unit # # TESTDIRS - run tests for specified packages. Ex: # # TESTDIRS='./pkg/term' hack/test/unit # set -eux -o pipefail BUILDFLAGS=(-tags 'netgo seccomp libdm_no_deferred_remove') TESTFLAGS+=" -test.timeout=${TIMEOUT:-5m}" TESTDIRS="${TESTDIRS:-./...}" exclude_paths='/vendor/|/integration' pkg_list=$(go list $TESTDIRS | grep -vE "($exclude_paths)") base_pkg_list=$(echo "${pkg_list}" | grep --fixed-strings -v "/libnetwork" || :) libnetwork_pkg_list=$(echo "${pkg_list}" | grep --fixed-strings "/libnetwork" || :) echo "${libnetwork_pkg_list}" | grep --fixed-strings "libnetwork/drivers/bridge" \ && if ! type docker-proxy; then hack/make.sh binary-proxy install-proxy fi mkdir -p bundles if [ -n "${base_pkg_list}" ]; then gotestsum --format=standard-quiet --jsonfile=bundles/go-test-report.json --junitfile=bundles/junit-report.xml -- \ "${BUILDFLAGS[@]}" \ -cover \ -coverprofile=bundles/profile.out \ -covermode=atomic \ ${TESTFLAGS} \ ${base_pkg_list} fi if [ -n "${libnetwork_pkg_list}" ]; then # libnetwork tests invoke iptables, and cannot be run in parallel. Execute # tests within /libnetwork with '-p=1' to run them sequentially. See # https://github.com/moby/moby/issues/42458#issuecomment-873216754 for details. gotestsum --format=standard-quiet --jsonfile=bundles/go-test-report-libnetwork.json --junitfile=bundles/junit-report-libnetwork.xml -- \ "${BUILDFLAGS[@]}" \ -cover \ -coverprofile=bundles/profile-libnetwork.out \ -covermode=atomic \ -p=1 \ ${TESTFLAGS} \ ${libnetwork_pkg_list} fi
rvolosatovs
0c88b0dc8293ac075b9ee34575cb003ceedd2f2b
52af46671691dfb76772cbf6bac0f688e464fb5d
> Is there a guideline, which clearly defines how this is to be done? Where possible, we try to use POSIX (and avoid "bashisms") for portability, e.g. to allow some scripts to be run in an environment where `bash` may not be present, or where `/bin/sh` is not an alias for `bash` (which it is in many environments). Referring to the POSIX docs; https://pubs.opengroup.org/onlinepubs/9699919799/utilities/test.html > The two commands: > > ```bash > test "$1" > test ! "$1" > ``` > > could not be used reliably on some historical systems. Unexpected results would occur if such a string expression were used and `$1` expanded to `'!'`, `'('`, or a known unary primary. Better constructs are: > > ```bash > test -n "$1" > test -z "$1" > ``` So from the above, looks like `-n` or `-z` is the recommended approach.
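A short POSIX-sh sketch of the recommended form, applied (hypothetically) to the package-list variables from `hack/test/unit` above:

```bash
# Hedged sketch, assuming the pkg-list variables set earlier in hack/test/unit.
if [ -n "${base_pkg_list}" ]; then
    echo "base packages selected"
fi
if [ -z "${libnetwork_pkg_list}" ]; then
    echo "no libnetwork packages selected"
fi
```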
thaJeztah
4,590
moby/moby
42,592
Dockerfile: update go-swagger to fix validation on Go1.16
depends on https://github.com/kolyshkin/go-swagger/pull/2 relates to https://github.com/moby/moby/pull/40353 ~temporarily using my fork to test this in CI~
null
2021-07-02 13:03:05+00:00
2021-07-03 13:37:35+00:00
Dockerfile
# syntax=docker/dockerfile:1.2 ARG CROSS="false" ARG SYSTEMD="false" # IMPORTANT: When updating this please note that stdlib archive/tar pkg is vendored ARG GO_VERSION=1.16.5 ARG DEBIAN_FRONTEND=noninteractive ARG VPNKIT_VERSION=0.5.0 ARG DOCKER_BUILDTAGS="apparmor seccomp" ARG BASE_DEBIAN_DISTRO="buster" ARG GOLANG_IMAGE="golang:${GO_VERSION}-${BASE_DEBIAN_DISTRO}" FROM ${GOLANG_IMAGE} AS base RUN echo 'Binary::apt::APT::Keep-Downloaded-Packages "true";' > /etc/apt/apt.conf.d/keep-cache ARG APT_MIRROR RUN sed -ri "s/(httpredir|deb).debian.org/${APT_MIRROR:-deb.debian.org}/g" /etc/apt/sources.list \ && sed -ri "s/(security).debian.org/${APT_MIRROR:-security.debian.org}/g" /etc/apt/sources.list ENV GO111MODULE=off FROM base AS criu ARG DEBIAN_FRONTEND ADD --chmod=0644 https://download.opensuse.org/repositories/devel:/tools:/criu/Debian_10/Release.key /etc/apt/trusted.gpg.d/criu.gpg.asc # FIXME: temporarily doing a manual chmod as workaround for https://github.com/moby/buildkit/issues/2114 RUN --mount=type=cache,sharing=locked,id=moby-criu-aptlib,target=/var/lib/apt \ --mount=type=cache,sharing=locked,id=moby-criu-aptcache,target=/var/cache/apt \ chmod 0644 /etc/apt/trusted.gpg.d/criu.gpg.asc \ && echo 'deb https://download.opensuse.org/repositories/devel:/tools:/criu/Debian_10/ /' > /etc/apt/sources.list.d/criu.list \ && apt-get update \ && apt-get install -y --no-install-recommends criu \ && install -D /usr/sbin/criu /build/criu FROM base AS registry WORKDIR /go/src/github.com/docker/distribution # Install two versions of the registry. The first one is a recent version that # supports both schema 1 and 2 manifests. The second one is an older version that # only supports schema1 manifests. This allows integration-cli tests to cover # push/pull with both schema1 and schema2 manifests. # The old version of the registry is not working on arm64, so installation is # skipped on that architecture. ENV REGISTRY_COMMIT_SCHEMA1 ec87e9b6971d831f0eff752ddb54fb64693e51cd ENV REGISTRY_COMMIT 47a064d4195a9b56133891bbb13620c3ac83a827 RUN --mount=type=cache,target=/root/.cache/go-build \ --mount=type=cache,target=/go/pkg/mod \ --mount=type=tmpfs,target=/go/src/ \ set -x \ && git clone https://github.com/docker/distribution.git . \ && git checkout -q "$REGISTRY_COMMIT" \ && GOPATH="/go/src/github.com/docker/distribution/Godeps/_workspace:$GOPATH" \ go build -buildmode=pie -o /build/registry-v2 github.com/docker/distribution/cmd/registry \ && case $(dpkg --print-architecture) in \ amd64|armhf|ppc64*|s390x) \ git checkout -q "$REGISTRY_COMMIT_SCHEMA1"; \ GOPATH="/go/src/github.com/docker/distribution/Godeps/_workspace:$GOPATH"; \ go build -buildmode=pie -o /build/registry-v2-schema1 github.com/docker/distribution/cmd/registry; \ ;; \ esac FROM base AS swagger WORKDIR $GOPATH/src/github.com/go-swagger/go-swagger # Install go-swagger for validating swagger.yaml # This is https://github.com/kolyshkin/go-swagger/tree/golang-1.13-fix # TODO: move to under moby/ or fix upstream go-swagger to work for us. ENV GO_SWAGGER_COMMIT 5e6cb12f7c82ce78e45ba71fa6cb1928094db050 RUN --mount=type=cache,target=/root/.cache/go-build \ --mount=type=cache,target=/go/pkg/mod \ --mount=type=tmpfs,target=/go/src/ \ set -x \ && git clone https://github.com/kolyshkin/go-swagger.git . 
\ && git checkout -q "$GO_SWAGGER_COMMIT" \ && go build -o /build/swagger github.com/go-swagger/go-swagger/cmd/swagger FROM debian:${BASE_DEBIAN_DISTRO} AS frozen-images ARG DEBIAN_FRONTEND RUN --mount=type=cache,sharing=locked,id=moby-frozen-images-aptlib,target=/var/lib/apt \ --mount=type=cache,sharing=locked,id=moby-frozen-images-aptcache,target=/var/cache/apt \ apt-get update && apt-get install -y --no-install-recommends \ ca-certificates \ curl \ jq # Get useful and necessary Hub images so we can "docker load" locally instead of pulling COPY contrib/download-frozen-image-v2.sh / ARG TARGETARCH RUN /download-frozen-image-v2.sh /build \ buildpack-deps:buster@sha256:d0abb4b1e5c664828b93e8b6ac84d10bce45ee469999bef88304be04a2709491 \ busybox:latest@sha256:95cf004f559831017cdf4628aaf1bb30133677be8702a8c5f2994629f637a209 \ busybox:glibc@sha256:1f81263701cddf6402afe9f33fca0266d9fff379e59b1748f33d3072da71ee85 \ debian:bullseye@sha256:7190e972ab16aefea4d758ebe42a293f4e5c5be63595f4d03a5b9bf6839a4344 \ hello-world:latest@sha256:d58e752213a51785838f9eed2b7a498ffa1cb3aa7f946dda11af39286c3db9a9 \ arm32v7/hello-world:latest@sha256:50b8560ad574c779908da71f7ce370c0a2471c098d44d1c8f6b513c5a55eeeb1 # See also frozenImages in "testutil/environment/protect.go" (which needs to be updated when adding images to this list) FROM base AS cross-false FROM --platform=linux/amd64 base AS cross-true ARG DEBIAN_FRONTEND RUN dpkg --add-architecture arm64 RUN dpkg --add-architecture armel RUN dpkg --add-architecture armhf RUN dpkg --add-architecture ppc64el RUN dpkg --add-architecture s390x RUN --mount=type=cache,sharing=locked,id=moby-cross-true-aptlib,target=/var/lib/apt \ --mount=type=cache,sharing=locked,id=moby-cross-true-aptcache,target=/var/cache/apt \ apt-get update && apt-get install -y --no-install-recommends \ crossbuild-essential-arm64 \ crossbuild-essential-armel \ crossbuild-essential-armhf \ crossbuild-essential-ppc64el \ crossbuild-essential-s390x FROM cross-${CROSS} as dev-base FROM dev-base AS runtime-dev-cross-false ARG DEBIAN_FRONTEND RUN echo 'deb http://deb.debian.org/debian buster-backports main' > /etc/apt/sources.list.d/backports.list RUN --mount=type=cache,sharing=locked,id=moby-cross-false-aptlib,target=/var/lib/apt \ --mount=type=cache,sharing=locked,id=moby-cross-false-aptcache,target=/var/cache/apt \ apt-get update && apt-get install -y --no-install-recommends \ binutils-mingw-w64 \ g++-mingw-w64-x86-64 \ libapparmor-dev \ libbtrfs-dev \ libdevmapper-dev \ libseccomp-dev/buster-backports \ libsystemd-dev \ libudev-dev FROM --platform=linux/amd64 runtime-dev-cross-false AS runtime-dev-cross-true ARG DEBIAN_FRONTEND # These crossbuild packages rely on gcc-<arch>, but this doesn't want to install # on non-amd64 systems. # Additionally, the crossbuild-amd64 is currently only on debian:buster, so # other architectures cannnot crossbuild amd64. RUN --mount=type=cache,sharing=locked,id=moby-cross-true-aptlib,target=/var/lib/apt \ --mount=type=cache,sharing=locked,id=moby-cross-true-aptcache,target=/var/cache/apt \ apt-get update && apt-get install -y --no-install-recommends \ libapparmor-dev:arm64 \ libapparmor-dev:armel \ libapparmor-dev:armhf \ libapparmor-dev:ppc64el \ libapparmor-dev:s390x FROM runtime-dev-cross-${CROSS} AS runtime-dev FROM base AS tomll ARG GOTOML_VERSION RUN --mount=type=cache,target=/root/.cache/go-build \ --mount=type=cache,target=/go/pkg/mod \ --mount=type=bind,src=hack/dockerfile/install/tomll.installer,target=/tmp/install/tomll.installer \ . 
/tmp/install/tomll.installer && PREFIX=/build install_tomll FROM base AS vndr ARG VNDR_COMMIT RUN --mount=type=cache,target=/root/.cache/go-build \ --mount=type=cache,target=/go/pkg/mod \ --mount=type=bind,src=hack/dockerfile/install,target=/tmp/install \ PREFIX=/build /tmp/install/install.sh vndr FROM dev-base AS containerd ARG DEBIAN_FRONTEND RUN --mount=type=cache,sharing=locked,id=moby-containerd-aptlib,target=/var/lib/apt \ --mount=type=cache,sharing=locked,id=moby-containerd-aptcache,target=/var/cache/apt \ apt-get update && apt-get install -y --no-install-recommends \ libbtrfs-dev ARG CONTAINERD_COMMIT RUN --mount=type=cache,target=/root/.cache/go-build \ --mount=type=cache,target=/go/pkg/mod \ --mount=type=bind,src=hack/dockerfile/install,target=/tmp/install \ PREFIX=/build /tmp/install/install.sh containerd FROM base AS golangci_lint ARG GOLANGCI_LINT_COMMIT RUN --mount=type=cache,target=/root/.cache/go-build \ --mount=type=cache,target=/go/pkg/mod \ --mount=type=bind,src=hack/dockerfile/install,target=/tmp/install \ PREFIX=/build /tmp/install/install.sh golangci_lint FROM base AS gotestsum ARG GOTESTSUM_COMMIT RUN --mount=type=cache,target=/root/.cache/go-build \ --mount=type=cache,target=/go/pkg/mod \ --mount=type=bind,src=hack/dockerfile/install,target=/tmp/install \ PREFIX=/build /tmp/install/install.sh gotestsum FROM base AS shfmt ARG SHFMT_COMMIT RUN --mount=type=cache,target=/root/.cache/go-build \ --mount=type=cache,target=/go/pkg/mod \ --mount=type=bind,src=hack/dockerfile/install,target=/tmp/install \ PREFIX=/build /tmp/install/install.sh shfmt FROM dev-base AS dockercli ARG DOCKERCLI_CHANNEL ARG DOCKERCLI_VERSION RUN --mount=type=cache,target=/root/.cache/go-build \ --mount=type=cache,target=/go/pkg/mod \ --mount=type=bind,src=hack/dockerfile/install,target=/tmp/install \ PREFIX=/build /tmp/install/install.sh dockercli FROM runtime-dev AS runc ARG RUNC_COMMIT ARG RUNC_BUILDTAGS RUN --mount=type=cache,target=/root/.cache/go-build \ --mount=type=cache,target=/go/pkg/mod \ --mount=type=bind,src=hack/dockerfile/install,target=/tmp/install \ PREFIX=/build /tmp/install/install.sh runc FROM dev-base AS tini ARG DEBIAN_FRONTEND ARG TINI_COMMIT RUN --mount=type=cache,sharing=locked,id=moby-tini-aptlib,target=/var/lib/apt \ --mount=type=cache,sharing=locked,id=moby-tini-aptcache,target=/var/cache/apt \ apt-get update && apt-get install -y --no-install-recommends \ cmake \ vim-common RUN --mount=type=cache,target=/root/.cache/go-build \ --mount=type=cache,target=/go/pkg/mod \ --mount=type=bind,src=hack/dockerfile/install,target=/tmp/install \ PREFIX=/build /tmp/install/install.sh tini FROM dev-base AS rootlesskit ARG ROOTLESSKIT_COMMIT RUN --mount=type=cache,target=/root/.cache/go-build \ --mount=type=cache,target=/go/pkg/mod \ --mount=type=bind,src=hack/dockerfile/install,target=/tmp/install \ PREFIX=/build /tmp/install/install.sh rootlesskit COPY ./contrib/dockerd-rootless.sh /build COPY ./contrib/dockerd-rootless-setuptool.sh /build FROM --platform=amd64 djs55/vpnkit:${VPNKIT_VERSION} AS vpnkit-amd64 FROM --platform=arm64 djs55/vpnkit:${VPNKIT_VERSION} AS vpnkit-arm64 FROM scratch AS vpnkit COPY --from=vpnkit-amd64 /vpnkit /build/vpnkit.x86_64 COPY --from=vpnkit-arm64 /vpnkit /build/vpnkit.aarch64 # TODO: Some of this is only really needed for testing, it would be nice to split this up FROM runtime-dev AS dev-systemd-false ARG DEBIAN_FRONTEND RUN groupadd -r docker RUN useradd --create-home --gid docker unprivilegeduser \ && mkdir -p /home/unprivilegeduser/.local/share/docker \ 
&& chown -R unprivilegeduser /home/unprivilegeduser # Let us use a .bashrc file RUN ln -sfv /go/src/github.com/docker/docker/.bashrc ~/.bashrc # Activate bash completion and include Docker's completion if mounted with DOCKER_BASH_COMPLETION_PATH RUN echo "source /usr/share/bash-completion/bash_completion" >> /etc/bash.bashrc RUN ln -s /usr/local/completion/bash/docker /etc/bash_completion.d/docker RUN ldconfig # This should only install packages that are specifically needed for the dev environment and nothing else # Do you really need to add another package here? Can it be done in a different build stage? RUN --mount=type=cache,sharing=locked,id=moby-dev-aptlib,target=/var/lib/apt \ --mount=type=cache,sharing=locked,id=moby-dev-aptcache,target=/var/cache/apt \ apt-get update && apt-get install -y --no-install-recommends \ apparmor \ aufs-tools \ bash-completion \ bzip2 \ iptables \ jq \ libcap2-bin \ libnet1 \ libnl-3-200 \ libprotobuf-c1 \ net-tools \ patch \ pigz \ python3-pip \ python3-setuptools \ python3-wheel \ sudo \ thin-provisioning-tools \ uidmap \ vim \ vim-common \ xfsprogs \ xz-utils \ zip # Switch to use iptables instead of nftables (to match the CI hosts) # TODO use some kind of runtime auto-detection instead if/when nftables is supported (https://github.com/moby/moby/issues/26824) RUN update-alternatives --set iptables /usr/sbin/iptables-legacy || true \ && update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy || true \ && update-alternatives --set arptables /usr/sbin/arptables-legacy || true RUN pip3 install yamllint==1.26.1 COPY --from=dockercli /build/ /usr/local/cli COPY --from=frozen-images /build/ /docker-frozen-images COPY --from=swagger /build/ /usr/local/bin/ COPY --from=tomll /build/ /usr/local/bin/ COPY --from=tini /build/ /usr/local/bin/ COPY --from=registry /build/ /usr/local/bin/ COPY --from=criu /build/ /usr/local/bin/ COPY --from=vndr /build/ /usr/local/bin/ COPY --from=gotestsum /build/ /usr/local/bin/ COPY --from=golangci_lint /build/ /usr/local/bin/ COPY --from=shfmt /build/ /usr/local/bin/ COPY --from=runc /build/ /usr/local/bin/ COPY --from=containerd /build/ /usr/local/bin/ COPY --from=rootlesskit /build/ /usr/local/bin/ COPY --from=vpnkit /build/ /usr/local/bin/ ENV PATH=/usr/local/cli:$PATH ARG DOCKER_BUILDTAGS ENV DOCKER_BUILDTAGS="${DOCKER_BUILDTAGS}" WORKDIR /go/src/github.com/docker/docker VOLUME /var/lib/docker VOLUME /home/unprivilegeduser/.local/share/docker # Wrap all commands in the "docker-in-docker" script to allow nested containers ENTRYPOINT ["hack/dind"] FROM dev-systemd-false AS dev-systemd-true RUN --mount=type=cache,sharing=locked,id=moby-dev-aptlib,target=/var/lib/apt \ --mount=type=cache,sharing=locked,id=moby-dev-aptcache,target=/var/cache/apt \ apt-get update && apt-get install -y --no-install-recommends \ dbus \ dbus-user-session \ systemd \ systemd-sysv RUN mkdir -p hack \ && curl -o hack/dind-systemd https://raw.githubusercontent.com/AkihiroSuda/containerized-systemd/b70bac0daeea120456764248164c21684ade7d0d/docker-entrypoint.sh \ && chmod +x hack/dind-systemd ENTRYPOINT ["hack/dind-systemd"] FROM dev-systemd-${SYSTEMD} AS dev FROM runtime-dev AS binary-base ARG DOCKER_GITCOMMIT=HEAD ENV DOCKER_GITCOMMIT=${DOCKER_GITCOMMIT} ARG VERSION ENV VERSION=${VERSION} ARG PLATFORM ENV PLATFORM=${PLATFORM} ARG PRODUCT ENV PRODUCT=${PRODUCT} ARG DEFAULT_PRODUCT_LICENSE ENV DEFAULT_PRODUCT_LICENSE=${DEFAULT_PRODUCT_LICENSE} ARG DOCKER_BUILDTAGS ENV DOCKER_BUILDTAGS="${DOCKER_BUILDTAGS}" ENV PREFIX=/build # TODO: This is here 
because hack/make.sh binary copies these extras binaries # from $PATH into the bundles dir. # It would be nice to handle this in a different way. COPY --from=tini /build/ /usr/local/bin/ COPY --from=runc /build/ /usr/local/bin/ COPY --from=containerd /build/ /usr/local/bin/ COPY --from=rootlesskit /build/ /usr/local/bin/ COPY --from=vpnkit /build/ /usr/local/bin/ WORKDIR /go/src/github.com/docker/docker FROM binary-base AS build-binary RUN --mount=type=cache,target=/root/.cache/go-build \ --mount=type=bind,target=/go/src/github.com/docker/docker \ hack/make.sh binary FROM binary-base AS build-dynbinary RUN --mount=type=cache,target=/root/.cache/go-build \ --mount=type=bind,target=/go/src/github.com/docker/docker \ hack/make.sh dynbinary FROM binary-base AS build-cross ARG DOCKER_CROSSPLATFORMS RUN --mount=type=cache,target=/root/.cache/go-build \ --mount=type=bind,target=/go/src/github.com/docker/docker \ --mount=type=tmpfs,target=/go/src/github.com/docker/docker/autogen \ hack/make.sh cross FROM scratch AS binary COPY --from=build-binary /build/bundles/ / FROM scratch AS dynbinary COPY --from=build-dynbinary /build/bundles/ / FROM scratch AS cross COPY --from=build-cross /build/bundles/ / FROM dev AS final COPY . /go/src/github.com/docker/docker
# syntax=docker/dockerfile:1.2 ARG CROSS="false" ARG SYSTEMD="false" # IMPORTANT: When updating this please note that stdlib archive/tar pkg is vendored ARG GO_VERSION=1.16.5 ARG DEBIAN_FRONTEND=noninteractive ARG VPNKIT_VERSION=0.5.0 ARG DOCKER_BUILDTAGS="apparmor seccomp" ARG BASE_DEBIAN_DISTRO="buster" ARG GOLANG_IMAGE="golang:${GO_VERSION}-${BASE_DEBIAN_DISTRO}" FROM ${GOLANG_IMAGE} AS base RUN echo 'Binary::apt::APT::Keep-Downloaded-Packages "true";' > /etc/apt/apt.conf.d/keep-cache ARG APT_MIRROR RUN sed -ri "s/(httpredir|deb).debian.org/${APT_MIRROR:-deb.debian.org}/g" /etc/apt/sources.list \ && sed -ri "s/(security).debian.org/${APT_MIRROR:-security.debian.org}/g" /etc/apt/sources.list ENV GO111MODULE=off FROM base AS criu ARG DEBIAN_FRONTEND ADD --chmod=0644 https://download.opensuse.org/repositories/devel:/tools:/criu/Debian_10/Release.key /etc/apt/trusted.gpg.d/criu.gpg.asc # FIXME: temporarily doing a manual chmod as workaround for https://github.com/moby/buildkit/issues/2114 RUN --mount=type=cache,sharing=locked,id=moby-criu-aptlib,target=/var/lib/apt \ --mount=type=cache,sharing=locked,id=moby-criu-aptcache,target=/var/cache/apt \ chmod 0644 /etc/apt/trusted.gpg.d/criu.gpg.asc \ && echo 'deb https://download.opensuse.org/repositories/devel:/tools:/criu/Debian_10/ /' > /etc/apt/sources.list.d/criu.list \ && apt-get update \ && apt-get install -y --no-install-recommends criu \ && install -D /usr/sbin/criu /build/criu FROM base AS registry WORKDIR /go/src/github.com/docker/distribution # Install two versions of the registry. The first one is a recent version that # supports both schema 1 and 2 manifests. The second one is an older version that # only supports schema1 manifests. This allows integration-cli tests to cover # push/pull with both schema1 and schema2 manifests. # The old version of the registry is not working on arm64, so installation is # skipped on that architecture. ENV REGISTRY_COMMIT_SCHEMA1 ec87e9b6971d831f0eff752ddb54fb64693e51cd ENV REGISTRY_COMMIT 47a064d4195a9b56133891bbb13620c3ac83a827 RUN --mount=type=cache,target=/root/.cache/go-build \ --mount=type=cache,target=/go/pkg/mod \ --mount=type=tmpfs,target=/go/src/ \ set -x \ && git clone https://github.com/docker/distribution.git . \ && git checkout -q "$REGISTRY_COMMIT" \ && GOPATH="/go/src/github.com/docker/distribution/Godeps/_workspace:$GOPATH" \ go build -buildmode=pie -o /build/registry-v2 github.com/docker/distribution/cmd/registry \ && case $(dpkg --print-architecture) in \ amd64|armhf|ppc64*|s390x) \ git checkout -q "$REGISTRY_COMMIT_SCHEMA1"; \ GOPATH="/go/src/github.com/docker/distribution/Godeps/_workspace:$GOPATH"; \ go build -buildmode=pie -o /build/registry-v2-schema1 github.com/docker/distribution/cmd/registry; \ ;; \ esac FROM base AS swagger WORKDIR $GOPATH/src/github.com/go-swagger/go-swagger # Install go-swagger for validating swagger.yaml # This is https://github.com/kolyshkin/go-swagger/tree/golang-1.13-fix # TODO: move to under moby/ or fix upstream go-swagger to work for us. ENV GO_SWAGGER_COMMIT c56166c036004ba7a3a321e5951ba472b9ae298c RUN --mount=type=cache,target=/root/.cache/go-build \ --mount=type=cache,target=/go/pkg/mod \ --mount=type=tmpfs,target=/go/src/ \ set -x \ && git clone https://github.com/kolyshkin/go-swagger.git . 
\ && git checkout -q "$GO_SWAGGER_COMMIT" \ && go build -o /build/swagger github.com/go-swagger/go-swagger/cmd/swagger FROM debian:${BASE_DEBIAN_DISTRO} AS frozen-images ARG DEBIAN_FRONTEND RUN --mount=type=cache,sharing=locked,id=moby-frozen-images-aptlib,target=/var/lib/apt \ --mount=type=cache,sharing=locked,id=moby-frozen-images-aptcache,target=/var/cache/apt \ apt-get update && apt-get install -y --no-install-recommends \ ca-certificates \ curl \ jq # Get useful and necessary Hub images so we can "docker load" locally instead of pulling COPY contrib/download-frozen-image-v2.sh / ARG TARGETARCH RUN /download-frozen-image-v2.sh /build \ buildpack-deps:buster@sha256:d0abb4b1e5c664828b93e8b6ac84d10bce45ee469999bef88304be04a2709491 \ busybox:latest@sha256:95cf004f559831017cdf4628aaf1bb30133677be8702a8c5f2994629f637a209 \ busybox:glibc@sha256:1f81263701cddf6402afe9f33fca0266d9fff379e59b1748f33d3072da71ee85 \ debian:bullseye@sha256:7190e972ab16aefea4d758ebe42a293f4e5c5be63595f4d03a5b9bf6839a4344 \ hello-world:latest@sha256:d58e752213a51785838f9eed2b7a498ffa1cb3aa7f946dda11af39286c3db9a9 \ arm32v7/hello-world:latest@sha256:50b8560ad574c779908da71f7ce370c0a2471c098d44d1c8f6b513c5a55eeeb1 # See also frozenImages in "testutil/environment/protect.go" (which needs to be updated when adding images to this list) FROM base AS cross-false FROM --platform=linux/amd64 base AS cross-true ARG DEBIAN_FRONTEND RUN dpkg --add-architecture arm64 RUN dpkg --add-architecture armel RUN dpkg --add-architecture armhf RUN dpkg --add-architecture ppc64el RUN dpkg --add-architecture s390x RUN --mount=type=cache,sharing=locked,id=moby-cross-true-aptlib,target=/var/lib/apt \ --mount=type=cache,sharing=locked,id=moby-cross-true-aptcache,target=/var/cache/apt \ apt-get update && apt-get install -y --no-install-recommends \ crossbuild-essential-arm64 \ crossbuild-essential-armel \ crossbuild-essential-armhf \ crossbuild-essential-ppc64el \ crossbuild-essential-s390x FROM cross-${CROSS} as dev-base FROM dev-base AS runtime-dev-cross-false ARG DEBIAN_FRONTEND RUN echo 'deb http://deb.debian.org/debian buster-backports main' > /etc/apt/sources.list.d/backports.list RUN --mount=type=cache,sharing=locked,id=moby-cross-false-aptlib,target=/var/lib/apt \ --mount=type=cache,sharing=locked,id=moby-cross-false-aptcache,target=/var/cache/apt \ apt-get update && apt-get install -y --no-install-recommends \ binutils-mingw-w64 \ g++-mingw-w64-x86-64 \ libapparmor-dev \ libbtrfs-dev \ libdevmapper-dev \ libseccomp-dev/buster-backports \ libsystemd-dev \ libudev-dev FROM --platform=linux/amd64 runtime-dev-cross-false AS runtime-dev-cross-true ARG DEBIAN_FRONTEND # These crossbuild packages rely on gcc-<arch>, but this doesn't want to install # on non-amd64 systems. # Additionally, the crossbuild-amd64 is currently only on debian:buster, so # other architectures cannnot crossbuild amd64. RUN --mount=type=cache,sharing=locked,id=moby-cross-true-aptlib,target=/var/lib/apt \ --mount=type=cache,sharing=locked,id=moby-cross-true-aptcache,target=/var/cache/apt \ apt-get update && apt-get install -y --no-install-recommends \ libapparmor-dev:arm64 \ libapparmor-dev:armel \ libapparmor-dev:armhf \ libapparmor-dev:ppc64el \ libapparmor-dev:s390x FROM runtime-dev-cross-${CROSS} AS runtime-dev FROM base AS tomll ARG GOTOML_VERSION RUN --mount=type=cache,target=/root/.cache/go-build \ --mount=type=cache,target=/go/pkg/mod \ --mount=type=bind,src=hack/dockerfile/install/tomll.installer,target=/tmp/install/tomll.installer \ . 
/tmp/install/tomll.installer && PREFIX=/build install_tomll FROM base AS vndr ARG VNDR_COMMIT RUN --mount=type=cache,target=/root/.cache/go-build \ --mount=type=cache,target=/go/pkg/mod \ --mount=type=bind,src=hack/dockerfile/install,target=/tmp/install \ PREFIX=/build /tmp/install/install.sh vndr FROM dev-base AS containerd ARG DEBIAN_FRONTEND RUN --mount=type=cache,sharing=locked,id=moby-containerd-aptlib,target=/var/lib/apt \ --mount=type=cache,sharing=locked,id=moby-containerd-aptcache,target=/var/cache/apt \ apt-get update && apt-get install -y --no-install-recommends \ libbtrfs-dev ARG CONTAINERD_COMMIT RUN --mount=type=cache,target=/root/.cache/go-build \ --mount=type=cache,target=/go/pkg/mod \ --mount=type=bind,src=hack/dockerfile/install,target=/tmp/install \ PREFIX=/build /tmp/install/install.sh containerd FROM base AS golangci_lint ARG GOLANGCI_LINT_COMMIT RUN --mount=type=cache,target=/root/.cache/go-build \ --mount=type=cache,target=/go/pkg/mod \ --mount=type=bind,src=hack/dockerfile/install,target=/tmp/install \ PREFIX=/build /tmp/install/install.sh golangci_lint FROM base AS gotestsum ARG GOTESTSUM_COMMIT RUN --mount=type=cache,target=/root/.cache/go-build \ --mount=type=cache,target=/go/pkg/mod \ --mount=type=bind,src=hack/dockerfile/install,target=/tmp/install \ PREFIX=/build /tmp/install/install.sh gotestsum FROM base AS shfmt ARG SHFMT_COMMIT RUN --mount=type=cache,target=/root/.cache/go-build \ --mount=type=cache,target=/go/pkg/mod \ --mount=type=bind,src=hack/dockerfile/install,target=/tmp/install \ PREFIX=/build /tmp/install/install.sh shfmt FROM dev-base AS dockercli ARG DOCKERCLI_CHANNEL ARG DOCKERCLI_VERSION RUN --mount=type=cache,target=/root/.cache/go-build \ --mount=type=cache,target=/go/pkg/mod \ --mount=type=bind,src=hack/dockerfile/install,target=/tmp/install \ PREFIX=/build /tmp/install/install.sh dockercli FROM runtime-dev AS runc ARG RUNC_COMMIT ARG RUNC_BUILDTAGS RUN --mount=type=cache,target=/root/.cache/go-build \ --mount=type=cache,target=/go/pkg/mod \ --mount=type=bind,src=hack/dockerfile/install,target=/tmp/install \ PREFIX=/build /tmp/install/install.sh runc FROM dev-base AS tini ARG DEBIAN_FRONTEND ARG TINI_COMMIT RUN --mount=type=cache,sharing=locked,id=moby-tini-aptlib,target=/var/lib/apt \ --mount=type=cache,sharing=locked,id=moby-tini-aptcache,target=/var/cache/apt \ apt-get update && apt-get install -y --no-install-recommends \ cmake \ vim-common RUN --mount=type=cache,target=/root/.cache/go-build \ --mount=type=cache,target=/go/pkg/mod \ --mount=type=bind,src=hack/dockerfile/install,target=/tmp/install \ PREFIX=/build /tmp/install/install.sh tini FROM dev-base AS rootlesskit ARG ROOTLESSKIT_COMMIT RUN --mount=type=cache,target=/root/.cache/go-build \ --mount=type=cache,target=/go/pkg/mod \ --mount=type=bind,src=hack/dockerfile/install,target=/tmp/install \ PREFIX=/build /tmp/install/install.sh rootlesskit COPY ./contrib/dockerd-rootless.sh /build COPY ./contrib/dockerd-rootless-setuptool.sh /build FROM --platform=amd64 djs55/vpnkit:${VPNKIT_VERSION} AS vpnkit-amd64 FROM --platform=arm64 djs55/vpnkit:${VPNKIT_VERSION} AS vpnkit-arm64 FROM scratch AS vpnkit COPY --from=vpnkit-amd64 /vpnkit /build/vpnkit.x86_64 COPY --from=vpnkit-arm64 /vpnkit /build/vpnkit.aarch64 # TODO: Some of this is only really needed for testing, it would be nice to split this up FROM runtime-dev AS dev-systemd-false ARG DEBIAN_FRONTEND RUN groupadd -r docker RUN useradd --create-home --gid docker unprivilegeduser \ && mkdir -p /home/unprivilegeduser/.local/share/docker \ 
&& chown -R unprivilegeduser /home/unprivilegeduser # Let us use a .bashrc file RUN ln -sfv /go/src/github.com/docker/docker/.bashrc ~/.bashrc # Activate bash completion and include Docker's completion if mounted with DOCKER_BASH_COMPLETION_PATH RUN echo "source /usr/share/bash-completion/bash_completion" >> /etc/bash.bashrc RUN ln -s /usr/local/completion/bash/docker /etc/bash_completion.d/docker RUN ldconfig # This should only install packages that are specifically needed for the dev environment and nothing else # Do you really need to add another package here? Can it be done in a different build stage? RUN --mount=type=cache,sharing=locked,id=moby-dev-aptlib,target=/var/lib/apt \ --mount=type=cache,sharing=locked,id=moby-dev-aptcache,target=/var/cache/apt \ apt-get update && apt-get install -y --no-install-recommends \ apparmor \ aufs-tools \ bash-completion \ bzip2 \ iptables \ jq \ libcap2-bin \ libnet1 \ libnl-3-200 \ libprotobuf-c1 \ net-tools \ patch \ pigz \ python3-pip \ python3-setuptools \ python3-wheel \ sudo \ thin-provisioning-tools \ uidmap \ vim \ vim-common \ xfsprogs \ xz-utils \ zip # Switch to use iptables instead of nftables (to match the CI hosts) # TODO use some kind of runtime auto-detection instead if/when nftables is supported (https://github.com/moby/moby/issues/26824) RUN update-alternatives --set iptables /usr/sbin/iptables-legacy || true \ && update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy || true \ && update-alternatives --set arptables /usr/sbin/arptables-legacy || true RUN pip3 install yamllint==1.26.1 COPY --from=dockercli /build/ /usr/local/cli COPY --from=frozen-images /build/ /docker-frozen-images COPY --from=swagger /build/ /usr/local/bin/ COPY --from=tomll /build/ /usr/local/bin/ COPY --from=tini /build/ /usr/local/bin/ COPY --from=registry /build/ /usr/local/bin/ COPY --from=criu /build/ /usr/local/bin/ COPY --from=vndr /build/ /usr/local/bin/ COPY --from=gotestsum /build/ /usr/local/bin/ COPY --from=golangci_lint /build/ /usr/local/bin/ COPY --from=shfmt /build/ /usr/local/bin/ COPY --from=runc /build/ /usr/local/bin/ COPY --from=containerd /build/ /usr/local/bin/ COPY --from=rootlesskit /build/ /usr/local/bin/ COPY --from=vpnkit /build/ /usr/local/bin/ ENV PATH=/usr/local/cli:$PATH ARG DOCKER_BUILDTAGS ENV DOCKER_BUILDTAGS="${DOCKER_BUILDTAGS}" WORKDIR /go/src/github.com/docker/docker VOLUME /var/lib/docker VOLUME /home/unprivilegeduser/.local/share/docker # Wrap all commands in the "docker-in-docker" script to allow nested containers ENTRYPOINT ["hack/dind"] FROM dev-systemd-false AS dev-systemd-true RUN --mount=type=cache,sharing=locked,id=moby-dev-aptlib,target=/var/lib/apt \ --mount=type=cache,sharing=locked,id=moby-dev-aptcache,target=/var/cache/apt \ apt-get update && apt-get install -y --no-install-recommends \ dbus \ dbus-user-session \ systemd \ systemd-sysv RUN mkdir -p hack \ && curl -o hack/dind-systemd https://raw.githubusercontent.com/AkihiroSuda/containerized-systemd/b70bac0daeea120456764248164c21684ade7d0d/docker-entrypoint.sh \ && chmod +x hack/dind-systemd ENTRYPOINT ["hack/dind-systemd"] FROM dev-systemd-${SYSTEMD} AS dev FROM runtime-dev AS binary-base ARG DOCKER_GITCOMMIT=HEAD ENV DOCKER_GITCOMMIT=${DOCKER_GITCOMMIT} ARG VERSION ENV VERSION=${VERSION} ARG PLATFORM ENV PLATFORM=${PLATFORM} ARG PRODUCT ENV PRODUCT=${PRODUCT} ARG DEFAULT_PRODUCT_LICENSE ENV DEFAULT_PRODUCT_LICENSE=${DEFAULT_PRODUCT_LICENSE} ARG DOCKER_BUILDTAGS ENV DOCKER_BUILDTAGS="${DOCKER_BUILDTAGS}" ENV PREFIX=/build # TODO: This is here 
because hack/make.sh binary copies these extras binaries # from $PATH into the bundles dir. # It would be nice to handle this in a different way. COPY --from=tini /build/ /usr/local/bin/ COPY --from=runc /build/ /usr/local/bin/ COPY --from=containerd /build/ /usr/local/bin/ COPY --from=rootlesskit /build/ /usr/local/bin/ COPY --from=vpnkit /build/ /usr/local/bin/ WORKDIR /go/src/github.com/docker/docker FROM binary-base AS build-binary RUN --mount=type=cache,target=/root/.cache/go-build \ --mount=type=bind,target=/go/src/github.com/docker/docker \ hack/make.sh binary FROM binary-base AS build-dynbinary RUN --mount=type=cache,target=/root/.cache/go-build \ --mount=type=bind,target=/go/src/github.com/docker/docker \ hack/make.sh dynbinary FROM binary-base AS build-cross ARG DOCKER_CROSSPLATFORMS RUN --mount=type=cache,target=/root/.cache/go-build \ --mount=type=bind,target=/go/src/github.com/docker/docker \ --mount=type=tmpfs,target=/go/src/github.com/docker/docker/autogen \ hack/make.sh cross FROM scratch AS binary COPY --from=build-binary /build/bundles/ / FROM scratch AS dynbinary COPY --from=build-dynbinary /build/bundles/ / FROM scratch AS cross COPY --from=build-cross /build/bundles/ / FROM dev AS final COPY . /go/src/github.com/docker/docker
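For reference, a hedged sketch of how the pinned `swagger` binary is typically exercised against the API definition; the `api/swagger.yaml` path is an assumption here rather than something shown in this diff:

```bash
# assumption: the swagger binary built in the stage above is on PATH and
# the API definition lives at api/swagger.yaml
swagger validate api/swagger.yaml
```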
thaJeztah
45b45ad65b6f4a76f2bd0f768ef6edb10ebcc0bc
3f53b2ef7fab1e72a0368ab53c4ed993f0f5653f
It's hurting my head to read (because I do not Swagger nearly enough), but do you think it's worth adding a link to https://github.com/go-swagger/go-swagger/issues/2077 in the comments here? (there are some suggestions in there that seem like they'd be good fodder for a "help wanted" if someone wanted to pitch in here, so it seems like a shame _not_ to link to :sweat_smile:)
tianon
4,591
moby/moby
42,592
Dockerfile: update go-swagger to fix validation on Go1.16
depends on https://github.com/kolyshkin/go-swagger/pull/2 relates to https://github.com/moby/moby/pull/40353 ~temporarily using my fork to test this in CI~
null
2021-07-02 13:03:05+00:00
2021-07-03 13:37:35+00:00
Dockerfile
# syntax=docker/dockerfile:1.2 ARG CROSS="false" ARG SYSTEMD="false" # IMPORTANT: When updating this please note that stdlib archive/tar pkg is vendored ARG GO_VERSION=1.16.5 ARG DEBIAN_FRONTEND=noninteractive ARG VPNKIT_VERSION=0.5.0 ARG DOCKER_BUILDTAGS="apparmor seccomp" ARG BASE_DEBIAN_DISTRO="buster" ARG GOLANG_IMAGE="golang:${GO_VERSION}-${BASE_DEBIAN_DISTRO}" FROM ${GOLANG_IMAGE} AS base RUN echo 'Binary::apt::APT::Keep-Downloaded-Packages "true";' > /etc/apt/apt.conf.d/keep-cache ARG APT_MIRROR RUN sed -ri "s/(httpredir|deb).debian.org/${APT_MIRROR:-deb.debian.org}/g" /etc/apt/sources.list \ && sed -ri "s/(security).debian.org/${APT_MIRROR:-security.debian.org}/g" /etc/apt/sources.list ENV GO111MODULE=off FROM base AS criu ARG DEBIAN_FRONTEND ADD --chmod=0644 https://download.opensuse.org/repositories/devel:/tools:/criu/Debian_10/Release.key /etc/apt/trusted.gpg.d/criu.gpg.asc # FIXME: temporarily doing a manual chmod as workaround for https://github.com/moby/buildkit/issues/2114 RUN --mount=type=cache,sharing=locked,id=moby-criu-aptlib,target=/var/lib/apt \ --mount=type=cache,sharing=locked,id=moby-criu-aptcache,target=/var/cache/apt \ chmod 0644 /etc/apt/trusted.gpg.d/criu.gpg.asc \ && echo 'deb https://download.opensuse.org/repositories/devel:/tools:/criu/Debian_10/ /' > /etc/apt/sources.list.d/criu.list \ && apt-get update \ && apt-get install -y --no-install-recommends criu \ && install -D /usr/sbin/criu /build/criu FROM base AS registry WORKDIR /go/src/github.com/docker/distribution # Install two versions of the registry. The first one is a recent version that # supports both schema 1 and 2 manifests. The second one is an older version that # only supports schema1 manifests. This allows integration-cli tests to cover # push/pull with both schema1 and schema2 manifests. # The old version of the registry is not working on arm64, so installation is # skipped on that architecture. ENV REGISTRY_COMMIT_SCHEMA1 ec87e9b6971d831f0eff752ddb54fb64693e51cd ENV REGISTRY_COMMIT 47a064d4195a9b56133891bbb13620c3ac83a827 RUN --mount=type=cache,target=/root/.cache/go-build \ --mount=type=cache,target=/go/pkg/mod \ --mount=type=tmpfs,target=/go/src/ \ set -x \ && git clone https://github.com/docker/distribution.git . \ && git checkout -q "$REGISTRY_COMMIT" \ && GOPATH="/go/src/github.com/docker/distribution/Godeps/_workspace:$GOPATH" \ go build -buildmode=pie -o /build/registry-v2 github.com/docker/distribution/cmd/registry \ && case $(dpkg --print-architecture) in \ amd64|armhf|ppc64*|s390x) \ git checkout -q "$REGISTRY_COMMIT_SCHEMA1"; \ GOPATH="/go/src/github.com/docker/distribution/Godeps/_workspace:$GOPATH"; \ go build -buildmode=pie -o /build/registry-v2-schema1 github.com/docker/distribution/cmd/registry; \ ;; \ esac FROM base AS swagger WORKDIR $GOPATH/src/github.com/go-swagger/go-swagger # Install go-swagger for validating swagger.yaml # This is https://github.com/kolyshkin/go-swagger/tree/golang-1.13-fix # TODO: move to under moby/ or fix upstream go-swagger to work for us. ENV GO_SWAGGER_COMMIT 5e6cb12f7c82ce78e45ba71fa6cb1928094db050 RUN --mount=type=cache,target=/root/.cache/go-build \ --mount=type=cache,target=/go/pkg/mod \ --mount=type=tmpfs,target=/go/src/ \ set -x \ && git clone https://github.com/kolyshkin/go-swagger.git . 
\ && git checkout -q "$GO_SWAGGER_COMMIT" \ && go build -o /build/swagger github.com/go-swagger/go-swagger/cmd/swagger FROM debian:${BASE_DEBIAN_DISTRO} AS frozen-images ARG DEBIAN_FRONTEND RUN --mount=type=cache,sharing=locked,id=moby-frozen-images-aptlib,target=/var/lib/apt \ --mount=type=cache,sharing=locked,id=moby-frozen-images-aptcache,target=/var/cache/apt \ apt-get update && apt-get install -y --no-install-recommends \ ca-certificates \ curl \ jq # Get useful and necessary Hub images so we can "docker load" locally instead of pulling COPY contrib/download-frozen-image-v2.sh / ARG TARGETARCH RUN /download-frozen-image-v2.sh /build \ buildpack-deps:buster@sha256:d0abb4b1e5c664828b93e8b6ac84d10bce45ee469999bef88304be04a2709491 \ busybox:latest@sha256:95cf004f559831017cdf4628aaf1bb30133677be8702a8c5f2994629f637a209 \ busybox:glibc@sha256:1f81263701cddf6402afe9f33fca0266d9fff379e59b1748f33d3072da71ee85 \ debian:bullseye@sha256:7190e972ab16aefea4d758ebe42a293f4e5c5be63595f4d03a5b9bf6839a4344 \ hello-world:latest@sha256:d58e752213a51785838f9eed2b7a498ffa1cb3aa7f946dda11af39286c3db9a9 \ arm32v7/hello-world:latest@sha256:50b8560ad574c779908da71f7ce370c0a2471c098d44d1c8f6b513c5a55eeeb1 # See also frozenImages in "testutil/environment/protect.go" (which needs to be updated when adding images to this list) FROM base AS cross-false FROM --platform=linux/amd64 base AS cross-true ARG DEBIAN_FRONTEND RUN dpkg --add-architecture arm64 RUN dpkg --add-architecture armel RUN dpkg --add-architecture armhf RUN dpkg --add-architecture ppc64el RUN dpkg --add-architecture s390x RUN --mount=type=cache,sharing=locked,id=moby-cross-true-aptlib,target=/var/lib/apt \ --mount=type=cache,sharing=locked,id=moby-cross-true-aptcache,target=/var/cache/apt \ apt-get update && apt-get install -y --no-install-recommends \ crossbuild-essential-arm64 \ crossbuild-essential-armel \ crossbuild-essential-armhf \ crossbuild-essential-ppc64el \ crossbuild-essential-s390x FROM cross-${CROSS} as dev-base FROM dev-base AS runtime-dev-cross-false ARG DEBIAN_FRONTEND RUN echo 'deb http://deb.debian.org/debian buster-backports main' > /etc/apt/sources.list.d/backports.list RUN --mount=type=cache,sharing=locked,id=moby-cross-false-aptlib,target=/var/lib/apt \ --mount=type=cache,sharing=locked,id=moby-cross-false-aptcache,target=/var/cache/apt \ apt-get update && apt-get install -y --no-install-recommends \ binutils-mingw-w64 \ g++-mingw-w64-x86-64 \ libapparmor-dev \ libbtrfs-dev \ libdevmapper-dev \ libseccomp-dev/buster-backports \ libsystemd-dev \ libudev-dev FROM --platform=linux/amd64 runtime-dev-cross-false AS runtime-dev-cross-true ARG DEBIAN_FRONTEND # These crossbuild packages rely on gcc-<arch>, but this doesn't want to install # on non-amd64 systems. # Additionally, the crossbuild-amd64 is currently only on debian:buster, so # other architectures cannnot crossbuild amd64. RUN --mount=type=cache,sharing=locked,id=moby-cross-true-aptlib,target=/var/lib/apt \ --mount=type=cache,sharing=locked,id=moby-cross-true-aptcache,target=/var/cache/apt \ apt-get update && apt-get install -y --no-install-recommends \ libapparmor-dev:arm64 \ libapparmor-dev:armel \ libapparmor-dev:armhf \ libapparmor-dev:ppc64el \ libapparmor-dev:s390x FROM runtime-dev-cross-${CROSS} AS runtime-dev FROM base AS tomll ARG GOTOML_VERSION RUN --mount=type=cache,target=/root/.cache/go-build \ --mount=type=cache,target=/go/pkg/mod \ --mount=type=bind,src=hack/dockerfile/install/tomll.installer,target=/tmp/install/tomll.installer \ . 
/tmp/install/tomll.installer && PREFIX=/build install_tomll FROM base AS vndr ARG VNDR_COMMIT RUN --mount=type=cache,target=/root/.cache/go-build \ --mount=type=cache,target=/go/pkg/mod \ --mount=type=bind,src=hack/dockerfile/install,target=/tmp/install \ PREFIX=/build /tmp/install/install.sh vndr FROM dev-base AS containerd ARG DEBIAN_FRONTEND RUN --mount=type=cache,sharing=locked,id=moby-containerd-aptlib,target=/var/lib/apt \ --mount=type=cache,sharing=locked,id=moby-containerd-aptcache,target=/var/cache/apt \ apt-get update && apt-get install -y --no-install-recommends \ libbtrfs-dev ARG CONTAINERD_COMMIT RUN --mount=type=cache,target=/root/.cache/go-build \ --mount=type=cache,target=/go/pkg/mod \ --mount=type=bind,src=hack/dockerfile/install,target=/tmp/install \ PREFIX=/build /tmp/install/install.sh containerd FROM base AS golangci_lint ARG GOLANGCI_LINT_COMMIT RUN --mount=type=cache,target=/root/.cache/go-build \ --mount=type=cache,target=/go/pkg/mod \ --mount=type=bind,src=hack/dockerfile/install,target=/tmp/install \ PREFIX=/build /tmp/install/install.sh golangci_lint FROM base AS gotestsum ARG GOTESTSUM_COMMIT RUN --mount=type=cache,target=/root/.cache/go-build \ --mount=type=cache,target=/go/pkg/mod \ --mount=type=bind,src=hack/dockerfile/install,target=/tmp/install \ PREFIX=/build /tmp/install/install.sh gotestsum FROM base AS shfmt ARG SHFMT_COMMIT RUN --mount=type=cache,target=/root/.cache/go-build \ --mount=type=cache,target=/go/pkg/mod \ --mount=type=bind,src=hack/dockerfile/install,target=/tmp/install \ PREFIX=/build /tmp/install/install.sh shfmt FROM dev-base AS dockercli ARG DOCKERCLI_CHANNEL ARG DOCKERCLI_VERSION RUN --mount=type=cache,target=/root/.cache/go-build \ --mount=type=cache,target=/go/pkg/mod \ --mount=type=bind,src=hack/dockerfile/install,target=/tmp/install \ PREFIX=/build /tmp/install/install.sh dockercli FROM runtime-dev AS runc ARG RUNC_COMMIT ARG RUNC_BUILDTAGS RUN --mount=type=cache,target=/root/.cache/go-build \ --mount=type=cache,target=/go/pkg/mod \ --mount=type=bind,src=hack/dockerfile/install,target=/tmp/install \ PREFIX=/build /tmp/install/install.sh runc FROM dev-base AS tini ARG DEBIAN_FRONTEND ARG TINI_COMMIT RUN --mount=type=cache,sharing=locked,id=moby-tini-aptlib,target=/var/lib/apt \ --mount=type=cache,sharing=locked,id=moby-tini-aptcache,target=/var/cache/apt \ apt-get update && apt-get install -y --no-install-recommends \ cmake \ vim-common RUN --mount=type=cache,target=/root/.cache/go-build \ --mount=type=cache,target=/go/pkg/mod \ --mount=type=bind,src=hack/dockerfile/install,target=/tmp/install \ PREFIX=/build /tmp/install/install.sh tini FROM dev-base AS rootlesskit ARG ROOTLESSKIT_COMMIT RUN --mount=type=cache,target=/root/.cache/go-build \ --mount=type=cache,target=/go/pkg/mod \ --mount=type=bind,src=hack/dockerfile/install,target=/tmp/install \ PREFIX=/build /tmp/install/install.sh rootlesskit COPY ./contrib/dockerd-rootless.sh /build COPY ./contrib/dockerd-rootless-setuptool.sh /build FROM --platform=amd64 djs55/vpnkit:${VPNKIT_VERSION} AS vpnkit-amd64 FROM --platform=arm64 djs55/vpnkit:${VPNKIT_VERSION} AS vpnkit-arm64 FROM scratch AS vpnkit COPY --from=vpnkit-amd64 /vpnkit /build/vpnkit.x86_64 COPY --from=vpnkit-arm64 /vpnkit /build/vpnkit.aarch64 # TODO: Some of this is only really needed for testing, it would be nice to split this up FROM runtime-dev AS dev-systemd-false ARG DEBIAN_FRONTEND RUN groupadd -r docker RUN useradd --create-home --gid docker unprivilegeduser \ && mkdir -p /home/unprivilegeduser/.local/share/docker \ 
&& chown -R unprivilegeduser /home/unprivilegeduser # Let us use a .bashrc file RUN ln -sfv /go/src/github.com/docker/docker/.bashrc ~/.bashrc # Activate bash completion and include Docker's completion if mounted with DOCKER_BASH_COMPLETION_PATH RUN echo "source /usr/share/bash-completion/bash_completion" >> /etc/bash.bashrc RUN ln -s /usr/local/completion/bash/docker /etc/bash_completion.d/docker RUN ldconfig # This should only install packages that are specifically needed for the dev environment and nothing else # Do you really need to add another package here? Can it be done in a different build stage? RUN --mount=type=cache,sharing=locked,id=moby-dev-aptlib,target=/var/lib/apt \ --mount=type=cache,sharing=locked,id=moby-dev-aptcache,target=/var/cache/apt \ apt-get update && apt-get install -y --no-install-recommends \ apparmor \ aufs-tools \ bash-completion \ bzip2 \ iptables \ jq \ libcap2-bin \ libnet1 \ libnl-3-200 \ libprotobuf-c1 \ net-tools \ patch \ pigz \ python3-pip \ python3-setuptools \ python3-wheel \ sudo \ thin-provisioning-tools \ uidmap \ vim \ vim-common \ xfsprogs \ xz-utils \ zip # Switch to use iptables instead of nftables (to match the CI hosts) # TODO use some kind of runtime auto-detection instead if/when nftables is supported (https://github.com/moby/moby/issues/26824) RUN update-alternatives --set iptables /usr/sbin/iptables-legacy || true \ && update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy || true \ && update-alternatives --set arptables /usr/sbin/arptables-legacy || true RUN pip3 install yamllint==1.26.1 COPY --from=dockercli /build/ /usr/local/cli COPY --from=frozen-images /build/ /docker-frozen-images COPY --from=swagger /build/ /usr/local/bin/ COPY --from=tomll /build/ /usr/local/bin/ COPY --from=tini /build/ /usr/local/bin/ COPY --from=registry /build/ /usr/local/bin/ COPY --from=criu /build/ /usr/local/bin/ COPY --from=vndr /build/ /usr/local/bin/ COPY --from=gotestsum /build/ /usr/local/bin/ COPY --from=golangci_lint /build/ /usr/local/bin/ COPY --from=shfmt /build/ /usr/local/bin/ COPY --from=runc /build/ /usr/local/bin/ COPY --from=containerd /build/ /usr/local/bin/ COPY --from=rootlesskit /build/ /usr/local/bin/ COPY --from=vpnkit /build/ /usr/local/bin/ ENV PATH=/usr/local/cli:$PATH ARG DOCKER_BUILDTAGS ENV DOCKER_BUILDTAGS="${DOCKER_BUILDTAGS}" WORKDIR /go/src/github.com/docker/docker VOLUME /var/lib/docker VOLUME /home/unprivilegeduser/.local/share/docker # Wrap all commands in the "docker-in-docker" script to allow nested containers ENTRYPOINT ["hack/dind"] FROM dev-systemd-false AS dev-systemd-true RUN --mount=type=cache,sharing=locked,id=moby-dev-aptlib,target=/var/lib/apt \ --mount=type=cache,sharing=locked,id=moby-dev-aptcache,target=/var/cache/apt \ apt-get update && apt-get install -y --no-install-recommends \ dbus \ dbus-user-session \ systemd \ systemd-sysv RUN mkdir -p hack \ && curl -o hack/dind-systemd https://raw.githubusercontent.com/AkihiroSuda/containerized-systemd/b70bac0daeea120456764248164c21684ade7d0d/docker-entrypoint.sh \ && chmod +x hack/dind-systemd ENTRYPOINT ["hack/dind-systemd"] FROM dev-systemd-${SYSTEMD} AS dev FROM runtime-dev AS binary-base ARG DOCKER_GITCOMMIT=HEAD ENV DOCKER_GITCOMMIT=${DOCKER_GITCOMMIT} ARG VERSION ENV VERSION=${VERSION} ARG PLATFORM ENV PLATFORM=${PLATFORM} ARG PRODUCT ENV PRODUCT=${PRODUCT} ARG DEFAULT_PRODUCT_LICENSE ENV DEFAULT_PRODUCT_LICENSE=${DEFAULT_PRODUCT_LICENSE} ARG DOCKER_BUILDTAGS ENV DOCKER_BUILDTAGS="${DOCKER_BUILDTAGS}" ENV PREFIX=/build # TODO: This is here 
because hack/make.sh binary copies these extras binaries # from $PATH into the bundles dir. # It would be nice to handle this in a different way. COPY --from=tini /build/ /usr/local/bin/ COPY --from=runc /build/ /usr/local/bin/ COPY --from=containerd /build/ /usr/local/bin/ COPY --from=rootlesskit /build/ /usr/local/bin/ COPY --from=vpnkit /build/ /usr/local/bin/ WORKDIR /go/src/github.com/docker/docker FROM binary-base AS build-binary RUN --mount=type=cache,target=/root/.cache/go-build \ --mount=type=bind,target=/go/src/github.com/docker/docker \ hack/make.sh binary FROM binary-base AS build-dynbinary RUN --mount=type=cache,target=/root/.cache/go-build \ --mount=type=bind,target=/go/src/github.com/docker/docker \ hack/make.sh dynbinary FROM binary-base AS build-cross ARG DOCKER_CROSSPLATFORMS RUN --mount=type=cache,target=/root/.cache/go-build \ --mount=type=bind,target=/go/src/github.com/docker/docker \ --mount=type=tmpfs,target=/go/src/github.com/docker/docker/autogen \ hack/make.sh cross FROM scratch AS binary COPY --from=build-binary /build/bundles/ / FROM scratch AS dynbinary COPY --from=build-dynbinary /build/bundles/ / FROM scratch AS cross COPY --from=build-cross /build/bundles/ / FROM dev AS final COPY . /go/src/github.com/docker/docker
# syntax=docker/dockerfile:1.2 ARG CROSS="false" ARG SYSTEMD="false" # IMPORTANT: When updating this please note that stdlib archive/tar pkg is vendored ARG GO_VERSION=1.16.5 ARG DEBIAN_FRONTEND=noninteractive ARG VPNKIT_VERSION=0.5.0 ARG DOCKER_BUILDTAGS="apparmor seccomp" ARG BASE_DEBIAN_DISTRO="buster" ARG GOLANG_IMAGE="golang:${GO_VERSION}-${BASE_DEBIAN_DISTRO}" FROM ${GOLANG_IMAGE} AS base RUN echo 'Binary::apt::APT::Keep-Downloaded-Packages "true";' > /etc/apt/apt.conf.d/keep-cache ARG APT_MIRROR RUN sed -ri "s/(httpredir|deb).debian.org/${APT_MIRROR:-deb.debian.org}/g" /etc/apt/sources.list \ && sed -ri "s/(security).debian.org/${APT_MIRROR:-security.debian.org}/g" /etc/apt/sources.list ENV GO111MODULE=off FROM base AS criu ARG DEBIAN_FRONTEND ADD --chmod=0644 https://download.opensuse.org/repositories/devel:/tools:/criu/Debian_10/Release.key /etc/apt/trusted.gpg.d/criu.gpg.asc # FIXME: temporarily doing a manual chmod as workaround for https://github.com/moby/buildkit/issues/2114 RUN --mount=type=cache,sharing=locked,id=moby-criu-aptlib,target=/var/lib/apt \ --mount=type=cache,sharing=locked,id=moby-criu-aptcache,target=/var/cache/apt \ chmod 0644 /etc/apt/trusted.gpg.d/criu.gpg.asc \ && echo 'deb https://download.opensuse.org/repositories/devel:/tools:/criu/Debian_10/ /' > /etc/apt/sources.list.d/criu.list \ && apt-get update \ && apt-get install -y --no-install-recommends criu \ && install -D /usr/sbin/criu /build/criu FROM base AS registry WORKDIR /go/src/github.com/docker/distribution # Install two versions of the registry. The first one is a recent version that # supports both schema 1 and 2 manifests. The second one is an older version that # only supports schema1 manifests. This allows integration-cli tests to cover # push/pull with both schema1 and schema2 manifests. # The old version of the registry is not working on arm64, so installation is # skipped on that architecture. ENV REGISTRY_COMMIT_SCHEMA1 ec87e9b6971d831f0eff752ddb54fb64693e51cd ENV REGISTRY_COMMIT 47a064d4195a9b56133891bbb13620c3ac83a827 RUN --mount=type=cache,target=/root/.cache/go-build \ --mount=type=cache,target=/go/pkg/mod \ --mount=type=tmpfs,target=/go/src/ \ set -x \ && git clone https://github.com/docker/distribution.git . \ && git checkout -q "$REGISTRY_COMMIT" \ && GOPATH="/go/src/github.com/docker/distribution/Godeps/_workspace:$GOPATH" \ go build -buildmode=pie -o /build/registry-v2 github.com/docker/distribution/cmd/registry \ && case $(dpkg --print-architecture) in \ amd64|armhf|ppc64*|s390x) \ git checkout -q "$REGISTRY_COMMIT_SCHEMA1"; \ GOPATH="/go/src/github.com/docker/distribution/Godeps/_workspace:$GOPATH"; \ go build -buildmode=pie -o /build/registry-v2-schema1 github.com/docker/distribution/cmd/registry; \ ;; \ esac FROM base AS swagger WORKDIR $GOPATH/src/github.com/go-swagger/go-swagger # Install go-swagger for validating swagger.yaml # This is https://github.com/kolyshkin/go-swagger/tree/golang-1.13-fix # TODO: move to under moby/ or fix upstream go-swagger to work for us. ENV GO_SWAGGER_COMMIT c56166c036004ba7a3a321e5951ba472b9ae298c RUN --mount=type=cache,target=/root/.cache/go-build \ --mount=type=cache,target=/go/pkg/mod \ --mount=type=tmpfs,target=/go/src/ \ set -x \ && git clone https://github.com/kolyshkin/go-swagger.git . 
\ && git checkout -q "$GO_SWAGGER_COMMIT" \ && go build -o /build/swagger github.com/go-swagger/go-swagger/cmd/swagger FROM debian:${BASE_DEBIAN_DISTRO} AS frozen-images ARG DEBIAN_FRONTEND RUN --mount=type=cache,sharing=locked,id=moby-frozen-images-aptlib,target=/var/lib/apt \ --mount=type=cache,sharing=locked,id=moby-frozen-images-aptcache,target=/var/cache/apt \ apt-get update && apt-get install -y --no-install-recommends \ ca-certificates \ curl \ jq # Get useful and necessary Hub images so we can "docker load" locally instead of pulling COPY contrib/download-frozen-image-v2.sh / ARG TARGETARCH RUN /download-frozen-image-v2.sh /build \ buildpack-deps:buster@sha256:d0abb4b1e5c664828b93e8b6ac84d10bce45ee469999bef88304be04a2709491 \ busybox:latest@sha256:95cf004f559831017cdf4628aaf1bb30133677be8702a8c5f2994629f637a209 \ busybox:glibc@sha256:1f81263701cddf6402afe9f33fca0266d9fff379e59b1748f33d3072da71ee85 \ debian:bullseye@sha256:7190e972ab16aefea4d758ebe42a293f4e5c5be63595f4d03a5b9bf6839a4344 \ hello-world:latest@sha256:d58e752213a51785838f9eed2b7a498ffa1cb3aa7f946dda11af39286c3db9a9 \ arm32v7/hello-world:latest@sha256:50b8560ad574c779908da71f7ce370c0a2471c098d44d1c8f6b513c5a55eeeb1 # See also frozenImages in "testutil/environment/protect.go" (which needs to be updated when adding images to this list) FROM base AS cross-false FROM --platform=linux/amd64 base AS cross-true ARG DEBIAN_FRONTEND RUN dpkg --add-architecture arm64 RUN dpkg --add-architecture armel RUN dpkg --add-architecture armhf RUN dpkg --add-architecture ppc64el RUN dpkg --add-architecture s390x RUN --mount=type=cache,sharing=locked,id=moby-cross-true-aptlib,target=/var/lib/apt \ --mount=type=cache,sharing=locked,id=moby-cross-true-aptcache,target=/var/cache/apt \ apt-get update && apt-get install -y --no-install-recommends \ crossbuild-essential-arm64 \ crossbuild-essential-armel \ crossbuild-essential-armhf \ crossbuild-essential-ppc64el \ crossbuild-essential-s390x FROM cross-${CROSS} as dev-base FROM dev-base AS runtime-dev-cross-false ARG DEBIAN_FRONTEND RUN echo 'deb http://deb.debian.org/debian buster-backports main' > /etc/apt/sources.list.d/backports.list RUN --mount=type=cache,sharing=locked,id=moby-cross-false-aptlib,target=/var/lib/apt \ --mount=type=cache,sharing=locked,id=moby-cross-false-aptcache,target=/var/cache/apt \ apt-get update && apt-get install -y --no-install-recommends \ binutils-mingw-w64 \ g++-mingw-w64-x86-64 \ libapparmor-dev \ libbtrfs-dev \ libdevmapper-dev \ libseccomp-dev/buster-backports \ libsystemd-dev \ libudev-dev FROM --platform=linux/amd64 runtime-dev-cross-false AS runtime-dev-cross-true ARG DEBIAN_FRONTEND # These crossbuild packages rely on gcc-<arch>, but this doesn't want to install # on non-amd64 systems. # Additionally, the crossbuild-amd64 is currently only on debian:buster, so # other architectures cannnot crossbuild amd64. RUN --mount=type=cache,sharing=locked,id=moby-cross-true-aptlib,target=/var/lib/apt \ --mount=type=cache,sharing=locked,id=moby-cross-true-aptcache,target=/var/cache/apt \ apt-get update && apt-get install -y --no-install-recommends \ libapparmor-dev:arm64 \ libapparmor-dev:armel \ libapparmor-dev:armhf \ libapparmor-dev:ppc64el \ libapparmor-dev:s390x FROM runtime-dev-cross-${CROSS} AS runtime-dev FROM base AS tomll ARG GOTOML_VERSION RUN --mount=type=cache,target=/root/.cache/go-build \ --mount=type=cache,target=/go/pkg/mod \ --mount=type=bind,src=hack/dockerfile/install/tomll.installer,target=/tmp/install/tomll.installer \ . 
/tmp/install/tomll.installer && PREFIX=/build install_tomll FROM base AS vndr ARG VNDR_COMMIT RUN --mount=type=cache,target=/root/.cache/go-build \ --mount=type=cache,target=/go/pkg/mod \ --mount=type=bind,src=hack/dockerfile/install,target=/tmp/install \ PREFIX=/build /tmp/install/install.sh vndr FROM dev-base AS containerd ARG DEBIAN_FRONTEND RUN --mount=type=cache,sharing=locked,id=moby-containerd-aptlib,target=/var/lib/apt \ --mount=type=cache,sharing=locked,id=moby-containerd-aptcache,target=/var/cache/apt \ apt-get update && apt-get install -y --no-install-recommends \ libbtrfs-dev ARG CONTAINERD_COMMIT RUN --mount=type=cache,target=/root/.cache/go-build \ --mount=type=cache,target=/go/pkg/mod \ --mount=type=bind,src=hack/dockerfile/install,target=/tmp/install \ PREFIX=/build /tmp/install/install.sh containerd FROM base AS golangci_lint ARG GOLANGCI_LINT_COMMIT RUN --mount=type=cache,target=/root/.cache/go-build \ --mount=type=cache,target=/go/pkg/mod \ --mount=type=bind,src=hack/dockerfile/install,target=/tmp/install \ PREFIX=/build /tmp/install/install.sh golangci_lint FROM base AS gotestsum ARG GOTESTSUM_COMMIT RUN --mount=type=cache,target=/root/.cache/go-build \ --mount=type=cache,target=/go/pkg/mod \ --mount=type=bind,src=hack/dockerfile/install,target=/tmp/install \ PREFIX=/build /tmp/install/install.sh gotestsum FROM base AS shfmt ARG SHFMT_COMMIT RUN --mount=type=cache,target=/root/.cache/go-build \ --mount=type=cache,target=/go/pkg/mod \ --mount=type=bind,src=hack/dockerfile/install,target=/tmp/install \ PREFIX=/build /tmp/install/install.sh shfmt FROM dev-base AS dockercli ARG DOCKERCLI_CHANNEL ARG DOCKERCLI_VERSION RUN --mount=type=cache,target=/root/.cache/go-build \ --mount=type=cache,target=/go/pkg/mod \ --mount=type=bind,src=hack/dockerfile/install,target=/tmp/install \ PREFIX=/build /tmp/install/install.sh dockercli FROM runtime-dev AS runc ARG RUNC_COMMIT ARG RUNC_BUILDTAGS RUN --mount=type=cache,target=/root/.cache/go-build \ --mount=type=cache,target=/go/pkg/mod \ --mount=type=bind,src=hack/dockerfile/install,target=/tmp/install \ PREFIX=/build /tmp/install/install.sh runc FROM dev-base AS tini ARG DEBIAN_FRONTEND ARG TINI_COMMIT RUN --mount=type=cache,sharing=locked,id=moby-tini-aptlib,target=/var/lib/apt \ --mount=type=cache,sharing=locked,id=moby-tini-aptcache,target=/var/cache/apt \ apt-get update && apt-get install -y --no-install-recommends \ cmake \ vim-common RUN --mount=type=cache,target=/root/.cache/go-build \ --mount=type=cache,target=/go/pkg/mod \ --mount=type=bind,src=hack/dockerfile/install,target=/tmp/install \ PREFIX=/build /tmp/install/install.sh tini FROM dev-base AS rootlesskit ARG ROOTLESSKIT_COMMIT RUN --mount=type=cache,target=/root/.cache/go-build \ --mount=type=cache,target=/go/pkg/mod \ --mount=type=bind,src=hack/dockerfile/install,target=/tmp/install \ PREFIX=/build /tmp/install/install.sh rootlesskit COPY ./contrib/dockerd-rootless.sh /build COPY ./contrib/dockerd-rootless-setuptool.sh /build FROM --platform=amd64 djs55/vpnkit:${VPNKIT_VERSION} AS vpnkit-amd64 FROM --platform=arm64 djs55/vpnkit:${VPNKIT_VERSION} AS vpnkit-arm64 FROM scratch AS vpnkit COPY --from=vpnkit-amd64 /vpnkit /build/vpnkit.x86_64 COPY --from=vpnkit-arm64 /vpnkit /build/vpnkit.aarch64 # TODO: Some of this is only really needed for testing, it would be nice to split this up FROM runtime-dev AS dev-systemd-false ARG DEBIAN_FRONTEND RUN groupadd -r docker RUN useradd --create-home --gid docker unprivilegeduser \ && mkdir -p /home/unprivilegeduser/.local/share/docker \ 
&& chown -R unprivilegeduser /home/unprivilegeduser # Let us use a .bashrc file RUN ln -sfv /go/src/github.com/docker/docker/.bashrc ~/.bashrc # Activate bash completion and include Docker's completion if mounted with DOCKER_BASH_COMPLETION_PATH RUN echo "source /usr/share/bash-completion/bash_completion" >> /etc/bash.bashrc RUN ln -s /usr/local/completion/bash/docker /etc/bash_completion.d/docker RUN ldconfig # This should only install packages that are specifically needed for the dev environment and nothing else # Do you really need to add another package here? Can it be done in a different build stage? RUN --mount=type=cache,sharing=locked,id=moby-dev-aptlib,target=/var/lib/apt \ --mount=type=cache,sharing=locked,id=moby-dev-aptcache,target=/var/cache/apt \ apt-get update && apt-get install -y --no-install-recommends \ apparmor \ aufs-tools \ bash-completion \ bzip2 \ iptables \ jq \ libcap2-bin \ libnet1 \ libnl-3-200 \ libprotobuf-c1 \ net-tools \ patch \ pigz \ python3-pip \ python3-setuptools \ python3-wheel \ sudo \ thin-provisioning-tools \ uidmap \ vim \ vim-common \ xfsprogs \ xz-utils \ zip # Switch to use iptables instead of nftables (to match the CI hosts) # TODO use some kind of runtime auto-detection instead if/when nftables is supported (https://github.com/moby/moby/issues/26824) RUN update-alternatives --set iptables /usr/sbin/iptables-legacy || true \ && update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy || true \ && update-alternatives --set arptables /usr/sbin/arptables-legacy || true RUN pip3 install yamllint==1.26.1 COPY --from=dockercli /build/ /usr/local/cli COPY --from=frozen-images /build/ /docker-frozen-images COPY --from=swagger /build/ /usr/local/bin/ COPY --from=tomll /build/ /usr/local/bin/ COPY --from=tini /build/ /usr/local/bin/ COPY --from=registry /build/ /usr/local/bin/ COPY --from=criu /build/ /usr/local/bin/ COPY --from=vndr /build/ /usr/local/bin/ COPY --from=gotestsum /build/ /usr/local/bin/ COPY --from=golangci_lint /build/ /usr/local/bin/ COPY --from=shfmt /build/ /usr/local/bin/ COPY --from=runc /build/ /usr/local/bin/ COPY --from=containerd /build/ /usr/local/bin/ COPY --from=rootlesskit /build/ /usr/local/bin/ COPY --from=vpnkit /build/ /usr/local/bin/ ENV PATH=/usr/local/cli:$PATH ARG DOCKER_BUILDTAGS ENV DOCKER_BUILDTAGS="${DOCKER_BUILDTAGS}" WORKDIR /go/src/github.com/docker/docker VOLUME /var/lib/docker VOLUME /home/unprivilegeduser/.local/share/docker # Wrap all commands in the "docker-in-docker" script to allow nested containers ENTRYPOINT ["hack/dind"] FROM dev-systemd-false AS dev-systemd-true RUN --mount=type=cache,sharing=locked,id=moby-dev-aptlib,target=/var/lib/apt \ --mount=type=cache,sharing=locked,id=moby-dev-aptcache,target=/var/cache/apt \ apt-get update && apt-get install -y --no-install-recommends \ dbus \ dbus-user-session \ systemd \ systemd-sysv RUN mkdir -p hack \ && curl -o hack/dind-systemd https://raw.githubusercontent.com/AkihiroSuda/containerized-systemd/b70bac0daeea120456764248164c21684ade7d0d/docker-entrypoint.sh \ && chmod +x hack/dind-systemd ENTRYPOINT ["hack/dind-systemd"] FROM dev-systemd-${SYSTEMD} AS dev FROM runtime-dev AS binary-base ARG DOCKER_GITCOMMIT=HEAD ENV DOCKER_GITCOMMIT=${DOCKER_GITCOMMIT} ARG VERSION ENV VERSION=${VERSION} ARG PLATFORM ENV PLATFORM=${PLATFORM} ARG PRODUCT ENV PRODUCT=${PRODUCT} ARG DEFAULT_PRODUCT_LICENSE ENV DEFAULT_PRODUCT_LICENSE=${DEFAULT_PRODUCT_LICENSE} ARG DOCKER_BUILDTAGS ENV DOCKER_BUILDTAGS="${DOCKER_BUILDTAGS}" ENV PREFIX=/build # TODO: This is here 
because hack/make.sh binary copies these extras binaries # from $PATH into the bundles dir. # It would be nice to handle this in a different way. COPY --from=tini /build/ /usr/local/bin/ COPY --from=runc /build/ /usr/local/bin/ COPY --from=containerd /build/ /usr/local/bin/ COPY --from=rootlesskit /build/ /usr/local/bin/ COPY --from=vpnkit /build/ /usr/local/bin/ WORKDIR /go/src/github.com/docker/docker FROM binary-base AS build-binary RUN --mount=type=cache,target=/root/.cache/go-build \ --mount=type=bind,target=/go/src/github.com/docker/docker \ hack/make.sh binary FROM binary-base AS build-dynbinary RUN --mount=type=cache,target=/root/.cache/go-build \ --mount=type=bind,target=/go/src/github.com/docker/docker \ hack/make.sh dynbinary FROM binary-base AS build-cross ARG DOCKER_CROSSPLATFORMS RUN --mount=type=cache,target=/root/.cache/go-build \ --mount=type=bind,target=/go/src/github.com/docker/docker \ --mount=type=tmpfs,target=/go/src/github.com/docker/docker/autogen \ hack/make.sh cross FROM scratch AS binary COPY --from=build-binary /build/bundles/ / FROM scratch AS dynbinary COPY --from=build-dynbinary /build/bundles/ / FROM scratch AS cross COPY --from=build-cross /build/bundles/ / FROM dev AS final COPY . /go/src/github.com/docker/docker
thaJeztah
45b45ad65b6f4a76f2bd0f768ef6edb10ebcc0bc
3f53b2ef7fab1e72a0368ab53c4ed993f0f5653f
I created https://github.com/moby/moby/issues/42593 earlier today, and want to see if I can get someone to work on that in our next sprint
thaJeztah
4,592
moby/moby
42,569
libnetwork: processEndpointCreate: Fix deadlock between getSvcRecords and processEndpointCreate
Another similar case occurred with a fixed build. Removed holding multiple locks for processEndpointCreate as well. In terms of lock hierarchy I also thought about locking `n.ctrlr.Lock()` before `n.Lock()` in `getSvcRecords()` to possibly prevent other cases where the locking can't be split like here (e.g. both locks need to be held). But that might not be wanted because of data races (accessing network data before it is locked). If the controller of a network can't change, it would probably be preferred though. References https://github.com/moby/moby/pull/42545 @thaJeztah
null
2021-06-28 07:03:11+00:00
2021-06-30 09:09:09+00:00
libnetwork/store.go
package libnetwork import ( "fmt" "strings" "github.com/docker/docker/libnetwork/datastore" "github.com/docker/libkv/store/boltdb" "github.com/docker/libkv/store/consul" "github.com/docker/libkv/store/etcd" "github.com/docker/libkv/store/zookeeper" "github.com/sirupsen/logrus" ) func registerKVStores() { consul.Register() zookeeper.Register() etcd.Register() boltdb.Register() } func (c *controller) initScopedStore(scope string, scfg *datastore.ScopeCfg) error { store, err := datastore.NewDataStore(scope, scfg) if err != nil { return err } c.Lock() c.stores = append(c.stores, store) c.Unlock() return nil } func (c *controller) initStores() error { registerKVStores() c.Lock() if c.cfg == nil { c.Unlock() return nil } scopeConfigs := c.cfg.Scopes c.stores = nil c.Unlock() for scope, scfg := range scopeConfigs { if err := c.initScopedStore(scope, scfg); err != nil { return err } } c.startWatch() return nil } func (c *controller) closeStores() { for _, store := range c.getStores() { store.Close() } } func (c *controller) getStore(scope string) datastore.DataStore { c.Lock() defer c.Unlock() for _, store := range c.stores { if store.Scope() == scope { return store } } return nil } func (c *controller) getStores() []datastore.DataStore { c.Lock() defer c.Unlock() return c.stores } func (c *controller) getNetworkFromStore(nid string) (*network, error) { for _, n := range c.getNetworksFromStore() { if n.id == nid { return n, nil } } return nil, ErrNoSuchNetwork(nid) } func (c *controller) getNetworksForScope(scope string) ([]*network, error) { var nl []*network store := c.getStore(scope) if store == nil { return nil, nil } kvol, err := store.List(datastore.Key(datastore.NetworkKeyPrefix), &network{ctrlr: c}) if err != nil && err != datastore.ErrKeyNotFound { return nil, fmt.Errorf("failed to get networks for scope %s: %v", scope, err) } for _, kvo := range kvol { n := kvo.(*network) n.ctrlr = c ec := &endpointCnt{n: n} err = store.GetObject(datastore.Key(ec.Key()...), ec) if err != nil && !n.inDelete { logrus.Warnf("Could not find endpoint count key %s for network %s while listing: %v", datastore.Key(ec.Key()...), n.Name(), err) continue } n.epCnt = ec if n.scope == "" { n.scope = scope } nl = append(nl, n) } return nl, nil } func (c *controller) getNetworksFromStore() []*network { var nl []*network for _, store := range c.getStores() { kvol, err := store.List(datastore.Key(datastore.NetworkKeyPrefix), &network{ctrlr: c}) // Continue searching in the next store if no keys found in this store if err != nil { if err != datastore.ErrKeyNotFound { logrus.Debugf("failed to get networks for scope %s: %v", store.Scope(), err) } continue } kvep, err := store.Map(datastore.Key(epCntKeyPrefix), &endpointCnt{}) if err != nil && err != datastore.ErrKeyNotFound { logrus.Warnf("failed to get endpoint_count map for scope %s: %v", store.Scope(), err) } for _, kvo := range kvol { n := kvo.(*network) n.Lock() n.ctrlr = c ec := &endpointCnt{n: n} // Trim the leading & trailing "/" to make it consistent across all stores if val, ok := kvep[strings.Trim(datastore.Key(ec.Key()...), "/")]; ok { ec = val.(*endpointCnt) ec.n = n n.epCnt = ec } if n.scope == "" { n.scope = store.Scope() } n.Unlock() nl = append(nl, n) } } return nl } func (n *network) getEndpointFromStore(eid string) (*endpoint, error) { var errors []string for _, store := range n.ctrlr.getStores() { ep := &endpoint{id: eid, network: n} err := store.GetObject(datastore.Key(ep.Key()...), ep) // Continue searching in the next store if the key is not found in 
this store if err != nil { if err != datastore.ErrKeyNotFound { errors = append(errors, fmt.Sprintf("{%s:%v}, ", store.Scope(), err)) logrus.Debugf("could not find endpoint %s in %s: %v", eid, store.Scope(), err) } continue } return ep, nil } return nil, fmt.Errorf("could not find endpoint %s: %v", eid, errors) } func (n *network) getEndpointsFromStore() ([]*endpoint, error) { var epl []*endpoint tmp := endpoint{network: n} for _, store := range n.getController().getStores() { kvol, err := store.List(datastore.Key(tmp.KeyPrefix()...), &endpoint{network: n}) // Continue searching in the next store if no keys found in this store if err != nil { if err != datastore.ErrKeyNotFound { logrus.Debugf("failed to get endpoints for network %s scope %s: %v", n.Name(), store.Scope(), err) } continue } for _, kvo := range kvol { ep := kvo.(*endpoint) epl = append(epl, ep) } } return epl, nil } func (c *controller) updateToStore(kvObject datastore.KVObject) error { cs := c.getStore(kvObject.DataScope()) if cs == nil { return ErrDataStoreNotInitialized(kvObject.DataScope()) } if err := cs.PutObjectAtomic(kvObject); err != nil { if err == datastore.ErrKeyModified { return err } return fmt.Errorf("failed to update store for object type %T: %v", kvObject, err) } return nil } func (c *controller) deleteFromStore(kvObject datastore.KVObject) error { cs := c.getStore(kvObject.DataScope()) if cs == nil { return ErrDataStoreNotInitialized(kvObject.DataScope()) } retry: if err := cs.DeleteObjectAtomic(kvObject); err != nil { if err == datastore.ErrKeyModified { if err := cs.GetObject(datastore.Key(kvObject.Key()...), kvObject); err != nil { return fmt.Errorf("could not update the kvobject to latest when trying to delete: %v", err) } logrus.Warnf("Error (%v) deleting object %v, retrying....", err, kvObject.Key()) goto retry } return err } return nil } type netWatch struct { localEps map[string]*endpoint remoteEps map[string]*endpoint stopCh chan struct{} } func (c *controller) getLocalEps(nw *netWatch) []*endpoint { c.Lock() defer c.Unlock() var epl []*endpoint for _, ep := range nw.localEps { epl = append(epl, ep) } return epl } func (c *controller) watchSvcRecord(ep *endpoint) { c.watchCh <- ep } func (c *controller) unWatchSvcRecord(ep *endpoint) { c.unWatchCh <- ep } func (c *controller) networkWatchLoop(nw *netWatch, ep *endpoint, ecCh <-chan datastore.KVObject) { for { select { case <-nw.stopCh: return case o := <-ecCh: ec := o.(*endpointCnt) epl, err := ec.n.getEndpointsFromStore() if err != nil { break } c.Lock() var addEp []*endpoint delEpMap := make(map[string]*endpoint) renameEpMap := make(map[string]bool) for k, v := range nw.remoteEps { delEpMap[k] = v } for _, lEp := range epl { if _, ok := nw.localEps[lEp.ID()]; ok { continue } if ep, ok := nw.remoteEps[lEp.ID()]; ok { // On a container rename EP ID will remain // the same but the name will change. service // records should reflect the change. // Keep old EP entry in the delEpMap and add // EP from the store (which has the new name) // into the new list if lEp.name == ep.name { delete(delEpMap, lEp.ID()) continue } renameEpMap[lEp.ID()] = true } nw.remoteEps[lEp.ID()] = lEp addEp = append(addEp, lEp) } // EPs whose name are to be deleted from the svc records // should also be removed from nw's remote EP list, except // the ones that are getting renamed. 
for _, lEp := range delEpMap { if !renameEpMap[lEp.ID()] { delete(nw.remoteEps, lEp.ID()) } } c.Unlock() for _, lEp := range delEpMap { ep.getNetwork().updateSvcRecord(lEp, c.getLocalEps(nw), false) } for _, lEp := range addEp { ep.getNetwork().updateSvcRecord(lEp, c.getLocalEps(nw), true) } } } } func (c *controller) processEndpointCreate(nmap map[string]*netWatch, ep *endpoint) { n := ep.getNetwork() if !c.isDistributedControl() && n.Scope() == datastore.SwarmScope && n.driverIsMultihost() { return } c.Lock() nw, ok := nmap[n.ID()] c.Unlock() if ok { // Update the svc db for the local endpoint join right away n.updateSvcRecord(ep, c.getLocalEps(nw), true) c.Lock() nw.localEps[ep.ID()] = ep // If we had learned that from the kv store remove it // from remote ep list now that we know that this is // indeed a local endpoint delete(nw.remoteEps, ep.ID()) c.Unlock() return } nw = &netWatch{ localEps: make(map[string]*endpoint), remoteEps: make(map[string]*endpoint), } // Update the svc db for the local endpoint join right away // Do this before adding this ep to localEps so that we don't // try to update this ep's container's svc records n.updateSvcRecord(ep, c.getLocalEps(nw), true) c.Lock() nw.localEps[ep.ID()] = ep nmap[n.ID()] = nw nw.stopCh = make(chan struct{}) c.Unlock() store := c.getStore(n.DataScope()) if store == nil { return } if !store.Watchable() { return } ch, err := store.Watch(n.getEpCnt(), nw.stopCh) if err != nil { logrus.Warnf("Error creating watch for network: %v", err) return } go c.networkWatchLoop(nw, ep, ch) } func (c *controller) processEndpointDelete(nmap map[string]*netWatch, ep *endpoint) { n := ep.getNetwork() if !c.isDistributedControl() && n.Scope() == datastore.SwarmScope && n.driverIsMultihost() { return } c.Lock() nw, ok := nmap[n.ID()] if ok { delete(nw.localEps, ep.ID()) c.Unlock() // Update the svc db about local endpoint leave right away // Do this after we remove this ep from localEps so that we // don't try to remove this svc record from this ep's container. n.updateSvcRecord(ep, c.getLocalEps(nw), false) c.Lock() if len(nw.localEps) == 0 { close(nw.stopCh) // This is the last container going away for the network. Destroy // this network's svc db entry delete(c.svcRecords, n.ID()) delete(nmap, n.ID()) } } c.Unlock() } func (c *controller) watchLoop() { for { select { case ep := <-c.watchCh: c.processEndpointCreate(c.nmap, ep) case ep := <-c.unWatchCh: c.processEndpointDelete(c.nmap, ep) } } } func (c *controller) startWatch() { if c.watchCh != nil { return } c.watchCh = make(chan *endpoint) c.unWatchCh = make(chan *endpoint) c.nmap = make(map[string]*netWatch) go c.watchLoop() } func (c *controller) networkCleanup() { for _, n := range c.getNetworksFromStore() { if n.inDelete { logrus.Infof("Removing stale network %s (%s)", n.Name(), n.ID()) if err := n.delete(true, true); err != nil { logrus.Debugf("Error while removing stale network: %v", err) } } } } var populateSpecial NetworkWalker = func(nw Network) bool { if n := nw.(*network); n.hasSpecialDriver() && !n.ConfigOnly() { if err := n.getController().addNetwork(n); err != nil { logrus.Warnf("Failed to populate network %q with driver %q", nw.Name(), nw.Type()) } } return false }
package libnetwork import ( "fmt" "strings" "github.com/docker/docker/libnetwork/datastore" "github.com/docker/libkv/store/boltdb" "github.com/docker/libkv/store/consul" "github.com/docker/libkv/store/etcd" "github.com/docker/libkv/store/zookeeper" "github.com/sirupsen/logrus" ) func registerKVStores() { consul.Register() zookeeper.Register() etcd.Register() boltdb.Register() } func (c *controller) initScopedStore(scope string, scfg *datastore.ScopeCfg) error { store, err := datastore.NewDataStore(scope, scfg) if err != nil { return err } c.Lock() c.stores = append(c.stores, store) c.Unlock() return nil } func (c *controller) initStores() error { registerKVStores() c.Lock() if c.cfg == nil { c.Unlock() return nil } scopeConfigs := c.cfg.Scopes c.stores = nil c.Unlock() for scope, scfg := range scopeConfigs { if err := c.initScopedStore(scope, scfg); err != nil { return err } } c.startWatch() return nil } func (c *controller) closeStores() { for _, store := range c.getStores() { store.Close() } } func (c *controller) getStore(scope string) datastore.DataStore { c.Lock() defer c.Unlock() for _, store := range c.stores { if store.Scope() == scope { return store } } return nil } func (c *controller) getStores() []datastore.DataStore { c.Lock() defer c.Unlock() return c.stores } func (c *controller) getNetworkFromStore(nid string) (*network, error) { for _, n := range c.getNetworksFromStore() { if n.id == nid { return n, nil } } return nil, ErrNoSuchNetwork(nid) } func (c *controller) getNetworksForScope(scope string) ([]*network, error) { var nl []*network store := c.getStore(scope) if store == nil { return nil, nil } kvol, err := store.List(datastore.Key(datastore.NetworkKeyPrefix), &network{ctrlr: c}) if err != nil && err != datastore.ErrKeyNotFound { return nil, fmt.Errorf("failed to get networks for scope %s: %v", scope, err) } for _, kvo := range kvol { n := kvo.(*network) n.ctrlr = c ec := &endpointCnt{n: n} err = store.GetObject(datastore.Key(ec.Key()...), ec) if err != nil && !n.inDelete { logrus.Warnf("Could not find endpoint count key %s for network %s while listing: %v", datastore.Key(ec.Key()...), n.Name(), err) continue } n.epCnt = ec if n.scope == "" { n.scope = scope } nl = append(nl, n) } return nl, nil } func (c *controller) getNetworksFromStore() []*network { var nl []*network for _, store := range c.getStores() { kvol, err := store.List(datastore.Key(datastore.NetworkKeyPrefix), &network{ctrlr: c}) // Continue searching in the next store if no keys found in this store if err != nil { if err != datastore.ErrKeyNotFound { logrus.Debugf("failed to get networks for scope %s: %v", store.Scope(), err) } continue } kvep, err := store.Map(datastore.Key(epCntKeyPrefix), &endpointCnt{}) if err != nil && err != datastore.ErrKeyNotFound { logrus.Warnf("failed to get endpoint_count map for scope %s: %v", store.Scope(), err) } for _, kvo := range kvol { n := kvo.(*network) n.Lock() n.ctrlr = c ec := &endpointCnt{n: n} // Trim the leading & trailing "/" to make it consistent across all stores if val, ok := kvep[strings.Trim(datastore.Key(ec.Key()...), "/")]; ok { ec = val.(*endpointCnt) ec.n = n n.epCnt = ec } if n.scope == "" { n.scope = store.Scope() } n.Unlock() nl = append(nl, n) } } return nl } func (n *network) getEndpointFromStore(eid string) (*endpoint, error) { var errors []string for _, store := range n.ctrlr.getStores() { ep := &endpoint{id: eid, network: n} err := store.GetObject(datastore.Key(ep.Key()...), ep) // Continue searching in the next store if the key is not found in 
this store if err != nil { if err != datastore.ErrKeyNotFound { errors = append(errors, fmt.Sprintf("{%s:%v}, ", store.Scope(), err)) logrus.Debugf("could not find endpoint %s in %s: %v", eid, store.Scope(), err) } continue } return ep, nil } return nil, fmt.Errorf("could not find endpoint %s: %v", eid, errors) } func (n *network) getEndpointsFromStore() ([]*endpoint, error) { var epl []*endpoint tmp := endpoint{network: n} for _, store := range n.getController().getStores() { kvol, err := store.List(datastore.Key(tmp.KeyPrefix()...), &endpoint{network: n}) // Continue searching in the next store if no keys found in this store if err != nil { if err != datastore.ErrKeyNotFound { logrus.Debugf("failed to get endpoints for network %s scope %s: %v", n.Name(), store.Scope(), err) } continue } for _, kvo := range kvol { ep := kvo.(*endpoint) epl = append(epl, ep) } } return epl, nil } func (c *controller) updateToStore(kvObject datastore.KVObject) error { cs := c.getStore(kvObject.DataScope()) if cs == nil { return ErrDataStoreNotInitialized(kvObject.DataScope()) } if err := cs.PutObjectAtomic(kvObject); err != nil { if err == datastore.ErrKeyModified { return err } return fmt.Errorf("failed to update store for object type %T: %v", kvObject, err) } return nil } func (c *controller) deleteFromStore(kvObject datastore.KVObject) error { cs := c.getStore(kvObject.DataScope()) if cs == nil { return ErrDataStoreNotInitialized(kvObject.DataScope()) } retry: if err := cs.DeleteObjectAtomic(kvObject); err != nil { if err == datastore.ErrKeyModified { if err := cs.GetObject(datastore.Key(kvObject.Key()...), kvObject); err != nil { return fmt.Errorf("could not update the kvobject to latest when trying to delete: %v", err) } logrus.Warnf("Error (%v) deleting object %v, retrying....", err, kvObject.Key()) goto retry } return err } return nil } type netWatch struct { localEps map[string]*endpoint remoteEps map[string]*endpoint stopCh chan struct{} } func (c *controller) getLocalEps(nw *netWatch) []*endpoint { c.Lock() defer c.Unlock() var epl []*endpoint for _, ep := range nw.localEps { epl = append(epl, ep) } return epl } func (c *controller) watchSvcRecord(ep *endpoint) { c.watchCh <- ep } func (c *controller) unWatchSvcRecord(ep *endpoint) { c.unWatchCh <- ep } func (c *controller) networkWatchLoop(nw *netWatch, ep *endpoint, ecCh <-chan datastore.KVObject) { for { select { case <-nw.stopCh: return case o := <-ecCh: ec := o.(*endpointCnt) epl, err := ec.n.getEndpointsFromStore() if err != nil { break } c.Lock() var addEp []*endpoint delEpMap := make(map[string]*endpoint) renameEpMap := make(map[string]bool) for k, v := range nw.remoteEps { delEpMap[k] = v } for _, lEp := range epl { if _, ok := nw.localEps[lEp.ID()]; ok { continue } if ep, ok := nw.remoteEps[lEp.ID()]; ok { // On a container rename EP ID will remain // the same but the name will change. service // records should reflect the change. // Keep old EP entry in the delEpMap and add // EP from the store (which has the new name) // into the new list if lEp.name == ep.name { delete(delEpMap, lEp.ID()) continue } renameEpMap[lEp.ID()] = true } nw.remoteEps[lEp.ID()] = lEp addEp = append(addEp, lEp) } // EPs whose name are to be deleted from the svc records // should also be removed from nw's remote EP list, except // the ones that are getting renamed. 
for _, lEp := range delEpMap { if !renameEpMap[lEp.ID()] { delete(nw.remoteEps, lEp.ID()) } } c.Unlock() for _, lEp := range delEpMap { ep.getNetwork().updateSvcRecord(lEp, c.getLocalEps(nw), false) } for _, lEp := range addEp { ep.getNetwork().updateSvcRecord(lEp, c.getLocalEps(nw), true) } } } } func (c *controller) processEndpointCreate(nmap map[string]*netWatch, ep *endpoint) { n := ep.getNetwork() if !c.isDistributedControl() && n.Scope() == datastore.SwarmScope && n.driverIsMultihost() { return } networkID := n.ID() endpointID := ep.ID() c.Lock() nw, ok := nmap[networkID] c.Unlock() if ok { // Update the svc db for the local endpoint join right away n.updateSvcRecord(ep, c.getLocalEps(nw), true) c.Lock() nw.localEps[endpointID] = ep // If we had learned that from the kv store remove it // from remote ep list now that we know that this is // indeed a local endpoint delete(nw.remoteEps, endpointID) c.Unlock() return } nw = &netWatch{ localEps: make(map[string]*endpoint), remoteEps: make(map[string]*endpoint), } // Update the svc db for the local endpoint join right away // Do this before adding this ep to localEps so that we don't // try to update this ep's container's svc records n.updateSvcRecord(ep, c.getLocalEps(nw), true) c.Lock() nw.localEps[endpointID] = ep nmap[networkID] = nw nw.stopCh = make(chan struct{}) c.Unlock() store := c.getStore(n.DataScope()) if store == nil { return } if !store.Watchable() { return } ch, err := store.Watch(n.getEpCnt(), nw.stopCh) if err != nil { logrus.Warnf("Error creating watch for network: %v", err) return } go c.networkWatchLoop(nw, ep, ch) } func (c *controller) processEndpointDelete(nmap map[string]*netWatch, ep *endpoint) { n := ep.getNetwork() if !c.isDistributedControl() && n.Scope() == datastore.SwarmScope && n.driverIsMultihost() { return } c.Lock() nw, ok := nmap[n.ID()] if ok { delete(nw.localEps, ep.ID()) c.Unlock() // Update the svc db about local endpoint leave right away // Do this after we remove this ep from localEps so that we // don't try to remove this svc record from this ep's container. n.updateSvcRecord(ep, c.getLocalEps(nw), false) c.Lock() if len(nw.localEps) == 0 { close(nw.stopCh) // This is the last container going away for the network. Destroy // this network's svc db entry delete(c.svcRecords, n.ID()) delete(nmap, n.ID()) } } c.Unlock() } func (c *controller) watchLoop() { for { select { case ep := <-c.watchCh: c.processEndpointCreate(c.nmap, ep) case ep := <-c.unWatchCh: c.processEndpointDelete(c.nmap, ep) } } } func (c *controller) startWatch() { if c.watchCh != nil { return } c.watchCh = make(chan *endpoint) c.unWatchCh = make(chan *endpoint) c.nmap = make(map[string]*netWatch) go c.watchLoop() } func (c *controller) networkCleanup() { for _, n := range c.getNetworksFromStore() { if n.inDelete { logrus.Infof("Removing stale network %s (%s)", n.Name(), n.ID()) if err := n.delete(true, true); err != nil { logrus.Debugf("Error while removing stale network: %v", err) } } } } var populateSpecial NetworkWalker = func(nw Network) bool { if n := nw.(*network); n.hasSpecialDriver() && !n.ConfigOnly() { if err := n.getController().addNetwork(n); err != nil { logrus.Warnf("Failed to populate network %q with driver %q", nw.Name(), nw.Type()) } } return false }
steffengy
d12fc17073431ebe74ff0b1b3e5a739301c1760a
2a562b15833fe87adf03405edec6bd7235945f7f
From the linter: > libnetwork/store.go:343:2: var `networkId` should be `networkID` (golint)
samuelkarp
4,593
moby/moby
42,569
libnetwork: processEndpointCreate: Fix deadlock between getSvcRecords and processEndpointCreate
Another similar case occurred with a fixed build. Removed holding multiple locks for processEndpointCreate as well. In terms of lock hierarchy I also thought about locking `n.ctrlr.Lock()` before `n.Lock()` in `getSvcRecords()` to possibly prevent other cases where the locking can't be split like here (e.g. both locks need to be held). But that might not be wanted because of data races (accessing network data before it is locked). If the controller of a network can't change, it would probably be preferred though. References https://github.com/moby/moby/pull/42545 @thaJeztah
null
2021-06-28 07:03:11+00:00
2021-06-30 09:09:09+00:00
libnetwork/store.go
package libnetwork import ( "fmt" "strings" "github.com/docker/docker/libnetwork/datastore" "github.com/docker/libkv/store/boltdb" "github.com/docker/libkv/store/consul" "github.com/docker/libkv/store/etcd" "github.com/docker/libkv/store/zookeeper" "github.com/sirupsen/logrus" ) func registerKVStores() { consul.Register() zookeeper.Register() etcd.Register() boltdb.Register() } func (c *controller) initScopedStore(scope string, scfg *datastore.ScopeCfg) error { store, err := datastore.NewDataStore(scope, scfg) if err != nil { return err } c.Lock() c.stores = append(c.stores, store) c.Unlock() return nil } func (c *controller) initStores() error { registerKVStores() c.Lock() if c.cfg == nil { c.Unlock() return nil } scopeConfigs := c.cfg.Scopes c.stores = nil c.Unlock() for scope, scfg := range scopeConfigs { if err := c.initScopedStore(scope, scfg); err != nil { return err } } c.startWatch() return nil } func (c *controller) closeStores() { for _, store := range c.getStores() { store.Close() } } func (c *controller) getStore(scope string) datastore.DataStore { c.Lock() defer c.Unlock() for _, store := range c.stores { if store.Scope() == scope { return store } } return nil } func (c *controller) getStores() []datastore.DataStore { c.Lock() defer c.Unlock() return c.stores } func (c *controller) getNetworkFromStore(nid string) (*network, error) { for _, n := range c.getNetworksFromStore() { if n.id == nid { return n, nil } } return nil, ErrNoSuchNetwork(nid) } func (c *controller) getNetworksForScope(scope string) ([]*network, error) { var nl []*network store := c.getStore(scope) if store == nil { return nil, nil } kvol, err := store.List(datastore.Key(datastore.NetworkKeyPrefix), &network{ctrlr: c}) if err != nil && err != datastore.ErrKeyNotFound { return nil, fmt.Errorf("failed to get networks for scope %s: %v", scope, err) } for _, kvo := range kvol { n := kvo.(*network) n.ctrlr = c ec := &endpointCnt{n: n} err = store.GetObject(datastore.Key(ec.Key()...), ec) if err != nil && !n.inDelete { logrus.Warnf("Could not find endpoint count key %s for network %s while listing: %v", datastore.Key(ec.Key()...), n.Name(), err) continue } n.epCnt = ec if n.scope == "" { n.scope = scope } nl = append(nl, n) } return nl, nil } func (c *controller) getNetworksFromStore() []*network { var nl []*network for _, store := range c.getStores() { kvol, err := store.List(datastore.Key(datastore.NetworkKeyPrefix), &network{ctrlr: c}) // Continue searching in the next store if no keys found in this store if err != nil { if err != datastore.ErrKeyNotFound { logrus.Debugf("failed to get networks for scope %s: %v", store.Scope(), err) } continue } kvep, err := store.Map(datastore.Key(epCntKeyPrefix), &endpointCnt{}) if err != nil && err != datastore.ErrKeyNotFound { logrus.Warnf("failed to get endpoint_count map for scope %s: %v", store.Scope(), err) } for _, kvo := range kvol { n := kvo.(*network) n.Lock() n.ctrlr = c ec := &endpointCnt{n: n} // Trim the leading & trailing "/" to make it consistent across all stores if val, ok := kvep[strings.Trim(datastore.Key(ec.Key()...), "/")]; ok { ec = val.(*endpointCnt) ec.n = n n.epCnt = ec } if n.scope == "" { n.scope = store.Scope() } n.Unlock() nl = append(nl, n) } } return nl } func (n *network) getEndpointFromStore(eid string) (*endpoint, error) { var errors []string for _, store := range n.ctrlr.getStores() { ep := &endpoint{id: eid, network: n} err := store.GetObject(datastore.Key(ep.Key()...), ep) // Continue searching in the next store if the key is not found in 
this store if err != nil { if err != datastore.ErrKeyNotFound { errors = append(errors, fmt.Sprintf("{%s:%v}, ", store.Scope(), err)) logrus.Debugf("could not find endpoint %s in %s: %v", eid, store.Scope(), err) } continue } return ep, nil } return nil, fmt.Errorf("could not find endpoint %s: %v", eid, errors) } func (n *network) getEndpointsFromStore() ([]*endpoint, error) { var epl []*endpoint tmp := endpoint{network: n} for _, store := range n.getController().getStores() { kvol, err := store.List(datastore.Key(tmp.KeyPrefix()...), &endpoint{network: n}) // Continue searching in the next store if no keys found in this store if err != nil { if err != datastore.ErrKeyNotFound { logrus.Debugf("failed to get endpoints for network %s scope %s: %v", n.Name(), store.Scope(), err) } continue } for _, kvo := range kvol { ep := kvo.(*endpoint) epl = append(epl, ep) } } return epl, nil } func (c *controller) updateToStore(kvObject datastore.KVObject) error { cs := c.getStore(kvObject.DataScope()) if cs == nil { return ErrDataStoreNotInitialized(kvObject.DataScope()) } if err := cs.PutObjectAtomic(kvObject); err != nil { if err == datastore.ErrKeyModified { return err } return fmt.Errorf("failed to update store for object type %T: %v", kvObject, err) } return nil } func (c *controller) deleteFromStore(kvObject datastore.KVObject) error { cs := c.getStore(kvObject.DataScope()) if cs == nil { return ErrDataStoreNotInitialized(kvObject.DataScope()) } retry: if err := cs.DeleteObjectAtomic(kvObject); err != nil { if err == datastore.ErrKeyModified { if err := cs.GetObject(datastore.Key(kvObject.Key()...), kvObject); err != nil { return fmt.Errorf("could not update the kvobject to latest when trying to delete: %v", err) } logrus.Warnf("Error (%v) deleting object %v, retrying....", err, kvObject.Key()) goto retry } return err } return nil } type netWatch struct { localEps map[string]*endpoint remoteEps map[string]*endpoint stopCh chan struct{} } func (c *controller) getLocalEps(nw *netWatch) []*endpoint { c.Lock() defer c.Unlock() var epl []*endpoint for _, ep := range nw.localEps { epl = append(epl, ep) } return epl } func (c *controller) watchSvcRecord(ep *endpoint) { c.watchCh <- ep } func (c *controller) unWatchSvcRecord(ep *endpoint) { c.unWatchCh <- ep } func (c *controller) networkWatchLoop(nw *netWatch, ep *endpoint, ecCh <-chan datastore.KVObject) { for { select { case <-nw.stopCh: return case o := <-ecCh: ec := o.(*endpointCnt) epl, err := ec.n.getEndpointsFromStore() if err != nil { break } c.Lock() var addEp []*endpoint delEpMap := make(map[string]*endpoint) renameEpMap := make(map[string]bool) for k, v := range nw.remoteEps { delEpMap[k] = v } for _, lEp := range epl { if _, ok := nw.localEps[lEp.ID()]; ok { continue } if ep, ok := nw.remoteEps[lEp.ID()]; ok { // On a container rename EP ID will remain // the same but the name will change. service // records should reflect the change. // Keep old EP entry in the delEpMap and add // EP from the store (which has the new name) // into the new list if lEp.name == ep.name { delete(delEpMap, lEp.ID()) continue } renameEpMap[lEp.ID()] = true } nw.remoteEps[lEp.ID()] = lEp addEp = append(addEp, lEp) } // EPs whose name are to be deleted from the svc records // should also be removed from nw's remote EP list, except // the ones that are getting renamed. 
for _, lEp := range delEpMap { if !renameEpMap[lEp.ID()] { delete(nw.remoteEps, lEp.ID()) } } c.Unlock() for _, lEp := range delEpMap { ep.getNetwork().updateSvcRecord(lEp, c.getLocalEps(nw), false) } for _, lEp := range addEp { ep.getNetwork().updateSvcRecord(lEp, c.getLocalEps(nw), true) } } } } func (c *controller) processEndpointCreate(nmap map[string]*netWatch, ep *endpoint) { n := ep.getNetwork() if !c.isDistributedControl() && n.Scope() == datastore.SwarmScope && n.driverIsMultihost() { return } c.Lock() nw, ok := nmap[n.ID()] c.Unlock() if ok { // Update the svc db for the local endpoint join right away n.updateSvcRecord(ep, c.getLocalEps(nw), true) c.Lock() nw.localEps[ep.ID()] = ep // If we had learned that from the kv store remove it // from remote ep list now that we know that this is // indeed a local endpoint delete(nw.remoteEps, ep.ID()) c.Unlock() return } nw = &netWatch{ localEps: make(map[string]*endpoint), remoteEps: make(map[string]*endpoint), } // Update the svc db for the local endpoint join right away // Do this before adding this ep to localEps so that we don't // try to update this ep's container's svc records n.updateSvcRecord(ep, c.getLocalEps(nw), true) c.Lock() nw.localEps[ep.ID()] = ep nmap[n.ID()] = nw nw.stopCh = make(chan struct{}) c.Unlock() store := c.getStore(n.DataScope()) if store == nil { return } if !store.Watchable() { return } ch, err := store.Watch(n.getEpCnt(), nw.stopCh) if err != nil { logrus.Warnf("Error creating watch for network: %v", err) return } go c.networkWatchLoop(nw, ep, ch) } func (c *controller) processEndpointDelete(nmap map[string]*netWatch, ep *endpoint) { n := ep.getNetwork() if !c.isDistributedControl() && n.Scope() == datastore.SwarmScope && n.driverIsMultihost() { return } c.Lock() nw, ok := nmap[n.ID()] if ok { delete(nw.localEps, ep.ID()) c.Unlock() // Update the svc db about local endpoint leave right away // Do this after we remove this ep from localEps so that we // don't try to remove this svc record from this ep's container. n.updateSvcRecord(ep, c.getLocalEps(nw), false) c.Lock() if len(nw.localEps) == 0 { close(nw.stopCh) // This is the last container going away for the network. Destroy // this network's svc db entry delete(c.svcRecords, n.ID()) delete(nmap, n.ID()) } } c.Unlock() } func (c *controller) watchLoop() { for { select { case ep := <-c.watchCh: c.processEndpointCreate(c.nmap, ep) case ep := <-c.unWatchCh: c.processEndpointDelete(c.nmap, ep) } } } func (c *controller) startWatch() { if c.watchCh != nil { return } c.watchCh = make(chan *endpoint) c.unWatchCh = make(chan *endpoint) c.nmap = make(map[string]*netWatch) go c.watchLoop() } func (c *controller) networkCleanup() { for _, n := range c.getNetworksFromStore() { if n.inDelete { logrus.Infof("Removing stale network %s (%s)", n.Name(), n.ID()) if err := n.delete(true, true); err != nil { logrus.Debugf("Error while removing stale network: %v", err) } } } } var populateSpecial NetworkWalker = func(nw Network) bool { if n := nw.(*network); n.hasSpecialDriver() && !n.ConfigOnly() { if err := n.getController().addNetwork(n); err != nil { logrus.Warnf("Failed to populate network %q with driver %q", nw.Name(), nw.Type()) } } return false }
package libnetwork import ( "fmt" "strings" "github.com/docker/docker/libnetwork/datastore" "github.com/docker/libkv/store/boltdb" "github.com/docker/libkv/store/consul" "github.com/docker/libkv/store/etcd" "github.com/docker/libkv/store/zookeeper" "github.com/sirupsen/logrus" ) func registerKVStores() { consul.Register() zookeeper.Register() etcd.Register() boltdb.Register() } func (c *controller) initScopedStore(scope string, scfg *datastore.ScopeCfg) error { store, err := datastore.NewDataStore(scope, scfg) if err != nil { return err } c.Lock() c.stores = append(c.stores, store) c.Unlock() return nil } func (c *controller) initStores() error { registerKVStores() c.Lock() if c.cfg == nil { c.Unlock() return nil } scopeConfigs := c.cfg.Scopes c.stores = nil c.Unlock() for scope, scfg := range scopeConfigs { if err := c.initScopedStore(scope, scfg); err != nil { return err } } c.startWatch() return nil } func (c *controller) closeStores() { for _, store := range c.getStores() { store.Close() } } func (c *controller) getStore(scope string) datastore.DataStore { c.Lock() defer c.Unlock() for _, store := range c.stores { if store.Scope() == scope { return store } } return nil } func (c *controller) getStores() []datastore.DataStore { c.Lock() defer c.Unlock() return c.stores } func (c *controller) getNetworkFromStore(nid string) (*network, error) { for _, n := range c.getNetworksFromStore() { if n.id == nid { return n, nil } } return nil, ErrNoSuchNetwork(nid) } func (c *controller) getNetworksForScope(scope string) ([]*network, error) { var nl []*network store := c.getStore(scope) if store == nil { return nil, nil } kvol, err := store.List(datastore.Key(datastore.NetworkKeyPrefix), &network{ctrlr: c}) if err != nil && err != datastore.ErrKeyNotFound { return nil, fmt.Errorf("failed to get networks for scope %s: %v", scope, err) } for _, kvo := range kvol { n := kvo.(*network) n.ctrlr = c ec := &endpointCnt{n: n} err = store.GetObject(datastore.Key(ec.Key()...), ec) if err != nil && !n.inDelete { logrus.Warnf("Could not find endpoint count key %s for network %s while listing: %v", datastore.Key(ec.Key()...), n.Name(), err) continue } n.epCnt = ec if n.scope == "" { n.scope = scope } nl = append(nl, n) } return nl, nil } func (c *controller) getNetworksFromStore() []*network { var nl []*network for _, store := range c.getStores() { kvol, err := store.List(datastore.Key(datastore.NetworkKeyPrefix), &network{ctrlr: c}) // Continue searching in the next store if no keys found in this store if err != nil { if err != datastore.ErrKeyNotFound { logrus.Debugf("failed to get networks for scope %s: %v", store.Scope(), err) } continue } kvep, err := store.Map(datastore.Key(epCntKeyPrefix), &endpointCnt{}) if err != nil && err != datastore.ErrKeyNotFound { logrus.Warnf("failed to get endpoint_count map for scope %s: %v", store.Scope(), err) } for _, kvo := range kvol { n := kvo.(*network) n.Lock() n.ctrlr = c ec := &endpointCnt{n: n} // Trim the leading & trailing "/" to make it consistent across all stores if val, ok := kvep[strings.Trim(datastore.Key(ec.Key()...), "/")]; ok { ec = val.(*endpointCnt) ec.n = n n.epCnt = ec } if n.scope == "" { n.scope = store.Scope() } n.Unlock() nl = append(nl, n) } } return nl } func (n *network) getEndpointFromStore(eid string) (*endpoint, error) { var errors []string for _, store := range n.ctrlr.getStores() { ep := &endpoint{id: eid, network: n} err := store.GetObject(datastore.Key(ep.Key()...), ep) // Continue searching in the next store if the key is not found in 
this store if err != nil { if err != datastore.ErrKeyNotFound { errors = append(errors, fmt.Sprintf("{%s:%v}, ", store.Scope(), err)) logrus.Debugf("could not find endpoint %s in %s: %v", eid, store.Scope(), err) } continue } return ep, nil } return nil, fmt.Errorf("could not find endpoint %s: %v", eid, errors) } func (n *network) getEndpointsFromStore() ([]*endpoint, error) { var epl []*endpoint tmp := endpoint{network: n} for _, store := range n.getController().getStores() { kvol, err := store.List(datastore.Key(tmp.KeyPrefix()...), &endpoint{network: n}) // Continue searching in the next store if no keys found in this store if err != nil { if err != datastore.ErrKeyNotFound { logrus.Debugf("failed to get endpoints for network %s scope %s: %v", n.Name(), store.Scope(), err) } continue } for _, kvo := range kvol { ep := kvo.(*endpoint) epl = append(epl, ep) } } return epl, nil } func (c *controller) updateToStore(kvObject datastore.KVObject) error { cs := c.getStore(kvObject.DataScope()) if cs == nil { return ErrDataStoreNotInitialized(kvObject.DataScope()) } if err := cs.PutObjectAtomic(kvObject); err != nil { if err == datastore.ErrKeyModified { return err } return fmt.Errorf("failed to update store for object type %T: %v", kvObject, err) } return nil } func (c *controller) deleteFromStore(kvObject datastore.KVObject) error { cs := c.getStore(kvObject.DataScope()) if cs == nil { return ErrDataStoreNotInitialized(kvObject.DataScope()) } retry: if err := cs.DeleteObjectAtomic(kvObject); err != nil { if err == datastore.ErrKeyModified { if err := cs.GetObject(datastore.Key(kvObject.Key()...), kvObject); err != nil { return fmt.Errorf("could not update the kvobject to latest when trying to delete: %v", err) } logrus.Warnf("Error (%v) deleting object %v, retrying....", err, kvObject.Key()) goto retry } return err } return nil } type netWatch struct { localEps map[string]*endpoint remoteEps map[string]*endpoint stopCh chan struct{} } func (c *controller) getLocalEps(nw *netWatch) []*endpoint { c.Lock() defer c.Unlock() var epl []*endpoint for _, ep := range nw.localEps { epl = append(epl, ep) } return epl } func (c *controller) watchSvcRecord(ep *endpoint) { c.watchCh <- ep } func (c *controller) unWatchSvcRecord(ep *endpoint) { c.unWatchCh <- ep } func (c *controller) networkWatchLoop(nw *netWatch, ep *endpoint, ecCh <-chan datastore.KVObject) { for { select { case <-nw.stopCh: return case o := <-ecCh: ec := o.(*endpointCnt) epl, err := ec.n.getEndpointsFromStore() if err != nil { break } c.Lock() var addEp []*endpoint delEpMap := make(map[string]*endpoint) renameEpMap := make(map[string]bool) for k, v := range nw.remoteEps { delEpMap[k] = v } for _, lEp := range epl { if _, ok := nw.localEps[lEp.ID()]; ok { continue } if ep, ok := nw.remoteEps[lEp.ID()]; ok { // On a container rename EP ID will remain // the same but the name will change. service // records should reflect the change. // Keep old EP entry in the delEpMap and add // EP from the store (which has the new name) // into the new list if lEp.name == ep.name { delete(delEpMap, lEp.ID()) continue } renameEpMap[lEp.ID()] = true } nw.remoteEps[lEp.ID()] = lEp addEp = append(addEp, lEp) } // EPs whose name are to be deleted from the svc records // should also be removed from nw's remote EP list, except // the ones that are getting renamed. 
for _, lEp := range delEpMap { if !renameEpMap[lEp.ID()] { delete(nw.remoteEps, lEp.ID()) } } c.Unlock() for _, lEp := range delEpMap { ep.getNetwork().updateSvcRecord(lEp, c.getLocalEps(nw), false) } for _, lEp := range addEp { ep.getNetwork().updateSvcRecord(lEp, c.getLocalEps(nw), true) } } } } func (c *controller) processEndpointCreate(nmap map[string]*netWatch, ep *endpoint) { n := ep.getNetwork() if !c.isDistributedControl() && n.Scope() == datastore.SwarmScope && n.driverIsMultihost() { return } networkID := n.ID() endpointID := ep.ID() c.Lock() nw, ok := nmap[networkID] c.Unlock() if ok { // Update the svc db for the local endpoint join right away n.updateSvcRecord(ep, c.getLocalEps(nw), true) c.Lock() nw.localEps[endpointID] = ep // If we had learned that from the kv store remove it // from remote ep list now that we know that this is // indeed a local endpoint delete(nw.remoteEps, endpointID) c.Unlock() return } nw = &netWatch{ localEps: make(map[string]*endpoint), remoteEps: make(map[string]*endpoint), } // Update the svc db for the local endpoint join right away // Do this before adding this ep to localEps so that we don't // try to update this ep's container's svc records n.updateSvcRecord(ep, c.getLocalEps(nw), true) c.Lock() nw.localEps[endpointID] = ep nmap[networkID] = nw nw.stopCh = make(chan struct{}) c.Unlock() store := c.getStore(n.DataScope()) if store == nil { return } if !store.Watchable() { return } ch, err := store.Watch(n.getEpCnt(), nw.stopCh) if err != nil { logrus.Warnf("Error creating watch for network: %v", err) return } go c.networkWatchLoop(nw, ep, ch) } func (c *controller) processEndpointDelete(nmap map[string]*netWatch, ep *endpoint) { n := ep.getNetwork() if !c.isDistributedControl() && n.Scope() == datastore.SwarmScope && n.driverIsMultihost() { return } c.Lock() nw, ok := nmap[n.ID()] if ok { delete(nw.localEps, ep.ID()) c.Unlock() // Update the svc db about local endpoint leave right away // Do this after we remove this ep from localEps so that we // don't try to remove this svc record from this ep's container. n.updateSvcRecord(ep, c.getLocalEps(nw), false) c.Lock() if len(nw.localEps) == 0 { close(nw.stopCh) // This is the last container going away for the network. Destroy // this network's svc db entry delete(c.svcRecords, n.ID()) delete(nmap, n.ID()) } } c.Unlock() } func (c *controller) watchLoop() { for { select { case ep := <-c.watchCh: c.processEndpointCreate(c.nmap, ep) case ep := <-c.unWatchCh: c.processEndpointDelete(c.nmap, ep) } } } func (c *controller) startWatch() { if c.watchCh != nil { return } c.watchCh = make(chan *endpoint) c.unWatchCh = make(chan *endpoint) c.nmap = make(map[string]*netWatch) go c.watchLoop() } func (c *controller) networkCleanup() { for _, n := range c.getNetworksFromStore() { if n.inDelete { logrus.Infof("Removing stale network %s (%s)", n.Name(), n.ID()) if err := n.delete(true, true); err != nil { logrus.Debugf("Error while removing stale network: %v", err) } } } } var populateSpecial NetworkWalker = func(nw Network) bool { if n := nw.(*network); n.hasSpecialDriver() && !n.ConfigOnly() { if err := n.getController().addNetwork(n); err != nil { logrus.Warnf("Failed to populate network %q with driver %q", nw.Name(), nw.Type()) } } return false }
steffengy
d12fc17073431ebe74ff0b1b3e5a739301c1760a
2a562b15833fe87adf03405edec6bd7235945f7f
From the linter: > libnetwork/store.go:344:2: var `endpointId` should be `endpointID` (golint)
samuelkarp
4,594
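For context, the golint warning quoted in the record above is Go's initialisms convention: initialisms such as ID, URL, and API keep a single consistent case inside identifiers. A trivial, self-contained illustration (not taken from the moby code itself):

```go
package main

import "fmt"

func main() {
	// golint would flag `endpointId`: "var endpointId should be endpointID".
	// endpointId := "ep-123"
	endpointID := "ep-123" // preferred spelling under the initialisms convention
	fmt.Println(endpointID)
}
```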
moby/moby
42,559
Add `type` parameter to `/system/df`
**- What I did** Add `type` parameter to `/system/df` **- How I did it** Internally allow more fine-grained control over the choice of objects to compute disk usage for, and add a `type` URL parameter. **- How to verify it** ```sh $ docker run alpine # or create a container in any other way you prefer $ curl -s --unix-socket /var/run/docker.sock 'http://localhost/system/df?type=image&type=container' | jq # should see image and container data $ curl -s --unix-socket /var/run/docker.sock 'http://localhost/system/df?type=container&type=invalid' | jq # should see an error: { "message": "unknown object type: invalid" } $ curl -s --unix-socket /var/run/docker.sock 'http://localhost/system/df?type=container' | jq # should see only container data $ curl -s --unix-socket /var/run/docker.sock 'http://localhost/system/df' | jq # should see data, just like before ``` **- Description for the changelog** <!-- Write a short (one line) summary that describes the changes in this pull request for inclusion in the changelog: --> The `/system/df` endpoint now supports a `type` URL parameter, which controls the object types for which disk usage is computed
null
2021-06-23 13:34:39+00:00
2021-08-02 19:03:06+00:00
api/server/router/system/backend.go
package system // import "github.com/docker/docker/api/server/router/system" import ( "context" "time" "github.com/docker/docker/api/types" "github.com/docker/docker/api/types/events" "github.com/docker/docker/api/types/filters" "github.com/docker/docker/api/types/swarm" ) // Backend is the methods that need to be implemented to provide // system specific functionality. type Backend interface { SystemInfo() *types.Info SystemVersion() types.Version SystemDiskUsage(ctx context.Context) (*types.DiskUsage, error) SubscribeToEvents(since, until time.Time, ef filters.Args) ([]events.Message, chan interface{}) UnsubscribeFromEvents(chan interface{}) AuthenticateToRegistry(ctx context.Context, authConfig *types.AuthConfig) (string, string, error) } // ClusterBackend is all the methods that need to be implemented // to provide cluster system specific functionality. type ClusterBackend interface { Info() swarm.Info }
package system // import "github.com/docker/docker/api/server/router/system" import ( "context" "time" "github.com/docker/docker/api/types" "github.com/docker/docker/api/types/events" "github.com/docker/docker/api/types/filters" "github.com/docker/docker/api/types/swarm" ) // DiskUsageOptions holds parameters for system disk usage query. type DiskUsageOptions struct { // Containers controls whether container disk usage should be computed. Containers bool // Images controls whether image disk usage should be computed. Images bool // Volumes controls whether volume disk usage should be computed. Volumes bool } // Backend is the methods that need to be implemented to provide // system specific functionality. type Backend interface { SystemInfo() *types.Info SystemVersion() types.Version SystemDiskUsage(ctx context.Context, opts DiskUsageOptions) (*types.DiskUsage, error) SubscribeToEvents(since, until time.Time, ef filters.Args) ([]events.Message, chan interface{}) UnsubscribeFromEvents(chan interface{}) AuthenticateToRegistry(ctx context.Context, authConfig *types.AuthConfig) (string, string, error) } // ClusterBackend is all the methods that need to be implemented // to provide cluster system specific functionality. type ClusterBackend interface { Info() swarm.Info }
rvolosatovs
bca9f608454916a63351cfea2689e18642860e79
656a5e2bdf8cc2be64fee1459821b7045b755b0d
So, I was looking at whether an alternative could be to use a `map[DiskUsageObject]bool` to indicate which objects to collect, and then iterate over that set (see the sketch below). I did some quick dabbling with that idea; let me push it to a branch, and we can chat tomorrow
thaJeztah
4,595
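The `map[DiskUsageObject]bool` alternative floated in the review comment above can be sketched roughly as follows. This is illustrative only: `parseDiskUsageTypes` is a hypothetical helper, and the change that was eventually merged kept a `DiskUsageOptions` struct with separate booleans, as the surrounding after-content shows.

```go
package system // illustrative sketch only; not the code that was merged

import (
	"fmt"
	"net/url"

	"github.com/docker/docker/api/types"
)

// parseDiskUsageTypes is a hypothetical helper showing the map-based idea:
// collect the requested object types into a set, and let the caller iterate
// over that set instead of juggling one boolean per type.
func parseDiskUsageTypes(form url.Values) (map[types.DiskUsageObject]bool, error) {
	all := []types.DiskUsageObject{
		types.ContainerObject,
		types.ImageObject,
		types.VolumeObject,
		types.BuildCacheObject,
	}
	requested := make(map[types.DiskUsageObject]bool, len(all))

	typeStrs, ok := form["type"]
	if !ok {
		// No "type" parameter given: compute usage for every object type.
		for _, o := range all {
			requested[o] = true
		}
		return requested, nil
	}

	for _, t := range typeStrs {
		switch o := types.DiskUsageObject(t); o {
		case types.ContainerObject, types.ImageObject, types.VolumeObject, types.BuildCacheObject:
			requested[o] = true
		default:
			return nil, fmt.Errorf("unknown object type: %s", t)
		}
	}
	return requested, nil
}
```

A set like this mostly pays off if more object types are added later; with only four fixed types, explicit booleans keep the errgroup wiring in the handler easier to read, which is presumably why the boolean form was kept.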
moby/moby
42,559
Add `type` parameter to `/system/df`
**- What I did** Add `type` parameter to `/system/df` **- How I did it** Internally allow more fine-grained control over the choice of objects to compute disk usage for, and add a `type` URL parameter. **- How to verify it** ```sh $ docker run alpine # or create a container in any other way you prefer $ curl -s --unix-socket /var/run/docker.sock 'http://localhost/system/df?type=image&type=container' | jq # should see image and container data $ curl -s --unix-socket /var/run/docker.sock 'http://localhost/system/df?type=container&type=invalid' | jq # should see an error: { "message": "unknown object type: invalid" } $ curl -s --unix-socket /var/run/docker.sock 'http://localhost/system/df?type=container' | jq # should see only container data $ curl -s --unix-socket /var/run/docker.sock 'http://localhost/system/df' | jq # should see data, just like before ``` **- Description for the changelog** <!-- Write a short (one line) summary that describes the changes in this pull request for inclusion in the changelog: --> The `/system/df` endpoint now supports a `type` URL parameter, which controls the object types for which disk usage is computed
null
2021-06-23 13:34:39+00:00
2021-08-02 19:03:06+00:00
api/server/router/system/system_routes.go
package system // import "github.com/docker/docker/api/server/router/system" import ( "context" "encoding/json" "fmt" "net/http" "time" "github.com/docker/docker/api/server/httputils" "github.com/docker/docker/api/server/router/build" "github.com/docker/docker/api/types" "github.com/docker/docker/api/types/events" "github.com/docker/docker/api/types/filters" "github.com/docker/docker/api/types/registry" timetypes "github.com/docker/docker/api/types/time" "github.com/docker/docker/api/types/versions" "github.com/docker/docker/pkg/ioutils" pkgerrors "github.com/pkg/errors" "github.com/sirupsen/logrus" "golang.org/x/sync/errgroup" ) func optionsHandler(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { w.WriteHeader(http.StatusOK) return nil } func (s *systemRouter) pingHandler(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { w.Header().Add("Cache-Control", "no-cache, no-store, must-revalidate") w.Header().Add("Pragma", "no-cache") builderVersion := build.BuilderVersion(*s.features) if bv := builderVersion; bv != "" { w.Header().Set("Builder-Version", string(bv)) } if r.Method == http.MethodHead { w.Header().Set("Content-Type", "text/plain; charset=utf-8") w.Header().Set("Content-Length", "0") return nil } _, err := w.Write([]byte{'O', 'K'}) return err } func (s *systemRouter) getInfo(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { info := s.backend.SystemInfo() if s.cluster != nil { info.Swarm = s.cluster.Info() info.Warnings = append(info.Warnings, info.Swarm.Warnings...) } if versions.LessThan(httputils.VersionFromContext(ctx), "1.25") { // TODO: handle this conversion in engine-api type oldInfo struct { *types.Info ExecutionDriver string } old := &oldInfo{ Info: info, ExecutionDriver: "<not supported>", } nameOnlySecurityOptions := []string{} kvSecOpts, err := types.DecodeSecurityOptions(old.SecurityOptions) if err != nil { return err } for _, s := range kvSecOpts { nameOnlySecurityOptions = append(nameOnlySecurityOptions, s.Name) } old.SecurityOptions = nameOnlySecurityOptions return httputils.WriteJSON(w, http.StatusOK, old) } if versions.LessThan(httputils.VersionFromContext(ctx), "1.39") { if info.KernelVersion == "" { info.KernelVersion = "<unknown>" } if info.OperatingSystem == "" { info.OperatingSystem = "<unknown>" } } return httputils.WriteJSON(w, http.StatusOK, info) } func (s *systemRouter) getVersion(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { info := s.backend.SystemVersion() return httputils.WriteJSON(w, http.StatusOK, info) } func (s *systemRouter) getDiskUsage(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { eg, ctx := errgroup.WithContext(ctx) var du *types.DiskUsage eg.Go(func() error { var err error du, err = s.backend.SystemDiskUsage(ctx) return err }) var buildCache []*types.BuildCache eg.Go(func() error { var err error buildCache, err = s.builder.DiskUsage(ctx) if err != nil { return pkgerrors.Wrap(err, "error getting build cache usage") } return nil }) if err := eg.Wait(); err != nil { return err } if versions.LessThan(httputils.VersionFromContext(ctx), "1.42") { var builderSize int64 for _, b := range buildCache { builderSize += b.Size } du.BuilderSize = builderSize } du.BuildCache = buildCache if buildCache == nil { // Ensure empty `BuildCache` field is represented as empty JSON array(`[]`) // instead of `null` to be consistent with `Images`, 
`Containers` etc. du.BuildCache = []*types.BuildCache{} } return httputils.WriteJSON(w, http.StatusOK, du) } type invalidRequestError struct { Err error } func (e invalidRequestError) Error() string { return e.Err.Error() } func (e invalidRequestError) InvalidParameter() {} func (s *systemRouter) getEvents(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { if err := httputils.ParseForm(r); err != nil { return err } since, err := eventTime(r.Form.Get("since")) if err != nil { return err } until, err := eventTime(r.Form.Get("until")) if err != nil { return err } var ( timeout <-chan time.Time onlyPastEvents bool ) if !until.IsZero() { if until.Before(since) { return invalidRequestError{fmt.Errorf("`since` time (%s) cannot be after `until` time (%s)", r.Form.Get("since"), r.Form.Get("until"))} } now := time.Now() onlyPastEvents = until.Before(now) if !onlyPastEvents { dur := until.Sub(now) timer := time.NewTimer(dur) defer timer.Stop() timeout = timer.C } } ef, err := filters.FromJSON(r.Form.Get("filters")) if err != nil { return err } w.Header().Set("Content-Type", "application/json") output := ioutils.NewWriteFlusher(w) defer output.Close() output.Flush() enc := json.NewEncoder(output) buffered, l := s.backend.SubscribeToEvents(since, until, ef) defer s.backend.UnsubscribeFromEvents(l) for _, ev := range buffered { if err := enc.Encode(ev); err != nil { return err } } if onlyPastEvents { return nil } for { select { case ev := <-l: jev, ok := ev.(events.Message) if !ok { logrus.Warnf("unexpected event message: %q", ev) continue } if err := enc.Encode(jev); err != nil { return err } case <-timeout: return nil case <-ctx.Done(): logrus.Debug("Client context cancelled, stop sending events") return nil } } } func (s *systemRouter) postAuth(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { var config *types.AuthConfig err := json.NewDecoder(r.Body).Decode(&config) r.Body.Close() if err != nil { return err } status, token, err := s.backend.AuthenticateToRegistry(ctx, config) if err != nil { return err } return httputils.WriteJSON(w, http.StatusOK, &registry.AuthenticateOKBody{ Status: status, IdentityToken: token, }) } func eventTime(formTime string) (time.Time, error) { t, tNano, err := timetypes.ParseTimestamps(formTime, -1) if err != nil { return time.Time{}, err } if t == -1 { return time.Time{}, nil } return time.Unix(t, tNano), nil }
package system // import "github.com/docker/docker/api/server/router/system" import ( "context" "encoding/json" "fmt" "net/http" "time" "github.com/docker/docker/api/server/httputils" "github.com/docker/docker/api/server/router/build" "github.com/docker/docker/api/types" "github.com/docker/docker/api/types/events" "github.com/docker/docker/api/types/filters" "github.com/docker/docker/api/types/registry" timetypes "github.com/docker/docker/api/types/time" "github.com/docker/docker/api/types/versions" "github.com/docker/docker/pkg/ioutils" "github.com/pkg/errors" "github.com/sirupsen/logrus" "golang.org/x/sync/errgroup" ) func optionsHandler(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { w.WriteHeader(http.StatusOK) return nil } func (s *systemRouter) pingHandler(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { w.Header().Add("Cache-Control", "no-cache, no-store, must-revalidate") w.Header().Add("Pragma", "no-cache") builderVersion := build.BuilderVersion(*s.features) if bv := builderVersion; bv != "" { w.Header().Set("Builder-Version", string(bv)) } if r.Method == http.MethodHead { w.Header().Set("Content-Type", "text/plain; charset=utf-8") w.Header().Set("Content-Length", "0") return nil } _, err := w.Write([]byte{'O', 'K'}) return err } func (s *systemRouter) getInfo(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { info := s.backend.SystemInfo() if s.cluster != nil { info.Swarm = s.cluster.Info() info.Warnings = append(info.Warnings, info.Swarm.Warnings...) } if versions.LessThan(httputils.VersionFromContext(ctx), "1.25") { // TODO: handle this conversion in engine-api type oldInfo struct { *types.Info ExecutionDriver string } old := &oldInfo{ Info: info, ExecutionDriver: "<not supported>", } nameOnlySecurityOptions := []string{} kvSecOpts, err := types.DecodeSecurityOptions(old.SecurityOptions) if err != nil { return err } for _, s := range kvSecOpts { nameOnlySecurityOptions = append(nameOnlySecurityOptions, s.Name) } old.SecurityOptions = nameOnlySecurityOptions return httputils.WriteJSON(w, http.StatusOK, old) } if versions.LessThan(httputils.VersionFromContext(ctx), "1.39") { if info.KernelVersion == "" { info.KernelVersion = "<unknown>" } if info.OperatingSystem == "" { info.OperatingSystem = "<unknown>" } } return httputils.WriteJSON(w, http.StatusOK, info) } func (s *systemRouter) getVersion(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { info := s.backend.SystemVersion() return httputils.WriteJSON(w, http.StatusOK, info) } func (s *systemRouter) getDiskUsage(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { if err := httputils.ParseForm(r); err != nil { return err } var getContainers, getImages, getVolumes, getBuildCache bool if typeStrs, ok := r.Form["type"]; !ok { getContainers, getImages, getVolumes, getBuildCache = true, true, true, true } else { for _, typ := range typeStrs { switch types.DiskUsageObject(typ) { case types.ContainerObject: getContainers = true case types.ImageObject: getImages = true case types.VolumeObject: getVolumes = true case types.BuildCacheObject: getBuildCache = true default: return invalidRequestError{Err: fmt.Errorf("unknown object type: %s", typ)} } } } eg, ctx := errgroup.WithContext(ctx) var systemDiskUsage *types.DiskUsage if getContainers || getImages || getVolumes { eg.Go(func() error { var err error systemDiskUsage, err = 
s.backend.SystemDiskUsage(ctx, DiskUsageOptions{ Containers: getContainers, Images: getImages, Volumes: getVolumes, }) return err }) } var buildCache []*types.BuildCache if getBuildCache { eg.Go(func() error { var err error buildCache, err = s.builder.DiskUsage(ctx) if err != nil { return errors.Wrap(err, "error getting build cache usage") } if buildCache == nil { // Ensure empty `BuildCache` field is represented as empty JSON array(`[]`) // instead of `null` to be consistent with `Images`, `Containers` etc. buildCache = []*types.BuildCache{} } return nil }) } if err := eg.Wait(); err != nil { return err } var builderSize int64 if versions.LessThan(httputils.VersionFromContext(ctx), "1.42") { for _, b := range buildCache { builderSize += b.Size } } du := types.DiskUsage{ BuildCache: buildCache, BuilderSize: builderSize, } if systemDiskUsage != nil { du.LayersSize = systemDiskUsage.LayersSize du.Images = systemDiskUsage.Images du.Containers = systemDiskUsage.Containers du.Volumes = systemDiskUsage.Volumes } return httputils.WriteJSON(w, http.StatusOK, du) } type invalidRequestError struct { Err error } func (e invalidRequestError) Error() string { return e.Err.Error() } func (e invalidRequestError) InvalidParameter() {} func (s *systemRouter) getEvents(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { if err := httputils.ParseForm(r); err != nil { return err } since, err := eventTime(r.Form.Get("since")) if err != nil { return err } until, err := eventTime(r.Form.Get("until")) if err != nil { return err } var ( timeout <-chan time.Time onlyPastEvents bool ) if !until.IsZero() { if until.Before(since) { return invalidRequestError{fmt.Errorf("`since` time (%s) cannot be after `until` time (%s)", r.Form.Get("since"), r.Form.Get("until"))} } now := time.Now() onlyPastEvents = until.Before(now) if !onlyPastEvents { dur := until.Sub(now) timer := time.NewTimer(dur) defer timer.Stop() timeout = timer.C } } ef, err := filters.FromJSON(r.Form.Get("filters")) if err != nil { return err } w.Header().Set("Content-Type", "application/json") output := ioutils.NewWriteFlusher(w) defer output.Close() output.Flush() enc := json.NewEncoder(output) buffered, l := s.backend.SubscribeToEvents(since, until, ef) defer s.backend.UnsubscribeFromEvents(l) for _, ev := range buffered { if err := enc.Encode(ev); err != nil { return err } } if onlyPastEvents { return nil } for { select { case ev := <-l: jev, ok := ev.(events.Message) if !ok { logrus.Warnf("unexpected event message: %q", ev) continue } if err := enc.Encode(jev); err != nil { return err } case <-timeout: return nil case <-ctx.Done(): logrus.Debug("Client context cancelled, stop sending events") return nil } } } func (s *systemRouter) postAuth(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { var config *types.AuthConfig err := json.NewDecoder(r.Body).Decode(&config) r.Body.Close() if err != nil { return err } status, token, err := s.backend.AuthenticateToRegistry(ctx, config) if err != nil { return err } return httputils.WriteJSON(w, http.StatusOK, &registry.AuthenticateOKBody{ Status: status, IdentityToken: token, }) } func eventTime(formTime string) (time.Time, error) { t, tNano, err := timetypes.ParseTimestamps(formTime, -1) if err != nil { return time.Time{}, err } if t == -1 { return time.Time{}, nil } return time.Unix(t, tNano), nil }
rvolosatovs
bca9f608454916a63351cfea2689e18642860e79
656a5e2bdf8cc2be64fee1459821b7045b755b0d
Note: a wrapper was missing here, added
rvolosatovs
4,596
moby/moby
42,559
Add `type` parameter to `/system/df`
**- What I did** Add `type` parameter to `/system/df` **- How I did it** Internally allow more fine-grained control over the choice of objects to compute disk usage for, and add a `type` URL parameter. **- How to verify it** ```sh $ docker run alpine # or create a container in any other way you prefer $ curl -s --unix-socket /var/run/docker.sock 'http://localhost/system/df?type=image&type=container' | jq # should see image and container data $ curl -s --unix-socket /var/run/docker.sock 'http://localhost/system/df?type=container&type=invalid' | jq # should see an error: { "message": "unknown object type: invalid" } $ curl -s --unix-socket /var/run/docker.sock 'http://localhost/system/df?type=container' | jq # should see only container data $ curl -s --unix-socket /var/run/docker.sock 'http://localhost/system/df' | jq # should see data, just like before ``` **- Description for the changelog** <!-- Write a short (one line) summary that describes the changes in this pull request for inclusion in the changelog: --> The `/system/df` endpoint now supports a `type` URL parameter, which controls the object types for which disk usage is computed
null
2021-06-23 13:34:39+00:00
2021-08-02 19:03:06+00:00
api/server/router/system/system_routes.go
package system // import "github.com/docker/docker/api/server/router/system" import ( "context" "encoding/json" "fmt" "net/http" "time" "github.com/docker/docker/api/server/httputils" "github.com/docker/docker/api/server/router/build" "github.com/docker/docker/api/types" "github.com/docker/docker/api/types/events" "github.com/docker/docker/api/types/filters" "github.com/docker/docker/api/types/registry" timetypes "github.com/docker/docker/api/types/time" "github.com/docker/docker/api/types/versions" "github.com/docker/docker/pkg/ioutils" pkgerrors "github.com/pkg/errors" "github.com/sirupsen/logrus" "golang.org/x/sync/errgroup" ) func optionsHandler(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { w.WriteHeader(http.StatusOK) return nil } func (s *systemRouter) pingHandler(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { w.Header().Add("Cache-Control", "no-cache, no-store, must-revalidate") w.Header().Add("Pragma", "no-cache") builderVersion := build.BuilderVersion(*s.features) if bv := builderVersion; bv != "" { w.Header().Set("Builder-Version", string(bv)) } if r.Method == http.MethodHead { w.Header().Set("Content-Type", "text/plain; charset=utf-8") w.Header().Set("Content-Length", "0") return nil } _, err := w.Write([]byte{'O', 'K'}) return err } func (s *systemRouter) getInfo(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { info := s.backend.SystemInfo() if s.cluster != nil { info.Swarm = s.cluster.Info() info.Warnings = append(info.Warnings, info.Swarm.Warnings...) } if versions.LessThan(httputils.VersionFromContext(ctx), "1.25") { // TODO: handle this conversion in engine-api type oldInfo struct { *types.Info ExecutionDriver string } old := &oldInfo{ Info: info, ExecutionDriver: "<not supported>", } nameOnlySecurityOptions := []string{} kvSecOpts, err := types.DecodeSecurityOptions(old.SecurityOptions) if err != nil { return err } for _, s := range kvSecOpts { nameOnlySecurityOptions = append(nameOnlySecurityOptions, s.Name) } old.SecurityOptions = nameOnlySecurityOptions return httputils.WriteJSON(w, http.StatusOK, old) } if versions.LessThan(httputils.VersionFromContext(ctx), "1.39") { if info.KernelVersion == "" { info.KernelVersion = "<unknown>" } if info.OperatingSystem == "" { info.OperatingSystem = "<unknown>" } } return httputils.WriteJSON(w, http.StatusOK, info) } func (s *systemRouter) getVersion(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { info := s.backend.SystemVersion() return httputils.WriteJSON(w, http.StatusOK, info) } func (s *systemRouter) getDiskUsage(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { eg, ctx := errgroup.WithContext(ctx) var du *types.DiskUsage eg.Go(func() error { var err error du, err = s.backend.SystemDiskUsage(ctx) return err }) var buildCache []*types.BuildCache eg.Go(func() error { var err error buildCache, err = s.builder.DiskUsage(ctx) if err != nil { return pkgerrors.Wrap(err, "error getting build cache usage") } return nil }) if err := eg.Wait(); err != nil { return err } if versions.LessThan(httputils.VersionFromContext(ctx), "1.42") { var builderSize int64 for _, b := range buildCache { builderSize += b.Size } du.BuilderSize = builderSize } du.BuildCache = buildCache if buildCache == nil { // Ensure empty `BuildCache` field is represented as empty JSON array(`[]`) // instead of `null` to be consistent with `Images`, 
`Containers` etc. du.BuildCache = []*types.BuildCache{} } return httputils.WriteJSON(w, http.StatusOK, du) } type invalidRequestError struct { Err error } func (e invalidRequestError) Error() string { return e.Err.Error() } func (e invalidRequestError) InvalidParameter() {} func (s *systemRouter) getEvents(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { if err := httputils.ParseForm(r); err != nil { return err } since, err := eventTime(r.Form.Get("since")) if err != nil { return err } until, err := eventTime(r.Form.Get("until")) if err != nil { return err } var ( timeout <-chan time.Time onlyPastEvents bool ) if !until.IsZero() { if until.Before(since) { return invalidRequestError{fmt.Errorf("`since` time (%s) cannot be after `until` time (%s)", r.Form.Get("since"), r.Form.Get("until"))} } now := time.Now() onlyPastEvents = until.Before(now) if !onlyPastEvents { dur := until.Sub(now) timer := time.NewTimer(dur) defer timer.Stop() timeout = timer.C } } ef, err := filters.FromJSON(r.Form.Get("filters")) if err != nil { return err } w.Header().Set("Content-Type", "application/json") output := ioutils.NewWriteFlusher(w) defer output.Close() output.Flush() enc := json.NewEncoder(output) buffered, l := s.backend.SubscribeToEvents(since, until, ef) defer s.backend.UnsubscribeFromEvents(l) for _, ev := range buffered { if err := enc.Encode(ev); err != nil { return err } } if onlyPastEvents { return nil } for { select { case ev := <-l: jev, ok := ev.(events.Message) if !ok { logrus.Warnf("unexpected event message: %q", ev) continue } if err := enc.Encode(jev); err != nil { return err } case <-timeout: return nil case <-ctx.Done(): logrus.Debug("Client context cancelled, stop sending events") return nil } } } func (s *systemRouter) postAuth(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { var config *types.AuthConfig err := json.NewDecoder(r.Body).Decode(&config) r.Body.Close() if err != nil { return err } status, token, err := s.backend.AuthenticateToRegistry(ctx, config) if err != nil { return err } return httputils.WriteJSON(w, http.StatusOK, &registry.AuthenticateOKBody{ Status: status, IdentityToken: token, }) } func eventTime(formTime string) (time.Time, error) { t, tNano, err := timetypes.ParseTimestamps(formTime, -1) if err != nil { return time.Time{}, err } if t == -1 { return time.Time{}, nil } return time.Unix(t, tNano), nil }
package system // import "github.com/docker/docker/api/server/router/system" import ( "context" "encoding/json" "fmt" "net/http" "time" "github.com/docker/docker/api/server/httputils" "github.com/docker/docker/api/server/router/build" "github.com/docker/docker/api/types" "github.com/docker/docker/api/types/events" "github.com/docker/docker/api/types/filters" "github.com/docker/docker/api/types/registry" timetypes "github.com/docker/docker/api/types/time" "github.com/docker/docker/api/types/versions" "github.com/docker/docker/pkg/ioutils" "github.com/pkg/errors" "github.com/sirupsen/logrus" "golang.org/x/sync/errgroup" ) func optionsHandler(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { w.WriteHeader(http.StatusOK) return nil } func (s *systemRouter) pingHandler(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { w.Header().Add("Cache-Control", "no-cache, no-store, must-revalidate") w.Header().Add("Pragma", "no-cache") builderVersion := build.BuilderVersion(*s.features) if bv := builderVersion; bv != "" { w.Header().Set("Builder-Version", string(bv)) } if r.Method == http.MethodHead { w.Header().Set("Content-Type", "text/plain; charset=utf-8") w.Header().Set("Content-Length", "0") return nil } _, err := w.Write([]byte{'O', 'K'}) return err } func (s *systemRouter) getInfo(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { info := s.backend.SystemInfo() if s.cluster != nil { info.Swarm = s.cluster.Info() info.Warnings = append(info.Warnings, info.Swarm.Warnings...) } if versions.LessThan(httputils.VersionFromContext(ctx), "1.25") { // TODO: handle this conversion in engine-api type oldInfo struct { *types.Info ExecutionDriver string } old := &oldInfo{ Info: info, ExecutionDriver: "<not supported>", } nameOnlySecurityOptions := []string{} kvSecOpts, err := types.DecodeSecurityOptions(old.SecurityOptions) if err != nil { return err } for _, s := range kvSecOpts { nameOnlySecurityOptions = append(nameOnlySecurityOptions, s.Name) } old.SecurityOptions = nameOnlySecurityOptions return httputils.WriteJSON(w, http.StatusOK, old) } if versions.LessThan(httputils.VersionFromContext(ctx), "1.39") { if info.KernelVersion == "" { info.KernelVersion = "<unknown>" } if info.OperatingSystem == "" { info.OperatingSystem = "<unknown>" } } return httputils.WriteJSON(w, http.StatusOK, info) } func (s *systemRouter) getVersion(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { info := s.backend.SystemVersion() return httputils.WriteJSON(w, http.StatusOK, info) } func (s *systemRouter) getDiskUsage(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { if err := httputils.ParseForm(r); err != nil { return err } var getContainers, getImages, getVolumes, getBuildCache bool if typeStrs, ok := r.Form["type"]; !ok { getContainers, getImages, getVolumes, getBuildCache = true, true, true, true } else { for _, typ := range typeStrs { switch types.DiskUsageObject(typ) { case types.ContainerObject: getContainers = true case types.ImageObject: getImages = true case types.VolumeObject: getVolumes = true case types.BuildCacheObject: getBuildCache = true default: return invalidRequestError{Err: fmt.Errorf("unknown object type: %s", typ)} } } } eg, ctx := errgroup.WithContext(ctx) var systemDiskUsage *types.DiskUsage if getContainers || getImages || getVolumes { eg.Go(func() error { var err error systemDiskUsage, err = 
s.backend.SystemDiskUsage(ctx, DiskUsageOptions{ Containers: getContainers, Images: getImages, Volumes: getVolumes, }) return err }) } var buildCache []*types.BuildCache if getBuildCache { eg.Go(func() error { var err error buildCache, err = s.builder.DiskUsage(ctx) if err != nil { return errors.Wrap(err, "error getting build cache usage") } if buildCache == nil { // Ensure empty `BuildCache` field is represented as empty JSON array(`[]`) // instead of `null` to be consistent with `Images`, `Containers` etc. buildCache = []*types.BuildCache{} } return nil }) } if err := eg.Wait(); err != nil { return err } var builderSize int64 if versions.LessThan(httputils.VersionFromContext(ctx), "1.42") { for _, b := range buildCache { builderSize += b.Size } } du := types.DiskUsage{ BuildCache: buildCache, BuilderSize: builderSize, } if systemDiskUsage != nil { du.LayersSize = systemDiskUsage.LayersSize du.Images = systemDiskUsage.Images du.Containers = systemDiskUsage.Containers du.Volumes = systemDiskUsage.Volumes } return httputils.WriteJSON(w, http.StatusOK, du) } type invalidRequestError struct { Err error } func (e invalidRequestError) Error() string { return e.Err.Error() } func (e invalidRequestError) InvalidParameter() {} func (s *systemRouter) getEvents(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { if err := httputils.ParseForm(r); err != nil { return err } since, err := eventTime(r.Form.Get("since")) if err != nil { return err } until, err := eventTime(r.Form.Get("until")) if err != nil { return err } var ( timeout <-chan time.Time onlyPastEvents bool ) if !until.IsZero() { if until.Before(since) { return invalidRequestError{fmt.Errorf("`since` time (%s) cannot be after `until` time (%s)", r.Form.Get("since"), r.Form.Get("until"))} } now := time.Now() onlyPastEvents = until.Before(now) if !onlyPastEvents { dur := until.Sub(now) timer := time.NewTimer(dur) defer timer.Stop() timeout = timer.C } } ef, err := filters.FromJSON(r.Form.Get("filters")) if err != nil { return err } w.Header().Set("Content-Type", "application/json") output := ioutils.NewWriteFlusher(w) defer output.Close() output.Flush() enc := json.NewEncoder(output) buffered, l := s.backend.SubscribeToEvents(since, until, ef) defer s.backend.UnsubscribeFromEvents(l) for _, ev := range buffered { if err := enc.Encode(ev); err != nil { return err } } if onlyPastEvents { return nil } for { select { case ev := <-l: jev, ok := ev.(events.Message) if !ok { logrus.Warnf("unexpected event message: %q", ev) continue } if err := enc.Encode(jev); err != nil { return err } case <-timeout: return nil case <-ctx.Done(): logrus.Debug("Client context cancelled, stop sending events") return nil } } } func (s *systemRouter) postAuth(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { var config *types.AuthConfig err := json.NewDecoder(r.Body).Decode(&config) r.Body.Close() if err != nil { return err } status, token, err := s.backend.AuthenticateToRegistry(ctx, config) if err != nil { return err } return httputils.WriteJSON(w, http.StatusOK, &registry.AuthenticateOKBody{ Status: status, IdentityToken: token, }) } func eventTime(formTime string) (time.Time, error) { t, tNano, err := timetypes.ParseTimestamps(formTime, -1) if err != nil { return time.Time{}, err } if t == -1 { return time.Time{}, nil } return time.Unix(t, tNano), nil }
rvolosatovs
bca9f608454916a63351cfea2689e18642860e79
656a5e2bdf8cc2be64fee1459821b7045b755b0d
Perhaps something like this (the "system" in this function's name / route is there because it backs the `docker system df` command, but "system disk" may cause some confusion): ```suggestion return pkgerrors.Wrap(err, "error getting disk usage") ``` That said, we should check whether it might have been on purpose, and whether the extra string adds sufficient value; there are some inconsistencies in `SystemDiskUsage` itself as well: some errors are not wrapped (but can return a "typed" error), while others discard the underlying error by creating a new one (without `Wrap` or `%w`); https://github.com/moby/moby/blob/7b9275c0da707b030e62c96b679a976f31f929d3/daemon/disk_usage.go#L15 https://github.com/moby/moby/blob/7b9275c0da707b030e62c96b679a976f31f929d3/daemon/disk_usage.go#L25 https://github.com/moby/moby/blob/7b9275c0da707b030e62c96b679a976f31f929d3/daemon/disk_usage.go#L31 So, for the above, the wrapped messages would look like: ``` error getting system disk usage: a disk usage operation is already running error getting system disk usage: failed to retrieve container list: <underlying error message> error getting system disk usage: failed to retrieve image list: <underlying error message> ``` The "failed to retrieve container list" and "failed to retrieve image list" cases may actually be wrong, as (looking at the "images" case) the underlying errors may be of a specific `errdefs` type; https://github.com/moby/moby/blob/1ba54a5fd0ba293db3bea46cd67604b593f2048b/daemon/images/images.go#L53-L61 https://github.com/moby/moby/blob/1ba54a5fd0ba293db3bea46cd67604b593f2048b/api/types/filters/parse.go#L250-L256 (see the sketch below for why re-creating the error, rather than wrapping it, matters)
thaJeztah
4,597
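To make the wrapping concern in the last comment concrete, here is a small, self-contained sketch (standard library only; `invalidFilter` is a made-up type standing in for the errdefs-style typed errors linked above). It shows why re-creating an error with `%v` discards the typed cause, while wrapping with `%w` keeps it detectable further up the stack; wrapping with `pkg/errors.Wrap` preserves the cause in a similar way.

```go
package main

import (
	"errors"
	"fmt"
)

// invalidFilter is a hypothetical stand-in for a typed error such as the
// errdefs "invalid parameter" errors referenced in the review comment.
type invalidFilter struct{ name string }

func (e invalidFilter) Error() string     { return "invalid filter: " + e.name }
func (e invalidFilter) InvalidParameter() {}

func main() {
	cause := invalidFilter{name: "dangling"}

	// Creating a new error with %v drops the typed cause entirely.
	lost := fmt.Errorf("failed to retrieve image list: %v", cause)
	// Wrapping with %w keeps the cause reachable for errors.As / errors.Is.
	kept := fmt.Errorf("failed to retrieve image list: %w", cause)

	var target invalidFilter
	fmt.Println(errors.As(lost, &target)) // false: the typed cause is gone
	fmt.Println(errors.As(kept, &target)) // true: callers can still detect it
}
```

Seen this way, the suggestion is less about the exact wording of the message and more about not severing the error chain that typed-error checks rely on.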